Published as a conference paper at ICLR 2023

EMERGENCE OF MAPS IN THE MEMORIES OF BLIND NAVIGATION AGENTS

Erik Wijmans1,2* Manolis Savva2,3 Irfan Essa1,4 Stefan Lee5 Ari S. Morcos2 Dhruv Batra1,2
1Georgia Institute of Technology 2FAIR, Meta AI 3Simon Fraser University
4Google Research Atlanta 5Oregon State University

ABSTRACT

Animal navigation research posits that organisms build and maintain internal spatial representations, or maps, of their environment. We ask if machines – specifically, artificial intelligence (AI) navigation agents – also build implicit (or 'mental') maps. A positive answer to this question would (a) explain the surprising phenomenon in recent literature of ostensibly map-free neural networks achieving strong performance, and (b) strengthen the evidence of mapping as a fundamental mechanism for navigation by intelligent embodied agents, whether they be biological or artificial. Unlike animal navigation, we can judiciously design the agent's perceptual system and control the learning paradigm to nullify alternative navigation mechanisms. Specifically, we train 'blind' agents – with sensing limited to only egomotion and no other sensing of any kind – to perform PointGoal navigation ('go to ∆x, ∆y') via reinforcement learning. Our agents are composed of navigation-agnostic components (fully-connected and recurrent neural networks), and our experimental setup provides no inductive bias towards mapping.
Despite these harsh conditions, we find that blind agents are (1) surprisingly effective navigators in new environments (∼95% success); (2) they utilize memory over long horizons (remembering ∼1,000 steps of past experience in an episode); (3) this memory enables them to exhibit intelligent behavior (following walls, detecting collisions, taking shortcuts); (4) there is emergence of maps and collision-detection neurons in the representations of the environment built by a blind agent as it navigates; and (5) the emergent maps are selective and task dependent (e.g. the agent 'forgets' exploratory detours). Overall, this paper presents no new techniques for the AI audience, but a surprising finding, an insight, and an explanation.

1 INTRODUCTION

Decades of research into intelligent animal navigation posits that organisms build and maintain internal spatial representations (or maps)1 of their environment that enable the organism to determine and follow task-appropriate paths (Tolman, 1948; O'keefe & Nadel, 1978; Epstein et al., 2017). Hamsters, wolves, chimpanzees, and bats leverage prior exploration to determine and follow shortcuts they may never have taken before (Chapuis & Scardigli, 1993; Peters, 1976; Menzel, 1973; Toledo et al., 2020; Harten et al., 2020). Even blind mole rats and animals rendered situationally-blind in dark environments demonstrate shortcut behaviors (Avni et al., 2008; Kimchi et al., 2004; Maaswinkel & Whishaw, 1999). Ants forage for food along meandering paths but take near-optimal return trips (Müller & Wehner, 1988), though there is some controversy about whether insects like ants and bees are capable of forming maps (Cruse & Wehner, 2011; Cheung et al., 2014).

Analogously, mapping and localization techniques have long played a central role in enabling non-biological navigation agents (or robots) to exhibit intelligent behavior (Thrun et al., 2005; Institute,
*Correspondence to etw@gatech.edu.
1972; Ayache & Faugeras, 1988; Smith et al., 1990). More recently, the machine learning community has produced a surprising phenomenon – neural-network models for navigation that curiously do not contain any explicit mapping modules but still achieve remarkably high performance (Savva et al., 2019; Wijmans et al., 2020; Kadian et al., 2020; Chattopadhyay et al., 2021; Khandelwal et al., 2022; Partsey et al., 2022; Reed et al., 2022). For instance, Wijmans et al. (2020) showed that a simple 'pixels-to-actions' architecture (using a CNN and RNN) can navigate to a given point in a novel environment with near-perfect accuracy; Partsey et al. (2022) further generalized this result to more realistic sensors and actuators. Reed et al. (2022) showed a similar general-purpose architecture (a transformer) can perform a wide variety of embodied tasks, including navigation. The mechanisms explaining this ability remain unknown. Understanding them is both of scientific and practical importance due to safety considerations involved with deploying such systems.

In this work, we investigate the following question – is mapping an emergent phenomenon? Specifically, do artificial intelligence (AI) agents learn to build internal spatial representations (or 'mental' maps) of their environment as a natural consequence of learning to navigate?

The specific task we study is PointGoal navigation (Anderson et al., 2018), where an AI agent is introduced into a new (unexplored) environment and tasked with navigating to a relative location – 'go 5m north, 2m west relative to start'2.

1Throughout this work, we use 'maps' to refer to a spatial representation of the environment that enables intelligent navigation behavior like taking shortcuts. We provide a detailed discussion and contrast w.r.t. a 'cognitive map' as defined by O'keefe & Nadel (1978) in Apx. B.1.

arXiv:2301.13261v1 [cs.AI] 30 Jan 2023
This is analogous to the direction and distance of foraging locations communicated by the waggle dance of honey bees (Von Frisch, 1967).

Unlike animal navigation studies, experiments with AI agents allow us to precisely isolate mapping from alternative mechanisms proposed for animal navigation – the use of visual landmarks (Von Frisch, 1967), orientation by the arrangement of stars (Lockley, 1967), gradients of olfaction or other senses (Ioalè et al., 1990). We achieve this isolation by judiciously designing the agent's perceptual system and the learning paradigm such that these alternative mechanisms are rendered implausible. Our agents are effectively 'blind'; they possess a minimal perceptual system capable of sensing only egomotion, i.e. change in the agent's location and orientation as it moves – no vision, no audio, no olfactory, no haptic, no magnetic, or any other sensing of any kind. This perceptual system is deliberately impoverished to isolate the contribution of memory, and is inspired by blind mole rats, who perform localization via path integration and use the Earth's magnetic field as a compass (Kimchi et al., 2004). Further still, our agents are composed of navigation-agnostic, generic, and ubiquitous architectural components (fully-connected layers and LSTM-based recurrent neural networks), and our experimental setup provides no inductive bias towards mapping – no map-like or spatial structural components in the agent, no mapping supervision, no auxiliary tasks, nothing other than a reward for making progress towards a goal.
Surprisingly, even under these deliberately harsh conditions, we find the emergence of map-like spatial representations in the agent's non-spatial unstructured memory, enabling it to not only successfully navigate to the goal but also exhibit intelligent behavior (like taking shortcuts, following walls, detecting collisions) similar to the aforementioned animal studies, and predict free-space in the environment. Essentially, we demonstrate an 'existence proof' or an ontogenetic developmental account for the emergence of mapping without any previous predisposition. Our results also explain the aforementioned surprising finding in recent literature – that ostensibly map-free neural networks achieve strong autonomous navigation performance – by demonstrating that these 'map-free' systems in fact learn to construct and maintain map-like representations of their environment.

Concretely, we ask and answer the following questions:

1) Is it possible to effectively navigate with just egomotion sensing? Yes. We find that our 'blind' agents are highly effective in navigating new environments – reaching the goal with a 95.1%±1.3% success rate. And they traverse moderately efficient (though far from optimal) paths, reaching 62.9%±1.6% of optimal path efficiency. We stress that these are novel testing environments; the agent has not memorized paths within a training environment but has learned efficient navigation strategies that generalize to novel environments, such as emergent wall-following behavior.

2) What mechanism explains this strong performance by 'blind' agents? Memory. We find that memoryless agents completely fail at this task, achieving nearly 0% success. More importantly, we find that agents with memory utilize information stored over a long temporal and spatial horizon and that collision-detection neurons emerge within this memory.
Navigation performance as +a function of the number of past actions/observations encoded in the agent’s memory does not +2The description in English is purely for explanatory purposes; the agent receives relative goal coordinates. +2 + +Published as a conference paper at ICLR 2023 +saturate till one thousand steps (corresponding to the agent traversing 89.1±0.66 meters), suggest- +ing that the agent ‘remembers’ a long history of the episode. +3) What information does the memory encode about the environment? Implicit maps. We perform +an AI rendition of Menzel (1973)’s experiments, where a chimpanzee is carried by a human and +shown the location of food hidden in the environment. When the animal is set free to collect the +food, it does not retrace the demonstrator’s steps but takes shortcuts to collect the food faster. +Analogously, we train a blind agent to navigate from a source location (S) to a target location +(T). After it has finished navigating, we transplant its constructed episodic memory into a second +‘probe’-agent (which is also blind). We find that this implanted-memory probe-agent performs +dramatically better in navigating from S to T (and T to S) than it would without the memory +transplant. Similar to the chimpanzee, the probe agent takes shortcuts, typically cutting out +backtracks or excursions that the memory-creator had undertaken as it tried to work its way +around the obstacles. These experiments provide compelling evidence that blind agents learn to +build and use implicit map-like representations of their environment solely through learning to +navigate. Intriguingly further still, we find that surprisingly detailed metric occupancy maps of +the environment (indicating free-space) can be explicitly decoded from the agent’s memory. +4) Are maps task-dependent? Yes. We find that the emergent maps are a function of the navigation +goal. Agents ‘forget’ excursions and detours, i.e. 
their episodic memory only preserves the features of the environment relevant to navigating to their goal. This, in part, explains why transplanting episodic memory from one agent to another leads it to take shortcuts – because the excursions and detours are simply forgotten.

Overall, our experiments and analyses demonstrate that 'blind' agents solve PointGoalNav by combining information over long time horizons to build detailed maps of their environment, solely through the learning signals imposed by goal-driven navigation. In biological systems, convergent evolution of analogous structures that cannot be attributed to a common ancestor (e.g. eyes in vertebrates and jellyfish (Kozmik et al., 2008)) is often an indicator that the structure is a natural response to the ecological niche and selection pressures. Analogously, our results suggest that mapping may be a natural solution to the problem of navigation by intelligent embodied agents, whether they be biological or artificial. We now describe our findings for each question in detail.

2 BLIND AGENTS ARE EFFECTIVE NAVIGATORS

We train navigation agents for PointGoalNav in virtualized 3D replicas of real houses utilizing the AI Habitat simulator (Savva et al., 2019; Szot et al., 2021) and the Gibson (Xia et al., 2018) and Matterport3D (Chang et al., 2017) datasets. The agent is physically embodied as a cylinder with a diameter of 0.2m and height of 1.5m. In each episode, the agent is randomly initialized in the environment, which establishes an episodic agent-centric coordinate system. The goal location is specified in cartesian coordinates (xg, yg, zg) in this system. The agent has four actions – move forward (0.25 meters), turn left (10°), turn right (10°), and stop (to signal reaching the goal) – and is allowed a maximum of 2,000 steps to reach the specified goal.
It is equipped with an egomotion sensor providing it relative position (∆x, ∆y, ∆z) and relative 'heading' (or yaw angle) ∆θ between successive steps, which is integrated to keep track of the agent's location and heading relative to start [xt, yt, zt, θt]. This is sometimes referred to as a 'GPS+Compass' sensor in this literature (Savva et al., 2019; Wijmans et al., 2020).

We use two task-performance metrics: i) Success, defined as whether or not the agent predicted the stop action within 0.2 meters of the target, and ii) Success weighted by inverse Path Length (SPL) (Anderson et al., 2018), defined as success weighted by the efficiency of the agent's path compared to the oracle path (the shortest path). Given the high success rates we observe, SPL can be roughly interpreted as the efficiency of the path taken compared to the oracle path – e.g. an SPL of 95% means the agent took a path 95% as efficient as the oracle path while an SPL of 50% means the agent took a path 50% as efficient. Note that performance is evaluated in previously unseen environments to evaluate whether agents can generalize, not just memorize.

The agent's policy is instantiated as a long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997) recurrent neural network – formally, given current observations ot = [xg, yg, zg, xt, yt, zt, θt], (ht, ct) = LSTM(ot, (ht−1, ct−1)). We refer to this (ht, ct) as the agent's internal memory representation. Note that it only contains information gathered during the current navigation episode. We train our agents for this task using a reinforcement learning (Sutton & Barto, 1992) algorithm called DD-PPO (Wijmans et al., 2020).
The reward has a term for making progress towards the goal and for successfully reaching it. Neither the training procedure nor agent architecture contains explicit inductive biases towards mapping or planning relative to a map. Apx. A.1 describes training details.

Figure 1: (A) PointGoal navigation. An agent is initialized in a novel environment (blue square) and tasked with navigating to a point specified relative to the start location (red square). We study 'blind' agents, equipped with just an egomotion sensor (called GPS+Compass in this literature). (B) 'Blind' agent vs. bug. Our learned 'blind' agent compared to 2 variants and an oracle-equipped variant of the Bug algorithm (Lumelsky & Stepanov, 1987). The Bug algorithm initially orients itself towards the goal and then proceeds towards the goal. Upon hitting a wall, it follows along the wall until it reaches the other side. The oracle version is told whether wall-following left or right is optimal, providing an upper bound on Bug algorithm performance. (C) t-SNE of the agent's internal representation for collisions. We find 4 overall clusters corresponding to the previous action taken and whether or not that action led to a collision.

Agent                                           Success     SPL
1 Blind                                         95.1±1.3    62.9±1.6
2 Clairvoyant Bug                               100±0.0     46.0
3 Sighted (Depth) (Ramakrishnan et al., 2021)   94.0        83.0

Table 1: Agent performance on PointGoalNav. We find that blind agents are surprisingly effective (success) though not efficient (SPL) navigators. They have similar success as an agent equipped with a Depth camera and higher SPL than a clairvoyant version of the 'Bug' algorithm.
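As a concrete reference, the SPL metric described above can be sketched as follows (a minimal sketch; the function and argument names are our own):

```python
def spl(successes, shortest_lengths, path_lengths):
    """Success weighted by (normalized inverse) Path Length.

    successes[i]        -- 1.0 if episode i succeeded, else 0.0
    shortest_lengths[i] -- geodesic shortest-path (oracle) length for episode i
    path_lengths[i]     -- length of the path the agent actually took
    """
    total = 0.0
    for s, l, p in zip(successes, shortest_lengths, path_lengths):
        # max(p, l) clamps the ratio so a single episode never scores above 1
        total += s * (l / max(p, l))
    return total / len(successes)
```

For example, an agent that succeeds on two episodes, taking the oracle path on one and a path twice the oracle length on the other, scores (1.0 + 0.5) / 2 = 0.75.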
Surprisingly, we find that agents trained under this impoverished sensing regime are able to navigate with near-perfect efficacy – reaching the goal with a 95.1%±1.3% success rate (Table 1), even in situations where the agent must take hundreds of actions and traverse over 25m. This performance is similar in success rate (95.1 vs 94.0)3 to a sighted agent (equipped with a depth camera) trained on a larger dataset (HM3D) (Ramakrishnan et al., 2021). The paths taken by the blind agent are moderately efficient but (as one might expect) far less so than a sighted agent (62.9 vs 83.0 SPL).

At this point, it might be tempting to believe that this is an easy navigation problem, but we urge the reader to fight hindsight bias. We contend that the SPL of this blind agent is surprisingly high given the impoverished sensor suite. To put this SPL in context, we compare it with 'Bug algorithms' (Lumelsky & Stepanov, 1987), which are motion planning algorithms inspired by insect navigation, involving an agent equipped with only a localization sensor. In these algorithms, the agent first orients itself towards the goal and then travels directly towards it until it encounters a wall, in which case it follows along the wall along one of two directions of travel. The primary challenge for Bug algorithms is determining whether to go left or right upon reaching a wall. To provide an upper bound on performance, we implement a 'clairvoyant' Bug algorithm agent with an oracle that tells it whether left or right is optimal. Even with this additional privileged information, the 'clairvoyant' Bug agent achieves an SPL of 46%, which is considerably less efficient than the 'blind' agent. Fig. 1b shows an example of the path our blind agent takes compared to 3 variants of the Bug algorithm. This shows that blind navigation agents trained with reinforcement learning are highly efficient at navigating in previously unseen environments given their sensor suite.
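To make the Bug baseline concrete, here is a simplified sketch of its goal-seeking decision rule (the wall-following control and the left-vs-right choice, the hard part, are omitted; all names are our own and this is not Lumelsky & Stepanov's full algorithm):

```python
import math

def bug_action(pose, goal, collided):
    """One decision step of a Bug-style controller (simplified sketch).

    pose = (x, y, heading_radians); goal = (gx, gy).
    Turn until roughly facing the goal, then move forward; after a
    collision, hand control to a wall-following routine (not shown).
    """
    if collided:
        return "follow_wall"
    x, y, heading = pose
    bearing = math.atan2(goal[1] - y, goal[0] - x) - heading
    bearing = math.atan2(math.sin(bearing), math.cos(bearing))  # wrap to [-pi, pi]
    if abs(bearing) > math.radians(10):  # agent turns in 10-degree increments
        return "turn_left" if bearing > 0 else "turn_right"
    return "move_forward"
```

This greedy rule is what makes Bug agents take straight lines toward the goal between obstacles, in contrast to the learned agent's wall-following behavior.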
+2.1 +EMERGENCE OF WALL-FOLLOWING BEHAVIOR AND COLLISION-DETECTION NEURONS +Fig. 1b shows the blind agent exhibiting wall-following behavior (also see blue paths in Fig. A6 +and videos in supplement). This behavior is remarkably consistent; the agent spends the majority +3It may seem like the blind agent outperforms the sighted agent, but the mean performance of Ramakrishnan +et al. (2021) is within our error bars. +4 + +Published as a conference paper at ICLR 2023 +of an episode near a wall. This is surprising because it is trained to navigate to the target location +as quickly as possible, thus, it would be rewarded for traveling in straighter paths (that avoid walls). +We hypothesize that this strategy emerges due to two factors. 1) The agent is blind, it has no +way to determine where the obstacles are in the environment besides ‘bumping’ into them. 2) The +environment is unknown to the agent. While this is clearly true for testing environments it is also +functionally true for training environments because the coordinate system is episodic, every episode +uses a randomly-instantiated coordinate system based on how the agent was spawned; and the since +the agent is blind, it cannot perform visual localization. +We test both hypotheses. To test (2), we provide an experiment in Apx. C.1 showing that when +the agent is trained in a single environment with a consistent global coordinate system, it learns to +memorize the shortest paths in this environment and wall-following does not emerge. Consequently, +this agent is unable to navigate in new environment, achieving 100% success on train and 0% on test. +To test (1), we analyze whether the agent is capable of detecting collisions. Note that the agent is +not equipped with a collision sensor. In principle, the agent can infer whether it collided – if tries +to move forward and the resulting egomotion is atypical, then it is likely that a collision happened. 
This leads us to ask – does the agent's memory contain information about collisions? We train a linear classifier that uses the (frozen) internal representation (ht+1, ct+1) to predict if action at resulted in a collision (details in Apx. A.5). The classifier achieves 98% accuracy on held-out data. As comparison, random guessing on this 2-class problem would achieve 50%. This shows that the agent's memory not only predicts its collisions, but also that collision-vs-not is linearly separable in internal-representation space, which strongly suggests that the agent has learned a collision sensor.

Next, we examine how collisions are structured in the agent's internal representation by identifying the subspace that is used for collisions. Specifically, we re-train the linear classifier with an ℓ1 weight penalty to encourage sparsity. We then select the top 10 neurons (from 3072) with the largest weight magnitude; this reduces dimensionality by 99.7% while still achieving 96% collision-vs-not accuracy. We use t-SNE (Van der Maaten & Hinton, 2008) and the techniques in Kobak & Berens (2019) to create a 2-dimensional visualization of the resulting 10-dimensional space. We find 4 distinct semantically-meaningful clusters (Fig. 1c). One cluster always fires for collisions, one for forward actions that did not result in a collision, and the other two correspond to turning actions. Notice that this exceedingly small number of dimensions and neurons essentially predicts all collisions and movement of the agent. We include videos in the supplementary materials.

3 MEMORY IS USED OVER LONG HORIZONS

Figure 2: Navigation performance vs. memory length. Agent performance does not saturate until memory can contain information from hundreds of steps. A memory of 10³ steps is half the maximum episode length.
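The sparse-probe analysis of Sec. 2.1 can be sketched in two steps: evaluate a frozen linear probe on hidden states, and keep only the highest-magnitude weights as candidate 'collision neurons'. A minimal sketch (the ℓ1-regularized training loop itself is omitted; all names are our own):

```python
def probe_accuracy(states, labels, w, b):
    """Accuracy of a (frozen) linear probe: does the hidden state predict
    whether the last action resulted in a collision?"""
    correct = 0
    for h, y in zip(states, labels):
        score = sum(hi * wi for hi, wi in zip(h, w)) + b
        correct += (score > 0) == y
    return correct / len(states)

def top_k_neurons(w, k=10):
    """Indices of the k probe weights with largest magnitude: the candidate
    'collision neurons' retained after l1 sparsification."""
    return sorted(range(len(w)), key=lambda i: -abs(w[i]))[:k]
```

In the paper's setting, w would have 3072 entries (one per unit of (h, c)) and k = 10; restricting the probe to those 10 units is what drops accuracy only from 98% to 96%.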
Next, we examine how memory is utilized by asking if the agent uses memory solely to remember short-term information (e.g. did it collide in the last step?) or whether it also includes long-range information (e.g. did it collide hundreds of steps ago?). To answer this question, we restrict the memory capacity of our agent. Specifically, let k denote the memory budget. At each time t, we take the previous k observations, [ot−k+1, . . . , ot], and construct the internal representation (ht, ct) via the recurrence (hi, ci) = LSTM(oi, (hi−1, ci−1)) for t − k < i ≤ t, where (ht−k, ct−k) = (0, 0).

If the agent is only leveraging its memory for short-term storage we would expect performance to saturate at a small value of k. Instead, Fig. 2 shows that the agent leverages its memory for significantly longer-term storage. When memoryless (k = 1), the agent completely fails at the task, achieving nearly 0% success. Navigation performance as a function of the memory budget (k) does not saturate till one thousand steps. Recall that the agent can move forward 0.25 meters or turn 10° at each step. The average distance traveled in 1,000 steps is 89.1±0.66 meters, indicating that it remembers information over long temporal and spatial horizons. In Apx. C.6 we train agents to operate at a specific memory budget. We find that a budget of k = 256, the largest we are able to train, is not sufficient to achieve the performance of the unbounded agent.

Probe Type               SecondNav(S→T)           SecondNav(T→S)
                         Success     SPL          Success     SPL
1 AllZeroMemory          91.6±0.40   71.1±0.27    91.0±0.40   70.8±0.25
2 UntrainedAgentMemory   92.4±0.28   72.0±0.19    91.2±0.54   72.2±0.35
3 TrainedAgentMemory     96.2±0.23   85.0±0.16    96.0±0.16   84.8±0.22

Figure 3: (A) Probe experiment.
First, an agent navigates (blue path, blue LSTM) from start (green sphere) to target (red sphere). After the agent navigates, we task a probe (purple LSTM) with performing the same navigation episode with the additional information encapsulated in the agent's internal representation (or memory), h^A_T. The probe is able to navigate more efficiently by taking shortcuts (purple path). As denoted by the dashed line between the probe and agent networks, the probe does not influence what the agent stores in its internal representation. Environment in the image from the Replica Dataset (Straub et al., 2019). (B) Agent memory transplant increases probe efficiency (SPL). Results of our trained probe agent under three configurations – initialized with an empty representation (AllZeroMemory), a representation of a random agent walked along the trained agent's path (UntrainedAgentMemory), and the final representation of the trained agent (TrainedAgentMemory). 95% confidence interval reported over 5 agent-probe pairs.

4 MEMORY ENABLES SHORTCUTS

To investigate what information is encoded in the memory of our blind agents, we develop an experimental paradigm based on 'probe' agents. A probe is a secondary navigation agent4 that is structurally identical to the original (sensing, architecture, etc.), but parametrically augmented with the primary agent's constructed episodic memory representation (hT, cT). The probe has no influence on the agent, i.e. no gradients (or rewards) flow from probe to agent (please see training details in Apx. A.2). We use this paradigm to examine whether the agent's final internal representation contains sufficient information for taking shortcuts in the environment.

As illustrated in Fig. 3A, the agent first navigates from source (S) to target (T). After the agent reaches T, a probe is initialized5 at S, its memory initialized with the agent's final memory representation, i.e.
(h0, c0)probe = (hT, cT)agent, and tasked with navigating to T. We refer to this probe task as SecondNav(S→T). All evaluations are conducted in environments not used for training the agent nor the probe. Thus, any environmental information in the agent's memory must have been gathered during its trajectory (and not during any past exposure during learning). Similarly, all initial knowledge the probe has of the environment must come from the agent's memory (hT, cT)agent.

Our hypothesis is that the agent's memory contains a spatial representation of the environment, which the probe can leverage. If the hypothesis is true, we would expect the probe to navigate SecondNav(S→T) more efficiently than the agent (e.g. by taking shortcuts and cutting out exploratory excursions taken by the agent). If not, we would expect the probe to perform on par with the agent since the probe is being trained on essentially the same task as the agent6. In our experiments, we find that the probe is significantly more efficient than the agent – SPL of 62.9%±1.6% (agent) vs. 85.0%±1.6% (probe). It is worth stressing how remarkable the performance of the probe is – in a new environment, a blind probe navigating without a map traverses a path that is within 15% of the shortest path on the map. The best known sighted agents (equipped with an RGB camera, Depth sensor, and egomotion sensor) achieve an SPL of 84% on this task (Ramakrishnan et al., 2021). Essentially, the memories of a blind agent are as valuable as having vision!

Fig. 3A shows the difference in paths between the agent and probe (and videos showing more examples are available in the supplement). While the agent exhibits wall-following behavior, the probe

4To avoid confusion, we refer to this probe agent as 'probe' and the primary agent as 'agent' from this point.
5The probe's heading at S is set to the agent's final heading upon reaching T.
instead takes more direct paths and rarely performs wall following. Recall that the only difference in the agent and probe is the contents of the initial hidden state – reward is identical (and available only during training), training environments are identical (although the episodes are different), and evaluation episodes are identical – meaning that the environmental representation in the agent's episodic memory is what enables the probe to navigate more efficiently.

6We note that an argument can be made that if the agent's memory is useless to the probe, then the probe is being trained on a harder task since it must learn to navigate and ignore the agent's memory. But this argument would predict the probe's performance to be lower, not higher, than the agent's.

Figure 4: Learning navigation improves map prediction from memory. (Left) Accuracy (Intersection over Union) distributions (via kernel density estimation) and means (dashed lines); TrainedAgentMemory has a higher mean than UntrainedAgentMemory with p-value ≤ 10⁻⁵ (via Wilcoxon signed-rank test (Wilcoxon, 1992)). (Right) Example ground truth and predicted occupancy maps using TrainedAgentMemory (corresponding to the (A) and (B) IoU points). Light grey is non-navigable and dark grey is navigable. The agent path is drawn in light blue and navigates from start (green) to target (red). We can see that when the agent travels close to one wall, the map decoder predicts another wall parallel to it, indicating a corridor.

We further compare this result (which we denote as TrainedAgentMemory) with two control groups: 1) AllZeroMemory: An empty (all zeros) episodic memory to test for any systematic biases in the probe tasks. This probe contains identical information at the start of an episode as the agent (i.e. no information).
2) UntrainedAgentMemory: Episodic memory generated by an untrained agent (i.e. with a random setting of neural network parameters) as it is walked along the trajectory of the trained agent. This disentangles the agent's structure from its parameters, and tests whether simply being encoded by an LSTM (even one with random parameters) provides an inductive bias towards building good environmental representations (Wieting & Kiela, 2019).

We find no evidence for this inductive bias – UntrainedAgentMemory performs no better than AllZeroMemory (Fig. 3B, row 1 vs. 2). Furthermore, TrainedAgentMemory significantly outperforms both controls by +13 points SPL and +4 points Success (Fig. 3B, row 3 vs. 1 and 2). Taken together, these two results indicate that the ability to construct useful spatial representations of the environment from a trajectory is decidedly a learned behavior.

Next, we examine if there is any directional preference in the episodic memory constructed by the agent. Our claim is that even though the agent navigates from S to T, if its memory indeed contains map-like spatial representations, it should also support probes for the reverse task SecondNav(T→S). Indeed, we find that the TrainedAgentMemory probe performs the same (within margin of error) on both SecondNav(S→T) and SecondNav(T→S) (Fig. 3B, right column) – indicating that the memory is equally useful in both directions. In Apx. C.2 we demonstrate that the probe removes excursions from the agent's path and takes shortcuts through previously unseen parts of the environment. Overall, these results provide compelling evidence that blind agents learn to build and use implicit map-like representations that enable shortcuts and reasoning about previously untraversed locations in the environment, solely through learning to navigate between two points.
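The three probe initializations above differ only in how (h0, c0) is constructed. A minimal sketch of that setup (hidden_size and all names are our own; the actual probes are full LSTM policies trained with DD-PPO):

```python
def init_probe_memory(agent_final_state=None, hidden_size=512):
    """Build a probe's initial LSTM state (h_0, c_0).

    TrainedAgentMemory / UntrainedAgentMemory probes copy the final
    (h_T, c_T) of the agent that just navigated S -> T (a copy, not a
    shared reference, so nothing flows back to the agent).  The
    AllZeroMemory control starts from zeros, i.e. with no information,
    exactly like the agent itself did.
    """
    if agent_final_state is None:  # AllZeroMemory control
        return [0.0] * hidden_size, [0.0] * hidden_size
    h_T, c_T = agent_final_state
    return list(h_T), list(c_T)  # memory transplant
```

The rest of the probe rollout is an ordinary policy evaluation; only this initial state carries information from the agent's episode.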
5 LEARNING NAVIGATION IMPROVES METRIC MAP DECODING

Next, we tackle the question 'Does the agent build episodic representations capable of decoding metric maps (occupancy grids) of the environment?'. Formally, given the final representation (hT, cT)agent, we train a separate decoding network to predict an allocentric top-down occupancy grid (free-space vs. not) of the environment. As with the probes, no gradients are propagated from the decoder to the agent's internal representation. We constrain the network to make predictions for a location only if the agent reached within 2.5 meters of it (refer to Apx. A.3 for details). Note that since the agents are 'blind', predictions about any unvisited location require reasoning about unseen space. As before, we compare the internal representation produced by TrainedAgentMemory to the internal representation produced by an agent with random parameters, UntrainedAgentMemory.

Figure 5: (A) Excursion prediction example. Qualitative example of the previously-visited location decoder making systematic errors when decoding an excursion. Blue represents the confidence of the decoder that the agent was previously at a given location; we can see that it is lower in the path interval marked in red (the excursion) than in the rest of the path. (B) Remembrance of excursions. Performance of decoders when predicting previous agent locations, broken down into three categories. 'Non-excursion' covers all predictions where neither the current location of the agent nor the prediction time step is part of an excursion. 'Excursion' is when the prediction time step is part of an excursion. 'Exit' is when the prediction time step is part of the last 10% of the excursion. The x-axis is the distance into the past and the y-axis is the relative error between the true and predicted locations.

Fig.
4 shows the distribution of map-prediction accuracy, measured as intersection-over-union (IoU) with the true occupancy grid. We find that TrainedAgentMemory enables uniformly more accurate predictions than UntrainedAgentMemory – 32.5% vs. 12.5% average IoU. The qualitative examples show that the predictor is commonly able to make accurate predictions about unvisited locations, e.g. when the agent travels close to one wall, the decoder predicts another wall parallel to it, indicating a corridor. These results show that the internal representation contains the information necessary to decode accurate occupancy maps, even for unseen locations. We note that the environment's structural priors are also necessary to predict unseen locations; thus the agent's memory is necessary but not sufficient.

In Apx. C.4, we conduct this analysis on 'sighted' navigation agents (equipped with a Depth camera and egomotion sensor). Perhaps counter-intuitively, we do not find conclusive evidence that metric maps can be decoded from the memory of sighted agents (despite their sensing suite being a strict superset of the blind agents'). Our conjecture is that for higher-level strategies like map-building to emerge, the learning problem must not admit 'trivial' solutions such as the ones deep reinforcement learning is known to latch onto (Baker et al., 2020; Lehman et al., 2020; Kadian et al., 2020). We believe that the minimal perception system used in our work served to create a challenging learning problem, which in turn limited the possible 'trivial' solutions, thus inducing map-building.

6 MAPPING IS TASK-DEPENDENT: AGENT FORGETS EXCURSIONS

Given that the agent is memory-limited, it stands to reason that it might need to choose what information to preserve and what to 'forget'. To examine this, we attempt to decode the agent's past positions from its memory.
Formally, given the internal state at time t, (ht, ct), we train a prediction network fk(·) to predict the agent's location k steps into the past, i.e. ŝt−k = fk(ht, ct) + st, k ∈ [1, 256]. Given the ground-truth location st−k, we evaluate the decoder via the relative L2 error ||ŝt−k − st−k|| / ||st−k − st|| (refer to Apx. A.4 for details). Qualitative analysis of the past-prediction results shows that the agent forgets excursions7, i.e. excursions are harder to decode (see Fig. 5a). To quantify this, we manually labelled excursions in 216 randomly sampled episodes in evaluation environments. Fig. 5b shows that excursions are harder to decode than non-excursions, indicating that the agent does indeed forget excursions. Interestingly, we find that the exit of the excursion is considerably easier to decode, indicating that the end of the excursion performs a similar function to landmarks in animal and human navigation (Chan et al., 2012).

7We define an excursion as a sub-path that approximately forms a loop.

In the appendix, we study several additional questions that could not be accommodated in the main paper. In Apx. C.2 we further examine the probe's performance. In Apx. C.3 we examine predicting future agent locations. In Apx. C.5 we use the agent's hidden state as a world model.

7 RELATED WORK

Characterizing spatial representations. Prior work has shown that LSTMs build grid-cell (O'keefe & Nadel, 1978) representations of an environment when trained directly for path integration within that environment (Banino et al., 2018; Cueva & Wei, 2018; Sorscher et al., 2020). In contrast, our work provides no direct supervision for path integration, localization, or mapping. Banino et al. (2018) demonstrated that these maps aid in navigation by training a navigation agent that utilizes this cognitive map.
In contrast, we show that LSTMs trained for navigation learn to build spatial representations of novel environments. Whether or not LSTMs trained in this setting also utilize grid-cells is a question for future work. Bruce et al. (2018) demonstrated that LSTMs learn localization when trained for navigation in a single environment. We show that they learn mapping when given their location and trained in many environments. Huynh et al. (2020) proposed a spatial memory architecture and demonstrated that a spatial representation emerges when trained on a localization task. We show that spatial representations emerge in non-spatial neural networks trained for navigation. Dwivedi et al. (2022) examined what navigation agents learn about their environments. We provide a detailed account of emergent mapping in larger environments and over longer time horizons, and show the emergence of intelligent behavior and mapping in blind agents, which is not the focus of prior work.

'Map-free' navigation agents. Learned agents that navigate without an explicit mapping module (called 'map-free' or 'pixels-to-actions') have shown strong performance on a variety of tasks (Savva et al., 2019; Wijmans et al., 2020; Kadian et al., 2020; Chattopadhyay et al., 2021; Khandelwal et al., 2022; Partsey et al., 2022; Reed et al., 2022). In this work, we do not provide any novel techniques nor make any experimental advancement in the efficacy of such (sighted) agents. However, we make two key findings. First, blind agents are highly effective navigators for PointGoalNav, exhibiting efficacy similar to that of sighted agents. Second, we begin to explain how 'map-free' navigation agents perform their task: they build implicit maps in their memory, although the story is somewhat nuanced due to the results in Apx. C.4; we suspect this understanding might be extended in future work.
8 OUTLOOK: LIMITATIONS, REPRODUCIBILITY

In this work, we have shown that 'blind' AI navigation agents – agents with perception similar to that of blind mole rats – are capable of performing goal-driven navigation at a high level of performance. We then showed that these AI navigation agents learn to build map-like representations (supporting the ability to take shortcuts, follow walls, and predict free-space and collisions) of their environment solely through learning goal-driven navigation. Our agents and training regime have no added inductive bias towards map-building, be it explicit or implicit, implying that cognitive maps may be a natural solution to the inductive biases imposed by navigation by intelligent embodied agents, whether they be biological or artificial. In a similar manner, convergent evolution (Kozmik et al., 2008), where two unrelated intelligent systems independently arrive at similar mechanisms, suggests that the mechanism is a natural response to having to adapt to the environment and the task.

Our results also provide an explanation for the surprising success of map-free neural-network navigation agents by showing that these agents in fact learn to build map-like internal representations with no learning signal other than goal-driven navigation. This result establishes a link between how 'map-free' systems navigate and analytic mapping-and-planning techniques (Thrun et al., 2005; Institute, 1972; Ayache & Faugeras, 1988; Smith et al., 1990).

Our results and analyses also point towards future directions in AI navigation research. Specifically, imbuing AI navigation agents with explicit (e.g. architectural design) or implicit (e.g. training regime or auxiliary objectives) priors that bias agents towards learning an internal representation with the features found here may improve their performance.
Further, it may better equip them to learn more challenging tasks such as rearrangement of an environment by moving objects (Batra et al., 2020).

We see several limitations and areas for future work. First, we examined ground-based navigation agents operating in digitizations of real houses. This limits the agent to a 2D manifold and induces strong structural priors on environment layout. As such, it is unclear how our results generalize to a drone flying through a large forest. Second, we examined agents with a minimal perceptual system. In the supplementary text, we attempted to decode occupancy grids (metric maps) from Depth-sensor-equipped agents and did not find convincing evidence. Our conjecture is that for higher-level strategies like map-building to emerge, the learning problem must not admit 'trivial' solutions. We believe that the minimal perception system used in our work also served to create such a challenging learning problem. Third, our experiments do not study the effects of actuation noise, which is an important consideration in both robot navigation systems and path integration in biological systems. Fourth, we examine an implicit map-building mechanism (an LSTM); a similar set of experiments could be performed for agents with a differentiable read/write map but no direct mapping supervision. Fifth, our agents only explore their environment for a short period of time (an episode) before their memory is reset. Animals, and robots at deployment, experience their environment for significantly longer periods of time. Finally, we do not provide a complete mechanistic account of how the agent learns to build its map or what else it stores in its memory.

Acknowledgements: We thank Abhishek Kadian for his help in implementing the first version of the SecondNav(T→S) probe experiment. We thank Jitendra Malik for his feedback on the draft and guidance.
EW is supported in part by an ARCS fellowship. The Georgia Tech effort was supported +in part by NSF, ONR YIP, and ARO PECASE. The Oregon State effort is supported in part by +the DARPA Machine Common Sense program. The views and conclusions contained herein are +those of the authors and should not be interpreted as necessarily representing the official policies or +endorsements, either expressed or implied, of the U.S. Government, or any sponsor. +Reproducibility Statement: Implementation details of our analyses are provided in the appendix. +Our work builds on datasets and code that are already open-sourced, and our analysis code will be +open-sourced. +REFERENCES +Peter Anderson, Angel X. Chang, Devendra Singh Chaplot, Alexey Dosovitskiy, Saurabh Gupta, +Vladlen Koltun, Jana Kosecka, Jitendra Malik, Roozbeh Mottaghi, Manolis Savva, and +Amir Roshan Zamir. On evaluation of embodied navigation agents. CoRR, abs/1807.06757, +2018. URL http://arxiv.org/abs/1807.06757. +Reut Avni, Yael Tzvaigrach, and David Eilam. Exploration and navigation in the blind mole rat +(spalax ehrenbergi): global calibration as a primer of spatial representation. Journal of Experi- +mental Biology, 211(17):2817–2826, 2008. +Nicholas Ayache and Olivier D Faugeras. Building, registrating, and fusing noisy visual maps. The +International Journal of Robotics Research, 7(6):45–65, 1988. +Bowen Baker, Ingmar Kanitscheider, Todor Markov, Yi Wu, Glenn Powell, Bob McGrew, and Igor +Mordatch. Emergent tool use from multi-agent autocurricula. In Proceedings of the International +Conference on Learning Representations (ICLR), 2020. +Andrea Banino, Caswell Barry, Benigno Uria, Charles Blundell, Timothy Lillicrap, Piotr Mirowski, +Alexander Pritzel, Martin J. 
Chadwick, Thomas Degris, Joseph Modayil, Greg Wayne, Hubert +Soyer, Fabio Viola, Brian Zhang, Ross Goroshin, Neil Rabinowitz, Razvan Pascanu, Charlie Beat- +tie, Stig Petersen, Amir Sadik, Stephen Gaffney, Helen King, Koray Kavukcuoglu, Demis Hass- +abis, Raia Hadsell, and Dharshan Kumaran. Vector-based navigation using grid-like representa- +tions in artificial agents. Nature, 557(7705):429–433, 2018. doi: 10.1038/s41586-018-0102-6. +URL https://doi.org/10.1038/s41586-018-0102-6. +Dhruv Batra, Angel X Chang, Sonia Chernova, Andrew J Davison, Jia Deng, Vladlen Koltun, Sergey +Levine, Jitendra Malik, Igor Mordatch, Roozbeh Mottaghi, Manolis Savva, and Hao Su. Rear- +rangement: A challenge for embodied ai. In arXiv preprint arXiv:2011.01975, 2020. +Jake Bruce, Niko S¨underhauf, Piotr Mirowski, Raia Hadsell, and Michael Milford. Learning deploy- +able navigation policies at kilometer scale from a single traversal. Conference on Robot Learning +(CoRL), 2018. +Edgar Chan, Oliver Baumann, Mark A Bellgrove, and Jason B Mattingley. From objects to land- +marks: the function of visual location information in spatial navigation. Frontiers in psychology, +3:304, 2012. +10 + +Published as a conference paper at ICLR 2023 +Angel Chang, Angela Dai, Thomas Funkhouser, Maciej Halber, Matthias Niessner, Manolis Savva, +Shuran Song, Andy Zeng, and Yinda Zhang. Matterport3d: Learning from rgb-d data in indoor +environments. In International Conference on 3D Vision (3DV), 2017. License: http://kaldir. +vc.in.tum.de/matterport/MP TOS.pdf. +Nicole Chapuis and Patricia Scardigli. Shortcut ability in hamsters (mesocricetus auratus): The +role of environmental and kinesthetic information. Animal Learning & Behavior, 21(3):255–265, +1993. +Prithvijit Chattopadhyay, Judy Hoffman, Roozbeh Mottaghi, and Ani Kembhavi. Robustnav: To- +wards benchmarking robustness in embodied navigation. In Proceedings of IEEE Conference on +Computer Vision and Pattern Recognition (CVPR), 2021. 
+Allen Cheung, Matthew Collett, Thomas S. Collett, Alex Dewar, Fred Dyer, Paul Graham, Michael +Mangan, Ajay Narendra, Andrew Philippides, Wolfgang St¨urzl, Barbara Webb, Antoine Wys- +trach, and Jochen Zeil. Still no convincing evidence for cognitive map use by honeybees. Pro- +ceedings of the National Academy of Sciences, 111(42):E4396–E4397, 2014. ISSN 0027-8424. +doi: 10.1073/pnas.1413581111. URL https://www.pnas.org/content/111/42/E4396. +Holk Cruse and R¨udiger Wehner. No need for a cognitive map: Decentralized memory for in- +sect navigation. PLOS Computational Biology, 7(3):1–10, 03 2011. doi: 10.1371/journal.pcbi. +1002009. URL https://doi.org/10.1371/journal.pcbi.1002009. +Christopher J. Cueva and Xue-Xin Wei. Emergence of grid-like representations by training recur- +rent neural networks to perform spatial localization. In Proceedings of the International Confer- +ence on Learning Representations (ICLR), 2018. URL https://openreview.net/forum?id= +B17JTOe0-. +Kshitij Dwivedi, Gemma Roig, Aniruddha Kembhavi, and Roozbeh Mottaghi. What do navigation +agents learn about their environment? In Proceedings of IEEE Conference on Computer Vision +and Pattern Recognition (CVPR), pp. 10276–10285, 2022. +Russell Epstein, E Z Patai, Joshua Julian, and Hugo Spiers. The cognitive map in humans: Spatial +navigation and beyond. Nature Neuroscience, 20:1504–1513, 10 2017. doi: 10.1038/nn.4656. +Charles R. Gallistel. Learning, development, and conceptual change.The organization of learning. +The MIT Press, 1990. +Priya Goyal, Piotr Doll´ar, Ross B. Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, +Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch SGD: training ima- +genet in 1 hour. CoRR, abs/1706.02677, 2017. URL http://arxiv.org/abs/1706.02677. +Lee Harten, Amitay Katz, Aya Goldshtein, Michal Handel, and Yossi Yovel. The ontogeny of a +mammalian cognitive map in the real world. Science, 369(6500):194–197, 2020. 
+Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog- +nition. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), +2016. +Sepp Hochreiter and J¨urgen Schmidhuber. Long short-term memory. Neural Computation, 9(8): +1735–1780, 1997. +Peter J Huber. Robust estimation of a location parameter. In The Annals of Mathematical Statistics, +pp. 73–101. JSTOR, 1964. +Tri Huynh, Michael Maire, and Matthew R. Walter. Multigrid neural memory. In Proceedings of +the International Conference on Machine Learning (ICML), pp. 4561–4571. PMLR, 2020. +Stanford Research Institute. Shakey: An experiment in robot planning and learning., 1972. +P Ioal`e, M Nozzolini, and F Papi. Homing pigeons do extract directional information from olfactory +stimuli. Behavioral Ecology and Sociobiology, 26(5):301–305, 1990. +11 + +Published as a conference paper at ICLR 2023 +Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by +reducing internal covariate shift. In Proceedings of the International Conference on Machine +Learning (ICML), 2015. +Lucia F Jacobs. The evolution of the cognitive map. Brain, behavior and evolution, 62(2):128–139, +2003. +Abhishek Kadian, Joanne Truong, Aaron Gokaslan, Alexander Clegg, Erik Wijmans, Stefan Lee, +Manolis Savva, Sonia Chernova, and Dhruv Batra. Are we making real progress in simulated +environments? measuring the sim2real gap in embodied visual navigation. In IEEE Robotics and +Automation Letters (RA-L), 2020. +Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Pe- +ter Tang. On large-batch training for deep learning: Generalization gap and sharp minima. In +Proceedings of the International Conference on Learning Representations (ICLR), 2017. +Apoorv Khandelwal, Luca Weihs, Roozbeh Mottaghi, and Aniruddha Kembhavi. Simple but effec- +tive: Clip embeddings for embodied ai. 
In Proceedings of IEEE Conference on Computer Vision +and Pattern Recognition (CVPR), pp. 14829–14838, 2022. +Tali Kimchi, Ariane S Etienne, and Joseph Terkel. A subterranean mammal uses the magnetic +compass for path integration. Proceedings of the National Academy of Sciences, 101(4):1105– +1109, 2004. +Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Proceedings of +the International Conference on Learning Representations (ICLR), 2015. +Dmitry Kobak and Philipp Berens. The art of using t-sne for single-cell transcriptomics. Nature +communications, 10(1):1–14, 2019. +Zbynek Kozmik, Jana Ruzickova, Kristyna Jonasova, Yoshifumi Matsumoto, Pavel Vopalensky, +Iryna Kozmikova, Hynek Strnad, Shoji Kawamura, Joram Piatigorsky, Vaclav Paces, et al. As- +sembly of the cnidarian camera-type eye from vertebrate-like components. Proceedings of the +National Academy of Sciences, 105(26):8989–8993, 2008. +Joel Lehman, Jeff Clune, Dusan Misevic, Christoph Adami, Lee Altenberg, Julie Beaulieu, Peter J +Bentley, Samuel Bernard, Guillaume Beslon, David M Bryson, et al. The surprising creativity of +digital evolution: A collection of anecdotes from the evolutionary computation and artificial life +research communities. Artificial Life, 26(2):274–306, 2020. +Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Doll´ar. Focal loss for dense +object detection. In Proceedings of IEEE International Conference on Computer Vision (ICCV), +pp. 2980–2988, 2017. +Rosanne Liu, Joel Lehman, Piero Molino, Felipe Petroski Such, Eric Frank, Alex Sergeev, and Jason +Yosinski. An intriguing failing of convolutional neural networks and the coordconv solution. In +Advances in Neural Information Processing Systems (NeurIPS), pp. 9605–9616, 2018. +Ronald Mathias Lockley. Animal navigation. Pan Books, 1967. +Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. 
In Proceedings of the +International Conference on Learning Representations (ICLR), 2019. +Vladimir J Lumelsky and Alexander A Stepanov. Path-planning strategies for a point mobile au- +tomaton moving amidst unknown obstacles of arbitrary shape. Algorithmica, 2(1-4):403–430, +1987. +Hans Maaswinkel and Ian Q Whishaw. Homing with locale, taxon, and dead reckoning strategies +by foraging rats: sensory hierarchy in spatial navigation. Behavioural brain research, 99(2): +143–152, 1999. +Emil W Menzel. Chimpanzee spatial memory organization. Science, 182(4115):943–945, 1973. +12 + +Published as a conference paper at ICLR 2023 +Martin M¨uller and R¨udiger Wehner. Path integration in desert ants, cataglyphis fortis. Proceedings +of the National Academy of Sciences, 85(14):5287–5290, 1988. +Vinod Nair and Geoffrey E Hinton. Rectified linear units improve restricted boltzmann machines. +In Proceedings of the International Conference on Machine Learning (ICML), 2010. +John O’keefe and Lynn Nadel. The hippocampus as a cognitive map. Oxford: Clarendon Press, +1978. +Ruslan Partsey, Erik Wijmans, Naoki Yokoyama, Oles Dobosevych, Dhruv Batra, and Oleksandr +Maksymets. Is mapping necessary for realistic pointgoal navigation? In Proceedings of IEEE +Conference on Computer Vision and Pattern Recognition (CVPR), pp. 17232–17241, 2022. +R. Peters. Cognitive maps in wolves and men. Environmental design research, 2:247–253, 1976. +Santhosh K Ramakrishnan, Aaron Gokaslan, Erik Wijmans, Oleksandr Maksymets, Alex Clegg, +John Turner, Eric Undersander, Wojciech Galuba, Andrew Westbury, Angel X Chang, et al. +Habitat-matterport 3d dataset (hm3d): 1000 large-scale 3d environments for embodied ai. Neural +Information Processing Systems – Benchmarks and Datasets, 2021. +Scott Reed, Konrad Zolna, Emilio Parisotto, Sergio Gomez Colmenarejo, Alexander Novikov, +Gabriel Barth-Maron, Mai Gimenez, Yury Sulsky, Jackie Kay, Jost Tobias Springenberg, et al. +A generalist agent. 
arXiv preprint arXiv:2205.06175, 2022. +Manolis Savva, Abhishek Kadian, Oleksandr Maksymets, Yili Zhao, Erik Wijmans, Bhavana Jain, +Julian Straub, Jia Liu, Vladlen Koltun, Jitendra Malik, Devi Parikh, and Dhruv Batra. Habitat: +A Platform for Embodied AI Research. In Proceedings of IEEE International Conference on +Computer Vision (ICCV), 2019. +John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. +High- +dimensional continuous control using generalized advantage estimation. In Proceedings of the +International Conference on Learning Representations (ICLR), 2016. +John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy +optimization algorithms. CoRR, abs/1707.06347, 2017. +Randall Smith, Matthew Self, and Peter Cheeseman. Estimating uncertain spatial relationships in +robotics. In Autonomous robot vehicles, pp. 167–193. Springer, 1990. +Ben Sorscher, Gabriel C. Mel, Samuel A. Ocko, Lisa Giocomo, and Surya Ganguli. +A uni- +fied theory for the computational and mechanistic origins of grid cells. +In bioRxiv preprint +bioRxiv:2020.12.29.424583, 2020. doi: 10.1101/2020.12.29.424583. +Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. +Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning +Research (JMLR), 15(1):1929–1958, 2014. +Julian Straub, Thomas Whelan, Lingni Ma, Yufan Chen, Erik Wijmans, Simon Green, Jakob J. +Engel, Raul Mur-Artal, Carl Ren, Shobhit Verma, Anton Clarkson, Mingfei Yan, Brian Budge, +Yajie Yan, Xiaqing Pan, June Yon, Yuyang Zou, Kimberly Leon, Nigel Carter, Jesus Briales, Tyler +Gillingham, Elias Mueggler, Luis Pesqueira, Manolis Savva, Dhruv Batra, Hauke M. Strasdat, +Renzo De Nardi, Michael Goesele, Steven Lovegrove, and Richard A. Newcombe. The replica +dataset: A digital replica of indoor spaces. CoRR, abs/1906.05797, 2019. URL http://arxiv. +org/abs/1906.05797. 
+Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. MIT press, 1992. +Andrew Szot, Alex Clegg, Eric Undersander, Erik Wijmans, Yili Zhao, John Turner, Noah Maestre, +Mustafa Mukadam, Devendra Chaplot, Oleksandr Maksymets, Aaron Gokaslan, Vladimir Von- +drus, Sameer Dharur, Franziska Meier, Wojciech Galuba, Angel Chang, Zsolt Kira, Vladlen +Koltun, Jitendra Malik, Manolis Savva, and Dhruv Batra. Habitat 2.0: Training home assis- +tants to rearrange their habitat. Advances in Neural Information Processing Systems (NeurIPS), +2021. +13 + +Published as a conference paper at ICLR 2023 +Sebastian Thrun, Wolfram Burgard, and Dieter Fox. Probabilistic robotics (intelligent robotics and +autonomous agents), 2005. +Sivan Toledo, David Shohami, Ingo Schiffner, Emmanuel Lourie, Yotam Orchan, Yoav Bartan, and +Ran Nathan. Cognitive map–based navigation in wild bats revealed by a new high-throughput +tracking system. Science, 369(6500):188–193, 2020. +Edward C. Tolman. Cognitive maps in rats and men. Psychological Review, 55(4):189–208, 1948. +doi: 10.1037/h0061626. +Jonathan Tompson, Ross Goroshin, Arjun Jain, Yann LeCun, and Christoph Bregler. Efficient object +localization using convolutional networks. In Proceedings of IEEE Conference on Computer +Vision and Pattern Recognition (CVPR), pp. 648–656, 2015. +Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of machine +learning research, 9(11), 2008. +Karl Von Frisch. The dance language and orientation of bees. Harvard University Press, 1967. +John Wieting and Douwe Kiela. No training required: Exploring random encoders for sentence clas- +sification. In Proceedings of the International Conference on Learning Representations (ICLR), +2019. +Erik Wijmans, Abhishek Kadian, Ari Morcos, Stefan Lee, Irfan Essa, Devi Parikh, Manolis Savva, +and Dhruv Batra. DD-PPO: Learning near-perfect pointgoal navigators from 2.5 billion frames. 
In Proceedings of the International Conference on Learning Representations (ICLR), 2020.

Frank Wilcoxon. Individual comparisons by ranking methods. In Breakthroughs in statistics, pp. 196–202. Springer, 1992.

Fei Xia, Amir R Zamir, Zhiyang He, Alexander Sax, Jitendra Malik, and Silvio Savarese. Gibson env: Real-world perception for embodied agents. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. License: https://storage.googleapis.com/gibson_material/Agreement%20GDS%2006-04-18.pdf.

A METHODS AND MATERIALS

A.1 POINTGOAL NAVIGATION TRAINING

Task. In PointGoal Navigation, the agent is tasked with navigating to a point specified relative to its initial location, i.e. an input of (δx, δy) corresponds to going δx meters forward and δy meters to the right. The agent succeeds if it predicts the stop action within 0.2 meters of the specified point. The agent has access to 4 low-level actions – move forward (0.25 meters), turn left (10◦), turn right (10◦), and stop. There is no noise in the agent's actuations.

Sensors. The agent has access to solely an idealized GPS+Compass sensor that provides its heading and position relative to the starting orientation and location at each time step. There is no noise in the agent's sensors.

Architecture. The agent is parameterized by a 3-layer LSTM (Hochreiter & Schmidhuber, 1997) with a 512-d hidden dimension. At each time-step, the agent receives observations g (the location of the goal relative to start), GPS (its current position relative to start), and compass (its current heading relative to start). We also explicitly give the agent an indicator of whether it is close to the goal, in the form of min(||g − GPS||, 0.5), as we find the agent does not learn robust stopping logic otherwise. All 4 inputs are projected to 32-d using separate fully-connected layers.
These are then concatenated with a learned 32-d embedding of the previous action taken to form a 160-d input that is then given to the LSTM. The output of the LSTM is then processed by a fully-connected layer to produce a softmax distribution over the action space and an estimate of the value function.

Training Data. We construct our training data based on the Gibson (Xia et al., 2018) and Matterport3D (Chang et al., 2017) datasets. We train on 411 scenes from Gibson and 72 from Matterport3D.

Training Procedure. We train our agents using Proximal Policy Optimization (PPO) (Schulman et al., 2017) with Generalized Advantage Estimation (GAE) (Schulman et al., 2016). We use Decentralized Distributed PPO (DD-PPO) (Wijmans et al., 2020) to train on 16 GPUs. Each GPU/worker collects 256 steps of experience from 16 agents (each in a different scene) and then performs 2 epochs of PPO with 2 mini-batches per epoch. We use the Adam optimizer (Kingma & Ba, 2015) with a learning rate of 2.5 × 10−4. We set the discount factor γ to 0.99, the PPO clip to 0.2, and the GAE hyper-parameter τ to 0.95. We train until convergence (around 2 billion steps of experience).

At every timestep t, the agent is in state st, takes action at, and transitions to state st+1. It receives a shaped reward of the form:

rt = 2.5 · Success,  if at is stop
rt = −∆geo_dist(st, st+1) − λ,  otherwise     (1)

where ∆geo_dist(st, st+1) is the change in geodesic (shortest path) distance to the goal between st and st+1, and λ = 0.001 is a slack penalty encouraging shorter episodes.

Evaluation Procedure. We evaluate the agent in the 18 scenes from the Matterport3D test set. We use the episodes from Savva et al. (2019), which consist of 56 episodes per scene (1008 in total). Episodes range in distance from 1.2 to 30 meters.
The ratio of geodesic distance to Euclidean distance between start and goal is restricted to be greater than or equal to 1.1, ensuring that episodes are not simple straight lines. Note that reward is not available during evaluation.

The agent is evaluated under two metrics: Success, whether or not the agent called the stop action within 0.2 meters of the goal, and Success weighted by normalized inverse Path Length (SPL) (Anderson et al., 2018). SPL is calculated as follows: given the agent's path [s1, . . . , sT] and the initial geodesic distance to goal di for episode i, we first compute the length of the agent's path

li = Σ_{t=2}^{T} ||st − st−1||2     (2)

then the SPL for episode i as

SPLi = Successi · di / max{di, li}     (3)

We then report SPL as the average of SPLi across all episodes.

A.2 PROBE TRAINING

Task. The probe task is to either navigate from start to goal again (SecondNav(S→T)) or navigate from goal to start (SecondNav(T→S)). For SecondNav(S→T), the probe is initialized at the starting location but with the agent's final heading. For SecondNav(T→S), the probe is initialized with the agent's final heading and position. In both cases, the probe and the agent share the same coordinate system – i.e. in SecondNav(T→S), the initial GPS and Compass readings for the probe are identical to the final GPS and Compass readings for the agent. When the agent does not successfully reach the goal, the probe task is necessarily undefined and we do not instantiate a probe.

Sensors, Architecture, Training Procedure, Training Data. The probe uses the same sensor suite, architecture, training procedure, and training data as the agent, described in Section A.1.

Note that no gradients (or rewards) flow from the probe to the agent. From the agent's perspective, the probe does not exist. From the probe's perspective, the agent provides a dataset of initial locations (or goals) and initial hidden states.

Evaluation Procedure.
We evaluate the probe in a similar manner to the agent, except that any episode the agent is unable to complete (5%) is removed, since the probe task is undefined in that case. The agent reaches the goal 95% of the time, thus only 50 of the 1008 possible probe evaluation episodes are invalidated. The control probe types account for this. We ignore the agent's trajectory when computing SPL for the probe.
A.3 OCCUPANCY MAP DECODING
Task. We train a decoding network to predict the top-down occupancy map of the environment from the final internal state of the agent (ht, ct). We limit the decoder to only predict within 2.5 meters of any location the agent visited.
Architecture. The map decoder is constructed as follows: first, the internal state (ht, ct) is concatenated into a 512×6-d vector. The vector is then passed to a 2-layer MLP with a hidden dimension of 512 that produces a 4608-d vector. This vector is reshaped into a [128, 6, 6] feature map. The feature map is processed by a series of Coordinate Convolution (CoordConv) (Liu et al., 2018) and Coordinate Up-Convolution (CoordUpConv) layers that decrease the channel depth and increase the spatial resolution to [16, 96, 96]. Specifically, after an initial CoordConv with an output channel depth of 128, we use a series of 4 CoordUpConv-CoordConv pairs where each CoordUpConv doubles the spatial dimensions (quadrupling the spatial resolution) and each CoordConv halves the channel depth. We then use a final 1×1 convolution to create a [2, 96, 96] tensor representing the unnormalized log-probabilities of whether a given location is navigable.
Each CoordConv has kernel size 3, padding 1, and stride 1. Each CoordUpConv has kernel size 3, padding 0, and stride 2.
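The CoordConv layers above differ from plain convolutions only in that normalized (x, y) coordinate channels are appended to the input before convolving (Liu et al., 2018). A minimal NumPy sketch of that coordinate augmentation (the convolution itself is a standard layer; this helper is illustrative, not the paper's implementation):

```python
import numpy as np

def add_coord_channels(feature_map):
    """Append normalized x and y coordinate channels to a [C, H, W] feature
    map; the augmented map is then fed to an ordinary convolution."""
    c, h, w = feature_map.shape
    ys = np.linspace(-1.0, 1.0, h)
    xs = np.linspace(-1.0, 1.0, w)
    gx, gy = np.meshgrid(xs, ys)  # 'xy' indexing: both are [H, W]
    return np.concatenate([feature_map, gx[None], gy[None]], axis=0)
```

A [128, 6, 6] feature map becomes [130, 6, 6] before each convolution, giving the decoder an explicit notion of where in the map each prediction lies, which is helpful for spatial decoding.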
Before all CoordConv and CoordUpConv layers, we use 2D Dropout (Srivastava et al., 2014; Tompson et al., 2015) with a zero-out probability of 0.05. We use Batch Normalization (Ioffe & Szegedy, 2015) and the ReLU activation function (Nair & Hinton, 2010) after all layers except the terminal layer.
Training Data. We construct our training data by having a trained agent perform episodes of PointGoal navigation on the training dataset. Note that while evaluation uses only the final hidden state, we construct our training dataset by taking 30 time steps (evenly spaced) from each trajectory, ensuring the final step is included.
Training Procedure. We train on 8 GPUs with a batch size of 128 per GPU (total batch size of 1024). We use the AdamW optimizer (Kingma & Ba, 2015; Loshchilov & Hutter, 2019) with an initial learning rate of 10−3, linearly scaled to 1.6 × 10−2 over the first 5 epochs (Goyal et al., 2017), and a weight decay of 10−5. We use the validation dataset to perform early stopping. We use Focal Loss (Lin et al., 2017) (a weighted version of Cross-Entropy Loss) with γ = 2.0, αNotNavigable = 0.75, and αNavigable = 0.25 to handle the class imbalance.
Evaluation Data and Procedure. We construct our evaluation data using the validation dataset. Note that the scenes in evaluation are novel to both the agent and the decoder. We evaluate the predicted occupancy map from the final hidden state (final time step). We collect a total of 5,000 episodes.
A.4 PAST AND FUTURE POSITION PREDICTION
Task. We train a decoder to predict the change in agent location given the internal state at time t, (ht, ct). Specifically, let s_t be the agent's position at time t, where the coordinate system is defined by the agent's starting location (i.e. s_0 = 0), and let s_{t+k} be its position k steps into the future/past; the decoder is then trained to model f((ht, ct)) = s_{t+k} − s_t.
Architecture.
The decoder is a 3-layer MLP with hidden sizes of 256 and 128 that produces a 3-dimensional output. We use Batch Normalization (Ioffe & Szegedy, 2015) and the ReLU activation function (Nair & Hinton, 2010) after all layers except the last.
Training Data. The training data is collected by executing a trained agent on episodes from the training set. For each episode, we collect all possible pairs (s_t, s_{t+k}) for a given value of k.
Training Procedure. We use the AdamW optimizer (Kingma & Ba, 2015; Loshchilov & Hutter, 2019) with a learning rate of 10−3, a weight decay of 10−4, and a batch size of 256. We use a Smooth L1 (Huber) Loss (Huber, 1964) between the ground-truth and predicted change in position. We use the validation set to perform early stopping.
Evaluation Procedure. We evaluate the trained decoder on held-out scenes. Note that the held-out scenes are novel to both the agent and the decoder.
Visualization of Predictions. For visualizing predictions of past visitation, we found it easier to train a second decoder that predicts, on a 2D top-down map, all locations the agent visited previously, given the internal state (ht, ct). This decoder shares the exact same architecture and training procedure as the occupancy grid decoder. It removes the temporal aspect from the prediction, so it is ill-suited for any time-dependent analysis, but produces clearer visualizations.
Excursion Calibrated Analysis. To perform the excursion forgetting analysis, we use the excursion-labeled episodes. We mark the end of an excursion as the last 10% of the steps that are part of the excursion. For a given point in time t, we classify that point into one of {Non-Excursion, Excursion, Exit}. We then examine how well this point is remembered by calculating the error of predicting the point t from t + k, i.e.
how well t can be predicted when it is k steps in the past. When t is part of an excursion (both the excursion and the exit), we limit t + k to either be part of the same excursion or not part of any excursion. When t is not part of an excursion, t + k must also not be part of an excursion, nor can there be any excursion in the range [t, t + k].
A.5 COLLISION PREDICTION LINEAR PROBE
Task. The task of this probe is to predict whether the previous action taken led to a collision, given the current hidden state. Specifically, it seeks to learn a function Collided_t = f((ht, ct)), where (ht, ct) is the internal state at time t and Collided_t is whether or not the previous action a_{t−1} led to a collision.
Architecture. The architecture is a logistic classifier that takes the concatenation of the internal state and produces the log-probability of Collided_t.
Training Data. We construct our training data by having a trained agent perform episodes of PointGoal navigation on the training set. We collect a total of 10 million samples and then randomly select 1 million for training. We then normalize each dimension independently: we compute its mean and standard deviation, subtract the mean, and divide by the standard deviation. This ensures that all dimensions have the same average magnitude.
Training Procedure. We train on 1 GPU with a batch size of 256. We use the Adam optimizer (Kingma & Ba, 2015) with a learning rate of 5 × 10−4. We train for 20 epochs.
Evaluation Data and Procedure. We construct our evaluation data using the same procedure as the training data, but on the validation dataset, and collect 200,000 samples (subsampled to 20,000).
Important Dimension Selection. To select which dimensions are important for predicting collisions, we re-train our probe with various L1 penalties. We sweep the penalty from 0 to 1000 and then select the penalty that results in the lowest number of significant dimensions without substantially reducing accuracy.
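A minimal sketch of such an L1-penalized logistic probe on standardized features (illustrative only: a plain proximal-gradient loop stands in for the Adam training described above, and the function name is ours):

```python
import numpy as np

def train_collision_probe(X, y, l1=0.0, lr=0.1, epochs=200):
    """Logistic probe: standardize each hidden-state dimension, then fit a
    linear classifier with an optional L1 penalty via proximal gradient steps."""
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-8)  # per-dimension normalization
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(collided)
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * float(np.mean(p - y))
        # soft-thresholding: the proximal step for the L1 penalty
        w = np.sign(w) * np.maximum(np.abs(w) - lr * l1, 0.0)
    return w, b
```

Sweeping `l1` upward drives ever more weights exactly to zero; the dimensions that survive are the candidates for 'important' collision-detection neurons.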
We determine the number of significant dimensions by first ordering all dimensions by the L1 norm of the corresponding weight and then finding the smallest number of dimensions we can keep while maintaining 99% of the performance of the classifier with all dimensions.
The t-SNE manifold is computed using 20,000 samples, which are then randomly subsampled to 1,500 for visualization.
A.6 DATA AND MATERIALS AVAILABILITY
The Gibson (Xia et al., 2018) and Matterport3D (Chang et al., 2017) datasets can be acquired from their respective distributors. Habitat (Savva et al., 2019) is open source. Code to reproduce experiments will be made available.
B ADDITIONAL DISCUSSIONS
B.1 RELATIONSHIP TO COGNITIVE MAPS
Throughout the text, we use the term 'map' to mean a spatial representation that supports intelligent behaviors like taking shortcuts. Whether this term is distinct from the specific concept of a 'cognitive map' is debated.
Cognitive maps, as defined by O'keefe & Nadel (1978), imply a set of properties and are generally attached to a specific mechanism. The existence of a cognitive map requires that the agent be able to reach a desired goal in the environment from any starting location without being given that starting location, i.e. be able to navigate against a map. Further, cognitive maps refer to a specific mechanism: place cells and grid cells in the hippocampus. Other works have also studied 'cognitive maps' without placing such restrictions on the definition (Gallistel, 1990; Tolman, 1948); however, these broader definitions have been debated (Jacobs, 2003).
Our work shows that the spatial information contained within the agent's hidden state enables map-like properties – it allows a secondary agent to take shortcuts through previously unexplored free space – and supports the decoding of a metric map.
However, these do not fully cover the properties of O'keefe & Nadel (1978)'s definition, nor do we make a mechanistic claim about how this information is stored in the neural network, though we do find the emergence of collision-detection neurons.
C ADDITIONAL EXPERIMENTS
C.1 BLIND SHORTEST PATH NAVIGATION WITH TRUE STATE
In the main text, we posited that blind agents learn wall-following as this is an effective strategy for blind navigation in unknown environments. We posit that this is because the agent does not have access to true state (it knows neither the current environment nor where it is in global coordinates). In this experiment we show that blind agents learn to take shortest paths, as opposed to wall-following, when trained in a single environment (implicitly informing the agent of the current environment) with the global coordinate system.8
We use an identical agent architecture and training procedure as outlined for PointGoal navigation training in the Materials and Methods, with two differences: 1) a single training and test environment, and 2) usage of the global coordinates within the environment for both goal specification and the agent's GPS+Compass sensor. We perform this experiment on 3 scenes: 1 from the Gibson val dataset and 2 from the Matterport3D val dataset. The average SPL during training is 99±0.1, showing that the blind agent learns shortest-path navigation, not wall-following. Figure A6 shows examples of an agent trained in a single scene with global coordinates and an agent trained in many scenes with episodic coordinates.
These two settings, i) where the agent uses an episodic coordinate system and navigates in unknown environments, and ii) where the agent uses global coordinates and navigates in a known environment, can be seen as the difference between a partially observable Markov decision process (POMDP) and a Markov decision process (MDP).
In the POMDP case, the agent must learn a generalizable policy, while in the MDP case it can overfit.
C.2 FURTHER ANALYSIS OF THE PROBE'S PERFORMANCE
In the main text, we showed that the probe is indeed much more efficient than the agent; how is this gain achieved? Our hypothesis is that the probe improves upon the agent's path by taking shortcuts and eliminating excursions ('out and back' sub-paths). We define an excursion as a sub-path that approximately forms a loop. To quantify excursions, we manually annotate them in 216 randomly sampled episodes in evaluation environments. Of the labeled episodes, 62% have at least 1 excursion. On average, an episode has 0.95 excursions, and excursions have an average length of 101 steps (corresponding to 8.23 meters). Since excursions represent unnecessary portions of the trajectory, this indicates that the probe should be able to improve upon the agent's path by removing them.
We quantify this excursion removal via the normalized Chamfer distance between the agent's path and the probe's path. Formally, given the agent's path Agent = [s_1^{(agent)}, . . . , s_T^{(agent)}] and the probe's path Probe = [s_1^{(probe)}, . . . , s_N^{(probe)}], where s ∈ R^3 is a point in the environment:

\text{PathDiff}(\text{Agent}, \text{Probe}) = \frac{1}{T} \sum_{i=1}^{T} \min_{1 \le j \le N} \text{GeoDist}\big(s_i^{(agent)}, s_j^{(probe)}\big), \quad (4)

where GeoDist(·, ·) denotes the geodesic distance (shortest traversable path length).
Note that the Chamfer distance is not symmetric. PathDiff(Probe, Agent) measures the average distance of a point on the probe path s_j^{(probe)} from the closest point on the agent path. A large PathDiff(Probe, Agent) indicates that the probe travels through novel parts of the environment (compared to the agent). Conversely, PathDiff(Agent, Probe) measures the average distance of a point on the agent path s_i^{(agent)} from the closest point on the probe path.
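Eq. 4 can be sketched directly; here `geo_dist` stands in for the simulator's geodesic distance (any distance function, e.g. Euclidean, illustrates the asymmetry):

```python
def path_diff(path_a, path_b, geo_dist):
    """Asymmetric Chamfer distance of Eq. 4: the average, over the points of
    path_a, of the distance to the closest point on path_b."""
    return sum(min(geo_dist(a, b) for b in path_b) for a in path_a) / len(path_a)
```

Because the two arguments play different roles, path_diff(agent, probe) and path_diff(probe, agent) generally differ; the Excursion Removal gap discussed next is exactly this difference.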
A large gap, PathDiff(Agent, Probe) − PathDiff(Probe, Agent), indicates that the agent path contains excursions while the probe path does not; thus, we refer to this gap as Excursion Removal. To visually understand why this is the case, consider the example agent and probe paths in Fig. A7. Point (C) lies on an excursion in the agent path. It contributes a term to PathDiff(Agent, Probe) but not to PathDiff(Probe, Agent), because (D) is closer to (E) than to (C).
On both SecondNav(S→T) and SecondNav(T→S), we find that as the efficiency of a probe increases, Excursion Removal also increases (Table A2, row 1 vs. 2, 2 vs. 3), confirming that the TrainedAgentMemory probe is more efficient because it removes excursions.
We next consider whether the TrainedAgentMemory probe also travels through previously unexplored space in addition to removing excursions. To quantify this, we report PathDiff(Probe, Agent) on episodes where agent SPL is below average (less than 62.9%).9 If probes took the same path as the agent, we would expect this metric to be zero. If, however, probes travel through previously unexplored space to minimize travel distance, we would expect this metric to be significantly non-zero. Indeed, on SecondNav(S→T), we find the TrainedAgentMemory probe is 0.32 meters away on average from the closest point on the agent's path (a 99% empirical bootstrap of the mean gives a range of (0.299, 0.341)). See Fig. A7 for a visual example.
8 Recall that in the episodic coordinate system the origin is defined by the agent's starting position and orientation. In the global coordinate system the origin is an arbitrary but consistent location (we simply use the origin for a given scene defined in the dataset). Thus, in the global coordinate system the goal is specified as 'Go to (x, y)' where x and y are given in the global coordinate system, not with respect to the agent's current location.
On SecondNav(T→S), this effect is slightly more pronounced: the TrainedAgentMemory probe is 0.55 meters away on average (a 99% empirical bootstrap of the mean gives a range of (0.52, 0.588)). Taken holistically, these results show that the probe is both more efficient than the agent and consistently travels through new parts of the environment (that the agent did not travel through). Thus, the spatial representation in the agent's memory is not simply a 'literal' episodic summarization, but also contains anticipatory inferences about previously unexplored spaces being navigable (e.g. traveling along the hypotenuse instead of the sides of a room).
In the text above we reported free-space inference only on episodes where the agent's SPL is below average. In Fig. A12 we provide a plot of Free Space Inference vs. Agent SPL to show the impact of other cutoff points. In Fig. A13 we provide a similar plot of Excursion Removal vs. Agent SPL. In both cases, as agent SPL increases, the probe is able to infer less free space or remove fewer excursions.
C.3 FUTURE VISITATION PREDICTION
In the main text we examined what types of systematic errors are made when decoding past agent locations; here we provide additional analysis and look at predicting future locations, as that will reveal whether there are any idiosyncrasies between what can be predicted about the future vs. what will actually happen.
Given the ground-truth location s_{t+k}, we evaluate the decoder via i) absolute L2 error ||ŝ_{t+k} − s_{t+k}|| and ii) relative L2 error ||ŝ_{t+k} − s_{t+k}|| / ||s_{t+k} − s_t||. To determine baseline (chance) performance, we train a second set of decoders where, instead of using the correct internal state (ht, ct) as input, we randomly select an internal state from a different trajectory. This evaluates whether there are any inherent biases in the task.
In Fig. A8, we find that the decoder is able to accurately predict where the agent has been, even for long time horizons – e.g.
at 100 time steps in the past, the relative error is 0.55 and the absolute error is 1.0 m, compared to a relative error of 1.0 and an absolute error of 3.2 m for the chance baseline. For short time horizons the decoder is also able to accurately predict where the agent will be in the future – e.g. at 10 time steps into the future, relative and absolute error are below chance. Interestingly, for longer-range future predictions, the decoder is worse than chance in relative error but on par in absolute error. This apparent contradiction arises because the decoders make (relatively) large systematic errors when the agent backtracks. In order for the decoder to predict backtracking, the agent would need to already know that its future trajectory will be sub-optimal (i.e. lead to backtracking) but still take that trajectory. This contradicts the objective the agent is trained for, reaching the goal as quickly as possible: the agent would not take a given path if it knew it would lead to backtracking.
9 We restrict to a subset where the agent has relatively low SPL to improve dynamic range. When the agent has high SPL, there won't be excursions to remove and this metric will naturally be low. In the supplementary text we provide plots of this metric vs. agent SPL.
C.4 EXTENSION TO SIGHTED NAVIGATION AGENTS
In the main text we analyzed how 'blind' agents, those with limited perceptual systems, utilize their memory, and found evidence that they build cognitive maps. Here, we extend our analysis to agents with rich perceptual systems: those equipped with a Depth camera and an egomotion sensor. Our primary experimental paradigm relies on showing that a probe is able to take shortcuts when given the agent's memory, and therefore on the probe being able to take a shorter path than the agent.
Navigation agents with vision can perform PointNav near-perfectly (Wijmans et al., 2020), leaving no room for improvement and rendering this experiment infeasible. As a supplement to this experiment, we also showed that a metric map (top-down occupancy grid) can be decoded from the agent's memory; this procedure can also be applied to sighted agents.
We use the ResNet50 (He et al., 2016) Gibson-2plus (Xia et al., 2018) pre-trained model from Wijmans et al. (2020) and train an occupancy grid decoder using the same procedure as in the main text. Note, however, that we use only Gibson for training and the Gibson validation scenes as held-out data instead of Matterport3D, as this agent was only trained on Gibson. As before, we compare the performance of TrainedAgentMemory with UntrainedAgentMemory.
We find mixed results. When measuring performance with Intersection-over-Union (IoU), UntrainedAgentMemory outperforms TrainedAgentMemory (42.9% vs. 40.1%). However, when measuring performance with average class-balanced accuracy, TrainedAgentMemory outperforms UntrainedAgentMemory (61.8% vs. 53.1%). Fig. A9 and Fig. A10 show the corresponding distribution plots.
Overall, this experiment does not provide convincing evidence either way as to whether vision-equipped agents build metric maps in their memory. It does show, however, that vision-equipped agents, if they do maintain a map of their environment, create one that is considerably more challenging to decode. Further, we note this does not necessarily imply similarly mixed results as to whether vision agents maintain a sparser but still spatial representation, such as a topological graph, as their rich perception can fill in the details in the moment.
C.5 NAVIGATION FROM MEMORY ALONE
In the main text we showed that agents learn to build map-like representations.
A map-like representation of the environment should, to a degree, support navigation with no external information, i.e. by dead reckoning. Given that the actions are deterministic, the probe should be able to perform either task using only the agent's internal representation and the previously taken action, with no external inputs. The localization performed by the probe in this setting is similar to path integration; however, it must also handle any collisions that occur while navigating.
Fig. A11 shows performance vs. episode length for SecondNav(S→T) and SecondNav(T→S). There are two primary trends. First, for short navigation episodes (≤5 m), the agent is often able to complete the task. Second, under this setting, SecondNav(T→S) is the easier task. This is due to the information conveyed to the probe by its initial heading: in SecondNav(T→S), the probe can make progress by simply turning around and going forward, while in SecondNav(S→T), the final heading of the agent is not informative of which way the probe should navigate initially. Overall, these results show that the representation built by the agent is sufficient to navigate short distances with no external information.
Experiment procedure. This experiment mirrors the probe experiment described in the Materials and Methods with three differences: 1) The input from the GPS+Compass sensor is zeroed out. 2) The change-in-distance-to-goal term in the shaped reward is normalized by the distance from the initial state to the goal; we find that the prediction of the value function suffers considerably otherwise. 3) An additional reward signal is added for whether the last action taken decreased the angle between the probe's current heading and the direction along the shortest path to the goal; without it, the probe has difficulty learning to turn around on the SecondNav(T→S) task (as it almost always starts facing 180° in the wrong direction).
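The heading-error shaping in difference (3) requires an angular distance that wraps correctly at ±180°; a minimal sketch (illustrative; the paper's exact implementation may differ):

```python
import math

def angular_distance(h_gt, h):
    """Smallest absolute angle, in radians, between two headings: the
    difference is wrapped into (-pi, pi] before taking the magnitude."""
    d = (h_gt - h + math.pi) % (2.0 * math.pi) - math.pi
    return abs(d)
```

Without the wrap, a probe at heading 359° with a goal heading of 1° would see an error of 358° instead of 2°, and the shaping term would push it the long way around.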
Let h_t^{gt} be the heading along the shortest path to the goal from the probe's current position s_t, and let h_t be the probe's current heading; then AngularDistance(h_t^{gt}, h_t) is the error in the probe's heading. The full reward for this probe is then

r_t(s_t, a_t, s_{t+1}) = \begin{cases} 2.5 \cdot \text{Success} & \text{if } a_t \text{ is Stop} \\ -10.0 \cdot \Delta_{\text{geo\_dist}}(s_t, s_{t+1}) / \text{GeoDist}(s_0, g) - 0.25 \cdot \Delta_{\text{HeadingError}}(s_t, s_{t+1}) - \lambda & \text{otherwise} \end{cases} \quad (5)

C.6 MEMORY LENGTH
The method presented in the main text to examine memory length is a post-hoc analysis performed on the 'blind' PointGoal navigation agents, and thus the agent is operating out of distribution. From the agent's view, it is still performing a valid PointGoal navigation episode, just with a different starting location, but the agent may not have taken the same sequence of actions had it started from that location. While we would still expect performance to saturate at a small k if the memory length is indeed short, this method is imprecise at measuring the exact memory length of the agent and does not answer what memory budget is required to perform the task.
Here we examine training agents with a fixed-memory-length LSTM. Fig. A14 shows trends similar to those described in the main paper – performance increases as the memory budget increases – however, performance is higher when the agent is trained for a given memory budget. Due to the increased compute needed to train the model (e.g. training a model with a memory length of 128 is 128× more computationally costly), we were unable to train for a memory budget longer than 256.
We also note the non-monotonicity in Fig. A14. We conjecture that this is a consequence of inducing the negative effects of large-batch optimization (Keskar et al., 2017): training with a memory budget of k effectively increases the batch size by a factor of k.
Keeping the batch size constant has its own drawbacks: reducing the number of parallel environments will harm data diversity and result in overfitting, while reducing the rollout length increases the bias of the return estimate and makes credit assignment harder. Thus we kept the number of environments and the rollout length constant.
D SUPPLEMENTARY VIDEOS
Movies S1-3: Videos showing blind agent navigation with the location of the hidden state in the collision t-SNE space. Notice that the hidden state stays within a cluster throughout a series of actions.

Probe Type                SecondNav(S→T)      SecondNav(T→S)
                          Excursion Removal   Excursion Removal
1 AllZeroMemory           0.21±0.017          0.21±0.004
2 UntrainedAgentMemory    0.23±0.009          0.25±0.009
3 TrainedAgentMemory      0.52±0.014          0.51±0.011

Table A2: Excursion removal results for our trained probe agent under three configurations – initialized with an empty representation (AllZeroMemory), a representation of a random agent walked along the trained agent's path (UntrainedAgentMemory), and the final representation of the trained agent (TrainedAgentMemory). 95% confidence interval reported over 5 agent-probe pairs.
Figure A6: True state trajectory comparison. Example trajectories of an agent with true state (trained for a specific environment and using global coordinates), green line, compared to an agent trained for many environments and using episodic coordinates, blue line. The latter is what we examine in this work. Notice that the agent with true state takes shortest-path trajectories while the agent without true state instead exhibits strong wall-following behavior.
+22 + +30Published as a conference paper at ICLR 2023 +PathDiff(P,A) +Probe Path +Agent Path +PathDiff(A,P) - PathDiff(P,A) +Excursion Removal +Free Space Inference +A +B +E +D +C +Figure A7: Two categories of probe shortcut. ‘Excursion Removal’ is when the probe removes +excursions from the agent’s path. The dashed line shows the distance between the points in the +excursion and the closest point in the probe’s path. ‘Free Space Inference’ occurs when the probe +travels through previously unvisited locations in the environments. The dashed lines show the dis- +tance between any points in the probe’s path and the closest point in the agent’s path. +200 +100 +0 +100 +200 +Time Offset +0 +2 +4 +6 +Error +Absolute L2 Error +200 +100 +0 +100 +200 +Time Offset +0.5 +1.0 +1.5 +2.0 +Relative L2 Error +Actual +Chance +Figure A8: Past and future prediction. Performance of decoders trained to predict where the agent +was in the past/will be in the future. On the x-axis is how far into the past or future the decoder +is predicting (positive values are future predictions and negative values are past predictions). The +y-axis is either absolute or relative L2 error between the predicted location of the agent and the true +location. +23 + +Published as a conference paper at ICLR 2023 +0.0 +0.2 +0.4 +0.6 +0.8 +1.0 +Map Prediction Accuracy (IoU) +UntrainedAgentMemory +TrainedAgentMemory +Figure A9: Map prediction accuracy (Intersection over Union) for Depth sensor equipped agents. +0.2 +0.3 +0.4 +0.5 +0.6 +0.7 +0.8 +0.9 +Map Prediction Accuracy (Class Balanced Accuracy) +UntrainedAgentMemory +TrainedAgentMemory +Figure A10: Map prediction accuracy (class balanced accuracy) for Depth sensor equipped agents. +5 +10 +15 +20 +25 +30 +GeodesicDistance(Start, Goal) +0 +10 +20 +30 +40 +50 +60 +70 +Performance (SPL; Higher is better) +SecondNav(S +T) +5 +10 +15 +20 +25 +30 +GeodesicDistance(Start, Goal) +SecondNav(T +S) +Figure A11: Memory-only probe performance. 
Performance (in SPL; higher is better) as a function of geodesic distance from start to goal for the TrainedAgentMemory probe without inputs on SecondNav(S→T) and SecondNav(T→S). More information can be found under the 'Navigation from memory alone' header.
Figure A12: Free Space Inference for the TrainedAgentMemory probe on both SecondNav(S→T) and SecondNav(T→S) as a function of agent SPL. We see that as agent SPL decreases, the probe is able to take paths that infer more free space.
Figure A13: Excursion Removal for the TrainedAgentMemory probe on both SecondNav(S→T) and SecondNav(T→S) as a function of agent SPL. We see that as agent SPL decreases, excursion removal increases, since the probe is able to remove additional excursions.
Figure A14: Performance vs. memory length for agents trained under a given memory length. Note that longer memory lengths are challenging to train for under this methodology, as it induces the negative effects of large-batch optimization and is computationally expensive.
Figure A15: Map prediction with poor examples. In the main text we show qualitative examples for the average prediction and a good prediction. Here we show two additional examples: A, a very poor quality prediction.
This shows that the decoder sometimes does make large mistakes. B, the average prediction for the UntrainedAgentMemory decoder. This shows the qualitative difference between the average UntrainedAgentMemory and TrainedAgentMemory prediction.
1Throughout this work, we use ‘maps’ to refer to a spatial representation of the environment that enables intelligent navigation behavior like taking shortcuts. We provide a detailed discussion and contrast w.r.t. a ‘cognitive map’ as defined by O’keefe & Nadel (1978) in Apx. B.1.

arXiv:2301.13261v1 [cs.AI] 30 Jan 2023

More recently, the machine learning community has produced a surprising phenomenon – neural-network models for navigation that curiously do not contain any explicit mapping modules but still achieve remarkably high performance (Savva et al., 2019; Wijmans et al., 2020; Kadian et al., 2020; Chattopadhyay et al., 2021; Khandelwal et al., 2022; Partsey et al., 2022; Reed et al., 2022). For instance, Wijmans et al. (2020) showed that a simple ‘pixels-to-actions’ architecture (using a CNN and RNN) can navigate to a given point in a novel environment with near-perfect accuracy; Partsey et al. (2022) further generalized this result to more realistic sensors and actuators. Reed et al. (2022) showed a similar general-purpose architecture (a transformer) can perform a wide variety of embodied tasks, including navigation. The mechanisms explaining this ability remain unknown. Understanding them is both of scientific and practical importance due to safety considerations involved with deploying such systems.

In this work, we investigate the following question – is mapping an emergent phenomenon? Specifically, do artificial intelligence (AI) agents learn to build internal spatial representations (or ‘mental’ maps) of their environment as a natural consequence of learning to navigate? The specific task we study is PointGoal navigation (Anderson et al., 2018), where an AI agent is introduced into a new (unexplored) environment and tasked with navigating to a relative location – ‘go 5m north, 2m west relative to start’2.
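To make the goal specification concrete: the relative goal is simply the world-frame goal expressed in the agent-centric coordinate frame established at the start of the episode. Below is a minimal 2D sketch of that coordinate transform (our own illustration, not code from the paper; the frame convention with x forward and y to the left, and the function name, are assumptions):

```python
import math

def goal_in_episodic_frame(start_xy, start_heading, goal_xy):
    """Express a world-frame goal in the agent-centric episodic frame
    established at the start of the episode (illustrative convention:
    x forward along the start heading, y to the agent's left)."""
    dx = goal_xy[0] - start_xy[0]
    dy = goal_xy[1] - start_xy[1]
    cos_h, sin_h = math.cos(start_heading), math.sin(start_heading)
    # Rotate the world-frame offset by -start_heading.
    return (cos_h * dx + sin_h * dy, -sin_h * dx + cos_h * dy)

# An agent starting at the origin facing 'north' (heading = pi/2) that is
# told 'go 5m north, 2m west' receives roughly (5.0, 2.0) in its own frame.
print(goal_in_episodic_frame((0.0, 0.0), math.pi / 2, (-2.0, 5.0)))
```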
This is analogous to the direction and distance of foraging locations communicated by the waggle dance of honey bees (Von Frisch, 1967).

2The description in English is purely for explanatory purposes; the agent receives relative goal coordinates.

Unlike animal navigation studies, experiments with AI agents allow us to precisely isolate mapping from alternative mechanisms proposed for animal navigation – the use of visual landmarks (Von Frisch, 1967), orientation by the arrangement of stars (Lockley, 1967), and gradients of olfaction or other senses (Ioal`e et al., 1990). We achieve this isolation by judiciously designing the agent’s perceptual system and the learning paradigm such that these alternative mechanisms are rendered implausible. Our agents are effectively ‘blind’; they possess a minimal perceptual system capable of sensing only egomotion, i.e. change in the agent’s location and orientation as it moves – no vision, no audio, no olfactory, no haptic, no magnetic, or any other sensing of any kind. This perceptual system is deliberately impoverished to isolate the contribution of memory, and is inspired by blind mole rats, who perform localization via path integration and use the Earth’s magnetic field as a compass (Kimchi et al., 2004). Further still, our agents are composed of navigation-agnostic, generic, and ubiquitous architectural components (fully-connected layers and LSTM-based recurrent neural networks), and our experimental setup provides no inductive bias towards mapping – no map-like or spatial structural components in the agent, no mapping supervision, no auxiliary tasks, nothing other than a reward for making progress towards a goal.

Surprisingly, even under these deliberately harsh conditions, we find the emergence of map-like spatial representations in the agent’s non-spatial unstructured memory, enabling it to not only successfully navigate to the goal but also exhibit intelligent behavior (like taking shortcuts, following walls, detecting collisions) similar to the aforementioned animal studies, and predict free space in the environment. Essentially, we demonstrate an ‘existence proof’ or an ontogenetic developmental account for the emergence of mapping without any previous predisposition. Our results also explain the aforementioned surprising finding in recent literature – that ostensibly map-free neural networks achieve strong autonomous navigation performance – by demonstrating that these ‘map-free’ systems in fact learn to construct and maintain map-like representations of their environment.

Concretely, we ask and answer the following questions:

1) Is it possible to effectively navigate with just egomotion sensing? Yes. We find that our ‘blind’ agents are highly effective in navigating new environments – reaching the goal with a 95.1%±1.3% success rate. And they traverse moderately efficient (though far from optimal) paths, reaching 62.9%±1.6% of optimal path efficiency. We stress that these are novel testing environments; the agent has not memorized paths within a training environment but has learned efficient navigation strategies that generalize to novel environments, such as emergent wall-following behavior.

2) What mechanism explains this strong performance by ‘blind’ agents? Memory. We find that memoryless agents completely fail at this task, achieving nearly 0% success. More importantly, we find that agents with memory utilize information stored over a long temporal and spatial horizon and that collision-detection neurons emerge within this memory. Navigation performance as a function of the number of past actions/observations encoded in the agent’s memory does not
saturate till one thousand steps (corresponding to the agent traversing 89.1±0.66 meters), suggesting that the agent ‘remembers’ a long history of the episode.

3) What information does the memory encode about the environment? Implicit maps. We perform an AI rendition of Menzel (1973)’s experiments, where a chimpanzee is carried by a human and shown the location of food hidden in the environment. When the animal is set free to collect the food, it does not retrace the demonstrator’s steps but takes shortcuts to collect the food faster. Analogously, we train a blind agent to navigate from a source location (S) to a target location (T). After it has finished navigating, we transplant its constructed episodic memory into a second ‘probe’ agent (which is also blind). We find that this implanted-memory probe agent performs dramatically better in navigating from S to T (and T to S) than it would without the memory transplant. Similar to the chimpanzee, the probe agent takes shortcuts, typically cutting out backtracks or excursions that the memory-creator had undertaken as it tried to work its way around obstacles. These experiments provide compelling evidence that blind agents learn to build and use implicit map-like representations of their environment solely through learning to navigate. Intriguingly further still, we find that surprisingly detailed metric occupancy maps of the environment (indicating free space) can be explicitly decoded from the agent’s memory.

4) Are maps task-dependent? Yes. We find that the emergent maps are a function of the navigation goal. Agents ‘forget’ excursions and detours, i.e. their episodic memory only preserves the features of the environment relevant to navigating to their goal. This, in part, explains why transplanting episodic memory from one agent to another leads it to take shortcuts – because the excursions and detours are simply forgotten.

Overall, our experiments and analyses demonstrate that ‘blind’ agents solve PointGoalNav by combining information over long time horizons to build detailed maps of their environment, solely through the learning signals imposed by goal-driven navigation. In biological systems, convergent evolution of analogous structures that cannot be attributed to a common ancestor (e.g. eyes in vertebrates and jellyfish (Kozmik et al., 2008)) is often an indicator that the structure is a natural response to the ecological niche and selection pressures. Analogously, our results suggest that mapping may be a natural solution to the problem of navigation by intelligent embodied agents, whether they be biological or artificial. We now describe our findings for each question in detail.

2 BLIND AGENTS ARE EFFECTIVE NAVIGATORS

We train navigation agents for PointGoalNav in virtualized 3D replicas of real houses utilizing the AI Habitat simulator (Savva et al., 2019; Szot et al., 2021) and the Gibson (Xia et al., 2018) and Matterport3D (Chang et al.
, 2017) datasets. The agent is physically embodied as a cylinder with a diameter of 0.2m and height of 1.5m. In each episode, the agent is randomly initialized in the environment, which establishes an episodic agent-centric coordinate system. The goal location is specified in cartesian coordinates (xg, yg, zg) in this system. The agent has four actions – move forward (0.25 meters), turn left (10◦), turn right (10◦), and stop (to signal reaching the goal) – and is allowed a maximum of 2,000 steps to reach the specified goal. It is equipped with an egomotion sensor providing it relative position (∆x, ∆y, ∆z) and relative ‘heading’ (or yaw angle) ∆θ between successive steps, which is integrated to keep track of the agent’s location and heading relative to start [xt, yt, zt, θt].
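The integration mentioned above is standard dead reckoning. A minimal 2D sketch follows (our own illustration, not the paper's implementation; it assumes each (∆x, ∆y, ∆θ) reading is expressed in the agent's frame at the previous step, and it ignores the vertical axis and sensor noise):

```python
import math

def integrate_egomotion(steps):
    """Accumulate per-step (dx, dy, dtheta) egomotion readings, each
    expressed in the agent's frame at the previous step, into a pose
    (x, y, theta) relative to the episode start."""
    x = y = theta = 0.0
    for dx, dy, dtheta in steps:
        # Rotate the local displacement into the episodic frame, then translate.
        x += math.cos(theta) * dx - math.sin(theta) * dy
        y += math.sin(theta) * dx + math.cos(theta) * dy
        theta = (theta + dtheta) % (2 * math.pi)
    return x, y, theta

# Forward 0.25m, turn left 90 degrees, forward 0.25m: the agent ends up
# roughly 0.25m ahead and 0.25m to the left of where it started.
pose = integrate_egomotion([(0.25, 0.0, 0.0),
                            (0.0, 0.0, math.pi / 2),
                            (0.25, 0.0, 0.0)])
print(pose)
```

In this noise-free form the recovered pose is exact; with real sensors, path integration accumulates drift over time.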
This is sometimes referred to as a ‘GPS+Compass’ sensor in this literature (Savva et al., 2019; Wijmans et al., 2020). We use two task-performance metrics: i) Success, defined as whether or not the agent predicted the stop action within 0.2 meters of the target, and ii) Success weighted by inverse Path Length (SPL) (Anderson et al., 2018), defined as success weighted by the efficiency of the agent’s path compared to the oracle path (the shortest path). Given the high success rates we observe, SPL can be roughly interpreted as the efficiency of the path taken compared to the oracle path – e.g. an SPL of 95% means the agent took a path 95% as efficient as the oracle path, while an SPL of 50% means the agent took a path 50% as efficient. Note that performance is evaluated in previously unseen environments to evaluate whether agents can generalize, not just memorize. The agent’s policy is instantiated as a long short-term memory (LSTM) (Hochreiter & Schmidhuber, 1997) recurrent neural network – formally, given current observations ot = [xg, yg, zg, xt, yt, zt, θt], (ht, ct) = LSTM(ot, (ht−1, ct−1)). We refer to this (ht, ct) as the agent’s internal memory representation. Note that it only contains information gathered during the current navigation episode. We train our agents for this task using a reinforcement learning (Sutton & Barto, 1992) algorithm called DD-PPO (Wijmans et al., 2020).
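As a minimal sketch (not the paper's evaluation code), the SPL metric of Anderson et al. (2018) averages, over episodes, the success indicator weighted by the ratio of the shortest-path length to the longer of the agent's path and the shortest path:

```python
def spl(successes, shortest_lengths, agent_lengths):
    """Success weighted by (normalized inverse) Path Length, following
    Anderson et al. (2018): mean over episodes of S_i * l_i / max(p_i, l_i),
    where S_i is the success indicator, l_i the shortest-path length, and
    p_i the length of the path the agent actually took."""
    total = 0.0
    for s, l, p in zip(successes, shortest_lengths, agent_lengths):
        total += s * l / max(p, l)
    return total / len(successes)

# One successful episode on a 95%-efficient path plus one failure:
# (0.95 + 0) / 2 = 0.475
result = spl([1, 0], [9.5, 5.0], [10.0, 6.0])
```

The `max(p, l)` clamp means an agent can never score above 1 even if path lengths are measured with slight noise.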
The reward has a term for making progress towards the goal and for successfully reaching it. Neither the training procedure nor the agent architecture contains explicit inductive biases towards mapping or planning relative to a map. Apx. A.1 describes training details.

Figure 1: (A) PointGoal navigation. An agent is initialized in a novel environment (blue square) and tasked with navigating to a point specified relative to the start location (red square). We study ‘blind’ agents, equipped with just an egomotion sensor (called GPS+Compass in this literature). (B) ‘Blind’ agent vs. Bug. Our learned ‘blind’ agent compared to 2 variants and an oracle-equipped variant of the Bug algorithm (Lumelsky & Stepanov, 1987). The Bug algorithm initially orients itself towards the goal and then proceeds towards it. Upon hitting a wall, it follows along the wall until it reaches the other side. The oracle version is told whether wall-following left or right is optimal, providing an upper bound on Bug algorithm performance. (C) t-SNE of the agent’s internal representation for collisions. We find 4 overall clusters corresponding to the previous action taken and whether or not that action led to a collision.

  Agent                                           Success     SPL
1 Blind                                           95.1±1.3    62.9±1.6
2 Clairvoyant Bug                                 100±0.0     46.0
3 Sighted (Depth) (Ramakrishnan et al., 2021)     94.0        83.0

Table 1: Agent performance on PointGoalNav. We find that blind agents are surprisingly effective (success) though not efficient (SPL) navigators. They have similar success to an agent equipped with a Depth camera and higher SPL than a clairvoyant version of the ‘Bug’ algorithm.
Surprisingly, we find that agents trained under this impoverished sensing regime are able to navigate with near-perfect efficacy – reaching the goal with a 95.1%±1.3% success rate (Table 1), even in situations where the agent must take hundreds of actions and traverse over 25m. This performance is similar in success rate (95.1 vs 94.0)3 to a sighted agent (equipped with a depth camera) trained on a larger dataset (HM3D) (Ramakrishnan et al., 2021). The paths taken by the blind agent are moderately efficient but (as one might expect) far less so than a sighted agent (62.9 vs 83.0 SPL).
At this point, it might be tempting to believe that this is an easy navigation problem, but we urge the reader to fight hindsight bias. We contend that the SPL of this blind agent is surprisingly high given the impoverished sensor suite. To put this SPL in context, we compare it with ‘Bug algorithms’ (Lumelsky & Stepanov, 1987), which are motion planning algorithms inspired by insect navigation, involving an agent equipped with only a localization sensor. In these algorithms, the agent first orients itself towards the goal and then travels directly towards it until it encounters a wall, in which case it follows the wall along one of two directions of travel. The primary challenge for Bug algorithms is determining whether to go left or right upon reaching a wall. To provide an upper bound on performance, we implement a ‘clairvoyant’ Bug algorithm agent with an oracle that tells it whether left or right is optimal.
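The mode-switching logic behind such Bug-style controllers can be sketched as a tiny state machine. This is a hypothetical illustration, not the baseline's actual implementation, and it abstracts the leave-the-wall condition into a single flag:

```python
def bug_step(mode, collided, goal_direction_free, follow_dir="left"):
    """One decision step of a minimal Bug-style controller.

    mode: 'seek' (head straight toward the goal) or 'follow' (hug a wall).
    collided: whether the last forward action hit a wall.
    goal_direction_free: whether moving toward the goal is unobstructed
        (a stand-in for the algorithm's leave-the-wall condition).
    follow_dir: which way to turn along walls -- the choice the
        'clairvoyant' oracle makes optimally.
    Returns (action, next_mode).
    """
    if mode == "seek":
        if collided:
            # Hit a wall: pick a side and start wall-following.
            return "turn_" + follow_dir, "follow"
        return "forward", "seek"
    # 'follow' mode: leave the wall once the goal direction is clear,
    # otherwise keep turning while blocked and hug the wall.
    if goal_direction_free:
        return "forward", "seek"
    if collided:
        return "turn_" + follow_dir, "follow"
    return "forward", "follow"
```

The oracle variant in Table 1 corresponds to always being handed the optimal `follow_dir`; the two non-oracle variants fix it to left or right.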
Even with the additional privileged information, the ‘clairvoyant’ Bug agent achieves an SPL of 46%, which is considerably less efficient than the ‘blind’ agent. Fig. 1b shows an example of the path our blind agent takes compared to 3 variants of the Bug algorithm. This shows that blind navigation agents trained with reinforcement learning are highly efficient at navigating in previously unseen environments given their sensor suite.

2.1 EMERGENCE OF WALL-FOLLOWING BEHAVIOR AND COLLISION-DETECTION NEURONS

Fig. 1b shows the blind agent exhibiting wall-following behavior (also see blue paths in Fig. A6 and videos in the supplement).
This behavior is remarkably consistent; the agent spends the majority of an episode near a wall. This is surprising because it is trained to navigate to the target location as quickly as possible; thus, it would be rewarded for traveling in straighter paths (that avoid walls). We hypothesize that this strategy emerges due to two factors. 1) The agent is blind; it has no way to determine where the obstacles are in the environment besides ‘bumping’ into them. 2) The environment is unknown to the agent.

3It may seem like the blind agent outperforms the sighted agent, but the mean performance of Ramakrishnan et al. (2021) is within our error bars.
While this is clearly true for testing environments, it is also functionally true for training environments because the coordinate system is episodic – every episode uses a randomly-instantiated coordinate system based on how the agent was spawned – and since the agent is blind, it cannot perform visual localization. We test both hypotheses. To test (2), we provide an experiment in Apx. C.1 showing that when the agent is trained in a single environment with a consistent global coordinate system, it learns to memorize the shortest paths in this environment and wall-following does not emerge. Consequently, this agent is unable to navigate in new environments, achieving 100% success on train and 0% on test. To test (1), we analyze whether the agent is capable of detecting collisions.
Note that the agent is not equipped with a collision sensor. In principle, the agent can infer whether it collided – if it tries to move forward and the resulting egomotion is atypical, then it is likely that a collision happened. This leads us to ask – does the agent’s memory contain information about collisions? We train a linear classifier that uses the (frozen) internal representation (ht+1, ct+1) to predict if action at resulted in a collision (details in Apx. A.5). The classifier achieves 98% accuracy on held-out data. As a comparison, random guessing on this 2-class problem would achieve 50%.
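The probing recipe – fit a linear classifier on frozen internal states and measure held-out accuracy – can be sketched as follows. The data here is synthetic (a planted linearly separable "collision" direction), standing in for (h, c) vectors recorded from the trained agent:

```python
import numpy as np

# Hypothetical stand-in for recorded agent states: real probing would use
# frozen (h, c) vectors logged while the trained agent navigates, paired
# with ground-truth collision labels from the simulator.
rng = np.random.default_rng(0)
n, d = 2000, 64
X = rng.normal(size=(n, d))
y = rng.integers(0, 2, size=n).astype(float)
X[:, 0] += 4.0 * y  # plant a linearly separable "collision" direction

# Linear probe: least-squares fit of labels from states, no nonlinearity,
# so above-chance accuracy reflects linearly decodable structure.
Xb = np.hstack([X, np.ones((n, 1))])              # add a bias column
w, *_ = np.linalg.lstsq(Xb[:1500], y[:1500], rcond=None)
pred = (Xb[1500:] @ w > 0.5).astype(float)
held_out_acc = (pred == y[1500:]).mean()
```

Because the probe is linear and the agent's weights are frozen, high held-out accuracy indicates the information is present and linearly separable in the representation, not that the probe created it.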
This shows the agent’s memory not only predicts its collisions, but also that collision-vs-not is linearly separable in internal-representation space, which strongly suggests that the agent has learned a collision sensor. Next, we examine how collisions are structured in the agent’s internal representation by identifying the subspace that is used for collisions. Specifically, we re-train the linear classifier with an ℓ1 weight penalty to encourage sparsity. We then select the top 10 neurons (from 3072) with the largest weight magnitude; this reduces dimensionality by 99.7% while still achieving 96% collision-vs-not accuracy. We use t-SNE (Van der Maaten & Hinton, 2008) and the techniques in Kobak & Berens (2019) to create a 2-dimensional visualization of the resulting 10-dimensional space. We find 4 distinct semantically-meaningful clusters (Fig. 1c). One cluster always fires for collisions, one for forward actions that did not result in a collision, and the other two correspond to turning actions. Notice that this exceedingly small number of dimensions and neurons essentially predicts all collisions and movement of the agent. We include videos in the supplementary materials.

3 MEMORY IS USED OVER LONG HORIZONS

Figure 2: Navigation performance vs. memory length. Agent performance does not saturate until memory can contain information from hundreds of steps. A memory of 10^3 steps is half the maximum episode length.
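The neuron-selection step can be sketched as follows. The weight vector here is synthetic, standing in for the weights of the ℓ1-regularized probe (which drives most weights toward zero, leaving a few large ones):

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(scale=0.01, size=3072)       # near-zero weights after l1
active = [5, 17, 100, 512, 1024, 2000, 2048, 2500, 3000, 3071]
w[active] = np.linspace(1.0, 2.0, 10)       # the few units the probe relies on

# Keep the 10 neurons with the largest weight magnitude.
top10 = np.argsort(np.abs(w))[-10:]
reduction = 100 * (1 - len(top10) / len(w))  # fraction of dimensions dropped
```

Restricting the representation to `top10` drops roughly 99.7% of the 3072 dimensions, matching the reduction reported above.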
Next, we examine how memory is utilized by asking if the agent uses memory solely to remember short-term information (e.g. did it collide in the last step?) or whether it also includes long-range information (e.g. did it collide hundreds of steps ago?). To answer this question, we restrict the memory capacity of our agent. Specifically, let k denote the memory budget. At each time t, we take the previous k observations, [ot−k+1, . . . , ot], and construct the internal representation (ht, ct) via the recurrence (hi, ci) = LSTM(oi, (hi−1, ci−1)) for t − k < i ≤ t, where (ht−k, ct−k) = (0, 0). If the agent is only leveraging its memory for short-term storage, we would expect performance to saturate at a small value of k. Instead, Fig. 2 shows that the agent leverages its memory for significantly longer-term storage. When memoryless (k = 1), the agent completely fails at the task, achieving nearly 0% success. Navigation performance as a function of the memory budget (k) does not saturate until one thousand steps. Recall that the agent can move forward 0.25 meters or turn 10◦ at each step.
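The memory-budget evaluation above can be sketched with a toy recurrence standing in for the LSTM cell:

```python
def truncated_state(step_fn, observations, k):
    """Rebuild the internal representation from only the last k observations,
    re-running the recurrence from a zeroed initial state -- the memory-budget
    evaluation described above. `step_fn(o, state)` is a stand-in for the
    LSTM cell and returns the next (h, c)."""
    state = (0, 0)                      # (h_{t-k}, c_{t-k}) = (0, 0)
    for o in observations[-k:]:
        state = step_fn(o, state)
    return state

# Toy recurrence: h accumulates observation values, c counts steps taken.
step = lambda o, s: (s[0] + o, s[1] + 1)
h, c = truncated_state(step, [1, 2, 3, 4, 5], k=2)
# only the last two observations (4 and 5) influence the state
```

With `k = 1` the state depends on a single observation (the memoryless case), and as `k` grows the state approaches the unbounded recurrence.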
The average distance traveled in 1,000 steps is 89.1±0.66 meters, indicating that the agent remembers information over long temporal and spatial horizons. In Apx. C.6 we train agents to operate at a specific memory budget. We find that a budget of k = 256, the largest we are able to train, is not sufficient to achieve the performance of the unbounded agent.

                          SecondNav(S→T)          SecondNav(T→S)
  Probe Type              Success     SPL         Success     SPL
1 AllZeroMemory           91.6±0.40   71.1±0.27   91.0±0.40   70.8±0.25
2 UntrainedAgentMemory    92.4±0.28   72.0±0.19   91.2±0.54   72.2±0.35
3 TrainedAgentMemory      96.2±0.23   85.0±0.16   96.0±0.16   84.8±0.22

Figure 3: (A) Probe experiment. First, an agent navigates (blue path, blue LSTM) from start (green sphere) to target (red sphere). After the agent navigates, we task a probe (purple LSTM) with performing the same navigation episode with the additional information encapsulated in the agent’s internal representation (or memory), h^A_T. The probe is able to navigate more efficiently by taking shortcuts (purple path).
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' As denoted by the dashed line between the probe and agent networks, the probe does not influence what the agent stores in its internal representation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Environment in the image from the Replica Dataset (Straub et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=', 2019).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' (B) Agent memory transplant increases probe efficiency (SPL).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Results of our trained probe agent under three configurations – initialized with an empty representation (AllZeroMemory), a representation of a random agent walked along the trained agent’s path (UntrainedAgentMemory), and the final representation of the trained agent (TrainedAgentMemory).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' 95% confidence interval reported over 5 agent-probe pairs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' 4 MEMORY ENABLES SHORTCUTS To investigate what information is encoded in the memory of our blind agents, we develop an exper- imental paradigm based on ‘probe’ agents.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' A probe is a secondary navigation agent4 that is struc- turally identical to the original (sensing, architecture, etc.' 
), but parametrically augmented with the primary agent's constructed episodic memory representation (h_T, c_T). The probe has no influence on the agent, i.e. no gradients (or rewards) flow from probe to agent (please see training details in Apx. A.2). We use this paradigm to examine whether the agent's final internal representation contains sufficient information for taking shortcuts in the environment. As illustrated in Fig. 3A, the agent first navigates from source (S) to target (T). After the agent reaches T, a probe is initialized[5] at S, its memory initialized with the agent's final memory representation, i.
e. (h_0, c_0)_probe = (h_T, c_T)_agent, and tasked with navigating to T. We refer to this probe task as SecondNav(S→T). All evaluations are conducted in environments not used for training the agent nor the probe. Thus, any environmental information in the agent's memory must have been gathered during its trajectory (and not during any past exposure during learning). Similarly, all initial knowledge the probe has of the environment must come from the agent's memory (h_T, c_T)_agent. Our hypothesis is that the agent's memory contains a spatial representation of the environment, which the probe can leverage. If the hypothesis is true, we would expect the probe to navigate SecondNav(S→T) more efficiently than the agent (e.g.
by taking shortcuts and cutting out exploratory excursions taken by the agent). If not, we would expect the probe to perform on-par with the agent since the probe is being trained on essentially the same task as the agent[6]. In our experiments, we find that the probe is significantly more efficient than the agent – SPL of 62.9%±1.6% (agent) vs. 85.0%±1.6% (probe). It is worth stressing how remarkable the performance of the probe is – in a new environment, a blind probe navigating without a map traverses a path that is within 15% of the shortest path on the map.
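At its core, the memory transplant in this paradigm is just a copy of the agent's final LSTM state into the probe's initial state. A minimal sketch in plain Python, with a hypothetical hidden size (the actual agents use a much larger LSTM state):

```python
# Sketch of the probe initialization schemes; HIDDEN_SIZE is illustrative.
HIDDEN_SIZE = 4

def all_zero_memory(hidden_size=HIDDEN_SIZE):
    # AllZeroMemory control: the probe starts with an empty representation.
    return [0.0] * hidden_size, [0.0] * hidden_size

def transplant_memory(agent_final_state):
    # TrainedAgentMemory: (h0, c0)_probe = (hT, cT)_agent.
    # Copies are taken so nothing the probe does can reach the agent,
    # mirroring the no-gradient / no-reward isolation described above.
    h_T, c_T = agent_final_state
    return list(h_T), list(c_T)

# Hypothetical final state of an agent after its episode:
agent_state = ([0.3, -1.2, 0.0, 0.7], [1.1, 0.4, -0.5, 0.2])
probe_h0, probe_c0 = transplant_memory(agent_state)
assert (probe_h0, probe_c0) == agent_state
assert probe_h0 is not agent_state[0]  # independent copy
```

The UntrainedAgentMemory control described below uses the same transplant, but with the state produced by a randomly initialized network walked along the trained agent's path.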
The best known sighted agents (equipped with an RGB camera, Depth sensor, and egomotion sensor) achieve an SPL of 84% on this task (Ramakrishnan et al., 2021). Essentially, the memories of a blind agent are as valuable as having vision! Fig. 3A shows the difference in paths between the agent and probe (and videos showing more examples are available in the supplement). While the agent exhibits wall-following behavior, the probe

[4] To avoid confusion, we refer to this probe agent as 'probe' and the primary agent as 'agent' from this point.
[5] The probe's heading at S is set to the agent's final heading upon reaching T.
[6] We note that an argument can be made that if the agent's memory is useless to the probe, then the probe is being trained on a harder task since it must learn to navigate and ignore the agent's memory.
But this argument would predict the probe's performance to be lower, not higher, than the agent's.

Figure 4: Learning navigation improves map prediction from memory. (Left) Accuracy (Intersection over Union) distributions (via kernel density estimation) and means (dashed lines); TrainedAgentMemory has a higher mean than UntrainedAgentMemory with p-value ≤ 10^-5 (via Wilcoxon signed-rank test (Wilcoxon, 1992)). (Right) Example ground truth and predicted occupancy maps using TrainedAgentMemory (corresponding to the (A) and (B) IoU points). Light grey is non-navigable and dark grey is navigable.
The agent path is drawn in light blue and navigates from start (green) to target (red). We can see that when the agent travels close to one wall, the map decoder predicts another wall parallel to it, indicating a corridor.

instead takes more direct paths and rarely performs wall following. Recall that the only difference between the agent and probe is the contents of the initial hidden state – reward is identical (and available only during training), training environments are identical (although the episodes are different), and evaluation episodes are identical – meaning that the environmental representation in the agent's episodic memory is what enables the probe to navigate more efficiently. We further compare this result (which we denote as TrainedAgentMemory) with two control groups: 1) AllZeroMemory: an empty (all zeros) episodic memory, to test for any systematic biases in the probe tasks. This probe contains identical information at the start of an episode as the agent (i.
e. no information). 2) UntrainedAgentMemory: episodic memory generated by an untrained agent (i.e. with a random setting of neural network parameters) as it is walked along the trajectory of the trained agent. This disentangles the agent's structure from its parameters, and tests whether simply being encoded by an LSTM (even one with random parameters) provides an inductive bias towards building good environmental representations (Wieting & Kiela, 2019). We find no evidence for this inductive bias – UntrainedAgentMemory performs no better than AllZeroMemory (Fig. 3B, row 1 vs. 2).
Furthermore, TrainedAgentMemory significantly outperforms both controls by +13 points SPL and +4 points Success (Fig. 3B, row 3 vs. rows 1 and 2). Taken together, these two results indicate that the ability to construct useful spatial representations of the environment from a trajectory is decidedly a learned behavior. Next, we examine if there is any directional preference in the episodic memory constructed by the agent. Our claim is that even though the agent navigates from S to T, if its memory indeed contains map-like spatial representations, it should also support probes for the reverse task SecondNav(T→S). Indeed, we find that the TrainedAgentMemory probe performs the same (within margin of error) on both SecondNav(S→T) and SecondNav(T→S) (Fig. 3B, right column) – indicating that the memory is equally useful in both directions.
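The efficiency metric behind these comparisons, SPL (Success weighted by Path Length), rewards successful episodes in proportion to how close the taken path is to the shortest path. A sketch with hypothetical episode values:

```python
def spl(successes, shortest_lengths, taken_lengths):
    """Success weighted by Path Length: mean over episodes of
    S_i * l_i / max(p_i, l_i), where S_i is success (0/1), l_i is the
    shortest-path length, and p_i is the length of the path taken."""
    terms = [
        s * l / max(p, l)
        for s, l, p in zip(successes, shortest_lengths, taken_lengths)
    ]
    return sum(terms) / len(terms)

# Toy episodes (hypothetical numbers): a near-optimal success, a success
# with a 2x detour, and a failure.
score = spl([1, 1, 0], [10.0, 10.0, 10.0], [10.0, 20.0, 12.0])
print(round(score, 3))  # 0.5
```

An SPL of 85% therefore means the probe's paths are, on average, within about 15% of optimal on its successful episodes.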
In Apx. C.2 we demonstrate that the probe removes excursions from the agent's path and takes shortcuts through previously unseen parts of the environment. Overall, these results provide compelling evidence that blind agents learn to build and use implicit map-like representations that enable shortcuts and reasoning about previously untraversed locations in the environment, solely through learning to navigate between two points.

5 LEARNING NAVIGATION IMPROVES METRIC MAP DECODING

Next, we tackle the question 'Does the agent build episodic representations capable of decoding metric maps (occupancy grids) of the environment?'. Formally, given the final representation (h_T, c_T)_agent, we train a separate decoding network to predict an allocentric top-down occupancy grid (free-space vs. not) of the environment. As with the probes, no gradients are propagated from the decoder to the agent's internal representation.
We constrain the network to make predictions for a location only if the agent reached within 2.5 meters of it (refer to Apx. A.3 for details). Note that since the agents are 'blind', predictions about any unvisited location require reasoning about unseen

Figure 5: (A) Excursion prediction example. Qualitative example of the previously-visited location decoder making systematic errors when decoding an excursion. Blue represents the confidence of the decoder that the agent was previously at a given location; we can see that it is lower in the path interval marked in red (excursion) than the rest. (B) Remembrance of excursions.
Performance of decoders when predicting previous agent locations, broken down into three categories. 'Non-excursion' is all predictions where the current location of the agent and the prediction time step are not part of an excursion. 'Excursion' is when the prediction time step is part of an excursion. 'Exit' is when the prediction time step is part of the last 10% of the excursion. X-axis is the distance into the past and Y-axis is the relative error between the true and predicted locations.

space. As before, we compare the internal representation produced by TrainedAgentMemory to the internal representation produced by an agent with random parameters, UntrainedAgentMemory. Fig.
4 shows the distribution of map-prediction accuracy, measured as intersection-over-union (IoU) with the true occupancy grid. We find that TrainedAgentMemory enables uniformly more accurate predictions than UntrainedAgentMemory – 32.5% vs. 12.5% average IoU. The qualitative examples show that the predictor is commonly able to make accurate predictions about unvisited locations, e.g. when the agent travels close to one wall, the decoder predicts another parallel to it, indicating a corridor. These results show that the internal representation contains the necessary information to decode accurate occupancy maps, even for unseen locations. We note that environmental structural priors are also necessary to predict unseen locations.
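The IoU accuracy measure can be sketched as follows for binary occupancy grids (toy grids shown; the actual maps are allocentric top-down grids of the environment):

```python
def occupancy_iou(pred, truth):
    """Intersection-over-union between a predicted and a ground-truth
    binary occupancy grid (1 = navigable, 0 = non-navigable), computed
    over navigable cells. Grids are equal-size lists of 0/1 rows."""
    inter = union = 0
    for pred_row, true_row in zip(pred, truth):
        for p, t in zip(pred_row, true_row):
            inter += p and t   # both navigable
            union += p or t    # navigable in either
    return inter / union if union else 1.0

truth = [[1, 1, 0],
         [1, 0, 0]]
pred  = [[1, 0, 0],
         [1, 1, 0]]
print(occupancy_iou(pred, truth))  # 0.5: 2 cells agree out of 4 in the union
```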
Thus, agent memory is necessary but not sufficient. In Apx. C.4, we conduct this analysis on 'sighted' navigation agents (equipped with a Depth camera and egomotion sensor). Perhaps counter-intuitively, we do not find conclusive evidence that metric maps can be decoded from the memory of sighted agents (despite their sensing suite being a strict superset of the blind agents'). Our conjecture is that for higher-level strategies like map-building to emerge, the learning problem must not admit 'trivial' solutions such as the ones deep reinforcement learning is known to latch onto (Baker et al., 2020; Lehman et al., 2020; Kadian et al.
, 2020). We believe that the minimal perception system used in our work served to create a challenging learning problem, which in turn limited the possible 'trivial' solutions, thus inducing map-building.

6 MAPPING IS TASK-DEPENDENT: AGENT FORGETS EXCURSIONS

Given that the agent is memory-limited, it stands to reason that it might need to choose what information to preserve and what to 'forget'. To examine this, we attempt to decode the agent's past positions from its memory. Formally, given the internal state at time t, (h_t, c_t), we train a prediction network f_k(·) to predict the agent's location k steps into the past, i.e. ŝ_{t-k} = f_k(h_t, c_t) + s_t, k ∈ [1, 256]. Given the ground-truth location s_{t-k}, we evaluate the decoder via the relative L2 error ||ŝ_{t-k} − s_{t-k}|| / ||s_{t-k} − s_t|| (refer to Apx. A.
4 for details). Qualitative analysis of past prediction results shows that the agent forgets excursions[7], i.e. excursions are harder to decode (see Fig. 5a). To quantify this, we manually labelled excursions in 216 randomly sampled episodes in evaluation environments. Fig. 5b shows that excursions are harder to decode than non-excursions, indicating that the agent does indeed forget excursions. Interestingly, we find that the exit of the excursion is considerably easier to decode, indicating that the end of the excursion performs a similar function to landmarks in animal and human navigation (Chan et al., 2012).
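The relative error reported on the Y-axis of Fig. 5b can be sketched as below; normalizing by the distance between the past and current positions means an error of 1.0 corresponds to simply predicting the agent's current location. The positions used are hypothetical:

```python
import math

def relative_l2_error(pred_past, true_past, current):
    """Relative L2 error for the past-position decoder:
    ||s_hat - s|| / ||s - s_current||, with 2D (x, y) positions."""
    return math.dist(pred_past, true_past) / math.dist(true_past, current)

# Hypothetical positions: the agent is at (0, 0) now; k steps ago it was
# at (4, 3); the decoder predicts (4.5, 3.0).
err = relative_l2_error((4.5, 3.0), (4.0, 3.0), (0.0, 0.0))
print(round(err, 3))  # 0.1: the prediction is off by 10% of the distance
```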
[7] We define an excursion as a sub-path that approximately forms a loop.

In the appendix, we study several additional questions that could not be accommodated in the main paper. In Apx. C.2 we further examine the probe's performance. In Apx. C.3 we examine predicting future agent locations. In Apx. C.5 we use the agent's hidden state as a world model.
7 RELATED WORK

Characterizing spatial representations. Prior work has shown that LSTMs build grid-cell (O'keefe & Nadel, 1978) representations of an environment when trained directly for path integration within that environment (Banino et al., 2018; Cueva & Wei, 2018; Sorscher et al., 2020). In contrast, our work provides no direct supervision for path integration, localization, or mapping. Banino et al. (2018) demonstrated that these maps aid in navigation by training a navigation agent that utilizes this cognitive map. In contrast, we show that LSTMs trained for navigation learn to build spatial representations in novel environments.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Whether or not LSTMs trained under this setting also utilize grid-cells is a question for future work.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Bruce et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' (2018) demonstrated that LSTMs learn localization when trained for navigation in a single environment.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' We show that they learn mapping when given location and trained in many environments.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Huynh et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' (2020) proposed a spatial memory architecture and demonstrated that a spatial representation emerges when trained on a localization task.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' We show that spatial representations emerge in non-spatial neural networks trained for navigation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Dwivedi et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' (2022) examined what navigation agents learn about their environments.' 
We provide a detailed account of emergent mapping in larger environments and over longer time horizons, and show the emergence of intelligent behavior and mapping in blind agents, which is not the focus of prior work.

'Map-free' navigation agents. Learned agents that navigate without an explicit mapping module (called 'map-free' or 'pixels-to-actions') have shown strong performance on a variety of tasks (Savva et al., 2019; Wijmans et al., 2020; Kadian et al., 2020; Chattopadhyay et al., 2021; Khandelwal et al., 2022; Partsey et al., 2022; Reed et al., 2022). In this work, we do not provide any novel techniques nor make any experimental advancement in the efficacy of such (sighted) agents. However, we make two key findings. First, blind agents are highly effective navigators for PointGoalNav, exhibiting efficacy similar to sighted agents. Second, we begin to explain how 'map-free' navigation agents perform their task: they build implicit maps in their memory, although the story is a bit nuanced due to the results in Apx. C.4; we suspect this understanding might be extended in future work.

8 OUTLOOK: LIMITATIONS, REPRODUCIBILITY

In this work, we have shown that 'blind' AI navigation agents – agents with similar perception as blind mole rats – are capable of performing goal-driven navigation to a high degree of performance. We then showed that these AI navigation agents learn to build map-like representations (supporting the ability to take shortcuts, follow walls, and predict free-space and collisions) of their environment solely through learning goal-driven navigation. Our agents and training regime have no added inductive bias towards map-building, be it explicit or implicit, implying that cognitive maps may be a natural solution to the inductive biases imposed by navigation by intelligent embodied agents, whether they be biological or artificial. In a similar manner, convergent evolution (Kozmik et al., 2008), where two unrelated intelligent systems independently arrive at similar mechanisms, suggests that the mechanism is a natural response to having to adapt to the environment and the task.

Our results also provide an explanation of the surprising success of map-free neural-network navigation agents by showing that these agents in fact learn to build map-like internal representations with no learning signal other than goal-driven navigation. This result establishes a link between how 'map-free' systems navigate and analytic mapping-and-planning techniques (Thrun et al., 2005; Institute, 1972; Ayache & Faugeras, 1988; Smith et al., 1990). Our results and analyses also point towards future directions in AI navigation research.
Specifically, imbuing AI navigation agents with explicit (e.g. architectural design) or implicit (e.g. training regime or auxiliary objectives) priors that bias agents towards learning an internal representation with the features found here may improve their performance. Further, it may better equip them to learn more challenging tasks such as rearrangement of an environment by moving objects (Batra et al., 2020).

We see several limitations and areas for future work. First, we examined ground-based navigation agents operating in digitizations of real houses. This limits the agent to a 2D manifold and induces strong structural priors on environment layout. As such, it is unclear how our results generalize to a drone flying through a large forest. Second, we examined agents with a minimal perceptual system. In the supplementary text, we attempted to decode occupancy grids (metric maps) from Depth-sensor-equipped agents and did not find convincing evidence. Our conjecture is that for higher-level strategies like map-building to emerge, the learning problem must not admit 'trivial' solutions. We believe that the minimal perception system used in our work also served to create such a challenging learning problem. Third, our experiments do not study the effects of actuation noise, which is an important consideration in both robot navigation systems and path integration in biological systems. Fourth, we examine an implicit map-building mechanism (an LSTM); a similar set of experiments could be performed for agents with a differentiable read/write map but no direct mapping supervision. Fifth, our agents only explore their environment for a short period of time (an episode) before their memory is reset. Animals and robots at deployment experience their environment for significantly longer periods of time. Finally, we do not provide a complete mechanistic account of how the agent learns to build its map or what else it stores in its memory.

Acknowledgements: We thank Abhishek Kadian for his help in implementing the first version of the SecondNav(T→S) probe experiment. We thank Jitendra Malik for his feedback on the draft and guidance. EW is supported in part by an ARCS fellowship. The Georgia Tech effort was supported in part by NSF, ONR YIP, and ARO PECASE.
The Oregon State effort is supported in part by the DARPA Machine Common Sense program. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the U.S. Government, or any sponsor.

Reproducibility Statement: Implementation details of our analyses are provided in the appendix. Our work builds on datasets and code that are already open-sourced, and our analysis code will be open-sourced.

REFERENCES

Peter Anderson, Angel X. Chang, Devendra Singh Chaplot, Alexey Dosovitskiy, Saurabh Gupta, Vladlen Koltun, Jana Kosecka, Jitendra Malik, Roozbeh Mottaghi, Manolis Savva, and Amir Roshan Zamir. On evaluation of embodied navigation agents. CoRR, abs/1807.06757, 2018. URL http://arxiv.org/abs/1807.06757.

Reut Avni, Yael Tzvaigrach, and David Eilam. Exploration and navigation in the blind mole rat (spalax ehrenbergi): global calibration as a primer of spatial representation. Journal of Experimental Biology, 211(17):2817–2826, 2008.

Nicholas Ayache and Olivier D Faugeras. Building, registrating, and fusing noisy visual maps. The International Journal of Robotics Research, 7(6):45–65, 1988.

Bowen Baker, Ingmar Kanitscheider, Todor Markov, Yi Wu, Glenn Powell, Bob McGrew, and Igor Mordatch. Emergent tool use from multi-agent autocurricula. In Proceedings of the International Conference on Learning Representations (ICLR), 2020.

Andrea Banino, Caswell Barry, Benigno Uria, Charles Blundell, Timothy Lillicrap, Piotr Mirowski, Alexander Pritzel, Martin J. Chadwick, Thomas Degris, Joseph Modayil, Greg Wayne, Hubert Soyer, Fabio Viola, Brian Zhang, Ross Goroshin, Neil Rabinowitz, Razvan Pascanu, Charlie Beattie, Stig Petersen, Amir Sadik, Stephen Gaffney, Helen King, Koray Kavukcuoglu, Demis Hassabis, Raia Hadsell, and Dharshan Kumaran. Vector-based navigation using grid-like representations in artificial agents. Nature, 557(7705):429–433, 2018. doi: 10.1038/s41586-018-0102-6. URL https://doi.org/10.1038/s41586-018-0102-6.

Dhruv Batra, Angel X Chang, Sonia Chernova, Andrew J Davison, Jia Deng, Vladlen Koltun, Sergey Levine, Jitendra Malik, Igor Mordatch, Roozbeh Mottaghi, Manolis Savva, and Hao Su. Rearrangement: A challenge for embodied ai. In arXiv preprint arXiv:2011.01975, 2020.

Jake Bruce, Niko Sünderhauf, Piotr Mirowski, Raia Hadsell, and Michael Milford. Learning deployable navigation policies at kilometer scale from a single traversal. Conference on Robot Learning (CoRL), 2018.

Edgar Chan, Oliver Baumann, Mark A Bellgrove, and Jason B Mattingley. From objects to landmarks: the function of visual location information in spatial navigation. Frontiers in Psychology, 3:304, 2012.

Angel Chang, Angela Dai, Thomas Funkhouser, Maciej Halber, Matthias Niessner, Manolis Savva, Shuran Song, Andy Zeng, and Yinda Zhang. Matterport3D: Learning from RGB-D data in indoor environments. In International Conference on 3D Vision (3DV), 2017. License: http://kaldir.vc.in.tum.de/matterport/MP_TOS.pdf.

Nicole Chapuis and Patricia Scardigli. Shortcut ability in hamsters (mesocricetus auratus): The role of environmental and kinesthetic information. Animal Learning & Behavior, 21(3):255–265, 1993.

Prithvijit Chattopadhyay, Judy Hoffman, Roozbeh Mottaghi, and Ani Kembhavi. Robustnav: Towards benchmarking robustness in embodied navigation. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021.

Allen Cheung, Matthew Collett, Thomas S. Collett, Alex Dewar, Fred Dyer, Paul Graham, Michael Mangan, Ajay Narendra, Andrew Philippides, Wolfgang Stürzl, Barbara Webb, Antoine Wystrach, and Jochen Zeil. Still no convincing evidence for cognitive map use by honeybees. Proceedings of the National Academy of Sciences, 111(42):E4396–E4397, 2014. ISSN 0027-8424. doi: 10.1073/pnas.1413581111. URL https://www.pnas.org/content/111/42/E4396.

Holk Cruse and Rüdiger Wehner. No need for a cognitive map: Decentralized memory for insect navigation. PLOS Computational Biology, 7(3):1–10, 03 2011. doi: 10.1371/journal.pcbi.1002009. URL https://doi.org/10.1371/journal.pcbi.1002009.

Christopher J. Cueva and Xue-Xin Wei. Emergence of grid-like representations by training recurrent neural networks to perform spatial localization. In Proceedings of the International Conference on Learning Representations (ICLR), 2018. URL https://openreview.net/forum?id=B17JTOe0-.

Kshitij Dwivedi, Gemma Roig, Aniruddha Kembhavi, and Roozbeh Mottaghi. What do navigation agents learn about their environment? In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10276–10285, 2022.

Russell Epstein, E Z Patai, Joshua Julian, and Hugo Spiers. The cognitive map in humans: Spatial navigation and beyond. Nature Neuroscience, 20:1504–1513, 10 2017. doi: 10.1038/nn.4656.

Charles R. Gallistel. Learning, development, and conceptual change. The organization of learning. The MIT Press, 1990.

Priya Goyal, Piotr Dollár, Ross B. Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch SGD: training imagenet in 1 hour. CoRR, abs/1706.02677, 2017. URL http://arxiv.org/abs/1706.02677.

Lee Harten, Amitay Katz, Aya Goldshtein, Michal Handel, and Yossi Yovel. The ontogeny of a mammalian cognitive map in the real world. Science, 369(6500):194–197, 2020.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Sepp Hochreiter and J¨urgen Schmidhuber.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Long short-term memory.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Neural Computation, 9(8): 1735–1780, 1997.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Peter J Huber.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Robust estimation of a location parameter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' In The Annals of Mathematical Statistics, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' 73–101.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' JSTOR, 1964.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Tri Huynh, Michael Maire, and Matthew R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Walter.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Multigrid neural memory.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' In Proceedings of the International Conference on Machine Learning (ICML), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' 4561–4571.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' PMLR, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Stanford Research Institute.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Shakey: An experiment in robot planning and learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=', 1972.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' P Ioal`e, M Nozzolini, and F Papi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Homing pigeons do extract directional information from olfactory stimuli.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Behavioral Ecology and Sociobiology, 26(5):301–305, 1990.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' 11 Published as a conference paper at ICLR 2023 Sergey Ioffe and Christian Szegedy.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Batch normalization: Accelerating deep network training by reducing internal covariate shift.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' In Proceedings of the International Conference on Machine Learning (ICML), 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Lucia F Jacobs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' The evolution of the cognitive map.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Brain, behavior and evolution, 62(2):128–139, 2003.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Abhishek Kadian, Joanne Truong, Aaron Gokaslan, Alexander Clegg, Erik Wijmans, Stefan Lee, Manolis Savva, Sonia Chernova, and Dhruv Batra.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Are we making real progress in simulated environments?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' measuring the sim2real gap in embodied visual navigation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' In IEEE Robotics and Automation Letters (RA-L), 2020.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Nitish Shirish Keskar, Dheevatsa Mudigere, Jorge Nocedal, Mikhail Smelyanskiy, and Ping Tak Pe- ter Tang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' On large-batch training for deep learning: Generalization gap and sharp minima.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' In Proceedings of the International Conference on Learning Representations (ICLR), 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Apoorv Khandelwal, Luca Weihs, Roozbeh Mottaghi, and Aniruddha Kembhavi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Simple but effec- tive: Clip embeddings for embodied ai.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' 14829–14838, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Tali Kimchi, Ariane S Etienne, and Joseph Terkel.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' A subterranean mammal uses the magnetic compass for path integration.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Proceedings of the National Academy of Sciences, 101(4):1105– 1109, 2004.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Diederik P Kingma and Jimmy Ba.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Adam: A method for stochastic optimization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' In Proceedings of the International Conference on Learning Representations (ICLR), 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Dmitry Kobak and Philipp Berens.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' The art of using t-sne for single-cell transcriptomics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Nature communications, 10(1):1–14, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Zbynek Kozmik, Jana Ruzickova, Kristyna Jonasova, Yoshifumi Matsumoto, Pavel Vopalensky, Iryna Kozmikova, Hynek Strnad, Shoji Kawamura, Joram Piatigorsky, Vaclav Paces, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' As- sembly of the cnidarian camera-type eye from vertebrate-like components.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Proceedings of the National Academy of Sciences, 105(26):8989–8993, 2008.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Joel Lehman, Jeff Clune, Dusan Misevic, Christoph Adami, Lee Altenberg, Julie Beaulieu, Peter J Bentley, Samuel Bernard, Guillaume Beslon, David M Bryson, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' The surprising creativity of digital evolution: A collection of anecdotes from the evolutionary computation and artificial life research communities.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Artificial Life, 26(2):274–306, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Doll´ar.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Focal loss for dense object detection.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' In Proceedings of IEEE International Conference on Computer Vision (ICCV), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' 2980–2988, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Rosanne Liu, Joel Lehman, Piero Molino, Felipe Petroski Such, Eric Frank, Alex Sergeev, and Jason Yosinski.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' An intriguing failing of convolutional neural networks and the coordconv solution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' In Advances in Neural Information Processing Systems (NeurIPS), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' 9605–9616, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Ronald Mathias Lockley.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Animal navigation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Pan Books, 1967.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Ilya Loshchilov and Frank Hutter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Decoupled weight decay regularization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' In Proceedings of the International Conference on Learning Representations (ICLR), 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Vladimir J Lumelsky and Alexander A Stepanov.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Path-planning strategies for a point mobile au- tomaton moving amidst unknown obstacles of arbitrary shape.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Algorithmica, 2(1-4):403–430, 1987.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Hans Maaswinkel and Ian Q Whishaw.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Homing with locale, taxon, and dead reckoning strategies by foraging rats: sensory hierarchy in spatial navigation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Behavioural brain research, 99(2): 143–152, 1999.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Emil W Menzel.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Chimpanzee spatial memory organization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Science, 182(4115):943–945, 1973.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' 12 Published as a conference paper at ICLR 2023 Martin M¨uller and R¨udiger Wehner.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Path integration in desert ants, cataglyphis fortis.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Proceedings of the National Academy of Sciences, 85(14):5287–5290, 1988.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Vinod Nair and Geoffrey E Hinton.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Rectified linear units improve restricted boltzmann machines.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' In Proceedings of the International Conference on Machine Learning (ICML), 2010.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' John O’keefe and Lynn Nadel.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' The hippocampus as a cognitive map.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Oxford: Clarendon Press, 1978.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Ruslan Partsey, Erik Wijmans, Naoki Yokoyama, Oles Dobosevych, Dhruv Batra, and Oleksandr Maksymets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Is mapping necessary for realistic pointgoal navigation?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' 17232–17241, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Peters.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Cognitive maps in wolves and men.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Environmental design research, 2:247–253, 1976.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Santhosh K Ramakrishnan, Aaron Gokaslan, Erik Wijmans, Oleksandr Maksymets, Alex Clegg, John Turner, Eric Undersander, Wojciech Galuba, Andrew Westbury, Angel X Chang, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Habitat-matterport 3d dataset (hm3d): 1000 large-scale 3d environments for embodied ai.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Neural Information Processing Systems – Benchmarks and Datasets, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Scott Reed, Konrad Zolna, Emilio Parisotto, Sergio Gomez Colmenarejo, Alexander Novikov, Gabriel Barth-Maron, Mai Gimenez, Yury Sulsky, Jackie Kay, Jost Tobias Springenberg, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' A generalist agent.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' arXiv preprint arXiv:2205.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content='06175, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Manolis Savva, Abhishek Kadian, Oleksandr Maksymets, Yili Zhao, Erik Wijmans, Bhavana Jain, Julian Straub, Jia Liu, Vladlen Koltun, Jitendra Malik, Devi Parikh, and Dhruv Batra.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Habitat: A Platform for Embodied AI Research.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' In Proceedings of IEEE International Conference on Computer Vision (ICCV), 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' High- dimensional continuous control using generalized advantage estimation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' In Proceedings of the International Conference on Learning Representations (ICLR), 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Proximal policy optimization algorithms.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' CoRR, abs/1707.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content='06347, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Randall Smith, Matthew Self, and Peter Cheeseman.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Estimating uncertain spatial relationships in robotics.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' In Autonomous robot vehicles, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' 167–193.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Springer, 1990.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Ben Sorscher, Gabriel C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Mel, Samuel A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Ocko, Lisa Giocomo, and Surya Ganguli.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' A uni- fied theory for the computational and mechanistic origins of grid cells.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' In bioRxiv preprint bioRxiv:2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content='12.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content='29.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content='424583, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content='1101/2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content='12.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content='29.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content='424583.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Dropout: a simple way to prevent neural networks from overfitting.' 
Journal of Machine Learning Research (JMLR), 15(1):1929–1958, 2014.

Julian Straub, Thomas Whelan, Lingni Ma, Yufan Chen, Erik Wijmans, Simon Green, Jakob J. Engel, Raul Mur-Artal, Carl Ren, Shobhit Verma, Anton Clarkson, Mingfei Yan, Brian Budge, Yajie Yan, Xiaqing Pan, June Yon, Yuyang Zou, Kimberly Leon, Nigel Carter, Jesus Briales, Tyler Gillingham, Elias Mueggler, Luis Pesqueira, Manolis Savva, Dhruv Batra, Hauke M. Strasdat, Renzo De Nardi, Michael Goesele, Steven Lovegrove, and Richard A. Newcombe. The Replica dataset: A digital replica of indoor spaces. CoRR, abs/1906.05797, 2019. URL http://arxiv.org/abs/1906.05797.

Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. MIT press, 1992.

Andrew Szot, Alex Clegg, Eric Undersander, Erik Wijmans, Yili Zhao, John Turner, Noah Maestre, Mustafa Mukadam, Devendra Chaplot, Oleksandr Maksymets, Aaron Gokaslan, Vladimir Vondrus, Sameer Dharur, Franziska Meier, Wojciech Galuba, Angel Chang, Zsolt Kira, Vladlen Koltun, Jitendra Malik, Manolis Savva, and Dhruv Batra. Habitat 2.0: Training home assistants to rearrange their habitat. Advances in Neural Information Processing Systems (NeurIPS), 2021.

Sebastian Thrun, Wolfram Burgard, and Dieter Fox. Probabilistic robotics (intelligent robotics and autonomous agents), 2005.

Sivan Toledo, David Shohami, Ingo Schiffner, Emmanuel Lourie, Yotam Orchan, Yoav Bartan, and Ran Nathan. Cognitive map–based navigation in wild bats revealed by a new high-throughput tracking system. Science, 369(6500):188–193, 2020.

Edward C. Tolman. Cognitive maps in rats and men. Psychological Review, 55(4):189–208, 1948. doi: 10.1037/h0061626.

Jonathan Tompson, Ross Goroshin, Arjun Jain, Yann LeCun, and Christoph Bregler. Efficient object localization using convolutional networks. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 648–656, 2015.

Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(11), 2008.

Karl Von Frisch. The dance language and orientation of bees. Harvard University Press, 1967.

John Wieting and Douwe Kiela. No training required: Exploring random encoders for sentence classification. In Proceedings of the International Conference on Learning Representations (ICLR), 2019.

Erik Wijmans, Abhishek Kadian, Ari Morcos, Stefan Lee, Irfan Essa, Devi Parikh, Manolis Savva, and Dhruv Batra. DD-PPO: Learning near-perfect PointGoal navigators from 2.5 billion frames. In Proceedings of the International Conference on Learning Representations (ICLR), 2020.

Frank Wilcoxon. Individual comparisons by ranking methods. In Breakthroughs in statistics, pp. 196–202. Springer, 1992.
Fei Xia, Amir R Zamir, Zhiyang He, Alexander Sax, Jitendra Malik, and Silvio Savarese. Gibson Env: Real-world perception for embodied agents. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. License: https://storage.googleapis.com/gibson_material/Agreement%20GDS%2006-04-18.pdf.

A METHODS AND MATERIALS

A.1 POINTGOAL NAVIGATION TRAINING

Task. In PointGoal Navigation, the agent is tasked with navigating to a point specified relative to its initial location, i.e. an input of (δx, δy) corresponds to going δx meters forward and δy meters to the right. The agent succeeds if it predicts the stop action within 0.2 meters of the specified point. The agent has access to 4 low-level actions – move forward (0.25 meters), turn left (10°), turn right (10°), and stop. There is no noise in the agent's actuations.

Sensors. The agent's sole sensor is an idealized GPS+Compass that provides its heading and position relative to the starting orientation and location at each time step. There is no noise in the agent's sensors.

Architecture.
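As a concrete illustration, the noiseless actuation and success criterion above can be sketched in a few lines. The function names and the (x, y, heading) convention are ours, not from the paper's code:

```python
import math

FORWARD_STEP = 0.25            # meters per move_forward action
TURN_ANGLE = math.radians(10)  # 10 degrees per turn action
SUCCESS_RADIUS = 0.2           # meters; stop must be called inside this radius

def step(x, y, heading, action):
    """Apply one noiseless low-level action to an (x, y, heading) pose.

    Because actuation and sensing are noise-free, this integrated pose is
    exactly what the idealized GPS+Compass sensor reports (relative to start).
    """
    if action == "move_forward":
        x += FORWARD_STEP * math.cos(heading)
        y += FORWARD_STEP * math.sin(heading)
    elif action == "turn_left":
        heading += TURN_ANGLE
    elif action == "turn_right":
        heading -= TURN_ANGLE
    # "stop" leaves the pose unchanged and ends the episode.
    return x, y, heading

def is_success(x, y, goal_x, goal_y, called_stop):
    # The episode succeeds only if stop is called within 0.2 m of the goal.
    return called_stop and math.hypot(goal_x - x, goal_y - y) < SUCCESS_RADIUS
```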
The agent is parameterized by a 3-layer LSTM (Hochreiter & Schmidhuber, 1997) with a 512-d hidden dimension. At each time-step, the agent receives observations g (the location of the goal relative to start), GPS (its current position relative to start), and compass (its current heading relative to start). We also explicitly give the agent an indicator of whether it is close to the goal, in the form of min(||g − GPS||, 0.5), as we find the agent does not learn robust stopping logic otherwise. All 4 inputs are projected to 32-d using separate fully-connected layers. These are then concatenated with a learned 32-d embedding of the previous action to form a 160-d input that is given to the LSTM. The output of the LSTM is then processed by a fully-connected layer to produce a softmax distribution over the action space and an estimate of the value function.
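A minimal PyTorch sketch of this architecture follows. Class and attribute names are ours, and details such as weight initialization are our assumptions; the released code may differ:

```python
import torch
import torch.nn as nn

class BlindAgentPolicy(nn.Module):
    """Four 32-d input projections + a 32-d previous-action embedding
    -> 160-d input -> 3-layer, 512-d LSTM -> action logits and value."""

    def __init__(self, num_actions: int = 4):
        super().__init__()
        # Separate fully-connected projections for goal, GPS, compass,
        # and the min(||g - GPS||, 0.5) close-to-goal indicator.
        self.goal_fc = nn.Linear(2, 32)
        self.gps_fc = nn.Linear(2, 32)
        self.compass_fc = nn.Linear(1, 32)
        self.near_goal_fc = nn.Linear(1, 32)
        # Learned embedding of the previous action (+1 slot for "none yet").
        self.prev_action_embed = nn.Embedding(num_actions + 1, 32)
        self.lstm = nn.LSTM(input_size=160, hidden_size=512, num_layers=3)
        self.action_head = nn.Linear(512, num_actions)  # softmax-ed logits
        self.value_head = nn.Linear(512, 1)

    def forward(self, goal, gps, compass, prev_action, hidden=None):
        near_goal = torch.clamp((goal - gps).norm(dim=-1, keepdim=True), max=0.5)
        x = torch.cat(
            [
                self.goal_fc(goal),
                self.gps_fc(gps),
                self.compass_fc(compass),
                self.near_goal_fc(near_goal),
                self.prev_action_embed(prev_action),
            ],
            dim=-1,
        )  # -> (batch, 160)
        out, hidden = self.lstm(x.unsqueeze(0), hidden)  # one step at a time
        out = out.squeeze(0)
        return self.action_head(out), self.value_head(out), hidden
```

Passing `hidden` back in at each step carries the agent's memory across the episode; it is this recurrent state that the probes and decoders below read out.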
Training Data. We construct our training data from the Gibson (Xia et al., 2018) and Matterport3D (Chang et al., 2017) datasets. We train on 411 scenes from Gibson and 72 from Matterport3D.

Training Procedure. We train our agents using Proximal Policy Optimization (PPO) (Schulman et al., 2017) with Generalized Advantage Estimation (GAE) (Schulman et al., 2016). We use Decentralized Distributed PPO (DD-PPO) (Wijmans et al., 2020) to train on 16 GPUs. Each GPU/worker collects 256 steps of experience from 16 agents (each in a different scene) and then performs 2 epochs of PPO with 2 mini-batches per epoch. We use the Adam optimizer (Kingma & Ba, 2015) with a learning rate of 2.5 × 10−4. We set the discount factor γ to 0.99, the PPO clip to 0.2, and the GAE hyper-parameter τ to 0.95. We train until convergence (around 2 billion steps of experience).

At every timestep t, the agent is in state st, takes action at, and transitions to state st+1. It receives a shaped reward of the form:

    rt = { 2.5 · Success                     if at is stop
         { −∆geo_dist(st, st+1) − λ          otherwise        (1)

where ∆geo_dist(st, st+1) is the change in geodesic (shortest path) distance to the goal between st and st+1, and λ = 0.001 is a slack penalty encouraging shorter episodes.

Evaluation Procedure. We evaluate the agent in the 18 scenes from the Matterport3D test set. We use the episodes from Savva et al. (2019), which consist of 56 episodes per scene (1,008 in total). Episodes range in distance from 1.2 to 30 meters. The ratio of geodesic distance to euclidean distance between start and goal is restricted to be greater than or equal to 1.1, ensuring that episodes are not simple straight lines.
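The shaped reward of Eq. (1) transcribes directly into code. A minimal sketch (function and argument names are ours; the geodesic distances are assumed to come from a simulator shortest-path query):

```python
def shaped_reward(geo_dist_t, geo_dist_t1, action, success,
                  slack=0.001, success_reward=2.5):
    """Shaped reward of Eq. (1): a terminal bonus when `stop` is called,
    otherwise reward for progress (decrease in geodesic distance to the
    goal) minus a slack penalty encouraging shorter episodes.

    geo_dist_t / geo_dist_t1 are the geodesic distances to the goal at
    s_t and s_{t+1}.
    """
    if action == "stop":
        return success_reward * float(success)
    delta = geo_dist_t1 - geo_dist_t  # change in geodesic distance to goal
    return -delta - slack
```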
Note that reward is not available during evaluation. The agent is evaluated under two metrics: Success, whether or not the agent called the stop action within 0.2 meters of the goal, and Success weighted by normalized inverse Path Length (SPL) (Anderson et al., 2018). SPL is calculated as follows: given the agent's path [s1, . . . , sT] and the initial geodesic distance to goal di for episode i, we first compute the length of the agent's path

    li = Σ_{t=2}^{T} ||st − st−1||2        (2)

then SPL for episode i as

    SPLi = Successi · di / max{di, li}        (3)

We then report SPL as the average of SPLi across all episodes.
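Eqs. (2) and (3) can be computed per episode as follows (function names are ours):

```python
import math

def path_length(path):
    """Eq. (2): sum of euclidean distances between consecutive positions."""
    return sum(math.dist(a, b) for a, b in zip(path, path[1:]))

def episode_spl(path, geodesic_dist, success):
    """Eq. (3): Success weighted by the ratio of shortest-path length to
    the length of the path actually taken. Equals Success for an optimal
    path and decays toward 0 as the taken path grows longer."""
    li = path_length(path)
    return float(success) * geodesic_dist / max(geodesic_dist, li)
```

Reported SPL is then the mean of `episode_spl` over all evaluation episodes.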
A.2 PROBE TRAINING

Task. The probe task is to either navigate from start to goal again (SecondNav(S→T)) or navigate from goal to start (SecondNav(T→S)). For SecondNav(S→T), the probe is initialized at the starting location but with the agent's final heading. For SecondNav(T→S), the probe is initialized with the agent's final heading and position. In both cases, the probe and the agent share the same coordinate system – i.e. in SecondNav(T→S), the initial GPS and Compass readings for the probe are identical to the final GPS and Compass readings for the agent. When the agent does not successfully reach the goal, the probe task is necessarily undefined and we do not instantiate a probe.
Sensors, Architecture, Training Procedure, Training Data. The probe uses the same sensor suite, architecture, training procedure, and training data as the agent, described in Section A.1. Note that no gradients (or rewards) flow from the probe to the agent. From the agent's perspective, the probe does not exist. From the probe's perspective, the agent provides a dataset of initial locations (or goals) and initial hidden states.

Evaluation Procedure. We evaluate the probe in a similar manner to the agent, except that any episode which the agent is unable to complete (5%) is removed, as the probe task is undefined when the agent fails. The agent reaches the goal 95% of the time, thus only 50 out of 1,008 possible probe evaluation episodes are invalidated.
The control probe type accounts for this. We ignore the agent's trajectory when computing SPL for the probe.

A.3 OCCUPANCY MAP DECODING

Task. We train a decoding network to predict the top-down occupancy map of the environment from the final internal state of the agent (ht, ct). We limit the decoder to only predict within 2.5 meters of any location the agent visited.

Architecture. The map-decoder is constructed as follows: first, the internal state (ht, ct) is concatenated into a 512×6-d vector. The vector is then passed to a 2-layer MLP with a hidden dimension of 512-d that produces a 4608-d vector.
This 4608-d vector is then reshaped into a [128, 6, 6] feature-map. The feature map is processed by a series of Coordinate Convolution (CoordConv) (Liu et al., 2018) and Coordinate Up-Convolution (CoordUpConv) layers that decrease the channel-depth and increase spatial resolution to [16, 96, 96]. Specifically, after an initial CoordConv with an output channel-depth of 128, we use a series of 4 CoordUpConv-CoordConv layers where each CoordUpConv doubles the spatial dimensions (quadruples spatial resolution) and each CoordConv reduces channel-depth by half. We then use a final 1×1-Convolution to create a [2, 96, 96] tensor representing the non-normalized log-probabilities of whether or not a given location is navigable. Each CoordConv has kernel size 3, padding 1, and stride 1. Each CoordUpConv has kernel size 3, padding 0, and stride 2. Before all CoordConv and CoordUpConv layers, we use 2D Dropout (Srivastava et al., 2014; Tompson et al., 2015) with a zero-out probability of 0.05. We use Batch Normalization layers (Ioffe & Szegedy, 2015) and the ReLU activation function (Nair & Hinton, 2010) after all layers except the terminal layer.

Training Data. We construct our training data by having a trained agent perform episodes of PointGoal navigation on the training dataset. Note that while evaluation is done using the final hidden state, we construct our training dataset by taking 30 time steps (evenly spaced) from each trajectory, ensuring the final step is included.
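The CoordConv building block appends two normalized coordinate channels to its input before a standard convolution, which lets the decoder reason about absolute map position. A minimal sketch following Liu et al. (2018), with the kernel/stride/padding defaults from the text (this is our illustrative implementation, not the authors' code):

```python
import torch
import torch.nn as nn

class CoordConv2d(nn.Module):
    """Conv2d applied to the input plus two appended coordinate channels
    (x and y, each normalized to [-1, 1]), following Liu et al. (2018)."""

    def __init__(self, in_channels, out_channels,
                 kernel_size=3, stride=1, padding=1):
        super().__init__()
        # +2 input channels for the appended x and y coordinate maps.
        self.conv = nn.Conv2d(in_channels + 2, out_channels,
                              kernel_size, stride=stride, padding=padding)

    def forward(self, x):
        b, _, h, w = x.shape
        ys = torch.linspace(-1, 1, h, device=x.device)
        xs = torch.linspace(-1, 1, w, device=x.device)
        yy, xx = torch.meshgrid(ys, xs, indexing="ij")
        coords = torch.stack([xx, yy]).unsqueeze(0).expand(b, -1, -1, -1)
        return self.conv(torch.cat([x, coords], dim=1))
```

CoordUpConv would follow the same pattern with `nn.ConvTranspose2d` in place of `nn.Conv2d` to double the spatial dimensions.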
Training Procedure. We train on 8 GPUs with a batch size of 128 per GPU (total batch size of 1,024). We use the AdamW optimizer (Kingma & Ba, 2015; Loshchilov & Hutter, 2019) with an initial learning rate of 10−3, linearly scale the learning rate to 1.6 × 10−2 over the first 5 epochs (Goyal et al., 2017), and use a weight-decay of 10−5. We use the validation dataset to perform early-stopping. We use Focal Loss (Lin et al., 2017) (a weighted version of Cross Entropy Loss) with γ = 2.0, αNotNavigable = 0.75, and αNavigable = 0.25 to handle the class imbalance.

Evaluation Data and Procedure. We construct our evaluation data using the validation dataset. Note that the scenes in evaluation are novel to both the agent and the decoder. We evaluate the predicted occupancy map from the final hidden state/final time step. We collect a total of 5,000 episodes.

A.4 PAST AND FUTURE POSITION PREDICTION

Task. We train a decoder to predict the change in agent location given the internal state at time t (ht, ct). Specifically, let st be the agent's position at time t, where the coordinate system is defined by the agent's starting location (i.e. s0 = 0), and st+k be its position k steps into the future/past; the decoder is then trained to model f((ht, ct)) = st+k − st.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' s0 = 0), and st+k be its position k steps into the future/past, then the decoder is trained to model f((ht, ct)) = st+k − st.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Architecture.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' The decoder is a 3-layer MLP that produces a 3 dimensional output with hidden sizes of 256 and 128.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' We use Batch Normalization (Ioffe & Szegedy, 2015) and the ReLU activation function (Nair & Hinton, 2010) after all layers except the last.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Training Data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' The training data is collected from executing a trained agent on episodes from the training set.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' For each episode, we collect all possible pairs of st, st+k for a given value of k.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Training Procedure.' 
We use the AdamW optimizer (Kingma & Ba, 2015; Loshchilov & Hutter, 2019) with a learning rate of 10^-3, a weight decay of 10^-4, and a batch size of 256. We use a Smooth L1 Loss/Huber Loss (Huber, 1964) between the ground-truth change in position and the predicted change in position. We use the validation set to perform early stopping.

Evaluation Procedure. We evaluate the trained decoder on held-out scenes. Note that the held-out scenes are novel to both the agent and the decoder.

Visualization of Predictions. To visualize the predictions of past visitation, we found it easier to train a second decoder that predicts all locations the agent visited previously on a 2D top-down map given the internal state (ht, ct).
This decoder shares the exact same architecture and training procedure as the occupancy grid decoder. The decoder removes the temporal aspect from the prediction, so it is ill-suited for any time-dependent analysis, but produces clearer visualizations.

Excursion Calibrated Analysis. To perform the excursion-forgetting analysis, we use the excursion-labeled episodes. We mark the end of an excursion as the last 10% of the steps that are part of the excursion. For a given point in time t, we classify that point into one of {Non-Excursion, Excursion, Exit}. We then examine how well this point is remembered by calculating the error of predicting the point t from t + k, i.e. how well t can be predicted when it is k steps into the past. When t is part of an excursion (both the excursion and the exit), we limit t + k to either be part of the same excursion or not part of any excursion. When t is not part of an excursion, t + k must also not be part of an excursion, nor can there be any excursion in the range [t, t + k].

A.5 COLLISION PREDICTION LINEAR PROBE

Task. The task of this probe is to predict whether the previous action taken led to a collision, given the current hidden state. Specifically, it seeks to learn a function Collided_t = f((ht, ct)), where (ht, ct) is the internal state at time t and Collided_t is whether or not the previous action a_{t−1} led to a collision.

Architecture.
The architecture is a logistic classifier that takes the concatenation of the internal state and produces the log-probability of Collided_t.

Training Data. We construct our training data by having a trained agent perform episodes of PointGoal navigation on the training set. We collect a total of 10 million samples and then randomly select 1 million for training. We then normalize each dimension independently: we compute its mean and standard deviation, subtract the mean, and divide by the standard deviation. This ensures that all dimensions have the same average magnitude.

Training Procedure. We train on 1 GPU with a batch size of 256. We use the Adam optimizer (Kingma & Ba, 2015) with a learning rate of 5 × 10^-4.
We train for 20 epochs.

Evaluation Data and Procedure. We construct our evaluation data using the same procedure as the training data, but on the validation dataset, and collect 200,000 samples (which are then subsampled to 20,000).

Important Dimension Selection. To select which dimensions are important for predicting collisions, we re-train our probe with various L1 penalties. We sweep from 0 to 1000 and then select the penalty that results in the lowest number of significant dimensions without substantially reducing accuracy. We determine the number of significant dimensions by first ordering all dimensions by the L1 norm of the corresponding weight and then finding the smallest number of dimensions we can keep while maintaining 99% of the performance of keeping all dimensions for that classifier. The t-SNE manifold is computed using 20,000 samples. This is then randomly subsampled to 1,500 for visualization.

A.6 DATA AND MATERIALS AVAILABILITY

The Gibson (Xia et al., 2018) and Matterport3D (Chang et al., 2017) datasets can be acquired from their respective distributors. Habitat (Savva et al., 2019) is open source. Code to reproduce experiments will be made available.

B ADDITIONAL DISCUSSIONS

B.1 RELATIONSHIP TO COGNITIVE MAPS

Throughout the text, we use the term 'map' to mean a spatial representation that supports intelligent behaviors like taking shortcuts.
Whether or not this term is distinct from the specific concept of a 'cognitive map' is debated. Cognitive maps, as defined by O'keefe & Nadel (1978), imply a set of properties and are generally attached to a specific mechanism. The existence of a cognitive map requires that the agent be able to reach a desired goal in the environment from any starting location without being given that starting location, i.e. be able to navigate against a map. Further, cognitive maps refer to a specific mechanism – place cells and grid cells being present in the hippocampus. Other works have also studied 'cognitive maps' without placing such restrictions on the definition (Gallistel, 1990; Tolman, 1948); however, these broader definitions have been debated (Jacobs, 2003).
Our work shows that the spatial information contained within the agent's hidden state enables map-like properties – a secondary agent can take shortcuts through previously unexplored free space – and supports the decoding of a metric map. However, these do not fully cover the properties of O'keefe & Nadel (1978)'s definition, nor do we make a mechanistic claim about how this information is stored in the neural network, though we do find the emergence of collision-detection neurons.

C ADDITIONAL EXPERIMENTS

C.1 BLIND SHORTEST PATH NAVIGATION WITH TRUE STATE

In the main text, we posited that blind agents learn wall-following because this is an effective strategy for blind navigation in unknown environments. We posit that this is because the agent does not have access to true state (it does not know the current environment nor where it is in global coordinates). In this experiment we show that blind agents learn to take shortest paths, as opposed to wall-following, when trained in a single environment (implicitly informing the agent of the current environment) and using the global coordinate system.8

We use an identical agent architecture and training procedure as outlined for PointGoal navigation training in the Materials and Methods, with two differences: 1) a single training and test environment, and 2) usage of the global coordinates within the environment for both goal specification and the agent's GPS+Compass sensor. We perform this experiment on 3 scenes: 1 from the Gibson val dataset and 2 from the Matterport3D val dataset. The average SPL during training is 99±0.1, showing that the blind agent learns shortest-path navigation, not wall-following. Figure A6 shows examples of an agent trained in a single scene with global coordinates and an agent trained in many scenes with episodic coordinates.

These two settings – i) where the agent uses an episodic coordinate system and navigates in unknown environments, and ii) where the agent uses global coordinates and navigates in a known environment – can be seen as the difference between a partially observable Markov decision process (POMDP) and a Markov decision process (MDP). In the POMDP case, the agent must learn a generalizable policy, while it can overfit in the MDP case.

C.2 FURTHER ANALYSIS OF THE PROBE'S PERFORMANCE

In the main text, we showed that the probe is indeed much more efficient than the agent, but how is this gain achieved? Our hypothesis is that the probe improves upon the agent's path by taking shortcuts and eliminating excursions (an excursion representing an 'out and back'). We define an excursion as a sub-path that approximately forms a loop. To quantify excursions, we manually annotate excursions in 216 randomly sampled episodes in evaluation environments. Of the labeled episodes, 62% have at least 1 excursion. On average, an episode has 0.95 excursions, and excursions have an average length of 101 steps (corresponding to 8.23 meters). Since excursions represent unnecessary portions of the trajectory, this indicates that the probe should be able to improve upon the agent's path by removing these excursions.

We quantify this excursion removal via the normalized Chamfer distance between the agent's path and the probe's path. Formally, given the agent's path Agent = [s(agent)_1, ..., s(agent)_T] and the probe's path Probe = [s(probe)_1, ..., s(probe)_N], where s ∈ R^3 is a point in the environment:

    PathDiff(Agent, Probe) = (1/T) Σ_{i=1}^{T} min_{1≤j≤N} GeoDist(s(agent)_i, s(probe)_j),    (4)

where GeoDist(·, ·) indicates the geodesic distance (shortest traversable path-length).
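The asymmetry of equation (4) can be illustrated with a minimal Python sketch. Note two simplifications: Euclidean distance stands in for GeoDist (computing true geodesic distance requires the scene's navigation mesh), and the toy paths are invented for illustration.

```python
import math

def path_diff(path_a, path_b, dist):
    # PathDiff(A, B): average, over the points of A, of the distance to the
    # closest point on B. This is not symmetric in its arguments.
    return sum(min(dist(p, q) for q in path_b) for p in path_a) / len(path_a)

def euclid(p, q):
    # Stand-in for GeoDist; the paper uses geodesic (shortest traversable) distance.
    return math.dist(p, q)

# An agent path with an 'out and back' excursion to (1, 2), vs. a direct probe path.
agent = [(0, 0), (1, 0), (1, 1), (1, 2), (1, 0), (2, 0)]
probe = [(0, 0), (1, 0), (2, 0)]
```

Every probe point lies on the agent's path, so PathDiff(Probe, Agent) = 0, while the excursion points inflate PathDiff(Agent, Probe); the positive gap between the two is exactly the Excursion Removal quantity discussed next.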
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Note that Chamfer distance is not symmetric.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' PathDiff(Probe, Agent) measures the average distance of a point on the probe path s(probe) j from the closest point on the agent path.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' A large PathDiff(Probe, Agent) indicates that the probe travels through novel parts of the environments (compared to the agent).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Conversely, PathDiff(Agent, Probe) measures the average distance of a point on the agent path s(agent) i from the closest point on the probe path.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' A large � PathDiff(Agent, Probe) − PathD- iff(Probe, Agent) � gap indicates that agent path contains excursions while the probe does not;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' thus, 8Recall that in the episodic coordinate system the origin is defined by the agent’s starting position and orientation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' In the global coordinate system the origin is an arbitrary but consistent location (we simply use the origin for a given scene defined in the dataset).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Thus in the global coordinate system the goal is specified as ‘Go to (x, y)’ where x and y are specified in the global coordinate system, not with respect to the agent’s current location.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' 18 Published as a conference paper at ICLR 2023 we refer to this gap as Excursion Removal.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' To visually understand why this is the case, consider the example agent and probe paths in Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' A7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Point (C) lies on an excursion in the agent path.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' It contributes a term to PathDiff(Agent, Probe) but not to PathDiff(Probe, Agent) because (D) is closer to (E) than (C).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' On both SecondNav(S→T) and SecondNav(T→S), we find that as the efficiency of a probe in- creases, Excursion Removal also increases (Table A2, row 1 vs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' 2, 2 vs.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' 3), confirming that the TrainedAgentMemory probe is more efficient because it removes excursions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' We next consider if the TrainedAgentMemory probe also travels through previously unexplored space in addition to removing excursions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' To quantify this, we report PathDiff(Probe, Agent) on episodes where agent SPL is less than average (less than 62.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content='9%).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content='9 If probes take the same path as the agent, we would expect this metric to be zero.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' If, however, probes travel through previously unexplored space to minimize travel distance, we would expect this metric to be significantly non- zero.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Indeed, on SecondNav(S→T), we find the TrainedAgentMemory probe is 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content='32 meters away on average from the closest point on the agent’s path (99% empirical bootstrap of the mean gives a range of (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content='299, 0.' 
341)). See Fig. A7 for a visual example. On SecondNav(T→S), this effect is slightly more pronounced: the TrainedAgentMemory probe is 0.55 meters away on average (a 99% empirical bootstrap of the mean gives a range of (0.52, 0.588)). Taken holistically, these results show that the probe is both more efficient than the agent and consistently travels through new parts of the environment (parts the agent did not travel through). Thus, the spatial representation in the agent's memory is not simply a 'literal' episodic summarization, but also contains anticipatory inferences about previously unexplored spaces being navigable (e.g. traveling along the hypotenuse instead of the sides of a room).

In the text above we reported free space inference only on episodes where the agent achieves a below-average SPL. In Fig. A12 we provide a plot of Free Space Inference vs. Agent SPL to show the impact of other cutoff points. In Fig. A13 we provide a similar plot of Excursion Removal vs. Agent SPL. In both cases, as agent SPL increases, the probe is able to infer less free space and remove fewer excursions.

C.3 FUTURE VISITATION PREDICTION

In the main text we examined what types of systematic errors are made when decoding past agent locations. Here we provide additional analysis and also predict future locations, as this will reveal whether there are any idiosyncrasies in what can be predicted about the past vs. the future. Given the ground-truth location s_{t+k}, we evaluate the decoder via i) absolute L2 error ||ŝ_{t+k} − s_{t+k}|| and ii) relative L2 error ||ŝ_{t+k} − s_{t+k}|| / ||s_{t+k} − s_t||. To determine baseline (or chance) performance, we train a second set of decoders where, instead of using the correct internal state (h_t, c_t) as the input, we randomly select an internal state from a different trajectory. This evaluates whether there are any inherent biases in the task.

In Fig. A8, we find that the decoder is able to accurately predict where the agent has been, even for long time horizons: e.g. at 100 time steps in the past, relative error is 0.55 and absolute error is 1.0m, compared to a relative error of 1.0 and an absolute error of 3.2m for the chance baseline. For short time horizons the decoder is also able to accurately predict where the agent will be in the future; e.g. at 10 time steps into the future, relative and absolute error are below chance. Interestingly, for longer-range future predictions, the decoder is worse than chance in relative error but on par in absolute error. This apparent contradiction arises because the decoders make (relatively) large systematic errors when the agent backtracks.
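The absolute and relative L2 errors used throughout this analysis can be sketched as follows (the positions, decoder output, and 2D coordinate convention are illustrative, not the actual experimental values):

```python
import numpy as np

def l2_errors(pred, s_future, s_current):
    """Absolute and relative L2 error for a predicted location.

    `pred` is the decoder's estimate of s_{t+k}, `s_future` is the true
    s_{t+k}, and `s_current` is the agent's position s_t.
    """
    absolute = np.linalg.norm(pred - s_future)
    # Relative error normalizes by how far the agent actually travels,
    # so a decoder that predicts "no movement" scores exactly 1.0.
    relative = absolute / np.linalg.norm(s_future - s_current)
    return absolute, relative

s_t = np.array([0.0, 0.0])    # agent's current position
s_tk = np.array([3.0, 4.0])   # true position k steps away (5 m)
pred = np.array([3.0, 3.0])   # decoder estimate, 1 m off

abs_err, rel_err = l2_errors(pred, s_tk, s_t)
print(abs_err, rel_err)  # 1.0 0.2
```

Note that relative error below 1.0 means the decoder beats the trivial "agent stays put" prediction.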
In order for the decoder to predict backtracking, the agent would need to already know that its future trajectory will be sub-optimal (i.e. lead to backtracking) but still take that trajectory. This contradicts the objective the agent is trained for, namely to reach the goal as quickly as possible, and thus the agent would not take a given path if it knew it would lead to backtracking.

9 We restrict to a subset where the agent has relatively low SPL to improve dynamic range. When the agent has high SPL, there won't be excursions to remove and this metric will naturally be low. In the supplementary text we provide plots of this metric vs. agent SPL.

C.4 EXTENSION TO SIGHTED NAVIGATION AGENTS

In the main text we analyzed how 'blind' agents (those with limited perceptual systems) utilize their memory and found evidence that they build cognitive maps. Here, we extend our analysis to agents with rich perceptual systems: those equipped with a Depth camera and an egomotion sensor. Our primary experimental paradigm relies on showing that a probe given the agent's memory is able to take a shorter path than the agent. Navigation agents with vision can perform PointNav near-perfectly (Wijmans et al., 2020), leaving no room for improvement and rendering this experiment infeasible. As a supplement to that experiment, we also show that a metric map (top-down occupancy grid) can be decoded from the agent's memory; this procedure can also be applied to sighted agents.
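The occupancy-grid decoding below is scored with Intersection-over-Union and average class-balanced accuracy. A minimal sketch of both metrics on binary occupancy grids (the grids and shapes here are illustrative, not the actual decoder outputs):

```python
import numpy as np

def iou(pred, gt):
    """Intersection-over-Union of the 'occupied' class of two binary maps."""
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return intersection / union

def class_balanced_accuracy(pred, gt):
    """Mean of per-class accuracies; robust to free space dominating the map."""
    occupied_acc = pred[gt].mean()    # fraction of occupied cells recovered
    free_acc = (~pred[~gt]).mean()    # fraction of free cells kept free
    return (occupied_acc + free_acc) / 2

gt = np.zeros((10, 10), dtype=bool)
gt[0, :] = True            # 10 occupied cells out of 100
pred = np.zeros_like(gt)
pred[0, :5] = True         # recovers half of them, with no false positives

print(iou(pred, gt))                      # 0.5
print(class_balanced_accuracy(pred, gt))  # 0.75
```

The two metrics weight errors differently; for instance, predicting 'all free' scores an IoU of 0 but a class-balanced accuracy of 0.5, which is one way the two comparisons below can disagree.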
We use the ResNet50 (He et al., 2016) Gibson-2plus (Xia et al., 2018) pre-trained model from Wijmans et al. (2020) and train an occupancy grid decoder using the same procedure as in the main text. Note, however, that we use only Gibson for training, with the Gibson validation scenes as held-out data instead of Matterport3D, as this agent was trained only on Gibson. As before, we compare the performance of TrainedAgentMemory with UntrainedAgentMemory. We find mixed results. When measuring performance with Intersection-over-Union (IoU), UntrainedAgentMemory outperforms TrainedAgentMemory (40.1% vs. 42.9%). However, when measuring performance with average class-balanced accuracy, TrainedAgentMemory outperforms UntrainedAgentMemory (61.8% vs. 53.1%). Fig. A9 and Fig. A10 show the corresponding distribution plots. Overall, this experiment does not provide convincing evidence either way as to whether vision-equipped agents build metric maps in their memory. However, it does show that vision-equipped agents, if they do maintain a map of their environment, create one that is considerably more challenging to decode. Further, we note this does not necessarily imply similarly mixed results as to whether vision agents maintain a still-spatial but sparser representation, such as a topological graph, as their rich perception can fill in the details in the moment.

C.5 NAVIGATION FROM MEMORY ALONE

In the main text we showed that agents learn to build map-like representations. A map-like representation of the environment should, to a degree, support navigation with no external information, i.e. by dead reckoning. Given that the actions are deterministic, the probe should be able to perform either task without external inputs, using only the agent's internal representation and the previously taken action. The localization performed by the probe in this setting is similar to path integration; however, it must also be able to handle any collisions that occur while navigating.
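Because the action space is discrete and deterministic, localization by dead reckoning reduces to integrating the action sequence. A minimal sketch, assuming a standard PointNav actuation of 0.25 m forward steps and 10° turns (these exact values are an assumption here), and ignoring the collision handling that the probe must additionally learn:

```python
import math

FORWARD, LEFT, RIGHT = 0, 1, 2

def dead_reckon(actions, step=0.25, turn=math.radians(10.0)):
    """Integrate a pose (x, y, heading) from the action sequence alone.

    If a forward action collides with a wall, the true displacement is
    zero but this estimate still advances by `step`; correcting for
    such collisions is what separates the probe from pure path
    integration.
    """
    x, y, theta = 0.0, 0.0, 0.0
    for a in actions:
        if a == FORWARD:
            x += step * math.cos(theta)
            y += step * math.sin(theta)
        elif a == LEFT:
            theta += turn
        elif a == RIGHT:
            theta -= turn
    return x, y, theta

# Turn 90 degrees left, then walk 1 m: ends near (0, 1).
x, y, theta = dead_reckon([LEFT] * 9 + [FORWARD] * 4)
```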
Fig. A11 shows performance vs. episode length for SecondNav(S→T) and SecondNav(T→S). There are two primary trends. First, for short navigation episodes (≤5m), the agent is often able to complete the task. Second, under this setting, SecondNav(T→S) is an easier task. This is due to the information conveyed to the probe by its initial heading: in SecondNav(T→S), the probe can make progress by simply turning around and going forward, while in SecondNav(S→T), the final heading of the agent is not informative of which way the probe should navigate initially. Overall, these results show that the representation built by the agent is sufficient to navigate short distances with no external information.

Experiment procedure. This experiment mirrors the probe experiment described in Methods and Materials, with three differences: 1) The input from the GPS+Compass sensor is zeroed out. 2) The change-in-distance-to-goal shaping term in the reward is normalized by the distance from the initial state to the goal; we find that prediction of the value function suffers considerably otherwise. 3) An additional reward signal is added indicating whether or not the last action taken decreased the angle between the probe's current heading and the direction along the shortest path to the goal; we find that the probe otherwise struggles to learn to turn around on the SecondNav(T→S) task (as it almost always starts facing 180° in the wrong direction). Let h^gt_t be the heading along the shortest path to goal from the probe's current position s_t, and h_t be the probe's current heading; then AngularDistance(h^gt_t, h_t) is the error in the probe's heading.
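The text does not spell out AngularDistance; a minimal sketch of the standard wrapped version, together with the per-step change used by the heading shaping term (function names here are hypothetical):

```python
import math

def angular_distance(h_gt, h):
    """Smallest absolute angle between two headings, in [0, pi].

    Naive subtraction fails across the +/-pi wrap-around: headings of
    3.1 and -3.1 rad are ~0.08 rad apart, not ~6.2 rad.
    """
    d = (h_gt - h) % (2.0 * math.pi)
    return min(d, 2.0 * math.pi - d)

def delta_heading_error(h_gt_prev, h_prev, h_gt_cur, h_cur):
    """Change in heading error between consecutive steps; negative
    values mean the last action turned the probe toward the goal."""
    return angular_distance(h_gt_cur, h_cur) - angular_distance(h_gt_prev, h_prev)
```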
The full reward for this probe is then

r_t(s_t, a_t, s_{t+1}) = \begin{cases} 2.5 \cdot \text{Success} & \text{if } a_t \text{ is Stop} \\ -10.0 \cdot \Delta\text{GeoDist}(s_t, s_{t+1}) / \text{GeoDist}(s_0, g) - 0.25 \cdot \Delta\text{HeadingError}(s_t, s_{t+1}) - \lambda & \text{otherwise} \end{cases} \quad (5)

C.6 MEMORY LENGTH

The method presented in the main text to examine memory length is a post-hoc analysis performed on the 'blind' PointGoal navigation agents, and thus the agent is operating out-of-distribution. From the agent's view, it is still performing a valid PointGoal navigation episode, just with a different starting location, but the agent may not have taken the same sequence of actions had it started from that location. While we would still expect performance to saturate with a small k if the memory length is indeed short, this method is imprecise for measuring the exact memory length of the agent and does not answer what memory budget is required to perform the task.
Here we examine training agents with a fixed-memory-length LSTM. Fig. A14 shows trends similar to those described in the main paper (performance increases as the memory budget increases); however, performance is higher when the agent is trained for a given memory budget. Due to the increased compute needed to train the model (e.g. training a model with a memory length of 128 is 128× more computationally costly), we were unable to train for a memory budget longer than 256. We also note the non-monotonicity in Fig. A14. We conjecture that this is a consequence of inducing the negative effects of large-batch optimization (Keskar et al., 2017): training with a memory budget of k effectively increases the batch size by a factor of k. Keeping the batch size constant has its own drawbacks; reducing the number of parallel environments will harm data diversity and result in overfitting, while reducing the rollout length increases the bias of the return estimate and makes credit assignment harder. Thus we kept the number of environments and the rollout length constant.

D SUPPLEMENTARY VIDEOS

Movies S1-3: Videos showing blind agent navigation with the location of the hidden state in the collision t-SNE space. Notice that the hidden state stays within a cluster throughout a series of actions.

                           SecondNav(S→T)       SecondNav(T→S)
    Probe Type             Excursion Removal    Excursion Removal
  1 AllZeroMemory          0.21±0.017           0.21±0.004
  2 UntrainedAgentMemory   0.23±0.009           0.25±0.009
  3 TrainedAgentMemory     0.52±0.014           0.51±0.011

Table A2: Excursion removal results for our trained probe agent under three configurations: initialized with an empty representation (AllZeroMemory), a representation of a random agent walked along the trained agent's path (UntrainedAgentMemory), and the final representation of the trained agent (TrainedAgentMemory).
95% confidence intervals reported over 5 agent-probe pairs.

Figure A6: True state trajectory comparison. Example trajectories of an agent with true state (trained for a specific environment and using global coordinates), green line, compared to an agent trained for many environments and using episodic coordinates, blue line. The latter is what we examine in this work. Notice that the agent with true state takes shortest-path trajectories, while the agent without true state instead exhibits strong wall-following behavior.

Figure A7: Two categories of probe shortcut. 'Excursion Removal' is when the probe removes excursions from the agent's path. The dashed line shows the distance between the points in the excursion and the closest point in the probe's path. 'Free Space Inference' occurs when the probe travels through previously unvisited locations in the environment. The dashed lines show the distance between any point in the probe's path and the closest point in the agent's path.

Figure A8: Past and future prediction. Performance of decoders trained to predict where the agent was in the past / will be in the future. On the x-axis is how far into the past or future the decoder is predicting (positive values are future predictions and negative values are past predictions). The y-axis is either the absolute or the relative L2 error between the predicted location of the agent and the true location.

Figure A9: Map prediction accuracy (Intersection over Union) for Depth-sensor-equipped agents.

Figure A10: Map prediction accuracy (class-balanced accuracy) for Depth-sensor-equipped agents.

Figure A11: Memory-only probe performance. Performance (in SPL; higher is better) as a function of geodesic distance from start to goal for the TrainedAgentMemory probe without inputs on SecondNav(S→T) and SecondNav(T→S). More information can be found under the 'Navigation from memory alone' header.

Figure A12: Free Space Inference for the TrainedAgentMemory probe on both SecondNav(S→T) and SecondNav(T→S) as a function of agent SPL. We see that as agent SPL decreases, the probe is able to take paths that infer more free space.

Figure A13: Excursion Removal for the TrainedAgentMemory probe on both SecondNav(S→T) and SecondNav(T→S) as a function of agent SPL. We see that as agent SPL decreases, excursion removal increases, since the probe is able to remove additional excursions.

Figure A14: Performance vs.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' memory length for agents trained under a given memory length.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Note that longer memory lengths are challenging to train for under this methodology as it induces the negative effects of large-batch optimization and is computationally expensive.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' 25 Published as a conference paper at ICLR 2023 A B D C Ground Truth 12.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content='4% 32.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content='4% Prediction Ground Truth Prediction D C A B Non-navigable Navigable Figure A15: Map prediction with poor examples.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' In the main text we shows qualitative examples for the average prediction and a good prediction.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' Here we show two additional examples: A, a very poor quality prediction.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' This shows that the decoder sometimes does make large mistakes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' B, the average prediction for the UntrainedAgentMemory decoder.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' This shows the qualitative difference between the average UntrainedAgentMemory and TrainedAgentMemory prediction.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} +page_content=' 26' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-9FQT4oBgHgl3EQfKjXJ/content/2301.13261v1.pdf'} diff --git a/-dE0T4oBgHgl3EQfxAFs/content/tmp_files/2301.02640v1.pdf.txt b/-dE0T4oBgHgl3EQfxAFs/content/tmp_files/2301.02640v1.pdf.txt new file mode 100644 index 0000000000000000000000000000000000000000..8fe6be593eafdf9c0d4c370b0cae70d7a43bfde3 --- /dev/null +++ b/-dE0T4oBgHgl3EQfxAFs/content/tmp_files/2301.02640v1.pdf.txt @@ -0,0 +1,627 @@ + +1 + + +3D dose prediction for Gamma Knife radiosurgery +using deep learning and data modification +Binghao Zhang1, Aaron Babier1, Timothy C.Y. Chan1, Mark Ruschin2 + +1 Department of Mechanical and Industrial Engineering, University of Toronto, Toronto, Canada +2 Department of Radiation Oncology, Sunnybrook Health Sciences Centre, University of Toronto, +Toronto, Canada + +E-mail: binghao.zhang@mail.utoronto.ca +Abstract +Purpose: To develop a machine learning-based, 3D dose prediction methodology for Gamma +Knife (GK) radiosurgery. The methodology accounts for cases involving targets of any +number, size, and shape. +Methods: Data from 322 GK treatment plans was modified by isolating and cropping the +contoured MRI and clinical dose distributions based on tumor location, then scaling the +resulting tumor spaces to a standard size. An accompanying 3D tensor was created for each +instance to account for tumor size. The modified dataset for 272 patients was used to train +both a generative adversarial network (GAN-GK) and a 3D U-Net model (U-Net-GK). +Unmodified data was used to train equivalent baseline models. 
All models were used to predict the dose distribution of 50 out-of-sample patients. Prediction accuracy was evaluated using gamma, with criteria of 4%/2mm, 3%/3mm, 3%/1mm and 1%/1mm. Prediction quality was assessed using coverage, selectivity, and conformity indices.

Results: The predictions resulting from GAN-GK and U-Net-GK were similar to their clinical counterparts, with average gamma (4%/2mm) passing rates of 84.9 ± 15.3% and 83.1 ± 17.2%, respectively. In contrast, the gamma passing rates of the baseline models were significantly worse than those of their respective GK-specific models (p < 0.001) at all criterion levels. The quality of GK-specific predictions was also similar to that of clinical plans.

Conclusion: Deep learning models can use GK-specific data modification to predict 3D dose distributions for GKRS plans with a large range in size, shape, or number of targets. Standard deep learning models applied to unmodified GK data generated poorer predictions.

Keywords: 3D-dose prediction, Gamma Knife, automated planning, knowledge-based planning

1. Introduction

Gamma Knife (GK) radiosurgery (GKRS) is a form of radiotherapy that precisely treats abnormalities within the brain using narrow beams of radiation. GKRS is an effective treatment for a wide array of diseases including benign tumors, malignant tumors, vascular abnormalities, and functional disorders [1]. Conventional processes for generating GKRS treatment plans are time-consuming for clinicians, which has motivated several studies to explore new approaches such as inverse planning [2,3]. However, a major limitation of inverse planning is that it requires human intervention to tune parameters and personalize the resulting treatment plans.

There exist automated planning methods for other modalities that can generate patient-specific parameters for inverse planning [4,5].
An integral part of these approaches is a machine learning (ML) method that produces dose predictions using patient images. There is also a small set of models that incorporate additional patient features (e.g., age, histology) to account for patient outcomes [4,5]. In general, automated planning approaches that use predicted dose distributions are called knowledge-based planning (KBP) pipelines. A KBP pipeline is typically presented as a two-stage process that leverages information from previous treatment plans to produce high-quality treatment plans for new patients without human intervention. The first stage is a dose prediction model that learns the relationship between dose and delineated medical images from previous plans. The second stage is an optimization model that generates a treatment plan from the predicted dose distribution.

Many recent advances in KBP have focused on 3D dose prediction using neural networks [4,5]. These approaches have primarily been developed and tested for intensity-modulated radiotherapy (IMRT) and volumetric modulated arc therapy (VMAT) [6-9]. However, GKRS presents three unique challenges that necessitate a new approach for dose prediction. First, there is a large range in treatment target size. Many large targets (e.g., post-operative metastases or benign tumors) are up to 25 times the diameter of small targets (e.g., small intact brain metastases) [10]. This variation in target size requires a prediction model that can adequately accommodate both the smallest and largest targets. Second, GKRS cases can have a relatively large number of targets (e.g., more than 30) with multiple dose prescription levels. As a result, the impact of dose to one target on another can vary drastically between patients. Third, targets are often separated by large amounts of healthy brain tissue.
A standard ML approach that considers the whole treatment volume would require a low spatial resolution (i.e., large voxel volumes) to accommodate the computational memory limits associated with large neural networks, which would be inadequate for GKRS because it must be planned with a high spatial resolution (i.e., small voxel volumes). These factors further increase both the complexity and spatial resolution requirements of the model.

In this paper, we develop a novel GKRS dose prediction approach. This is an important first step towards creating an automated GKRS planning pipeline, since the quality of plans produced by such a pipeline is positively correlated with the quality of the dose predictions [11]. Our approach accommodates any size, number, and shape of targets without compromising the spatial resolution of the predicted dose. The proposed approach involves a novel GKRS-specific data modification method, an upscaling step, and construction of a distance tensor to relate each target back to its size. We demonstrate accuracy on a series of historically treated patient cases. Our high-quality predictions could be used to estimate parameters for inverse optimization models that generate high-quality treatment plans [6].

2. Methods

Our methods consisted of five main steps: (2.1) extracting clinical treatment plan data, (2.2) modifying plan image data, (2.3) tailoring existing neural network models for GKRS, (2.4) training dose prediction models, and (2.5) evaluating model dose predictions.

2.1 Data Extraction

This research ethics board-approved study involved retrospective access to radiotherapy plans for 322 patients who were treated at Sunnybrook Health Sciences Centre. From each plan, we extracted the MRI images, 3D dose distributions, and target contours. All target contours were delineated for treatment by a radiation oncologist on high-resolution MRIs.
To visualize the heterogeneity of our dataset, we plotted the distributions of target size, number of isocentres, number of targets, and prescription dose as histograms.

2.2 Data Processing

The data was processed for our GKRS dose prediction in four major ways, which are summarized in Figure 1 and explained in the remainder of this section. Patient data was first processed into a format amenable to computer vision models (e.g., consistent nomenclature, data aligned on a voxel grid). Most notably, we converted each target contour into a mask that labelled voxels in healthy tissue with 0 and voxels in targets with their prescription dose (e.g., 25 Gy). These masks enabled our dose prediction models to handle plans with the wide range of dose prescription levels that is common in GKRS. This standard pre-processing was applied to all our data, and the resulting dataset was used to train and test our baseline models. We developed three additional pre-processing techniques for our GKRS-specific approach.

Figure 1: An overview of our workflow and the data modification techniques used in this study. Our GK-specific data modification includes transforming patient data with a novel tumor space transformation and an upscaling method. Then we create a new feature that we call a distance tensor to quantify the distance between tissue and targets.

First, we developed tumor spaces, which were engineered to isolate small volumes surrounding targets. Specifically, a tumor space is the smallest bounding box that contains at least one target surrounded by 1 cm of padding. To ensure that the dosimetric interactions between close targets were captured, any targets within 1 cm of each other were taken together in one tumor space, as shown by the example in Figure 1. We sampled these tumor spaces from the MRI, dose distribution, and target masks of each case to create a training set of 628 tumor spaces from 272 plans.
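The tumor-space cropping step described above can be sketched as follows. This is a minimal illustration under our own assumptions (function names, array layout, and a single pre-merged target group are hypothetical), not the authors' implementation:

```python
import numpy as np

def tumor_space_bounds(target_mask, voxel_size_mm, pad_mm=10.0):
    """Smallest bounding box containing all nonzero voxels of target_mask,
    padded by pad_mm (1 cm) on every side and clipped to the volume."""
    idx = np.argwhere(target_mask > 0)
    lo, hi = idx.min(axis=0), idx.max(axis=0) + 1
    pad = np.ceil(pad_mm / np.asarray(voxel_size_mm)).astype(int)
    lo = np.maximum(lo - pad, 0)
    hi = np.minimum(hi + pad, target_mask.shape)
    return lo, hi

def crop_tumor_space(volume, lo, hi):
    """Crop any co-registered volume (MRI, dose, or mask) to the tumor space."""
    return volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
```

In the full pipeline, targets whose padded boxes come within 1 cm of each other would first be merged into a single group before computing one shared bounding box.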
Similarly, we created a testing set of 129 tumor spaces from the 50 plans in the test set.

Second, we developed an upscaling technique to ensure consistent dimensionality across tumor spaces. Inconsistent dimensions normally present a challenge for computer vision models because the models are initialized to expect data with predefined dimensions. To accommodate the range of tumor space dimensions, all data was upscaled using spline interpolation to fit into a 128 x 128 x 64 voxel tensor; this tensor size was chosen to balance image detail and training time. The final upscaled tensors included the cropped MRI images, dose distributions, and target masks within each respective tumor space.

Third, for each tumor space we engineered distance tensors, which were designed to account for the distance between each voxel and its nearest target. Each element in the distance tensor represented a voxel and had a value equal to the Euclidean distance $d$ between that voxel $v$ and its nearest target centroid $t$. The measure was calculated with respect to all the target centroids $t \in T$ within the patient and evaluated over all three spatial dimensions, indexed by $i$. Specifically, the value of each element in the distance tensor was calculated as

$$d = \min_{t \in T} \sqrt{\sum_{i=1}^{3} (v_i - t_i)^2}.$$

2.3 Model Architectures

Our approach builds on the success of existing neural network models from the IMRT and VMAT literature [6,7,12,13]. Specifically, we adapted the architectures used in previous dose prediction approaches to fit the data size and structure of GKRS.
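As a concrete illustration of the distance tensor defined in Section 2.2, the following NumPy sketch computes $d$ for every voxel. The function name and coordinate convention are our own assumptions (voxel-index coordinates; in practice these would be scaled by the voxel size so that $d$ is in millimetres):

```python
import numpy as np

def distance_tensor(shape, centroids):
    """Distance from each voxel to its nearest target centroid:
    d = min over t in T of sqrt(sum_i (v_i - t_i)^2)."""
    grid = np.stack(
        np.meshgrid(*[np.arange(s) for s in shape], indexing="ij"), axis=-1
    )  # (X, Y, Z, 3) array of voxel coordinates v
    diffs = grid[..., None, :] - np.asarray(centroids, dtype=float)  # (X, Y, Z, |T|, 3)
    return np.linalg.norm(diffs, axis=-1).min(axis=-1)  # minimum over targets
```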
Full details of the model architectures are presented in the accompanying supplement. We implemented two types of models in this study: a U-Net and a generative adversarial network (GAN). The U-Net used a standard 3D architecture to generate a 3D dose distribution from contoured MRI images [14]. A mean squared error loss function was used to train the U-Net. The GAN used a pix2pix architecture [14] to combine the same architecture as our U-Net model with a discriminator, a second neural network within the GAN that predicted the likelihood that a dose distribution was from a clinical plan rather than generated by the U-Net. Both neural networks within the GAN were trained simultaneously, such that predictions from the discriminator were used to improve the dose produced by the U-Net model within the GAN via a typical GAN loss function. A binary cross entropy loss function was used for the discriminator.

2.4 Model Training and Prediction

The modified MRI images, target masks, 3D dose distributions, and distance tensors were used to train two GKRS-specific dose prediction models, one with a GAN architecture (GAN-GK) and another with only a 3D U-Net architecture (U-Net-GK). To accommodate different prescription doses between cases, clinical dose distributions were normalized relative to their nominal prescription doses prior to training. Baseline models for the GAN (GAN-Baseline) and 3D U-Net (U-Net-Baseline) were trained on patient data without GKRS-specific processing. The networks were developed in Python 3.7 using TensorFlow 1.12.3.

All models were trained using the same 272 plans in our training dataset. Each model was trained for 200 epochs on an Nvidia 1080 Ti GPU with 12 GB of memory, which took approximately 6.5 and 3 days for the GAN and U-Net models, respectively. All optimization was done via gradient descent using the Adam optimizer with momentum parameters β1 = 0.5, β2 = 0.999, and a learning rate of 0.0002.
These hyperparameters were selected because they have been effective for a variety of other applications, and additional tuning was computationally expensive [14]. The models were trained with a batch size of eight, which was the largest size we could use due to computational limitations.

Predicted 3D dose distributions for the 50 test plans were generated using each model. Dose predictions generated by GAN-GK and U-Net-GK were scaled back to their original target size and prescription dose, and the predictions for all tumor spaces in the patient were combined to recreate a full 3D dose distribution. A dose of zero was assigned to all voxels that were excluded from all tumor spaces, and the average dose was used for voxels with overlapping tumor spaces.

2.5 Analysis

To evaluate the accuracy of the predicted dose distributions relative to the clinically delivered dose, a global 3D gamma analysis was used [15,16]. For this analysis, we used four agreement criteria that have been used in other GKRS evaluations (4%/2 mm, 3%/3 mm, 3%/1 mm, and 1%/1 mm) [17-19]. A low-dose threshold equal to 5% of the maximum dose was used to compute the gamma passing rate for each patient. A two-tailed Wilcoxon signed-rank test was used to compare the gamma passing rates of the predictions made with and without data modification, with p < 0.05 considered significant.

Further analysis using the 4%/2 mm gamma passing rate was done to explore where the GKRS-specific predictions were most successful and to identify where future improvements are needed. For the purposes of this analysis, each target was divided into three regions: i) the inside, which included all the voxels in the target mask; ii) the periphery, which included all voxels within a two-voxel ring around each target; and iii) the outside, which included the remaining voxels in the tumor space.
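The global gamma analysis above can be sketched with a brute-force implementation. This is a simplified illustration of the standard gamma index (dose-difference tolerance normalized to the global maximum, distance-to-agreement search over a local neighbourhood), not the evaluation code used in the study:

```python
import numpy as np
from itertools import product

def gamma_pass_rate(ref, ev, voxel_mm, dose_pct=4.0, dta_mm=2.0, cutoff=0.05):
    """Global 3D gamma passing rate (%) of evaluated dose `ev` against
    reference dose `ref`, e.g. dose_pct=4, dta_mm=2 for a 4%/2 mm criterion."""
    tol_dose = dose_pct / 100.0 * ref.max()            # global dose tolerance
    radius = [int(np.ceil(dta_mm / s)) for s in voxel_mm]
    offsets = list(product(*[range(-r, r + 1) for r in radius]))
    shape = np.array(ref.shape)
    passed, total = 0, 0
    for v in np.argwhere(ref > cutoff * ref.max()):    # low-dose threshold
        best = np.inf
        for off in offsets:
            u = v + off
            if np.any(u < 0) or np.any(u >= shape):
                continue
            dist = np.linalg.norm(np.asarray(off) * voxel_mm)
            if dist > dta_mm:
                continue
            dd = ev[tuple(u)] - ref[tuple(v)]
            best = min(best, (dd / tol_dose) ** 2 + (dist / dta_mm) ** 2)
        total += 1
        passed += best <= 1.0                          # gamma <= 1 passes
    return 100.0 * passed / total
```

A vectorized or kd-tree implementation would be needed for clinical-sized grids; the brute-force search here only shows the definition.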
To evaluate prediction quality, the coverage, selectivity, and conformity indices [20] were calculated for each target and compared to the same indices for the clinical doses. To compare the difference in quality between the GKRS-specific predictions and their baseline counterparts, the absolute conformity index difference between predicted and clinical plans was calculated and compared using a two-tailed Wilcoxon signed-rank test, with a significance level of 0.05.

3. Results

3.1 Summary of Clinical Plan Data

Figure 2 summarizes the dataset that was used to train and test the models. There was a large range in the size of the targets, number of isocenters per target, and prescription dose. The number of targets per patient ranged from 1 to 26, and the types of targets included brain metastases (treated in 1 to 5 fractions) and acoustic neuromas (treated in 1 fraction). There was a large range in target volumes (34 to 184,750 voxels; 0.0085 cc to 46.1875 cc), number of isocenters (1 to 57), and target dose prescriptions (4 to 27.5 Gy). Over 37% and 5% of all targets had diameters exceeding 2 cm and 4 cm, respectively.

Figure 2: Characteristics of the dataset used to train and test the model.

3.2 Accuracy of Predicted GK-specific 3D Dose Distributions

Figure 3 shows the distribution of the gamma passing rates of the predictions at various gamma criterion levels with respect to the clinical dose. Across all criterion levels, both GAN-GK and U-Net-GK achieved gamma passing rates that were significantly higher (i.e., better) than those of GAN-Baseline (Z = -7.37, p < 0.001) and U-Net-Baseline (Z = -7.33, p < 0.001). This result indicates that the GKRS-specific approaches produce dose distributions that are more similar to the clinical dose than standard baseline approaches. We also found that the performance of the two GKRS-specific approaches was comparable.
For example, compared to the clinical dose using the 4%/2mm gamma criterion, GAN-GK and U-Net-GK achieved average gamma passing rates of 84.9 ± 15.3% and 83.1 ± 17.2%, respectively; with the much stricter 1%/1mm criterion, GAN-GK and U-Net-GK achieved much lower average passing rates of 25.2 ± 11.6% and 24.4 ± 11.3%, respectively.

Figure 3: The distribution of gamma passing rates for all models at four gamma criterion levels.

With regards to the GKRS-specific predictions, the sub-analysis of the gamma passing rates of both models showed that the inside of the target performed slightly better than the periphery on average, with 82.2 ± 19.5% of the voxels passing compared to 79.8 ± 16.4%. The voxels outside of the target performed the best, with an average passing rate of 91.6 ± 10.7%.

3.3 Quality of Predicted GK-specific 3D Dose Distributions

Table 1 shows the mean and standard deviation of the coverage index, selectivity index, conformity index, and absolute conformity difference for the predictions with respect to the clinical dose. Overall, the GKRS-specific approaches dominated their baseline alternatives in terms of the coverage, selectivity, and conformity indices. Both GAN-GK and U-Net-GK predicted doses with coverage, selectivity, and conformity indices that were within 8% of the clinical doses. This result implies that the predictions were very similar to the clinical doses in quality, with average absolute conformity differences of 0.086 ± 0.11 and 0.092 ± 0.11 for GAN-GK and U-Net-GK, respectively. In contrast, the average conformity of the baseline predictions was significantly worse than that of their corresponding clinical plans, with average absolute conformity differences of 0.177 ± 0.16 and 0.189 ± 0.17 for GAN-Baseline and U-Net-Baseline, respectively.
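The indices reported in Table 1 can be computed per target from the target mask and the prescription isodose volume. The sketch below assumes the standard coverage/selectivity/conformity definitions of [20] (conformity as the product of coverage and selectivity); the function name and array layout are our own:

```python
import numpy as np

def conformity_indices(target_mask, dose, prescription):
    """Coverage, selectivity, and conformity index for one target, computed
    from the prescription isodose volume (PIV) and the target volume (TV)."""
    piv = dose >= prescription                   # voxels at or above prescription
    tv = target_mask.astype(bool)
    tv_piv = np.logical_and(tv, piv).sum()       # target covered by prescription
    coverage = tv_piv / tv.sum()                 # TV_PIV / TV
    selectivity = tv_piv / piv.sum()             # TV_PIV / PIV
    return coverage, selectivity, coverage * selectivity
```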
                                      Clinical        GAN-GK          U-Net-GK        GAN-Baseline    U-Net-Baseline
Coverage index                        0.979 ± 0.02    0.952 ± 0.11    0.968 ± 0.12    0.863 ± 0.21    0.861 ± 0.22
Selectivity index                     0.554 ± 0.22    0.597 ± 0.22    0.539 ± 0.21    0.527 ± 0.21    0.542 ± 0.18
Conformity index                      0.546 ± 0.22    0.560 ± 0.20    0.513 ± 0.20    0.452 ± 0.22    0.474 ± 0.23
Absolute conformity index difference  N/A             0.086 ± 0.11    0.092 ± 0.11    0.177 ± 0.16    0.189 ± 0.17

Table 1: Average and standard deviation of the coverage index, selectivity index, conformity index, and absolute conformity index difference (compared to clinical) for the 3D dose predictions of the 50 out-of-sample patients.

3.4 Visual Comparison of GK-specific Predictions to Baseline Predictions

Figure 4 shows examples of predictions made using the GK-specific models compared to predictions made using the baseline models. The examples cover two sample patients (one in each row) to showcase the model performance in different situations, and highlight the impact of the data modification pipeline, which enables high-resolution dose predictions. In addition, predictions made using the baseline models often resulted in unrealistically low dose to small targets, as seen in Figure 4f.

Figure 4: a-b) Clinical dose distributions. c) U-Net-GK dose prediction. d) GAN-GK dose prediction. e) U-Net-Baseline dose prediction. f) GAN-Baseline dose prediction. As can be seen, predictions made using the baseline models are of much lower resolution and sometimes result in low- or no-dose predictions.

4. Discussion

In this study, we present novel data modification techniques to facilitate 3D dose prediction for GKRS. We demonstrated that separating the prediction of a full dose distribution into several smaller predictions enables deep learning models to produce more accurate and reliable predictions than those obtained from off-the-shelf methods.
Of note, our novel methodology was effective on a heterogeneous patient population with a large range of target shapes and sizes. This approach serves as a necessary first step towards developing a KBP pipeline for GKRS that can be adapted for use in any GKRS clinic.

Using the modified data, predictions from GAN-GK and U-Net-GK achieved gamma passing rates similar to or better than those achieved by comparable models in other disease sites [6-8]. For example, a recent study that developed approaches to predict 3D dose distributions of rectal cancer IMRT plans achieved gamma passing rates between 81-90% with a gamma criterion of 3%/5mm [7], which is comparable to our GK-specific approaches that achieved gamma passing rates of 83-85% with a gamma criterion of 4%/2mm. The similarity of the predictions arising from GAN-GK and U-Net-GK to their clinical counterparts is encouraging given the ranges in target size, shape, and quantity among the GKRS plans in our dataset.

While the predictions perform well under looser criteria, when the distance-to-agreement and dose difference are restricted to 1%/1mm the predictions are relatively poor, with average gamma passing rates of 25.2 ± 11.6% and 24.4 ± 11.3% for GAN-GK and U-Net-GK, respectively. However, the primary factor in this drop in passing rate appears to be the stricter dose difference criterion. When the distance-to-agreement criterion is lowered from 3mm to 1mm, with a dose difference of 3%, the passing rate only dropped by an average of 10.3% and 8.7% for GAN-GK and U-Net-GK, respectively. These results indicate that the methodology can produce predictions that are similar in shape to their clinical counterparts. This is desirable for GKRS, where spatial resolution has relatively high clinical relevance due to steep dose gradients and small targets.
In contrast, while the predictions appear less likely to match the intensity on a voxel-by-voxel basis (likely due to the small voxel volumes coupled with steep dose gradients), achieving closer dose agreement is less clinically important because dose is often prescribed to an isodose line in the 50-60% range.

We included several gamma criteria to complement similar studies in the GKRS literature that compare the similarity of new dose distributions to their clinical counterparts. Our gamma analysis quantified the dosimetric accuracy of the predictions at different spatial resolutions by varying the spatial portion of the gamma criterion between 1mm and 3mm and the dose portion between 1% and 4%. Across all gamma criteria, the predictions made using GAN-GK and U-Net-GK perform significantly better than the baseline predictions. The lower standard deviations of the gamma passing rates of the GAN-GK and U-Net-GK predictions also indicate greater consistency. Since better dose predictions are more likely to lead to higher-quality plans [11], the presented prediction methodology would serve well as the first stage of a two-stage GKRS KBP pipeline.

Our novel approach to dose prediction is centred around GKRS-specific data modification. This focus differs from many previous studies that concentrate on developing new architectures [6,7,9,12,13]. As our contributions are focused on the data modification process, we did not fully explore other factors that can improve the predictions, such as hyperparameter tuning, tensor sizes, and training duration. The results of this study demonstrate that existing dose prediction models can be tailored for GKRS by data modification alone. This enables us to leverage approaches from the rich dose prediction literature that covers other sites and modalities [6,7,13,21-23]. Most of those studies used a GAN or U-Net architecture.
While our GAN model (i.e., GAN-GK) produced marginally better predictions than the U-Net model (i.e., U-Net-GK), a result similar to previous studies [13], it also required more than double the training time of the U-Net model (6.5 days versus 3). As such, training and cross-validation of a U-Net model is more practical for future GKRS datasets.

There are several benefits to leveraging data modification techniques in the training process. First, the training data can use all the pixels stored in the native treatment image without exceeding computational memory constraints. This facilitates models that generate high-resolution dose predictions, as seen in Figure 3. Second, using tumor spaces generates more unique data points for the training set. In our case, tumor spaces transformed our training dataset of 272 plans into a set of 628 tumor spaces that were used to train our GK-specific models. We conjecture that increasing the number of data points in the training set enabled the models to generalize better and produce higher-quality predictions. Lastly, data modification provides flexibility in the shape of the plan image data. Specifically, our approach eschews the need for consistent dimensions because we crop and resize the data to consistent dimensions using interpolation, which makes the approach adaptable to variations in data dimensions.

We opted to use a global gamma analysis to quantify our model in addition to traditional plan quality metrics (e.g., tumor coverage, dose conformity) since the predicted 3D dose distribution is not limited to the targets. Furthermore, in GKRS, metrics like coverage and conformity break down especially for small targets, as there are only a few voxels, making the metrics sensitive to small perturbations. Since large dose fall-off is common in GKRS plans, global gamma was chosen instead of local gamma as it is less likely to exaggerate the errors in regions with high gradient [24].
As seen in the sub-analysis, our model performs best at predicting dose to voxels outside of the target area and worst on the periphery of the target, as one would expect given the sharpness of the gradients there. While the predictions within the tumor were only marginally better than those on the periphery, the variation of dose within the tumor is usually not considered when evaluating treatment plans with the traditional plan quality metrics [25]. On the other hand, the result of the sub-analysis indicates that the models should be tuned further to improve the predicted periphery dose, which would likely improve the coverage, selectivity, and conformity of the predicted doses.

This approach has three notable limitations. First, we used a heterogeneous dataset comprised of clinical plans with a range of target sizes, prescription doses, numbers of isocenters, and numbers of targets (see Figure 2). For example, 3.7% of the tumor spaces in the dataset contained more than one target. As a result, the model may be less effective for patients with uncommon characteristics (e.g., patients with multiple nearby targets). Second, organs-at-risk were not considered in the models. Including organ-at-risk contours in the future would likely improve prediction quality by directing more of the model's attention towards important healthy tissue. Finally, all our training and testing data were modified via spline interpolation, which makes model quality dependent on the size of the interpolation errors. As a result, poorly interpolated data could have adverse effects that limit model performance in both the training and testing processes.

5. Conclusion
In this study, we developed a novel KBP method for GKRS, supported by a data modification pipeline that transforms and upscales GKRS patient data for use in machine learning-based 3D dose prediction.
We demonstrate that utilizing the modified data enables standard neural network models to produce high-quality dose predictions for GKRS patients that are superior to those of existing state-of-the-art techniques. The resulting predictions have the potential to support the development of high-quality treatment plans as part of an automated KBP pipeline.

6. Acknowledgements
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

References
[1] Faramand A, Lunsford DL. Gamma Knife radiosurgery: a review of epidemiology and clinical practice; 2020.

[2] Levivier M, Carrillo RE, Charrier R, Martin A, Thiran J-P. A real-time optimal inverse planning for Gamma Knife radiosurgery by convex optimization: description of the system and first dosimetry data. J Neurosurg. 2018;129(Suppl1):111-7. https://doi.org/10.3171/2018.7.GKS181572

[3] Sjölund J, Riad S, Hennix M, Nordström H. A linear programming approach to inverse planning in Gamma Knife radiosurgery. Med Phys. 2019;46(4):1533-44. https://doi.org/10.1002/mp.13440

[4] Momin S, Fu Y, Lei Y, Roper J, Bradley J, Curran W, Liu T, Yang X. Knowledge-based radiation treatment planning: a data-driven method survey. J Appl Clin Med Phys. 2021;22(8):16-44. https://doi.org/10.1002/acm2.13337

[5] Ge Y, Wu QJ. Knowledge-based planning for intensity-modulated radiation therapy: a review of data-driven approaches. Med Phys. 2019;46(6):2760-75. https://doi.org/10.1002/mp.13526

[6] Mahmood R, Babier A, McNiven A, Diamant A, Chan TCY. Automated treatment planning in radiation therapy using generative adversarial networks. Proc Mach Learn Res. 2018;85:1-14. http://arxiv.org/abs/1807.06489

[7] Zhou J, Peng Z, Song Y, Chang Y, Pei X, Sheng L, Xu G. A method of using deep learning to predict three-dimensional dose distributions for intensity-modulated radiotherapy of rectal cancer. J Appl Clin Med Phys.
+2020;21(5):26-37. https://doi:10.1002/acm2.12849 + +[8] Chen X, Men K, Li Y, Yi J, Dai J. A feasibility study on an automated method to generate patient-specific dose +distributions for radiotherapy using deep learning. Med Phys. 2019;46(1):56-64. https://doi:10.1002/mp.13262 + +[9] Qi M, Li Y, Wu A, Jia Q, Guo F, Lu X, et al. Region-specific three-dimensional dose distribution prediction: +a feasibility study on prostate VMAT cases. J Radiat Res Appl Sci. 2020;13(1):485-95. +https://doi:10.1080/16878507.2020.1756185 + +[10] Nanda A, Bir S, Ambekar S, Bollam P. Long-term outcome of gamma knife radiosurgery for metastatic brain +tumors originating from lung cancer. Surg Neurol Int. 2014;5(9):396. https://doi:10.4103/2152-7806.140197 + +[11] Babier A, Mahmood R, Zhang B, Alves V, Barragán-Montero A, Beaudry J, et al. OpenKBP-Opt: an +international and reproducible evaluation of 76 knowledge-based planning pipelines. Phys. Med. Biol. +2022;67(18). https://doi:10.1088/1361-6560/ac8044 + +[12] Fan J, Wang J, Chen Z, Hu C, Zhang Z, Hu W. Automatic treatment planning based on three-dimensional dose +distribution +predicted +from +deep +learning +technique. +Med +Phys. +2019;46(1):370-81. +https://doi:10.1002/mp.13271 + + + + +12 + + +[13] Babier A, Mahmood R, McNiven AL, Diamant A, Chan TCY. Knowledge-based automated planning with +three-dimensional +generative +adversarial +networks. +Med +Phys. +2020;47(2):297-306. +https://doi:10.1002/mp.13896 + +[14] Isola P, Zhu JY, Zhou T, Efros AA. Image-to-image translation with conditional adversarial networks. Proc - +30th +IEEE +Conf +Comput +Vis +Pattern +Recognition, +CVPR +2017. +2017;2017-Janua:5967-76. +https://doi:10.1109/CVPR.2017.632 + +[15] Low DA, Dempsey JF. Evaluation of the gamma dose distribution comparison method. Med Phys. +2003;30(9):2455-64. https://doi:10.1118/1.1598711 + +[16] Low DA, Harms WB, Mutic S, Purdy JA. A technique for the quantitative evaluation of dose distributions. +Med Phys. 
1998;25(5):656-61. https://doi.org/10.1118/1.598248

[17] Gopishankar N, Watanabe Y, Subbiah V. MRI-based polymer gel dosimetry for validating plans with multiple matrices in Gamma Knife stereotactic radiosurgery. J Appl Clin Med Phys. 2011;12(2):133-45. https://doi.org/10.1120/jacmp.v12i2.3333

[18] Chung H, Park J, Chun K. Verification of dose profiles generated by the convolution algorithm of the gamma knife radiosurgery planning system. Med Phys. 2017;44(9):4880-9. https://doi.org/10.1002/mp.12347

[19] Park J, Han J, Kim C, Oh C, Lee D, Suh T, Gyu D, Chung H. Application of the gamma evaluation method in Gamma Knife film dosimetry. Med Phys. 2011;38(10):5778-87. https://doi.org/10.1118/1.3641644

[20] Torrens M, Chung C, Chung HT, Hanssens P, Jaffray D, Kemeny A, et al. Standardization of terminology in stereotactic radiosurgery: report from the Standardization Committee of the International Leksell Gamma Knife Society: special topic. J Neurosurg. 2014;121(December):2-15. https://doi.org/10.3171/2014.7.gks141199

[21] Nguyen D, Jia X, Sher D, Lin M, Iqbal Z, Liu H, Jiang S. 3D radiotherapy dose prediction on head and neck cancer patients with a hierarchically densely connected U-net deep learning architecture. Phys Med Biol. 2019;64(6). https://doi.org/10.1088/1361-6560/ab039b

[22] Lee MS, Hwang D, Kim JH, Lee JS. Deep-dose: a voxel dose estimation method using deep convolutional neural network for personalized internal dosimetry. Sci Rep. 2019;9(1):1-9. https://doi.org/10.1038/s41598-019-46620-y

[23] Kearney V, Chan JW, Wang T, Perry A, Descovich M, Morin O, et al. DoseGAN: a generative adversarial network for synthetic dose prediction using attention-gated discrimination and generation. Sci Rep. 2020;10(1):1-8. https://doi.org/10.1038/s41598-020-68062-7

[24] Hussein M, Clark CH, Nisbet A. Challenges in calculation of the gamma index in radiotherapy - towards good practice. Phys Med. 2017;36:1-11.
https://doi.org/10.1016/j.ejmp.2017.03.001

[25] Menon SV, Paramu R, Bhasi S, Nair RK. Evaluation of plan quality metrics in stereotactic radiosurgery/radiotherapy in the treatment plans of arteriovenous malformations. J Med Phys. 2018;43(4):214. https://doi.org/10.4103/JMP.JMP_25_18

diff --git a/-dE0T4oBgHgl3EQfxAFs/content/tmp_files/load_file.txt b/-dE0T4oBgHgl3EQfxAFs/content/tmp_files/load_file.txt
new file mode 100644
index 0000000000000000000000000000000000000000..f79677305d70eca9d24b31abe122fe7cd20b34f5
--- /dev/null
+++ b/-dE0T4oBgHgl3EQfxAFs/content/tmp_files/load_file.txt
@@ -0,0 +1,510 @@
filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE0T4oBgHgl3EQfxAFs/content/2301.02640v1.pdf,len=509

3D dose prediction for Gamma Knife radiosurgery using deep learning and data modification

Binghao Zhang1, Aaron Babier1, Timothy C.Y. Chan1, Mark Ruschin2
1 Department of Mechanical and Industrial Engineering, University of Toronto, Toronto, Canada
2 Department of Radiation Oncology, Sunnybrook Health Sciences Centre, University of Toronto, Toronto, Canada
E-mail: binghao.zhang@mail.utoronto.ca

Abstract
Purpose: To develop a machine learning-based 3D dose prediction methodology for Gamma Knife (GK) radiosurgery.
The methodology accounts for cases involving targets of any number, size, and shape.
Methods: Data from 322 GK treatment plans was modified by isolating and cropping the contoured MRI and clinical dose distributions based on tumor location, then scaling the resulting tumor spaces to a standard size. An accompanying 3D tensor was created for each instance to account for tumor size. The modified dataset for 272 patients was used to train both a generative adversarial network (GAN-GK) and a 3D U-Net model (U-Net-GK). Unmodified data was used to train equivalent baseline models. All models were used to predict the dose distribution of 50 out-of-sample patients. Prediction accuracy was evaluated using gamma, with criteria of 4%/2mm, 3%/3mm, 3%/1mm, and 1%/1mm. Prediction quality was assessed using coverage, selectivity, and conformity indices.
Results: The predictions resulting from GAN-GK and U-Net-GK were similar to their clinical counterparts, with average gamma (4%/2mm) passing rates of 84.9 ± 15.3% and 83.1 ± 17.2%, respectively. In contrast, the gamma passing rates of the baseline models were significantly worse than those of their respective GK-specific models (p < 0.001) at all criterion levels. The quality of GK-specific predictions was also similar to that of clinical plans.
Conclusion: Deep learning models can use GK-specific data modification to predict 3D dose distributions for GKRS plans with a large range in size, shape, or number of targets.
Standard deep learning models applied to unmodified GK data generated poorer predictions.
Keywords: 3D dose prediction, Gamma Knife, automated planning, knowledge-based planning

1. Introduction
Gamma Knife (GK) radiosurgery (GKRS) is a form of radiotherapy that precisely treats abnormalities within the brain using narrow beams of radiation. GKRS is an effective treatment for a wide array of diseases including benign tumors, malignant tumors, vascular abnormalities, and functional disorders [1]. Conventional processes for generating GKRS treatment plans are time-consuming for clinicians, which has motivated several studies to explore new approaches like inverse planning [2,3]. However, a major limitation of inverse planning is that it requires human intervention to tune parameters and personalize the resulting treatment plans. There exist automated planning methods for other modalities that can generate patient-specific parameters for inverse planning [4,5].
An integral part of these approaches is a machine learning (ML) method that produces dose predictions using patient images. There is also a small set of models that incorporate additional patient features (e.g., age, histology) to account for patient outcomes [4,5]. In general, automated planning approaches that use predicted dose distributions are called knowledge-based planning (KBP) pipelines. A KBP pipeline is typically presented as a two-stage process that leverages information from previous treatment plans to produce high-quality treatment plans for new patients without human intervention. The first stage is a dose prediction model that learns the relationship between dose and delineated medical images from previous plans. The second stage is an optimization model that generates a treatment plan from the predicted dose distribution.
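In code form, the two-stage KBP structure amounts to composing a learned predictor with a plan optimizer. The function names and plan dictionary below are illustrative placeholders, not an interface from any specific KBP system:

```python
from typing import Callable
import numpy as np

def kbp_pipeline(images: np.ndarray,
                 predict_dose: Callable[[np.ndarray], np.ndarray],
                 optimize_plan: Callable[[np.ndarray], dict]) -> dict:
    """Two-stage knowledge-based planning: (1) predict a 3D dose from
    delineated images, (2) solve an inverse-planning optimization that
    reproduces the predicted dose with deliverable machine parameters."""
    predicted_dose = predict_dose(images)   # stage 1: learned model
    plan = optimize_plan(predicted_dose)    # stage 2: optimization model
    return plan

# Toy stand-ins so the two-stage structure is runnable end to end.
toy_images = np.zeros((8, 8, 8))
plan = kbp_pipeline(
    toy_images,
    predict_dose=lambda x: np.ones_like(x),                   # dummy predictor
    optimize_plan=lambda d: {"isocenters": [], "mean_dose": float(d.mean())},
)
```

In a real pipeline the predictor would be a trained network and the optimizer an inverse-planning solver; only the composition is fixed.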
Many recent advances in KBP have focused on 3D dose prediction using neural networks [4,5]. These approaches have primarily been developed and tested for intensity-modulated radiotherapy (IMRT) and volumetric modulated arc therapy (VMAT) [6-9]. However, GKRS presents three unique challenges that necessitate a new approach for dose prediction. First, there is a large range in treatment target size. Many large targets (e.g., post-operative metastases or benign tumors) are up to 25 times the diameter of small targets (e.g., small intact brain metastases) [10].
This variation in target size requires a prediction model that can adequately accommodate both the smallest and largest targets. Second, GKRS cases can have a relatively large number of targets (e.g., more than 30) with multiple dose prescription levels. As a result, the impact of dose to one target on another can vary drastically between patients. Third, targets are often separated by large amounts of healthy brain tissue. A standard ML approach that considers the whole treatment volume would require a low spatial resolution (i.e., large voxel volumes) to accommodate the computational memory limits associated with large neural networks, which would be inadequate for GKRS because it must be planned with a high spatial resolution (i.e., small voxel volumes). These factors further increase both the complexity and spatial resolution requirements of the model.

In this paper, we develop a novel GKRS dose prediction approach. This is an important first step towards creating an automated GKRS planning pipeline, since the quality of plans produced by such a pipeline is positively correlated with the quality of the dose predictions [11]. Our approach accommodates any size, number, and shape of targets without compromising the spatial resolution of the predicted dose. The proposed approach involves a novel GKRS-specific data modification method, an upscaling step, and construction of a distance tensor to relate each target back to its size. We demonstrate accuracy on a series of historically treated patient cases.
Our high-quality predictions could be used to estimate parameters for inverse optimization models that generate high-quality treatment plans [6].

2. Methods
Our methods consisted of five main steps: (2.1) extracting clinical treatment plan data, (2.2) modifying plan image data, (2.3) tailoring existing neural network models for GKRS, (2.4) training dose prediction models, and (2.5) evaluating model dose predictions.

2.1 Data Extraction
This research ethics board-approved study involved retrospective access to radiotherapy plans for 322 patients who were treated at Sunnybrook Health Sciences Centre.
From each plan, we extracted the MRI images, 3D dose distributions, and target contours. All target contours were delineated for treatment by a radiation oncologist on high-resolution MRIs. To visualize the heterogeneity of our dataset, we plotted the distributions of target size, number of isocentres, number of targets, and prescription dose in a histogram.

2.2 Data Processing
The data was processed for our GKRS dose prediction in four major ways, which are summarized in Figure 1 and explained in the remainder of this section. Patient data was first processed into a format that was amenable to computer vision models (e.g., consistent nomenclature, data aligned on a voxel grid).
Most notably, we converted each target contour into a mask that labelled voxels in healthy tissue with 0 and voxels in targets with the target's prescription dose (e.g., 25 Gy). These masks enabled our dose prediction models to handle plans with the wide range of dose prescription levels that is common in GKRS. This standard pre-processing was applied to all our data, and the resulting dataset was used to train and test our baseline models. We developed three additional pre-processing techniques for our GKRS-specific approach.

Figure 1: An overview of our workflow and the data modification techniques used in this study. Our GK-specific data modification includes transforming patient data with a novel tumor space transformation and an upscaling method.
Then we create a new feature that we call a distance tensor to quantify the distance between tissue and targets.

First, we developed tumor spaces, which were engineered to isolate small volumes surrounding targets. Specifically, each tumor space was the smallest bounding box that contained at least one target surrounded by 1 cm of padding. To ensure that the dosimetric interactions between close targets were captured, any targets within 1 cm of each other were taken together in one tumor space, as shown by the example in Figure 1. We sampled these tumor spaces from the MRI, dose distribution, and target masks of each case to create a training set of 628 tumor spaces from 272 plans. Similarly, we created a testing set of 129 tumor spaces from the 50 plans in the test set.
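The tumor-space construction above can be sketched with SciPy. Grouping targets by dilating the target mask with the 1 cm padding radius is our assumption about how the "within 1 cm" merging could be implemented; the paper only states that nearby targets share a tumor space:

```python
import numpy as np
from scipy import ndimage

def tumor_spaces(target_mask: np.ndarray, voxel_mm: float, pad_mm: float = 10.0):
    """Sketch of tumor-space extraction: dilate targets by the padding radius
    so nearby targets merge, label connected groups, and return one padded
    bounding box (a tuple of slices) per group."""
    r = int(round(pad_mm / voxel_mm))
    # Spherical structuring element with the padding radius, in voxels.
    zz, yy, xx = np.ogrid[-r:r + 1, -r:r + 1, -r:r + 1]
    ball = zz**2 + yy**2 + xx**2 <= r**2
    merged = ndimage.binary_dilation(target_mask > 0, structure=ball)
    labels, _ = ndimage.label(merged)
    boxes = []
    for sl in ndimage.find_objects(labels):
        # The dilated region already includes the padding; clip to the volume.
        boxes.append(tuple(slice(max(s.start, 0), min(s.stop, dim))
                           for s, dim in zip(sl, target_mask.shape)))
    return boxes
```

Each returned tuple of slices can then be used to crop the MRI, dose, and mask tensors to the same tumor space.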
[Figure 1 diagram: contoured MRI images pass through the tumor space transformation and upscaling (128 x 128 x 64) to produce upscaled MRI images, upscaled dose distributions, upscaled target masks, and distance tensors for the GK-specific prediction models, while standard pre-processing of the clinical dose distributions feeds the baseline prediction models.]

Second, we developed an upscaling technique to ensure consistent dimensionality across tumor spaces. Inconsistent dimensions normally present a challenge for computer vision models because the models are initialized to expect data with predefined dimensions. To accommodate the range of tumor space dimensions, all data was upscaled using spline interpolation to fit into a 128 x 128 x 64 voxel tensor. This tensor size was chosen to balance image detail and training time. The final upscaled tensors included the cropped MRI images, dose distributions, and target masks within each respective tumor space.

Third, for each tumor space we engineered distance tensors, which were designed to account for the distance between each voxel and its nearest target.
Each element in the distance tensor represented a voxel and had a value equal to the Euclidean distance $d$ between that voxel $v$ and its nearest target centroid $t$. The measure was calculated with respect to all the target centroids $t \in T$ within the patient and evaluated over all three spatial dimensions, indexed by $i$. Specifically, the value of each element in the distance tensor was calculated as

$$d = \min_{t \in T} \sqrt{\sum_{i=1}^{3} (v_i - t_i)^2}.$$

2.3 Model Architectures
Our approach builds on the success of existing neural network models from the IMRT and VMAT literature [6,7,12,13]. Specifically, we adapted the architectures used in previous dose prediction approaches to fit the data size and structure of GKRS. Full details of the model architecture are presented in the accompanying supplement.
We implemented two types of models in this study, a U-Net and a generative adversarial network (GAN). The U-Net used a standard 3D architecture to generate a 3D dose from contoured MRI images [14], and was trained with a mean squared error loss function. The GAN used a pix2pix architecture [14] to combine the same architecture as our U-Net model with a discriminator, a second neural network within the GAN that predicted the likelihood that a dose distribution was from a clinical plan or generated by the U-Net. Both neural networks within the GAN were trained simultaneously, such that predictions from the discriminator were used to improve the dose produced by the U-Net model within the GAN via a typical GAN loss function. A binary cross entropy loss function was used for the discriminator model.
2.4 Model Training and Prediction
The modified MRI images, target masks, 3D dose distributions, and distance tensors were used to train two GKRS-specific dose prediction models, one with a GAN architecture (GAN-GK) and another with only a 3D U-Net architecture (U-Net-GK). To accommodate different prescription doses between cases, clinical dose distributions were normalized relative to their nominal prescription dose prior to training. Baseline models for the GAN (GAN-Baseline) and 3D U-Net (U-Net-Baseline) were trained on patient data without GKRS-specific processing. The networks were developed in Python 3.7 using TensorFlow 1.12.3. All models were trained using the same 272 plans in our training dataset.
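The loss structure described above can be sketched as follows. This is a minimal numpy illustration with hypothetical names; the reconstruction weight `lam` is an assumption (the paper does not state its exact weighting here).

```python
import numpy as np

EPS = 1e-7  # numerical guard inside the logs

def discriminator_loss(p_real, p_fake):
    """Binary cross-entropy: clinical doses labeled 1, generated doses labeled 0."""
    return -np.mean(np.log(p_real + EPS)) - np.mean(np.log(1.0 - p_fake + EPS))

def generator_loss(p_fake, dose_pred, dose_clinical, lam=100.0):
    """Adversarial term (fool the discriminator) plus a reconstruction term."""
    adversarial = -np.mean(np.log(p_fake + EPS))
    reconstruction = np.mean((dose_pred - dose_clinical) ** 2)  # MSE, as in the U-Net
    return adversarial + lam * reconstruction
```

Training would alternate between minimizing `discriminator_loss` and `generator_loss`, as in a typical pix2pix setup.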
Each model was also trained for 200 epochs on an Nvidia 1080 Ti GPU with 12 GB of memory, which took approximately 6.5 and 3 days for the GAN and U-Net models, respectively. Additionally, all optimization was done via gradient descent using the Adam optimizer with momentum parameters β1 = 0.5, β2 = 0.999, and a learning rate of 0.0002. These hyperparameters were selected because they have been effective for a variety of other applications, and additional tuning was computationally expensive [14]. The models were trained with a batch size of eight, which was the largest size we could use due to computational limitations. Predicted 3D dose distributions for the 50 test plans were generated using each model.
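A single Adam update with the momentum parameters quoted above can be sketched as follows (an illustration of the update rule, not the TensorFlow internals):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=2e-4, b1=0.5, b2=0.999, eps=1e-8):
    """One Adam update; t is the 1-based step counter used for bias correction."""
    m = b1 * m + (1 - b1) * grad            # first-moment (momentum) estimate
    v = b2 * v + (1 - b2) * grad ** 2       # second-moment estimate
    m_hat = m / (1 - b1 ** t)               # bias-corrected moments
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

On the first step the bias-corrected update reduces to roughly lr * sign(grad), independent of the gradient's scale.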
Dose predictions generated by GAN-GK and U-Net-GK were scaled back to their original target size and prescription dose, and the predictions for all tumor spaces in the patient were combined to recreate a full 3D dose distribution. A dose of zero was assigned to all voxels that were excluded from all tumor spaces, and the average dose was used for voxels with overlapping tumor spaces.
2.5 Analysis
To evaluate the accuracy of the dose distribution predictions relative to the clinically delivered dose, a global 3D gamma analysis was used [15,16]. For this analysis, we used four agreement criteria that have been used in other GKRS evaluations (4%/2 mm, 3%/3 mm, 3%/1 mm, and 1%/1 mm) [17-19]. A low-dose threshold equal to 5% of the maximum dose was used to compute the gamma passing rate for each patient. A two-tailed Wilcoxon signed-rank test was used to compare the gamma passing rate of the predictions made with and without data modification, with p < 0.05 being considered significant. Further analysis using a 4%/2 mm gamma passing rate was done to explore where the GKRS-specific predictions were most successful and to identify where future improvements are needed. For the purposes of this analysis, each target was divided into three regions: i) the inside, which included all the voxels in the target mask; ii) the periphery, which included all voxels within a two-voxel ring around each target; and iii) the outside, which included the remaining voxels in the tumor space. To evaluate prediction quality, the coverage, selectivity, and conformity indices [20] were calculated for each target and compared to the same indices for the clinical doses. To compare the difference in quality between GKRS-specific predictions and their baseline counterparts, the absolute conformity index difference between predicted and clinical plans was calculated and compared using a two-tailed Wilcoxon signed-rank test, with a significance level of 0.05.
3. Results
3.1 Summary of Clinical Plan Data
Figure 2 summarizes the dataset that was used to train and test the models. There was a large range in the size of the targets, number of isocenters per target, and prescription dose. The number of targets per patient ranged from 1 to 26, and the types of targets included brain metastases (treated in 1 to 5 fractions) and acoustic neuromas (treated in 1 fraction). There was a large range in target volumes (34 to 184750 voxels; 0.0085 cc to 46.1875 cc), number of isocenters (1 to 57), and target dose prescriptions (4 to 27.5 Gy). Over 37% and 5% of all targets also had diameters exceeding 2 cm and 4 cm, respectively.
Figure 2: Characteristics of the dataset used to train and test the model.
3.2 Accuracy of Predicted GK-specific 3D Dose Distributions
Figure 3 shows the distribution of the gamma passing rate of the predictions for various levels of gamma criteria with respect to the clinical dose. Across all criteria levels, both GAN-GK and U-Net-GK achieved gamma passing rates that were significantly higher (i.e., better) than those of the GAN-Baseline (Z = -7.37, p < 0.001) and U-Net-Baseline (Z = -7.33, p < 0.001). This result indicates that the GKRS-specific approaches produce dose that is more similar to clinical dose than standard baseline approaches. We also found that the performance of each GKRS-specific approach was comparable. For example, compared to the clinical dose using the 4%/2 mm gamma criterion, GAN-GK and U-Net-GK achieved average gamma passing rates of 84.9 ± 15.3% and 83.1 ± 17.2%, respectively; with a 1%/1 mm gamma criterion, which is much stricter than the 4%/2 mm criterion, GAN-GK and U-Net-GK both achieved much lower average passing rates of 25.2 ± 11.6% and 24.4 ± 11.3%, respectively.
Figure 3: The distribution of gamma passing rates for all models at four gamma criterion levels.
With regards to the GKRS-specific predictions, the sub-analysis of the gamma passing rate of both models showed that the inside of the target performed slightly better than the periphery on average, with 82.2 ± 19.5% of the voxels passing compared to 79.8 ± 16.4%. The voxels outside of the target performed the best, with an average passing rate of 91.6 ± 10.7%.
3.3 Quality of Predicted GK-specific 3D Dose Distributions
Table 1 shows the mean and standard deviation of the coverage index, selectivity index, conformity index, and absolute conformity difference for the predictions with respect to the clinical dose. Overall, the GKRS-specific approaches dominated their baseline alternatives in terms of the coverage, selectivity, and conformity indices. Both GAN-GK and U-Net-GK predicted doses with coverage, selectivity, and conformity indices that were within 8% of the clinical doses. This result implies that the predictions were very similar to the clinical doses in quality, with an average absolute conformity difference of 0.086 ± 0.11 and 0.092 ± 0.11 for GAN-GK and U-Net-GK, respectively. In contrast, the average conformity of baseline predictions was significantly worse than their corresponding clinical plans, with an average absolute conformity difference of 0.177 ± 0.16 and 0.189 ± 0.17 for GAN-Baseline and U-Net-Baseline, respectively.

                                        Clinical       GAN-GK         U-Net-GK       GAN-Baseline   U-Net-Baseline
Coverage index                          0.979 ± 0.02   0.952 ± 0.11   0.968 ± 0.12   0.863 ± 0.21   0.861 ± 0.22
Selectivity index                       0.554 ± 0.22   0.597 ± 0.22   0.539 ± 0.21   0.527 ± 0.21   0.542 ± 0.18
Conformity index                        0.546 ± 0.22   0.560 ± 0.20   0.513 ± 0.20   0.452 ± 0.22   0.474 ± 0.23
Absolute conformity index difference    N/A            0.086 ± 0.11   0.092 ± 0.11   0.177 ± 0.16   0.189 ± 0.17

Table 1: Average and standard deviation of the coverage index, selectivity index, conformity index, and absolute conformity index difference (compared to clinical) for the 3D dose predictions of 50 out-of-sample patients.
3.4 Visual Comparison of GK-specific Predictions to Baseline Predictions
Figure 4 shows an example of predictions made using GK-specific models compared to predictions made using baseline models. The example shows two sample patients (one in each row) to showcase the model performance in different situations.
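The evaluation metrics above can be sketched as follows. This is a brute-force illustration with hypothetical names: the gamma search scans every voxel center (practical only for toy grids, and a discrete approximation of a true gamma analysis), and the coverage/selectivity/conformity definitions assume the standard Paddick-style formulation cited as [20].

```python
import numpy as np

def gamma_pass_rate(ref, ev, dose_pct=4.0, dist_mm=2.0, voxel_mm=1.0, low=0.05):
    """Global 3D gamma passing rate of `ev` against reference `ref` (brute force)."""
    dd = dose_pct / 100.0 * ref.max()                  # global dose criterion
    coords = np.argwhere(np.isfinite(ref)) * voxel_mm  # every voxel position, in mm
    ev_flat = ev.ravel()
    passed = total = 0
    for idx in np.argwhere(ref > low * ref.max()):     # 5%-of-max low-dose threshold
        dist2 = np.sum((coords - idx * voxel_mm) ** 2, axis=1) / dist_mm ** 2
        dose2 = (ev_flat - ref[tuple(idx)]) ** 2 / dd ** 2
        total += 1
        passed += np.sqrt(np.min(dist2 + dose2)) <= 1.0
    return passed / total

def paddick_indices(dose, target_mask, rx):
    """Coverage, selectivity, and conformity of a dose against one target."""
    piv = dose >= rx                                   # prescription isodose volume
    tv_piv = np.sum(piv & target_mask)                 # target volume inside the PIV
    coverage = tv_piv / np.sum(target_mask)
    selectivity = tv_piv / np.sum(piv)
    return coverage, selectivity, coverage * selectivity
```

Conformity is the product of coverage and selectivity, so it penalizes both underdosing the target and spilling dose outside it.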
The example highlights the impact of the data modification pipeline, which enables high resolution dose predictions. In addition, the baseline models often produced predictions with unrealistically low dose to small targets, as seen in Figure 4f.
Figure 4: a-b) Clinical dose distributions. c) U-Net-GK dose prediction. d) GAN-GK dose prediction. e) U-Net-Baseline dose prediction. f) GAN-Baseline dose prediction. As can be seen, predictions made using baseline models are of much lower resolution and sometimes result in low- or no-dose predictions.
4. Discussion
In this study, we present novel data modification techniques to facilitate 3D dose prediction for GKRS.
We demonstrated that separating the prediction of a full dose distribution into several smaller predictions enables deep learning models to produce more accurate and reliable predictions than those obtained from off-the-shelf methods. Of note, our novel methodology was effective on a heterogeneous patient population with a large range of target shapes and sizes. This approach serves as a necessary first step towards developing a KBP pipeline for GKRS that can be adapted for use in any GKRS clinic. Using the modified data, predictions from GAN-GK and U-Net-GK achieved gamma passing rates similar to or better than those achieved by comparable models in other disease sites [6-8]. For example, a recent study that developed approaches to predict 3D dose distributions of rectal cancer IMRT plans achieved gamma passing rates between 81-90% with a gamma criterion of 3%/5 mm [7], which is comparable to our GK-specific approaches that achieved gamma passing rates of 83-85% with a gamma criterion of 4%/2 mm.
The similarity of the predictions arising from GAN-GK and U-Net-GK to their clinical counterparts is encouraging given the ranges in target size, shape, and quantity among the GKRS plans in our dataset. While the predictions perform well with looser criteria, when the dose difference and distance-to-agreement are restricted to 1%/1mm the predictions are relatively poor, with average gamma passing rates of 25.2 ± 11.6% and 24.4 ± 11.3% for GAN-GK and U-Net-GK, respectively. However, the primary factor in this fall in passing rate appears to be the stricter dose difference criterion. When the distance-to-agreement criterion is lowered from 3mm to 1mm, with a dose difference of 3%, the passing rate only drops by an average of 10.3% and 8.7% for GAN-GK and U-Net-GK, respectively.

These results indicate that the methodology can produce predictions that are similar in shape to their clinical counterparts. This is valuable for GKRS, where spatial resolution has relatively high clinical relevance due to steep dose gradients and small targets. In contrast, while predictions appear less likely to match the intensity on a voxel-by-voxel basis (likely due to the small voxel volumes coupled with steep dose gradients), achieving a more accurate dose agreement is less clinically important because dose is often prescribed to an isodose line in the 50-60% range. We included several gamma criteria to complement similar studies in the GKRS literature that compare the similarity of new dose distributions to their clinical counterparts. Our gamma analysis quantified the dosimetric accuracy of predictions at different spatial resolutions by varying the spatial portion of the gamma criterion between 1mm and 3mm and the dose portion between 1% and 4%.
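The gamma criteria discussed above combine a dose-difference tolerance (normalized globally to the maximum reference dose) with a distance-to-agreement (DTA) tolerance. As a rough illustration of the metric only, not of the paper's actual implementation, a minimal 1-D global gamma passing rate could be sketched as follows; the function name and the low-dose cutoff are assumptions:

```python
import numpy as np

def global_gamma_pass_rate(ref, ev, spacing_mm, dose_frac, dta_mm, cutoff=0.10):
    """Simplified 1-D global gamma analysis (in the style of Low et al. [15,16]).

    ref, ev    : reference and evaluated dose profiles on the same grid
    spacing_mm : grid spacing in mm
    dose_frac  : dose-difference criterion as a fraction of the global max
                 (e.g. 0.03 for a 3% criterion)
    dta_mm     : distance-to-agreement criterion in mm
    cutoff     : low-dose cutoff fraction (assumed; the paper does not state one)
    """
    ref, ev = np.asarray(ref, float), np.asarray(ev, float)
    dd = dose_frac * ref.max()                 # global dose normalization
    pos = np.arange(ref.size) * spacing_mm
    gammas = []
    # Evaluate gamma only at reference points above the low-dose cutoff.
    for i in np.flatnonzero(ref >= cutoff * ref.max()):
        dist_term = (pos - pos[i]) / dta_mm    # spatial part of gamma
        dose_term = (ev - ref[i]) / dd         # dose part of gamma
        gammas.append(np.sqrt(dist_term**2 + dose_term**2).min())
    return 100.0 * np.mean(np.asarray(gammas) <= 1.0)
```

A 3%/1mm criterion would correspond to `dose_frac=0.03, dta_mm=1.0`. Clinical gamma implementations operate on 3-D dose grids with sub-voxel interpolation; this sketch only illustrates how tightening either criterion lowers the passing rate.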
Across all gamma criteria, the predictions made using GAN-GK and U-Net-GK perform significantly better than baseline predictions. The lower standard deviation on the gamma passing rates of GAN-GK and U-Net-GK predictions also indicates greater consistency. Since better dose predictions are more likely to lead to higher-quality plans [11], the presented prediction methodology would serve well as the first stage of a two-stage GKRS KBP pipeline.

Our novel approach for dose prediction is centred around GKRS-specific data modification. This focus differs from many previous studies that concentrate on developing new architectures [6,7,9,12,13]. As the contributions are focused on the data modification process, we did not fully explore other factors that can improve the predictions, such as hyperparameter tuning, tensor sizes, and training duration. The results of this study demonstrate that existing dose prediction models can be tailored for GKRS by data modification alone.
This enables us to leverage approaches from the rich dose prediction literature that covers other sites and modalities [6,7,13,21-23]. Most of those studies used a GAN or U-Net architecture. While our GAN model (i.e., GAN-GK) produced marginally better predictions than the U-Net model (i.e., U-Net-GK), a result similar to previous studies [13], it also required more than double the training time of the U-Net model (6.5 days versus 3). As such, training and cross-validation of a U-Net model is more practical for future GKRS datasets.

There are several benefits to leveraging data modification techniques in the training process.
First, the training data can use all the pixels stored in the native treatment image without exceeding computational memory constraints. This facilitates models that generate high-resolution dose predictions, as seen in Figure 3. Second, using tumor spaces generates more unique data points for the training set. In our case, tumor spaces transformed our training dataset of 272 plans into a set of 628 tumor spaces that were used to train our GK-specific models. We conjecture that increasing the number of data points in the training set enabled the models to generalize better with higher-quality predictions. Lastly, data modification provides flexibility in the shape of plan image data. Specifically, our approach eschews the need for consistent dimensions because we crop and resize the data to consistent dimensions using interpolation, which makes the approach adaptable to variations in data dimensions.
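The crop-and-resize step described above can be sketched in a few lines. This is a minimal illustration, not the paper's pipeline: the function name, the crop and output shapes, and the spline order are hypothetical, since those implementation details are not specified here.

```python
import numpy as np
from scipy.ndimage import zoom

def make_tumor_space(dose, center, crop_shape, out_shape, order=3):
    """Crop a box around a target and resize it to a consistent tensor shape
    via spline interpolation (illustrative sketch; parameters are assumed).

    dose       : 3-D array (dose or image) in native resolution
    center     : (z, y, x) voxel index of the target centroid
    crop_shape : size of the box cut around the target
    out_shape  : consistent shape expected by the prediction model
    order      : spline interpolation order
    """
    # Pad so boxes centered near the volume edge stay in bounds.
    pad = [(s // 2, s - s // 2) for s in crop_shape]
    padded = np.pad(dose, pad, mode="constant")
    # After padding by s//2, the box [c, c+s) is centered on original voxel c.
    crop = padded[tuple(slice(c, c + s) for c, s in zip(center, crop_shape))]
    # Resize to the model's input shape with spline interpolation.
    factors = [o / c for o, c in zip(out_shape, crop_shape)]
    return zoom(crop, factors, order=order)
```

Interpolating every tumor space to one shape is what lets heterogeneous plans (varying grid sizes and resolutions) share a single network, at the cost of the interpolation errors noted in the limitations.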
We opted to use a global gamma analysis to evaluate our model, in addition to traditional plan quality metrics (e.g., tumor coverage, dose conformity), since the predicted 3D dose distribution is not limited only to targets. Furthermore, in GKRS, metrics like coverage and conformity break down especially for small targets, as there are only a few voxels, making the metrics sensitive to small perturbations. Since large dose fall-off is common in GKRS plans, global gamma was chosen instead of local gamma as it is less likely to exaggerate the errors in regions with high gradient [24]. As seen in the sub-analysis, our model performs best at predicting dose to voxels outside of the target area and worst on the periphery of the target, as one would expect given the sharpness of the gradients there. While the predictions within the tumor were only marginally better than on the periphery, the variation of dose within the tumor is usually not considered when evaluating treatment plans with the traditional plan quality metrics [25].
On the other hand, the results of the sub-analysis indicate that additional tuning of the models should be done to improve the predicted periphery dose, which would likely lead to an improvement in the coverage, specificity, and conformity of the predicted doses.

This approach has three notable limitations. First, we used a heterogeneous dataset composed of clinical plans that had a range in target sizes, prescription doses, number of isocenters, and number of targets (see Figure 2). For example, 3.7% of the tumor spaces in the dataset contained more than one target. As a result, the model may be less effective for patients with uncommon characteristics (e.g., patients with multiple nearby targets). Second, organs-at-risk were not considered in the models.
Including organ-at-risk contours in the future would likely improve the prediction quality by directing more of the model's attention towards important healthy tissue. Finally, all our training and testing data were modified via spline interpolation, which makes the model quality dependent on the size of interpolation errors. As a result, poorly interpolated data could have adverse effects that limit the model performance in both the training and testing processes.

5. Conclusion

In this study, we developed a novel KBP method for GKRS, supported by a data modification pipeline that transforms and upscales GKRS patient data for use in machine learning-based 3D dose prediction. We demonstrate that utilizing the augmented data enables standard neural network models to produce high-quality dose predictions for GKRS patients that are superior to existing state-of-the-art techniques. The resulting predictions have the potential to support the development of high-quality treatment plans as part of an automated KBP pipeline.
6. Acknowledgements

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

References

[1] Faramand A, Lunsford DL. Gamma Knife radiosurgery: a review of epidemiology and clinical practice. 2020.
[2] Levivier M, Carrillo RE, Charrier R, Martin A, Thiran J-P. A real-time optimal inverse planning for Gamma Knife radiosurgery by convex optimization: description of the system and first dosimetry data. J Neurosurg. 2018;129(Suppl1):111-7. https://doi.org/10.3171/2018.7.GKS181572
[3] Sjölund J, Riad S, Hennix M, Nordström H. A linear programming approach to inverse planning in Gamma Knife radiosurgery. Med Phys. 2019;46(4):1533-44. https://doi.org/10.1002/mp.13440
[4] Momin S, Fu Y, Lei Y, Roper J, Bradley J, Curran W, Liu T, Yang X. Knowledge-based radiation treatment planning: a data-driven method survey. J Appl Clin Med Phys. 2021;22(8):16-44. https://doi.org/10.1002/acm2.13337
[5] Ge Y, Wu QJ. Knowledge-based planning for intensity-modulated radiation therapy: a review of data-driven approaches. Med Phys. 2019;46(6):2760-75. https://doi.org/10.1002/mp.13526
[6] Mahmood R, Babier A, McNiven A, Diamant A, Chan TCY. Automated treatment planning in radiation therapy using generative adversarial networks. Proc Mach Learn Res. 2018;85:1-14. http://arxiv.org/abs/1807.06489
[7] Zhou J, Peng Z, Song Y, Chang Y, Pei X, Sheng L, Xu G. A method of using deep learning to predict three-dimensional dose distributions for intensity-modulated radiotherapy of rectal cancer. J Appl Clin Med Phys. 2020;21(5):26-37. https://doi.org/10.1002/acm2.12849
[8] Chen X, Men K, Li Y, Yi J, Dai J. A feasibility study on an automated method to generate patient-specific dose distributions for radiotherapy using deep learning. Med Phys. 2019;46(1):56-64. https://doi.org/10.1002/mp.13262
[9] Qi M, Li Y, Wu A, Jia Q, Guo F, Lu X, et al. Region-specific three-dimensional dose distribution prediction: a feasibility study on prostate VMAT cases. J Radiat Res Appl Sci. 2020;13(1):485-95. https://doi.org/10.1080/16878507.2020.1756185
[10] Nanda A, Bir S, Ambekar S, Bollam P. Long-term outcome of gamma knife radiosurgery for metastatic brain tumors originating from lung cancer. Surg Neurol Int. 2014;5(9):396. https://doi.org/10.4103/2152-7806.140197
[11] Babier A, Mahmood R, Zhang B, Alves V, Barragán-Montero A, Beaudry J, et al. OpenKBP-Opt: an international and reproducible evaluation of 76 knowledge-based planning pipelines. Phys Med Biol. 2022;67(18). https://doi.org/10.1088/1361-6560/ac8044
[12] Fan J, Wang J, Chen Z, Hu C, Zhang Z, Hu W. Automatic treatment planning based on three-dimensional dose distribution predicted from deep learning technique. Med Phys. 2019;46(1):370-81. https://doi.org/10.1002/mp.13271
[13] Babier A, Mahmood R, McNiven AL, Diamant A, Chan TCY. Knowledge-based automated planning with three-dimensional generative adversarial networks. Med Phys. 2020;47(2):297-306. https://doi.org/10.1002/mp.13896
[14] Isola P, Zhu JY, Zhou T, Efros AA. Image-to-image translation with conditional adversarial networks. Proc 30th IEEE Conf Comput Vis Pattern Recognition (CVPR). 2017:5967-76. https://doi.org/10.1109/CVPR.2017.632
[15] Low DA, Dempsey JF. Evaluation of the gamma dose distribution comparison method. Med Phys. 2003;30(9):2455-64. https://doi.org/10.1118/1.1598711
[16] Low DA, Harms WB, Mutic S, Purdy JA. A technique for the quantitative evaluation of dose distributions. Med Phys. 1998;25(5):656-61. https://doi.org/10.1118/1.598248
[17] Gopishankar N, Wanatabe Y, Subbiah V. MRI-based polymer gel dosimetry for validating plans with multiple matrices in Gamma Knife stereotactic radiosurgery. J Appl Clin Med Phys. 2011;12(2):133-45. https://doi.org/10.1120/jacmp.v12i2.3333
[18] Chung H, Park J, Chun K. Verification of dose profiles generated by the convolution algorithm of the gamma knife radiosurgery planning system. Med Phys. 2017;44(9):4880-9. https://doi.org/10.1002/mp.12347
[19] Park J, Han J, Kim C, Oh C, Lee D, Suh T, Gyu D, Chung H. Application of the gamma evaluation method in Gamma Knife film dosimetry. Med Phys. 2011;38(10):5778-87. https://doi:10.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE0T4oBgHgl3EQfxAFs/content/2301.02640v1.pdf'} +page_content='1118/1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE0T4oBgHgl3EQfxAFs/content/2301.02640v1.pdf'} +page_content='3641644 [20] Torrens M, Chung C, Chung HT, Hanssens P, Jaffray D, Kemeny A, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE0T4oBgHgl3EQfxAFs/content/2301.02640v1.pdf'} +page_content=' Standardization of terminology in stereotactic radiosurgery: Report from the Standardization Committee of the International Leksell Gamma Knife Society: special topic.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE0T4oBgHgl3EQfxAFs/content/2301.02640v1.pdf'} +page_content=' J Neurosurg.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE0T4oBgHgl3EQfxAFs/content/2301.02640v1.pdf'} +page_content=' 2014;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE0T4oBgHgl3EQfxAFs/content/2301.02640v1.pdf'} +page_content='121(December):2-15.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE0T4oBgHgl3EQfxAFs/content/2301.02640v1.pdf'} +page_content=' https://doi:10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE0T4oBgHgl3EQfxAFs/content/2301.02640v1.pdf'} +page_content='3171/2014.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE0T4oBgHgl3EQfxAFs/content/2301.02640v1.pdf'} +page_content='7.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE0T4oBgHgl3EQfxAFs/content/2301.02640v1.pdf'} +page_content='gks141199 [21] Nguyen D, Jia X, Sher D, Lin M, Iqbal Z, Liu H, Jiang S.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE0T4oBgHgl3EQfxAFs/content/2301.02640v1.pdf'} +page_content=' 3D radiotherapy dose prediction on head and neck cancer patients with a hierarchically densely connected U-net deep learning architecture.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE0T4oBgHgl3EQfxAFs/content/2301.02640v1.pdf'} +page_content=' Phys Med Biol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE0T4oBgHgl3EQfxAFs/content/2301.02640v1.pdf'} +page_content=' 2019;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE0T4oBgHgl3EQfxAFs/content/2301.02640v1.pdf'} +page_content='64(6).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE0T4oBgHgl3EQfxAFs/content/2301.02640v1.pdf'} +page_content=' https://doi:10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE0T4oBgHgl3EQfxAFs/content/2301.02640v1.pdf'} +page_content='1088/1361-6560/ab039b [22] Lee MS, Hwang D, Kim JH, Lee JS.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE0T4oBgHgl3EQfxAFs/content/2301.02640v1.pdf'} +page_content=' Deep-dose: a voxel dose estimation method using deep convolutional neural network for personalized internal dosimetry.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE0T4oBgHgl3EQfxAFs/content/2301.02640v1.pdf'} +page_content=' Sci Rep.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE0T4oBgHgl3EQfxAFs/content/2301.02640v1.pdf'} +page_content=' 2019;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE0T4oBgHgl3EQfxAFs/content/2301.02640v1.pdf'} +page_content='9(1):1-9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE0T4oBgHgl3EQfxAFs/content/2301.02640v1.pdf'} +page_content=' https://doi:10.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE0T4oBgHgl3EQfxAFs/content/2301.02640v1.pdf'} +page_content='1038/s41598-019- 46620-y [23] Kearney V, Chan JW, Wang T, Perry A, Descovich M, Morin O, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE0T4oBgHgl3EQfxAFs/content/2301.02640v1.pdf'} +page_content=' DoseGAN: a generative adversarial network for synthetic dose prediction using attention-gated discrimination and generation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE0T4oBgHgl3EQfxAFs/content/2301.02640v1.pdf'} +page_content=' Sci Rep.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE0T4oBgHgl3EQfxAFs/content/2301.02640v1.pdf'} +page_content=' 2020;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE0T4oBgHgl3EQfxAFs/content/2301.02640v1.pdf'} +page_content='10(1):1-8.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE0T4oBgHgl3EQfxAFs/content/2301.02640v1.pdf'} +page_content=' https://doi:10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE0T4oBgHgl3EQfxAFs/content/2301.02640v1.pdf'} +page_content='1038/s41598-020-68062-7 [24] Hussein M, Clark CH, Nisbet A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE0T4oBgHgl3EQfxAFs/content/2301.02640v1.pdf'} +page_content=' Challenges in calculation of the gamma index in radiotherapy - Towards good practice.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE0T4oBgHgl3EQfxAFs/content/2301.02640v1.pdf'} +page_content=' Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE0T4oBgHgl3EQfxAFs/content/2301.02640v1.pdf'} +page_content=' Med.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE0T4oBgHgl3EQfxAFs/content/2301.02640v1.pdf'} +page_content=' 2017;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE0T4oBgHgl3EQfxAFs/content/2301.02640v1.pdf'} +page_content='36:1-11.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE0T4oBgHgl3EQfxAFs/content/2301.02640v1.pdf'} +page_content=' https://doi:10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE0T4oBgHgl3EQfxAFs/content/2301.02640v1.pdf'} +page_content='1016/j.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE0T4oBgHgl3EQfxAFs/content/2301.02640v1.pdf'} +page_content='ejmp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE0T4oBgHgl3EQfxAFs/content/2301.02640v1.pdf'} +page_content='2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE0T4oBgHgl3EQfxAFs/content/2301.02640v1.pdf'} +page_content='03.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE0T4oBgHgl3EQfxAFs/content/2301.02640v1.pdf'} +page_content='001 [25] Menon SV, Paramu R, Bhasi S, Nair RK.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE0T4oBgHgl3EQfxAFs/content/2301.02640v1.pdf'} +page_content=' Evaluation of Plan Quality Metrics in Stereotactic Radiosurgery/Radiotherapy in the Treatment Plans of Arteriovenous Malformations.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE0T4oBgHgl3EQfxAFs/content/2301.02640v1.pdf'} +page_content=' J Med Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE0T4oBgHgl3EQfxAFs/content/2301.02640v1.pdf'} +page_content=' 2018;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE0T4oBgHgl3EQfxAFs/content/2301.02640v1.pdf'} +page_content='43(4):214.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE0T4oBgHgl3EQfxAFs/content/2301.02640v1.pdf'} +page_content=' https://doi:10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE0T4oBgHgl3EQfxAFs/content/2301.02640v1.pdf'} +page_content='4103/JMP.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE0T4oBgHgl3EQfxAFs/content/2301.02640v1.pdf'} +page_content='JMP_25_18 13' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/-dE0T4oBgHgl3EQfxAFs/content/2301.02640v1.pdf'} diff --git a/-dE1T4oBgHgl3EQfUgPr/vector_store/index.pkl b/-dE1T4oBgHgl3EQfUgPr/vector_store/index.pkl new file mode 100644 index 0000000000000000000000000000000000000000..d1cffb24e78cb5d121a1bbb455c8581886deaf24 --- /dev/null +++ b/-dE1T4oBgHgl3EQfUgPr/vector_store/index.pkl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6c030f1c741a3b1a71b02af37b7335819567d0c0fb2991d31616f05b433736a1 +size 124527 diff --git a/.gitattributes b/.gitattributes index bd2b76c318190952a31fde2e441df7a7f0d3d319..69614094998de2e97e57c48ba5e94dbf998035f2 100644 --- a/.gitattributes +++ b/.gitattributes @@ -8522,3 +8522,72 @@ u9E0T4oBgHgl3EQfsQGm/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -tex eNFIT4oBgHgl3EQfoyuB/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text ZtE5T4oBgHgl3EQfeA-9/content/2301.05616v1.pdf filter=lfs diff=lfs merge=lfs -text s9E3T4oBgHgl3EQfNAmL/content/2301.04379v1.pdf filter=lfs diff=lfs merge=lfs -text +UdE5T4oBgHgl3EQfbQ_H/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +d9E4T4oBgHgl3EQfQgxo/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +JdFJT4oBgHgl3EQfwS0f/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +bNE4T4oBgHgl3EQfPQwx/content/2301.04971v1.pdf filter=lfs diff=lfs merge=lfs -text +XdFJT4oBgHgl3EQf5i00/content/2301.11670v1.pdf filter=lfs diff=lfs merge=lfs -text +ONE3T4oBgHgl3EQfZgo4/vector_store/index.faiss filter=lfs diff=lfs 
merge=lfs -text +atE2T4oBgHgl3EQfFAam/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +7NAzT4oBgHgl3EQf-f4D/content/2301.01933v1.pdf filter=lfs diff=lfs merge=lfs -text +r9FKT4oBgHgl3EQfJS1Z/content/2301.11737v1.pdf filter=lfs diff=lfs merge=lfs -text +2NA0T4oBgHgl3EQfM_8t/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +xtAzT4oBgHgl3EQfQvsC/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +MdE1T4oBgHgl3EQfZQR8/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +cNFRT4oBgHgl3EQfSjci/content/2301.13529v1.pdf filter=lfs diff=lfs merge=lfs -text +bNE4T4oBgHgl3EQfPQwx/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +f9A0T4oBgHgl3EQfHv-A/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +B9E1T4oBgHgl3EQfVwR_/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +2tE1T4oBgHgl3EQf5gWl/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +etAyT4oBgHgl3EQfw_lx/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +B9E1T4oBgHgl3EQfVwR_/content/2301.03106v1.pdf filter=lfs diff=lfs merge=lfs -text +sdE5T4oBgHgl3EQfKQ7c/content/2301.05465v1.pdf filter=lfs diff=lfs merge=lfs -text +r9FKT4oBgHgl3EQfJS1Z/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +n9FQT4oBgHgl3EQfqTb3/content/2301.13380v1.pdf filter=lfs diff=lfs merge=lfs -text +xdFKT4oBgHgl3EQfLC2B/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +pNFIT4oBgHgl3EQfvyuP/content/2301.11349v1.pdf filter=lfs diff=lfs merge=lfs -text +n9FQT4oBgHgl3EQfqTb3/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +adAzT4oBgHgl3EQfZPz1/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +39AyT4oBgHgl3EQfo_il/content/2301.00518v1.pdf filter=lfs diff=lfs merge=lfs -text +TdE3T4oBgHgl3EQfEAmL/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +0tFST4oBgHgl3EQfVziF/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +xdFKT4oBgHgl3EQfLC2B/content/2301.11744v1.pdf filter=lfs diff=lfs 
merge=lfs -text +a9AyT4oBgHgl3EQfiviR/content/2301.00402v1.pdf filter=lfs diff=lfs merge=lfs -text +LdE2T4oBgHgl3EQfqAhW/content/2301.04034v1.pdf filter=lfs diff=lfs merge=lfs -text +x9FAT4oBgHgl3EQfAhwk/content/2301.08398v1.pdf filter=lfs diff=lfs merge=lfs -text +ktE3T4oBgHgl3EQf5wsL/content/2301.04783v1.pdf filter=lfs diff=lfs merge=lfs -text +O9FOT4oBgHgl3EQf4DQT/content/2301.12948v1.pdf filter=lfs diff=lfs merge=lfs -text +wdE2T4oBgHgl3EQfLgZh/content/2301.03714v1.pdf filter=lfs diff=lfs merge=lfs -text +v9E4T4oBgHgl3EQfXQwv/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +O9FOT4oBgHgl3EQf4DQT/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +stE0T4oBgHgl3EQfrgHc/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +G9FLT4oBgHgl3EQfHS_T/content/2301.11996v1.pdf filter=lfs diff=lfs merge=lfs -text +ktFST4oBgHgl3EQfIjgl/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +mtFLT4oBgHgl3EQffC-V/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +G9FJT4oBgHgl3EQfty2e/content/2301.11619v1.pdf filter=lfs diff=lfs merge=lfs -text +fNFAT4oBgHgl3EQf7x4q/content/2301.08746v1.pdf filter=lfs diff=lfs merge=lfs -text +ONE3T4oBgHgl3EQfZgo4/content/2301.04497v1.pdf filter=lfs diff=lfs merge=lfs -text +wdE2T4oBgHgl3EQfLgZh/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +5NE4T4oBgHgl3EQfbwwx/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +GtFKT4oBgHgl3EQfcC5D/content/2301.11814v1.pdf filter=lfs diff=lfs merge=lfs -text +PNE0T4oBgHgl3EQf1AIP/content/2301.02692v1.pdf filter=lfs diff=lfs merge=lfs -text +eNFKT4oBgHgl3EQfrS7d/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +etE1T4oBgHgl3EQfyQXE/content/2301.03432v1.pdf filter=lfs diff=lfs merge=lfs -text +fNFAT4oBgHgl3EQf7x4q/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +etE1T4oBgHgl3EQfyQXE/vector_store/index.faiss filter=lfs diff=lfs merge=lfs -text +BtFIT4oBgHgl3EQf_yzx/content/2301.11417v1.pdf filter=lfs diff=lfs 
diff --git a/09AyT4oBgHgl3EQf1PmO/content/tmp_files/2301.00732v1.pdf.txt b/09AyT4oBgHgl3EQf1PmO/content/tmp_files/2301.00732v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..c46dc4732a19e020b2ea1af832cd1409e0ccd288
--- /dev/null
+++ b/09AyT4oBgHgl3EQf1PmO/content/tmp_files/2301.00732v1.pdf.txt
@@ -0,0 +1,925 @@
arXiv:2301.00732v1 [cs.CC] 2 Jan 2023

Improved NP-Hardness of Approximation for Orthogonality Dimension and Minrank

Dror Chawin*   Ishay Haviv*

Abstract

The orthogonality dimension of a graph G over R is the smallest integer k for which one can assign a nonzero k-dimensional real vector to each vertex of G, such that every two adjacent vertices receive orthogonal vectors.
We prove that for every sufficiently large integer k, it is NP-hard to decide whether the orthogonality dimension of a given graph over R is at most k or at least 2^{(1−o(1))·k/2}. We further prove such hardness results for the orthogonality dimension over finite fields as well as for the closely related minrank parameter, which is motivated by the index coding problem in information theory. This in particular implies that it is NP-hard to approximate these graph quantities to within any constant factor. Previously, the hardness of approximation was known to hold either assuming certain variants of the Unique Games Conjecture or for approximation factors smaller than 3/2. The proofs involve the concept of line digraphs and bounds on their orthogonality dimension and on the minrank of their complement.

1 Introduction

A graph G is said to be k-colorable if its vertices can be colored by k colors such that every two adjacent vertices receive distinct colors. The chromatic number of G, denoted by χ(G), is the smallest integer k for which G is k-colorable. As a fundamental and popular graph quantity, the chromatic number has received a considerable amount of attention in the literature from a computational perspective, as described below.

The problem of deciding whether a graph G satisfies χ(G) ≤ 3 is one of the classical twenty-one NP-complete problems presented by Karp [26] in 1972. Khanna, Linial, and Safra [28] proved that it is NP-hard to distinguish graphs G that satisfy χ(G) ≤ 3 from those satisfying χ(G) ≥ 5. This result, combined with the approach of Garey and Johnson [15] and with a result of Stahl [39], implies that for every k ≥ 6, it is NP-hard to decide whether a graph G satisfies χ(G) ≤ k or χ(G) ≥ 2k − 2.
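To make the definition concrete, the brute-force sketch below (our own illustration, not part of the paper; all names are ours) computes χ(G) for a tiny graph by enumerating all k-colorings for increasing k. Its exponential running time is in line with the hardness results discussed here.

```python
from itertools import product

def is_proper(edges, coloring):
    """A coloring is proper if adjacent vertices receive distinct colors."""
    return all(coloring[u] != coloring[v] for u, v in edges)

def chromatic_number(n, edges):
    """chi(G) for the graph on vertices 0..n-1, by exhaustive search over
    all k-colorings for k = 1, 2, ...  Only sensible for tiny graphs."""
    for k in range(1, n + 1):
        if any(is_proper(edges, c) for c in product(range(k), repeat=n)):
            return k
    return n

# The 5-cycle is an odd cycle: not 2-colorable, but 3-colorable.
c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(chromatic_number(5, c5))  # → 3
```

For instance, on the 5-cycle the search fails for k = 1, 2 and first succeeds at k = 3.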
Brakensiek and Guruswami [6] proved that for every k ≥ 3, it is NP-hard to distinguish between the cases χ(G) ≤ k and χ(G) ≥ 2k − 1, and the 2k − 1 bound was further improved to 2k by Barto, Bulín, Krokhin, and Opršal [4]. For large values of k, it was shown by Khot [29] that it is NP-hard to decide whether a graph G satisfies χ(G) ≤ k or χ(G) ≥ k^{Ω(log k)}, and the latter condition was strengthened to χ(G) ≥ 2^{k^{1/3}} by Huang [24]. A substantial improvement was recently obtained by Wrochna and Živný [40], who proved that for every k ≥ 4, it is NP-hard to decide whether a given graph G satisfies χ(G) ≤ k or χ(G) ≥ (k choose ⌊k/2⌋). The proof of this result combined the hardness result of [24] with the construction of line digraphs [20] and with a result of Poljak and Rödl [36]. Note that under certain variants of the Unique Games Conjecture, stronger hardness results are known to hold, namely, hardness of deciding whether a given graph G satisfies χ(G) ≤ k_1 or χ(G) ≥ k_2 for all integers k_2 > k_1 ≥ 3 [10] (see also [11]).

*School of Computer Science, The Academic College of Tel Aviv-Yaffo, Tel Aviv 61083, Israel. Research supported by the Israel Science Foundation (grant No. 1218/20).

The present paper studies the computational complexity of algebraic variants of the chromatic number of graphs. A k-dimensional orthogonal representation of a graph G = (V, E) over a field F is an assignment of a vector u_v ∈ F^k with ⟨u_v, u_v⟩ ≠ 0 to each vertex v ∈ V, such that for every two adjacent vertices v and v′ it holds that ⟨u_v, u_{v′}⟩ = 0. Here, for two vectors x, y ∈ F^k, we consider the standard inner product defined by ⟨x, y⟩ = ∑_{i=1}^{k} x_i y_i with operations over F. The orthogonality dimension of G over F, denoted by ξ_F(G), is the smallest integer k for which G admits a k-dimensional orthogonal representation over F (see Remark 2.2). It can be easily seen that for every graph G and for every field F, it holds that ξ_F(G) ≤ χ(G).
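The bound ξ_F(G) ≤ χ(G) comes from a one-line construction: give each vertex the standard basis vector of its color class. Adjacent vertices lie in distinct color classes, so their vectors are distinct basis vectors and hence orthogonal, while every basis vector has nonzero self-inner product. A minimal sketch over the reals (the graph, coloring, and function names are our own illustration, not the paper's):

```python
def basis_vector(c, k):
    """Standard basis vector e_c in R^k, as a list."""
    return [1.0 if i == c else 0.0 for i in range(k)]

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def coloring_to_orthogonal_representation(coloring, k):
    """Vertex v gets e_{c(v)}: a k-dimensional orthogonal representation
    whenever the given k-coloring is proper."""
    return {v: basis_vector(c, k) for v, c in coloring.items()}

def is_orthogonal_representation(edges, vectors):
    """Check the definition: <u_v, u_v> != 0, and <u_v, u_w> = 0 on edges."""
    return (all(dot(u, u) != 0 for u in vectors.values())
            and all(dot(vectors[u], vectors[v]) == 0 for u, v in edges))

# A proper 3-coloring of the 5-cycle certifies xi_R(C5) <= chi(C5) = 3.
c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
vecs = coloring_to_orthogonal_representation({0: 0, 1: 1, 2: 0, 3: 1, 4: 2}, 3)
print(is_orthogonal_representation(c5, vecs))  # → True
```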
In addition, if F is a fixed finite field or the real field R, it further holds that ξ_F(G) ≥ Ω(log χ(G)). Both bounds are known to be tight in the worst case (see Claim 2.6 and [33, Chapter 10]). The study of orthogonal representations and orthogonality dimension was initiated in the seminal work of Lovász [32] on the ϑ-function and has found applications in various areas, e.g., information theory [32], graph theory [34], and quantum communication complexity [9, Chapter 8.5].

The interest in the hardness of determining the orthogonality dimension of graphs dates back to a paper of Lovász, Saks, and Schrijver [34], where it was noted that the problem seems difficult. The aforementioned relations between the chromatic number and the orthogonality dimension yield that hardness of deciding whether a graph G satisfies χ(G) ≤ k_1 or χ(G) ≥ k_2 implies the hardness of deciding whether it satisfies ξ_F(G) ≤ k_1 or ξ_F(G) ≥ Ω(log k_2), provided that F is a finite field or R. It therefore follows from [10] that assuming certain variants of the Unique Games Conjecture, it is hard to decide whether a graph G satisfies ξ_F(G) ≤ k_1 or ξ_F(G) ≥ k_2 for all integers k_2 > k_1 ≥ 3. This reasoning, however, does not yield NP-hardness results for the orthogonality dimension (without additional complexity assumptions), even using the strongest known NP-hardness results of the chromatic number. Yet, a result of Peeters [35] implies that for every field F, it is NP-hard to decide if a given graph G satisfies ξ_F(G) ≤ 3, hence it is NP-hard to approximate the orthogonality dimension of a graph over F to within any factor smaller than 4/3. Over the reals, the hardness of approximation for the orthogonality dimension was recently extended in [16] to any factor smaller than 3/2.

Another algebraic quantity of graphs is the minrank parameter that was introduced in 1981 by Haemers [19] in the study of the Shannon capacity of graphs.
The minrank parameter was used in [18, 19] to answer questions of Lovász [32] and was later applied by Alon [1], with a different formulation, to disprove a conjecture of Shannon [38]. The minrank of a graph G over a field F, denoted by minrk_F(G), is closely related to the orthogonality dimension of the complement graph of G over F and satisfies minrk_F(G) ≤ ξ_F(G). The difference between the two quantities comes, roughly speaking, from the fact that the definition of minrank involves the notion of orthogonal bi-representations rather than orthogonal representations (for the precise definitions, see Section 2.1). The study of the minrank parameter is motivated by various applications in information theory and in theoretical computer science. A prominent one is the well-studied index coding problem, for which the minrank parameter perfectly characterizes the optimal length of its linear solutions, as was shown by Bar-Yossef, Birk, Jayram, and Kol [3] (see Section 2.2).

Similarly to the situation of the orthogonality dimension, it was proved in [35] that for every field F, it is NP-hard to decide if a given graph G satisfies minrk_F(G) ≤ 3. It was further shown by Dau, Skachek, and Chee [8] that it is NP-hard to decide whether a given digraph G satisfies minrk_{F_2}(G) ≤ 2. Note that for (undirected) graphs, the minrank over any field is at most 2 if and only if the complement graph is bipartite, a property that can be checked in polynomial time. Motivated by the computational aspects of the index coding problem, Langberg and Sprintson [30] related the minrank of a graph to the chromatic number of its complement and derived from [10] that assuming certain variants of the Unique Games Conjecture, it is hard to decide whether a given graph G satisfies minrk_F(G) ≤ k_1 or minrk_F(G) ≥ k_2, provided that k_2 > k_1 ≥ 3 and that F is a finite field.
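For intuition about the quantity, the minrank of a very small graph can be computed by brute force directly from its definition: over F_2, enumerate every matrix with ones on the diagonal whose off-diagonal support lies in the edge set, and take the minimum rank. The sketch below (our own illustration; it assumes the formal definition given in Section 2.1) recovers the value minrk_{F_2}(C_5) = 3 for the 5-cycle, a standard example in the index coding literature.

```python
from itertools import product

def rank_gf2(rows):
    """Rank over GF(2) of a matrix whose rows are given as int bitmasks."""
    pivots = {}  # leading-bit position -> stored row with that leading bit
    for r in rows:
        while r:
            lead = r.bit_length() - 1
            if lead in pivots:
                r ^= pivots[lead]  # eliminate the leading bit
            else:
                pivots[lead] = r
                break
    return len(pivots)

def minrank_gf2(n, edges):
    """Brute-force minrk_{F2} of the graph on vertices 0..n-1: the minimum
    rank over F2 of a matrix with ones on the diagonal whose off-diagonal
    support is contained in the (symmetric) edge set."""
    support = sorted({(i, j) for i, j in edges} | {(j, i) for i, j in edges})
    best = n
    for bits in product([0, 1], repeat=len(support)):
        rows = [1 << i for i in range(n)]  # start from the identity matrix
        for (i, j), b in zip(support, bits):
            rows[i] |= b << j
        best = min(best, rank_gf2(rows))
    return best

c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(minrank_gf2(5, c5))  # → 3
```

The enumeration is exponential in the number of edges, which again matches the hardness results above; it is feasible only on toy instances.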
Similar hardness results were obtained in [30] for additional settings of the index coding problem, including the general (non-linear) index coding problem over a constant-size alphabet.

1.1 Our Contribution

This paper provides improved NP-hardness of approximation results for the orthogonality dimension and for the minrank parameter over various fields. We start with the following result, which is concerned with the orthogonality dimension over the reals.

Theorem 1.1. There exists a function f : N → N satisfying f(k) = 2^{(1−o(1))·k/2} such that for every sufficiently large integer k, it is NP-hard to decide whether a given graph G satisfies ξ_R(G) ≤ k or ξ_R(G) ≥ f(k).

Theorem 1.1 implies that it is NP-hard to approximate the orthogonality dimension of a graph over the reals to within any constant factor. Previously, such an NP-hardness result was known to hold only for approximation factors smaller than 3/2 [16].

We proceed with the following result, which is concerned with the orthogonality dimension and the minrank parameter over finite fields.

Theorem 1.2. For every finite field F, there exists a function f : N → N satisfying f(k) = 2^{(1−o(1))·k/2} such that for every sufficiently large integer k, the following holds.

1. It is NP-hard to decide whether a given graph G satisfies ξ_F(G) ≤ k or ξ_F(G) ≥ f(k).

2. It is NP-hard to decide whether a given graph G satisfies minrk_F(G) ≤ k or minrk_F(G) ≥ f(k).

Theorem 1.2 implies that over any finite field, it is NP-hard to approximate the orthogonality dimension and the minrank of a graph to within any constant factor. Let us stress that this hardness result relies solely on the assumption P ≠ NP rather than on stronger complexity assumptions and thus settles a question raised in [30]. Prior to this work, it was known that it is NP-hard to approximate the minrank of graphs to within any factor smaller than 4/3 [35] and the minrank of digraphs over F_2 to within any factor smaller than 3/2 [8].
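The quantity in Theorem 1.2 can likewise be computed exhaustively on toy instances. Over F_2 the condition ⟨u, u⟩ ≠ 0 means exactly that u has odd Hamming weight, so ξ_{F_2}(G) of a tiny graph can be found by searching over assignments of odd-weight vectors. This is a hedged sketch of ours (not an algorithm from the paper), feasible only for very small graphs:

```python
from itertools import product

def xi_gf2(n, edges):
    """Smallest k such that the graph on vertices 0..n-1 has a k-dimensional
    orthogonal representation over F2.  Over F2, <u, u> != 0 exactly when u
    has odd Hamming weight.  Exhaustive search; tiny graphs only."""
    for k in range(1, n + 1):
        odd_weight = [u for u in product([0, 1], repeat=k) if sum(u) % 2 == 1]
        for vecs in product(odd_weight, repeat=n):
            if all(sum(a * b for a, b in zip(vecs[u], vecs[v])) % 2 == 0
                   for u, v in edges):
                return k
    return n  # standard basis vectors always work in dimension n

# The triangle needs three pairwise-orthogonal odd-weight vectors, first
# available in dimension 3 (the standard basis), so xi_{F2}(K3) = 3.
print(xi_gf2(3, [(0, 1), (1, 2), (2, 0)]))  # → 3
```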
A central component of the proofs of Theorems 1.1 and 1.2 is the notion of line digraphs, introduced in [20], that was first used in the context of hardness of approximation by Wrochna and Živný [40] (see also [17]). It was shown in [21, 36] that the chromatic number of any graph is exponential in the chromatic number of its line digraph. This result was iteratively applied by the authors of [40] to improve the NP-hardness of the chromatic number from the k vs. 2^{k^{1/3}} gap of [24] to their k vs. (k choose ⌊k/2⌋) gap. The main technical contribution of the present work lies in analyzing the orthogonality dimension of line digraphs and the minrank parameter of their complement. We actually show that on line digraphs, these graph parameters are quadratically related to the chromatic number (see Theorems 3.5, 3.7, and 3.13). This allows us to derive our hardness results from the hardness of the chromatic number given in [40], where the obtained gaps are only quadratically weaker. We further discuss some limitations of our approach, involving an analogue of Sperner's theorem for subspaces due to Kalai [25].

We finally show that our approach might be useful for proving hardness results for the general (non-linear) index coding problem over a constant-size alphabet, for which no NP-hardness result is currently known. It was shown by Langberg and Sprintson [30] that for an instance of the index coding problem represented by a graph G, the length of an optimal solution is at most χ(G) and at least Ω(log log χ(G)). It thus follows that an NP-hardness result for the chromatic number with a double-exponential gap would imply an NP-hardness result for the general index coding problem. However, no such NP-hardness result is currently known for the chromatic number without relying on further complexity assumptions. To tackle this issue, we study the index coding problem on instances which are complements of line digraphs (see Theorem 3.17).
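For readers unfamiliar with the construction just mentioned: the line digraph of a digraph G has the arcs of G as its vertices, with an arc from (u, v) to (v, w) whenever the head of the first arc equals the tail of the second. A minimal sketch (the function name is ours):

```python
def line_digraph(arcs):
    """Line digraph of a digraph given by its arc list: vertices are the
    arcs of G, and there is an arc from (u, v) to (v, w) for every pair of
    consecutive arcs of G."""
    return [((u, v), (v, w)) for (u, v) in arcs for (x, w) in arcs if x == v]

# The directed triangle 0 -> 1 -> 2 -> 0: its line digraph is again a
# directed 3-cycle, now on the three arcs.
g = [(0, 1), (1, 2), (2, 0)]
print(line_digraph(g))
# → [((0, 1), (1, 2)), ((1, 2), (2, 0)), ((2, 0), (0, 1))]
```

Iterating this operation shrinks the chromatic number roughly logarithmically, which is what makes it useful for amplifying hardness gaps in the results cited above.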
As a consequence of our results, we obtain that the NP-hardness of the general index coding problem can be derived from an NP-hardness result for the chromatic number with only a single-exponential gap, not that far from the best known gap given in [40]. For a precise statement, see Theorem 4.7.

1.2 Related Work

We gather here several related results from the literature.

• A result of Zuckerman [41] asserts that for any ε > 0, it is NP-hard to approximate the chromatic number of a graph on n vertices to within a factor of n^{1−ε}. It would be interesting to figure out if such a hardness result holds for the orthogonality dimension and for the minrank parameter. The present paper, however, focuses on the hardness of gap problems with constant thresholds, independent of the number of vertices.

• As mentioned earlier, Peeters [35] proved that for every field F, it is NP-hard to decide if the minrank (or the orthogonality dimension) of a given graph is at most 3. We note that for finite fields, this can also be derived from a result of Hell and Nešetřil [23].

• For the chromatic number of hypergraphs, the gaps for which NP-hardness is known to hold are much stronger than for graphs. For example, it was shown in [5] that for some δ > 0, it is NP-hard to decide if a given 4-uniform hypergraph G on n vertices satisfies χ(G) ≤ 2 or χ(G) ≥ log^δ n. An analogous result for the orthogonality dimension of hypergraphs over R was proved in [22].

• On the algorithmic side, a long line of work has explored the number of colors that an efficient algorithm needs for properly coloring a given k-colorable graph, where k ≥ 3 is a fixed constant. For example, there exists a polynomial-time algorithm that on a given 3-colorable graph with n vertices uses O(n^{0.19996}) colors [27]. Algorithms of this nature exist for the graph parameters studied in this work as well.
Indeed, there exists a polynomial-time algorithm that given a graph G on n vertices with ξ_R(G) ≤ 3 finds a proper coloring of G with O(n^0.2413) colors [22]. Further, there exists a polynomial-time algorithm that given a graph G on n vertices with minrk_{F_2}(G) ≤ 3 finds a proper coloring of G with O(n^0.2574) colors [7]. Note that the colorings obtained by these two algorithms provide, respectively, orthogonal and bi-orthogonal representations for the input graph G (see Claim 2.6).

1.3 Outline

The rest of the paper is organized as follows. In Section 2, we collect several definitions and results that will be used throughout this paper. In Section 3, we study the underlying graphs of line digraphs and their behavior with respect to the orthogonality dimension, the minrank parameter, and the index coding problem. We also discuss there some limitations of our approach, given in Sections 3.1.2 and 3.2.1. Finally, in Section 4, we prove our hardness results and complete the proofs of Theorems 1.1 and 1.2.

2 Preliminaries

Throughout the paper, undirected graphs are referred to as graphs, and directed graphs are referred to as digraphs. All the considered graphs and digraphs are simple, and all the logarithms are in base 2 unless otherwise specified. For an integer n, we use the notation [n] = {1, 2, . . . , n}.

2.1 Orthogonality Dimension and Minrank

The orthogonality dimension of a graph is defined as follows (see, e.g., [33, Chapter 11]).

Definition 2.1 (Orthogonality Dimension). A k-dimensional orthogonal representation of a graph G = (V, E) over a field F is an assignment of a vector u_v ∈ F^k with ⟨u_v, u_v⟩ ≠ 0 to each vertex v ∈ V, such that ⟨u_v, u_{v′}⟩ = 0 whenever v and v′ are adjacent vertices in G. Here, for two vectors x, y ∈ F^k, we let ⟨x, y⟩ = ∑_{i=1}^k x_i y_i denote the standard inner product of x and y over F.
The orthogonality dimension of a graph G over a field F, denoted by ξ_F(G), is the smallest integer k for which there exists a k-dimensional orthogonal representation of G over F.

Remark 2.2. We note that orthogonal representations are sometimes defined in the literature such that the vectors associated with non-adjacent vertices are required to be orthogonal, that is, as orthogonal representations of the complement graph. While we find it more convenient to use the other definition in this paper, one can view the notation ξ_F(G) as standing for ξ_F(Ḡ), i.e., the orthogonality dimension of the complement graph. The same holds for the notion of orthogonal bi-representations, given in Definition 2.4.

The minrank parameter, introduced in [19], is defined as follows.

Definition 2.3 (Minrank). Let G = (V, E) be a digraph on the vertex set V = [n], and let F be a field. We say that a matrix M ∈ F^{n×n} represents G if M_{i,i} ≠ 0 for every i ∈ V, and M_{i,j} = 0 for every distinct vertices i, j ∈ V such that (i, j) ∉ E. The minrank of G over F is defined as

minrk_F(G) = min{rank_F(M) | M represents G over F}.

The definition is naturally extended to graphs by replacing every edge with two oppositely directed edges.

We next describe an alternative definition due to Peeters [35] for the minrank of graphs. This requires the following extension of orthogonal representations, called orthogonal bi-representations.

Definition 2.4. A k-dimensional orthogonal bi-representation of a graph G = (V, E) over a field F is an assignment of a pair of vectors (u_v, w_v) ∈ F^k × F^k with ⟨u_v, w_v⟩ ≠ 0 to each vertex v ∈ V, such that ⟨u_v, w_{v′}⟩ = ⟨u_{v′}, w_v⟩ = 0 whenever v and v′ are adjacent vertices in G.

The following proposition follows directly from Definitions 2.3 and 2.4 combined with the fact that for every matrix M ∈ F^{n×n}, rank_F(M) is the smallest integer k for which M can be written as M = M_1^T · M_2 for two matrices M_1, M_2 ∈ F^{k×n}.

Proposition 2.5 ([35]).
For every field F and for every graph G, minrk_F(G) is the smallest integer k for which there exists a k-dimensional orthogonal bi-representation of G over F.

The following claim summarizes some known relations between the studied graph parameters. We provide a quick proof for completeness.

Claim 2.6. For every field F and for every graph G, it holds that

minrk_F(G) ≤ ξ_F(G) ≤ χ(G).

In addition, if F is finite, then

minrk_F(G) ≥ log_{|F|} χ(G).

Proof: The inequality minrk_F(G) ≤ ξ_F(G) follows by combining Proposition 2.5 with the fact that a k-dimensional orthogonal representation of G over F induces a k-dimensional orthogonal bi-representation of G over F with two identical vectors for every vertex.

For the inequality ξ_F(G) ≤ χ(G), observe that any proper coloring of G with k colors induces a k-dimensional orthogonal representation of G over any field F, by assigning the ith vector of the standard basis of F^k to each vertex colored by the ith color.

Next, assuming that F is finite, we show that minrk_F(G) ≥ log_{|F|} χ(G). To this end, denote k = minrk_F(G), and apply Proposition 2.5 to obtain that there exists a k-dimensional orthogonal bi-representation of G over F that assigns a pair (u_v, w_v) ∈ F^k × F^k to each vertex v of G. For every two adjacent vertices v and v′ in G, the vectors u_v and u_{v′} are distinct, because ⟨u_v, w_{v′}⟩ = 0 whereas ⟨u_{v′}, w_{v′}⟩ ≠ 0. This implies that G admits a proper coloring with at most |F|^k colors, completing the proof.

We finally recall that a homomorphism from a graph G_1 = (V_1, E_1) to a graph G_2 = (V_2, E_2) is a function g : V_1 → V_2 such that for every two vertices x, y ∈ V_1 with {x, y} ∈ E_1, it holds that {g(x), g(y)} ∈ E_2. Observe that if there exists a homomorphism from G_1 to G_2 then we have χ(G_1) ≤ χ(G_2), and for every field F, ξ_F(G_1) ≤ ξ_F(G_2) and minrk_F(G_1) ≤ minrk_F(G_2).
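The upper bound ξ_F(G) ≤ χ(G) in Claim 2.6 is constructive, and its standard-basis argument is easy to carry out mechanically. The following Python sketch is our own illustration (the function names are not from the paper): it turns a proper coloring of the 5-cycle into an orthogonal representation over F_2 and verifies the two conditions of Definition 2.1.

```python
def orthogonal_representation_from_coloring(edges, coloring):
    """Turn a proper k-coloring into a k-dimensional orthogonal
    representation over F_2 (the proof of xi_F(G) <= chi(G) in
    Claim 2.6): a vertex of color c gets the basis vector e_c."""
    k = max(coloring.values()) + 1
    return {v: tuple(1 if i == c else 0 for i in range(k))
            for v, c in coloring.items()}

def inner_f2(x, y):
    # Standard inner product over the two-element field F_2.
    return sum(a * b for a, b in zip(x, y)) % 2

def is_orthogonal_representation(edges, vectors):
    # <u_v, u_v> != 0 for every vertex; <u_v, u_v'> = 0 on edges.
    if any(inner_f2(u, u) == 0 for u in vectors.values()):
        return False
    return all(inner_f2(vectors[x], vectors[y]) == 0 for x, y in edges)

# The 5-cycle has chromatic number 3.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
coloring = {0: 0, 1: 1, 2: 0, 3: 1, 4: 2}
rep = orthogonal_representation_from_coloring(edges, coloring)
print(is_orthogonal_representation(edges, rep))  # True
```

Note that the converse fails in general: the representation produced this way is special in that its vectors are standard basis vectors, while ξ_F(G) may be strictly smaller than χ(G).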
2.2 Index Coding

The index coding problem, introduced in [3], is concerned with economical strategies for broadcasting information to n receivers in a way that enables each of them to retrieve its own message, a symbol from some given alphabet Σ. For this purpose, each receiver is allowed to use some prior side information that consists of a subset of the messages required by the other receivers. The side information map is naturally represented by a digraph on [n], which includes an edge (i, j) if the ith receiver knows the message required by the jth receiver. The objective is to minimize the length of the transmitted information. For simplicity, we consider here the case of symmetric side information maps, represented by graphs rather than by digraphs. The formal definition follows.

Definition 2.7 (Index Coding). Let G be a graph on the vertex set [n], and let Σ be an alphabet. An index code for G over Σ of length k is an encoding function E : Σ^n → Σ^k such that for every i ∈ [n], there exists a decoding function g_i : Σ^{k+|N_G(i)|} → Σ, such that for every x ∈ Σ^n, it holds that g_i(E(x), x|_{N_G(i)}) = x_i. Here, N_G(i) stands for the set of vertices in G adjacent to the vertex i, and x|_{N_G(i)} stands for the restriction of x to the indices of N_G(i). If Σ is a field F and the encoding function E is linear over F, then we say that the index code is linear over F.

Bar-Yossef et al. [3] showed that the minrank parameter characterizes the length of optimal solutions to the index coding problem in the linear setting.

Proposition 2.8 ([3]). For every field F and for every graph G, the minimal length of a linear index code for G over F is minrk_F(G).

3 Line Digraphs

In 1960, Harary and Norman [20] introduced the concept of line digraphs, defined as follows.

Definition 3.1 (Line Digraph).
For a digraph G = (V, E), the line digraph of G, denoted by δG, is the digraph on the vertex set E that includes a directed edge from a vertex (x, y) to a vertex (z, w) whenever y = z.

Definition 3.1 is naturally extended to graphs G by replacing every edge of G with two oppositely directed edges. Note that in this case, the number of vertices in δG is twice the number of edges in G. We will frequently consider the underlying graph of the digraph δG, i.e., the graph obtained from δG by ignoring the directions of the edges.

The following result of Poljak and Rödl [36], which strengthens a previous result of Harner and Entringer [21], shows that the chromatic number of a graph G precisely determines the chromatic number of the underlying graph of δG. The statement of the result uses the function b : N → N defined by b(n) = (n choose ⌊n/2⌋).

Theorem 3.2 ([21, 36]). Let G be a graph, and let H be the underlying graph of the digraph δG. Then,

χ(H) = min{n | χ(G) ≤ b(n)}.

Using the fact that b(n) ∼ 2^n / √(πn/2), Theorem 3.2 implies that the chromatic number of G is exponential in the chromatic number of H. Our goal in this section is to relate the chromatic number of G to other graph parameters of H, namely, the orthogonality dimension, the minrank of the complement, and the optimal length of an index code for the complement.

3.1 Orthogonality Dimension

For a field F, an integer n, and a subspace U of F^n, we denote by U^⊥ the subspace of F^n that consists of the vectors that are orthogonal to U over F, i.e.,

U^⊥ = {w ∈ F^n | ⟨w, u⟩ = 0 for every u ∈ U}.

Consider the following family of graphs.

Definition 3.3. For a field F and an integer n, let S_1(F, n) denote the graph whose vertices are all the subspaces of F^n, where two distinct subspaces U_1 and U_2 are adjacent if there exists a vector w ∈ F^n with ⟨w, w⟩ ≠ 0 that satisfies w ∈ U_1 ∩ U_2^⊥ and, in addition, there exists a vector w′ ∈ F^n with ⟨w′, w′⟩ ≠ 0 that satisfies w′ ∈ U_2 ∩ U_1^⊥.
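Definition 3.1 and the formula of Theorem 3.2 lend themselves to a mechanical check on small instances. The Python sketch below is our own illustration (all names are hypothetical): it builds the underlying graph H of δG for G = K_3 and brute-forces both chromatic numbers.

```python
from itertools import product
from math import comb

def line_digraph_underlying(edges):
    """Underlying graph of the line digraph dG of an undirected graph
    (Definition 3.1): vertices are ordered pairs of adjacent vertices,
    and (x, y) is joined to (y, w)."""
    verts = [(x, y) for x, y in edges] + [(y, x) for x, y in edges]
    h_edges = {frozenset((a, b)) for a in verts for b in verts
               if a != b and a[1] == b[0]}
    return verts, [tuple(e) for e in h_edges]

def chromatic_number(verts, edges):
    # Brute force: the smallest k admitting a proper k-coloring.
    for k in range(1, len(verts) + 1):
        for col in product(range(k), repeat=len(verts)):
            c = dict(zip(verts, col))
            if all(c[a] != c[b] for a, b in edges):
                return k

# G = K_3, so chi(G) = 3 and H has 2 * |E(G)| = 6 vertices.
g_edges = [(0, 1), (0, 2), (1, 2)]
h_verts, h_edges = line_digraph_underlying(g_edges)
chi_g = chromatic_number([0, 1, 2], g_edges)
chi_h = chromatic_number(h_verts, h_edges)
b = lambda n: comb(n, n // 2)
print(chi_h == min(n for n in range(1, 10) if chi_g <= b(n)))  # True
```

Here χ(H) = 3 = min{n | 3 ≤ b(n)}, matching Theorem 3.2; the brute-force coloring is of course only feasible for tiny graphs.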
In words, two subspaces of F^n are adjacent in the graph S_1(F, n) if each of them includes a non-self-orthogonal vector that is orthogonal to the entire other subspace. Note that for an infinite field F and for n ≥ 2, the vertex set of S_1(F, n) is infinite.

We argue that the chromatic number of a graph G can be used to estimate the orthogonality dimension of the underlying graph H of its line digraph δG. First, recall that by Theorem 3.2, the chromatic number of H is logarithmic in χ(G). This implies, using Claim 2.6, that the orthogonality dimension of H over any field is at most logarithmic in χ(G). For a lower bound on the orthogonality dimension of H, we need the following lemma that involves the chromatic numbers of the graphs S_1(F, n).

Lemma 3.4. Let F be a field, let G be a graph, let H be the underlying graph of the digraph δG, and put n = ξ_F(H). Then, χ(G) ≤ χ(S_1(F, n)).

Proof: Put G = (V_G, E_G) and H = (V_H, E_H). The assumption n = ξ_F(H) implies that there exists an n-dimensional orthogonal representation of H over F, that is, an assignment of a vector u_v ∈ F^n with ⟨u_v, u_v⟩ ≠ 0 to each vertex v ∈ V_H, such that ⟨u_v, u_{v′}⟩ = 0 whenever v and v′ are adjacent in H. Recall that the vertices of H, just as the vertices of δG, are the ordered pairs (x, y) of adjacent vertices x, y in G.

For every vertex y ∈ V_G, let U_y denote the subspace spanned by the vectors of the given orthogonal representation that are associated with the vertices of H whose tail is y, namely,

U_y = span({u_v | v = (x, y) for some x ∈ V_G}).

Note that U_y is a subspace of F^n, and thus a vertex of S_1(F, n).

Consider the function that maps every vertex y ∈ V_G of G to the vertex U_y of S_1(F, n). We claim that this function forms a homomorphism from G to S_1(F, n). To see this, let x, y ∈ V_G be adjacent vertices in G, and consider the vector w = u_{(x,y)} assigned by the given orthogonal representation to the vertex (x, y) of H.
By the definition of an orthogonal representation, it holds that ⟨w, w⟩ ≠ 0. Since (x, y) is a vertex of H whose tail is y, it follows that w ∈ U_y. Further, every vertex of H of the form (x′, x) for some x′ ∈ V_G is adjacent in H to (x, y), hence it holds that ⟨u_{(x′,x)}, w⟩ = 0. Since the subspace U_x is spanned by those vectors u_{(x′,x)}, we obtain that w is orthogonal to the entire subspace U_x. It thus follows that the vector w satisfies ⟨w, w⟩ ≠ 0 and w ∈ U_y ∩ U_x^⊥. By symmetry, there also exists a vector w′ ∈ F^n satisfying ⟨w′, w′⟩ ≠ 0 and w′ ∈ U_x ∩ U_y^⊥, hence the subspaces U_x and U_y are adjacent vertices in S_1(F, n). We conclude that the above function is a homomorphism from G to S_1(F, n), hence the chromatic numbers of these graphs satisfy χ(G) ≤ χ(S_1(F, n)), as required.

In order to derive useful bounds from Lemma 3.4, we need upper bounds on the chromatic numbers of the graphs S_1(F, n). Every vertex of S_1(F, n) is a subspace of F^n and thus can be represented by a basis that generates it. For a finite field F of size q, the number of possible bases does not exceed q^(n²), which obviously yields that χ(S_1(F, n)) ≤ q^(n²). While this simple bound suffices for proving our hardness results for the orthogonality dimension over finite fields, we note that the number of vertices in S_1(F, n) is in fact q^((1+o(1))·n²/4), where the o(1) term tends to 0 when n tends to infinity.¹

We conclude this discussion with the following theorem.

Theorem 3.5. Let F be a finite field of size q, let G be a graph, and let H be the underlying graph of the digraph δG. Then, it holds that

ξ_F(H) ≥ √(log_q χ(G)).

Proof: Put n = ξ_F(H), and apply Lemma 3.4 to obtain that χ(G) ≤ χ(S_1(F, n)) ≤ q^(n²). By rearranging, the proof is completed.

3.1.1 The Chromatic Number of S_1(R, n)

For the real field R and for n ≥ 2, the vertex set of the graph S_1(R, n) is infinite, and yet, its chromatic number is finite. To see this, let us first observe a simple upper bound of 2^(3^n).
To each vertex of S_1(R, n), i.e., a subspace U of R^n, assign the subset of {0, ±1}^n that consists of all the sign vectors of the vectors of U. This assignment forms a proper coloring of the graph, because for adjacent vertices U and V there exists a nonzero vector w ∈ U that is orthogonal to V, hence the sign vector of w belongs to the set of sign vectors of U but does not belong to the one of V (because the inner product of two vectors with the same nonzero sign vector is positive). Since the number of subsets of {0, ±1}^n is 2^(3^n), it follows that χ(S_1(R, n)) ≤ 2^(3^n).

¹To see this, observe that the number of k-dimensional subspaces of F^n is precisely ∏_{i=0}^{k−1} (q^n − q^i)/(q^k − q^i) and that every term in this product lies in [q^{n−k−1}, q^{n−k+1}]. Hence, the total number of subspaces of F^n is at least ∑_{k=0}^{n} q^{(n−k−1)k} and at most ∑_{k=0}^{n} q^{(n−k+1)k}. It follows that the number of subspaces of F^n is q^{(1+o(1))·n²/4}.

The above double-exponential bound is not sufficient for deriving NP-hardness of approximation results for the orthogonality dimension over R from the currently known NP-hardness results of the chromatic number. We therefore need the following lemma that provides an exponentially better bound which is suitable for our purposes. For a vector w ∈ R^n, we use here the notation ∥w∥ = √⟨w, w⟩ for the Euclidean norm of w.

Lemma 3.6. For every integer n, it holds that χ(S_1(R, n)) ≤ (2n + 1)^(n²).

Proof: We define a coloring of the vertices of the graph S_1(R, n) as follows. For every vertex of S_1(R, n), i.e., a subspace U of R^n, let (u_1, . . . , u_k) be an arbitrary orthonormal basis of U where k ≤ n, and assign U to the color c(U) = (u′_1, . . . , u′_k), where u′_i is a vector obtained from u_i by rounding each of its values to a closest integer multiple of 1/n. Note that for every i ∈ [k], the vectors u_i and u′_i differ in every coordinate by no more than 1/(2n) in absolute value.

We claim that c is a proper coloring of S_1(R, n).
To see this, let U and V be adjacent vertices in the graph. If dim(U) ≠ dim(V), then it clearly holds that c(U) ≠ c(V). So suppose that the dimensions of U and V are equal, and put k = dim(U) = dim(V). Denote the orthonormal bases associated with U and V by (u_1, . . . , u_k) and (v_1, . . . , v_k) respectively, and let c(U) = (u′_1, . . . , u′_k) and c(V) = (v′_1, . . . , v′_k) be their colors. Our goal is to show that c(U) ≠ c(V).

Assume for the sake of contradiction that c(U) = c(V), that is, u′_i = v′_i for every i ∈ [k]. This implies that for every i ∈ [k], the vectors u_i and v_i differ in each coordinate by no more than 1/n in absolute value, hence

∥u_i − v_i∥ ≤ √(n · 1/n²) = 1/√n.    (1)

Since U and V are adjacent in the graph S_1(R, n), by scaling, there exists a unit vector u ∈ U ∩ V^⊥. Write u = ∑_{i∈[k]} α_i · u_i for coefficients α_1, . . . , α_k ∈ R. Since the given basis of U is orthonormal, it follows that ∑_{i∈[k]} α_i² = ∥u∥² = 1. Now, consider the vector v = ∑_{i∈[k]} α_i · v_i, and observe that v is a unit vector that belongs to the subspace V. Observe further that

∥u − v∥ = ∥∑_{i∈[k]} α_i · (u_i − v_i)∥ ≤ ∑_{i∈[k]} |α_i| · ∥u_i − v_i∥ ≤ (∑_{i∈[k]} α_i²)^{1/2} · (∑_{i∈[k]} ∥u_i − v_i∥²)^{1/2} ≤ 1,    (2)

where the first inequality follows from the triangle inequality, the second from the Cauchy–Schwarz inequality, and the third from (1) using k ≤ n. However, u and v are orthogonal unit vectors, and as such, the distance between them satisfies ∥u − v∥ = √2. This yields a contradiction to (2), hence c(U) ≠ c(V).

To complete the proof, we observe that the number of colors used by the proper coloring c does not exceed (2n + 1)^(n²). Indeed, every color can be represented by an n × n matrix whose values are of the form a/n for integers −n ≤ a ≤ n (where the matrix associated with a subspace of dimension k consists of the rounded k column vectors concatenated with n − k columns of zeros).
Since the number of those matrices is bounded by (2n + 1)^(n²), we are done.

We derive the following theorem.

Theorem 3.7. There exists a constant c > 0, such that for every graph G with χ(G) ≥ 3, the underlying graph H of the digraph δG satisfies

ξ_R(H) ≥ c · √(log χ(G) / log log χ(G)).

Proof: Put n = ξ_R(H), and combine Lemma 3.4 with Lemma 3.6 to obtain that

χ(G) ≤ χ(S_1(R, n)) ≤ (2n + 1)^(n²),

which yields the desired bound.

3.1.2 The Clique Number of S_1(F, n)

We next consider the clique numbers of the graphs S_1(F, n), whose estimation is motivated by the following lemma. Here, the clique number of a graph G is denoted by ω(G).

Lemma 3.8. Let F be a field, let G be a graph, and let H be the underlying graph of the digraph δG. If χ(G) ≤ ω(S_1(F, n)), then ξ_F(H) ≤ n.

Proof: Put m = ω(S_1(F, n)), and let U_1, . . . , U_m be m subspaces of F^n that form a clique in S_1(F, n). Put G = (V, E), suppose that χ(G) ≤ m, and let c : V → [m] be a proper coloring of G. Notice that for every two adjacent vertices x, y in G, the subspaces U_{c(x)} and U_{c(y)} are adjacent vertices in S_1(F, n).

We define an n-dimensional orthogonal representation of H over F as follows. Recall that every vertex of H is a pair (x, y) of adjacent vertices x, y in G. Assign every such vertex (x, y) to some non-self-orthogonal vector u_{(x,y)} that lies in U_{c(y)} ∩ U_{c(x)}^⊥. The existence of such a vector follows from the adjacency of the vertices U_{c(x)} and U_{c(y)} in S_1(F, n). We claim that this assignment is an orthogonal representation of H. Indeed, for adjacent vertices (x, y) and (y, z) in H, the vector u_{(x,y)} belongs to U_{c(y)} whereas the vector u_{(y,z)} is orthogonal to U_{c(y)}, hence they satisfy ⟨u_{(x,y)}, u_{(y,z)}⟩ = 0. Since this orthogonal representation lies in F^n, we establish that ξ_F(H) ≤ n.

For a graph G and for the underlying graph H of its line digraph δG, Theorem 3.2 implies that if χ(G) ≤ (n choose ⌊n/2⌋) then χ(H) ≤ n, and thus, by Claim 2.6, ξ_F(H) ≤ n for every field F.
This raises the question of whether Lemma 3.8 can be used to obtain a better upper bound on ξ_F(H) as a function of χ(G). For certain cases, the following result answers this question negatively. Namely, it shows that the clique number of the graphs S_1(F, n) is precisely (n choose ⌊n/2⌋) whenever the vector space F^n has no nonzero self-orthogonal vectors (as in the case of F = R). It thus follows that Lemma 3.8 cannot yield a better relation between the quantities ξ_R(H) and χ(G) than the one stemming from Theorem 3.2.

Proposition 3.9. For a field F and an integer n such that F^n has no nonzero self-orthogonal vectors, it holds that

ω(S_1(F, n)) = (n choose ⌊n/2⌋).

The proof of Proposition 3.9 relies on the following result of Kalai [25] (see also [31]).

Theorem 3.10 ([25]). For a field F and an integer n, let (U_1, W_1), . . . , (U_m, W_m) be m pairs of subspaces of F^n such that

1. U_i ∩ W_i = {0} for every i ∈ [m], and

2. U_i ∩ W_j ≠ {0} for every i ≠ j ∈ [m].

Then, m ≤ (n choose ⌊n/2⌋).

Proof of Proposition 3.9: We first show that there exists a clique in S_1(F, n) of size (n choose ⌊n/2⌋). For every set A ⊆ [n] of size |A| = ⌊n/2⌋, let U_A denote the subspace of F^n spanned by the vectors e_i with i ∈ A, where e_i stands for the vector of F^n with 1 in the ith entry and 0 everywhere else. It clearly holds that for every two distinct such sets A_1, A_2, there exists some i ∈ A_1 \ A_2, and that the vector e_i satisfies ⟨e_i, e_i⟩ = 1 and e_i ∈ U_{A_1} ∩ U_{A_2}^⊥. It thus follows that the (n choose ⌊n/2⌋) subspaces U_A with |A| = ⌊n/2⌋ form a clique in the graph S_1(F, n), as required.

We next show that the size of every clique in S_1(F, n) does not exceed (n choose ⌊n/2⌋). To see this, let U_1, . . . , U_m be subspaces of F^n that form a clique in S_1(F, n). Consider the pairs (U_i, U_i^⊥) for i ∈ [m], and observe that they satisfy the conditions of Theorem 3.10. Indeed, for every i ∈ [m] it holds that U_i ∩ U_i^⊥ = {0}, because F^n has no nonzero self-orthogonal vectors.
Further, since the given collection of subspaces is a clique in S_1(F, n), for every i ≠ j ∈ [m], there exists a vector w ∈ F^n with ⟨w, w⟩ ≠ 0 such that w ∈ U_i ∩ U_j^⊥, hence U_i ∩ U_j^⊥ ≠ {0}. It thus follows from Theorem 3.10 that m ≤ (n choose ⌊n/2⌋), as required.

3.2 Minrank

As in the previous section, we start with a definition of a family of graphs.

Definition 3.11. For a field F and an integer n, let S_2(F, n) denote the graph whose vertices are all the pairs of subspaces of F^n, where two distinct pairs (U_1, W_1) and (U_2, W_2) are adjacent if there exist two vectors u, w ∈ F^n with ⟨u, w⟩ ≠ 0 such that u ∈ U_1 ∩ W_2^⊥ and w ∈ W_1 ∩ U_2^⊥ and, in addition, there exist two vectors u′, w′ ∈ F^n with ⟨u′, w′⟩ ≠ 0 such that u′ ∈ U_2 ∩ W_1^⊥ and w′ ∈ W_2 ∩ U_1^⊥.

We next argue that the chromatic number of a graph G can be used to estimate the minrank of the complement of the underlying graph of its line digraph δG. This is established using the following lemma that involves the chromatic numbers of the graphs S_2(F, n). Its proof resembles that of Lemma 3.4.

Lemma 3.12. Let F be a field, let G be a graph, let H be the underlying graph of the digraph δG, and put n = minrk_F(H). Then, χ(G) ≤ χ(S_2(F, n)).

Proof: Put G = (V_G, E_G) and H = (V_H, E_H). The assumption n = minrk_F(H) implies, by Proposition 2.5, that there exists an n-dimensional orthogonal bi-representation of H over F, that is, an assignment of a pair of vectors (u_v, w_v) ∈ F^n × F^n with ⟨u_v, w_v⟩ ≠ 0 to each vertex v ∈ V_H, such that ⟨u_v, w_{v′}⟩ = ⟨u_{v′}, w_v⟩ = 0 whenever v and v′ are adjacent in H.

For every vertex y ∈ V_G, let U_y denote the subspace spanned by the vectors u_v of the given orthogonal bi-representation associated with the vertices v of H whose tail is y, namely,

U_y = span({u_v | v = (x, y) for some x ∈ V_G}).
Similarly, let W_y denote the subspace spanned by the vectors w_v of the given orthogonal bi-representation associated with the vertices v of H whose tail is y, namely,

W_y = span({w_v | v = (x, y) for some x ∈ V_G}).

Note that U_y and W_y are subspaces of F^n, hence the pair (U_y, W_y) is a vertex of S_2(F, n).

Consider the function that maps every vertex y ∈ V_G of G to the vertex (U_y, W_y) of S_2(F, n). We claim that this function forms a homomorphism from G to S_2(F, n). To see this, let x, y ∈ V_G be adjacent vertices in G, and consider the vectors u = u_{(x,y)} and w = w_{(x,y)} assigned by the given orthogonal bi-representation to the vertex (x, y) of H. By the definition of an orthogonal bi-representation, it holds that ⟨u, w⟩ ≠ 0. Since (x, y) is a vertex of H whose tail is y, it follows that u ∈ U_y and w ∈ W_y. Further, every vertex of H of the form (x′, x) for some x′ ∈ V_G is adjacent in H to (x, y), hence it satisfies ⟨u_{(x′,x)}, w⟩ = ⟨u, w_{(x′,x)}⟩ = 0. Since the subspaces U_x and W_x are spanned, respectively, by those vectors u_{(x′,x)} and w_{(x′,x)}, we obtain that u is orthogonal to the subspace W_x and that w is orthogonal to the subspace U_x. It thus follows that the vectors u and w satisfy ⟨u, w⟩ ≠ 0, u ∈ U_y ∩ W_x^⊥, and w ∈ W_y ∩ U_x^⊥. By symmetry, there also exist vectors u′, w′ ∈ F^n satisfying ⟨u′, w′⟩ ≠ 0, u′ ∈ U_x ∩ W_y^⊥, and w′ ∈ W_x ∩ U_y^⊥, hence the pairs (U_x, W_x) and (U_y, W_y) are adjacent vertices in S_2(F, n). We conclude that the above function is a homomorphism from G to S_2(F, n), hence the chromatic numbers of these graphs satisfy χ(G) ≤ χ(S_2(F, n)), as required.

We derive the following theorem.

Theorem 3.13. Let F be a finite field of size q, let G be a graph, and let H be the underlying graph of the digraph δG. Then, it holds that

minrk_F(H) ≥ √((1/2) · log_q χ(G)).
Proof: Put n = minrk_F(H), and apply Lemma 3.12 to obtain that

χ(G) ≤ χ(S_2(F, n)) ≤ q^(2n²),

where the second inequality holds because the number of vertices in S_2(F, n) does not exceed q^(2n²). By rearranging, the proof is completed.

3.2.1 The Chromatic Number of S_2(R, n)

We next consider the problem of determining the chromatic numbers of the graphs S_2(R, n). The following theorem shows that these graphs cannot be properly colored using a finite number of colors, in contrast to the graphs S_1(R, n) addressed in Lemma 3.6.

Theorem 3.14. For every integer n ≥ 3, it holds that χ(S_2(R, n)) = ∞.

Before proving Theorem 3.14, let us describe a significant difference between the behavior of ξ_R(G) and of minrk_R(G) with respect to the chromatic number χ(G). It is not difficult to see that the chromatic number of a graph G is bounded from above by some function of ξ_R(G). Indeed, given a k-dimensional orthogonal representation of a graph G over R, one can assign to each vertex the sign vector from {0, ±1}^k of its vector, obtaining a proper coloring of G with at most 3^k colors. This implies that every graph G satisfies χ(G) ≤ 3^{ξ_R(G)} (see also [33, Chapter 11]). On the other hand, the chromatic number of a graph G cannot be bounded from above by any function of minrk_R(G), as proved below.

Theorem 3.15. For every integer m, there exists a graph G such that minrk_R(G) ≤ 3 and yet χ(G) ≥ m.

Proof: For an integer n > 6, consider the 'double shift graph' G_n defined as follows. Its vertices are all the 3-subsets of [n], where two sets {x_1, x_2, x_3} and {y_1, y_2, y_3} with x_1 < x_2 < x_3 and y_1 < y_2 < y_3 are adjacent in G_n if either (x_2, x_3) = (y_1, y_2) or (x_1, x_2) = (y_2, y_3). It was shown in [13] that the graph G_n satisfies χ(G_n) = (1 + o(1)) · log log n (see also [14]), whereas its local chromatic number, a concept introduced by Erdős et al. [12], is known to be 3.
By an argument of Shanmugam, Dimakis, and Langberg [37, Theorem 1], this implies that minrk_R(G_n) ≤ 3 (see also [2, Proposition 6.5]). This completes the proof.

We are ready to derive Theorem 3.14.

Proof of Theorem 3.14: It clearly suffices to prove the assertion of the theorem for n = 3. Let F denote the subgraph of S_2(R, 3) induced by the pairs (U, W) of subspaces of R³ satisfying dim(U) = dim(W) = 1. By Proposition 2.5, for every graph G with minrk_R(G) ≤ 3, there exists a homomorphism from G to F, and thus χ(G) ≤ χ(F). By Theorem 3.15, the chromatic number of a graph G with minrk_R(G) ≤ 3 can be arbitrarily large, hence χ(F) = ∞. Since F is a subgraph of S_2(R, 3), this yields that χ(S_2(R, 3)) = ∞, as required.

3.3 Index Coding

In this section, we study the optimal length of (not necessarily linear) index codes for the complements of underlying graphs of line digraphs. Recall Definition 2.7.

We start by presenting an argument of Langberg and Sprintson [30, Theorem 4(a)] that relates the chromatic number of a graph to the length of an index code for its complement. In fact, we slightly modify their argument to obtain the improved bound stated below (with 2^(|Σ|^k) rather than |Σ|^(|Σ|^k) in the statement of the result).

Proposition 3.16. Let Σ be an alphabet of size at least 2, and let G be a graph. If there exists an index code for Ḡ over Σ of length k, then χ(G) ≤ 2^(|Σ|^k).

Proof: Assume without loss of generality that {0, 1} ⊆ Σ. Put G = (V, E) and n = |V|. Suppose that there exists an index code for Ḡ over Σ of length k, and let E : Σ^n → Σ^k and g_i : Σ^{k+|N_Ḡ(i)|} → Σ for i ∈ V denote the corresponding encoding and decoding functions.

For every vertex i ∈ V, we define a function h_i : Σ^k → {0, 1} that determines for a given encoded message y ∈ Σ^k whether g_i returns 0 on y when all the symbols of the side information of the ith receiver are zeros. Formally speaking, for every y ∈ Σ^k, we define h_i(y) = 0 if g_i(y, 0, . . .
, 0) = 0, and h_i(y) = 1 otherwise.

We claim that the assignment of the function h_i to each vertex i ∈ V forms a proper coloring of G. To see this, let i and j be adjacent vertices in G. Let x ∈ Σ^n denote the vector with 1 in the ith entry and 0 everywhere else, and put y = E(x). By the correctness of the decoding functions, it follows that g_i(y, x|_{N_Ḡ(i)}) = x_i = 1 whereas g_j(y, x|_{N_Ḡ(j)}) = x_j = 0. Since i and j are adjacent in G, they are not adjacent in Ḡ, hence all the symbols in the side information x|_{N_Ḡ(i)} of i and in the side information x|_{N_Ḡ(j)} of j are zeros. This implies that g_i(y, 0, . . . , 0) = 1 and g_j(y, 0, . . . , 0) = 0, and therefore h_i(y) = 1 and h_j(y) = 0, which yields that h_i ≠ h_j, as required. Finally, observe that the number of distinct functions h_i : Σ^k → {0, 1} for i ∈ V does not exceed 2^(|Σ|^k), implying that χ(G) ≤ 2^(|Σ|^k).

We proceed by proving an analogue of Proposition 3.16 for line digraphs.

Theorem 3.17. Let Σ be an alphabet of size at least 2, let G be a graph, and let H be the underlying graph of the digraph δG. If there exists an index code for H̄ over Σ of length k, then χ(G) ≤ 2^(|Σ|^k).

Proof: Assume without loss of generality that {0, 1} ⊆ Σ. Put G = (V_G, E_G), H = (V_H, E_H), and n = |V_H|. Recall that the vertices of H are the ordered pairs of adjacent vertices in G, hence n = 2 · |E_G|. Suppose that there exists an index code for H̄ over Σ of length k, and let E : Σ^n → Σ^k and g_{(u,v)} : Σ^{k+|N_H̄(u,v)|} → Σ for (u, v) ∈ V_H denote the corresponding encoding and decoding functions.

For every vertex v ∈ V_G, we define a function h_v : Σ^k → {0, 1} that determines for a given encoded message y ∈ Σ^k whether every function g_{(u,v)} associated with a vertex (u, v) ∈ V_H returns 0 on y when all the symbols in the side information of the receiver of the vertex (u, v) are zeros. Formally speaking, for every y ∈ Σ^k, we define h_v(y) = 0 if for every u ∈ V_G with (u, v) ∈ V_H, it holds that g_{(u,v)}(y, 0, . . .
, 0) = 0, and h_v(y) = 1 otherwise.

We claim that the assignment of the function h_v to each vertex v ∈ V_G forms a proper coloring of G. To see this, let v_1 and v_2 be adjacent vertices in G, and notice that (v_1, v_2) is a vertex of H. Let x ∈ Σ^n denote the vector with 1 in the entry of (v_1, v_2) and 0 everywhere else, and put y = E(x).

We first claim that h_{v_1}(y) = 0. To see this, consider any vertex (u, v_1) ∈ V_H, and notice that (u, v_1) and (v_1, v_2) are adjacent in H and are thus not adjacent in H̄. By the correctness of the decoding function g_{(u,v_1)}, it follows that g_{(u,v_1)}(y, x|_{N_H̄(u,v_1)}) = x_{(u,v_1)} = 0. Since (u, v_1) and (v_1, v_2) are not adjacent in H̄, all the symbols in the side information x|_{N_H̄(u,v_1)} of the vertex (u, v_1) are zeros. We thus obtain that for every vertex u ∈ V_G with (u, v_1) ∈ V_H, it holds that g_{(u,v_1)}(y, 0, . . . , 0) = 0. By the definition of h_{v_1}, it follows that h_{v_1}(y) = 0, as required.

We next claim that h_{v_2}(y) = 1. To see this, observe that by the correctness of the decoding function g_{(v_1,v_2)}, it follows that g_{(v_1,v_2)}(y, x|_{N_H̄(v_1,v_2)}) = x_{(v_1,v_2)} = 1. It further holds that all the symbols in the side information x|_{N_H̄(v_1,v_2)} of the vertex (v_1, v_2) are zeros. By the definition of h_{v_2}, it follows that h_{v_2}(y) = 1, as required.

We obtain that every two adjacent vertices v_1 and v_2 in G satisfy h_{v_1} ≠ h_{v_2}. Since the number of functions h_v : Σ^k → {0, 1} for v ∈ V_G does not exceed 2^(|Σ|^k), it follows that χ(G) ≤ 2^(|Σ|^k), and we are done.

4 Hardness Results

In this section, we prove our hardness results for the orthogonality dimension and for minrank. We also suggest a potential avenue for proving hardness results for the general index coding problem over a constant-size alphabet.

The starting point of our hardness proofs is the following theorem of Wrochna and Živný [40]. Recall that the function b : N → N is defined by b(n) = (n choose ⌊n/2⌋).

Theorem 4.1 ([40]).
For every integer k ≥ 4, it is NP-hard to decide whether a given graph G satisfies χ(G) ≤ k or χ(G) ≥ b(k).
Our hardness results for the orthogonality dimension and the minrank parameter over finite fields are given by the following theorem, which confirms Theorem 1.2.
Theorem 4.2. There exists a function f : N → N satisfying f(k) = (1 − o(1)) · √(b(k)) such that for every finite field F and for every sufficiently large integer k, the following holds.
1. It is NP-hard to decide whether a given graph G satisfies ξF(G) ≤ k or ξF(G) ≥ (1/√(log |F|)) · f(k).
2. It is NP-hard to decide whether a given graph G satisfies minrkF(G) ≤ k or minrkF(G) ≥ (1/√(2 · log |F|)) · f(k).
Proof: Fix a finite field F of size q. We start by proving the first item of the theorem. For an integer k ≥ 4, consider the problem of deciding whether a given graph G satisfies χ(G) ≤ b(k) or χ(G) ≥ b(b(k)), whose NP-hardness follows from Theorem 4.1. To obtain our hardness result on the orthogonality dimension over F, we reduce from this problem. Consider the reduction that given an input graph G produces and outputs the underlying graph H of the digraph δG. This reduction can clearly be implemented in polynomial time (in fact, in logarithmic space).
To prove the correctness of the reduction, we analyze the orthogonality dimension of H over F. If G is a YES instance, that is, χ(G) ≤ b(k), then by combining Claim 2.6 with Theorem 3.2, it follows that
ξF(H) ≤ χ(H) ≤ k.
If G is a NO instance, that is, χ(G) ≥ b(b(k)), then by Theorem 3.5, it follows that
ξF(H) ≥ √(log_q χ(G)) ≥ √(log_q b(b(k))) = ((1 − o(1))/√(log q)) · √(b(k)),
where the o(1) term tends to 0 when k tends to infinity. Note that we have used here the fact that b(n) = Θ(2^n/√n). By letting k be any sufficiently large integer, the proof of the first item of the theorem is completed.
The proof of the second item of the theorem is similar. To avoid repetitions, we briefly mention the needed changes in the proof.
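The asymptotics of the central binomial coefficient b(n) invoked above can be checked numerically; the sketch below (standard library only, with illustrative variable names) confirms both b(n) = Θ(2^n/√n) and the fact, used in the NO-instance bound, that log b(m) is (1 − o(1)) · m.

```python
from math import comb, sqrt, log2

def b(n):
    # b(n) = (n choose floor(n/2)), the central binomial coefficient
    return comb(n, n // 2)

# By Stirling's formula, b(n) * sqrt(n) / 2^n tends to sqrt(2/pi) ~ 0.7979,
# which is the quantitative form of b(n) = Theta(2^n / sqrt(n)).
for n in (10, 100, 1000):
    print(n, b(n) * sqrt(n) / 2 ** n)

# The NO-instance estimate uses log_q b(b(k)) = (1 - o(1)) * b(k) / log2(q):
# since log2 b(m) = m - Theta(log m), the ratio below approaches 1.
k = 12
m = b(k)                    # b(12) = 924
print(log2(b(m)) / m)       # close to 1 already for moderate k
```

Squaring the resulting bound √(b(k)) against the YES-instance value k is exactly the "quadratically weaker" gap mentioned in Section 1.1.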
First, to obtain a hardness result for the minrank parameter, the reduction has to output the complement H̅ of the graph H rather than H itself. Second, in the analysis of the NO instances, one has to apply Theorem 3.13 instead of Theorem 3.5 to obtain that
minrkF(H̅) ≥ √((1/2) · log_q χ(G)) ≥ √((1/2) · log_q b(b(k))) = ((1 − o(1))/√(2 · log q)) · √(b(k)).
This completes the proof of the theorem.
As an immediate corollary of Theorem 4.2, we obtain the following.
Corollary 4.3. For every finite field F, the following holds.
1. It is NP-hard to approximate ξF(G) for a given graph G to within any constant factor.
2. It is NP-hard to approximate minrkF(G) for a given graph G to within any constant factor.
We next prove a hardness result for the orthogonality dimension over the reals, confirming Theorem 1.1.
Theorem 4.4. There exists a function f : N → N satisfying f(k) = Θ(√(b(k)/k)) such that for every sufficiently large integer k, it is NP-hard to decide whether a given graph G satisfies ξR(G) ≤ k or ξR(G) ≥ f(k).
Proof: As in the proof of Theorem 4.2, for an integer k ≥ 4, we reduce from the problem of deciding whether a given graph G satisfies χ(G) ≤ b(k) or χ(G) ≥ b(b(k)), whose NP-hardness follows from Theorem 4.1. Consider the polynomial-time reduction that given an input graph G produces and outputs the underlying graph H of the digraph δG.
To prove the correctness of the reduction, we analyze the orthogonality dimension of H over R. If G is a YES instance, that is, χ(G) ≤ b(k), then by combining Claim 2.6 with Theorem 3.2, it follows that
ξR(H) ≤ χ(H) ≤ k.
If G is a NO instance, that is, χ(G) ≥ b(b(k)), then by Theorem 3.7 combined with the fact that b(n) = Θ(2^n/√n), it follows that
ξR(H) ≥ c · √(log b(b(k)) / log log b(b(k))) = Θ(√(b(k)/k)),
where c is an absolute positive constant. This completes the proof of the theorem.
As an immediate corollary of Theorem 4.4, we obtain the following.
Corollary 4.5.
It is NP-hard to approximate ξR(G) for a given graph G to within any constant factor.
We end this section with a statement that might be useful for proving NP-hardness results for the general index coding problem. Consider the following definition.
Definition 4.6. For an alphabet Σ and for two integers k1 < k2, let Index-Coding_Σ(k1, k2) denote the problem of deciding whether the minimal length of an index code for a given graph G over Σ is at most k1 or at least k2.
We prove the following result.
Theorem 4.7. Let Σ be an alphabet of size at least 2, and let k1, k2 be two integers. Then, there exists a polynomial-time reduction from the problem of deciding whether a given graph G satisfies χ(G) ≤ b(k1) or χ(G) ≥ k2 to Index-Coding_Σ(k1, log_|Σ| log k2).
Proof: Consider the polynomial-time reduction that given an input graph G produces the underlying graph H of the digraph δG and outputs its complement H̅. For correctness, suppose first that G is a YES instance, that is, χ(G) ≤ b(k1). Then, by combining Claim 2.6 with Theorem 3.2, it follows that minrkF2(H̅) ≤ χ(H) ≤ k1. By Proposition 3.16, it further follows that there exists a linear index code for H̅ over F2 of length k1. In particular, using |Σ| ≥ 2, there exists an index code for H̅ over the alphabet Σ of length k1. Suppose next that G is a NO instance, that is, χ(G) ≥ k2. By Theorem 3.17, it follows that the length of any index code for H̅ over Σ is at least log_|Σ| log k2, so we are done.
Theorem 4.7 implies that in order to prove the NP-hardness of the general index coding problem over some finite alphabet Σ of size at least 2, it suffices to prove for some integer k that it is NP-hard to decide whether a given graph G satisfies χ(G) ≤ b(k) or χ(G) > 2^(|Σ|^k).
Acknowledgements
We thank the anonymous reviewers for their helpful comments.
References
[1] N. Alon. The Shannon capacity of a union. Combinatorica, 18(3):301–310, 1998.
[2] I. Attias and I. Haviv.
Local orthogonality dimension. arXiv, abs/2110.00718, 2021.
[3] Z. Bar-Yossef, Y. Birk, T. S. Jayram, and T. Kol. Index coding with side information. IEEE Trans. Inform. Theory, 57(3):1479–1494, 2011. Preliminary version in FOCS’06.
[4] L. Barto, J. Bulín, A. A. Krokhin, and J. Opršal. Algebraic approach to promise constraint satisfaction. J. ACM, 68(4):28:1–28:66, 2021. Preliminary version in STOC’19.
[5] A. Bhangale. NP-hardness of coloring 2-colorable hypergraph with poly-logarithmically many colors. In Proc. of the 45th International Colloquium on Automata, Languages, and Programming (ICALP’18), pages 15:1–15:11, 2018.
[6] J. Brakensiek and V. Guruswami. New hardness results for graph and hypergraph colorings. In Proc. of the 31st Conference on Computational Complexity (CCC’16), pages 14:1–14:27, 2016.
[7] E. Chlamtáč and I. Haviv. Linear index coding via semidefinite programming. Combinatorics, Probability & Computing, 23(2):223–247, 2014. Preliminary version in SODA’12.
[8] S. H. Dau, V. Skachek, and Y. M. Chee. Optimal index codes with near-extreme rates. IEEE Trans. Inform. Theory, 60(3):1515–1527, 2014. Preliminary version in ISIT’12.
[9] R. de Wolf. Quantum Computing and Communication Complexity. PhD thesis, Universiteit van Amsterdam, 2001.
[10] I. Dinur, E. Mossel, and O. Regev. Conditional hardness for approximate coloring. SIAM J. Comput., 39(3):843–873, 2009. Preliminary version in STOC’06.
[11] I. Dinur and I. Shinkar. On the conditional hardness of coloring a 4-colorable graph with super-constant number of colors. In Proc. of the 13th International Workshop on Approximation Algorithms for Combinatorial Optimization Problems (APPROX’10), pages 138–151, 2010.
[12] P. Erdős, Z. Füredi, A. Hajnal, P. Komjáth, V. Rödl, and Á. Seress. Coloring graphs with locally few colors. Discrete Mathematics, 59(1–2):21–34, 1986.
[13] P. Erdős and A. Hajnal. On chromatic number of infinite graphs. In Theory of Graphs, Proc.
Colloq., Tihany, pages 83–98. Academic Press, 1966.
[14] Z. Füredi, A. Hajnal, V. Rödl, and W. T. Trotter. Interval orders and shift graphs. In Sets, Graphs and Numbers, volume 60 of Colloq. Math. Soc. János Bolyai, pages 297–313. 1991.
[15] M. R. Garey and D. S. Johnson. The complexity of near-optimal graph coloring. J. ACM, 23(1):43–49, 1976.
[16] A. Golovnev and I. Haviv. The (generalized) orthogonality dimension of (generalized) Kneser graphs: Bounds and applications. Theory of Computing, 18(22):1–22, 2022. Preliminary version in CCC’21.
[17] V. Guruswami and S. Sandeep. d-To-1 hardness of coloring 3-colorable graphs with O(1) colors. In Proc. of the 47th International Colloquium on Automata, Languages, and Programming (ICALP’20), pages 62:1–62:12, 2020.
[18] W. H. Haemers. On some problems of Lovász concerning the Shannon capacity of a graph. IEEE Trans. Inform. Theory, 25(2):231–232, 1979.
[19] W. H. Haemers. An upper bound for the Shannon capacity of a graph. In L. Lovász and V. T. Sós, editors, Algebraic Methods in Graph Theory, volume 25/I of Colloquia Mathematica Societatis János Bolyai, pages 267–272. Bolyai Society and North-Holland, 1981.
[20] F. Harary and R. Z. Norman. Some properties of line digraphs. Rend. Circ. Mat. Palermo, 9(2):161–168, 1960.
[21] C. C. Harner and R. C. Entringer. On the arc-chromatic number of a digraph. J. Comb. Theory, Ser. B, 13(3):219–225, 1972.
[22] I. Haviv. Approximating the orthogonality dimension of graphs and hypergraphs. In Proc. of the 44th International Symposium on Mathematical Foundations of Computer Science (MFCS’19), pages 39:1–39:15, 2019.
[23] P. Hell and J. Nešetřil. On the complexity of H-coloring. J. Comb. Theory, Ser. B, 48(1):92–110, 1990.
[24] S. Huang. Improved hardness of approximating chromatic number. In Proc. of the 16th International Workshop on Approximation Algorithms for Combinatorial Optimization Problems (APPROX’13), pages 233–243, 2013.
[25] G. Kalai. Analogues for Sperner and Erdős-Ko-Rado theorems for subspaces of linear spaces. In P. L. Hammer, editor, Combinatorics 79, volume 9 of Annals of Discrete Math., page 135. Elsevier, 1980.
[26] R. M. Karp. Reducibility among combinatorial problems. In Proc. of a Symposium on the Complexity of Computer Computations, pages 85–103, 1972.
[27] K. Kawarabayashi and M. Thorup. Coloring 3-colorable graphs with less than n^(1/5) colors. J. ACM, 64(1):4:1–4:23, 2017. Preliminary versions in FOCS’12 and STACS’14.
[28] S. Khanna, N. Linial, and S. Safra. On the hardness of approximating the chromatic number. Combinatorica, 20(3):393–415, 2000. Preliminary version in ISTCS’93.
[29] S. Khot. Improved inapproximability results for MaxClique, chromatic number and approximate graph coloring. In Proc. of the 42nd Symposium on Foundations of Computer Science (FOCS’01), pages 600–609, 2001.
[30] M. Langberg and A. Sprintson. On the hardness of approximating the network coding capacity. IEEE Trans. Inform. Theory, 57(2):1008–1014, 2011. Preliminary version in ISIT’08.
[31] L. Lovász. Flats in matroids and geometric graphs. In Combinatorial surveys: Proc. of the 6th British Comb. Conf., Royal Holloway Coll., pages 45–86. Academic Press, 1977.
[32] L. Lovász. On the Shannon capacity of a graph. IEEE Trans. Inform. Theory, 25(1):1–7, 1979.
[33] L. Lovász. Graphs and Geometry, volume 65. Colloquium Publications, 2019.
[34] L. Lovász, M. Saks, and A. Schrijver. Orthogonal representations and connectivity of graphs. Linear Algebra Appl., 114–115:439–454, 1989. Special Issue Dedicated to Alan J. Hoffman.
[35] R. Peeters. Orthogonal representations over finite fields and the chromatic number of graphs. Combinatorica, 16(3):417–431, 1996.
[36] S. Poljak and V. Rödl. On the arc-chromatic number of a digraph. J. Comb. Theory, Ser. B, 31(2):190–198, 1981.
[37] K. Shanmugam, A. G. Dimakis, and M. Langberg. Local graph coloring and index coding.
In Proc. of the IEEE International Symposium on Information Theory (ISIT’13), pages 1152–1156, 2013.
[38] C. E. Shannon. The zero error capacity of a noisy channel. Institute of Radio Engineers, Trans. Inform. Theory, IT-2:8–19, 1956.
[39] S. Stahl. n-tuple colorings and associated graphs. J. Comb. Theory, Ser. B, 20(2):185–203, 1976.
[40] M. Wrochna and S. Živný. Improved hardness for H-colourings of G-colourable graphs. In Proc. of the 31st Annual ACM-SIAM Symposium on Discrete Algorithms (SODA’20), pages 1426–1435, 2020.
[41] D. Zuckerman. Linear degree extractors and the inapproximability of max clique and chromatic number. Theory of Computing, 3(1):103–128, 2007. Preliminary version in STOC’06.
diff --git a/09AyT4oBgHgl3EQf1PmO/content/tmp_files/load_file.txt b/09AyT4oBgHgl3EQf1PmO/content/tmp_files/load_file.txt
new file mode 100644
index 0000000000000000000000000000000000000000..e63aa88081f3f5ae4d31db1be9f1e0742b17fb9f
--- /dev/null
+++ b/09AyT4oBgHgl3EQf1PmO/content/tmp_files/load_file.txt
@@ -0,0 +1,925 @@
filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf,len=924
arXiv:2301.00732v1 [cs.CC] 2 Jan 2023
Improved NP-Hardness of Approximation for Orthogonality Dimension and Minrank
Dror Chawin* Ishay Haviv*
Abstract
The orthogonality dimension of a graph G over R is the smallest integer k for which one can assign a nonzero k-dimensional real vector to each vertex of G, such that every two adjacent vertices receive orthogonal vectors.
We prove that for every sufficiently large integer k, it is NP-hard to decide whether the orthogonality dimension of a given graph over R is at most k or at least 2^((1−o(1))·k/2). We further prove such hardness results for the orthogonality dimension over finite fields as well as for the closely related minrank parameter, which is motivated by the index coding problem in information theory. This in particular implies that it is NP-hard to approximate these graph quantities to within any constant factor. Previously, the hardness of approximation was known to hold either assuming certain variants of the Unique Games Conjecture or for approximation factors smaller than 3/2. The proofs involve the concept of line digraphs and bounds on their orthogonality dimension and on the minrank of their complement.

*School of Computer Science, The Academic College of Tel Aviv-Yaffo, Tel Aviv 61083, Israel. Research supported by the Israel Science Foundation (grant No. 1218/20).

1 Introduction

A graph G is said to be k-colorable if its vertices can be colored by k colors such that every two adjacent vertices receive distinct colors. The chromatic number of G, denoted by χ(G), is the smallest integer k for which G is k-colorable. As a fundamental and popular graph quantity, the chromatic number has received a considerable amount of attention in the literature from a computational perspective, as described below.

The problem of deciding whether a graph G satisfies χ(G) ≤ 3 is one of the classical twenty-one NP-complete problems presented by Karp [26] in 1972. Khanna, Linial, and Safra [28] proved that it is NP-hard to distinguish graphs G that satisfy χ(G) ≤ 3 from those satisfying χ(G) ≥ 5. This result, combined with the approach of Garey and Johnson [15] and with a result of Stahl [39], implies that for every k ≥ 6, it is NP-hard to decide whether a graph G satisfies χ(G) ≤ k or χ(G) ≥ 2k − 2. Brakensiek and Guruswami [6] proved that for every k ≥ 3, it is NP-hard to distinguish between the cases χ(G) ≤ k and χ(G) ≥ 2k − 1, and the 2k − 1 bound was further improved to 2k by Barto, Bulín, Krokhin, and Opršal [4]. For large values of k, it was shown by Khot [29] that it is NP-hard to decide whether a graph G satisfies χ(G) ≤ k or χ(G) ≥ k^(Ω(log k)), and the latter condition was strengthened to χ(G) ≥ 2^(k^(1/3)) by Huang [24]. A substantial improvement was recently obtained by Wrochna and Živný [40], who proved that for every k ≥ 4, it is NP-hard to decide whether a given graph G satisfies χ(G) ≤ k or χ(G) ≥ (k choose ⌊k/2⌋). The proof of this result combined the hardness result of [24] with the construction of line digraphs [20] and with a result of Poljak and Rödl [36]. Note that under certain variants of the Unique Games Conjecture, stronger hardness results are known to hold, namely, hardness of deciding whether a given graph G satisfies χ(G) ≤ k1 or χ(G) ≥ k2 for all integers k2 > k1 ≥ 3 [10] (see also [11]).

The present paper studies the computational complexity of algebraic variants of the chromatic number of graphs. A k-dimensional orthogonal representation of a graph G = (V, E) over a field F is an assignment of a vector uv ∈ F^k with ⟨uv, uv⟩ ̸= 0 to each vertex v ∈ V, such that for every two adjacent vertices v and v′ it holds that ⟨uv, uv′⟩ = 0. Here, for two vectors x, y ∈ F^k, we consider the standard inner product defined by ⟨x, y⟩ = ∑_{i=1}^{k} xi·yi with operations over F. The orthogonality dimension of G over F, denoted by ξF(G), is the smallest integer k for which G admits a k-dimensional orthogonal representation over F (see Remark 2.2). It can be easily seen that for every graph G and for every field F, it holds that ξF(G) ≤ χ(G). In addition, if F is a fixed finite field or the real field R, it further holds that ξF(G) ≥ Ω(log χ(G)). Both bounds are known to be tight in the worst case (see Claim 2.6 and [33, Chapter 10]). The study of orthogonal representations and orthogonality dimension was initiated in the seminal work of Lovász [32] on the ϑ-function and has found applications in various areas, e.g., information theory [32], graph theory [34], and quantum communication complexity [9, Chapter 8.5].

The interest in the hardness of determining the orthogonality dimension of graphs dates back to a paper of Lovász, Saks, and Schrijver [34], where it was noted that the problem seems difficult. The aforementioned relations between the chromatic number and the orthogonality dimension yield that hardness of deciding whether a graph G satisfies χ(G) ≤ k1 or χ(G) ≥ k2 implies the hardness of deciding whether it satisfies ξF(G) ≤ k1 or ξF(G) ≥ Ω(log k2), provided that F is a finite field or R. It therefore follows from [10] that assuming certain variants of the Unique Games Conjecture, it is hard to decide whether a graph G satisfies ξF(G) ≤ k1 or ξF(G) ≥ k2 for all integers k2 > k1 ≥ 3. This reasoning, however, does not yield NP-hardness results for the orthogonality dimension (without additional complexity assumptions), even using the strongest known NP-hardness results of the chromatic number. Yet, a result of Peeters [35] implies that for every field F, it is NP-hard to decide if a given graph G satisfies ξF(G) ≤ 3, hence it is NP-hard to approximate the orthogonality dimension of a graph over F to within any factor smaller than 4/3. Over the reals, the hardness of approximation for the orthogonality dimension was recently extended in [16] to any factor smaller than 3/2.

Another algebraic quantity of graphs is the minrank parameter that was introduced in 1981 by Haemers [19] in the study of the Shannon capacity of graphs. The minrank parameter was used in [18, 19] to answer questions of Lovász [32] and was later applied by Alon [1], with a different formulation, to disprove a conjecture of Shannon [38]. The minrank of a graph G over a field F, denoted by minrkF(G), is closely related to the orthogonality dimension of the complement graph G̅ over F and satisfies minrkF(G) ≤ ξF(G̅). The difference between the two quantities comes, roughly speaking, from the fact that the definition of minrank involves the notion of orthogonal bi-representations rather than orthogonal representations (for the precise definitions, see Section 2.1). The study of the minrank parameter is motivated by various applications in information theory and in theoretical computer science. A prominent one is the well-studied index coding problem, for which the minrank parameter perfectly characterizes the optimal length of its linear solutions, as was shown by Bar-Yossef, Birk, Jayram, and Kol [3] (see Section 2.2). Similarly to the situation of the orthogonality dimension, it was proved in [35] that for every field F, it is NP-hard to decide if a given graph G satisfies minrkF(G) ≤ 3. It was further shown by Dau, Skachek, and Chee [8] that it is NP-hard to decide whether a given digraph G satisfies minrkF2(G) ≤ 2. Note that for (undirected) graphs, the minrank over any field is at most 2 if and only if the complement graph is bipartite, a property that can be checked in polynomial time. Motivated by the computational aspects of the index coding problem, Langberg and Sprintson [30] related the minrank of a graph to the chromatic number of its complement and derived from [10] that assuming certain variants of the Unique Games Conjecture, it is hard to decide whether a given graph G satisfies minrkF(G) ≤ k1 or minrkF(G) ≥ k2, provided that k2 > k1 ≥ 3 and that F is a finite field. Similar hardness results were obtained in [30] for additional settings of the index coding problem, including the general (non-linear) index coding problem over a constant-size alphabet.

1.1 Our Contribution

This paper provides improved NP-hardness of approximation results for the orthogonality dimension and for the minrank parameter over various fields. We start with the following result, which is concerned with the orthogonality dimension over the reals.

Theorem 1.1. There exists a function f : N → N satisfying f(k) = 2^((1−o(1))·k/2) such that for every sufficiently large integer k, it is NP-hard to decide whether a given graph G satisfies ξR(G) ≤ k or ξR(G) ≥ f(k).

Theorem 1.1 implies that it is NP-hard to approximate the orthogonality dimension of a graph over the reals to within any constant factor. Previously, such an NP-hardness result was known to hold only for approximation factors smaller than 3/2 [16]. We proceed with the following result, which is concerned with the orthogonality dimension and the minrank parameter over finite fields.

Theorem 1.2. For every finite field F, there exists a function f : N → N satisfying f(k) = 2^((1−o(1))·k/2) such that for every sufficiently large integer k, the following holds.
1. It is NP-hard to decide whether a given graph G satisfies ξF(G) ≤ k or ξF(G) ≥ f(k).
2. It is NP-hard to decide whether a given graph G satisfies minrkF(G) ≤ k or minrkF(G) ≥ f(k).

Theorem 1.2 implies that over any finite field, it is NP-hard to approximate the orthogonality dimension and the minrank of a graph to within any constant factor. Let us stress that this hardness result relies solely on the assumption P ̸= NP rather than on stronger complexity assumptions and thus settles a question raised in [30]. Prior to this work, it was known that it is NP-hard to approximate the minrank of graphs to within any factor smaller than 4/3 [35] and the minrank of digraphs over F2 to within any factor smaller than 3/2 [8].

A central component of the proofs of Theorems 1.1 and 1.2 is the notion of line digraphs, introduced in [20], which was first used in the context of hardness of approximation by Wrochna and Živný [40] (see also [17]). It was shown in [21, 36] that the chromatic number of any graph is exponential in the chromatic number of its line digraph. This result was iteratively applied by the authors of [40] to improve the NP-hardness of the chromatic number from the k vs. 2^(k^(1/3)) gap of [24] to their k vs. (k choose ⌊k/2⌋) gap. The main technical contribution of the present work lies in analyzing the orthogonality dimension of line digraphs and the minrank parameter of their complement. We actually show that on line digraphs, these graph parameters are quadratically related to the chromatic number (see Theorems 3.5, 3.7, and 3.13). This allows us to derive our hardness results from the hardness of the chromatic number given in [40], where the obtained gaps are only quadratically weaker. We further discuss some limitations of our approach, involving an analogue of Sperner's theorem for subspaces due to Kalai [25]. We finally show that our approach might be useful for proving hardness results for the general (non-linear) index coding problem over a constant-size alphabet, for which no NP-hardness result is currently known.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' It was shown by Langberg and Sprintson [30] that for an instance of the index coding problem represented by a graph G, the length of an optimal solution is at most χ(G) and at least Ω(log log χ(G)).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' It thus follows that an NP-hardness result for the chromatic number with a double-exponential gap would imply an NP-hardness result for the general index coding prob- lem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' However, no such NP-hardness result is currently known for the chromatic number without relying on further complexity assumptions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' To tackle this issue, we study the index coding prob- lem on instances which are complement of line digraphs (see Theorem 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content='17).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' As a consequence of our results, we obtain that the NP-hardness of the general index coding problem can be derived from an NP-hardness result of the chromatic number with only a single-exponential gap, not that far from the best known gap given in [40].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' For a precise statement, see Theorem 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content='7.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content='2 Related Work We gather here several related results from the literature.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' A result of Zuckerman [41] asserts that for any ε > 0, it is NP-hard to approximate the chro- matic number of a graph on n vertices to within a factor of n1−ε.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' It would be interesting to figure out if such hardness result holds for the orthogonality dimension and for the min- rank parameter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' The present paper, however, focuses on the hardness of gap problems with constant thresholds, independent of the number of vertices.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' As mentioned earlier, Peeters [35] proved that for every field F, it is NP-hard to decide if the minrank (or the orthogonality dimension) of a given graph is at most 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' We note that for finite fields, this can also be derived from a result of Hell and Neˇsetˇril [23].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' For the chromatic number of hypergraphs, the gaps for which NP-hardness is known to hold are much stronger than for graphs.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' For example, it was shown in [5] that for some δ > 0, it is NP-hard to decide if a given 4-uniform hypergraph G on n vertices satisfies χ(G) ≤ 2 or χ(G) ≥ logδ n.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' An analogue result for the orthogonality dimension of hypergraphs over R was proved in [22].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' On the algorithmic side, a long line of work has explored the number of colors that an effi- cient algorithm needs for properly coloring a given k-colorable graph, where k ≥ 3 is a fixed 4 constant.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' For example, there exists a polynomial-time algorithm that on a given 3-colorable graph with n vertices uses O(n0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content='19996) colors [27].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Algorithms of this nature exist for the graph parameters studied in this work as well.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Indeed, there exists a polynomial-time algo- rithm that given a graph G on n vertices with ξR(G) ≤ 3 finds a proper coloring of G with O(n0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content='2413) colors [22].' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Further, there exists a polynomial-time algorithm that given a graph G on n vertices with minrkF2(G) ≤ 3 finds a proper coloring of G with O(n0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content='2574) colors [7].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Note that the colorings obtained by these two algorithms provide, respectively, orthogonal and bi-orthogonal representations for the input graph G (see Claim 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content='6).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content='3 Outline The rest of the paper is organized as follows.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' In Section 2, we collect several definitions and results that will be used throughout this paper.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' In Section 3, we study the underlying graphs of line digraphs and their behavior with respect to the orthogonality dimension, the minrank parameter, and the index coding problem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' We also discuss there some limitations of our approach, given in Sections 3.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content='2 and 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Finally, in Section 4, we prove our hardness results and complete the proofs of Theorems 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content='1 and 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' 2 Preliminaries Throughout the paper, undirected graphs are referred to as graphs, and directed graphs are re- ferred to as digraphs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' All the considered graphs and digraphs are simple, and all the logarithms are in base 2 unless otherwise specified.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' For an integer n, we use the notation [n] = {1, 2, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' .' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' , n}.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content='1 Orthogonality Dimension and Minrank The orthogonality dimension of a graph is defined as follows (see, e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=', [33, Chapter 11]).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Definition 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content='1 (Orthogonality Dimension).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' A k-dimensional orthogonal representation of a graph G = (V, E) over a field F is an assignment of a vector uv ∈ Fk with ⟨uv, uv⟩ ̸= 0 to each vertex v ∈ V, such that ⟨uv, uv′⟩ = 0 whenever v and v′ are adjacent vertices in G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Here, for two vectors x, y ∈ Fk, we let ⟨x, y⟩ = ∑k i=1 xiyi denote the standard inner product of x and y over F.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' The orthogonality dimension of a graph G over a field F, denoted by ξF(G), is the smallest integer k for which there exists a k-dimensional orthogonal representation of G over F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Remark 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' We note that orthogonal representations are sometimes defined in the literature such that the vectors associated with non-adjacent vertices are required to be orthogonal, that is, as orthogonal represen- tations of the complement graph.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' While we find it more convenient to use the other definition in this paper, one can view the notation ξF(G) as standing for ξF(G), i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=', the orthogonality dimension of the complement graph.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' The same holds for the notion of orthogonal bi-representations, given in Definition 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content='4.' 
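As a quick illustration of Definition 2.1 (ours, not from the paper; the function name is hypothetical), the sketch below checks a candidate orthogonal representation over the reals: assigning the standard basis of R^3 to the triangle K_3 gives a valid 3-dimensional representation, so ξ_R(K_3) ≤ 3.

```python
from itertools import combinations

def is_orthogonal_representation(vectors, edges):
    """Check Definition 2.1 over the reals: every assigned vector must be
    non-self-orthogonal, and adjacent vertices must get orthogonal vectors."""
    dot = lambda x, y: sum(a * b for a, b in zip(x, y))
    if any(dot(v, v) == 0 for v in vectors.values()):
        return False
    return all(dot(vectors[u], vectors[v]) == 0 for u, v in edges)

# Triangle K3 with the standard basis of R^3: adjacent vertices receive
# distinct basis vectors, exactly as in the coloring argument of Claim 2.6.
k3_edges = list(combinations(range(3), 2))
basis = {0: (1, 0, 0), 1: (0, 1, 0), 2: (0, 0, 1)}
print(is_orthogonal_representation(basis, k3_edges))  # True
```

Since the three vertices of K_3 are pairwise adjacent, any representation needs three pairwise orthogonal non-self-orthogonal vectors, so this dimension is also optimal over the reals.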
The minrank parameter, introduced in [19], is defined as follows.

Definition 2.3 (Minrank). Let G = (V, E) be a digraph on the vertex set V = [n], and let F be a field. We say that a matrix M ∈ F^{n×n} represents G if M_{i,i} ≠ 0 for every i ∈ V, and M_{i,j} = 0 for every distinct vertices i, j ∈ V such that (i, j) ∉ E. The minrank of G over F is defined as

minrk_F(G) = min{ rank_F(M) | M represents G over F }.

The definition is naturally extended to graphs by replacing every edge with two oppositely directed edges.

We next describe an alternative definition due to Peeters [35] for the minrank of graphs. This requires the following extension of orthogonal representations, called orthogonal bi-representations.

Definition 2.4. A k-dimensional orthogonal bi-representation of a graph G = (V, E) over a field F is an assignment of a pair of vectors (u_v, w_v) ∈ F^k × F^k with ⟨u_v, w_v⟩ ≠ 0 to each vertex v ∈ V, such that ⟨u_v, w_v′⟩ = ⟨u_v′, w_v⟩ = 0 whenever v and v′ are adjacent vertices in G.

The following proposition follows directly from Definitions 2.3 and 2.4 combined with the fact that for every matrix M ∈ F^{n×n}, rank_F(M) is the smallest integer k for which M can be written as M = M_1^T · M_2 for two matrices M_1, M_2 ∈ F^{k×n}.

Proposition 2.5 ([35]). For every field F and for every graph G, minrk_F(G) is the smallest integer k for which there exists a k-dimensional orthogonal bi-representation of G over F.
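For intuition, Definition 2.3 can be evaluated directly over F_2 for tiny graphs by enumerating every matrix that represents the graph and minimizing the rank. This brute-force sketch is ours (the helper names are hypothetical, and the search is exponential in the number of edges); on the 5-cycle it recovers the well-known value minrk_{F_2}(C_5) = 3 of [3].

```python
from itertools import product

def rank_gf2(rows):
    """Rank over F_2 of a matrix whose rows are integer bitmasks,
    computed by Gaussian elimination with XOR row operations."""
    rank = 0
    rows = [r for r in rows if r]
    while rows:
        pivot = rows.pop()
        rank += 1
        low = pivot & -pivot  # lowest set bit of the pivot row
        rows = [r ^ pivot if r & low else r for r in rows]
        rows = [r for r in rows if r]
    return rank

def minrank_f2(n, edges):
    """Brute-force minrk over F_2 per Definition 2.3 (symmetric case):
    diagonal entries are forced to be nonzero (i.e. 1), entries at
    non-edges to be 0, and each entry on a directed edge is free."""
    adj = {frozenset(e) for e in edges}
    free = [(i, j) for i in range(n) for j in range(n)
            if i != j and frozenset((i, j)) in adj]
    best = n  # the identity matrix always represents G
    for bits in product((0, 1), repeat=len(free)):
        rows = [1 << i for i in range(n)]  # start from the diagonal
        for (i, j), b in zip(free, bits):
            rows[i] |= b << j
        best = min(best, rank_gf2(rows))
    return best

c5 = [(i, (i + 1) % 5) for i in range(5)]
print(minrank_f2(5, c5))  # 3, the well-known value for the 5-cycle
```

A rank-3 witness for C_5 is found among the 2^10 candidate matrices, while no rank-2 matrix can represent it; by contrast, the all-ones matrix shows that any complete graph has minrank 1.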
The following claim summarizes some known relations between the studied graph parameters. We provide a quick proof for completeness.

Claim 2.6. For every field F and for every graph G, it holds that minrk_F(G) ≤ ξ_F(G) ≤ χ(G). In addition, if F is finite, then minrk_F(G) ≥ log_{|F|} χ(G).

Proof: The inequality minrk_F(G) ≤ ξ_F(G) follows by combining Proposition 2.5 with the fact that a k-dimensional orthogonal representation of G over F induces a k-dimensional orthogonal bi-representation of G over F with two identical vectors for every vertex. For the inequality ξ_F(G) ≤ χ(G), observe that any proper coloring of G with k colors induces a k-dimensional orthogonal representation of G over any field F, by assigning the ith vector of the standard basis of F^k to each vertex colored by the ith color. Next, assuming that F is finite, we show that minrk_F(G) ≥ log_{|F|} χ(G). To this end, denote k = minrk_F(G), and apply Proposition 2.5 to obtain that there exists a k-dimensional orthogonal bi-representation of G over F that assigns a pair (u_v, w_v) ∈ F^k × F^k to each vertex v of G. For every two adjacent vertices v and v′ in G, the vectors u_v and u_v′ are distinct, because ⟨u_v, w_v′⟩ = 0 whereas ⟨u_v′, w_v′⟩ ≠ 0. This implies that G admits a proper coloring with at most |F|^k colors, completing the proof.

We finally recall that a homomorphism from a graph G_1 = (V_1, E_1) to a graph G_2 = (V_2, E_2) is a function g : V_1 → V_2 such that for every two vertices x, y ∈ V_1 with {x, y} ∈ E_1, it holds that {g(x), g(y)} ∈ E_2. Observe that if there exists a homomorphism from G_1 to G_2 then we have χ(G_1) ≤ χ(G_2), and for every field F, ξ_F(G_1) ≤ ξ_F(G_2) and minrk_F(G_1) ≤ minrk_F(G_2).

2.2 Index Coding

The index coding problem, introduced in [3], is concerned with economical strategies for broadcasting information to n receivers in a way that enables each of them to retrieve its own message, a symbol from some given alphabet Σ. For this purpose, each receiver is allowed to use some prior side information that consists of a subset of the messages required by the other receivers. The side information map is naturally represented by a digraph on [n], which includes an edge (i, j) if the ith receiver knows the message required by the jth receiver. The objective is to minimize the length of the transmitted information. For simplicity, we consider here the case of symmetric side information maps, represented by graphs rather than by digraphs. The formal definition follows.

Definition 2.7 (Index Coding). Let G be a graph on the vertex set [n], and let Σ be an alphabet. An index code for G over Σ of length k is an encoding function E : Σ^n → Σ^k such that for every i ∈ [n], there exists a decoding function g_i : Σ^{k+|N_G(i)|} → Σ, such that for every x ∈ Σ^n, it holds that g_i(E(x), x|_{N_G(i)}) = x_i. Here, N_G(i) stands for the set of vertices in G adjacent to the vertex i, and x|_{N_G(i)} stands for the restriction of x to the indices of N_G(i). If Σ is a field F and the encoding function E is linear over F, then we say that the index code is linear over F.
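A toy instance of Definition 2.7 (our sketch, not from the paper): for the complete graph K_3, every receiver knows both other messages, so the single XOR broadcast E(x) = x_1 ⊕ x_2 ⊕ x_3 is a linear index code of length 1 over Σ = F_2. The snippet below verifies the decoding condition g_i(E(x), x|_{N_G(i)}) = x_i exhaustively.

```python
from itertools import product

def encode(x):
    """E : {0,1}^3 -> {0,1}: the XOR of all three messages."""
    return x[0] ^ x[1] ^ x[2]

def decode(i, broadcast, side):
    """g_i(E(x), x|_{N(i)}): XOR the known neighbor messages out of
    the broadcast to recover the receiver's own message x_i."""
    r = broadcast
    for s in side:
        r ^= s
    return r

# Check Definition 2.7 for every input x and every receiver i of K3.
ok = all(
    decode(i, encode(x), [x[j] for j in range(3) if j != i]) == x[i]
    for x in product((0, 1), repeat=3) for i in range(3)
)
print(ok)  # True
```

Since the code is linear of length 1 and at least one symbol must be sent, this matches minrk_{F_2}(K_3) = 1, consistent with the characterization stated next.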
Bar-Yossef et al. [3] showed that the minrank parameter characterizes the length of optimal solutions to the index coding problem in the linear setting.

Proposition 2.8 ([3]). For every field F and for every graph G, the minimal length of a linear index code for G over F is minrk_F(G).

3 Line Digraphs

In 1960, Harary and Norman [20] introduced the concept of line digraphs, defined as follows.

Definition 3.1 (Line Digraph). For a digraph G = (V, E), the line digraph of G, denoted by δG, is the digraph on the vertex set E that includes a directed edge from a vertex (x, y) to a vertex (z, w) whenever y = z.

Definition 3.1 is naturally extended to graphs G by replacing every edge of G with two oppositely directed edges. Note that in this case, the number of vertices in δG is twice the number of edges in G.
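The construction in Definition 3.1 is mechanical enough to sketch in a few lines (our illustration; the function name is hypothetical, and we read the definition literally, so consecutive arcs (x, y) and (y, x) are also joined). For a graph, each edge is first split into two opposite arcs, which is why δG has twice as many vertices as G has edges.

```python
def line_digraph(arcs):
    """delta(G) per Definition 3.1: vertices are the arcs of G, with a
    directed edge from (x, y) to (z, w) whenever y == z."""
    arcs = list(arcs)
    return arcs, [(a, b) for a in arcs for b in arcs if a[1] == b[0]]

# Triangle K3 as a digraph: replace each edge by two opposite arcs, so
# delta(G) has 2 * |E(G)| vertices.
tri = [(0, 1), (1, 2), (0, 2)]
arcs = tri + [(v, u) for u, v in tri]
nodes, darcs = line_digraph(arcs)
print(len(nodes) == 2 * len(tri))  # True
```

For instance, the arc (0, 1) points to (1, 2) in δG because the head of the first arc is the tail of the second, but not to (2, 0).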
We will frequently consider the underlying graph of the digraph δG, i.e., the graph obtained from δG by ignoring the directions of the edges. The following result of Poljak and Rödl [36], which strengthens a previous result of Harner and Entringer [21], shows that the chromatic number of a graph G precisely determines the chromatic number of the underlying graph of δG. The statement of the result uses the function b : N → N defined by b(n) = (n choose ⌊n/2⌋).

Theorem 3.2 ([21, 36]). Let G be a graph, and let H be the underlying graph of the digraph δG. Then, χ(H) = min{n | χ(G) ≤ b(n)}.
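To get a feel for the quantitative relation in Theorem 3.2, the least n with χ(G) ≤ b(n) can be computed directly (a small sketch of ours; the helper names are hypothetical).

```python
from math import comb

def b(n):
    """b(n) = (n choose floor(n/2)), as in the statement of Theorem 3.2."""
    return comb(n, n // 2)

def chi_underlying_line_digraph(chi_g):
    """chi(H) from chi(G) via Theorem 3.2: the least n with chi(G) <= b(n)."""
    n = 1
    while b(n) < chi_g:
        n += 1
    return n

# b(1..4) = 1, 2, 3, 6, so e.g. a 6-chromatic G yields a 4-chromatic H;
# conversely chi(G) can be as large as b(chi(H)), i.e. exponential in chi(H).
print([chi_underlying_line_digraph(c) for c in (2, 3, 4, 6, 7, 35)])
# [2, 3, 4, 4, 5, 7]
```

The roughly logarithmic growth visible here is exactly the compression that [40] exploits by iterating the line-digraph construction.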
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content='2 implies that the chromatic number of G is expo- nential in the chromatic number of H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Our goal in this section is to relate the chromatic number of G to other graph parameters of H, namely, the orthogonality dimension, the minrank of the complement, and the optimal length of an index code for the complement.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' 7 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content='1 Orthogonality Dimension For a field F, an integer n, and a subspace U of Fn, we denote by U⊥ the subspace of Fn that consists of the vectors that are orthogonal to U over F, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=', U⊥ = {w ∈ Fn | ⟨w, u⟩ = 0 for every u ∈ U}.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Consider the following family of graphs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Definition 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content='3.' 
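The relation in Theorem 3.2 is easy to evaluate numerically; the short Python sketch below (an illustration, not part of the paper) computes b(n) and the resulting value of χ(H) for a given χ(G):

```python
from math import comb

def b(n):
    # b(n) = (n choose floor(n/2)), the central binomial coefficient
    return comb(n, n // 2)

def chi_H(chi_G):
    # Theorem 3.2: chi(H) = min{ n : chi(G) <= b(n) }
    n = 1
    while b(n) < chi_G:
        n += 1
    return n

# b(n) grows like 2^n / sqrt(pi*n/2), so chi(H) is logarithmic in chi(G):
assert [chi_H(c) for c in (2, 3, 4, 7, 100)] == [2, 3, 4, 5, 9]
```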
For a field F and an integer n, let S1(F, n) denote the graph whose vertices are all the subspaces of Fn, where two distinct subspaces U1 and U2 are adjacent if there exists a vector w ∈ Fn with ⟨w, w⟩ ≠ 0 that satisfies w ∈ U1 ∩ U2⊥ and, in addition, there exists a vector w′ ∈ Fn with ⟨w′, w′⟩ ≠ 0 that satisfies w′ ∈ U2 ∩ U1⊥.

In words, two subspaces of Fn are adjacent in the graph S1(F, n) if each of them includes a non-self-orthogonal vector that is orthogonal to the entire other subspace. Note that for an infinite field F and for n ≥ 2, the vertex set of S1(F, n) is infinite.

We argue that the chromatic number of a graph G can be used to estimate the orthogonality dimension of the underlying graph H of its line digraph δG. First, recall that by Theorem 3.2, the chromatic number of H is logarithmic in χ(G). This implies, using Claim 2.6, that the orthogonality dimension of H over any field is at most logarithmic in χ(G). For a lower bound on the orthogonality dimension of H, we need the following lemma that involves the chromatic numbers of the graphs S1(F, n).

Lemma 3.4. Let F be a field, let G be a graph, let H be the underlying graph of the digraph δG, and put n = ξF(H). Then, χ(G) ≤ χ(S1(F, n)).

Proof: Put G = (VG, EG) and H = (VH, EH). The assumption n = ξF(H) implies that there exists an n-dimensional orthogonal representation of H over F, that is, an assignment of a vector uv ∈ Fn with ⟨uv, uv⟩ ≠ 0 to each vertex v ∈ VH, such that ⟨uv, uv′⟩ = 0 whenever v and v′ are adjacent in H. Recall that the vertices of H, just as the vertices of δG, are the ordered pairs (x, y) of adjacent vertices x, y in G.
For every vertex y ∈ VG, let Uy denote the subspace spanned by the vectors of the given orthogonal representation that are associated with the vertices of H whose tail is y, namely, Uy = span({uv | v = (x, y) for some x ∈ VG}). Note that Uy is a subspace of Fn, and thus a vertex of S1(F, n). Consider the function that maps every vertex y ∈ VG of G to the vertex Uy of S1(F, n). We claim that this function forms a homomorphism from G to S1(F, n). To see this, let x, y ∈ VG be adjacent vertices in G, and consider the vector w = u(x,y) assigned by the given orthogonal representation to the vertex (x, y) of H. By the definition of an orthogonal representation, it holds that ⟨w, w⟩ ≠ 0. Since (x, y) is a vertex of H whose tail is y, it follows that w ∈ Uy. Further, every vertex of H of the form (x′, x) for some x′ ∈ VG is adjacent in H to (x, y), hence it holds that ⟨u(x′,x), w⟩ = 0. Since the subspace Ux is spanned by those vectors u(x′,x), we obtain that w is orthogonal to the entire subspace Ux. It thus follows that the vector w satisfies ⟨w, w⟩ ≠ 0 and w ∈ Uy ∩ Ux⊥. By symmetry, there also exists a vector w′ ∈ Fn satisfying ⟨w′, w′⟩ ≠ 0 and w′ ∈ Ux ∩ Uy⊥, hence the subspaces Ux and Uy are adjacent vertices in S1(F, n). We conclude that the above function is a homomorphism from G to S1(F, n), hence the chromatic numbers of these graphs satisfy χ(G) ≤ χ(S1(F, n)), as required.

In order to derive useful bounds from Lemma 3.4, we need upper bounds on the chromatic numbers of the graphs S1(F, n). Every vertex of S1(F, n) is a subspace of Fn and thus can be represented by a basis that generates it. For a finite field F of size q, the number of possible bases does not exceed q^{n^2}, which obviously yields that χ(S1(F, n)) ≤ q^{n^2}.
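As a quick numeric sanity check of this counting (an illustration, not part of the paper), the number of k-dimensional subspaces of Fn over a field of size q is the Gaussian binomial coefficient ∏_{i=0}^{k−1} (q^n − q^i)/(q^k − q^i), and summing over k indeed stays below the crude basis count q^{n^2}:

```python
def num_subspaces(q, n, k):
    """Number of k-dimensional subspaces of F_q^n (Gaussian binomial)."""
    num = den = 1
    for i in range(k):
        num *= q**n - q**i
        den *= q**k - q**i
    return num // den  # exact: Gaussian binomials are integers

def total_subspaces(q, n):
    return sum(num_subspaces(q, n, k) for k in range(n + 1))

# F_2^2 has 5 subspaces: {0}, three lines, and the whole plane.
assert total_subspaces(2, 2) == 5
# The crude basis count q^(n^2) is indeed an upper bound.
assert all(total_subspaces(2, n) <= 2**(n * n) for n in range(1, 7))
```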
While this simple bound suffices for proving our hardness results for the orthogonality dimension over finite fields, we note that the number of vertices in S1(F, n) is in fact q^{(1+o(1))·n²/4}, where the o(1) term tends to 0 when n tends to infinity.¹ We conclude this discussion with the following theorem.

Theorem 3.5. Let F be a finite field of size q, let G be a graph, and let H be the underlying graph of the digraph δG. Then, it holds that ξF(H) ≥ √(log_q χ(G)).

Proof: Put n = ξF(H), and apply Lemma 3.4 to obtain that χ(G) ≤ χ(S1(F, n)) ≤ q^{n^2}. By rearranging, the proof is completed.

3.1.1 The Chromatic Number of S1(R, n)

For the real field R and for n ≥ 2, the vertex set of the graph S1(R, n) is infinite, and yet, its chromatic number is finite. To see this, let us firstly observe a simple upper bound of 2^{3^n}. To each vertex of S1(R, n), i.e., a subspace U of Rn, assign the subset of {0, ±1}^n that consists of all the sign vectors of the vectors of U. This assignment forms a proper coloring of the graph, because for adjacent vertices U and V there exists a nonzero vector w ∈ U that is orthogonal to V, hence the sign vector of w belongs to the set of sign vectors of U but does not belong to the one of V (because the inner product of two vectors with the same nonzero sign vector is positive). Since the number of subsets of {0, ±1}^n is 2^{3^n}, it follows that χ(S1(R, n)) ≤ 2^{3^n}.
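The sign-vector coloring can be made concrete for one-dimensional subspaces, where the set of sign vectors of a line span(v) is just {0, sgn(v), −sgn(v)}; the Python sketch below (an illustration, not part of the paper) checks that two orthogonal lines in R², which are adjacent in S1(R, 2), receive different colors:

```python
def sign(x):
    return (x > 0) - (x < 0)

def line_color(v):
    """Color of the line span(v): the set of sign vectors of its elements.

    For a one-dimensional subspace these are exactly the all-zero vector
    together with sgn(v) and sgn(-v).
    """
    zero = tuple(0 for _ in v)
    s = tuple(sign(x) for x in v)
    neg = tuple(-x for x in s)
    return frozenset({zero, s, neg})

# span((1, 1)) and span((1, -1)) are orthogonal lines in R^2, hence
# adjacent in S1(R, 2); their colors differ, as the proof guarantees.
assert line_color((1, 1)) != line_color((1, -1))
# Scaling does not change the color, so the coloring is well defined.
assert line_color((2, 2)) == line_color((1, 1))
```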
The above double-exponential bound is not sufficient for deriving NP-hardness of approximation results for the orthogonality dimension over R from the currently known NP-hardness results for the chromatic number. We therefore need the following lemma that provides an exponentially better bound which is suitable for our purposes. For a vector w ∈ Rn, we use here the notation ∥w∥ = √⟨w, w⟩ for the Euclidean norm of w.

Lemma 3.6. For every integer n, it holds that χ(S1(R, n)) ≤ (2n + 1)^{n^2}.

Proof: We define a coloring of the vertices of the graph S1(R, n) as follows. For every vertex of S1(R, n), i.e., a subspace U of Rn, let (u1, . . . , uk) be an arbitrary orthonormal basis of U where k ≤ n, and assign U to the color c(U) = (u′1, . . . , u′k), where u′i is a vector obtained from ui by rounding each of its values to a closest integer multiple of 1/n. Note that for every i ∈ [k], the vectors ui and u′i differ in every coordinate by no more than 1/(2n) in absolute value.

We claim that c is a proper coloring of S1(R, n). To see this, let U and V be adjacent vertices in the graph. If dim(U) ≠ dim(V) then it clearly holds that c(U) ≠ c(V). So suppose that the dimensions of U and V are equal, and put k = dim(U) = dim(V). Denote the orthonormal bases associated with U and V by (u1, . . . , uk) and (v1, . . . , vk) respectively, and let c(U) = (u′1, . . . , u′k) and c(V) = (v′1, . . . , v′k) be their colors. Our goal is to show that c(U) ≠ c(V). Assume for the sake of contradiction that c(U) = c(V), that is, u′i = v′i for every i ∈ [k]. This implies that for every i ∈ [k], the vectors ui and vi differ in each coordinate by no more than 1/n in absolute value, hence

∥ui − vi∥ ≤ √(n · 1/n²) = 1/√n.    (1)

Since U and V are adjacent in the graph S1(R, n), by scaling, there exists a unit vector u ∈ U ∩ V⊥. Write u = ∑_{i∈[k]} αi · ui for coefficients α1, . . . , αk ∈ R. Since the given basis of U is orthonormal, it follows that ∑_{i∈[k]} αi² = ∥u∥² = 1. Now, consider the vector v = ∑_{i∈[k]} αi · vi, and observe that v is a unit vector that belongs to the subspace V. Observe further that

∥u − v∥ = ∥∑_{i∈[k]} αi · (ui − vi)∥ ≤ ∑_{i∈[k]} |αi| · ∥ui − vi∥ ≤ (∑_{i∈[k]} αi²)^{1/2} · (∑_{i∈[k]} ∥ui − vi∥²)^{1/2} ≤ 1,    (2)

where the first inequality follows from the triangle inequality, the second from the Cauchy-Schwarz inequality, and the third from (1) using k ≤ n. However, u and v are orthogonal unit vectors, and as such, the distance between them satisfies ∥u − v∥ = √2. This yields a contradiction to (2), hence c(U) ≠ c(V).

¹To see this, observe that the number of k-dimensional subspaces of Fn is precisely ∏_{i=0}^{k−1} (q^n − q^i)/(q^k − q^i) and that every term in this product lies in [q^{n−k−1}, q^{n−k+1}]. Hence, the total number of subspaces of Fn is at least ∑_{k=0}^{n} q^{(n−k−1)k} and at most ∑_{k=0}^{n} q^{(n−k+1)k}. It follows that the number of subspaces of Fn is q^{(1+o(1))·n²/4}.
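The two quantitative facts driving the proof of Lemma 3.6 — the per-coordinate rounding error of at most 1/(2n), and the distance √2 between orthogonal unit vectors — are easy to check numerically (an illustration, not part of the paper):

```python
from math import sqrt, isclose

def round_to_grid(u, n):
    """Round each coordinate to a closest integer multiple of 1/n."""
    return tuple(round(x * n) / n for x in u)

n = 4
u = (1 / sqrt(2), 1 / sqrt(2), 0, 0)     # a unit vector in R^4
u_rounded = round_to_grid(u, n)
# Every coordinate moves by at most 1/(2n).
assert all(abs(a - b) <= 1 / (2 * n) + 1e-12 for a, b in zip(u, u_rounded))

# Orthogonal unit vectors are at distance sqrt(2), which exceeds the
# bound of 1 in inequality (2) -- the contradiction in the proof.
v = (1 / sqrt(2), -1 / sqrt(2), 0, 0)
dist = sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
assert isclose(dist, sqrt(2))
```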
To complete the proof, we observe that the number of colors used by the proper coloring c does not exceed (2n + 1)^{n^2}. Indeed, every color can be represented by an n × n matrix whose values are of the form a/n for integers −n ≤ a ≤ n (where the matrix associated with a subspace of dimension k consists of the rounded k column vectors concatenated with n − k columns of zeros). Since the number of those matrices is bounded by (2n + 1)^{n^2}, we are done.

We derive the following theorem.

Theorem 3.7. There exists a constant c > 0, such that for every graph G with χ(G) ≥ 3, the underlying graph H of the digraph δG satisfies ξR(H) ≥ c · √(log χ(G) / log log χ(G)).

Proof: Put n = ξR(H), and combine Lemma 3.4 with Lemma 3.6 to obtain that χ(G) ≤ χ(S1(R, n)) ≤ (2n + 1)^{n^2}, which yields the desired bound.

3.1.2 The Clique Number of S1(F, n)

We next consider the clique numbers of the graphs S1(F, n), whose estimation is motivated by the following lemma. Here, the clique number of a graph G is denoted by ω(G).

Lemma 3.8. Let F be a field, let G be a graph, and let H be the underlying graph of the digraph δG. If χ(G) ≤ ω(S1(F, n)), then ξF(H) ≤ n.

Proof: Put m = ω(S1(F, n)), and let U1, . . . , Um be m subspaces of Fn that form a clique in S1(F, n). Put G = (V, E), suppose that χ(G) ≤ m, and let c : V → [m] be a proper coloring of G. Notice that for every two adjacent vertices x, y in G, the subspaces Uc(x) and Uc(y) are adjacent vertices in S1(F, n). We define an n-dimensional orthogonal representation of H over F as follows. Recall that every vertex of H is a pair (x, y) of adjacent vertices x, y in G. Assign every such vertex (x, y) to some non-self-orthogonal vector u(x,y) that lies in Uc(y) ∩ Uc(x)⊥. The existence of such a vector follows from the adjacency of the vertices Uc(x) and Uc(y) in S1(F, n). We claim that this assignment is an orthogonal representation of H.
Indeed, for adjacent vertices (x, y) and (y, z) in H, the vector u(x,y) belongs to Uc(y) whereas the vector u(y,z) is orthogonal to Uc(y), hence they satisfy ⟨u(x,y), u(y,z)⟩ = 0. Since this orthogonal representation lies in Fn, we establish that ξF(H) ≤ n.

For a graph G and for the underlying graph H of its line digraph δG, Theorem 3.2 implies that if χ(G) ≤ (n choose ⌊n/2⌋) then χ(H) ≤ n, and thus, by Claim 2.6, ξF(H) ≤ n for every field F. This raises the question of whether Lemma 3.8 can be used to obtain a better upper bound on ξF(H) as a function of χ(G). For certain cases, the following result answers this question negatively. Namely, it shows that the clique number of the graphs S1(F, n) is precisely (n choose ⌊n/2⌋) whenever the vector space Fn has no nonzero self-orthogonal vectors (as in the case of F = R). It thus follows that Lemma 3.8 cannot yield a better relation between the quantities ξR(H) and χ(G) than the one stemming from Theorem 3.2.

Proposition 3.9. For a field F and an integer n such that Fn has no nonzero self-orthogonal vectors, it holds that ω(S1(F, n)) = (n choose ⌊n/2⌋).

The proof of Proposition 3.9 relies on the following result of Kalai [25] (see also [31]).

Theorem 3.10 ([25]). For a field F and an integer n, let (U1, W1), . . . , (Um, Wm) be m pairs of subspaces of Fn such that

1. Ui ∩ Wi = {0} for every i ∈ [m], and
2. Ui ∩ Wj ≠ {0} for every i ≠ j ∈ [m].

Then, m ≤ (n choose ⌊n/2⌋).

Proof of Proposition 3.9: We first show that there exists a clique in S1(F, n) of size (n choose ⌊n/2⌋). For every set A ⊆ [n] of size |A| = ⌊n/2⌋, let UA denote the subspace of Fn spanned by the vectors ei with i ∈ A, where ei stands for the vector of Fn with 1 on the ith entry and 0 everywhere else.
It clearly holds that for every distinct such sets A1, A2, there exists some i ∈ A1 \ A2, and that the vector ei satisfies ⟨ei, ei⟩ = 1 and ei ∈ UA1 ∩ UA2⊥. It thus follows that the (n choose ⌊n/2⌋) subspaces UA with |A| = ⌊n/2⌋ form a clique in the graph S1(F, n), as required.

We next show that the size of every clique in S1(F, n) does not exceed (n choose ⌊n/2⌋). To see this, let U1, . . . , Um be subspaces of F^n that form a clique in S1(F, n). Consider the pairs (Ui, Ui⊥) for i ∈ [m], and observe that they satisfy the conditions of Theorem 3.10. Indeed, for every i ∈ [m] it holds that Ui ∩ Ui⊥ = {0}, because F^n has no nonzero self-orthogonal vectors. Further, since the given collection of subspaces is a clique in S1(F, n), for every i ≠ j ∈ [m], there exists a vector w ∈ F^n with ⟨w, w⟩ ≠ 0 such that w ∈ Ui ∩ Uj⊥, hence Ui ∩ Uj⊥ ≠ {0}. It thus follows from Theorem 3.10 that m ≤ (n choose ⌊n/2⌋), as required.

3.2 Minrank

As in the previous section, we start with a definition of a family of graphs.

Definition 3.11. For a field F and an integer n, let S2(F, n) denote the graph whose vertices are all the pairs of subspaces of F^n, where two distinct pairs (U1, W1) and (U2, W2) are adjacent if there exist two vectors u, w ∈ F^n with ⟨u, w⟩ ≠ 0 such that u ∈ U1 ∩ W2⊥ and w ∈ W1 ∩ U2⊥ and, in addition, there exist two vectors u′, w′ ∈ F^n with ⟨u′, w′⟩ ≠ 0 such that u′ ∈ U2 ∩ W1⊥ and w′ ∈ W2 ∩ U1⊥.
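Before moving on, note that the lower-bound construction in the proof of Proposition 3.9 above is purely combinatorial: since each UA is spanned by standard basis vectors, ei lies in UA1 ∩ UA2⊥ exactly when i ∈ A1 \ A2. This makes the clique easy to check mechanically for small n (a sketch; the set-based encoding of the subspaces is ours):

```python
from itertools import combinations
from math import comb

def witnesses_adjacency(A1, A2):
    # Over R, e_i satisfies <e_i, e_i> = 1 != 0, and e_i lies in U_{A1}
    # while being orthogonal to U_{A2} exactly when i is in A1 \ A2.
    return bool(set(A1) - set(A2))

n = 6
half_sets = list(combinations(range(n), n // 2))

# Any two distinct floor(n/2)-subsets differ, so each pair has witnesses
# in both directions: the subspaces U_A form a clique in S1(F, n).
assert all(witnesses_adjacency(A1, A2) and witnesses_adjacency(A2, A1)
           for A1, A2 in combinations(half_sets, 2))
assert len(half_sets) == comb(n, n // 2)  # clique of size (n choose floor(n/2))
```

The upper bound of Theorem 3.10, by contrast, is where the actual work lies and is not reproduced by such a check.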
We next argue that the chromatic number of a graph G can be used to estimate the minrank of the complement of the underlying graph of its line digraph δG. This is established using the following lemma that involves the chromatic numbers of the graphs S2(F, n). Its proof resembles that of Lemma 3.4.

Lemma 3.12. Let F be a field, let G be a graph, let H be the underlying graph of the digraph δG, and put n = minrkF(H̄). Then, χ(G) ≤ χ(S2(F, n)).

Proof: Put G = (VG, EG) and H = (VH, EH). The assumption n = minrkF(H̄) implies, by Proposition 2.5, that there exists an n-dimensional orthogonal bi-representation of H over F, that is, an assignment of a pair of vectors (uv, wv) ∈ F^n × F^n with ⟨uv, wv⟩ ≠ 0 to each vertex v ∈ VH, such that ⟨uv, wv′⟩ = ⟨uv′, wv⟩ = 0 whenever v and v′ are adjacent in H.

For every vertex y ∈ VG, let Uy denote the subspace spanned by the vectors uv of the given orthogonal bi-representation associated with the vertices v of H whose tail is y, namely, Uy = span({uv | v = (x, y) for some x ∈ VG}). Similarly, let Wy denote the subspace spanned by the vectors wv of the given orthogonal bi-representation associated with the vertices v of H whose tail is y, namely, Wy = span({wv | v = (x, y) for some x ∈ VG}).

Note that Uy and Wy are subspaces of F^n, hence the pair (Uy, Wy) is a vertex of S2(F, n). Consider the function that maps every vertex y ∈ VG of G to the vertex (Uy, Wy) of S2(F, n). We claim that this function forms a homomorphism from G to S2(F, n).
To see this, let x, y ∈ VG be adjacent vertices in G, and consider the vectors u = u(x,y) and w = w(x,y) assigned by the given orthogonal bi-representation to the vertex (x, y) of H. By the definition of an orthogonal bi-representation, it holds that ⟨u, w⟩ ≠ 0. Since (x, y) is a vertex of H whose tail is y, it follows that u ∈ Uy and w ∈ Wy. Further, every vertex of H of the form (x′, x) for some x′ ∈ VG is adjacent in H to (x, y), hence it satisfies ⟨u(x′,x), w⟩ = ⟨u, w(x′,x)⟩ = 0. Since the subspaces Ux and Wx are spanned, respectively, by those vectors u(x′,x) and w(x′,x), we obtain that u is orthogonal to the subspace Wx and that w is orthogonal to the subspace Ux. It thus follows that the vectors u and w satisfy ⟨u, w⟩ ≠ 0, u ∈ Uy ∩ Wx⊥, and w ∈ Wy ∩ Ux⊥. By symmetry, there also exist vectors u′, w′ ∈ F^n satisfying ⟨u′, w′⟩ ≠ 0, u′ ∈ Ux ∩ Wy⊥, and w′ ∈ Wx ∩ Uy⊥, hence the pairs (Ux, Wx) and (Uy, Wy) are adjacent vertices in S2(F, n).

We conclude that the above function is a homomorphism from G to S2(F, n), hence the chromatic numbers of these graphs satisfy χ(G) ≤ χ(S2(F, n)), as required.

We derive the following theorem.

Theorem 3.13. Let F be a finite field of size q, let G be a graph, and let H be the underlying graph of the digraph δG. Then, it holds that minrkF(H̄) ≥ √((1/2) · logq χ(G)).

Proof: Put n = minrkF(H̄), and apply Lemma 3.12 to obtain that χ(G) ≤ χ(S2(F, n)) ≤ q^(2n²), where the second inequality holds because the number of vertices in S2(F, n) does not exceed q^(2n²). By rearranging, the proof is completed.

3.2.1 The Chromatic Number of S2(R, n)

We next consider the problem of determining the chromatic numbers of the graphs S2(R, n). The following theorem shows that these graphs cannot be properly colored using a finite number of colors, in contrast to the graphs S1(R, n) addressed in Lemma 3.6.

Theorem 3.14. For every integer n ≥ 3, it holds that χ(S2(R, n)) = ∞.

Before proving Theorem 3.14, let us describe a significant difference between the behavior of ξR(G) and of minrkR(Ḡ) with respect to the chromatic number χ(G). It is not difficult to see that the chromatic number of a graph G is bounded from above by some function of ξR(G).
Indeed, given a k-dimensional orthogonal representation of a graph G over R, one can assign to each vertex the sign vector from {0, ±1}^k of its vector, obtaining a proper coloring of G with at most 3^k colors. This implies that every graph G satisfies χ(G) ≤ 3^(ξR(G)) (see also [33, Chapter 11]). On the other hand, the chromatic number of a graph G cannot be bounded from above by any function of minrkR(Ḡ), as proved below.

Theorem 3.15. For every integer m, there exists a graph G such that minrkR(Ḡ) ≤ 3 and yet χ(G) ≥ m.

Proof: For an integer n > 6, consider the ‘double shift graph’ Gn defined as follows. Its vertices are all the 3-subsets of [n], where two sets {x1, x2, x3} and {y1, y2, y3} with x1 < x2 < x3 and y1 < y2 < y3 are adjacent in Gn if either (x2, x3) = (y1, y2) or (x1, x2) = (y2, y3).
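The double shift graph just defined can be built explicitly (a sketch; the function and variable names are ours):

```python
from itertools import combinations

def double_shift_graph(n):
    """Vertices: sorted 3-subsets of {0, ..., n-1}.
    Edges: {x1<x2<x3} ~ {y1<y2<y3} iff (x2,x3)=(y1,y2) or (x1,x2)=(y2,y3)."""
    vertices = list(combinations(range(n), 3))
    edges = {frozenset((x, y))
             for x in vertices for y in vertices
             if x != y and ((x[1], x[2]) == (y[0], y[1])
                            or (x[0], x[1]) == (y[1], y[2]))}
    return vertices, edges

vertices, edges = double_shift_graph(7)
assert len(vertices) == 35  # (7 choose 3)
# (1,2,3) and (2,3,4) overlap in the required shifted way, so they are adjacent.
assert frozenset({(1, 2, 3), (2, 3, 4)}) in edges
```

Since the two adjacency conditions are each other's mirror image under swapping x and y, the edge relation is symmetric and the graph is undirected, as used implicitly above.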
It was shown in [13] that the graph Gn satisfies χ(Gn) = (1 + o(1)) · log log n (see also [14]), whereas its local chromatic number, a concept introduced by Erdős et al. [12], is known to be 3. By an argument of Shanmugam, Dimakis, and Langberg [37, Theorem 1], this implies that minrkR(Ḡn) ≤ 3 (see also [2, Proposition 6.5]). This completes the proof.

We are ready to derive Theorem 3.14.

Proof of Theorem 3.14: It clearly suffices to prove the assertion of the theorem for n = 3. Let F denote the subgraph of S2(R, 3) induced by the pairs (U, W) of subspaces of R^3 satisfying dim(U) = dim(W) = 1. By Proposition 2.5, for every graph G with minrkR(Ḡ) ≤ 3, there exists a homomorphism from G to F and thus χ(G) ≤ χ(F). By Theorem 3.15, the chromatic number of a graph G with minrkR(Ḡ) ≤ 3 can be arbitrarily large, hence χ(F) = ∞. Since F is a subgraph of S2(R, 3), this yields that χ(S2(R, 3)) = ∞, as required.

3.3 Index Coding

In this section, we study the optimal length of (not necessarily linear) index codes for the complement of underlying graphs of line digraphs. Recall Definition 2.7.
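To make the index-coding setup concrete (a sketch assuming Definition 2.7 follows the standard formulation, matching the encoder E : Σ^n → Σ^k and per-receiver decoders that appear in the proofs of this section): in the extreme case where every receiver knows all message entries except its own, broadcasting the XOR of the message bits is an index code of length 1 over Σ = {0, 1}. The function names below are ours:

```python
from functools import reduce
from itertools import product
from operator import xor

def encode(x):
    # Broadcast a single symbol: the XOR of all message bits.
    return reduce(xor, x)

def decode(i, y, x):
    # Receiver i sees y = encode(x) and, as side information, every
    # entry except its own (the complete side-information graph).
    side_information = [b for j, b in enumerate(x) if j != i]
    return y ^ reduce(xor, side_information)

n = 4
for x in product([0, 1], repeat=n):
    y = encode(x)
    assert all(decode(i, y, x) == x[i] for i in range(n))
```

With less side information, longer broadcasts are needed; quantifying that length is exactly what the results below address.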
We start by presenting an argument of Langberg and Sprintson [30, Theorem 4(a)] that relates the chromatic number of a graph to the length of an index code for its complement. In fact, we slightly modify their argument to obtain the improved bound stated below (with 2^(|Σ|^k) rather than |Σ|^(|Σ|^k) in the statement of the result).

Proposition 3.16. Let Σ be an alphabet of size at least 2, and let G be a graph. If there exists an index code for Ḡ over Σ of length k, then χ(G) ≤ 2^(|Σ|^k).

Proof: Assume without loss of generality that {0, 1} ⊆ Σ. Put G = (V, E) and n = |V|. Suppose that there exists an index code for Ḡ over Σ of length k, and let E : Σ^n → Σ^k and gi : Σ^(k+|NḠ(i)|) → Σ for i ∈ V denote the corresponding encoding and decoding functions.

For every vertex i ∈ V, we define a function hi : Σ^k → {0, 1} that determines for a given encoded message y ∈ Σ^k whether gi returns 0 on y when all the symbols of the side information of the ith receiver are zeros. Formally speaking, for every y ∈ Σ^k, we define hi(y) = 0 if gi(y, 0, . . . , 0) = 0, and hi(y) = 1 otherwise.

We claim that the assignment of the function hi to each vertex i ∈ V forms a proper coloring of G. To see this, let i and j be adjacent vertices in G. Let x ∈ Σ^n denote the vector with 1 in the ith entry and 0 everywhere else, and put y = E(x). By the correctness of the decoding functions, it follows that gi(y, x|NḠ(i)) = xi = 1 whereas gj(y, x|NḠ(j)) = xj = 0.
Since i and j are adjacent in G, they are not adjacent in Ḡ, hence all the symbols in the side information x|NḠ(i) of i and in the side information x|NḠ(j) of j are zeros. This implies that gi(y, 0, . . . , 0) = 1 and gj(y, 0, . . . , 0) = 0, and therefore hi(y) = 1 and hj(y) = 0, which yields that hi ≠ hj, as required. Finally, observe that the number of distinct functions hi : Σ^k → {0, 1} for i ∈ V does not exceed 2^(|Σ|^k), implying that χ(G) ≤ 2^(|Σ|^k).

We proceed by proving an analogue of Proposition 3.16 for line digraphs.

Theorem 3.17. Let Σ be an alphabet of size at least 2, let G be a graph, and let H be the underlying graph of the digraph δG. If there exists an index code for H̄ over Σ of length k, then χ(G) ≤ 2^(|Σ|^k).

Proof: Assume without loss of generality that {0, 1} ⊆ Σ. Put G = (VG, EG), H = (VH, EH), and n = |VH|. Recall that the vertices of H are the ordered pairs of adjacent vertices in G, hence n = 2 · |EG|. Suppose that there exists an index code for H̄ over Σ of length k, and let E : Σ^n → Σ^k and g(u,v) : Σ^(k+|NH̄(u,v)|) → Σ for (u, v) ∈ VH denote the corresponding encoding and decoding functions.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' For every vertex v ∈ VG, we define a function hv : Σk → {0, 1} that determines for a given encoded message y ∈ Σk whether every function g(u,v) associated with a vertex (u, v) ∈ VH returns 0 on y when all the symbols in the side information of the receiver of the vertex (u, v) are zeros.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Formally speaking, for every y ∈ Σk, we define hv(y) = 0 if for every u ∈ VG with (u, v) ∈ VH, it holds that g(u,v)(y, 0, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' , 0) = 0, and hv(y) = 1 otherwise.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' We claim that the assignment of the function hv to each vertex v ∈ VG forms a proper coloring of G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' To see this, let v1 and v2 be adjacent vertices in G, and notice that (v1, v2) is a vertex of H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Let x ∈ Σn denote the vector with 1 in the entry of (v1, v2) and 0 everywhere else, and put y = E(x).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' We first claim that hv1(y) = 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' To see this, consider any vertex (u, v1) ∈ VH, and notice that (u, v1) and (v1, v2) are adjacent in H and are thus not adjacent in H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' By the correctness of the decoding function g(u,v1), it follows that g(u,v1)(y, x|NH(u,v1)) = x(u,v1) = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Since (u, v1) and (v1, v2) are not adjacent in H, all the symbols in the side information x|NH(u,v1) of the vertex (u, v1) are zeros.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' We thus obtain that for every vertex u ∈ VG with (u, v1) ∈ VH, it holds that g(u,v1)(y, 0, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' , 0) = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' By the definition of hv1, it follows that hv1(y) = 0, as required.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' We next claim that hv2(y) = 1.' 
To see this, observe that by the correctness of the decoding function g(v1,v2), it follows that g(v1,v2)(y, x|NH(v1,v2)) = x(v1,v2) = 1. It further holds that all the symbols in the side information x|NH(v1,v2) of the vertex (v1, v2) are zeros. By the definition of hv2, it follows that hv2(y) = 1, as required.

We obtain that every two adjacent vertices v1 and v2 in G satisfy hv1 ≠ hv2. Since the number of functions hv : Σ^k → {0, 1} for v ∈ VG does not exceed 2^(|Σ|^k), it follows that χ(G) ≤ 2^(|Σ|^k), and we are done.

4 Hardness Results

In this section, we prove our hardness results for the orthogonality dimension and for minrank. We also suggest a potential avenue for proving hardness results for the general index coding problem over a constant-size alphabet. The starting point of our hardness proofs is the following theorem of Wrochna and Živný [40].
Recall that the function b : N → N is defined by b(n) = (n choose ⌊n/2⌋).

Theorem 4.1 ([40]). For every integer k ≥ 4, it is NP-hard to decide whether a given graph G satisfies χ(G) ≤ k or χ(G) ≥ b(k).

Our hardness results for the orthogonality dimension and the minrank parameter over finite fields are given by the following theorem, which confirms Theorem 1.2.

Theorem 4.2. There exists a function f : N → N satisfying f(k) = (1 − o(1)) · √b(k) such that for every finite field F and for every sufficiently large integer k, the following holds.

1. It is NP-hard to decide whether a given graph G satisfies ξF(G) ≤ k or ξF(G) ≥ (1/√(log |F|)) · f(k).

2. It is NP-hard to decide whether a given graph G satisfies minrkF(G) ≤ k or minrkF(G) ≥ (1/√(2 · log |F|)) · f(k).

Proof: Fix a finite field F of size q. We start by proving the first item of the theorem. For an integer k ≥ 4, consider the problem of deciding whether a given graph G satisfies χ(G) ≤ b(k) or χ(G) ≥ b(b(k)), whose NP-hardness follows from Theorem 4.1. To obtain our hardness result on the orthogonality dimension over F, we reduce from this problem. Consider the reduction that given an input graph G produces and outputs the underlying graph H of the digraph δG.
This reduction can clearly be implemented in polynomial time (in fact, in logarithmic space). To prove the correctness of the reduction, we analyze the orthogonality dimension of H over F. If G is a YES instance, that is, χ(G) ≤ b(k), then by combining Claim 2.6 with Theorem 3.2, it follows that ξF(H) ≤ χ(H) ≤ k. If G is a NO instance, that is, χ(G) ≥ b(b(k)), then by Theorem 3.5, it follows that

ξF(H) ≥ √(log_q χ(G)) ≥ √(log_q b(b(k))) = ((1 − o(1))/√(log q)) · √b(k),

where the o(1) term tends to 0 when k tends to infinity. Note that we have used here the fact that b(n) = Θ(2^n/√n). By letting k be any sufficiently large integer, the proof of the first item of the theorem is completed.
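The asymptotic equality in the NO-instance bound can be expanded as follows (our rendering of the arithmetic left implicit in the proof, using only the stated fact b(n) = Θ(2^n/√n)):

```latex
% Since b(m) = \Theta(2^m/\sqrt{m}), taking logarithms gives
% \log_2 b(m) = m - \tfrac{1}{2}\log_2 m - O(1) = (1 - o(1))\, m.
\begin{align*}
\sqrt{\log_q b(b(k))}
  &= \sqrt{\frac{\log_2 b(b(k))}{\log_2 q}}
   = \sqrt{\frac{b(k) - \tfrac{1}{2}\log_2 b(k) - O(1)}{\log_2 q}}
   = \frac{1 - o(1)}{\sqrt{\log q}} \cdot \sqrt{b(k)}.
\end{align*}
```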
The proof of the second item of the theorem is similar. To avoid repetitions, we briefly mention the needed changes in the proof. First, to obtain a hardness result for the minrank parameter, the reduction has to output the complement H̄ of the graph H rather than H itself. Second, in the analysis of the NO instances, one has to apply Theorem 3.13 instead of Theorem 3.5 to obtain that

minrkF(H̄) ≥ √((1/2) · log_q χ(G)) ≥ √((1/2) · log_q b(b(k))) = ((1 − o(1))/√(2 · log q)) · √b(k).

This completes the proof of the theorem.

As an immediate corollary of Theorem 4.2, we obtain the following.

Corollary 4.3. For every finite field F, the following holds.

1. It is NP-hard to approximate ξF(G) for a given graph G to within any constant factor.

2. It is NP-hard to approximate minrkF(G) for a given graph G to within any constant factor.

We next prove a hardness result for the orthogonality dimension over the reals, confirming Theorem 1.1.

Theorem 4.4.
There exists a function f : N → N satisfying f(k) = Θ(√(b(k)/k)) such that for every sufficiently large integer k, it is NP-hard to decide whether a given graph G satisfies ξR(G) ≤ k or ξR(G) ≥ f(k).

Proof: As in the proof of Theorem 4.2, for an integer k ≥ 4, we reduce from the problem of deciding whether a given graph G satisfies χ(G) ≤ b(k) or χ(G) ≥ b(b(k)), whose NP-hardness follows from Theorem 4.1. Consider the polynomial-time reduction that given an input graph G produces and outputs the underlying graph H of the digraph δG. To prove the correctness of the reduction, we analyze the orthogonality dimension of H over R. If G is a YES instance, that is, χ(G) ≤ b(k), then by combining Claim 2.6 with Theorem 3.2, it follows that ξR(H) ≤ χ(H) ≤ k.
If G is a NO instance, that is, χ(G) ≥ b(b(k)), then by Theorem 3.7 combined with the fact that b(n) = Θ(2^n/√n), it follows that

ξR(H) ≥ c · √(log b(b(k)) / log log b(b(k))) = Θ(√(b(k)/k)),

where c is an absolute positive constant. This completes the proof of the theorem.

As an immediate corollary of Theorem 4.4, we obtain the following.

Corollary 4.5. It is NP-hard to approximate ξR(G) for a given graph G to within any constant factor.

We end this section with a statement that might be useful for proving NP-hardness results for the general index coding problem. Consider the following definition.
Definition 4.6. For an alphabet Σ and for two integers k1 < k2, let Index-CodingΣ(k1, k2) denote the problem of deciding whether the minimal length of an index code for a given graph G over Σ is at most k1 or at least k2.

We prove the following result.

Theorem 4.7. Let Σ be an alphabet of size at least 2, and let k1, k2 be two integers. Then, there exists a polynomial-time reduction from the problem of deciding whether a given graph G satisfies χ(G) ≤ b(k1) or χ(G) ≥ k2 to Index-CodingΣ(k1, log_|Σ| log k2).

Proof: Consider the polynomial-time reduction that given an input graph G produces the underlying graph H of the digraph δG and outputs its complement H̄.
For correctness, suppose first that G is a YES instance, that is, χ(G) ≤ b(k1). Then, by combining Claim 2.6 with Theorem 3.2, it follows that minrkF2(H̄) ≤ χ(H) ≤ k1. By Proposition 3.16, it further follows that there exists a linear index code for H̄ over F2 of length k1. In particular, using |Σ| ≥ 2, there exists an index code for H̄ over the alphabet Σ of length k1. Suppose next that G is a NO instance, that is, χ(G) ≥ k2. By Theorem 3.17, it follows that the length of any index code for H̄ over Σ is at least log_|Σ| log k2, so we are done.
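As a purely numerical illustration of the parameters in Theorem 4.7 (a sketch of ours, not part of the paper; the function names are our own), the following computes b(n) = (n choose ⌊n/2⌋) and the NO-side length bound log_|Σ| log k2:

```python
from math import comb, log

def b(n: int) -> int:
    # b(n) = binomial(n, floor(n/2)); grows like 2^n / sqrt(n)
    return comb(n, n // 2)

def no_side_length(k2: int, sigma_size: int) -> float:
    # Lower bound log_|Sigma| log k2 on the index-code length for NO instances
    return log(log(k2)) / log(sigma_size)

print([b(n) for n in range(4, 9)])        # [6, 10, 20, 35, 70]
print(no_side_length(2 ** (2 ** 10), 2))  # roughly 9.47
```

This makes concrete how fast the chromatic-number gap must grow before the reduction yields a nontrivial index-coding gap: the bound on the NO side is only doubly logarithmic in k2.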
Theorem 4.7 implies that in order to prove the NP-hardness of the general index coding problem over some finite alphabet Σ of size at least 2, it suffices to prove for some integer k that it is NP-hard to decide whether a given graph G satisfies χ(G) ≤ b(k) or χ(G) > 2^(|Σ|^k).

Acknowledgements

We thank the anonymous reviewers for their helpful comments.

References

[1] N. Alon. The Shannon capacity of a union. Combinatorica, 18(3):301–310, 1998.
[2] I. Attias and I. Haviv. Local orthogonality dimension. arXiv, abs/2110.00718, 2021.
[3] Z. Bar-Yossef, Y. Birk, T. S. Jayram, and T. Kol. Index coding with side information. IEEE Trans. Inform. Theory, 57(3):1479–1494, 2011. Preliminary version in FOCS'06.
[4] L. Barto, J. Bulín, A. A. Krokhin, and J. Opršal. Algebraic approach to promise constraint satisfaction. J. ACM, 68(4):28:1–28:66, 2021. Preliminary version in STOC'19.
[5] A. Bhangale. NP-hardness of coloring 2-colorable hypergraph with poly-logarithmically many colors. In Proc. of the 45th International Colloquium on Automata, Languages, and Programming (ICALP'18), pages 15:1–15:11, 2018.
[6] J. Brakensiek and V. Guruswami. New hardness results for graph and hypergraph colorings. In Proc. of the 31st Conference on Computational Complexity (CCC'16), pages 14:1–14:27, 2016.
[7] E. Chlamtáč and I. Haviv. Linear index coding via semidefinite programming. Combinatorics, Probability & Computing, 23(2):223–247, 2014. Preliminary version in SODA'12.
[8] S. H. Dau, V. Skachek, and Y. M. Chee. Optimal index codes with near-extreme rates. IEEE Trans. Inform. Theory, 60(3):1515–1527, 2014. Preliminary version in ISIT'12.
[9] R. de Wolf. Quantum Computing and Communication Complexity. PhD thesis, Universiteit van Amsterdam, 2001.
[10] I. Dinur, E. Mossel, and O. Regev. Conditional hardness for approximate coloring. SIAM J. Comput., 39(3):843–873, 2009. Preliminary version in STOC'06.
[11] I. Dinur and I. Shinkar. On the conditional hardness of coloring a 4-colorable graph with super-constant number of colors. In Proc. of the 13th International Workshop on Approximation Algorithms for Combinatorial Optimization Problems (APPROX'10), pages 138–151, 2010.
[12] P. Erdős, Z. Füredi, A. Hajnal, P. Komjáth, V. Rödl, and Á. Seress. Coloring graphs with locally few colors. Discrete Mathematics, 59(1–2):21–34, 1986.
[13] P. Erdős and A. Hajnal. On chromatic number of infinite graphs. In Theory of Graphs, Proc. Colloq., Tihany, pages 83–98.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Academic Press, 1966.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' [14] Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' F¨uredi, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Hajnal, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' R¨odl, and W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Trotter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Interval orders and shift graphs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' In Sets, Graphs and Numbers, volume 60 of Colloq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Soc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' J´anos Bolyai, pages 297–313.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' 1991.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' [15] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Garey and D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Johnson.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' The complexity of near-optimal graph coloring.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' ACM, 23(1):43–49, 1976.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' [16] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Golovnev and I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Haviv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' The (generalized) orthogonality dimension of (generalized) Kneser graphs: Bounds and applications.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Theory of Computing, 18(22):1–22, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Preliminary version in CCC’21.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' [17] V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Guruswami and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Sandeep.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' d-To-1 hardness of coloring 3-colorable graphs with O(1) colors.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' In Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' of the 47th International Colloquium on Automata, Languages, and Programming, (ICALP’20), pages 62:1–62:12, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' [18] W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Haemers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' On some problems of Lov´asz concerning the Shannon capacity of a graph.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' IEEE Trans.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Inform.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Theory, 25(2):231–232, 1979.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' [19] W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Haemers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' An upper bound for the Shannon capacity of a graph.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' In L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Lov´asz and V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' S´os, editors, Algebraic Methods in Graph Theory, volume 25/I of Colloquia Mathematica Societatis J´anos Bolyai, pages 267–272.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Bolyai Society and North-Holland, 1981.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' [20] F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Harary and R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Norman.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Some properties of line digraphs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Rend.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Circ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Mat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Palermo, 9(2):161–168, 1960.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' [21] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Harner and R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' C.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Entringer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' On the arc-chromatic number of a digraph.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Comb.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Theory, Ser.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' B, 13(3):219–225, 1972.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' [22] I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Haviv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Approximating the orthogonality dimension of graphs and hypergraphs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' In Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' of the 44th International Symposium on Mathematical Foundations of Computer Science (MFCS’19), pages 39:1–39:15, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' [23] P.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Hell and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Neˇsetˇril.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' On the complexity of H-coloring.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Comb.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Theory, Ser.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' B, 48(1):92–110, 1990.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' [24] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Huang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Improved hardness of approximating chromatic number.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' In Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' of the 16th In- ternational Workshop on Approximation Algorithms for Combinatorial Optimization Problems (AP- PROX’13), pages 233–243, 2013.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' 19 [25] G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Kalai.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Analogues for Sperner and Erd¨os-Ko-Rado theorems for subspaces of linear spaces.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' In P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Hammer, editor, Combinatorics 79, volume 9 of Annals of Discrete Math.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=', page 135.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Elsevier, 1980.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' [26] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Karp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Reducibility among combinatorial problems.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' In Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' of a Symposium on the Complexity of Computer Computations, pages 85–103, 1972.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' [27] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Kawarabayashi and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Thorup.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Coloring 3-colorable graphs with less than n1/5 colors.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' ACM, 64(1):4:1–4:23, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Preliminary versions in FOCS’12 and STACS’14.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' [28] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Khanna, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Linial, and S.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Safra.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' On the hardness of approximating the chromatic number.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Combinatorica, 20(3):393–415, 2000.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Preliminary version in ISTCS’93.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' [29] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Khot.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Improved inaproximability results for maxclique, chromatic number and approx- imate graph coloring.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' In Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' of the 42nd Symposium on Foundations of Computer Science (FOCS’01), pages 600–609, 2001.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' [30] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Langberg and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Sprintson.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' On the hardness of approximating the network coding capac- ity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' IEEE Trans.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Inform.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Theory, 57(2):1008–1014, 2011.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Preliminary version in ISIT’08.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' [31] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Lov´asz.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Flats in matroids and geometric graphs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' In Combinatorial surveys: Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' of the 6th British Comb.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Conf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=', Royal Holloway Coll.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=', pages 45–86.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Academic Press, 1977.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' [32] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Lov´asz.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' On the Shannon capacity of a graph.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' IEEE Trans.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Inform.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Theory, 25(1):1–7, 1979.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' [33] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Lov´asz.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Graphs and Geometry, volume 65.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Colloquium Publications, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' [34] L.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Lov´asz, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Saks, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Schrijver.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Orthogonal representations and connectivity of graphs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Linear Algebra Appl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=', 114–115:439–454, 1989.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Special Issue Dedicated to Alan J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Hoffman.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' [35] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Peeters.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Orthogonal representations over finite fields and the chromatic number of graphs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Combinatorica, 16(3):417–431, 1996.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' [36] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Poljak and V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' R¨odl.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' On the arc-chromatic number of a digraph.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Comb.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Theory, Ser.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' B, 31(2):190–198, 1981.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' [37] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Shanmugam, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Dimakis, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Langberg.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Local graph coloring and index coding.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' In Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' of the IEEE International Symposium on Information Theory (ISIT’13), pages 1152–1156, 2013.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' [38] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Shannon.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' The zero error capacity of a noisy channel.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Institute of Radio Engineers, Trans.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Inform.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Theory, IT-2:8–19, 1956.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' [39] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Stahl.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' n-tuple colorings and associated graphs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Comb.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Theory, Ser.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' B, 20(2):185–203, 1976.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' [40] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Wrochna and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' ˇZivn´y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' Improved hardness for H-colourings of G-colourable graphs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' In Proc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' of the 31st Annual ACM-SIAM Symposium on Discrete Algorithms (SODA’20), pages 1426– 1435, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/09AyT4oBgHgl3EQf1PmO/content/2301.00732v1.pdf'} +page_content=' [41] D.' 
arXiv:2301.12133v1 [gr-qc] 28 Jan 2023

The first variation of the matter energy-momentum tensor with respect to the metric, and its implications on modified gravity theories

Zahra Haghani,1, ∗ Tiberiu Harko,2, 3, 4, † and Shahab Shahidi1, ‡
1School of Physics, Damghan University, Damghan, 41167-36716, Iran
2Department of Physics, Babes-Bolyai University, 1 Kogalniceanu Street, 400084 Cluj-Napoca, Romania
3Department of Theoretical Physics, National Institute of Physics and Nuclear Engineering (IFIN-HH), Bucharest, 077125, Romania
4Astronomical Observatory, 19 Ciresilor Street, 400487 Cluj-Napoca, Romania
(Dated: January 31, 2023)

The first order variation of the matter
energy-momentum tensor T_{µν} with respect to the metric tensor g^{αβ} plays an important role in modified gravity theories with geometry-matter coupling, and in particular in the f(R, T) modified gravity theory. We obtain the expression of the variation δT_{µν}/δg^{αβ} for baryonic matter described by an equation of state given in parametric form, with the basic thermodynamic variables represented by the particle number density and by the specific entropy, respectively. The first variation of the matter energy-momentum tensor turns out to be independent of the matter Lagrangian, and can be expressed in terms of the pressure, the energy-momentum tensor itself, and the matter fluid four-velocity. We apply the obtained results to the case of the f(R, T) gravity theory, where R is the Ricci scalar and T is the trace of the matter energy-momentum tensor, which thus becomes a unique theory, also independent of the choice of the matter Lagrangian. A simple cosmological model, in which the Hilbert-Einstein Lagrangian is generalized through the addition of a term proportional to T^n, is considered in detail, and it is shown that it gives a very good description of the observational values of the Hubble parameter up to a redshift of z ≈ 2.5.

PACS numbers: 04.50.+h, 04.20.Cv, 95.35.+d

I. INTRODUCTION

There are at least three theoretical perspectives [1] that could be used to explain the large body of recent observations which strongly suggest an ever faster expanding Universe [2, 3], in which ordinary matter represents only about 5% of the total composition, the rest being dark energy and dark matter [3, 4]. The first point of view is represented by the dark constituents theory, which adds two more components to the total energy-momentum tensor of the Universe, representing dark matter and dark energy, respectively.
Therefore the cosmological dynamics is described by the field equation

G_{µν} = κ²T^{bar}_{µν} + κ²T^{DM}_{µν}(φ, ψ_µ, ...) + κ²T^{DE}_{µν}(φ, ψ_µ, ...),

where T^{bar}_{µν}, T^{DM}_{µν}(φ, ψ_µ, ...), and T^{DE}_{µν}(φ, ψ_µ, ...) represent the energy-momentum tensors of baryonic matter, dark matter, and dark energy, respectively, with φ and ψ_µ representing scalar or vector fields. A well studied dark constituent model is represented by the quintessence (scalar field) description of dark energy [5, 6].

In the dark geometry approach, an exclusively geometric attitude towards the gravitational phenomena is adopted, by explaining the cosmological dynamics through the modification of the geometry underlying the Einstein field equations. Hence, the extended Einstein equations become in this approach

G_{µν} = κ²T^{bar}_{µν} + κ²T^{(geom)}_{µν}(g_{µν}, R, □R, ...),

where T^{bar}_{µν} is the energy-momentum tensor of ordinary matter, and T^{(geom)}_{µν}(g_{µν}, R, □R, ...) is a purely geometric term, obtained from the metric, the torsion τ, the nonmetricity Q, extensions of Riemann geometry etc., which can effectively mimic dark energy, dark matter, or both. Some typical examples of dark geometric theories are the f(R) [7], f(Q) [8], and hybrid metric-Palatini gravity [9] theories, or gravitational theories based on the Weyl-Cartan-Weitzenböck [10], Weyl [11, 12], and Finsler geometries [13, 14].

∗ z.haghani@du.ac.ir
† tiberiu.harko@aira.astro.ro
‡ s.shahidi@du.ac.ir

The third avenue for the understanding of the gravitational and cosmological phenomena is represented by the dark coupling approach, in which the standard Einstein gravitational equations are generalized to take the mathematical form

G_{µν} = κ²T_{µν} + κ²T^{(coup)}_{µν}(R, L_m, T, □R, □T, ...),

where the effective energy-momentum tensor T^{(coup)}_{µν}(g_{µν}, R, L_m, T, □R, □T, ...)
of the theory is built up by considering the maximal extension of the Hilbert-Einstein Lagrangian, by abandoning its additive structure in matter and geometry. In the dark coupling approach, matter is represented either by the trace T of the matter energy-momentum tensor, by the matter Lagrangian L_m, or by some scalar built from T_{µν}, such as T_{µν}T^{µν}.

The dark coupling approach is also a theoretical answer to the problem of the maximal extension of the additive Hilbert-Einstein Lagrangian, which automatically implies a non-additive structure of the action in the geometric and matter variables. In a general form, the requirement of the maximal extension of the gravitational action can be implemented by assuming that the Lagrangian of the gravitational field is an arbitrary function of the curvature scalar R and of the matter Lagrangian L_m. One of the interesting features of the dark coupling models is that they imply the presence of a nonminimal geometry-matter coupling. Dark couplings are not restricted to Riemannian geometry, but can also be considered in the framework of the extensions of Riemann geometry. Typical examples of dark coupling theories are the f(R, L_m) [15, 16], f(R, T) [17], f(R, T, R_{µν}T^{µν}) [18], f(τ, T) [19], f(Q, T) [20], and f(R, T, Q, T_m) [21] theories. Other gravitational theories implying geometry-matter coupling have been considered in [22-27].

One of the interesting consequences of the dark coupling theories is the reconsideration of the role of the ordinary (baryonic) matter in the cosmological dynamics. Through its coupling to gravity, matter becomes a key element in the explanation of cosmic dynamics, and recovers its central role in gravity, a role which is minimized or even neglected in the dark constituents and dark geometric type theories.
An important implication of the geometry-matter coupling is that the matter energy-momentum tensor is generally not conserved, and thus an extra-force is generated, acting on massive particles moving in a gravitational field, with the particles following non-geodesic paths [16, 17]. The possibility of the existence of such couplings between matter and geometry has opened interesting and novel pathways for the study of gravitational phenomena [28].

However, the dependence of the gravitational action in the dark coupling theories on L_m gives a new relevance to the old problem of the degeneracy of the matter Lagrangian. Two physically inequivalent expressions of the matter Lagrangian, L_m = −ρ and L_m = P, lead to the same energy-momentum tensor for matter. This result has important implications for dark coupling gravity models. For example, in the framework of the f(R, L_m) theory, it was shown in [29] that, adopting for the Lagrangian density the expression L_m = p, where p is the pressure, in the case of dust the extra force vanishes. However, for the form L_m = ρ of the matter Lagrangian, the extra-force does not vanish [30]. In [31] it was shown, by using the variational formulation for the derivation of the equations of motion, that both the matter Lagrangian and the energy-momentum tensor are uniquely and completely determined by the form of the geometry-matter coupling. Therefore, the extra-force never vanishes, as a consequence of the thermodynamic properties of the system. In [32] it was shown that if the particle number is conserved, the Lagrangian of a barotropic perfect fluid with P = P(ρ) is

L_m = −ρ [c² + ∫ P(ρ)/ρ² dρ],

where ρ is the rest mass density. This result can be used successfully in the study of the modified theories of gravity.
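As a worked special case of the barotropic result of [32] (our own illustration, not taken from the Letter): for a polytropic equation of state P(ρ) = Kρ^γ with γ ≠ 1, the integral closes,

```latex
L_m = -\rho\left[c^2 + \int \frac{P(\rho)}{\rho^2}\,d\rho\right]
    = -\rho\left[c^2 + \frac{K\rho^{\gamma-1}}{\gamma-1}\right]
    = -\left(\rho c^2 + \frac{P}{\gamma-1}\right),
```

i.e. minus the total (rest-mass plus internal) energy density, the integration constant being absorbed into c²; for dust (P = 0) this reduces to L_m = −ρc².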
The result is based on the assumption that the Lagrangian does not depend on the derivatives of the metric, and that the particle number of the fluid is a conserved quantity, ∇_µ(ρu^µ) = 0. The matter Lagrangian also plays an important role in the f(R, T) theory of gravity [17].

In theories with geometry-matter coupling another important quantity, the variation of the energy-momentum tensor with respect to the metric, does appear, and plays an important role. The corresponding second order tensor is denoted 𝒯_{µν}, and it is introduced via the definition [17]

𝒯_{µν} ≡ g^{ρσ} δT_{ρσ}/δg^{µν}.

If the matter Lagrangian does not depend on the derivatives of the metric, one can obtain for 𝒯_{µν} a mathematical expression that also contains the second variation of the matter Lagrangian with respect to the metric, δ²L_m/δg^{µν}δg^{αβ}. The Lagrangian of the electromagnetic field is quadratic in the components of the metric tensor, and hence its second variation gives a non-zero contribution to 𝒯_{µν}. However, the case of ordinary baryonic matter is more complicated. At first sight, by taking into account the explicit forms of the matter Lagrangians, L_m = −ρ or L_m = p, no explicit dependence on the metric does appear, as opposed, for example, to the case of the electromagnetic field. This would suggest that the second variation of the matter Lagrangian always identically vanishes, no matter what its functional form is. This conclusion may indeed be valid for some special forms of the equation of state, but it is not correct if one adopts a general thermodynamic description of the baryonic fluids.

It is the goal of the present Letter to investigate the problem of the second variation of the perfect fluid matter Lagrangian with respect to the metric tensor components, and to analyze its impact on modified gravity theories.
As a first step in our analysis we obtain, from general thermodynamic considerations, the expressions of the variations with respect to the metric of the baryonic matter energy density and pressure. Once these expressions are known, a straightforward calculation, involving the computation of the second variation of the energy density and pressure, gives the first variation of the matter energy-momentum tensor with respect to the metric, which also allows us to obtain the tensor 𝒯_{µν}. The basic result of our investigation is that the tensor 𝒯_{µν} is independent of the choice of the matter Lagrangian. The effect of the second order correction is estimated in a cosmological background. As a specific example we will concentrate on the f(R, T) gravity theory, in which the tensor 𝒯_{µν} plays an important role.

The present Letter is organized as follows. The general thermodynamic formalism used for the calculation of the second variation of the matter Lagrangian is discussed in Section II. The general expression for the second variation of the matter Lagrangian, and of the variation of the energy-momentum tensor, is presented in Section III. Some cosmological applications of the obtained results are presented in Section III A. We then briefly review the basics of the f(R, T) gravity theory in Section IV and outline its cosmological implications for the simple choice f(R, T) = α|T|^n. Finally, we discuss and conclude our results in Section V.

II. THERMODYNAMICS AND GEOMETRY

In order to obtain the second variation of the baryonic matter Lagrangian, it is necessary to review the derivation of its first variation using thermodynamic considerations. The first law of thermodynamics is given by

dU = T dS − P dV + µ dN,  (1)

where U is the total energy, µ is the chemical potential, related to the change in the number of particles in the system, N is the particle number, and V is the volume enclosing the fluid.
An important thermodynamic relation is the Gibbs-Duhem equation,

U = TS − PV + µN,  (2)

which follows from the extensivity of the energy, U(λX) = λU(X), where λ is a constant, and from Euler's theorem on homogeneous functions.

Let us define the particle number density n = N/V and the entropy per particle s = S/N. The first law of thermodynamics (1) and the Gibbs-Duhem relation (2) can then be rewritten as [33, 34]

dρ = Tn ds + µ′ dn,  (3)
ρ = µ′n − P,  (4)

where µ′ = µ + Ts, and we have defined the energy density as ρ = U/V. (Eq. (4) follows from dividing Eq. (2) by V: ρ = U/V = T(S/V) − P + µ(N/V) = Tsn − P + µn = nµ′ − P.) Also, by taking the differential of the Gibbs-Duhem relation (2) we obtain

dU = T dS + S dT − P dV − V dP + N dµ + µ dN,

and using the first law of thermodynamics (1), one obtains

dP = ns dT + n dµ = n dµ′ − nT ds,  (5)

implying that ρ = ρ(s, n) and P = P(µ′, s).

Now, we define the particle number flux

J^µ = √−g n u^µ,  (6)

and the Taub current [34]

V_µ = µ′u_µ,  (7)

where u^µ is the fluid 4-velocity, and n, the particle number density, can be obtained according to the relation

n = √(g_{µν}J^µJ^ν / g).  (8)

With the above definitions, one obtains

J ≡ √(−J^µJ_µ) = √−g n,  J^µ = Ju^µ,  (9)
V ≡ √(−V^µV_µ) = µ′,  V^µ = Vu^µ.  (10)

In the context of general relativity, it is well known that there are two equivalent baryonic matter Lagrangians, corresponding to

L_m = −ρ,  L_m = P.  (11)

It should be noted that, from the definition of the energy-momentum tensor as

T_{µν} = −(2/√−g) δ(√−g L_m)/δg^{µν},  (12)

both Lagrangians in Eq. (11) give the same result,

T_{µν} = (ρ + P)u_µu_ν + Pg_{µν}.  (13)

As a next step in our study, we introduce the basic assumptions that the variations of the entropy per particle s and of the ordinary matter number flux vector density J^µ = nu^µ√−g satisfy the two independent constraints [35]

δs = 0,  (14)

and

δJ^µ = 0,  (15)

respectively. Hence, in the following we impose the restriction that the entropy and particle production rates remain unchanged during the dynamical evolution.
Therefore, the entropy and particle number currents satisfy the conservation equations δ(J^µ∂_µs) = 0 and ∇_µ(nu^µ) = 0, respectively. The first of these relations is obtained by taking the gradient of Eq. (14), contracting the obtained expression with J^µ, and using Eq. (15).

By taking the variation of the particle number n, with the use of the assumptions previously introduced, we find [35]

δn = (n/2)(−g) u^µu^ν [δg_{µν}/g − (g_{µν}/g²) δg] = (n/2)(u_µu_ν + g_{µν}) δg^{µν}.  (16)

In order to obtain the variation of the energy-momentum tensor, we need to find the variations of the energy density and pressure with respect to the metric, namely δρ/δg^{µν} and δP/δg^{µν}, respectively. In the case of isentropic processes, we have

δρ = [(ρ + P)/n] δn,  (17)
δP = n δµ′.  (18)

Let the equation of state of the matter be given as ρ = ρ(n, s). Then, since δs = 0, from the thermodynamic relation (∂ρ/∂n)_s = w = (ρ + P)/n, we obtain δρ = w δn.

The variation of n is given by Eq. (16), while the variation of µ′ can be obtained from Eq. (10) as

δµ′ = δV = −(V_µV_ν/2V) δg^{µν} = −(1/2)µ′u_µu_ν δg^{µν}.  (19)

These relations give the thermodynamic variations of the energy density and pressure with respect to the metric as

δρ/δg^{µν} = (1/2)(ρ + P)(g_{µν} + u_µu_ν),  (20)
δP/δg^{µν} = −(1/2)(ρ + P)u_µu_ν.  (21)

Eqs. (20) and (21) can also be obtained in a direct way by starting from the definition of the matter energy-momentum tensor, as given by Eq. (12). If the matter Lagrangian does not depend on the derivatives of the metric tensor, from Eq. (12) we obtain

T_{µν} = L_m g_{µν} − 2 δL_m/δg^{µν},  (22)

giving

δL_m/δg^{µν} = (1/2)L_m g_{µν} − (1/2)T_{µν}.  (23)

If we now take L_m = −ρ, from the above equation we find

δ(−ρ)/δg^{µν} = −(1/2)ρ g_{µν} − (1/2)T_{µν} = −(1/2)(ρ + P)(g_{µν} + u_µu_ν),  (24)

where we have used the expression (13) of the energy-momentum tensor. For L_m = P, we obtain

δP/δg^{µν} = (1/2)P g_{µν} − (1/2)T_{µν} = −(1/2)(ρ + P)u_µu_ν.
(25)

Hence, we have recovered the expressions of the variations with respect to the metric of the energy density and of the pressure, previously obtained from first principle thermodynamic considerations.

III. THE FIRST VARIATION OF THE MATTER ENERGY-MOMENTUM TENSOR

Now we have all the necessary tools for computing the second variation of the energy density and of the pressure of a perfect fluid. Taking into account that

δg_{µν} = −g_{µα}g_{νβ} δg^{αβ},  (26)

and

δu_µ/δg^{αβ} = u^ν δg_{µν}/δg^{αβ},  (27)

respectively, one immediately obtains

δ²P/δg^{αβ}δg^{µν} ≡ δ/δg^{αβ} (δP/δg^{µν}) = (1/4)(ρ + P)[g_{µβ}u_αu_ν + g_{µα}u_βu_ν + g_{νβ}u_αu_µ + g_{να}u_βu_µ − (1/2)g_{αβ}u_µu_ν − (1/2)g_{µν}u_αu_β],  (28)

and

δ²(−ρ)/δg^{αβ}δg^{µν} = δ²P/δg^{αβ}δg^{µν} − (1/4)(ρ + P)(g_{αβ}g_{µν} − g_{µα}g_{νβ} − g_{µβ}g_{να}),  (29)

respectively. Here, since the energy density and the pressure are scalars, we expect the second variation to be symmetric with respect to the interchange (αβ) ⇄ (µν); this symmetry has been implemented in the above expressions.

After a little algebra one can obtain from the definition (12), by assuming that the matter Lagrangian does not depend on the derivatives of the metric tensor, the variation of the energy-momentum tensor as

δT_{µν}/δg^{αβ} = (1/2)L_m(g_{αβ}g_{µν} − g_{µα}g_{νβ} − g_{µβ}g_{να}) − (1/2)T_{αβ}g_{µν} − 2 δ²L_m/δg^{αβ}δg^{µν}.  (30)

Therefore, after substituting the expressions of the second variations of the matter Lagrangians, we find the important result that for both baryonic matter Lagrangians in Eq. (11) we obtain

δT_{µν}/δg^{αβ} = (1/2)P(g_{αβ}g_{µν} − g_{µα}g_{νβ} − g_{µβ}g_{να}) − (1/2)T_{αβ}g_{µν} − 2 δ²P/δg^{αβ}δg^{µν},  (31)

implying that the expression of δT_{µν}/δg^{αβ} is independent of the choice of the matter Lagrangian. This is not the case for the approximate result obtained by neglecting the second variation of the matter Lagrangian with respect to the metric,

δT_{µν}/δg^{αβ} ≈ (1/2)L_m(g_{αβ}g_{µν} − g_{µα}g_{νβ} − g_{µβ}g_{να}) − (1/2)T_{αβ}g_{µν},  (32)

which obviously depends on the choice of the Lagrangian density.
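The Lagrangian independence claimed after Eq. (31) can also be checked numerically: plugging the second variations (28)-(29) into Eq. (30) for L_m = −ρ and for L_m = P must give the same four-index array. A minimal sketch of such a check (our own verification script, not part of the Letter; comoving Minkowski frame, array index order µν, αβ):

```python
import numpy as np

rho, P = 1.1, 0.3                              # arbitrary test values
g = np.diag([-1.0, 1.0, 1.0, 1.0])             # comoving Minkowski frame
u = np.array([-1.0, 0.0, 0.0, 0.0])            # covariant four-velocity u_mu
T = (rho + P) * np.outer(u, u) + P * g         # perfect fluid, Eq. (13)

# Eq. (28): second variation of the pressure, indices ordered (mu, nu, alpha, beta)
d2P = 0.25 * (rho + P) * (
    np.einsum('mb,a,n->mnab', g, u, u) + np.einsum('ma,b,n->mnab', g, u, u)
    + np.einsum('nb,a,m->mnab', g, u, u) + np.einsum('na,b,m->mnab', g, u, u)
    - 0.5 * np.einsum('ab,m,n->mnab', g, u, u)
    - 0.5 * np.einsum('mn,a,b->mnab', g, u, u))

# metric combination appearing in Eqs. (29)-(32)
D = (np.einsum('ab,mn->mnab', g, g) - np.einsum('ma,nb->mnab', g, g)
     - np.einsum('mb,na->mnab', g, g))

d2mrho = d2P - 0.25 * (rho + P) * D            # Eq. (29): second variation of -rho

def dT_dg(Lm, d2Lm):
    """delta T_{mu nu} / delta g^{alpha beta}, Eq. (30)."""
    return 0.5 * Lm * D - 0.5 * np.einsum('ab,mn->mnab', T, g) - 2.0 * d2Lm

# the two matter Lagrangians of Eq. (11) give identical first variations
assert np.allclose(dT_dg(-rho, d2mrho), dT_dg(P, d2P))
```

The assertion holds identically, since Eq. (29) shifts the second variation by exactly the term that compensates the change L_m = P → L_m = −ρ in Eq. (30).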
It should be noted at this moment that the energy-momentum tensor, and its variation, should be independent of the choice of the baryonic matter Lagrangian, as we have argued in the previous Section on thermodynamic grounds.

Eq. (31) can also be written in the form

δT_{µν}/δg^{αβ} = (1/2)P(g_{νβ}g_{αµ} + g_{να}g_{βµ}) − (1/2)[T_{αν}g_{µβ} + T_{βν}g_{µα} + T_{αµ}g_{νβ} + T_{βµ}g_{να} − (1/2)T_{µν}g_{αβ} + (1/2)T_{αβ}g_{µν}].  (33)

Also, by defining the modified energy-momentum tensor

T̄_{µν} = (ρ + P)u_µu_ν + (1/2)Pg_{µν},  (34)

one can write the first variation of the energy-momentum tensor as

δT_{µν}/δg^{αβ} = −(1/2)[T̄_{βν}g_{µα} + T̄_{αν}g_{µβ} + T̄_{αµ}g_{νβ} + T̄_{βµ}g_{να} − (1/2)T̄_{µν}g_{αβ} + (1/2)T̄_{αβ}g_{µν}].  (35)

In the well-known f(R, T) gravity theories [17] one encounters the expression g^{µν}δT_{µν}/δg^{αβ}, which enters into the modified field equations. With the result given by Eq. (33), we define

𝒯_{αβ} ≡ g^{µν} δT_{µν}/δg^{αβ} = −(1/4)(12T̄_{αβ} − T̄g_{αβ}),  (36)

where T̄ = −ρ + P. Alternatively, we also have

δT/δg^{αβ} = T_{αβ} + 𝒯_{αβ}.  (37)

In the comoving frame one then obtains

𝒯^µ_ν = (1/4) diag(11ρ + 7P, −ρ − 5P, −ρ − 5P, −ρ − 5P).  (38)

Taking the trace of the above expression, one finds

𝒯 ≡ g^{µν}𝒯_{µν} = 2(ρ − P).  (39)

The approximate results, obtained by neglecting the second variation of the matter Lagrangian, are

𝒯^µ_ν ≈ −(1/2)(ρ + 3P)δ^µ_ν,  (40)

for L_m = −ρ, and

𝒯^µ_ν ≈ (1/2)(ρ − P)δ^µ_ν,  (41)

for L_m = P.

For the approximate result with L_m = −ρ we obtain 𝒯 ≈ −2(ρ + 3P), while for L_m = P we obtain 𝒯 ≈ 2(ρ − P). We thus arrive at the interesting conclusion that the approximate result with L_m = P still gives the correct answer for the trace of the tensor 𝒯_{µν}.

A. Cosmological implications

In order to determine the effect of the new term in the variation of the energy-momentum tensor, let us find its behavior for a conserved matter source in a flat FLRW Universe, with the line element

ds² = −dt² + a²(t)(dx² + dy² + dz²),  (42)

where a is the scale factor.
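Before specializing to cosmology, the comoving-frame form (38) and the trace (39) can be recovered numerically from the definitions (34) and (36) (a quick sanity-check script of ours, not part of the Letter):

```python
import numpy as np

rho, P = 1.3, 0.4                                   # arbitrary test values
g = np.diag([-1.0, 1.0, 1.0, 1.0])                  # comoving Minkowski frame
g_inv = np.linalg.inv(g)
u_low = np.array([-1.0, 0.0, 0.0, 0.0])             # u_mu for u^mu = (1, 0, 0, 0)

# Eq. (34): modified energy-momentum tensor Tbar_{mu nu}
Tbar = (rho + P) * np.outer(u_low, u_low) + 0.5 * P * g
Tbar_trace = np.einsum('mn,mn->', g_inv, Tbar)      # = -rho + P

# Eq. (36): curly T_{alpha beta} = -(1/4)(12 Tbar_{ab} - Tbar g_{ab})
calT = -0.25 * (12.0 * Tbar - Tbar_trace * g)
calT_mixed = g_inv @ calT                           # curly T^mu_nu

# Eq. (38): (1/4) diag(11 rho + 7P, -(rho + 5P), ..., -(rho + 5P))
expected = 0.25 * np.diag([11 * rho + 7 * P] + 3 * [-(rho + 5 * P)])
assert np.allclose(calT_mixed, expected)
assert np.isclose(np.trace(calT_mixed), 2.0 * (rho - P))   # Eq. (39)
```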
In this case one has, for the baryonic matter density ρ_m, assumed to be in the form of dust, the expression

ρ_m = Ω_{m0}/a³,  (43)

where Ω_{m0} is the present time density abundance. For the density of the radiation we have

ρ_r = Ω_{r0}/a⁴.  (44)

Assume that the Universe is filled with dust and radiation, with

ρ = ρ_m + ρ_r = Ω_{m0}/a³ + Ω_{r0}/a⁴,  P = (1/3)ρ_r.  (45)

In this case one obtains

𝒯 = 2Ω_{m0}(1 + z)³ + (4/3)Ω_{r0}(1 + z)⁴,  (46)

where we have introduced the redshift z, defined as

1 + z = 1/a,  (47)

and Ω_{m0} and Ω_{r0} are the current values of the dust and radiation abundances, Ω_{m0} = 0.305 and Ω_{r0} = 5.3 × 10⁻⁵, respectively [36].

In Fig. 1 we have depicted the evolution of the new term 𝒯 as a function of the redshift. As a result, we expect that the new term changes the behavior of the cosmological models in theories in which the first order variation of the energy-momentum tensor with respect to the metric is present in the gravitational field equations. There are major differences as compared with the approximate relation for L_m = −ρ, but the two relations coincide for L_m = P.

IV. f(R, T) GRAVITY

Now let us consider a typical gravitational theory in which the above results can have an important influence. Consider the action [17]

S = ∫ d⁴x √−g [κ²R + f(R, T) + L_m],  (48)

where f(R, T) is an arbitrary function of the Ricci scalar R and of the trace T of the energy-momentum tensor. We suppose that the Universe is filled with a perfect fluid, with the matter energy-momentum tensor having the form (13). The field equations can be obtained as

κ²G_{µν} − (1/2)f g_{µν} + f_R R_{µν} + (g_{µν}□ − ∇_µ∇_ν)f_R = (1/2)T_{µν} − f_T T_{µν} − f_T 𝒯_{µν},  (49)

where the last term is computed as in Eq. (36). It should be noted that, using the correct result Eq. (36), the choice of the matter Lagrangian is irrelevant, both cases L_m = −ρ and L_m = P giving the same field equations.
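For reference, the scalar of Eq. (46), which is what Fig. 1 plots, is straightforward to evaluate (a small script of ours, using the quoted abundances Ω_{m0} = 0.305 and Ω_{r0} = 5.3 × 10⁻⁵):

```python
Om0, Or0 = 0.305, 5.3e-5      # present-day dust and radiation abundances [36]

def calT(z):
    """Trace term of Eq. (46) for the dust + radiation mixture."""
    return 2.0 * Om0 * (1.0 + z) ** 3 + (4.0 / 3.0) * Or0 * (1.0 + z) ** 4

# today radiation is negligible, so calT(0) is essentially 2 * Om0
print(calT(0.0))              # ~ 0.61007
```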
With the use of the mathematical identity

(□∇_ν − ∇_ν□)f_R = R_{µν}∇^µ f_R,

after taking the divergence of Eq. (49), we obtain the conservation equation in the f(R, T) gravity theory in the form

[(1/2) − f_T] ∇^µT_{µν} = (T_{µν} + 𝒯_{µν})∇^µ f_T + f_T [∇^µ𝒯_{µν} + (1/2)∇_ν T].  (50)

FIG. 1. The behavior of the extra term 𝒯 as a function of the redshift z for the new correct expression (solid curve), and for the previously considered approximate relation for L_m = −ρ (dashed curve). The approximate relation with L_m = P for 𝒯 exactly coincides with the correct result.

As one can see from the field equations (49), the dynamical behavior in f(R, T) gravity essentially depends on the tensor 𝒯_{µν}. In this Letter we will consider a simple case that indicates the importance of the new term. Let us assume that f(R, T) = α|T|^n and P = 0. In this case the field equations reduce to

κ²G_{µν} = (1/2)T_{µν} + (1/2)α|T|^n g_{µν} − nαǫ|T|^{n−1}(T_{µν} + 𝒯_{µν}),  (51)

where ǫ = sign(T). Here we have T = −ρ, and then ǫ = −1. The Friedmann and Raychaudhuri equations are then

h² = ρ̄_m − (1/2)β(7n + 2)ρ̄_m^n,  (52)
h′ = −(3/2)(ρ̄_m − 4βn ρ̄_m^n),  (53)

where we have used the following set of dimensionless variables,

τ = H₀t,  H = H₀h,  ρ̄ = ρ/(6κ²H₀²),  β = (6κ²H₀²)^{n−1}α,  (54)

and we have denoted by H₀ the current value of the Hubble parameter, and by a prime the derivative with respect to τ.

FIG. 2. The behavior of the Hubble parameter H and of the deceleration parameter q as a function of the redshift for the best fit values of the parameters as given by Eq. (59). The dashed line represents the ΛCDM model.

As an indicator of the decelerating/accelerating
evolution we introduce the deceleration parameter, defined as

q = (d/dτ)(1/h) − 1.  (55)

Note that from the normalized Friedmann equation (52), and by taking into account that at the present time we have h = 1, we can obtain the coupling β as

β = −2(1 − Ω_{m0}) / [(2 + 7n)Ω_{m0}^n].  (56)

FIG. 3. The behavior of the matter density parameter Ω_m as a function of redshift for the best fit values of the parameters as given by Eq. (59). The dashed line represents the ΛCDM model.

In order to find the best fit values of the parameters n, H₀, and Ω_{m0}, we perform a likelihood analysis using the observational data on the Hubble parameter in the redshift range z ∈ (0.07, 2.36) [36]. In the case of independent data points, the likelihood function can be defined as

L = L₀ e^{−χ²/2},  (57)

where L₀ is the normalization constant, and the quantity χ² is defined as

χ² = Σ_i [(O_i − T_i)/σ_i]².  (58)

Here i counts the data points, O_i are the observational values, T_i are the theoretical values, and σ_i are the errors associated with the i-th data point obtained from observations.

FIG. 4. The corner plot for the values of the parameters H₀, Ω_{m0}, and n, with their 1σ and 2σ confidence levels.

By maximizing the likelihood function, the best fit values of the parameters n, Ω_{m0}, and H₀ at 1σ confidence level can be obtained as

Ω_{m0} = 0.224^{+0.024}_{−0.023},  H₀ = 68.352^{+1.391}_{−1.418},  n = 0.020^{+0.002}_{−0.002}.  (59)

Also, with the use of Eq. (56) we obtain

β = −0.747^{+0.027}_{−0.026}.  (60)

The redshift evolution of the Hubble function, of the deceleration parameter q, and of the matter density parameter Ω_m = ρ̄_m/h² are represented, for this model, in Figs. 2 and 3, respectively.
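The quoted numbers can be cross-checked against one another: Eq. (56) with the central best-fit values (59) should reproduce (60), the Friedmann constraint (52) should give h = 1 today, and Eqs. (53) and (55) then fix the present-day deceleration parameter. A consistency script of ours (central values only):

```python
Om0, n = 0.224, 0.020                                     # central values, Eq. (59)

beta = -2.0 * (1.0 - Om0) / ((2.0 + 7.0 * n) * Om0 ** n)  # Eq. (56)

rho_m0 = Om0                          # Omega_m = rho_bar_m / h^2 with h = 1 today
h2 = rho_m0 - 0.5 * beta * (7.0 * n + 2.0) * rho_m0 ** n  # Eq. (52), should be ~1
hp = -1.5 * (rho_m0 - 4.0 * beta * n * rho_m0 ** n)       # Eq. (53)
q0 = -hp / h2 - 1.0                                       # Eq. (55): q = -h'/h^2 - 1

print(beta, h2, q0)   # beta ~ -0.747 (Eq. 60), h2 ~ 1, q0 ~ -0.58 (accelerating)
```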
Also, the corner plot for the values of the parameters H₀, Ω_{m0}, and n, with their 1σ and 2σ confidence levels, is shown in Fig. 4.

V. DISCUSSIONS AND FINAL REMARKS

In the present Letter we have obtained the complete expression of the first variation of the matter energy-momentum tensor with respect to the metric g^{µν}, and of its associated tensor 𝒯_{µν}. The full estimation of this term requires the calculation of the second variation of the matter Lagrangian with respect to the metric, a term which was generally ignored in previous investigations of this problem. The expression of δ²L_m/δg^{µν}δg^{αβ} can be calculated straightforwardly from the first variation δL_m/δg^{µν}, which can be obtained, for the two possible choices of the matter Lagrangian, either from thermodynamic considerations, or in a direct way by using the definition of the energy-momentum tensor. The main result of this Letter is that the first variation of the matter energy-momentum tensor, given by Eq. (31), is independent of the choice of the matter Lagrangian; both possible choices lead to the same expression (31), depending only on the thermodynamic pressure and its second variation. The variation of the energy-momentum tensor can also be expressed in terms of the pressure and the energy-momentum tensor itself, or in a compact form in terms of a generalized energy-momentum tensor, formally defined in Eq. (34).

The new form of the variation of the matter energy-momentum tensor may have some important implications for modified gravity theories with geometry-matter coupling. As an important example we have considered the particular case of the f(R, T) gravity theory. We have investigated the cosmological implications of a particular representation of f(R, T) gravity, with the action given by Eq. (48), in which the standard Hilbert-Einstein Lagrangian is corrected by a general term f(R, T). As a simple case we have taken f(R, T) = α|T|^n.
The generalized Friedmann equations take a simple form, and they allow a complete analysis of the cosmological features of this simple model, and a full fitting of the observational cosmological data, which permits the determination of the optimal values of the free parameters. The model gives an excellent description of the observational data for the Hubble function, up to a redshift of z ≈ 4. In this redshift range the model basically coincides with the ΛCDM model. The transition from acceleration to deceleration takes place at a redshift that again coincides with the ΛCDM value. Moreover, the deceleration parameter q basically coincides with the ΛCDM prediction. However, significant differences in the behavior of the matter density do appear at higher redshifts.

The search for the "true" physical quantities from which the matter energy-momentum tensor can be obtained (−ρ or P) in a variational formulation is still going on. Interestingly enough, the two possible matter Lagrangians are not equivalent in any sense (physical or mathematical), but their functional variations coincide, leading to the same energy-momentum tensor. However, as shown in the present Letter, the first variation of the matter energy-momentum tensor is independent of the adopted form of the matter Lagrangian, making the modified gravity theories containing this term unique and well defined. Hence, the study of the various orders of the variations of the matter Lagrangians and of the energy-momentum tensor turns out to be an important field of research, which could lead to a new understanding of the mathematical formalism, and of the astrophysical and cosmological implications, of the modified gravitational theories, and in particular of f(R, T) gravity.

ACKNOWLEDGMENTS

We would like to thank Dr. Nihan Katirci for useful discussions and suggestions.
The work of TH is supported by a grant of the Romanian Ministry of Education and Research, CNCS-UEFISCDI, project number PN-III-P4-ID-PCE-2020-2255 (PNCDI III).

[1] T. Harko and F. S. N. Lobo, Int. J. Mod. Phys. D 29, 2030008 (2020).
[2] D. H. Weinberg, M. J. Mortonson, D. J. Eisenstein, C. Hirata, A. G. Riess, and E. Rozo, Physics Reports 530, 87 (2013).
[3] D. Brout et al., Astrophys. J. 938, 110 (2022).
[4] N. Aghanim et al., Planck 2018 results. VI. Cosmological parameters, Astron. Astrophys. 641, A6 (2020).
[5] S. Tsujikawa, Class. Quant. Grav. 30, 214003 (2013).
[6] J. de Haro and L. A. Saló, Galaxies 9, 73 (2021).
[7] S. Nojiri, S. D. Odintsov, and V. K. Oikonomou, Phys. Rept. 692, 1 (2017).
[8] J. B. Jimenez, L. Heisenberg, and T. Koivisto, Phys. Rev. D 98, 044048 (2018).
[9] T. Harko, T. S. Koivisto, F. S. N. Lobo, and G. J. Olmo, Phys. Rev. D 85, 084016 (2012).
[10] Z. Haghani, T. Harko, H. R. Sepangi, and S. Shahidi, JCAP 10, 061 (2012).
[11] D. M. Ghilencea, Eur. Phys. J. C 80, 1147 (2020).
[12] D. M. Ghilencea, Eur. Phys. J. C 81, 510 (2021).
[13] R. Hama, T. Harko, S. V. Sabau, and S. Shahidi, Eur. Phys. J. C 81, 742 (2021).
[14] R. Hama, T. Harko, and S. V. Sabau, Eur. Phys. J. C 82, 385 (2022).
[15] O. Bertolami, C. G. Boehmer, T. Harko, and F. S. N. Lobo, Phys. Rev. D 75, 104016 (2007).
[16] T. Harko and F. S. N. Lobo, Eur. Phys. J. C 70, 373 (2010).
[17] T. Harko, F. S. N. Lobo, S. Nojiri, and S. D. Odintsov, Phys. Rev. D 84, 024020 (2011).
[18] Z. Haghani, T. Harko, F. S. N. Lobo, H. R. Sepangi, and S. Shahidi, Phys. Rev. D 88, 044023 (2013).
[19] T. Harko, F. S. N. Lobo, G. Otalora, and E. N. Saridakis, Phys. Rev. D 89, 124036 (2014).
[20] Y. Xu, G. Li, T. Harko, and S.-D. Liang, Eur. Phys. J. C 79, 708 (2019).
[21] T. Harko, N. Myrzakulov, R. Myrzakulov, and S. Shahidi, Phys. Dark Univ. 34, 100886 (2021).
[22] Ö. Akarsu, N. Katırcı, S. Kumar, R. C. Nunes, and M. Sami, Phys. Rev. D 98, 063522 (2018).
+[23] ¨O. Akarsu, J. D. Barrow, and N. M. Uzun, Phys. Rev. D +102, 124059 (2020). +[24] ¨O Akarsu, N. Katırcı, and S. Kumar, Phys. Rev. D 97, +024011 (2018). +[25] G. Acquaviva and N. Katırcı, Physics of the Dark Uni- +verse 38, 101128 (2022). +[26] H. Ludwig, O. Minazzoli, and S. Capozziello, Phys. Lett. +B 751, 576 (2015). +[27] O. Minazzoli, Phys. Rev. D 98, 124020 (2018). +[28] T. Harko and F. S. N. Lobo, Extensions of f(R) gravity: +Curvature-Matter Couplings and Hybrid Metric-Palatini +Theory, Cambridge University Press, Cambridge, 2018 +[29] T. P. Sotiriou and V. Faraoni, Class. Quant. Grav. 25, +205002 (2008). +[30] O. Bertolami, F. S. N. Lobo and J. Paramos, Phys. Rev. +D 78, 064036 (2008). +[31] T. Harko, Phys. Rev. D 81, 044021 (2010). +[32] O. Minazzoli and T. Harko, Phys. Rev. D 86, 087502 +(2012). +[33] B. F. Schutz, Phys. Rev. D 2, 2762 (1970). +[34] J. D. Brown, Class. Quant. Grav. 10, 1579 (1993). +[35] F. de Felice and C. J. S. Clarke, Relativity on curved +manifolds, Cambridge University Press, Cambridge, 1990 +[36] O. Farooq, et. al, Astophys. J. 835, 26 (2017). + diff --git a/0dFLT4oBgHgl3EQfoy-j/content/tmp_files/load_file.txt b/0dFLT4oBgHgl3EQfoy-j/content/tmp_files/load_file.txt new file mode 100644 index 0000000000000000000000000000000000000000..4ad9ad02399ac70290ca136137a7f6341c2952c9 --- /dev/null +++ b/0dFLT4oBgHgl3EQfoy-j/content/tmp_files/load_file.txt @@ -0,0 +1,572 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFLT4oBgHgl3EQfoy-j/content/2301.12133v1.pdf,len=571 +page_content='arXiv:2301.' 
arXiv:2301.12133v1 [gr-qc] 28 Jan 2023

The first variation of the matter energy-momentum tensor with respect to the metric, and its implications on modified gravity theories

Zahra Haghani,1, ∗ Tiberiu Harko,2, 3, 4, † and Shahab Shahidi1, ‡
1School of Physics, Damghan University, Damghan, 41167-36716, Iran
2Department of Physics, Babes-Bolyai University, 1 Kogalniceanu Street, 400084 Cluj-Napoca, Romania
3Department of Theoretical Physics, National Institute of Physics and Nuclear Engineering (IFIN-HH), Bucharest, 077125, Romania
4Astronomical Observatory, 19 Ciresilor Street, 400487 Cluj-Napoca, Romania
(Dated: January 31, 2023)

The first order variation of the matter energy-momentum tensor Tµν with respect to the metric tensor gαβ plays an important role in modified gravity theories with geometry-matter coupling, and in particular in the f(R, T) modified gravity theory. We obtain the expression of the variation δT_µν/δg^αβ for baryonic matter described by an equation of state given in a parametric form, with the basic thermodynamic variables represented by the particle number density and by the specific entropy, respectively. The first variation of the matter energy-momentum tensor turns out to be independent of the matter Lagrangian, and can be expressed in terms of the pressure, the energy-momentum tensor itself, and the matter fluid four-velocity. We apply the obtained results to the case of the f(R, T) gravity theory, where R is the Ricci scalar and T is the trace of the matter energy-momentum tensor, which thus becomes a unique theory, also independent of the choice of the matter Lagrangian. A simple cosmological model, in which the Hilbert-Einstein Lagrangian is generalized through the addition of a term proportional to T^n, is considered in detail, and it is shown that it gives a very good description of the observational values of the Hubble parameter up to a redshift of z ≈ 2.5.

PACS numbers: 04.50.+h, 04.20.Cv, 95.35.+d
∗ z.haghani@du.ac.ir
† tiberiu.harko@aira.astro.ro
‡ s.shahidi@du.ac.ir

I. INTRODUCTION

There are at least three theoretical perspectives [1] that could be used to explain the large amount of recent observations, which strongly suggest a faster and faster expanding Universe [2, 3], with a composition in which ordinary matter represents only 5%, the rest being represented by the dark energy and the dark matter [3, 4]. The first point of view is represented by the dark constituents theory, which adds two more components to the total energy-momentum tensor of the Universe, representing dark matter and dark energy, respectively. Therefore the cosmological dynamics is described by the field equation
G_µν = κ²T^bar_µν + κ²T^DM_µν(φ, ψ_µ, ...) + κ²T^DE_µν(φ, ψ_µ, ...),
where T^bar_µν, T^DM_µν(φ, ψ_µ, ...), and T^DE_µν(φ, ψ_µ, ...) represent the energy-momentum tensors of baryonic matter, dark matter, and dark energy, respectively, with φ and ψ_µ representing scalar or vector fields. A well studied dark constituent model is represented by the quintessence (scalar field) description of dark energy [5, 6].

In the dark geometry approach, an exclusively geometric attitude on the gravitational phenomena is adopted, by explaining the cosmological dynamics through the modification of the geometry underlying the Einstein field equations. Hence, the extended Einstein equations become in this approach
G_µν = κ²T^bar_µν + κ²T^(geom)_µν(g_µν, R, □R, ...),
where T^bar_µν is the energy-momentum tensor of ordinary matter, and T^(geom)_µν(g_µν, R, □R, ...) is a purely geometric term, obtained from the metric, torsion τ, nonmetricity Q, extensions of Riemann geometry, etc., which can effectively mimic dark energy, dark matter, or both. Some typical examples of dark geometric theories are the f(R) [7], f(Q) [8], and hybrid metric-Palatini gravity [9] theories, or gravitational theories based on the Weyl-Cartan-Weitzenböck [10], Weyl [11, 12], and Finsler geometries [13, 14].

The third avenue for the understanding of the gravitational and cosmological phenomena is represented by the dark coupling approach, in which the standard Einstein gravitational equations are generalized to take the mathematical form
G_µν = κ²T_µν + κ²T^(coup)_µν(g_µν, R, L_m, T, □R, □T, ...),
where the effective energy-momentum tensor T^(coup)_µν(g_µν, R, L_m, T, □R, □T, ...) of the theory is built up by considering the maximal extension of the Hilbert-Einstein Lagrangian, abandoning its additive structure in matter and geometry. In the dark coupling approach, matter is represented either by the trace T of the matter energy-momentum tensor, by the matter Lagrangian L_m, or by some scalar constructed from T_µν, such as T_µν T^µν.

The dark coupling approach is also a theoretical answer to the problem of the maximal extension of the additive Hilbert-Einstein Lagrangian, which automatically implies a non-additive structure of the action in the geometric and matter variables. In a general form, the requirement of the maximal extension of the gravitational action can be implemented by assuming that the Lagrangian of the gravitational field is an arbitrary function of the curvature scalar R and of the matter Lagrangian L_m. One of the interesting features of the dark coupling models is that they imply the presence of a nonminimal geometry-matter coupling. Dark couplings are not restricted to Riemannian geometry, but can be considered in the framework of the extensions of Riemann geometry. Typical examples of dark coupling theories are the f(R, L_m) [15, 16], f(R, T) [17], f(R, T, R_µν T^µν) [18], f(τ, T) [19], f(Q, T) [20], or the f(R, T, Q, T_m) [21] theories. Other gravitational theories implying geometry-matter coupling have been considered in [22-27].
One of the interesting consequences of the dark coupling theories is the reconsideration of the role of the ordinary (baryonic) matter in the cosmological dynamics. Through its coupling to gravity, matter becomes a key element in the explanation of cosmic dynamics, and recovers its central role in gravity, which is minimized or even neglected in the dark constituents and dark geometry type theories. An important implication of the geometry-matter coupling is that the matter energy-momentum tensor is generally not conserved, and thus an extra force is generated, acting on massive particles moving in a gravitational field, with the particles following non-geodesic paths [16, 17]. The possibility of the existence of such couplings between matter and geometry has opened interesting and novel pathways for the study of gravitational phenomena [28].

However, the dependence of the gravitational action in the dark coupling theories on L_m gives a new relevance to the old problem of the degeneracy of the matter Lagrangian: two physically inequivalent expressions of the matter Lagrangian, L_m = −ρ and L_m = P, lead to the same energy-momentum tensor for matter. This result has important implications for dark coupling gravity models. For example, in the framework of the f(R, L_m) theory, it was shown in [29] that adopting for the Lagrangian density the expression L_m = p, where p is the pressure, the extra force vanishes in the case of dust. However, for the form L_m = ρ of the matter Lagrangian, the extra force does not vanish [30]. In [31] it was shown, by using the variational formulation for the derivation of the equations of motion, that both the matter Lagrangian and the energy-momentum tensor are uniquely and completely determined by the form of the geometry-matter coupling. Therefore, the extra force never vanishes as a consequence of the thermodynamic properties of the system. In [32] it was shown that if the particle number is conserved, the Lagrangian of a barotropic perfect fluid with P = P(ρ) is
L_m = −ρ [c² + ∫ P(ρ)/ρ² dρ],
where ρ is the rest mass density. This result can be used successfully in the study of the modified theories of gravity.
The result is based on the assumption that the Lagrangian does not depend on the derivatives of the metric, and that the particle number of the fluid is a conserved quantity, ∇_µ(ρu^µ) = 0. The matter Lagrangian also plays an important role in the f(R, T) theory of gravity [17].

In theories with geometry-matter coupling another important quantity, the variation of the energy-momentum tensor with respect to the metric, does appear and plays an important role. The corresponding second order tensor is denoted θ_µν, and it is introduced via the definition [17]
θ_µν ≡ g^ρσ δT_ρσ/δg^µν.
If the matter Lagrangian does not depend on the derivatives of the metric, one can obtain for θ_µν a mathematical expression that also contains the second variation of the matter Lagrangian with respect to the metric, δ²L_m/δg^µν δg^αβ. The Lagrangian of the electromagnetic field is quadratic in the components of the metric tensor, and hence its second variation gives a non-zero contribution to θ_µν. However, the case of ordinary baryonic matter is more complicated. At first sight, taking into account the explicit forms of the matter Lagrangians, L_m = −ρ or L_m = p, no explicit dependence on the metric appears, as opposed, for example, to the case of the electromagnetic field. This would suggest that the second variation of the matter Lagrangian always identically vanishes, no matter what its functional form is. This conclusion may indeed be valid for some special forms of the equation of state, but it is not correct if one adopts a general thermodynamic description of the baryonic fluids.

It is the goal of the present Letter to investigate the problem of the second variation of the perfect fluid matter Lagrangian with respect to the metric tensor components, and to analyze its impact on modified gravity theories. As a first step in our analysis, we obtain, from general thermodynamic considerations, the expressions of the variations of the baryonic matter energy density and pressure with respect to the metric. Once these expressions are known, a straightforward calculation, involving the computation of the second variation of the energy density and pressure, gives the first variation of the matter energy-momentum tensor with respect to the metric, which also allows us to obtain the tensor θ_µν. The basic result of our investigation is that the tensor θ_µν is independent of the choice of the matter Lagrangian. The effect of the second order correction is estimated in a cosmological background. As a specific example we will concentrate on the f(R, T) gravity theory, in which the tensor θ_µν plays an important role.

The present Letter is organized as follows. The general thermodynamic formalism used for the calculation of the second variation of the matter Lagrangian is discussed in Section II. The general expression for the second variation of the matter Lagrangian, and of the variation of the energy-momentum tensor, is presented in Section III.
Some cosmological applications of the obtained results are presented in Section III A. We then briefly review the basics of the f(R, T) gravity theory in Section IV, and outline its cosmological implications for a simple choice f(R, T) = α|T|^n. Finally, we discuss and conclude our results in Section V.

II. THERMODYNAMICS AND GEOMETRY

In order to obtain the second variation of the baryonic matter Lagrangian, it is necessary to review the derivation of its first variation using thermodynamic considerations. The first law of thermodynamics is given by
dU = T dS − P dV + µ dN,  (1)
where U is the total energy, µ is the chemical potential, related to the change in the number of particles in the system, N is the particle number, and V is the volume enclosing the fluid. An important thermodynamic relation is the Gibbs-Duhem equation,
U = TS − PV + µN,  (2)
which follows from the extensivity of the energy, U(λX) = λU(X), where λ is a constant, and from Euler's theorem on homogeneous functions.
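The Euler's-theorem step can be verified with any concrete first-degree homogeneous energy function. A minimal numerical sketch (the power-law form of U below is an illustrative assumption, not the Letter's fluid):

```python
import math

# Illustrative homogeneous-degree-one energy: U = S^a V^b N^(1-a-b).
a, b = 0.3, 0.5

def U(S, V, N):
    return S**a * V**b * N**(1 - a - b)

S, V, N = 2.0, 5.0, 3.0
u = U(S, V, N)

# Extensivity: U(lambda X) = lambda U(X).
lam = 1.7
assert math.isclose(U(lam * S, lam * V, lam * N), lam * u, rel_tol=1e-12)

# Analytic partial derivatives for this U: T = dU/dS, P = -dU/dV, mu = dU/dN.
T = a * u / S
P = -b * u / V
mu = (1 - a - b) * u / N

# Euler's theorem reproduces the Gibbs-Duhem relation (2): U = TS - PV + mu N.
assert math.isclose(T * S - P * V + mu * N, u, rel_tol=1e-12)
```

The same cancellation works for any first-degree homogeneous U(S, V, N), since Euler's theorem gives U = S(∂U/∂S) + V(∂U/∂V) + N(∂U/∂N) identically.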
Let us define the particle number density n = N/V and the entropy per particle s = S/N. The first law of thermodynamics (1) and the Gibbs-Duhem relation (2) can then be rewritten as [33, 34]
dρ = nT ds + µ′ dn,  (3)
ρ = µ′n − P,  (4)
where µ′ = µ + Ts, and we have defined the energy density as ρ = U/V. Also, by taking the differential of the Gibbs-Duhem relation (2) we obtain dU = T dS + S dT − P dV − V dP + N dµ + µ dN, and using the first law of thermodynamics (1), one can obtain
dP = ns dT + n dµ = n dµ′ − nT ds,  (5)
implying that ρ = ρ(s, n) and P = P(µ′, s). Now, we define the particle number flux
J^µ = √−g n u^µ,  (6)
and the Taub current [34]
V_µ = µ′u_µ,  (7)
where u^µ is the fluid 4-velocity, and n, the particle number density, can be obtained according to the relation
n = √(g_µν J^µ J^ν / g).  (8)
With the above definitions, one obtains
J ≡ √(−J^µ J_µ) = √−g n,  J^µ = J u^µ,  (9)
V ≡ √(−V_µ V^µ) = µ′,  V^µ = V u^µ.  (10)

In the context of general relativity, it is well known that there are two equivalent baryonic matter Lagrangians,
L_m = −ρ,  L_m = P.  (11)
It should be noted that, from the definition of the energy-momentum tensor,
T_µν = −(2/√−g) δ(√−g L_m)/δg^µν,  (12)
both Lagrangians in Eq. (11) give the same result,
T_µν = (ρ + P) u_µ u_ν + P g_µν.  (13)

As a next step in our study, we introduce the basic assumptions that the variations of the entropy density s and of the ordinary matter number flux vector density J^µ = √−g n u^µ satisfy the two independent constraints [35]
δs = 0,  (14)
and
δJ^µ = 0,  (15)
respectively. Hence, in the following we impose the restriction that the entropy and particle production rates remain unchanged during the dynamical evolution. Therefore, the entropy and particle number currents satisfy the conservation equations δ(J^µ ∂_µ s) = 0 and ∇_µ(n u^µ) = 0, respectively. The first of these relations is obtained by taking the divergence of Eq. (14), contracting the obtained expression with J^µ, and by using Eq. (15).
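Relations (5) and (6)-(9) are straightforward to check numerically. In the sketch below the differentials are treated as independent increments, and a toy diagonal metric diag(−1, A², A², A²) with a comoving four-velocity is assumed purely for the check:

```python
import math

# --- Eq. (5): n s dT + n dmu equals n dmu' - n T ds for mu' = mu + T s,
# with the differentials treated as independent increments.
n, s, T = 1.3, 0.7, 2.1
dT, ds, dmu = 0.11, -0.05, 0.23
dmup = dmu + T * ds + s * dT                  # d(mu') = d(mu + T s)
assert math.isclose(n * dmup - n * T * ds, n * s * dT + n * dmu, rel_tol=1e-12)

# --- Eqs. (6), (8), (9): with J^mu = sqrt(-g) n u^mu and g_{mu nu} u^mu u^nu = -1,
# one recovers n = sqrt(g_{mu nu} J^mu J^nu / g).
# Toy diagonal metric g = diag(-1, A^2, A^2, A^2), comoving u^mu = (1, 0, 0, 0).
A = 1.4
g = [-1.0, A**2, A**2, A**2]                  # diagonal entries g_{mu mu}
detg = g[0] * g[1] * g[2] * g[3]              # determinant g, negative
u = [1.0, 0.0, 0.0, 0.0]
J = [math.sqrt(-detg) * n * ui for ui in u]   # Eq. (6)
JJ = sum(g[i] * J[i] * J[i] for i in range(4))
assert math.isclose(math.sqrt(JJ / detg), n, rel_tol=1e-12)
```

Both assertions hold for any positive n, since g_µν J^µ J^ν = g n² whenever the four-velocity is normalized to −1.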
By taking the variation of the particle number n, with the use of the assumptions previously introduced, we find [35]

δn = (n/2)(−g) uµuν [δgµν/g − (gµν/g²)δg] = (n/2)(uµuν + gµν)δgµν. (16)

In order to obtain the variation of the energy-momentum tensor, we need to find the variations of the energy density and of the pressure with respect to the metric, namely δρ/δgµν and δP/δgµν, respectively. In the case of isentropic processes, we have

δρ = ((ρ + P)/n) δn, (17)
δP = n δµ′. (18)

Let the equation of state for matter be given as ρ = ρ(n, s). Then, since δs = 0, from the thermodynamic relation (∂ρ/∂n)s = w = (ρ + P)/n, we obtain δρ = w δn. The variation of n is given by Eq. (16), while the variation of µ′ can be obtained from Eq. (10) as

δµ′ = δV = −(VµVν/2V) δgµν = −(1/2)µ′ uµuν δgµν. (19)
These relations give the thermodynamic variations of the energy density and of the pressure with respect to the metric as

δρ/δgµν = (1/2)(ρ + P)(gµν + uµuν), (20)
δP/δgµν = −(1/2)(ρ + P)uµuν. (21)

Eqs. (20) and (21) can also be obtained in a direct way, by starting from the definition of the matter energy-momentum tensor, as given by Eq. (12). If the matter Lagrangian does not depend on the derivatives of the metric tensor, from Eq. (12) we obtain

Tµν = Lm gµν − 2 δLm/δgµν, (22)

giving

δLm/δgµν = (1/2)Lm gµν − (1/2)Tµν. (23)

If we now take Lm = −ρ, from the above equation we find

δ(−ρ)/δgµν = −(1/2)ρ gµν − (1/2)Tµν = −(1/2)(ρ + P)(gµν + uµuν), (24)

where we have used the expression (13) for the energy-momentum tensor. For Lm = P, we obtain

δP/δgµν = (1/2)P gµν − (1/2)Tµν = −(1/2)(ρ + P)uµuν. (25)
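The equalities in Eqs. (24)-(25) can be verified componentwise. The check below is illustrative, not from the paper; it works in the comoving frame with sample values of ρ and P:

```python
import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])   # comoving Minkowski frame
u = np.array([-1.0, 0.0, 0.0, 0.0])  # lowered 4-velocity u_mu
rho, P = 1.3, 0.4                    # sample energy density and pressure

uu = np.outer(u, u)
T = (rho + P) * uu + P * g           # perfect-fluid T_mu_nu, Eq. (13)

# Eq. (23) applied to the two Lagrangian choices ...
var_rho = -0.5 * rho * g - 0.5 * T   # delta(-rho)/delta g, Eq. (24)
var_P   =  0.5 * P * g   - 0.5 * T   # delta P/delta g,     Eq. (25)

# ... reproduces the thermodynamic results, Eqs. (20)-(21)
assert np.allclose(var_rho, -0.5 * (rho + P) * (g + uu))
assert np.allclose(var_P,   -0.5 * (rho + P) * uu)
```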
Hence, we have recovered the expressions for the variations of the energy density and of the pressure with respect to the metric, previously obtained from first-principle thermodynamic considerations.

III. THE FIRST VARIATION OF THE MATTER ENERGY-MOMENTUM TENSOR

Now, we have all the necessary tools for computing the second variations of the energy density and of the pressure of a perfect fluid. Taking into account that

δgµν = −gµα gνβ δgαβ, (26)

and

δuµ/δgαβ = uν δgµν/δgαβ, (27)

respectively, one immediately obtains

δ²P/(δgαβ δgµν) ≡ δ/δgαβ (δP/δgµν) = (1/4)(ρ + P)[gµβ uαuν + gµα uβuν + gνβ uαuµ + gνα uβuµ − (1/2)gαβ uµuν − (1/2)gµν uαuβ], (28)

and

δ²(−ρ)/(δgαβ δgµν) = δ²P/(δgαβ δgµν) − (1/4)(ρ + P)(gαβ gµν − gµα gνβ − gµβ gνα), (29)

respectively. Here, since the energy density and the pressure are scalars, we expect the second variation to be symmetric with respect to the exchange (αβ) ⇄ (µν), and we have implemented this symmetry in the above expressions.
After a little algebra one can obtain from the definition (12), by assuming that the matter Lagrangian does not depend on the derivatives of the metric tensor, the variation of the energy-momentum tensor as

δTµν/δgαβ = (1/2)Lm(gαβ gµν − gµα gνβ − gµβ gνα) − (1/2)Tαβ gµν − 2 δ²Lm/(δgαβ δgµν). (30)

Therefore, after substituting the expressions of the second variations of the matter Lagrangians, we find the important result that for both baryonic matter Lagrangians in Eq. (11) we obtain

δTµν/δgαβ = (1/2)P(gαβ gµν − gµα gνβ − gµβ gνα) − (1/2)Tαβ gµν − 2 δ²P/(δgαβ δgµν), (31)

implying that the expression of δTµν/δgαβ is independent of the choice of the matter Lagrangian. This is not the case for the approximate result obtained by neglecting the second variation of the matter Lagrangian with respect to the metric,

δTµν/δgαβ ≈ (1/2)Lm(gαβ gµν − gµα gνβ − gµβ gνα) − (1/2)Tαβ gµν, (32)

which obviously depends on the choice of the Lagrangian density. It should be noted at this moment that the energy-momentum tensor, and its variation, should be independent of the choice of the baryonic matter Lagrangian, as we have summarized in the previous Section on thermodynamic grounds.
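The Lagrangian independence claimed in Eq. (31) can be confirmed by brute force. The sketch below is an illustrative check in the comoving frame (not from the paper): it evaluates Eq. (30) with both choices of Lm, using the second variations (28)-(29), and finds identical rank-four arrays:

```python
import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])   # comoving Minkowski frame
u = np.array([-1.0, 0.0, 0.0, 0.0])  # lowered 4-velocity u_mu
rho, P = 1.3, 0.4                    # sample energy density and pressure

T = (rho + P) * np.outer(u, u) + P * g   # T_mu_nu, Eq. (13)

# Rank-4 combination g_ab g_mn - g_ma g_nb - g_mb g_na
G = (np.einsum('ab,mn->abmn', g, g)
     - np.einsum('ma,nb->abmn', g, g)
     - np.einsum('mb,na->abmn', g, g))

# Second variation of P, Eq. (28)
d2P = 0.25 * (rho + P) * (
      np.einsum('mb,a,n->abmn', g, u, u) + np.einsum('ma,b,n->abmn', g, u, u)
    + np.einsum('nb,a,m->abmn', g, u, u) + np.einsum('na,b,m->abmn', g, u, u)
    - 0.5 * np.einsum('ab,m,n->abmn', g, u, u)
    - 0.5 * np.einsum('mn,a,b->abmn', g, u, u))

# Second variation of -rho, Eq. (29)
d2mrho = d2P - 0.25 * (rho + P) * G

Tg = np.einsum('ab,mn->abmn', T, g)  # T_ab g_mn

# Eq. (30) for the two Lagrangian choices
dT_P    = 0.5 * P * G      - 0.5 * Tg - 2.0 * d2P
dT_mrho = 0.5 * (-rho) * G - 0.5 * Tg - 2.0 * d2mrho

assert np.allclose(dT_P, dT_mrho)    # Eq. (31): Lagrangian-independent
```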
Eq. (31) can also be written in the form

δTµν/δgαβ = (1/2)P(gνβ gαµ + gνα gβµ) − (1/2)[Tαν gµβ + Tβν gµα + Tαµ gνβ + Tβµ gνα − (1/2)Tµν gαβ + (1/2)Tαβ gµν]. (33)

Also, by defining a modified energy-momentum tensor

T̄µν = (ρ + P)uµuν + (1/2)P gµν, (34)

one can write the first variation of the energy-momentum tensor as

δTµν/δgαβ = −(1/2)[T̄βν gµα + T̄αν gµβ + T̄αµ gνβ + T̄βµ gνα − (1/2)T̄µν gαβ + (1/2)T̄αβ gµν]. (35)

In the well-known f(R, T) gravity theories [17], one encounters the expression gµν δTµν/δgαβ, which enters into the modified field equations. With the result given by Eq. (33), we define

𝒯αβ ≡ gµν δTµν/δgαβ = −(1/4)(12 T̄αβ − T̄ gαβ), (36)

where T̄ = −ρ + P. Alternatively, we also have

δT/δgαβ = Tαβ + 𝒯αβ. (37)

In the comoving frame one can then obtain

𝒯µν = (1/4) diag(11ρ + 7P, −ρ − 5P, −ρ − 5P, −ρ − 5P). (38)

Taking the trace of the above expression, one finds

𝒯 ≡ gµν 𝒯µν = 2(ρ − P). (39)
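As an arithmetic cross-check (illustrative, not part of the paper), the diagonal form (38) and the trace (39) follow directly from the definition (36) in the comoving frame:

```python
import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])   # comoving Minkowski frame
g_inv = np.linalg.inv(g)
u = np.array([-1.0, 0.0, 0.0, 0.0])  # lowered 4-velocity u_mu
rho, P = 1.3, 0.4                    # sample values

# Modified energy-momentum tensor, Eq. (34), and its trace Tbar = -rho + P
Tbar = (rho + P) * np.outer(u, u) + 0.5 * P * g
trTbar = np.einsum('mn,mn->', g_inv, Tbar)
assert np.isclose(trTbar, -rho + P)

# Eq. (36): script-T with lowered indices, then one index raised
scriptT = -0.25 * (12.0 * Tbar - trTbar * g)
scriptT_mixed = g_inv @ scriptT

# Eq. (38): diagonal form in the comoving frame
expected = 0.25 * np.diag([11*rho + 7*P, -rho - 5*P, -rho - 5*P, -rho - 5*P])
assert np.allclose(scriptT_mixed, expected)

# Eq. (39): the trace is 2(rho - P)
assert np.isclose(np.trace(scriptT_mixed), 2.0 * (rho - P))
```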
The approximate results, obtained by neglecting the second variation of the matter Lagrangian, are

𝒯µν ≈ −(1/2)(ρ + 3P)δµν, (40)

for Lm = −ρ, and

𝒯µν ≈ (1/2)(ρ − P)δµν, (41)

for Lm = P. For the approximate result with Lm = −ρ we obtain 𝒯 ≈ −2(ρ + 3P), while for Lm = P we obtain 𝒯 ≈ 2(ρ − P). We thus arrive at the interesting conclusion that the approximate result with Lm = P still gives the correct answer for the trace of the tensor 𝒯.

A. Cosmological implications

In order to determine the effect of the new term in the variation of the energy-momentum tensor, let us find its behavior for a conserved matter source in a flat FLRW Universe, with the line element

ds² = −dt² + a²(t)(dx² + dy² + dz²), (42)

where a is the scale factor. In this case, one has for the baryonic matter density ρm, assumed to be in the form of dust, the expression

ρm = Ωm0/a³, (43)

where Ωm0 is the present-time density abundance. For the density of the radiation we have
ρr = Ωr0/a⁴. (44)

Assume that the Universe is filled with dust and radiation, with

ρ = ρm + ρr = Ωm0/a³ + Ωr0/a⁴, P = (1/3)ρr. (45)

In this case, one obtains

𝒯 = 2Ωm0(1 + z)³ + (4/3)Ωr0(1 + z)⁴, (46)

where we have introduced the redshift z, defined as

1 + z = 1/a, (47)

and where Ωm0 = 0.305 and Ωr0 = 5.3 × 10⁻⁵ are the current values of the dust and radiation abundances, respectively [36]. In Fig. 1 we have depicted the evolution of the new term 𝒯 as a function of the redshift. As a result, we expect that the new term changes the behavior of the cosmological models in theories in which the first variation of the energy-momentum tensor with respect to the metric is present in the gravitational field equations. There are major differences as compared with the approximate relation for Lm = −ρ, but the two relations coincide for Lm = P.
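Eq. (46) is straightforward to evaluate. The short sketch below (an illustration using the quoted abundances, not code from the paper) compares the correct term with the Lm = −ρ approximation of Eq. (40), i.e. −2(ρ + 3P) = −2ρm − 4ρr for the dust-plus-radiation mixture:

```python
import numpy as np

Om0, Or0 = 0.305, 5.3e-5   # present dust and radiation abundances [36]

def script_T(z):
    """Correct extra term, Eq. (46): 2*rho_m + (4/3)*rho_r."""
    return 2*Om0*(1 + z)**3 + (4/3)*Or0*(1 + z)**4

def script_T_approx_rho(z):
    """Approximate result for Lm = -rho: -2*(rho + 3P)."""
    return -2*Om0*(1 + z)**3 - 4*Or0*(1 + z)**4

z = np.array([0.0, 1.0, 2.0, 4.0])
print(script_T(z))             # grows like (1+z)^3 in the dust-dominated era
print(script_T_approx_rho(z))  # opposite sign, as in Fig. 1
```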
IV. f(R, T) GRAVITY

Now let us consider a typical gravitational theory in which the above results can have an important influence. Consider the action [17]

S = ∫ d⁴x √−g [κ²R + f(R, T) + Lm], (48)

where f(R, T) is an arbitrary function of the Ricci scalar R and of the trace T of the energy-momentum tensor. We suppose that the Universe is filled with a perfect fluid, with the matter energy-momentum tensor having the form (13). The field equations can be obtained as

κ²Gµν − (1/2)f gµν + fR Rµν + (gµν □ − ∇µ∇ν)fR = (1/2)Tµν − fT Tµν − fT 𝒯µν, (49)

where the last term is computed as in Eq. (36). It should be noted that, using the correct result Eq. (36), the choice of the matter Lagrangian is irrelevant: both cases, Lm = −ρ and Lm = P, give the same field equations.
With the use of the mathematical identity (□∇ν − ∇ν□)fR = Rµν∇µfR, after taking the divergence of Eq. (49) we obtain the conservation equation in the f(R, T) gravity theory in the form

(1/2 − fT)∇µTµν = (Tµν + 𝒯µν)∇µfT + fT [∇µ𝒯µν + (1/2)∇νT]. (50)

FIG. 1. The behavior of the extra term 𝒯 as a function of the redshift z for the new correct expression (solid curve), and for the previously considered approximate relation for Lm = −ρ (dashed curve). The approximate relation with Lm = P for 𝒯 exactly coincides with the correct result.

As one can see from the field equations (49), the dynamical behavior in f(R, T) gravity essentially depends on the tensor 𝒯µν. In this Letter, we will consider a simple case that indicates the importance of the new term.
Let us assume that f(R, T) = α|T|ⁿ, and P = 0. In this case, the field equations reduce to

κ²Gµν = (1/2)Tµν + (1/2)α|T|ⁿ gµν − nαǫ|T|ⁿ⁻¹(Tµν + 𝒯µν), (51)

where ǫ = sign(T). Here we have T = −ρ, and then ǫ = −1. The Friedmann and Raychaudhuri equations are then

h² = ρ̄m − (1/2)β(7n + 2)ρ̄mⁿ, (52)
h′ = −(3/2)(ρ̄m − 4βn ρ̄mⁿ), (53)

where we have used the following set of dimensionless variables,

τ = H0t, H = H0h, ρ̄ = ρ/(6κ²H0²), β = (6κ²H0²)ⁿ⁻¹α, (54)

and we have denoted by H0 the current value of the Hubble parameter, and by a prime the derivative with respect to τ. As an indicator of the decelerating/accelerating evolution we introduce the deceleration parameter, defined as

q = (d/dτ)(1/h) − 1. (55)

Note that, from the normalized Friedmann equation (52), and by taking into account that at the present time we have h = 1, we can obtain the coupling β as

β = −2(1 − Ωm0) / [(2 + 7n)Ωm0ⁿ]. (56)

FIG. 2. The behavior of the Hubble parameter H and of the deceleration parameter q as a function of the redshift for the best fit values of the parameters as given by Eqs. (59). The dashed line represents the ΛCDM model.

FIG. 3. The behavior of the matter density parameter Ωm as a function of redshift for the best fit values of the parameters as given by Eq. (59). The dashed line represents the ΛCDM model.
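A quick numerical sanity check (illustrative; not part of the paper) confirms that the coupling (56) is exactly the value that makes the normalized Friedmann equation (52) yield h = 1 at the present time, where ρ̄m = Ωm0:

```python
import numpy as np

def beta(Om0, n):
    """Coupling fixed by h(present) = 1, Eq. (56)."""
    return -2 * (1 - Om0) / ((2 + 7*n) * Om0**n)

def h2(rho_m, Om0, n):
    """Right-hand side of the normalized Friedmann equation, Eq. (52)."""
    return rho_m - 0.5 * beta(Om0, n) * (7*n + 2) * rho_m**n

# Setting rho_bar_m = Om0 (present time) gives h^2 = 1 for any Om0, n
for Om0, n in [(0.305, 0.02), (0.224, 0.02), (0.3, 0.1)]:
    assert np.isclose(h2(Om0, Om0, n), 1.0)
```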
In order to find the best fit values of the parameters n, H0 and Ωm0, we use a likelihood analysis based on the observational data on the Hubble parameter in the redshift range z ∈ (0.07, 2.36) [36]. In the case of independent data points, the likelihood function can be defined as

L = L0 e^(−χ²/2), (57)

where L0 is the normalization constant, and the quantity χ² is defined as

χ² = Σᵢ [(Oᵢ − Tᵢ)/σᵢ]². (58)

Here i counts the data points, Oᵢ are the observational values, Tᵢ are the theoretical values, and σᵢ are the errors associated with the ith data point obtained from observations. By maximizing the likelihood function, the best fit values of the parameters n, Ωm0 and H0 at 1σ confidence level can be obtained as

Ωm0 = 0.224 (+0.024/−0.023), H0 = 68.352 (+1.391/−1.418), n = 0.020 (+0.002/−0.002). (59)

Also, with the use of Eq. (56), we obtain

β = −0.747 (+0.027/−0.026). (60)

The redshift evolution of the Hubble function, of the deceleration parameter q, and of the matter density parameter Ωm = ρ̄m/h² are represented, for this model, in Figs. 2 and 3, respectively. Also, the corner plot for the values of the parameters H0, Ωm0 and n, with their 1σ and 2σ confidence levels, is shown in Fig. 4.

FIG. 4. The corner plot for the values of the parameters H0, Ωm0 and n with their 1σ and 2σ confidence levels.
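The central value in Eq. (60) indeed follows from Eq. (56) evaluated at the best-fit parameters of Eq. (59) (a one-line check, not from the paper):

```python
# Best-fit parameters from Eq. (59)
Om0, n = 0.224, 0.020

# Coupling from Eq. (56); reproduces the central value of Eq. (60)
beta = -2 * (1 - Om0) / ((2 + 7*n) * Om0**n)
print(round(beta, 3))   # -0.747
```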
V. DISCUSSIONS AND FINAL REMARKS

In the present Letter we have obtained the complete expression of the first variation of the matter energy-momentum tensor with respect to the metric $g_{\mu\nu}$, and of its associated tensor $T_{\mu\nu}$. The full estimation of this term requires the calculation of the second variations of the matter Lagrangian with respect to the metric, a term which was generally ignored in previous investigations of this problem. The expression of $\delta^2 L_m/\delta g^{\mu\nu}\delta g^{\alpha\beta}$ can be calculated straightforwardly from the first variation $\delta L_m/\delta g^{\mu\nu}$, which can be obtained for the two possible choices of the matter Lagrangian either from thermodynamic considerations, or in a direct way by using the definition of the energy-momentum tensor. The main result of this Letter is that the first variation of the matter energy-momentum tensor, given by Eq. (31), is independent of the choice of the matter Lagrangian; both possible choices lead to the same expression (31), depending only on the thermodynamic pressure, and its second variation. The variation of the energy-momentum tensor can also be expressed in terms of the pressure and the energy-momentum tensor itself, or in a compact form in terms of a generalized energy-momentum tensor, formally defined in Eq. (34).

The new form of the variation of the matter energy-momentum tensor may have some important implications for modified gravity theories with geometry-matter coupling. As an important example we have considered the particular case of the $f(R,T)$ gravity theory. We have investigated the cosmological implications of a particular representation of $f(R,T)$ gravity, with action given by Eq. (48), in which the standard Hilbert-Einstein Lagrangian is corrected by a general term $f(R,T)$. As a simple case we have taken $f(R,T) = \alpha |T|^n$. The generalized Friedmann equations take a simple form, and they allow a complete analysis of the cosmological features of this simple model, and a full fitting of the observational cosmological data, which permits the determination of the optimal values of the free parameters. The model gives an excellent description of the observational data for the Hubble function, up to a redshift of $z \approx 4$. In this redshift range the model basically coincides with the ΛCDM model. The transition from acceleration to deceleration takes place at a redshift that again coincides with the ΛCDM value. Moreover, the deceleration parameter $q$ basically coincides with the ΛCDM prediction. However, significant differences in the behavior of the matter density do appear at higher redshifts.

The search for the "true" physical quantities from which the matter energy-momentum tensor can be obtained ($-\rho$ or $P$) in a variational formulation is still going on. Interestingly enough, the two possible matter Lagrangians are not equivalent in any sense (physical or mathematical), but their functional variations coincide, leading to the same energy-momentum tensor. However, as shown in the present Letter, the first variation of the matter energy-momentum tensor is independent of the adopted form of the matter Lagrangian, making the modified gravity theories containing this term unique and well defined. Hence, the study of the various orders of variations of the matter Lagrangians and of the energy-momentum tensor turns out to be an important field of research, which could lead to a new understanding of the mathematical formalism, and of the astrophysical and cosmological implications of the modified gravitational theories, and in particular of $f(R,T)$ gravity.

ACKNOWLEDGMENTS

We would like to thank Dr. Nihan Katirci for useful discussions and suggestions.
The work of TH is supported by a grant of the Romanian Ministry of Education and Research, CNCS-UEFISCDI, project number PN-III-P4-ID-PCE-2020-2255 (PNCDI III).
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFLT4oBgHgl3EQfoy-j/content/2301.12133v1.pdf'} +page_content=' Sotiriou and V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFLT4oBgHgl3EQfoy-j/content/2301.12133v1.pdf'} +page_content=' Faraoni, Class.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFLT4oBgHgl3EQfoy-j/content/2301.12133v1.pdf'} +page_content=' Quant.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFLT4oBgHgl3EQfoy-j/content/2301.12133v1.pdf'} +page_content=' Grav.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFLT4oBgHgl3EQfoy-j/content/2301.12133v1.pdf'} +page_content=' 25, 205002 (2008).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFLT4oBgHgl3EQfoy-j/content/2301.12133v1.pdf'} +page_content=' [30] O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFLT4oBgHgl3EQfoy-j/content/2301.12133v1.pdf'} +page_content=' Bertolami, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFLT4oBgHgl3EQfoy-j/content/2301.12133v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFLT4oBgHgl3EQfoy-j/content/2301.12133v1.pdf'} +page_content=' N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFLT4oBgHgl3EQfoy-j/content/2301.12133v1.pdf'} +page_content=' Lobo and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFLT4oBgHgl3EQfoy-j/content/2301.12133v1.pdf'} +page_content=' Paramos, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFLT4oBgHgl3EQfoy-j/content/2301.12133v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFLT4oBgHgl3EQfoy-j/content/2301.12133v1.pdf'} +page_content=' D 78, 064036 (2008).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFLT4oBgHgl3EQfoy-j/content/2301.12133v1.pdf'} +page_content=' [31] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFLT4oBgHgl3EQfoy-j/content/2301.12133v1.pdf'} +page_content=' Harko, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFLT4oBgHgl3EQfoy-j/content/2301.12133v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFLT4oBgHgl3EQfoy-j/content/2301.12133v1.pdf'} +page_content=' D 81, 044021 (2010).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFLT4oBgHgl3EQfoy-j/content/2301.12133v1.pdf'} +page_content=' [32] O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFLT4oBgHgl3EQfoy-j/content/2301.12133v1.pdf'} +page_content=' Minazzoli and T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFLT4oBgHgl3EQfoy-j/content/2301.12133v1.pdf'} +page_content=' Harko, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFLT4oBgHgl3EQfoy-j/content/2301.12133v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFLT4oBgHgl3EQfoy-j/content/2301.12133v1.pdf'} +page_content=' D 86, 087502 (2012).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFLT4oBgHgl3EQfoy-j/content/2301.12133v1.pdf'} +page_content=' [33] B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFLT4oBgHgl3EQfoy-j/content/2301.12133v1.pdf'} +page_content=' F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFLT4oBgHgl3EQfoy-j/content/2301.12133v1.pdf'} +page_content=' Schutz, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFLT4oBgHgl3EQfoy-j/content/2301.12133v1.pdf'} +page_content=' Rev.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFLT4oBgHgl3EQfoy-j/content/2301.12133v1.pdf'} +page_content=' D 2, 2762 (1970).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFLT4oBgHgl3EQfoy-j/content/2301.12133v1.pdf'} +page_content=' [34] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFLT4oBgHgl3EQfoy-j/content/2301.12133v1.pdf'} +page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFLT4oBgHgl3EQfoy-j/content/2301.12133v1.pdf'} +page_content=' Brown, Class.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFLT4oBgHgl3EQfoy-j/content/2301.12133v1.pdf'} +page_content=' Quant.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFLT4oBgHgl3EQfoy-j/content/2301.12133v1.pdf'} +page_content=' Grav.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFLT4oBgHgl3EQfoy-j/content/2301.12133v1.pdf'} +page_content=' 10, 1579 (1993).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFLT4oBgHgl3EQfoy-j/content/2301.12133v1.pdf'} +page_content=' [35] F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFLT4oBgHgl3EQfoy-j/content/2301.12133v1.pdf'} +page_content=' de Felice and C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFLT4oBgHgl3EQfoy-j/content/2301.12133v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFLT4oBgHgl3EQfoy-j/content/2301.12133v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFLT4oBgHgl3EQfoy-j/content/2301.12133v1.pdf'} +page_content=' Clarke, Relativity on curved manifolds, Cambridge University Press, Cambridge, 1990 [36] O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0dFLT4oBgHgl3EQfoy-j/content/2301.12133v1.pdf'} +page_content=' Farooq, et.' 
diff --git a/0tFKT4oBgHgl3EQfOC19/content/tmp_files/2301.11757v1.pdf.txt b/0tFKT4oBgHgl3EQfOC19/content/tmp_files/2301.11757v1.pdf.txt new file mode 100644 index 0000000000000000000000000000000000000000..6cf412d152c4c1cd2d90b19d8300c1a19d62787a --- /dev/null +++ b/0tFKT4oBgHgl3EQfOC19/content/tmp_files/2301.11757v1.pdf.txt @@ -0,0 +1,1391 @@
+Moûsai: Text-to-Music Generation with Long-Context Latent Diffusion
+Flavio Schneider 1 Zhijing Jin 1 2 Bernhard Schölkopf 2
+Abstract
+The recent surge in popularity of diffusion mod-
+els for image generation has brought new atten-
+tion to the potential of these models in other ar-
+eas of media synthesis. One area that has yet to
+be fully explored is the application of diffusion
+models to music generation. Music generation
+requires handling multiple aspects, including the
+temporal dimension, long-term structure, multi-
+ple layers of overlapping sounds, and nuances that
+only trained listeners can detect. In our work, we
+investigate the potential of diffusion models for
+text-conditional music generation. We develop a
+cascading latent diffusion approach that can gen-
+erate multiple minutes of high-quality stereo mu-
+sic at 48kHz from textual descriptions. For each
+model, we make an effort to maintain reasonable
+inference speed, targeting real-time on a single
+consumer GPU.
In addition to trained models, we
+provide a collection of open-source libraries with
+the hope of facilitating future work in the field.1
+1. Introduction
+Music generation, or more generally audio generation, has
+multiple aspects at different levels of abstraction that make it
+a challenging problem (van den Oord et al., 2016; Dieleman
+et al., 2018). Regardless of its challenging nature, automated
+or model-assisted music generation has been an active area
+of research (Doornbusch, 2010; Salas et al., 2011; Giraudo,
+2021).
+Recently, with the rise of deep learning models and their suc-
+cess in computer vision (Deng et al., 2009; Rombach et al.,
+2022; Chang et al., 2023) and natural language process-
+ing (Pennington et al., 2014; Radford et al., 2018; Devlin
+et al., 2019; Ouyang et al., 2022), it is also promising to
+see how much benefit deep learning models can bring to
+audio generation.
+1ETH Zürich, Switzerland 2Max Planck Institute for Intelli-
+gent Systems, Tübingen, Germany. Correspondence to: Flavio
+Schneider <flavio.schneider.97@gmail.com>.
+1We open-source the following:
+– Music samples for this paper: bit.ly/anonymous-mousai
+– All music samples for all models: bit.ly/audio-diffusion
+– Codes: github.com/archinetai/audio-diffusion-pytorch
+Figure 1. Two-stage generation architecture in the inference mode
+of our model. Specifically, we first encode text with a pretrained
+and frozen language model into a text embedding. Then, condition-
+ing on the text, we generate a compressed latent with the diffusion
+generator, and finally, the compressed latent in turn is used to
+condition the diffusion decoder to generate the final waveform.
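The two-stage inference flow described in the Figure 1 caption can be sketched end to end. The following is a toy numpy sketch, not the released implementation: the stub networks, the hash-based text encoder, and the damping loop standing in for DDIM steps are all placeholders; only the shapes (a 32-channel, 64x-compressed latent for a 2^21-sample stereo crop, as stated later in the paper) come from the source.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_text(prompt: str, dim: int = 64) -> np.ndarray:
    # Stand-in for a frozen language-model encoder (the paper uses T5).
    # A deterministic hash-seeded embedding, purely illustrative.
    h = abs(hash(prompt)) % (2**32)
    return np.random.default_rng(h).standard_normal(dim)

def diffusion_generator(text_emb, latent_shape, steps=50):
    # Stand-in for the second-stage latent diffusion generator:
    # starts from noise and iteratively denoises, conditioned on text.
    z = rng.standard_normal(latent_shape)
    for _ in range(steps):
        z = 0.9 * z  # placeholder for one DDIM denoising step
    return z

def diffusion_decoder(latent, wave_shape, steps=50):
    # Stand-in for the first-stage diffusion decoder (DMAE decoder):
    # turns noise into a waveform, conditioned on the compressed latent.
    w = rng.standard_normal(wave_shape)
    for _ in range(steps):
        w = 0.9 * w  # placeholder for one DDIM denoising step
    return w

# 64x-compressed, 32-channel latent for a 2^21-sample 48 kHz stereo crop.
emb = encode_text("Egyptian Darbuka, Drums, Rythm, (Deluxe Edition), 2 of 4")
latent = diffusion_generator(emb, latent_shape=(32, 2**21 // 64))
waveform = diffusion_decoder(latent, wave_shape=(2, 2**21))
print(waveform.shape)  # (2, 2097152)
```

The point of the sketch is the data flow: text is embedded once by a frozen model, the generator operates only on the small latent, and the decoder is the only network that ever touches waveform-length tensors.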
Existing audio generation models explore
+the use of recurrent neural networks (Mehri et al., 2017),
+adversarial generative networks (Kumar et al., 2019; Kim
+et al., 2021; Engel et al., 2019; Morrison et al., 2022), au-
+toencoders (Deng et al., 2021), and transformers (Yu et al.,
+2022a). As a more recent advancement in generative mod-
+els, diffusion models have been used in speech synthesis
+(Kong et al., 2021; Lam et al., 2022; Leng et al., 2022), but
+are still under-explored for music generation.
+Moreover, there are several long-standing challenges in the
+area of music generation: (1) modeling the long-term struc-
+ture, (2) improving the sound quality, (3) increasing the
+diversity of the generated music, and (4) enabling easier
+control of the generation, such as text prompts. A single
+model mastering all the proposed aspects would be a great
+addition to the music industry. It can enable the broader
+public to be part of the creative process by allowing them to
+compose music using an accessible text-based interface, as-
+sist creators in finding inspiration, and provide an unlimited
+supply of novel audio samples.
+arXiv:2301.11757v1 [cs.CL] 27 Jan 2023
+Table 1. Comparison of our Moûsai model with previous music generation models. We show the comparisons along the (1) audio sample
+rate@the number of channels (Sample Rate↑, where the higher the better), (2) context length of the generated music (Ctx. Len.↑,
+where the higher the more capable the model is to generate structural music; we use ⋆ to indicate variable length, and we assume that
+autoregressive methods are variable by default, but have an upper-bound imposed by attention), (3) input type (Input, where we feature
+using Text as the condition for the generation), (4) type of the generated music (Music, where the more Diverse↑ the genre, the better), (5)
+an example of the generated music type (Example), (6) inference time (Infer. Time↓, where the shorter the better, and since the music
+length is seconds or minutes, the inference time equivalent to the audio length is the shortest, and we use ⋆ to show models that can run
+inference fast on CPU), and (7) total length of the music in the training data in hours (Data).
+Model | Sample Rate↑ | Ctx. Len.↑ | Input (Text) | Music (Diverse↑) | Example | Infer. Time↓ | Data
+WaveNet (2016) | 16kHz@1 | Secs | None | Piano or speech | Piano | = Audio len.⋆ | 260
+Jukebox (2020) | 44.1kHz@1 | Mins⋆ | Lyrics, author, etc. | Song with the lyrics | Song | Hours | 70K
+RAVE (2021) | 48kHz@2 | Secs⋆ | Latent | Single-genre music | Strings | = Audio len.⋆ | 100
+AudioLM (2022) | 16kHz@1 | Secs⋆ | Beginning of the music | Piano or speech | Piano | Mins | 40K
+Musika (2022) | 22.5kHz@2 | Secs | Context vector | Single-genre music | Piano | = Audio len.⋆ | 1K
+Riffusion (2022) | 44.1kHz@1 | 5s | Text (genre, author, etc.) | Music of any genre | Jazzy clarinet | Mins | –
+AudioGen (2022) | 16kHz@1 | Secs⋆ | Text (a phrase/sentence) | Daily sounds | Dog barks | Hours | 4K
+Moûsai (Ours) | 48kHz@2 | Mins⋆ | Text (genre, author, etc.) | Music of any genre | African drums | = Audio len. | 2.5K
+From the landscape of existing music generation models
+in Table 1, we can see that the aforementioned challenges
+widely exist throughout the literature. For example, most
+text-to-audio systems (Forsgren & Martiros, 2022; Kreuk
+et al., 2022) can only generate a few seconds of audio, and
+many tend to require long inference time up to many GPU
+hours to generate one minute of audio (Dhariwal et al., 2020;
+Kreuk et al., 2022).
Apart from text-to-music generation
+models, if we look at unconditional music generation, some
+models can generate high-quality samples and run in real time
+on CPU (Caillon & Esling, 2021; Pasini & Schlüter, 2022),
+but they are usually trained on a single modality (resulting in
+the ability to handle only single-genre music, but not diverse
+ones), and none can handle long-term structure (van den
+Oord et al., 2016; Caillon & Esling, 2021; Pasini & Schlüter,
+2022).
+To this end, we propose Moûsai,2 a text-conditional cascad-
+ing diffusion model (Figure 1) that tries to address all the
+mentioned challenges at the same time. Specifically, our
+Moûsai model uses a custom two-stage cascading diffusion
+method shown in Figure 1. In the first stage, it compresses
+the audio waveform using a novel diffusion autoencoder,
+and in the second stage, it learns to generate the reduced
+latent representations conditioned on the text embedding
+generated by a pretrained language model. Both stages use
+an efficient U-Net optimized by us, enabling fast inference
+speed which makes it realistic for usage in future applica-
+tions.
+2Moûsai is romanized ancient Greek for Muses, the sources of
+artistic inspiration (https://en.wikipedia.org/wiki/
+Muses). Given that inspiration is exactly what the system may be
+lacking, this name may not be apposite, but the reminiscence to
+both music and AI was simply too compelling.
+In conclusion, the main contributions of our work are as
+follows:
+1. We make it possible to generate long-context 48kHz
+stereo music exceeding the minute mark, based on
+context exceeding the minute mark, and generate a
+variety of music.
+2. We propose an efficient 1D U-Net architecture for both
+stages of the cascade, making it possible to generate
+audio in real-time on a single consumer GPU.
Likewise,
+each stage of our system can be trained on one A100
+GPU in approximately 1 week, making it possible to
+train and run the overall system using modest resources,
+as available in most universities.
+3. We present a new diffusion magnitude autoencoder
+that can compress the audio signal 64x compared to
+the original waveform with only moderate quality loss,
+used by the generation stage of the architecture to apply
+latent diffusion on.
+2. Related Work
+A common trend in the generative space has been to first
+train a representation learning, compression, or upsampling
+model on the input domain, and later learn a generative
+model on top of the reduced representation while condition-
+ing on the information of interest (Rombach et al., 2022;
+Yang et al., 2022; Kreuk et al., 2022; Ho et al., 2022; Ville-
+gas et al., 2022). This can be drastically more efficient than
+directly learning on the raw input data, as the generative
+model can work on a much lower dimensional representa-
+tion and hence capture coarse structures.
+Auto-encoding (Hinton & Salakhutdinov, 2006; Kingma &
+Welling, 2014) or quantized auto-encoding (van den Oord
+et al., 2017; Esser et al., 2021; Lee et al., 2022) are popu-
+lar compression methods originally proposed for the image
+domain, that have been similarly and successfully applied
+as audio representations (Caillon & Esling, 2021; Pasini &
+Schlüter, 2022; Baevski et al., 2020; Zeghidour et al., 2022;
+Défossez et al., 2022).
The two most popular directions in
+the generative space suggest either to learn a quantized rep-
+resentation followed by masked or autoregressive learning
+on tokens (Villegas et al., 2022; Yu et al., 2022b; Chang
+et al., 2023; Dhariwal et al., 2020; Borsos et al., 2022; Yang
+et al., 2022; Kreuk et al., 2022), or to use a learned (continuous)
+compressed or deterministic downsampled representation
+and later apply diffusion models as generators to reconstruct
+the noise-masked data in another stage (Ramesh et al., 2022;
+Rombach et al., 2022; Saharia et al., 2022; Ho et al., 2022;
+Forsgren & Martiros, 2022). Methods using the former to-
+kenized representation have been successful but not up to
+the same level of performance as the latter (“cascading”)
+diffusion methods.
+In our work, we follow ideas from the cascading diffusion
+approach, which, to the best of our knowledge, has never
+been attempted for audio generation. We use a custom
+two-stage cascading diffusion method, where the first stage
+compresses audio using a novel diffusion autoencoder, and
+the second stage learns to generate the reduced representa-
+tion while conditioning on a textual description.
+3. Preliminaries
+In this section, we introduce several preliminaries that serve
+as the basis for our model. Specifically, we give an overview
+of the workings of diffusion, latent diffusion, and the U-Net.
+3.1. Audio Generation
+Audio generation has long been a challenging task. At the
+lowest level, we have digital waveforms that control air
+movement from speakers. Waveforms can be represented in
+different resolutions, or sample rates. Higher sample rates
+(e.g., 48kHz) allow for more temporal resolution and can
+represent higher frequencies, but at the same time they are com-
+putationally more demanding to generate. At higher levels
+of abstraction, we find qualitative properties such as texture
+(timbre) or pitch.
Zooming out, we observe structure such
+as rhythm and melody that can span multiple seconds, or
+even structurally be composed into choruses that form min-
+utes of interconnected patterns. Audio can be represented
+with a single waveform (mono), two waveforms (stereo),
+or even more in the case of surround sound. Audio with
+two or more channels can give a sense of movement and
+spatialisation. From the modelling perspective, there are
+unconditional models that generate novel samples from the
+training distribution without any additional information, or
+conditional models that use a form of guidance, such as text,
+to control the generation. Models can be trained on a single
+modality (e.g., drums or piano) or on multiple modalities,
+which usually require more parameters for an increased
+modelling capacity and decrease in speed.
+3.2. Diffusion
+We employ $v$-objective diffusion as proposed by Salimans
+& Ho (2022). Given a sample $x_0$ from a distribution $p(x_0)$,
+some noise schedule $\sigma_t \in [0, 1]$, and a noisy data-point
+$x_{\sigma_t} = \alpha_{\sigma_t} x_0 + \beta_{\sigma_t} \epsilon$, $v$-objective diffusion tries to estimate
+a model $\hat{v}_{\sigma_t} = f(x_{\sigma_t}, \sigma_t)$ minimizing the following objective:
+$$\mathbb{E}_{t \sim [0,1],\, \sigma_t,\, x_{\sigma_t}} \left[ \| f_\theta(x_{\sigma_t}, \sigma_t) - v_{\sigma_t} \|_2^2 \right], \quad (1)$$
+where $v_{\sigma_t} = \partial x_{\sigma_t} / \partial \phi_t = \alpha_{\sigma_t} \epsilon - \beta_{\sigma_t} x_0$, with $\alpha_{\sigma_t} := \cos(\phi_t)$,
+$\beta_{\sigma_t} := \sin(\phi_t)$, and $\phi_t := \frac{\pi}{2} \sigma_t$.
+By estimating the rate of change, ODE samplers can be used
+to turn noise into a new datapoint. In this work, we use the
+DDIM sampler (Song et al., 2021), which we find to work
+well and to offer a reasonable tradeoff between the number of
+steps and audio quality. The DDIM sampler denoises the
+signal by repeated application of the following:
+$$\hat{v}_{\sigma_t} = f_\theta(x_{\sigma_t}, \sigma_t) \quad (2)$$
+$$\hat{x}_0 = \alpha_{\sigma_t} x_{\sigma_t} - \beta_{\sigma_t} \hat{v}_{\sigma_t} \quad (3)$$
+$$\hat{\epsilon}_{\sigma_t} = \beta_{\sigma_t} x_{\sigma_t} + \alpha_{\sigma_t} \hat{v}_{\sigma_t} \quad (4)$$
+$$\hat{x}_{\sigma_{t-1}} = \alpha_{\sigma_{t-1}} \hat{x}_0 + \beta_{\sigma_{t-1}} \hat{\epsilon}_{\sigma_t}, \quad (5)$$
+which estimates both the initial data-point and the noise at
+step $\sigma_t$, for some $T$-step noise schedule $\sigma_T, \ldots, \sigma_0$ linearly
+spaced between 1 and 0.
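The update rule in Equations (2)-(5) is short enough to write out directly. Below is an illustrative numpy sketch (the paper's actual code is PyTorch); `oracle_f` is a hypothetical perfect v-prediction network, included only to sanity-check that with an exact `f` the DDIM loop recovers the data point:

```python
import numpy as np

def alpha_beta(sigma):
    # alpha = cos(pi/2 * sigma), beta = sin(pi/2 * sigma)
    phi = 0.5 * np.pi * sigma
    return np.cos(phi), np.sin(phi)

def ddim_sample(f, shape, num_steps=50, rng=None):
    """Turn noise into a sample by repeating Eqs. (2)-(5).
    `f(x, sigma)` is the trained v-prediction network."""
    rng = rng or np.random.default_rng(0)
    x = rng.standard_normal(shape)            # x at sigma = 1 is pure noise
    sigmas = np.linspace(1.0, 0.0, num_steps + 1)
    for s, s_prev in zip(sigmas[:-1], sigmas[1:]):
        a, b = alpha_beta(s)
        v_hat = f(x, s)                       # Eq. (2)
        x0_hat = a * x - b * v_hat            # Eq. (3)
        eps_hat = b * x + a * v_hat           # Eq. (4)
        a_p, b_p = alpha_beta(s_prev)
        x = a_p * x0_hat + b_p * eps_hat      # Eq. (5)
    return x

# Sanity check: for a known x0, the ideal prediction is
# v = alpha * eps - beta * x0, with eps recoverable from
# x = alpha * x0 + beta * eps. A perfect f makes DDIM return x0 exactly.
x0 = np.array([1.0, -2.0, 0.5])

def oracle_f(x, sigma):
    a, b = alpha_beta(sigma)
    eps = (x - a * x0) / b if b > 0 else np.zeros_like(x)
    return a * eps - b * x0

print(np.allclose(ddim_sample(oracle_f, x0.shape), x0))  # True
```

Note how Eqs. (3) and (4) invert the forward parameterization: substituting the definitions of x and v gives x0_hat = x0 and eps_hat = eps exactly, so with a perfect network each step lands on the true trajectory regardless of the starting noise.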
+3.3. Latent Diffusion
+Following the work on image diffusion (Rombach et al.,
+2022), we compress audio into a smaller representation and
+apply the diffusion process on the reduced latent space. In
+contrast to Rombach et al. (2022), we propose a diffusion-
+based autoencoder instead of a standard autoencoder, in-
+creasing the representation power of the decoding process
+and the amount of compressibility allowed.
+Figure 2. 1D U-Net architecture used both for the diffusion decoder
+and latent diffusion generator. The inner dashed region indicates
+that the UNetBlock can be recursively nested. Resnet items (R)
+are used as convolutional blocks, modulation items (M) are used to
+provide the diffusion noise level as a feature vector conditioning,
+inject items (I) are used to inject external channels as conditioning
+(used for diffusion decoding only), attention items (A) are used
+to share information timewise, and cross attention items (C) are
+used to condition on external (text) embeddings.
+3.4. U-Net
+U-Nets were first proposed by Ronneberger et al. (2015) as
+an hourglass convolution-only 2D architecture with skip
+connections; originally used for medical image segmentation,
+and since repurposed for multiple uses, such as image, au-
+dio, and video generation. Our proposed U-Net has little
+resemblance to the original work, and is infused with multi-
+ple new components, such as more modern convolutional
+blocks, a variety of attention blocks, conditioning blocks,
+and improved skip connections, maintaining only a skeleton
+of the hourglass architecture.
+4. Text-to-Music Generation with Moûsai
+Moûsai is composed of two independently trained models.
+The first stage (DMAE) is responsible for compressing the
+audio waveform 64x using a diffusion autoencoder.
In the
+second stage (latent text-to-audio diffusion), we generate a
+novel latent space with the diffusion model while conditioning
+on text embeddings obtained from a frozen transformer
+language model. For both diffusion models, we use the same
+efficient 1D U-Net architecture with varying configurations.
+4.1. 1D U-Net
+In this work, we use a 1D U-Net architecture employed in
+different configurations for both the autoencoding and latent
+diffusion stage (Figure 2). U-Nets with 1D convolutional
+kernels are more efficient compared to 2D ones in terms of speed
+and can be successfully used both on waveforms and on
+spectrograms if each frequency is considered as a different
+channel.
+Figure 3. Diffusion Magnitude Autoencoder (DMAE) training
+scheme. The diffusion autoencoder stage learns to compress au-
+dio 64x (compared to the original waveform) into a smaller latent
+space. To train this stage, the waveform is first converted to a
+magnitude spectrogram, then auto-encoded into a latent. At the
+same time, the original audio is corrupted with a random amount
+of noise and the U-Net is trained to remove that noise. During the
+noise removal process, the U-Net is conditioned on the noise level
+and the compressed latent, which has access to a reduced
+version of the non-noisy audio.
+We use a variety of repeated items at each resolution of the
+U-Net, namely: (R) a residual 1D convolutional unit, (M)
+a modulation unit used to alter the channels given features
+from the diffusion noise level, (I) an inject item that con-
+catenates external channels to the ones at the current depth
+(the lengths must match), (A) an attention item used to share
+long-context structural information, and (C) a cross atten-
+tion item used to condition on text embeddings. Inject items
+are applied only at a specific depth in the first stage decoder
+to condition on the latent.
By using the magnitude spec- +trograms, higher compression ratios can be obtained than +autoencoding directly the waveform. We found that wave- +forms are less compressible and efficient to work with. Sim- +ilarly, discarding phase is benificial to obtain higher com- +pression ratios for the same level of quality. The diffusion +model can easily learn to generate a waveform with realistic +phase even if conditioned only on the encoded magnitude. +Depending on the desired speed/quality tradeoff, more or +less compression can be applied in this first stage. Following +our single GPU constraint, we find that 64x compression +factor is a good balance to make sure the second stage can +work on a reduced representation. +The latent space produced is then used as a starting point +for the next diffusion stage. To make sure that the reduced +latent space can be used for latent diffusion, we apply a tanh +function on the bottleneck, keeping the values in the range +[−1, 1]. A more disentangled bottleneck, such as the one +used in VAEs (Kingma & Welling, 2014) can be used, but +the additional regularization reduces the amount of allowed +compressibility. +4.3. Latent Text-to-Audio Diffusion +The second stage applies latent diffusion on the previously +obtained compressed space (Figure 4). Similarly to the pre- +vious stage we use vvv-objective diffusion with the 1D U-Net +architecture and a different configuration fθgen(zzzσt; σt,eee) +while conditoning on the text embedding eee to generate the +compressed latent zzz = encθenc(m +m +mw +w +w). The generation func- +tion ˆzzz = genθgen(eee,ϵϵϵ, s) uses again DDIM sampling and +calls the U-Net s times to generate an approximate latent ˆzzz +UNet +||·|| +Noise +Text +Embedding +Embedding +Transformer +Latent +Figure 4. Text-conditional latent diffusion generator training +scheme. This stage is trained to generate novel latent spaces +that follow a similar distribution to the ones generated by the au- +toencoder. 
The audio source is first encoded into the latent using the encoder, then the latent is corrupted with a random amount of noise, and the U-Net is trained to remove the noise. While the U-Net denoises the signal, the noise level is provided as a feature vector, and an encoded textual description of the original waveform is provided as an embedding obtained with a frozen language model.
from the text embedding e and starting noise ε. The final generation stack during inference to obtain a waveform is
ŵ = dec_{θ_dec}(gen_{θ_gen}(e, ε_gen, s_gen), ε_dec, s_dec). (6)
The 1D U-Net used in this stage includes cross attention blocks to provide the conditioning text embedding, and multiple attention blocks to make sure information can be shared over the entire latent, which is crucial to learn long-range audio structure.
Given the compressed size of the latent space, the size of this inner U-Net can be greatly increased compared to the first stage while maintaining a reasonable training and inference speed, even with large parameter counts.
4.4. Text Conditioning
To obtain the text embeddings, prior work on text-conditioning suggests either learning a joint data-text representation (Li et al., 2022; Elizalde et al., 2022; Ramesh et al., 2022) or using embeddings from a pre-trained language model as direct conditioning (Saharia et al., 2022; Ho et al., 2022) of the latent model.
In our model, we follow the practice of Saharia et al. (2022) and use a pre-trained and frozen T5 language model (Raffel et al., 2020) to generate text embeddings from the given description. We use classifier-free guidance (CFG) (Ho
Example Text Prompts in Our Dataset
Nr. 415 (Premium Edition), german hip hop, 2 of 7, 2012, XATAR, Konnekt
30 Años de Exitos, Mundanzas, 2 of 6, latin pop, Lupita D'Alessio, 2011
emo rap 2018 Runaway Lil Peep 4 of 5
Alone, Pt.
II (Remixes) 2020 electro house Alone, Pt. II - Da Tweekaz Remix Alan Walker
Table 2. Example text prompts in our dataset.
& Salimans, 2022) with a learned mask applied to batch elements with a probability of 0.1 to improve the strength of the text embedding during inference.
5. Experimental Setup
For the experimental setup, we first give a high-level overview of the dataset and the training setup in Section 5.1, and then dive into the details of the implementation in Section 5.2 and the hardware requirements in Section 5.3.
5.1. Dataset and Training Setup
We train all models on a (relatively modest) collection that we compiled, consisting of 2,500 hours of stereo music sampled at 48kHz and spanning multiple genres, artists, instruments, and provenances, in order to maintain a high-diversity dataset. The autoencoder is trained on random crops of length 2^18 (∼5.5s at 48kHz), and the text-conditional diffusion generation model is trained on fixed crops of length 2^21 (∼44s at 48kHz) encoded in the 32-channel, 64x-compressed latent.
For the textual description, we use metadata such as the title, author, album, genre, and year of release. Given that a song can span longer than 44s, we append a string indicating which chunk is currently being trained on, together with the total number of chunks the song is made of (e.g., 1 of 4). This allows selecting the region of interest during inference. Hence, an example prompt looks like "Egyptian Darbuka, Drums, Rythm, (Deluxe Edition), 2 of 4." To make the conditioning more robust, we shuffle the list of metadata and drop each element with a probability of 0.1. Furthermore, half of the time we concatenate the list with spaces, and the other half with commas, to make the interface more robust during inference. Some example prompts from our dataset are shown in Table 2.
5.2.
Implementation Details
We train a 185M-parameter diffusion autoencoder with 7 nested U-Net blocks of increasing channel count ([256, 512, 512, 512, 1024, 1024, 1024]), downsampling by 2 at each block except the first ([1, 2, 2, 2, 2, 2, 2]). The diffusion autoencoder only uses resnet and modulation items, with the following repetitions: [1, 2, 2, 2, 2, 2, 2]; attention is not used, to allow decoding of variable and possibly very long latents. Channel injection only happens at depth 4, which matches the output of the magnitude encoder latent, post tanh application. Furthermore, we train an 857M-parameter text-conditional generator (including the parameters of the frozen T5-base model) with 6 nested U-Net blocks of increasing channel counts ([128, 256, 512, 512, 1024, 1024]), again downsampling by 2 at each block except the first ([1, 2, 2, 2, 2, 2]). We use attention blocks at the following depths: [0, 0, 1, 1, 1, 1], skipping the first two blocks to allow for further downsampling before sharing information over the entire latent; cross attention blocks are instead used at all resolutions ([1, 1, 1, 1, 1, 1]). For both attention and cross attention, we use 64 head features and 12 heads per layer. We repeat items with an increasing count towards the inner, low-resolution, large-context U-Net blocks ([2, 2, 2, 4, 8, 8]), which allows good structural learning over minutes of audio. Both models are trained with the AdamW optimizer (Loshchilov & Hutter, 2019) using a learning rate of 10^-4, β1 = 0.95, β2 = 0.999, ϵ = 10^-6, and weight decay of 10^-3. Moreover, we use an exponential moving average (EMA) with β = 0.995 and a power of 0.7.
5.3. Hardware Requirements
We use limited computational resources, as available in a university lab. Both models can be trained on a single A100 GPU in 1 week using a batch size of 32; this is equivalent to around 1M steps for both the diffusion autoencoder and the latent generator.
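As a quick sanity check on the configurations above, the per-block downsampling factors multiply to the total temporal reduction inside each U-Net (this reading of the config lists, and its link to the stated 64x compression, is our assumption):

```python
import math

# Per-block downsampling factors quoted in the configurations above.
autoencoder_factors = [1, 2, 2, 2, 2, 2, 2]  # 7 nested blocks
generator_factors = [1, 2, 2, 2, 2, 2]       # 6 nested blocks

# Total temporal reduction at the innermost block of each U-Net.
print(math.prod(autoencoder_factors))  # 64
print(math.prod(generator_factors))    # 32
```

The six stride-2 blocks of the autoencoder give 2^6 = 64, consistent with the 64x compression factor reported for the first stage.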
For inference, as an example, a novel audio source of ∼88s can be synthesized in less than ∼88s on a consumer GPU with a DDIM sampler, even with a high step count (100 generation steps and 100 decoding steps).
6. Results
As mentioned in Table 1, our model is the only model that generates long-context music from text descriptions. Most other models do not take text as input (van den Oord et al., 2016; Caillon & Esling, 2021; Borsos et al., 2022; Pasini & Schlüter, 2022), and some others use lyrics or descriptions of everyday sounds (e.g., "a dog barking") (Kreuk et al., 2022; Dhariwal et al., 2020). The only text-to-music model comparable with our work is the Riffusion model (Forsgren & Martiros, 2022).
We describe the merits of our model in both quantitative and qualitative terms from multiple perspectives: (1) genre diversity, (2) relevance of the music to the given text prompt, (3) sound quality, and (4) long-term structure in the generated music. Our analyses are reported in Sections 6.1 to 6.3. Note that there is no perfect evaluation metric in the existing literature (Kreuk et al., 2022; Borsos et al., 2022; Dhariwal et al., 2020), since music is a complex artifact with a range of properties (e.g., timbre, rhythm, and structure), not to mention the subjectivity of music perception. In the present work, we try our best to provide a diverse set of angles from which to evaluate the generated music. In addition, we suggest readers listen to the provided samples in order to gain a more holistic impression of our model compared to the Riffusion model (Forsgren & Martiros, 2022): bit.ly/anonymous-mousai.
6.1. Diversity & Text-to-Music Relevance
We design a listener test to illustrate the diversity and text relevance of Moûsai. Specifically, we compose a list of 40 text prompts spanning several common music genres: electronic, hip hop, metal, and pop.
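Returning to the inference setup above: both stages call their U-Net once per DDIM sampling step. The loop can be sketched with a toy stand-in denoiser; the schedule and the "denoiser" here are illustrative assumptions, not the trained model or the exact sampler of Song et al. (2021).

```python
import numpy as np

def ddim_sample(f, eps, sigmas):
    # Deterministic DDIM-style loop: start from pure noise and call the
    # denoiser f once per step while stepping down the noise schedule.
    x = eps * sigmas[0]
    for s_cur, s_next in zip(sigmas[:-1], sigmas[1:]):
        denoised = f(x, s_cur)       # model's estimate of the clean signal
        d = (x - denoised) / s_cur   # direction pointing toward the noise
        x = denoised + d * s_next    # move to the next (lower) noise level
    return x

f = lambda x, sigma: x / (1 + sigma)  # toy stand-in for the U-Net
sigmas = np.linspace(1.0, 0.0, 101)   # 100 sampling steps
out = ddim_sample(f, np.random.randn(32, 16), sigmas)
print(out.shape)  # (32, 16)
```

The step count (10 versus 100) trades speed for quality, as discussed in Section 6.4.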
(See Appendix A for the entire list of prompts, ten per category.)
Using these prompts, we generate music with both Moûsai and the Riffusion model (Forsgren & Martiros, 2022), for a total of 80 pieces of music, two for each prompt. Qualitatively, we observe that our music samples exhibit good diversity and fit the text descriptions well.
To validate this quantitatively, we conducted a small-scale psychophysics evaluation, recruiting three perceivers (annotators) with diverse demographic backgrounds (both female and male, all with at least a Master's degree). Each annotator listens to all 80 music samples we provide and is instructed to categorize each sample into exactly one of the four provided genres. This is a four-alternative forced choice paradigm, i.e., a variant of the two-alternative forced choice setting that is considered the gold standard in psychophysics.
We record how many times the perceiver correctly identifies the genre from which the respective model was generating. A large number (or score) means that the model often generated music that, according to the human perceiver, plausibly belonged to the correct category (when compared to the other three categories). To achieve a good score, the model needs to generate diverse and genre-specific music. We take the score as a measure of how well the model performs text-conditional music generation.
In Figure 5, we display the confusion matrix of this genre identification test for both our model (left) and the Riffusion model (right). For our model, the annotators identify the right genres most of the time, whereas for the Riffusion model, the annotators often perceive the music as more generic, categorizing it as Pop.
6.2. Sound Quality
Apart from diversity and relevance, we also evaluate the sound quality of the music we generate.
(a) Confusion matrix for the music pieces generated by Moûsai.
(b) Confusion matrix for the music pieces generated by the Riffusion model.
Figure 5. Evaluation results of genre categorization for our model (left) and the Riffusion model (right). We show the confusion matrix across the four common music genres (electronic, hip hop, metal, and pop). Dark values on the diagonal mean that a model generates music the perceivers categorize into the correct genre. We can see that our model (left) has most of its mass on the diagonal, while the Riffusion model tends to generate generic samples that are very similar to Pop for all genres and are therefore difficult to categorize correctly. Note that each matrix adds up to 120, corresponding to 40 samples per model annotated by three perceivers each.
From the mel spectrograms we visualize in Figure 6, we can see that low-frequency sounds are handled rather well by our model. From the music samples we provide, it is apparent that our model performs well with drum-like sounds as frequently found in electronic, house, dubstep, techno, EDM, and metal music. This is likely a consequence of the lower amount of information required to represent low-frequency sounds.
6.3. Structure
Another qualitative advantage of our model is its capability to handle long-term structure, as opposed to the Riffusion model's context length of 5 seconds, as mentioned in Table 1. Our generated samples exhibit structure over longer periods of time, exceeding the minute mark. Rhythm, loops, riffs, and occasionally even entire choruses are all found in the generated music.
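The genre-identification tally behind a confusion matrix like Figure 5 can be sketched from (true genre, perceived genre) pairs; the ratings below are hypothetical toy data, not the study's 120 annotations.

```python
import numpy as np

genres = ["Electronic", "Hip Hop", "Metal", "Pop"]

# Hypothetical toy ratings; the real study has 40 samples per model,
# each labeled by three annotators (120 entries per matrix).
ratings = [("Electronic", "Electronic"), ("Hip Hop", "Hip Hop"),
           ("Metal", "Pop"), ("Pop", "Pop")]

cm = np.zeros((4, 4), dtype=int)
for true, perceived in ratings:
    cm[genres.index(true), genres.index(perceived)] += 1

# Diagonal mass = correctly identified genres.
print(int(cm.trace()), int(cm.sum()))  # 3 4
```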
We find that increasing the number of attention blocks (e.g., from a total of 4–8 to a total of 32+) in the latent diffusion model can improve the general structure of the songs, thanks to the long-context view. If the model is trained without attention blocks, the context provided by the U-Net is not large enough to learn any meaningful long-term structure.
6.4. Additional Properties
In addition to the main evaluation results, we also explore several properties of our model, namely the trade-offs between speed and quality and between compression ratio and quality, as well as the text-audio binding.
Trade-Off between Speed and Quality. We find that 10
Figure 6. Mel spectrogram comparison between the true samples (top) and the auto-encoded samples (bottom); cf. text.
sampling steps in both stages can be enough to generate reasonable audio. We can achieve improved quality and reduced noise for high-frequency sounds by trading off speed, i.e., by increasing the number of sampling steps in the diffusion decoder (e.g., 50–100 steps). Increasing the number of sampling steps in the latent diffusion model (again on the order of 50–100 steps) will similarly improve the quality, likely due to the more detailed generated latents, and at the same time result in overall better-structured music. To make sure the results are comparable when varying the number of sampling steps, we use the same starting noise in both stages. In both cases, this suggests that using more advanced samplers could help improve the speed-quality trade-off.
Trade-Off between Compression Ratio and Quality. We find that decreasing the compression ratio of the first stage (e.g., to 32x) can improve the quality of low-frequency sounds, but in turn slows down the model, as the second stage has to work on higher-dimensional data.
As proposed later in Section 7, we hypothesize that using perceptually weighted loss functions instead of the L2 loss during diffusion could help this trade-off, giving more balanced importance to high-frequency sounds even at high compression ratios.
Text-Audio Binding. We find that the text-audio binding works well with a CFG scale higher than 3.0. Since the model is trained with metadata such as title, album, artist, genre, year, and chunk, the best keywords to control the generation appear to be frequent descriptive names, such as the genre of the music, or descriptions commonly found in titles, such as "remix" and "(Deluxe Edition)", among many others. A similar behavior has been observed and exploited in text-to-image models to generate better-looking results. We find that the chunk-based text-conditioning is coherent with the description: for example, a description of the form "1 of N" will tend to produce the starting portion of a song, a description of the form "N of N" will tend to produce the ending portion of a song, and anything in between will tend to produce a song playing over the entire generation period.
7. Future Work
Data and Scaling. Increasing the scale of both the data and the model can very likely provide drastic quality improvements. Following Dhariwal et al. (2020) and Borsos et al. (2022), we suggest training with 50k-100k hours of audio instead of 2.5k. Using a larger pretrained language model to obtain text embeddings has been shown to be very important for quality in images (Saharia et al., 2022); we hypothesize that the same holds if applied to our second-stage model.
Diffusion. More sophisticated diffusion samplers can be used to obtain higher quality for the same number of sampling steps; similarly, more advanced distillation techniques could be used (Salimans & Ho, 2022).
Model.
Some promising future modelling approaches that need more experimentation include: (1) training diffusion models using perceptual losses on the waveforms instead of L2, which might help decrease the initial size of the U-Net, as we would not have to process non-perceivable sounds; (2) improving the quality of the diffusion autoencoder by using mel spectrograms instead of magnitude spectrograms as input; (3) exploring other, non-text-based types of conditioning, such as DreamBooth-like models (Ruiz et al., 2022), to navigate the audio latent space, which is often hard to describe in words.
8. Conclusion
In this work, we presented Moûsai, a waveform-based audio generation method built on two diffusion models. First, we trained a diffusion autoencoder to compress a magnitude-only spectrogram by 64x. Using a custom 1D U-Net, the compressed latent is decoded back to a waveform by diffusion. In the second stage, we trained a diffusion model to generate a new latent from noise while conditioning on text embeddings extracted from a frozen T5 transformer model, using a 1D U-Net architecture similar to the one used in the first stage.
We show that, in contrast to earlier approaches, our model can generate minutes of high-quality music in real time on a consumer GPU, with compelling text-audio binding. In addition to the trained models, we provide a collection of open-source libraries with the hope of facilitating future work in the field. We expect that the present work will help pave the way towards higher-quality, longer-context text-to-music generation for future applications.
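As a compact summary, the full inference stack of Eq. (6) can be sketched end to end with trivial stand-ins for the two trained U-Nets. The `gen`/`dec` bodies below are toy recursions, not the real samplers; the 64x latent-to-waveform ratio follows the paper's compression factor.

```python
import numpy as np

def gen(e, eps, s):
    # Stand-in latent generator: s "denoising" calls conditioned on e.
    z = eps
    for _ in range(s):
        z = 0.5 * z + 0.5 * e
    return z

def dec(z, eps, s):
    # Stand-in diffusion decoder: s calls mapping the latent to a waveform
    # at 64x the latent's temporal resolution.
    w = eps
    for _ in range(s):
        w = 0.5 * w + 0.5 * np.repeat(z, 64)
    return w

e = np.random.randn(32)                 # toy text embedding
w = dec(gen(e, np.random.randn(32), 10),
        np.random.randn(32 * 64), 10)   # Eq. (6): dec(gen(e, ε_gen, s_gen), ε_dec, s_dec)
print(w.shape)  # (2048,)
```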
+ +70 +70 +60 +60 +50 +50 +40 +40 +30 +30 +20 +20 +10 +10 +0 +100 +200 +300 +400 +500 +0 +100 +200 +300 +400 +500 +70 +70 +60 +60 +50 +50 +40 +40 +30 +30 +20 +20 +10 +10 +100 +200 +300 +400 +500 +0 +100 +200 +300 +400 +500Moûsai: Text-to-Music Generation with Long-Context Latent Diffusion +Author Contributions +Flavio Schneider came up with the idea and implemented +all the elements of this paper, which is part of his Master’s +thesis at ETH Zürich (Schneider, 2023). +Zhijing Jin co-supervised the Master’s thesis and the work, +conducted weekly meetings, helped designed the structure +of the paper, and led the human evaluation experiments of +this paper. +Bernhard Schölkopf supervised the work and provided +precious suggestions during the progress of this work, as +well as extensive suggestions for the writing. +All of Flavio Schneider, Zhijing Jin, and Bernhard +Schölkopf contributed significantly to the writing and pol- +ishing of the paper. +Acknowledgment +We thank Stability AI for their generous support for the com- +putational resources. We are also grateful for the generous +help by our annotators Andrew Lee, Aylin Gunal, Fernando +Gonzalez, and Yiwen Ding. We thank Fernando Gonzalez +and Zhiheng Lyu for helping to improve the format of the pa- +per. We thank Nasim Rahaman for early-stage discussions +to improve the model design and contributions. +This material is based in part upon works supported by +the German Federal Ministry of Education and Research +(BMBF): Tübingen AI Center, FKZ: 01IS18039B; and by +the Machine Learning Cluster of Excellence, EXC number +2064/1 – Project number 390727645. Zhijing Jin is sup- +ported by PhD fellowships from the Future of Life Institute +and Open Philanthropy, as well as the travel support from +ELISE (GA no 951847) for the ELLIS program. +References +Baevski, A., Zhou, Y., Mohamed, A., and Auli, M. +wav2vec 2.0: A framework for self-supervised learning +of speech representations. 
In Larochelle, H., Ranzato, +M., Hadsell, R., Balcan, M., and Lin, H. (eds.), Ad- +vances in Neural Information Processing Systems 33: +Annual Conference on Neural Information Processing +Systems 2020, NeurIPS 2020, December 6-12, 2020, +virtual, +2020. +URL https://proceedings. +neurips.cc/paper/2020/hash/ +92d1e1eb1cd6f9fba3227870bb6d7f07-Abstract. +html. +Borsos, +Z., +Marinier, +R., +Vincent, +D., +Kharitonov, +E., Pietquin, O., Sharifi, M., Teboul, O., Grangier, +D., Tagliasacchi, M., and Zeghidour, N. +Audiolm: +a language modeling approach to audio generation. +CoRR, abs/2209.03143, 2022. +doi: 10.48550/arXiv. +2209.03143. URL https://doi.org/10.48550/ +arXiv.2209.03143. +Caillon, A. and Esling, P. RAVE: A variational autoencoder +for fast and high-quality neural audio synthesis. CoRR, +abs/2111.05011, 2021. URL https://arxiv.org/ +abs/2111.05011. +Chang, H., Zhang, H., Barber, J., Maschinot, A., Lezama, +J., Jiang, L., Yang, M., Murphy, K., Freeman, W. T., +Rubinstein, M., Li, Y., and Krishnan, D. Muse: Text- +to-image generation via masked generative transform- +ers. CoRR, abs/2301.00704, 2023. doi: 10.48550/arXiv. +2301.00704. URL https://doi.org/10.48550/ +arXiv.2301.00704. +Défossez, A., Copet, J., Synnaeve, G., and Adi, Y. High fi- +delity neural audio compression. CoRR, abs/2210.13438, +2022. doi: 10.48550/arXiv.2210.13438. URL https: +//doi.org/10.48550/arXiv.2210.13438. +Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, +L. ImageNet: A large-scale hierarchical image database. +In Computer Vision and Pattern Recognition (CVPR), pp. +248–255, 2009. +Deng, K., Bansal, A., and Ramanan, D. Unsupervised audio- +visual synthesis via exemplar autoencoders. In 9th Inter- +national Conference on Learning Representations, ICLR +2021, Virtual Event, Austria, May 3-7, 2021. OpenRe- +view.net, 2021. URL https://openreview.net/ +forum?id=43VKWxg\_Sqr. +Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. 
BERT: +Pre-training of deep bidirectional transformers for lan- +guage understanding. In Proceedings of the 2019 Confer- +ence of the North American Chapter of the Association for +Computational Linguistics: Human Language Technolo- +gies, Volume 1 (Long and Short Papers), pp. 4171–4186, +Minneapolis, Minnesota, June 2019. Association for +Computational Linguistics. doi: 10.18653/v1/N19-1423. +URL https://aclanthology.org/N19-1423. +Dhariwal, P., Jun, H., Payne, C., Kim, J. W., Radford, A., +and Sutskever, I. Jukebox: A generative model for music. +CoRR, abs/2005.00341, 2020. URL https://arxiv. +org/abs/2005.00341. +Dieleman, S., van den Oord, A., and Simonyan, K. The +challenge of realistic music generation: +Modelling +raw audio at scale. +In Bengio, S., Wallach, H. M., +Larochelle, H., Grauman, K., Cesa-Bianchi, N., and +Garnett, R. (eds.), Advances in Neural Information +Processing Systems 31: Annual Conference on Neural +Information Processing Systems 2018, NeurIPS 2018, +December 3-8, 2018, Montréal, Canada, pp. 8000– +8010, +2018. +URL +https://proceedings. + +Moûsai: Text-to-Music Generation with Long-Context Latent Diffusion +neurips.cc/paper/2018/hash/ +3e441eec3456b703a4fe741005f3981f-Abstract. +html. +Doornbusch, P. Gerhard nierhaus: Algorithmic composi- +tion: Paradigms of automated music generation. Comput. +Music. J., 34(3):70–74, 2010. doi: 10.1162/COMJ\\_r\\_ +00008. URL https://doi.org/10.1162/COMJ\ +_r\_00008. +Elizalde, B., Deshmukh, S., Ismail, M. A., and Wang, H. +CLAP: learning audio concepts from natural language +supervision. +CoRR, abs/2206.04769, 2022. +doi: 10. +48550/arXiv.2206.04769. URL https://doi.org/ +10.48550/arXiv.2206.04769. +Engel, J. H., Agrawal, K. K., Chen, S., Gulrajani, I., +Donahue, C., and Roberts, A. +Gansynth: Adversar- +ial neural audio synthesis. +In 7th International Con- +ference on Learning Representations, ICLR 2019, New +Orleans, LA, USA, May 6-9, 2019. OpenReview.net, +2019. URL https://openreview.net/forum? 
+id=H1xQVn09FX. +Esser, P., Rombach, R., and Ommer, B. +Taming trans- +formers for high-resolution image synthesis. +In +IEEE Conference on Computer Vision and Pattern +Recognition, CVPR 2021, virtual, June 19-25, 2021, +pp. 12873–12883. Computer Vision Foundation / +IEEE, 2021. +doi: 10.1109/CVPR46437.2021.01268. +URL +https://openaccess.thecvf.com/ +content/CVPR2021/html/Esser_Taming_ +Transformers_for_High-Resolution_ +Image_Synthesis_CVPR_2021_paper.html. +Forsgren, S. and Martiros, H. Riffusion - Stable diffusion +for real-time music generation. 2022. URL https: +//riffusion.com/about. +Giraudo, S. Generation of musical patterns through operads. +CoRR, abs/2104.12432, 2021. URL https://arxiv. +org/abs/2104.12432. +Hinton, G. E. and Salakhutdinov, R. R. Reducing the di- +mensionality of data with neural networks. science, 313 +(5786):504–507, 2006. +Ho, J. and Salimans, T. +Classifier-free diffusion guid- +ance. CoRR, abs/2207.12598, 2022. doi: 10.48550/arXiv. +2207.12598. URL https://doi.org/10.48550/ +arXiv.2207.12598. +Ho, J., Chan, W., Saharia, C., Whang, J., Gao, R., Gritsenko, +A. A., Kingma, D. P., Poole, B., Norouzi, M., Fleet, D. J., +and Salimans, T. Imagen video: High definition video +generation with diffusion models. CoRR, abs/2210.02303, +2022. doi: 10.48550/arXiv.2210.02303. URL https: +//doi.org/10.48550/arXiv.2210.02303. +Kim, M., Hong, J., and Ro, Y. M. +Lip to speech +synthesis with visual context attentional GAN. +In +Ranzato, M., Beygelzimer, A., Dauphin, Y. N., Liang, +P., and Vaughan, J. W. (eds.), Advances in Neural +Information Processing Systems 34: Annual Conference +on Neural Information Processing Systems 2021, +NeurIPS 2021, December 6-14, 2021, virtual, pp. 2758– +2770, +2021. +URL +https://proceedings. +neurips.cc/paper/2021/hash/ +16437d40c29a1a7b1e78143c9c38f289-Abstract. +html. +Kingma, D. P. and Welling, M. Auto-encoding variational +bayes. In Bengio, Y. and LeCun, Y. 
(eds.), 2nd Interna- +tional Conference on Learning Representations, ICLR +2014, Banff, AB, Canada, April 14-16, 2014, Conference +Track Proceedings, 2014. URL http://arxiv.org/ +abs/1312.6114. +Kong, Z., Ping, W., Huang, J., Zhao, K., and Catanzaro, +B. +Diffwave: A versatile diffusion model for audio +synthesis. In 9th International Conference on Learn- +ing Representations, ICLR 2021, Virtual Event, Austria, +May 3-7, 2021. OpenReview.net, 2021. URL https: +//openreview.net/forum?id=a-xFK8Ymz5J. +Kreuk, F., Synnaeve, G., Polyak, A., Singer, U., Dé- +fossez, A., Copet, J., Parikh, D., Taigman, Y., and +Adi, Y. Audiogen: Textually guided audio generation. +CoRR, abs/2209.15352, 2022. +doi: 10.48550/arXiv. +2209.15352. URL https://doi.org/10.48550/ +arXiv.2209.15352. +Kumar, K., Kumar, R., de Boissiere, T., Gestin, L., Teoh, +W. Z., Sotelo, J., de Brébisson, A., Bengio, Y., and +Courville, A. C. +Melgan: +Generative adversarial +networks for conditional waveform synthesis. In Wallach, +H. M., Larochelle, H., Beygelzimer, A., d’Alché-Buc, F., +Fox, E. B., and Garnett, R. (eds.), Advances in Neural In- +formation Processing Systems 32: Annual Conference on +Neural Information Processing Systems 2019, NeurIPS +2019, December 8-14, 2019, Vancouver, BC, Canada, pp. +14881–14892, 2019. URL https://proceedings. +neurips.cc/paper/2019/hash/ +6804c9bca0a615bdb9374d00a9fcba59-Abstract. +html. +Lam, M. W. Y., Wang, J., Su, D., and Yu, D. BDDM: bilat- +eral denoising diffusion models for fast and high-quality +speech synthesis. In The Tenth International Conference +on Learning Representations, ICLR 2022, Virtual Event, +April 25-29, 2022. OpenReview.net, 2022. URL https: +//openreview.net/forum?id=L7wzpQttNO. +Lee, D., Kim, C., Kim, S., Cho, M., and Han, W. Au- +toregressive image generation using residual quantiza- +tion. 
+In IEEE/CVF Conference on Computer Vision + +Moûsai: Text-to-Music Generation with Long-Context Latent Diffusion +and Pattern Recognition, CVPR 2022, New Orleans, LA, +USA, June 18-24, 2022, pp. 11513–11522. IEEE, 2022. +doi: 10.1109/CVPR52688.2022.01123. URL https:// +doi.org/10.1109/CVPR52688.2022.01123. +Leng, Y., Chen, Z., Guo, J., Liu, H., Chen, J., Tan, X., +Mandic, D. P., He, L., Li, X., Qin, T., Zhao, S., and +Liu, T. Binauralgrad: A two-stage conditional diffu- +sion probabilistic model for binaural audio synthesis. +CoRR, abs/2205.14807, 2022. +doi: 10.48550/arXiv. +2205.14807. URL https://doi.org/10.48550/ +arXiv.2205.14807. +Li, M., Xu, R., Wang, S., Zhou, L., Lin, X., Zhu, C., Zeng, +M., Ji, H., and Chang, S. Clip-event: Connecting text +and images with event structures. In IEEE/CVF Confer- +ence on Computer Vision and Pattern Recognition, CVPR +2022, New Orleans, LA, USA, June 18-24, 2022, pp. +16399–16408. IEEE, 2022. doi: 10.1109/CVPR52688. +2022.01593. URL https://doi.org/10.1109/ +CVPR52688.2022.01593. +Loshchilov, I. and Hutter, F. Decoupled weight decay regu- +larization. In 7th International Conference on Learning +Representations, ICLR 2019, New Orleans, LA, USA, +May 6-9, 2019. OpenReview.net, 2019. URL https: +//openreview.net/forum?id=Bkg6RiCqY7. +Mehri, S., Kumar, K., Gulrajani, I., Kumar, R., Jain, S., +Sotelo, J., Courville, A. C., and Bengio, Y. Samplernn: +An unconditional end-to-end neural audio generation +model. In 5th International Conference on Learning +Representations, ICLR 2017, Toulon, France, April 24-26, +2017, Conference Track Proceedings. OpenReview.net, +2017. URL https://openreview.net/forum? +id=SkxKPDv5xl. +Morrison, M., Kumar, R., Kumar, K., Seetharaman, P., +Courville, A. C., and Bengio, Y. Chunked autoregressive +GAN for conditional waveform synthesis. In The Tenth +International Conference on Learning Representations, +ICLR 2022, Virtual Event, April 25-29, 2022. OpenRe- +view.net, 2022. 
URL https://openreview.net/ +forum?id=v3aeIsY\_vVX. +Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, +C. L., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., +Ray, A., Schulman, J., Hilton, J., Kelton, F., Miller, +L., Simens, M., Askell, A., Welinder, P., Christiano, +P. F., Leike, J., and Lowe, R. +Training language +models to follow instructions with human feedback. +CoRR, abs/2203.02155, 2022. +doi: 10.48550/arXiv. +2203.02155. URL https://doi.org/10.48550/ +arXiv.2203.02155. +Pasini, M. and Schlüter, J. Musika! fast infinite waveform +music generation. CoRR, abs/2208.08706, 2022. doi: 10. +48550/arXiv.2208.08706. URL https://doi.org/ +10.48550/arXiv.2208.08706. +Pennington, J., Socher, R., and Manning, C. +GloVe: +Global vectors for word representation. In Proceedings +of the 2014 Conference on Empirical Methods in Nat- +ural Language Processing (EMNLP), pp. 1532–1543, +Doha, Qatar, October 2014. Association for Computa- +tional Linguistics. doi: 10.3115/v1/D14-1162. URL +https://aclanthology.org/D14-1162. +Preechakul, K., Chatthee, N., Wizadwongsa, S., and Suwa- +janakorn, S. Diffusion autoencoders: Toward a meaning- +ful and decodable representation. In IEEE/CVF Confer- +ence on Computer Vision and Pattern Recognition, CVPR +2022, New Orleans, LA, USA, June 18-24, 2022, pp. +10609–10619. IEEE, 2022. doi: 10.1109/CVPR52688. +2022.01036. URL https://doi.org/10.1109/ +CVPR52688.2022.01036. +Radford, A., Narasimhan, K., Salimans, T., and Sutskever, +I. Improving language understanding by generative pre- +training. Technical report, OpenAI, 2018. +Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., +Matena, M., Zhou, Y., Li, W., and Liu, P. J. Exploring the +limits of transfer learning with a unified text-to-text trans- +former. J. Mach. Learn. Res., 21:140:1–140:67, 2020. +URL http://jmlr.org/papers/v21/20-074. +html. +Ramesh, A., Dhariwal, P., Nichol, A., Chu, C., and Chen, +M. Hierarchical text-conditional image generation with +CLIP latents. 
CoRR, abs/2204.06125, 2022. doi: 10. +48550/arXiv.2204.06125. URL https://doi.org/ +10.48550/arXiv.2204.06125. +Rombach, R., Blattmann, A., Lorenz, D., Esser, P., and +Ommer, B. High-resolution image synthesis with latent +diffusion models. In IEEE/CVF Conference on Computer +Vision and Pattern Recognition, CVPR 2022, New Or- +leans, LA, USA, June 18-24, 2022, pp. 10674–10685. +IEEE, 2022. +doi: 10.1109/CVPR52688.2022.01042. +URL https://doi.org/10.1109/CVPR52688. +2022.01042. +Ronneberger, O., Fischer, P., and Brox, T. U-net: Con- +volutional networks for biomedical image segmentation. +In Navab, N., Hornegger, J., III, W. M. W., and Frangi, +A. F. (eds.), Medical Image Computing and Computer- +Assisted Intervention - MICCAI 2015 - 18th International +Conference Munich, Germany, October 5 - 9, 2015, Pro- +ceedings, Part III, volume 9351 of Lecture Notes in +Computer Science, pp. 234–241. Springer, 2015. doi: +10.1007/978-3-319-24574-4\_28. URL https://doi. +org/10.1007/978-3-319-24574-4_28. + +Moûsai: Text-to-Music Generation with Long-Context Latent Diffusion +Ruiz, N., Li, Y., Jampani, V., Pritch, Y., Rubinstein, M., +and Aberman, K. +Dreambooth: Fine tuning text-to- +image diffusion models for subject-driven generation. +CoRR, abs/2208.12242, 2022. +doi: 10.48550/arXiv. +2208.12242. URL https://doi.org/10.48550/ +arXiv.2208.12242. +Saharia, C., Chan, W., Saxena, S., Li, L., Whang, J., +Denton, E., Ghasemipour, S. K. S., Ayan, B. K., Mah- +davi, S. S., Lopes, R. G., Salimans, T., Ho, J., Fleet, +D. J., and Norouzi, M. +Photorealistic text-to-image +diffusion models with deep language understanding. +CoRR, abs/2205.11487, 2022. +doi: 10.48550/arXiv. +2205.11487. URL https://doi.org/10.48550/ +arXiv.2205.11487. +Salas, H. A. G., Gelbukh, A. F., Calvo, H., and Soria, +F. G. Automatic music composition with simple proba- +bilistic generative grammars. Polibits, 44:59–65, 2011. +doi: 10.17562/pb-44-9. URL https://doi.org/ +10.17562/pb-44-9. +Salimans, T. and Ho, J. 
+Progressive distillation for fast sampling of diffusion models. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022. URL https://openreview.net/forum?id=TIdIXIpzhoI.
+Schneider, F. ArchiSound: Audio generation with diffusion. January 2023. URL https://github.com/flavioschneider/master-thesis/blob/main/audio_diffusion_thesis.pdf.
+Song, J., Meng, C., and Ermon, S. Denoising diffusion implicit models. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021. URL https://openreview.net/forum?id=St1giarCHLP.
+van den Oord, A., Dieleman, S., Zen, H., Simonyan, K., Vinyals, O., Graves, A., Kalchbrenner, N., Senior, A. W., and Kavukcuoglu, K. Wavenet: A generative model for raw audio. In The 9th ISCA Speech Synthesis Workshop, Sunnyvale, CA, USA, 13-15 September 2016, pp. 125. ISCA, 2016. URL http://www.isca-speech.org/archive/SSW_2016/abstracts/ssw9_DS-4_van_den_Oord.html.
+van den Oord, A., Vinyals, O., and Kavukcuoglu, K. Neural discrete representation learning. In Guyon, I., von Luxburg, U., Bengio, S., Wallach, H. M., Fergus, R., Vishwanathan, S. V. N., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pp. 6306–6315, 2017. URL https://proceedings.neurips.cc/paper/2017/hash/7a98af17e63a0ac09ce2e96d03992fbc-Abstract.html.
+Villegas, R., Babaeizadeh, M., Kindermans, P., Moraldo, H., Zhang, H., Saffar, M. T., Castro, S., Kunze, J., and Erhan, D. Phenaki: Variable length video generation from open domain textual description. CoRR, abs/2210.02399, 2022. doi: 10.48550/arXiv.2210.02399. URL https://doi.org/10.48550/arXiv.2210.02399.
+Yang, D., Yu, J., Wang, H., Wang, W., Weng, C., Zou, Y., and Yu, D.
+Diffsound: Discrete diffusion model for text-to-sound generation. CoRR, abs/2207.09983, 2022. doi: 10.48550/arXiv.2207.09983. URL https://doi.org/10.48550/arXiv.2207.09983.
+Yu, B., Lu, P., Wang, R., Hu, W., Tan, X., Ye, W., Zhang, S., Qin, T., and Liu, T. Museformer: Transformer with fine- and coarse-grained attention for music generation. CoRR, abs/2210.10349, 2022a. doi: 10.48550/arXiv.2210.10349. URL https://doi.org/10.48550/arXiv.2210.10349.
+Yu, J., Xu, Y., Koh, J. Y., Luong, T., Baid, G., Wang, Z., Vasudevan, V., Ku, A., Yang, Y., Ayan, B. K., Hutchinson, B., Han, W., Parekh, Z., Li, X., Zhang, H., Baldridge, J., and Wu, Y. Scaling autoregressive models for content-rich text-to-image generation. CoRR, abs/2206.10789, 2022b. doi: 10.48550/arXiv.2206.10789. URL https://doi.org/10.48550/arXiv.2206.10789.
+Zeghidour, N., Luebs, A., Omran, A., Skoglund, J., and Tagliasacchi, M. Soundstream: An end-to-end neural audio codec. IEEE ACM Trans. Audio Speech Lang. Process., 30:495–507, 2022. doi: 10.1109/TASLP.2021.3129994. URL https://doi.org/10.1109/TASLP.2021.3129994.
+A. Text Prompts
+We list all the text prompts composed for the four common music genres in Table 3.
+Genre = Electronic
+– Drops, Kanine Remix, Darkzy, Drops Remixes, bass house, (Deluxe) (Remix) 3 of 4
+– Electronic, Dance, EDM (Deluxe) (Remix) 3 of 4
+– Electro House (Remix), 2023, 3 of 4
+– Electro Swing Remix 2030 (Deluxe Edition) 3 of 4
+– Future Bass, EDM (Remix) 3 of 4, Remix
+– EDM (Deluxe) (Remix) 3 of 4
+– EDM, Vocal, Relax, Remix, 2023, 8D Audio
+– Hardstyle, Drop, 8D, Remix, High Quality, 2 of 4
+– Dubstep Insane Drop Remix (Deluxe Edition), 2 of 4
+– Drop, French 79, BPM Artist, Vol.
+4, Electronica, 2016
+Genre = Hip Hop
+– Real Hip Hop, 2012, Lil B, Gods Father, escape room, 3 of 4
+– C’est toujours pour ceux qui savent, French Hip Hop, 2018 (Deluxe), 3 of 4
+– Dejando Claro, Latin Hip Hop 2022 (Deluxe Edition) 3 of 4
+– Latin Hip Hop 2022 (Deluxe Edition) 3 of 4
+– Alternative Hip Hop Oh-My, 2016, (Deluxe), 3 of 4
+– Es Geht Mir Gut, German Hip Hop, 2016, (Deluxe), 3 of 4
+– Italian Hip Hop 2022 (Deluxe Edition) 3 of 4
+– RUN, Alternative Hip Hop, 2016, (Deluxe), 3 of 4
+– Hip Hop, Rap Battle, 2018 (High Quality) (Deluxe Edition) 3 of 4
+– Hip Hop Tech, Bandlez, Hot Pursuit, brostep, 3 of 4
+Genre = Metal
+– Death Metal, 2012, 3 of 4
+– Heavy Death Metal (Deluxe Edition), 3 of 4
+– Black Alternative Metal, The Pick of Death (Deluxe), 2006, 3 of 4
+– Kill For Metal, Iron Fire, To The Grave, melodic metal, 3 of 4
+– Melodic Metal, Iron Dust (Deluxe), 2006, 3 of 4
+– Possessed Death Metal Stones (Deluxe), 2006, 3 of 4
+– Black Metal Venom, 2006, 3 of 4
+– The Heavy Death Metal War (Deluxe), 2006, 3 of 4
+– Heavy metal (Deluxe Edition), 3 of 4
+– Viking Heavy Death Metal (Deluxe), 2006, 3 of 4
+Genre = Pop
+– (Everything I Do), I Do It For You, Bryan Adams, The Best Of Me, canadian pop, 3 of 4
+– Payphone, Maroon 5, Overexposed, Pop, 2021, 3 of 4
+– 24K Magic, Bruno Mars, 24K Magic, dance pop, 3 of 4
+– Who Is It, Michael Jackson, Dangerous, Pop (Deluxe), 3 of 4
+– Forget Me, Lewis Capaldi, Forget Me, Pop Pop, 2022, 3 of 4
+– Pop, Speak Now, Taylor Swift, 2014, (Deluxe), 3 of 4
+– Pop Pop, Maroon 5, Overexposed, 2016, 3 of 4
+– Pointless, Lewis Capaldi, Pointless, Pop, 2022, 3 of 4
+– Saved, Khalid, American Teen, Pop, 2022, 3 of 4
+– Deja vu, Fearless, Pop, 2020, (Deluxe), 3 of 4
+Table 3. Text prompts composed for the four common music genres: electronic, hip hop, metal, and pop.
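+The prompts above are flat, comma-joined metadata fields (roughly: title, artist, album, genre, year, and a "x of 4" chunk tag). The field order is only inferred from the table, not a documented format; a minimal helper to assemble such prompts could look like:

```python
# Build a Table-3-style text prompt by joining metadata fields with commas.
# The field order is inferred from the examples above; it is not a documented format.
def make_prompt(*fields: str) -> str:
    return ", ".join(fields)

prompt = make_prompt("Payphone", "Maroon 5", "Overexposed", "Pop", "2021", "3 of 4")
print(prompt)  # Payphone, Maroon 5, Overexposed, Pop, 2021, 3 of 4
```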
+
diff --git a/0tFKT4oBgHgl3EQfOC19/content/tmp_files/load_file.txt b/0tFKT4oBgHgl3EQfOC19/content/tmp_files/load_file.txt
new file mode 100644
index 0000000000000000000000000000000000000000..be46fb384d9855f89b6345d89bf6eea72420e40f
--- /dev/null
+++ b/0tFKT4oBgHgl3EQfOC19/content/tmp_files/load_file.txt
@@ -0,0 +1,1490 @@
+filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf,len=1489
+Moûsai: Text-to-Music Generation with Long-Context Latent Diffusion
+Flavio Schneider 1 Zhijing Jin 1 2 Bernhard Schölkopf 2
+Abstract
+The recent surge in popularity of diffusion models for image generation has brought new attention to the potential of these models in other areas of media synthesis. One area that has yet to be fully explored is the application of diffusion models to music generation. Music generation requires handling multiple aspects, including the temporal dimension, long-term structure, multiple layers of overlapping sounds, and nuances that only trained listeners can detect. In our work, we investigate the potential of diffusion models for text-conditional music generation. We develop a cascading latent diffusion approach that can generate multiple minutes of high-quality stereo music at 48kHz from textual descriptions.
+For each model, we make an effort to maintain reasonable inference speed, targeting real-time on a single consumer GPU. In addition to trained models, we provide a collection of open-source libraries with the hope of facilitating future work in the field.1
+1. Introduction
+Music generation, or more generally audio generation, has multiple aspects at different levels of abstraction that make it a challenging problem (van den Oord et al., 2016; Dieleman et al., 2018). Regardless of its challenging nature, automated or model-assisted music generation has been an active area of research (Doornbusch, 2010; Salas et al.,
+2011; Giraudo, 2021).
+1ETH Zürich, Switzerland 2Max Planck Institute for Intelligent Systems, Tübingen, Germany. Correspondence to: Flavio Schneider <flavio.schneider.97@gmail.com>.
+1We open-source the following:
+– Music samples for this paper: bit.ly/anonymous-mousai
+– All music samples for all models: bit.ly/audio-diffusion
+– Codes: github.com/archinetai/audio-diffusion-pytorch
+[Figure 1 diagram: text description → Tokenizer/TextEncoder → DiffusionGenerator (UNet1, from noise to latent) → DiffusionDecoder (UNet2, from latent to waveform); example prompt: "Egyptian Darbuka, Drums, Rythm, (Deluxe Edition), 2 of 4"]
+Figure 1. Two-stage generation architecture in the inference mode of our model. Specifically, we first encode text with a pretrained and frozen language model into a text embedding. Then, conditioning on the text, we generate a compressed latent with the diffusion generator, and finally, the compressed latent in turn is used to condition the diffusion decoder to generate the final waveform.
+Recently, with the rise of deep learning models and their success in computer vision (Deng et al., 2009; Rombach et al., 2022; Chang et al., 2023) and natural language processing (Pennington et al., 2014; Radford et al., 2018; Devlin et al., 2019; Ouyang et al., 2022), it is also promising to see how much benefit deep learning models can bring to audio generation. Existing audio generation models explore the use of recursive neural networks (Mehri et al., 2017), adversarial generative networks (Kumar et al., 2019; Kim et al., 2021; Engel et al., 2019; Morrison et al., 2022), autoencoders (Deng et al., 2021), and transformers (Yu et al., 2022a). As the more recent advancement in generative models, diffusion models have been used in speech synthesis (Kong et al., 2021; Lam et al.,
+2022; Leng et al., 2022), but are still under-explored for music generation. Moreover, there are several long-standing challenges in the area of music generation: (1) modeling the long-term structure, (2) improving the sound quality, (3) increasing the diversity of the generated music, and (4) enabling easier control of the generation, such as text prompts. A single model mastering all the proposed aspects would be a great addition to the music industry. It can enable the broader public to be part of the creative process by allowing them to compose music using an accessible text-based interface, assist creators in finding inspiration, and provide an unlimited supply of novel audio samples.
+arXiv:2301.11757v1 [cs.CL] 27 Jan 2023
+Table 1. Comparison of our Moûsai model with previous music generation models. We show the comparisons along (1) the audio sample rate @ the number of channels (Sample Rate↑, where higher is better); (2) the context length of the generated music (Ctx. Len.↑, where higher means the model is more capable of generating structural music; we use ⋆ to indicate variable length, and we assume that autoregressive methods are variable by default, but have an upper bound imposed by attention); (3) the input type (Input, where we feature using Text ✓ as the condition for the generation); (4) the type of the generated music (Music, where more Diverse↑ genres are better); (5) an example of the generated music type (Example); (6) the inference time (Infer. Time↓, where shorter is better; since the music length is seconds or minutes, an inference time equal to the audio length is the shortest, and we use ⋆ to mark models that can run inference fast on CPU); and (7) the total length of the music in the training data in hours (Data).
+Model | Sample Rate↑ | Ctx. Len.↑ | Input (Text ✓) | Music (Diverse↑) | Example | Infer. Time↓ | Data
+WaveNet (2016) | 16kHz@1 | Secs | None | Piano or speech | Piano | = Audio len.⋆ | 260
+Jukebox (2020) | 44.1kHz@1 | Mins⋆ | Lyrics, author, etc. | Song with the lyrics | Song | Hours | 70K
+RAVE (2021) | 48kHz@2 | Secs⋆ | Latent | Single-genre Music | Strings | = Audio len.⋆ | 100
+AudioLM (2022) | 16kHz@1 | Secs⋆ | Beginning of the music | Piano or speech | Piano | Mins | 40K
+Musika (2022) | 22.5kHz@2 | Secs | Context vector | Single-genre Music | Piano | = Audio len.⋆ | 1K
+Riffusion (2022) | 44.1kHz@1 | 5s | Text (genre, author, etc.) | Music of any genre | Jazzy clarinet | Mins | –
+AudioGen (2022) | 16kHz@1 | Secs⋆ | Text (a phrase/sentence) | Daily sounds | Dog barks | Hours | 4K
+Moûsai (Ours) | 48kHz@2 | Mins⋆ | Text (genre, author, etc.) | Music of any genre | African drums | = Audio len. | 2.5K
+From the landscape of existing music generation models in Table 1, we can see that the aforementioned challenges widely exist throughout the literature. For example, most text-to-audio systems (Forsgren & Martiros, 2022; Kreuk et al., 2022) can only generate a few seconds of audio, and many tend to require long inference times of up to many GPU hours to generate one minute of audio (Dhariwal et al., 2020; Kreuk et al., 2022). Apart from the text-to-music generation models, if we look at unconditional music generation, some models can generate high-quality samples and run in real time on CPU (Caillon & Esling, 2021; Pasini & Schlüter, 2022), but they are usually trained on a single modality (resulting in the ability to handle only single-genre music, not diverse genres), and none can handle long-term structure (van den Oord et al., 2016; Caillon & Esling, 2021; Pasini & Schlüter, 2022). To this end, we propose Moûsai,2 a text-conditional cascading diffusion model (Figure 1) that tries to address all the mentioned challenges at the same time.
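+At inference time, the cascade in Figure 1 runs text encoder → latent diffusion generator → diffusion decoder. The sketch below only tracks tensor sizes with illustrative stubs; the function and module names are ours, not the released audio-diffusion-pytorch API, and it assumes the 64x latent compression the authors report for 48kHz stereo audio.

```python
import numpy as np

# Shape-level sketch of the two-stage inference pipeline in Figure 1.
# All names are illustrative stubs; only the sizes reflect the paper:
# 48kHz stereo output and a 64x compressed latent.
SR, CHANNELS, COMPRESSION = 48_000, 2, 64

def encode_text(prompt: str, dim: int = 64) -> np.ndarray:
    # Stand-in for the frozen pretrained language model: one embedding per token.
    return np.zeros((len(prompt.split()), dim))

def diffusion_generator(text_emb: np.ndarray, seconds: int) -> np.ndarray:
    # Stage 2: iteratively denoise noise into a compressed latent, conditioned on text.
    latent_len = seconds * SR * CHANNELS // COMPRESSION
    return np.random.randn(latent_len)

def diffusion_decoder(latent: np.ndarray) -> np.ndarray:
    # Stage 1 decoder: map the latent back to a 48kHz stereo waveform.
    n_samples = latent.shape[0] * COMPRESSION // CHANNELS
    return np.zeros((CHANNELS, n_samples))

emb = encode_text("Egyptian Darbuka, Drums, Rythm, (Deluxe Edition), 2 of 4")
latent = diffusion_generator(emb, seconds=60)
waveform = diffusion_decoder(latent)
print(latent.shape, waveform.shape)  # (90000,) (2, 2880000)
```

+One minute of audio thus corresponds to a 90,000-value latent instead of 5.76 million raw waveform values, which is what makes minute-scale context tractable for the generator.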
+Specifically, our Moûsai model uses a custom two-stage cascading diffusion method shown in Figure 1. In the first stage, it compresses the audio waveform using a novel diffusion autoencoder, and in the second stage, it learns to generate the reduced latent representations conditioned on the text embedding generated by a pretrained language model. Both stages use an efficient U-Net optimized by us, enabling a fast inference speed that makes the model realistic for use in future applications.
+2Moûsai is romanized ancient Greek for Muses, the sources of artistic inspiration (https://en.wikipedia.org/wiki/Muses). Given that inspiration is exactly what the system may be lacking, this name may not be apposite, but the reminiscence to both music and AI was simply too compelling.
+In conclusion, the main contributions of our work are as follows:
+1. We make it possible to generate long-context 48kHz stereo music exceeding the minute mark, based on context exceeding the minute mark, and to generate a variety of music.
+2. We propose an efficient 1D U-Net architecture for both stages of the cascade, making it possible to generate audio in real time on a single consumer GPU. Likewise, each stage of our system can be trained on one A100 GPU in approximately one week, making it possible to train and run the overall system using modest resources, as available in most universities.
+3. We present a new diffusion magnitude autoencoder that can compress the audio signal 64x compared to the original waveform with only moderate quality loss, which the generation stage of the architecture uses to apply latent diffusion on.
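+As a quick sanity check on the third contribution, the arithmetic below works out what 64x compression means for one minute of 48kHz stereo audio (assuming the factor is counted in raw waveform values; the paper's exact accounting may differ):

```python
# One minute of 48kHz stereo audio before and after the stated 64x compression.
# Assumes the factor counts raw waveform values; the paper may account differently.
SAMPLE_RATE = 48_000   # samples per second, per channel
CHANNELS = 2           # stereo
COMPRESSION = 64

raw_per_second = SAMPLE_RATE * CHANNELS            # 96,000 values per second
latent_per_second = raw_per_second // COMPRESSION  # 1,500 latent values per second

raw_minute = 60 * raw_per_second
latent_minute = 60 * latent_per_second
print(raw_minute, latent_minute)  # 5760000 90000
```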
+2. Related Work
+A common trend in the generative space has been to first train a representation learning, compression, or upsampling model on the input domain, and later learn a generative model on top of the reduced representation while conditioning on the information of interest (Rombach et al., 2022; Yang et al., 2022; Kreuk et al., 2022; Ho et al., 2022; Villegas et al.,
+2022). This can be drastically more efficient than directly learning on the raw input data, as the generative model can work on a much lower-dimensional representation and hence capture coarse structures. Auto-encoding (Hinton & Salakhutdinov, 2006; Kingma & Welling, 2014) or quantized auto-encoding (van den Oord et al., 2017; Esser et al., 2021; Lee et al., 2022) are popular compression methods originally proposed for the image domain that have been similarly and successfully applied as audio representations (Caillon & Esling, 2021; Pasini & Schlüter, 2022; Baevski et al.,
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', 2020;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Zeghidour et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', 2022;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Défossez et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', 2022).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' The two most popular directions in the generative space suggest either to learn a quantized rep- resentation followed by masked or autoregressive learning on tokens (Villegas et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', 2022;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Yu et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', 2022b;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Chang et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', 2023;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Dhariwal et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', 2020;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Borsos et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', 2022;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Yang et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', 2022;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Kreuk et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', 2022), or to use learned (continous) compressed or deterministic downsampled representation and later apply diffusion models as generators to reconstruct the noise-masked data in another stage (Ramesh et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', 2022;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Rombach et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', 2022;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Saharia et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', 2022;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Ho et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', 2022;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Forsgren & Martiros, 2022).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Methods using the former to- kenized representation have been successful but not up to the same level of performance as the latter (“cascading") diffusion methods.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' In our work, we follow ideas from the cascading diffusion approach, which, to the best of our knowledge, has never been attempted for audio generation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' We use a custom two-stage cascading diffusion method, where the first stage compresses audio using a novel diffusion autoencoder, and the second stage learns to generate the reduced representa- tion while conditioning on a textual description.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Preliminaries In this section, we introduce several preliminaries that serve as the basis for our model.' 
Specifically, we give an overview of the workings of diffusion, latent diffusion, and the U-Net.

3.1. Audio Generation

Audio generation has long been a challenging task. At the lowest level, we have digital waveforms that control air movement from speakers. Waveforms can be represented at different resolutions, or sample rates. Higher sample rates (e.g., 48kHz) allow for more temporal resolution and can represent higher frequencies, but are at the same time computationally more demanding to generate. At higher levels of abstraction, we find qualitative properties such as texture (timbre) or pitch.
Zooming out, we observe structure such as rhythm and melody that can span multiple seconds, or even structurally be composed into choruses that form minutes of interconnected patterns. Audio can be represented with a single waveform (mono), two waveforms (stereo), or even more in the case of surround sound. Audio with two or more channels can give a sense of movement and spatialisation. From the modelling perspective, there are unconditional models that generate novel samples from the training distribution without any additional information, and conditional models that use a form of guidance, such as text, to control the generation. Models can be trained on a single modality (e.g., drums or piano) or on multiple modalities, which usually requires more parameters for increased modelling capacity and comes with a decrease in speed.

3.2. Diffusion

We employ v-objective diffusion as proposed by Salimans & Ho (2022). Given a sample x_0 from a distribution p(x_0), some noise schedule σ_t ∈ [0, 1], and a noisy data point x_{σ_t} = α_{σ_t} x_0 + β_{σ_t} ε, v-objective diffusion tries to estimate a model v̂_{σ_t} = f(x_{σ_t}, σ_t) minimizing the following objective:

    E_{t∼[0,1], σ_t, x_{σ_t}} [ ‖f_θ(x_{σ_t}, σ_t) − v_{σ_t}‖₂² ],    (1)

where v_{σ_t} = ∂x_{σ_t}/∂φ_t = α_{σ_t} ε − β_{σ_t} x_0, with α_{σ_t} := cos(φ_t), β_{σ_t} := sin(φ_t), and φ_t := (π/2) σ_t. By estimating the rate of change, ODE samplers can be used to turn noise into a new data point. In this work, we use the DDIM sampler (Song et al., 2021), which we find to work well and to have a reasonable tradeoff between the number of steps and audio quality.
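As a quick sanity check on the notation above, the noisy point, the v-target, and the x_0-recovery identity later used by the sampler can be written out directly. This is a minimal NumPy sketch; the trained model f_θ is not shown, and the function names are ours:

```python
import numpy as np

# Minimal sketch of the v-objective quantities defined above. `x0` stands in
# for a clean data point and `eps` for Gaussian noise; only the noisy point and
# the regression target are constructed, not the learned model f_theta.

def alpha_beta(sigma_t):
    """alpha = cos(phi), beta = sin(phi) with phi = (pi/2) * sigma_t."""
    phi = 0.5 * np.pi * sigma_t
    return np.cos(phi), np.sin(phi)

def noisy_point(x0, eps, sigma_t):
    a, b = alpha_beta(sigma_t)
    return a * x0 + b * eps          # x_{sigma_t} = alpha * x0 + beta * eps

def v_target(x0, eps, sigma_t):
    a, b = alpha_beta(sigma_t)
    return a * eps - b * x0          # v_{sigma_t} = alpha * eps - beta * x0

# Identity exploited by the sampler: alpha * x_t - beta * v recovers x0 exactly,
# since alpha^2 + beta^2 = 1.
rng = np.random.default_rng(0)
x0, eps, s = rng.normal(size=8), rng.normal(size=8), 0.3
a, b = alpha_beta(s)
x_t = noisy_point(x0, eps, s)
assert np.allclose(a * x_t - b * v_target(x0, eps, s), x0)
```

The closing identity is why predicting v suffices: given a perfect v estimate at any noise level, the clean sample is recovered in one algebraic step.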
The DDIM sampler denoises the signal by repeated application of the following:

    v̂_{σ_t} = f_θ(x_{σ_t}, σ_t),                                  (2)
    x̂_0 = α_{σ_t} x_{σ_t} − β_{σ_t} v̂_{σ_t},                      (3)
    ε̂_{σ_t} = β_{σ_t} x_{σ_t} + α_{σ_t} v̂_{σ_t},                  (4)
    x̂_{σ_{t−1}} = α_{σ_{t−1}} x̂_0 + β_{σ_{t−1}} ε̂_{σ_t},          (5)

which estimates both the initial data point and the noise at step σ_t, for a T-step noise schedule σ_T, ..., σ_0 linearly spaced between 1 and 0.
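The update in Eqs. (2)–(5) can be sketched as a short loop. The `oracle` below stands in for a perfectly trained f_θ (it computes the exact v-target from the known x_0 and ε), so the schedule provably recovers x_0; a real network only approximates this:

```python
import numpy as np

# Sketch of the DDIM update in Eqs. (2)-(5). `oracle` replaces a trained model
# by returning the exact v-target, so the loop reconstructs x0 to float precision.

def ddim_sample(f, x_start, sigmas):
    """Run the T-step schedule sigma_T, ..., sigma_0 (linearly spaced 1 -> 0)."""
    x = x_start
    for t in range(len(sigmas) - 1):
        s, s_prev = sigmas[t], sigmas[t + 1]
        a, b = np.cos(0.5 * np.pi * s), np.sin(0.5 * np.pi * s)
        v_hat = f(x, s)                              # Eq. (2)
        x0_hat = a * x - b * v_hat                   # Eq. (3)
        eps_hat = b * x + a * v_hat                  # Eq. (4)
        a_p, b_p = np.cos(0.5 * np.pi * s_prev), np.sin(0.5 * np.pi * s_prev)
        x = a_p * x0_hat + b_p * eps_hat             # Eq. (5)
    return x

rng = np.random.default_rng(1)
x0, eps = rng.normal(size=16), rng.normal(size=16)

def oracle(x_t, s):
    a, b = np.cos(0.5 * np.pi * s), np.sin(0.5 * np.pi * s)
    return a * eps - b * x0                          # exact v-target

sigmas = np.linspace(1.0, 0.0, 11)                   # T = 10 update steps
assert np.allclose(ddim_sample(oracle, eps, sigmas), x0)
```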
3.3. Latent Diffusion

Following the work on image diffusion (Rombach et al., 2022), we compress audio into a smaller representation and apply the diffusion process on the reduced latent space. In contrast to Rombach et al. (2022), we propose a diffusion-based autoencoder instead of a standard autoencoder, increasing the representation power of the decoding process and the amount of compressibility allowed.

Figure 2. 1D U-Net architecture used both for the diffusion decoder and the latent diffusion generator. The inner dashed region indicates that the UNetBlock can be recursively nested. Resnet items (R) are used as convolutional blocks, modulation items (M) are used to provide the diffusion noise level as a feature-vector conditioning, inject items (I) are used to inject external channels as conditioning (used for diffusion decoding only), attention items (A) are used to share information timewise, and cross attention items (C) are used to condition on external (text) embeddings.

3.4. U-Net

U-Nets were first proposed by Ronneberger et al. (2015) as an hourglass, convolution-only 2D architecture with skip connections, originally used for medical image segmentation and since repurposed for multiple uses, such as image, audio, and video generation. Our proposed U-Net has little resemblance to the original work, and is infused with multiple new components, such as more modern convolutional blocks, a variety of attention blocks, conditioning blocks, and improved skip connections, maintaining only a skeleton of the hourglass architecture.

4. Text-to-Music Generation with Moûsai

Moûsai is composed of two independently trained models. The first stage (DMAE) is responsible for compressing the audio waveform 64x using a diffusion autoencoder.
In the second stage (latent text-to-audio diffusion), we generate a novel latent space with the diffusion model while conditioning on text embeddings obtained from a frozen transformer language model. For both diffusion models, we use the same efficient 1D U-Net architecture with varying configurations.

4.1. 1D U-Net

In this work, we use a 1D U-Net architecture employed in different configurations for both the autoencoding and latent diffusion stages (Figure 2). U-Nets with 1D convolutional kernels are more efficient than 2D ones in terms of speed, and can be successfully used both on waveforms and on spectrograms if each frequency is treated as a different channel.

Figure 3. Diffusion Magnitude Autoencoder (DMAE) training scheme. The diffusion autoencoder stage learns to compress audio 64x (compared to the original waveform) into a smaller latent space. To train this stage, the waveform is first converted to a magnitude spectrogram, then auto-encoded into a latent. At the same time, the original audio is corrupted with a random amount of noise, and the U-Net is trained to remove that noise. During the noise removal process, the U-Net is conditioned on the noise level and on the compressed latent, which has access to a reduced version of the non-noisy audio.

We use a variety of repeated items at each resolution of the U-Net, namely: (R) a residual 1D convolutional unit; (M) a modulation unit used to alter the channels given features from the diffusion noise level; (I) an inject item that concatenates external channels to the ones at the current depth (the lengths must match); (A) an attention item used to share long-context structural information; and (C) a cross attention item used to condition on text embeddings. Inject items are applied only at a specific depth in the first-stage decoder to condition on the latent.
Attention and cross attention items are instead used only in the inner blocks of the second-stage U-Net, to learn structure and condition on text.

4.2. Diffusion Magnitude-Autoencoding (DMAE)

Diffusion autoencoders were first introduced by Preechakul et al. (2022) as a way to condition the diffusion process on a compressed latent vector of the input itself. Diffusion can act as a more powerful generative decoder, and hence the input can be reduced to latents with higher compression ratios. In this work, we propose a new diffusion autoencoder that first encodes a magnitude spectrogram into a compressed representation, and later injects the latent into intermediate channels of the decoding 1D U-Net (Figure 3).
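One DMAE training step, as described above and in Figure 3, can be sketched as follows. This is a hedged illustration of the data flow only: `stub_encoder` and the `stub_unet` argument stand in for the learned 1D convolutional networks, and the names and shapes are our own assumptions:

```python
import numpy as np

# Hedged sketch of one DMAE training step: encode the magnitude into a latent,
# corrupt the waveform with a random noise level, and regress the v-target with
# a latent-conditioned denoiser. The networks are stand-ins, not the real model.

def stub_encoder(mag, factor=64):
    # Stand-in for enc_theta: 64x temporal compression by mean pooling + tanh.
    c, t = mag.shape
    z = mag[:, : t - t % factor].reshape(c, -1, factor).mean(axis=-1)
    return np.tanh(z)

def dmae_training_step(w, mag, stub_unet, rng):
    sigma = rng.uniform(0.0, 1.0)                    # random noise level
    a, b = np.cos(0.5 * np.pi * sigma), np.sin(0.5 * np.pi * sigma)
    eps = rng.normal(size=w.shape)
    w_noisy = a * w + b * eps                        # corrupt the waveform
    z = stub_encoder(mag)                            # latent from the magnitude
    v_hat = stub_unet(w_noisy, sigma, z)             # conditioned on sigma and z
    v = a * eps - b * w                              # v-objective target
    return np.mean((v_hat - v) ** 2)                 # loss to minimize

rng = np.random.default_rng(2)
w = rng.normal(size=(2, 1024))                       # short stereo waveform
mag = np.abs(rng.normal(size=(2, 1024)))             # placeholder magnitude
loss = dmae_training_step(w, mag, lambda x, s, z: np.zeros_like(x), rng)
assert loss >= 0.0
```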
Let w be a waveform of shape [c, t] for c channels and t timesteps, and let (m_w, p_w) = stft(w; n = 1024, h = 256) be the magnitude and phase obtained from a short-time Fourier transform of the waveform with a window size of 1024 and a hop length of 256. The resulting spectrograms then have shape [c · n, t/h]. We discard the phase and encode the magnitude into a latent z = enc_{θ_enc}(m_w) using a 1D convolutional encoder. The original waveform is then reconstructed by decoding the latent with a diffusion model ŵ = dec_{θ_dec}(z, ε, s), where dec_{θ_dec} is the diffusion sampling process with starting noise ε and s is the number of decoding (sampling) steps. The decoder is trained with v-objective diffusion while conditioning on the latent, f_{θ_dec}(w_{σ_t}; σ_t, z), where f_{θ_dec} is the proposed 1D U-Net, called repeatedly during decoding.
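A minimal magnitude-STFT matching the stated framing (n = 1024, h = 256) might look as follows. Note that an n-point real FFT yields n/2 + 1 frequency bins per channel, so the [c · n, t/h] shape quoted above implies a slightly different stacking convention; the exact channel count below is therefore an assumption, and only the t/h frame arithmetic is taken from the text:

```python
import numpy as np

# Minimal magnitude-STFT with window n = 1024 and hop h = 256. The rFFT gives
# n/2 + 1 bins per channel; the stacking into [channels * bins, frames] is one
# plausible convention, not necessarily the paper's exact layout.

def magnitude_stft(w, n=1024, h=256):
    c, t = w.shape
    window = np.hanning(n)
    frames = 1 + (t - n) // h                        # hops that fit in t samples
    spec = np.empty((c, n // 2 + 1, frames))
    for f in range(frames):
        seg = w[:, f * h : f * h + n] * window       # windowed frame
        spec[:, :, f] = np.abs(np.fft.rfft(seg, axis=-1))
    return spec.reshape(c * (n // 2 + 1), frames)    # stack channel x frequency

w = np.random.default_rng(3).normal(size=(2, 48000)) # one second of stereo 48kHz
m = magnitude_stft(w)
assert m.shape == (2 * 513, 1 + (48000 - 1024) // 256)
```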
Since only the magnitude is used and the phase is discarded, this diffusion autoencoder is simultaneously a compressing autoencoder and a vocoder. By using magnitude spectrograms, higher compression ratios can be obtained than by autoencoding the waveform directly; we found that waveforms are less compressible and less efficient to work with. Similarly, discarding phase is beneficial to obtain higher compression ratios for the same level of quality: the diffusion model can easily learn to generate a waveform with realistic phase even if conditioned only on the encoded magnitude. Depending on the desired speed/quality tradeoff, more or less compression can be applied in this first stage. Following our single-GPU constraint, we find that a 64x compression factor is a good balance to make sure the second stage can work on a reduced representation.
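A back-of-the-envelope check of these figures, assuming 48kHz audio as stated, and reading the 64x factor as a ratio of waveform samples to latent timesteps (the exact latent layout is not committed to here); the crop lengths are the ones reported later in Section 5.1:

```python
# Quick arithmetic check of the figures quoted here and in Section 5.1, assuming
# 48kHz audio as stated; the 64x factor is read as a ratio of waveform samples
# to latent timesteps, without committing to a particular channel layout.

sample_rate = 48_000
ae_crop, gen_crop = 2 ** 18, 2 ** 21        # training crop lengths (Section 5.1)

assert 5.4 < ae_crop / sample_rate < 5.6    # the stated "~5.5s at 48kHz"
assert 43.0 < gen_crop / sample_rate < 44.5 # the stated "~44s at 48kHz"
assert ae_crop // 64 == 4096                # 64x fewer timesteps in the latent
```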
The latent space produced is then used as a starting point for the next diffusion stage. To make sure that the reduced latent space can be used for latent diffusion, we apply a tanh function on the bottleneck, keeping the values in the range [−1, 1]. A more disentangled bottleneck, such as the one used in VAEs (Kingma & Welling, 2014), could be used, but the additional regularization reduces the amount of allowed compressibility.

4.3. Latent Text-to-Audio Diffusion

The second stage applies latent diffusion on the previously obtained compressed space (Figure 4). Similarly to the previous stage, we use v-objective diffusion with the 1D U-Net architecture in a different configuration, f_{θ_gen}(z_{σ_t}; σ_t, e), conditioning on the text embedding e to generate the compressed latent z = enc_{θ_enc}(m_w).
The generation function ẑ = gen_{θ_gen}(e, ε, s) again uses DDIM sampling and calls the U-Net s times to generate an approximate latent ẑ from the text embedding e and the starting noise ε. The final generation stack used during inference to obtain a waveform is

    ŵ = dec_{θ_dec}(gen_{θ_gen}(e, ε_gen, s_gen), ε_dec, s_dec).    (6)

Figure 4. Text-conditional latent diffusion generator training scheme. This stage is trained to generate novel latent spaces that follow a similar distribution to the ones produced by the autoencoder. The audio source is first encoded into the latent using the encoder, then the latent is corrupted with a random amount of noise, and the U-Net is trained to remove the noise. While the U-Net denoises the signal, the noise level is provided as a feature vector, and an encoded textual description of the original waveform is provided as an embedding from a frozen language model.

The 1D U-Net used in this stage includes cross attention blocks to provide the conditioning text embedding, and multiple attention blocks to make sure information can be shared over the entire latent, which is crucial for learning long-range audio structure. Given the compressed size of the latent space, the size of this inner U-Net can be greatly increased compared to the first stage while maintaining a reasonable training and inference speed, even with large parameter counts.
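The composition in Eq. (6) can be sketched with stub models standing in for the trained generator and decoder. Both stubs would, in the real system, internally run DDIM sampling for s_gen and s_dec steps; the shapes, the 32-channel latent, and the 64x upsampling factor here are illustrative assumptions:

```python
import numpy as np

# Sketch of the two-stage inference stack in Eq. (6). Both stubs stand in for
# trained U-Nets run under DDIM sampling; shapes, the 32-channel latent, and the
# 64x upsampling are illustrative assumptions, not the exact model layout.

def stub_gen(text_emb, eps, steps):
    # Stand-in for gen_theta: "denoises" eps into a latent conditioned on text.
    del steps                                   # a real model iterates s times
    return np.tanh(eps + text_emb.mean())

def stub_dec(z_hat, eps, steps, upsample=64):
    # Stand-in for dec_theta: maps the latent back to a stereo waveform.
    del steps
    return np.repeat(z_hat[:2], upsample, axis=-1) + 0.0 * eps

text_emb = np.ones(8)                           # frozen-LM text embedding (stub)
rng = np.random.default_rng(4)
eps_gen = rng.normal(size=(32, 256))            # starting noise for the latent
eps_dec = rng.normal(size=(2, 256 * 64))        # starting noise for the decoder

w_hat = stub_dec(stub_gen(text_emb, eps_gen, 50), eps_dec, 50)   # Eq. (6)
assert w_hat.shape == (2, 256 * 64)
```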
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', 2022) or using embeddings from pre-trained language model as direct conditioning (Saharia et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', 2022;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Ho et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', 2022) of the latent model.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' In our model, we follow the practice in Saharia et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' (2022) to use a pre-trained and frozen T5 language model (Raffel et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', 2020) to generate text embeddings from the given description.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' We use the classifier-free guidance (CFG) (Ho Moûsai: Text-to-Music Generation with Long-Context Latent Diffusion Example Text Prompts in Our Dataset Nr.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 415 (Premium Edition), german hip hop, 2 of 7, 2012, XATAR, Konnekt 30 Años de Exitos, Mundanzas, 2 of 6, latin pop, Lupita D’Alessio, 2011 emo rap 2018 Runaway Lil Peep 4 of 5 Alone, Pt.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' II (Remixes) 2020 electro house Alone, Pt.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' II - Da Tweekaz Remix Alan Walker Table 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Example text prompts in our dataset.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' & Salimans, 2022) with a learned mask applied on batch elements with a probability of 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='1 to improve the strength of the text-embedding during inference.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Experimental Setup For the experimental setup, we first give an high-level overview of the dataset and the training setup in Section 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='1, and then we dive into details of the implementation in Sec- tion 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='2 and hardware requirements in Section 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='3.' 
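The embedding-masking scheme for classifier-free guidance described in Section 4.4 can be sketched as follows. This is a minimal illustration, not the paper's code: function and variable names are ours, and NumPy stands in for the actual tensor library.

```python
import numpy as np

def mask_text_embeddings(text_emb, null_emb, p_drop=0.1, rng=None):
    # Training-time CFG masking: replace each batch element's text
    # embedding with a learned "null" embedding with probability p_drop
    # (0.1 in the paper), so the model also learns an unconditional score.
    if rng is None:
        rng = np.random.default_rng()
    drop = rng.random(text_emb.shape[0]) < p_drop  # one flag per batch element
    return np.where(drop[:, None, None], null_emb, text_emb)

def cfg_combine(eps_uncond, eps_cond, scale):
    # Inference-time CFG: push the conditional prediction away from the
    # unconditional one; scale > 1 strengthens the text conditioning.
    return eps_uncond + scale * (eps_cond - eps_uncond)
```

At inference the model is evaluated twice per step (once with the text embedding, once with the null embedding) and the two predictions are blended with `cfg_combine`.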
5.1. Dataset and Training Setup

We train all models on a (relatively modest) collection that we compiled, consisting of 2,500 hours of stereo music sampled at 48kHz and spanning multiple genres, artists, instruments, and provenances, in order to maintain a high-diversity dataset. The autoencoder is trained on random crops of length 2^18 (∼5.5s at 48kHz) and the text-conditional diffusion generation model is trained on fixed crops of length 2^21 (∼44s at 48kHz) encoded in the 32-channel, 64x-compressed latent. For the textual description, we use metadata such as the title, author, album, genre, and year of release. Given that a song can span longer than 44s, we append a string indicating which chunk is currently being trained on, together with the total number of chunks the song is made of (e.g., 1 of 4). This allows selecting the region of interest during inference. Hence, an example prompt looks like "Egyptian Darbuka, Drums, Rythm, (Deluxe Edition), 2 of 4." To make the conditioning more robust, we shuffle the list of metadata and drop each element with a probability of 0.1. Furthermore, 50% of the time we concatenate the list with spaces and the other 50% of the time with commas, to make the interface more robust during inference. Some example prompts from our dataset are shown in Table 2.

5.2. Implementation Details

We train a 185M-parameter diffusion autoencoder with 7 nested U-Net blocks of increasing channel count ([256, 512, 512, 512, 1024, 1024, 1024]), downsampling each time by 2, except for the first block ([1, 2, 2, 2, 2, 2, 2]).
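The prompt-construction procedure of Section 5.1 (shuffle the metadata fields, drop each with probability 0.1, append the chunk indicator, join with spaces or commas) might look like the following sketch; the function name and the appending order of the chunk string are our assumptions for illustration.

```python
import random

def build_prompt(fields, chunk_idx, n_chunks, p_drop=0.1, rng=None):
    # Sketch of the Section 5.1 prompt construction: drop each metadata
    # field with probability p_drop, shuffle the survivors, append the
    # chunk indicator (e.g. "2 of 4"), and join with a space or a comma
    # separator with equal probability.
    if rng is None:
        rng = random.Random()
    kept = [f for f in fields if rng.random() >= p_drop]
    rng.shuffle(kept)
    kept.append(f"{chunk_idx} of {n_chunks}")
    sep = " " if rng.random() < 0.5 else ", "
    return sep.join(kept)

# e.g. build_prompt(["Runaway", "Lil Peep", "emo rap", "2018"], 4, 5)
# may yield a prompt such as "emo rap 2018 Runaway Lil Peep 4 of 5"
```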
The diffusion autoencoder uses only ResNet and modulation items, with the following repetitions [1, 2, 2, 2, 2, 2, 2]; attention is not used, to allow decoding of variable and possibly very long latents. Channel injection only happens at depth 4, which matches the output of the magnitude encoder latent, after the tanh application. Furthermore, we train an 857M-parameter text-conditional generator (including the parameters of the frozen T5-base model) with 6 nested U-Net blocks of increasing channel counts ([128, 256, 512, 512, 1024, 1024]), again downsampling each time by 2, except for the first block ([1, 2, 2, 2, 2, 2]). We use attention blocks at the following depths [0, 0, 1, 1, 1, 1], skipping the first two blocks to allow for further downsampling before sharing information over the entire latent, and instead use cross attention blocks at all resolutions ([1, 1, 1, 1, 1, 1]). For both attention and cross attention, we use 64 head features and 12 heads per layer. We repeat items with an increasing count towards the inner U-Net low-resolution, large-context blocks ([2, 2, 2, 4, 8, 8]); this allows good structural learning over minutes of audio. Both models are trained with the AdamW optimizer (Loshchilov & Hutter, 2019) using a learning rate of 10^-4, β1 = 0.95, β2 = 0.999, ϵ = 10^-6, and weight decay of 10^-3. Moreover, we use an exponential moving average (EMA) with β = 0.995 and power of 0.7.

5.3. Hardware Requirements

We use limited computational resources, as available in a university lab. Both models can be trained on a single A100 GPU in 1 week using a batch size of 32; this is equivalent to around 1M steps for both the diffusion autoencoder and the latent generator. For inference, as an example, a novel audio source of ∼88s can be synthesized in less than ∼88s using a consumer GPU with a DDIM sampler and a high step count (100 generation steps and 100 decoding steps).

6. Results

As mentioned in Table 1, our model is the only model that generates long-context music from text descriptions.
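As a brief aside on the training recipe of Section 5.2: the stated EMA hyperparameters (β = 0.995, power 0.7) match the shape of a power-function EMA warmup used by common diffusion trainers. The exact formula below is an assumption on our part, not taken from the paper.

```python
import numpy as np

def ema_decay(step, beta=0.995, power=0.7):
    # Assumed power-function EMA warmup: the decay ramps up from 0 over
    # training and is capped at beta (0.995 in the paper).
    return min(beta, (1.0 - 1.0 / (step + 1)) ** power)

def ema_update(ema_params, params, step):
    # Blend the online parameters into the EMA copy of the weights.
    d = ema_decay(step)
    return [d * e + (1.0 - d) * p for e, p in zip(ema_params, params)]
```

The cap keeps the long-run averaging horizon fixed, while the warmup prevents the EMA from being dominated by the random initial weights.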
Most other models do not take text as input (van den Oord et al., 2016; Caillon & Esling, 2021; Borsos et al., 2022; Pasini & Schlüter, 2022), and some others use lyrics or descriptions of daily sounds (e.g., "a dog barking") (Kreuk et al., 2022; Dhariwal et al., 2020). The only text-to-music model comparable with our work is the Riffusion model (Forsgren & Martiros, 2022).
We describe the merits of our model in both quantitative and qualitative ways from multiple perspectives: (1) genre diversity, (2) relevance of the music to the given text prompt, (3) sound quality, and (4) long-term structure in the generated music. Our analyses are reported in Sections 6.1 to 6.3. Note that there is no perfect evaluation metric in the existing literature (Kreuk et al., 2022; Borsos et al., 2022; Dhariwal et al., 2020), since music is a complex artifact with a range of properties (e.g., timbre, rhythm, and structure), not to mention the subjectivity of music perception. In the present work, we try our best to provide a diverse set of angles from which to evaluate the generated music. In addition, we suggest readers listen to the provided samples in order to gain a more holistic impression of our model compared to the Riffusion model (Forsgren & Martiros, 2022): bit.ly/anonymous-mousai.

6.1. Diversity & Text-to-Music Relevance

We design a listener test to illustrate the diversity and text relevance of Moûsai. Specifically, we compose a list of 40 text prompts spanning several common music genres: electronic, hip hop, metal, and pop.
(See Appendix A for the entire list of prompts, ten per category.) Using these prompts, we generate music with both Moûsai and the Riffusion model (Forsgren & Martiros, 2022), for a total of 80 pieces of music, two per prompt. Qualitatively, we observe that our music samples exhibit good diversity and fit the text descriptions well. To validate this quantitatively, we conducted a small-scale psychophysics evaluation, recruiting three perceivers (annotators) with diverse demographic backgrounds (both female and male, all with at least a Master's degree of education). Each annotator listens to all 80 music samples we provide and is instructed to categorize each sample into exactly one of the four provided genres. This is a four-alternative forced choice paradigm, i.e., a variant of the two-alternative forced choice setting which is considered the gold standard in psychophysics.
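Tallying the forced-choice responses into a confusion matrix can be sketched as follows; the genre labels are the four categories used in the test, while the function name is illustrative.

```python
import numpy as np

GENRES = ("electronic", "hip hop", "metal", "pop")

def genre_confusion_matrix(true_genres, chosen_genres):
    # Rows: genre the prompt was drawn from; columns: genre the annotator
    # chose. With 40 samples per model and 3 annotators, a full matrix
    # sums to 120 responses.
    idx = {g: i for i, g in enumerate(GENRES)}
    mat = np.zeros((len(GENRES), len(GENRES)), dtype=int)
    for true, chosen in zip(true_genres, chosen_genres):
        mat[idx[true], idx[chosen]] += 1
    return mat
```

Mass concentrated on the diagonal then indicates that the generated music is recognizably genre-specific.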
We record how many times the perceiver correctly identifies the genre from which the respective model was generating. A large number (or score) means that the model often generated music that, according to the human perceiver, plausibly belonged to the correct category (when compared to the other three categories). To achieve a good score, the model needs to generate diverse and genre-specific music. We take this score as a measure of how well the model performs text-conditional music generation. In Figure 5, we display the confusion matrix of this genre identification test for both our model (left) and the Riffusion model (right). For our model, the annotators identify the right genres most of the time, whereas for the Riffusion model, the annotators often perceive the music as more generic, categorizing it as Pop.

Figure 5. Evaluation results of genre categorization for (a) our model and (b) the Riffusion model. We show the confusion matrices across the four common music genres (electronic, hip hop, metal, and pop). Dark values on the diagonal mean that a model generates music the perceivers categorize into the correct genre. We can see that our model has most mass on the diagonal, while the Riffusion model tends to generate generic samples that are very similar to Pop for all genres and are thus difficult to categorize correctly. Note that each matrix adds up to 120, corresponding to 40 samples per model annotated by three perceivers each.

6.2. Sound Quality

Apart from diversity and relevance, we also evaluate the sound quality of the music we generate. From the mel spectrograms we visualize in Figure 6, we can see that low-frequency sounds are handled rather well by our model. From the music samples we provide, it is apparent that our model performs well with drum-like sounds as frequently found in electronic, house, dubstep, techno, EDM, and metal music. This is likely a consequence of the lower amount of information required to represent low-frequency sounds.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Structure Another qualitative advantage of our model is its capability to handle long-term structure, as opposed to riffusion mod- els’ context length of 5 seconds, as mentioned in Table 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Our generated samples exhibit structure over longer periods of time, exceeding the minute mark.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' All of rhythm, loops, riffs, and occasionally even entire choruses are found in generated music.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' We find that increasing the number of at- tention blocks (e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', from a total of 4–8 to a total of 32+) in the latent diffusion model can improve the general structure of the songs, thanks to the long-context view.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' If the model is trained without attention blocks, the context provided by the U-Net is not large enough to learn any meaningful long-term structure.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='4.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Additional Properties In addition to the main evaluation results, we also explore several properties of our model, namely the trade-off be- tween speed and quality, between the compression ratio and quality, as well as the text-audio binding.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Trade-Off between Speed and Quality.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' We find that 10 Moûsai: Text-to-Music Generation with Long-Context Latent Diffusion Figure 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Mel spectrogram comparison between the true samples (top) and the auto-encoded samples (bottom);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' cf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' text.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' sampling steps in both stages can be enough to generate reasonable audio.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' We can achieve improved quality and reduced noise for high-frequency sounds by trading off the speed, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='e.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', increasing the number of sampling steps in the diffusion decoder, e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', 50 – 100 steps).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Increasing the num- ber of sampling steps in the latent diffusion model (again in the order of 50 – 100 steps) will similarly improve the quality, likely due to the more detailed generated latents, and at the same time result in an overall better structured music.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' To make sure the results are comparable when vary- ing the number of sampling steps, we use the same starting noise in both stages.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' In both cases, this suggests that using more advanced samplers could be helpful to improve on the speed-quality trade-off.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Trade-Off between Compression Ratio and Quality.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' We find that decreasing the compression ratio of the first stage (e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='g.' 
to 32x) can improve the quality of low-frequency sounds, but in turn will slow down the model, as the second stage has to work on higher-dimensional data. As proposed later in Section 7, we hypothesize that using perceptually weighted loss functions instead of L2 loss during diffusion could help this trade-off, giving a more balanced importance to high-frequency sounds even at high compression ratios.

Text-Audio Binding. We find that the text-audio binding works well with CFG higher than 3.0. Since the model is trained with metadata such as title, album, artist, genre, year, and chunk, the best keywords to control the generation appear to be frequent descriptive names, such as the genre of the music, or descriptions commonly found in titles, such as “remix”, “(Deluxe Edition)”, and possibly many more. A similar behavior has been observed and exploited in text-to-image models to generate better-looking results.
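CFG here refers to classifier-free guidance, where the denoiser is evaluated both with and without the text condition and the two outputs are blended by the guidance scale. A minimal sketch of that blend follows; plain Python lists stand in for tensors, and the function name is ours, not the paper's code:

```python
def cfg_blend(eps_uncond, eps_cond, scale):
    """Classifier-free guidance: move the prediction away from the
    unconditional output, in the direction of the conditional one,
    by the guidance scale."""
    return [u + scale * (c - u) for u, c in zip(eps_uncond, eps_cond)]

# scale = 1.0 recovers the plain conditional prediction;
# scales above 3.0 (as reported above) strengthen text adherence.
print(cfg_blend([0.0, 1.0], [1.0, 1.0], 1.0))  # [1.0, 1.0]
print(cfg_blend([0.0, 1.0], [1.0, 1.0], 3.0))  # [3.0, 1.0]
```

Larger scales trade sample diversity for stronger adherence to the text prompt, which matches the observation that binding only becomes reliable above 3.0.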
We find that the chunk-based text-conditioning is coherent with the description: for example, providing a description of the form “1 of N” will tend to result in a starting portion of a song, a description of the form “N of N” will tend to result in the ending portion of a song, and anything in between will tend to result in a song playing over the entire generation period.

7. Future Work

Data and Scaling. Increasing the scale of both data and the model can very likely provide drastic quality improvements. Following Dhariwal et al. (2020) and Borsos et al. (2022), we suggest training with 50k–100k hours instead of 2.5k.
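As an illustration of the chunk-based conditioning described above, a prompt of the kind the model might see can be assembled from song metadata plus a “chunk i of N” tag. The field names and formatting here are hypothetical, not the paper's actual preprocessing:

```python
def chunk_prompt(meta, chunk_index, num_chunks):
    """Join the available metadata fields with an 'i of N' chunk tag
    to form a single text-conditioning string."""
    fields = [meta[k] for k in ("title", "artist", "genre", "year") if k in meta]
    fields.append(f"{chunk_index} of {num_chunks}")
    return ", ".join(str(f) for f in fields)

print(chunk_prompt({"title": "Example Song (Deluxe Edition)", "genre": "Electronic"}, 1, 4))
# Example Song (Deluxe Edition), Electronic, 1 of 4
```

Under this scheme a “1 of N” tag marks the opening chunk of a song, which is consistent with the positional behavior reported above.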
Using a larger pretrained language model to obtain text embeddings has been shown to be very important for quality in images (Saharia et al., 2022); we hypothesize that the same is true if applied to our second-stage model.

Diffusion. More sophisticated diffusion samplers can be used to get higher quality for the same number of sampling steps, or similarly more advanced distillation techniques could be used (Salimans & Ho, 2022).

Model.
Some promising future modelling approaches that need more experimentation include: (1) training diffusion models using perceptual losses on the waveforms instead of L2; this might help decrease the initial size of the U-Net, as we would not have to process non-perceivable sounds; (2) improving the quality of the diffusion autoencoder by using mel-spectrograms instead of magnitude spectrograms as input; (3) exploring other, non-text-based types of conditioning to navigate the audio latent space, which is often hard to describe in words, e.g. DreamBooth-like models (Ruiz et al., 2022).

8. Conclusion

In this work, we presented Moûsai, a waveform-based audio generation method building on two diffusion models.
First, we trained a diffusion autoencoder to compress a magnitude-only spectrogram 64x. Using a custom 1D U-Net, the compressed latent is decoded back to a waveform by diffusion. In the second stage, we train a diffusion model to generate a new latent from noise while conditioning on text embeddings extracted from a frozen T5 transformer model, using a similar 1D U-Net architecture to that used in the first stage. We show that, in contrast to earlier approaches, our model can generate minutes of high-quality music in real time on a consumer GPU, with compelling text-audio binding. In addition to trained models, we provide a collection of open-source libraries with the hope of facilitating future work in the field. We expect that the present work will help pave the way towards higher-quality, longer-context text-to-music generation for future applications.
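The two-stage pipeline summarized in the conclusion can be sketched at inference time as follows. Every callable here is a placeholder for a trained component (the frozen T5 text encoder, the second-stage latent diffusion model, the first-stage diffusion decoder), not the released API:

```python
def generate(text, text_encoder, latent_diffusion, diffusion_decode):
    """Two-stage text-to-music inference sketch:
    text -> embeddings -> compressed latent -> waveform."""
    emb = text_encoder(text)          # frozen T5 embeddings
    latent = latent_diffusion(emb)    # stage 2: denoise noise into a compressed latent
    return diffusion_decode(latent)   # stage 1: diffusion-decode the latent to a waveform

# Toy stand-ins just to show the data flow:
waveform = generate(
    "calm piano",
    text_encoder=lambda t: [float(len(t))],
    latent_diffusion=lambda e: [x * 2.0 for x in e],
    diffusion_decode=lambda z: [x + 1.0 for x in z],
)
print(waveform)  # [21.0]
```

Because only the short latent sequence is denoised in stage 2, most of the sampling cost is confined to the decoder, which is what makes minutes-long real-time generation feasible on a consumer GPU.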
Author Contributions

Flavio Schneider came up with the idea and implemented all the elements of this paper, which is part of his Master's thesis at ETH Zürich (Schneider, 2023). Zhijing Jin co-supervised the Master's thesis and the work, conducted weekly meetings, helped design the structure of the paper, and led the human evaluation experiments of this paper. Bernhard Schölkopf supervised the work and provided precious suggestions during the progress of this work, as well as extensive suggestions for the writing. All of Flavio Schneider, Zhijing Jin, and Bernhard Schölkopf contributed significantly to the writing and polishing of the paper.

Acknowledgment

We thank Stability AI for their generous support for the computational resources. We are also grateful for the generous help by our annotators Andrew Lee, Aylin Gunal, Fernando Gonzalez, and Yiwen Ding.
We thank Fernando Gonzalez and Zhiheng Lyu for helping to improve the format of the paper. We thank Nasim Rahaman for early-stage discussions to improve the model design and contributions. This material is based in part upon works supported by the German Federal Ministry of Education and Research (BMBF): Tübingen AI Center, FKZ: 01IS18039B; and by the Machine Learning Cluster of Excellence, EXC number 2064/1, Project number 390727645. Zhijing Jin is supported by PhD fellowships from the Future of Life Institute and Open Philanthropy, as well as travel support from ELISE (GA no 951847) for the ELLIS program.

References

Baevski, A., Zhou, Y., Mohamed, A., and Auli, M.
wav2vec 2.0: A framework for self-supervised learning of speech representations. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., and Lin, H. (eds.), Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020. URL https://proceedings.neurips.cc/paper/2020/hash/92d1e1eb1cd6f9fba3227870bb6d7f07-Abstract.html.

Borsos, Z., Marinier, R., Vincent, D., Kharitonov, E., Pietquin, O., Sharifi, M., Teboul, O., Grangier, D., Tagliasacchi, M., and Zeghidour, N. AudioLM: A language modeling approach to audio generation. CoRR, abs/2209.03143, 2022. doi: 10.48550/arXiv.2209.03143. URL https://doi.org/10.48550/arXiv.2209.03143.

Caillon, A. and Esling, P. RAVE: A variational autoencoder for fast and high-quality neural audio synthesis. CoRR, abs/2111.05011, 2021. URL https://arxiv.org/abs/2111.05011.

Chang, H., Zhang, H., Barber, J., Maschinot, A., Lezama, J., Jiang, L., Yang, M., Murphy, K., Freeman, W. T., Rubinstein, M., Li, Y., and Krishnan, D. Muse: Text-to-image generation via masked generative transformers. CoRR, abs/2301.00704, 2023. doi: 10.48550/arXiv.2301.00704. URL https://doi.org/10.48550/arXiv.2301.00704.

Défossez, A., Copet, J., Synnaeve, G., and Adi, Y. High fidelity neural audio compression. CoRR, abs/2210.13438, 2022. doi: 10.48550/arXiv.2210.13438. URL https://doi.org/10.48550/arXiv.2210.13438.

Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition (CVPR), pp. 248–255, 2009.

Deng, K., Bansal, A., and Ramanan, D. Unsupervised audio-visual synthesis via exemplar autoencoders. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021. URL https://openreview.net/forum?id=43VKWxg_Sqr.

Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1423. URL https://aclanthology.org/N19-1423.

Dhariwal, P., Jun, H., Payne, C., Kim, J. W., Radford, A., and Sutskever, I. Jukebox: A generative model for music. CoRR, abs/2005.00341, 2020. URL https://arxiv.org/abs/2005.00341.

Dieleman, S., van den Oord, A., and Simonyan, K. The challenge of realistic music generation: Modelling raw audio at scale. In Bengio, S., Wallach, H. M., Larochelle, H., Grauman, K., Cesa-Bianchi, N., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pp. 8000–8010, 2018. URL https://proceedings.neurips.cc/paper/2018/hash/3e441eec3456b703a4fe741005f3981f-Abstract.html.

Doornbusch, P. Gerhard Nierhaus: Algorithmic composition: Paradigms of automated music generation. Comput. Music. J., 34(3):70–74, 2010. doi: 10.1162/COMJ_r_00008. URL https://doi.org/10.1162/COMJ_r_00008.

Elizalde, B., Deshmukh, S., Ismail, M.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', and Wang, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' CLAP: learning audio concepts from natural language supervision.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' CoRR, abs/2206.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='04769, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 48550/arXiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='2206.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='04769.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' URL https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='org/ 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='48550/arXiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='2206.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='04769.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Engel, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Agrawal, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Chen, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Gulrajani, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Donahue, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', and Roberts, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Gansynth: Adversar- ial neural audio synthesis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' In 7th International Con- ference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' OpenReview.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='net, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' URL https://openreview.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='net/forum?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' id=H1xQVn09FX.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Esser, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Rombach, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', and Ommer, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Taming trans- formers for high-resolution image synthesis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2021, virtual, June 19-25, 2021, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 12873–12883.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Computer Vision Foundation / IEEE, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' doi: 10.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='1109/CVPR46437.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='01268.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' URL https://openaccess.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='thecvf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='com/ content/CVPR2021/html/Esser_Taming_ Transformers_for_High-Resolution_ Image_Synthesis_CVPR_2021_paper.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='html.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Forsgren, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' and Martiros, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Riffusion - Stable diffusion for real-time music generation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' URL https: //riffusion.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='com/about.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Giraudo, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Generation of musical patterns through operads.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' CoRR, abs/2104.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='12432, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' URL https://arxiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' org/abs/2104.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='12432.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Hinton, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' and Salakhutdinov, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Reducing the di- mensionality of data with neural networks.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' science, 313 (5786):504–507, 2006.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Ho, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' and Salimans, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Classifier-free diffusion guid- ance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' CoRR, abs/2207.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='12598, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='48550/arXiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 2207.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='12598.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' URL https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='48550/ arXiv.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='2207.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='12598.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Ho, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Chan, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Saharia, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Whang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Gao, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Gritsenko, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Kingma, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Poole, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Norouzi, M.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Fleet, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', and Salimans, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Imagen video: High definition video generation with diffusion models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' CoRR, abs/2210.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='02303, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='48550/arXiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='2210.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='02303.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' URL https: //doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='48550/arXiv.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='2210.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='02303.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Kim, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Hong, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', and Ro, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Lip to speech synthesis with visual context attentional GAN.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' In Ranzato, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Beygelzimer, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Dauphin, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Liang, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', and Vaughan, J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' (eds.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' ), Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 2758– 2770, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' URL https://proceedings.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' neurips.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='cc/paper/2021/hash/ 16437d40c29a1a7b1e78143c9c38f289-Abstract.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' html.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Kingma, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' and Welling, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Auto-encoding variational bayes.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' In Bengio, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' and LeCun, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' (eds.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' ), 2nd Interna- tional Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings, 2014.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' URL http://arxiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='org/ abs/1312.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='6114.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Kong, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Ping, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Huang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Zhao, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', and Catanzaro, B.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Diffwave: A versatile diffusion model for audio synthesis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' In 9th International Conference on Learn- ing Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' OpenReview.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='net, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' URL https: //openreview.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='net/forum?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='id=a-xFK8Ymz5J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Kreuk, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Synnaeve, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Polyak, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Singer, U.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Dé- fossez, A.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Copet, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Parikh, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Taigman, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', and Adi, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Audiogen: Textually guided audio generation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' CoRR, abs/2209.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='15352, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='48550/arXiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 2209.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='15352.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' URL https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='org/10.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='org/ 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='48550/arXiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='2204.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='06125.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Rombach, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Blattmann, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Lorenz, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Esser, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', and Ommer, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' High-resolution image synthesis with latent diffusion models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Or- leans, LA, USA, June 18-24, 2022, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 10674–10685.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' IEEE, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='1109/CVPR52688.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='01042.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' URL https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='1109/CVPR52688.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='01042.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Ronneberger, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Fischer, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', and Brox, T.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' U-net: Con- volutional networks for biomedical image segmentation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' In Navab, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Hornegger, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', III, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', and Frangi, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' (eds.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' ), Medical Image Computing and Computer- Assisted Intervention - MICCAI 2015 - 18th International Conference Munich, Germany, October 5 - 9, 2015, Pro- ceedings, Part III, volume 9351 of Lecture Notes in Computer Science, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 234–241.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Springer, 2015.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='1007/978-3-319-24574-4\\_28.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' URL https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='1007/978-3-319-24574-4_28.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Moûsai: Text-to-Music Generation with Long-Context Latent Diffusion Ruiz, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Li, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Jampani, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Pritch, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Rubinstein, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', and Aberman, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Dreambooth: Fine tuning text-to- image diffusion models for subject-driven generation.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' CoRR, abs/2208.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='12242, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='48550/arXiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 2208.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='12242.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' URL https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='48550/ arXiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='2208.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='12242.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Saharia, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Chan, W.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Saxena, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Li, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Whang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Denton, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Ghasemipour, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Ayan, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Mah- davi, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Lopes, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' G.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Salimans, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Ho, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Fleet, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', and Norouzi, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Photorealistic text-to-image diffusion models with deep language understanding.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' CoRR, abs/2205.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='11487, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='48550/arXiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 2205.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='11487.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' URL https://doi.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='48550/ arXiv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='2205.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='11487.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Salas, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Gelbukh, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Calvo, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', and Soria, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Automatic music composition with simple proba- bilistic generative grammars.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Polibits, 44:59–65, 2011.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' doi: 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='17562/pb-44-9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' URL https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='org/ 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='17562/pb-44-9.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Salimans, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' and Ho, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Progressive distillation for fast sampling of diffusion models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' In The Tenth Interna- tional Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' OpenReview.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='net, 2022.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' URL https://openreview.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='net/forum?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' id=TIdIXIpzhoI.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Schneider, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' ArchiSound: Audio generation with diffu- sion.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' January 2023.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' URL https://github.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='com/ flavioschneider/master-thesis/blob/ main/audio_diffusion_thesis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='pdf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Song, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Meng, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', and Ermon, S.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Denoising diffusion im- plicit models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' In 9th International Conference on Learn- ing Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' OpenReview.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='net, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' URL https: //openreview.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='net/forum?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='id=St1giarCHLP.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' van den Oord, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Dieleman, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Zen, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Simonyan, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Vinyals, O.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Graves, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Kalchbrenner, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Senior, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', and Kavukcuoglu, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Wavenet: A generative model for raw audio.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' In The 9th ISCA Speech Synthesis Workshop, Sunnyvale, CA, USA, 13-15 September 2016, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' 125.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' ISCA, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' URL http://www.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='isca-speech.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' org/archive/SSW_2016/abstracts/ssw9_ DS-4_van_den_Oord.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content='html.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' van den Oord, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Vinyals, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', and Kavukcuoglu, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' Neural discrete representation learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' In Guyon, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', von Luxburg, U.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Bengio, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Wallach, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Fergus, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=', Vishwanathan, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} +page_content=' V.' 
Moûsai: Text-to-Music Generation with Long-Context Latent Diffusion

A. Text Prompts

We list all the text prompts composed for the four common music genres in Table 3.

Genre = Electronic
– Drops, Kanine Remix, Darkzy, Drops Remixes, bass house, (Deluxe) (Remix) 3 of 4
– Electronic, Dance, EDM (Deluxe) (Remix) 3 of 4
– Electro House (Remix), 2023, 3 of 4
– Electro Swing Remix 2030 (Deluxe Edition) 3 of 4
– Future Bass, EDM (Remix) 3 of 4, Remix
– EDM (Deluxe) (Remix) 3 of 4
– EDM, Vocal, Relax, Remix, 2023, 8D Audio
– Hardstyle, Drop, 8D, Remix, High Quality, 2 of 4
– Dubstep Insane Drop Remix (Deluxe Edition), 2 of 4
– Drop, French 79, BPM Artist, Vol. 4, Electronica, 2016

Genre = Hip Hop
– Real Hip Hop, 2012, Lil B, Gods Father, escape room, 3 of 4
– C’est toujours pour ceux qui savent, French Hip Hop, 2018 (Deluxe), 3 of 4
– Dejando Claro, Latin Hip Hop 2022 (Deluxe Edition) 3 of 4
– Latin Hip Hop 2022 (Deluxe Edition) 3 of 4
– Alternative Hip Hop Oh-My, 2016, (Deluxe), 3 of 4
– Es Geht Mir Gut, German Hip Hop, 2016, (Deluxe), 3 of 4
– Italian Hip Hop 2022 (Deluxe Edition) 3 of 4
– RUN, Alternative Hip Hop, 2016, (Deluxe), 3 of 4
– Hip Hop, Rap Battle, 2018 (High Quality) (Deluxe Edition) 3 of 4
– Hip Hop Tech, Bandlez, Hot Pursuit, brostep, 3 of 4

Genre = Metal
– Death Metal, 2012, 3 of 4
– Heavy Death Metal (Deluxe Edition), 3 of 4
– Black Alternative Metal, The Pick of Death (Deluxe), 2006, 3 of 4
– Kill For Metal, Iron Fire, To The Grave, melodic metal, 3 of 4
– Melodic Metal, Iron Dust (Deluxe), 2006, 3 of 4
– Possessed Death Metal Stones (Deluxe), 2006, 3 of 4
– Black Metal Venom, 2006, 3 of 4
– The Heavy Death Metal War (Deluxe), 2006, 3 of 4
– Heavy metal (Deluxe Edition), 3 of 4
– Viking Heavy Death Metal (Deluxe), 2006, 3 of 4

Genre = Pop
– (Everything I Do), I Do It For You, Bryan Adams, The Best Of Me, canadian pop, 3 of 4
– Payphone, Maroon 5, Overexposed, Pop, 2021, 3 of 4
– 24K Magic, Bruno Mars, 24K Magic, dance pop, 3 of 4
– Who Is It, Michael Jackson, Dangerous, Pop (Deluxe), 3 of 4
– Forget Me, Lewis Capaldi, Forget Me, Pop Pop, 2022, 3 of 4
– Pop, Speak Now, Taylor Swift, 2014, (Deluxe), 3 of 4
– Pop Pop, Maroon 5, Overexposed, 2016, 3 of 4
– Pointless, Lewis Capaldi, Pointless, Pop, 2022, 3 of 4
– Saved, Khalid, American Teen, Pop, 2022, 3 of 4
– Deja vu, Fearless, Pop, 2020, (Deluxe), 3 of 4

Table 3. Text prompts composed for the four common music genres: electronic, hip hop, metal, and pop.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/0tFKT4oBgHgl3EQfOC19/content/2301.11757v1.pdf'} diff --git a/0tFST4oBgHgl3EQfVziF/vector_store/index.faiss b/0tFST4oBgHgl3EQfVziF/vector_store/index.faiss new file mode 100644 index 0000000000000000000000000000000000000000..a1863c61e8f091eaeaaf9e01537ef1f175307ddf --- /dev/null +++ b/0tFST4oBgHgl3EQfVziF/vector_store/index.faiss @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:2c1df763e50deedcbaa0d71c29c258d1f20210413c23fb7a3fae62ed5203a2d4 +size 3801133 diff --git a/2NA0T4oBgHgl3EQfM_8t/vector_store/index.faiss b/2NA0T4oBgHgl3EQfM_8t/vector_store/index.faiss new file mode 100644 index 0000000000000000000000000000000000000000..706aefa614ad1b06317a5233ebb182c9aa4bfbf9 --- /dev/null +++ b/2NA0T4oBgHgl3EQfM_8t/vector_store/index.faiss @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:46373f1c355cc07ce4073650bc571a8b026c1c767106ce7914364e54ba9bf360 +size 9764909 diff --git a/2dFAT4oBgHgl3EQfDhxB/vector_store/index.pkl b/2dFAT4oBgHgl3EQfDhxB/vector_store/index.pkl new file mode 100644 index 0000000000000000000000000000000000000000..33c81bb087796031b4d5a7464b6d9b916b6f0dd8 --- /dev/null +++ b/2dFAT4oBgHgl3EQfDhxB/vector_store/index.pkl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b1c7b20413ecf0ac352c50f07397bc87961fe81866fd9bfdb814d161a345a7dc +size 101282 diff --git a/2tE1T4oBgHgl3EQf5gWl/vector_store/index.faiss b/2tE1T4oBgHgl3EQf5gWl/vector_store/index.faiss new file mode 100644 index 0000000000000000000000000000000000000000..87aa5517fe9e6c1250e1f5f34e0a837497326c45 --- /dev/null +++ b/2tE1T4oBgHgl3EQf5gWl/vector_store/index.faiss @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e8b68b329821934aaa47752e5d4a747c90cb248da66efa0e9b1c02b918fa255c +size 9633837 diff --git a/39AyT4oBgHgl3EQfo_il/content/2301.00518v1.pdf b/39AyT4oBgHgl3EQfo_il/content/2301.00518v1.pdf new file mode 100644 index 
0000000000000000000000000000000000000000..8076fef33b27be10f07eae001210fa563395ba53 --- /dev/null +++ b/39AyT4oBgHgl3EQfo_il/content/2301.00518v1.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7bc6fe771b7bf37c8e7a2a6993eff74ec0d1c79e7731ce38b60d3512031c6507 +size 356770 diff --git a/39AyT4oBgHgl3EQfo_il/vector_store/index.pkl b/39AyT4oBgHgl3EQfo_il/vector_store/index.pkl new file mode 100644 index 0000000000000000000000000000000000000000..fe0445f8b645be425faa42069b67fa8408edbfb9 --- /dev/null +++ b/39AyT4oBgHgl3EQfo_il/vector_store/index.pkl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cbed80713a454b7b53e3a8e52645ae2d3208777132e8f643b15f0bae213d1c26 +size 137783 diff --git a/39E2T4oBgHgl3EQf6Aj8/content/tmp_files/2301.04197v1.pdf.txt b/39E2T4oBgHgl3EQf6Aj8/content/tmp_files/2301.04197v1.pdf.txt new file mode 100644 index 0000000000000000000000000000000000000000..7c9e32773d72b6ba6421eca57a08f3c5952e0956 --- /dev/null +++ b/39E2T4oBgHgl3EQf6Aj8/content/tmp_files/2301.04197v1.pdf.txt @@ -0,0 +1,856 @@ +Disconnected and multiply connected spectra in the 2D attractive Hubbard model +Johan Carlström +Department of Physics, Stockholm University, 106 91 Stockholm, Sweden +(Dated: January 12, 2023) +Fermi gases and liquids display an excitation spectrum that is simply connected, ensuring closed Fermi sur- +faces. In strongly correlated systems like the cuprate superconductors, the existence of open sheets of Fermi +surface known as Fermi arcs indicate a distinctly different topology of the spectrum with no equivalent in Fermi +liquid theory. Here, we demonstrate a generic mechanism by which correlation effects in fermionic systems +can change the topology of the spectrum. Using diagrammatic Monte Carlo simulations, we demonstrate the +existence of disconnected and multiply connected excitation spectra in the attractive Hubbard model in the +BCS-BEC cross-over regime. 
These topologically nontrivial spectra are a prerequisite for Fermi arcs.

Landau's Fermi liquid theory [1] is the standard model through which we understand interacting electrons in normal metals. In this paradigm, electronic states evolve adiabatically with increasing interactions, so that there remains a direct correspondence between the states of a non-interacting Fermi gas and the quasi-particles of the interacting system. A key consequence of this relationship is that the excitation spectrum of the interacting system inherits the topology of the bands associated with the noninteracting state. In the absence of gap-closing points, the energy bands of Fermi gases are generally simply connected sets, and so, consequently, are the spectra of Fermi liquids. This, in turn, implies a Fermi surface that is closed (this point also holds for spectra with nodes).

Strongly correlated systems often display phenomena that fall decidedly outside of the Fermi liquid regime. In the cuprates, superconductivity is nucleated from a pseudogap state with open sheets of Fermi surface, which persist over a wide range of doping levels [2]. The physical origin of these Fermi arcs remains highly contested.

It has been observed in the cuprates that superconducting fluctuations persist above the critical temperature [3–5], and it has been proposed that this fact may explain the origin of the pseudogap state [6]. This in turn raises key questions about the pairing regime, which also remains disputed: if the cuprates are BCS-like, then the fluctuating region should be understood in terms of a paired state without global phase coherence [7]. In the BEC limit, the electrons form bound pairs which give rise to a bosonic normal liquid at temperatures far above Tc [8]. The onset of superconductivity would then occur as these pairs condense at a much lower temperature.
While these two scenarios are often both referred to by the term "preformed pairs", they are remarkably different. Between these two extrema lies an extensive BCS-BEC crossover regime [9].

A directly opposing point of view is that preformed pairs have no part in the emergence of Fermi arcs, and that the pseudogap and paired states are instead antagonistic to each other. ARPES imaging is claimed to show direct competition between superconductivity and a distinctly different order parameter that is associated with the pseudogap state [10, 11]. A candidate for this order parameter is provided by a breaking of translation symmetry [12], which is observed in STM imaging [13, 14].

Theoretically predicting the existence of Fermi arcs in model Hamiltonians is challenging due to a lack of reliable numerical techniques for strongly correlated fermions. Nonetheless, recent variational Monte Carlo calculations suggest that the pseudogap physics observed in the cuprates is at least qualitatively captured by the single-band Hubbard model. For Hubbard clusters up to 64 sites, Fermi arcs are observed at a carrier concentration of 6.25%, and remnants of these are present at 12.5% doping [15]. This may be compared to the cuprates, where pseudogap physics persists up to a carrier concentration of ~20% [2, 16]. The existence of Fermi arcs in a simple model Hamiltonian like the Hubbard model is encouraging, since it may indicate that this is a generic phenomenon.

A second theoretical challenge is to qualitatively explain how Fermi liquid theory fails in strongly correlated systems, and to connect this insight with the emergence of Fermi arcs. Here, a key observation is that a simply connected excitation spectrum does not permit open sheets of Fermi surface.
This relationship implies that the electronic state's adiabatic dependence on interaction strength must necessarily break down in such a way that the connectivity of the spectrum changes, see also Fig. 1.

arXiv:2301.04197v1 [cond-mat.str-el] 10 Jan 2023

Figure 1. Relationship between spectral topology and Fermi arcs. The multiply connected spectrum intersects the Fermi level on a set of open and disconnected lines which constitute Fermi arcs. By contrast, a simply connected spectrum must necessarily intersect the Fermi level on a set of closed lines. This implies that a topologically nontrivial spectrum is a prerequisite of Fermi arcs. (Panel labels in the original figure: Spectrum, Fermi level, Fermi arc.)

In this work, we discuss how strong interactions can give rise to non-Fermi-liquid phases which are characterized by band fractionalization [17]. Using the attractive-interaction Hubbard model as an example, we demonstrate that the operators associated with these fractional bands exhibit vanishing phase spaces in parts of the Brillouin zone, which leads to disconnected or multiply connected excitation spectra. These topologically nontrivial spectra are a fundamental prerequisite for the existence of Fermi arcs.

Band fractionalization and spectral topology—To illustrate the breakdown of Fermi liquid theory, we consider the attractive Hubbard model (AHM), which is given by

H = \sum_{\langle ij \rangle \sigma} t\, c^\dagger_{i\sigma} c_{j\sigma} + \sum_i \left( U n_{i\downarrow} n_{i\uparrow} - \mu n_i \right), \quad U < 0.    (1)

Because of the interaction, the energy bands are generally split into two sub-bands [18], a phenomenon that is also referred to as band fractionalization [17]. For strong contact interaction, these sub-bands are generally singlon-like and doublon-like respectively, prompting us to introduce the corresponding operators and associated spinors:

c^\dagger_{i\sigma} = s^\dagger_{i\sigma} + d^\dagger_{i\sigma}, \quad s^\dagger_{i\sigma} = c^\dagger_{i\sigma}(1 - n_{i\bar\sigma}), \quad d^\dagger_{i\sigma} = c^\dagger_{i\sigma} n_{i\bar\sigma}, \quad \Psi^\dagger_{i\sigma} = \begin{pmatrix} s^\dagger_{i\sigma} & d^\dagger_{i\sigma} \end{pmatrix}, \quad \Psi_{i\sigma} = \begin{pmatrix} s_{i\sigma} \\ d_{i\sigma} \end{pmatrix}.    (2)

Here, s† and d† are the singlon and doublon creation operators while \bar\sigma = -\sigma. We can then define a "quasi-particle" (QP) Greens function based on the outer product of the spinors:

\Gamma_\sigma(x_2 - x_1) = \langle T_\tau \Psi^\dagger_{i\sigma}(x_1) \otimes \Psi_{i\sigma}(x_2) \rangle,    (3)

from which the ordinary electronic Greens function is obtained by the summation

G_\sigma(x) = \sum_{\alpha\beta} \Gamma_{\alpha\beta\sigma}(x).    (4)

In the atomic limit, the QP Greens function is diagonal, with a frequency space representation given by

\Gamma^A_\sigma(\omega) = \begin{pmatrix} \frac{1+e^\mu}{Z_A} \frac{1}{i\omega+\mu} & 0 \\ 0 & \frac{e^\mu + e^{2\mu-U}}{Z_A} \frac{1}{i\omega+\mu-U} \end{pmatrix}.    (5)

Here, the energy is for simplicity given in units of the temperature (corresponding to the case of unit temperature). The Greens function (5) resembles that of a two-component system, except that it is rescaled by two "quasiparticle weights". To pursue this analogy we introduce the weight W according to

W = \begin{pmatrix} \frac{1+e^\mu}{Z_A} & 0 \\ 0 & \frac{e^\mu + e^{2\mu-U}}{Z_A} \end{pmatrix} = w_0 \sigma_0 + w_z \sigma_z,    (6)

where we note that (6) must satisfy

w_0 \geq |w_z|.    (7)

In the limit w_z \to w_0, the system is effectively Gutzwiller projected, and doublons can be regarded as "forbidden". In this scenario, the doublon operators can be said to have a vanishing phase space in the sense that they have a domain or codomain which does not overlap with the sub-space on which we project. The same can be said about the singlon operator in the limit w_z \to -w_0. In these cases, the doublon or singlon parts do not contribute to the Greens function, and thus not to the spectrum either.

We may then express the atomic Greens function (5) in terms of a reweighted two-component system according to

\Gamma^A_\sigma(\omega) = \frac{W}{i\omega - V}, \quad V = \left( \frac{U}{2} - \mu \right) \sigma_0 - \frac{U}{2} \sigma_z,    (8)

where V is the effective two-component Hamiltonian.

Next, we note that the tunneling term may be written

t c^\dagger_{i\sigma} c_{j\sigma} = \Psi^\dagger_{i\sigma} K \Psi_{j\sigma}, \quad K = t(\sigma_0 + \sigma_x).    (9)

Thus, including the first correction of the strong-coupling expansion [19], we obtain a Greens function

\Gamma_\sigma(\omega) = \Gamma^A_\sigma(\omega) + \Gamma^A_\sigma(\omega) K(k) \Gamma^A_\sigma(\omega) + \ldots = \frac{1}{i\omega - V - W K(k) W}.    (10)

At this point, the effective two-component Hamiltonian H_e = V + WK(k) is no longer diagonal, and the dispersion thus mixes the singlon and doublon components. Additionally, H_e is non-Hermitian, and does not generally exhibit an orthonormal eigenbasis. However, due to a combination of PT symmetry and the condition (7), the eigenvalues remain real.

Due to the factor W, the spectral weights of the two sub-bands are generally not equal, and one of them may even vanish asymptotically. This point is central to the spectral topology: if we conduct a strong-coupling expansion to higher order, then we will find that the QP weight W becomes momentum dependent. If the phase space for a sub-band operator of the type (2) vanishes in part of the Brillouin zone, then so does the corresponding spectral weight, implying that the spectrum is no longer simply connected. Strong-coupling expansion by hand is however not feasible beyond first order, and to explore this concept we have to employ numerical techniques.

Numerical treatment—To test the preceding conjecture, we employ bold-line diagrammatic Monte Carlo simulations, specifically focusing on the attractive Hubbard model in the BCS-BEC crossover regime. This method is based on stochastic sampling of Feynman-type graphs [20], and is unbiased in the sense that the only systematic source of error is truncation of the series. For a convergent series, asymptotically exact results are obtained directly in the macroscopic limit. To be able to address systems with strong interactions we use a particular formulation known as strong-coupling diagrammatic Monte Carlo (SCDMC) [19, 21–24], where the diagrammatic elements are connected vertices of propagating electrons that are non-perturbative in U. The computational protocol employed here is outlined in detail in [19]. In SCDMC, the expansion parameter is the hopping integral t.
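The algebra of Eqs. (5)-(10) is straightforward to check numerically. The sketch below is a minimal illustration, not the paper's code: it uses the atomic-limit weights (a momentum-independent W), an assumed square-lattice form for the dispersion in K(k), and an illustrative choice of μ away from half-filling so that the singlon and doublon weights differ. It verifies condition (7) and that the non-Hermitian H_e = V + WK(k) nonetheless has real eigenvalues.

```python
import numpy as np

# Illustrative parameters: U = -5|t| as in the text; mu is chosen away from
# half-filling so that the two weights differ. Energies are in units of the
# temperature (unit temperature), as assumed for Eq. (5).
t, U, mu = 1.0, -5.0, -4.0

# Atomic-limit partition function and quasiparticle weight W, Eqs. (5)-(6)
ZA = 1.0 + 2.0 * np.exp(mu) + np.exp(2.0 * mu - U)
W = np.diag([(1.0 + np.exp(mu)) / ZA,
             (np.exp(mu) + np.exp(2.0 * mu - U)) / ZA])
w0 = 0.5 * (W[0, 0] + W[1, 1])
wz = 0.5 * (W[0, 0] - W[1, 1])
assert w0 >= abs(wz)  # condition (7)

sigma0, sigmaz = np.eye(2), np.diag([1.0, -1.0])
sigmax = np.array([[0.0, 1.0], [1.0, 0.0]])
V = (U / 2.0 - mu) * sigma0 - (U / 2.0) * sigmaz  # Eq. (8)

def He(kx, ky):
    """Effective two-component Hamiltonian He = V + W K(k), cf. Eq. (10)."""
    tk = -2.0 * t * (np.cos(kx) + np.cos(ky))  # assumed square-lattice t_k
    K = tk * (sigma0 + sigmax)                 # tunneling vertex, Eq. (9)
    return V + W @ K

# Although He is manifestly non-Hermitian, its eigenvalues are real,
# consistent with the PT-symmetry argument in the text.
for k in [(0.0, 0.0), (np.pi / 2, 0.3), (np.pi, np.pi)]:
    ev = np.linalg.eigvals(He(*k))
    assert np.allclose(ev.imag, 0.0, atol=1e-9)
```

The reality of the spectrum follows here because W has positive diagonal entries, so W^{-1/2} H_e W^{1/2} is real symmetric; the full momentum-dependent W of the higher-order expansion is beyond this sketch.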
The principal observable that we compute is the polarization operator of the hopping integral, here denoted \Pi_t(\omega, k). From the polarization operator, we obtain the dressed hopping integral via the Bethe-Salpeter equation:

\tilde{t}(\omega, k) = \frac{1}{t^{-1}(k) - \Pi_t(\omega, k)}.    (11)

We expand in the dressed hopping \tilde{t}, while retaining only the skeleton diagrams. By iterating until convergence, we obtain a self-consistent solution for \tilde{t} which implicitly takes into account certain classes of diagrams to infinite order.

The Greens function of the interacting system is closely related to the dressed hopping integral, and can be obtained from the equation

G(\omega, k) = \frac{1}{\Pi_t^{-1}(\omega, k) - t_k}.    (12)

To the lowest order, the polarization operator is given by the atomic-limit Greens function, meaning that Eq. (10) is reproduced. We conduct a self-consistent summation of all diagrams to order 7, which permits us to assess the convergence properties of the series; for a discussion, see Appendix I.

We compute a discrete approximation for the spectrum using numerical analytical continuation [25]: first, we define a spectral reconstruction of the Greens function and a corresponding error metric according to

G_R(\tau, k) = \sum_{n=1}^{n_{max}} A_n(k) \frac{e^{-\epsilon_n \tau}}{1 + e^{\beta \epsilon_n}}, \quad \tau < 0,    (13)

\Delta[k, \{A_n(k)\}] = \sqrt{\frac{1}{\beta} \int d\tau \, [G(\tau, k) - G_R(\tau, k)]^2}.    (14)

We use n_{max} = 121 as a compromise between accuracy and computational cost. To obtain the best estimate for the spectral function A(k), we minimize the error metric \Delta through a process of simulated annealing followed by a line-search technique: in the first stage, we use Monte Carlo to update \{A_n(k)\} with an acceptance ratio \sim e^{-\kappa \Delta}, while successively increasing the inverse pseudo-temperature \kappa. In the second stage, we minimize \Delta using Newton-Raphson. This reduces the error only very slightly, but tends to result in a smoother spectrum.
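The annealing stage can be sketched in a few lines. The example below is a toy reconstruction on synthetic data, not the paper's implementation: the grid sizes, cooling schedule, and weight-moving update are invented for illustration, and the Newton-Raphson polish is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

beta = 4.0
eps = np.linspace(-6.0, 6.0, 41)                    # small energy grid for speed
tau = np.linspace(-beta, 0.0, 64, endpoint=False)   # tau < 0, as in Eq. (13)

# Kernel of Eq. (13): shape (ntau, neps)
kernel = np.exp(-np.outer(tau, eps)) / (1.0 + np.exp(beta * eps))

# Synthetic "data": a two-pole spectrum mimicking two sub-bands
A_true = np.zeros_like(eps)
A_true[10], A_true[30] = 0.7, 0.3
G_data = kernel @ A_true

def error(A):
    """Discretized error metric Delta of Eq. (14)."""
    diff = G_data - kernel @ A
    return np.sqrt(np.mean(diff ** 2))

# Simulated annealing: Metropolis updates accepted with probability
# ~ exp(-kappa * dDelta), with the inverse pseudo-temperature kappa increased
# in stages. Moves shift weight between bins, so normalization is conserved.
A = np.full_like(eps, 1.0 / len(eps))
delta = error(A)
for kappa in [1e2, 1e3, 1e4, 1e5]:
    for _ in range(4000):
        i, j = rng.integers(len(eps), size=2)
        amount = min(A[i], rng.uniform(0.0, 0.05))
        A_new = A.copy()
        A_new[i] -= amount
        A_new[j] += amount
        d_new = error(A_new)
        if d_new < delta or rng.random() < np.exp(-kappa * (d_new - delta)):
            A, delta = A_new, d_new
```

After the loop, `A` concentrates near the two true poles and `delta` is far below its starting value; in the actual protocol this estimate would then be polished by a Newton-Raphson line search.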
From the spectrum, we obtain a (discretized) estimate for the density of states via the integral

\mathrm{dos}(\epsilon_n) = \int \frac{dk}{(2\pi)^D} A_n(k).    (15)

The normalization of Eq. (13) is such that the summations over A_n and \mathrm{dos}(\epsilon_n) are unity.

We consider the Hubbard model with an attractive contact interaction given by U = -5|t|, at temperatures t/T = 1 and t/T = 4. We examine the cases of half-filling and a particle density of ⟨n̂⟩ ≈ 1.88. The results of our simulations are summarized in Fig. 2.

At half-filling and a higher temperature of t/T = 1, we find that the density of states (a) has a minimum at the Fermi level, though the system remains gapless. The momentum-resolved particle density (b) attains minima and maxima at ~0.4 and ~1.6. The spectral density (c) exhibits two smeared sub-bands, with densities that are visibly momentum-dependent.

Reducing the temperature, the density of states (d) vanishes at the Fermi level, indicating that the system is gapped against fermionic excitations. The particle density extrema (e) are now close to 0 and 2.0 respectively. The spectral density (f) is sharply peaked, with a weight that is strongly dependent on momentum.

If we also increase the particle density, then the upper sub-band is strongly suppressed as a result (g). The system is now completely filled in a large fraction of the Brillouin zone (h), and the lower sub-band carries most of the spectral weight (i).

The momentum-dependent spectral weights can be understood from the fact that the two sub-bands originate in singlon-like and doublon-like degrees of freedom: for sufficiently strong attraction, the Hubbard model prefers to have occupation numbers of 0 or 2. Singly occupied sites are situated at high energy, implying that the upper sub-band is singlon-like. At small momenta, k ≈ (0, 0), the particle density is smaller, and the singlon operator has a substantial phase space allowing for a high spectral density.
Near k = (π, π), the particle density approaches 2, meaning that the phase space for the singlon operator vanishes, along with the spectral weight of this sub-band. For the doublon-like component, the situation is the opposite, with a vanishing spectral density when the density is small.

To quantify the suppression of the spectral density, we define the spectral weight of a sub-band according to

\rho(k) = \sum_{n=n_{min}}^{n_{max}} A_n(k),    (16)

where the range of indices n should be taken to include the entire sub-band, but nothing else. At a temperature of t/T = 4 and half-filling, the system remains gapped, so that we can identify the upper sub-band with positive energies and the lower sub-band with negative energies. Doping the system, the two sub-bands are still well separated, with the density of states vanishing at \epsilon \approx 1.5t, suggesting we use this energy as the dividing point. At the higher temperature, the two sub-bands overlap. We can still calculate spectral weights based on \epsilon = 0 as our dividing point, though this approximation may slightly underestimate the spectral weight at its minimum, while overestimating it at the maximum.

Figure 2. Spectra and equation of state for the attractive Hubbard model with U = −5|t|, at temperatures of t/T = 1 (a-c) and t/T = 4 (d-i). The figures (a-f) correspond to half-filling, while (g-i) correspond to ⟨n̂⟩ ≈ 1.88. At high temperature, the spectrum (a) reveals a suppression of the density of states at the Fermi level. The particle density (b) exhibits a minimum at k = (0, 0) with ⟨n̂⟩ ≈ 0.4 and a maximum at k = (π, π) with ⟨n̂⟩ ≈ 1.6. The momentum-resolved spectral density (c), taken along the dashed line in (b), reveals two sub-bands. Decreasing the temperature, the density of states (d) vanishes at the Fermi level, implying that the system is gapped with respect to fermionic excitations. The particle density (e) now has minima and maxima close to 0 and 2.0 respectively. The spectral density (f) reveals sharp families of excitations with a spectral weight that is strongly dependent on momentum and almost vanishes in part of the Brillouin zone. Increasing the particle density to ⟨n̂⟩ ≈ 1.88, the density of states (g) reveals a large peak that is doublon-like, and a much suppressed peak corresponding to singlons. The peaks are well separated, and the density of states vanishes at ϵ ≈ 1.5t. The spectral density reveals a large doublon-like peak, though the singlon peak has a presence mainly near k = (0, 0). This data was obtained using an expansion order O = 6.

The spectral weight of the singlon-like component is shown in Fig. 3. At a temperature of t/T = 1 and half-filling (a), the singlon-like component is suppressed to ≈ 16% at k ≈ (π, π). At a temperature of t/T = 4 (b), this minimum drops below 1%. The strong temperature dependence is consistent with the notion of a vanishing phase space for the singlon operator: at k = (π, π), the system has a preference for double occupation, and the singlon operator can only act in the presence of thermal fluctuations. As the temperature is reduced, these are exponentially suppressed together with the spectral weight. Asymptotically, this results in a multiply connected spectrum which lacks states in part of the Brillouin zone. Increasing the particle density (c), the spectral weight attains a maximum at k = (0, 0) while asymptotically vanishing away from it. The result is a disconnected spectrum.
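The statement that the spectrum becomes disconnected can be made operational: threshold the sub-band weight ρ(k) of Eq. (16) and inspect its support on the Brillouin-zone torus. A toy sketch follows; the weight profile is invented, loosely mimicking a singlon weight that survives only near k = (0, 0).

```python
import numpy as np

nk = 64
kx = np.linspace(-np.pi, np.pi, nk, endpoint=False)
KX, KY = np.meshgrid(kx, kx, indexing="ij")

# Toy singlon-like weight: finite near k = (0, 0), exponentially small elsewhere
rho = np.exp(-(KX ** 2 + KY ** 2) / 0.5)

# "To exponential accuracy" the rest of the BZ carries no states
support = rho > 0.05
assert support.sum() < support.size  # voids: states absent in most of the BZ

def count_components(mask):
    """Count connected components of a boolean mask on a periodic grid."""
    seen = np.zeros_like(mask, dtype=bool)
    n_comp = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and not seen[i, j]:
                n_comp += 1
                stack = [(i, j)]
                seen[i, j] = True
                while stack:
                    a, b = stack.pop()
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        na = (a + da) % mask.shape[0]
                        nb = (b + db) % mask.shape[1]
                        if mask[na, nb] and not seen[na, nb]:
                            seen[na, nb] = True
                            stack.append((na, nb))
    return n_comp

# A single island around k = (0, 0), detached from the rest of the BZ
print(count_components(support))  # -> 1
```

The same bookkeeping applied to a weight suppressed only near k = (π, π) would instead yield a support with a hole, i.e. the multiply connected case.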
It should be noted that we do not reach the point where the spectrum completely vanishes, since we are limited to finite temperatures. Diagrammatic Monte Carlo generally requires that the series converges, and this is often not the case at sufficiently low temperatures. Real condensed matter systems are also generally realized at finite temperature. However, thermal fluctuations are exponentially suppressed with the inverse temperature. If the relevant energy scale is large compared to the temperature, then we can for all practical purposes regard the system as being in the asymptotic limit where the spectral density vanishes in part of the Brillouin zone. Once the spectrum has a nontrivial connectivity, there are no topological obstacles to an intersection with the Fermi level that is an open line in 2D, as shown in Fig. 1, or an open surface in 3D.

Figure 3. Spectral weight of the singlon-like sub-band, obtained from equation (16). At half-filling and a temperature of t/T = 1 (a), the weight is suppressed near k = (π, π) and reaches a minimum of ≈ 16%. Reducing the temperature (b), this minimum falls below 1%. Increasing the particle density to ⟨n̂⟩ ≈ 1.88 (c), the spectrum retains a finite weight near k = (0, 0) but almost vanishes elsewhere. The strong suppression of the spectral weight at certain momenta can be understood from a vanishing phase space of singlon-like excitations.

Conclusions—In non-Fermi-liquids, band fractionalization effectively splits the electron energy into a distribution of quasiparticle energies.
The spectral weight of these sub-bands is determined by the phase space of the corresponding operators, implying that it is generally momentum dependent. In strongly correlated systems, this phase space may vanish to exponential accuracy, creating voids in parts of the Brillouin zone which change the topology of the excitation spectrum. This effect is a prerequisite for Fermi arcs, and spectral topology should therefore be regarded as an essential property of strongly correlated phases.

This work was supported by the Swedish Research Council (VR) through grant 2018-03882. Computations were performed on resources provided by the Swedish National Infrastructure for Computing (SNIC) at the National Supercomputer Centre in Linköping, Sweden.

[1] L.D. Landau, E.M. Lifshitz, and L.P. Pitaevskii, Course of Theoretical Physics: Statistical Physics, Part 2: by E.M. Lifshitz and L.P. Pitaevskii, v. 9 (1980).
[2] Su-Di Chen, Makoto Hashimoto, Yu He, Dongjoon Song, Ke-Jun Xu, Jun-Feng He, Thomas P. Devereaux, Hiroshi Eisaki, Dong-Hui Lu, Jan Zaanen, and Zhi-Xun Shen, "Incoherent strange metal sharply bounded by a critical doping in bi2212," Science 366, 1099–1102 (2019).
[3] Takeshi Kondo, W. Malaeb, Y. Ishida, T. Sasagawa, H. Sakamoto, Tsunehiro Takeuchi, T. Tohyama, and S. Shin, "Point nodes persisting far beyond tc in bi2212," Nature Communications 6, 7699 (2015).
[4] Yu He, Su-Di Chen, Zi-Xiang Li, Dan Zhao, Dongjoon Song, Yoshiyuki Yoshida, Hiroshi Eisaki, Tao Wu, Xian-Hui Chen, Dong-Hui Lu, Christoph Meingast, Thomas P. Devereaux, Robert J. Birgeneau, Makoto Hashimoto, Dung-Hai Lee, and Zhi-Xun Shen, "Superconducting fluctuations in overdoped bi2sr2cacu2o8+δ," Phys. Rev. X 11, 031068 (2021).
[5] N. Bergeal, J. Lesueur, M. Aprili, G. Faini, J. P. Contour, and B.
Leridon, "Pairing fluctuations in the pseudogap state of copper-oxide superconductors probed by the josephson effect," Nature Physics 4, 608–611 (2008).
[6] Y. I. Seo, W. J. Choi, Shin-ichi Kimura, and Yong Seung Kwon, "Evidence for a preformed cooper pair model in the pseudogap spectra of a ca10(pt4as8)(fe2as2)5 single crystal with a nodal superconducting gap," Scientific Reports 9, 3987 (2019).
[7] John Sous, Yu He, and Steven A. Kivelson, "Absence of a bcs-bec crossover in the cuprate superconductors," (2022).
[8] Shengtao Jiang, Long Zou, and Wei Ku, "Non-fermi-liquid scattering against an emergent bose liquid: Manifestations in the kink and other exotic quasiparticle behavior in the normal-state cuprate superconductors," Phys. Rev. B 99, 104507 (2019).
[9] N. Harrison and M. K. Chan, "Magic gap ratio for optimally robust fermionic condensation and its implications for High−Tc superconductivity," Phys. Rev. Lett. 129, 017001 (2022).
[10] Makoto Hashimoto, Rui-Hua He, Kiyohisa Tanaka, Jean-Pierre Testaud, Worawat Meevasana, Rob G. Moore, Donghui Lu, Hong Yao, Yoshiyuki Yoshida, Hiroshi Eisaki, Thomas P. Devereaux, Zahid Hussain, and Zhi-Xun Shen, "Particle–hole symmetry breaking in the pseudogap state of bi2201," Nature Physics 6, 414–418 (2010).
[11] Makoto Hashimoto, Elizabeth A. Nowadnick, Rui-Hua He, Inna M. Vishik, Brian Moritz, Yu He, Kiyohisa Tanaka, Robert G. Moore, Donghui Lu, Yoshiyuki Yoshida, Motoyuki Ishikado, Takao Sasagawa, Kazuhiro Fujita, Shigeyuki Ishida, Shinichi Uchida, Hiroshi Eisaki, Zahid Hussain, Thomas P. Devereaux, and Zhi-Xun Shen, "Direct spectroscopic evidence for phase competition between the pseudogap and superconductivity in bi2sr2cacu2o8+δ," Nature Materials 14, 37–42 (2015).
[12] J.-H. Ma, Z.-H. Pan, F. C. Niestemski, M. Neupane, Y.-M. Xu, P. Richard, K. Nakayama, T. Sato, T. Takahashi, H.-Q. Luo, L. Fang, H.-H. Wen, Ziqiang Wang, H. Ding, and V.
Madhavan, "Coexistence of competing orders with two energy gaps in real and momentum space in the high temperature superconductor bi2sr2−xlaxcuo6+δ," Phys. Rev. Lett. 101, 207002 (2008).
[13] W. D. Wise, M. C. Boyer, Kamalesh Chatterjee, Takeshi Kondo, T. Takeuchi, H. Ikuta, Yayu Wang, and E. W. Hudson, "Charge-density-wave origin of cuprate checkerboard visualized by scanning tunnelling microscopy," Nature Physics 4, 696–699 (2008).
[14] J. E. Hoffman, E. W. Hudson, K. M. Lang, V. Madhavan, H. Eisaki, S. Uchida, and J. C. Davis, "A four unit cell periodic pattern of quasi-particle states surrounding vortex cores in bi2sr2cacu2o8+δ," Science 295, 466–469 (2002).
[15] P. Rosenberg, D. Sénéchal, A. M. S. Tremblay, and M. Charlebois, "Fermi arcs from dynamical variational monte carlo," (2022).
[16] S. Badoux, W. Tabis, F. Laliberté, G. Grissonnanche, B. Vignolle, D. Vignolles, J. Béard, D. A. Bonn, W. N. Hardy, R. Liang, N. Doiron-Leyraud, Louis Taillefer, and Cyril Proust, "Change of carrier density at the pseudogap critical point of a cuprate superconductor," Nature 531, 210–214 (2016).
[17] Masatoshi Imada and Takafumi J. Suzuki, "Excitons and dark fermions as origins of mott gap, pseudogap and superconductivity in cuprate superconductors - general concept and basic formalism based on gap physics," Journal of the Physical Society of Japan 88, 024701 (2019), https://doi.org/10.7566/JPSJ.88.024701.
[18] K. A. Chao, J. Spałek, and A. M. Oleś, "Canonical perturbation expansion of the hubbard model," Phys. Rev. B 18, 3453–3464 (1978).
[19] Johan Carlström, "Strong-coupling diagrammatic monte carlo technique for correlated fermions and frustrated spins," Phys. Rev. B 103, 195147 (2021).
[20] Kris Van Houcke, Evgeny Kozik, N. Prokof'ev, and B. Svistunov, "Diagrammatic monte carlo," Physics Procedia 6, 95–105 (2010).
[21] Johan Carlström, "Spin-charge transformation of lattice fermion models: duality approach for diagrammatic simulation of strongly correlated systems," Journal of Physics: Condensed Matter 29, 385602 (2017).
[22] Johan Carlström, "Diagrammatic monte carlo procedure for the spin-charge transformed hubbard model," Phys. Rev. B 97, 075119 (2018).
[23] Johan Carlström, "Spectral shift technique for strongly correlated lattice fermions," (2021), arXiv:2111.05877 [cond-mat.str-el].
[24] Johan Carlström, "In situ controllable magnetic phases in doped twisted bilayer transition metal dichalcogenides," Phys. Rev. Research 4, 043126 (2022).
[25] Olga Goulko, Andrey S. Mishchenko, Lode Pollet, Nikolay Prokof'ev, and Boris Svistunov, "Numerical analytic continuation: Answers to well-posed questions," Phys. Rev. B 95, 014102 (2017).

APPENDIX I

To assess how truncation of the series affects the results, we compare the density of states and spectral function for the cases reported in the article at different expansion orders. In Fig. 4 we show the case of half-filling and temperatures t/T = 1 and t/T = 4 for expansion orders O = 5, 6, 7. At the higher temperature, we observe that the dos changes very little, though a small correction at ϵ = 0 is visible. The spectrum is qualitatively very similar, and we conclude that the impact of truncation is very small.

At the lower temperature, we see some changes in the shape of the dos when increasing the order from 5 to 6, though the system consistently remains gapped. The spectra show a weight that does not completely vanish at O = 5, but is strongly suppressed at higher orders. At O = 7, we begin to see noise in the spectrum as a result of the computational cost associated with expansions to high order. For this data set, we can conclude that truncation of the series has a limited quantitative impact, but it does not affect any of the conclusions derived in the paper.
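The order-by-order comparison lends itself to a simple numerical criterion: the relative change of an observable between successive expansion orders should shrink as the order grows. The sketch below is generic; the dos arrays are synthetic stand-ins for SCDMC output at O = 5, 6, 7.

```python
import numpy as np

def rel_change(a, b):
    """Relative L2 change between an observable at successive orders."""
    return np.linalg.norm(b - a) / np.linalg.norm(a)

# Placeholder dos curves (two sub-band peaks plus small order-by-order
# corrections near the Fermi level), standing in for simulation output
eps = np.linspace(-8.0, 8.0, 121)
dos5 = np.exp(-((eps - 2.0) ** 2)) + np.exp(-((eps + 2.0) ** 2))
dos6 = dos5 + 0.02 * np.exp(-eps ** 2)    # correction from order 5 -> 6
dos7 = dos6 + 0.005 * np.exp(-eps ** 2)   # smaller correction from 6 -> 7

# Shrinking corrections indicate a converging series
assert rel_change(dos6, dos7) < rel_change(dos5, dos6)
```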
In Fig. 5, we see the dos and spectra for the doped case at expansion orders O = 5, 6, 7. In this scenario, truncation of the series has no impact visible to the naked eye, and we can conclude that the result is virtually exact.

In conclusion, we find that the diagrammatic Monte Carlo simulations reported do accurately capture the physics of the attractive Hubbard model. The results are qualitatively not affected by truncation of the series, yet a small quantitative uncertainty remains for one of the data sets.

Figure 4. Convergence of the series at half-filling. The left column corresponds to an expansion order O = 5, the center corresponds to O = 6 and the right corresponds to O = 7. (a-c) give the dos at a temperature of t/T = 1, while (d-f) give the corresponding spectra. (g-i) give the dos at a temperature of t/T = 4, while (j-l) give the corresponding spectra. At the higher temperature, the corrections when changing the expansion order are very small, though a slight shift in dos at the Fermi level can be observed when comparing O = 5 (a) and O = 6 (b). At the lower temperature, we do see a quantitative difference in dos between orders 5 (g) and 6 (h), while the correction at order 7 (i) is smaller. The small peaks in the dos near the Fermi level in (g) are reflected in a suppressed fractionalized sub-band visible in (j). At orders 6 and 7, this fractionalized sub-band vanishes.

Figure 5. Convergence of the series in the strongly doped case. The density is ⟨n̂⟩ ≈ 1.88 and the temperature is t/T = 4. The left column (a,d) corresponds to an expansion order O = 5, the center column to O = 6 and the right column to O = 7.
The dos (a-c) does not change visibly with expansion order, and neither does the spectrum (d-f). We can therefore conclude that the observables have converged.

diff --git a/39E2T4oBgHgl3EQf6Aj8/content/tmp_files/load_file.txt b/39E2T4oBgHgl3EQf6Aj8/content/tmp_files/load_file.txt
filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf

Disconnected and multiply connected spectra in the 2D attractive Hubbard model

Johan Carlström
Department of Physics, Stockholm University, 106 91 Stockholm, Sweden
(Dated: January 12, 2023)

Fermi gases and liquids display an excitation spectrum that is simply connected, ensuring closed Fermi surfaces. In strongly correlated systems like the cuprate superconductors, the existence of open sheets of Fermi surface known as Fermi arcs indicates a distinctly different topology of the spectrum with no equivalent in Fermi liquid theory. Here, we demonstrate a generic mechanism by which correlation effects in fermionic systems can change the topology of the spectrum.
Using diagrammatic Monte Carlo simulations, we demonstrate the existence of disconnected and multiply connected excitation spectra in the attractive Hubbard model in the BCS-BEC cross-over regime. These topologically nontrivial spectra are a prerequisite for Fermi arcs.

Landau's Fermi liquid theory [1] is the standard model through which we understand interacting electrons in normal metals. In this paradigm, electronic states evolve adiabatically with increasing interactions so that there remains a direct correspondence between the states in a non-interacting Fermi gas and the quasi-particles of the interacting system. A key consequence of this relationship is that the excitation spectrum of the interacting system inherits the topology of the bands associated with the noninteracting state. In the absence of gap-closing points, the energy bands of Fermi gases are generally simply connected sets, and so consequently are the spectra of Fermi liquids.
This, in turn, implies a Fermi surface that is closed (this point also holds with nodes in the spectrum). Strongly correlated systems often display phenomena that fall decidedly outside of the Fermi liquid regime. In the cuprates, superconductivity is nucleated from a pseudogap state with open sheets of Fermi surface, which persist over a wide range of doping levels [2]. The physical origin of these Fermi arcs remains highly contested. It has been observed in the cuprates that superconducting fluctuations persist above the critical temperature [3–5], and it has been proposed that this fact may explain the origin of the pseudogap state [6]. This in turn raises key questions about the pairing regime, which also remains disputed: if the cuprates are BCS-like, then the fluctuating region should be understood in terms of a paired state without global phase coherence [7]. In the BEC limit, the electrons form bound pairs which give rise to a bosonic normal liquid at temperatures far above Tc [8].
The onset of superconductivity would then occur as these pairs condense at a much lower temperature. While these two scenarios are often both referred to by the term "preformed pairs", they are remarkably different. Between these two extrema lies an extensive BCS-BEC crossover regime [9]. A directly opposing point of view is that preformed pairs have no part in the emergence of Fermi arcs, and that the pseudogap and paired states are instead antagonistic to each other. ARPES imaging is claimed to show direct competition between superconductivity and a distinctly different order parameter that is associated with the pseudogap state [10, 11]. A candidate for this order parameter is provided by a breaking of translation symmetry [12], which is observed in STM imaging [13, 14]. Theoretically predicting the existence of Fermi arcs in model Hamiltonians is challenging due to a lack of reliable numerical techniques for strongly correlated fermions.
Nonetheless, recent variational Monte Carlo calculations suggest that the pseudogap physics observed in the cuprates is at least qualitatively captured by the single-band Hubbard model. For Hubbard clusters up to 64 sites, Fermi arcs are observed at a carrier concentration of 6.25%, and remnants of these are present at 12.5% doping [15]. This may be compared to the cuprates, where pseudogap physics persists up to a carrier concentration of ∼ 20% [2, 16]. The existence of Fermi arcs in a simple model Hamiltonian like the Hubbard model is encouraging since it may indicate that this is a generic phenomenon. A second theoretical challenge is to qualitatively explain how Fermi liquid theory fails in strongly correlated systems, and to connect this insight with the emergence of Fermi arcs. Here, a key observation is that a simply connected excitation spectrum does not permit open sheets of Fermi surface.
This relationship implies that the electronic state's adiabatic dependence on interaction strength must necessarily break down in such a way that the connectivity of the spectrum changes; see also Fig. 1.

Figure 1. Relationship between spectral topology and Fermi arcs. The multiply connected spectrum intersects the Fermi level on a set of open and disconnected lines which constitute Fermi arcs. By contrast, a simply connected spectrum must necessarily intersect the Fermi level on a set of closed lines. This implies that a topologically nontrivial spectrum is a prerequisite of Fermi arcs.

In this work, we discuss how strong interactions can give rise to non-Fermi-liquid phases which are characterized by band fractionalization [17]. Using the attractive-interaction Hubbard model as an example, we demonstrate that the operators associated with these fractional bands exhibit vanishing phase spaces in parts of the Brillouin zone, which leads to disconnected or multiply connected excitation spectra. These topologically nontrivial spectra are a fundamental prerequisite for the existence of Fermi arcs.

Band fractionalization and spectral topology—To illustrate the breakdown of Fermi liquid theory, we consider the attractive Hubbard model (AHM), which is given by

H = \sum_{\langle ij \rangle \sigma} t c^\dagger_{i\sigma} c_{j\sigma} + \sum_i (U n_{i\downarrow} n_{i\uparrow} - \mu n_i), \quad U < 0. \quad (1)

Because of the interaction, the energy bands are generally split into two sub-bands [18], a phenomenon that is also referred to as band fractionalization [17]. For strong contact interaction, these sub-bands are generally singlon-like and doublon-like respectively, prompting us to introduce the corresponding operators and associated spinors:

c^\dagger_{i\sigma} = s^\dagger_{i\sigma} + d^\dagger_{i\sigma}, \quad s^\dagger_{i\sigma} = c^\dagger_{i\sigma}(1 - n_{i\bar\sigma}), \quad d^\dagger_{i\sigma} = c^\dagger_{i\sigma} n_{i\bar\sigma}, \quad \Psi^\dagger_{i\sigma} = \begin{pmatrix} s^\dagger_{i\sigma} & d^\dagger_{i\sigma} \end{pmatrix}, \quad \Psi_{i\sigma} = \begin{pmatrix} s_{i\sigma} \\ d_{i\sigma} \end{pmatrix}. \quad (2)

Here, s† and d† are the singlon and doublon creation operators, while \bar\sigma = -\sigma. We can then define a "quasi-particle" (QP) Greens function based on the outer product of the spinors:

\Gamma_\sigma(x_2 - x_1) = \langle T_\tau \Psi^\dagger_{i\sigma}(x_1) \otimes \Psi_{i\sigma}(x_2) \rangle, \quad (3)

from which the ordinary electronic Greens function is obtained by the summation

G_\sigma(x) = \sum_{\alpha\beta} \Gamma_{\alpha\beta\sigma}(x). \quad (4)

In the atomic limit, the QP Greens function is diagonal, with a frequency-space representation given by

\Gamma^A_\sigma(\omega) = \begin{pmatrix} \frac{1 + e^\mu}{Z_A} \frac{1}{i\omega + \mu} & 0 \\ 0 & \frac{e^\mu + e^{2\mu - U}}{Z_A} \frac{1}{i\omega + \mu - U} \end{pmatrix}. \quad (5)

Here, the energy is for simplicity given in units of the temperature (corresponding to the case of unit temperature). The Greens function (5) resembles that of a two-component system, except that it is rescaled by two "quasiparticle weights". To pursue this analogy we introduce the weight W according to

W = \begin{pmatrix} \frac{1 + e^\mu}{Z_A} & 0 \\ 0 & \frac{e^\mu + e^{2\mu - U}}{Z_A} \end{pmatrix} = w_0 \sigma_0 + w_z \sigma_z, \quad (6)

where we note that (6) must satisfy

w_0 \geq |w_z|. \quad (7)

In the limit w_z \to w_0, the system is effectively Gutzwiller projected, and doublons can be regarded as "forbidden". In this scenario, the doublon operators can be said to have a vanishing phase space in the sense that they have a domain or codomain which does not overlap with the sub-space onto which we project. The same can be said about the singlon operator in the limit w_z \to -w_0. In these cases, the doublon or singlon parts do not contribute to the Greens function, and thus not to the spectrum either. We may then express the atomic Greens function (5) in terms of a reweighted two-component system according to

\Gamma^A_\sigma(\omega) = \frac{W}{i\omega - V}, \quad V = \left(\frac{U}{2} - \mu\right)\sigma_0 - \frac{U}{2}\sigma_z, \quad (8)

where V is the effective two-component Hamiltonian. Next, we note that the tunneling term may be written

t c^\dagger_{i\sigma} c_{j\sigma} = \Psi^\dagger_{i\sigma} K \Psi_{j\sigma}, \quad K = t(\sigma_0 + \sigma_x). \quad (9)

Thus, including the first correction of the strong-coupling expansion [19], we obtain a Greens function

\Gamma_\sigma(\omega) = \Gamma^A_\sigma(\omega) + \Gamma^A_\sigma(\omega) K(k) \Gamma^A_\sigma(\omega) + \ldots = \frac{1}{i\omega - V - WK(k)}\, W. \quad (10)

At this point, the effective two-component Hamiltonian H_e = V + WK(k) is no longer diagonal, and the dispersion thus mixes the singlon and doublon components. Additionally, H_e is non-Hermitian, and does not generally exhibit an orthonormal eigenbasis. However, due to a combination of PT symmetry and the condition (7), the eigenvalues remain real. Due to the factor W, the spectral weights of the two sub-bands are generally not equal, and one of them may even vanish asymptotically. This point is central to the spectral topology: if we conduct a strong-coupling expansion to higher order, then we will find that the QP weight W becomes momentum dependent. If the phase space for a sub-band operator of the type (2) vanishes in part of the Brillouin zone, then so does the corresponding spectral weight, implying that the spectrum is no longer simply connected.
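The structure of Eqs. (5)-(10) can be checked numerically. The sketch below builds W, V, and the effective two-component Hamiltonian H_e = V + WK(k) on a square lattice and verifies that the eigenvalues stay real even though H_e is non-Hermitian; the values of t, U, and µ are illustrative choices (not the data sets of the paper), with energies in units of the temperature as in Eq. (5).

```python
import numpy as np

# Sketch of Eqs. (5)-(10): quasiparticle weights W, atomic Hamiltonian V and
# the effective two-component Hamiltonian He = V + W K(k) on a square lattice.
# Energies are in units of the temperature (unit temperature, as in Eq. (5));
# t, U and mu below are illustrative values, not the parameters of the paper.
t, U = 1.0, -5.0
mu = U / 2                       # particle-hole-symmetric (half-filling) choice

# atomic partition function and the quasiparticle weights of Eq. (6)
ZA = 1 + 2 * np.exp(mu) + np.exp(2 * mu - U)
W = np.diag([(1 + np.exp(mu)) / ZA,                     # singlon-like weight
             (np.exp(mu) + np.exp(2 * mu - U)) / ZA])   # doublon-like weight
V = np.diag([-mu, U - mu])       # Eq. (8): V = (U/2 - mu) s0 - (U/2) sz

def H_eff(kx, ky):
    """He = V + W K(k), with K = t(s0 + sx) times the lattice form factor."""
    eps = 2 * (np.cos(kx) + np.cos(ky))          # square-lattice dispersion
    K = t * eps * np.array([[1.0, 1.0], [1.0, 1.0]])
    return V + W @ K

for k in [(0.0, 0.0), (np.pi / 2, np.pi / 2), (np.pi, np.pi)]:
    E = np.linalg.eigvals(H_eff(*k))
    # He is non-Hermitian, but PT symmetry plus w0 >= |wz| (Eq. 7) keeps E real
    print(k, np.sort(E.real))
```

At k = (π/2, π/2) the form factor vanishes and the two eigenvalues reduce to the atomic levels −µ and U − µ; away from this point the singlon and doublon components mix.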
Strong-coupling expansion by hand is however not feasible beyond first order, and to explore this concept we have to employ numerical techniques.

Numerical treatment—To test the preceding conjecture, we employ bold-line diagrammatic Monte Carlo simulations, specifically focusing on the attractive Hubbard model in the BCS-BEC cross-over regime. This method is based on stochastic sampling of Feynman-type graphs [20], and is unbiased in the sense that the only systematic source of error is truncation of the series. For a convergent series, asymptotically exact results are obtained directly in the macroscopic limit. To be able to address systems with strong interactions we use a particular formulation known as strong-coupling diagrammatic Monte Carlo (SCDMC) [19, 21–24], where the diagrammatic elements are connected vertices of propagating electrons that are non-perturbative in U. The computational protocol employed here is outlined in detail in [19]. In SCDMC, the expansion parameter is the hopping integral t.
The principal observable that we compute is the polarization operator of the hopping integral, here denoted \Pi_t(\omega, k). From the polarization operator, we obtain the dressed hopping integral via the Bethe-Salpeter equation:

\tilde{t}(\omega, k) = \frac{1}{t^{-1}(k) - \Pi_t(\omega, k)}. \quad (11)

We expand in the dressed hopping \tilde{t}, while retaining only the skeleton diagrams. By iterating until convergence, we obtain a self-consistent solution for \tilde{t} which implicitly takes into account certain classes of diagrams to infinite order. The Greens function of the interacting system is closely related to the dressed hopping integral, and can be obtained from the equation

G(\omega, k) = \frac{1}{\Pi_t^{-1}(\omega, k) - t_k}. \quad (12)

To the lowest order, the polarization operator is given by the atomic-limit Greens function, meaning that eq. (10) is reproduced.
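The self-consistency loop around Eq. (11) can be sketched as a fixed-point iteration. In the actual method Π_t is sampled diagrammatically; the polarization below is a toy stand-in (an atomic-limit-like pole plus a weak feedback term), so the numbers are purely illustrative and only the iteration structure is meaningful.

```python
import numpy as np

# Sketch of the self-consistency loop around Eq. (11): iterate
# t~(w,k) = 1 / (t^{-1}(k) - Pi_t(w,k)) until the dressed hopping converges.
# In SCDMC, Pi_t is sampled diagrammatically; here it is a toy stand-in.

def dressed_hopping(t_bare, pi_of_t, tol=1e-12, max_iter=500):
    """Fixed-point iteration of the Bethe-Salpeter relation, Eq. (11)."""
    t_dressed = t_bare.astype(complex)
    for _ in range(max_iter):
        t_new = 1.0 / (1.0 / t_bare - pi_of_t(t_dressed))
        if np.max(np.abs(t_new - t_dressed)) < tol:
            return t_new
        t_dressed = t_new
    raise RuntimeError("Bethe-Salpeter iteration did not converge")

# bare hopping on a small 1D k-grid (grid chosen so t(k) never vanishes)
k = np.linspace(-np.pi, np.pi, 31, endpoint=False)
t_bare = -2.0 * np.cos(k)

omega, mu = 1.0j, 2.5                    # one toy frequency/chemical potential
pi0 = 0.5 / (omega + mu)                 # lowest-order, atomic-limit-like term
pi_of_t = lambda td: pi0 + 0.01 * td     # weak, purely illustrative feedback

t_tilde = dressed_hopping(t_bare, pi_of_t)
# at the fixed point, Eq. (11) is satisfied to machine precision
print(np.max(np.abs(t_tilde - 1.0 / (1.0 / t_bare - pi_of_t(t_tilde)))))
```

Once the iteration stops, Eq. (11) holds identically on the grid; in the full scheme this dressed hopping re-enters the skeleton diagrams, which is what resums certain diagram classes to infinite order.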
We conduct a self-consistent summation of all diagrams to order 7, which permits us to assess convergence properties of the series; for a discussion, see Appendix I. We compute a discrete approximation for the spectrum using numerical analytic continuation [25]: First, we define a spectral reconstruction of the Greens function and a corresponding error metric according to

G_R(\tau, k) = \sum_{n=1}^{n_{max}} A_n(k) \frac{e^{-\epsilon_n \tau}}{1 + e^{\beta \epsilon_n}}, \quad \tau < 0, \quad (13)

\Delta[k, \{A_n(k)\}] = \sqrt{\frac{1}{\beta} \int d\tau \, [G(\tau, k) - G_R(\tau, k)]^2}. \quad (14)

We use n_{max} = 121 as a compromise between accuracy and computational cost. To obtain the best estimate for the spectral function A(k), we minimize the error metric \Delta through a process of simulated annealing followed by a line-search technique: In the first stage, we use Monte Carlo to update \{A_n(k)\} with an acceptance ratio \sim e^{-\kappa \Delta}, while successively increasing the inverse pseudo-temperature \kappa. In the second stage, we minimize \Delta using Newton-Raphson. This reduces the error only very slightly, but tends to result in a smoother spectrum.
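The annealing stage described above can be sketched as follows. The input G(τ) here is synthetic (generated from a known two-peak spectrum), the grid sizes are toy values, and the Newton-Raphson polishing stage is omitted; only the update scheme with acceptance ratio ~ exp(−κΔ) at increasing κ mirrors the text.

```python
import numpy as np

# Minimal sketch of the annealing stage of Eqs. (13)-(14): reconstruct a
# discretized spectrum {A_n} from imaginary-time data by Monte Carlo updates
# accepted with probability ~ exp(-kappa * Delta), at increasing kappa.
rng = np.random.default_rng(0)
beta, n_tau, n_eps = 4.0, 64, 41
tau = np.linspace(-beta, 0.0, n_tau, endpoint=False)   # tau < 0 as in Eq. (13)
eps = np.linspace(-6.0, 6.0, n_eps)

# kernel of Eq. (13): K[i, n] = exp(-eps_n tau_i) / (1 + exp(beta eps_n))
kernel = np.exp(-np.outer(tau, eps)) / (1.0 + np.exp(beta * eps))

A_true = np.zeros(n_eps); A_true[10], A_true[30] = 0.7, 0.3   # two sub-bands
G_data = kernel @ A_true                                      # synthetic data

def delta(A):
    """Discretized error metric of Eq. (14)."""
    r = G_data - kernel @ A
    return np.sqrt(np.mean(r * r))

A = np.full(n_eps, 1.0 / n_eps)     # start from a featureless spectrum
best = A.copy()
for kappa in 10.0 ** np.linspace(2, 7, 30):  # raise inverse pseudo-temperature
    for _ in range(400):
        i, j = rng.integers(n_eps, size=2)
        move = min(A[i], rng.uniform(0.0, 0.05))
        trial = A.copy(); trial[i] -= move; trial[j] += move  # keeps sum(A) = 1
        dd = delta(trial) - delta(A)
        if dd < 0 or rng.random() < np.exp(-kappa * dd):
            A = trial
        if delta(A) < delta(best):
            best = A.copy()

print("Delta(uniform) =", delta(np.full(n_eps, 1.0 / n_eps)),
      "Delta(best) =", delta(best))
```

Moving weight between two bins keeps the spectrum non-negative and normalized by construction, so no explicit constraint handling is needed in the updates.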
From the spectrum, we obtain a (discretized) estimate for the density of states via the integral

dos(\epsilon_n) = \int \frac{dk}{(2\pi)^D} A_n(k). \quad (15)

The normalization of Eq. (13) is such that the summations over A_n and dos(\epsilon_n) are unity. We consider the Hubbard model with an attractive contact interaction given by U = −5|t|, at temperatures t/T = 1 and t/T = 4. We examine the cases of half-filling and a particle density of ⟨n̂⟩ ≈ 1.88. The results of our simulations are summarized in Fig. 2. At half-filling and a higher temperature of t/T = 1, we find that the density of states (a) has a minimum at the Fermi level, though the system remains gapless.
The momentum-resolved particle density (b) attains minima and maxima at ∼ 0.4 and ∼ 1.6. The spectral density (c) exhibits two smeared sub-bands, with densities that are visibly momentum-dependent. Reducing the temperature, the density of states (d) vanishes at the Fermi level, indicating that the system is gapped against fermionic excitations. The particle density extrema (e) are now close to 0 and 2.0 respectively. The spectral density (f) is sharply peaked, with a weight that is strongly dependent on momentum. If we also increase the particle density, then the upper sub-band is strongly suppressed as a result (g).
The system is now completely filled in a large fraction of the Brillouin zone (h), and the lower sub-band carries most of the spectral weight (i). The momentum-dependent spectral weights can be understood from the fact that the two sub-bands originate in singlon-like and doublon-like degrees of freedom: For sufficiently strong attraction, the Hubbard model prefers to have occupation numbers of 0 or 2. Singly occupied sites are situated at high energy, implying that the upper sub-band is singlon-like. At small momenta, k ≈ (0, 0), the particle density is smaller, and the singlon operator has a substantial phase space allowing for a high spectral density. Near k = (π, π), the particle density approaches 2, meaning that the phase space for the singlon operator vanishes, along with the spectral weight of this sub-band. For the doublon-like component, the situation is the opposite, with a vanishing spectral density when the density is small.
To quantify the suppression of the spectral density, we define the spectral weight of a sub-band according to

\rho(k) = \sum_{n=n_{min}}^{n_{max}} A_n(k), \quad (16)

where the range of indices n should be taken to include the entire sub-band, but nothing else. At a temperature of t/T = 4 and half-filling, the system remains gapped, so that we can identify the upper sub-band with positive energies and the lower sub-band with negative energies. Doping the system, the two sub-bands are still well separated, with the density of states vanishing at ϵ ≈ 1.5t, suggesting we use this energy as the dividing point. At the higher temperature, the two sub-bands overlap. We can still calculate spectral weights based on ϵ = 0 as our dividing point, though this approximation may slightly underestimate the spectral weight at its minimum, while overestimating it at the maximum. The spectral weight of the singlon-like component is shown in Fig. 3.
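The bookkeeping behind Eq. (16) amounts to summing the discretized spectral function over the energy bins of one sub-band, on either side of a dividing energy. The spectrum used below is a synthetic two-band toy whose singlon-like weight is suppressed near k = π, mirroring the qualitative behavior described above; all numbers are illustrative.

```python
import numpy as np

# Sketch of Eq. (16): the sub-band weight rho(k) is the sum of the discretized
# spectral function A_n(k) over the energy bins of one sub-band. The dividing
# energy (0 here; 1.5t would be used for the doped case) selects the upper,
# singlon-like band. The Lorentzian 'spectrum' below is synthetic.

eps = np.linspace(-8.0, 8.0, 121)                 # n_max = 121 energy bins
kx = np.linspace(0, np.pi, 9)

def A(k, e):
    """Toy two-band spectral density: weights trade off with momentum."""
    w_singlon = 0.5 * (1 + np.cos(k))             # suppressed near k = pi
    lor = lambda e0: 0.3 / ((e - e0) ** 2 + 0.09)
    dens = w_singlon * lor(+2.5) + (1 - w_singlon) * lor(-2.5)
    return dens / dens.sum()                      # normalize: sum_n A_n = 1

def rho(k, divide=0.0):
    """Spectral weight of the upper (singlon-like) sub-band, Eq. (16)."""
    An = A(k, eps)
    return An[eps > divide].sum()

for k in kx:
    print(f"k = {k:.2f}  rho_singlon = {rho(k):.3f}")
```

The weight of the upper band falls from near 1 at k = 0 toward 0 at k = π, which is exactly the kind of momentum dependence that, in the real data, signals a vanishing phase space for the singlon operator.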
At a temperature of t/T = 1 and half-filling (a), the singlon-like component is suppressed to ≈ 16% at k ≈ (π, π). At a temperature of t/T = 4 (b), this minimum drops below 1%. The strong temperature dependence is consistent with the notion of a vanishing phase space for the singlon operator: at k = (π, π), the system has a preference for double occupation, and the singlon operator can only act in the presence of thermal fluctuations. As the temperature is reduced, these are exponentially suppressed together with the spectral weight. Asymptotically, this results in a multiply connected spectrum which lacks states in part of the Brillouin zone. Increasing the particle density (c), the spectral weight attains a maximum at k = (0, 0) while asymptotically vanishing elsewhere. The result is a disconnected spectrum. It should be noted that we do not reach the point where the spectrum completely vanishes, since we are limited to finite temperatures.

Figure 2. Spectra and equation of state for the attractive Hubbard model with U = −5|t|, at temperatures of t/T = 1 (a-c) and t/T = 4 (d-i). The figures (a-f) correspond to half-filling, while (g-i) correspond to ⟨n̂⟩ ≈ 1.88. At high temperature, the spectrum (a) reveals a suppression of the density of states at the Fermi level. The particle density (b) exhibits a minimum at k = (0, 0) with ⟨n̂⟩ ≈ 0.4 and a maximum at k = (π, π) with ⟨n̂⟩ ≈ 1.6. The momentum-resolved spectral density (c), taken along the dashed line in (b), reveals two sub-bands. Decreasing the temperature, the density of states (d) vanishes at the Fermi level, implying that the system is gapped with respect to fermionic excitations. The particle density (e) now has minima and maxima close to 0 and 2.0, respectively. The spectral density (f) reveals sharp families of excitations with a spectral weight that is strongly dependent on momentum and almost vanishes in part of the Brillouin zone. Increasing the particle density to ⟨n̂⟩ ≈ 1.88, the density of states (g) reveals a large doublon-like peak and a much suppressed peak corresponding to singlons. The peaks are well separated, and the density of states vanishes at ϵ ≈ 1.5t. The spectral density reveals a large doublon-like peak, though the singlon peak has a presence mainly near k = (0, 0). This data was obtained using an expansion order O = 6.
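The claimed exponential suppression can be checked with a back-of-envelope Boltzmann estimate. This is a toy consistency check, not a fit from the paper: the gap value Δ below is chosen by hand so that the weight matches ≈ 16% at t/T = 1, under the assumed scaling weight ~ e^{−Δ/T}.

```python
import math

def boltzmann_weight(delta, temperature):
    """Toy estimate: singlon phase-space weight ~ exp(-delta / T)."""
    return math.exp(-delta / temperature)

# Illustrative gap delta = 1.83 t gives ~16% at T = t (t/T = 1);
# quadrupling the inverse temperature (t/T = 4) then gives ~0.07%,
# comfortably below the observed 1%.
w_hot = boltzmann_weight(1.83, 1.0)    # ~0.16
w_cold = boltzmann_weight(1.83, 0.25)  # ~6.6e-4
```

The point of the exercise is only that a single activated scale is compatible with both quoted numbers; the actual suppression mechanism is the vanishing phase space of the singlon operator described above.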
Diagrammatic Monte Carlo generally requires that the series converges, and this is often not the case at sufficiently low temperatures. Real condensed matter systems are also generally realized at finite temperature. However, thermal fluctuations are exponentially suppressed with the inverse temperature. If the relevant energy scale is large compared to the temperature, then we can for all practical purposes regard the system as being in the asymptotic limit where the spectral density vanishes in part of the Brillouin zone. Once the spectrum has a nontrivial connectivity, there are no topological obstacles to an intersection with the Fermi level that is an open line in 2D, as shown in Fig. 1, or an open surface in 3D.

Figure 3. Spectral weight of the singlon-like sub-band, obtained from equation (16). At half-filling and a temperature of t/T = 1 (a), the weight is suppressed near k = (π, π) and reaches a minimum of ≈ 16%. Reducing the temperature (b), this minimum falls below 1%. Increasing the particle density to ⟨n̂⟩ ≈ 1.88 (c), the spectrum retains a finite weight near k = (0, 0) but almost vanishes elsewhere. The strong suppression of the spectral weight at certain momenta can be understood from a vanishing phase space of singlon-like excitations.
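Whether the surviving spectral support is simply connected, multiply connected (a void around k = (π, π)), or disconnected can be checked mechanically on a k-grid. A minimal sketch, where the weight profile, the threshold, and the open (non-periodic) zone boundary are all assumptions for illustration rather than the paper's procedure: label the 4-connected components of the set {k : ρ(k) > threshold} and of its complement with a flood fill.

```python
import numpy as np
from collections import deque

def n_components(mask):
    """Count 4-connected components of True cells (open boundary)."""
    mask = mask.copy()
    count = 0
    for start in zip(*np.nonzero(mask)):
        if not mask[start]:
            continue  # already absorbed into an earlier component
        count += 1
        queue = deque([start])
        mask[start] = False
        while queue:
            i, j = queue.popleft()
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if (0 <= ni < mask.shape[0] and 0 <= nj < mask.shape[1]
                        and mask[ni, nj]):
                    mask[ni, nj] = False
                    queue.append((ni, nj))
    return count

# Toy weight profile with a void around k = (pi, pi): one connected
# support whose complement is a single interior hole, i.e. a multiply
# connected spectrum.
k = np.linspace(0, 2 * np.pi, 41)
KX, KY = np.meshgrid(k, k)
rho = (np.cos(KX) + np.cos(KY) + 2.0) / 4.0  # dips to 0 at (pi, pi)
support = rho > 0.1
print(n_components(support), n_components(~support))  # -> 1 1
```

A hole count of one or more in the complement is the signature of the nontrivial connectivity discussed above; a support count above one would instead indicate a disconnected spectrum, as in the doped case.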
Conclusions—In non-Fermi liquids, band fractionalization effectively splits the electron energy into a distribution of quasiparticle energies. The spectral weight of these sub-bands is determined by the phase space of the corresponding operators, implying that it is generally momentum dependent. In strongly correlated systems, this phase space may, to exponential accuracy, vanish, creating voids in parts of the Brillouin zone which change the topology of the excitation spectrum. This effect is a prerequisite for Fermi arcs, and spectral topology should therefore be regarded as an essential property of strongly correlated phases.

This work was supported by the Swedish Research Council (VR) through grant 2018-03882. Computations were performed on resources provided by the Swedish National Infrastructure for Computing (SNIC) at the National Supercomputer Centre in Linköping, Sweden.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'} +page_content=' Liang, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'} +page_content=' Doiron-Leyraud, Louis Taillefer, and Cyril Proust, “Change of carrier density at the pseudogap critical point of a cuprate superconductor,” Nature 531, 210–214 (2016).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'} +page_content=' [17] Masatoshi Imada and Takafumi J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'} +page_content=' Suzuki, “Excitons and dark fermions as origins of mott gap, pseudogap and su- perconductivity in cuprate superconductors - general con- cept and basic formalism based on gap physics,” Jour- nal of the Physical Society of Japan 88, 024701 (2019), 20 3:06 https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'} +page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'} +page_content='7566/JPSJ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'} +page_content='88.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'} +page_content='024701.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'} +page_content=' [18] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'} +page_content=' A.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'} +page_content=' Chao, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'} +page_content=' Spałek, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'} +page_content=' Ole´s, “Canonical perturbation expansion of the hubbard model,” Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'} +page_content=' B 18, 3453–3464 (1978).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'} +page_content=' [19] Johan Carlström, “Strong-coupling diagrammatic monte carlo technique for correlated fermions and frustrated spins,” Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'} +page_content=' B 103, 195147 (2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'} +page_content=' [20] Kris Van Houcke, Evgeny Kozik, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'} +page_content=' Prokof’ev, and B.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'} +page_content=' Svis- tunov, “Diagrammatic monte carlo,” Physics Procedia 6, 95– 105 (2010).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'} +page_content=' [21] Johan Carlström, “Spin-charge transformation of lattice fermion models: duality approach for diagrammatic simulation of strongly correlated systems,” Journal of Physics: Condensed Matter 29, 385602 (2017).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'} +page_content=' [22] Johan Carlström, “Diagrammatic monte carlo procedure for the spin-charge transformed hubbard model,” Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'} +page_content=' B 97, 075119 (2018).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'} +page_content=' [23] Johan Carlström, “Spectral shift technique for strongly cor- related lattice fermions,” (2021), arXiv:2111.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'} +page_content='05877 [cond- mat.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'} +page_content='str-el].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/39E2T4oBgHgl3EQf6Aj8/content/2301.04197v1.pdf'} +page_content=' [24] Johan Carlström, “In situ controllable magnetic phases in doped twisted bilayer transition metal dichalcogenides,” Phys.' 
APPENDIX I

To assess how truncation of the series affects the results, we compare the density of states and spectral function for the cases reported in the article at different expansion orders. In Fig. 4 we show the case of half-filling and temperatures t/T = 1 and t/T = 4 for expansion orders O = 5, 6, 7. At the higher temperature, we observe that the dos changes very little, though a small correction at ϵ = 0 is visible. The spectrum is qualitatively very similar, and we conclude that the impact of truncation is very small. At the lower temperature, we see some changes in the shape of the dos when increasing the order from 5 to 6, though the system consistently remains gapped. The spectra show a weight that does not completely vanish at O = 5, but is strongly suppressed at higher orders. At O = 7, we begin to see noise in the spectrum as a result of the computational cost associated with expansions to high order. For this data set, we can conclude that truncation of the series has a limited quantitative impact, but it does not affect any of the conclusions derived in the paper. In Fig. 5, we see the dos and spectra for the doped case at expansion orders O = 5, 6, 7. In this scenario, truncation of the series has no impact visible to the naked eye, and we can conclude that the result is virtually exact.
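The order-by-order comparison described above amounts to a simple convergence test: recompute an observable at successive expansion orders and check that the change between the two highest orders is small. A minimal sketch; the dos curves and the 5% tolerance below are invented for illustration only.

```python
import numpy as np

def converged(observable_by_order, rel_tol=0.05):
    """Return True if the observable (sampled on a fixed grid) changed
    by less than rel_tol between the two highest expansion orders."""
    orders = sorted(observable_by_order)
    last = observable_by_order[orders[-1]]
    prev = observable_by_order[orders[-2]]
    rel_change = np.max(np.abs(last - prev)) / np.max(np.abs(last))
    return bool(rel_change < rel_tol)

# Invented stand-ins for the dos at expansion orders O = 5, 6, 7:
eps = np.linspace(-15.0, 5.0, 200)
dos = {5: np.exp(-eps**2 / 4.0),
       6: 1.03 * np.exp(-eps**2 / 4.0),
       7: 1.01 * np.exp(-eps**2 / 4.0)}
print(converged(dos))  # True: order 6 -> 7 change is ~2%
```

In practice one would also inspect the curves by eye, as in Figs. 4 and 5, since a single scalar tolerance can hide localized features such as the sub-band near the Fermi level.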
In conclusion, we find that the diagrammatic Monte Carlo simulations reported do accurately capture the physics of the attractive Hubbard model. The results are qualitatively not affected by truncation of the series, yet a small quantitative uncertainty remains for one of the data sets.

Figure 4. Convergence of the series at half-filling. The left column corresponds to an expansion order O = 5, the center corresponds to O = 6 and the right corresponds to O = 7. (a-c) give the dos at a temperature of t/T = 1, while (d-f) give the corresponding spectra. (g-i) give the dos at a temperature of t/T = 4, while (j-l) give the corresponding spectra. At the higher temperature, the corrections when changing the expansion order are very small, though a slight shift in dos at the Fermi level can be observed when comparing O = 5 (a) and O = 6 (b). At the lower temperature, we do see a quantitative difference in dos between orders 5 (g) and 6 (h), while the correction at order 7 (i) is smaller. The small peaks in the dos near the Fermi level in (g) are reflected in a suppressed fractionalized sub-band visible in (j). At orders 6 and 7, this fractionalized sub-band vanishes.

Figure 5. Convergence of the series in the strongly doped case. The density is ⟨ˆn⟩ ≈ 1.88 and the temperature is t/T = 4. The left column (a,d) corresponds to an expansion order O = 5, the center column to O = 6 and the right column to O = 7. The dos (a-c) does not change visibly with expansion order, and neither does the spectrum (d-f). We can therefore conclude that the observables have converged.

diff --git a/4NFRT4oBgHgl3EQfozeT/content/tmp_files/2301.13611v1.pdf.txt b/4NFRT4oBgHgl3EQfozeT/content/tmp_files/2301.13611v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..f060a6cd54ce22609dd972e404a913f3814c0eaf
--- /dev/null
+++ b/4NFRT4oBgHgl3EQfozeT/content/tmp_files/2301.13611v1.pdf.txt
@@ -0,0 +1,1176 @@

Bringing Stellar Evolution & Feedback Together
Summary of proposals from the Lorentz Center Workshop, 2022

Co-authors: (names and institutions)
Sam Geen1,2, Poojan Agrawal3, Paul A. Crowther4, B.W. Keller5,18, Alex de Koter1,6, Zsolt Keszthelyi1,7, Freeke van de Voort8, Ahmad A. Ali9, Frank Backs1, Lars Bonne24, Vittoria Brugaletta10, Annelotte Derkink1, Sylvia Ekström11, Yvonne A. Fichtner12, Luca Grassitelli12, Ylva Götberg23, Erin R. Higgins13, Eva Laplace14, Kong You Liow9, Marta Lorenzo15,27, Anna F. McLeod16,18, Georges Meynet11, Megan Newsome25,26, G. André Oliva18, Varsha Ramachandran19, Martin P. Rey20, Steven Rieder11, Emilio Romano-Díaz12, Gautham Sabhahit13, Andreas A.C. Sander19, Rafia Sarwar21, Hanno Stinshoff10,21, Mitchel Stoop1, Dorottya Szécsi21, Maxime Trebitsch22, Jorick S. Vink13, Ethan Winch13
(Author contact details and full list of institutions at end of paper)

Keywords: Stellar physics: Stellar atmospheres, Stellar evolution, Stellar processes; Stellar populations; Interstellar medium: nebulae, Protostars, Supernova remnants, Stellar-interstellar interactions; Interdisciplinary astronomy

Abstract: Stars strongly impact their environment, and shape structures on all scales throughout the universe, in a process known as “feedback”. Due to the complexity of both stellar evolution and the physics of larger astrophysical structures, there remain many unanswered questions about how feedback operates, and what we can learn about stars by studying their imprint on the wider universe. In this white paper, we summarize discussions from the Lorentz Center meeting ‘Bringing Stellar Evolution and Feedback Together’ in April 2022, and identify key areas where further dialogue can bring about radical changes in how we view the relationship between stars and the universe they live in.

1 Introduction on Scales: From the Birth of Stars to the Wider Universe

Astrophysics spans many orders of magnitude in both physical distances and time. Researchers from different fields have varying definitions of what counts as “small” and “large” scales. Typically, “small” refers to processes below those resolved in a given study, whether observational or theoretical, while “large” refers to scales outside the boundaries of the problem domain. In Figure 1 we show a diagram depicting the range of relevant spatial and temporal scales, from stars to galaxies and beyond, in order to define and motivate discussions around the boundaries of domains of study considered in this work.

The galactic scale, i.e. the largest physical scale considered here below the “cosmological” scale, is about 1 – 100s of kpc.
A spiral galaxy like our Milky Way contains many (giant) molecular clouds of length scale 10 – 100 pc, which from their dense cores can form star clusters at scales of 0.1 – 10 pc. Within those dense cores, the gravitational collapse that results in the formation of individual stars takes place. Protostars are typically surrounded by accretion disks, of sizes that range between 1 – 1000 au, and outflows. On the smallest physical scales considered here, we can regard the (intra-)stellar structure. Within the star itself, we have the nuclear burning in the core, convection zones, envelope and stellar surface at 0.1 – 10 R⊙.

In numerical simulations, the connection between small and large scales is crucial because it is computationally expensive to set up and perform simulations that encompass the whole range of scales relevant to astrophysics within a reasonable amount of computing time. Despite this, an understanding of how the scales couple is important: various physical processes connect the smallest and largest scales with flows moving to both smaller and larger scales, often driven by the action of stars, in a cycle of material termed “feedback”.

During the star formation process at stellar scales, the outflows launched by the disk and jet can influence the surrounding material. Ionizing radiation, stellar winds and eventual supernovae produced by the massive stars shape their natal molecular clouds and the interstellar medium, impacting subsequent generations of star formation. In this work we focus primarily on processes from stars after their formation phase ends, although protostellar outflows can be important both in themselves (Federrath et al. 2014) and in concert with other feedback processes (Kuiper & Hosokawa 2018) as stars form in molecular clouds (Grudić et al. 2022; Verliat et al. 2022). Feedback processes often act in concert, e.g. in the case of supernova feedback efficiency increasing if dense star-forming environments are dispersed by pre-supernova feedback (Geen et al. 2015; Lucas et al. 2020).

arXiv:2301.13611v1 [astro-ph.SR] 31 Jan 2023

Figure 1: The different length scales of star formation in log-parsec. (Axes: log(size [pc]) spanning stellar structure, circumstellar, cloud core, cloud, galactic and cosmological scales, with panels for star, disk/outflows, dense core, cloud and galaxy; log(time [yr]) spanning disks at ~100 kyr, stellar feedback at 1–100 Myr, and stellar evolution at > 1 Gyr.)

Several techniques have been developed to bridge the different length scales. From larger to smaller scales, zoomed-in simulations are performed, such that the regions from larger-scale simulations are taken as initial conditions and the resolution of the regions is enhanced (e.g. Carlberg & Keating 2022; Dobbs et al. 2022; Rey & Starkenburg 2022). This allows the regions of interest to be followed and studied more closely.

For example, zoom-in simulations of dense cloud cores can be used to follow their gravitational collapse into individual stars. On the other hand, prescriptions are used to import the physics of smaller scales to the larger scales (e.g. Gutcke et al. 2021). This is generally done using empirical relations, analytical solutions, or parametric tables. Some recent simulations employ multiple techniques to bridge the different scales (e.g. Rieder et al. 2022).
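A minimal sketch of the “parametric table” style of prescription mentioned above, in which a large-scale simulation looks up unresolved small-scale physics by interpolating a precomputed grid. The table values here are invented for illustration only; real tables would come from stellar-evolution calculations.

```python
import numpy as np

# Hypothetical precomputed grid: stellar mass [Msun] against the log of
# the wind mechanical luminosity [erg/s], of the kind a stellar-evolution
# code might tabulate for use as a sub-grid feedback prescription.
mass_grid = np.array([10.0, 20.0, 40.0, 80.0])
log_lwind = np.array([33.0, 34.5, 36.0, 37.5])  # invented values

def wind_luminosity(mstar_msun):
    """Sub-grid prescription: interpolate the table in log-log space,
    as a large-scale simulation would do per star particle."""
    logm = np.log10(mstar_msun)
    return 10.0 ** np.interp(logm, np.log10(mass_grid), log_lwind)

# A 30 Msun star falls between the 20 and 40 Msun table entries:
print(f"L_wind(30 Msun) ~ {wind_luminosity(30.0):.2e} erg/s")
```

Interpolating in log space keeps the prescription well behaved across the orders of magnitude such tables typically span; more elaborate schemes interpolate in several dimensions (mass, metallicity, age).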
Critical tasks for the useful presentation and communication of the results of numerical simulations are: the determination of reliable intervals where a given quantity is valid or expected (e.g., the densities or angular momentum content of dense cores expected from simulations at the cloud scales), and the expression, whenever possible, of results that impact neighbouring scales using analytical formulae so that they can be used as prescriptions (e.g., evolutionary tracks for protostars that are used in larger-scale simulations).

With the advance of observational facilities (e.g. the Extremely Large Telescope, the James Webb Space Telescope, Athena) with higher angular resolution, we come closer to resolving astrophysical structures on both large and small scales for regions in the Local Group and beyond. Many of these facilities will be able to resolve individual stars (of lower masses) in regions where, previously, we were only able to probe the large-scale structures. Observations and simulations of large and small scales in the (near) future will provide us with essential knowledge to connect these scales.

2 Introduction to Feedback: The Physics Connecting the Scales

Once the protostellar phase has ended, stars impact their surroundings in a number of ways. We highlight some of the key processes by which stellar evolution processes drive feedback into the interstellar medium and beyond.

2.1 Stellar Winds

Stellar winds refer to the ejection of matter from a star’s surface, driven by radiation pressure on the gas in the star’s atmosphere. Stellar winds impact their surroundings through a mixture of the mass loss rate ˙M and terminal velocity vw, i.e. the velocity that the stellar wind reaches once it is fully accelerated by radiation pressure.
This shocks the gas around the star to +millions of degrees Kelvin, creating hot bubbles that drive strong flows into the interstellar medium (Weaver et al. +1977). +The rate of deposition of kinetic energy of stellar winds, 1/2 ˙Mv2 +w, is an important quantity in stellar feedback, where +the energy in the wind bubble accumulates over time (Weaver et al. 1977). In the mode where stellar winds cool +efficiently through thermal conduction or, more plausibly, turbulent mixing (e.g. Lancaster et al. 2021), the +momentum deposition rate ˙Mvw becomes more important. This mode is considerably weaker at driving large-scale +flows since stored energy is lost. We examine in further detail how stellar wind bubbles impact nearby star-forming +regions in Section 5. +The properties of stars play a crucial role in setting ˙M and vw (Puls et al. 2015). Factors such as metallicity (Vink +et al. 2001), rotation (Cranmer & Owocki 1995), clumping (Puls et al. 2008) and magnetic fields (ud-Doula & +Owocki 2002) are thought to play an important role in setting the precise wind properties. We return to these +processes in detail later in the paper. +One of the most important stellar properties for determining ˙M and vw is stellar mass. At solar metallicity, stars with +masses larger than around 25M⊙ do not make it to the cool red supergiant phase, but instead lose a lot of mass in +line-driven winds (Castor et al. 1975; Kudritzki & Puls 2000; Vink 2022). At lower metallcity, winds become +significantly weaker due to the lack of metal lines to couple radiation to the gas and drive material from the stellar +surface. +A significant impediment to a better understanding of stellar winds is the uncertainty in mass loss rates. For stars +below 25M⊙, mass-loss rates are uncertain by 1-2 orders of magnitude in the so-called "weak-wind regime" (Martins +et al. 2005). 
For those massive stars where mass loss starts to dominate the evolution (at about 40M⊙), the uncertainties are about a factor of 2-3 (e.g. Björklund et al. 2021). Such uncertainties were investigated in evolutionary models by Keszthelyi et al. (2017b), finding that the discrepancies may be resolved by studying the rotational velocities of B-type supergiants (Vink et al. 2010), given that mass loss leads to angular momentum removal and spin-down of the stellar surface (Langer 1998; Maeder & Meynet 2000).

Stars of order 80-100M⊙ are in the transition region of Vink & Gräfener (2012), where mass-loss rates are known very accurately, but above this transition point, mass-loss rates included in most stellar evolution and population synthesis models are thought to be underestimated.

2.2 Ionizing Radiation

Stellar ionizing radiation can propagate and deposit energy on a large variety of scales, starting in the stars’ own atmospheres and extending to the intergalactic medium across the Universe, where these photons “reionized” the universe after cosmic recombination. Pinpointing how many hard ionizing photons are released, and when, is thus a key input to model how stars affect their surroundings on all scales. We highlight here recent developments, open questions, and uncertainties in predicting the budget of ionizing photons from stellar evolution, and their coupling to galactic and intergalactic scales.

Ionizing fluxes of stars strongly depend on the star’s temperature. Therefore, the fact that main-sequence stars are hotter at lower metallicities has a direct impact on the resulting ionizing photon budget. However, this effect could potentially be drastically or even totally altered by stellar evolution effects relating to rotation and binary interaction. Binary interaction can lead to mass exchange between the two stars, resulting in “envelope-stripped”, and thus even hotter, helium stars.
Rapid rotation is also thought to efficiently mix massive stars that cannot spin down at low metallicity, leading to the creation of helium-enriched, and eventually pure-helium, stars, referred to as chemically homogeneous stars (Yoon & Langer 2005a; Szécsi et al. 2015). When determining the feedback for a resolved population of stars, it is therefore crucial to not miss the “earliest” (i.e. hottest) stars of the population, as they dominate the ionizing feedback (see, e.g., Ramachandran et al. 2018b, 2019, for recent examples). In addition, accreting compact objects are known to emit X-rays and ionizing radiation, which have been considered to aid photoionization of interstellar or even intergalactic gas (Chen et al. 2015; Schaerer et al. 2019; Senchyna et al. 2020). Moreover, cluster winds and superbubbles have recently been suggested as a source of additional ionizing flux (Oskinova & Schaerer 2022). While most of their emitted photons are too energetic to efficiently ionize gas, a fraction of them can contribute to the total budget of hydrogen and helium-ionizing photons in the universe.

While the effective temperatures of stars can give some clues to their spectrum and ionizing power, black bodies only provide limited representations of the ionizing fluxes of hot stars. The absorption of radiation by recombination fronts inside the stellar wind can significantly reshape the spectral energy distribution, thereby considerably affecting the resulting quantities of ionizing photons emitted by the star. This is particularly striking for the He II ionizing flux, which is reduced by many orders of magnitude – effectively vanishing – if the stars manage to launch an optically thick (Wolf-Rayet type) wind (e.g. Sander & Vink 2020). This effect is not an issue for hydrogen-ionizing photons, even though part of their flux budget is still consumed to drive stellar winds.
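To illustrate the strong temperature dependence of the ionizing output, and why a blackbody is only a crude stand-in for a real hot-star atmosphere, one can integrate the Planck photon flux above the 13.6 eV hydrogen-ionizing threshold. A sketch with cgs constants; the stellar parameters (a 40 kK, 10 R⊙ O star versus the Sun) are illustrative, and real atmospheres with winds deviate substantially, as noted above.

```python
import numpy as np

H, C, KB = 6.626e-27, 2.998e10, 1.381e-16   # Planck, c, Boltzmann (cgs)
NU_H = 13.6 * 1.602e-12 / H                 # H-ionizing threshold [Hz]
RSUN = 6.957e10                             # solar radius [cm]

def q_ionizing(teff, radius_cm):
    """Hydrogen-ionizing photon rate [photons/s] for a blackbody sphere,
    integrating the Planck photon flux above 13.6 eV (trapezoidal rule)."""
    nu_max = min(50.0 * NU_H, 700.0 * KB * teff / H)  # cap to avoid overflow
    nu = np.linspace(NU_H, nu_max, 20000)
    flux = (2.0 * nu**2 / C**2) / np.expm1(H * nu / (KB * teff))
    integral = np.sum(0.5 * (flux[1:] + flux[:-1]) * np.diff(nu))
    return 4.0 * np.pi**2 * radius_cm**2 * integral

# The ionizing output differs by many orders of magnitude purely
# from the temperature dependence of the exponential tail:
print(f"O star: {q_ionizing(4.0e4, 10 * RSUN):.2e} /s,"
      f" Sun: {q_ionizing(5.8e3, RSUN):.2e} /s")
```

The O-star value lands near the ~10^49 photons/s characteristic of early-type stars, while the solar value is negligible; the exercise makes concrete why missing the hottest few stars of a population, or mis-modelling their wind-reprocessed spectra, dominates the error budget.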
Direct constraints on the ionizing flux of individual stars in the local Universe would be invaluable to constrain uncertainties in the sources of photoionization of interstellar gas, but are unfortunately limited by the unavailability of extreme-UV (EUV) observational tools. Hence, other indirect methods are necessary, for example (1) inferring the ionizing emission from nebular spectra using scaling relations for recombination line luminosities, and (2) using the ionizing emission from computed stellar atmosphere models that sufficiently reproduce the spectrum at other wavelengths (UV, optical, IR). Since the stellar He II-ionizing flux is considerably affected by winds from the star, UV observations remain an important tool to correctly determine the sources of these photons.

Radiative feedback plays a key role in regulating the lifecycle of star-forming regions, and in providing an early mechanism to modify the phase and thermodynamics of the gas in which massive stars then explode as supernovae to drive galactic outflows. The coupling between ionizing radiation, other sources of feedback, and the surrounding gas however remains uncertain, due to the inherent challenges in modelling and observing these non-linear physical processes occurring on multiple spatial and time scales. Quantifying the balance between feedback budgets within H II regions has now become possible (e.g., Lopez et al. 2014; McLeod et al. 2019, 2020, 2021; Olivier et al. 2021a; Barnes et al. 2020). However, the uncertainties pointed out above in stellar evolution and in synthesizing stellar population outputs propagate into these measurements, making their interpretation challenging. Furthermore, the interaction between radiative, wind, and supernova feedback is a strongly non-linear process, which can lead to positive reinforcement and strong galactic outflow driving (e.g. Lucas et al.
2020) or, by contrast, diminish the clustering of SN explosions and reduce their efficiency at expelling gas from a galaxy (e.g. Agertz et al. 2020; Smith et al. 2021; Fichtner et al. 2022). Pinpointing the sign and strength of these couplings, both observationally and theoretically, will be key to interpreting galaxies in observations, understanding how they regulate their star formation, how they enrich their surrounding environment in metals, and how radiation escapes from them to larger, cosmological scales.

H I reionization of the universe is mostly powered by stellar sources in low-mass star-forming galaxies (e.g. Robertson et al. 2015; Dayal et al. 2020; Yung et al. 2020; Trebitsch et al. 2021), so having a good handle on their ionizing production is crucial, while keeping in mind that other sources of uncertainty (e.g. how much of this ionizing radiation escapes the ISM) still need to be addressed. Even prior to H I reionization, X-rays from the very early stellar populations in star-forming galaxies contribute to heating the IGM, but the rate of production of these X-rays is still uncertain. Most emission comes from X-ray binaries (e.g. Eide et al. 2018), whose populations are poorly constrained at the highest redshifts. 21cm all-sky measurements are starting to put limits on the beginning of this heating era (Bowman et al. 2018), although other experiments are needed to confirm this result (see e.g. Singh et al. 2022). Next-generation facilities like the SKA will soon constrain the early heating of the Universe, making the need for detailed models timely. In this context, a detailed understanding of the binary evolution of stars (and in particular massive stars) is required to properly assess the early heating of the IGM. While He II reionization, which happens at z ∼ 3 (e.g. Worseck et al. 2016), is thought to be mostly dominated by AGN sources (e.g. Puchwein et al.
2019; Faucher-Giguère 2020), the contribution from stellar populations remains mostly unconstrained. Notwithstanding the uncertainties on the escape fraction of He II-ionizing photons, the uncertainties in the stellar population models pointed out above will translate into the contribution of these stellar populations to the He II background. In particular, the presence of very massive stars or hydrogen-stripped stars (e.g. Götberg et al. 2020) could strongly enhance the contribution of the overall stellar populations to He II reionization.

2.3 Supernovae

Feedback from supernovae (SNe) has long been considered a key ingredient in studies of interstellar gas (e.g. McKee & Ostriker 1977) and galaxy evolution (e.g. Larson 1974). SNe, especially core-collapse Type II SNe, release significant (∼ 10^51 erg) energy in the initial blastwave: sufficient to destroy molecular clouds (White & Long 1991), drive turbulence in the ISM (McCray & Snow 1979), and power galactic winds and outflows (Mathews & Baker 1971). These explosions are also major sources of metals, producing (for example) the vast bulk of interstellar oxygen (Burbidge et al. 1957). Beyond core-collapse supernovae, thermonuclear (Type Ia) supernovae may also be a source of feedback energy, and also contribute to the cosmic metal budget (Kawata 2001). From the cloud- and galaxy-scale feedback perspective, the key questions connecting stellar evolution to supernova feedback are as follows. Which stars will end their lives as supernovae? When will these stars explode? What will be the energy, mass, and metal returns of these supernova events (and what form will the energy take at larger scales: kinetic or thermal)? Traditionally, very simple assumptions have been made about these questions: all stars above a certain mass (5 − 10 M⊙) detonate, with each ccSN event depositing ∼ 10^51 erg of energy and ∼ 7 − 100 M⊙ of mass into the surrounding ISM (e.g. Katz 1992).
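The simple traditional prescription just described can be turned into a back-of-the-envelope budget. The sketch below assumes a Salpeter power-law IMF (slope 2.35) between 0.1 and 100 M⊙ and lets every star above 8 M⊙ release 10^51 erg; these numbers are illustrative assumptions of the sketch, not values taken from the works cited above.

```python
def salpeter_sn_budget(m_lo=0.1, m_hi=100.0, m_sn=8.0, alpha=2.35,
                       e_sn=1.0e51):
    """Core-collapse SNe and SN energy per solar mass of stars formed,
    under the classic prescription: Salpeter IMF dN/dM ~ M^-alpha and
    every star above m_sn exploding with e_sn erg (analytic integrals)."""
    def integral(a, b, p):                      # int_a^b M^-p dM, for p > 1
        return (a ** (1.0 - p) - b ** (1.0 - p)) / (p - 1.0)
    n_sn = integral(m_sn, m_hi, alpha)          # number of SN progenitors
    m_tot = integral(m_lo, m_hi, alpha - 1.0)   # total stellar mass formed
    per_msun = n_sn / m_tot
    return per_msun, per_msun * e_sn            # (SNe/Msun, erg/Msun)
```

With these assumptions one obtains about 0.007 ccSNe (∼7 × 10^48 erg) per solar mass of stars formed, i.e. roughly one SN per ∼10^2 M⊙; the explodability and direct-collapse effects discussed in this section all act to shrink this budget.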
It has long been assumed that, at least on galactic scales, uncertainties in how this energy propagates through the ISM dominate over any uncertainties in stellar evolution models (Naab & Ostriker 2017; Rosdahl et al. 2017), and that questions relating to the details of ccSN detonation are swamped by uncertainties in the cooling and mixing rates of SN remnants. However, recent studies (Keller & Kruijssen 2022) and higher-resolution simulations (Gutcke et al. 2021) have begun to reveal that the details of stellar evolution can detectably manifest themselves on galactic scales.

The temporal evolution of the stellar structure, subject to the internal and surface physical processes described in Section 3, eventually leads to a configuration in which internal pressure gradients can no longer withstand the force of gravity. Understanding these processes will allow us to ultimately answer the three key questions identified above. Hydrodynamical models of SN detonation predict that underluminous (e.g. Lovegrove & Woosley 2013) and hyperluminous (e.g. Woosley & Heger 2006) supernovae may occur for certain combinations of initial stellar mass, metallicity, and rotation. Adding to this are strong theoretical predictions of “islands of explodability”, where SN progenitors will either produce very weak SNe or in some cases directly collapse to form black holes (BHs) with no significant energy return whatsoever (Smartt 2009; Horiuchi et al. 2014; Sukhbold & Adams 2020). Recent theoretical studies of binary star interactions have found that the significant changes induced to both the surface and core structure will also impact which stars detonate, and the energy of the subsequent SN (Müller et al. 2019; Laplace et al. 2021; Vartanyan et al. 2021).
Despite these theoretical uncertainties, it is highly likely that theoretical models of galaxy evolution have in general over-estimated the SN energy budget, though this may recently be changing (Emerick et al. 2019; Gutcke et al. 2021). Better observational constraints are needed to begin pinning down the true budget of energy for SN feedback.

Observationally, determining the SN budget for stars across the IMF is extremely challenging, owing to the difficult problem of connecting SN progenitors to individual SN events. Red Supergiants (RSGs) constitute the most common SN-progenitor stage, during which the star may experience a type IIP/L explosion (Smartt 2009). However, the RSG phase may last ∼ 2.5 × 10^6 to 3 × 10^5 yrs for stars ranging in initial mass between 9 and 20 M⊙ (Meynet et al. 2015), with more massive stars, at high metallicity at least, potentially suffering such intense mass loss that the entire envelope is lost and the stars first become yellow or blue supergiants before experiencing core collapse (e.g., Gräfener & Vink 2016; Kee et al. 2021). At lower metallicities, higher-mass supergiants may exist and explode as e.g. pair-instability supernovae, ejecting a peculiar chemical yield (Martínez-González et al. 2022). The RSG Betelgeuse experienced an unprecedented dimming of its visual brightness from December 2019 until April 2020, speculated to forewarn an imminent core collapse. Though it appears that this event likely reflected a combination of surface activity and dust formation in a previously ejected gas cloud positioned in the line of sight (Montargès et al. 2021), a dedicated monitoring campaign of a population of RSG stars for unexpected variability is clearly opportune and may help to identify systems for which an explosion may happen within about a human lifetime.
Alternatively, the collapse of such massive stars may lead to direct black hole formation with little or no ejecta being expelled and, consequently, a very faint or undetectable supernova. The most promising candidate for a disappearing star directly collapsing into a black hole showed evidence for an estimated ∼ 0.5 M⊙ of ejecta (Gerke et al. 2015; Sukhbold & Adams 2020; Basinger et al. 2021). Wolf-Rayet stars, evolved stars that have lost or have been stripped of their hydrogen-rich envelopes, are alternative candidates for an impending Ib/c (or gamma-ray-burst) supernova explosion (e.g., Groh et al. 2013b). Within this group, Wolf-Rayet Oxygen (WO) stars are thought to be particularly evolved and in a post core-helium burning phase of evolution, where timescales until core collapse are down to a few times 10^3 or 10^4 yrs (Meynet et al. 2015). So far, only nine WO stars are known, the one thought to be closest to ending its life being WR102, with ∼ 1500 yrs left. Other post-main sequence objects have been suggested as potential SN progenitors, including Luminous Blue Variable (LBV) stars (Kotak & Vink 2006; Groh et al. 2013a) and Wolf-Rayet Nitrogen (WN) stars (Groh et al. 2013b). The former possibility is supported by evidence that the progenitor of SN 2005gl was possibly an LBV star (Gal-Yam & Leonard 2009).

2.4 Chemical Enrichment

Nuclear-processed material may be ejected from the star/system, and thus influence the chemical abundance of the surroundings, via at least three mechanisms: (i) stellar winds; (ii) supernova ejecta (discussed in detail in Sect. 2.3); (iii) (non-conservative) binary interaction (discussed further in Sect. 4). Consequently, whether nuclear-processed material ends up in the interstellar medium after being created inside a star is a complex question.
For example, elements that stay inside the star for a longer time (due to not being immediately ejected in the wind) may be able to undergo further nuclear processing. In the same way, elements may be “saved” by the wind from being processed further. This makes chemical evolution a highly complex area of research, with a number of impediments to our understanding.

A deeper understanding of how, and on which timescale, elements are released into the interstellar medium is of great importance for modern stellar feedback simulations. Elements being ejected during the entire lifetime of a massive star could lead to a different chemical evolution in the surrounding gas compared to the case in which they are “instantaneously” ejected in the supernova explosion. If we had a clearer view of these processes, we could also model more accurately how this enriched material spreads, through turbulence and other mixing processes, to larger scales, meaning the interstellar medium and the rest of the galaxy.

Another important aspect in this regard is the comparison of the timescale on which the mixing of the newly-enriched material occurs in the gas with that of star formation. Will the mixing be fast enough to make the metallicity of the medium almost uniform before a second generation of massive stars is born? As stars inherit their initial metallicity from the gas they formed in, understanding how the timescales for chemical evolution and mixing relate to the time needed to form a new generation of stars would help to better understand their future evolution. Moreover, all these processes could be very different in low-metallicity environments, for which further analysis is recommended (see Section 5).

The efficiency of mass loss through stellar winds is highly dependent on the mass of the star (Sect. 2.1). The higher the mass, the higher the core temperature, leading to the activation of specific nuclear reactions.
Massive and intermediate mass stars are known to have strong enough winds to eject nuclear-processed material. In particular, Asymptotic Giant Branch stars (AGBs) are important contributors to carbon and nitrogen via convective dredge-up of nuclear products from the stellar core (Romano et al. 2010).

For the stellar wind (or interactions with a companion star) to be able to remove nuclear burning products, these products – originally created in deep, hot burning regions – need to already be found at the stellar surface. This can happen in two ways. Either the mixing between the deep layers and the surface needs to be strong (see Section 3.1); or the layers at the top need to be removed first so that the deeper layers are uncovered (see Section 2.1). In particular, mixing induced by rotation (or rotational mixing) has been shown to lead to extremely well mixed stars which evolve (quasi-)chemically homogeneously (Maeder 1987). But in less extreme cases, mixing (not only by rotation) can help bring deeper layers upwards, to be lost in the wind eventually. The decay of some isotopes serves as a counter to this process. This can be seen in the case of 26Al, which decays rather quickly (with a half-life of about 0.7 Myr) into 26Mg (cf. Finlay et al. 2012).

Figure 2: Abundances relative to the Solar value plotted over time (in Gyr) for all the elements in the periodic table. This enables the reader to follow the different ways in which the evolution of the elements takes place via various processes. These processes include Big Bang Nucleosynthesis, AGB stars, Core-collapse Supernovae, Type Ia Supernovae and Neutron star mergers. Observations are depicted as dotted lines. From Kobayashi et al. (2020), reproduced with permission.

Figure 2 shows the elements in the periodic table together with their cosmic origin (Kobayashi et al. 2020).
While the figure shows the state of the art of our current knowledge, other possible avenues for the generation of elements are thought to exist. For example, gold has been proposed to form in kilonovae (Kasen et al. 2017a).

Subsequent generations of stars have enriched interstellar gas with nuclear-processed elements. However, chemical enhancement is not only a time-dependent process but can be spatially traced as well. For example, the Milky Way displays a metallicity gradient (Peimbert et al. 1978; Afflerbach et al. 1997) which decreases outwards, but other galaxies show other trends.

Another source of uncertainty is the discrepancy between the yields found at the scale of stellar evolution modelling and those calculated at larger scales. To connect these two quantities, investigations are required with a varying degree of resolution, as well as an understanding of the uncertainties involved in both calculations. Uncertainties include mixing and convection for single stars, tidal effects for binaries, and in general the handling of the Eddington limit. As one can see, for example, in Agrawal et al. (2022), different approaches with multiple codes can lead to different predictions. Tracers such as CNO abundances may help resolve these discrepancies.

3 Internal Stellar Processes

Stars are places where the four fundamental forces in physics interact (viz., gravitational, electromagnetic, strong, and weak nuclear forces). Most global properties of stars can be inferred from the stellar structure equations under the assumption of hydrostatic equilibrium.
However, there are several key quantities, e.g., nuclear reaction rates and opacity measurements (especially of iron, Fe), and internal processes in stars, e.g., convection and overshooting, that remain highly uncertain in the modelling of stars, especially massive stars. Moreover, building accurate stellar models requires including the contribution of hydro- and magneto-hydrodynamical processes in the stellar interior, such as stellar pulsations, stellar rotation and magnetic fields. These processes are not so well understood and remain highly approximated in stellar models.

Despite the progress made in these areas in the last decade, several challenges remain in stellar physics. These include the treatment of convection and the determination of the sizes of the convective zones, a proper account of all the processes that can induce mass loss at the different phases of evolution, the instabilities triggered in radiative zones that can transport angular momentum and chemical species (some of them likely triggered by rotation), and the impact of magnetic fields in the stellar interior and at the surface. Each of these uncertainties can severely impact stellar outputs and alter the feedback stars inject into the interstellar medium. Below we discuss two significant internal processes.

3.1 Internal Mixing

Energy produced in stars by nuclear burning and other processes needs to be transported away to the outer layers. The three main mechanisms responsible for this transport are convection, conduction and radiation. In most stellar evolution codes, convection is modelled using a simple but successful formalism called mixing length theory (MLT; Böhm-Vitense 1958). If energy is carried by convection, then owing to the actual movement of particles in the star, angular momentum and chemical species are also transported within the star.
This can change the stellar structure and radius, which in turn affects the ionization, mass-loss rates and pre-supernova structure of the star (Dessart et al. 2013; Kaiser et al. 2020).

Convective boundary mixing (CBM) dictates the extension of the convective core and shell burning regions. There are multiple methods of implementing CBM with various mixing profiles, such as core overshooting via step or exponential profiles, or convective entrainment (Scott et al. 2021). The extension of the convective core via overshooting during core H-burning has various consequences, leading to stars evolving at higher luminosities with increased mass loss over the integrated main-sequence lifetime. Together, convection and associated mixing mechanisms contribute to the internal mixing in stars.

Mixing processes can alter energy transport and the hydrogen content in the envelope, driving the evolution of massive stars towards red and blue supergiant phases and thus dictating red-to-blue supergiant ratios (Schootemeijer et al. 2019). On the main sequence, the effects of internal mixing and mass loss dominate the evolutionary pathways which govern the fates of massive stars towards forming black holes and neutron stars. In the mass range ∼ 8–30 M⊙, interior mixing processes dominate the lives of massive stars, while in the mass range ∼ 30–60 M⊙, stellar winds drive the evolution towards Wolf-Rayet (WR) stripped helium stars. The indirect effect of mass loss on interior mixing also plays a role in the switch of evolutionary path during core He-burning (Higgins & Vink 2020; Sabhahit et al. 2021). The switch in evolutionary channels in post-MS evolution is key for predicting SN progenitor populations.

Internal mixing mechanisms are one of the largest uncertainties in stellar physics. For example, the extent of core overshooting, which determines the length of the main sequence, may itself be mass dependent (Castro et al.
2014), which will also influence the post-main sequence evolutionary channels that form black holes. In fact, maintaining a sufficiently low core mass at the highest mass range can be critical in forming black holes and avoiding the pair-instability supernova regime (Vink et al. 2021). Similarly, radiative envelopes with subsurface convective layers can drive clumps in the wind, altering the mass-loss rates and having a large impact on SN progenitors (Davies et al. 2007; Cantiello et al. 2009; Jiang et al. 2015), although there remain large uncertainties in these predictions.

Convection, as given by MLT, becomes highly inefficient at transporting energy within the radiation-dominated, low-density envelopes of massive stars with Minit > 40 M⊙, whose luminosities approach the Eddington limit (e.g., Langer 1997; Maeder 2009), and the situation only worsens for cooler supergiants owing to the hydrogen opacity bump at Teff ∼ 10^4 K. Such a situation can cause stellar evolution codes to either crash or become stuck at very small time-steps (Paxton et al. 2013). What happens in reality in such conditions, e.g., whether stars in close proximity to the Eddington limit inflate (Gräfener et al. 2012) or not, remains yet another unresolved problem. However, stellar evolution models can predict widely different post-main sequence evolution when treating these highly inflated layers (Agrawal et al. 2022), which can have far-reaching consequences for predicting the feedback properties of massive stars. Perhaps 2D or 3D simulations, or observational constraints such as the Humphreys-Davidson limit, might shed light on what happens in such inflated, low-density envelopes.

Asteroseismology may provide calibrations for the efficiencies of internal mixing processes, but main-sequence stars are usually fast rotators, and this can blur the period spacing. Low-mass, slower rotators are more accessible for providing constraints with asteroseismology (Pedersen et al. 2021; Bowman 2021).
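The proximity to the Eddington limit invoked in this section can be made concrete with the classical electron-scattering Eddington factor, Γe = κe L / (4π c G M). The sketch below evaluates it in CGS units; any example masses and luminosities used with it are illustrative round numbers, not model results from the cited works.

```python
import math

def eddington_factor(mass_msun, lum_lsun, kappa_e=0.34):
    """Electron-scattering Eddington factor Gamma_e = kappa_e * L /
    (4 pi c G M), with kappa_e in cm^2/g (~0.34 for an H-rich mixture).
    Gamma_e -> 1 means radiation pressure on free electrons alone
    balances gravity; line and continuum opacity bring the effective
    limit closer still."""
    G, c = 6.674e-8, 2.998e10                  # CGS constants
    M = mass_msun * 1.989e33                   # stellar mass [g]
    L = lum_lsun * 3.828e33                    # luminosity [erg/s]
    return kappa_e * L / (4.0 * math.pi * c * G * M)
```

For illustration, a 40 M⊙ star at ∼3 × 10^5 L⊙ has Γe ≈ 0.2, while a hypothetical 85 M⊙ star at 1.5 × 10^6 L⊙ approaches Γe ≈ 0.5, the regime where the envelope problems described above set in.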
Rotation and rotational mixing play a major role in the chemical enrichment of massive stars, which is dominated by rotational mixing instabilities; a key question is whether the angular momentum is maintained via solid-body rotation, which is also important for determining neutron star spin.

3.2 Stellar magnetic fields

Stars form in a magnetised medium, and recent simulations have demonstrated the large impact that magnetic fields have on the formation process (Oliva & Kuiper, in prep.). However, the acquisition of stellar magnetic fields is largely unconstrained. There are two different kinds of magnetic fields that can be harboured by massive stars. One possible branch is dynamos, either in the convective core driven by the α-Ω cycle (similar to the surface of the Sun), or in the radiative layers driven by differential rotation (e.g., the mechanism proposed by Spruit 2002). Such dynamos are small-scale and vary on a short Alfvén timescale. In evolutionary models of massive stars, dynamo-generated magnetic fields in the radiative zones are commonly invoked (Maeder & Meynet 2003, 2004, 2005; Heger et al. 2005; Potter et al. 2012; Fuller et al. 2019; Takahashi & Langer 2021).

Another branch of possibilities is relaxed, equilibrium fossil magnetic fields in the stellar radiative envelopes (e.g., Braithwaite & Spruit 2004; Braithwaite & Nordlund 2006), which are large-scale and stable over the long-term evolution (Ohmic timescale). Such fields are now routinely observed via spectropolarimetry (exploiting the Zeeman effect) in a fraction of Galactic massive stars, although no detections outside of the Galaxy have been made yet, largely due to the limitations of current instrumentation.

The impact of fossil magnetic fields is far-reaching. These fields form a magnetosphere around the star, which channels the stellar outflow (ud-Doula & Owocki 2002; Owocki 2004).
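The degree of this channelling is conventionally quantified by the wind magnetic confinement parameter of ud-Doula & Owocki (2002), η⋆ = B_eq² R⋆² / (Ṁ v∞); η⋆ ≫ 1 means the field dominates the wind. A minimal sketch in CGS units, where the example stellar parameters are assumed round numbers rather than measurements:

```python
def wind_confinement(b_eq_gauss, r_rsun, mdot_msun_yr, vinf_kms):
    """Wind magnetic confinement parameter eta_* = B_eq^2 R_*^2 /
    (Mdot * v_inf), following ud-Doula & Owocki (2002), in CGS units."""
    R = r_rsun * 6.957e10                        # stellar radius [cm]
    mdot = mdot_msun_yr * 1.989e33 / 3.156e7     # mass-loss rate [g/s]
    vinf = vinf_kms * 1.0e5                      # terminal speed [cm/s]
    return b_eq_gauss ** 2 * R ** 2 / (mdot * vinf)

# A ~kG equatorial field on an O star (R = 10 Rsun, Mdot = 1e-6 Msun/yr,
# v_inf = 2000 km/s; assumed values) already gives eta_* of a few tens,
# i.e. a strongly magnetically confined wind.
```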
The presence of magnetic fields can lead to two other important effects on mass loss: magnetic mass-loss quenching (reducing the mass-loss rate of the star, by up to an order of magnitude for a field of ∼ kG strength), and magnetic braking (removing angular momentum from the star and hence leading to an observable decrease of its surface rotation). Mass-loss quenching is a powerful mechanism that, independent of the metallicity, allows the star to retain most of its mass (Georgy et al. 2017; Keszthelyi et al. 2017a, 2019, 2020, 2021; Petit et al. 2017). The implementation of these processes in stellar evolution models has shown that magnetic braking very efficiently spins down the stellar surface and, depending on the internal coupling, may also produce observable surface nitrogen enrichment (Meynet et al. 2011; Keszthelyi et al. 2019, 2021), with a grid of stellar structure and evolution models available that takes account of these processes (Keszthelyi et al. 2022).

Magnetic fields are thus a key component of stars, either built through internal dynamos or retained as fossil fields from the time of the star’s formation. While determining their presence and effect is difficult, recent advances can help us to better constrain and understand this problem.

4 External Stellar Processes: Binaries

Similar to internal processes, external processes specific to the evolution of stars in multiple systems, such as tidal interactions, mass exchange, common-envelope phases and stellar mergers, can also impact the evolution and feedback of the stars. It is now established that binaries play a major role in the evolution of stellar populations (Eldridge & Stanway 2020, 2022). The majority of stars are born in binary or multiple systems, and the binary fraction increases with stellar mass (Moe & Di Stefano 2017).
In addition, we now know that a significant fraction of these binaries will interact during their lifetime and initiate mass transfer, which has a significant impact on their structure and evolution (Sana et al. 2012). As a result of mass transfer, primaries can be stripped of their hydrogen envelope, which is accreted onto the secondary, spinning it up, or the system may merge. Consequently, their lifetimes and core properties change, affecting the final fate and stellar remnant.

The picture is further complicated by the fact that both internal and external stellar processes, which are by themselves complex to properly model, can hardly be studied in isolation, as they all interact. For example, stellar rotation, which can affect the evolution of stars, is strongly affected by tidal interactions in close binary systems. Indeed, tides can set up exchanges between two reservoirs of angular momentum, the orbital one and the rotational one, causing the star to spin up or spin down depending on the circumstances, and thus modifying the whole evolution of the two components by changing the rotation rates of the stars and the radius of their orbits. A great diversity in evolutionary histories and stellar structures, for example at the time of core collapse, can be obtained through binary evolution. Likely some of the stellar pathways made possible by binary evolution are still to be discovered. Binary evolution impacts stellar feedback in three main ways: winds, ionizing radiation and supernova rates.

4.1 Impact on stellar winds

The interstellar medium continuously receives mechanical energy and chemical feedback from the stellar winds of massive stars. Mass transfer in a close binary system will modify the nature of the wind from both components. The stripped primary (helium star) will likely possess a faster, lower-density wind than its evolved (red supergiant) isolated counterpart, boosting the mechanical feedback.
In addition, the mass-gaining secondary will usually produce a stronger wind as a result of its increased luminosity.

Helium stars (WR stars at high mass) contribute considerable energy to the total energy budget of a population (Fichtner et al. 2022). By way of example, in the SMC the collective wind of one multiple system (HD 5980) dominates over hundreds of OB stars in NGC 346. Stellar populations consisting of rotating stars in binary systems give rise to strong feedback processes, specifically in low-metallicity environments.

4.2 Impact on the ionizing radiation

It is well established that the ionizing radiation from a population of exclusively single (non-rotating) stars declines rapidly once the highest-mass stars evolve off the main sequence, with a secondary (high-energy) peak coinciding with the Wolf-Rayet phase (Schmutz et al. 1992; Smith et al. 2002). Since close binary evolution is capable of stripping the primary component of its hydrogen envelope, the effect of binary evolution on the ionizing budget of young stellar populations is dramatic (Götberg et al. 2019), especially at high energies (helium-ionizing photons), and at low metallicities, for which only exceptionally massive single stars are capable of producing WR stars, whereas binary evolution leads to a prominent population of hot, stripped stars.

Rosdahl et al. (2018) found that, on average, binaries lead to escape fractions of ∼7–10 percent in the early universe, about three times higher than that produced by single stars only. With such a difference in ionizing escape fractions, their simulation with binary systems reaches cosmic reionization before z ∼ 7, while the single-star escape fractions are not able to reionize their simulation volumes by z ∼ 6. Observationally, these findings have major implications for linking stellar evolution to cosmological-scale feedback.
4.3 Impact on core-collapse supernovae

Binary evolution affects supernovae in three main ways: their energy budget, timing (location), and chemical yields. Zapartas et al. (2017) found that the inclusion of binaries in massive stellar systems substantially increases the number of supernovae expected among a stellar population, largely because of “late” events originating from intermediate-mass (4 − 8 M⊙) stars which would have otherwise evolved into white dwarfs, and whose binary interactions uniquely create the conditions for supernovae. The possibility of late events affects the delay-time distribution of supernovae: the maximum time expected for a single star to go supernova is 50 Myr, but late events occur on scales of 50 − 200 Myr after birth. This stands in contrast with current prescriptions of supernova timing in feedback simulations, which often assume an instantaneous explosion within 50 Myr for massive stars.

Similarly, more massive stars that might otherwise be expected to collapse into black holes may instead experience mass stripping and common-envelope interactions that create supernova conditions at the high-mass end as well. The widened range of initial masses that can experience supernovae from binary interactions will change the range of energetics expected and the properties of the supernova progenitors (e.g., Podsiadlowski et al. 1992). Moreover, mass transfer affects the structure and chemical composition of stars (e.g., Laplace et al. 2021), ultimately changing their chemical yields. For example, Farmer et al. (2021) showed recently that, at solar metallicity, binary-stripped stars can eject twice as much carbon into their surroundings as single stars. In addition, binary systems can be the progenitors of gravitational wave sources, which are responsible for enriching stars in r-process elements (Kasen et al. 2017b, see also Sect. 2.4).
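The 50 Myr ceiling for single-star explosions quoted above follows directly from the steep lifetime–mass relation: the least massive single-star SN progenitors (∼8 M⊙) live the longest. A rough sketch using the homology scaling t_MS ≈ 10 Gyr × (M/M⊙)^−2.5, which is an order-of-magnitude approximation assumed here rather than a fit taken from the cited works:

```python
def ms_lifetime_myr(mass_msun):
    """Rough main-sequence lifetime in Myr from the homology scaling
    t_MS ~ 10 Gyr * (M/Msun)^-2.5 (order-of-magnitude only)."""
    return 1.0e4 * mass_msun ** -2.5

# An ~8 Msun star leaves the main sequence after ~55 Myr, setting the
# latest 'prompt' single-star ccSN; 4-8 Msun stars, which explode only
# via binary interaction, do so after roughly 55-300 Myr, naturally
# populating the 50-200 Myr 'late' tail described above.
```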
The supernova kick imparted at the moment of explosion of one binary component can result in a population of runaway and walkaway stars that explode in a location different from their birth environment (e.g., Renzo et al. 2019).

4.4 Impact of larger scales on binary formation

Feedback processes in galaxies are thought to affect the formation of binaries and stellar multiples through perturbations of gas clouds, feedback from stars, and magnetic fields. Turbulence injected into molecular clouds through feedback from jets, winds and ionising radiation may affect when and how stellar multiples are formed. The quantity of angular momentum in protostar formation plays an important role in the mass of the protostellar disk, with more rotation leading to a more massive disk that fragments earlier. By contrast, if more mass is concentrated at the centre of the disk, a single massive star and/or a less massive companion will form. UV radiation and the propagation of heavy elements can also shape the formation of protostars as well as protoplanets.

Magnetic fields are important both in star-forming regions and in stars (see Section 3.2), and can play a role in coupling cloud scales to stellar scales. For example, a sufficiently strong magnetic field will diminish fragmentation, which inhibits but does not fully suppress binary formation. However, due to difficulties in resolution on cloud scales and the cost of small-scale simulations of protostar formation, simulations have not yet converged on the role that magnetic fields play in shaping in-situ binary formation.

Currently, most simulations do not take binary evolution into account in their feedback yields. This is slowly changing, first in reionization studies at z > 6 (Rosdahl et al. 2018) and more recently in lower-redshift galaxies, e.g. Fichtner et al. (2022) for a sub-L* galaxy at z = 3.
5 Varying Metallicity in our Local Group: The Effect of Z

The Local Group is a complex environment, with average present-day metallicities varying from ∼0.2 Z⊙ in SagDIG (Saviane et al. 2002) to ∼2 Z⊙ in the Milky Way’s Galactic Centre (e.g. Nogueras-Lara et al. 2018). Additionally, significant metallicity gradients exist within galaxies (Searle 1971; Vila-Costas & Edmunds 1992; Henry & Worthey 1999), including the Milky Way (e.g., Lemasle et al. 2018); by the metallicity of a galaxy, we typically refer to a radially averaged quantity. Stellar evolution and small-scale feedback models usually adopt the averaged values for a given galaxy when referencing their metallicities.

Within the Local Group, there are also large differences in densities, pressures, and star-forming mechanisms and rates. For example, the Large Magellanic Cloud hosts a million-solar-mass starburst region in 30 Doradus (e.g. Doran et al. 2013), while Sextans A and the SMC appear to host isolated OB stars (Garcia et al. 2019; Lorenzo et al. 2022). Our local universe thus presents a useful testbed for studying how stellar feedback operates in a variety of conditions. Metallicity affects both the behaviour of the stars themselves and the conditions in the gas in galaxies, and hence shapes the interplay between the two (Brugaletta et al. in prep.).

In general, we assume that massive stars form with roughly the same metallicity as their local environment. Their surface abundances over their lifetime are shaped by chemical evolution as well as mixing and other processes such as envelope self-stripping, which drastically change the feedback properties of these stars.

5.1 Impact on Stellar Evolution and Feedback

As discussed earlier, decreasing metallicity generally decreases the impact of stellar winds on an environment (Vink et al. 2001), since winds are driven by metal lines in the stellar atmosphere.
This is largely a consequence of processes inside the star rather than the physics of the interstellar gas. Conversely, due to reduced photon absorption in the atmosphere, ionizing photon emission rates are typically higher at lower stellar metallicity (Martins et al. 2005).

The effect on the gas around stars at lower metallicity is two-fold. On the one hand, the efficiency of mechanical and photoionization feedback is further enhanced by the fact that metal-line cooling in photoionized gas (Ferland 2003) and collisionally-ionized gas (Sutherland & Dopita 1993) is less efficient at low metallicity. On the other hand, lower dust fractions mean that the strength of radiation pressure decreases (Ali 2021).

The consequence of this for feedback depends on how these feedback processes couple, and if and when any given process dominates. Winds and supernovae create hot X-ray emitting bubbles (10^6–10^8 K), while photoionized regions are heated to ∼10^4 K. These regions co-exist within nebulae (Guedel et al. 2008), and their relative position and impact within feedback-driven nebulae remain a subject of active study. Analysis of observations in the Galactic Centre and compact H II regions shows that dust-processed radiation pressure dominates over other processes (Barnes et al. 2020; Olivier et al. 2021b), while in the LMC, SMC and nearby galaxies, thermal pressure from photoionized gas dominates (Lopez et al. 2014; McLeod et al. 2019, 2021). However, in addition to metallicity, these analyses are also affected by other environmental factors such as filling factors, ambient densities and pressures. Similarly, thermal losses are generally believed to have an important impact on wind bubbles, in order to explain the missing energy in observed hot plasmas (Townsley et al. 2003; Lopez et al. 2014). These thermal losses may be more affected by turbulent mixing with cold gas in the environment of the wind bubble than by metal-line cooling in the wind bubbles themselves (Rosen et al.
2014; Lancaster et al. 2021).

5.2 Low metallicity

There remain many unknowns concerning stellar evolution in extremely low-metallicity environments, due to currently limited observational capabilities and uncertain numerical ingredients, even in the case of single-star models. Depending on their metallicity, stars follow different evolutionary paths, resulting in different spectral subtypes dominating the mechanical and radiative yields. Between ∼1/10 Z⊙ and Z⊙, the mechanical luminosity during stellar evolution is both theoretically and observationally expected to be dominated by Wolf-Rayet stars, despite their relatively short lifetimes and rarity (Ramachandran et al. 2018a; Fichtner et al. 2022). The more abundant stars with initial masses in the range ∼10–30 M⊙, by contrast, are expected to end their lives as SNe and hence dominate the mechanical luminosity after ∼10^7 yr, i.e. on timescales comparable with the free-fall timescale of a young stellar cluster (Krumholz & Burkhart 2016). At even lower metallicities, single-star evolution and wind models are not expected to lead to the appearance of the WR phenomenon, with the evolutionary channel leading to H-depleted stars instead being dominated by binary interaction (Shenar et al. 2020).

The lower metal content of such stars may also lead to different evolutionary pathways that are not predicted at higher metallicities. Evolutionary models (Brott et al. 2011) predict that, at metallicities lower than 1/10 Z⊙, fast-rotating massive stars may evolve chemically homogeneously. In this evolutionary pathway, they can achieve temperatures hotter than the zero-age main sequence (Yoon & Langer 2005b) and generally produce ∼5–10 times more ionizing energy than their normally-evolving counterparts (Szécsi et al. 2015).

The implications arising from the evidence that the majority of massive stars are in binary systems, and from the lower angular momentum losses in low-metallicity stellar models, are largely unconstrained.
These effects are expected to attenuate the otherwise steeper decrease in kinetic energy feedback in the early phases of cluster formation at low metallicities (Fichtner et al. 2022). However, the different evolutionary pathways do not only affect the yields estimated directly from evolutionary models; stellar feedback also couples with the hydrodynamic evolution of the circumstellar gas. The slow and dense stellar outflows characteristic of cool supergiants are outside the line-driven regime and are only empirically constrained for stars in the Galactic neighbourhood. It is likely that such slow gas can lead to thermal dissipation at sub-parsec scales, with a growing impact at low metallicities. Stars close to their Eddington limit during the Luminous Blue Variable (LBV) phase are known to lose a significant fraction of their H-rich envelope during phases of high variability (Humphreys & Davidson 1994; Vink & Gräfener 2012). Given the metallicity-independence of the Humphreys-Davidson (HD) limit (Davies et al. 2018; McDonald et al. 2022) and the higher expected number of redward-evolving stars at low metallicities, one can expect that a larger fraction of the energy yield is dissipated well before reaching cluster scales (Geen et al. 2015; Mackey et al. 2015; Lancaster et al. 2021). Any systematic estimate must overcome our inability to convincingly model important stellar evolution phases such as the LBV phase (however, see Grassitelli et al. 2021) and non-conservative mass-transfer phases in binary systems.

6 Stars over Cosmic Time: The Effect of z

In this Section we summarise discussions concerning how stellar evolution and feedback evolve over redshift. We focus our discussion here on redshifts up to z ∼ 2, the peak of cosmological star formation. There are likely to be significant differences between z ∼ 2 and very high redshift, in particular the role of the first (Population III) stars in the very early universe.
As discussed earlier, aspects of stellar evolution such as binary evolution are likely to have a strong impact on cosmological processes such as reionization around z ∼ 6–11.

Typical z ∼ 2 galaxies are moderately massive, deficient in iron-peak elements albeit α/Fe-enhanced (Steidel et al. 2016). Their nebular properties are relatively hard, and individual star-forming knots (from lensing studies) indicate high star-formation intensities, of order ∼0.1 M⊙/yr within a region of a few hundred parsecs (Jones et al. 2010; Livermore et al. 2015). Within the Local Group, only 30 Doradus (the Tarantula Nebula) in the LMC displays such properties, albeit at a higher metallicity of ∼0.5 Z⊙ (Crowther 2019).

6.1 Star formation at low redshift (z ∼ 0–0.3)

Within the Local Group, where individual massive stars can generally be well spatially resolved, there are only a small number of actively star-forming galaxies whose current metallicity is ≤ 0.2 Z⊙, including the SMC, NGC 3109, IC 1613, Sextans A, and WLM. Of these, the SMC has the highest star formation rate (Kennicutt et al. 2008) and so hosts several hundred O stars, albeit with only a few dozen above 40 M⊙ (Schootemeijer et al. 2021). Sextans A has an even lower metallicity (van Zee & Haynes 2006), though also a lower star formation rate. In the context of star-forming knots at high redshift, these populations are modest, since such regions will host thousands of O stars, hundreds of which are expected to exceed 40–50 M⊙. The SMC and Sextans A therefore provide our only direct route to studying the evolution of massive stars at 0.1–0.2 Z⊙, except at the highest masses, which are poorly sampled due to stochasticity. Sub-grid models employed in galaxy simulations (IMF, stellar models) are mainly constrained by local observations and then applied to simulations at high z, or rely on theoretical predictions for low-metallicity stars.
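The stochastic sampling of the highest masses mentioned above can be made concrete with a small Monte Carlo sketch: for a Salpeter IMF, the relative scatter in the number of >40 M⊙ stars shrinks as the O-star population grows. The population sizes, mass range and 40 M⊙ cut are illustrative choices, not fits to the SMC or to any high-z knot.

```python
# Stochastic sampling of the upper IMF: small O-star populations show large
# relative scatter in their count of >40 Msun stars, while large populations
# (as in high-z star-forming knots) sample the upper IMF well.
import random

random.seed(0)

def sample_mass(m_lo=15.0, m_hi=100.0, alpha=2.35):
    """Draw one mass from a Salpeter IMF by inverse-transform sampling."""
    a, b = m_lo ** (1 - alpha), m_hi ** (1 - alpha)
    return (a + random.random() * (b - a)) ** (1 / (1 - alpha))

def n_above(n_stars, m_cut=40.0):
    """Count stars above m_cut in one random realisation of n_stars O stars."""
    return sum(1 for _ in range(n_stars) if sample_mass() >= m_cut)

rel_scatter = {}
for n in (300, 5000):  # a few hundred O stars vs a high-z knot's thousands
    draws = [n_above(n) for _ in range(200)]
    mean = sum(draws) / len(draws)
    std = (sum((d - mean) ** 2 for d in draws) / len(draws)) ** 0.5
    rel_scatter[n] = std / mean
    print(f"N={n:5d}: {mean:7.1f} +/- {std:5.1f} stars above 40 Msun")
```

The relative scatter falls roughly as the square root of the population size, which is why the most massive stars in SMC-sized populations are so poorly constrained.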
Metal-poor massive stellar populations beyond the Local Group have been studied via integrated stellar populations, with the supergiant H II region Mrk 71 within NGC 2366, at 3 Mpc, a striking example, since it hosts massive super star clusters and has a metallicity of ∼0.15 Z⊙ (Gonzalez-Delgado et al. 1994; Micheva et al. 2017). This allows very massive metal-poor stars to be observed, albeit in an integrated stellar population. In particular, UV spectroscopy of the very young super star cluster Mrk 71-A with HST reveals strong He II 1640 emission, providing a direct indicator of the presence of very massive stars (LJ Smith, priv. comm.). Mrk 71 is also notable in showing evidence of leaking Lyman continuum photons (Micheva et al. 2017).

A sizeable population of Green Pea (GP) galaxies has been identified from SDSS observations whose properties overlap with those of high-redshift galaxies: both are metal-poor and possess high specific star formation rates and hard nebular conditions in the BPT diagram (Cardamone et al. 2009), with direct evidence for Lyman continuum leakage in some instances (Izotov et al. 2016) and an excess of soft X-ray emission (Franeck et al. 2022). In addition, there are examples of very metal-poor star-forming galaxies locally, with metallicities of only a few percent of the Solar Neighbourhood value (I Zw 18, SBS 0335; Lequeux et al. 1979; Izotov et al. 1990), which are potential analogues of star-forming galaxies in the very early Universe. Madau & Dickinson (2014) present the evolution of the average metal content of the Universe through its history (their Fig. 14). For example, the metallicity of Sextans A (1/10 Z⊙) equates to ∼4 Gyr after the Big Bang.

6.2 Star formation at z ∼ 2

Overall, whilst there are some commonalities between metal-poor star-forming regions locally and those at high redshift, some key differences remain, including composition (Fe-poor, α-enhanced; Steidel et al.
2016) and higher specific star formation intensities, which potentially impact the IMF and the close binary fraction; moreover, even if the mass and metallicity of a galaxy are the same at high and low z, the environment, gas accretion and merger rate, and AGN activity will differ. It is speculated that old galactic globular clusters (GCs) in particular are born as Young Massive Clusters (YMCs; Portegies Zwart et al. 2010) from an α-enhanced composition, with a first generation of metal-poor massive and intermediate-mass stars present (Bastian & Lardo 2018), which could have contributed to the present-day chemical composition of the clusters (de Mink et al. 2009; Szécsi et al. 2018; Szécsi & Wünsch 2019).

Regarding future prospects, efforts have recently been made to build extensive spectroscopic catalogues of massive stars in Local Group dwarf galaxies with sub-SMC metallicities (Lorenzo et al. 2022). These catalogues will yield a proper characterization of the physical parameters of metal-poor massive stars and will help correct stellar evolutionary models. By introducing their physical properties as inputs to photoionization codes (CLOUDY; Ferland et al. 1998), we will be able to study the conditions of their surrounding interstellar medium and understand the stellar feedback of these metal-poor massive stars. Studying this interplay between individual massive stars and their surrounding interstellar medium in metal-poor environments can help us interpret observations of high-z galaxies and even estimate the number of ionizing photons that dwarf galaxies contributed to the reionization of the Universe.

7 From Star-by-Star Studies to IMF Averages and Population Synthesis

The sources of feedback energy from massive stars – their ionizing photon flux, the momentum carried by their stellar winds, and their ultimate fate as supernovae – all depend strongly on the detailed physics of stellar evolution.
Without a clear understanding of the physical processes involved in the lives and deaths of massive stars, we cannot understand the ultimate impact of stellar feedback on galaxies. Despite the urgency of this question, many theoretical studies of galaxy evolution make use of heavily simplified assumptions about how massive stars evolve. How can we translate the best current understanding of stellar evolution into a better foundation for theoretical models of galaxy formation?

Stellar feedback in galaxies has been invoked as a mechanism to control the galactic star formation rate, the growth of spheroids, and the baryon and metal content of galaxy discs, among other galaxy-scale properties. Energy and momentum injected by massive stars can destroy star-forming clouds before they convert the bulk of their gas into stars, and ultimately drive powerful galactic winds that remove baryons from the disc. Capturing these processes, either in semi-analytic models or hydrodynamic simulations, must begin with a robust budget (and timeline) of the relevant energy sources.

7.1 What Matters at the Scale of Galaxies?

Broadly speaking, the primary physical process that makes galaxies “care” about the stellar populations they contain is feedback. Galaxy-scale feedback is generally considered to be negative, with stellar feedback limiting galactic star formation by injecting turbulence (e.g. Padoan et al. 2016), driving galactic outflows (e.g. Larson 1974), or destroying star-forming molecular clouds (e.g. Chevance et al. 2022). In addition to the energy and momentum that stellar populations inject into their surroundings, the mass loss of stars can also pollute the interstellar medium (ISM) with metals produced in those stars, increasing the cooling rate of this gas and acting as a form of positive feedback (Hirschmann et al. 2013).
Thus, the stellar physics that determines the energy and momentum of stellar winds, SN explosions, and UV radiation acts to change the impact of stellar feedback on the scale of galaxies.

For all but the smallest galaxies, the stellar populations driving feedback comprise tens of thousands or more stars. In addition, simulations of galaxies typically cannot resolve individual stars, except in the smallest, most isolated systems. Thus, the primary questions that galactic astrophysicists have for stellar astrophysicists come down to integrated or population-averaged quantities. Simulations of galaxies may include supernovae, stellar winds, or UV feedback (or any combination of these). What is needed are mass-loss, energy and momentum injection, and UV photon production rates as a function of time (in other words, yields of each of these quantities). A detailed study of an individual star will not alone suffice for this: what is needed is an understanding of a fully-sampled IMF. As the small-scale environment of individual stars is unknown and unresolved in these simulations, the only dependencies of these quantities that can be probed are ones which are again population-averaged, such as the birth metallicity (Badenes et al. 2018) or ISM density (Chabrier et al. 2014). The tool typically used to determine the population-averaged yields needed for galaxy simulations is Population Synthesis.

7.2 Population Synthesis and Simple Stellar Populations

No matter whether galaxies are modelled using analytic approximations, semi-analytic models, or full hydrodynamic simulations, the phenomena occurring inside and around individual stars must necessarily be averaged across large numbers (10^3–10^7) of stars. Historically, this has been done through Population Synthesis of Simple/Single Stellar Populations (SSPs). SSPs are groups of stars, sampled from a given IMF (e.g. Leitherer et al.
1999), that are assumed to have been born at a fixed time with identical chemical properties. Population synthesis models allow simulation codes to determine, as a function of time, the yields of mass, metals, and energy produced by the individual star particles within those simulations (or from an assumed population in an analytic or semi-analytic model). Typically, this is done via either tabulated outputs from a population synthesis code (e.g. Leitherer et al. 1999; da Silva et al. 2012) or analytic functions fit to these yields. While this hides much of the stellar physics involved in producing these yields “under the hood” of the population synthesis model, it offers us the opportunity to easily incorporate a more sophisticated model of stellar evolution without significant work to re-design galaxy simulation codes.

8 Connecting Theory and Observations

Theoretical approaches such as simulations are essential in astrophysics, since laboratory experiments of most astronomical phenomena are impossible. Using theoretical results to inform observational results requires the creation of “synthetic” observations: mock observational results generated using simulated inputs. These can take the form of simulated stellar spectra, multi-wavelength gas emission maps, mock galaxy catalogues, and more. This process is important both for observers, who may wish to understand the systems they observe with full 3D and time information, and for theorists who wish to better constrain their models.

Creating mock observations is a complex process with many steps that must be treated properly to produce accurate results. This subject has been widely discussed on various scales, from the regions around stars (see review by Haworth et al. 2018) to cosmological galaxy formation (e.g. Guidi et al. 2015).
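The tabulated-yield approach described in Sect. 7.2 amounts, on the simulation side, to a table lookup with interpolation: a star particle queries a pre-computed SSP table for its current injection rate at its age. The minimal sketch below uses placeholder table values, not actual population synthesis output.

```python
# Minimal sketch of querying a tabulated SSP yield, as a galaxy simulation
# would for each star particle. The (age, luminosity) values are
# placeholders, not Starburst99 or similar output.
import bisect

# toy SSP table: age [Myr] -> wind mechanical luminosity [arbitrary units]
ages = [0.0, 3.0, 5.0, 10.0, 40.0]
lwind = [1.0, 1.2, 0.8, 0.3, 0.05]

def yield_at(age_myr):
    """Linearly interpolate the tabulated yield at a given SSP age."""
    if age_myr <= ages[0]:
        return lwind[0]
    if age_myr >= ages[-1]:
        return lwind[-1]
    i = bisect.bisect_right(ages, age_myr)
    t0, t1 = ages[i - 1], ages[i]
    y0, y1 = lwind[i - 1], lwind[i]
    return y0 + (y1 - y0) * (age_myr - t0) / (t1 - t0)

print(yield_at(4.0))  # midway between the 3 and 5 Myr rows -> 1.0
```

Real codes interpolate in metallicity as well as age, but the principle is the same: all the stellar physics lives in the table, and the simulation only ever sees population-averaged rates.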
There are various hurdles relevant to stellar evolution and feedback that must be overcome if we are to close the gap between observed systems and theoretical predictions of how they behave. One key issue is ensuring that the physical structure of the observed system is realistic. This is highly affected by stellar feedback on all scales, which in turn is affected by the details of (massive) stellar evolution, as discussed in previous Sections. Conversely, with accurate theoretical models it may be possible to use observations of feedback-driven structures as archaeological tools to inform studies of how stars evolve.

The motion of interstellar gas is chaotic, since it requires solutions to the coupled non-linear equations of (radiative magneto)hydrodynamics and N-body gravitation. This means that small perturbations to the early state of the cloud, such as initial seed turbulence or differences in stellar output, can have large cumulative effects on the later evolution of astrophysical systems. The variance arising from differences in stellar input and initial gas properties has been explored in star-forming regions (Geen et al. 2018) and galaxies (Keller & Kruijssen 2022). Some linear response and mitigation of sampling errors is recoverable using statistical analysis and comparisons of large catalogues of both simulations and observations (Eadie et al. 2018). However, the physical divergence of solutions to sets of non-linear equations over time remains a serious concern in reproducing astronomical phenomena using simulations.

Simulations will often necessarily simplify or omit certain details of real-world physics for the sake of producing computationally feasible or reducible results. Some models assume 1D or 2D geometries with symmetry in other dimensions, or ignore effects such as (non-)ideal magnetohydrodynamics, gas chemistry, thermal conduction, etc. Choices concerning simulated system size and resolution must also be made.
Many of these assumptions may be reasonable and lead to minimal impact on the end result (e.g. through convergence in simulation resolution), but it is often hard to determine whether this is true without access to more expensive, physically complete simulations.

Finally, the emission and absorption properties of stars and interstellar gas are complex, but must nonetheless be reproduced in detail if we wish to create accurate synthetic observations. This may be relatively simple for low-opacity systems with well-understood stellar populations, but becomes complex in other, more general cases. Efforts have begun to connect the actions of stars to the emission properties of interstellar nebulae (see, e.g., Pellegrini et al. 2020). However, the problem remains a difficult and costly one. A solution requires a good understanding of stellar evolution, feedback physics and gas microphysics and chemistry, all operating together over the lifetime of a system.

One mitigation of these problems may be found in posing questions in a way that reduces the impact of some of the uncertainties given above. Rather than producing a 1:1 comparison of individual objects, we may instead seek an interval of validity, that is, a set of possibilities informed by simulations that constrain certain parameters. Public data availability through standard databases would assist in this by allowing simulators and observers to access large quantities of relevant information, provided the limitations of the simulations and observations within the databases (e.g. resolution limits, systematic errors or important physical choices) are properly understood by the user. To ensure that the interval of validity and limitations are properly understood, increased collaboration between observers and simulators in the near future will be helpful.
9 Conclusions

The interplay between stars and their environment (termed “stellar feedback”) is a long-standing problem that is nonetheless still the subject of active study. These questions remain open for numerous reasons, relating to the complexity of large-scale astrophysical gas dynamics and of the evolution of stars, individually and in multiple stellar systems.

The outcome of the workshop was to identify a wide-ranging set of points of interaction between massive stars and the gas in galaxies, from the scale of protostellar disks to cosmological scales. In addition, the workshop highlighted the need for detailed discussions between researchers working on different aspects of both stellar evolution and feedback. For example, bridging the scales of molecular clouds and galaxies is important in tracking how the impact of massive stellar evolution is felt on (cosmological) galaxy scales.

Much of this work is concerned with providing an inventory of the variables and unknowns affecting each field and how they relate to each other. For example, metallicity plays an important role both in the wind and radiation outputs from massive stars and in the impact these processes have on the gas in galaxies through radiative cooling efficiencies. We provide detailed discussion of both the theoretical and observed behaviour of stars and gas at different metallicities, using our local galactic environment and higher-redshift galaxies as observational examples. Meanwhile, strong uncertainties remain in the budget of mass, energy and chemical enrichment from winds, radiation and supernovae at different metallicities, including whether certain stars become supernovae at all (“islands of explodability”).

We discuss the effects governing stellar evolution, including both internal effects such as mixing and magnetic fields, and external effects such as interaction with companion stars, and how this shapes feedback.
Determining the internal structure of stars remains difficult, although there are promising techniques for doing so using asteroseismology and comparison with theory, which in turn offer the ability to constrain a new generation of theoretical stellar evolution models. Multiple stellar systems greatly complicate the evolutionary paths of massive stars. Nonetheless, understanding stellar multiples remains crucial, not only because a large fraction, or even the majority, of massive stars are in binaries, but also because interacting binaries drastically change the feedback properties of massive stars, both before and after the stars go supernova. This in turn can even influence how cosmological processes such as reionization occur.

We note that it is important to understand not just the action of individual stars or binary systems, but how feedback from stars combines across populations in galaxies. This in turn is important for determining what we can learn about individual stars when observing distant galaxies where individual stars cannot be resolved.

Finally, we discuss efforts to compare theory and observations in detail. This remains a difficult task, since modelling the spectral emission from the atmospheres of stars, as well as from (photo- and collisionally-)ionized gas, is non-trivial, although software tools are increasingly able to perform this task. More worryingly, as (astrophysical) fluids evolve non-linearly and precise information about the initial state of an observed system is difficult to obtain, direct one-to-one comparison is often challenging or impossible, and we must instead rely on statistical comparisons.
Overall, we believe that this is an exciting time to begin widening discussions between workers in the fields of stellar evolution and feedback, with advances in theory and observations in both fields allowing great improvements in our understanding of astrophysics, both from the point of view of the birth and evolution of stars in a galactic context, and in building an inventory of how energy propagates from stars to shape local star formation, whole galaxies and the wider universe.

10 Acknowledgements

We would like to thank the anonymous referee for their work in improving the quality of the manuscript. The workshop on which this manuscript is based was made possible thanks to the logistical and financial support of the Lorentz Center, Leiden, Netherlands. This funding is made available by Leiden University and the Dutch Science Foundation (NWO). The workshop was further supported by a NOVA grant for Star Formation, which SG also acknowledges for support. SG further acknowledges support from a Spinoza award of the NWO for research on the physics and chemistry of the interstellar medium. This research was partly funded by the National Science Center (NCN), Poland under grant number OPUS 2021/41/B/ST9/00757. Y.A.F. and E.R.D. acknowledge support from Collaborative Research Center 956, sub-project C4, funded by the Deutsche Forschungsgemeinschaft (DFG) – project ID 184018867. Y.A.F. was supported by the International Max Planck Research School in Astronomy and Astrophysics. SR acknowledges funding from the European Research Council Horizon 2020 research and innovation programme (Grant No. 833925, project STAREX). H.S. and D.Sz. were supported by the Alexander von Humboldt Foundation. R.S. was funded in part by the National Science Center (NCN), Poland under grant number OPUS 2021/41/B/ST9/00757.
For the purpose of Open Access, the author has applied a CC-BY public copyright license to any Author Accepted Manuscript (AAM) version arising from this submission. M.T. acknowledges support from the NWO grant 0.16.VIDI.189.162 (“ODIN”). For the purpose of Open Access, the author has applied a CC-BY public copyright license to any Author Accepted Manuscript (AAM) version arising from this submission. A.A.C.S. and V.R. are supported by the Deutsche Forschungsgemeinschaft (DFG – German Research Foundation) in the form of an Emmy Noether Research Group – Project-ID 445674056 (SA4064/1-1, PI Sander). M.L. gratefully acknowledges support by grants PID2019-105552RB-C41 and MDM-2017-0737 Unidad de Excelencia “María de Maeztu”-Centro de Astrobiología (CSIC-INTA), funded by MCIN/AEI/10.13039/501100011033 and “ESF Investing in your future”.

Contact:
Name: Sam Geen
Institution: (1) Anton Pannekoek Institute for Astronomy, University of Amsterdam, 1098 XH Amsterdam, The Netherlands (2) Leiden Observatory, Leiden University, PO Box 9513, 2300 RA Leiden, Netherlands
Email: s.t.geen@uva.nl

Full list of institutions:
1 Anton Pannekoek Institute for Astronomy, Universiteit van Amsterdam, Science Park 904, 1098 XH Amsterdam, Netherlands
2 Leiden Observatory, Leiden University, PO Box 9513, 2300 RA Leiden, Netherlands
3 McWilliams Center for Cosmology, Department of Physics, Carnegie Mellon University, Pittsburgh, PA 15213, USA
4 Physics & Astronomy, University of Sheffield, Hounsfield Road, Sheffield, S3 7RH, United Kingdom
5 Department of Physics and Material Science, The University of Memphis, Memphis, TN 38152, USA
6 Institute of Astronomy, KU Leuven, Celestijnenlaan 200D, 3001 Leuven, Belgium
7 Center for Computational Astrophysics, Division of Science, National Astronomical Observatory of Japan, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan
8 Cardiff Hub for Astrophysics Research and Technology, School of Physics and Astronomy, Cardiff
University, Queen’s Buildings, The Parade, Cardiff CF24 3AA, UK
9 Department of Physics and Astronomy, University of Exeter, Stocker Road, Exeter EX4 4QL, United Kingdom
10 I. Physikalisches Institut, Universität zu Köln, Zülpicher Str. 77, 50937 Cologne, Germany
11 Department of Astronomy, University of Geneva, Chemin Pegasi 51, 1290 Versoix, Switzerland
12 Argelander-Institut für Astronomie, Universität Bonn, Auf dem Hügel 71, D-53121 Bonn, Germany
13 Armagh Observatory & Planetarium, College Hill, Armagh, BT61 9DG, United Kingdom
14 Heidelberger Institut für Theoretische Studien, Schloss-Wolfsbrunnenweg 35, 69118 Heidelberg, Germany
15 Centro de Astrobiología, CSIC-INTA. Crtra. de Torrejón a Ajalvir km 4. 28850 Torrejón de Ardoz (Madrid), Spain
16 Centre for Extragalactic Astronomy, Department of Physics, Durham University, South Road, Durham DH1 3LE, United Kingdom
17 Institute for Computational Cosmology, Department of Physics, University of Durham, South Road, Durham DH1 3LE, United Kingdom
18 Institute for Astronomy and Astrophysics, University of Tübingen, Auf der Morgenstelle 10, 72076 Tübingen, Germany
19 Zentrum für Astronomie der Universität Heidelberg, Astronomisches Rechen-Institut, Mönchhofstr. 12-14, 69120 Heidelberg, Germany
20 Sub-department of Astrophysics, University of Oxford, DWB, Keble Road, Oxford OX1 3RH, United Kingdom
21 Institute of Astronomy, Faculty of Physics, Astronomy and Informatics, Nicolaus Copernicus University, Grudziądzka 5, 87-100 Toruń, Poland
22 Kapteyn Astronomical Institute, University of Groningen, P.O.
Box 800, 9700 AV Groningen, Netherlands
23 The Observatories of the Carnegie Institution for Science, 813 Santa Barbara Street, CA-91101 Pasadena, USA
24 SOFIA Science Center, USRA, NASA Ames Research Center, Moffett Field, CA 94045, USA
25 Las Cumbres Observatory, 6740 Cortona Dr, Suite 102, Goleta, CA 93117-5575, USA
26 Department of Physics, University of California, Santa Barbara, CA 93106-9530, USA
27 Departamento de Física Teórica, Universidad Autónoma de Madrid (UAM), Campus de Cantoblanco, E-28049 Madrid, Spain

References
Afflerbach A., Churchwell E., Werner M. W., 1997, ApJ, 478, 190
Agertz O., et al., 2020, MNRAS, 491, 1656
Agrawal P., Szécsi D., Stevenson S., Eldridge J. J., Hurley J., 2022, MNRAS, 512, 5717
Ali A. A., 2021, MNRAS, 501, 4136
Badenes C., et al., 2018, ApJ, 854, 147
Barnes A. T., Longmore S. N., Dale J. E., Krumholz M. R., Kruijssen J. M. D., Bigiel F., 2020, MNRAS, 498, 4906
Basinger C. M., Kochanek C. S., Adams S. M., Dai X., Stanek K. Z., 2021, MNRAS, 508, 1156
Bastian N., Lardo C., 2018, ARA&A, 56, 83
Björklund R., Sundqvist J. O., Puls J., Najarro F., 2021, A&A, 648, A36
Böhm-Vitense E., 1958, ZAp, 46, 108
Bowman D. M., 2021, in OBA Stars: Variability and Magnetic Fields. p. 27, doi:10.5281/zenodo.5109690
Bowman J. D., Rogers A. E. E., Monsalve R. A., Mozdzen T. J., Mahesh N., 2018, Nature, 555, 67
Braithwaite J., Nordlund Å., 2006, A&A, 450, 1077
Braithwaite J., Spruit H. C., 2004, Nature, 431, 819
Brott I., et al., 2011, A&A, 530, A115
Burbidge E. M., Burbidge G. R., Fowler W. A., Hoyle F., 1957, Reviews of Modern Physics, 29, 547
Cantiello M., et al., 2009, A&A, 499, 279
Cardamone C., et al., 2009, MNRAS, 399, 1191
Carlberg R. G., Keating L. C., 2022, ApJ, 924, 77
Castor J. I., Abbott D. C., Klein R. I., 1975, ApJ, 195, 157
Castro N., Fossati L., Langer N., Simón-Díaz S., Schneider F. R. N., Izzard R. G., 2014, A&A, 570, L13
Chabrier G., Hennebelle P., Charlot S., 2014, ApJ, 796, 75
Chen H.-L., Woods T. E., Yungelson L. R., Gilfanov M., Han Z., 2015, MNRAS, 453, 3024
Chevance M., et al., 2022, MNRAS, 509, 272
Cranmer S. R., Owocki S. P., 1995, ApJ, 440, 308
Crowther P. A., 2019, Galaxies, 7, 88
Crowther P. A., et al., 2016, MNRAS, 458, 624
Davies B., Vink J. S., Oudmaijer R. D., 2007, A&A, 469, 1045
Davies B., Crowther P. A., Beasor E. R., 2018, MNRAS, 478, 3138
Dayal P., et al., 2020, MNRAS, 495, 3065
Dessart L., Hillier D. J., Waldman R., Livne E., 2013, MNRAS, 433, 1745
Dobbs C. L., Bending T. J. R., Pettitt A. R., Bate M. R., 2022, MNRAS, 509, 954
Doran E. I., et al., 2013, A&A, 558, A134
Eadie G., Keller B., Harris W. E., 2018, ApJ, 865, 72
Eide M. B., Graziani L., Ciardi B., Feng Y., Kakiichi K., Di Matteo T., 2018, MNRAS, 476, 1174
Eldridge J. J., Stanway E. R., 2020, arXiv e-prints, p. arXiv:2005.11883
Eldridge J. J., Stanway E. R., 2022, arXiv e-prints, p. arXiv:2202.01413
Emerick A., Bryan G. L., Mac Low M.-M., 2019, MNRAS, 482, 1304
Farmer R., Laplace E., de Mink S. E., Justham S., 2021, ApJ, 923, 214
Faucher-Giguère C.-A., 2020, MNRAS, 493, 1614
Federrath C., Schrön M., Banerjee R., Klessen R. S., 2014, ApJ, 790, 128
Ferland G. J., 2003, ARA&A, 41, 517
Ferland G. J., Korista K. T., Verner D. A., Ferguson J. W., Kingdon J. B., Verner E. M., 1998, PASP, 110, 761
Fichtner Y. A., Grassitelli L., Romano-Díaz E., Porciani C., 2022, MNRAS, 512, 4573
Finlay P., et al., 2012, Phys. Rev. C, 85, 055501
Franeck A., Wünsch R., Martínez-González S., Orlitová I., Boorman P., Svoboda J., Szécsi D., Douna V., 2022, ApJ, 927, 212
Fuller J., Piro A. L., Jermyn A. S., 2019, MNRAS, 485, 3661
Gal-Yam A., Leonard D. C., 2009, Nature, 458, 865
Garcia M., Herrero A., Najarro F., Camacho I., Lorenzo M., 2019, MNRAS, 484, 422
Geen S., Rosdahl J., Blaizot J., Devriendt J., Slyz A., 2015, MNRAS, 448, 3248
Geen S., Watson S. K., Rosdahl J., Bieri R., Klessen R. S., Hennebelle P., 2018, MNRAS, 481, 2548
Georgy C., Meynet G., Ekström S., Wade G. A., Petit V., Keszthelyi Z., Hirschi R., 2017, A&A, 599, L5
Gerke J. R., Kochanek C. S., Stanek K. Z., 2015, MNRAS, 450, 3289
Gonzalez-Delgado R. M., et al., 1994, ApJ, 437, 239
Götberg Y., de Mink S. E., Groh J. H., Leitherer C., Norman C., 2019, A&A, 629, A134
Götberg Y., de Mink S. E., McQuinn M., Zapartas E., Groh J. H., Norman C., 2020, A&A, 634, A134
Gräfener G., Vink J. S., 2016, MNRAS, 455, 112
Gräfener G., Owocki S. P., Vink J. S., 2012, A&A, 538, A40
Grassitelli L., Langer N., Mackey J., Gräfener G., Grin N. J., Sander A. A. C., Vink J. S., 2021, A&A, 647, A99
Groenewegen M., Lamers H., Pauldrach A., 1989, A&A, 221, 78
Groh J. H., Meynet G., Ekström S., 2013a, A&A, 550, L7
Groh J. H., Meynet G., Georgy C., Ekström S., 2013b, A&A, 558, A131
Grudić M. Y., Guszejnov D., Offner S. S. R., Rosen A. L., Raju A. N., Faucher-Giguère C.-A., Hopkins P. F., 2022, MNRAS, 512, 216
Guedel M., Briggs K. R., Montmerle T., Audard M., Rebull L., Skinner S. L., 2008, Science, 319, 309
Guidi G., Scannapieco C., Walcher C. J., 2015, MNRAS, 454, 2381
Gutcke T. A., Pakmor R., Naab T., Springel V., 2021, MNRAS, 501, 5597
Haworth T. J., Glover S. C. O., Koepferl C. M., Bisbas T. G., Dale J. E., 2018, New A Rev., 82, 1
Heger A., Woosley S. E., Spruit H. C., 2005, ApJ, 626, 350
Henry R. B. C., Worthey G., 1999, PASP, 111, 919
Higgins E. R., Vink J. S., 2020, A&A, 635, A175
Hirschmann M., et al., 2013, MNRAS, 436, 2929
Horiuchi S., Nakamura K., Takiwaki T., Kotake K., Tanaka M., 2014, MNRAS, 445, L99
Humphreys R. M., Davidson K., 1994, PASP, 106, 1025
Izotov I. I., Guseva N. G., Lipovetskii V. A., Kniazev A. I., Stepanian J. A., 1990, Nature, 343, 238
Izotov Y. I., Schaerer D., Thuan T. X., Worseck G., Guseva N. G., Orlitová I., Verhamme A., 2016, MNRAS, 461, 3683
Jiang Y.-F., Cantiello M., Bildsten L., Quataert E., Blaes O., 2015, ApJ, 813, 74
Jones T. A., Swinbank A. M., Ellis R. S., Richard J., Stark D. P., 2010, MNRAS, 404, 1247
Kaiser E. A., Hirschi R., Arnett W. D., Georgy C., Scott L. J. A., Cristini A., 2020, MNRAS, 496, 1967
Kasen D., Metzger B., Barnes J., Quataert E., Ramirez-Ruiz E., 2017a, Nature, 551, 80
Kasen D., Metzger B., Barnes J., Quataert E., Ramirez-Ruiz E., 2017b, Nature, 551, 80
Katz N., 1992, ApJ, 391, 502
Kawata D., 2001, ApJ, 558, 598
Kee N. D., Sundqvist J. O., Decin L., de Koter A., Sana H., 2021, A&A, 646, A180
Keller B. W., Kruijssen J. M. D., 2022, MNRAS, 512, 199
Kennicutt Robert C. J., Lee J. C., Funes J. G., J. S., Sakai S., Akiyama S., 2008, ApJS, 178, 247
Keszthelyi Z., Wade G. A., Petit V., 2017a, in Eldridge J. J., Bray J. C., McClelland L. A. S., Xiao L., eds, Vol. 329, The Lives and Death-Throes of Massive Stars. pp 250–254 (arXiv:1702.04460), doi:10.1017/S1743921317002745
Keszthelyi Z., Puls J., Wade G. A., 2017b, A&A, 598, A4
Keszthelyi Z., Meynet G., Georgy C., Wade G. A., Petit V., David-Uraz A., 2019, MNRAS, 485, 5843
Keszthelyi Z., et al., 2020, MNRAS, 493, 518
Keszthelyi Z., Meynet G., Martins F., de Koter A., David-Uraz A., 2021, MNRAS, 504, 2474
Keszthelyi Z., et al., 2022, MNRAS, doi:10.1093/mnras/stac2598
Kobayashi C., Karakas A. I., Lugaro M., 2020, ApJ, 900, 179
Kotak R., Vink J. S., 2006, A&A, 460, L5
Krumholz M. R., Burkhart B., 2016, MNRAS, 458, 1671
Kudritzki R.-P., Puls J., 2000, ARA&A, 38, 613
Kuiper R., Hosokawa T., 2018, A&A, 616, A101
Lancaster L., Ostriker E. C., Kim J.-G., Kim C.-G., 2021, ApJ, 914, 89
Langer N., 1997, in Nota A., Lamers H., eds, Astronomical Society of the Pacific Conference Series Vol. 120, Luminous Blue Variables: Massive Stars in Transition. p. 83
Langer N., 1998, A&A, 329, 551
Laplace E., Justham S., Renzo M., Götberg Y., Farmer R., Vartanyan D., de Mink S. E., 2021, A&A, 656, A58
Larson R. B., 1974, MNRAS, 169, 229
Leitherer C., et al., 1999, ApJS, 123, 3
Lemasle B., et al., 2018, A&A, 618, A160
Lequeux J., Peimbert M., Rayo J. F., Serrano A., Torres-Peimbert S., 1979, A&A, 80, 155
Livermore R. C., et al., 2015, MNRAS, 450, 1812
Lopez L. A., Krumholz M. R., Bolatto A. D., Prochaska J. X., Ramirez-Ruiz E., Castro D., 2014, ApJ, 795, 121
Lorenzo M., Garcia M., Najarro F., Herrero A., Cerviño M., Castro N., 2022, MNRAS, 516, 4164
Lovegrove E., Woosley S. E., 2013, ApJ, 769, 109
Lucas W. E., Bonnell I. A., Dale J. E., 2020, MNRAS, 493, 4700
Mackey J., Castro N., Fossati L., Langer N., 2015, A&A, 582, A24
Madau P., Dickinson M., 2014, ARA&A, 52, 415
Maeder A., 1987, A&A, 178, 159
Maeder A., 2009, Physics, Formation and Evolution of Rotating Stars. Springer Berlin Heidelberg, doi:10.1007/978-3-540-76949-1
Maeder A., Meynet G., 2000, ARA&A, 38, 143
Maeder A., Meynet G., 2003, A&A, 411, 543
Maeder A., Meynet G., 2004, A&A, 422, 225
Maeder A., Meynet G., 2005, A&A, 440, 1041
Martínez-González S., Wünsch R., Tenorio-Tagle G., Silich S., Szécsi D., Palouš J., 2022, ApJ, 934, 51
Martins F., Schaerer D., Hillier D. J., Meynadier F., Heydari-Malayeri M., Walborn N. R., 2005, A&A, 441, 735
Mathews W. G., Baker J. C., 1971, ApJ, 170, 241
McCray R., Snow T. P. J., 1979, ARA&A, 17, 213
McDonald S. L. E., Davies B., Beasor E. R., 2022, MNRAS, 510, 3132
McKee C. F., Ostriker J. P., 1977, ApJ, 218, 148
McLeod A. F., Dale J. E., Evans C. J., Ginsburg A., Kruijssen J. M. D., Pellegrini E. W., Ramsay S. K., Testi L., 2019, MNRAS, 486, 5263
McLeod A. F., et al., 2020, ApJ, 891, 25
McLeod A. F., et al., 2021, MNRAS, 508, 5425
Meynet G., Eggenberger P., Maeder A., 2011, A&A, 525, L11
Meynet G., et al., 2015, A&A, 575, A60
Micheva G., Oey M. S., Jaskot A. E., James B. L., 2017, ApJ, 845, 165
Moe M., Di Stefano R., 2017, ApJS, 230, 15
Montargès M., et al., 2021, Nature, 594, 365
Müller B., et al., 2019, MNRAS, 484, 3307
Naab T., Ostriker J. P., 2017, ARA&A, 55, 59
Nogueras-Lara F., et al., 2018, A&A, 620, A83
Olivier G. M., Berg D. A., Chisholm J., Erb D. K., Pogge R. W., Skillman E. D., 2021a, arXiv e-prints, p. arXiv:2109.06725
Olivier G. M., Lopez L. A., Rosen A. L., Nayak O., Reiter M., Krumholz M. R., Bolatto A. D., 2021b, ApJ, 908, 68
Oskinova L., Schaerer D., 2022, arXiv e-prints, p. arXiv:2203.04987
Owocki S. P., 2004, in Maeder A., Eenens P., eds, IAU Symposium Vol. 215, Stellar Rotation. p. 515
Padoan P., Pan L., Haugbølle T., Nordlund Å., 2016, ApJ, 822, 11
Paxton B., et al., 2013, ApJS, 208, 4
Pedersen M. G., et al., 2021, Nature Astronomy, 5, 715
Peimbert M., Torres-Peimbert S., Rayo J. F., 1978, ApJ, 220, 516
Pellegrini E. W., Rahner D., Reissl S., Glover S. C. O., Klessen R. S., Rousseau-Nepton L., Herrera-Camus R., 2020, MNRAS, 496, 339
Petit V., et al., 2017, MNRAS, 466, 1052
Podsiadlowski P., Joss P. C., Hsu J. J. L., 1992, ApJ, 391, 246
Portegies Zwart S. F., McMillan S. L. W., Gieles M., 2010, ARA&A, 48, 431
Potter A. T., Chitre S. M., Tout C. A., 2012, MNRAS, 424, 2358
Prinja R. K., Barlow M., Howarth I. D., 1990, ApJ, 361, 607
Puchwein E., Haardt F., Haehnelt M. G., Madau P., 2019, MNRAS, 485, 47
Puls J., Vink J. S., Najarro F., 2008, A&A Rev., 16, 209
Puls J., Sundqvist J. O., Markova N., 2015, in Meynet G., Georgy C., Groh J., Stee P., eds, IAU Symposium Vol. 307, New Windows on Massive Stars. pp 25–36 (arXiv:1409.3582), doi:10.1017/S174392131400622X
Ramachandran V., Hainich R., Hamann W. R., Oskinova L. M., Shenar T., Sander A. A. C., Todt H., Gallagher J. S., 2018a, A&A, 609, A7
Ramachandran V., Hamann W. R., Hainich R., Oskinova L. M., Shenar T., Sander A. A. C., Todt H., Gallagher J. S., 2018b, A&A, 615, A40
Ramachandran V., et al., 2019, A&A, 625, A104
Renzo M., et al., 2019, A&A, 624, A66
Rey M. P., Starkenburg T. K., 2022, MNRAS, 510, 4208
Rieder S., Dobbs C., Bending T., Liow K. Y., Wurster J., 2022, MNRAS, 509, 6155
Robertson B. E., Ellis R. S., Furlanetto S. R., Dunlop J. S., 2015, ApJ, 802, L19
Romano D., Karakas A. I., Tosi M., Matteucci F., 2010, A&A, 522, A32
Rosdahl J., Schaye J., Dubois Y., Kimm T., Teyssier R., 2017, MNRAS, 466, 11
Rosdahl J., et al., 2018, MNRAS, 479, 994
Rosen A. L., Lopez L. A., Krumholz M. R., Ramirez-Ruiz E., 2014, MNRAS, 442, 2701
Sabhahit G. N., Vink J. S., Higgins E. R., Sander A. A. C., 2021, MNRAS, 506, 4473
Sana H., et al., 2012, Science, 337, 444
Sander A. A. C., Vink J. S., 2020, MNRAS, 499, 873
Saviane I., Rizzi L., Held E. V., Bresolin F., Momany Y., 2002, A&A, 390, 59
Schaerer D., Fragos T., Izotov Y. I., 2019, A&A, 622, L10
Schmutz W., Leitherer C., Gruenwald R., 1992, PASP, 104, 1164
Schootemeijer A., Langer N., Grin N. J., Wang C., 2019, A&A, 625, A132
Schootemeijer A., et al., 2021, A&A, 646, A106
Scott L. J. A., Hirschi R., Georgy C., Arnett W. D., Meakin C., Kaiser E. A., Ekström S., Yusof N., 2021, MNRAS, 503, 4208
Searle L., 1971, ApJ, 168, 327
Senchyna P., Stark D. P., Mirocha J., Reines A. E., Charlot S., Jones T., Mulchaey J. S., 2020, MNRAS, 494, 941
Shenar T., Gilkis A., Vink J. S., Sana H., Sander A. A. C., 2020, A&A, 634, A79
Singh S., et al., 2022, Nature Astronomy, 6, 607
Smartt S. J., 2009, ARA&A, 47, 63
Smith L. J., Norris R. P. F., Crowther P. A., 2002, MNRAS, 337, 1309
Smith M. C., Bryan G. L., Somerville R. S., Hu C.-Y., Teyssier R., Burkhart B., Hernquist L., 2021, MNRAS, 506, 3882
Spruit H. C., 2002, A&A, 381, 923
Steidel C. C., Strom A. L., Pettini M., Rudie G. C., Reddy N. A., Trainor R. F., 2016, ApJ, 826, 159
Sukhbold T., Adams S., 2020, MNRAS, 492, 2578
Sutherland R. S., Dopita M. A., 1993, ApJS, 88, 253
Szécsi D., Wünsch R., 2019, ApJ, 871, 20
Szécsi D., Langer N., Yoon S.-C., Sanyal D., de Mink S., Evans C. J., Dermine T., 2015, A&A, 581, A15
Szécsi D., Mackey J., Langer N., 2018, A&A, 612, A55
Takahashi K., Langer N., 2021, A&A, 646, A19
Townsley L. K., Feigelson E. D., Montmerle T., Broos P. S., Chu Y.-H., Garmire G. P., 2003, ApJ, 593, 874
Trebitsch M., et al., 2021, A&A, 653, A154
Vartanyan D., Laplace E., Renzo M., Götberg Y., Burrows A., de Mink S. E., 2021, ApJ, 916, L5
Verliat A., Hennebelle P., González M., Lee Y.-N., Geen S., 2022, A&A, 663, A6
Vila-Costas M. B., Edmunds M. G., 1992, MNRAS, 259, 121
Vink J. S., 2022, ARA&A, p. arXiv:2109.08164
Vink J. S., Gräfener G., 2012, ApJ, 751, L34
Vink J. S., de Koter A., Lamers H. J. G. L. M., 2001, A&A, 369, 574
Vink J. S., Brott I., Gräfener G., Langer N., de Koter A., Lennon D. J., 2010, A&A, 512, L7
Vink J. S., Higgins E. R., Sander A. A. C., Sabhahit G. N., 2021, MNRAS, 504, 146
Weaver R., McCray R., Castor J., Shapiro P., Moore R., 1977, ApJ, 218, 377
White R. L., Long K. S., 1991, ApJ, 373, 543
Woosley S. E., Heger A., 2006, ApJ, 637, 914
Worseck G., Prochaska J. X., Hennawi J. F., McQuinn M., 2016, ApJ, 825, 144
Yoon S. C., Langer N., 2005a, A&A, 435, 967
Yoon S. C., Langer N., 2005b, A&A, 443, 643
Yung L. Y. A., Somerville R. S., Finkelstein S. L., Popping G., Davé R., Venkatesan A., Behroozi P., Ferguson H. C., 2020, MNRAS, 496, 4574
Zapartas E., et al., 2017, A&A, 601, A29
da Silva R. L., Fumagalli M., Krumholz M., 2012, ApJ, 745, 145
de Mink S. E., Pols O. R., Langer N., Izzard R. G., 2009, A&A, 507, L1
ud-Doula A., Owocki S. P., 2002, ApJ, 576, 413
van Zee L., Haynes M.
P., 2006, ApJ, 636, 214

diff --git a/4NFRT4oBgHgl3EQfozeT/content/tmp_files/load_file.txt b/4NFRT4oBgHgl3EQfozeT/content/tmp_files/load_file.txt
new file mode 100644
index 0000000000000000000000000000000000000000..4abaa1afd119ae0c352146788269a540cac23587
--- /dev/null
+++ b/4NFRT4oBgHgl3EQfozeT/content/tmp_files/load_file.txt

Bringing Stellar Evolution & Feedback Together
Summary of proposals from the Lorentz Center Workshop, 2022
Co-authors: Sam Geen1,2, Poojan Agrawal3, Paul A. Crowther4, B.W. Keller5,18, Alex de Koter1,6, Zsolt Keszthelyi1,7, Freeke van de Voort8, Ahmad A. Ali9, Frank Backs1, Lars Bonne24, Vittoria Brugaletta10, Annelotte Derkink1, Sylvia Ekström11, Yvonne A. Fichtner12, Luca Grassitelli12, Ylva Götberg23, Erin R. Higgins13, Eva Laplace14, Kong You Liow9, Marta Lorenzo15,27, Anna F. McLeod16,18, Georges Meynet11, Megan Newsome25,26, G.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' André Oliva18 ,Varsha Ramachandran19 ,Martin P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Rey,20 ,Steven Rieder11 , Emilio Romano-Díaz12 , Gautham Sabhahit13 ,Andreas A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content='C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Sander19 ,Rafia Sarwar21 ,Hanno Stinshoff 10,21 ,Mitchel Stoop1 ,Dorottya Szécsi21 , Maxime Trebitsch 22 ,Jorick S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Vink13 ,Ethan Winch13 (Author contact details and full list of institutions at end of paper) Keywords: Stellar physics: Stellar atmospheres, Stellar evolution, Stellar processes;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Stellar populations;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Interstellar medium: nebulae, Protostars, Supernova remnants, Stellar-interstellar interactions;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Interdisciplinary astronomy Abstract: Stars strongly impact their environment, and shape structures on all scales throughout the universe, in a process known as “feedback”.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Due to the complexity of both stellar evolution and the physics of larger astrophysical structures, there remain many unanswered questions about how feedback operates, and what we can learn about stars by studying their imprint on the wider universe.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' In this white paper, we summarize discussions from the Lorentz Center meeting ‘Bringing Stellar Evolution and Feedback Together’ in April 2022, and identify key areas where further dialogue can bring about radical changes in how we view the relationship between stars and the universe they live in.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' 1 Introduction on Scales: From the Birth of Stars to the Wider Universe Astrophysics spans many orders of magnitude in both physical distances and time.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Researchers from different fields have varying definitions for what are considered “small” and ”large” scales.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Typically, “small” refers to processes smaller than those typically resolved in studies, whether observational or theoretical.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Meanwhile, “large” typically refers to scales outside the boundaries of the problem domain.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' In Figure 1 we show a diagram depicting the range of relevant spatial and temporal scales, from stars to galaxies and beyond, in order to define and motivate discussions around the boundaries of domains of study considered in this work.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' The galactic scale, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' the largest physical scale considered here below the “cosmological” scale, is about 1 – 100s of kpc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' A spiral galaxy like our Milky Way contains many (giant) molecular clouds of length scale 10 – 100 pc, which from their dense cores can form star clusters at scales of 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content='1 – 10 pc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Within those dense cores, the gravitational collapse that results in the formation of individual stars takes place.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Protostars are typically surrounded by accretion disks of sizes that range between 1 – 1000 au, and outflows.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' On the smallest physical scales considered here, we can regard the (intra)-stellar structure.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Within the star itself, we have the nuclear burning in the core, convection zones, envelope and stellar surface at 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content='1 – 10 R⊙.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' In numerical simulations, the connection between small and large scales is crucial because it is computationally expensive to set up and perform simulations that encompass the whole range of scales relevant to astrophysics within a reasonable amount of computing time.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Despite this, an understanding of how the scales couple is important.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' various physical processes connect the smallest and largest scales with flows moving to both smaller and larger scales, often driven by the action of stars, in a cycle of material termed “feedback”.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' During the star formation process at stellar scales, the outflows launched by the disk and jet can influence the surrounding material.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Ionizing radiation, stellar winds and eventual supernovae produced by the massive stars shape their natal molecular clouds and the interstellar medium, impacting subsequent generations of star formation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' In this work we focus primarily on processes from stars after their formation phase ends, although protostellar outflows can be important both in themselves (Federrath et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' 2014) and in concert with other feedback processes (Kuiper & Hosokawa 2018) as stars form in molecular clouds (Grudi´c et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' 2022;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Verliat et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' 2022).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Feedback processes often 1 arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content='13611v1 [astro-ph.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content='SR] 31 Jan 2023 IDgravitat.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' collapse stellar feedback 0 1 2 3 4 5 1 2 3 4 5 6 7 8 1 au stellar structure circumstellar cloud core cloud galactic cosmological small scales spatial scales star disk/ outflows cloud dense core galaxy log(size [pc]) 1000 au 100 10 1 expanding bubble filament timescales log(time [yr]) 3 6 9 stellar evolution low high disks (~100 kyr) 1–100 Myr > 1 Gyr Figure 1: The different length scales of star formation in log-parsec.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' 2 act in concert, e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' in the case of supernova feedback efficiency increasing if dense star-forming environments are dispersed by pre-supernova feedback (Geen et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' 2015;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Lucas et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' 2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Several techniques have been developed to bridge the different length scales.' 
From larger to smaller scales, zoomed-in simulations are performed, such that regions from larger-scale simulations are taken as initial conditions and the resolution of those regions is enhanced (e.g. Carlberg & Keating 2022; Dobbs et al. 2022; Rey & Starkenburg 2022). This allows the regions of interest to be followed and studied more closely. For example, zoom-in simulations of dense cloud cores can be used to follow their gravitational collapse into individual stars. In the other direction, prescriptions are used to import the physics of smaller scales to the larger scales (e.g. Gutcke et al. 2021). This is generally done using empirical relations, analytical solutions, or parametric tables. Some recent simulations employ multiple techniques to bridge the different scales (e.g. Rieder et al. 2022).

Critical tasks for the useful presentation and communication of the results of numerical simulations are: the determination of reliable intervals where a given quantity is valid or expected (e.g., the densities or angular momentum content of dense cores expected from simulations at the cloud scales), and the expression, whenever possible, of results that impact neighbouring scales using analytical formulae so that they can be used as prescriptions (e.g., evolutionary tracks for protostars that are used in larger-scale simulations).

With the advance of observational facilities with higher angular resolution (e.g. the Extremely Large Telescope, the James Webb Space Telescope, Athena), we come closer to resolving astrophysical structures at both large and small scales for regions in the Local Group and beyond. Many of these facilities will be able to resolve individual stars (of lower masses) in regions where, previously, we could only probe the large-scale structure. Observations and simulations of large and small scales in the (near) future will provide us with essential knowledge to connect these scales.
2 Introduction to Feedback: The Physics Connecting the Scales

Once the protostellar phase has ended, stars impact their surroundings in a number of ways. We highlight some of the key processes by which stellar evolution drives feedback into the interstellar medium and beyond.

2.1 Stellar Winds

Stellar winds refer to the ejection of matter from a star's surface, driven by radiation pressure on the gas in the star's atmosphere. Stellar winds impact their surroundings through a combination of the mass-loss rate Ṁ and the terminal velocity v_w, i.e. the velocity that the stellar wind reaches once it is fully accelerated by radiation pressure. Observations by Groenewegen et al. (1989), Prinja et al. (1990), Crowther et al. (2016) and others confirm that these winds leave massive stars with terminal velocities that exceed 1000 km/s. This shocks the gas around the star to millions of degrees Kelvin, creating hot bubbles that drive strong flows into the interstellar medium (Weaver et al. 1977). The rate of deposition of kinetic energy by stellar winds, (1/2)Ṁv_w², is an important quantity in stellar feedback, where the energy in the wind bubble accumulates over time (Weaver et al. 1977). In the mode where stellar winds cool efficiently through thermal conduction or, more plausibly, turbulent mixing (e.g. Lancaster et al. 2021), the momentum deposition rate Ṁv_w becomes more important. This mode is considerably weaker at driving large-scale flows since stored energy is lost. We examine in further detail how stellar wind bubbles impact nearby star-forming regions in Section 5.

The properties of stars play a crucial role in setting Ṁ and v_w (Puls et al. 2015). Factors such as metallicity (Vink et al. 2001), rotation (Cranmer & Owocki 1995), clumping (Puls et al. 2008) and magnetic fields (ud-Doula & Owocki 2002) are thought to play an important role in setting the precise wind properties. We return to these processes in detail later in the paper. One of the most important stellar properties for determining Ṁ and v_w is stellar mass. At solar metallicity, stars with masses larger than around 25 M⊙ do not make it to the cool red supergiant phase, but instead lose a lot of mass in line-driven winds (Castor et al. 1975; Kudritzki & Puls 2000; Vink 2022). At lower metallicity, winds become significantly weaker due to the lack of metal lines to couple radiation to the gas and drive material from the stellar surface.

A significant impediment to a better understanding of stellar winds is the uncertainty in mass-loss rates. For stars below 25 M⊙, mass-loss rates are uncertain by 1-2 orders of magnitude in the so-called "weak-wind regime" (Martins et al. 2005). For those massive stars where mass loss starts to dominate the evolution (at about 40 M⊙), the uncertainties are about a factor of 2-3 (e.g. Björklund et al. 2021). Such uncertainties were investigated in evolutionary models by Keszthelyi et al. (2017b), who found that the discrepancies may be resolved by studying the rotational velocities of B-type supergiants (Vink et al. 2010), given that mass loss leads to angular momentum removal and spin-down of the stellar surface (Langer 1998; Maeder & Meynet 2000). Stars of order 80-100 M⊙ are in the transition region of Vink & Gräfener (2012), where mass-loss rates are known very accurately, but above this transition point, the mass-loss rates included in most stellar evolution and population synthesis models are thought to be underestimated.
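The relative importance of the energy- and momentum-driven wind modes above can be illustrated with representative numbers for a single massive star. The mass-loss rate and terminal velocity below are hypothetical round values chosen only to set the scale; they are assumptions of this sketch, not measurements from the text.

```python
# Back-of-the-envelope comparison of wind energy vs. momentum injection for a
# single massive star. The mass-loss rate and terminal velocity are assumed
# illustrative values, not measurements.

MSUN_G = 1.989e33  # solar mass [g]
YR_S = 3.156e7     # one year [s]

mdot = 1e-6 * MSUN_G / YR_S  # assumed mass-loss rate: 1e-6 Msun/yr, in g/s
v_w = 2.0e8                  # assumed terminal velocity: 2000 km/s, in cm/s

# Kinetic energy deposition rate, L_w = (1/2) * Mdot * v_w^2  [erg/s]
L_w = 0.5 * mdot * v_w**2

# Momentum deposition rate, p_dot = Mdot * v_w  [dyn]
p_dot = mdot * v_w

# Accumulated over an assumed ~4 Myr main-sequence lifetime, the wind energy
# is of order 10% of the ~1e51 erg of a single supernova for these numbers.
E_total = L_w * 4e6 * YR_S

print(f"L_w   = {L_w:.2e} erg/s")
print(f"p_dot = {p_dot:.2e} dyn")
print(f"E_tot = {E_total:.2e} erg over 4 Myr")
```

This makes concrete why the cooling mode matters: if the bubble retains energy, the relevant quantity is the cumulative (1/2)Ṁv_w²·t, whereas efficient cooling leaves only the much weaker momentum input Ṁv_w.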
2.2 Ionizing Radiation

Stellar ionizing radiation can propagate and deposit energy on a large variety of scales, starting in the stars' own atmospheres and extending to the intergalactic medium across the Universe, which it "reionized" after cosmic recombination. Pinpointing how much, and when, hard ionizing photons are released is thus a key input to modelling how stars affect their surroundings on all scales. We highlight here recent developments, open questions, and uncertainties in predicting the budget of ionizing photons from stellar evolution, and their coupling to galactic and intergalactic scales.

Ionizing fluxes of stars strongly depend on the star's temperature. Therefore, the fact that main-sequence stars are hotter at lower metallicities has a direct impact on the resulting ionizing photon budget. However, this effect could potentially be drastically or even totally altered by stellar evolution effects relating to rotation and binary interaction. Binary interaction can lead to mass exchange between the two stars, resulting in "envelope-stripped", and thus even hotter, helium stars. Rapid rotation is also thought to efficiently mix massive stars that cannot spin down at low metallicity, leading to the creation of helium-enriched, and finally pure helium, stars, referred to as chemically homogeneous stars (Yoon & Langer 2005a; Szécsi et al. 2015). When determining the feedback for a resolved population of stars, it is therefore crucial not to miss the "earliest" (i.e. hottest) stars of the population, as they dominate the ionizing feedback (see, e.g., Ramachandran et al. 2018b, 2019, for recent examples). In addition, accreting compact objects are known to emit X-rays and ionizing radiation, which have been considered to aid the photoionization of interstellar or even intergalactic gas (Chen et al. 2015; Schaerer et al. 2019; Senchyna et al. 2020). Moreover, cluster winds and superbubbles have recently been suggested as a source of additional ionizing flux (Oskinova & Schaerer 2022). While most of their emitted photons are too energetic to efficiently ionize gas, a fraction of them can contribute to the total budget of hydrogen- and helium-ionizing photons in the universe.

While the effective temperatures of stars can give some clues to their spectrum and ionizing power, black bodies only provide limited representations of the ionizing fluxes of hot stars. The absorption of radiation by recombination fronts inside the stellar wind can significantly reshape the spectral energy distribution, thereby considerably affecting the resulting quantities of ionizing photons emitted by the star. This is particularly striking for the He II ionizing flux, which is reduced by many orders of magnitude (effectively vanishing) if the stars manage to launch an optically thick (Wolf-Rayet type) wind (e.g. Sander & Vink 2020). This effect is not an issue for hydrogen-ionizing photons, even though part of their flux budget is still consumed to drive stellar winds.

Direct constraints on the ionizing flux of individual stars in the local Universe would be invaluable to constrain uncertainties in the sources of photoionization of interstellar gas, but are unfortunately limited by the unavailability of extreme-UV (EUV) observational tools. Hence, other indirect methods are necessary, for example (1) inferring the ionizing emission from nebular spectra using scaling relations for recombination line luminosities, and (2) using the ionizing emission from computed stellar atmosphere models that sufficiently reproduce the spectrum at other wavelengths (UV, optical, IR). Since the stellar He II-ionizing flux is considerably affected by winds from the star, UV observations remain an important tool to correctly determine the sources of these photons.

Radiative feedback plays a key role in regulating the lifecycle of star-forming regions, and in providing an early mechanism to modify the phase and thermodynamics of the gas in which massive stars then explode as supernovae to drive galactic outflows. The coupling between ionizing radiation, other sources of feedback, and the surrounding gas however remains uncertain, due to the inherent challenges in modelling and observing these non-linear physical processes occurring on multiple spatial and time scales. Quantifying the balance between feedback budgets within H II regions has now become possible (e.g., Lopez et al. 2014; McLeod et al. 2019, 2020, 2021; Olivier et al. 2021a; Barnes et al. 2020). However, the uncertainties pointed out above in stellar evolution and in synthesizing stellar population outputs propagate into these measurements, making their interpretation challenging. Furthermore, the interaction between radiative, wind, and supernova feedback is a strongly non-linear process, which can lead to positive reinforcement and strong galactic outflow driving (e.g. Lucas et al. 2020) or, by contrast, diminish the clustering of SN explosions and reduce their efficiency at expelling gas from a galaxy (e.g. Agertz et al. 2020; Smith et al. 2021; Fichtner et al. 2022). Pinpointing the sign and strength of these couplings, both observationally and theoretically, will be key to interpreting galaxies in observations, and to understanding how they regulate their star formation, how they enrich their surrounding environment in metals, and how radiation escapes from them to larger, cosmological scales.

H I reionization of the universe is mostly powered by stellar sources in low-mass star-forming galaxies (e.g. Robertson et al. 2015; Dayal et al. 2020; Yung et al. 2020; Trebitsch et al. 2021), so having a good handle on their ionizing production is crucial, while keeping in mind that other sources of uncertainty (e.g. how much of this ionizing radiation escapes the ISM) still need to be addressed. Even prior to H I reionization, X-rays from the very early stellar populations in star-forming galaxies contribute to heating the IGM, but the rate of production of these X-rays is still uncertain. Most emission comes from X-ray binaries (e.g. Eide et al. 2018), whose populations are poorly constrained at the highest redshifts. 21cm all-sky measurements are starting to put limits on the beginning of this heating era (Bowman et al. 2018), although other experiments are needed to confirm this result (see e.g. Singh et al. 2022). Next-generation facilities like the SKA will soon constrain the early heating of the Universe, making the need for detailed models timely. In this context, a detailed understanding of the binary evolution of stars (and in particular massive stars) is required to properly assess the early heating of the IGM. While He II reionization, which happens at z ∼ 3 (e.g. Worseck et al. 2016), is thought to be mostly dominated by AGN sources (e.g. Puchwein et al. 2019; Faucher-Giguère 2020), the contribution from stellar populations remains mostly unconstrained. Notwithstanding the uncertainties on the escape fraction of He II-ionizing photons, the uncertainties in the stellar population models pointed out above will translate into the contribution of these stellar populations to the He II background. In particular, the presence of very massive stars or hydrogen-stripped stars (e.g. Götberg et al. 2020) could strongly enhance the contribution of the overall stellar populations to He II reionization.
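The strong temperature dependence of the ionizing flux can be made quantitative even in the limited blackbody approximation discussed above. The sketch below integrates the Planck photon spectrum above 13.6 eV; the temperatures and radius are assumed round values for illustration, and, as the text stresses, a blackbody is only a crude stand-in for a real hot-star atmosphere.

```python
import math

# Hydrogen-ionizing photon rate Q_H for a pure blackbody photosphere, to show
# why ionizing output is so temperature-sensitive. Blackbody approximation
# only; temperature and radius are assumed illustrative values.

H = 6.626e-27    # Planck constant [erg s]
K_B = 1.381e-16  # Boltzmann constant [erg/K]
C = 2.998e10     # speed of light [cm/s]
EV = 1.602e-12   # 1 eV [erg]
R_SUN = 6.957e10 # solar radius [cm]

def q_ionizing(T, R, e_min_ev=13.6, x_max=50.0, n=100000):
    """Photons/s above e_min_ev from a blackbody of temperature T [K], radius R [cm].

    Q_H = 4*pi*R^2 * (2*pi/c^2) * (kT/h)^3 * Integral_{x0}^{inf} x^2/(e^x - 1) dx,
    with x = h*nu/(k*T) and x0 = e_min/(k*T).
    """
    x0 = e_min_ev * EV / (K_B * T)
    # trapezoidal integration of x^2 / (exp(x) - 1) from x0 to x_max
    dx = (x_max - x0) / n
    total = 0.0
    for i in range(n + 1):
        x = x0 + i * dx
        f = x * x / math.expm1(x)
        total += f if 0 < i < n else 0.5 * f
    integral = total * dx
    return 4 * math.pi * R**2 * (2 * math.pi / C**2) * (K_B * T / H)**3 * integral

# A hot O-type photosphere (T ~ 40,000 K, R ~ 10 R_sun) yields roughly
# 1e49 photons/s; dropping to 30,000 K cuts Q_H by nearly an order of magnitude.
print(f"Q_H(40 kK) = {q_ionizing(4.0e4, 10 * R_SUN):.2e} /s")
print(f"Q_H(30 kK) = {q_ionizing(3.0e4, 10 * R_SUN):.2e} /s")
```

The steep falloff between 40 kK and 30 kK is why missing the few hottest stars of a population, or mispredicting their temperatures, dominates the error budget of the ionizing feedback.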
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content='3 Supernovae Feedback from supernovae (SN) has long been considered a key ingredient in studies of interstellar gas (e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' McKee & Ostriker 1977) and galaxy evolution (e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Larson 1974).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' SNe, especially core-collapse Type II SNe, release significant (∼ 1051 erg) energy in the initial blastwave: sufficient to destroy molecular clouds (White & Long 1991), drive turbulence in the ISM (McCray & Snow 1979), and power galactic winds and outflows (Mathews & Baker 1971).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' These explosions are also major sources of metals, producing (for example) the vast bulk of interstellar oxygen (Burbidge et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' 1957).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Beyond core-collapse supernovae, thermonuclear (Type Ia) supernovae may also be a source of feedback energy, and also contribute to the cosmic metal budget (Kawata 2001).' 
From the cloud- and galaxy-scale feedback perspective, the key questions connecting stellar evolution to supernova feedback are as follows. Which stars will end their lives as supernovae? When will these stars detonate their supernovae? What will be the energy, mass, and metal returns of these supernova events (and which form will the energy take at larger scales - kinetic or thermal)? Traditionally, very simple assumptions have been made about these questions: all stars above a certain mass (5-10 M⊙) detonate, with each ccSN event depositing ∼ 10^51 erg of energy and ∼ 7-100 M⊙ of mass into the surrounding ISM (e.g. Katz 1992).
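The traditional energy budget described above can be made concrete with a short numerical sketch: integrating an initial mass function to count stars above the detonation threshold and multiplying by a fixed 10^51 erg per event. The Salpeter slope and the IMF mass limits used here are common but assumed choices, not taken from this text:

```python
def salpeter_imf(m, alpha=2.35):
    """Unnormalised Salpeter (1955) IMF, dN/dm proportional to m^-alpha."""
    return m ** -alpha

def sn_energy_per_msun(m_lo=0.1, m_hi=100.0, m_sn=8.0,
                       e_sn=1e51, n_bins=10_000):
    """Energy (erg) injected by ccSNe per solar mass of stars formed,
    under the 'traditional' assumption: every star above m_sn detonates
    with a fixed e_sn.  m_sn and e_sn follow the text; the IMF limits
    are illustrative assumptions."""
    dm = (m_hi - m_lo) / n_bins
    mass_formed = 0.0   # integral of m * xi(m) dm -> total stellar mass
    n_sn = 0.0          # integral of xi(m) dm above m_sn -> SN count
    for i in range(n_bins):
        m = m_lo + (i + 0.5) * dm
        xi = salpeter_imf(m)
        mass_formed += m * xi * dm
        if m >= m_sn:
            n_sn += xi * dm
    return e_sn * n_sn / mass_formed
```

With these defaults this yields of order 10^49 erg per solar mass of stars formed, i.e. very roughly one ccSN per hundred solar masses, which is why the choice of detonation threshold matters so much for galaxy-scale feedback budgets.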
It has long been assumed that, at least on galactic scales, uncertainties in how this energy propagates through the ISM dominate over any uncertainties in stellar evolution models (Naab & Ostriker 2017; Rosdahl et al. 2017), and that questions relating to the details of ccSN detonation are swamped by uncertainties in the cooling and mixing rates of SN remnants. However, recent studies (Keller & Kruijssen 2022) and higher-resolution simulations (Gutcke et al. 2021) have begun to reveal that the details of stellar evolution can detectably manifest themselves on galactic scales. Temporal evolution of the stellar structure, subject to the internal and surface physical processes described in Section 3, will lead to a stellar structure for which internal pressure gradients at some point will no longer be able to withstand the force of gravity. Understanding these processes will allow us to ultimately answer the three key questions identified above.
Hydrodynamical models of SN detonation predict that underluminous (e.g. Lovegrove & Woosley 2013) and hyperluminous (e.g. Woosley & Heger 2006) supernovae may occur for certain combinations of initial stellar mass, metallicity, and rotation. Adding to this are strong theoretical predictions of "islands of explodability", where SN progenitors will either produce very weak SNe or in some cases directly collapse to form black holes (BHs) with no significant energy return whatsoever (Smartt 2009; Horiuchi et al. 2014; Sukhbold & Adams 2020).
Recent theoretical studies of binary star interactions have found that the significant changes induced in both the surface and core structure will also impact which stars detonate, and the energy of the subsequent SN (Müller et al. 2019; Laplace et al. 2021; Vartanyan et al. 2021). Despite these theoretical uncertainties, it is highly likely that theoretical models of galaxy evolution have in general over-estimated the SN energy budget, though this may recently be changing (Emerick et al. 2019; Gutcke et al. 2021).
Better observational constraints are needed to begin pinning down the true energy budget for SN feedback. Observationally, determining the SN budget for stars across the IMF is extremely challenging, owing to the difficult problem of connecting SN progenitors to individual SN events. Red Supergiants (RSGs) constitute the most common SN-progenitor stage, during which the star may experience a Type IIP/L explosion (Smartt 2009). However, the RSG phase may last ∼ 2.5 × 10^6 to 3 × 10^5 yrs for stars ranging in initial mass between 9 and 20 M⊙ (Meynet et al. 2015), with more massive stars, at high metallicity at least, potentially suffering such intense mass loss that the entire envelope is lost and the stars first become yellow or blue supergiants before experiencing core collapse (e.g., Gräfener & Vink 2016; Kee et al. 2021). At lower metallicities, higher-mass supergiants may exist and explode as, e.g., pair-instability supernovae, ejecting a peculiar chemical yield (Martínez-González et al. 2022).

The RSG Betelgeuse experienced an unprecedented dimming of its visual brightness from December 2019 until April 2020, speculated to forewarn an imminent core collapse. Though it appears that this event likely reflected a combination of surface activity and dust formation in a previously ejected gas cloud positioned along the line of sight (Montargès et al. 2021), the need for a dedicated monitoring campaign of a population of RSG stars for unexpected variability is clearly opportune and may help to identify systems for which an explosion may happen within about a human lifetime. Alternatively, the collapse of such massive stars may lead to direct black hole formation with no or only little ejecta being expelled and, consequently, a very faint or undetectable supernova. The most promising candidate for a disappearing star directly collapsing into a black hole showed evidence for an estimated ∼ 0.5 M⊙ of ejecta (Gerke et al. 2015; Sukhbold & Adams 2020; Basinger et al. 2021).
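To illustrate how steeply the RSG-phase duration quoted above falls with initial mass, one can interpolate between the two anchor values from the text (2.5 × 10^6 yr at 9 M⊙, 3 × 10^5 yr at 20 M⊙). The log-linear interpolation form itself is an assumption made purely for illustration:

```python
import math

def rsg_lifetime_yr(mass_msun):
    """RSG-phase lifetime from a log-linear interpolation between the
    two (mass, lifetime) anchors quoted in the text (Meynet et al. 2015).
    The interpolation form is an illustrative assumption."""
    m1, t1 = 9.0, 2.5e6   # Msun, yr
    m2, t2 = 20.0, 3.0e5  # Msun, yr
    slope = (math.log10(t2) - math.log10(t1)) / (math.log10(m2) - math.log10(m1))
    log_t = math.log10(t1) + slope * (math.log10(mass_msun) - math.log10(m1))
    return 10 ** log_t
```

The roughly order-of-magnitude drop in duration between 9 and 20 M⊙ is one reason the most massive RSG progenitors are so rarely caught in this phase.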
Wolf-Rayet stars, evolved stars that have lost or have been stripped of their hydrogen-rich envelopes, are alternative candidates for an impending Ib/c (or gamma-ray-burst) supernova explosion (e.g., Groh et al. 2013b). Within this group, Wolf-Rayet Oxygen (WO) stars are thought to be particularly evolved and in a post-core-helium-burning phase of evolution where timescales until core collapse are down to a few times ∼ 10^3 or 10^4 yrs (Meynet et al. 2015). So far, only nine WO stars are known, the one thought to be closest to ending its life being WR102, with ∼ 1500 yrs left. Other post-main-sequence objects have been suggested as potential SN progenitors, including Luminous Blue Variable (LBV) stars (Kotak & Vink 2006; Groh et al. 2013a) and Wolf-Rayet Nitrogen (WN) stars (Groh et al. 2013b). The former possibility is supported by evidence that the progenitor of SN 2005gl was possibly an LBV star (Gal-Yam & Leonard 2009).

2.4 Chemical Enrichment

Nuclear-processed material may be ejected from the star/system, and thus influence the chemical abundance of the surroundings, via at least three mechanisms: (i) stellar winds; (ii) supernova ejecta (discussed in detail in Sect. 2.3); (iii) (non-conservative) binary interaction (discussed further in Sect. 4). Consequently, whether nuclear-processed material ends up in the interstellar medium after being created inside a star is a complex question. For example, elements that stay inside the star for a longer time (due to not being immediately ejected in the wind) may be able to undergo further nuclear processing. In the same way, elements may be "saved" by the wind from being processed further. This makes chemical evolution a highly complex area of research with a number of impediments to our understanding of it. A deeper understanding of how and on which timescale elements are released into the interstellar medium is of great importance for modern stellar feedback simulations. Elements being ejected during the entire lifetime of a massive star could drive a different chemical evolution in the surrounding gas compared to the case in which they are "instantaneously" ejected in the supernova explosion.
If we had a clearer view of these processes, we could also model more accurately how this enriched material spreads to larger scales, meaning the interstellar medium and the rest of the galaxy, through turbulence and other mixing processes. Another important aspect in this regard is the comparison of the timescale on which the mixing of the newly-enriched material occurs in the gas with that of star formation. Will the mixing be fast enough to make the metallicity of the medium almost uniform before a second generation of massive stars is born? As stars inherit their initial metallicity from the gas they formed in, understanding how the timescales for chemical evolution and mixing relate to the time needed to form a new generation of stars would help us better understand their future evolution. Moreover, all these processes could be very different in low-metallicity environments, for which further analysis is recommended (see Section 5).

The efficiency of mass loss through stellar winds is highly dependent on the mass of the star (Sect. 2.1). The higher the mass, the higher the core temperature, leading to the activation of specific nuclear reactions.

Figure 2: Abundances relative to the Solar value plotted over time (in Gyr) for all the elements in the periodic table. This enables the reader to follow the different ways for evolution of the elements to take place via various processes. These processes include Big Bang Nucleosynthesis, AGB stars, Core-collapse Supernovae, Type Ia Supernovae and Neutron star mergers. Observations are depicted as dotted lines. From Kobayashi et al. (2020), reproduced with permission.

Massive and intermediate-mass stars are known to have strong enough winds to eject nuclear-processed material. In particular, Asymptotic Giant Branch (AGB) stars are important contributors to carbon and nitrogen via convective dredge-up of nuclear products from the stellar core (Romano et al. 2010). For the stellar wind (or interactions with a companion star) to be able to remove nuclear burning products, these products, originally created in deep, hot burning regions, need to already be present at the stellar surface. This can happen in two ways: either the mixing between the deep layers and the surface needs to be strong (see Section 3.1), or the layers at the top need to be removed first so that the deeper layers are uncovered (see Section 2.1).
In particular, mixing induced by rotation (rotational mixing) has been shown to lead to extremely well-mixed stars which evolve (quasi-)chemically homogeneously (Maeder 1987). But in less extreme cases, mixing (not only by rotation) can help bring deeper layers upwards, to be eventually lost in the wind. The decay of some isotopes serves as a counter to this process. This can be seen in the case of 26Al, which decays rather quickly (around 6 s) into 26Mg (cf. Finlay et al. 2012). Figure 2 shows the elements in the periodic table together with their cosmic origin (Kobayashi et al. 2020). While the figure shows the state of the art of our current knowledge, other possible avenues for the generation of elements are thought to exist.
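The competition between transport and decay invoked here follows the standard exponential-decay law N/N0 = 2^(-t/t_half). A minimal sketch, taking the ~6 s half-life quoted in the text at face value as an input parameter:

```python
def fraction_remaining(t_s, half_life_s=6.0):
    """Fraction of an isotope surviving after time t_s (seconds),
    N/N0 = 2^(-t / t_half).  The ~6 s default half-life is simply the
    value quoted in the text for the 26Al -> 26Mg decay discussed there
    (cf. Finlay et al. 2012)."""
    return 2.0 ** (-t_s / half_life_s)
```

On any mixing timescale of astrophysical interest (years and up), essentially none of such a short-lived species survives transport to the surface, which is the sense in which decay "counters" mixing.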
For example, gold has been proposed to form in kilonovae (Kasen et al. 2017a). Subsequent generations of stars have enriched interstellar gas with nuclear-processed elements. However, chemical enhancement is not only a time-dependent process but can be spatially traced as well. For example, the Milky Way displays a metallicity gradient (Peimbert et al. 1978; Afflerbach et al. 1997) which decreases outwards, but other galaxies show other trends. Another source of uncertainty is the discrepancy between the yields found at the scale of stellar evolution modelling and those calculated at larger scales.
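As a toy illustration of such a radial gradient, a linear decline of 12 + log(O/H) with galactocentric radius can be written down. The slope, normalisation and solar radius below are illustrative assumptions, not values taken from this text:

```python
def oxygen_abundance(r_kpc, grad_dex_per_kpc=-0.06,
                     log_oh_sun=8.69, r_sun_kpc=8.2):
    """12 + log(O/H) at galactocentric radius r for a linear gradient.
    All three parameter values are illustrative assumptions, roughly in
    the range quoted for the Milky Way disc."""
    return log_oh_sun + grad_dex_per_kpc * (r_kpc - r_sun_kpc)
```

Swapping the sign or magnitude of `grad_dex_per_kpc` captures the point in the text that other galaxies show other trends.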
To connect these two quantities, investigations are required with a varying degree of resolution, as well as an understanding of the uncertainties involved in both calculations. Uncertainties include mixing and convection for single stars, tidal effects for binaries, and in general the handling of the Eddington limit. As one can see, for example, in Agrawal et al. (2022), different approaches with multiple codes can lead to different predictions. Tracers such as CNO abundances may help resolve these discrepancies.

3 Internal Stellar Processes

Stars are places where the four fundamental forces of physics interact (viz., the gravitational, electromagnetic, strong, and weak nuclear forces). Most global properties of stars can be inferred from the stellar structure equations, with the assumption of hydrostatic equilibrium. However, there are several key quantities, e.g., nuclear reaction rates and opacity measurements (especially of iron, Fe), and internal processes in stars, e.g., convection and overshooting, that remain highly uncertain in modelling stars, especially massive stars. Moreover, building accurate stellar models requires including the contribution of hydro- and magneto-hydrodynamical processes in the stellar interior, such as stellar pulsations, stellar rotation and magnetic fields.
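The hydrostatic-equilibrium assumption underlying the stellar structure equations can be illustrated with a toy inward integration of dP/dr = -G m(r) ρ / r^2 for a star of uniform density; the uniform-density profile and the solar mass and radius used are simplifying assumptions for illustration only:

```python
import math

G = 6.674e-8                        # gravitational constant, cgs
M_SUN, R_SUN = 1.989e33, 6.957e10   # solar mass (g) and radius (cm)

def central_pressure_uniform(mass=M_SUN, radius=R_SUN, n=50_000):
    """Central pressure from integrating dP/dr = -G m(r) rho / r^2
    inward from the surface for a uniform-density star -- a drastically
    simplified stand-in for the full stellar structure equations."""
    rho = mass / (4.0 / 3.0 * math.pi * radius ** 3)
    dr = radius / n
    p = 0.0
    for i in range(n, 0, -1):
        r = (i - 0.5) * dr                        # shell midpoint
        m_r = 4.0 / 3.0 * math.pi * r ** 3 * rho  # enclosed mass
        p += G * m_r * rho / r ** 2 * dr
    return p
```

For uniform density this reproduces the analytic result P_c = 3GM^2/(8πR^4), about 1.3 × 10^15 dyn cm^-2 for solar values, far below the true solar central pressure because real stars are strongly centrally concentrated.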
These processes are not so well understood and remain highly approximated in stellar models. Despite the recent progress in these areas over the last decade, several challenges remain in stellar physics. These include the treatment of convection and the determination of the sizes of the convective zones, a proper account of all the processes that can induce mass loss at the different phases of evolution, the instabilities triggered in radiative zones that can transport angular momentum and chemical species (some of them likely triggered by rotation), and the impact of magnetic fields in the stellar interior and at the surface. Each of these uncertainties can severely impact stellar outputs and alter the feedback stars inject into the interstellar medium. Below we discuss two significant internal processes.

3.1 Internal Mixing

Energy produced in stars by nuclear burning and other processes needs to be transported away to the outer layers.
The three main mechanisms responsible for this process are convection, conduction and radiation. In most stellar evolution codes, convection is modelled using a simple but successful formalism called mixing length theory (MLT; Böhm-Vitense 1958). If energy is carried through convection, then owing to the actual movement of particles in the star, angular momentum and chemical species are also transported within the star. This can change the stellar structure and radius, which in turn affects the ionization, mass-loss rates and pre-supernova structure of the star (Dessart et al. 2013; Kaiser et al. 2020). Convective boundary mixing (CBM) dictates the extension of the convective core and shell burning regions. There are multiple methods of implementing CBM with various mixing profiles, such as core overshooting via step or exponential profiles, or convective entrainment (Scott et al. 2021). The extension of the convective core via overshooting during core H-burning has various consequences, leading to stars evolving at higher luminosities with increased mass loss over the integrated main-sequence lifetime. Together, convection and associated mixing mechanisms contribute to the internal mixing in stars. Mixing processes can alter energy transport and the hydrogen content in the envelope, driving the evolution of massive stars towards red and blue supergiant phases and thus dictating red to blue supergiant ratios (Schootemeijer et al. 2019). On the main sequence, the effects of internal mixing and mass loss dominate the evolutionary pathways which govern the fates of massive stars towards forming black holes and neutron stars.
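As an aside, the mixing-length ansatz underlying MLT can be sketched in a few lines. The mixing length is taken as ℓ = α_MLT · H_P, where H_P = P/(ρg) is the local pressure scale height and α_MLT is a calibrated free parameter of order unity. The numbers below are illustrative assumptions, not values from the text:

```python
# Minimal sketch of the mixing-length ansatz (illustrative values only):
# the convective mixing length is l = alpha_MLT * H_P, where the
# pressure scale height is H_P = P / (rho * g).

def pressure_scale_height(P, rho, g):
    """H_P = P / (rho * g); cm for cgs inputs."""
    return P / (rho * g)

def mixing_length(P, rho, g, alpha_mlt=1.8):
    """l = alpha_MLT * H_P; alpha_MLT ~ 1.5-2 is a calibrated free parameter."""
    return alpha_mlt * pressure_scale_height(P, rho, g)

# Rough solar-photosphere-like numbers (cgs), purely illustrative:
P, rho, g = 1.0e5, 2.0e-7, 2.74e4   # dyn/cm^2, g/cm^3, cm/s^2
print(mixing_length(P, rho, g))      # a few times 1e7 cm
```

The single free parameter α_MLT is what makes MLT "simple but successful": it is tuned (e.g. to the Sun) rather than derived, which is one reason convective mixing remains a leading uncertainty.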
In the mass range ∼ 8–30 M⊙, interior mixing processes dominate the lives of massive stars, and in the mass range ∼ 30–60 M⊙, stellar winds drive the evolution towards Wolf-Rayet (WR) stripped helium stars. The indirect effect of mass loss on interior mixing also plays a role in the switch of evolutionary path during core He-burning (Higgins & Vink 2020; Sabhahit et al. 2021). The switch in evolutionary channels in post-MS evolution is key for predicting SNe progenitor populations. Internal mixing mechanisms are one of the largest uncertainties in stellar physics. For example, the extent of core overshooting, which determines the length of the main sequence, may itself be mass dependent (Castro et al. 2014), which will also influence the post-main-sequence evolutionary channels that form black holes.
In fact, maintaining a sufficiently low core mass at the highest mass range can be critical in forming black holes and avoiding the pair-instability supernova regime (Vink et al. 2021). Similarly, radiative envelopes with subsurface convective layers can drive clumps in the wind, altering the mass-loss rates and having a large impact on SNe progenitors (Davies et al. 2007; Cantiello et al. 2009; Jiang et al. 2015), although there remain large uncertainties in these predictions. Convection, as given by MLT, becomes highly inefficient in energy transport within the radiation-dominated, low-density envelopes of massive stars with Minit > 40 M⊙, whose luminosities approach the Eddington limit (e.g., Langer 1997; Maeder 2009), and only worsens for cooler supergiants owing to the hydrogen opacity bump at Teff ∼ 10^4 K. Such a situation can cause stellar evolution codes to either crash or become stuck at very small time-steps (Paxton et al. 2013). What happens in reality in such conditions, e.g., whether stars in close proximity to the Eddington limit inflate (Gräfener et al. 2012) or not, remains yet another unresolved problem. However, stellar evolution models can predict widely different post-main-sequence evolution when treating these highly inflated layers (Agrawal et al. 2022), which can have far-reaching consequences in predicting the feedback properties of massive stars. Perhaps 2D or 3D simulations, or observational constraints such as the Humphreys-Davidson limit, might shed light on what happens in such inflated, low-density envelopes. Asteroseismology may provide calibrations for the efficiencies of internal mixing processes, but main-sequence stars are usually fast rotators, and this can blur the period spacing. Low-mass, slower rotators are more accessible for providing constraints with asteroseismology (Pedersen et al. 2021; Bowman 2021). Rotation and rotational mixing play a major role in the enrichment of massive stars. The chemical enrichment of massive stars is dominated by rotational mixing instabilities, particularly whether the angular momentum is maintained via solid-body rotation, which is also important for determining neutron star spin.
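The Eddington limit invoked above is the standard L_Edd = 4πGMc/κ, the luminosity at which radiation pressure on the wind material balances gravity. A quick cgs evaluation, assuming for illustration the electron-scattering opacity κ ≈ 0.34 cm² g⁻¹ of a hydrogen-rich envelope:

```python
import math

# Eddington luminosity L_Edd = 4*pi*G*M*c / kappa (cgs units).
G = 6.674e-8        # cm^3 g^-1 s^-2
c = 2.998e10        # cm s^-1
M_sun = 1.989e33    # g
L_sun = 3.828e33    # erg s^-1

def l_edd(mass_g, kappa=0.34):
    """Eddington luminosity in erg/s; kappa in cm^2/g (electron scattering assumed)."""
    return 4.0 * math.pi * G * mass_g * c / kappa

# For a 40 M_sun star (the Minit > 40 M_sun regime discussed above):
print(l_edd(40 * M_sun) / L_sun)   # ~1.5e6 L_sun
```

Since L grows much faster than M along the upper main sequence, stars above ∼40 M⊙ approach this limit, which is why their low-density envelopes become so difficult to model.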
3.2 Stellar magnetic fields

Stars form in a magnetised medium, and recent simulations have demonstrated the large impact that magnetic fields have on the formation process (Oliva & Kuiper, in prep.). However, the acquisition of stellar magnetic fields is largely unconstrained. There are two different kinds of magnetic fields that can be harboured by massive stars. One possible branch is dynamos, either in the convective core driven by the α-Ω cycle (similar to the surface of the Sun), or in the radiative layers driven by differential rotation (e.g., the mechanism proposed by Spruit 2002). Such dynamos are small-scale and vary on a short Alfvén timescale. In evolutionary models of massive stars, dynamo-generated magnetic fields in the radiative zones are commonly invoked (Maeder & Meynet 2003, 2004, 2005; Heger et al. 2005; Potter et al. 2012; Fuller et al. 2019; Takahashi & Langer 2021). Another branch of possibilities is relaxed, equilibrium fossil magnetic fields in the stellar radiative envelopes (e.g., Braithwaite & Spruit 2004; Braithwaite & Nordlund 2006), which are large-scale and stable over the long-term evolution (Ohmic timescale). Such fields are now routinely observed via spectropolarimetry (exploiting the Zeeman effect) in a fraction of Galactic massive stars, although no detections outside of the Galaxy have been made yet, largely due to the limitations of current instrumentation. The impact of fossil magnetic fields is far-reaching. These fields form a magnetosphere around the star, which channels the stellar outflow (ud-Doula & Owocki 2002; Owocki 2004).
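The degree to which a fossil field channels the wind is commonly quantified by the wind magnetic confinement parameter of ud-Doula & Owocki (2002), η* = B_eq² R*² / (Ṁ v∞), with η* > 1 indicating a magnetically confined wind. A sketch with illustrative O-star numbers (all parameter values below are assumptions for the example, not taken from the text):

```python
# Wind magnetic confinement parameter eta_* = B_eq^2 * R_*^2 / (Mdot * v_inf)
# (ud-Doula & Owocki 2002); eta_* > 1 -> magnetically confined wind.
# All stellar parameters below are illustrative assumptions (cgs units).

M_sun_g = 1.989e33   # g
R_sun_cm = 6.957e10  # cm
yr_s = 3.156e7       # s

def eta_star(B_eq, R_star, mdot, v_inf):
    """Dimensionless confinement parameter; cgs inputs (G, cm, g/s, cm/s)."""
    return (B_eq**2 * R_star**2) / (mdot * v_inf)

B = 1.0e3                          # G: a ~kG surface field, as quoted above
R = 10.0 * R_sun_cm                # cm
mdot = 1.0e-6 * M_sun_g / yr_s     # g/s (10^-6 M_sun/yr)
v_inf = 2.0e8                      # cm/s (2000 km/s)

print(eta_star(B, R, mdot, v_inf))  # tens: strongly confined wind
```

For these numbers η* is of order a few tens, consistent with the picture that a ∼kG field on an O star dominates its wind dynamics and enables the mass-loss quenching discussed next.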
The presence of magnetic fields can lead to two other important effects on mass loss: magnetic mass-loss quenching (reducing the mass-loss rate of the star, by up to an order of magnitude for a field of ∼ kG strength), and magnetic braking (removing angular momentum from the star and hence leading to an observable decrease of its surface rotation). Mass-loss quenching is a powerful mechanism that, independent of the metallicity, allows the star to retain most of its mass (Georgy et al. 2017; Keszthelyi et al. 2017a, 2019, 2020, 2021; Petit et al. 2017). The implementation of these processes in stellar evolution models has shown that magnetic braking very efficiently spins down the stellar surface and, depending on the internal coupling, may also produce observable surface nitrogen enrichment (Meynet et al. 2011; Keszthelyi et al. 2019, 2021), with a grid of stellar structure and evolution models available that takes account of these processes (Keszthelyi et al. 2022). Magnetic fields are thus a key component of stars. They are either built internally through dynamos or else retained as fossil fields from the time of the star's formation. While determining their presence and effect is difficult, recent advances can help us to better constrain and understand this problem.

4 External Stellar Processes: Binaries

Similar to internal processes, external processes specific to the evolution of stars in multiple systems, like tidal interactions, mass exchange, common envelope phases and stellar mergers, can also impact the evolution and feedback of the stars.
It is now established that binaries play a major role in the evolution of stellar populations (Eldridge & Stanway 2020, 2022). The majority of stars are born in binary or multiple systems, and the binary fraction increases with stellar mass (Moe & Di Stefano 2017). In addition, we now know that a significant fraction of these binaries will interact during their lifetime and initiate mass transfer, which has a significant impact on their structure and evolution (Sana et al. 2012). As a result of mass transfer, primaries can be stripped of their hydrogen envelope, which is accreted onto the secondary, spinning it up, or the system may merge. Consequently, their lifetimes and core properties change, affecting the final fate and stellar remnant. The picture is further complicated by the fact that both internal and external stellar processes, which are by themselves complex to model properly, can hardly be studied in isolation, as they all interact. For example, stellar rotation, which can affect the evolution of stars, is strongly affected by tidal interactions in close binary systems. Indeed, tides can set up exchanges between two reservoirs of angular momentum, the orbital one and the rotational one, causing the star to spin up or spin down depending on the circumstances and thus modifying the whole evolution of the two components by changing the rotation rates of the stars and the radii of their orbits. A great diversity in evolutionary histories and stellar structures, for example at the time of core collapse, can be obtained through binary evolution. Likely some of the stellar pathways made possible by binary evolution are still to be discovered. Binary evolution impacts stellar feedback in three main ways: winds, ionizing radiation and supernova rates.

4.1 Impact on stellar winds

The interstellar medium continuously receives mechanical energy and chemical feedback from the stellar winds of massive stars.
Mass transfer in a close binary system will modify the nature of the wind from both components. The stripped primary (helium star) will likely possess a faster, lower-density wind than its evolved (red supergiant) isolated counterpart, boosting the mechanical feedback. In addition, the mass-gaining secondary will usually produce a stronger wind as a result of its increased luminosity. Helium stars (WR stars at high mass) contribute considerable energy to the total energy budget of a population (Fichtner et al. 2022). By way of example, in the SMC the collective wind of one multiple system (HD 5980) dominates over hundreds of OB stars in NGC 346. Stellar populations consisting of rotating stars in binary systems give rise to strong feedback processes, specifically in low-metallicity environments.

4.2 Impact on the ionizing radiation

It is well established that the ionizing radiation from a population of exclusively single (non-rotating) stars declines rapidly once the highest-mass stars evolve off the main sequence, with a secondary (high-energy) peak coinciding with the Wolf-Rayet phase (Schmutz et al. 1992; Smith et al. 2002). Since close binary evolution is capable of stripping the primary component of its hydrogen envelope, the effect of binary evolution on the ionizing budget of young stellar populations is dramatic (Götberg et al. 2019), especially at high energies (helium-ionizing photons), and at low metallicities, for which only exceptionally massive single stars are capable of producing WR stars, whereas binary evolution leads to a prominent population of hot, stripped stars. Rosdahl et al. (2018) found that, on average, binaries lead to escape fractions of ∼7–10 percent in the early universe, about three times higher than that produced by single stars only. With such a difference in ionizing escape fractions, their simulation of binary systems gives a cosmic reionization epoch before z ∼ 7, while the single-star escape fractions are not able to reionize their simulation volumes by z ∼ 6. Observationally, these findings have major implications for linking stellar evolution to cosmological-scale feedback.

4.3 Impact on core-collapse supernovae

Binary evolution affects supernovae in three main ways: their energy budget, timing (location), and chemical yields. Zapartas et al. (2017) found that the inclusion of binaries in massive stellar systems substantially increases the number of supernovae expected among a stellar population, largely because of "late" events originating from intermediate-mass (4–8 M⊙) stars which would have otherwise evolved to white dwarfs, and whose binary interactions uniquely create the conditions for supernovae.
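An order-of-magnitude main-sequence lifetime scaling, t_MS ≈ 10 Gyr · (M/M⊙)^(−2.5) (a rough textbook relation following L ∝ M^3.5 and t ∝ M/L, not a result from the text), illustrates why intermediate-mass progenitors imply late events:

```python
# Rough main-sequence lifetime scaling: t_MS ~ 10 Gyr * (M/M_sun)^-2.5,
# from L ~ M^3.5 and t ~ M/L. Order-of-magnitude only; real lifetimes
# depend on metallicity, overshooting, rotation, etc.

def t_ms_yr(mass_msun):
    """Approximate main-sequence lifetime in years."""
    return 1.0e10 * mass_msun ** -2.5

for m in (20, 8, 4):
    print(f"{m} Msun: ~{t_ms_yr(m) / 1e6:.0f} Myr")
```

Under this crude scaling, a ∼20 M⊙ star exhausts hydrogen within a few Myr, an ∼8 M⊙ star after roughly 50 Myr, and 4 M⊙ stars live hundreds of Myr, so supernovae produced from 4–8 M⊙ stars via binary interaction are naturally delayed well past the single-star window.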
The possibility of late events affects the delay-time distribution of supernovae: the maximum time expected for a single star to go supernova is 50 Myr, but late events occur on scales of 50–200 Myr after birth. This stands in contrast with current prescriptions of supernova timing in feedback simulations, which often assume an instantaneous explosion within 50 Myr for massive stars. Similarly, more massive stars that might otherwise be expected to collapse into black holes may instead experience mass stripping and common envelope interactions that create supernova conditions on the high-mass end as well. The widened range of initial masses that can experience supernovae from binary interactions will change the range of energetics expected and the properties of the supernova progenitors (e.g., Podsiadlowski et al. 1992). Moreover, mass transfer affects the structure and chemical composition of stars (e.g., Laplace et al. 2021), ultimately changing their chemical yields. For example, Farmer et al. (2021) showed recently that at solar metallicity, binary-stripped stars can eject twice as much carbon into their surroundings as single stars. In addition, binary systems can be the progenitors of gravitational wave sources, which are responsible for enriching stars in r-process elements (Kasen et al. 2017b, see also Sect. 2.4). The supernova kick imparted at the moment of explosion of one binary component can result in a population of runaway and walkaway stars that explode in a location different from their birth environment (e.g., Renzo et al. 2019).

4.4 Impact of larger scales on binary formation

Feedback processes in galaxies are thought to affect the formation of binaries and stellar multiples, through perturbations of gas clouds, feedback from stars and magnetic fields. Turbulence injected into molecular clouds through feedback from jets, winds and ionising radiation may affect when and how stellar multiples are formed. The quantity of angular momentum in protostar formation plays an important role in the mass of the protostellar disk, with more rotation leading to a more massive disk that fragments earlier. By contrast, if more mass is concentrated at the centre of the disk, a single massive star and/or a less massive companion will form.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' UV radiation and the propagation of heavy elements can also shape the formation of protostars as well as protoplanets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Magnetic fields are important both in star-forming regions and also in stars (see Section 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content='2), and can play a role in coupling cloud scales to stellar scales.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' For example, sufficiently strong magnetic field will diminish fragmentation which then prevents but does not fully suppress binary formation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' However, due to difficulties in resolution on a cloud-scale and the cost of small-scale simulations of protostar formation, simulations have not yet converged on the role that magnetic fields play in shaping in-situ binary formation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Currently, most simulations do not generally take binary evolution into account in their feedback yields, however this is slowly changing in fields such as reionization studies (Rosdahl et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' 2018) at z > 6, but recently in lower redshift galaxies such as Fichtner et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' (2022) for a sub-L* galaxy at z = 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' 5 Varying Metallicity in our Local Group: The Effect of Z The Local Group is a complex environment with average present-day metallicities varying from ∼ 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content='2 Z⊙ in SagDIG (Saviane et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' 2002), to ∼ 2 Z⊙ in the Milky Way’s Galactic Centre (e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Nogueras-Lara et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' 2018).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Additionally, significant metallicity gradients exist within galaxies (Searle 1971;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Vila-Costas & Edmunds 1992;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Henry & Worthey 1999), including the Milky Way (e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content='g.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Lemasle et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' 2018) - by metallicity of a galaxy, we typically refer to a radially averaged quantity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Stellar evolution and small-scale feedback models usually adopt the averaged values for a given galaxy when referencing their metallicities.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Within the Local Group, there are also large differences in densities and pressures, and star-forming mechanisms and rates.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' For example, the Large Magellanic Cloud hosts a million Solar-mass starburst region in 30 Doradus (e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Doran et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' 2013), while Sextans A and the SMC appear to host isolated OB stars (Garcia et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' 2019;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Lorenzo et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' 2022).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Our local universe thus presents a useful testbed for studying how stellar feedback operates in a variety of conditions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' The role of metallicity applies to both the behaviour of stars themselves and the conditions in the gas in galaxies and hence shapes the interplay between the two (Brugaletta et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' in prep.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' ).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' In general, we assume that massive stars form with roughly the same metallicity as their local environment.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Their surface abundances over their lifetime are shaped by chemical evolution as well as mixing and other processes such as envelope self-stripping, which drastically change the feedback properties of these stars.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' 5.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content='1 Impact on Stellar Evolution and Feedback As discussed earlier, decreasing metallicity generally decreases the impact of stellar winds on an environment (Vink et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' 2001), since winds are driven by metal lines in the stellar atmosphere.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' This is largely a consequence of processes inside the star rather than the physics of the interstellar gas.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Conversely, due to reduced photon absorption in the atmosphere, the ionizing photon emission rates are typically higher at lower stellar metallicity (Martins et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' 2005).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' The effect on the gas around stars at lower metallicity is two-fold.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' The efficiency of mechanical and photoionization feedback is further enhanced by the fact that metal-line cooling in photoionized gas (Ferland 2003) and collisionally-ionized gas (Sutherland & Dopita 1993) is less efficient at low metallicity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' However, lower dust fractions mean that the strength of radiation pressure decreases (Ali 2021).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' The consequence of this on feedback depends on how these feedback processes couple, and if and when any given process dominates.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Winds and supernovae create hot X-ray emitting bubbles (106 – 108 K), while photoionized regions are heated to ∼ 104 K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' These regions co-exist within nebulae (Guedel et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' 2008), and their relative position and impact within feedback-driven nebulae remains a subject of active study.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Analysis of observations in the Galactic Centre and compact H II regions shows that dust-processed radiation pressure dominates over other processes (Barnes et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' 2020;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Olivier et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' 2021b), while in the LMC/SMC/nearby galaxies, thermal pressure from photoionized gas dominates (Lopez et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' 2014;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' McLeod et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' 2019, 2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' However, in addition to metallicity, these analyses are also affected by other environmental factors such as filling factors, ambient densities and pressures.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Similarly, thermal 11 losses are generally believed to have an important impact on wind bubbles in order to explain the missing energy in observed hot plasmas (Townsley et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' 2003;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Lopez et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' 2014).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' These thermal losses may be more affected by turbulent mixing with cold gas in the environment of the wind bubble than by metal line cooling in the wind bubbles themselves (Rosen et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' 2014;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Lancaster et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' 2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content='2 Low metallicity There remain many unknowns concerning stellar evolution in extremely low metallicity environments due to the current limited observational capabilities and uncertain numerical ingredients, even in the case of single-star models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Depending on their metallicity, stars follow different evolutionary paths, resulting in different spectral subtypes dominating the mechanical and radiative yields.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Between ∼ 1/10 Z⊙ and Z⊙, the mechanical luminosity during stellar evolution is both theoretically and observationally expected to be dominated by Wolf-Rayet stars, despite their relatively short lifetimes and rarity (Ramachandran et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' 2018a;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Fichtner et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' 2022).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Instead, the more abundant stars with initial masses in the range ∼ 10-30 M⊙ are expected to end their lives as SNe, hence dominate the mechanical luminosity after ∼ 107 yrs, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' at timescales comparable with the free-fall timescale of a young stellar cluster (Krumholz & Burkhart 2016).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' At even lower metallicities, single-star evolution and wind models are not expected to lead to the appearance of the WR phenomenon, with the evolutionary channel leading to H-depleted stars being dominated by binary interaction (Shenar et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' 2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Their lower metal content may also lead to different evolutionary pathways that are not predicted at higher metallicities.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Evolutionary models (Brott et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' 2011) predict that, at metallicities lower than 1/10 Z⊙, fast-rotating massive stars may evolve chemically homogeneously.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' In this evolutionary pathway, they can achieve temperatures hotter than the zero-age main sequence (Yoon & Langer 2005b) and generally produce ∼ 5-10 times more ionizing energy than their normally-evolving counterparts (Szécsi et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' 2015).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' The implications arising from the evidence that the majority of massive stars are in binary systems, and the lower angular momentum losses in low metallicity stellar models, are largely unconstrained.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' These effects are expected to attenuate the otherwise steeper decrease in kinetic energy feedback in the early phases of cluster formation at low metallicities (Fichtner et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' 2022).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' However, the different evolutionary pathways do not only affect the yields estimated directly from evolutionary models.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Stellar feedback, in fact, couples with the hydrodynamic evolution of the circumstellar gas.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' The slow and dense stellar outflows characteristic of cool supergiants are outside the line-driven regime and are only empirically constrained for stars in the Galactic Neighbourhood.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' It is likely that such slow gas can lead to thermal dissipation at sub-parsec scales, with a growing impact at low metallicities.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Stars close to their Eddington limit during a Luminous Blue Variable phase (LBVs) are known to lose a significant fraction of their H-rich envelope during phases of high variability (Humphreys & Davidson 1994;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Vink & Gräfener 2012).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Given the metallicity-independence of the HD limit (Davies et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' 2018;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' McDonald et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' 2022), and the higher expected number of redward-evolving stars at low-metallicities, one can expect that a larger fraction of the energy yield is dissipated well-before reaching the cluster scales (Geen et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' 2015;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Mackey et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' 2015;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Lancaster et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' 2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Any systematic estimate must overcome our inability to convincingly model important stellar evolution phases such as the LBV phase (however, see Grassitelli et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' 2021) and non-conservative mass-transfer phases in binary systems.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' 6 Stars over Cosmic Time: The Effect of z In this Section we summarise discussions concerning how stellar evolution and feedback evolve over redshift.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' We focus our discussion here on redshifts up to z ∼ 2, the peak of cosmological star formation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' There are likely to be significant differences between z ∼ 2 and very high redshift, in particular the role of the first (Population III) stars in the very early universe.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' As discussed earlier, aspects of stellar evolution such as binary evolution are likely to have a strong impact on cosmological processes such as reionization around z ∼ 6 − 11.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Typical z ∼ 2 galaxies are moderately massive, deficient in iron-peak elements albeit α/Fe enhanced (Steidel et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' 2016).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Their nebular properties are relatively hard, and individual star forming knots (from lensing studies) indicate high star-formation intensities – of order ∼ 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content='1M⊙/yr within a region of a few hundred parsecs (Jones et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' 2010;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Livermore et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' 2015).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Within the Local Group, only 30 Doradus (Tarantula Nebula) in the LMC displays such properties, albeit with a higher metallicity of ∼ 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content='5Z⊙ (Crowther 2019).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' 12 6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content='1 Star formation at low redshift (z ∼ 0 − 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content='3) Within the Local Group, where individual massive stars can generally be well spatially resolved, there are only a small number of actively star-forming galaxies whose current metallicity is ≤ 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content='2Z⊙, including the SMC, NGC 3109, IC 1613, Sextans A, WLM.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Of these, the SMC has the highest star formation rate (Kennicutt et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' 2008), so is host to several hundred O stars, albeit with only a few dozen above 40 M⊙ (Schootemeijer et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' 2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Sextans A has an even lower metallicity (van Zee & Haynes 2006) though also a lower star formation rate.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' In the context of star-forming knots at high redshift, these are modest, since such region will host thousands of O stars, hundreds of which are expected to exceed 40–50 M⊙.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' The SMC and Sextans A therefore provide our only direct route to studying the evolution of massive stars at 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content='1-0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content='2 Z⊙, except at the highest masses, which are poorly sampled due to stochasticity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Sub-grid models employed in galaxy simulations (IMF, stellar models) are mainly constrained by local observations and then applied to simulations at high-z, or rely on theoretical predictions for low metallicity stars.' 
Metal poor massive stellar populations beyond the Local Group have been studied via integrated stellar populations, with the supergiant HII region Mrk 71 within NGC 2366 at 3 Mpc a striking example since it hosts massive super star clusters and has a metallicity of ∼0.15 Z⊙ (Gonzalez-Delgado et al. 1994; Micheva et al. 2017). This allows very massive metal poor stars to be observed at low metallicity, albeit in an integrated stellar population. In particular, UV spectroscopy of the very young super star cluster Mrk 71-A with HST reveals strong HeII 1640 emission, providing a direct indicator of the presence of very massive stars (LJ Smith, priv. comm.). Mrk 71 is also notable in having evidence of leaking Lyman continuum photons (Micheva et al. 2017).
A sizeable population of Green Pea (GP) galaxies has been identified from SDSS observations whose properties overlap with high-redshift galaxies: both are metal-poor and possess high specific star formation rates plus hard nebular conditions in the BPT diagram (Cardamone et al. 2009), with direct evidence for Lyman continuum leakage in some instances (Izotov et al. 2016) and an excess of soft X-ray emission (Franeck et al. 2022). In addition, there are examples of very metal-poor star forming galaxies locally with metallicities of only a few percent of the Solar Neighbourhood (I Zw 18, SBS 0335; Lequeux et al. 1979; Izotov et al. 1990) which are potential analogues of star-forming galaxies in the very early Universe. Madau & Dickinson (2014) present the evolution of the average metal content of the Universe through its history (their Fig. 14). For example, the metallicity of Sextans A (1/10 Z⊙) equates to ∼4 Gyr after the Big Bang.
6.2 Star formation at z ∼ 2
Overall, whilst there are some commonalities between metal-poor star forming regions locally and those at high redshift, some key differences remain, including composition (Fe-poor, α-enhanced; Steidel et al. 2016) and higher specific star formation intensities potentially impacting the IMF and close binary fraction. Moreover, even if the mass and metallicity of a galaxy are the same at high and low redshift, the environment, gas accretion and merger rate, and AGN activity will differ. It is speculated that old galactic globular clusters (GCs) in particular are born as Young Massive Clusters (YMCs; Portegies Zwart et al. 2010) from an α-enhanced composition, with a first generation of metal-poor massive and intermediate-mass stars present (Bastian & Lardo 2018) which could have contributed to the present-day chemical composition of the clusters (de Mink et al. 2009; Szécsi et al. 2018; Szécsi & Wünsch 2019).
Regarding future prospects, efforts have recently been made to build extensive spectroscopic catalogues of massive stars in Local Group dwarf galaxies with sub-SMC metallicities (Lorenzo et al. 2022). These catalogues will yield a proper characterization of the physical parameters of metal-poor massive stars and will allow stellar evolutionary models to be corrected. By introducing their physical properties as inputs to photoionization codes (e.g. CLOUDY; Ferland et al. 1998), we will be able to study the conditions of their surrounding interstellar medium and understand the stellar feedback of these metal-poor massive stars. Studying this interplay between individual massive stars and their surrounding interstellar medium in metal-poor environments can help us interpret the observations of high-z galaxies and even estimate the amount of ionizing photons that dwarf galaxies contributed to the reionization of the Universe.
7 From Star-by-Star Studies to IMF Averages and Population Synthesis
The sources of feedback energy from massive stars – their ionizing photon flux, the momentum carried by their stellar winds, and their ultimate fate as supernovae – all depend strongly on the detailed physics of stellar evolution.
Without a clear understanding of the physical processes involved in the lives and deaths of massive stars, we cannot understand the ultimate impact of stellar feedback on galaxies. Despite the urgency of this question, many theoretical studies of galaxy evolution make use of heavily simplified assumptions of how massive stars evolve. How can we translate the best current understanding of stellar evolution into a better foundation for theoretical models of galaxy formation?
Stellar feedback in galaxies has been invoked as a mechanism to control the galactic star formation rate, the growth of spheroids, and the baryon and metal content of galaxy discs, among other galaxy-scale properties. Energy and momentum injected by massive stars can destroy star-forming clouds before they can convert the bulk of their gas into stars, and ultimately drive powerful galactic winds that remove baryons from the disc. Capturing these processes, either in semi-analytic models or hydrodynamic simulations, must begin with a robust budget (and timeline) of the relevant energy sources.
7.1 What Matters at the scale of Galaxies?
Broadly speaking, the primary physical process that makes galaxies "care" about the stellar populations they contain is feedback. Galaxy-scale feedback is generally considered to be negative, with stellar feedback limiting galactic star formation by injecting turbulence (e.g. Padoan et al. 2016), driving galactic outflows (e.g. Larson 1974), or destroying star-forming molecular clouds (e.g. Chevance et al. 2022). In addition to the energy and momentum that stellar populations inject into their surroundings, the mass loss of stars can also pollute the interstellar medium (ISM) with metals produced in those stars, increasing the cooling rate of this gas and acting as a form of positive feedback (Hirschmann et al. 2013). Thus, the stellar physics that determines the energy and momentum of stellar winds, SN explosions, and UV radiation all acts to change the impact of stellar feedback on the scale of galaxies.
For all but the smallest galaxies, the stellar populations driving feedback comprise tens of thousands or more stars. In addition, simulations of galaxies typically cannot resolve individual stars except in the smallest, most isolated systems. Thus, the primary questions that galactic astrophysicists have for stellar astrophysicists come down to integrated or population-averaged quantities.
Simulations of galaxies may include supernovae, stellar winds, or UV feedback (or any combination of these). What is needed are mass loss, energy and momentum injection, and UV photon production rates as a function of time (in other words, yields of each of these quantities). A detailed study of an individual star will not alone suffice for this: what is needed is an understanding of a fully-sampled IMF. As the small-scale environment of individual stars is unknown and unresolved in these simulations, the only dependencies of these quantities that can be probed are ones which are again population averaged, such as the birth metallicity (Badenes et al. 2018) or ISM density (Chabrier et al. 2014). The tool typically used to determine the population-averaged yields needed for galaxy simulations is population synthesis.
7.2 Population Synthesis and Simple Stellar Populations
No matter whether galaxies are modelled using analytic approximations, semi-analytic models, or full hydrodynamic simulations, the phenomena occurring inside and around individual stars necessarily must be averaged across large numbers (10^3 - 10^7) of stars. Historically, this has been done through the use of population synthesis of Simple/Single Stellar Populations (SSPs). SSPs are groups of stars, sampled from a given IMF (e.g. Leitherer et al. 1999), that are assumed to have been born at a fixed time, with identical chemical properties. Population synthesis models allow simulation codes to determine, as a function of time, the yields of mass, metals, and energy produced by the individual star particles within those simulations (or from an assumed population in an analytic or semi-analytic model).
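The IMF-averaging step described above can be made concrete with a toy sketch. The snippet below draws a stellar population from a Salpeter power-law IMF by inverse-transform sampling and computes two population-averaged numbers a galaxy code might tabulate. This is a minimal illustration, not a population synthesis model: the mass limits, slope, and the 8 M⊙ core-collapse threshold are standard textbook choices rather than values from this text.

```python
import random

def sample_salpeter(n, m_lo=0.1, m_hi=100.0, alpha=2.35, seed=0):
    """Draw n stellar masses (Msun) from a Salpeter power-law IMF,
    dN/dM proportional to M^-alpha, via inverse-transform sampling."""
    rng = random.Random(seed)
    k = 1.0 - alpha  # exponent appearing in the integrated (cumulative) IMF
    a, b = m_lo**k, m_hi**k
    return [(a + rng.random() * (b - a)) ** (1.0 / k) for _ in range(n)]

def ssp_summary(masses):
    """Population-averaged quantities for one SSP: total stellar mass,
    and the fraction of that mass in stars above 8 Msun (the nominal
    progenitors of core-collapse supernovae)."""
    total = sum(masses)
    massive = sum(m for m in masses if m >= 8.0)
    return total, massive / total

masses = sample_salpeter(100_000)
total, f_massive = ssp_summary(masses)
```

A real population synthesis code would attach time-dependent yields (ionizing photons, wind momentum, supernova energy) to each sampled star; the point here is only that galaxy-scale inputs are sums over the IMF, not properties of any single star.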
Typically, this is done via either tabulated outputs from a population synthesis code (e.g. Leitherer et al. 1999; da Silva et al. 2012), or through analytic functions fit to these yields. While this hides much of the stellar physics involved in producing these yields "under the hood" of the population synthesis model, it offers the opportunity to easily incorporate a more sophisticated model of stellar evolution without significant work required to re-design galaxy simulation codes.
8 Connecting Theory and Observations
Theoretical approaches such as simulations are essential in astrophysics since laboratory experiments of most astronomical phenomena are impossible.
Using theoretical results to inform observational results requires the creation of "synthetic" observations: mock observational results generated using simulated inputs. These can take the form of simulated stellar spectra, multi-wavelength gas emission maps, mock galaxy catalogues, and more. This process is important both for observers, who may wish to understand the systems they observe with full 3D and time information, and theorists who wish to better constrain their models. Creating mock observations is a complex process with many steps that must be treated properly to produce accurate results. This is a subject that has been widely discussed on various scales, from the regions around stars (see the review by Haworth et al. 2018) to cosmological galaxy formation (e.g. Guidi et al. 2015). There are various hurdles relevant to stellar evolution and feedback that must be overcome if we are to close the gap between observed systems and theoretical predictions for how they behave.
One key issue is ensuring that the physical structure of the observed system is realistic. This is highly affected by stellar feedback on all scales, which in turn is affected by the details of (massive) stellar evolution, as discussed in previous Sections. Conversely, with accurate theoretical models, it may be possible to use observations of feedback-driven structures as archaeological tools to inform studies of how stars evolve.
The motion of interstellar gas is chaotic, since it requires solutions to the coupled non-linear equations for (radiative magneto)hydrodynamics and N-body gravitation. This means that small perturbations to the early state of the cloud, such as initial seed turbulence or differences in stellar output, can have large cumulative effects on the later evolution of astrophysical systems.
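A minimal version of the projection step behind such synthetic emission maps is the formal solution of radiative transfer along each line of sight. The sketch below integrates emission with foreground absorption through a 1D column of cells; the grid values are purely illustrative, and scattering is ignored.

```python
import math

def line_of_sight_intensity(emissivity, opacity, ds):
    """Formal solution of radiative transfer along one ray through a
    column of cells of width ds (observer at index 0, no scattering):
    each cell contributes j*ds attenuated by exp(-tau), where tau is
    the optical depth accumulated between the cell and the observer."""
    intensity, tau = 0.0, 0.0
    for j, kappa in zip(emissivity, opacity):
        intensity += j * ds * math.exp(-tau)
        tau += kappa * ds
    return intensity

# Illustrative column: uniform emission, absorption growing with depth.
j = [1.0] * 50
kappa = [0.02 * i for i in range(50)]
intensity = line_of_sight_intensity(j, kappa, ds=0.1)
```

Repeating this for every pixel of a simulated cube, at every wavelength of interest, is what makes full synthetic observations costly; production pipelines add frequency-dependent opacities, dust, and instrument response on top of this kernel.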
The variance arising from differences in stellar input and initial gas properties has been explored in star-forming regions (Geen et al. 2018) and galaxies (Keller & Kruijssen 2022). Some linear response and mitigation of sampling errors is recoverable using statistical analysis and comparisons of large catalogues of both simulations and observations (Eadie et al. 2018). However, the physical divergence of solutions to sets of non-linear equations over time remains a serious concern in reproducing astronomical phenomena using simulations.
Simulations will often necessarily simplify or omit certain details of real-world physics for the sake of producing computationally feasible or reducible results. Some models assume 1D or 2D geometries with symmetry in the other dimensions, or ignore effects such as (non-)ideal magnetohydrodynamics, gas chemistry, thermal conduction, etc.
Choices concerning simulated system size and resolution must also be made. Many of these assumptions may be reasonable and lead to minimal impact on the end result (e.g. through convergence in simulation resolution), but it is often hard to determine whether this is true without access to more expensive, physically complete simulations.
Finally, the emission and absorption properties of stars and interstellar gas are complex, but must nonetheless be reproduced in detail if we wish to create accurate synthetic observations. This may be relatively simple for low-opacity systems with well-understood stellar populations, but becomes complex in other, more general cases. Efforts have begun to connect the actions of stars to the emission properties of interstellar nebulae (see, e.g. Pellegrini et al. 2020). However, the problem remains a difficult and costly one. A solution requires a good understanding of stellar evolution, feedback physics, and gas microphysics and chemistry, all operating together over the lifetime of a system.
One mitigation of these problems may be found in posing questions in a way that reduces the impact of some of the uncertainties given above. Rather than producing a 1:1 comparison of individual objects, we may instead seek an interval of validity, that is to say, a set of possibilities informed by simulations that constrain certain parameters. Public data availability through standard databases would assist in this by allowing simulators and observers to access large quantities of relevant information, provided the limitations of the simulations and observations within the databases (e.g. resolution limits, systematic errors, or important physical choices) are properly understood by the user. To ensure that the interval of validity and limitations are properly understood, increased collaboration between observers and simulators in the near future will be helpful.
9 Conclusions
The interplay between stars and their environment (termed "stellar feedback") is a long-standing problem that is nonetheless still the subject of active study. These questions remain open for numerous reasons, relating to the complexity both of large-scale astrophysical gas dynamics and of the evolution of stars, individually and in multiple stellar systems. The outcome of the workshop was to identify a wide-ranging set of points of interaction between massive stars and the gas in galaxies, from the scale of protostellar disks to cosmological scales. In addition, the workshop highlighted the need for detailed discussions between researchers working on different aspects of both stellar evolution and feedback.
For example, bridging the scales of molecular clouds and galaxies is important in tracking how the impact of massive stellar evolution is felt on (cosmological) galaxy scales. Much of this work is concerned with providing an inventory of the variables and unknowns affecting each field and how they relate to each other. For example, metallicity plays an important role both in the wind and radiation outputs from massive stars and in the impact these processes have on the gas in galaxies through radiative cooling efficiencies. We provide detailed discussion of both the theoretical and observed behaviour of stars and gas at different metallicities, using our local galactic environment and higher redshift galaxies as observational examples of this. Meanwhile, there remain strong uncertainties in the budget of mass, energy and chemical enrichment from winds, radiation and supernovae at different metallicities, including whether certain stars become supernovae at all ("islands of explodability").
We discuss the effects governing stellar evolution, including both internal effects such as mixing and magnetic fields, and external effects such as interaction with companion stars, and how this shapes feedback. Determining the internal structure of stars remains difficult, although there are promising techniques for doing so using asteroseismology and comparison with theory, which in turn offers the ability to constrain a new generation of theoretical stellar evolution models. The evolution of multiple stellar systems greatly complicates the evolutionary path of massive stars. Nonetheless, understanding stellar multiples remains crucial, not only because a large fraction, or even the majority, of massive stars are in binaries, but also because interacting binaries drastically change the feedback properties of massive stars, both before and after the stars go supernova. This in turn can even influence how cosmological processes such as reionization occur. We note that it is important to understand not just the action of individual stars or binary systems, but how feedback from stars combines in populations in galaxies.
This in turn is important for determining what we can learn about individual stars when observing distant galaxies in which individual stars cannot be resolved. Finally, we discuss efforts to compare theory and observations in detail. This remains a difficult task, since modelling the spectral emission from the atmospheres of stars, as well as from (photo- and collisionally-)ionized gas, is non-trivial, although software tools are now able to perform this task. More worryingly, as (astrophysical) fluids evolve non-linearly and precise information about the initial state of an observed system is often difficult to obtain, direct one-to-one comparison is often challenging or impossible, and we must often rely on statistical comparisons. Overall, we believe that this is an exciting time to widen discussions between workers in the fields of stellar evolution and feedback, with advances in theory and observations in both fields allowing great improvements in our understanding of astrophysics, both from the point of view of the birth and evolution of stars in a galactic context, and from an inventory of how energy propagates from stars to shape local star formation, whole galaxies, and the wider universe.
10 Acknowledgements
We would like to thank the anonymous referee for their work in improving the quality of the manuscript. The workshop on which this manuscript is based was made possible thanks to the logistical and financial support of the Lorentz Center, Leiden, Netherlands. This funding is made available by Leiden University and the Dutch Science Foundation (NWO). The workshop was further supported by a NOVA grant for Star Formation, which SG also acknowledges as support. SG further acknowledges support from a Spinoza award of the NWO for research on the physics and chemistry of the interstellar medium. This research was partly funded by the National Science Center (NCN), Poland under grant number OPUS 2021/41/B/ST9/00757. Y.A.F. and E.R.D. acknowledge support from Collaborative Research Center 956, sub-project C4, funded by the Deutsche Forschungsgemeinschaft (DFG) – project ID 184018867. Y.A.F. was supported by the International Max Planck Research School in Astronomy and Astrophysics. SR acknowledges funding from the European Research Council Horizon 2020 research and innovation programme (Grant No. 833925, project STAREX). H.S. and D.Sz. were supported by the Alexander von Humboldt Foundation. R.S. was funded in part by the National Science Center (NCN), Poland under grant number OPUS 2021/41/B/ST9/00757. For the purpose of Open Access, the author has applied a CC-BY public copyright license to any Author Accepted Manuscript (AAM) version arising from this submission. M.T. acknowledges support from the NWO grant 0.16.VIDI.189.162 ("ODIN"). A.A.C.S. and V.R. are supported by the Deutsche Forschungsgemeinschaft (DFG – German Research Foundation) in the form of an Emmy Noether Research Group – Project-ID 445674056 (SA4064/1-1, PI Sander). M.L. gratefully acknowledges support by grants PID2019-105552RB-C41 and MDM-2017-0737 Unidad de Excelencia "María de Maeztu"–Centro de Astrobiología (CSIC-INTA), funded by MCIN/AEI/10.13039/501100011033 and "ESF Investing in your future".
Contact:
Name: Sam Geen
Institution: (1) Anton Pannekoek Institute for Astronomy, University of Amsterdam, 1098 XH Amsterdam, The Netherlands; (2) Leiden Observatory, Leiden University, PO Box 9513, 2300 RA Leiden, Netherlands
Email: s.t.geen@uva.nl
Full list of institutions:
1 Anton Pannekoek Institute for Astronomy, Universiteit van Amsterdam, Science Park 904, 1098 XH Amsterdam, Netherlands
2 Leiden Observatory, Leiden University, PO Box 9513, 2300 RA Leiden, Netherlands
3 McWilliams Center for Cosmology, Department of Physics, Carnegie Mellon University, Pittsburgh, PA 15213, USA
4 Physics & Astronomy, University of Sheffield, Hounsfield Road, Sheffield, S3 7RH, United Kingdom
5 Department of Physics and Material Science, The University of Memphis, Memphis, TN 38152, USA
6 Institute of Astronomy, KU Leuven, Celestijnenlaan 200D, 3001 Leuven, Belgium
7 Center for Computational Astrophysics, Division of Science, National Astronomical Observatory of Japan, 2-21-1, Osawa, Mitaka, Tokyo 181-8588, Japan
8 Cardiff Hub for Astrophysics Research and Technology, School of Physics and Astronomy, Cardiff University, Queen's Buildings, The Parade, Cardiff CF24 3AA, UK
9 Department of Physics and Astronomy, University of Exeter, Stocker Road, Exeter EX4 4QL, United Kingdom
10 I. Physikalisches Institut, Universität zu Köln, Zülpicher Str. 77, 50937 Cologne, Germany
11 Department of Astronomy, University of Geneva, Chemin Pegasi 51, 1290 Versoix, Switzerland
12 Argelander-Institut für Astronomie, Universität Bonn, Auf dem Hügel 71, D-53121 Bonn, Germany
13 Armagh Observatory & Planetarium, College Hill, Armagh, BT619DG, United Kingdom
14 Heidelberger Institut für Theoretische Studien, Schloss-Wolfsbrunnenweg 35, 69118 Heidelberg, Germany
15 Centro de Astrobiología, CSIC-INTA, Crtra. de Torrejón a Ajalvir km 4, 28850 Torrejón de Ardoz (Madrid), Spain
16 Centre for Extragalactic Astronomy, Department of Physics, Durham University, South Road, Durham DH1 3LE, United Kingdom
17 Institute for Computational Cosmology, Department of Physics, University of Durham, South Road, Durham DH1 3LE, United Kingdom
18 Institute for Astronomy and Astrophysics, University of Tübingen, Auf der Morgenstelle 10, 72076 Tübingen, Germany
19 Zentrum für Astronomie der Universität Heidelberg, Astronomisches Rechen-Institut, Mönchhofstr. 12-14, 69120 Heidelberg, Germany
20 Sub-department of Astrophysics, University of Oxford, DWB, Keble Road, Oxford OX1 3RH, United Kingdom
21 Institute of Astronomy, Faculty of Physics, Astronomy and Informatics, Nicolaus Copernicus University, Grudziądzka 5, 87-100 Toruń, Poland
22 Kapteyn Astronomical Institute, University of Groningen, P.O. Box 800, 9700 AV Groningen, Netherlands
23 The Observatories of the Carnegie Institution for Science, 813 Santa Barbara Street, CA-91101 Pasadena, USA
24 SOFIA Science Center, USRA, NASA Ames Research Center, Moffett Field, CA 94045, USA
25 Las Cumbres Observatory, 6740 Cortona Dr, Suite 102, Goleta, CA 93117-5575, USA
26 Department of Physics, University of California, Santa Barbara, CA 93106-9530, USA
27 Departamento de Física Teórica, Universidad Autónoma de Madrid (UAM), Campus de Cantoblanco, E-28049 Madrid, Spain
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Simón-Díaz S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Schneider F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Izzard R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2014, A&A, 570, L13 Chabrier G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Hennebelle P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Charlot S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2014, ApJ, 796, 75 Chen H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content='-L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Woods T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' E.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Yungelson L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Gilfanov M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Han Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2015, MNRAS, 453, 3024 Chevance M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2022, MNRAS, 509, 272 Cranmer S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Owocki S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 1995, ApJ, 440, 308 Crowther P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2019, Galaxies, 7, 88 Crowther P.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2016, Monthly Notices of the Royal Astronomical Society, 458, 624 Davies B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Vink J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Oudmaijer R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2007, A&A, 469, 1045 Davies B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Crowther P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Beasor E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2018, MNRAS, 478, 3138 Dayal P.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2020, MNRAS, 495, 3065 Dessart L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Hillier D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Waldman R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Livne E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2013, MNRAS, 433, 1745 Dobbs C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Bending T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Pettitt A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' R.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Bate M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2022, MNRAS, 509, 954 Doran E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2013, A&A, 558, A134 Eadie G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Keller B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Harris W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2018, ApJ, 865, 72 Eide M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Graziani L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Ciardi B.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Feng Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Kakiichi K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Di Matteo T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2018, MNRAS, 476, 1174 Eldridge J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Stanway E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2020, arXiv e-prints, p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' arXiv:2005.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content='11883 Eldridge J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Stanway E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' R.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2022, arXiv e-prints, p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' arXiv:2202.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content='01413 18 Emerick A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Bryan G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Mac Low M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content='-M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2019, MNRAS, 482, 1304 Farmer R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Laplace E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', de Mink S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Justham S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2021, ApJ, 923, 214 Faucher-Giguère C.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content='-A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2020, MNRAS, 493, 1614 Federrath C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Schrön M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Banerjee R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Klessen R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2014, ApJ, 790, 128 Ferland G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2003, Annual Review of Astronomy and Astrophysics, 41, 517 Ferland G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Korista K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Verner D.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Ferguson J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Kingdon J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Verner E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 1998, PASP, 110, 761 Fichtner Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Grassitelli L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Romano-Díaz E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Porciani C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2022, MNRAS, 512, 4573 Finlay P.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2012, Phys.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Rev.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' C, 85, 055501 Franeck A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Wünsch R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Martínez-González S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Orlitová I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Boorman P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Svoboda J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Szécsi D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Douna V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2022, ApJ, 927, 212 Fuller J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Piro A.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Jermyn A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2019, MNRAS, 485, 3661 Gal-Yam A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Leonard D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2009, Nature, 458, 865 Garcia M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Herrero A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Najarro F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Camacho I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Lorenzo M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2019, MNRAS, 484, 422 Geen S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Rosdahl J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Blaizot J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Devriendt J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Slyz A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2015, Monthly Notices of the Royal Astronomical Society, 448, 3248 Geen S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Watson S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Rosdahl J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Bieri R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Klessen R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Hennebelle P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2018, MNRAS, 481, 2548 Georgy C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Meynet G.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Ekström S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Wade G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Petit V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Keszthelyi Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Hirschi R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2017, A&A, 599, L5 Gerke J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Kochanek C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Stanek K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2015, MNRAS, 450, 3289 Gonzalez-Delgado R.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2018, A&A, 618, A160 Lequeux J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Peimbert M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Rayo J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Serrano A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Torres-Peimbert S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 1979, A&A, 80, 155 Livermore R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2015, MNRAS, 450, 1812 Lopez L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Krumholz M.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Bolatto A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Prochaska J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Ramirez-Ruiz E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Castro D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2014, ApJ, 795, 121 Lorenzo M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Garcia M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Najarro F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Herrero A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Cerviño M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Castro N.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2022, MNRAS, 516, 4164 Lovegrove E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Woosley S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2013, ApJ, 769, 109 Lucas W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Bonnell I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Dale J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2020, MNRAS, 493, 4700 Mackey J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Castro N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Fossati L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Langer N.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2015, A&A, 582, A24 Madau P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Dickinson M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2014, ARA&A, 52, 415 Maeder A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 1987, A&A, 178, 159 Maeder A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2009, Physics, Formation and Evolution of Rotating Stars.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Springer Berlin Heidelberg, doi:10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content='1007/978-3-540-76949-1 Maeder A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Meynet G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2000, ARA&A, 38, 143 Maeder A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Meynet G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2003, A&A, 411, 543 Maeder A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Meynet G.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2004, A&A, 422, 225 Maeder A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Meynet G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2005, A&A, 440, 1041 Martínez-González S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Wünsch R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Tenorio-Tagle G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Silich S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Szécsi D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Palouš J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2022, ApJ, 934, 51 Martins F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Schaerer D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Hillier D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Meynadier F.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Heydari-Malayeri M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Walborn N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2005, A&A, 441, 735 Mathews W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Baker J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 1971, ApJ, 170, 241 McCray R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Snow T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 1979, ARA&A, 17, 213 McDonald S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' L.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Davies B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Beasor E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2022, MNRAS, 510, 3132 McKee C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Ostriker J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 1977, ApJ, 218, 148 McLeod A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Dale J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Evans C.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Ginsburg A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Kruijssen J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Pellegrini E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Ramsay S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Testi L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2019, MNRAS, 486, 5263 McLeod A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2020, ApJ, 891, 25 McLeod A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2021, MNRAS, 508, 5425 Meynet G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Eggenberger P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Maeder A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2011, A&A, 525, L11 Meynet G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2015, A&A, 575, A60 Micheva G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Oey M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Jaskot A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' E.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', James B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2017, ApJ, 845, 165 Moe M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Di Stefano R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2017, ApJS, 230, 15 Montargès M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2021, Nature, 594, 365 21 Müller B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2019, MNRAS, 484, 3307 Naab T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Ostriker J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2017, ARA&A, 55, 59 Nogueras-Lara F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', et al.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2018, A&A, 620, A83 Olivier G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Berg D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Chisholm J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Erb D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Pogge R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Skillman E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2021a, arXiv e-prints, p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' arXiv:2109.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content='06725 Olivier G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Lopez L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Rosen A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Nayak O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Reiter M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Krumholz M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Bolatto A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2021b, ApJ, 908, 68 Oskinova L.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Schaerer D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2022, arXiv e-prints, p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' arXiv:2203.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content='04987 Owocki S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2004, in Maeder A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Eenens P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', eds, IAU Symposium Vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' 215, Stellar Rotation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' 515 Padoan P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Pan L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Haugbølle T.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Renzo M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Götberg Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Burrows A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', de Mink S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2021, ApJ, 916, L5 Verliat A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Hennebelle P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', González M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Lee Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content='-N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Geen S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2022, A&A, 663, A6 Vila-Costas M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' B.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Edmunds M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 1992, MNRAS, 259, 121 Vink J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2022, ARA&A, p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' arXiv:2109.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content='08164 23 Vink J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Gräfener G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2012, ApJ, 751, L34 Vink J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', de Koter A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Lamers H.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2001, Astronomy and Astrophysics, 369, 574 Vink J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Brott I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Gräfener G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Langer N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', de Koter A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Lennon D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2010, A&A, 512, L7 Vink J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Higgins E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Sander A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Sabhahit G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2021, MNRAS, 504, 146 Weaver R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', McCray R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Castor J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Shapiro P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Moore R.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 1977, The Astrophysical Journal, 218, 377 White R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Long K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 1991, ApJ, 373, 543 Woosley S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Heger A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2006, ApJ, 637, 914 Worseck G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Prochaska J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Hennawi J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', McQuinn M.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2016, ApJ, 825, 144 Yoon S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Langer N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2005a, A&A, 435, 967 Yoon S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Langer N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2005b, A&A, 443, 643 Yung L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Somerville R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Finkelstein S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' L.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Popping G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Davé R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Venkatesan A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Behroozi P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Ferguson H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2020, MNRAS, 496, 4574 Zapartas E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2017, A&A, 601, A29 da Silva R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Fumagalli M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Krumholz M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2012, ApJ, 745, 145 de Mink S.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Pols O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Langer N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Izzard R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2009, A&A, 507, L1 ud-Doula A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Owocki S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', 2002, ApJ, 576, 413 van Zee L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=', Haynes M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4NFRT4oBgHgl3EQfozeT/content/2301.13611v1.pdf'} +page_content=' P.' 
diff --git a/5tAyT4oBgHgl3EQfcfeN/content/tmp_files/2301.00284v1.pdf.txt b/5tAyT4oBgHgl3EQfcfeN/content/tmp_files/2301.00284v1.pdf.txt
new file mode 100644
index 0000000000000000000000000000000000000000..190123f5ab37d05742a022c642a1afc5ba62c3c6
--- /dev/null
+++ b/5tAyT4oBgHgl3EQfcfeN/content/tmp_files/2301.00284v1.pdf.txt
arXiv:2301.00284v1 [math.DG] 31 Dec 2022

SQUARE ROOT NORMAL FIELDS FOR LIPSCHITZ SURFACES AND THE WASSERSTEIN FISHER RAO METRIC

EMMANUEL HARTMAN∗, MARTIN BAUER†, AND ERIC KLASSEN‡

Abstract. The Square Root Normal Field (SRNF) framework is a method in the area of shape analysis that defines a (pseudo-)distance between unparametrized surfaces.
For piecewise linear (PL) surfaces it was recently proved that the SRNF distance between unparametrized surfaces is equivalent to the Wasserstein Fisher Rao (WFR) metric on the space of finitely supported measures on S^2. In the present article we extend this point of view to a much larger set of surfaces; we show that the SRNF distance on the space of Lipschitz surfaces is equivalent to the WFR distance between Borel measures on S^2. For the space of spherical surfaces this result directly allows us to characterize the non-injectivity and the (closure of the) image of the SRNF transform. In the last part of the paper we further generalize this result by showing that the WFR metric for general measure spaces can be interpreted as an optimization problem over the diffeomorphism group of an independent background space.

1. Introduction. The investigations of this article are motivated by applications in the area of mathematical shape analysis, which seeks to quantify differences, perform classification, and explain variability for populations of shapes [51, 40, 13, 28]. More specifically, the results of this article concern the Square Root Normal Field distance [16] on the space of surfaces and the Wasserstein Fisher Rao metric [9, 26] from unbalanced optimal transport. Before we describe the contributions of the current work in more detail, we will briefly summarize some results from these two areas.

Shape analysis of surfaces: For the purpose of this article we consider a shape to be a parametrized surface or curve in R^d, where we identify two objects if they only differ by a translation and/or a reparametrization. In practice, it is often of interest to mod out by further shape preserving group actions, such as the groups of rotations or scalings. To keep the presentation simple, we will ignore these additional finite dimensional groups.
Consequently, the resulting shape space is an infinite dimensional, non-linear (quotient) space, which makes the application of statistical techniques to analyse these types of data a highly challenging task. A common approach to overcome these difficulties can be found in the area of geometric statistics [35, 36], in which one develops statistical frameworks based on (Riemannian) geometry. In the context of shape analysis of surfaces or curves, a variety of different metrics have been proposed for this purpose; this includes metrics induced by (right-invariant) metrics on diffeomorphism groups [51, 31] and reparametrization invariant metrics on the space of immersions [40, 3, 30], which are directly related to the investigations of the present article as we will explain next.

In the latter approach the calculation of the distance (similarity) between two shapes reduces to two tasks: calculating the geodesic distance on the space of immersions (parametrized surfaces or curves, resp.) and minimizing over the action of the shape preserving group actions, i.e., diffeomorphisms of the parameter space and translations in R^d. In general there do not exist any explicit formulas for geodesics, and thus computing solutions to the geodesic boundary value problem (and thus of the distance) is a highly non-trivial task that usually has to be solved using numerical optimization techniques; see e.g. [14, 2].

For specific examples of Riemannian metrics, however, simplifying transformations have been developed that allow for explicit calculations of geodesics and geodesic distance. This includes in particular the family of G^{a,b}-metrics on the space of curves [5, 34, 33, 50], a family of first order Sobolev type metrics that are often called elastic metrics due to their connections to linear elasticity theory; see e.g. [33, 8, 5].
For the specific choice of parameters a = 1, b = 1/2 the corresponding transformation is the so-called Square-Root-Velocity (SRV) transform [39], which is widely used in applications; see [40] and the references therein. The advantage of this transformation is that it reduces the shape comparison problem to a single optimization over the shape preserving group actions, i.e., in the setting of the present article over reparametrizations and translations. This computational simplification has led to both the development of efficient algorithms [49, 12, 39] and to analytic results on existence of minimizers and optimal parametrizations [7, 24, 44].

∗Department of Mathematics, Florida State University (ehartman@fsu.edu)
†Department of Mathematics, Florida State University and University of Vienna (bauer@math.fsu.edu)
‡Department of Mathematics, Florida State University (klassen@math.fsu.edu)

The family of elastic G^{a,b} metrics has a natural generalization to a four parameter family of metrics on the space of surfaces [42]. Similarly to the case of curves, simplifying transformations have also been proposed in this more complicated situation [19, 20, 16, 41]. Notably, as a generalization of the SRV transform, the Square Root Normal Field (SRNF) transformation [16] has been introduced. In contrast to the situation for curves, the corresponding Riemannian metric for this transformation is degenerate and, furthermore, it only leads to a first order approximation of the geodesic distance. Nonetheless it defines a reparametrization invariant (pseudo-)distance on the space of surfaces, which still allows for efficient computations using several methods of approximating the optimization over the diffeomorphism group [23, 4] and has proven successful in several applications; see [21, 17, 29, 22] and the references therein.
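To make the SRV transform discussed above concrete, here is a minimal numerical sketch for discretized plane curves. The uniform parameter spacing, the finite-difference derivative, and the function names are our own simplifying assumptions (this is not code from the paper), and no optimization over reparametrizations is performed:

```python
import numpy as np

def srv_transform(curve):
    """Discrete Square-Root-Velocity transform of a sampled curve.

    curve: (n, d) array of points c(t_0), ..., c(t_{n-1}) sampled at uniform
    parameter spacing on [0, 1].  Returns the (n-1, d) array of values
    q_i = c'(t_i) / sqrt(|c'(t_i)|), with c' approximated by finite differences.
    """
    dc = np.diff(curve, axis=0) * (len(curve) - 1)   # approximate c'(t)
    speed = np.linalg.norm(dc, axis=1)
    return dc / np.sqrt(speed)[:, None]

def srv_distance(c1, c2):
    """First-order elastic (a = 1, b = 1/2) distance between two curves
    sampled on the same uniform grid, i.e. the discrete L2 norm of q1 - q2."""
    q1, q2 = srv_transform(c1), srv_transform(c2)
    n = len(c1) - 1
    return np.sqrt(np.sum((q1 - q2) ** 2) / n)       # L2 norm over [0, 1]
```

For a straight segment traversed at constant speed s, q has constant norm sqrt(s); e.g. uniformly sampled straight segments of lengths 1 and 4 in the same direction come out at SRV distance 1.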
Unbalanced Optimal transport: The second core theme of the present article can be found in the theory of optimal transport (OT). Since Monge's formulation of OT as a non-convex optimization problem in the space of transport maps, many formulations of the problem have been proposed to give insight into the theoretical properties of the problem as well as efficient methods for computing the solution; see [45, 46] for a comprehensive overview of the field.

In classical optimal transport theory one considers normalized (probability) distributions. It is, however, important for many applications to relax this normalization assumption and compute transportation plans between arbitrary positive measures. Motivated by this observation, the theory of optimal transport has been extended to measures with different masses. This field, called unbalanced optimal transport, has seen rapid developments in the past years and several different frameworks have been proposed [9, 25, 27, 37]. Among them is the Wasserstein Fisher Rao (WFR) distance, an interpolating distance between the quadratic Wasserstein metric and the Fisher–Rao metric, which was introduced independently by [9] and [26]. The WFR distance has been applied to a variety of problems where it is more natural to consider optimal transport in an unbalanced setting. These applications range from color transfer [10] to earthquake epicenter location [52] and document semantic similarity metrics [47]. Because of the growing field of applications, several algorithms have been proposed to compute the Wasserstein Fisher Rao metric. A variation on the popular Sinkhorn algorithm solving for an entropy regularized version of the distance was proposed by [10], and an alternating minimization algorithm that computes an exact solution was introduced in [6].

1.1. Contributions of the article.
Recently a new and surprising relationship between these two areas (shape analysis and unbalanced optimal transport) has been found. Namely, in [6] it has been shown that for triangulated surfaces the calculation of the SRNF shape distance can be reduced to calculating the WFR distance between their corresponding surface area measures. The presentation in [6] was entirely focused on the discrete (PL) setting and the proof of the result essentially reduced to algebraic considerations. In the first part of the present article we build the analytical tools to extend this result to the infinite dimensional setting, which contains in particular the original setup of the SRNF distance: the space of smooth surfaces. The main result of this part of our article – cf. Theorem 3.1 – shows that the SRNF shape distance between any two Lipschitz surfaces is equal to the WFR distance between their surface area measures.

As a direct consequence of this result we are able to answer two fundamental questions regarding the SRNF transform: since the inception of the SRNF transform, it has been understood that the map is neither injective nor surjective [16]. Characterizing the image and non-injectivity have, however, remained open problems. Recently a first degeneracy result in the context of closed surfaces has been found [18]. Using our equivalence result we are able to obtain a characterization of the closure of the image of this transform – cf. Theorem 3.6 – and a new strong degeneracy result of the corresponding distance (non-injectivity of the transform, resp.) – cf. Theorem 3.8.

In the second part we further explore the equivalence result for more general unbalanced optimal transport problems. Generalizations of some of the intermediate results of the first part allow us to offer a novel formulation of the WFR metric as a diffeomorphic optimization problem – cf. Theorem 4.1.
Whereas the main result of the first part of the article relates the WFR on S^2 with a specific choice of parameter to a diffeomorphic optimization problem, we here extend this relationship to the WFR with any choice of parameter defined on any connected, compact, oriented Riemannian manifold N. Notably, the space of diffeomorphisms we have to optimize over does not depend on N, but can be chosen as the diffeomorphism group of some background manifold that only needs to be of dimension greater than or equal to two.

Acknowledgements. The authors thank FX Vialard and Cy Maor for useful discussions during the preparation of this manuscript. M. Bauer was supported by NSF grants 1912037 and 1953244 and by FWF grant P 35813-N. E. Hartman was supported by NSF grant DMS-1953244.

2. Preliminaries.

2.1. The Wasserstein Fisher Rao Distance. In the following, we will summarize the Kantorovich formulation of the Wasserstein Fisher Rao distance, as introduced in [11], for measures on a smooth, connected, compact, oriented Riemannian manifold N. We denote by M(N) the space of finite Borel measures on N. In the Kantorovich formulation of the Wasserstein-Fisher-Rao distance we will define a functional on the space of semi-couplings. We therefore first recall the definition of a semi-coupling:

Definition 2.1 (Semi-couplings [11]). Given µ, ν ∈ M(N), the set of all semi-couplings from µ to ν is given by

    Γ(µ, ν) = { (γ_0, γ_1) ∈ M(N × N)^2 : (Proj_0)_# γ_0 = µ, (Proj_1)_# γ_1 = ν }.

To define the Wasserstein-Fisher-Rao distance from µ to ν we define a functional on the space of semi-couplings from µ to ν. Let d denote the geodesic distance on N and let δ ∈ (0, ∞). We consider the functional

    J_δ : Γ(µ, ν) → R,
    (γ_1, γ_2) ↦ 4δ^2 ( µ(N) + ν(N) − 2 ∫_{N×N} √((γ_1/γ)(γ_2/γ))(u, v) cos(d(u, v)/(2δ)) dγ(u, v) ),

where γ ∈ M(N × N) is any measure such that γ_1, γ_2 ≪ γ, and γ_i/γ denotes the corresponding Radon–Nikodym derivative. Note that in the case where N = S^2, we have d(u, v) = cos^{-1}(u · v).
Thus for δ = 1/2,

    J(γ1, γ2) = ∫_{S2×S2} | √(γ1/γ)(u, v) u − √(γ2/γ)(u, v) v |² dγ(u, v).    (2.1)

Definition 2.2 (Wasserstein-Fisher-Rao Distance [11, 26]). The Wasserstein-Fisher-Rao distance
on M(N) is given by

    WFRδ : M(N) × M(N) → R≥0    (2.2)

defined via

    (µ, ν) ↦ inf_{(γ1,γ2)∈Γ(µ,ν)} √(Jδ(γ1, γ2)).    (2.3)

Some results in this article will specifically apply to the case where δ = 1/2. To simplify our notation,
we define J := J_{1/2} and WFR := WFR_{1/2}.
M. BAUER, E. HARTMAN, E. KLASSEN
2.2. The Square Root Normal Field Shape Distance. In mathematical shape analysis, one
defines metrics that measure the differences between geometric objects [51, 3, 40, 13]. In this article
we consider geometric objects described by unparameterized surfaces, which are elements of an infinite
dimensional non-linear space modulo several finite and infinite dimensional group actions. As a result,
computations in this space are difficult and even simple statistical operations are not well defined.
Riemannian geometry can help to overcome these challenges. In such a framework, one considers the
space of all surfaces as an infinite dimensional manifold and equips it with a Riemannian metric that is
invariant under the group actions; this allows one to consider the induced metric on the quotient space.
For our purposes we will consider immersions of a smooth, connected, compact, oriented Riemannian
2-dimensional manifold M, with or without boundary. We denote the space of all Lipschitz
immersions of M into R3 by Imm(M, R3), i.e.,

    Imm(M, R3) = { f ∈ W^{1,∞}(M, R3) : Tf is injective a.e. }.    (2.4)

As we are interested in unparametrized surfaces, we have to factor out the action of the group of
diffeomorphisms.
In the context of Lipschitz immersions the natural group of reparametrizations for +us to consider is the group of all orientation preserving, bi-Lipschitz diffeomorphisms: +Γ(M) = {γ ∈ W 1,∞(M, M) : γ−1 ∈ W 1,∞(M, M), |Dγ| > 0 a.e.}, +where |Dγ| denotes the Jacobian determinant of γ, which is well-defined as Dγ ∈ L∞. Note that this +reparametrization group acts by composition from the right on Imm(M, R3). In addition to the action +by the reparametrization group, we also want to identify surfaces that only differ by a translation. +This leads us to consider the following quotient space: +S := Imm(M, R3)/(Γ(M) × trans) +(2.5) +In the following we will equip Imm(M) with a reparameterization invariant distance; the so called +square root normal field (SRNF) distance. The SRNF map (distance resp.) was originally introduced +by Jermyn et al. in [15] for the space of smooth immersions, but it naturally extends to the space of +all Lipschitz surfaces, as demonstrated in [6]. We now recall the definition of this distance. +For any given f ∈ Imm(M, R3), the orientation on M allows us to consider the unit normal vector +field nf : M → R3, which is well-defined as an element of L∞(M, R3). Furthermore, let {v, w} be an +orthonormal basis of TxM. Then for any f ∈ Imm(M, R3) we can define the area multiplication factor +at x ∈ M via af(x) = |dfx(v) × dfx(w)|. The SRNF map is then given by +Φ : Imm(M, R3)/ translations → L2(M, R3) +(2.6) +f �→ qf where qf(x) := +� +af(x) nf(x). +(2.7) +From this transform we define a distance on Imm(M, R3)/ translations by +dImm(f1, f2) = ∥Φ(f1) − Φ(f2)∥L2. +Next we consider a right-action of Γ(M) on L2(M, R3) that is compatible with the mapping Φ. For +q ∈ L2(M, R3) and γ ∈ Γ(M) we let +(q ∗ γ)(x) = +� +|Dγ(x)|q(γ(x)). +(2.8) +It is easy to check that the action of Γ(M) on L2(M, R3) is by linear isometries and that for any +f ∈ Imm and γ ∈ Γ, +Φ(f) ∗ γ = Φ(f ◦ γ). 
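That the action (2.8) is by linear isometries is precisely the change-of-variables formula:
∥q ∗ γ∥²_{L2} = ∫_M |Dγ| |q(γ(x))|² dm = ∥q∥²_{L2}. As a quick numerical sanity check (our
illustration; a one-dimensional interval stands in for the two-dimensional domain M, and the
diffeomorphism below is an arbitrary choice):

```python
import numpy as np

# 1-D stand-in for the parameter domain M; gamma is an orientation-preserving
# bi-Lipschitz diffeomorphism of [0, 1] with Jacobian dgamma > 0 everywhere.
x = np.linspace(0.0, 1.0, 20001)
gamma = (np.exp(x) - 1.0) / (np.e - 1.0)
dgamma = np.exp(x) / (np.e - 1.0)

def q(t):
    # an arbitrary square-integrable map into R^3
    return np.stack([np.sin(2 * np.pi * t), t ** 2, np.cos(3.0 * t)], axis=-1)

def l2_sq(f):
    # squared L^2 norm of an R^3-valued function sampled on the grid x (trapezoid rule)
    vals = np.sum(f * f, axis=-1)
    return float(np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(x)))

q_gamma = np.sqrt(dgamma)[:, None] * q(gamma)   # (q * gamma)(x) = sqrt(|Dgamma|) q(gamma(x))
assert abs(l2_sq(q_gamma) - l2_sq(q(x))) < 1e-6
print("||q * gamma||_{L^2} == ||q||_{L^2} up to quadrature error")
```

The same cancellation between √|Dγ| and the Jacobian of the substitution is what makes the
half-density action in Section 4 an isometry as well.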
+ +5 +Thus, it follows that the SRNF distance on Imm(M, R3) is invariant with respect to this action and +thus it descends to a (pseudo) distance on the quotient space S, which is given by +dS([f1], [f2]) = +inf +γ∈Γ(M) d(f1, f2 ◦ γ), +[f1], [f2] ∈ S(M) +As we will see later the induced (pseudo) distance on the quotient space is highly degenerate. +2.3. Equivalence of WFR and SRNF in the piecewise linear category. In [6] a surprising +equivalence of the WFR and SRNF distance was shown: for piecewise linear surfaces it was proved +that the SRNF distance can be reduced to the WFR distance between finitely supported measures. +To formulate this result in detail we first associate to every q ∈ L2(M, R3) a measure on S2; namely, +for any open U ⊆ S2, we define +q∗U = {x ∈ M|q(x) ̸= 0 and q(x)/|q(x)| ∈ U} +and define the map +L2(M, R3) → M(S2) via q �→ µq +where for U ⊆ S2, µq(U) = +� +q∗U +q(x) · q(x)dm. +The result proved in [6] is then formulated as: +Theorem 2.3. Given two piecewise linear surfaces S1 and S2 parameterized by f and g, the SRNF +shape distance can be computed as an unbalanced transport problem. More precisely, we have +dS([f], [g]) = +inf +γ∈Γ(M) ∥qf − qg ∗ γ∥ = WFR(µqf , µqg). +where qf and qg are the SRNFs of f and g respectively. +In the next section we will extend this result of to all Lipschitz immersions (Borel-measures, resp.). +3. The SRNF distance. For the goal of extending the result of Theorem 2.3 to all Lipschitz +surfaces, we will consider specifically δ = 1 +2 in the definition of the WFR metric. +3.1. Equivalence of the WFR and SRNF distances. Our main result of this section is the +following theorem, which is slightly stronger than the desired equivalence result. +Theorem 3.1. Given q1, q2 ∈ L2(M, R3), +inf +γ∈Γ(M) ∥q1 − q2 ∗ γ∥L2 = WFR(µq1, µq2). 
+In particular, given f, g ∈ W 1,∞(M, R3) we can calculate their SRNF distance as an unbalanced OMT +problem via +dS([f], [g]) = WFR(µqf , µqg), +where qf and qg are the SRNFs of f and g respectively. +Remark 1. Note, that as a direct consequence of Theorem 3.1 we can also conclude the extension +of Theorem 2.3 to the original setup of the SRNF distance, the space of all smooth surfaces. +The proof of Theorem 3.1 relies on a series of technical lemmas, which we will show next. + +6 +M. BAUER, E. HARTMAN, E. KLASSEN +Lemma 3.2. Let X, Y be topological spaces and ρ : X → Y be a measurable function with respect +to the Borel σ-algebras. If µ, µ1 ∈ M(X), γ, γ1 ∈ M(Y ) such that µ1 ≪ µ, γ = ρ∗µ, and γ1 = ρ∗µ1, +then γ1 ≪ γ. Furthermore, µ1 +µ = γ1 +γ ◦ ρ almost everywhere. +Proof. Let U ⊆ Y open such that γ(U) = 0. +By definition, µ(ρ−1(U)) = 0. Since µ1 ≪ µ, +µ1(ρ−1(U)) = 0. Therefore, γ1(U) = 0. This proves γ1 ≪ γ. +Following the definitions of the Radon-Nikodym derivatives, pushforwards, and the change of variables +formula, we obtain +� +ρ−1(U) +µ1 +µ dµ = +� +ρ−1(U) +dµ1 = +� +U +dγ1 = +� +U +γ1 +γ dγ = +� +ρ−1(U) +γ1 +γ ◦ ρ dµ. +Thus, µ1 +µ = γ1 +γ ◦ ρ almost everywhere. +Given q ∈ L2(M, R3) we can define a function from M to S2 that takes every point x ∈ M to the unit +vector in the direction of q(x). As a matter of defining this function on every point, we can canonically +choose the north pole of S2 for points where q(x) = 0. +Definition 3.3. For q ∈ L2(M, R3) we define the unit vector map of q as +q : M → S2 given by +x �→ +� q(x) +|q(x)| +if q(x) ̸= 0 +(1, 0, 0) +otherwise +. +Note that since q ∈ L2(M, R3), it follows that q : M → S2 is measurable. Let q ∈ L2(M, R3). We can +define a measure, νq ∈ M(M), via +νq(U) = +� +U +|q(x)|2dm. +for all open U ⊆ M. Note that νq ≪ m and νq +m = |q|2. Further, we can equivalently define µq as the +pushforward of νq via q. +Lemma 3.4. Let q ∈ L2(M, R3) and µq ∈ M(S2) be the measure associated with q. Then µq = +q∗νq. 
Proof. Write q̄ for the unit vector map of q (Definition 3.3), let U ⊆ S2 be open, and define
M0 = { x ∈ M | q(x) = 0 }.
If (1, 0, 0) ∉ U, then q̄⁻¹(U) = q∗U and thus

    q̄∗νq(U) = ∫_{q̄⁻¹(U)} |q(x)|² dm = ∫_{q∗U} |q(x)|² dm = µq(U).

If (1, 0, 0) ∈ U, then q̄⁻¹(U) = q∗U ∪ M0, and since q vanishes on M0,

    q̄∗νq(U) = ∫_{q∗U} |q(x)|² dm + ∫_{M0} |q(x)|² dm = µq(U).

Leveraging what we have proven above, we can show a key continuity result that will allow us to
complete the proof of the main theorem.
Lemma 3.5. The map (L2(M, R3), ∥·∥L2) → (M(S2), WFR) defined via q ↦ µq, with µq as in
Section 2.3, is Lipschitz continuous with Lipschitz constant K = 1.
Proof. Let q1, q2 ∈ L2(M, R3). For any semi-coupling (γ1, γ2) ∈ Γ(µq1, µq2),

    WFR(µq1, µq2) ≤ √(Jδ(γ1, γ2)).

Thus, to prove the lemma it suffices to construct (γ1, γ2) ∈ Γ(µq1, µq2) such that Jδ(γ1, γ2) = ∥q1 − q2∥²L2.
To construct such a semi-coupling we first define ρ : M → S2 × S2 via the unit vector maps of q1 and q2
on the first and second factor respectively, i.e., ρ(x) = (q̄1(x), q̄2(x)). Since q̄1 and q̄2 are individually
measurable, so is ρ. We then define γ1, γ2 ∈ M(S2 × S2) via γ1 = ρ∗νq1 and γ2 = ρ∗νq2.
Claim 1. The pair of measures (γ1, γ2) is a semi-coupling from µq1 to µq2.
Proof of claim. Let U ⊆ S2 be open. Then

    γ1(U × S2) = νq1(ρ⁻¹(U × S2)) = νq1(q̄1⁻¹(U) ∩ q̄2⁻¹(S2)) = νq1(q̄1⁻¹(U)) = µq1(U)

and

    γ2(S2 × U) = νq2(ρ⁻¹(S2 × U)) = νq2(q̄1⁻¹(S2) ∩ q̄2⁻¹(U)) = νq2(q̄2⁻¹(U)) = µq2(U).

So (γ1, γ2) is a semi-coupling from µq1 to µq2.
Recall from the definition of the functional Jδ that we need to construct γ ∈ M(S2 × S2) such that
γ1, γ2 ≪ γ. Define γ = ρ∗m. We know νq1, νq2 ≪ m. Thus, by Lemma 3.2, γ1, γ2 ≪ γ. Furthermore,

    |q1|² = νq1/m = (γ1/γ) ∘ ρ a.e.    and    |q2|² = νq2/m = (γ2/γ) ∘ ρ a.e.
So, writing q̄i for the unit vector map of qi,

    Jδ(γ1, γ2) = ∫_{S2×S2} | √(γ1/γ)(u, v) u − √(γ2/γ)(u, v) v |² dγ(u, v)
    = ∫_{S2×S2} (γ1/γ)(u, v) dγ(u, v) + ∫_{S2×S2} (γ2/γ)(u, v) dγ(u, v)
      − 2 ∫_{S2×S2} √((γ1/γ)(γ2/γ))(u, v) ⟨u, v⟩ dγ(u, v)
    = ∫_M (γ1/γ) ∘ ρ(x) dm + ∫_M (γ2/γ) ∘ ρ(x) dm
      − 2 ∫_M √((γ1/γ) ∘ ρ(x)) √((γ2/γ) ∘ ρ(x)) ⟨q̄1(x), q̄2(x)⟩ dm
    = ∫_M |q1(x)|² dm + ∫_M |q2(x)|² dm − 2 ∫_M |q1(x)||q2(x)| ⟨ q1(x)/|q1(x)|, q2(x)/|q2(x)| ⟩ dm
    = ∥q1 − q2∥²L2.

Thus,

    WFR(µq1, µq2) ≤ √(Jδ(γ1, γ2)) = 1 · ∥q1 − q2∥L2.

We are now ready to conclude the proof of Theorem 3.1:
Proof of Theorem 3.1. Let q1, q2 ∈ L2(M, R3) and let ǫ > 0. Let p1, p2 be piecewise constant
functions such that ∥q1 − p1∥L2 < ǫ/4 and ∥q2 − p2∥L2 < ǫ/4. Therefore (taking γ = id for the first
two quantities and using Lemma 3.5 for the last two),

    inf_{γ∈Γ(M)} ∥q1 − p1 ∗ γ∥L2,  inf_{γ∈Γ(M)} ∥q2 − p2 ∗ γ∥L2,  WFR(µq1, µp1),  WFR(µq2, µp2) < ǫ/4.

Thus,

    inf_{γ∈Γ(M)} ∥q1 − q2 ∗ γ∥L2 ≤ inf_{γ∈Γ(M)} ∥q1 − p1 ∗ γ∥L2 + inf_{γ∈Γ(M)} ∥p2 − q2 ∗ γ∥L2
                                     + inf_{γ∈Γ(M)} ∥p1 − p2 ∗ γ∥L2
    ≤ ǫ/2 + inf_{γ∈Γ(M)} ∥p1 − p2 ∗ γ∥L2
    = ǫ/2 + WFR(µp1, µp2)
    ≤ ǫ/2 + WFR(µq1, µp1) + WFR(µp2, µq2) + WFR(µq1, µq2)
    ≤ ǫ + WFR(µq1, µq2)

and

    WFR(µq1, µq2) ≤ WFR(µp1, µp2) + WFR(µq1, µp1) + WFR(µp2, µq2)
    ≤ WFR(µp1, µp2) + ǫ/2
    = inf_{γ∈Γ(M)} ∥p1 − p2 ∗ γ∥L2 + ǫ/2
    ≤ inf_{γ∈Γ(M)} ∥q1 − p1 ∗ γ∥L2 + inf_{γ∈Γ(M)} ∥p2 − q2 ∗ γ∥L2
      + inf_{γ∈Γ(M)} ∥q1 − q2 ∗ γ∥L2 + ǫ/2
    ≤ inf_{γ∈Γ(M)} ∥q1 − q2 ∗ γ∥L2 + ǫ.

So,

    WFR(µq1, µq2) − ǫ ≤ inf_{γ∈Γ(M)} ∥q1 − q2 ∗ γ∥L2 ≤ WFR(µq1, µq2) + ǫ.

Taking ǫ → 0 we can conclude inf_{γ∈Γ(M)} ∥q1 − q2 ∗ γ∥L2 = WFR(µq1, µq2).
3.2. Characterizing the closure of the image of the SRNF map. Our equivalence result
will also allow us to characterize the (closure of the) image of the SRNF map Φ in the context of
spherical surfaces:
Theorem 3.6. Let f ∈ Imm(S2, R3) and let q = Φ(f) ∈ L2(S2, R3). Then q satisfies the closure
condition ∫_{S2} q(x)|q(x)| dm = 0.
Moreover, the closure of the image of Φ is given by the set

    U := { q ∈ L2(S2, R3) such that ∫_{S2} q(x)|q(x)| dm = 0 }.

To prove this result we will need a classical theorem from geometric measure theory and the study of
convex polyhedra, which we recall next:
Theorem 3.7 (Minkowski's Theorem [1, 32, 38]). Let µ ∈ M(S2) be such that the support of µ is
not concentrated on a great circle and

    ∫_{S2} x dµ(x) = 0.

Then there exists a unique (up to translation) convex body whose surface area measure is µ. Moreover,
if µ is finitely supported then the convex body is a polytope.
Proof of Theorem 3.6. Let f ∈ Imm(S2, R3) and qf = Φ(f). Let S = f(S2) and let V be the region
enclosed by S. Then

    ∫_{S2} qf(x)|qf(x)| dm = ∫_{S2} af(x) nf(x) dm = ∫_S nf dS,

i.e., the integral of the unit normal vector over a closed surface in R3. A simple application of the
divergence theorem shows that this integral is zero: letting {ei}³_{i=1} be the unit basis vectors of R3,
for i = 1, 2, 3,

    ∫_S (nf · ei) dS = ∫_V (∇ · ei) dV = 0.

Therefore ∫_{S2} qf(x)|qf(x)| dm = 0 and the image of Φ is contained in U.
To prove the converse direction let q ∈ U. We aim to construct a convex body f with µqf arbitrarily
close to µq. By the definition of U the measure µq satisfies ∫_{S2} n dµq(n) = 0. Since finitely supported
measures are dense with respect to the WFR metric, we can choose a finitely supported measure µ̄q
such that ∫_{S2} n dµ̄q(n) = 0 and WFR(µq, µ̄q) < ǫ/3.
If the support of µ̄q is not concentrated on a great circle we can invoke the Minkowski theorem
and the result follows. For the general case we slightly deform the measure as follows. Define

    µ̂q := µ̄q + Σ³_{i=1} (ǫ/18) δ_{ei} + Σ³_{i=1} (ǫ/18) δ_{−ei},

where {ei}³_{i=1} is the set of unit basis vectors of R3. Then µ̂q is a finitely supported measure, it
satisfies ∫_{S2} n dµ̂q(n) = 0, and it is not supported on a single great circle.
Moreover, WFR(µ̄q, µ̂q) < ǫ/3, where µ̄q denotes the finitely supported measure approximating µq
chosen above. By the Minkowski Theorem (Theorem 3.7) there exists a convex polytope with surface
area measure given by µ̂q. Let f ∈ W^{1,∞}(S2, R3) be the PL spherical parameterization of this convex
body, so that µqf = µ̂q. Thus, there exists γ ∈ Γ(M) such that ∥qf − q ∗ γ∥L2 < WFR(µqf, µq) + ǫ/3.
Therefore,

    ∥qf − q ∗ γ∥L2 < WFR(µqf, µq) + ǫ/3 = WFR(µ̂q, µq) + ǫ/3 ≤ WFR(µ̂q, µ̄q) + WFR(µ̄q, µq) + ǫ/3 < ǫ,

which concludes the proof.
3.3. Characterizing the degeneracy of the SRNF distance. As a second important consequence
of our equivalence result we can give a detailed proof of the degeneracy of the SRNF distance
for smooth surfaces. Degeneracy results were studied in [18], and the degeneracy was further characterized
for certain PL surfaces in [6]. Here we generalize the characterization of [6] to smooth surfaces:
Theorem 3.8. For any smooth, regular surface f ∈ C∞(S2, R3) ∩ Imm(S2, R3) there exists a
unique (up to translations) convex body whose boundary, parameterized by some f1, is indistinguishable
from f by the SRNF shape distance, i.e., dS([f], [f1]) = 0.
Proof of Theorem 3.8. Let f ∈ C∞(S2, R3) ∩ Imm(S2, R3) be a regular surface. By [43, Prop.
4.33] the Gauss map of f is surjective. Thus the support of µqf is not concentrated on a great circle.
Furthermore, ∫_{S2} qf(x)|qf(x)| dm = 0 by Theorem 3.6. Thus, by Theorem 3.7, there exists a unique
convex body (up to translation) with surface area measure given by µqf. By Theorem 3.1 the surface f
and this convex body are at SRNF distance 0 from each other.
4. The WFR metric as a diffeomorphic optimization problem. In this section we generalize
the results of the previous sections to the Wasserstein-Fisher-Rao distance on any manifold and for any
coefficient δ, thus characterizing the Wasserstein-Fisher-Rao distance as a diffeomorphic optimization
problem. Let N be a smooth, connected, compact, oriented Riemannian manifold. Define
the cone over N via C(N) := (N × R≥0)/(N × {0}).
If we let d denote the geodesic distance on N and fix some δ ∈ (0, ∞), then we can define a metric on
C(N) via

    d_{C(N)}((n1, r1), (n2, r2))² = 4δ²r1² + 4δ²r2² − 8δ²r1r2 cos(d(n1, n2)/2δ).

Let M be another smooth, connected, compact, oriented Riemannian manifold. Any function q : M →
C(N) can be decomposed into component functions by q(x) = (q̄(x), q°(x)), where q̄ : M → N and
q° : M → R≥0. We can thus define

    q̂ : M → R≥0 via q̂(x) = √(2δ) q°(x) for all x ∈ M.

Given q1, q2 : M → C(N), the L2 distance between q1 and q2 is given by

    dL2(q1, q2)² = ∫_M d_{C(N)}(q1(x), q2(x))² dm.

By decomposing q1 and q2, we can alternatively write

    (4.1)  dL2(q1, q2)² = ∫_M q̂1(x)² dm + ∫_M q̂2(x)² dm − 2 ∫_M q̂1(x) q̂2(x) cos(d(q̄1(x), q̄2(x))/2δ) dm.

The L2 cost of a function q : M → C(N) is defined as the distance from q to the function that maps
all of M to the cone point. In particular, using the decomposition of q, this distance is given by

    dL2(0, q)² = ∫_M q̂(x)² dm.

Thus, the space of L2-functions from M to C(N) is defined as

    L2(M, C(N)) := { q : M → C(N) s.t. dL2(0, q)² < ∞ },

and we equip L2(M, C(N)) with the metric dL2. We define the right action of the group Γ(M) of
diffeomorphisms of M on L2(M, C(N)) component-wise: we treat q̂ as a half-density and define the
action of Γ(M) on this component as the action on half-densities. Thus, we define the action of Γ(M)
on L2(M, C(N)) by

    L2(M, C(N)) × Γ(M) → L2(M, C(N)) via ((q̄, q̂), γ) ↦ (q̄ ∘ γ, (q̂ ∘ γ) · √|Dγ|).

The main result of this section shows that the Wasserstein-Fisher-Rao distance can be written as the
distance between the orbits associated with the measures:
Theorem 4.1. Let N be a smooth connected compact Riemannian manifold and M be a smooth
connected compact Riemannian manifold of dimension 2 or higher.
a.) For all µ1, µ2 ∈ M(N) and q1, q2 ∈ L2(M, C(N)) such that µ1 = q̄1∗νq1 and µ2 = q̄2∗νq2 we have

    WFRδ(µ1, µ2) = inf_{γ∈Γ(M)} dL2(q1, q2 ∗ γ).

b.)
Moreover, for all µ ∈ M(N) there exists q ∈ L2(M, C(N)) such that µ = q̄∗νq. If µ is a finitely
supported measure given by µ = Σⁿ_{i=1} ai δ_{ui}, then one can choose q piecewise constant. More
specifically, the function q given by

    q(x) = (uj, √(aj/area(σj)))  if x ∈ σj with 1 ≤ j ≤ n,    q(x) = (u1, 0)  if x ∈ σj with n < j ≤ m,

where {σj}ᵐ_{j=1} is a subdivision of the canonical triangulation of M with m ≥ n, satisfies µ = q̄∗νq.
Before we are able to prove this theorem, we will again show several technical lemmas. To this end
we consider specific measures associated with functions q ∈ L2(M, C(N)). First, we define νq ∈ M(M)
such that for any open U ⊆ M

    νq(U) = ∫_U q̂(x)² dm.

Note that νq ≪ m and νq/m = q̂². Further, we can define a pushforward of νq via q̄. In particular, for
every q ∈ L2(M, C(N)), we can define a Borel measure on N given by µq := q̄∗νq. In other words, for
all open U ⊆ N,

    µq(U) = ∫_{q̄⁻¹(U)} q̂(x)² dm.

Now we will show that the orbit of any q ∈ L2(M, C(N)) under the action of Γ(M) is mapped to the
same measure on N.
Lemma 4.2. Let q ∈ L2(M, C(N)). Then for all γ ∈ Γ(M), µq = µq∗γ.
Proof. Let U ⊆ N open. Then

    µq∗γ(U) = ∫_{γ⁻¹(q̄⁻¹(U))} (q̂(γ(x)) · √|Dγ|)² dm = ∫_{γ⁻¹(q̄⁻¹(U))} q̂(γ(x))² |Dγ| dm
            = ∫_{q̄⁻¹(U)} q̂(x)² dm = µq(U).

Therefore, we can map each orbit of q ∈ L2(M, C(N)) under the half-density action of Γ(M) to a
measure on N. As in the previous section, we will first show the result for piecewise constant functions
and then extend it by continuity. We prove the piecewise constant case in the following lemma.
Lemma 4.3. Let d ≥ 2 and let M be a smooth, connected, compact, oriented Riemannian d-dimensional
manifold with or without boundary. Given two piecewise constant functions q1, q2 : M → C(N),

    inf_{γ∈Γ(M)} dL2(q1, q2 ∗ γ) = WFRδ(µq1, µq2).

Proof. Let {σi}ᵐ_{i=1} and {τj}ⁿ_{j=1} be triangulations of M such that q1 is constant on each σi and q2
is constant on each τj.
Let ˆq1 : M → R, q1 : M → N be the decomposition of q1 and ˆq2 : M → R, +q2 : M → M be the decomposition of q2. Define a function ⟨·, ·⟩ : N × N → R given via ⟨u, v⟩ = +cos(d(u, v)/2δ). A brief computation shows +inf +γ∈Γ(M) d2 +L2(q1, q2 ∗ γ) = +m +� +i=1 +ai + +n +� +j=1 +bj − 2 +sup +γ∈Γ(M) +� +M +ˆq1(x) ˆq2(γ(x)) +� +|Dγ|⟨q1(x), q2(γ(x))⟩dm. +Let A be the set of all discrete semi-couplings from µq1 to µq2. Recall +WFRδ(µq1, µq2)2 = +m +� +i=1 +ai + +n +� +j=1 +bj − 2 +sup +(A,B)∈A +m +� +i=1 +n +� +j=1 +� +AijBij⟨ui, vj⟩ +Therefore, the theorem is equivalent to showing +sup +(A,B)∈A +m +� +i=1 +n +� +j=1 +� +AijBij⟨ui, vj⟩ = +sup +γ∈Γ(S2) +� +M +ˆq1(x) ˆq2(γ(x)) +� +|Dγ|⟨q1(x), q2(γ(x))⟩dm. + +12 +M. BAUER, E. HARTMAN, E. KLASSEN +Claim 2. Assume that (A, B) is a discrete semi-coupling from µq1 to µq2. Then for all ǫ > 0 there +is a PL homeomorphism γ : M → M such that +������ +� +M +ˆq1(x) ˆq2(γ(x)) +� +|Dγ|⟨q1(x), q2(γ(x))⟩dm − +� +i,j +� +AijBij⟨ui, vj⟩ +������ +< ǫ. +Proof of Claim 2. Let (A, B) be a discrete semi-coupling from µq1 to µq2 such that for each 1 ≤ i ≤ m +and 1 ≤ j ≤ n, Aij, Bij > 0. We will first prove the claim for this restricted case and extend it +to all semi-couplings by continuity. First we choose a real number r ∈ (0, 1). For each 1 ≤ i ≤ m, +subdivide σi into n smaller d-simplexes σij such that ˆq1 +2 = Aij/m(σij). Similarly, for each 1 ≤ j ≤ n, +subdivide τj into m smaller d-simplexes τij such that ˆq2 +2 = Bij/m(τij). For each 1 ≤ i ≤ m and +1 ≤ j ≤ n, choose a smaller d-simplex ˜σij, whose closure is contained in the interior of σij, such that +m(˜σij) = rm(σij). Similarly, for each 1 ≤ i ≤ m and 1 ≤ j ≤ n, choose a smaller d-simplex ˜τij, whose +closure is contained in the interior of τij, such that m(˜τij) = rm(τij). We now construct an orientation +preserving PL homeomorphism γr : M → M. 
First, for each 1 ≤ i ≤ m and 1 ≤ j ≤ n, define +γr : ˜σij → ˜τij to be a PL orientation preserving homeomorphism with constant area multiplication +factor, |Dγr| = m(τij)/m(σij). Note that +M − + + +m +� +i=1 +n� +j=1 +˜σo +ij + + is homeomorphic to M − + + +m +� +i=1 +n +� +j=1 +˜τ o +ij + + . +Hence, we can extend the homeomorphism γr defined on the ˜σij’s to a homeomorphism from M to M. +Note that on each ˜σij, ˆq2 +2(γr(x))|Dγr| = Bij/m(σij). Write M = M1 ∪ M2, where M1 = +m� +i=1 +n� +j=1 +˜σij +and M2 = M − M1. A simple computation shows +� +M1 +ˆq1(x) ˆq2(γr(x)) +� +|Dγr|⟨q1(x), q2(γr(x))⟩dm += +m +� +i=1 +n +� +j=1 +� +˜σij +ˆq1(x) ˆq2(γr(x)) +� +|Dγr|⟨q1(x), q2(γr(x))⟩dm += +m +� +i=1 +n +� +j=1 +� +AijBij +m(σij) m(˜σij)⟨ui, vj⟩ = +m +� +i=1 +n +� +j=1 +� +rAij +� +rBij⟨ui, vj⟩. +Meanwhile by the Schwarz inequality, +���� +� +M2 +ˆq1(x) ˆq2(γr(x)) +� +|Dγr|⟨q1(x), q2(γr(x))⟩dm +���� ≤ +� +M2 +ˆq1(x) ˆq2(γr(x)) +� +|Dγr|dm +≤ +�� +M2 +ˆq1 +2dm +�� +M2 +ˆq2 +2(γr(x))|Dγr|dm = +� +(1 − r) +� +M +ˆq1 +2dm +� +(1 − r) +� +M +ˆq2 +2dm. +So as we let r → 1, +� +M1 +ˆq1(x) ˆq2(γr(x)) +� +|Dγr|⟨q1(x), q2(γr(x))⟩dm → +m +� +i=1 +n +� +j=1 +� +AijBij⟨ui, vj⟩ +and +� +M2 +ˆq1(x) ˆq2(γr(x)) +� +|Dγr|⟨q1(x), q2(γr(x))⟩dm → 0. + +13 +Hence, +� +M +ˆq1(x) ˆq2(γr(x)) +� +|Dγr|⟨q1(x), q2(γr(x))⟩dm → +m +� +i=1 +n +� +j=1 +� +AijBij⟨ui, vj⟩. +Thus Claim 2 follows for the case in which for each 1 ≤ i ≤ m and 1 ≤ j ≤ n, Aij > 0 and Bij > 0. +The general case then follows immediately from the continuity of +m +� +i=1 +n +� +j=1 +� +AijBij⟨ui, vj⟩ +as a function of (A, B). This completes the proof of Claim 2. It follows that +sup +γ∈Γ(S2) +� +M +ˆq1(x) ˆq2(x)⟨q1(x), q2(x)⟩dm ≥ +sup +(A,B)∈A +m +� +i=1 +n +� +j=1 +� +AijBij⟨ui, vj⟩. +We are left to show the opposite inequality. +Claim 3. 
Assume γ is a PL-homeomorphism from M to M, then there exists a discrete semi- +coupling (A, B) such that +sup +γ∈Γ(M) +� +M +ˆq1(x) ˆq2(γ(x)) +� +|Dγ|⟨q1(x), q2(γ(x))⟩dm ≤ +sup +(A,B)∈A +m +� +i=1 +n +� +j=1 +� +AijBij⟨ui, vj⟩. +Proof of Claim 3. Let γ : M → M be an orientation preserving PL homeomorphism. For 1 ≤ i ≤ m +and 1 ≤ j ≤ n, define σij = γ−1(τj) ∩ σi and define τij = γ(σij). Now define two (m + 1) × (n + 1) +matrices A and B via: +• For 1 ≤ i ≤ m and 1 ≤ j ≤ n, Aij = +� +σij +ˆq1 +2dm and Bij = +� +τij +ˆq2 +2dm. +• For 0 ≤ i ≤ m, B0i = 0 and Ai0 = ai − +n +� +j=1 +� +σij +ˆq1 +2dm. +• For 0 ≤ j ≤ n, Aj0 = 0 and B0j = bj − +m +� +i=1 +� +τij +ˆq2 +2dm. +The pair of matrices (A, B) is a discrete semi-coupling from µq1 to µq2 by construction. We say that +(A, B) is the semi-coupling corresponding to the homeomorphism γ. Denote the area multiplication +factor of γ on σij by mij. Then by the Schwarz inequality, +� +σij +ˆq1(x) ˆq2(γ(x)) +� +|Dγ|⟨ui, vj⟩dm ≤ +�� +σij +ˆq1 +2(x)dm +�� +σij +ˆq2 +2(γ(x))|Dγ|dm⟨ui · vj⟩ += +�� +σij +ˆq1 +2(x)dm +�� +τij +ˆq2 +2(x)dm⟨ui · vj⟩ = +� +Aij +� +Bij⟨ui · vj⟩. +Summing over all i and j we obtain: +� +M +ˆq1(x) ˆq2(γ(x)) +� +|Dγ|⟨q1(x), q2(γ(x))⟩dm += +� +i,j +� +σij +ˆq1(x) ˆq2(γ(x)) +� +|Dγ|⟨q1(x), q2(γ(x))⟩dm ≤ +� +i,j +� +Aij +� +Bij⟨ui · vj⟩. + +14 +M. BAUER, E. HARTMAN, E. KLASSEN +This completes the proof of Claim 3. It follows that, +sup +γ∈Γ(M) +� +M +ˆq1(x) ˆq2(γ(x)) +� +|Dγ|⟨q1(x), q2(γ(x))⟩dm ≤ +sup +(A,B)∈A +m +� +i=1 +n +� +j=1 +� +AijBij⟨ui · vj⟩. +and thus the lemma is proved. +To extend the results to all of L2(M, C(N)) we will need the following continuity result: +Lemma 4.4. The map (L2(M, C(N)), dL2) → (M(N), WFRδ) defined via q �→ q∗νq is Lipschitz +continuous with Lipschitz constant K = 1. +Proof. Let q1, q2 ∈ L2(M, C(N)), µq1 = q1∗νq1, and µq2 = q2∗νq2. For any semi-coupling (γ1, γ2) ∈ +Γ(µq1, µq2), +WFRδ(µq1, µq2) ≤ +� +Jδ(γ1, γ2). 
Thus, to prove the lemma we must construct (γ1, γ2) ∈ Γ(µq1, µq2) such that Jδ(γ1, γ2) = dL2(q1, q2)².
To construct such a semi-coupling we first define ρ : M → N × N via the N-components of q1 and q2
on the first and second factor respectively, i.e., ρ(x) = (q̄1(x), q̄2(x)). Since q̄1 and q̄2 are individually
measurable, so is ρ. We then define γ1, γ2 ∈ M(N × N) via γ1 = ρ∗νq1 and γ2 = ρ∗νq2.
Claim 4. The pair of measures (γ1, γ2) is a semi-coupling from µq1 to µq2.
Proof of claim. Let U ⊆ N be open. Then

    γ1(U × N) = νq1(ρ⁻¹(U × N)) = νq1(q̄1⁻¹(U) ∩ q̄2⁻¹(N)) = νq1(q̄1⁻¹(U)) = µq1(U)

and

    γ2(N × U) = νq2(ρ⁻¹(N × U)) = νq2(q̄1⁻¹(N) ∩ q̄2⁻¹(U)) = νq2(q̄2⁻¹(U)) = µq2(U).

So (γ1, γ2) is a semi-coupling from µq1 to µq2.
Recall from the definition of the functional J that we need to construct γ ∈ M(N × N) such that
γ1, γ2 ≪ γ. Define γ = ρ∗m. We know νq1, νq2 ≪ m. Thus, by Lemma 3.2, γ1, γ2 ≪ γ. Furthermore,

    q̂1² = νq1/m = (γ1/γ) ∘ ρ a.e.    and    q̂2² = νq2/m = (γ2/γ) ∘ ρ a.e.

So,

    Jδ(γ1, γ2) = µq1(N) + µq2(N) − 2 ∫_{N×N} √((γ1/γ)(γ2/γ))(u, v) cos(d(u, v)/2δ) dγ(u, v)
    = ∫_{N×N} (γ1/γ) dγ + ∫_{N×N} (γ2/γ) dγ − 2 ∫_{N×N} √((γ1/γ)(u, v)(γ2/γ)(u, v)) cos(d(u, v)/2δ) dγ(u, v)
    = ∫_M (γ1/γ) ∘ ρ(x) dm + ∫_M (γ2/γ) ∘ ρ(x) dm
      − 2 ∫_M √((γ1/γ) ∘ ρ(x)) √((γ2/γ) ∘ ρ(x)) cos(d(q̄1(x), q̄2(x))/2δ) dm
    = ∫_M q̂1(x)² dm + ∫_M q̂2(x)² dm − 2 ∫_M q̂1(x) q̂2(x) cos(d(q̄1(x), q̄2(x))/2δ) dm = dL2(q1, q2)².

Thus,

    WFRδ(µq1, µq2) ≤ √(Jδ(γ1, γ2)) = 1 · dL2(q1, q2).

Finally, we can leverage this continuity result to complete the proof of Theorem 4.1.
Proof of Theorem 4.1. Let µ1, µ2 ∈ M(N) and q1, q2 ∈ L2(M, C(N)) be such that µ1 = q̄1∗νq1 and
µ2 = q̄2∗νq2. By an argument analogous to the proof of Theorem 3.1 we can conclude

    inf_{γ∈Γ(M)} dL2(q1, q2 ∗ γ) = WFRδ(µ1, µ2).

This concludes the proof of part a.). Let µ = Σⁿ_{i=1} ai δ_{ui} be a finitely supported measure on N.
+By [48], M admits a canonical PL structure. Let m ≥ n and subdivide the triangulation of M into +m simplices given by σj for 1 ≤ j ≤ m. Let x ∈ M. Thus, there exists 1 ≤ j ≤ m such that x ∈ σj. +Thus we define +q(x) = +�� +uj, +� +aj +area(σj) +� +if 1 ≤ j ≤ n +(u1, 0) +if n < j ≤ m +. +Let U ⊆ N, then µ(U) = +� +i|ui∈U +ai. Meanwhile, q−1(U) = +� +i|ui∈U +σi. Thus, +� +q−1(U) +ˆq2(x)dm = +� +i|ui∈U +� +σi +ai +area(σi)dm = +� +i|ui∈U +ai. +To complete the proof of part b.) we will extend the result to the whole space by continuity. For any +µ ∈ M(N), let {µn} ⊆ M(N) be a sequence of finitely supported measures that converges to µ with +respect to the Wasserstein Fisher Rao. In particular, {µn} is Cauchy with respect to WFRδ. Note +that for all n ∈ N,there exists a piecewise constant qn ∈ L2(M, C(N)) satisfying +µn(U) = +� +qn−1(U) +ˆqn(x)2dm. +Thus, we can construct a sequence of functions given by q∗ +0 = q0 an for all n ∈ N, q∗n+1 = qn+1 ∗ γn +where γn is a PL homeomorphism from M to M such that +dL2(q∗ +n, qn+1 ∗ γn) = WFRδ(µn, µn+1) + 1 +2n . +Note that the existence of such a γn is guaranteed by Lemma 4.3. Since {µn} is Cauchy with respect +to WFRδ, it follows that {q∗ +n} is Cauchy with respect to dL2. By completeness of (L2(M, C(N)), dL2), +there exists a limit q ∈ L2(M, C(N)). Let U ⊆ N open. Thus, +µ(U) = lim +n→∞ µn(U) = lim +n→∞ +� +qn−1(U) +ˆqn(x)2dm = lim +n→∞ +� +M +ˆqn(x)2χqn−1(U)dm += +� +M +lim +n→∞ ˆqn(x)2χqn−1(U)dm = +� +M +ˆq(x)2χq−1(U)dm = +� +q−1(U) +ˆq(x)2dm +Thus, µ = q∗νq This completes the proof of part b.) of the theorem. +REFERENCES +[1] A. Alexandrov, Zur theorie der gemischten volumina von konvexen k¨orpern i, Mat. Sbornik NS, 1 (1938), pp. 227– +251. +[2] M. Bauer, M. Bruveris, P. Harms, and J. Møller-Andersen, A numerical framework for sobolev metrics on +the space of curves, SIAM Journal on Imaging Sciences, 10 (2017), pp. 47–73. + +16 +M. BAUER, E. HARTMAN, E. KLASSEN +[3] M. Bauer, M. Bruveris, and P. W. 
[3] M. Bauer, M. Bruveris, and P. W. Michor, Overview of the geometries of shape spaces and diffeomorphism groups, Journal of Mathematical Imaging and Vision, 50 (2014), pp. 60–97.
[4] M. Bauer, N. Charon, P. Harms, and H.-W. Hsieh, A numerical framework for elastic surface matching, comparison, and interpolation, International Journal of Computer Vision, 129 (2021), pp. 2425–2444.
[5] M. Bauer, N. Charon, E. Klassen, S. Kurtek, T. Needham, and T. Pierron, Elastic metrics on spaces of Euclidean curves: Theory and algorithms, arXiv preprint arXiv:2209.09862, (2022).
[6] M. Bauer, E. Hartman, and E. Klassen, The square root normal field distance and unbalanced optimal transport, Applied Mathematics & Optimization, 85 (2022), https://doi.org/10.1007/s00245-022-09867-y.
[7] M. Bruveris, Optimal reparametrizations in the square root velocity framework, SIAM Journal on Mathematical Analysis, 48 (2016), pp. 4335–4354.
[8] N. Charon and L. Younes, Shape spaces: From geometry to biological plausibility, arXiv preprint arXiv:2205.01237, (2022).
[9] L. Chizat, G. Peyré, B. Schmitzer, and F.-X. Vialard, An interpolating distance between optimal transport and Fisher–Rao metrics, Foundations of Computational Mathematics, 18 (2018), pp. 1–44.
[10] L. Chizat, G. Peyré, B. Schmitzer, and F.-X. Vialard, Scaling algorithms for unbalanced optimal transport problems, Mathematics of Computation, 87 (2018), pp. 2563–2609.
[11] L. Chizat, G. Peyré, B. Schmitzer, and F.-X. Vialard, Unbalanced optimal transport: Dynamic and Kantorovich formulations, Journal of Functional Analysis, 274 (2018), pp. 3090–3123.
[12] G. Dogan, J. Bernal, and C. R. Hagwood, A fast algorithm for elastic shape distances between closed planar curves, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 4222–4230.
[13] I. L. Dryden and K. V. Mardia, Statistical shape analysis: with applications in R, vol. 995, John Wiley & Sons, 2016.
[14] E. Hartman, Y. Sukurdeep, E. Klassen, N. Charon, and M. Bauer, Elastic shape analysis of surfaces with second-order Sobolev metrics: a comprehensive numerical framework, To appear in IJCV, (2022).
[15] I. H. Jermyn, S. Kurtek, E. Klassen, and A. Srivastava, Elastic shape matching of parameterized surfaces using square root normal fields, in European Conference on Computer Vision, Springer, 2012, pp. 804–817.
[16] I. H. Jermyn, S. Kurtek, H. Laga, and A. Srivastava, Elastic shape analysis of three-dimensional objects, Synthesis Lectures on Computer Vision, 12 (2017), pp. 1–185.
[17] S. H. Joshi, Q. Xie, S. Kurtek, A. Srivastava, and H. Laga, Surface shape morphometry for hippocampal modeling in Alzheimer's disease, in 2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA), IEEE, 2016, pp. 1–8.
[18] E. Klassen and P. W. Michor, Closed surfaces with different shapes that are indistinguishable by the SRNF, Archivum Mathematicum, 56 (2020), pp. 107–114.
[19] S. Kurtek, E. Klassen, Z. Ding, and A. Srivastava, A novel Riemannian framework for shape analysis of 3D objects, in 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, IEEE, 2010, pp. 1625–1632.
[20] S. Kurtek, E. Klassen, J. C. Gore, Z. Ding, and A. Srivastava, Elastic geodesic paths in shape space of parameterized surfaces, IEEE Transactions on Pattern Analysis and Machine Intelligence, 34 (2011), pp. 1717–1730.
[21] S. Kurtek, C. Samir, and L. Ouchchane, Statistical shape model for simulation of realistic endometrial tissue, in ICPRAM, 2014, pp. 421–428.
[22] H. Laga, M. Padilla, I. H. Jermyn, S. Kurtek, M. Bennamoun, and A. Srivastava, 4D atlas: Statistical analysis of the spatiotemporal variability in longitudinal 3D shape data, arXiv preprint arXiv:2101.09403, (2021).
[23] H. Laga, Q. Xie, I. H. Jermyn, and A. Srivastava, Numerical inversion of SRNF maps for elastic shape analysis of genus-zero surfaces, IEEE Transactions on Pattern Analysis and Machine Intelligence, 39 (2017), pp. 2451–2464.
[24] S. Lahiri, D. Robinson, and E. Klassen, Precise matching of PL curves in R^N in the square root velocity framework, Geometry, Imaging and Computing, 2 (2015), pp. 133–186.
[25] M. Liero, A. Mielke, and G. Savaré, Optimal transport in competition with reaction: The Hellinger–Kantorovich distance and geodesic curves, SIAM Journal on Mathematical Analysis, 48 (2016), pp. 2869–2911.
[26] M. Liero, A. Mielke, and G. Savaré, Optimal entropy-transport problems and a new Hellinger–Kantorovich distance between positive measures, Inventiones Mathematicae, 211 (2018), pp. 969–1117.
[27] D. Lombardi and E. Maitre, Eulerian models and algorithms for unbalanced optimal transport, ESAIM: M2AN, 49 (2015), pp. 1717–1744, https://doi.org/10.1051/m2an/2015025.
[28] J. S. Marron and A. M. Alonso, Overview of object oriented data analysis, Biometrical Journal, 56 (2014), pp. 732–753.
[29] J. Matuk, S. Mohammed, S. Kurtek, and K. Bharath, Biomedical applications of geometric functional data analysis, in Handbook of Variational Methods for Nonlinear Geometric Data, Springer, 2020, pp. 675–701.
[30] P. W. Michor and D. Mumford, An overview of the Riemannian metrics on spaces of curves using the Hamiltonian approach, Applied and Computational Harmonic Analysis, 23 (2007), pp. 74–113.
[31] M. I. Miller, A. Trouvé, and L. Younes, On the metrics and Euler–Lagrange equations of computational anatomy, Annual Review of Biomedical Engineering, 4 (2002), pp. 375–405.
[32] H. Minkowski, Allgemeine Lehrsätze über die convexen Polyeder, Nachrichten von der Gesellschaft der Wissenschaften zu Göttingen, Mathematisch-Physikalische Klasse, 1897 (1897), pp. 198–220, http://eudml.org/doc/58391.
[33] W. Mio, A. Srivastava, and S. Joshi, On shape of plane elastic curves, International Journal of Computer Vision, 73 (2007), pp. 307–324.
[34] T. Needham and S. Kurtek, Simplifying transforms for general elastic metrics on the space of plane curves, SIAM Journal on Imaging Sciences, 13 (2020), pp. 445–473.
[35] X. Pennec, Intrinsic statistics on Riemannian manifolds: Basic tools for geometric measurements, Journal of Mathematical Imaging and Vision, 25 (2006), pp. 127–154.
[36] X. Pennec, S. Sommer, and T. Fletcher, Riemannian geometric statistics in medical image analysis, Academic Press, 2019.
[37] B. Piccoli and F. Rossi, Generalized Wasserstein distance and its application to transport equations with source, Archive for Rational Mechanics and Analysis, 211 (2014), pp. 335–358.
[38] R. Schneider, Convex surfaces, curvature and surface area measures, in Handbook of Convex Geometry, Elsevier, 1993, pp. 273–299.
[39] A. Srivastava, E. Klassen, S. H. Joshi, and I. H. Jermyn, Shape analysis of elastic curves in Euclidean spaces, IEEE Transactions on Pattern Analysis and Machine Intelligence, 33 (2010), pp. 1415–1428.
[40] A. Srivastava and E. P. Klassen, Functional and shape data analysis, vol. 1, Springer, 2016.
[41] Z. Su, M. Bauer, E. Klassen, and K. Gallivan, Simplifying transformations for a family of elastic metrics on the space of surfaces, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020, pp. 848–849.
[42] Z. Su, M. Bauer, S. C. Preston, H. Laga, and E. Klassen, Shape analysis of surfaces using general elastic metrics, Journal of Mathematical Imaging and Vision, 62 (2020), pp. 1087–1106.
[43] K. Tapp, Differential Geometry of Curves and Surfaces, Undergraduate Texts in Mathematics, Springer International Publishing, 2016, https://books.google.com/books?id=kfIqDQAAQBAJ.
[44] A. Trouvé and L. Younes, On a class of diffeomorphic matching problems in one dimension, SIAM Journal on Control and Optimization, 39 (2000), pp. 1112–1135.
[45] C. Villani, Topics in optimal transportation, no. 58, American Mathematical Soc., 2003.
[46] C. Villani, Optimal transport: old and new, vol. 338, Springer Science & Business Media, 2008.
[47] Z. Wang, D. P. Zhou, M. Yang, Y. Zhang, C.-Y. Rao, and H. Wu, Robust document distance with Wasserstein–Fisher–Rao metric, in ACML, 2020.
[48] J. H. C. Whitehead, On C1-complexes, Annals of Mathematics, (1940), pp. 809–824.
[49] E. N. Wøien and M. Grasmair, A PDE-based method for shape registration, SIAM Journal on Imaging Sciences, 15 (2022), pp. 762–796.
[50] L. Younes, Computable elastic distances between shapes, SIAM Journal on Applied Mathematics, 58 (1998), pp. 565–586.
[51] L. Younes, Shapes and diffeomorphisms, vol. 171, Springer, 2010.
[52] D. Zhou, J. Chen, H. Wu, D. H. Yang, and L. Qiu, The Wasserstein–Fisher–Rao metric for waveform based earthquake location, arXiv: Numerical Analysis, (2018).

arXiv:2301.00284v1 [math.DG] 31 Dec 2022

SQUARE ROOT NORMAL FIELDS FOR LIPSCHITZ SURFACES AND THE WASSERSTEIN FISHER RAO METRIC

EMMANUEL HARTMAN∗, MARTIN BAUER†, AND ERIC KLASSEN‡

Abstract.
The Square Root Normal Field (SRNF) framework is a method in the area of shape analysis that defines a (pseudo) distance between unparametrized surfaces. For piecewise linear (PL) surfaces it was recently proved that the SRNF distance between unparametrized surfaces is equivalent to the Wasserstein Fisher Rao (WFR) metric on the space of finitely supported measures on S². In the present article we extend this point of view to a much larger set of surfaces; we show that the SRNF distance on the space of Lipschitz surfaces is equivalent to the WFR distance between Borel measures on S². For the space of spherical surfaces this result directly allows us to characterize the non-injectivity and the (closure of the) image of the SRNF transform. In the last part of the paper we further generalize this result by showing that the WFR metric for general measure spaces can be interpreted as an optimization problem over the diffeomorphism group of an independent background space.

1. Introduction. The investigations of this article are motivated by applications in the area of mathematical shape analysis, which seeks to quantify differences, perform classification, and explain variability for populations of shapes [51, 40, 13, 28]. More specifically, the results of this article concern the Square Root Normal Field distance [16] on the space of surfaces and the Wasserstein Fisher Rao metric [9, 26] from unbalanced optimal transport. Before we describe the contributions of the current work in more detail, we will briefly summarize some results from these two areas.

Shape analysis of surfaces: For the purpose of this article we consider a shape to be a parametrized surface or curve in Rd, where we identify two objects if they only differ by a translation and/or a reparametrization. In practice, it is often of interest to mod out by further shape preserving group actions, such as the groups of rotations or scalings. To keep the presentation simple, we will ignore these additional finite dimensional groups.
Consequently, the resulting shape space is an infinite dimensional, non-linear (quotient) space, which makes the application of statistical techniques to analyse these types of data a highly challenging task. A common approach to overcome these difficulties can be found in the area of geometric statistics [35, 36], in which one develops statistical frameworks based on (Riemannian) geometry. In the context of shape analysis of surfaces or curves, a variety of different metrics have been proposed for this purpose; this includes metrics induced by (right-invariant) metrics on diffeomorphism groups [51, 31] and reparametrization invariant metrics on the space of immersions [40, 3, 30], which are directly related to the investigations of the present article as we will explain next. In the latter approach the calculation of the distance (similarity) between two shapes reduces to two tasks: calculating the geodesic distance on the space of immersions (parametrized surfaces or curves, resp.) and minimizing over the action of the shape preserving group actions, i.e., diffeomorphisms of the parameter space and translations in Rd. In general there do not exist any explicit formulas for geodesics and thus computing solutions to the geodesic boundary value problems (and thus of the distance) is a highly non-trivial task and usually has to be solved using numerical optimization techniques, see e.g. [14, 2]. For specific examples of Riemannian metrics, however, simplifying transformations have been developed that allow for explicit calculations of geodesics and geodesic distance. This includes in particular the family of Ga,b-metrics on the space of curves [5, 34, 33, 50], a family of first order Sobolev type metrics that are often called elastic metrics due to their connections to linear elasticity theory; see e.g. [33, 8, 5]. For the specific choice of parameters a = 1, b = 1/2 the corresponding transformation is the so-called Square-Root-Velocity (SRV) transform [39], which is widely used in applications; see [40] and the references therein. The advantage of this transformation is that it reduces the shape comparison problem to a single optimization over the shape preserving group actions, i.e., in the setting of the present article over reparametrizations and translations. This computational simplification has led to both the development of efficient algorithms [49, 12, 39] and to analytic results on existence of minimizers and optimal parametrizations [7, 24, 44]. The family of elastic Ga,b metrics has a natural generalization to a four parameter family of metrics on the space of surfaces [42]. Similarly to the case of curves, simplifying transformations have also been proposed in this more complicated situation [19, 20, 16, 41]. Notably, as a generalization of the SRV transform, the Square Root Normal Field (SRNF) transformation [16] has been introduced.

∗Department of Mathematics, Florida State University (ehartman@fsu.edu)
†Department of Mathematics, Florida State University and University of Vienna (bauer@math.fsu.edu)
‡Department of Mathematics, Florida State University (klassen@math.fsu.edu)
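As a concrete illustration of the transform just described, the following sketch (our own addition, not code from the paper; function and variable names are hypothetical) computes a discrete SRV representation q = c′/√|c′| of a uniformly sampled planar curve and checks that it is invariant under translations of the curve:

```python
import numpy as np

def srv_transform(curve):
    """Discrete Square-Root-Velocity transform of a sampled curve.

    curve: (n, d) array of points sampled at uniform parameter spacing on [0, 1].
    Returns the (n-1, d) array q_i = c'_i / sqrt(|c'_i|), with the derivative
    approximated by finite differences at the midpoints.
    """
    deriv = np.diff(curve, axis=0) * (len(curve) - 1)  # c' for uniform spacing
    speed = np.linalg.norm(deriv, axis=1)
    return deriv / np.sqrt(speed)[:, None]

# Translation invariance: shifting the curve leaves q unchanged, which is why
# the SRV framework compares shapes modulo translation via a flat L2 distance.
theta = np.linspace(0, np.pi, 100)
c = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # half circle
q = srv_transform(c)
q_shifted = srv_transform(c + np.array([2.0, -1.0]))
print(np.allclose(q, q_shifted))  # True
```

The L² distance between two such q-arrays then serves as the (pre-shape) elastic distance, and the optimization over reparametrizations acts on the sampling of the curve.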
In contrast to the situation for curves, the corresponding Riemannian metric for this transformation is degenerate and, furthermore, it only leads to a first order approximation of the geodesic distance. Nonetheless it defines a reparametrization invariant (pseudo-) distance on the space of surfaces, which still allows for efficient computations using several methods of approximating the optimization over the diffeomorphism group [23, 4] and has proven successful in several applications, see [21, 17, 29, 22] and the references therein.

Unbalanced optimal transport: The second core theme of the present article can be found in the theory of optimal transport (OT). Since Monge's formulation of OT as a non-convex optimization problem in the space of transport maps, many formulations of the problem have been proposed to give insight into the theoretical properties of the problem as well as efficient methods for computing the solution; see [45, 46] for a comprehensive overview of the field. In classical optimal transport theory one considers normalized (probability) distributions.
It is, however, important for many applications to relax this normalization assumption and compute transportation plans between arbitrary positive measures. Motivated by this observation the theory of optimal transport has been extended to measures with different masses. This field, called unbalanced optimal transport, has seen rapid developments in the past years and several different frameworks have been proposed [9, 25, 27, 37]. Among them is the Wasserstein Fisher Rao (WFR) distance, an interpolating distance between the quadratic Wasserstein metric and the Fisher–Rao metric, that was introduced independently by [9] and [26]. The WFR distance has been applied to a variety of problems where it is more natural to consider optimal transport in an unbalanced setting. These applications range from color transfer [10], to earthquake epicenter location [52] and document semantic similarity metrics [47]. Because of the growing field of applications, several algorithms have been proposed to compute the Wasserstein Fisher Rao metric.
A variation on the popular Sinkhorn algorithm that solves for an entropy regularized version of the distance was proposed by [10], and an alternating minimization algorithm that computes an exact solution was introduced in [6].

1.1. Contributions of the article. Recently a new and surprising relationship between these two areas (shape analysis and unbalanced optimal transport) has been found. Namely, in [6] it has been shown that for triangulated surfaces the calculation of the SRNF shape distance can be reduced to calculating the WFR distance between their corresponding surface area measures. The presentation in [6] was entirely focused on the discrete (PL) setting and the proof of the result essentially reduced to algebraic considerations.
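To make the objects in this equivalence concrete: for a triangulated surface, the surface area measure is the finitely supported measure on S² that places each face's area at the face's unit normal. A minimal numpy sketch of this construction (our own illustration; names are hypothetical, and consistently outward-oriented faces are assumed):

```python
import numpy as np

def surface_area_measure(verts, faces):
    """Surface area measure of a triangle mesh, returned as a finitely
    supported measure on S^2: (weights, directions), where weights are the
    face areas and directions the unit normals of consistently oriented faces."""
    v0, v1, v2 = (verts[faces[:, i]] for i in range(3))
    cross = np.cross(v1 - v0, v2 - v0)  # |cross| = 2 * (face area)
    areas = 0.5 * np.linalg.norm(cross, axis=1)
    normals = cross / np.linalg.norm(cross, axis=1, keepdims=True)
    return areas, normals

# A tetrahedron with outward-oriented faces. For any closed oriented surface
# the area-weighted normals sum to zero (the closure condition in Minkowski's
# theorem, cf. [32, 38]).
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
faces = np.array([[0, 2, 1], [0, 1, 3], [0, 3, 2], [1, 2, 3]])
areas, normals = surface_area_measure(verts, faces)
print(np.allclose((areas[:, None] * normals).sum(axis=0), 0))  # True
```

The result of [6] states that the SRNF distance between two PL surfaces equals the WFR distance between the two measures produced this way.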
In the first part of the present article we build the analytical tools to extend this result to the infinite dimensional setting, which contains in particular the original setup of the SRNF distance; the space of smooth surfaces. The main result of this part of our article – cf. Theorem 3.1 – shows that the SRNF shape distance between any two Lipschitz surfaces is equal to the WFR distance between their surface area measures. As a direct consequence of this result we are able to answer two fundamental questions regarding the SRNF transform: since the inception of the SRNF transform, it has been understood that the map is neither injective nor surjective [16]. Characterizing the image and non-injectivity have, however, remained open problems. Recently a first degeneracy result in the context of closed surfaces has been found [18].
Using our equivalence result we are able to obtain a characterization of the closure of the image of this transform – cf. Theorem 3.6 – and a new strong degeneracy result of the corresponding distance (non-injectivity of the transform, resp.) – cf. Theorem 3.8. In the second part we further explore the equivalence result for more general unbalanced optimal transport problems. Generalizations of some of the intermediate results of the first part allow us to offer a novel formulation of the WFR metric as a diffeomorphic optimization problem – cf. Theorem 4.1.
Whereas the main result of the first part of the article relates the WFR on S² with a specific choice of parameter to a diffeomorphic optimization problem, we here extend this relationship to the WFR with any choice of parameter defined on any connected, compact, oriented Riemannian manifold, N. Notably, the space of diffeomorphisms we have to optimize over does not depend on N, but can be chosen as the diffeomorphism group of some background manifold, which only needs to be of dimension greater than or equal to two.

Acknowledgements. The authors thank FX Vialard and Cy Maor for useful discussions during the preparation of this manuscript. M. Bauer was supported by NSF grants 1912037 and 1953244 and by FWF grant P 35813-N. E. Hartman was supported by NSF grant DMS-1953244.

2. Preliminaries.

2.1. The Wasserstein Fisher Rao Distance. In the following, we will summarize the Kantorovich formulation of the Wasserstein Fisher Rao distance, as introduced in [11] for measures on a smooth, connected, compact, oriented Riemannian manifold, N. Therefore we denote by M(N) the space of finite Borel measures on N. In the Kantorovich formulation of the Wasserstein-Fisher-Rao distance, we will define a functional on the space of semi-couplings. Therefore we first recall the definition of a semi-coupling:

Definition 2.1 (Semi-couplings [11]).
Given µ, ν ∈ M(N) the set of all semi-couplings from µ to ν is given by

Γ(µ, ν) = { (γ0, γ1) ∈ M(N × N)² | (Proj0)#γ0 = µ, (Proj1)#γ1 = ν }.

To define the Wasserstein-Fisher-Rao distance from µ to ν we define a functional on the space of semi-couplings from µ to ν. Let d denote the geodesic distance on N and δ ∈ (0, ∞). We consider the functional

Jδ : Γ(µ, ν) → R,
(γ1, γ2) ↦ 4δ² ( µ(N) + ν(N) − 2 ∫_{N×N} √((γ1/γ)(γ2/γ))(u, v) cos(d(u, v)/2δ) dγ(u, v) ),

where γ ∈ M(N × N) is such that γ1, γ2 ≪ γ. Note that in the case where N = S², we have d(u, v) = cos⁻¹(u · v). Thus for δ = 1/2,

(2.1)   Jδ(γ1, γ2) = ∫_{S²×S²} ‖ √(γ1/γ)(u, v) u − √(γ2/γ)(u, v) v ‖² dγ(u, v).

Definition 2.2 (Wasserstein-Fisher-Rao Distance [11, 26]).
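The reduction from the general cosine form of Jδ to (2.1) rests on the pointwise identity ‖√a u − √b v‖² = a + b − 2√(ab) u·v together with d(u, v) = cos⁻¹(u · v) on S². A quick numerical check of this on discrete data (an illustration we added; γ is taken to be counting measure on a finite set of pairs, and all names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

# Discrete data on S^2: support pairs (u_i, v_i) and the densities of
# gamma_1, gamma_2 with respect to a common dominating measure gamma.
n = 40
u = rng.normal(size=(n, 3)); u /= np.linalg.norm(u, axis=1, keepdims=True)
v = rng.normal(size=(n, 3)); v /= np.linalg.norm(v, axis=1, keepdims=True)
g1 = rng.random(n)  # gamma_1 / gamma at each pair
g2 = rng.random(n)  # gamma_2 / gamma at each pair

# General form with delta = 1/2: 4*delta^2 = 1 and cos(d / (2*delta)) = cos(d).
# By the marginal constraints, sum(g1) = mu(N) and sum(g2) = nu(N).
d = np.arccos(np.clip(np.sum(u * v, axis=1), -1.0, 1.0))
J_cos = g1.sum() + g2.sum() - 2 * np.sum(np.sqrt(g1 * g2) * np.cos(d))

# Spherical form (2.1): sum of || sqrt(g1) u - sqrt(g2) v ||^2.
J_sph = np.sum(np.linalg.norm(np.sqrt(g1)[:, None] * u
                              - np.sqrt(g2)[:, None] * v, axis=1) ** 2)
print(np.isclose(J_cos, J_sph))  # True
```

Expanding the squared norm term-by-term gives exactly the three terms of the cosine form, which is the algebraic content of the check above.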
The Wasserstein-Fisher-Rao distance on $\mathcal{M}(N)$ is the map
$$\mathrm{WFR}_\delta : \mathcal{M}(N)\times\mathcal{M}(N) \to \mathbb{R}_{\geq 0} \tag{2.2}$$
defined via
$$(\mu,\nu) \mapsto \inf_{(\gamma_0,\gamma_1)\in\Gamma(\mu,\nu)} \sqrt{J_\delta(\gamma_0,\gamma_1)}. \tag{2.3}$$
Some results in this article will specifically apply to the case $\delta = 1/2$. To simplify our notation, we define $J := J_{1/2}$ and $\mathrm{WFR} := \mathrm{WFR}_{1/2}$.

2.2. The Square Root Normal Field Shape Distance.
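For finitely supported measures $\mu = \sum_i m_i \delta_{u_i}$ and $\nu = \sum_j n_j \delta_{v_j}$ on $S^2$, a semi-coupling is a pair of nonnegative matrices whose row sums recover $m$ and whose column sums recover $n$, and $J_\delta$ can be evaluated in closed form, since $\frac{\sqrt{\gamma_1\gamma_2}}{\gamma}\,d\gamma$ reduces to $\sqrt{\gamma_{1,ij}\,\gamma_{2,ij}}$ independently of the dominating measure $\gamma$. A minimal numerical sketch (the discretization and names are ours, not the paper's):

```python
import numpy as np

def j_delta(G1, G2, U, V, delta=0.5):
    """Evaluate the semi-coupling functional J_delta for finitely
    supported measures on S^2.

    G1, G2 : (I, J) nonnegative arrays, a semi-coupling pair
             (rows of G1 sum to the masses of mu, columns of G2 to nu)
    U, V   : (I, 3) and (J, 3) unit vectors, the support points on S^2
    """
    # geodesic distance on S^2: d(u, v) = arccos(<u, v>)
    cos_uv = np.clip(U @ V.T, -1.0, 1.0)
    d = np.arccos(cos_uv)
    mass_mu = G1.sum()   # = mu(S^2), since G1 marginalizes to mu
    mass_nu = G2.sum()   # = nu(S^2)
    cross = np.sum(np.sqrt(G1 * G2) * np.cos(d / (2.0 * delta)))
    return 4.0 * delta**2 * (mass_mu + mass_nu - 2.0 * cross)
```

For $\delta = 1/2$ and single atoms $\mu = \delta_u$, $\nu = \delta_v$ coupled by $G_1 = G_2 = [1]$, this returns $2 - 2\langle u, v\rangle = |u - v|^2$, consistent with (2.1).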
In mathematical shape analysis, one defines metrics that measure the differences between geometric objects [51, 3, 40, 13]. In this article we consider geometric objects described by unparameterized surfaces, which are elements of an infinite-dimensional non-linear space modulo several finite- and infinite-dimensional group actions. As a result, computations in this space are difficult and even simple statistical operations are not well defined. Riemannian geometry can help to overcome these challenges. In such a framework, one considers the space of all surfaces as an infinite-dimensional manifold and equips it with a Riemannian metric that is invariant to the group actions, which allows one to consider the induced metric on the quotient space.

For our purposes we will consider immersions of a smooth, connected, compact, oriented Riemannian 2-dimensional manifold $M$, with or without boundary. We denote the space of all Lipschitz immersions of $M$ into $\mathbb{R}^3$ by $\mathrm{Imm}(M,\mathbb{R}^3)$, i.e.,
$$\mathrm{Imm}(M,\mathbb{R}^3) = \{ f \in W^{1,\infty}(M,\mathbb{R}^3) : Tf \text{ is injective a.e.} \}. \tag{2.4}$$
As we are interested in unparametrized surfaces, we have to factor out the action of the group of diffeomorphisms. In the context of Lipschitz immersions, the natural group of reparametrizations for us to consider is the group of all orientation-preserving, bi-Lipschitz diffeomorphisms:
$$\Gamma(M) = \{ \gamma \in W^{1,\infty}(M,M) : \gamma^{-1} \in W^{1,\infty}(M,M),\ |D\gamma| > 0 \text{ a.e.} \},$$
where $|D\gamma|$ denotes the Jacobian determinant of $\gamma$, which is well-defined as $D\gamma \in L^\infty$.
Note that this reparametrization group acts by composition from the right on $\mathrm{Imm}(M,\mathbb{R}^3)$. In addition to the action by the reparametrization group, we also want to identify surfaces that only differ by a translation. This leads us to consider the following quotient space:
$$\mathcal{S} := \mathrm{Imm}(M,\mathbb{R}^3)/(\Gamma(M)\times \mathrm{trans}). \tag{2.5}$$
In the following we will equip $\mathrm{Imm}(M,\mathbb{R}^3)$ with a reparametrization-invariant distance: the so-called square root normal field (SRNF) distance. The SRNF map (and distance, resp.) was originally introduced by Jermyn et al. in [15] for the space of smooth immersions, but it naturally extends to the space of all Lipschitz surfaces, as demonstrated in [6]. We now recall the definition of this distance.
For any given $f \in \mathrm{Imm}(M,\mathbb{R}^3)$, the orientation on $M$ allows us to consider the unit normal vector field $n_f : M \to \mathbb{R}^3$, which is well-defined as an element of $L^\infty(M,\mathbb{R}^3)$. Furthermore, let $\{v,w\}$ be an orthonormal basis of $T_xM$. Then for any $f \in \mathrm{Imm}(M,\mathbb{R}^3)$ we can define the area multiplication factor at $x \in M$ via $a_f(x) = |df_x(v) \times df_x(w)|$. The SRNF map is then given by
$$\Phi : \mathrm{Imm}(M,\mathbb{R}^3)/\text{translations} \to L^2(M,\mathbb{R}^3), \qquad f \mapsto q_f, \tag{2.6}$$
where
$$q_f(x) := \sqrt{a_f(x)}\, n_f(x). \tag{2.7}$$
From this transform we define a distance on $\mathrm{Imm}(M,\mathbb{R}^3)/\text{translations}$ by $d_{\mathrm{Imm}}(f_1,f_2) = \|\Phi(f_1) - \Phi(f_2)\|_{L^2}$.

Next we consider a right action of $\Gamma(M)$ on $L^2(M,\mathbb{R}^3)$ that is compatible with the mapping $\Phi$. For $q \in L^2(M,\mathbb{R}^3)$ and $\gamma \in \Gamma(M)$ we let
$$(q * \gamma)(x) = \sqrt{|D\gamma(x)|}\, q(\gamma(x)). \tag{2.8}$$
It is easy to check that the action of $\Gamma(M)$ on $L^2(M,\mathbb{R}^3)$ is by linear isometries and that for any $f \in \mathrm{Imm}(M,\mathbb{R}^3)$ and $\gamma \in \Gamma(M)$, $\Phi(f) * \gamma = \Phi(f \circ \gamma)$. Thus, it follows that the SRNF distance on $\mathrm{Imm}(M,\mathbb{R}^3)$ is invariant with respect to this action and thus descends to a (pseudo) distance on the quotient space $\mathcal{S}$, given by
$$d_{\mathcal{S}}([f_1],[f_2]) = \inf_{\gamma\in\Gamma(M)} d_{\mathrm{Imm}}(f_1, f_2\circ\gamma), \qquad [f_1],[f_2] \in \mathcal{S}.$$
As we will see later, the induced (pseudo) distance on the quotient space is highly degenerate.

2.3. Equivalence of WFR and SRNF in the piecewise linear category. In [6] a surprising equivalence of the WFR and SRNF distances was shown: for piecewise linear surfaces it was proved that the SRNF distance can be reduced to the WFR distance between finitely supported measures.
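For a piecewise linear immersion, $n_f$ and $a_f$ are constant on each face, so $q_f$ is piecewise constant and can be computed face by face. The sketch below measures the area multiplication factor against unit-area parameter triangles; this normalization is our illustrative choice, not prescribed by the text:

```python
import numpy as np

def srnf_faces(verts, faces):
    """Piecewise constant SRNF values q_f = sqrt(a_f) * n_f for a
    triangulated surface.  The area multiplication factor of a face is
    taken relative to a unit-area parameter triangle, so a_f is simply
    the embedded face area (a discretization convention, not from the
    paper)."""
    A = verts[faces[:, 0]]
    B = verts[faces[:, 1]]
    C = verts[faces[:, 2]]
    cross = np.cross(B - A, C - A)                       # length = 2 * face area
    area = 0.5 * np.linalg.norm(cross, axis=1)           # a_f per face
    normal = cross / np.linalg.norm(cross, axis=1, keepdims=True)  # n_f per face
    return np.sqrt(area)[:, None] * normal               # one q-vector per face
```

By construction $|q_f|^2$ on a face equals that face's embedded area, mirroring $|q_f(x)|^2 = a_f(x)$.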
To formulate this result in detail, we first associate to every $q \in L^2(M,\mathbb{R}^3)$ a measure on $S^2$; namely, for any open $U \subseteq S^2$, we define
$$q^*U = \{ x \in M \mid q(x) \neq 0 \text{ and } q(x)/|q(x)| \in U \}$$
and define the map $L^2(M,\mathbb{R}^3) \to \mathcal{M}(S^2)$ via $q \mapsto \mu_q$, where for $U \subseteq S^2$,
$$\mu_q(U) = \int_{q^*U} q(x)\cdot q(x)\, dm.$$
The result proved in [6] is then formulated as:

Theorem 2.3. Given two piecewise linear surfaces $S_1$ and $S_2$ parameterized by $f$ and $g$, the SRNF shape distance can be computed as an unbalanced transport problem. More precisely, we have
$$d_{\mathcal{S}}([f],[g]) = \inf_{\gamma\in\Gamma(M)} \|q_f - q_g * \gamma\|_{L^2} = \mathrm{WFR}(\mu_{q_f}, \mu_{q_g}),$$
where $q_f$ and $q_g$ are the SRNFs of $f$ and $g$ respectively.

In the next section we will extend this result to all Lipschitz immersions (Borel measures, resp.).
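In this piecewise linear setting $\mu_q$ is finitely supported: a face with unit normal $n$ contributes an atom at $n$ whose mass $\int |q|^2\, dm$ over the face equals the face's embedded area. A sketch under a unit-parameter-area convention (our choice for illustration, not the paper's code):

```python
import numpy as np

def srnf_measure(verts, faces):
    """Finitely supported measure mu_q on S^2 associated with a
    piecewise linear surface: each face contributes its embedded area
    as an atom at its unit normal direction."""
    A, B, C = (verts[faces[:, k]] for k in range(3))
    cross = np.cross(B - A, C - A)
    norms = np.linalg.norm(cross, axis=1)
    directions = cross / norms[:, None]   # atom locations on S^2
    masses = 0.5 * norms                  # atom masses = face areas
    return directions, masses
```

The total mass $\mu_q(S^2)$ is then the total surface area, and Theorem 2.3 reduces the SRNF shape distance between two such surfaces to an unbalanced transport problem between these atomic measures.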
3. The SRNF distance.

For the goal of extending the result of Theorem 2.3 to all Lipschitz surfaces, we will specifically consider $\delta = \tfrac12$ in the definition of the WFR metric.

3.1. Equivalence of the WFR and SRNF distances. Our main result of this section is the following theorem, which is slightly stronger than the desired equivalence result.

Theorem 3.1. Given $q_1, q_2 \in L^2(M,\mathbb{R}^3)$,
$$\inf_{\gamma\in\Gamma(M)} \|q_1 - q_2 * \gamma\|_{L^2} = \mathrm{WFR}(\mu_{q_1}, \mu_{q_2}).$$
In particular, given $f, g \in W^{1,\infty}(M,\mathbb{R}^3)$ we can calculate their SRNF distance as an unbalanced OMT problem via $d_{\mathcal{S}}([f],[g]) = \mathrm{WFR}(\mu_{q_f}, \mu_{q_g})$, where $q_f$ and $q_g$ are the SRNFs of $f$ and $g$ respectively.

Remark 1. Note that as a direct consequence of Theorem 3.1 we can also conclude the extension of Theorem 2.3 to the original setup of the SRNF distance, the space of all smooth surfaces.

The proof of Theorem 3.1 relies on a series of technical lemmas, which we will show next.

Lemma 3.2. Let $X, Y$ be topological spaces and let $\rho : X \to Y$ be a measurable function with respect to the Borel $\sigma$-algebras. If $\mu, \mu_1 \in \mathcal{M}(X)$ and $\gamma, \gamma_1 \in \mathcal{M}(Y)$ are such that $\mu_1 \ll \mu$, $\gamma = \rho_*\mu$, and $\gamma_1 = \rho_*\mu_1$, then $\gamma_1 \ll \gamma$. Furthermore, $\frac{d\mu_1}{d\mu} = \frac{d\gamma_1}{d\gamma}\circ\rho$ almost everywhere.

Proof. Let $U \subseteq Y$ be open such that $\gamma(U) = 0$. By definition, $\mu(\rho^{-1}(U)) = 0$. Since $\mu_1 \ll \mu$, $\mu_1(\rho^{-1}(U)) = 0$. Therefore, $\gamma_1(U) = 0$. This proves $\gamma_1 \ll \gamma$.
Following the definitions of the Radon-Nikodym derivative, pushforwards, and the change of variables formula, we obtain, for any open $U \subseteq Y$,
$$\int_{\rho^{-1}(U)} \frac{d\mu_1}{d\mu}\, d\mu = \int_{\rho^{-1}(U)} d\mu_1 = \int_U d\gamma_1 = \int_U \frac{d\gamma_1}{d\gamma}\, d\gamma = \int_{\rho^{-1}(U)} \frac{d\gamma_1}{d\gamma}\circ\rho\, d\mu.$$
Thus, $\frac{d\mu_1}{d\mu} = \frac{d\gamma_1}{d\gamma}\circ\rho$ almost everywhere. □

Given $q \in L^2(M,\mathbb{R}^3)$ we can define a function from $M$ to $S^2$ that takes every point $x \in M$ to the unit vector in the direction of $q(x)$. As a matter of defining this function at every point, we can canonically choose the north pole of $S^2$ for points where $q(x) = 0$.

Definition 3.3. For $q \in L^2(M,\mathbb{R}^3)$ we define the unit vector map of $q$ as $\bar{q} : M \to S^2$ given by
$$x \mapsto \begin{cases} \dfrac{q(x)}{|q(x)|} & \text{if } q(x) \neq 0, \\[4pt] (1,0,0) & \text{otherwise.} \end{cases}$$
Note that since $q \in L^2(M,\mathbb{R}^3)$, it follows that $\bar{q} : M \to S^2$ is measurable. Let $q \in L^2(M,\mathbb{R}^3)$.
We can define a measure $\nu_q \in \mathcal{M}(M)$ via
$$\nu_q(U) = \int_U |q(x)|^2\, dm$$
for all open $U \subseteq M$. Note that $\nu_q \ll m$ and $\frac{d\nu_q}{dm} = |q|^2$. Further, we can equivalently define $\mu_q$ as the pushforward of $\nu_q$ via $\bar{q}$.

Lemma 3.4. Let $q \in L^2(M,\mathbb{R}^3)$ and let $\mu_q \in \mathcal{M}(S^2)$ be the measure associated with $q$. Then $\mu_q = \bar{q}_*\nu_q$.

Proof. Let $U \subseteq S^2$ be open and define $M_0 = \{x \in M \mid q(x) = 0\}$. If $(1,0,0) \notin U$, then $\bar{q}^{-1}(U) = q^*U$ and thus
$$\bar{q}_*\nu_q(U) = \int_{\bar{q}^{-1}(U)} |q(x)|^2\, dm = \int_{q^*U} |q(x)|^2\, dm = \mu_q(U).$$
If $(1,0,0) \in U$, then $\bar{q}^{-1}(U) = q^*U \cup M_0$ and thus
$$\bar{q}_*\nu_q(U) = \int_{\bar{q}^{-1}(U)} |q(x)|^2\, dm = \int_{q^*U} |q(x)|^2\, dm + \int_{M_0} |q(x)|^2\, dm = \mu_q(U),$$
since the integral over $M_0$ vanishes. □

Leveraging what we have proven above, we may show a key continuity result that will then allow us to complete the proof of the main theorem.

Lemma 3.5. The map $(L^2(M,\mathbb{R}^3), \|\cdot\|_{L^2}) \to (\mathcal{M}(S^2), \mathrm{WFR})$ defined via $q \mapsto \mu_q$ is Lipschitz continuous with Lipschitz constant $K = 1$.

Proof. Let $q_1, q_2 \in L^2(M,\mathbb{R}^3)$. For any semi-coupling $(\gamma_1,\gamma_2) \in \Gamma(\mu_{q_1}, \mu_{q_2})$,
$$\mathrm{WFR}(\mu_{q_1},\mu_{q_2}) \leq \sqrt{J(\gamma_1,\gamma_2)}.$$
Thus, to prove the lemma we must construct $(\gamma_1,\gamma_2) \in \Gamma(\mu_{q_1},\mu_{q_2})$ such that $J(\gamma_1,\gamma_2) = \|q_1 - q_2\|_{L^2}^2$.
To construct such a semi-coupling, we first construct $\rho : M \to S^2 \times S^2$ defined as the unit vector maps of $q_1$ and $q_2$ on the first and second factor respectively, i.e., the map given by
$$\rho(x) = (\bar{q}_1(x), \bar{q}_2(x)).$$
Since $\bar{q}_1$ and $\bar{q}_2$ are individually measurable, so is $\rho$. We can then define $\gamma_1, \gamma_2 \in \mathcal{M}(S^2\times S^2)$ via $\gamma_1 = \rho_*\nu_{q_1}$ and $\gamma_2 = \rho_*\nu_{q_2}$.

Claim 1. The pair of measures $(\gamma_1,\gamma_2)$ is a semi-coupling from $\mu_{q_1}$ to $\mu_{q_2}$.

Proof of claim. Let $U \subseteq S^2$ be open.
Then
$$\gamma_1(U\times S^2) = \nu_{q_1}\big(\rho^{-1}(U\times S^2)\big) = \nu_{q_1}\big(\bar{q}_1^{-1}(U) \cap \bar{q}_2^{-1}(S^2)\big) = \nu_{q_1}\big(\bar{q}_1^{-1}(U)\big) = \mu_{q_1}(U)$$
and
$$\gamma_2(S^2\times U) = \nu_{q_2}\big(\rho^{-1}(S^2\times U)\big) = \nu_{q_2}\big(\bar{q}_1^{-1}(S^2) \cap \bar{q}_2^{-1}(U)\big) = \nu_{q_2}\big(\bar{q}_2^{-1}(U)\big) = \mu_{q_2}(U).$$
So $(\gamma_1,\gamma_2)$ is a semi-coupling from $\mu_{q_1}$ to $\mu_{q_2}$. □

Recall from the definition of the functional $J$ that we need to construct $\gamma \in \mathcal{M}(S^2\times S^2)$ such that $\gamma_1, \gamma_2 \ll \gamma$. Define $\gamma = \rho_*m$. We know $\nu_{q_1}, \nu_{q_2} \ll m$. Thus, by Lemma 3.2, $\gamma_1, \gamma_2 \ll \gamma$. Furthermore,
$$|q_1|^2 = \frac{d\nu_{q_1}}{dm} = \frac{d\gamma_1}{d\gamma}\circ\rho \quad \text{a.e.} \qquad \text{and} \qquad |q_2|^2 = \frac{d\nu_{q_2}}{dm} = \frac{d\gamma_2}{d\gamma}\circ\rho \quad \text{a.e.}$$
So,
$$J(\gamma_1,\gamma_2) = \int_{S^2\times S^2} \left| \sqrt{\tfrac{d\gamma_1}{d\gamma}}(u,v)\,u - \sqrt{\tfrac{d\gamma_2}{d\gamma}}(u,v)\,v \right|^2 d\gamma(u,v)$$
$$= \int_{S^2\times S^2} \frac{d\gamma_1}{d\gamma}(u,v)\, d\gamma(u,v) + \int_{S^2\times S^2} \frac{d\gamma_2}{d\gamma}(u,v)\, d\gamma(u,v) - 2\int_{S^2\times S^2} \frac{\sqrt{\gamma_1\gamma_2}}{\gamma}(u,v)\,\langle u, v\rangle\, d\gamma(u,v)$$
$$= \int_M |q_1(x)|^2\, dm + \int_M |q_2(x)|^2\, dm - 2\int_M |q_1(x)||q_2(x)| \left\langle \frac{q_1(x)}{|q_1(x)|}, \frac{q_2(x)}{|q_2(x)|} \right\rangle dm = \|q_1 - q_2\|_{L^2}^2.$$
Thus,
$$\mathrm{WFR}(\mu_{q_1}, \mu_{q_2}) \leq \sqrt{J(\gamma_1,\gamma_2)} = 1\cdot\|q_1 - q_2\|_{L^2}. \qquad \Box$$

We are now ready to conclude the proof of Theorem 3.1:

Proof of Theorem 3.1. Let $q_1, q_2 \in L^2(M,\mathbb{R}^3)$ and let $\epsilon > 0$. Let $p_1, p_2$ be piecewise constant functions such that $\|q_1 - p_1\|_{L^2} < \epsilon/4$ and $\|q_2 - p_2\|_{L^2} < \epsilon/4$. Therefore,
$$\inf_{\gamma\in\Gamma(M)} \|q_1 - p_1 * \gamma\|_{L^2},\ \inf_{\gamma\in\Gamma(M)} \|q_2 - p_2 * \gamma\|_{L^2},\ \mathrm{WFR}(\mu_{q_1}, \mu_{p_1}),\ \mathrm{WFR}(\mu_{q_2}, \mu_{p_2}) < \epsilon/4.$$
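The chain of equalities in the proof of Lemma 3.5 can be sanity-checked numerically for piecewise constant $q_1, q_2$: push the cell weights $|q_i|^2\,dm$ forward by $\rho = (\bar{q}_1, \bar{q}_2)$ and evaluate the discrete analogue of $J$ at $\delta = 1/2$, where $\cos d(u,v) = \langle u, v\rangle$. A sketch with a discretization of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
dm = np.full(n, 1.0 / n)                  # equal cell measures on M
q1 = rng.normal(size=(n, 3))              # piecewise constant q-values
q2 = rng.normal(size=(n, 3))

def unit(q):
    # unit vector map of Definition 3.3, sending zeros of q to (1, 0, 0)
    out = np.zeros_like(q)
    out[:, 0] = 1.0
    nz = np.linalg.norm(q, axis=1) > 0
    out[nz] = q[nz] / np.linalg.norm(q[nz], axis=1, keepdims=True)
    return out

u, v = unit(q1), unit(q2)                 # atoms of gamma_i = rho_* nu_{q_i}
w1 = np.einsum('ij,ij->i', q1, q1) * dm   # gamma_1-weights |q1|^2 dm
w2 = np.einsum('ij,ij->i', q2, q2) * dm   # gamma_2-weights |q2|^2 dm
# discrete J at delta = 1/2: sum w1 + sum w2 - 2 sum sqrt(w1 w2) <u, v>
J = w1.sum() + w2.sum() - 2.0 * np.sum(np.sqrt(w1 * w2) * np.einsum('ij,ij->i', u, v))
l2_sq = np.sum(np.sum((q1 - q2) ** 2, axis=1) * dm)   # ||q1 - q2||_{L^2}^2
```

Here $\sqrt{w_1 w_2}\,\langle u, v\rangle = \langle q_1, q_2\rangle\, dm$ cell by cell, so `J` agrees with `l2_sq` up to floating-point error, exactly as in the proof.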
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' BAUER, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' HARTMAN, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' KLASSEN Thus,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' inf γ∈Γ(M) ∥q1 − q2 ∗ γ∥L2 ≤ inf γ∈Γ(M) ∥q1 − p1 ∗ γ∥L2 + inf γ∈Γ(M) ∥p2 − q2 ∗ γ∥L2 + inf γ∈Γ(M) ∥p1 − p2 ∗ γ∥L2 ≤ ǫ/2 + inf γ∈Γ(M) ∥p1 − p2 ∗ γ∥L2 = ǫ/2 + WFR(µp1,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' µp2) ≤ ǫ/2 + WFR(µq1,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' µp1) + WFR(µp2,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' µq2) + WFR(µq1,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' µq2) ≤ ǫ + WFR(µq1,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' µq2) and WFR(µq1,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' µq2) ≤ WFR(µp1,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' µp2) + WFR(µq1,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' µp1) + WFR(µp2,' metadata={'source': 
'/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' µq2) ≤ WFR(µp2,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' µq2) + ǫ/2 = inf γ∈Γ(M) ∥p1 − p2 ∗ γ∥L2 + ǫ/2 ≤ inf γ∈Γ(M) ∥q1 − p1 ∗ γ∥L2 + inf γ∈Γ(M) ∥p2 − q2 ∗ γ∥L2 + inf γ∈Γ(M) ∥q1 − q2 ∗ γ∥L2 + ǫ/2 ≤ inf γ∈Γ(M) ∥q1 − q2 ∗ γ∥L2 + ǫ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' So, WFR(µq1, µq2) − ǫ ≤ inf γ∈Γ(M) ∥q1 − q2 ∗ γ∥L2 ≤ WFR(µq1, µq2) + ǫ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Taking ǫ → 0 we can conclude infγ∈Γ(M) ∥q1 − q2 ∗ γ∥L2 = WFR(µq1, µq2).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Characterizing the closure of the image of the SRNF map.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Our equivalence result will also allow us to characterize the (closure of the) image of the SRNF map Φ in the context of spherical surfaces: Theorem 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content='6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Let f ∈ Imm(S2, R3) and let q = Φ(f) ∈ L2(S2, R3).' 
Then $q$ satisfies the closure condition
$$\int_{S^2} q(x)|q(x)|\, dm = 0.$$
Moreover, the closure of the image of $\Phi$ is given by the set
$$U := \Big\{ q \in L^2(S^2, \mathbb{R}^3) \text{ such that } \int_{S^2} q(x)|q(x)|\, dm = 0 \Big\}.$$

To prove this result we will need a classical theorem from geometric measure theory and the study of convex polyhedra, which we recall next:

Theorem 3.7 (Minkowski's Theorem [1, 32, 38]). Let $\mu \in \mathcal{M}(S^2)$ be such that the support of $\mu$ is not concentrated on a great circle and $\int_{S^2} x\, d\mu(x) = 0$. Then there exists a unique (up to translation) convex body whose surface area measure is $\mu$. Moreover, if $\mu$ is finitely supported then the convex body is a polytope.

Proof of Theorem 3.6. Let $f \in \mathrm{Imm}(S^2, \mathbb{R}^3)$ and $q_f = \Phi(f)$. Let $S = f(S^2)$ and let $V$ be the region enclosed by $S$. Therefore,
$$\int_{S^2} q_f(x)|q_f(x)|\, dm = \int_{S^2} a_f(x)\, n_f(x)\, dm = \int_S n_f\, dS.$$
Thus, this is the integral of the normal vector of a closed surface in $\mathbb{R}^3$. A simple application of the divergence theorem shows that the integral of the normal vector of a closed surface is zero. To see this, let $\{e_i\}_{i=1}^3$ be the unit basis vectors of $\mathbb{R}^3$. For $i = 1, 2, 3$,
$$\int_S (n_f \cdot e_i)\, dS = \int_V (\nabla \cdot e_i)\, dV = 0.$$
Therefore $\int_{S^2} q_f(x)|q_f(x)|\, dm = 0$ and the image of $\Phi$ is contained in $U$.

To prove the converse direction let $q \in U$. We aim to construct a convex body $f$ with $\mu_{q_f}$ arbitrarily close to $\mu_q$.
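The divergence-theorem step above (the integral of the outward normal over a closed surface vanishes) has a simple discrete analogue that can be checked numerically: for a closed, consistently oriented triangle mesh, the area-weighted face normals sum to the zero vector. The tetrahedron and helper names below are our illustrative choices, not objects from the paper.

```python
# Discrete analogue of ∫_S n_f dS = 0: for a closed, consistently oriented
# triangle mesh, the sum of area-weighted face normals is the zero vector.

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def area_weighted_normal(v0, v1, v2):
    # 0.5 * (v1 - v0) x (v2 - v0): length = triangle area,
    # direction = face normal for the orientation (v0, v1, v2).
    c = cross(sub(v1, v0), sub(v2, v0))
    return tuple(0.5 * x for x in c)

# A tetrahedron whose faces are consistently (outward) oriented.
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
faces = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]

total = (0.0, 0.0, 0.0)
for i, j, k in faces:
    n = area_weighted_normal(verts[i], verts[j], verts[k])
    total = tuple(t + x for t, x in zip(total, n))

print(total)  # (0.0, 0.0, 0.0)
```

The same cancellation holds for any closed mesh, which is exactly why the closure condition $\int_{S^2} q_f|q_f|\,dm = 0$ is necessary.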
By the definition of $U$ the measure $\mu_q$ satisfies $\int_{S^2} n\, d\mu_q(n) = 0$. Since finitely supported measures are dense with respect to the WFR metric, we can choose a finitely supported measure $\bar\mu_q$ such that $\int_{S^2} n\, d\bar\mu_q(n) = 0$ and $\mathrm{WFR}(\mu_q, \bar\mu_q) < \epsilon/3$. If the support of $\bar\mu_q$ is not concentrated on a great circle we can invoke the Minkowski theorem and the result follows. For the general case we slightly deform the measure as follows. Define
$$\hat\mu_q := \bar\mu_q + \sum_{i=1}^3 \frac{\epsilon}{18}\,\delta_{e_i} + \sum_{i=1}^3 \frac{\epsilon}{18}\,\delta_{-e_i},$$
where $\{e_i\}_{i=1}^3$ is the set of unit basis vectors of $\mathbb{R}^3$. Then $\hat\mu_q$ is a finitely supported measure, it satisfies $\int_{S^2} n\, d\hat\mu_q(n) = 0$, and it is not supported on a single great circle. Moreover, $\mathrm{WFR}(\bar\mu_q, \hat\mu_q) < \epsilon/3$. By the Minkowski Theorem (Theorem 3.7) there exists a convex polytope with surface area measure given by $\hat\mu_q$. Let $f \in W^{1,\infty}(S^2, \mathbb{R}^3)$ be the PL spherical parameterization of this convex body, so that $\mu_{q_f} = \hat\mu_q$. Thus, there exists $\gamma \in \Gamma(M)$ such that $\|q_f - q*\gamma\|_{L^2} < \mathrm{WFR}(\mu_{q_f}, \mu_q) + \epsilon/3$. Therefore,
$$\|q_f - q*\gamma\|_{L^2} \le \mathrm{WFR}(\mu_{q_f}, \mu_q) + \epsilon/3 = \mathrm{WFR}(\hat\mu_q, \mu_q) + \epsilon/3 \le \mathrm{WFR}(\hat\mu_q, \bar\mu_q) + \mathrm{WFR}(\bar\mu_q, \mu_q) + \epsilon/3 < \epsilon,$$
which concludes the proof.

3.3. Characterizing the degeneracy of the SRNF distance. As a second important consequence of our equivalence result we can give a detailed proof of the degeneracy of the SRNF distance for smooth surfaces. Degeneracy results were studied in [18], and the degeneracy was further characterized for certain PL surfaces in [6].
Here we generalize the characterization of [6] to smooth surfaces:

Theorem 3.8. For any smooth, regular surface $f \in C^\infty(S^2, \mathbb{R}^3) \cap \mathrm{Imm}(S^2, \mathbb{R}^3)$ there exists a unique (up to translations) convex body $f_1$ that is indistinguishable from $f$ by the SRNF shape distance, i.e., $d_{\mathcal{S}}([f], [f_1]) = 0$.

Proof of Theorem 3.8. Let $f \in C^\infty(S^2, \mathbb{R}^3) \cap \mathrm{Imm}(S^2, \mathbb{R}^3)$ be a regular surface. By [43, Prop. 4.33] the Gauss map of $f$ is surjective. Thus the image of $q_f$ is not contained in a single hyperplane of $\mathbb{R}^3$.
Furthermore, $\int_{S^2} q_f(x)|q_f(x)|\, dm = 0$. Thus, by Theorem 3.7, there exists a unique convex body (up to translation) with surface area measure given by $\mu_{q_f}$. By Theorem 3.1 the surface $f$ and this convex body are at SRNF distance 0 from each other.

4. The WFR metric as a diffeomorphic optimization problem. In this section we generalize the results of the previous sections to the Wasserstein Fisher Rao distance on any manifold and for any coefficient $\delta$, thereby characterizing the Wasserstein Fisher Rao distance as a diffeomorphic optimization problem. Let $N$ be a smooth, connected, compact, oriented Riemannian manifold.
Define the cone over $N$ via $C(N) := (N \times \mathbb{R}_{\ge 0})/(N \times \{0\})$. If we let $d$ denote the geodesic distance on $N$ and fix some $\delta \in (0, \infty)$, then we can define a metric on $C(N)$ via
$$d_{C(N)}\big((n_1, r_1), (n_2, r_2)\big)^2 = 4\delta^2 r_1^2 + 4\delta^2 r_2^2 - 8\delta^2 r_1 r_2 \cos\!\big(d(n_1, n_2)/2\delta\big).$$
Let $M$ be another smooth, connected, compact, oriented Riemannian manifold. Any function $q : M \to C(N)$ can be decomposed into component functions by $q(x) = (\bar q(x), q^\circ(x))$, where $\bar q : M \to N$ and $q^\circ : M \to \mathbb{R}_{\ge 0}$. We can thus define $\hat q : M \to \mathbb{R}_{\ge 0}$ via $\hat q(x) = 2\delta\, q^\circ(x)$ for all $x \in M$. Given $q_1, q_2 : M \to C(N)$:
The $L^2$ distance between $q_1$ and $q_2$ is given by
$$d_{L^2}(q_1, q_2)^2 = \int_M d_{C(N)}\big(q_1(x), q_2(x)\big)^2\, dm.$$
By decomposing $q_1$ and $q_2$, we can alternatively write
$$(4.1)\qquad d_{L^2}(q_1, q_2)^2 = \int_M \hat q_1(x)^2\, dm + \int_M \hat q_2(x)^2\, dm - 2\int_M \hat q_1(x)\,\hat q_2(x)\cos\!\big(d(\bar q_1(x), \bar q_2(x))/2\delta\big)\, dm.$$
We define the $L^2$ cost of a function $q : M \to C(N)$ as the distance from $q$ to the function that maps all of $M$ to the cone point. In particular, using the decomposition of $q$, this distance is given by
$$d_{L^2}(0, q)^2 = \int_M \hat q(x)^2\, dm.$$
Thus, we define the space of $L^2$-functions from $M$ to $C(N)$ as
$$L^2(M, C(N)) := \{ q : M \to C(N) \text{ s.t. } d_{L^2}(0, q)^2 < \infty \}$$
and we equip $L^2(M, C(N))$ with the metric $d_{L^2}$. We define the right action of the diffeomorphisms of $M$ on $L^2(M, C(N))$ component-wise.
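As a sanity check of the rewriting in (4.1), the following minimal numerical sketch evaluates the cone metric at two points of $C(S^2)$ and compares it with its expansion in terms of $\hat q = 2\delta q^\circ$. The choice $N = S^2$, the value of $\delta$, and the sample points are arbitrary illustrations, not data from the paper.

```python
import math

def sphere_dist(n1, n2):
    # Geodesic (great-circle) distance on the unit sphere S^2.
    dot = sum(a * b for a, b in zip(n1, n2))
    return math.acos(max(-1.0, min(1.0, dot)))

def cone_dist_sq(p1, p2, delta):
    # d_{C(N)}((n1,r1),(n2,r2))^2
    #   = 4 δ² r1² + 4 δ² r2² − 8 δ² r1 r2 cos(d(n1,n2)/(2δ))
    (n1, r1), (n2, r2) = p1, p2
    d = sphere_dist(n1, n2)
    return (4 * delta**2 * r1**2 + 4 * delta**2 * r2**2
            - 8 * delta**2 * r1 * r2 * math.cos(d / (2 * delta)))

delta = 0.7  # illustrative coefficient
p1 = ((1.0, 0.0, 0.0), 2.0)
p2 = ((0.0, 1.0, 0.0), 0.5)

# Expansion used in (4.1), with q-hat = 2 δ q°:
q1_hat, q2_hat = 2 * delta * p1[1], 2 * delta * p2[1]
expanded = (q1_hat**2 + q2_hat**2
            - 2 * q1_hat * q2_hat
            * math.cos(sphere_dist(p1[0], p2[0]) / (2 * delta)))

assert abs(cone_dist_sq(p1, p2, delta) - expanded) < 1e-12
# The cone point (r = 0) sits at distance 2 δ r from (n, r):
assert abs(cone_dist_sq(p1, (p1[0], 0.0), delta) - (2 * delta * p1[1])**2) < 1e-12
```

The second assertion is the pointwise form of $d_{L^2}(0, q)^2 = \int_M \hat q^2\, dm$.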
We treat $\hat q$ as a half-density and define the action of $\Gamma(M)$ on this component as the action on half-densities. Thus, we define the action of $\Gamma(M)$ on $L^2(M, C(N))$, given by $L^2(M, C(N)) \times \Gamma(M) \to L^2(M, C(N))$, via
$$\big((\bar q, \hat q), \gamma\big) \mapsto \Big(\bar q \circ \gamma,\ \hat q \circ \gamma \cdot \sqrt{|D\gamma|}\Big).$$
The main result of this section is to show that the Wasserstein Fisher Rao distance can be written as the distance between the orbits associated with the measures:

Theorem 4.1. Let $N$ be a smooth, connected, compact Riemannian manifold and let $M$ be a smooth, connected, compact Riemannian manifold of dimension 2 or higher.

a.) For all $\mu_1, \mu_2 \in \mathcal{M}(N)$ and $q_1, q_2 \in L^2(M, C(N))$ such that $\mu_1 = \bar q_{1*}\nu_{q_1}$ and $\mu_2 = \bar q_{2*}\nu_{q_2}$ we have
$$\mathrm{WFR}_\delta(\mu_1, \mu_2) = \inf_{\gamma\in\Gamma(M)} d_{L^2}(q_1, q_2*\gamma).$$

b.) Moreover, for all $\mu \in \mathcal{M}(N)$ there exists $q \in L^2(M, C(N))$ such that $\mu = \bar q_*\nu_q$.
If $\mu$ is a finitely supported measure given by $\mu = \sum_{i=1}^n a_i \delta_{u_i}$, then one can choose $q$ piecewise constant. More specifically, the function $q$ given by
$$q(x) = \begin{cases} \big(u_j, \sqrt{a_j/\mathrm{area}(\sigma_j)}\big) & \text{if } x \in \sigma_j,\ 1 \le j \le n, \\ (u_1, 0) & \text{if } x \in \sigma_j,\ n < j \le m, \end{cases}$$
where $\{\sigma_j\}_{j=1}^m$ is a subdivision of the canonical triangulation of $M$ with $m \ge n$, satisfies $\mu = \bar q_*\nu_q$.

Before we are able to prove this theorem, we will again need several technical lemmas. We therefore consider specific measures associated with functions $q \in L^2(M, C(N))$. First, we define $\nu_q \in \mathcal{M}(M)$ such that for any open $U \subseteq M$,
$$\nu_q(U) = \int_U \hat q(x)^2\, dm.$$
Note that $\nu_q \ll m$ and $\frac{d\nu_q}{dm} = \hat q^2$. Further, we can define a pushforward of $\nu_q$ via $\bar q$. In particular, for every $q \in L^2(M, C(N))$, we can define a Borel measure on $N$ given by $\mu_q := \bar q_*\nu_q$.
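A discrete sketch of the pushforward $\mu_q = \bar q_*\nu_q$ for a piecewise constant $q$, in the spirit of Theorem 4.1 b. The cell areas, atom labels, and masses below are made up for illustration, and we work with $\hat q$ directly (choosing $\hat q_j = \sqrt{a_j/\mathrm{area}(\sigma_j)}$ on the cell carrying atom $u_j$), so each cell contributes exactly its atom's mass.

```python
import math
from collections import defaultdict

# Target measure: atoms u_i with masses a_i (illustrative values).
atoms = {"u1": 0.3, "u2": 1.2}

# A "triangulation": cells with areas; the first len(atoms) cells carry
# one atom each, the remaining cells are sent to the cone point (q-hat = 0).
areas = [0.5, 0.25, 0.125, 0.125]
labels = ["u1", "u2", None, None]

# Piecewise constant q-hat with q-hat_j² = a_j / area(σ_j), so that
# ν_q(σ_j) = ∫_{σ_j} q-hat² dm = a_j.
q_hat = [math.sqrt(atoms[l] / area) if l is not None else 0.0
         for l, area in zip(labels, areas)]

# Pushforward: µ_q({u}) = Σ_{cells mapped to u} q-hat² · area.
pushforward = defaultdict(float)
for l, area, qh in zip(labels, areas, q_hat):
    if l is not None:
        pushforward[l] += qh**2 * area

assert all(abs(pushforward[u] - a) < 1e-12 for u, a in atoms.items())
```

The extra zero-mass cells illustrate why the subdivision may have $m \ge n$ simplices without changing $\mu_q$.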
In other words, for all open $U \subseteq N$,
$$\mu_q(U) = \int_{\bar q^{-1}(U)} \hat q(x)^2\, dm.$$
We now show that the orbit of any $q \in L^2(M, C(N))$ under the action of $\Gamma(M)$ is mapped to the same measure on $N$.

Lemma 4.2. Let $q \in L^2(M, C(N))$. Then for all $\gamma \in \Gamma(M)$, $\mu_q = \mu_{q*\gamma}$.

Proof. Let $U \subseteq N$ be open. Then
$$\mu_{q*\gamma}(U) = \int_{\gamma^{-1}(\bar q^{-1}(U))} \big(\hat q \circ \gamma(x) \cdot \sqrt{|D\gamma|}\big)^2\, dm = \int_{\gamma^{-1}(\bar q^{-1}(U))} \hat q \circ \gamma(x)^2 \cdot |D\gamma|\, dm = \int_{\bar q^{-1}(U)} \hat q(x)^2\, dm = \mu_q(U).$$

Therefore, we can map each orbit of $q \in L^2(M, C(N))$ under the half-density action of $\Gamma(M)$ to a measure on $N$.
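The invariance in Lemma 4.2 is exactly the change-of-variables formula for the half-density action. A one-dimensional numerical illustration (the lemma itself is stated for higher-dimensional $M$, and the particular $\hat q$ and $\gamma$ below are arbitrary smooth choices of ours):

```python
# Check numerically that ∫ (q̂∘γ)² |γ'| dx = ∫ q̂² dx on M = [0, 1],
# i.e. the half-density action q̂ ↦ (q̂∘γ)·sqrt(|Dγ|) preserves ν_q(M).

def q_hat(x):
    return x  # arbitrary illustrative half-density

def gamma(x):
    return x * x  # an orientation-preserving diffeomorphism of (0, 1)

def d_gamma(x):
    return 2 * x  # |Dγ| = γ'(x)

def midpoint_integral(f, n=100_000):
    h = 1.0 / n
    return sum(f((k + 0.5) * h) for k in range(n)) * h

lhs = midpoint_integral(lambda x: q_hat(gamma(x)) ** 2 * d_gamma(x))
rhs = midpoint_integral(lambda x: q_hat(x) ** 2)

assert abs(lhs - rhs) < 1e-6  # both equal 1/3
```

Restricting the integrals to $\gamma^{-1}(\bar q^{-1}(U))$ and $\bar q^{-1}(U)$ gives the statement of the lemma.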
As in the previous section, we first show the result for piecewise constant functions and then extend by continuity. We prove the piecewise constant case in the following lemma.

Lemma 4.3. Let $d \ge 2$ and let $M$ be a smooth, connected, compact, oriented Riemannian $d$-dimensional manifold with or without boundary. Given two piecewise constant functions $q_1, q_2 : M \to C(N)$,
$$\inf_{\gamma\in\Gamma(M)} d_{L^2}(q_1, q_2*\gamma) = \mathrm{WFR}_\delta(\mu_{q_1}, \mu_{q_2}).$$

Proof. Let $\{\sigma_i\}_{i=1}^m$ and $\{\tau_j\}_{j=1}^n$ be triangulations of $M$ such that $q_1$ is constant on each $\sigma_i$ and $q_2$ is constant on each $\tau_j$. Let $\hat q_1 : M \to \mathbb{R}$, $\bar q_1 : M \to N$ be the decomposition of $q_1$ and $\hat q_2 : M \to \mathbb{R}$, $\bar q_2 : M \to N$ be the decomposition of $q_2$.
Define a function $\langle\cdot,\cdot\rangle : N \times N \to \mathbb{R}$ via $\langle u, v\rangle = \cos(d(u, v)/2\delta)$. A brief computation shows
$$\inf_{\gamma\in\Gamma(M)} d_{L^2}^2(q_1, q_2*\gamma) = \sum_{i=1}^m a_i + \sum_{j=1}^n b_j - 2\sup_{\gamma\in\Gamma(M)} \int_M \hat q_1(x)\,\hat q_2(\gamma(x))\sqrt{|D\gamma|}\,\big\langle \bar q_1(x), \bar q_2(\gamma(x))\big\rangle\, dm.$$
Let $\mathcal{A}$ be the set of all discrete semi-couplings from $\mu_{q_1}$ to $\mu_{q_2}$. Recall
$$\mathrm{WFR}_\delta(\mu_{q_1}, \mu_{q_2})^2 = \sum_{i=1}^m a_i + \sum_{j=1}^n b_j - 2\sup_{(A,B)\in\mathcal{A}} \sum_{i=1}^m \sum_{j=1}^n \sqrt{A_{ij}B_{ij}}\,\langle u_i, v_j\rangle.$$
Therefore, the theorem is equivalent to showing
$$\sup_{(A,B)\in\mathcal{A}} \sum_{i=1}^m \sum_{j=1}^n \sqrt{A_{ij}B_{ij}}\,\langle u_i, v_j\rangle = \sup_{\gamma\in\Gamma(M)} \int_M \hat q_1(x)\,\hat q_2(\gamma(x))\sqrt{|D\gamma|}\,\big\langle \bar q_1(x), \bar q_2(\gamma(x))\big\rangle\, dm.$$

Claim 2. Assume that $(A, B)$ is a discrete semi-coupling from $\mu_{q_1}$ to $\mu_{q_2}$.
Then for all $\epsilon > 0$ there is a PL homeomorphism $\gamma : M \to M$ such that
$$\bigg| \int_M \hat q_1(x)\,\hat q_2(\gamma(x))\sqrt{|D\gamma|}\,\big\langle \bar q_1(x), \bar q_2(\gamma(x))\big\rangle\, dm - \sum_{i,j} \sqrt{A_{ij}B_{ij}}\,\langle u_i, v_j\rangle \bigg| < \epsilon.$$

Proof of Claim 2. Let $(A, B)$ be a discrete semi-coupling from $\mu_{q_1}$ to $\mu_{q_2}$ such that $A_{ij}, B_{ij} > 0$ for each $1 \le i \le m$ and $1 \le j \le n$. We first prove the claim for this restricted case and then extend it to all semi-couplings by continuity. First we choose a real number $r \in (0, 1)$. For each $1 \le i \le m$, subdivide $\sigma_i$ into $n$ smaller $d$-simplexes $\sigma_{ij}$ such that $\hat q_1^2 = A_{ij}/m(\sigma_{ij})$ on $\sigma_{ij}$. Similarly, for each $1 \le j \le n$, subdivide $\tau_j$ into $m$ smaller $d$-simplexes $\tau_{ij}$ such that $\hat q_2^2 = B_{ij}/m(\tau_{ij})$ on $\tau_{ij}$. For each $1 \le i \le m$ and $1 \le j \le n$, choose a smaller $d$-simplex $\tilde\sigma_{ij}$, whose closure is contained in the interior of $\sigma_{ij}$, such that $m(\tilde\sigma_{ij}) = r\,m(\sigma_{ij})$.
Similarly, for each $1 \le i \le m$ and $1 \le j \le n$, choose a smaller $d$-simplex $\tilde\tau_{ij}$, whose closure is contained in the interior of $\tau_{ij}$, such that $m(\tilde\tau_{ij}) = r\,m(\tau_{ij})$. We now construct an orientation-preserving PL homeomorphism $\gamma_r : M \to M$. First, for each $1 \le i \le m$ and $1 \le j \le n$, define $\gamma_r : \tilde\sigma_{ij} \to \tilde\tau_{ij}$ to be a PL orientation-preserving homeomorphism with constant area multiplication factor $|D\gamma_r| = m(\tau_{ij})/m(\sigma_{ij})$. Note that
$$M - \Big( \bigcup_{i=1}^m \bigcup_{j=1}^n \tilde\sigma_{ij}^o \Big) \quad\text{is homeomorphic to}\quad M - \Big( \bigcup_{i=1}^m \bigcup_{j=1}^n \tilde\tau_{ij}^o \Big).$$
Hence, we can extend the homeomorphism $\gamma_r$ defined on the $\tilde\sigma_{ij}$'s to a homeomorphism from $M$ to $M$. Note that on each $\tilde\sigma_{ij}$, $\hat q_2^2(\gamma_r(x))|D\gamma_r| = B_{ij}/m(\sigma_{ij})$. Write $M = M_1 \cup M_2$, where $M_1 = \bigcup_{i=1}^m \bigcup_{j=1}^n \tilde\sigma_{ij}$ and $M_2 = M - M_1$.
A simple computation shows
$$\int_{M_1} \hat q_1(x)\,\hat q_2(\gamma_r(x))\sqrt{|D\gamma_r|}\,\big\langle \bar q_1(x), \bar q_2(\gamma_r(x))\big\rangle\, dm = \sum_{i=1}^m \sum_{j=1}^n \int_{\tilde\sigma_{ij}} \hat q_1(x)\,\hat q_2(\gamma_r(x))\sqrt{|D\gamma_r|}\,\big\langle \bar q_1(x), \bar q_2(\gamma_r(x))\big\rangle\, dm$$
$$= \sum_{i=1}^m \sum_{j=1}^n \sqrt{A_{ij}B_{ij}}\,\frac{m(\tilde\sigma_{ij})}{m(\sigma_{ij})}\,\langle u_i, v_j\rangle = \sum_{i=1}^m \sum_{j=1}^n \sqrt{rA_{ij}}\sqrt{rB_{ij}}\,\langle u_i, v_j\rangle.$$
Meanwhile, by the Schwarz inequality,
$$\bigg| \int_{M_2} \hat q_1(x)\,\hat q_2(\gamma_r(x))\sqrt{|D\gamma_r|}\,\big\langle \bar q_1(x), \bar q_2(\gamma_r(x))\big\rangle\, dm \bigg| \le \int_{M_2} \hat q_1(x)\,\hat q_2(\gamma_r(x))\sqrt{|D\gamma_r|}\, dm$$
$$\le \sqrt{\int_{M_2} \hat q_1^2\, dm}\ \sqrt{\int_{M_2} \hat q_2^2(\gamma_r(x))|D\gamma_r|\, dm} = \sqrt{(1-r)\int_M \hat q_1^2\, dm}\ \sqrt{(1-r)\int_M \hat q_2^2\, dm}.$$
So as we let $r \to 1$,
$$\int_{M_1} \hat q_1(x)\,\hat q_2(\gamma_r(x))\sqrt{|D\gamma_r|}\,\big\langle \bar q_1(x), \bar q_2(\gamma_r(x))\big\rangle\, dm \to \sum_{i=1}^m \sum_{j=1}^n \sqrt{A_{ij}B_{ij}}\,\langle u_i, v_j\rangle$$
and
$$\int_{M_2} \hat q_1(x)\,\hat q_2(\gamma_r(x))\sqrt{|D\gamma_r|}\,\big\langle \bar q_1(x), \bar q_2(\gamma_r(x))\big\rangle\, dm \to 0.$$
Hence,
$$\int_M \hat q_1(x)\,\hat q_2(\gamma_r(x))\sqrt{|D\gamma_r|}\,\big\langle \bar q_1(x), \bar q_2(\gamma_r(x))\big\rangle\, dm \to \sum_{i=1}^m \sum_{j=1}^n \sqrt{A_{ij}B_{ij}}\,\langle u_i, v_j\rangle.$$
Thus Claim 2 follows for the case in which $A_{ij} > 0$ and $B_{ij} > 0$ for each $1 \le i \le m$ and $1 \le j \le n$. The general case then follows immediately from the continuity of $\sum_{i=1}^m \sum_{j=1}^n \sqrt{A_{ij}B_{ij}}\,\langle u_i, v_j\rangle$ as a function of $(A, B)$. This completes the proof of Claim 2.
It follows that
\[
\sup_{\gamma \in \Gamma(M)} \int_M \hat q_1(x)\,\hat q_2(\gamma(x))\sqrt{|D\gamma|}\,\langle q_1(x), q_2(\gamma(x))\rangle\,dm \ge \sup_{(A,B)\in\mathcal A} \sum_{i=1}^m \sum_{j=1}^n \sqrt{A_{ij}B_{ij}}\,\langle u_i, v_j\rangle.
\]
We are left to show the opposite inequality.

Claim 3. For every orientation preserving PL homeomorphism $\gamma$ from $M$ to $M$ there exists a discrete semi-coupling $(A, B) \in \mathcal A$ such that
\[
\int_M \hat q_1(x)\,\hat q_2(\gamma(x))\sqrt{|D\gamma|}\,\langle q_1(x), q_2(\gamma(x))\rangle\,dm \le \sum_{i=1}^m \sum_{j=1}^n \sqrt{A_{ij}B_{ij}}\,\langle u_i, v_j\rangle.
\]

Proof of Claim 3. Let $\gamma : M \to M$ be an orientation preserving PL homeomorphism. For $1 \le i \le m$ and $1 \le j \le n$, define $\sigma_{ij} = \gamma^{-1}(\tau_j) \cap \sigma_i$ and define $\tau_{ij} = \gamma(\sigma_{ij})$. Now define two $(m+1)\times(n+1)$ matrices $A$ and $B$ via: for $1 \le i \le m$ and $1 \le j \le n$,
\[
A_{ij} = \int_{\sigma_{ij}} \hat q_1^{\,2}\,dm \quad\text{and}\quad B_{ij} = \int_{\tau_{ij}} \hat q_2^{\,2}\,dm;
\]
for $1 \le i \le m$,
\[
B_{i0} = 0 \quad\text{and}\quad A_{i0} = a_i - \sum_{j=1}^n \int_{\sigma_{ij}} \hat q_1^{\,2}\,dm;
\]
for $1 \le j \le n$,
\[
A_{0j} = 0 \quad\text{and}\quad B_{0j} = b_j - \sum_{i=1}^m \int_{\tau_{ij}} \hat q_2^{\,2}\,dm.
\]
The pair of matrices $(A, B)$ is a discrete semi-coupling from $\mu_{q_1}$ to $\mu_{q_2}$ by construction. We say that $(A, B)$ is the semi-coupling corresponding to the homeomorphism $\gamma$. Denote the area multiplication factor of $\gamma$ on $\sigma_{ij}$ by $m_{ij}$. Then by the Schwarz inequality,
\[
\int_{\sigma_{ij}} \hat q_1(x)\,\hat q_2(\gamma(x))\sqrt{|D\gamma|}\,\langle u_i, v_j\rangle\,dm
\le \sqrt{\int_{\sigma_{ij}} \hat q_1^{\,2}(x)\,dm}\,\sqrt{\int_{\sigma_{ij}} \hat q_2^{\,2}(\gamma(x))\,|D\gamma|\,dm}\,\langle u_i, v_j\rangle
= \sqrt{\int_{\sigma_{ij}} \hat q_1^{\,2}(x)\,dm}\,\sqrt{\int_{\tau_{ij}} \hat q_2^{\,2}(x)\,dm}\,\langle u_i, v_j\rangle
= \sqrt{A_{ij}}\sqrt{B_{ij}}\,\langle u_i, v_j\rangle.
\]
Summing over all $i$ and $j$ we obtain:
\[
\int_M \hat q_1(x)\,\hat q_2(\gamma(x))\sqrt{|D\gamma|}\,\langle q_1(x), q_2(\gamma(x))\rangle\,dm
= \sum_{i,j} \int_{\sigma_{ij}} \hat q_1(x)\,\hat q_2(\gamma(x))\sqrt{|D\gamma|}\,\langle q_1(x), q_2(\gamma(x))\rangle\,dm
\le \sum_{i,j} \sqrt{A_{ij}}\sqrt{B_{ij}}\,\langle u_i, v_j\rangle.
\]
14 M. BAUER, E. HARTMAN, E. KLASSEN
This completes the proof of Claim 3.

It follows that
\[
\sup_{\gamma\in\Gamma(M)} \int_M \hat q_1(x)\,\hat q_2(\gamma(x))\sqrt{|D\gamma|}\,\langle q_1(x), q_2(\gamma(x))\rangle\,dm \le \sup_{(A,B)\in\mathcal A} \sum_{i=1}^m \sum_{j=1}^n \sqrt{A_{ij}B_{ij}}\,\langle u_i, v_j\rangle,
\]
and thus the lemma is proved.

To extend the results to all of $L^2(M, C(N))$ we will need the following continuity result:

Lemma 4.4. The map $(L^2(M, C(N)), d_{L^2}) \to (\mathcal M(N), \mathrm{WFR}_\delta)$ defined via $q \mapsto q_*\nu_q$ is Lipschitz continuous with Lipschitz constant $K = 1$.

Proof. Let $q_1, q_2 \in L^2(M, C(N))$, $\mu_{q_1} = q_{1*}\nu_{q_1}$, and $\mu_{q_2} = q_{2*}\nu_{q_2}$. For any semi-coupling $(\gamma_1, \gamma_2) \in \Gamma(\mu_{q_1}, \mu_{q_2})$,
\[
\mathrm{WFR}_\delta(\mu_{q_1}, \mu_{q_2}) \le \sqrt{J_\delta(\gamma_1, \gamma_2)}.
\]
Thus, to prove the lemma we must construct $(\gamma_1, \gamma_2) \in \Gamma(\mu_{q_1}, \mu_{q_2})$ such that $J_\delta(\gamma_1, \gamma_2) = d_{L^2}(q_1, q_2)^2$. To construct such a semi-coupling we first construct $\rho : M \to N \times N$, defined by the first components of $q_1$ and $q_2$ on the first and second factors respectively, i.e. the map is given by
\[
\rho(x) = (q_1(x), q_2(x)).
\]
Since $q_1$ and $q_2$ are individually measurable, so is $\rho$. We can then define $\gamma_1, \gamma_2 \in \mathcal M(N \times N)$ via $\gamma_1 = \rho_*\nu_{q_1}$ and $\gamma_2 = \rho_*\nu_{q_2}$.

Claim 4. The pair of measures $(\gamma_1, \gamma_2)$ is a semi-coupling from $\mu_{q_1}$ to $\mu_{q_2}$.

Proof of claim. Let $U \subseteq N$ be open. Then
\[
\gamma_1(U \times N) = \nu_{q_1}\big(\rho^{-1}(U \times N)\big) = \nu_{q_1}\big(q_1^{-1}(U) \cap q_2^{-1}(N)\big) = \nu_{q_1}\big(q_1^{-1}(U)\big) = \mu_{q_1}(U)
\]
and
\[
\gamma_2(N \times U) = \nu_{q_2}\big(\rho^{-1}(N \times U)\big) = \nu_{q_2}\big(q_1^{-1}(N) \cap q_2^{-1}(U)\big) = \nu_{q_2}\big(q_2^{-1}(U)\big) = \mu_{q_2}(U).
\]
So $(\gamma_1, \gamma_2)$ is a semi-coupling from $\mu_{q_1}$ to $\mu_{q_2}$.

Recall from the definition of the functional $J$ that we need to construct $\gamma \in \mathcal M(N \times N)$ such that $\gamma_1, \gamma_2 \ll \gamma$. Define $\gamma = \rho_* m$. We know $\mu_{q_1}, \mu_{q_2} \ll m$. Thus, by Lemma 3.2, $\gamma_1, \gamma_2 \ll \gamma$. Furthermore,
\[
\hat q_1^{\,2} = \frac{d\nu_{q_1}}{dm} = \frac{d\gamma_1}{d\gamma} \circ \rho \ \text{ a.e.} \quad\text{and}\quad \hat q_2^{\,2} = \frac{d\nu_{q_2}}{dm} = \frac{d\gamma_2}{d\gamma} \circ \rho \ \text{ a.e.}
\]
So,
\[
J_\delta(\gamma_1, \gamma_2) = \mu_1(N) + \mu_2(N) - 2\int_{N\times N} \sqrt{\frac{d\gamma_1}{d\gamma}\frac{d\gamma_2}{d\gamma}}(u,v)\,\cos\big(d(u,v)/2\delta\big)\,d\gamma(u,v)
\]
\[
= \int_{N\times N} \frac{d\gamma_1}{d\gamma}\,d\gamma + \int_{N\times N} \frac{d\gamma_2}{d\gamma}\,d\gamma - 2\int_{N\times N} \sqrt{\frac{d\gamma_1}{d\gamma}(u,v)\,\frac{d\gamma_2}{d\gamma}(u,v)}\,\cos\big(d(u,v)/2\delta\big)\,d\gamma(u,v)
\]
\[
= \int_{\rho^{-1}(N\times N)} \frac{d\gamma_1}{d\gamma}\circ\rho\,dm + \int_{\rho^{-1}(N\times N)} \frac{d\gamma_2}{d\gamma}\circ\rho\,dm - 2\int_{\rho^{-1}(N\times N)} \sqrt{\frac{d\gamma_1}{d\gamma}\circ\rho(x)\,\frac{d\gamma_2}{d\gamma}\circ\rho(x)}\,\cos\big(d(\rho(x))/2\delta\big)\,dm
\]
\[
= \int_M \hat q_1(x)^2\,dm + \int_M \hat q_2(x)^2\,dm - 2\int_M \hat q_1(x)\,\hat q_2(x)\,\cos\big(d(q_1, q_2)/2\delta\big)\,dm = d_{L^2}(q_1, q_2)^2.
\]
Thus,
\[
\mathrm{WFR}_\delta(\mu_{q_1}, \mu_{q_2}) \le \sqrt{J_\delta(\gamma_1, \gamma_2)} = 1 \cdot d_{L^2}(q_1, q_2).
\]

Finally, we can leverage this continuity result to complete the proof of Theorem 4.1.

Proof of Theorem 4.1. Let $\mu_1, \mu_2 \in \mathcal M(N)$ and $q_1, q_2 \in L^2(M, C(N))$ such that $\mu_1 = q_{1*}\nu_{q_1}$ and $\mu_2 = q_{2*}\nu_{q_2}$. By an argument analogous to the proof of Theorem 3.1 we can conclude
\[
\inf_{\gamma\in\Gamma(M)} d_{L^2}(q_1, q_2 * \gamma) = \mathrm{WFR}_\delta(\mu_1, \mu_2).
\]
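(Aside, not part of the original proof.) The collapse of $J_\delta$ to a squared distance can be replayed in a discrete setting: for two lifts putting mass $r_i^2$ and $s_i^2$ on paired points at distance $d_i$, the diagonal coupling makes $J_\delta = \sum_i \big(r_i^2 + s_i^2 - 2 r_i s_i \cos(d_i/2\delta)\big)$, which is exactly the pointwise squared cone distance summed, mirroring the chain of equalities above. A sketch; all masses and distances are made-up illustrative numbers:

```python
import math

delta = 1.0
# Discrete lifts: mass r_i^2 at u_i for mu_1, mass s_i^2 at v_i for mu_2,
# with d_i = d(u_i, v_i) the distance between the paired points.
r = [0.6, 0.8, 0.5]
s = [0.7, 0.4, 0.9]
d = [0.3, 1.1, 0.7]

mu1_mass = sum(x * x for x in r)
mu2_mass = sum(x * x for x in s)

# J_delta of the diagonal semi-coupling: total masses minus twice the
# cos-weighted geometric mean of the densities along the diagonal.
J = mu1_mass + mu2_mass - 2 * sum(
    ri * si * math.cos(di / (2 * delta)) for ri, si, di in zip(r, s, d)
)

# Pointwise squared cone distance between the lifts, summed over the pairs.
dL2_sq = sum(
    ri * ri + si * si - 2 * ri * si * math.cos(di / (2 * delta))
    for ri, si, di in zip(r, s, d)
)

assert abs(J - dL2_sq) < 1e-12  # J_delta(diagonal coupling) equals the squared distance
```

Since $\mathrm{WFR}_\delta$ is an infimum of $\sqrt{J_\delta}$ over all semi-couplings, any single coupling such as this one gives the upper bound used in the Lipschitz estimate.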
This concludes the proof of part a).

Let $\mu = \sum_{i=1}^n a_i \delta_{u_i}$ be a finitely supported measure on $N$. By [48], $M$ admits a canonical PL structure. Let $m \ge n$ and subdivide the triangulation of $M$ into $m$ simplices given by $\sigma_j$ for $1 \le j \le m$. Let $x \in M$. Then there exists $1 \le j \le m$ such that $x \in \sigma_j$, and we define
\[
q(x) = \begin{cases} \left(u_j, \sqrt{\dfrac{a_j}{\operatorname{area}(\sigma_j)}}\right) & \text{if } 1 \le j \le n, \\[2mm] (u_1, 0) & \text{if } n < j \le m. \end{cases}
\]
Let $U \subseteq N$; then $\mu(U) = \sum_{i \,|\, u_i \in U} a_i$. Meanwhile, $q^{-1}(U) = \bigcup_{i \,|\, u_i \in U} \sigma_i$. Thus,
\[
\int_{q^{-1}(U)} \hat q^{\,2}(x)\,dm = \sum_{i \,|\, u_i \in U} \int_{\sigma_i} \frac{a_i}{\operatorname{area}(\sigma_i)}\,dm = \sum_{i \,|\, u_i \in U} a_i.
\]
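(Aside, not part of the original proof.) The verification above is easy to replay numerically: put the constant density $a_i/\operatorname{area}(\sigma_i)$ on the $i$-th simplex; integrating $\hat q^{\,2}$ over the simplices whose labels lie in $U$ then returns $\sum_{u_i \in U} a_i$ exactly. A small sketch; the areas and masses below are made up for illustration:

```python
a = [0.25, 0.5, 0.25]      # masses a_i of mu = sum_i a_i * delta_{u_i}
areas = [1.0, 2.0, 0.5]    # areas of the simplices sigma_i (illustrative)
qhat_sq = [ai / ar for ai, ar in zip(a, areas)]  # constant value of q_hat^2 on sigma_i

# Take U to contain u_1 and u_3 (indices 0 and 2); then mu(U) = a_1 + a_3.
in_U = [True, False, True]
mu_U = sum(ai for ai, flag in zip(a, in_U) if flag)

# Integral of q_hat^2 over q^{-1}(U), i.e. over the union of the selected simplices.
integral = sum(dens * ar for dens, ar, flag in zip(qhat_sq, areas, in_U) if flag)

assert abs(integral - mu_U) < 1e-12  # recovers mu(U) = 0.5
```

Integrating over all of $M$ instead recovers the total mass $\sum_i a_i$, i.e. $\mu(N)$.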
To complete the proof of part b) we will extend the result to the whole space by continuity. For any $\mu \in \mathcal M(N)$, let $\{\mu_n\} \subseteq \mathcal M(N)$ be a sequence of finitely supported measures that converges to $\mu$ with respect to the Wasserstein–Fisher–Rao distance. In particular, $\{\mu_n\}$ is Cauchy with respect to $\mathrm{WFR}_\delta$. Note that for all $n \in \mathbb N$ there exists a piecewise constant $q_n \in L^2(M, C(N))$ satisfying
\[
\mu_n(U) = \int_{q_n^{-1}(U)} \hat q_n(x)^2\,dm.
\]
Thus, we can construct a sequence of functions given by $q^*_0 = q_0$ and, for all $n \in \mathbb N$, $q^*_{n+1} = q_{n+1} * \gamma_n$, where $\gamma_n$ is a PL homeomorphism from $M$ to $M$ such that
\[
d_{L^2}(q^*_n, q_{n+1} * \gamma_n) \le \mathrm{WFR}_\delta(\mu_n, \mu_{n+1}) + \frac{1}{2^n}.
\]
The existence of such a $\gamma_n$ is guaranteed by Lemma 4.3. Since $\{\mu_n\}$ is Cauchy with respect to $\mathrm{WFR}_\delta$, it follows that $\{q^*_n\}$ is Cauchy with respect to $d_{L^2}$. By completeness of $(L^2(M, C(N)), d_{L^2})$, there exists a limit $q \in L^2(M, C(N))$. Let $U \subseteq N$ be open. Then
\[
\mu(U) = \lim_{n\to\infty} \mu_n(U) = \lim_{n\to\infty} \int_{q_n^{-1}(U)} \hat q_n(x)^2\,dm = \lim_{n\to\infty} \int_M \hat q_n(x)^2\,\chi_{q_n^{-1}(U)}\,dm = \int_M \lim_{n\to\infty} \hat q_n(x)^2\,\chi_{q_n^{-1}(U)}\,dm = \int_M \hat q(x)^2\,\chi_{q^{-1}(U)}\,dm = \int_{q^{-1}(U)} \hat q(x)^2\,dm.
\]
Thus $\mu = q_*\nu_q$. This completes the proof of part b) of the theorem.

REFERENCES
[1] A. Alexandrov, Zur Theorie der gemischten Volumina von konvexen Körpern I, Mat. Sbornik NS, 1 (1938), pp. 227–251.
[2] M. Bauer, M. Bruveris, P. Harms, and J. Møller-Andersen, A numerical framework for Sobolev metrics on the space of curves, SIAM Journal on Imaging Sciences, 10 (2017), pp. 47–73.
[3] M. Bauer, M. Bruveris, and P. W. Michor, Overview of the geometries of shape spaces and diffeomorphism groups, Journal of Mathematical Imaging and Vision, 50 (2014), pp. 60–97.
[4] M. Bauer, N. Charon, P. Harms, and H.-W. Hsieh, A numerical framework for elastic surface matching, comparison, and interpolation, International Journal of Computer Vision, 129 (2021), pp. 2425–2444.
[5] M. Bauer, N. Charon, E. Klassen, S. Kurtek, T. Needham, and T. Pierron, Elastic metrics on spaces of Euclidean curves: Theory and algorithms, arXiv preprint arXiv:2209.09862, (2022).
[6] M. Bauer, E. Hartman, and E. Klassen, The square root normal field distance and unbalanced optimal transport, Applied Mathematics & Optimization, 85 (2022), https://doi.org/10.1007/s00245-022-09867-y.
[7] M. Bruveris, Optimal reparametrizations in the square root velocity framework, SIAM Journal on Mathematical Analysis, 48 (2016), pp. 4335–4354.
[8] N. Charon and L. Younes, Shape spaces: From geometry to biological plausibility, arXiv preprint arXiv:2205.01237, (2022).
[9] L. Chizat, G. Peyré, B. Schmitzer, and F.-X. Vialard, An interpolating distance between optimal transport and Fisher–Rao metrics, Foundations of Computational Mathematics, 18 (2018), pp. 1–44.
[10] L. Chizat, G. Peyré, B. Schmitzer, and F.-X. Vialard, Scaling algorithms for unbalanced optimal transport problems, Mathematics of Computation, 87 (2018), pp. 2563–2609.
[11] L. Chizat, G. Peyré, B. Schmitzer, and F.-X. Vialard, Unbalanced optimal transport: Dynamic and Kantorovich formulations, Journal of Functional Analysis, 274 (2018), pp. 3090–3123.
[12] G. Dogan, J. Bernal, and C. R. Hagwood, A fast algorithm for elastic shape distances between closed planar curves, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 4222–4230.
[13] I. L. Dryden and K. V. Mardia, Statistical shape analysis: with applications in R, vol. 995, John Wiley & Sons, 2016.
[14] E. Hartman, Y. Sukurdeep, E. Klassen, N. Charon, and M. Bauer, Elastic shape analysis of surfaces with second-order Sobolev metrics: a comprehensive numerical framework, to appear in IJCV, (2022).
[15] I. H. Jermyn, S. Kurtek, E. Klassen, and A. Srivastava, Elastic shape matching of parameterized surfaces using square root normal fields, in European Conference on Computer Vision, Springer, 2012, pp. 804–817.
[16] I. H. Jermyn, S. Kurtek, H. Laga, and A. Srivastava, Elastic shape analysis of three-dimensional objects, Synthesis Lectures on Computer Vision, 12 (2017), pp. 1–185.
[17] S. H. Joshi, Q. Xie, S. Kurtek, A. Srivastava, and H. Laga, Surface shape morphometry for hippocampal modeling in Alzheimer's disease, in 2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA), IEEE, 2016, pp. 1–8.
[18] E. Klassen and P. W.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Michor, Closed surfaces with different shapes that are indistinguishable by the SRNF, Archivum Mathematicum, 56 (2020), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' 107–114.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' [19] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Kurtek, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Klassen, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Ding, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Srivastava, A novel Riemannian framework for shape analysis of 3D objects, in 2010 IEEE computer society conference on computer vision and pattern recognition, IEEE, 2010, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' 1625–1632.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' [20] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Kurtek, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Klassen, J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Gore, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Ding, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Srivastava, Elastic geodesic paths in shape space of parameterized surfaces, IEEE transactions on pattern analysis and machine intelligence, 34 (2011), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' 1717– 1730.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' [21] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Kurtek, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Samir, and L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Ouchchane, Statistical shape model for simulation of realistic endometrial tissue.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=', in ICPRAM, 2014, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' 421–428.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' [22] H.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Laga, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Padilla, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Jermyn, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Kurtek, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Bennamoun, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Srivastava, 4d atlas: Statistical analysis of the spatiotemporal variability in longitudinal 3D shape data, arXiv preprint arXiv:2101.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content='09403, (2021).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' [23] H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Laga, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Xie, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Jermyn, and A.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Srivastava, Numerical inversion of SRNF maps for elastic shape analysis of genus-zero surfaces, IEEE transactions on pattern analysis and machine intelligence, 39 (2017), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' 2451–2464.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' [24] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Lahiri, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Robinson, and E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Klassen, Precise matching of PL curves in RN in the square root velocity framework, Geometry, Imaging and Computing, 2 (2015), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' 133–186.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' [25] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Liero, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Mielke, and G.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Savar´e, Optimal transport in competition with reaction: The Hellinger–Kantorovich distance and geodesic curves, SIAM Journal on Mathematical Analysis, 48 (2016), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' 2869–2911.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' [26] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Liero, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Mielke, and G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Savar´e, Optimal entropy-transport problems and a new Hellinger–Kantorovich distance between positive measures, Inventiones mathematicae, 211 (2018), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' 969–1117.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' [27] Lombardi, Damiano and Maitre, Emmanuel, Eulerian models and algorithms for unbalanced optimal transport, ESAIM: M2AN, 49 (2015), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' 1717–1744, https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content='org/10.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content='1051/m2an/2015025, https://doi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content='org/10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content='1051/ m2an/2015025.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' [28] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Marron and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Alonso, Overview of object oriented data analysis, Biometrical Journal, 56 (2014), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' 732–753.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' [29] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Matuk, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Mohammed, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Kurtek, and K.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Bharath, Biomedical applications of geometric functional data analysis, in Handbook of Variational Methods for Nonlinear Geometric Data, Springer, 2020, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' 675–701.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' 17 [30] P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Michor and D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Mumford, An overview of the riemannian metrics on spaces of curves using the hamiltonian approach, Applied and Computational Harmonic Analysis, 23 (2007), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' 74–113.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' [31] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Miller, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Trouv´e, and L.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Younes, On the metrics and euler-lagrange equations of computational anatomy, Annual review of biomedical engineering, 4 (2002), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' 375–405.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' [32] H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Minkowski, Allgemeine lehrs¨atze ¨uber die convexen polyeder, Nachrichten von der Gesellschaft der Wis- senschaften zu G¨ottingen, Mathematisch-Physikalische Klasse, 1897 (1897), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' 198–220, http://eudml.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content='org/ doc/58391.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' [33] W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Mio, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Srivastava, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Joshi, On shape of plane elastic curves, International Journal of Computer Vision, 73 (2007), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' 307–324.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' [34] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Needham and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Kurtek, Simplifying transforms for general elastic metrics on the space of plane curves, SIAM journal on imaging sciences, 13 (2020), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' 445–473.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' [35] X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Pennec, Intrinsic statistics on riemannian manifolds: Basic tools for geometric measurements, Journal of Mathematical Imaging and Vision, 25 (2006), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' 127–154.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' [36] X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Pennec, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Sommer, and T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Fletcher, Riemannian geometric statistics in medical image analysis, Academic Press, 2019.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' [37] B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Piccoli and F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Rossi, Generalized Wasserstein distance and its application to transport equations with source, Archive for Rational Mechanics and Analysis, 211 (2014), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' 335–358.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' [38] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Schneider, Convex surfaces, curvature and surface area measures, in Handbook of convex geometry, Elsevier, 1993, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' 273–299.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' [39] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Srivastava, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Klassen, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Joshi, and I.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Jermyn, Shape analysis of elastic curves in euclidean spaces, IEEE transactions on pattern analysis and machine intelligence, 33 (2010), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' 1415–1428.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' [40] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Srivastava and E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Klassen, Functional and shape data analysis, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' 1, Springer, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' [41] Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Su, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Bauer, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Klassen, and K.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Gallivan, Simplifying transformations for a family of elastic metrics on the space of surfaces, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2020, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' 848–849.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' [42] Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Su, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Bauer, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Preston, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Laga, and E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Klassen, Shape analysis of surfaces using general elastic metrics, Journal of Mathematical Imaging and Vision, 62 (2020), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' 1087–1106.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' [43] K.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Tapp, Differential Geometry of Curves and Surfaces, Undergraduate Texts in Mathematics, Springer Interna- tional Publishing, 2016, https://books.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content='google.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content='com/books?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content='id=kfIqDQAAQBAJ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' [44] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Trouv´e and L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Younes, On a class of diffeomorphic matching problems in one dimension, SIAM Journal on Control and Optimization, 39 (2000), pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' 1112–1135.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' [45] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' Villani, Topics in optimal transportation, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/5tAyT4oBgHgl3EQfcfeN/content/2301.00284v1.pdf'} +page_content=' 58, American Mathematical Soc.' 
diff --git a/6NAyT4oBgHgl3EQf2fnD/content/2301.00753v1.pdf b/6NAyT4oBgHgl3EQf2fnD/content/2301.00753v1.pdf new file mode 100644 index 0000000000000000000000000000000000000000..a434db0bc7c8fb77c25cb6103e37a0336091d26e --- /dev/null +++ b/6NAyT4oBgHgl3EQf2fnD/content/2301.00753v1.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:04c173f808ac221d3c5fcfa0d2b7382e29dad1c56036ee9beb62cf59d0b79e4e +size 242968 diff --git a/6NAyT4oBgHgl3EQf2fnD/vector_store/index.pkl b/6NAyT4oBgHgl3EQf2fnD/vector_store/index.pkl new file mode 100644 index 0000000000000000000000000000000000000000..25f8574a6268cbe2c03ab18193b2e894a3e7605e --- /dev/null +++ b/6NAyT4oBgHgl3EQf2fnD/vector_store/index.pkl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d7d241eee692e756af33a30dce863d871fb5bea5080780930fdd655732eec9b7 +size 132639 diff --git a/6tAzT4oBgHgl3EQfgPwz/vector_store/index.pkl b/6tAzT4oBgHgl3EQfgPwz/vector_store/index.pkl new file mode 100644 index 0000000000000000000000000000000000000000..a203633bfb6b235482db915f1fc5d3b83c98ce22 --- /dev/null +++ b/6tAzT4oBgHgl3EQfgPwz/vector_store/index.pkl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:960a3929969401acd2bf0bca7f87c82f365cb8a0d978de182e154fe66dbdbb4e +size 140219 diff --git a/7NAzT4oBgHgl3EQf-f4D/content/2301.01933v1.pdf b/7NAzT4oBgHgl3EQf-f4D/content/2301.01933v1.pdf new file mode 100644 index 0000000000000000000000000000000000000000..0e7c838ead3f073434345397234659154a9d0db8 --- /dev/null +++ b/7NAzT4oBgHgl3EQf-f4D/content/2301.01933v1.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:bdbb2eb02922992d5b0bc396a4db8834f6629d0b2e08c795ffd0e6b70720fa59 +size 1967134 diff --git a/7NAzT4oBgHgl3EQf-f4D/vector_store/index.pkl b/7NAzT4oBgHgl3EQf-f4D/vector_store/index.pkl new file mode 100644 index
0000000000000000000000000000000000000000..a7084b62c7f99039a237a7d2493c7fcf26a00c39 --- /dev/null +++ b/7NAzT4oBgHgl3EQf-f4D/vector_store/index.pkl @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:82232c7746b925471d2d265c6a235165355b3959d29830402cc4ac16271cbb8c +size 160444 diff --git a/AtFQT4oBgHgl3EQfMjZB/content/tmp_files/2301.13268v1.pdf.txt b/AtFQT4oBgHgl3EQfMjZB/content/tmp_files/2301.13268v1.pdf.txt new file mode 100644 index 0000000000000000000000000000000000000000..b5e2e8a6db381d7a164e4f02a155d24f221e6ae3 --- /dev/null +++ b/AtFQT4oBgHgl3EQfMjZB/content/tmp_files/2301.13268v1.pdf.txt @@ -0,0 +1,1264 @@
+Contextual Dynamic Prompting for Response Generation in Task-oriented Dialog Systems
+Sandesh Swamy, AWS AI Labs, sanswamy@amazon.com
+Narges Tabari, AWS AI Labs, nargesam@amazon.com
+Chacha Chen∗, University of Chicago, chacha@uchicago.edu
+Rashmi Gangadharaiah, AWS AI Labs, rgangad@amazon.com
+Abstract
+Response generation is one of the critical components in task-oriented dialog systems. Existing studies have shown that large pre-trained language models can be adapted to this task. The typical paradigm for adapting such extremely large language models is fine-tuning on the downstream task, which is not only time-consuming but also requires significant resources and access to fine-tuning data. Prompting (Schick and Schütze, 2020) has been an alternative to fine-tuning in many NLP tasks. In our work, we explore the idea of using prompting for response generation in task-oriented dialog systems. Specifically, we propose an approach that performs contextual dynamic prompting, where the prompts are learnt from dialog contexts. We aim to distill useful prompting signals from the dialog context.
In experiments on the MultiWOZ 2.2 dataset (Zang et al., 2020), we show that contextual dynamic prompts improve response generation in terms of combined score (Mehri et al., 2019a) by 3 absolute points, and by a massive 20 points when dialog states are incorporated. Furthermore, human annotation of these conversations found that agents which incorporate context were preferred over agents with vanilla prefix-tuning.
+1 Introduction
+With the advent of large language models (LLMs), a vast majority of NLP tasks, including dialog systems, are addressed by further fine-tuning these LMs on the downstream task. Although these approaches provide substantial improvements over traditional task-specific models (Ham et al., 2020; Hosseini-Asl et al., 2020; He et al., 2022), fine-tuning is a time-consuming process that also involves significant use of energy and resources in the form of compute. These approaches also require tuning and storing parameters for each downstream task.
+∗ Work done during an internship at AWS AI Labs
+A more recent line of work explores "prompting" LLMs to elicit the knowledge necessary for downstream tasks (Shin et al., 2020; Gao et al., 2020; Schick and Schütze, 2020; Petroni et al., 2019; Lee et al., 2021; Zhu et al., 2022). Prompts are composed of tokens or short pieces of text (discrete prompts) inserted at the end of the input examples. These prompts are typically manually defined based on the specific downstream task. The main motivation behind these approaches stems from the idea that the large corpora these language models are trained on contain information pertinent to the task at hand.
+Adapter-tuning was proposed as an alternative approach to fine-tuning. These methods only train task-specific layers that are inserted within pre-trained LMs.
Such lightweight approaches, which add about 4% task-specific parameters, have been shown to obtain performance comparable to their fine-tuning counterparts (Rebuffi et al., 2017; Houlsby et al., 2019; Lin et al., 2020a).
+Drawing inspiration from prompting, prefix-tuning approaches (Li and Liang, 2021) were proposed as another alternative to fine-tuning. These approaches prepend a sequence of task-specific continuous vectors (a prefix) to the input. In contrast to prompting, the prefix consists of free parameters that do not correspond to actual real tokens. Such an approach is appealing since it only optimizes the prefix and does not tune the parameters of the entire LM.
+Most existing approaches use static prompts, i.e., the same set of tokens is used as "prompt tokens" regardless of the input. However, we believe that taking context into consideration is critical, especially in response generation, since the current response has to fit not only the domain but also the information requested in previous turns. For example, in the MultiWOZ dataset, if a customer asks about train bookings, the agent response has to restrict itself to that particular domain.
+arXiv:2301.13268v1 [cs.CL] 30 Jan 2023
+To address this problem, we explore the idea of generating input-dependent, or contextual, prompts. We want the prompts to capture and encode different signals for different turns of a dialog depending on the context; hence, we call our approach contextual dynamic prompting. This way, we hope to distill useful signals into the prompts and provide the model with adequate signals to generate a desired system response. In this work, we explore the potential of using dialog context within a prefix-tuning approach for the task of response generation in task-oriented dialog (TOD) systems.
+The contributions of this paper are summarized as follows:
+• We propose a context-dependent prefix-tuning method for dialog response generation in TOD systems.
+• To illustrate the benefits of such an approach, we conduct experiments on the MultiWOZ dataset. We show that our model significantly outperforms the original task-dependent design of the prefix-tuning method.
+2 Related Work
+2.1 Dialog Generation
+With the prevalence of LLMs, the question of how to effectively adapt such models for dialog generation has been at the forefront of researchers' minds in the dialog community. For task-oriented dialogs, fine-tuning large pre-trained models such as GPT-2 or T5 has recently made great progress on benchmarks (Ham et al., 2020; Hosseini-Asl et al., 2020). Building on these advances, a more recent line of work investigates the effectiveness of multi-task learning (Su et al., 2021; Lin et al., 2020b; Yang et al., 2021) or of pre-training the model on external dialog corpora (Peng et al., 2021; Liu et al., 2021). More recently, prompting has been used to address the sub-task of dialog state tracking (Lee et al., 2021; Zhu et al., 2022). Different from those works, we focus on the task of dialog response generation.
+2.2 Prompt-based Learning
+As an alternative to the fine-tuning paradigm, prompting involves a sequence of tokens appended to the input text, which can induce the model to engage in a certain behavior suited to the task. Since the release of GPT-2 (Radford et al., 2018, 2019; Brown et al., 2020), many prompt-related papers have emerged. Most of the leading approaches in prompting use task-specific prompts, ranging from discrete prompts (Shin et al., 2020; Gao et al., 2020; Schick and Schütze, 2020; Petroni et al., 2019) to continuous "soft prompts" (Li and Liang, 2021; Lester et al., 2021). These methods have a fixed prompt for each task.
However, in dialog systems specifically, the context varies at every turn. In our work, we aim to design prompts which are context-dependent.
+3 Problem Statement
+Response generation is one of the tasks carried out in dialog systems, usually in addition to dialog state tracking (DST). Given a dialog context (the previous turns between the system and the user) C = [u_1, s_1, ..., u_{n-1}, s_{n-1}] and the current user utterance u_n, the goal of response generation is to generate the system response s_n. Note that in the actual task, we generate delexicalized system responses, given all the ground-truth previous turns as input, following previous work (Hosseini-Asl et al., 2020; Wen et al., 2015).
+Techniques mentioned in (Ham et al., 2020; Hosseini-Asl et al., 2020) rely on fully fine-tuning LLMs to carry out this task. In contrast, our approach builds on the prefix-tuning framework, but incorporates the dialog context, C, as an additional signal for the prefix tokens. As a supplement to the context C, we add dialog state information D (up to the current turn) to further help response generation.
+4 Contextual Dynamic Prompting Framework
+4.1 Prefix-tuning for Response Generation
+Our work is built on top of prefix-tuning for generation tasks (Li and Liang, 2021), which adds a fixed set of tunable prefix tokens/prompts to the original input x to obtain a new input, [PREFIX; x]. Following the notation in (Li and Liang, 2021), we use P_θ[i, :] to denote the i-th prefix. P_θ[i, :] is generated by:
+P_θ[:, :] = MLP_θ(P′),    (1)
+where P′ is a fixed smaller matrix given as input to a feedforward neural network (MLP_θ). The training objective of prefix-tuning is the same as that of fine-tuning, i.e., the log-likelihood objective:
+max_θ log p_φ(y | x),
+Figure 1: The figures above illustrate the differences between the vanilla prefix-tuning approach and our approach. In both variants, only the prefix tokens are tuned.
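As a concrete illustration of Eq. (1), the following minimal NumPy sketch produces a prefix from a fixed smaller matrix P′ via a small feedforward network and prepends it to the embedded input. All dimensions (prefix length, hidden size, model dimension) are hypothetical choices for the sketch, not values taken from the paper, and random matrices stand in for learned weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 10 prefix tokens, a 64-dim reparameterization
# matrix P', and 512-dim model embeddings (not from the paper).
PREFIX_LEN, K, D_MODEL = 10, 64, 512

# P': the fixed smaller matrix of free parameters in Eq. (1).
P_prime = rng.normal(size=(PREFIX_LEN, K))

# MLP_theta: the trainable feedforward network that maps P' to the prefix.
W1 = rng.normal(size=(K, 256)) * 0.02
W2 = rng.normal(size=(256, D_MODEL)) * 0.02

def mlp_theta(p):
    # One hidden layer with tanh, as a stand-in for the prompt MLP.
    return np.tanh(p @ W1) @ W2

# Eq. (1): P_theta[:, :] = MLP_theta(P').
prefix = mlp_theta(P_prime)                      # shape (PREFIX_LEN, D_MODEL)

# The model input is [PREFIX; x]: prefix embeddings prepended to the
# embedded input sequence (here x is a dummy 20-token sequence).
x_emb = rng.normal(size=(20, D_MODEL))
model_input = np.concatenate([prefix, x_emb], axis=0)
print(model_input.shape)  # (30, 512)
```

In actual prefix-tuning only W1 and W2 (and P′) would receive gradients; the language model's own parameters stay frozen.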
+where y is the decoder output and x is the input. θ represents the trainable parameters of the prefix-tuning feedforward neural network, and φ denotes all other parameters, which include the frozen parameters of the large language model.
+For our task of response generation, we concatenate the prefix with the dialog context and the current user utterance to form the input [PREFIX; u_1, s_1, ..., u_{n-1}, s_{n-1}, u_n]. The target output is the system response s_n, as seen in Figure 1 (a).
+We adopt T5 (Raffel et al., 2020) as the pre-trained language model. T5 employs an encoder-decoder framework which is prevalent in seq2seq tasks (Sutskever et al., 2014; Cho et al., 2014).
+4.2 Contextual Prefix-tuning
+In vanilla prefix-tuning, the parameters of the prefix are fixed after training so that they can be reused for a particular task. However, a dialog system involves multiple turns of conversation between the system and the user. It is imperative in such systems to dynamically incorporate contextual information in order to carry out a meaningful conversation with the user. We explore how we can distill the dialog context information into the prefix with a prompt encoder.
+Different from the original design, we want to encode into the prefix additional signals that differ for each input instance. In other words, we want to generate a contextual prefix, or contextual dynamic prompts. Formally, we modify equation (1) as follows:
+P_θ[:, :] = MLP_θ(encoder(C)),    (2)
+where C = [u_1, s_1, ..., u_{n-1}, s_{n-1}] represents the dialog context. We first obtain the representation of the dialog context by feeding C into a T5 encoder which is kept frozen, as shown in Figure 1 (b). Subsequently, we use the prompt encoder, i.e., the feedforward neural network, to obtain the prefix. The generated prefix P_θ is then concatenated with only the current user utterance.
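The contextual prefix of Eq. (2) can be sketched in the same NumPy style. Here a mean-pooling function is a deliberately crude stand-in for the frozen T5 encoder, and all dimensions and weight matrices are hypothetical; the point is only the data flow: context → frozen encoder → trainable prompt MLP → prefix, with just the current utterance concatenated afterwards.

```python
import numpy as np

rng = np.random.default_rng(0)
PREFIX_LEN, D_MODEL = 10, 512  # hypothetical sizes, not from the paper

def frozen_encoder(context_emb):
    # Stand-in for the frozen T5 encoder: mean-pool the embedded context
    # turns into a single vector (the real encoder returns contextual
    # token representations and is never updated during training).
    return context_emb.mean(axis=0)

# Trainable prompt encoder MLP_theta of Eq. (2): maps the context
# representation to PREFIX_LEN prefix embeddings.
W1 = rng.normal(size=(D_MODEL, 256)) * 0.02
W2 = rng.normal(size=(256, PREFIX_LEN * D_MODEL)) * 0.02

def mlp_theta(c_repr):
    h = np.tanh(c_repr @ W1)
    return (h @ W2).reshape(PREFIX_LEN, D_MODEL)

# Embedded dialog context C = [u_1, s_1, ..., u_{n-1}, s_{n-1}]
# (dummy 40-token sequence standing in for the previous turns).
context_emb = rng.normal(size=(40, D_MODEL))
prefix = mlp_theta(frozen_encoder(context_emb))   # contextual prefix P_theta

# Unlike Eq. (1), only the current user utterance u_n is concatenated
# with the prefix; the context enters through the prefix itself.
u_n_emb = rng.normal(size=(15, D_MODEL))
decoder_input = np.concatenate([prefix, u_n_emb], axis=0)
print(decoder_input.shape)  # (25, 512)
```

Because the encoder producing the context representation is frozen, only W1 and W2 are trainable, matching the parameter count of vanilla prefix-tuning.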
Instead of concatenating the whole context as the input to the T5 decoder, we first distill the signal into the prefix tokens. Because the T5 encoder that generates the context representation is frozen, we still have the same number of tunable parameters as the original prefix-tuning framework.
+4.3 Input-dependent Prefix-tuning with Dialog State
+In most task-oriented dialog systems, we also have access to the dialog state at every turn, in addition to the dialog context. The dialog state holds information such as the requested and filled slots at every turn. We provide the dialog state D in addition to the context C to obtain contextual dynamic prompts. As a result, we now modify equation (2) as:
+P_θ[:, :] = MLP_θ(encoder(C; D_{n-1})),    (3)
+where we only provide the most recent dialog state D_{n-1}, which is an amalgamation of all previous dialog states D