diff --git "a/https:/huggingface.co/datasets/iamgroot42/mimir/tree/main/test/arxiv_ngram_7_0.2.jsonl" "b/https:/huggingface.co/datasets/iamgroot42/mimir/tree/main/test/arxiv_ngram_7_0.2.jsonl" deleted file mode 100644--- "a/https:/huggingface.co/datasets/iamgroot42/mimir/tree/main/test/arxiv_ngram_7_0.2.jsonl" +++ /dev/null @@ -1,500 +0,0 @@ -"---\nabstract: 'Interventions of central, top-down planning are serious limitations to the possibility of modelling the dynamics of cities. An example is the city of Paris (France), which during the 19th century experienced large modifications supervised by a central authority, the \u2018Haussmann period\u2019. In this article, we report an empirical analysis of more than 200 years (1789-2010) of the evolution of the street network of Paris. We show that the usual network measures display a smooth behavior and that the most important quantitative signatures of central planning are the spatial reorganization of centrality and the modification of the block shape distribution. Such effects can only be obtained by structural modifications at a large-scale level, with the creation of new roads not constrained by the existing geometry. The evolution of a city thus seems to result from the superimposition of continuous, local growth processes and punctual changes operating at large spatial scales.'\nauthor:\n- 'Marc Barthelemy$^{1,2}$ and Patricia Bordin$^{3,4}$ and Henri Berestycki$^{2}$ and Maurizio Gribaudi$^{5}$'\ntitle: 'Self-organization versus top-down planning in the evolution of a city'\n---\n\nIntroduction {#introduction .unnumbered}\n============\n\nA city is a highly complex system where a large number of agents interact, leading to a dynamics seemingly difficult" -"---\nabstract: 'Aedes aegypti is the vector of several deadly diseases, including Zika. Effective and sustainable vector control measures must be deployed to keep A. aegypti numbers under control. The distribution of A.
aegypti is subject to spatial and climatic constraints. Using agent-based modeling, we model the population dynamics of A. aegypti subjected to the spatial and climatic constraints of a neighborhood in Key West. Satellite imagery was used to identify vegetation and houses (CO$_{2}$ zones), both critical to the mosquito lifecycle. The model replicates the seasonal fluctuation of the adult population sampled through field studies and approximates the population at a high of 986 (95% CI: \\[979, 993\\]) females and 1031 (95% CI: \\[1024, 1039\\]) males in the fall and a low of 316 (95% CI: \\[313, 319\\]) females and 333 (95% CI: \\[330, 336\\]) males during the winter. We then simulate two biological vector control strategies: 1) Wolbachia infection and 2) Release of Insects carrying a Dominant Lethal gene (RIDL). Our results support the probability of sustained Wolbachia infection within the population for two years after the year of release. In simulating these control strategies, our approach provides a realistic simulation environment consisting of male and female Aedes aegypti, breeding spots, vegetation and" -"---\nabstract: 'This paper shows that when applying machine learning to digital zoom, it is beneficial to operate on real, RAW sensor data. Existing learning-based super-resolution methods do not use real sensor data, instead operating on processed RGB images. We show that these approaches forfeit detail and accuracy that can be gained by operating on raw data, particularly when zooming in on distant objects. The key barrier to using real sensor data for training is that ground-truth high-resolution imagery is missing. We show how to obtain such ground-truth data via optical zoom and contribute a dataset, SR-RAW, for real-world computational zoom. We use SR-RAW to train a deep network with a novel contextual bilateral loss that is robust to mild misalignment between input and output images. The trained network achieves state-of-the-art performance in 4X and 8X computational zoom.
We also show that synthesizing sensor data by resampling high-resolution RGB images is an oversimplified approximation of real sensor data and noise, resulting in worse image quality.[^1]'\nauthor:\n- |\n Xuaner Zhang\\\n UC Berkeley\\\n- |\n Qifeng Chen\\\n HKUST\\\n- |\n Ren Ng\\\n UC Berkeley\\\n- |\n Vladlen Koltun\\\n Intel Labs\\\nbibliography:\n- 'main.bib'\ntitle: 'Zoom to Learn, Learn to Zoom'\n---" -"---\nabstract: 'Failure prediction of any electrical/optical component is crucial for estimating its operating life. Using high temperature operating life (HTOL) tests, it is possible to model the failure mechanisms for integrated circuits. Conventional HTOL standards are not suitable for operating life prediction of photonic components owing to their functional dependence on the thermo-optic effect. This work presents an IR-assisted thermal vulnerability detection technique suitable for photonic as well as electronic components. By accurately mapping the thermal profile of an integrated circuit under a stress condition, it is possible to precisely locate the heat center for predicting the long-term operational failures within the device under test. For the first time, reliability testing is extended to a fully functional microwave photonic system using conventional IR thermography. By applying image fusion using affine transformation on multimodal acquisition, it was demonstrated that by comparing the IR profile and GDSII layout, it is possible to accurately locate the heat centers along with spatial information on the type of component. Multiple IR profiles of optical as well as electrical components/circuits were acquired and mapped onto the layout files. In order to ascertain the degree of effectiveness of the proposed technique, IR profiles of CMOS RF
Therefore, a surface allowing huge, abrupt and position-variant phase change would enable all possibilities of wavefront engineering. However, one may not have the luxury of efficient abrupt-phase-changing materials in acoustics. This motivates us to establish a counterpart mechanism for acoustics, in order to empower the wide spectrum of novel acoustic applications. Remarkably, the proposed impedance-governed generalized Snell\u2019s law (IGSL) of reflection is distinguished from that in optics. Via the manipulation of inhomogeneous acoustic impedance, extraordinary reflection can be tailored for unprecedented wavefront manipulation while ordinary reflection can be surprisingly switched on or off. Our results may empower acoustic-wave manipulation and engineering. We demonstrate novel acoustic applications by planar surfaces designed with IGSL.'\nauthor:\n- 'Jiajun Zhao$^{1,2}$, Cheng-Wei Qiu$^{1}$, Zhining Chen$^{1}$, and Baowen Li$^{2,3}$'\ntitle: 'Acoustic Wavefront Manipulation: Impedance Inhomogeneity and Extraordinary Reflection'\n---\n\nRefraction, a physical phenomenon in classical optics, was recently revisited from the viewpoints of the complex refractive index of a bulky medium[@xx], abrupt phase change of an interface [@Capasso], and diffraction theory for gratings [@Smith-arXiv]. These works also shed light on the relation between the reflection and incidence, interpreted as the generalized Snell\u2019s
Users can interactively zoom in to selected areas of interest with the corresponding frequency spectrum being calculated on the server in near real-time. The client (a browser) is a JavaScript application built on WebSockets, HTML5, WebGL and SVG.\n\n There are many challenges when providing a web browser-based real-time FITS data cube preview service over high-latency low-bandwidth network connections. The upgraded version tries to overcome the latency issue by predicting user mouse movements with a Kalman filter in order to speculatively deliver the real-time spectrum data at the point where the user is likely to be looking. The new version also allows one to view multiple FITS files simultaneously in
We attribute" -"---\nabstract: |\n A common situation occurring when dealing with multimedia traffic is having large data frames fragmented into smaller IP packets, and having these packets sent independently through the network. For real-time multimedia traffic, dropping even a few packets of a frame may render the entire frame useless. Such traffic is usually modeled as having [*inter-packet dependencies*]{}. We study the problem of scheduling traffic with such dependencies, where each packet has a deadline by which it should arrive at its destination. Such deadlines are common for real-time multimedia applications, and are derived from stringent delay constraints posed by the application. The figure of merit in such environments is maximizing the system\u2019s [*goodput*]{}, namely, the number of frames successfully delivered.\n\n We study online algorithms for the problem of maximizing goodput of delay-bounded traffic with inter-packet dependencies, and use competitive analysis to evaluate their performance. We present competitive algorithms for the problem, as well as matching lower bounds that are tight up to a constant factor. We further present the results of a simulation study which validates our algorithmic approach and shows that insights arising from our analysis are indeed manifested in practice.\nauthor:\n- \nbibliography:\n- 'MS.bib'\ntitle: Bounded Delay" -"---\nabstract: 'An attempt at understanding the downward overshooting in the convective envelopes of the post-main-sequence stars has been made on the basis of three-dimensional large eddy simulations, using artificially modified OPAL opacity and taking into account radiation and ionization in the equation of state. Two types of stars, an intermediate mass star and a massive star, were considered.
To avoid the long thermal relaxation time of the intermediate mass star, we increased the stellar energy flux artificially while trying to maintain a structure close to the one given by the one-dimensional stellar model. A parametric study of the flux factor was performed. For the massive star, no such measure was necessary. Numerical results were analyzed when the system reached the statistical steady state. It was shown that the penetration distance in pressure scale heights is of the order of unity. The scaling relations among the penetration distance, input flux and vertical velocity fluctuations studied by Singh et al. (1998) were checked. The anisotropy of the turbulent convection and the diffusion models of third order moments representing the non-local transports were also investigated. These models are dramatically affected by the velocity fields and no universal constant parameters seem to exist. The
At the individual level, our model finds certain products unpredictable, the excess or deficient growth of which with respect to the model prediction is shown to be correlated with the nature of goods.'\nauthor:\n- Matthieu Barbier\n- 'D.-S. Lee'\ntitle: 'Urn model for products\u2019 shares in international trade'\n---\n\nIntroduction {#sec:intro}\n============\n\nThe finite and uneven distribution of resources and capabilities for production leads a huge volume of products and capital to" -"---\nabstract: 'Dissipation of disordered quantum vortices in an annular two-dimensional Bose-Einstein condensate can form a macroscopic persistent flow of atoms. We propose a protocol to create persistent flow with a high winding number based on a double concentric ring-shaped configuration. We find that a sudden geometric quench of the trap from single ring-shape into double concentric ring-shape will enhance the circulation flow in the outer ring-shaped region of the trap when the initial state of the condensate contains randomly distributed vortices of the same charge. The circulation flows that we create have high stability and good uniformity, free from topological excitations. Our study is promising for new atomtronic designs, and is also helpful for quantitatively understanding quantum tunneling and interacting quantum systems driven far from equilibrium.'\nauthor:\n- Xiyu Chen\n- Tao Yang\n- 'Wen-Li Yang'\n- 'Wu-Ming Liu'\ntitle: Turbulent cascade induced persistent current of cold atomic superfluids\n---\n\nThe study of persistent superfluid flow enables understanding of fundamental characteristics of superfluidity and may lead to applications in high-precision metrology and atomtronics[@PRL.95.143201; @PRL.103.140405; @nature.506.200].
Thanks to technical developments in achieving tailored trapping potentials with arbitrary geometries, quantum transport experiments with quantum gases can be carried out" -"---\nabstract: 'We make use of a catalog of 1600 Pan-STARRS1 groups produced by the probability friends-of-friends algorithm to explore how the galaxy properties, i.e. the specific star formation rate (SSFR) and quiescent fraction, depend on stellar mass and group-centric radius. This work is an extension of [@lin14]. In this work, powered by a stacking technique plus a background subtraction for contamination removal, a finer correction and more precise results are obtained than in our previous work. We find that while the quiescent fraction increases with decreasing group-centric radius, the median SSFRs of star-forming galaxies in groups at fixed stellar mass drop slightly from the field toward the group center. This suggests that the major quenching process in groups is likely a fast mechanism. On the other hand, a reduction in SSFRs by $\\sim$0.2 dex is seen inside clusters as opposed to the field galaxies. If the reduction is attributed to the slow quenching effect, the slow quenching process acts dominantly in clusters. In addition, we also examine the density$-$color relation, where the density is defined by using a sixth-nearest neighbor approach. Comparing the quiescent fractions contributed from the density and radial effect, we find that the density effect dominates
We further find that deep photo-induced terahertz modulation by adding a monolayer graphene on the silicon substrate and by using high laser power can significantly improve the image quality. Compared to Hadamard single-pixel imaging with a re-ordered Hadamard matrix, the Fourier approach has higher image quality. We expect that this work will improve the efficiency of single-pixel terahertz imaging and advance terahertz imaging applications.'\nauthor:\n- 'Rongbin She$^{1,2,\\dagger}$, Wenquan Liu$^{1,\\dagger}$, Yuanfu Lu$^{1,*}$, Zhisheng Zhou$^1$, and Guangyuan Li$^{1,*}$'\ntitle: 'Fourier single-pixel imaging in the terahertz regime'\n---\n\n[2.0]{}\n\n$^1$Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, Guangdong Province, China\n\n$^2$Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen 518055, Guangdong Province, China\n\n$^\\dagger$ These authors contributed equally.\n\nyf.lu@siat.ac.cn; gy.li@siat.ac.cn\n\nTerahertz wave refers to electromagnetic radiation with a frequency of 0.1 THz \u2013 10 THz (corresponding to wavelength of" -"---\nabstract: 'We calculate the gauge invariant cumulants (and moments) associated with the Zak phase in the Rice-Mele model. We reconstruct the underlying probability distribution by maximizing the information entropy and applying the moments as constraints. When the Wannier functions are localized within one unit cell, the probability distribution so obtained corresponds to that of the Wannier function. We show that in the fully dimerized limit the magnitudes of the moments are all equal. In this limit, if the on-site interaction is decreased towards zero, the distribution shifts towards the midpoint of the unit cell, but the overall shape of the distribution remains the same.
Away from this limit, if alternate hoppings are finite, and the on-site interaction is decreased, the distribution also shifts towards the midpoint of the unit cell, but it does this by changing shape, by becoming asymmetric around the maximum, as well as by shifting. We also follow the probability distribution of the polarization in cycles around the topologically non-trivial point of the model. The distribution moves across to the next unit cell, its shape distorting considerably in the process. If the radius of the cycle is large, the shift of the distribution is accompanied by" -"---\nabstract: 'The activity of a neural network is defined by patterns of spiking and silence from the individual neurons. Because spikes are (relatively) sparse, patterns of activity with increasing numbers of spikes are less probable, but with more spikes the number of possible patterns increases. This tradeoff between probability and numerosity is mathematically equivalent to the relationship between entropy and energy in statistical physics. We construct this relationship for populations of up to $N=160$ neurons in a small patch of the vertebrate retina, using a combination of direct and model\u2013based analyses of experiments on the response of this network to naturalistic movies. We see signs of a thermodynamic limit, where the entropy per neuron approaches a smooth function of the energy per neuron as $N$ increases. The form of this function corresponds to the distribution of activity being poised near an unusual kind of critical point. Networks with more or less correlation among neurons would not reach this critical state. We suggest further tests of criticality, and give a brief discussion of its functional significance.'\nauthor:\n- 'Ga\u0161per Tka\u010dik,$^a$ Thierry Mora,$^b$ Olivier Marre,$^c$ Dario Amodei,$^{d,e}$ Michael J. 
Berry II,$^{e,f}$ and William Bialek$^{d,g,h}$'\ntitle: 'Thermodynamics for a network of neurons:" -"---\nabstract: 'We propose an algorithm for general nonlinear eigenvalue problems to compute eigenvalues within a chosen contour and to compute the corresponding eigenvectors. Eigenvalue information is explored by contour integration incorporating different weight functions. The gathered information is processed by solving a nonlinear system of equations of small dimension. No auxiliary functions have to be introduced for linearization. The numerical implementation of the approach is straightforward and the algorithm allows for parallelization. We apply the method to two examples from physics. Resonant states of a one-dimensional quantum mechanical system and resonant states of a three-dimensional photonic nanoantenna are computed.'\nauthor:\n- 'Felix Binkowski[^1]'\n- 'Lin Zschiedrich[^2]'\n- Sven Burger\ntitle: |\n A Riesz-projection-based method for\\\n nonlinear eigenvalue problems\n---\n\nnonlinear eigenvalue problems, contour integration, Riesz projection, photonic nanoantenna, resonant states\n\nIntroduction {#sec:Intro}\n============\n\nThe numerical treatment of nonlinear eigenvalue problems (NLEVPs) is a highly relevant research field in applied mathematics [@Guettel_NLEVP_2017; @Mackey_2015; @Mehrmann_GAMM_2004]. Fundamental solution techniques from numerical linear algebra which are used for solving linear eigenproblems are not available in the case of NLEVPs. This leads to challenges for the development of suitable algorithms. We address the most general problem class of NLEVPs $$\\begin{aligned}\n T(\\lambda)v = 0, \\label{eq:NLEVP}\\end{aligned}$$" -"---\nabstract: 'Finite-grid (or aliasing) instabilities are pervasive in particle-in-cell (PIC) plasma simulation algorithms, and force the modeler to resolve the smallest (Debye) length scale in the problem regardless of dynamical relevance. 
These instabilities originate in the aliasing of interpolation errors between mesh quantities and particles (which live in the space-time continuum). Recently, strictly energy-conserving PIC (EC-PIC) algorithms have been developed that promise enhanced robustness against aliasing instabilities. In this study, we confirm by analysis that EC-PIC is stable against aliasing instabilities for stationary plasmas. For drifting plasmas, we demonstrate by analysis and numerical experiments that, while EC-PIC algorithms are not free from these instabilities in principle, they feature a benign stability threshold for finite-temperature plasmas that makes them usable in practice for a large class of problems (featuring ambipolarity and realistic ion-electron mass ratios) without the need to resolve Debye lengths spatially. We also demonstrate that this threshold is absent for the popular momentum-conserving PIC algorithms, which are therefore unstable for both drifting and stationary plasmas.'\naddress:\n- 'Coronado Consulting, Lamy, NM 87540'\n- 'Los Alamos National Laboratory, Los Alamos, NM 87545'\n- \nauthor:\n- 'D. C. Barnes'\n- 'L. Chac\u00f3n'\nbibliography:\n- '../kinetic.bib'\n- '../numerics.bib'\ntitle: 'Finite spatial-grid
Using these new calculations, the apparent discrepancy in the velocities between the Fe\u00a0M-shell UTA and other highly ionized absorption lines in the outflow of NGC\u00a03783 disappears. The oscillator strengths in our new calculation agree well with the previous theoretical data, while the new autoionization rates are significantly larger, especially for lower charge states. We attribute this discrepancy to the missing autoionization channels in the previous calculation. The increased autoionization rates may slightly affect the column density analysis of the Fe\u00a0M-shell UTA for sources with high column density and very low turbulent broadening. The complete set of atomic data is provided as an electronic table.'\nauthor:\n- 'Ming F. Gu, Tomer Holczer, Ehud Behar, and Steven M. Kahn'" -"---\nabstract: 'We explore a new approach for training neural networks where all loss functions are replaced by hard constraints. The same approach is very successful in phase retrieval, where signals are reconstructed from magnitude constraints and general characteristics (sparsity, support, etc.). Instead of taking gradient steps, the optimizer in the constraint based approach, called relaxed-reflect-reflect (RRR), derives its steps from projections to local constraints. In neural networks one such projection makes the minimal modification to the inputs $x$, the associated weights $w$, and the pre-activation value $y$ at each neuron, to satisfy the equation $x\\cdot w=y$. These projections, along with a host of other local projections (constraining pre- and post-activations, etc.) can be partitioned into two sets such that all the projections in each set can be applied concurrently \u2014 across the network *and* across all data in the training batch. This partitioning into two sets is analogous to the situation in phase retrieval and the setting for which the general purpose RRR optimizer was designed. 
Owing to the novelty of the method, this paper also serves as a self-contained tutorial. Starting with a single-layer network that performs non-negative matrix factorization, and concluding with a generative model comprising an" -"---\nabstract: 'It has been experimentally demonstrated only recently that a simultaneous excitation of interfering electric and magnetic resonances can lead to uni-directional scattering of visible light in zero-dimensional dielectric nanoparticles. We show, both theoretically and experimentally, that strongly anisotropic scattering also occurs in individual dielectric nanowires. The effect occurs even under either pure transverse electric or pure transverse magnetic polarized normal illumination. This allows one, for instance, to toggle the scattering direction by a simple rotation of the incident polarization. Finally, we demonstrate that directional scattering is not limited to cylindrical cross-sections, but can be further tailored by varying the shape of the nanowires.'\nauthor:\n- \n- \n- \n- \n- \n- Aur\u00e9lie Lecestre\n- Guilhem Larrieu\n- Frank Fournel\n- Vincent Larrey\n- Thierry Baron\n- \ntitle: '[Strongly directional scattering from dielectric nanowires]{}'\n---\n\nThe search for ways to control light at subwavelength dimensions has increasingly attracted the interest of researchers for about the last two decades.
Due to their strong polarizability and tunable plasmon resonances, metallic nanostructures are particularly suitable for the nanoscale manipulation of light \u2013 especially at visible frequencies.[@muhlschlegel_resonant_2005] However, such plasmonic structures suffer from certain drawbacks like strong dissipation associated with the large imaginary part of" -"---\nabstract: 'Driven by a large number of potential applications in areas like bioinformatics, information retrieval and social network analysis, the problem setting of inferring relations between pairs of data objects has recently been investigated quite intensively in the machine learning community. To this end, current approaches typically consider datasets containing crisp relations, so that standard classification methods can be adopted. However, relations between objects like similarities and preferences are often expressed in a graded manner in real-world applications. A general kernel-based framework for learning relations from data is introduced here. It extends existing approaches because both crisp and graded relations are considered, and it unifies existing approaches because different types of graded relations can be modeled, including symmetric and reciprocal relations. This framework establishes important links between recent developments in fuzzy set theory and machine learning.
Its usefulness is demonstrated through various experiments on synthetic and real-world data.'\nauthor:\n- |\n Willem Waegeman, Tapio Pahikkala, Antti Airola,\\\n Tapio Salakoski, Michiel Stock, Bernard De Baets\nbibliography:\n- 'referenties.bib'\n- 'myBibliography.bib'\ntitle: |\n A kernel-based framework for learning\\\n graded relations from data\n---\n\nIntroduction\n============\n\nRelational data occurs in many predictive modeling tasks, such as forecasting the winner in two-player computer" -"---\nabstract: 'We introduce an algorithm for word-level text spotting that is able to accurately and reliably determine the bounding regions of individual words of text \u201cin the wild\". Our system is formed by the cascade of two convolutional neural networks. The first network is fully convolutional and is in charge of detecting areas containing text. This results in a very reliable but possibly inaccurate segmentation of the input image. The second network (inspired by the popular YOLO architecture) analyzes each segment produced in the first stage, and predicts oriented rectangular regions containing individual words. No post-processing (e.g. text line grouping) is necessary. 
With an execution time of 450 ms for a 1000$\\bf \\times$560 image on a Titan X GPU, our system achieves the highest score to date among published algorithms on the ICDAR 2015 Incidental Scene Text dataset benchmark [@icdarincidental].'\nauthor:\n- \nbibliography:\n- 'Siyang.bib'\ntitle: 'Cascaded Segmentation-Detection Networks for Word-Level Text Spotting'\n---\n\nIntroduction\n============\n\nFast automatic detection and reading of text (such as a license plate number, a posted sign, or a street name) in images taken by a fixed or a moving camera is very desirable for applications such as surveillance, forensics, autonomous vehicles, augmented reality (e.g., visual" -"The dynamics of systems undergoing a second order symmetry breaking phase transition has been studied recently in several instances [@AB; @LZ; @YZ; @ABZ], particularly in association with the theory of topological defect formation [@Kibble; @Zurek] in cosmology [@Book] and in experiments in $^3He$ [@He3] and $^4He$ [@He4]. With experimental data in the pipeline for the collisions of heavy ions at RHIC and later at CERN, the effort to identify phase transitions in nuclear matter is also a very active field of research. The Langevin dynamics of effective scalar theories (such as the $\sigma$ model) affords one of the few quantitative windows into such transitions [@BRS].\n\nTheoretically, this situation has been modeled by the out-of-equilibrium dynamics of classical scalar field theories (with a given number of flavors) in contact with an external environment at a temperature T, which may be a function of time. The environment is only known statistically and its behavior can then be described in terms of stochastic fields obeying a fluctuation-dissipation relation. The effective evolution of the fields is therefore described by a Langevin field equation.\n\nThis stochastic classical description is of course only an approximation to the full quantum evolution.
Recently, attempts to treat" -"---\nabstract: |\n Based on new CCD photometry and spectroscopy we confirm the presence of two cataclysmic variables (CVs) in the very old open cluster NGC\u00a06791. One of these variables, known as B8, was observed by Rucinski, Kaluzny & Hilditch (1996) to undergo a large magnitude outburst in 1995. The spectrum of this star outside the outburst, obtained by us with the MMT, clearly shows the emission lines characteristic of dwarf novae. We observed the second star, known as B7, to undergo a large ($\\sim3\\;mag$) drop in its brightness over $\\sim\n 10\\;days$. The spectrum of B7 obtained in the high state resembles spectra of nova-like CVs. This star is a likely member of the UX\u00a0UMa subtype of CVs. Variables B7 and B8 represent only the second and the third cataclysmic variables known in open clusters. Variable B7 has observational characteristics which would make it difficult to identify as a CV with some of the methods currently used in surveys for CVs in globular clusters.\nauthor:\n- 'J. Kaluzny'\n- 'K. Z. Stanek, P. M. Garnavich, P. Challis'\ntitle: 'Two Confirmed Cataclysmic Variables in the Old Stellar Cluster NGC 6791[^1]'\n---\n\nINTRODUCTION\n============\n\nNGC\u00a06791 is currently considered to be the one" -"---\nabstract: 'We present the first-ever microscopic dynamical simulation of the temperature-controlled Mott metal-insulator transition in the Hubbard model. By combining the efficient Gutzwiller method with molecular dynamics simulations, we demonstrate that the transformation from the correlated metal to the Mott insulator proceeds via the nucleation and growth of Mott droplets. Moreover, the time evolution of the Mott volume fraction is found to follow universal transformation kinetics.
We show that after an initial incubation period, the early stage of the phase transformation is characterized by a constant nucleation rate and an interface-controlled cluster growth mechanism, consistent with the classical theory developed by Kolmogorov, Johnson, Mehl, and Avrami. This is followed by a novel intermediate stage of accelerated phase transformation that is significantly different from the prediction of the classical theory. Moreover, the cluster-growth dynamics in this stage exhibits an unexpected avalanche behavior, similar to the Barkhausen noise in magnetization dynamics, even in the absence of quenched disorder. Detailed structural characterization further uncovers a universal correlation function for the transient mixed-phase states of the Mott transition. The implications of our findings for the recent nano-imaging experiments on the metal-insulator transition of correlated materials are also discussed.'\nauthor:\n- 'Gia-Wei Chern'" -"---\nabstract: 'We report the results of a sensitive survey of young planetary nebulae in the CO $J=2-1$ line that significantly increases the available data on warm, dense, molecular gas in the early phases of planetary nebula formation. The observations were made using the IRAM 30\u00a0m telescope with the 3$\\times$3 pixel Heterodyne Receiver Array (HERA). The array provides an effective means of discriminating the CO emission of planetary nebulae in the galactic plane from contaminating emission of interstellar clouds along the line of sight. 110 planetary nebulae were observed in the survey and 40 were detected. The results increase the number of young planetary nebulae with known CO emission by approximately a factor of two. The CO spectra yield radial velocities for the detected nebulae, about half of which have uncertain or no velocity measurements at optical wavelengths. The CO profiles range from parabolic to double-peaked, tracing the evolution of structure in the molecular gas.
The line widths are significantly larger than on the Asymptotic Giant Branch, and many of the lines show extended wings, which probably result from the effects on the envelopes of high velocity jets.'\nauthor:\n- 'P. J. Huggins, R. Bachiller, P. Planesas, T. Forveille," -"---\nauthor:\n- 'D. Trevese , V. Zitelli , F. Vagnetti , K. Boutsia , G.M. Stirpe'\ntitle: 'Optical Spectroscopy of Active Galactic Nuclei in SA57[^1]'\n---\n\n[The cosmological evolution of X-ray-selected and optically selected Active Galactic Nuclei (AGNs) show different behaviours interpreted in terms of two different populations. The difference is evident mainly for low luminosity AGNs (LLAGNs), many of which are lost by optical photometric surveys.]{} [ We are conducting a spectroscopical study of a composite sample of AGN candidates selected in SA57 following different searching techniques, to identify low luminosity AGNs and break down the sample into different classes of objects. ]{} [ AGN candidates were obtained through optical variability and/or X-ray emission. Of special interest are the extended variable objects, which are expected to be galaxies hosting LLAGNs. ]{} [Among the 26 classified objects a fair number (9) show typical AGN spectra. 10 objects show Narrow Emission Line Galaxy spectra, and in most of them (8/10) optical variability suggests the presence of LLAGNs. ]{}\n\nIntroduction\n============\n\nIn recent years a growing amount of evidence suggested the existence of a link between the evolution in cosmic time of galaxy and quasar (QSO) populations. Theoretical work discusses the" -"---\nabstract: 'During the last three decades, evidence has mounted that star and planet formation is not an isolated process, but is influenced by current and previous generations of stars. 
Although cool stars form in a range of environments, from isolated globules to rich embedded clusters, the influences of other stars on cool star and planet formation may be most significant in embedded clusters, where hundreds to thousands of cool stars form in close proximity to OB stars. At the Cool Stars 14 meeting, a splinter session was convened to discuss the role of environment in the formation of cool stars and planetary systems, with an emphasis on the \u201chot\u201d environment found in rich clusters. We review here the basic results, ideas and questions presented at the session. We have organized this contribution into five basic questions: what is the typical environment of cool star formation, what role do hot stars play in cool star formation, what role does environment play in planet formation, what is the role of hot star winds and supernovae, and what was the formation environment of the Sun? The intention is to review progress made in addressing each question, and to underscore areas of agreement" -"---\nabstract: 'In active matter systems, self-propelled particles can self-organize to undergo collective motion, leading to persistent dynamical behavior out of equilibrium. In cells, cytoskeletal filaments and motor proteins exhibit activity and self-organization into complex structures important for cell mechanics, motility, and division. Collective dynamics of cytoskeletal systems can be reconstituted using filament gliding experiments, in which cytoskeletal filaments are propelled by surface-bound motor proteins. These experiments have observed diverse behavior, including flocks, polar streams, swirling vortices, and single filament spirals. Recent experiments with microtubules and kinesin motor proteins found that the effective repulsive interaction between filaments can be tuned by crowding agents in solution, altering the collective behavior.
Adding a crowder reduced filament crossing, promoted alignment, and led to a transition from active, isotropically oriented filaments to locally aligned polar streams. These results suggest that tunable soft repulsion can control active phase behavior, but how altering steric interactions and filament stiffness alter collective motion is not fully understood. Here we use simulations of driven filaments with tunable soft repulsion and rigidity in order to better understand how the interplay between filament flexibility and steric effects can lead to different active steady states. We identify swirling flocks, polar streams, buckling" -"---\nabstract: 'A numerically exact solution to the many emitter \u2013 cavity problem as an open many body system is presented. The solution gives access to the full, nonperturbative density matrix and thus the full quantum statistics and quantum correlations. The numerical effort scales with the third power in the number of emitters. Notably the solution requires none of the common approximations like good/bad cavity limit. As a first application the recently discussed concept of coherent surface plasmon amplification \u2013 spaser \u2013 is addressed: A spaser consists of a plasmonic nanostructure that is driven by a set of quantum emitters. In the context of laser theory it is a laser in the (very) bad cavity limit with an extremely high light matter interaction strength. The method allows us to answer the question of spasing with a fully quantized theory.'\nauthor:\n- Marten Richter\n- Michael Gegg\n- 'T. Sverre Theuerholz'\n- Andreas Knorr\ntitle: 'Numerically exact solution of the many emitter \u2013 cavity laser problem: application to the fully quantized spaser emission'\n---\n\nIntroduction\n============\n\nFor decades open many body quantum systems consisting of a set of many ($N$) externally driven two level quantum emitters (QEs), e.g. 
dye molecules or" -"---\nabstract: 'We design and build the first neural temporal dependency parser. It utilizes a neural ranking model with minimal feature engineering, and parses time expressions and events in a text into a temporal dependency tree structure. We evaluate our parser on two domains: news reports and narrative stories. In a parsing-only evaluation setup where gold time expressions and events are provided, our parser reaches 0.81 and 0.70 f-score on unlabeled and labeled parsing respectively, a result that is very competitive against alternative approaches. In an end-to-end evaluation setup where time expressions and events are automatically recognized, our parser beats two strong baselines on both data domains. Our experimental results and discussions shed light on the nature of temporal dependency structures in different domains and provide insights that we believe will be valuable to future research in this area.'\nauthor:\n- |\n Yuchen Zhang\\\n Brandeis University\\\n [yuchenz@brandeis.edu]{}\\\n Nianwen Xue\\\n Brandeis University\\\n [xuen@brandeis.edu]{}\\\nbibliography:\n- 'emnlp2018.bib'\n- 'nsf-2015.bib'\ntitle: Neural Ranking Models for Temporal Dependency Structure Parsing\n---\n\nIntroduction\n============\n\nTemporal relation classification is important for a range of NLP applications that include but are not limited to story timeline construction, question answering, summarization, etc. Most work on temporal information extraction" -"---\nabstract: 'Processing quantum information on continuous variables requires a highly nonlinear element in order to attain universality. Noise reduction in processing such quantum information involves the use of a nonlinear phase state as a non-Gaussian ancilla. A necessary condition for a nonlinear phase state to implement a nonlinear phase gate is that noise in a selected nonlinear quadrature should decrease below the level of classical states. 
A reduction of the variance in this nonlinear quadrature below the ground state of the ancilla, a type of nonlinear squeezing, is the resource embedded in these non-Gaussian states and a figure of merit for nonlinear quantum processes. Quantum optomechanics with levitating nanoparticles trapped in nonlinear optical potentials is a promising candidate to achieve such resources in a flexible way. We provide a scheme for reconstructing this figure of merit, which we call nonlinear squeezing, in standard linear quantum optomechanics, analysing the effects of mechanical decoherence processes on the reconstruction and show that all mechanical states which exhibit reduced noise in this nonlinear quadrature are nonclassical.'\nauthor:\n- 'Darren W. Moore'\n- 'Andrey A. Rakhubovsky'\n- Radim Filip\nbibliography:\n- 'references.bib'\ntitle: '[Estimation of squeezing in a nonlinear quadrature of a mechanical oscillator]{}'" -"---\nabstract: 'Quantified () is a well-studied temporal logic that extends with quantification over atomic propositions. It has recently come to the fore as a powerful intermediary framework to study logics for strategic reasoning. We extend it to include imperfect information by parameterising quantifiers with an observation that defines how well they observe the model, thus constraining their behaviour. We consider two different semantics, one related to the notion of *no memory*, the other to *perfect recall*. We study the expressiveness of our logic, and show that it coincides with for the first semantics and with with equal level for the second one. We establish that the model-checking problem is -complete for the first semantics. 
While it is undecidable for the second one, we identify a syntactic fragment, defined by a notion of hierarchical formula, which we prove to be decidable thanks to an automata-theoretic approach.'\nauthor:\n- Rapha\u00ebl Berthon\n- Bastien Maubert\n- Aniello Murano\ntitle: 'Quantified CTL with imperfect information[^1]'\n---\n\nat (current page.south) [![image](logo-ce-horizontal-en-quadri-lr.png){height=\"2.5em\"}]{};\n\n[10]{}\n\nE.\u00a0A.Emerson and C.-L. Lei. Modalities for model checking: Branching time strikes back. In [*PL\u201985*]{}, pages 84\u201396. [ACM]{} Press, 1985.\n\nT.\u00a0[\u00c5]{}gotnes, V.\u00a0Goranko, and W.\u00a0Jamroga. In [*TARK\u201907*]{}, pages 15\u201324, 2007." -"---\nabstract: 'Crowdsourced wireless community network enables individual users to share their private Wi-Fi access points (APs) with each other, hence can achieve a large Wi-Fi coverage with a small deployment cost via crowdsourcing. This paper presents a novel *contract-based* incentive framework to incentivize such a Wi-Fi network crowdsourcing under incomplete information (where each user has certain *private* information such as mobility pattern and Wi-Fi access quality). In the proposed framework, the network operator designs and offers a set of contract items to users, each consisting of a Wi-Fi access price (that a user can charge others for accessing his AP) and a subscription fee (that a user needs to pay the operator for joining the community). Different from the existing contracts in the literature, in our contract model each user\u2019s best choice depends not only on his private information but also on other users\u2019 choices. This greatly complicates the contract design, as the operator needs to analyze the equilibrium choices of all users, rather than the best choice of each single user. 
We first derive the feasible contract that guarantees the users\u2019 truthful information disclosure based on the equilibrium analysis of user choice, and then derive the optimal (and" -"---\nabstract: 'We present studies of quantum algorithms exploiting machine learning to classify events of interest from background events, one of the most representative machine learning applications in high-energy physics. We focus on variational quantum approach to learn the properties of input data and evaluate the performance of the event classification using both simulators and quantum computing devices. Comparison of the performance with standard multi-variate classification techniques based on a boosted-decision tree and a deep neural network using classical computers shows that the quantum algorithm has comparable performance with the standard techniques at the considered ranges of the number of input variables and the size of training samples. The variational quantum algorithm is tested with quantum computers, demonstrating that the discrimination of interesting events from background is feasible. Characteristic behaviors observed during a learning process using quantum circuits with extended gate structures are discussed, as well as the implications of the current performance to the application in high-energy physics experiments.'\nauthor:\n- Koji Terashi\n- Michiru Kaneda\n- Tomoe Kishimoto\n- Masahiko Saito\n- Ryu Sawada\n- Junichi Tanaka\nbibliography:\n- 'ms.bib'\ndate: 'Received: date / Accepted: date'\ntitle: 'Event Classification with Quantum Machine Learning in High-Energy Physics'\n---\n\nIntroduction" -"---\nabstract: 'Textual distractors in current multi-choice VQA datasets are not challenging enough for state-of-the-art neural models. To better assess whether well-trained VQA models are vulnerable to potential attack such as more challenging distractors, we introduce a novel task called *textual Distractors Generation for VQA* (DG-VQA). 
The goal of DG-VQA is to generate the most confusing distractors in multi-choice VQA tasks represented as a tuple of image, question, and the correct answer. Consequently, such distractors expose the vulnerability of neural models. We show that distractor generation can be formulated as a Markov Decision Process, and present a reinforcement learning solution to produce distractors in an unsupervised manner. Our solution addresses the lack of large annotated corpora that limits classical distractor generation methods. Our proposed model receives reward signals from well-trained multi-choice VQA models and updates its parameters via policy gradient. The empirical results show that the generated textual distractors can successfully confuse several cutting-edge models with an average $20\\%$ accuracy drop from around $64\\%$. Furthermore, we conduct extra adversarial training to improve the robustness of VQA models by incorporating the generated distractors. The experiment validates the effectiveness of adversarial training by showing a performance improvement of $27\\%$ for the multi-choice VQA task. [^1]'" -"---\nabstract: 'This article serves a few purposes. First of all, it reviews [@DI] and previews and samples some results from four papers [@MAXCAT], [@DII], [@DIII] and [@Yang] I have been preparing. It is also a written-up and expanded version of a talk I gave at a symplectic conference in Chengdu on June 28, 2015, and it intends to provide bridges and compatibility between various pairs of virtual techniques and to demonstrate some unity among various technical viewpoints in the constructions of structures on moduli spaces in symplectic geometry. More precisely, the abstract perturbative structures (or interchangeably, virtual structures) present in each virtual theory discussed in this paper (and sometimes even the way they essentially originate in applications) are identified pairwise in a way that intertwines the (non-)perturbation mechanisms.
To be more helpful to readers and not get them buried under technicalities and notations, we give the ideas and appropriate level of details so that the results will be clear to the relevant experts; meanwhile the ideas of each virtual machinery and how they are related should come through to more application-minded readers so that they might get encouraged to read papers on a given virtual machinery and possibly apply" -"---\nabstract: 'The mathematical theory of super-resolution developed recently by Cand\u00e8s and Fernandes-Granda states that a continuous, sparse frequency spectrum can be recovered with infinite precision via a (convex) atomic norm technique given a set of uniform time-space samples. This theory was then extended to the cases of partial/compressive samples and/or multiple measurement vectors via atomic norm minimization (ANM), known as off-grid/continuous compressed sensing (CCS). However, a major problem of existing atomic norm methods is that the frequencies can be recovered only if they are sufficiently separated, prohibiting commonly known high resolution. In this paper, a novel (nonconvex) sparse metric is proposed that promotes sparsity to a greater extent than the atomic norm. Using this metric an optimization problem is formulated and a locally convergent iterative algorithm is implemented. The algorithm iteratively carries out ANM with a sound reweighting strategy which enhances sparsity and resolution, and is termed as reweighted atomic-norm minimization (RAM). 
Extensive numerical simulations are carried out to demonstrate the advantageous performance of RAM with application to direction of arrival (DOA) estimation.'\nauthor:\n- 'Zai Yang, [*Member, IEEE*]{}, and Lihua Xie, [*Fellow, IEEE*]{} [^1]'\ntitle: Enhancing Sparsity and Resolution via Reweighted Atomic Norm Minimization\n---\n\nContinuous compressed sensing" -"---\nabstract: 'Multi-object tracking systems often consist of a combination of a detector, a short term linker, a re-identification feature extractor and a solver that takes the output from these separate components and makes a final prediction. Differently, this work aims to unify all these in a single tracking system. Towards this, we propose Siamese Track-RCNN, a two stage detect-and-track framework which consists of three functional branches: (1) the detection branch localizes object instances; (2) the Siamese-based track branch estimates the object motion and (3) the object re-identification branch re-activates the previously terminated tracks when they re-emerge. We test our tracking system on two popular datasets of the MOTChallenge. Siamese Track-RCNN achieves significantly higher results than the state-of-the-art, while also being much more efficient, thanks to its unified design.'\nauthor:\n- Bing Shuai\n- 'Andrew G. Berneshawi'\n- Davide Modolo\n- Joseph Tighe\nbibliography:\n- 'eccv2020\\_bib.bib'\ntitle:\n- 'Multiple Object Tracking with Siamese Track-RCNN'\n- Appendix\n---\n\nIntroduction\n============\n\nMulti-object tracking (MOT) deals with the problem of localizing and tracking object instances over entire video sequences. Recently, the most successful approaches in the literature are based on the \u201ctracking-by-detection\u201d paradigm, which consists of two major components: object detection and association." -"---\nabstract: 'Stock prediction is a topic undergoing intense study for many years. 
Finance experts and mathematicians have been working on ways to predict the future stock price so as to decide whether to buy a stock or sell it to make a profit. Stock experts or economists usually analyze the previous stock values using technical indicators, sentiment analysis, etc., to predict the future stock price. In recent years, many studies have extensively used machine learning for predicting stock behaviour. In this paper we propose a data-driven deep learning approach that predicts the future stock value from previous prices, combining the feature extraction property of a convolutional neural network with Neural Arithmetic Logic Units.'\nauthor:\n- |\n Shangeth Rajaa[^1]\\\n Department of Mathematics\\\n BITS Pilani Goa Campus\\\n Goa, India 403725\\\n `f20160442@goa.bits-pilani.ac.in`\\\n Jajati Keshari Sahoo\\\n Department of Mathematics\\\n BITS Pilani Goa Campus\\\n Goa, India 403725\\\n `jksahoo@goa.bits-pilani.ac.in`\\\nbibliography:\n- 'paper.bib'\ntitle: Convolutional Feature Extraction and Neural Arithmetic Logic Units for Stock Prediction\n---\n\nINTRODUCTION\n============\n\nA large number of people buy and sell stocks every day, aiming to make maximum profit. Many mathematical methods and models have been developed which analyse the movement of the stock price." -"---\nabstract: 'For complex real-world systems, designing controllers is a difficult task. With the advent of neural networks as a proxy for complex function approximators, it has become popular to learn the controller directly. However, these controllers are specific to a given task and need to be relearned for a new task. Alternatively, one can learn just the model of the dynamical system and compose it with external controllers. Such a model is task (and controller) agnostic and must generalize well across the state space.
This paper proposes learning a \u201csufficiently accurate\u201d model of the dynamics that explicitly enforces small residual error on pre-defined parts of the state-space. We formulate task agnostic controller design for this learned model as an optimization problem with state and control constraints that is solved in an online fashion. We validate this approach in simulation using a challenging contact-based Ball-Paddle system.'\nbibliography:\n- 'paper.bib'\n---\n\nIntroduction\n============\n\n[[One of the fundamental problems in many fields, such as robotics, is the design of controllers for complex dynamical systems. For the most part, controllers rely on the availability of a mathematical model that describes the system. However, deriving the models and estimating all their parameters (e.g., mass," -"---\nabstract: 'Single-layer carbon, or graphene, demonstrates amazing transport properties, such as the minimum conductivity near $\\frac{4e^2}{h}$ independent of shapes and mobility of samples. This indicates there exist some unusual effects due to specific Dirac dispersion relation of fermion in two dimensions. By deriving fermion-lattice interaction Hamiltonian we show that Berry phases can be produced in fermion states around two Dirac points by relative rotations of two sublattices. The Berry phases in turn remove the degeneracies of energies for states near the Fermi surface, leading to a dynamical instability of the lattice with respect to the rotations. By considering the Berry phases emerging in an uncertain way on fermion wavefunctions in vicinities of the Fermi surface, the conductivity is calculated by using the Landauer-B\u00fctticker formula together with the transfer-matrix technique, verifying $\\sim \\frac{4e^2}{h}$ quantized minimum conductivity as observed in experiments independent of shapes and sizes. The relationship between the chaotic structure of fermions due to the Berry phases and the classical transport properties are discussed. 
The physical meaning is profound as this relationship provides an excellent example to elucidate the mechanism of quantum-classical transition.'\nauthor:\n- 'Shi-Jie Xiong'\n- Ye Xiong\ntitle: 'Berry-Phase Induced Dynamical Instability and Minimum Conductivity in" -"---\nabstract: 'The interaction of laser cooled and trapped atoms with resonant light is limited by the linewidth of the excited state of the atom. Another precise optical oscillator is an optical Fabry-P\u00e9rot cavity. The combining of cold atoms with optical oscillators is emerging as an area with great potential for precision measurements and the creation of versatile quantum optics systems. Here we show that when driven atoms are in the collectively strongly coupled regime with the cavity, exhibiting vacuum Rabi splitting (VRS), lasing is observed for the emitted light, red detuned from atomic transition. This is demonstrated experimentally by the observation of a lasing threshold, polarisation purity, mode purity, and line narrowing. The laser is created spontaneously by the atomic emission into the cavity mode, which stimulates cavity emission, and is capable of operating continuously without a seed laser. The gain mechanism is understood by theoretical modelling and illustrates why the observed lasing is generic to the coupled system. This opens up a range of possibilities of using the phenomenon for a variety of new measurements.'\nauthor:\n- Rahul Sawant\n- 'S. A. Rangwala'\nbibliography:\n- 'ref2.bib'\n---\n\nIntroduction\n============\n\nWhile both cold atoms\u00a0[@ketterle_nobel_2002; @cornell_nobel_2002]and cavity physics\u00a0[@thompson_observation_1992;" -"---\nabstract: 'Uhrig\u2019s dynamical decoupling pulse sequence has emerged as one universal and highly promising approach to decoherence suppression. So far both the theoretical and experimental studies have examined single-qubit decoherence only. 
This work extends Uhrig\u2019s universal dynamical decoupling from one-qubit to two-qubit systems and even to general multi-level quantum systems. In particular, we show that by designing appropriate control Hamiltonians for a two-qubit or a multi-level system, Uhrig\u2019s pulse sequence can also preserve a generalized quantum coherence measure to the order of $1+O(T^{N+1})$, with only $N$ pulses. Our results lead to a very useful scheme for efficiently locking two-qubit entangled states. Future important applications of Uhrig\u2019s pulse sequence in preserving the quantum coherence of multi-level quantum systems can also be anticipated.'\nauthor:\n- Musawwadah Mukhtar\n- Thuan Beng Saw\n- Wee Tee Soh\n- Jiangbin Gong\ntitle: 'Universal Dynamical Decoupling: Two-Qubit States and Beyond'\n---\n\nIntroduction\n============\n\nDecoherence, i.e., the loss of quantum coherence due to system-environment coupling, is a major obstacle for a variety of fascinating quantum information tasks. Even with the assistance of error corrections, decoherence must be suppressed below an acceptable level to realize a useful quantum operation. Analogous to refocusing techniques in nuclear magnetic resonance" -"---\nabstract: 'When exposed to the high energy X-ray and ultraviolet radiation of a very active star, water vapor in the upper atmospheres of planets can be photodissociated and rapidly lost to space. In this paper, I study the chemical, thermal, and hydrodynamic processes in the upper atmospheres of terrestrial planets, concentrating on water vapor dominated atmospheres orbiting in the habitable zones of active stars. I consider different stellar activity levels and find very high levels of atmospheric escape in all cases, with the outflowing gas being dominated by atomic hydrogen and oxygen in both their neutral and ion forms. 
In the lower activity cases, I find that the accumulation of O$_2$ and increases in the D/H ratios in the atmospheres due to mass fractionation are possible, but in the higher activity cases no mass fractionation takes place. Connecting these results to stellar activity evolution tracks for solar mass stars, I show that huge amounts of water vapor can be lost, and both the losses and the amount of O$_2$ that can be accumulated in the atmosphere depend sensitively on the star\u2019s initial rotation rate. For an Earth-mass planet in the habitable zone of a low-mass M-dwarf, my results suggest" -"---\nabstract: 'Collaborative Topic Regression (CTR) combines ideas of probabilistic matrix factorization (PMF) and topic modeling (e.g., LDA) for recommender systems, which has gained increasing successes in many applications. Despite enjoying many advantages, the existing CTR algorithms have some critical limitations. First of all, they are often designed to work in a batch learning manner, making them unsuitable to deal with streaming data or big data in real-world recommender systems. Second, the document-specific topic proportions of LDA are fed to the downstream PMF, but not reverse, which is sub-optimal as the rating information is not exploited in discovering the low-dimensional representation of documents and thus can result in a sub-optimal representation for prediction. In this paper, we propose a novel scheme of Online Bayesian Collaborative Topic Regression (OBCTR) which is efficient and scalable for learning from data streams. Particularly, we [*jointly*]{} optimize the combined objective function of both PMF and LDA in an online learning fashion, in which both PMF and LDA tasks can be reinforced each other during the online learning process. Our encouraging experimental results on real-world data validate the effectiveness of the proposed method.'\nauthor:\n- |\n Chenghao Liu$^{1,2}$, Tao Jin$^1$, Steven C.H. 
Hoi$^2$, Peilin Zhao$^3$, Jianling" -"---\nabstract: |\n In many biomedical applications, outcome is measured as a \u201ctime-to-event\u201d (eg. disease progression or death). To assess the connection between features of a patient and this outcome, it is common to assume a proportional hazards model, and fit a proportional hazards regression (or Cox regression). To fit this model, a log-concave objective function known as the \u201cpartial likelihood\u201d is maximized. For moderate-sized datasets, an efficient Newton-Raphson algorithm that leverages the structure of the objective can be employed. However, in large datasets this approach has two issues: 1) The computational tricks that leverage structure can also lead to computational instability; 2) The objective does not naturally decouple: Thus, if the dataset does not fit in memory, the model can be very computationally expensive to fit. This additionally means that the objective is not directly amenable to stochastic gradient-based optimization methods. To overcome these issues, we propose a simple, new framing of proportional hazards regression: This results in an objective function that is amenable to stochastic gradient descent. We show that this simple modification allows us to efficiently fit survival models with very large datasets. This also facilitates training complex, eg. neural-network-based, models with survival data.\n\n **Keywords:** Survival Analysis," -"---\nabstract: 'A number of globular clusters appear to have undergone core collapse, in the sense that their predicted collapse time is much shorter than their current age. Simulations using gas models and Fokker-Planck approximation have shown that the central density of a globular cluster after the collapse undergoes nonlinear oscillation with large amplitude (gravothermal oscillation). 
However, whether such an oscillation actually takes place in a real $N$-body system has remained unsolved, because an $N$-body simulation with a sufficiently high resolution would have required computing resources of the order of several Gflops$\\cdot$years. In the present paper, we report the result of such a simulation, performed on a dedicated special-purpose computer GRAPE-4. We simulated the evolution of isolated point-mass systems with up to 32,768 particles. The largest number of particles reported previously is 10,000. We confirmed that gravothermal oscillation takes place in an $N$-body system. The expansion phase shows all the signatures that are considered evidence of the gravothermal nature of the oscillation. At the maximum expansion, the core radius is $\\sim 1$% of the half-mass radius for the run with 32,768 particles. The maximum core size $r_c$ depends on $N$, as $ \\propto N^{-1/3}$.'\nauthor:\n- Junichiro Makino" -"---\nabstract: 'A recent attempt to make sense of scalars in AdS with \u201cNeumann boundary conditions\u201d outside of the usual BF-window $-(d/2)^2 < m^2 l^2 < -(d/2)^2 + 1$ led to pathologies including (depending on the precise context) either IR divergences or the appearance of ghosts. Here we argue that such ghosts may be banished by imposing a UV cutoff. It is also possible to achieve this goal in certain UV completions. An example is the above AdS theory with a radial cutoff supplemented by particular boundary conditions on the cutoff surface. In this case we explicitly identify a region of parameter space for which the theory is ghost free. At low energies, this theory may be interpreted as the standard dual CFT (defined with \u201cDirichlet\u201d boundary conditions) interacting with an extra scalar via an irrelevant interaction.
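The $r_c \propto N^{-1/3}$ scaling reported in the gravothermal-oscillation abstract above, anchored at $r_c \sim 1\%$ of the half-mass radius for $N = 32768$, can be turned into a quick back-of-the-envelope extrapolation (a sketch of the stated scaling, not the paper's fit):

```python
def core_radius_fraction(N, N_ref=32768, frac_ref=0.01):
    """Maximum core radius as a fraction of the half-mass radius at the
    peak of a gravothermal expansion, extrapolating the reported
    r_c ~ 1% of r_h at N = 32768 with r_c proportional to N**(-1/3)."""
    return frac_ref * (N / N_ref) ** (-1.0 / 3.0)

# 8x fewer particles -> maximum core twice as large, per the scaling
frac = core_radius_fraction(4096)   # ~0.02
```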
We also discuss the relationship to recent works on holographic fermi surfaces and quantum criticality.'\nauthor:\n- Tom\u00e1s Andrade\n- Thomas Faulkner\n- Donald Marolf\ntitle: Banishing AdS ghosts with a UV cutoff\n---\n\nIntroduction\n============\n\nAdS/CFT relates a set of Conformal Field Theories to gravitational theories in AdS [@Maldacena:1997re; @Witten:1998qj; @Gubser:1998bc]. Interesting field theory dynamics follows from simple relevant deformations of these" -"---\nabstract: 'A model is described, in which electrical breakdown in high-voltage systems is caused by stochastic fluctuations of the mobile dislocation population in the cathode. In this model, the mobile dislocation density normally fluctuates, with a finite probability to undergo a critical transition due to the effects of the external field. It is suggested that once such a transition occurs, the mobile dislocation density will increase deterministically, leading to electrical breakdown. Model parametrization is achieved via microscopic analysis of OFHC Cu cathode samples from the CERN CLIC project, allowing the creation and depletion rates of mobile dislocations to be estimated as a function of the initial physical condition of the material and the applied electric field. We find analytical expressions for the mean breakdown time and quasistationary probability distribution of the mobile dislocation density, and verify these results by using a Gillespie algorithm. A least-squares algorithm is used to fit these results with available experimental data of the dependence of the breakdown rate on the applied strength of the electric field and on temperature. 
The effects of the variation of some of the assumptions of the physical model are considered, and a number of additional experiments to validate the" -"---\nabstract: 'The Faraday dispersion function (FDF), which can be derived from an observed polarization spectrum by Faraday rotation measure synthesis, is a profile of polarized emissions as a function of Faraday depth. We study intrinsic FDFs along sight lines through face-on, Milky-Way-like galaxies by means of a sophisticated galactic model incorporating 3D MHD turbulence, and investigate how much the FDF contains information intrinsically. Since the FDF reflects distributions of thermal and cosmic-ray electrons as well as magnetic fields, it has been expected that the FDF could be a new probe to examine internal structures of galaxies. We, however, find that an intrinsic FDF along a sight line through a galaxy is very complicated, depending significantly on actual configurations of turbulence. We perform 800 realizations of turbulence, and find no universal shape of the FDF even if we fix the global parameters of the model. We calculate the probability distribution functions of the standard deviation, skewness, and kurtosis of FDFs and compare them for models with different global parameters. Our models predict that the presence of vertical magnetic fields and large scale-height of cosmic-ray electrons tend to make the standard deviation relatively large. Contrastingly, differences in skewness and kurtosis are" -"---\nabstract: 'Controllable building loads have the potential to increase the flexibility of power systems. A key step in developing effective and attainable load control policies is modeling the set of feasible building load profiles. In this paper, we consider buildings whose source of flexibility is their HVAC loads. 
We propose a data-driven method to empirically estimate a robust feasible region of the load using coarse data, that is, using only total building load and average indoor temperatures. The proposed method uses easy-to-gather coarse data and can be adapted to buildings of any type. The resulting feasible region model is robust to temperature prediction errors and is described by linear constraints. The mathematical simplicity of these constraints makes the proposed model adaptable to many power system applications, for example, economic dispatch, and optimal power flow. We validate our model using data from EnergyPlus and demonstrate its usefulness through a case study in which flexible building loads are used to balance errors of wind power forecasts.'\nauthor:\n- 'Jesus E. Contreras-Oca\u00f1a,\u00a0 Miguel A. Ortega-Vazquez,\u00a0 Daniel Kirschen,\u00a0 and\u00a0Baosen Zhang,\u00a0 [^1]'\nbibliography:\n- 'bibliography.bib'\ntitle: Tractable and Robust Modeling of Building Flexibility Using Coarse Data\n---\n\nBuildings, flexibility, data-driven modeling.\n\nNomenclature {#nomenclature" -"---\nabstract: |\n This paper aims to investigate/map the effects that perturbations applied to an accretion disk might produce on the registered Light Curves (LC). The case of accretion disks around supermassive active black holes (AGNs) is studied with the goal to explain some of the statistical properties of the observed IntraDay Variability (IDV). The region producing optical IDV is perturbed by allowing it to develop a mass density of a fractional Brownian Motion-like type. The light curves and spectral slopes are calculated and compared to observational data for different Hurst parameters. The spectral slopes of the simulated light curves vary in the range $(0.4,2.5)$. 
The agreement with observational data shows that a magnetized disk subjected to stochastic perturbations can produce some of the features observed in the light curves.\\\n **Keywords**: accretion, accretion discs; magnetohydrodynamics (MHD); fractional Brownian Motion\nauthor:\n- 'G. Mocanu$^{*,+}$ and N. Magyar$^*$ and A. Pardi$^*$ and A. Marcu$^*$'\ntitle: Appearance of an accretion disk perturbed by fractional Brownian Motion density\n---\n\n$^*$Faculty of Physics, Babes-Bolyai University, Cluj Napoca, Romania, No. 1 Kolgalniceanu Street, 400084; $^+$Department of Mathematics, Technical University Cluj Napoca, Memorandumului Street 28, 400114 Cluj-Napoca, Romania\n\nIntroduction\n============\n\nExtensive observational and theoretical efforts have been made in order" -"---\nabstract: 'We present preliminary calculations of electron scattering polarizations from models of structured cool star envelopes. We note that net polarizations from unresolved sources can result from non-spherical scattering envelopes and/or anisotropic illumination from a photosphere that has brightness variations. The resultant polarizations are quite small (hundredths of a percent); however, Rayleigh scattering from molecular opacity and/or dust scattering from the more extended envelope under similar considerations may produce higher polarizations.'\nauthor:\n- 'R.\u00a0Ignace$^1$, G.\u00a0D.\u00a0Henson$^1$, J.\u00a0Carson$^2$'\ntitle: Polarization from the Structured Envelopes of Cool Evolved Stars\n---\n\nThe variable nature and complex envelopes of cool evolved stars offer numerous mechanisms for the creation of polarized light, which in turn probes structure in the envelopes and flows of these stars. Observations of variable polarization lead to constraints on physical models for the envelope dynamics. We consider an electron scattering chromosphere with a radial density of scatterers $n \\propto \\exp[(r-R)/H]\n\\times \\sin^2[k(r-R)]$ illuminated by a photosphere at $r=R$.
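The chromosphere model above specifies the scatterer density in closed form. A small helper evaluating $n \propto \exp[(r-R)/H]\,\sin^2[k(r-R)]$ shows the shell-like radial structure; the normalization `n0` and wavenumber `k` are illustrative choices, not values from the text:

```python
import math

def scatterer_density(r, R=1.0, H=0.01, k=50.0, n0=1.0):
    """Radial scatterer density n ~ exp[(r-R)/H] * sin^2[k(r-R)] above a
    photosphere of radius R. n0 and k are illustrative, not from the text."""
    x = r - R
    return n0 * math.exp(x / H) * math.sin(k * x) ** 2

# the density vanishes at the photosphere and at each node r = R + m*pi/k,
# producing the shell-like radial structure of the model chromosphere
peak = scatterer_density(1.0 + math.pi / 100.0)   # midway between two nodes
```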
Figure\u00a0\\[ignace:fig1\\] shows a $Q-U$ diagram for a chromospheric envelope with $\\tau=1$, scale height $H=0.01R$, and lateral structure given by a spherical harmonic of $l=3, m=2$. Different curves are for different viewing inclinations, and the points are for different" -"---\nabstract: 'Since many of the currently available multi-agent frameworks are generally mostly intended for research, it can be difficult to build multi-agent systems using physical robots. In this report I describe a way to combine the multi-agent framework *Jason*, an extended version of the agent-oriented programming language AgentSpeak, with Lego robots to address this problem. By extending parts of the *Jason* reasoning cycle I show how Lego robots are able to complete tasks such as following lines on a floor and communicating to be able to avoid obstacles with a minimal amount of coding. The final implementation is a functional extension that is able to build multi-agent systems using Lego agents; however, there are some issues that have not been addressed. If the agents are highly dependent on percepts from their sensors, they are required to move quite slowly, because there is currently a high delay in the reasoning cycle when it is combined with a robot. Overall the system is quite robust and can be used to make simple Lego robots perform the tasks of an advanced agent in a multi-agent environment.'\nauthor:\n- Andreas Schmidt Jensen\ndate: 1 October 2010\ntitle: 'Implementing Lego Agents Using *Jason*'\n---\n\nIntroduction\n============" -"---\nabstract: 'We study the possibility that the mutual interactions between Jupiter and Saturn prevented Type II migration from driving these planets much closer to the Sun. Our work extends previous results by Masset and Snellgrove (2001), by exploring a wider set of initial conditions and disk parameters, and by using a new hydrodynamical code that properly describes the global viscous evolution of the disk.
Initially both planets migrate towards the Sun, and Saturn\u2019s migration tends to be faster. As a consequence, they eventually end up locked in a mean motion resonance. If this happens in the 2:3 resonance, the resonant motion is particularly stable, and the gaps opened by the planets in the disk may overlap. This causes a drastic change in the torque balance for the two planets, which substantially slows down the planets\u2019 inward migration. If the gap overlap is substantial, planet migration may even be stopped or reversed. As the widths of the gaps depend on disk viscosity and scale height, this mechanism is particularly efficient in low viscosity, cool disks. The initial locking of the planets in the 2:3 resonance is a likely outcome if Saturn formed at the edge of Jupiter\u2019s gap, but" -"---\nabstract: 'Multiple imputation (MI) has become popular for analyses with missing data in medical research. The standard implementation of MI is based on the assumption of data being missing at random (MAR). However, for missing data generated by missing not at random (MNAR) mechanisms, MI performed assuming MAR might not be satisfactory. For an incomplete variable in a given dataset, its corresponding population marginal distribution might also be available in an external data source. We show how this information can be readily utilised in the imputation model to calibrate inference to the population, by incorporating an appropriately calculated offset termed the \u2018calibrated-$\\delta$ adjustment\u2019. We describe the derivation of this offset from the population distribution of the incomplete variable and show how in applications it can be used to closely (and often exactly) match the post-imputation distribution to the population level. 
Through analytic and simulation studies, we show that our proposed calibrated-$\\delta$ adjustment MI method can give the same inference as standard MI when data are MAR, and can produce more accurate inference under two general MNAR missingness mechanisms. The method is used to impute missing ethnicity data in a type 2 diabetes prevalence case study using UK primary care" -"---\nabstract: 'The advent of 5G networking technologies has increased the expectations from mobile devices, in that more sophisticated, computationally intense applications are expected to be delivered on mobile devices, which are themselves getting smaller and sleeker. This predicates a need for offloading computationally intense parts of the applications to a resource-strong cloud. In parallel, in the wireless networking world, the trend has shifted to multi-*radio* (as opposed to multi-channel) enabled communications. In this paper, we provide a comprehensive *computation* offloading solution that optimally uses the multiple radio links available for the associated data transfer. Our contributions include: a comprehensive model of the energy consumption from the perspective of the mobile device; the formulation of the joint optimization problem of minimizing the energy consumed while allocating the associated data transfer optimally through the available radio links; and an iterative algorithm that converges to a locally optimal solution. Simulations on an HTC phone, running a 14-component application and using Amazon EC2 as the cloud, show that the solution obtained through the iterative algorithm consumes only 3% more energy than the optimal solution (obtained via exhaustive search).'\nauthor:\n- \n- \n- \nbibliography:\n- 'cloud6.bib'\ntitle: 'Cloud Offloading for Multi-Radio" -"---\nabstract: 'The ability to classify spoken speech based on the style of speaking is an important problem.
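The computation-offloading abstract above frames offloading as an energy-minimization decision from the device's perspective. A toy local-vs-offload comparison sketches the basic tradeoff; all powers, times, and rates are hypothetical, and the paper's model additionally splits the transfer optimally across multiple radio links:

```python
def offload_saves_energy(t_local_s, p_compute_w, data_bits, p_radio_w, rate_bps):
    """Toy energy comparison: run a component locally (t_local * P_cpu)
    vs. ship its data for remote execution ((data / rate) * P_radio).
    All parameters are hypothetical, not the paper's fitted model."""
    e_local_j = t_local_s * p_compute_w
    e_offload_j = (data_bits / rate_bps) * p_radio_w
    return e_offload_j < e_local_j

# 2 s of compute at 0.9 W (1.8 J) vs 4 Mb over a 2 Mb/s link at 1.2 W (2.4 J):
# offloading costs more energy here, so this component should stay on the device
keep_local = not offload_saves_energy(2.0, 0.9, 4e6, 1.2, 2e6)
```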
With the advent of [[BPO\u2019s]{}]{}\u00a0in recent times, specifically those that cater to a population other than the local population, it has become necessary for [[BPO\u2019s]{}]{}\u00a0to identify people with a certain style of speaking (American, British, etc.). Today [[BPO\u2019s]{}]{}\u00a0employ accent analysts to identify people having the required style of speaking. This process, besides involving human bias, is becoming increasingly infeasible because of the high attrition rate in the [[BPO]{}]{}\u00a0industry. In this paper, we propose a new metric, which robustly and accurately helps classify spoken speech based on the style of speaking. The role of the proposed metric is substantiated by using it to classify real speech data collected from over seventy different people working in a [[BPO]{}]{}. We compare the performance of the metric against human experts who independently carried out the classification process. Experimental results show that the system using the novel metric performs better than two different human experts.'\nauthor:\n- 'Sunil Kopparapu[^1], Saurabh Bhatnagar, K. Sahana, Sathyanarayana,'\n- |\n Akhilesh Srivastava, P.V.S. Rao\\\n Cognitive Systems Research Laboratory\\\n Tata Infotech Limited, Navi Mumbai\\\n http://www.tatainfotech.com" -"---\nabstract: 'Quite often in database search, we only need to extract a portion of the information about the satisfying item. Recently Radhakrishnan and Grover \\[RG\\] considered this problem in the following form: the database of $N$ items was divided into $K$ equally sized blocks. The algorithm just has to find the block containing the item of interest. The queries are exactly the same as in the standard database search problem. \\[RG\\] invented a quantum algorithm for this problem of partial search that took about $0.33\\sqrt{N/K}$ fewer iterations than the quantum search algorithm.
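The iteration counts quoted for partial search are easy to sanity-check against the standard Grover count of about $(\pi/4)\sqrt{N}$ queries. A quick arithmetic sketch, using the $0.33$ savings coefficient quoted above (the $0.78$ coefficient is the proven upper bound on possible savings):

```python
import math

def grover_iterations(N):
    """Standard Grover search needs about (pi/4) * sqrt(N) queries."""
    return (math.pi / 4) * math.sqrt(N)

def partial_search_savings(N, K, coeff=0.33):
    """Queries saved by the Radhakrishnan-Grover partial search,
    ~ coeff * sqrt(N/K); coeff = 0.78 gives the quoted upper bound
    on what any quantum algorithm could save."""
    return coeff * math.sqrt(N / K)

N, K = 1_000_000, 4
full = grover_iterations(N)            # ~785.4 iterations for full search
saved = partial_search_savings(N, K)   # 0.33 * sqrt(250000) = 165 iterations saved
```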
They also proved that the best any quantum algorithm could do would be to save $0.78 \\sqrt{N/K}$ iterations. The main limitation of the algorithm was that it involved a complicated analysis, as a result of which it has been inaccessible to most of the community. This paper gives a simple analysis of the algorithm. This analysis is based on three elementary observations about quantum search, does not require a single equation and takes less than 2 pages.'\nauthor:\n- 'Vladimir\u00a0E.\u00a0Korepin$^{1}$ and Lov K.\u00a0Grover$^{2}$'\ntitle: Simple Algorithm for Partial Quantum Search\n---\n\nDatabase search is one of the few applications for which a fast quantum algorithm is
These results introduce delayed dispersal as a tool for understanding the importance of dispersal time across a landscape matrix in affecting metacommunity dynamics. They further highlight the importance of landscape and dispersal patterns for predicting the onset of" -"---\nabstract: 'The $n$-player Hotelling game calls for each player to choose a point on the line segment, so as to maximize the size of his Voronoi cell. This paper studies fault-tolerant versions of the Hotelling game. Two fault models are studied: line faults and player faults. The first model assumes that the environment is prone to failure: with some probability, a disconnection occurs at a random point on the line, splitting it into two separate segments and modifying each player\u2019s Voronoi cell accordingly. A complete characterization of the Nash equilibria of this variant is provided for every $n$. Additionally, a one-to-one correspondence is shown between equilibria of this variant and of the Hotelling game with no faults. The second fault model assumes the players are prone to failure: each player is removed from the game with some probability, changing the payoffs of the remaining players accordingly. It is shown that for $n \\geq 3$ this variant of the game has no Nash equilibria.'\nauthor:\n- 'Chen Avin[^1]'\n- 'Avi Cohen[^2]'\n- 'Zvi Lotker$^*$'\n- 'David Peleg$^{\\dag}$'\ntitle: 'Fault-Tolerant Hotelling Games'\n---\n\nIntroduction\n============" -"---\nabstract: |\n Dependability assurance of systems embedding components\u2014so called ** ()\u2014is a key step for their use in safety-critical applications. In emerging standardization and guidance efforts, there is a growing consensus on the value of using assurance cases for that purpose.
This paper develops a quantitative notion of assurance that an is dependable, as a core component of its assurance case, also extending our prior work that applied to *components*. Specifically, we characterize assurance in the form of *assurance measures*: a probabilistic quantification of confidence that an possesses system-level properties associated with functional capabilities and dependability attributes. We illustrate the utility of assurance measures by application to a real-world autonomous aviation system, also describing their role both in\n\n guiding high-level, runtime risk mitigation decisions and\n\n as a core component of the associated *dynamic assurance case*.\n\n \\\n **Keywords**: Assurance, Autonomy, Confidence, Learning-enabled systems, Machine learning, Quantification\nauthor:\n- Erfan Asaadi\n- Ewen Denney\n- 'Ganesh Pai$^{(\\textrm{\\Letter})}$'\ntitle: 'Quantifying Assurance in Learning-enabled Systems[^1]'\n---\n\nIntroduction {#s:intro}\n============\n\nThe pursuit of developing systems with increasingly autonomous capabilities is amongst the main reasons for the emergence of ** (), [i.e.,]{} systems embedding based software components. There is a growing consensus in autonomy standardization
Some economists justify these policies as the typical \u2019rule of thumb\u2019: when profits decline, the workers will be paid less and less, until either the business recovers or goes bankrupt. Others say that it is exactly the recipe for failure, since underpaid workers will rarely work twice as hard to get the business back on its feet - quite the opposite.\n\nSimilarly, incentives for new private investments, e.g. low tax rates, are often compared to public spending and the regulatory policies are usually criticized as \u2019killers\u2019 for those incentives. However, there is a definite link between changes in the investment flows and the inherent" -"---\nabstract: 'We describe calibration data, and discuss performance of the photon-counting flight detectors for the Ultraviolet Imaging Telescopes on the Astrosat observatory. The paper describes dark current, flat field and light-spot images for FUV, NUV, and Visible band detectors at more than one wavelength setting for each. We also report on nominal gain and low-gain operations, full- and sub-window read rates, and non-photon-counting modes of operation, all expected to be used in flight. We derive corrections to the event centroids from the CMOS readout arrays, for different centroid algorithms. We derive spatial resolution values for each detector and plots of point-source signal saturation for different flux levels. We also discuss ways to correct for saturation in extended object images.'\nauthor:\n- 'J. Postma'\n- 'J.B. Hutchings'\n- 'D. Leahy'\ntitle: 'Calibration and performance of the photon-counting detectors for the Ultraviolet Imaging Telescopes (UVIT) of the Astrosat observatory[^1]'\n---\n\nIntroduction and data\n=====================\n\nAstrosat is a multi-wavelength space observatory of the Indian Space Research Organisation (ISRO). The satellite is to be launched in 2012, and contains three pointed X-ray instruments and two UV-optical telescopes, all with fields of view that are aligned.
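The UVIT abstract above mentions deriving centroid corrections for different centroiding algorithms over the CMOS readout. A generic center-of-gravity centroid over a small pixel window illustrates the basic operation; the window values are invented, and this is not the paper's specific flight algorithm:

```python
import numpy as np

def event_centroid(window):
    """Center-of-gravity centroid of a photon event over a small pixel
    window. A generic scheme for illustration; the paper derives
    corrections for the specific centroiding algorithms used in flight."""
    w = np.asarray(window, dtype=float)
    ys, xs = np.indices(w.shape)
    total = w.sum()
    return (xs * w).sum() / total, (ys * w).sum() / total

# a made-up 3x3 event footprint, brightest in the middle, leaking right
event = [[0, 1, 0],
         [1, 6, 2],
         [0, 1, 0]]
cx, cy = event_centroid(event)   # sub-pixel position, slightly right of center
```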
There is also an X-ray scanning sky monitor." -"---\nabstract: |\n We develop a novel, general framework for the asymptotic reduction of the bias of $M$-estimators from unbiased estimating functions. The framework relies on additive, empirical adjustments to the estimating functions that depend only on the first two derivatives of the contributions to the estimating functions. The new estimation method has markedly broader applicability than previous bias-reduction methods by applying to models that are either partially-specified or that have a likelihood that is intractable or expensive to compute, and a surrogate objective is employed. The method also offers itself to easy, general implementations for arbitrary models by using automatic differentiation. This is in contrast to other popular bias-reduction methods that require either resampling or evaluation of expectations of products of log-likelihood derivatives. If $M$-estimation is by the maximization of an objective function, then, reduced-bias $M$-estimation can be achieved by maximizing an appropriately penalized objective. That penalized objective relates closely to information criteria based on the Kullback-Leibler divergence, establishing, for the first time, a strong link between reduction of estimation bias and model selection. The reduced-bias $M$-estimators are found to have the same asymptotic distribution, and, hence, the same asymptotic efficiency properties as the original $M$-estimators, and we discuss" -"---\nabstract: 'Weak gravitational lensing is one of the most promising cosmological probes to constrain dark matter, dark energy and the nature of gravity at cosmic scales. Intrinsic alignments (IA) of galaxies have been recognized as one of the most serious systematic effects facing gravitational lensing. Such alignments must be isolated and removed to obtain a pure lensing signal. 
Furthermore, the alignments are related to the processes of galaxy formation, so their extracted signal can help in understanding such formation processes and improving their theoretical modeling. We report in this letter the first detection of the gravitational shear\u2013intrinsic shape (GI) correlation and the intrinsic shape\u2013galaxy density (Ig) correlation in a photometric redshift survey using the self-calibration method. These direct measurements are made from the KiDS-450 photometric galaxy survey with a significance of 2.74$\\sigma$ in the third bin for the Ig correlation, and 2.73$\\sigma$ for the GI cross-correlation between the third and fourth bins. The self-calibration method uses the information available from photometric surveys without needing to specify an IA model and will play an important role in validating IA models and IA mitigation in future surveys such as LSST, Euclid and WFIRST.'\nauthor:\n- 'Eske M. Pedersen$^{1}$'\n- 'Ji Yao$^{1,2}$'" -"---\nabstract: 'An outstanding question in X-ray single particle imaging experiments has been the feasibility of imaging sub 10-nm-sized biomolecules under realistic experimental conditions where very few photons are expected to be measured in a single snapshot and instrument background may be significant relative to particle scattering. While analyses of simulated data have shown that the determination of an average image should be feasible using Bayesian methods such as the EMC algorithm, this has yet to be demonstrated using experimental data containing realistic non-isotropic instrument background, sample variability and other experimental factors. In this work, we show that the orientation and phase retrieval steps work at photon counts diluted to the signal levels one expects from smaller molecules or with weaker pulses, using data from experimental measurements of 60-nm PR772 viruses. 
Even when the signal is reduced to a fraction as little as 1/256, the virus electron density determined using *ab initio* phasing is of almost the same quality as the high-signal data. However, we are still limited by the total number of patterns collected, which may soon be mitigated by the advent of high repetition-rate sources like the European XFEL and LCLS-II.'\naddress: |\n Max Planck Institute for the" -"---\nabstract: 'Tight-binding Hamiltonians with single and multiple orbitals exhibit an intriguing array of magnetic phase transitions. In most cases the spin ordered phases are insulating, while the disordered phases may be either metallic or insulating. In this paper we report a Determinant Quantum Monte Carlo study of interacting electrons in a geometry which can be regarded as a two-dimensional Periodic Anderson Model with depleted interacting ($f$) orbitals. For a single depletion, we observe an enhancement of antiferromagnetic correlations and formation of localized states. For half of the $f$-orbitals regularly depleted, the system exhibits a ferrimagnetic ground state. We obtain a quantitative determination of the nature of magnetic order, which we discuss in the context of Tsunetsugu\u2019s theorem, and show that, although the dc conductivity indicates insulating behavior at half-filling, the compressibility remains finite.'\nauthor:\n- 'N.C. Costa'\n- 'M.V. Ara\u00fajo'\n- 'J.P. Lima'\n- 'T. Paiva'\n- 'R.R. dos Santos'\n- 'R.T. Scalettar'\nbibliography:\n- 'Pam\\_depl.bib'\ntitle: Compressible Ferrimagnetism in the depleted Periodic Anderson Model\n---\n\nIntroduction\n============\n\nTight binding Hamiltonians provide insight into many of the properties of strongly correlated electron systems, from magnetism and metal-insulator transitions, to superconductivity and charge ordering[@gebhard97; @fazekas99]. 
The simplest of these, the" -"---\naddress: '32 Avenue de l\u2019Observatoire, 25044 Besan\u00e7on Cedex'\nauthor:\n- Michel PLANAT\ntitle: |\n Class numbers in the imaginary quadratic field\\\n and the $1/f$ noise of an electron gas \n---\n\nIntroduction to Noise\\\nin Electrical Circuits {#Electrical}\n======================\n\nNoise in electrical circuits is found in many forms, some of which have been well explained: thermal noise, shot noise, partition noise, burst noise... and one which is still subject to much debate due to its universality and the lack of a generally accepted model: 1/f noise.\n\nLet us first briefly review our understanding of electrical thermal noise. Due to thermal agitation, free electrons in a metallic conductor are moving around continuously, causing collisions with the atoms and a continuous exchange of energy between the modes. This was first investigated experimentally by Johnson [@Joh.28] and theoretically explained by Nyquist [@Nyq.28]. The noise in any circuit kept at uniform temperature $T$ can be described by a noise voltage $(\\overline{v^2})^{1/2}$ in series with a resistor $R$ of the circuit such that for a small frequency interval $df$ $$\\overline{v^2}~=~4~kRTp(f)~df, \\label{thermal}$$ where $p(f)=\\frac{hf}{kT}(e^{hf/kT}-1)^{-1}$ is the Planck factor, $k=1.38\\times 10^{-23}$ [J/K]{} is Boltzmann\u2019s constant and $h=6.62\\times 10^{-34}$ [J.s]{} is Planck\u2019s constant. For room temperature and not too high frequency" -"---\nabstract: 'We consider distributed control of double-integrator networks, where agents are subject to stochastic disturbances. We study performance of such networks in terms of *coherence*, defined through an [$\\mathcal{H}_2$ ]{}norm metric that represents the variance of nodal state fluctuations. Specifically, we address known performance limitations of the standard consensus protocol, which cause this variance to scale unboundedly with network size for a large class of networks.
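The Nyquist formula above, $\overline{v^2} = 4kRTp(f)\,df$, reduces at room temperature and low frequency (where the Planck factor $p(f) \to 1$) to the familiar $4kTR\,\Delta f$. A short numeric check; the resistor value and band are arbitrary:

```python
import math

K_B = 1.380649e-23    # Boltzmann constant [J/K]
H = 6.62607015e-34    # Planck constant [J.s]

def planck_factor(f, T):
    """p(f) = (hf/kT) / (exp(hf/kT) - 1); tends to 1 at low frequency."""
    x = H * f / (K_B * T)
    return x / math.expm1(x) if x > 0 else 1.0

def thermal_noise_rms(R, T, f, df):
    """RMS noise voltage over a band df around f, per v^2 = 4 k R T p(f) df."""
    return math.sqrt(4 * K_B * R * T * planck_factor(f, T) * df)

# 1 kOhm resistor at 300 K, 1 Hz band at 1 kHz: ~4.07 nV (p(f) ~ 1 here)
v_rms = thermal_noise_rms(R=1e3, T=300.0, f=1e3, df=1.0)
```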
We propose distributed proportional integral (PI) and proportional derivative (PD) controllers that relax these limitations and achieve bounded variance, in cases where agents can access an absolute measurement of one of their states. This case applies to, for example, frequency control of power networks and vehicular formation control with limited sensing. We discuss optimal tuning of the controllers with respect to network coherence and demonstrate our results in simulations.'\nauthor:\n- 'Emma Tegling and Henrik Sandberg [^1] [^2]'\nbibliography:\n- 'EmmasBib17\\_NonPS.bib'\ntitle: |\n **On the Coherence of Large-Scale Networks\\\n with Distributed PI and PD Control**\n---\n\nIntroduction {#sec:intro}\n============\n\nThe problem of distributed control of networked systems has been extensively studied over the past decades [@Jadbabaie2003; @OlfatiSaber; @RenAtkins2005]. A canonical objective is consensus, that is, to drive the network of agents to the same state. When the system
Using a novel set of few shot link prediction benchmarks, we show that [*Meta-Graph*]{} enables not only fast adaptation but also better final convergence and can effectively learn using only a small sample of true edges.'\nauthor:\n- |\n Avishek Joey Bose [^1]\\\n McGill University, Mila\\\n `joey.bose@mail.mcgill.ca`\\\n Ankit Jain\\\n Uber AI\\\n `ankit.jain@uber.com`\\\n Piero Molino\\\n Uber AI\\\n `piero.molino@uber.com`\\\n William L. Hamilton\\\n McGill University, Mila\\\n `wlh@cs.mcgill.ca`\\\nbibliography:\n- 'bibliography.bib'\ntitle: 'Meta-Graph: Few shot Link Prediction via Meta Learning'\n---\n\nAcknowledgements {#acknowledgements .unnumbered}\n----------------\n\nThe authors would like to thank Thang Bui, Maxime Wabartha, Nadeem Ward, Sebastien Lachapelle, and Zhaocheng Zhu for helpful feedback on earlier" -"---\nabstract: 'We use a discrete dislocation dynamics (DDD) approach to study the motion of a dislocation under strong stochastic forces that may cause bending and roughening of the dislocation line on scales that are comparable to the dislocation core radius. In such situations, which may be relevant in high entropy alloys (HEA) exhibiting strong atomic scale disorder, standard scaling arguments based upon a line tension approximation may be no longer adequate and corrections to scaling need to be considered. We first study the wandering of the dislocation under thermal Langevin forces. This leads to a linear stochastic differential equation which can be exactly solved. From the Fourier modes of the thermalized dislocation line we can directly deduce the scale dependent effective line tension. We then use this information to investigate the wandering of a dislocation in a crystal with spatial, time-independent (\u2019quenched\u2019) disorder. We establish the pinning length and show how this length can be used as a predictor of the flow stress. 
Implications for the determination of flow stresses in HEA from molecular dynamics simulations are discussed.'\naddress:\n- 'Institute of Materials Simulation (WW8), Friedrich-Alexander University Erlangen-N[\u00fc]{}rnberg (FAU), Dr.-Mack-Str. 77, 90762 F[\u00fc]{}rth, Germany'\n- 'Department of Engineering and" -"---\nabstract: 'In many real-world applications, e.g. recommendation systems, certain items appear much more frequently than other items. However, standard embedding methods\u2014which form the basis of many ML algorithms\u2014allocate the same dimension to all of the items. This leads to statistical and memory inefficiencies. In this work, we propose mixed dimension embedding layers in which the dimension of a particular embedding vector can depend on the frequency of the item. This approach drastically reduces the memory requirement for the embedding, while maintaining and sometimes improving the ML performance. We show that the proposed mixed dimension layers achieve a higher accuracy, while using 8$\\times$ fewer parameters, for collaborative filtering on the MovieLens dataset. Also, they improve accuracy by 0.1% using half as many parameters or maintain baseline accuracy using 16$\\times$ fewer parameters for click-through rate prediction task on the Criteo Kaggle dataset.'\nauthor:\n- 'Antonio A. Ginart${\\thanks{Work done while at Facebook}^{\\ ,}}$'\n- Maxim Naumov\n- Dheevatsa Mudigere\n- Jiyan Yang\n- James Zou\nbibliography:\n- 'references.bib'\ntitle: 'Mixed Dimension Embedding with Application to Memory-Efficient Recommendation Systems'\n---\n\nIntroduction\n============\n\nIt is difficult to overstate the impact of representation learning and embedding-based models in the present AI landscape. 
Embedding representations power" -"---\nabstract: 'The baseline energy-resolution performance for the current generation of large-mass, low-temperature calorimeters (utilizing TES and NTD sensor technologies) is $>2$ orders of magnitude worse than theoretical predictions. A detailed study of several calorimetric detectors suggests that a mismatch between the sensor and signal bandwidths is the primary reason for suppressed sensitivity. With this understanding, we propose a detector design in which a thin-film Au pad is directly deposited onto a massive absorber that is then thermally linked to a separately fabricated TES chip via an Au wirebond, providing large electron-phonon coupling (i.e. high signal bandwidth), ease of fabrication, and cosmogenic background suppression. Interestingly, this design strategy is fully compatible with the use of hygroscopic crystals (NaI) as absorbers. An 80-mm diameter Si light detector based upon these design principles, with potential use in both dark matter and neutrinoless double-beta decay, has an estimated baseline energy resolution of 0.35eV, 20$\\times$ better than currently achievable. A 1.75kg ZnMoO$_{4}$ large-mass calorimeter would have a 3.5eV baseline resolution, 1000$\\times$ better than currently achieved with NTDs with an estimated position dependence $\\frac{\\Delta E}{E}$ of 6$\\times$10$^{-4}$, near or below the variations found in absorber thermalization in ZnMoO$_{4}$\u00a0 and TeO$_{2}$. Such minimal position dependence is" -"---\nabstract: 'The global dynamics of a homogeneous universe in Loop Quantum Cosmology is viewed as a scattering process of its geometrodynamical equivalent. This picture is applied to build a flexible (easy to generalize) and not restricted just to exactly solvable models method of verifying the preservation of the semiclassicality through the bounce. 
The devised method is next applied to two simple examples: $(i)$ the isotropic Friedmann-Robertson-Walker universe, and $(ii)$ the isotropic sector of the Bianchi I model. For both of them we show that the dispersions in the logarithm of the volume $\\ln(v)$ and scalar field momentum $\\ln(p_{\\phi})$ in the distant future and past are related via strong triangle inequalities. This implies in particular a strict preservation of the semiclassicality (in the considered degrees of freedom) in both the cases $(i)$ and $(ii)$. The derived inequalities are general: valid for all the physical states within the considered models.'\nauthor:\n- 'Wojciech Kami\u0144ski${}^{1}$'\n- 'Tomasz Paw[\u0142]{}owski${}^{2,1}$'\ntitle: Cosmic recall and the scattering picture of Loop Quantum Cosmology\n---\n\nIntroduction\n============\n\nLoop Quantum Gravity [@lqg1; @lqg2] and its symmetry reduced analog, known as Loop Quantum Cosmology [@lqc1; @lqc2], have experienced dynamic progress over recent years. In particular, an application of" -"---\nabstract: 'A maximum likelihood methodology for a general class of models is presented, using an approximate Bayesian computation (ABC) approach. The typical targets of ABC methods are models with intractable likelihoods, and we combine an ABC-MCMC sampler with so-called \u201cdata cloning\u201d for maximum likelihood estimation. Accuracy of ABC methods relies on the use of a small threshold value for comparing simulations from the model and observed data. The proposed methodology shows how to use large threshold values, while the number of data-clones is increased to ease convergence towards an approximate maximum likelihood estimate. We show how to exploit the methodology to reduce the number of iterations of a standard ABC-MCMC algorithm and therefore reduce the computational effort, while obtaining reasonable point estimates. 
Simulation studies show the good performance of our approach on models with intractable likelihoods such as $g$-and-$k$ distributions, stochastic differential equations and state-space models.'\naddress: 'Centre for Mathematical Sciences, Box 118 Lund University, SE-22100 Lund, Sweden'\nauthor:\n- Umberto\u00a0Picchini\n- Rachele\u00a0Anderson\nbibliography:\n- 'biblio.bib'\ntitle: 'Approximate maximum likelihood estimation using data-cloning ABC'\n---\n\n=1\n\napproximate Bayesian computation ,intractable likelihood ,MCMC ,state-space model ,stochastic differential equation\n\nIntroduction\n============\n\nWe present a methodology for approximate maximum likelihood" -"---\nabstract: 'We present an efficient feature selection method that can find all multiplicative combinations of *continuous* features that are statistically significantly associated with the class variable, while rigorously correcting for multiple testing. The key to overcome the combinatorial explosion in the number of candidates is to derive a lower bound on the $p$-value for each feature combination, which enables us to massively prune combinations that can never be significant and gain more statistical power. While this problem has been addressed for binary features in the past, we here present the first solution for continuous features. In our experiments, our novel approach detects true feature combinations with higher precision and recall than competing methods that require a prior binarization of the data.'\nauthor:\n- |\n Mahito Sugiyama\\\n *National Institute of Informatics; JST PRESTO*\\\n `mahito@nii.ac.jp`\n- |\n Karsten M. 
Borgwardt\\\n *D-BSSE, ETH Z[\u00fc]{}rich*\\\n `karsten.borgwardt@bsse.ethz.ch`\ntitle: '**Finding Significant Combinations of Continuous Features**'\n---\n\n=1\n\nIntroduction {#sec:intro}\n============\n\nA big challenge in high-dimensional data analysis is the search for features that are statistically significantly associated with the class variable, while accounting for the inherent multiple testing problem. This problem is relevant in a broad range of applications including natural language processing, statistical genetics," -"---\nabstract: 'The motion of electrons in or near solids, liquids and gases can be tracked by forcing their ejection with attosecond x-ray pulses, derived from femtosecond lasers. The momentum of these emitted electrons carries the imprint of the electronic state. Aberration corrected transmission electron microscopes have observed individual atoms, and have sufficient energy sensitivity to quantify atom bonding and electronic configurations. Recent developments in ultrafast electron microscopy and diffraction indicate that spatial and temporal information can be collected simultaneously. In the present work, we push the capability of femtosecond transmission electron microscopy (fs-TEM) towards that of the state of the art in ultrafast lasers and electron microscopes. This is anticipated to facilitate unprecedented elucidation of physical, chemical and biological structural dynamics on electronic time and length scales. The numerically studied fs-TEM employs a nanotip source, electrostatic acceleration to 70 keV, magnetic lens beam transport and focusing, a condenser-objective around the sample and a terahertz temporal compressor, including space charge effects during propagation. With electron emission equivalent to a 20 fs laser pulse, we find a spatial resolution below 10 nm and a temporal resolution below 10 fs will be feasible for pulses comprised of on average 20 electrons." 
-"---\nabstract: 'We consider the following multi-component sparse PCA problem: given a set of data points, we seek to extract a small number of sparse components with *disjoint* supports that jointly capture the maximum possible variance. These components can be computed one by one, repeatedly solving the single-component problem and deflating the input data matrix, but as we show this greedy procedure is suboptimal. We present a novel algorithm for sparse PCA that jointly optimizes multiple disjoint components. The extracted features capture variance that lies within a multiplicative factor arbitrarily close to $1$ from the optimal. Our algorithm is combinatorial and computes the desired components by solving multiple instances of the bipartite maximum weight matching problem. Its complexity grows as a low order polynomial in the ambient dimension of the input data matrix, but exponentially in its rank. However, it can be effectively applied on a low-dimensional sketch of the data; this allows us to obtain polynomial-time approximation guarantees via spectral bounds. We evaluate our algorithm on real data-sets and empirically demonstrate that in many cases it outperforms existing, deflation-based approaches.'\nauthor:\n- |\n Megasthenis\u00a0Asteris$^{\\alpha}$, Dimitris Papailiopoulos$^{\\beta}$, Anastasios\u00a0Kyrillidis$^{\\alpha}$ Alexandros\u00a0G.\u00a0Dimakis$^{\\alpha}$\\\n $^{\\alpha}$UT Austin, $^{\\beta}$UC Berkeley\nbibliography:\n- 'spcamulti.bib'" -"---\nabstract: 'Surface tension governed by differential adhesion can drive fluid particle mixtures to segregate into distinct regions, i.e., demix. Does the same phenomenon occur in vertex models of confluent epithelial monolayers? Vertex models are different from particle models in that the interactions between the cells are shape-based, as opposed to metric-based. We investigate whether a disparity in cell shape or size alone is sufficient to drive demixing in bidisperse vertex model fluid mixtures. 
Surprisingly, we observe that both types of bidisperse systems robustly mix on large lengthscales. On the other hand, shape disparity generates slight demixing over a few cell diameters, i.e. micro-demixing. This result can be understood by examining the differential energy barriers for neighbor exchanges (T1 transitions). The robustness of mixing at large scales suggests that despite some differences in cell shape and size, progenitor cells can readily mix throughout a developing tissue until acquiring means of recognizing cells of different types.'\nauthor:\n- 'Preeti Sahu*$^{1,*}$*, Daniel M. Sussman*$^1$*, M. Cristina Marchetti*$^2$*, M. Lisa Manning*$^1$*, J. M. Schwarz*$^{1,*}$*'\nbibliography:\n- '2dmixtures.bib'\ntitle: 'Large-scale mixing and small-scale demixing in a confluent model for biological tissues'\n---\n\nLiquid-liquid phase separation, i.e., demixing, drives patterning. In materials science, demixing between" -"---\nabstract: 'A standard approach to evaluating language models analyzes how models assign probabilities to valid versus invalid syntactic constructions (i.e.\u00a0is a grammatical sentence more probable than an ungrammatical sentence). Our work uses ambiguous relative clause attachment to extend such evaluations to cases of multiple simultaneous valid interpretations, where stark grammaticality differences are absent. We compare model performance in English and Spanish to show that non-linguistic biases in RNN LMs advantageously overlap with syntactic structure in English but not Spanish. Thus, English models may appear to acquire human-like syntactic preferences, while models trained on Spanish fail to acquire comparable human-like preferences. 
We conclude by relating these results to broader concerns about the relationship between comprehension (i.e.\u00a0typical language model use cases) and production (which generates the training data for language models), suggesting that necessary linguistic biases are not present in the training signal at all.'\nauthor:\n- Forrest Davis\n- |\n Marten van Schijndel\\\n Department of Linguistics\\\n Cornell University\\\n `{fd252|mv443}@cornell.edu`\nbibliography:\n- 'acl2020.bib'\ntitle: 'Recurrent Neural Network Language Models Always Learn English-Like Relative Clause Attachment'\n---\n\nIntroduction\n============\n\nLanguage modeling is widely used as pretraining for many tasks involving language processing [@petersetal18; @radfordetal18; @devlinetal19]. Since such pretraining affects so" -"---\nabstract: 'In this paper we study the effects of white-noised potentials on nonlinear quantum tunneling. We use a split-step scheme to numerically solve the nonlinear Schr\u00f6dinger equation (NLSE) with a tunneling potential. We consider three different types of potentials, namely; the single rectangular barrier, double rectangular barrier and triangular barrier. For all these three cases we show that white-noise given to potentials do not trigger modulation instability for tunneling of the sech type soliton solutions of the NLSE. However white-noised potentials trigger modulation instability for tunneling of the sinusoidal wavefunctions, thus such a wavefield turns into a chaotic one with many apparent peaks. We argue that peaks of such a field may be in the form of rational rogue wave solutions of the NLSE. Our results can be used to examine the effects of noise on quantum tunneling. 
Since a rogue wavefunction means a higher probability that the tunneling particle is at a given (x,t) coordinate, our results may also be used for developing quantum science and technology, with many possible applications including but not limited to increasing the resolution and efficiency of scanning tunneling microscopes, enhancing proton tunneling for DNA mutation and enhancing superconducting properties" -"---\nabstract: 'We present 1.3\u00a0millimeter observations of the debris disk surrounding the HR\u00a08799 multi-planet system from the Submillimeter Array to complement archival ALMA observations that spatially filtered away the bulk of the emission. The image morphology at $3\\farcs8$ (150\u00a0AU) resolution indicates an optically thin circumstellar belt, which we associate with a population of dust-producing planetesimals within the debris disk. The interferometric visibilities are fit well by an axisymmetric radial power-law model characterized by a broad width, $\\Delta R/R\\gtrsim 1$. The belt inclination and orientation parameters are consistent with the planet orbital parameters within the mutual uncertainties. The models constrain the radial location of the inner edge of the belt to $R_\\text{in}= 104_{-12}^{+8}$\u00a0AU. In a simple scenario where the chaotic zone of the outermost planet\u00a0b truncates the planetesimal distribution, this inner edge location translates into a constraint on the planet\u00a0b mass of $M_\\text{pl} = 5.8_{-3.1}^{+7.9}$\u00a0M$$ Jup}$. This mass estimate is consistent with infrared observations of the planet luminosity and standard hot-start evolutionary models, with the uncertainties allowing for a range of initial conditions. 
We also present new 9\u00a0millimeter observations of the debris disk from the Very Large Array and determine a millimeter spectral" -"---\ntitle: 'Precise Measurements of Branching Fractions for $\\dsp$ Meson Decays to Two Pseudoscalar Mesons'\n---\n\nINTRODUCTION {#sec:intro}\n============\n\nAmong the hadronic decays of the strange-charmed meson $D_s^+$, the theoretical treatment based on QCD-inspired models of its decays into two pseudoscalar mesons ($\\dstopp$) is the cleanest\u00a0[@Hai-Yang; @Cheng2010; @Fu-Sheng; @Yu2011]. Precision measurements of these decay rates can provide crucial calibrations to different theoretical models\u00a0[@Cheng:2019ggx; @Hai-Yang; @Cheng2010; @Fu-Sheng; @Yu2011; @Hsiang-nan; @Li2012; @Di; @Wang2017]. For each decay branching fraction (BF) listed in Table\u00a0\\[tab:results\\_BFs\\_theo\\], the precision of current measurements listed by the Particle Data Group (PDG)\u00a0[@pdg2018] is still not good enough to test theoretical models. Hence, more precise and independent measurements are desired to further improve our understanding of QCD dynamics in charm physics.\n\nIn 2019, LHCb discovered $\\emph{CP}$ violation in $D^0 \\to \\pi^+ \\pi^-$ and $D^0\\to K^+K^-$ decays with a significance of 5.3$\\sigma$\u00a0[@Aaij:2019kcg], providing stringent constraints on theoretical approaches to $\\emph{CP}$ violation in the charm sector\u00a0[@Hai-Yang; @Cheng2010; @Hsiang-nan; @Li2012]. For the strange-charmed meson $D_s^+$, there are theoretical predictions for the $\\emph{CP}$ asymmetries of the singly Cabibbo-suppressed (SCS) decay modes, which rely on the potential effect of SU(3) symmetry breaking\u00a0[@Cheng:2019ggx; @Buccella:2019kpn]. 
However, the current world average results," -"---\nbibliography:\n- 'eddy\\_clean.bib'\ntitle: 'Quantifying the eddy-jet feedback strength of the annular mode in an idealized GCM and reanalysis data'\n---\n\nIntroduction\n============\n\nThe annular mode is a dominant mode of variability of the extratropical circulation in both hemispheres on intraseasonal to interannual timescales [@Kidson1988; @Thompson1998; @Gong1999; @Thompson2000]. The annular mode corresponds to the leading empirical orthogonal function (EOF) of zonal mean zonal wind, which features an equivalent barotropic dipolar structure and represents latitudinal shifts of the eddy-driven midlatitude jet [@Nigam1990; @Hartmann1998; @Thompson2014; @Thompson2015]. The zonal index, the time series associated with the annular mode, is essentially the same concept as that discussed in the pioneering studies of the variability of the general circulation [@Rossby1939; @Namias1950; @Wallace1985]. The annular mode in the Northern Hemisphere is often considered in recent studies as the hemispheric manifestation of the North Atlantic Oscillation [e.g., @Wallace2000; @Vallis2004]. The annular mode is characterized by temporal persistence [@Baldwin2003; @Gerber2008a; @Gerber2008b], for which it has been suggested that a positive feedback between anomalous zonal flow and eddy fluxes is responsible [e.g., @Feldstein1998; @Robinson2000; @Gerber2006; @Lorenz2001 hereafter, LH01]. For example, @Robinson2000 suggested that at the latitudes of a positive anomaly of barotropic zonal wind, while surface drag tends" -"---\nabstract: 'We compute, for massive particles, the explicit Wigner rotations of one-particle states for arbitrary Lorentz transformations; and the explicit Hermitian generators of the infinite-dimensional unitary representation. For a pair of spin 1/2 particles, Einstein-Podolsky-Rosen-Bell entangled states and their behaviour under the Lorentz group are analysed in the context of quantum field theory. 
Group theoretical considerations suggest a convenient definition of the Bell states which is slightly different from the conventional assignment. The behaviour of Bell states under arbitrary Lorentz transformations can then be described succinctly. Reduced density matrices applicable to systems of identical particles are defined through Yang\u2019s prescription. The von Neumann entropy of each of the reduced density matrices is Lorentz invariant; and its relevance as a measure of entanglement is discussed, and illustrated with an explicit example. A regularization of the entropy in terms of generalized zeta functions is also suggested.'\nauthor:\n- Chopin Soo\n- 'Cyrus C. Y. Lin'\ntitle: |\n Wigner Rotations, Bell States, and Lorentz Invariance\\\n of Entanglement and von Neumann Entropy\n---\n\nIntroduction and Overview\n=========================\n\nIt can be argued that, aside from theories with an infinite number of particle types such as string theory, quantum [*field*]{} theory is the only way to" -"---\nabstract: 'We study the problem of zero-order optimization of a strongly convex function. The goal is to find the minimizer of the function by a sequential exploration of its values, under measurement noise. We study the impact of higher order smoothness properties of the function on the optimization error and on the cumulative regret. To solve this problem we consider a randomized approximation of the projected gradient descent algorithm. The gradient is estimated by a randomized procedure involving two function evaluations and a smoothing kernel. We derive upper bounds for this algorithm both in the constrained and unconstrained settings and prove minimax lower bounds for any sequential search method. Our results imply that the zero-order algorithm is nearly optimal in terms of sample complexity and the problem parameters. Based on this algorithm, we also propose an estimator of the minimum value of the function achieving almost sharp oracle behavior. 
We compare our results with the state-of-the-art, highlighting a number of key improvements.'\nauthor:\n- Arya Akhavan\n- Massimiliano Pontil\n- 'Alexandre B. Tsybakov'\ntitle:" -"---\nabstract: 'This technical report gives an overview of our work on control algorithms dealing with redundant robot systems for achieving human-like motion characteristics. Previously, we developed a novel control law to exhibit human-motion characteristics in redundant robot arm systems as well as arm-trunk systems for reaching tasks [@01], [@02]. This newly developed method nullifies the need for the computation of the pseudo-inverse of the Jacobian, while the formulation and optimization of any artificial performance index is also unnecessary. The time-varying properties of the muscle stiffness and damping as well as the low-pass filter characteristics of human muscles have been modeled by the proposed control law to generate human-motion characteristics for reaching motion, such as a quasi-straight line trajectory of the end-effector and a symmetric bell-shaped velocity profile. This report focuses on the experiments performed using a 7-DOF redundant robot-arm system, which proved the effectiveness of this algorithm in imitating human-like motion characteristics. In addition, we extended this algorithm to a 19-DOF Hand-Arm System for a reach-to-grasp task. 
Simulations using the 19-DOF Hand-Arm System show the effectiveness of the proposed scheme for effective human-like hand-arm coordination in reach-to-grasp tasks for pinch and envelope grasps on objects of different shapes such as a box, a" -"---\nabstract: |\n We consider the efficient numerical solution of the three-dimensional wave equation with Neumann boundary conditions via time-domain boundary integral equations. A space-time Galerkin method with $C^\\infty$-smooth, compactly supported basis functions in time and piecewise polynomial basis functions in space is employed. We discuss the structure of the system matrix and its efficient parallel assembly. Different preconditioning strategies for the solution of the arising systems with block Hessenberg matrices are proposed and investigated numerically. Furthermore, a C++ implementation parallelized by OpenMP and MPI in shared and distributed memory, respectively, is presented. The code is part of the boundary element library BEM4I. Results of numerical experiments including convergence and scalability tests up to a thousand cores on a cluster are provided. The presented implementation shows good parallel scalability of the system matrix assembly. Moreover, the proposed algebraic preconditioner in combination with the FGMRES solver leads to a significant reduction of the computational time.\n\n [**AMS subject classifications.** ]{}35L05, 65N38, 65R20, 65F08\nauthor:\n- 'A. Veit[^1]'\n- 'M. Merta[^2]'\n- 'J. Zapletal[^3]'\n- 'D. Luk\u00e1\u0161[^4]'\nbibliography:\n- 'merta\\_bib.bib'\ntitle: 'Efficient Solution of Time-Domain Boundary Integral Equations Arising in Sound-Hard Scattering'\n---\n\nIntroduction\n============\n\nIntegral Formulation of the Wave Equation\n=========================================\n\nNumerical" -"---\nabstract: 'We survey our work on choreographies and behavioural contracts in multiparty interactions. 
In particular, theories of behavioural contracts are presented which enable reasoning about correct service composition (contract compliance) and service substitutability (contract refinement preorder) under different assumptions concerning service communication: synchronous address or name based communication with patient non-preemptable or impatient invocations, or asynchronous communication. Correspondingly, relations between behavioural contracts and choreographic descriptions are considered, where a contract for each communicating party is, e.g., derived by projection. The considered relations are induced as the maximal preorders which preserve contract compliance and global traces: we show maximality to hold (permitting services to be discovered/substituted independently for each party) when contract refinement preorders with all the above asymmetric communication means are considered and, instead, not to hold if the standard symmetric CCS/pi-calculus communication is considered (or when directly relating choreographies to behavioural contracts via a preorder, no matter the communication means). The obtained maximal preorders are then characterized in terms of a new form of testing, called compliance testing, where not only tests must succeed but also the system under test (thus relating to controllability theory), and compared with classical preorders such as may/must testing, trace inclusion, etc. Finally," -"---\nabstract: 'A major stumbling block to progress in understanding basic human interactions, such as getting out of bed or opening a refrigerator, is the lack of good training data. Most past efforts have gathered this data explicitly: starting with a laundry list of action labels, and then querying search engines for videos tagged with each label. In this work, we do the reverse and search implicitly: we start with a large collection of interaction-rich video data and then annotate and analyze it. 
We use Internet Lifestyle Vlogs as the source of surprisingly large and diverse interaction data. We show that by collecting the data first, we are able to achieve greater scale and far greater diversity in terms of actions and actors. Additionally, our data exposes biases built into common explicitly gathered data. We make sense of our data by analyzing the central component of interaction \u2013 hands. We benchmark two tasks: identifying semantic object contact at the video level and non-semantic contact state at the frame level. We additionally demonstrate future prediction of hands.'\nauthor:\n- |\n David F. Fouhey, Wei-cheng Kuo, Alexei A. Efros, Jitendra Malik\\\n EECS Department, UC Berkeley\nbibliography:\n- 'local.bib'\ntitle: 'From [*Lifestyle Vlogs*]{} to" -"---\nabstract: 'In this work we study the encoding of smooth, differentiable multivariate functions distributions in quantum registers, using quantum computers or tensor-network representations. We show that a large family of distributions can be encoded as low-entanglement states of the quantum register. These states can be efficiently created in a quantum computer, but they are also efficiently stored, manipulated and probed using Matrix-Product States techniques. Inspired by this idea, we present eight quantum-inspired numerical analysis algorithms, that include Fourier sampling, interpolation, differentiation and integration of partial derivative equations. These algorithms combine classical ideas\u2014finite-differences, spectral methods\u2014with the efficient encoding of quantum registers, and well known algorithms, such as the Quantum Fourier Transform. *When these heuristic methods work*, they provide an exponential speed-up over other classical algorithms, such as Monte Carlo integration, finite-difference and fast Fourier transforms (FFT). 
But even when they don\u2019t, some of these algorithms can be translated back to a quantum computer to implement a similar task.'\nauthor:\n- Juan Jos\u00e9 Garc\u00eda Ripoll\ntitle: 'Quantum-inspired algorithms for multivariate analysis: from interpolation to partial differential equations'\n---\n\nIntroduction {#sec:introduction}\n============\n\nQuantum computers use the exponential capacity of a Hilbert space to process information. A quantum computer with $m$ qubits can" -"The problem of quantum dynamics of a two-level system coupled to an environment (boson or fermion bath) is at the core of mesoscopic physics [@Leggett87]. We show that the new field of \u201cmesoscopic magnetism\u201d, which studies the tunneling of large magnetic moments in the presence of phonons and spins, is not limited to molecular complexes and nanoparticles, but it can be extended to other systems such as rare-earth ions. After the first studies on large spin molecules Mn$_{12}$-ac [@Novak95BBjmmm95; @LucFriedman96] and Fe$_{8}$ [@Fe8], the role of the spin bath on the tunnel mechanism was shown [@ProkStampGarg; @ProkStamp; @Werns00; @jmmm200; @Igor]. In particular, quasistatic fields due to dipolar interactions between molecules lead to a distribution of internal fields, and field fluctuations, essentially of nuclear spins, give homogeneous level broadening allowing the restoration of tunneling in a finite energy window, at low temperature; this broadening being much larger than the phonon one, it is more relevant to induce tunneling. This mechanism is efficient unless all nuclear spins of the molecule are frozen, which occurs only below the mK scale. In low spin molecules, large tunneling gaps favor spin\u2013phonon transitions. Although the hyperfine induced level broadening is the same as in large spin" -"---\nabstract: '\u00a0is an open source implementation of Benson\u2019s algorithm and its dual variant. 
Both algorithms compute primal and dual solutions of vector linear programs (VLP), which include the subclass of multiple objective linear programs (MOLP). The recent version of \u00a0can treat arbitrary vector linear programs whose upper image does not contain lines. This article surveys the theoretical background of the implementation. In particular, the role of VLP duality for the implementation is pointed out. Some numerical examples are provided. In contrast to the existing literature we consider a less restrictive class of vector linear programs.'\naddress:\n- |\n Friedrich Schiller University Jena\\\n Department of Mathematics\\\n 07737 Jena\\\n Germany\n- |\n Martin Luther University Halle\u2013Wittenberg\\\n Department of Mathematics\\\n 06099 Halle (Saale)\\\n Germany\nauthor:\n- Andreas L\u00f6hne\n- Benjamin Wei\u00dfing\nbibliography:\n- 'database.bib'\ntitle: 'The vector linear program solver [*Bensolve*]{} \u2013 notes on theoretical background'\n---\n\nvector linear programming ,linear vector optimization , multiple objective optimization 90C29 ,90C05 ,52B55 ,15A39\n\nIntroduction\n============\n\nSolution concepts\n=================\n\nDual problem and dual solutions\n===============================\n\nA few remarks on the algorithms\n===============================\n\nNumerical results\n=================\n\nBibliography {#bibliography .unnumbered}\n============" -"---\nabstract: 'We analyze backward step control globalization for finding zeros of G\u00e2teaux-differentiable functions that map from a Banach space to a Hilbert space. The results include global convergence to a distinctive solution characterized by propagating the initial guess by a generalized Newton flow with guaranteed bounds on the discrete nonlinear residual norm decrease and an (also numerically) easily controllable asymptotic linear residual convergence rate. 
The convergence theory can be exploited to construct efficient numerical methods, which we demonstrate for the case of a Krylov\u2013Newton method and an approximation-by-discretization multilevel framework. Both approaches optimize the asymptotic linear residual convergence rate, either over the Krylov subspace or through adaptive discretization, which in turn yields practical and efficient stopping criteria and refinement strategies that balance the nonlinear residuals with the relative residuals of the linear systems. We apply these methods to the class of nonlinear elliptic boundary value problems and present numerical results for the Carrier equation and the minimum surface equation.'\nauthor:\n- Andreas Potschka\ntitle: '[Backward step control for Hilbert space problems]{}'\n---\n\nIntroduction {#sec:introduction}\n============\n\nLet $U$ be a Banach space with norm ${\\left\\lVert.\\right\\rVert}_U$ and $V$ be a Hilbert space (we discuss generalizations to Banach spaces in section\u00a0\\[sec:Banach\\])" -"---\nabstract: 'Mobile edge computing (MEC), which affords service in the vicinity of mobile devices (MDs), has become a key technology for future networks. Offloading big data to the MEC server for preprocessing is an attractive choice for MDs. In this paper, we investigate data offloading from MDs to MEC servers. A coalitional game based pricing scheme is proposed. We apply a coalitional game to depict the offloading relationship between MDs and MEC servers, and utilize pricing as the stimulus for the offloading. A scheduled MD chooses one MEC server within the same coalition for offloading, and pays the selected MEC server for the MEC service. We formulate a coalitional game, where MDs and MEC servers are players and their utilities are respectively defined. Next, we analyze the formulated game. Specifically, the core is studied.
Finally, the utility performance of the proposed scheme under the 2-MD and 2-MEC-server scenario is demonstrated.'\nauthor:\n- 'Tian Zhang, Wei Chen,\u00a0 and Feng Yang [^1][^2]'\ntitle: 'Data offloading in mobile edge computing: A coalitional game based pricing approach'\n---\n\nMobile edge computing, offloading, coalitional game, pricing\n\nIntroduction\n============\n\nMobile edge computing (MEC) enabling low-latency, high-bandwidth, and agile mobile services has attracted much attention in both academia" -"---\nabstract: 'This paper makes one of the first efforts toward automatically generating *complex* questions from knowledge graphs. Particularly, we study how to leverage existing simple question datasets for this task, under two separate scenarios: using either sub-questions of the target complex questions, or distantly related pseudo sub-questions when the former are unavailable. First, a competitive base model named is designed to map complex query graphs to natural language questions. Afterwards, we propose two extension models, namely and , respectively for the above two scenarios. The former encodes and copies from a sub-question, while the latter further scores and aggregates multiple pseudo sub-questions. Experimental results show that the extension models significantly outperform not only base , but also its augmented variant using simple questions as additional training examples.
This demonstrates the importance of *instance-level* connections between simple and corresponding complex questions, which may be underexploited by straightforward data augmentation of that builds *model-level* connections through learned parameters.'\nauthor:\n- |\n Jie Zhao, Xiang Deng, Huan Sun,\\\n The Ohio State University\\\n {zhao.1359, deng.595, sun.397}@osu.edu\nbibliography:\n- 'bibliography.bib'\ntitle: 'Easy-to-Hard: Leveraging Simple Questions for Complex Question Generation'\n---" -"---\nabstract: 'Automated decision making systems are increasingly being used in real-world applications. In these systems for the most part, the decision rules are derived by minimizing the training error on the available historical data. Therefore, if there is a bias related to a sensitive attribute such as gender, race, religion, etc. in the data, say, due to cultural/historical discriminatory practices against a certain demographic, the system could continue discrimination in decisions by including the said bias in its decision rule. We present an information theoretic framework for designing fair predictors from data, which aim to prevent discrimination against a specified sensitive attribute in a supervised learning setting. We use equalized odds as the criterion for discrimination, which demands that the prediction should be independent of the protected attribute conditioned on the actual label. To ensure fairness and generalization simultaneously, we compress the data to an auxiliary variable, which is used for the prediction task. This auxiliary variable is chosen such that it is decontaminated from the discriminatory attribute in the sense of equalized odds. 
The final predictor is obtained by applying a Bayesian decision rule to the auxiliary variable.'\nauthor:\n- |\n AmirEmad Ghassami$^*$, Sajad Khodadadian$^*$, Negar Kiyavash$^{*\\dagger}$\\\n Departments" -"---\nabstract: 'Planetary radar observations offer the potential for probing the properties of solid bodies throughout the inner solar system and at least as far as the orbit of Saturn. In addition to the direct scientific value, precise orbital determinations can be obtained from planetary radar observations, which are in turn valuable for mission planning or spacecraft navigation and planetary defense. The next-generation Very Large Array would not have to be equipped with a transmitter to be an important asset in the world\u2019s planetary radar infrastructure. Bistatic radar, in which one antenna transmits (e.g., Arecibo or Goldstone) and another receives, is used commonly today, with the Green Bank Telescope (GBT) serving as a receiver. The improved sensitivity of the ngVLA relative to the GBT would improve the signal-to-noise ratios on many targets and increase the accessible volume specifically for asteroids. Goldstone-ngVLA bistatic observations would have the potential of rivaling the sensitivity of Arecibo, but with much wider sky access.'\nauthor:\n- 'M.\u00a0Brozovi[\u0107]{},$^1$ B.\u00a0J.\u00a0Butler,$^2$ Jean-Luc\u00a0Margot,$^3$ Shantanu\u00a0P.\u00a0Naidu,$^4$ and T.\u00a0Joseph\u00a0W.\u00a0Lazio$^5$'\ntitle: Planetary Bistatic Radar\n---\n\n[Jet Propulsion Laboratory, California Institute of Technology]{} [Pasadena]{}[CA]{}[91109]{}[USA]{} [National Radio Astronomy Observatory]{} [Socorro]{}[NM]{}[USA]{} [University of California, Los" -"---\nabstract: 'We present a new algorithm for the growth of non-clustered planar graphs by aggregation of cells with a given size distribution and a connectivity constraint of $k = 3$ per node.
The emergent graph structures are controlled by two parameters\u2014chemical potential of the cell aggregation and the width of the cell size distribution. We compute several statistical properties of these graphs\u2014fractal dimension of the perimeter, distribution of shortest paths between pairs of nodes and topological betweenness of nodes and links. We show how these topological properties depend on the control parameters of the aggregation process and discuss their relevance for the conduction of current in self-assembled nanopatterns.'\nauthor:\n- Milovan \u0160uvakov and Bosiljka Tadi\u0107\ntitle: 'Topology of Cell-Aggregated Planar Graphs'\n---\n\nIntroduction\n============\n\nIn recent years, increased interest in various network realizations [@SD_book; @nets_review] has revealed that several new types of graphs termed [*structured graphs*]{} are more appropriate mathematical objects to describe complex network\u2019s geometry than traditional [*random graphs*]{} [@BB_book]. The variety of structures was found to emerge through evolution processes in which nodes and links are added sequentially according to specified rules; in particular, preferential attachment rules lead to strongly inhomogeneous [*scale-free graphs*]{} [@SD_book]. In contrast to the evolving networks," -"---\nabstract: 'Multiple bases are presented for the conclusion that potentials are fundamental in electrodynamics, with electric and magnetic fields as quantities auxiliary to the scalar and vector potentials \u2013 opposite to the conventional ordering. One foundation for the concept of basic potentials and auxiliary fields consists of examples where two sets of gauge-related fields are such that one is physical and the other is erroneous, with the information for the proper choice supplied by the potentials. A major consequence is that a change of gauge is not a unitary transformation in quantum mechanics; a principle heretofore unchallenged.
The primacy of potentials over fields leads to the concept of a hierarchy of physical quantities, where potentials and energies are primary, while fields and forces are secondary. Secondary quantities provide less information than do primary quantities. Some criteria by which strong laser fields are judged are based on secondary quantities, making it possible to arrive at inappropriate conclusions. This is exemplified by several field-related misconceptions as diverse as the behavior of charged particles in very low frequency propagating fields, and the fundamental problem of pair production at very high intensities. In each case, an approach based on potentials gives appropriate results," -"---\nabstract: 'In this paper we propose a pair of low-complexity user selection schemes with zero-forcing precoding for multiuser massive MIMO downlink systems, in which the base station is equipped with a large-scale antenna array. First, we derive approximations of the ergodic sum rates of the systems invoking the conventional random user selection (RUS) and the location-dependent user selection (LUS). Then, the optimal number of simultaneously served user equipments (UEs), $K^*$, is investigated to maximize the sum rate approximations. Upon exploiting $K^*$, we develop two user selection schemes, namely $K^*$-RUS and $K^*$-LUS, where $K^*$ UEs are selected either randomly or based on their locations. Both of the proposed schemes are independent of the instantaneous channel state information of small-scale fading, therefore enjoying the same extremely low computational complexity as that of the conventional RUS scheme. Moreover, both of our proposed schemes achieve significant sum rate improvement over the conventional RUS.
In addition, it is worth noting that, like the conventional RUS, the $K^*$-RUS achieves good fairness among UEs.'\nauthor:\n- 'Haijing Liu, Hui Gao, *Member, IEEE*, Shaoshi Yang, *Member, IEEE*, and Tiejun Lv, *Senior Member, IEEE*[^1]'\ntitle: 'Low-Complexity Downlink User Selection for Massive MIMO Systems'\n---\n\n" -"---\nabstract: |\n Machine scheduling is a fundamental optimization problem in computer science. The task of scheduling a set of jobs on a given number of machines and minimizing the makespan is well studied and, among other results, we know that EPTAS\u2019s for machine scheduling on identical machines exist. Das and Wiese initiated the research on a generalization of makespan minimization that includes so-called bag-constraints. In this variation of machine scheduling the given set of jobs is partitioned into subsets, so-called bags. Given this partition, a schedule is only considered feasible when on any machine there is at most one job from each bag.\n\n Das and Wiese showed that this variant of machine scheduling admits a PTAS. We will improve on this result by giving the first EPTAS for the machine scheduling problem with bag-constraints. We achieve this result by using new insights on this problem and restrictions given by the bag-constraints. We show that, to gain an approximate solution, we can relax the bag-constraints and ignore some of the restrictions. Our EPTAS uses a new instance transformation that will allow us to schedule large and small jobs independently of each other for a majority of bags. We" -"---\nabstract: 'This paper presents detailed results of neutron imaging of argon bubble flows in a rectangular liquid gallium vessel with and without the application of an external horizontal magnetic field. The developed image processing algorithm is presented and its capability to extract physical information from images of low signal-to-noise ratio is demonstrated.
Bubble parameters, velocity components, trajectories and relevant statistics were computed and analysed. A simpler version of the code was applied to the output of computational fluid dynamics simulations that reproduced the experiment. This work serves to further validate neutron radiography as a suitable method for monitoring gas bubble flow in liquid metals, as well as to outline procedures that might help others to extract data from neutron radiography images with a low signal-to-noise ratio resulting from high frame rate acquisitions required to resolve rapid bubble motion.'\nauthor:\n- |\n Mihails Birjukovs\\\n Institute of Numerical Modelling\\\n University of Latvia\\\n Riga, Latvia, Jelgavas 3, 1004\\\n `mihails.birjukovs@lu.lv`\\\n Valters Dzelme\\\n Institute of Numerical Modelling\\\n University of Latvia\\\n Riga, Latvia, Jelgavas 3, 1004\\\n `valters.dzelme@lu.lv`\\\n Andris Jakovics\\\n Institute of Numerical Modelling\\\n University of Latvia\\\n Riga, Latvia, Jelgavas 3, 1004\\\n `andris.jakovics@lu.lv`\\\n Knud Thomsen\\\n Research with Neutrons and Muons\\\n Paul Scherrer Institut\\\n Villigen, Switzerland, Forschungsstrasse" -"---\nabstract: 'A tradeoff between sum rate and fairness for MISO broadcast communication employing dirty paper coding or zero-forcing dirty paper coding at the physical layer is investigated in this paper. The tradeoff is based on a new design objective termed \u201ctri-stage\u201d approach as well as a new $\\ell_1$-based fairness measure that is much more robust than the well-known Jain\u2019s index for comparing fairness levels achieved by various design objectives at a much finer resolution in the high SNR regime. The newly proposed tri-stage design also introduces a new concept of statistical power allocation that randomly allocates powers to users based on an optimal probability distribution derived from the tradeoff between sum rate and fairness.
Simulation results show that the proposed approach can simultaneously achieve a larger sum rate and better fairness than the well-known proportional fairness criterion. A performance upper bound is also given in the paper to show the excellent performance of the proposed approach in the moderate and high SNR regimes, as well as some potential for further improvement in the low SNR regime.'\nauthor:\n- 'Ji-You Huang and Hsiao-feng Francis Lu [^1]'\ntitle: Achieving Large Sum Rate and Good Fairness in MISO Broadcast Communication\n---\n\nBroadcast communication, MISO, dirty" -"---\nabstract: 'Classical Cepheids are useful tracers of the Galactic young stellar population because their distances and ages can be determined from their period-luminosity and period-age relations. In addition, the radial velocities and chemical abundances of the Cepheids can be derived from spectroscopic observations, providing further insights into the structure and evolution of the Galaxy. Here, we report the radial velocities of classical Cepheids near the Galactic Center, three of which were reported in 2011, the other reported for the first time. The velocities of these Cepheids suggest that the stars orbit within the Nuclear Stellar Disk, a group of stars and interstellar matter occupying a region of $\\sim 200$\u00a0pc around the Center, although the three-dimensional velocities cannot be determined until the proper motions are known. According to our simulation, these four Cepheids formed within the Nuclear Stellar Disk like younger stars and stellar clusters therein.'\nauthor:\n- 'Noriyuki Matsunaga, Kei Fukue, Ryo Yamamoto, Naoto Kobayashi, Laura Inno, Katia Genovali, Giuseppe Bono, Junichi Baba, Michiko\u00a0S.
Fujii, Sohei Kondo, Yuji Ikeda, Satoshi Hamano, Shogo Nishiyama, Tetsuya Nagata, Wako Aoki, and Takuji Tsujimoto'\ntitle: Kinematics of classical Cepheids in the Nuclear Stellar Disk\n---\n\nIntroduction\n============\n\nThe Galactic Center comprises" -"---\nabstract: 'Abstract argumentation has emerged as a method for non-monotonic reasoning that has gained tremendous traction in the symbolic artificial intelligence community. In the literature, the different approaches to abstract argumentation that were refined over the years are typically evaluated from a logics perspective; an analysis that is based on models of ideal, *rational* decision-making does not exist. In this paper, we close this gap by analyzing abstract argumentation from the perspective of the *rational man* paradigm in microeconomic theory. To assess under which conditions abstract argumentation-based choice functions can be considered *economically rational*, we define a new argumentation principle that ensures compliance with the rational man\u2019s *reference independence* property, which stipulates that a rational agent\u2019s preferences over two choice options should not be influenced by the absence or presence of additional options. We show that the argumentation semantics as proposed in Dung\u2019s classical paper, as well as all of a range of other semantics we evaluate do not fulfill this newly created principle. Consequently, we investigate how structural properties of argumentation frameworks impact the *reference independence* principle, and propose a restriction to argumentation *expansions* that allows all of the evaluated semantics to fulfill the requirements for economically rational" -"---\nabstract: 'Retrieval-based conversation systems generally tend to highly rank responses that are semantically similar or even identical to the given conversation context. 
While the system\u2019s goal is to find the most appropriate response, rather than the most semantically similar one, this tendency results in low-quality responses. We refer to this challenge as the echoing problem. To mitigate this problem, we utilize a hard negative mining approach at the training stage. The evaluation shows that the resulting model reduces echoing and achieves better results in terms of Average Precision and Recall@N metrics, compared to the models trained without the proposed approach.'\nauthor:\n- Denis Fedorenko\n- Nikita Smetanin\n- Artem Rodichev\nbibliography:\n- 'bibliography.bib'\ntitle: 'Avoiding Echo-Responses in a Retrieval-Based Conversation System'\n---\n\nIntroduction\n============\n\nThe task of a retrieval-based conversation system is to select the most appropriate response from a set of responses given the input context of a conversation. The context is typically an utterance or a sequence of utterances produced by a human or by the system itself. Most of the state-of-the-art approaches to retrieval-based conversation systems are based on deep neural networks (NNs) [@zhou2016multi; @DBLP:journals/corr/WuWZL16]. Under these approaches, the typical response selection pipeline consists of the" -"---\nabstract: |\n Yield stress materials form an interesting class of materials that behave like solids at small stresses, but start to flow once a critical stress is exceeded. It has already been reported both in experimental and simulation work that flow curves of different yield stress materials can be scaled with the distance to jamming or with the confining pressure. However, different scaling exponents are found between experiments and simulations. In this paper we identify sources of this discrepancy. We numerically relate the volume fraction with the confining pressure and discuss the similarities and differences between rotational and oscillatory measurements. 
Whereas simulations are performed in the elastic response regime close to the jamming transition and with very small amplitudes to calculate the scaling exponents, these conditions are hardly possible to achieve experimentally. Measurements are often performed far away from the critical volume fraction and at large amplitudes. We show that these differences are the underlying reason for the different exponents for rescaling flow curves.\\\n \\\n **Keywords** Yield stress materials - Rheological measurements - Herschel-Bulkley model - Universal scaling\nauthor:\n- |\n Riande I. Dekker[^1] [^2], Maureen Dinkgreve, Henri de Cagny [^3], Dion Koeze[^4],\\\n Brian P. Tighe\\\n and Daniel Bonn" -"---\nabstract: 'Spatially averaged inhomogeneous cosmologies in classical general relativity can be written in the form of effective Friedmann equations with sources that include backreaction terms. In this paper we propose to describe these backreaction terms with the help of a homogeneous scalar field evolving in a potential; we call it the \u2018morphon field\u2019. This new field links classical inhomogeneous cosmologies to scalar field cosmologies, allowing to reinterpret, e.g., quintessence scenarios by routing the physical origin of the scalar field source to inhomogeneities in the Universe. We investigate a one\u2013parameter family of scaling solutions to the backreaction problem. Subcases of these solutions (all without an assumed cosmological constant) include scale\u2013dependent models with Friedmannian kinematics that can mimic the presence of a cosmological constant or a time\u2013dependent cosmological term. We explicitly reconstruct the scalar field potential for the scaling solutions, and discuss those cases that provide a solution to the Dark Energy and coincidence problems. 
In this approach, Dark Energy emerges from morphon fields, a mechanism that can be understood through the proposed correspondence: the averaged cosmology is characterized by a weak decay (quintessence) or growth (phantom quintessence) of kinematical fluctuations, fed by \u2018curvature energy\u2019 that is stored in the averaged" -"---\nabstract: 'We consider a -based transmission scheme, where data is embedded into the imaginary part of the nonlinear discrete spectrum. Inspired by probabilistic amplitude shaping, we propose a scheme as a means to increase the data rate of the system. We exploit the fact that for an -based transmission scheme, the pulses in the time domain are of unequal duration by transmitting them with a dynamic symbol interval and find a capacity-achieving distribution. The scheme shapes the information symbols according to the capacity-achieving distribution and transmits them together with the parity symbols at the output of a encoder, suitably modulated, via time-sharing. We furthermore derive an achievable rate for the proposed scheme. We verify our results with simulations of the discrete-time model as well as with simulations.'\nauthor:\n- |\n Andreas\u00a0Buchberger, Alexandre\u00a0Graell\u00a0i\u00a0Amat,\u00a0\\\n Vahid\u00a0Aref,\u00a0 and\u00a0Laurent\u00a0Schmalen,\u00a0 [^1][^2][^3][^4]\ntitle: |\n Probabilistic Eigenvalue Shaping for\\\n Nonlinear Fourier Transform Transmission\n---\n\nDiscrete spectrum, nonlinear Fourier transform (NFT), probabilistic shaping, soliton communication.\n\nIntroduction {#sec:introduction}\n============\n\npropagation in optical fibers is severely impaired by nonlinear effects that should be either compensated or utilized for the design of the communication system. The \u00a0[@yousefi2014nftI-III] provides a method to transform a" -"---\nabstract: 'Numerical solutions to Newton\u2019s equations of motion for chaotic self gravitating systems of more than 2 bodies are often regarded to be irreversible. 
This is due to the exponential growth of errors introduced by the integration scheme and the numerical round-off in the least significant figure. This secular growth of error is sometimes attributed to the increase in entropy of the system even though Newton\u2019s equations of motion are strictly time reversible. We demonstrate that when numerical errors are reduced to below the physical perturbation and its exponential growth during integration, microscopic reversibility is retrieved. Time reversibility itself is not a guarantee for a definitive solution to the chaotic N-body problem. However, time-reversible algorithms may be used to find initial conditions for which perturbed trajectories converge rather than diverge. The ability to calculate such a converging pair of solutions is a striking illustration which shows that it is possible to compute a definitive solution to a highly unstable problem. This works as follows: If you ([i]{}) use a code which is capable of producing a definitive solution (and which will therefore handle converging pairs of solutions correctly), ([ii]{}) use it to study the statistical result of" -"---\nabstract: 'The onset and nature of the earliest geomagnetic field is important for understanding the evolution of the core, atmosphere and life on Earth. A record of the early geodynamo is preserved in ancient silicate crystals containing minute magnetic inclusions. These data indicate the presence of a geodynamo during the Paleoarchean, between 3.4 and 3.45 billion years ago. While the magnetic field sheltered Earth\u2019s atmosphere from erosion at this time, standoff of the solar wind was greatly reduced, and similar to that during modern extreme solar storms. These conditions suggest that intense radiation from the young Sun may have modified the atmosphere of the young Earth by promoting loss of volatiles, including water.
Such effects would have been more pronounced if the field were absent or very weak prior to 3.45 billion years ago, as suggested by some models of lower mantle evolution. The frontier is thus trying to obtain geomagnetic field records that are $\\gg$3.45 billion-years-old, as well as constraining solar wind pressure for these times. In this review we suggest pathways for constraining these parameters and the attendant history of Earth\u2019s deep interior, hydrosphere and atmosphere. In particular, we discuss new estimates for solar wind pressure for" -"---\nabstract: 'Extensive theoretical and experimental investigations on multipartite systems close to an avoided energy-level crossing reveal interesting features such as the extremisation of entanglement. Conventionally, the estimation of entanglement directly from experimental observation involves either one of two approaches: Uncertainty-relation-based estimation that captures the linear correlation between relevant observables, or rigorous but error-prone quantum state reconstruction on tomograms obtained from homodyne measurements. We investigate the behaviour, close to avoided crossings, of entanglement indicators that can be calculated directly from a numerically-generated tomogram. The systems we study are two generic bipartite continuous-variable systems: a Bose-Einstein condensate trapped in a double-well potential, and a multi-level atom interacting with a radiation field. We also consider a multipartite hybrid quantum system of superconducting qubits interacting with microwave photons. We carry out a quantitative comparison of the indicators with a standard measure of entanglement, the subsystem von Neumann entropy (SVNE). It is shown that the indicators that capture the nonlinear correlation between relevant subsystem observables are in excellent agreement with the SVNE.'\naddress: '$^{1}$ Department of Physics, Indian Institute of Technology Madras, Chennai 600036, India.'\nauthor:\n- 'B. 
Sharmila$^{1}$, S. Lakshmibala$^{1}$ and V. Balakrishnan$^{1}$'\nbibliography:\n- 'references.bib'\ntitle: 'Signatures of avoided energy-level crossings in" -"---\nabstract: 'We use the eight-year light-curve database from the MACHO (MAssive Compact Halo Objects) project together with infrared colors and magnitudes from 2MASS (the Two Micron All Sky Survey) to identify a sample of 22,000 long period variables in the Large Magellanic Cloud (referred to hereafter as LMC LPVs). A period-luminosity diagram of these stars reveals six well-defined sequences, in substantial agreement with previous analyses of samples from OGLE (Optical Gravitational Lensing Experiment). In our analysis we identify analogues to galactic LPVs in the LMC LPV sample. We find that carbon-dominated AGB stars populate only two of the sequences, one of which includes the Mira variables. The high-luminosity end of the same two sequences is also the location of the only stars with $J-K_s > 2$, indicating that they are enshrouded in dust. The unknown mechanism that drives the variability of stars in the longest-period sequence produces a different morphology in the period-luminosity diagram as compared to the shortest period sequences, which are thought to be caused by pulsation. In particular, the longest period sequence extends to lower luminosity RGB stars and the luminosity function does not peak among the AGB stars. We point out several features which will" -"---\nabstract: 'A fundamental question lies in almost every application of deep neural networks: what is the optimal neural architecture given a specific dataset? Recently, several Neural Architecture Search (NAS) frameworks have been developed that use reinforcement learning and evolutionary algorithms to search for the solution. However, most of them take a long time to find the optimal architecture due to the huge search space and the lengthy training process needed to evaluate each candidate.
In addition, most of them aim at accuracy only and do not take into consideration the hardware that will be used to implement the architecture. This will potentially lead to excessive latencies beyond specifications, rendering the resulting architectures useless. To address both issues, in this paper we use Field Programmable Gate Arrays (FPGAs) as a vehicle to present a novel hardware-aware NAS framework, namely [*FNAS*]{}, which will provide an optimal neural architecture with latency guaranteed to meet the specification. In addition, with a performance abstraction model to analyze the latency of neural architectures without training, our framework can quickly prune architectures that do not satisfy the specification, leading to higher efficiency. Experimental results on common data sets such as ImageNet show that in the cases" -"---\nabstract: 'In this article we discuss a new Hamiltonian PDE arising from a class of equations appearing in the study of magma, partially molten rock, in the Earth\u2019s interior. Under physically justifiable simplifications, a scalar, nonlinear, degenerate, dispersive wave equation may be derived to describe the evolution of $\\phi$, the fraction of molten rock by volume, in the Earth. These equations have two power nonlinearities which specify the constitutive relations for bulk viscosity and permeability in terms of $\\phi$. Previously, they have been shown to admit solitary wave solutions. For a particular relation between exponents, we observe the equation to be Hamiltonian; it can be viewed as a generalization of the Benjamin-Bona-Mahony equation. We prove that the solitary waves are nonlinearly stable, by showing that they are constrained local minimizers of an appropriate time-invariant Lyapunov functional. A consequence is an extension of the regime of global in time well-posedness for this class of equations to (large) data, which include a neighborhood of a solitary wave.
Finally, we observe that these equations have [*compactons*]{}, solitary traveling waves with compact spatial support at each time.'\nauthor:\n- 'Gideon Simpson, Michael I. Weinstein, Philip Rosenau'\nbibliography:\n- 'hamiltonian\\_article.bib'\ntitle: On a Hamiltonian" -"---\nabstract: 'The mixed morphology class of supernova remnants has centrally peaked X-ray emission along with a shell-like morphology in radio emission. White & Long proposed that these remnants are evolving in a cloudy medium wherein the clouds are evaporated via thermal conduction once overrun by the expanding shock. Their analytical model made detailed predictions regarding temperature, density and emission profiles as well as shock evolution. We present numerical hydrodynamical models in 2D and 3D including thermal conduction, testing the White & Long model and presenting results for the evolution and emission from remnants evolving in a cloudy medium. We find that, while certain general results of the White & Long model hold, such as the way the remnants expand and the flattening of the X-ray surface brightness distribution, in detail there are substantial differences. In particular we find that the X-ray luminosity is dominated by emission from shocked cloud gas early on, leading to a bright peak which then declines and flattens as evaporation becomes more important. In addition, the effects of thermal conduction on the intercloud gas, which are not included in the White & Long model, are important and lead to further flattening of the X-ray" -"---\nabstract: 'The postulate of the existence of a jamming phase diagram (Liu and Nagel, Nature 396, 21 (1998)) provides a theoretical basis for the classification of a wide range of amorphous solids (colloidal, molecular and emulsion glasses, colloidal and polymer gels, foams and granular matter) on the basis of whether these materials are in the jammed or unjammed state. 
Whilst such simple classification is appealing, it fails to capture that the criterion of *rigidity* of such amorphous solids may be defined with respect to a particular deformation orientation or mode (i.e. shear, extrusion, consolidation). We consider this problem via the consolidation of strong colloidal gels, and find that the critical transitions during the consolidation of a strong colloidal gel (as indicated by maxima and minima in the relative normal stress difference) correspond directly to directed, frictional and non-frictional rigidity percolation. These results indicate a hierarchy of directed, jammed states during consolidation of such amorphous solids, and a direct link between particle-scale interactions and macroscopic collective behaviour of these systems driven far from equilibrium.'\nauthor:\n- 'R. Li'\n- 'D. R. Lester'\ntitle: Hierarchical Jamming in Frictional Particle Assemblies\n---\n\nThe deformation and flow of soft matter - including" -"---\nabstract: 'We present a rational approach to the design of half-metallic heterostructures which allows the design of an infinite number of half-metallic heterostructures. The wide range of materials that can be made half-metallic using our approach makes it possible to engineer materials with tunable characteristic properties, for example low intrinsic magnetic damping, small magnetic moment or perpendicular anisotropy. We demonstrate the proposed design scheme for a series of transition metal heterostructures based on the B2 crystal structure.'\nauthor:\n- 'William H. Butler'\n- 'Claudia K.A. Mewes'\n- Chunsheng Liu\n- Tianyi Xu\ntitle: 'Rational Design of Half-Metallic Heterostructures'\n---\n\nA ferromagnet can be viewed as two different materials simultaneously occupying the same space; one material with the majority-spin electronic structure, the other material with the minority-spin electronic structure. 
The most extreme situation is represented by half-metals in which the electronic structure of one of the spin channels is that of a metal while that of the other is an insulator or semiconductor. Because only one of the spin channels of a half-metal is conductive, a half-metal can in principle generate a fully spin-polarized current. Therefore half-metals are ideal for many present and future spintronic applications injecting spin-dependent currents into" -"---\nabstract: 'There has been a long recognition that discrete features (n-gram features) and neural network based features have complementary strengths for language models (LMs). Improved performance can be obtained by model interpolation, which is, however, a sub-optimal two-step integration of discrete and neural features. The trans-dimensional random field (TRF) framework has the potential advantage of being able to flexibly integrate a richer set of features. However, either discrete or neural features are used alone in previous TRF LMs. This paper develops a mixed-feature TRF LM and demonstrates its advantage in integrating discrete and neural features. Various LMs are trained over PTB and Google one-billion-word datasets, and evaluated in N-best list rescoring experiments for speech recognition. Among all single LMs (i.e. without model interpolation), the mixed-feature TRF LMs perform the best, improving over both discrete TRF LMs and neural TRF LMs alone, and also being significantly better than LSTM LMs. 
Compared to interpolating two separately trained models with discrete and neural features respectively, the performance of mixed-feature TRF LMs matches the best interpolated model, while offering a simplified one-step training process and reduced training time.'\naddress: |\n $^1$Speech Processing and Machine Intelligence (SPMI) Lab, Tsinghua University, Beijing, China.\\\n $^2$State Grid Customer" -"---\nabstract: 'We analyze the consistency of electroweak breaking within the simplest \u201cdark matter completions\u201d of the high-scale type-I seesaw mechanism. We derive the full two-loop RGEs of the relevant parameters, including the quartic Higgs self-coupling $\\lambda$ of the Standard Model. For the simplest type-I seesaw with bare \u201cright-handed\u201d neutrino mass terms, we find that with sizeable Yukawa couplings, the Higgs quartic self-coupling $\\lambda$ becomes negative well before reaching the seesaw scale. For \u201clarge\u201d Yukawa couplings the type-I seesaw may be inconsistent even as an effective theory. We further show that simple extensions of the canonical type-I seesaw involving a viable dark matter candidate can indeed fix this problem, rendering the Higgs vacuum stable up to the Planck scale. We examine two such extensions, the type-I seesaw with spontaneous lepton number violation and the recently proposed scoto-seesaw mechanism. Both have better stability properties due to the new scalars required.'\nauthor:\n- Sanjoy Mandal\n- Rahul Srivastava\n- 'Jos\u00e9 W. F. Valle'\nbibliography:\n- 'bibliography.bib'\ntitle: ' Consistency of minimal dark matter completions of type-I seesaw'\n---\n\nIntroduction {#sec:introduction}\n============\n\nThe discovery of a scalar particle with 125 GeV mass plays a central role within particle physics\u00a0[@Aad:2012tfa; @Chatrchyan:2012xdj]. In particular, the" -"Andrew V. Goldberg[^1] and Alexander V. 
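For context, the two-step interpolation baseline that the mixed-feature TRF LM is compared against can be sketched as follows: each N-best hypothesis is rescored with the acoustic score plus a weighted combination of the probabilities from two separately trained LMs. The hypotheses, scores, and interpolation weight below are illustrative only.

```python
import math

# Sketch of N-best list rescoring with an interpolated LM score -- the
# two-step baseline against which mixed-feature TRF LMs are compared.
# Hypotheses, log-probabilities, and the weight `lam` are illustrative.

def rescore(nbest, lam=0.5):
    """nbest: list of (hypothesis, acoustic_logp, lm1_logp, lm2_logp).
    Linearly interpolate the two LM probabilities, add the acoustic
    log-score, and return the best-scoring hypothesis."""
    def total(entry):
        _, acoustic, lm1, lm2 = entry
        interp = math.log(lam * math.exp(lm1) + (1 - lam) * math.exp(lm2))
        return acoustic + interp
    return max(nbest, key=total)[0]

nbest = [
    ("the cat sat", -10.0, -4.0, -5.0),
    ("the cad sat", -9.5, -8.0, -9.0),
]
best = rescore(nbest)
```

The interpolation weight would normally be tuned on held-out data; a one-step mixed-feature model avoids maintaining the two component models entirely.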
Karzanov[^2]\n\n**Maximum skew-symmetric flows and matchings**\n\nDecember 2003\n\n[**Abstract.**]{} The maximum integer skew-symmetric flow problem (MSFP) generalizes both the maximum flow and maximum matching problems. It was introduced by Tutte\u00a0[@tut-67] in terms of self-conjugate flows in antisymmetrical digraphs. He showed that for these objects there are natural analogs of classical theoretical results on usual network flows, such as the flow decomposition, augmenting path, and max-flow min-cut theorems. We give unified and shorter proofs for those theoretical results.\n\nWe then extend to MSFP the shortest augmenting path method of Edmonds and Karp\u00a0[@EK-72] and the blocking flow method of Dinits\u00a0[@din-70], obtaining algorithms with similar time bounds in general case. Moreover, in the cases of unit arc capacities and unit \u201cnode capacities\u201d the blocking skew-symmetric flow algorithm has time bounds similar to those established in\u00a0[@ET-75; @kar-73-2] for Dinits\u2019 algorithm. In particular, this implies an algorithm for finding a maximum matching in a nonbipartite graph in $O(\\sqrt{n}m)$ time, which matches the time bound for the algorithm of Micali and Vazirani\u00a0[@MV-80]. Finally, extending a clique compression technique of Feder and Motwani\u00a0[@FM-91] to particular skew-symmetric graphs, we speed up the implied maximum matching algorithm" -"---\nabstract: 'Balance, gait and postural control are some of the key factors in determining the overall stability of an individual. Several highend and costly solutions exist to perform movement analysis in clinical settings. OpenSim is a tool which uses 39 marker positions, obtained from such highend solutions like VICON or equivalent multicamera setup, for the analysis of inverse kinematics and inverse dynamics. However, an affordable solution for deriving musculoskeletal joint kinematics parameters using a low cost Kinect device is of immense importance. 
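The classical shortest augmenting path method of Edmonds and Karp, which the paper extends to skew-symmetric flows, can be sketched for ordinary flows as follows (plain max-flow only; the skew-symmetric machinery is not reproduced here):

```python
from collections import deque

# Edmonds-Karp: repeatedly augment along a *shortest* path in the
# residual graph, found by BFS, until no augmenting path remains.

def max_flow(capacity, s, t):
    """capacity: dict-of-dicts of residual capacities (mutated in place).
    Returns the value of a maximum s-t flow."""
    flow_value = 0
    while True:
        # BFS for the shortest augmenting path in the residual graph.
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, cap in capacity.get(u, {}).items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow_value
        # Recover the path and its bottleneck capacity.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(capacity[u][v] for u, v in path)
        # Augment: decrease forward residuals, increase reverse ones.
        for u, v in path:
            capacity[u][v] -= bottleneck
            reverse = capacity.setdefault(v, {})
            reverse[u] = reverse.get(u, 0) + bottleneck
        flow_value += bottleneck

graph = {
    "s": {"a": 3, "b": 2},
    "a": {"t": 2, "b": 1},
    "b": {"t": 3},
}
value = max_flow(graph, "s", "t")
```

Using BFS (shortest paths) rather than arbitrary augmenting paths is what yields the polynomial $O(VE^2)$ bound; the paper's contribution is showing that the same strategy carries over to skew-symmetric flows with similar bounds.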
In this paper, we initially study the feasibility of using the OpenSim tool on 20 joint locations of the human body, obtained from Kinect data. Next, we analyze the various joint forces and torques experienced during a Single Limb Stance (SLS) exercise performed by healthy subjects in normal, overweight and obese categories. Results indicate that a subset of parameters related to forces and torques in the hip, lumbar and pelvis are the most important ones, contributing significantly to maintaining static balance in SLS. Statistical analysis demonstrates that the pelvis list and tilt moments are the key biomarkers for maintaining stability in SLS, opening up the possibility of personalizing therapy in tele-rehabilitation.'\nauthor:\n- |\n Rajat Kumar" -"---\nabstract: 'Weyl functions conveniently describe the evolution of wave coherences in periodic or quadratic potentials. In this work we use Weyl functions to study the \u201cTalbot-Lau effect\u201d in a time-domain matter-wave interferometer. A \u201cdisplacement diagram\u201d is introduced to analyze and calculate the matter-wave interference for an atomic cloud in a quadratic potential that interacts with a sequence of short optical standing wave pulses producing an atomic grating echo. Unlike previous treatments, this new approach allows the atomic ensemble to have an arbitrary initial phase-space distribution, and the standing wave grating vectors to span three dimensions. Several examples are discussed to illustrate the convenience of the diagrammatic technique including the following: a two-dimensional Talbot-Lau effect, the shift in the echo time and the recoil phase for the interferometer perturbed by a quadratic potential; and the realization of a time-domain \u201cLau effect\u201d using a pulsed harmonic potential. The diagrammatic technique is applicable to diffraction gratings with arbitrary grating transmission functions. 
We conclude the paper with a general discussion on the Weyl function representations of matter-wave coherence, and relate the conservation of matter-wave coherence with the conservation of purity $\\varsigma={\\rm Tr}(\\hat \\rho^2)$ that distinguishes decoherence effects from dephasing effects.'\nauthor:\n- 'Saijun" -"---\nabstract: 'We describe a novel approach for computing wave correlation functions inside finite spatial domains driven by complex and statistical sources. By exploiting semiclassical approximations, we provide explicit algorithms to calculate the local mean of these correlation functions in terms of the underlying classical dynamics. By defining appropriate ensemble averages, we show that fluctuations about the mean can be characterised in terms of classical correlations. We give in particular an explicit expression relating fluctuations of diagonal contributions to those of the full wave correlation function. The methods have a wide range of applications both in quantum mechanics and for classical wave problems such as in vibro-acoustics and electromagnetism. We apply the methods here to simple quantum systems, so-called quantum maps, which model the behaviour of generic problems on Poincar\u00e9 sections. Although low-dimensional, these models exhibit a chaotic classical limit and share common characteristics with wave propagation in complex structures.'\naddress: 'School of Mathematical Sciences, University of Nottingham, UK'\nauthor:\n- 'Stephen C Creagh, Gabriele Gradoni, Timo Hartmann and Gregor Tanner'\ntitle: Propagating wave correlations in complex systems \n---\n\nIntroduction\n============\n\nThere is a long tradition of describing statistical properties of wave fields and spectra in terms of semiclassical or" -"---\nabstract: 'Distributed acoustic sensing technology is increasingly being used to support production and well management within the oil and gas sector, for example to improve flow monitoring and production profiling. 
This sensing technology is capable of recording substantial data volumes at multiple depths within an oil well, giving unprecedented insights into production behaviour. However, the technology is also prone to recording periods of anomalous behaviour, where the same physical features are concurrently observed at multiple depths. Such features are called \u2018stripes\u2019 and are undesirable, detrimentally affecting well performance modelling. This paper focuses on the important challenge of developing a principled approach to identifying such anomalous periods within distributed acoustic signals. We extend recent work on classifying locally stationary wavelet time series to an online setting and, in so doing, introduce a computationally-efficient online procedure capable of accurately identifying anomalous regions within multivariate time series.'\nauthor:\n- 'Rebecca E. Wilson'\n- 'Idris A. Eckley'\n- 'Matthew A. Nunes'\n- Timothy Park\nbibliography:\n- 'draft.bib'\ndate: 'Received: date / Accepted: date'\nnocite: '[@*]'\ntitle: 'Dynamic detection of anomalous regions within distributed acoustic sensing data streams using locally stationary wavelet time series [^1] '\n---\n" -"---\nabstract: |\n In this contribution we propose and rigorously analyze new variants of adaptive Trust-Region methods for parameter optimization with PDE constraints and bilateral parameter constraints. The approach employs successively enriched Reduced Basis surrogate models that are constructed during the outer optimization loop and used as the model function for the Trust-Region method. Each Trust-Region sub-problem is solved with the projected BFGS method. Moreover, we propose a non-conforming dual (NCD) approach to improve the standard RB approximation of the optimality system. Rigorous improved a posteriori error bounds are derived and used to prove convergence of the resulting NCD-corrected adaptive Trust-Region Reduced Basis algorithm. 
Numerical experiments demonstrate that this approach significantly reduces the computational demand for large-scale or multi-scale PDE-constrained optimization problems.\n\n **Keywords**: PDE constrained optimization, Trust-Region method, error analysis, Reduced Basis method, model order reduction, parametrized systems, large scale problems\n\n **AMS Mathematics Subject Classification**: 49M20, 49K20, 35J20, 65N30, 90C06\nauthor:\n- |\n Tim Keil$^\\dag$, Luca Mechelli$^\\ddag$, Mario Ohlberger$^\\dag$,\\\n Felix Schindler[^1], Stefan Volkwein[^2]\nbibliography:\n- 'bibliography.bib'\ntitle: 'A non-conforming dual approach for adaptive Trust-Region Reduced Basis approximation of PDE-constrained optimization[^3] '\n---\n\nIntroduction {#sec:introduction .unnumbered}\n============\n\nWe are concerned with the development and rigorous analysis of novel" -"---\nauthor:\n- 'Vladislav A. Yastrebov'\ntitle: |\n The Elastic Contact of Rough Spheres\\\n Investigated Using a Deterministic Multi-Asperity Model\n---\n\nIn this paper we use a deterministic multi-asperity model to investigate the elastic contact of rough spheres. Synthetic rough surfaces with controllable spectra were used to identify individual asperities, their locations and curvatures. The deterministic analysis makes it possible to capture both particular deformation modes of individual rough surfaces and also statistical deformation regimes, which involve averaging over a large number of roughness realizations. Two regimes of contact area growth were identified: the Hertzian regime at light loads at the scale of a single asperity, and the linear regime at higher loads involving multiple contacting asperities. The transition between the regimes occurs at a load that depends on the second and the fourth spectral moments. It is shown that at light indentation the radius of the circumference delimiting the contact area is always considerably larger than the Hertzian contact radius. 
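The generic trust-region acceptance logic underlying such methods can be sketched on a one-dimensional model problem; the Reduced Basis surrogate and projected BFGS sub-problem solver of the paper are replaced here by a local quadratic model, so this illustrates only the accept/reject and radius-update mechanics.

```python
# Generic trust-region loop (1D sketch): build a local quadratic model,
# minimize it within the trust radius, then accept/reject the step and
# adapt the radius according to the ratio rho of actual to predicted
# decrease. The paper's RB surrogates and BFGS solver are NOT reproduced.

def trust_region_minimize(f, grad, hess, x, radius=1.0, tol=1e-8):
    for _ in range(100):
        g, h = grad(x), hess(x)
        if abs(g) < tol:
            break
        # Minimize m(s) = f(x) + g*s + 0.5*h*s^2 subject to |s| <= radius.
        if h > 0 and abs(g / h) <= radius:
            s = -g / h                      # interior minimizer
        else:
            s = -radius if g > 0 else radius  # boundary step
        predicted = -(g * s + 0.5 * h * s * s)
        actual = f(x) - f(x + s)
        rho = actual / predicted if predicted > 0 else -1.0
        if rho > 0.1:        # sufficient decrease: accept the step
            x = x + s
        if rho > 0.75:       # model is trustworthy: enlarge the region
            radius *= 2.0
        elif rho < 0.25:     # model is poor: shrink the region
            radius *= 0.25
    return x

x_min = trust_region_minimize(lambda x: (x - 3.0) ** 2,
                              lambda x: 2.0 * (x - 3.0),
                              lambda x: 2.0, x=0.0)
```

In the paper's setting the model function is the RB surrogate and the a posteriori error bounds certify when it may be trusted, replacing the purely heuristic radius update shown here.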
This suggests that there is no scale separation in contact problems at light loads. In particular, the geometrical shape cannot be considered separately from the surface roughness, at least for approach distances greater than one standard deviation of the roughness.\n\n[**Keywords.**]{} roughness; indentation; contact; deterministic" -"---\nabstract: 'We propose Neural Turtle Graphics (NTG), a novel generative model for spatial graphs, and demonstrate its applications in modeling city road layouts. Specifically, we represent the road layout using a graph where nodes in the graph represent control points and edges in the graph represent road segments. NTG is a sequential generative model parameterized by a neural network. It iteratively generates a new node and an edge connecting to an existing node conditioned on the current graph. We train NTG on Open Street Map data and show that it outperforms existing approaches using a set of diverse performance metrics. Moreover, our method allows users to control styles of generated road layouts mimicking existing cities as well as to sketch parts of the city road layout to be synthesized. In addition to synthesis, the proposed NTG also finds use in the analytical task of aerial road parsing. Experimental results show that it achieves state-of-the-art performance on the SpaceNet dataset.'\nauthor:\n- |\n Hang Chu$^{1,2,4}$ Daiqing Li$^{4}$ David Acuna$^{1,2,4}$ Amlan Kar$^{1,2,4}$ Maria Shugrina$^{1,2,4}$ Xinkai Wei$^{1,4}$\\\n Ming-Yu Liu$^{4}$ Antonio Torralba$^{3}$ Sanja Fidler$^{1,2,4}$\\\n $^{1}$University of Toronto $^{2}$Vector Institute $^{3}$MIT $^{4}$NVIDIA\\\n [{chuhang1122,davidj,amlan}@cs.toronto.edu, {daiqingl,mshugrina,xinkaiw,mingyul,sfidler}@nvidia.com, torralba@mit.edu]{}\nbibliography:\n- 'egbib.bib'\ntitle: Neural Turtle Graphics for Modeling City
One field where Laplacian linear systems play a role is network analysis, e.g.\u00a0for certain centrality measures that indicate if a node (or an edge) is important in the network. One such centrality measure is current-flow closeness.\n\n To allow network analysis workflows to profit from a fast Laplacian solver, we provide an implementation of the LAMG multigrid solver in the NetworKit package, facilitating the computation of current-flow closeness values or related quantities. Our main contribution consists of two algorithms that significantly accelerate the current-flow computation for one node or a reasonably small node subset. One algorithm is an unbiased estimator using sampling, the other one is based on the Johnson-Lindenstrauss transform. Our inexact algorithms lead to very accurate results in practice. Thanks to these algorithms, one can now estimate the current-flow closeness of a single node on networks with tens of millions of nodes and edges within seconds or a few minutes. From a network analytical point of view, our experiments indicate that current-flow closeness can discriminate among different nodes significantly better than traditional shortest-path closeness and" -"---\nabstract: 'The Search for Extraterrestrial Intelligence (SETI) attempts to address the possibility of the presence of technological civilizations beyond the Earth. Benefiting from the high sensitivity, large sky coverage, and innovative feed cabin of China\u2019s Five-hundred-meter Aperture Spherical radio Telescope (FAST), we performed the first SETI observations with FAST\u2019s newly commissioned 19-beam receiver; we report preliminary results in this paper. 
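For a small graph, current-flow closeness can be computed exactly from the Laplacian pseudoinverse, which makes the quantity being accelerated concrete; the dense pseudoinverse below is a stand-in, for illustration only, for the paper's LAMG solves and its sampling/Johnson-Lindenstrauss estimators.

```python
import numpy as np

# Exact current-flow closeness via the Laplacian pseudoinverse -- a dense
# stand-in for the fast LAMG solves and JL/sampling estimators described
# above, practical only for small graphs.

def current_flow_closeness(adjacency):
    """adjacency: symmetric 0/1 matrix. Returns, per node, (n - 1)
    divided by the sum of effective resistances to all other nodes."""
    adjacency = np.asarray(adjacency, dtype=float)
    n = adjacency.shape[0]
    laplacian = np.diag(adjacency.sum(axis=1)) - adjacency
    lp = np.linalg.pinv(laplacian)          # Moore-Penrose pseudoinverse
    diag = np.diag(lp)
    # Effective resistance: R[v, w] = L+[v,v] + L+[w,w] - 2 * L+[v,w]
    resistance = diag[:, None] + diag[None, :] - 2 * lp
    return (n - 1) / resistance.sum(axis=1)

# Path graph 0-1-2: the middle node should be the most central.
path = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
closeness = current_flow_closeness(path)
```

The pseudoinverse costs $O(n^3)$, which is why solver-based and randomized estimators are needed at the scales reported above.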
Using the data stream produced by the SERENDIP VI realtime multibeam SETI spectrometer installed at FAST, as well as its off-line data processing pipelines, we identify and remove four kinds of radio frequency interference (RFI): zone, broadband, multi-beam, and drifting, utilizing the Nebula SETI software pipeline combined with machine learning algorithms. After RFI mitigation, the Nebula pipeline identifies and ranks interesting narrowband candidate ET signals, scoring candidates by the number of times candidate signals have been seen at roughly the same sky position and same frequency, signal strength, proximity to a nearby star or object of interest, along with several other scoring criteria. We show four example candidate groups that demonstrate this RFI mitigation and candidate selection process. This preliminary testing on FAST data helps to validate our SETI instrumentation techniques as well as our data processing pipeline.'\nauthor:\n- 'Zhi-Song
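One of the Nebula-style ranking criteria mentioned above, recurrence of a signal at roughly the same frequency and sky position, can be sketched by binning hits and ranking bins by hit count. The bin widths and the hit record format below are illustrative assumptions, not the pipeline's actual parameters, and the full scorer uses several additional criteria.

```python
from collections import defaultdict

# Sketch of one candidate-ranking criterion: group narrowband hits that
# recur in the same frequency/sky bin, then rank groups by hit count.
# Bin widths and the (freq_hz, ra_deg, dec_deg) format are illustrative.

def rank_candidates(hits, freq_bin_hz=500.0, sky_bin_deg=0.1):
    """hits: list of (freq_hz, ra_deg, dec_deg) tuples.
    Returns groups of hits sorted by recurrence count, largest first."""
    groups = defaultdict(list)
    for freq, ra, dec in hits:
        key = (int(freq // freq_bin_hz),
               int(ra // sky_bin_deg),
               int(dec // sky_bin_deg))
        groups[key].append((freq, ra, dec))
    return sorted(groups.values(), key=len, reverse=True)

hits = [
    (1420000100.0, 83.63, 22.01),
    (1420000250.0, 83.63, 22.01),  # same bin: a recurring candidate
    (1500123000.0, 10.00, -5.00),  # isolated hit, likely transient RFI
]
ranked = rank_candidates(hits)
```

Recurring groups rise to the top of the list, while one-off hits, which are more likely residual RFI, fall to the bottom.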
However, given that the webpage design and functionality could depend on a user\u2019s preferences or device, among many other factors, actively loading webpages in controlled environments cannot cover all possible conditions in which webpage content and functionality change.\n\nIn this paper, we show that passive measurement techniques, such as real user monitoring systems\u00a0(RUM), that monitor the performance of real user page loads under different conditions can be leveraged to
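Whatever the measurement technique, the underlying quantity is the fraction of a script's bytes never executed during a page load. A minimal sketch, assuming DevTools-style coverage output (a script length plus executed byte ranges; the range format here is illustrative):

```python
# Sketch of measuring superfluous JS: given a script's byte length and
# the byte ranges actually executed during a page load (as reported by a
# DevTools-style coverage tool), compute the fraction of unused code.

def superfluous_fraction(script_bytes, used_ranges):
    """used_ranges: list of (start, end) byte offsets that were executed.
    Overlapping ranges are merged so no byte is counted twice."""
    used = 0
    last_end = 0
    for start, end in sorted(used_ranges):
        start = max(start, last_end)   # clip overlap with previous range
        if end > start:
            used += end - start
            last_end = max(last_end, end)
    return 1.0 - used / script_bytes

# 100 kB script with two overlapping executed ranges covering 69 kB.
frac = superfluous_fraction(100_000, [(0, 40_000), (30_000, 69_000)])
```

Here 69% of the script executed, so 31% is superfluous for this particular load, matching the median figure reported above; aggregating such measurements across real user loads is what the passive approach enables.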
Two real-world examples demonstrate the effectiveness of our new method.'\naddress:\n- 'Institute of Control Systems, Karlsruhe Institute of Technology (KIT), 76131\u00a0Karlsruhe, Germany'\n- 'Intelligent Sensor-Actuator-Systems Laboratory, Institute for Anthropomatics and" -"---\nabstract: 'Studies of planetary atmospheric composition, variability, and evolution require appropriate theoretical and numerical tools to estimate key atmospheric parameters, among which the mass-loss rate is often the most important. In evolutionary studies, it is common to use the energy-limited formula, which is attractive for its simplicity but ignores important physical effects and can be inaccurate in many cases. To overcome this problem, we consider a recently developed grid of about 7000 one-dimensional upper-atmosphere hydrodynamic models computed for a wide range of planets with hydrogen-dominated atmospheres from which we extract the mass-loss rates. The grid boundaries are \\[1:39\\]\u00a0in planetary mass, \\[1:10\\]\u00a0in planetary radius, \\[300:2000\\]K in equilibrium temperature, \\[0.4:1.3\\]\u00a0in host star\u2019s mass, \\[0.002:1.3\\]au in orbital separation, and about \\[10$^{26}$:5$\\times$10$^{30}$\\]ergs$^{-1}$ in stellar X-ray and extreme ultraviolet luminosity. We then derive an analytical expression for the atmospheric mass-loss rates based on a fit to the values obtained from the grid. The expression provides the mass-loss rates as a function of planetary mass, planetary radius, orbital separation, and incident stellar high-energy flux. We show that this expression is a significant improvement to the energy-limited approximation for a wide range of planets. 
The analytical expression presented here enables significantly more accurate" -"---\nabstract: 'We use Raman spectroscopy in tandem with transmission electron microscopy and DFT simulations to show that extreme (GPa) pressure converts the phase of silicon nanowires from cubic (Si-I) to hexagonal (Si-IV) while preserving the nanowire\u2019s cylindrical morphology. In situ Raman scattering of the TO mode demonstrates the high-pressure Si-I to Si-II phase transition near 9 GPa. Raman signal of the TO phonon shows a decrease in intensity in the range 9 to 14 GPa. Then, at 17 GPa, it is no longer detectable, indicating a second phase change (Si-II to Si-V) in the 14 to 17 GPa range. Recovery of exotic phases in individual silicon nanowires from diamond anvil cell experiments reaching 17 GPa is also shown. Raman measurements indicate Si-IV as the dominant phase in pressurized nanowires after decompression. Transmission electron microscopy and electron diffraction confirm crystalline Si-IV domains in individual nanowires. Computational electromagnetic simulations suggest that heating from the Raman laser probe is negligible and that near-hydrostatic pressure is the primary driving force for the formation of hexagonal silicon nanowires.'\nauthor:\n- 'Bennett E. Smith'\n- Xuezhe Zhou\n- 'Paden B. Roder'\n- 'Evan H. Abramson'\n- 'Peter J. Pauzauskie'\nbibliography:\n- 'sinwdac.bib'\ntitle: 'Recovery of" -"---\nabstract: 'We formulate a new problem as Object Importance Estimation (OIE) in on-road driving videos, where the road users are considered as important objects if they have influence on the control decision of the ego-vehicle\u2019s driver. The importance of a road user depends on both its visual dynamics, *e.g*., appearance, motion and location, in the driving scene and the driving goal, *e.g*., the planned path, of the ego vehicle. We propose a novel framework that incorporates both visual model and goal representation to conduct OIE. 
To evaluate our framework, we collect an on-road driving dataset at traffic intersections in the real world and gather human annotations of the important objects. Experimental results show that our goal-oriented method outperforms baselines, with particularly large improvements in the left-turn and right-turn scenarios. Furthermore, we explore the possibility of using object importance for driving control prediction and demonstrate that binary brake prediction can be improved using object-importance information.'\nauthor:\n- 'Mingfei Gao$^{1*}$, Ashish Tawari$^{2}$ and Sujitha Martin$^{2}$[^1] [^2][^3]'\nbibliography:\n- 'egbib.bib'\ntitle: '**Goal-oriented Object Importance Estimation in On-road Driving Videos** '\n---\n\nIntroduction {#sec: intro}\n============\n\nThe human vision system plays a key role in perceiving and interacting with traffic
To support our analytical model, we also perform a numerical simulation of a hybrid system while explicitly incorporating a thin superconducting layer, showing that all qualitative features of our analytical model are also present in the numerical results.'\nauthor:\n-" -"---\nabstract: |\n The Tethered Particle Motion (TPM) method has been used to observe and characterize a variety of protein-DNA interactions including DNA looping and transcription. TPM experiments exploit the Brownian motion of a DNA-tethered bead to probe biologically relevant conformational changes of the tether. In these experiments, a change in the extent of the bead\u2019s random motion is used as a reporter of the underlying macromolecular dynamics and is often deemed sufficient for TPM analysis. However, a complete understanding of how the motion depends on the physical properties of the tethered particle complex would permit more quantitative and accurate evaluation of TPM data. For instance, such understanding can help extract details about a looped complex geometry (or multiple coexisting geometries) from TPM data. To better characterize the measurement capabilities of TPM experiments involving DNA tethers, we have carried out a detailed calibration of TPM magnitude as a function of DNA length and particle size. We also explore how experimental parameters such as acquisition time and exposure time affect the apparent motion of the tethered particle. We vary the DNA length from 200 bp to 2.6 kbp and consider particle diameters of 200, 490 and 970 nm. We also present a systematic comparison between
Despite the large number of existing techniques, there is no single one that fits well in all cases or for all data sources. Supervised approaches may be able to adapt to specific situations, but they require manually labeled training data, which is very cumbersome and expensive to acquire, especially for a new application. In this context, we propose here to combine several very popular and effective *state-of-the-practice* sentiment analysis methods by means of an unsupervised bootstrapped strategy for polarity classification. One of our main goals is to reduce the large variability (lack of stability) of the unsupervised methods across different domains (datasets). Our solution was thoroughly tested considering thirteen different datasets in several domains such as opinions, comments, and social media. The experimental results demonstrate that our combined method (called 10SENT) improves the effectiveness of the classification task, but more importantly, it solves a key problem in the field. It is consistently among the best methods in
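The agreement step of such an unsupervised combination can be sketched as a majority vote over the base methods' polarity predictions, with high-agreement documents kept as pseudo-labels; the bootstrapped training on those pseudo-labels is omitted here, and the method names are placeholders rather than 10SENT's actual base methods.

```python
from collections import Counter

# Unsupervised combination sketch: majority-vote the polarity predictions
# of several off-the-shelf methods, and keep high-agreement documents as
# pseudo-labels for a bootstrapped classifier (training step omitted;
# method names below are placeholders, not 10SENT's actual base methods).

def majority_vote(predictions):
    """predictions: dict mapping method name -> list of 'pos'/'neg' labels,
    one label per document. Returns (label, agreement) per document.
    Ties are broken arbitrarily by Counter ordering."""
    methods = list(predictions.values())
    results = []
    for labels in zip(*methods):
        tally = Counter(labels)
        label, votes = tally.most_common(1)[0]
        results.append((label, votes / len(labels)))
    return results

predictions = {
    "lexicon_a": ["pos", "neg", "pos"],
    "lexicon_b": ["pos", "neg", "neg"],
    "emoticon":  ["pos", "pos", "neg"],
}
consensus = majority_vote(predictions)
# Unanimous documents become training seeds for the bootstrap step.
seeds = [i for i, (_, agree) in enumerate(consensus) if agree >= 1.0]
```

Voting over diverse base methods is one way to damp the per-domain variability of any single unsupervised method, which is the stability problem the combination targets.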
In between these two limiting cases, a delocalization transition of the proteins takes place. This transition is accessible by changing the temperature or the amount of incorporated protein. These findings are in agreement with recent fluorescence microscopy experiments. Our results also apply to lipid multicomponent membranes showing coexistence of distinct fluid phases.'\naddress:\n- |\n School of Physics and Astronomy, Raymond and Beverly Sackler Faculty of Exact Sciences,\\\n Tel Aviv University, Ramat" -"---\nabstract: 'Point of care ultrasound (POCUS) consists in the use of ultrasound imaging in critical or emergency situations to support clinical decisions by healthcare professionals and first responders. In this setting it is essential to be able to provide means to obtain diagnostic data to potentially inexperienced users who did not receive extensive medical training. Interpretation and acquisition of ultrasound images are not trivial. First, the user needs to find a suitable sound window which can be used to get a clear image, and then they need to correctly interpret it to perform a diagnosis. Although many recent approaches focus on developing smart ultrasound devices that add interpretation capabilities to existing systems, our goal in this paper is to present a reinforcement learning (RL) strategy that is capable of guiding novice users to the correct sonic window and enabling them to obtain clinically relevant pictures of the anatomy of interest. We apply our approach to cardiac images acquired from the parasternal long axis (PLAx) view of the left ventricle of the heart.'\nauthor:\n- 'Fausto Milletari, Vighnesh Birodkar, Michal Sofka'\nbibliography:\n- 'bibliography.bib'\ntitle: 'Straight to the point: reinforcement learning for user guidance in ultrasound'\n---" -"---\nauthor:\n- Ethan Vishniac\n- Chris Lintott\n- 'Greg J.
Schwarz'\n- August Muench\ntitle: 'An example of a Research Note of the American Astronomical Society (RNAAS)'\n---\n\n*Research Notes of the [American Astronomical Society](https://aas.org)* ([RNAAS](http://rnaas.aas.org)) is a publication in the AAS portfolio (alongside ApJ, AJ, ApJ Supplements, and ApJ Letters) through which authors can promptly and briefly share materials of interest with the astronomical community in a form that will be searchable via ADS and permanently archived.\n\nThe astronomical community has long faced a challenge in disseminating information that may not meet the criteria for a traditional journal article. There have generally been few options available for sharing works in progress, comments and clarifications, null results, and timely reports of observations (such as the spectrum of a supernova), as well as results that wouldn\u2019t traditionally merit a full paper (such as the discovery of a single exoplanet or contributions to the monitoring of variable sources).\n\nLaunched in 2017, RNAAS was developed as a supported and long-term communication channel for results such as these that would otherwise be difficult to broadly disseminate to the professional community and persistently archive for future reference.\n\nSubmissions to RNAAS should be brief communications" -"---\nabstract: 'We consider a family of spherical three dimensional spacelike slices embedded in the Schwarzschild solution. The mean curvature is constant on each slice but can change from slice to slice. We give a simple expression for an everywhere positive lapse and thus we show how to construct foliations. There is a barrier preventing the mean curvature from becoming large, and we show how to avoid this so as to construct a foliation where the mean curvature runs all the way from zero to infinity. No foliation exists where the mean curvature goes from minus to plus infinity. 
There are slicings, however, where each slice passes through the bifurcation sphere $R = 2M$ and the lapse only vanishes at this one point, and is positive everywhere else, while the mean curvature does run from minus to plus infinity. Symmetric foliations of the extended Schwarzschild spacetime degenerate at a critical point, where we show that the lapse function exponentially approaches zero.'\nauthor:\n- Edward Malec\n- Niall \u00d3 Murchadha\ntitle: 'The general spherically symmetric constant mean curvature foliations of the Schwarzschild solution.'\n---\n\nIntroduction\n============\n\nConstant mean curvature (CMC) foliations of the Schwarzschild geometry have been constructed by Brill, Cavalho" -"---\nabstract: 'Power distribution grids are exploited by Power Line Communication (PLC) technology to convey high frequency data signals. The natural conformation of such power line networks causes a relevant part of the high frequency signals traveling through them to be radiated instead of being conducted. This causes not only electromagnetic interference (EMI) with devices positioned next to power line cables, but also a considerable deterioration of the signal integrity. Since existing PLC channel models do not take into account losses due to radiation phenomena, this paper responds to the need for developing accurate network simulators. A thorough analysis is herein presented of the conducted and radiated effects on the signal integrity, digging into differential mode to common mode signal conversion due to network imbalances. The outcome of this work allows each network element to be described by a mixed-mode transmission matrix. Furthermore, the classical per-unit-length equivalent circuit of transmission lines is extended to incorporate radiation resistances.
The results of this paper lay the foundations for future developments of comprehensive power line network models that incorporate conducted and radiated phenomena.'\nauthor:\n- '[^1]'\nbibliography:\n- 'IEEEabrv.bib'\n- 'biblio.bib'\ntitle: Modeling Transmission and Radiation Effects when Exploiting Power Line Networks for" -"---\nabstract: 'Outsourcing jobs to a public cloud is a cost-effective way to address the problem of satisfying the peak resource demand when the local cloud has insufficient resources. In this paper, we study managing deadline-constrained bag-of-tasks jobs on hybrid clouds. We present a binary nonlinear programming (BNP) problem to model the hybrid cloud management where the utilization of physical machines (PMs) in the local cloud/cluster is maximized when the local resources are enough to satisfy the deadline constraints of jobs, while when not, the rent cost from the public cloud is minimized. To solve this BNP problem in polynomial time, we propose a heuristic algorithm. Its main idea is assigning the task closest to its deadline to the current core until the core cannot finish any task within its deadline. When there is no available core, the algorithm adds an available PM with the most capacity or rents a new VM with the highest cost-performance ratio. Extensive experimental results show that our heuristic algorithm saves 16.2%-76% of the rent cost and improves resource utilization by 47.3%-182.8% while satisfying deadline constraints, compared with the first fit decreasing algorithm.'\nauthor:\n- |\n Bo Wang\\\n \\\n \\\n \\\n \\\n Ying Song\\\n \\\n \\\n \\\n Yuzhong sun\\\n \\\n \\\n \\" -"---\nabstract: |\n Using the density functional theory plus Hubbard-U (DFT+U) approach, we find that quasi-one-dimensional (1D) 3d transition metal (TM) zigzag nanowires can be constructed from TM atoms adsorbed on the surface of a graphyne sheet.
The results show that the TM exchange coupling of the zigzag nanowire mediated by *sp* hybridized carbon atoms gives rise to long-range ferromagnetic order, except for Cr, which shows anti-ferromagnetic order. The magnetic exchange interaction of TM chains follows a Zener-like $p_z$-d exchange mechanism: the coexistence of out-of-plane $p_z$-d and in-plane $p_{x-y}$-d exchange. Finally, by including spin-orbit interactions within spin-DFT, we calculate the magnetic anisotropy energy of the TM chain on graphyne. We find that the Fe and Co chains show considerable magnetic anisotropy energy (MAE) and orbital magnetic moment. The easy axis of the V, Cr, Mn and Fe chains is perpendicular to the surface, whereas the easy axis of Co lies in the surface. Moreover, only the V chain shows relatively larger in-plane anisotropy. Our results open a new route to realizing the applications of graphyne in spintronics.\\\nauthor:\n- Junjie He\n- Pan Zhou\n- 'N. Jiao'\n- 'S. Y. Ma'\n- 'K. W. Zhang'\n- 'R. Z. Wang'\n- 'L. Z. Sun'\ntitle: 'Magnetic Exchange
In the present work, we introduce a new parametric family of degradation functions aimed at increasing the accuracy of phase-field models in predicting critical loads associated with crack nucleation as well as the propagation of existing fractures. An additional goal is the preservation of linear elastic response in the bulk material prior to fracture. Through the analysis of several numerical examples, we demonstrate the superiority of the proposed family of functions to the classical quadratic" -"---\nabstract: 'We report a study of exchange interactions in bulk CrO$_2$ calculated from first principles. We considered three near neighbor Cr-Cr exchange interactions: the interaction between corner and body center atoms mediated through a single oxygen atom; the interaction between a Cr and the Cr directly \u201cabove\u201d it in the (001) direction, also mediated by a single O atom; and the interaction between a Cr and its neighbor in the (100) direction, mediated by two intervening oxygen atoms. The interactions were calculated by rotating the moments of one or more of the Cr ions while constraining the others to remain parallel. We then fit the resulting energy vs.\u00a0angle data to the Heisenberg model and extracted exchange energy parameters with a least-squares method. We also calculated the exchange interactions using a \u201cspin-spiral\u201d technique, in which a relative angular displacement was imposed upon Cr moments in adjacent cells. Similar results were obtained with both approaches. The calculated $T=0$ K exchange interactions were subsequently used to determine the magnetization as a function of temperature via low-T spin-wave dispersion and a Monte-Carlo method. Reasonable agreement with experiment was obtained.'\nauthor:\n- 'H. Sims'\n- 'S. J. Oset'\n- 'W. H. Butler'\n-" -"---\nabstract: 'Turbulence with a large magnetic Reynolds number generically leads to rapidly growing magnetic noise over and above any mean field.
We revisit the dynamics of this fluctuating field in homogeneous, isotropic, helical turbulence. Assuming the turbulence to be Markovian, we first rederive, in a fairly transparent manner, the equation for the mean field, and corrected Fokker-Planck-type equations for the magnetic correlations. In these equations, we also incorporate the effects of ambipolar drift which would obtain if the turbulent medium has a significant neutral component. We apply these equations to discuss a number of astrophysically interesting problems: (a) the small scale dynamo in galactic turbulence with a model Kolmogorov spectrum, incorporating the effect of ambipolar drift; (b) current helicity dynamics and the quasilinear corrections to the alpha effect; (c) growth of the current helicity and large-scale magnetic fields due to nonlinear effects.'\nauthor:\n- Kandaswamy Subramanian\ntitle: Dynamics of fluctuating magnetic fields in turbulent dynamos incorporating ambipolar drifts\n---\n\nIntroduction\n============\n\nThe origin of large-scale cosmic magnetic fields remains, at present, a challenging problem. In a standard paradigm, one invokes the dynamo action involving helical turbulence and rotational shear to generate magnetic fields ordered on scales much larger
A spectral analysis of the waves suggests they are a superposition of modes from two continuous branches in the Galactocentric radius-rotational frequency plane. The lower-frequency branch is dominant and is responsible for the corrugated, leading, and warped structure. Over time, power in this branch migrates outward, lending credence to an inside-out formation scenario for the warp. Our power spectra qualitatively agree with results from linear perturbation theory and a WKB analysis, both of which include self-gravity. Thus, we conclude that the waves in our simulations are self-gravitating and not purely kinematic. These waves are reminiscent of the wave-like pattern recently found in Galactic star counts from the" -"---\nabstract: 'The metallicity of a star strongly affects both its evolution and the properties of the stellar remnant that results from its demise. It is generally accepted that stars with initial masses below $\\sim 8\\,M_\\odot$ leave behind white dwarfs and that some sub-population of these leads to Type\u00a0Ia supernovae. However, it is often tacitly assumed that metallicity has no effect on the rate of SNe\u00a0Ia. We propose that a natural consequence of the effects of metallicity is to significantly increase the SN\u00a0Ia rate in lower-metallicity galaxies. This is because lower-metallicity stars leave behind higher-mass white dwarfs, which should generally be easier to bring to an explosion. Using a simple model to relate the SN rate to galaxy age and metallicity, we find that the elevation in the rate of SNe\u00a0Ia in lower-mass galaxies measured by LOSS is readily explained. We also find that models using the same parameters agree well with cosmic SN\u00a0Ia rates up to $z\\approx2$.
We discuss additional implications of metallicity, including for inferences of the SN\u00a0Ia delay time distribution and super-Chandrasekhar SNe.'\nauthor:\n- 'Matthew D.\u00a0Kistler, K.\u00a0Z.\u00a0Stanek, Christopher S.\u00a0Kochanek, Jos[\u00e9]{}\u00a0L.\u00a0Prieto, and Todd\u00a0A.\u00a0Thompson'" -"---\nabstract: 'We continue our study (Grechnev [*et al.*]{} (2013), doi:10.1007/s11207-013-0316-6; Paper I) on the 18 November 2003 geoeffective event. To understand the possible impact on geospace of the coronal transients observed on that day, we investigated their properties from solar near-surface manifestations in extreme ultraviolet, LASCO white-light images, and dynamic radio spectra. We reconcile near-surface activity with the expansion of coronal mass ejections (CMEs) and determine their orientation relative to the earthward direction. The kinematic measurements, dynamic radio spectra, and microwave and X-ray light curves all contribute to the overall picture of the complex event and confirm an additional eruption at 08:07\u201308:20\u00a0UT close to the solar disk center presumed in Paper\u00a0I. Unusual characteristics of the ejection appear to match those expected for a source of the 20 November superstorm but make its detection in LASCO images hopeless. On the other hand, none of the CMEs observed by LASCO seem to be a promising candidate for a source of the superstorm being able to produce, at most, a glancing blow on the Earth\u2019s magnetosphere. Our analysis confirms free propagation of shock waves revealed in the event and reconciles their kinematics with \u201cEUV waves\u201d and dynamic radio spectra up to decameters.'" -"---\nabstract: 'We report on the influence of disorder on an exciton-polariton condensate in a ZnO based bulk planar microcavity and compare experimental results with a theoretical model for a non-equilibrium condensate.
Experimentally, we detect intensity fluctuations within the far-field emission pattern even at high condensate densities, which indicates a significant impact of disorder. We show that these effects rely on the driven dissipative nature of the condensate and argue that they can be accounted for by spatial phase inhomogeneities induced by disorder, which occur even for increasing condensate densities realized in the regime of high excitation power. Thus, non-equilibrium effects strongly suppress the stabilization of the condensate against disorder, contrary to what is expected for equilibrium condensates in the high density limit. Numerical simulations based on our theoretical model reproduce the experimental data.'\nauthor:\n- Martin Thunert\n- Alexander Janot\n- Helena Franke\n- Chris Sturm\n- Tom Michalsky\n- Mar\u00eda Dolores Mart\u00edn\n- Luis Vi\u00f1a\n- Bernd Rosenow\n- Marius Grundmann\n- 'R\u00fcdiger Schmidt-Grund'\ntitle: Cavity Polariton Condensate in a Disordered Environment\n---\n\nIntroduction\n============\n\nThe observation of a macroscopically coherent quantum state of exciton-polaritons, a so-called polariton Bose-Einstein condensate (BEC),\u00a0[@Kasprzak.2006; @Balili.2007] has opened an active and" -"---\nabstract: 'Parallel code design is a challenging task especially when addressing petascale systems for massive parallel processing (MPP), i.e.\u00a0parallel computations on several hundreds of thousands of cores. An in-house computational fluid dynamics code, developed by our group, was designed for such high-fidelity runs in order to exhibit excellent scalability values. The basis for this code is an adaptive hierarchical data structure together with an efficient communication and (numerical) computation scheme that supports MPP. For a detailed scalability analysis, we performed several experiments on two of Germany\u2019s national supercomputers up to 140,000 processes.
In this paper, we will show the results of those experiments and discuss any bottlenecks that could be observed while solving engineering-based problems such as porous media flows or thermal comfort assessments for problem sizes up to several hundred billion degrees of freedom.'\nauthor:\n- \n- \nbibliography:\n- 'paper.bib'\ntitle: 'Measuring and Comparing the Scaling Behaviour of a High-Performance CFD Code on Different Supercomputing Infrastructures'\n---\n\nhigh-performance computing, adaptive data structure, multi-grid-like solver concept, speed-up measurements\n\nIntroduction and Motivation\n===========================\n\nModern supercomputers tend to be massively parallel, i.e.\u00a0they consist of several hundreds of thousands of cores, thus making efficient code design inevitable in order to exploit" -"---\nabstract: 'Continuous-wave laser driven Kerr-nonlinear optical microresonators have enabled a variety of novel applications and phenomena including the generation of optical frequency combs, ultra-low noise microwaves, as well as ultra-short optical pulses. In this work we break with the paradigm of the continuous-wave optical drive and use instead periodic, picosecond optical pulses. We observe the deterministic generation of stable femtosecond dissipative cavity solitons on top of the resonantly enhanced driving pulse. Surprisingly, the soliton pulse locks to the driving pulse enabling direct all-optical control of both the soliton\u2019s repetition rate and carrier-envelope offset frequency without the need for any actuation on the microresonator. When compared to both continuous-wave driven microresonators and non-resonant pulsed supercontinuum generation, this new approach is substantially more efficient and can yield broadband frequency combs at femtojoule driving pulse energies and average laser powers significantly below the parametric threshold power of continuous-wave driven microresonators.
The presented results bridge the fields of continuous-wave driven resonant and pulse-driven non-resonant nonlinear optics. They enable micro-photonic pulse compression, ultra-efficient low noise frequency comb and resonant supercontinuum generation for applications including optical data transfer and optical spectroscopy. From a scientific perspective, the results open a new horizon for nonlinear photonics driven by" -"---\nabstract: 'In this paper, we introduce the convergence analysis of the fixed pivot technique given by S. Kumar and Ramkrishna [@Kumar:1996-1] for the nonlinear aggregation population balance equations which are of substantial interest in many areas of science: colloid chemistry, aerosol physics, astrophysics, polymer science, oil recovery dynamics, and mathematical biology. In particular, we investigate the convergence for five different types of uniform and non-uniform meshes; it turns out that the fixed pivot technique is second-order convergent on uniform and non-uniform smooth meshes. Moreover, it yields first-order convergence on a locally uniform mesh. Finally, the analysis exhibits that the method does not converge on oscillatory and non-uniform random meshes. Mathematical results of the convergence analysis are also demonstrated numerically.'\nauthor:\n- Ankik Kumar Giri \u00a0\n- Erika Hausenblas\ntitle: 'Convergence analysis of sectional methods for solving aggregation population balance equations: The fixed pivot technique'\n---\n\nParticles, Aggregation, Fixed pivot technique, Consistency, Convergence\n\n45J05, 65R20, 45L05\n\nIntroduction\n============\n\nThe continuous aggregation population balance equation (PBE) or Smoluchowski coagulation equation describes the kinetics of particle growth in which particles can aggregate via binary interaction to form larger particles.
This model arises in many fields of science and engineering:" -"---\nabstract: 'Motivated by the picture of a thin accretion disc around a black hole, radiating mainly in the direction perpendicular to its plane, we study the motion of test particles interacting with a test geodesic radiation flux originating in the equatorial plane of a Schwarzschild space-time and propagating initially in the perpendicular direction. We assume that the interaction with the test particles is modelled by an effective term corresponding to the Thomson-type interaction which governs the Poynting-Robertson effect. After approximating the individual photon trajectories adequately, we solve the continuity equation approximately in order to find a consistent flux density with a certain plausible prescribed equatorial profile. The combined effects of gravity and radiation are illustrated in several typical figures which confirm that the particles are generically strongly influenced by the flux. In particular, they are both collimated and accelerated in the direction perpendicular to the disc, but this acceleration is not enough to explain highly relativistic outflows emanating from some black-hole\u2013disc sources. The model can however be improved in a number of ways before posing further questions which are summarized in concluding remarks.'\ntitle: 'Particles under radiation thrust in Schwarzschild space-time from a flux perpendicular to the equatorial plane'" -"---\nabstract: 'We investigate the harmonic and anharmonic contributions to the phonon spectrum of lead telluride, and perform a complete characterization of how the anharmonic effects dominate the phonons in PbTe as temperature increases. This effect is the strongest factor in the favorable thermoelectric properties of PbTe: an optical-acoustic phonon band crossing reduces the speed of sound and the intrinsic thermal conductivity. 
We present the detailed temperature dependence of the dispersion relation and compare our calculated neutron scattering cross section with recent experimental measurements. We analyze the thermal resistivity\u2019s variation with temperature and clarify misconceptions about existing experimental literature. This quantitative prediction opens the way to phonon phase space engineering, to tailor the lifetimes of crucial heat carrying phonons.'\nauthor:\n- 'A.H. Romero$^{1,2}$'\n- 'E.K.U. Gross$^2$'\n- 'M.J. Verstraete$^3$'\n- Olle Hellman$^4$\nbibliography:\n- 'library.bib'\ntitle: Thermal Anharmonic Effects in PbTe from First Principles\n---\n\nHeat conversion by using thermoelectric power generation has received a huge amount of interest in the last few years: transforming a temperature gradient to a voltage difference promises to recover waste heat in thermal engines, transforming it into electrical energy. The thermoelectric efficiency of a material is captured by the figure of merit, $ZT=T S^2" -"---\nabstract: 'We examine to which extent correlated realistic nucleon-nucleon interactions, derived within the Unitary Correlation Operator Method (UCOM), can describe nuclear collective motion in the framework of first-order random-phase approximation (RPA). To this end we employ the correlated Argonne V18 interaction in calculations within the so-called \u201cExtended\" RPA (ERPA) and investigate the response of closed-shell nuclei. The ERPA is a renormalized RPA version which considers explicitly the depletion of the Fermi sea due to long-range correlations and thus allows us to examine how these affect the excitation spectra. It is found that the effect on the properties of giant resonances is rather small. Compared to the standard RPA, where excitations are built on top of the uncorrelated Hartree-Fock (HF) ground state, their centroid energies decrease by up to 1\u00a0MeV, approximately, in the isovector channel. 
The isoscalar response is less affected in general. Thus, the disagreement between our previous UCOM-based RPA calculations and the experimental data is to be attributed to other effects, mainly to a residual three-body force and higher-order configurations. Ground-state properties obtained within the ERPA are compared with corresponding HF and perturbation-theory results and are discussed as well. The ERPA formalism is presented in detail.'\nauthor:" -"---\nabstract: 'We performed deep $K''$-band imaging observations of 2 massive clusters, MS $0451.6-0305$ at $z = 0.55$ and MS $0440.5+0204$ at $z = 0.19$, to search for counterparts of the faint sub-mm sources behind these clusters, which would provide one of the deepest extremely red object (ERO) samples. Comparing our near-infrared images with optical images taken by the Hubble Space Telescope and by the Subaru Telescope, we identified 13 EROs in these fields. The sky distributions of EROs are consistent with the previous results, in that there is a sign of strong clustering among detected EROs. Also, the surface density with corrected lensing amplification factors in both clusters is in good agreement with that derived from previous surveys. We found 7 EROs and 3 additional very red objects in a small area ($\\sim$ 0.6 arcmin$^{2}$) of the MS $0451.6-0305$ field around an extended SCUBA source. Many of their optical and near-infrared colors are consistent with dusty star-forming galaxies at high redshifts (z $\\sim$1.0\u20134.0), and they may constitute a cluster of dusty starburst galaxies and/or lensed star-forming galaxies at high redshift.
Their red $J-K''$ colors and faint optical magnitudes suggest they are relatively old massive stellar systems with ages ($>$300 Myr) suffering from" -"---\nabstract: 'We establish the analogue of the Friedlander-Mazur conjecture for Teh\u2019s reduced Lawson homology groups of real varieties, which says that the reduced Lawson homology of a real quasi-projective variety $X$ vanishes in homological degrees larger than the dimension of $X$ in all weights. As an application we obtain a vanishing of homotopy groups of the mod-2 topological groups of averaged cycles and a characterization in a range of indices of the motivic cohomology of a real variety as homotopy groups of the complex of averaged equidimensional cycles. We also establish an equivariant Poincare duality between equivariant Friedlander-Walker real morphic cohomology and dos Santos\u2019 real Lawson homology. We use this together with an equivariant extension of the mod-2 Beilinson-Lichtenbaum conjecture to compute some real Lawson homology groups in terms of Bredon cohomology.'\nauthor:\n- Jeremiah Heller\n- Mircea Voineagu\nbibliography:\n- 'remreal.bib'\ntitle: Vanishing Theorems for Real Algebraic Cycles\n---\n\nIntroduction\n============\n\nLet $X$ be a quasi-projective real variety. The Galois group $G = Gal(\\C/\\R)$ acts on $\\mcal{Z}_{q}(X_{\\C})$, the topological group of $q$-cycles on the complexification. Cycles on the real variety $X$ correspond to cycles on $X_{\\C}$ which are fixed by conjugation. Inside the topological group $\\mcal{Z}_{q}(X_{\\C})^{G}$ of" -"---\nabstract: 'The rapid development of deep learning, a family of machine learning techniques, has spurred much interest in its application to medical imaging problems.
Here, we develop a deep learning algorithm that can accurately detect breast cancer on screening mammograms using an \u201cend-to-end\u201d training approach that efficiently leverages training datasets with either complete clinical annotation or only the cancer status (label) of the whole image. In this approach, lesion annotations are required only in the initial training stage, and subsequent stages require only image-level labels, eliminating the reliance on rarely available lesion annotations. Our all convolutional network method for classifying screening mammograms attained excellent performance in comparison with previous methods. On an independent test set of digitized film mammograms from the Digital Database for Screening Mammography (DDSM), the best single model achieved a per-image AUC of 0.88, and four-model averaging improved the AUC to 0.91 (sensitivity: 86.1%, specificity: 80.1%). On a validation set of full-field digital mammography (FFDM) images from the INbreast database, the best single model achieved a per-image AUC of 0.95, and four-model averaging improved the AUC to 0.98 (sensitivity: 86.7%, specificity: 96.1%). We also demonstrate that a whole image classifier trained using our end-to-end approach on the" -"---\nabstract: 'Drop-surface interaction is predominant in nature as well as in many industrial applications. Freezing rain is the frequent origin of ice accretion on surfaces. Superhydrophobic surfaces show potential for anti-icing applications as they exhibit complete drop rebound. Nonetheless, drop shedding has to take place before freezing for effective functioning. Recently, introducing a macro-ridge to break the hydrodynamic symmetry has been shown to reduce the residence time on the surface of a bouncing drop. However, for a practical application, the surface must be decorated with a series of ridges so that most of the drops actually encounter the ridges and lift off rapidly.
Here we show that a parallel neighbor ridge can influence the dynamics of recoiling. Ridge spacing plays a key role in the performance of the surface in reducing the residence time. This finding can be of great significance for the development of macro-ridged anti-icing surfaces.'\nauthor:\n- |\n Regulagadda Kartik$^{\\dagger}$, Shamit Bakshi$^{\\dagger *}$, Sarit Kumar Das$^{\\dagger \\ddagger}$\\\n $\\dagger$ Department of Mechanical Engineering, Indian Institute of Technology,\\\n Madras, India.\\\n $\\ddagger$ Current Address: Department of Mechanical Engineering, Indian Institute of\\\n Technology, Ropar, India.\\\n Email: kartik25192@gmail.com\\\n Email: shamit@iitm.ac.in\\\n Email: skdas@iitrpr.ac.in\\\n Phone: +91-44-22574700\ntitle: 'Towards the development of a macro-structured water-repellent surface'" -"---\nabstract: 'Evolutionary tracks and pulsational analysis of models with masses of 13-18 $M_\\odot$ are presented. We address two important questions. The first one deals with one of the most unresolved problems in astrophysics, i.e., the existence of a blue loop after core helium ignition; the so-called \u201cto loop or not to loop\u201d problem. We show that inward overshooting from the outer convective zone in the red giant phase is a prerequisite for the development of the blue loop. Our second question concerns pulsational instability of models in the core helium burning phase. We show for the first time that models on the blue loop can have unstable modes driven by the $\\kappa$ mechanism operating in the $Z-$bump. Contrary to post-main sequence models in the shell hydrogen burning phases, pulsational instability of the blue loop models depends mainly on effective temperature, and metallicity is of secondary importance. Finally, we try to interpret the oscillation spectrum of the blue supergiant HD 163899, the only member of the SPBsg class, and to get some clues about the evolutionary status of the star.'\nauthor:\n- |\n J. Ostrowski$^{1}$[^1] and J.
Daszy\u0144ska-Daszkiewicz$^{1}$[^2]\\\n $^{1}$Instytut Astronomiczny, Uniwersytet Wroc[\u0142]{}awski, ul. Kopernika 11, 51-622 Wroc[\u0142]{}aw, Poland\\\ndate: 'Accepted" -"---\nabstract: 'Zodiacal emission is thermal emission from interplanetary dust. Its contribution to the sky brightness is non-negligible in the region near the ecliptic plane, even in the far-infrared (far-IR) wavelength regime. We analyse zodiacal emission observed by the AKARI far-IR all-sky survey, which covers 97% of the entire sky at arcminute-scale resolution in four photometric bands, with central wavelengths of 65, 90, 140, and 160\u00a0$\\mu$m. AKARI detected small-scale structures in the zodiacal dust cloud, including the asteroidal dust bands and the circumsolar ring, at far-IR wavelengths. Although the smooth component of the zodiacal emission structure in the far-IR sky can be reproduced well by models based on existing far-IR observations, previous zodiacal emission models have discrepancies in the small-scale structures compared with observations. We investigate the geometry of the small-scale dust-band structures in the AKARI far-IR all-sky maps and construct template maps of the asteroidal dust bands and the circumsolar ring components based on the AKARI far-IR maps. In the maps, $\\pm \\timeform{1.4D}$, $\\pm \\timeform{2.1D}$ and $\\pm \\timeform{10D}$ asteroidal dust-band structures are detected in the 65\u00a0$\\mu$m and 90\u00a0$\\mu$m bands. A possible $\\pm \\timeform{17D}$ band may also have been detected. No evident dust-band structures are identified in" -"---\nabstract: 'While [*test collections*]{} provide the cornerstone for Cranfield-based evaluation of information retrieval (IR) systems, it has become practically infeasible to rely on traditional [*pooling*]{} techniques to construct test collections at the scale of today\u2019s massive document collections (e.g., ClueWeb12\u2019s 700M+ Webpages). 
This has motivated a flurry of studies proposing more cost-effective yet reliable IR evaluation methods. In this paper, we propose a new [*intelligent topic selection*]{} method which reduces the number of search topics (and thereby costly human relevance judgments) needed for reliable IR evaluation. To rigorously assess our method, we integrate previously disparate lines of research on intelligent topic selection and *deep* vs.\u00a0*shallow* judging (i.e., whether it is more cost-effective to collect many relevance judgments for a few topics or a few judgments for many topics). While prior work on intelligent topic selection has never been evaluated against shallow judging baselines, prior work on deep vs.\u00a0shallow judging has largely argued for shallow judging, but assuming random topic selection. We argue that for evaluating any topic selection method, ultimately one must ask whether it is actually useful to select topics, or should one simply perform shallow judging over many topics? In seeking a rigorous answer to" -"---\nabstract: 'Recent observations with TRACE reveal that the time delay between the appearance of a cooling loop in different EUV temperature filters is proportional to the loop length, $\\Delta t_{12} \\propto L$. We model this cooling delay in terms of radiative loss and confirm this linear relationship theoretically. We derive an expression that can be used to constrain the coronal iron enhancement ${\\alpha}_{Fe}=A_{Fe}^{cor}/A_{Fe}^{Ph}$ relative to the photospheric value as a function of the cooling delay $\\Delta t_{12}$, flux $F_2$, loop width $w$, and filling factor $q_w \\le 1$. With this relation we find upper limits on the iron abundance enhancement of ${\\alpha}_{Fe} \\le 4.8\\pm 1.7$ for 10 small-scale nanoflare loops, and ${\\alpha}_{Fe} \\le 1.4\\pm 0.4$ for 5 large-scale loops, in the temperature range of $T\\approx 1.0-1.4$ MK. 
This result supports the previous finding that low-FIP elements, including Fe, are enhanced in the corona. The same relation also constitutes a lower limit for the filling factor, which is $q_w \\ge 0.2\\pm 0.1$ and $q_w \\ge 0.8\\pm 0.2$ for the two groups of coronal loops.'\nauthor:\n- 'Markus J. Aschwanden[^1], Carolus J. Schrijver$^1$, Amy R. Winebarger[^2]$^{,}$[^3], and Harry P. Warren$^3$'\ntitle: A NEW METHOD TO CONSTRAIN THE IRON ABUNDANCE FROM COOLING DELAYS" -"---\nabstract: 'Effects of synaptic noise on the retrieval process of associative memory neural networks are studied from the viewpoint of neurobiological and biophysical understanding of information processing in the brain. We investigate the statistical mechanical properties of stochastic analog neural networks with temporally fluctuating synaptic noise, which is assumed to be white noise. Such networks, in general, defy the use of the replica method, since they have no energy concept. The self-consistent signal-to-noise analysis (SCSNA), which is an alternative to the replica method for deriving a set of order parameter equations, requires no energy concept and thus becomes available in studying networks without energy functions. Applying the SCSNA to a stochastic network requires knowledge of the Thouless-Anderson-Palmer (TAP) equation which defines the deterministic networks equivalent to the original stochastic ones. Studies of the TAP equation, which is of particular interest in the case without an energy concept, are very few, while it is closely related to the SCSNA in the case with an energy concept. This paper aims to derive the TAP equation for networks with synaptic noise together with a set of order parameter equations by a hybrid use of the cavity method and the SCSNA.'\naddress: 'Department of
These techniques suffer from *bootstrap* and *deception* issues when the tasks are too complex for a single controller to learn. Behaviour-decomposition techniques have been used to divide a task into multiple subtasks and evolve separate subcontrollers for each subtask. However, these subcontrollers and the associated subcontroller arbitrator(s) are all evolved off-line. A distributed, fully embodied and evolutionary version of such approaches will greatly aid online learning and help reduce the reality gap. In this paper, we propose an immunology-inspired embodied action-evolution cum selection algorithm that can cater to distributed ER. This algorithm evolves different subcontrollers for different portions of the search space in a distributed manner just as antibodies are evolved and primed for different antigens in the antigenic space. Experimentation on a collective of real robots embodied with the algorithm showed that a repertoire of antibody-like subcontrollers was created, evolved and shared *on-the-fly* to cope with different environmental conditions. In addition, instead of the conventionally used approach of broadcasting for sharing, we present an *Intelligent Packet Migration* scheme that reduces energy consumption.'\nauthor:" -"---\nabstract: 'The present investigation deals with the dynamics of a two-degrees-of-freedom system which consists of a main linear oscillator and a strongly nonlinear absorber with small mass. The nonlinear oscillator has a softening hysteretic characteristic represented by a Bouc-Wen model. The periodic solutions of this system are studied and their calculation is performed through an averaging procedure. The study of nonlinear modes and their stability shows, under specific conditions, the existence of localization which is responsible for a passive irreversible energy transfer from the linear oscillator to the nonlinear one. 
The dissipative effect of the nonlinearity appears to play an important role in the energy transfer phenomenon and some design criteria can be drawn regarding this parameter among others to optimize this energy transfer. The free transient response is investigated and it is shown that the energy transfer appears when the energy input is sufficient in accordance with the predictions from the nonlinear modes. Finally, the steady-state forced response of the system is investigated. When the input of energy is sufficient, the resonant response (close to nonlinear modes) experiences localization of the vibrations in the nonlinear absorber and jump phenomena.'\n---\n\nJournal home page: http://www.sciencedirect.com/science/journal/00207462\\\n\nDynamics of a linear" -"---\nabstract: 'We describe the InfraRed Data Reduction (IRDR) software package, a small ANSI C library of fast image processing routines for automated pipeline reduction of infrared (dithered) observations. We developed the software to satisfy certain design requirements not met in existing packages (e.g., full weight map handling) and to optimize the software for large data sets (non-interactive tasks that are CPU and disk efficient). The software includes stand-alone C programs for tasks such as running sky frame subtraction with object masking, image registration and coaddition with weight maps, dither offset measurement using cross-correlation, and object mask dilation. Although we currently use the software to process data taken with CIRSI (a near-IR mosaic imager), the software is modular and concise and should be easy to adapt/reuse for other work. 
IRDR is available from anonymous ftp to ftp.ast.cam.ac.uk in pub/sabbey.'\nauthor:\n- 'C.N.\u00a0Sabbey, R.G.\u00a0McMahon, J.R.\u00a0Lewis, & M.J.\u00a0Irwin'\ntitle: Infrared Imaging Data Reduction Software and Techniques\n---\n\nIntroduction\n============\n\nThe Cambridge Infrared Survey Instrument (CIRSI) is a near-IR mosaic imager that contains a 2 x 2 array of Rockwell Hawaii I 1024 x 1024 detectors (Beckett et al.\u00a01996; Mackay et al.\u00a02000). CIRSI has been in operation" -"---\nabstract: 'This review starts with a discussion of the hierarchy of scales, relevant to the description of superfluids in neutron stars, which motivates a subsequent elementary exposition of the Newtonian superfluid hydrodynamics. Starting from the Euler equations for a superfluid and a normal fluid we apply the tensor virial method to obtain the virial equations of the first, second, and third order and to compute their Eulerian perturbations. Special emphasis is put on the computation of perturbations of the new terms due to mutual gravitational attraction and mutual friction between the two fluids. The oscillation modes of superfluid Maclaurin spheroids are derived from the first and second order perturbed virial equations. We discuss two generic classes of oscillation modes which correspond to the [*co-moving*]{} and [*relative oscillations*]{} of two fluids. These modes decouple if the normal fluid is inviscid. We also discuss the mixing of these modes (when the normal fluid is viscous) and its effect on the dynamical and secular instabilities of the co-moving modes and their damping.'\nauthor:\n- 'A. Sedrakian'\n- 'I. 
Wasserman'\ntitle: 'The tensor virial method and its applications to self-gravitating superfluids'\n---\n\nIntroduction\n============\n\nRadio and x-ray observations of neutron stars provide strong" -"4ex\n\n[**Lattice vibrations and structural instability in Cesium near the cubic to tetragonal transition\\\n**]{}\n\nUnder pressure cesium undergoes a transition from a high-pressure fcc phase (Cs-II) to a collapsed fcc phase (Cs-III) near 4.2GPa. At 4.4GPa there follows a transition to the tetragonal Cs-IV phase. In order to investigate the lattice vibrations in the fcc phase and seek a possible dynamical instability of the lattice, the phonon spectra of fcc-Cs at volumes near the III to IV transition are calculated using Savrasov\u2019s density functional linear-response LMTO method. Compared with quasiharmonic model calculations including non-central interatomic forces up to second neighbours, at the volume $V/V_0=0.44$ ($V_0$ is the experimental volume of bcc-Cs with $a_0$=6.048[\u00c5]{}), the linear-response calculations show soft intermediate wavelength $T_{[1\\bar{1}0]}[{\\xi}{\\xi}0]$ phonons. Similar softening is also observed for short wavelength $L[\\xi\\xi\\xi]$ and $L[00\\xi]$ phonons and intermediate wavelength $L[\\xi\\xi\\xi]$ phonons. The Born-von K\u00e1rm\u00e1n analysis of dispersion curves indicates that the interplanar force constants exhibit oscillating behaviours against plane spacing $n$ and the large softening of intermediate wavelength $T_{[1\\bar{1}0]}[{\\xi}{\\xi}0]$ phonons results from a negative (110)-interplanar force-constant $\\Phi_{n=2}$. The calculated frequencies for high-symmetry $K$ and $W$ phonons and longitudinal $X$ and $L$ phonons decrease with volume compression. In particular, the frequencies of" -"---\nabstract: 'This paper addresses the problem of ad\u00a0hoc microphone array calibration where only partial information about the distances between microphones is available. 
We construct a matrix consisting of the pairwise distances and propose to estimate the missing entries based on a novel Euclidean distance matrix completion algorithm by alternating low-rank matrix completion and projection onto the Euclidean distance space. This approach confines the recovered matrix to the EDM cone at each iteration of the matrix completion algorithm. The theoretical guarantees of the calibration performance are obtained considering the random and locally structured missing entries as well as the measurement noise on the known distances. [This study elucidates the links between the calibration error and the number of microphones along with the noise level and the ratio of missing distances. Thorough experiments on real data recordings and simulated setups are conducted to demonstrate these theoretical insights.]{} A significant improvement is achieved by the proposed Euclidean distance matrix completion algorithm over the state-of-the-art techniques for ad hoc microphone array calibration.'\naddress:\n- 'Idiap Research Institute, Martigny, Switzerland'\n- '\u00c9cole Polytechnique F\u00e9d\u00e9rale de Lausanne (EPFL), Switzerland'\n- 'Emails: {mohammad.taghizadeh, phil.garner, herve.bourlard, afsaneh.asaei}@idiap.ch, reza.parhizkar@epfl.ch'\nauthor:\n- 'Mohammad J. Taghizadeh'\n- Reza Parhizkar" -"---\nabstract: 'Imputation of missing data is a common application in various classification problems where the feature training matrix has missingness. A widely used solution to this imputation problem is based on the lazy learning technique, the $k$-nearest neighbor (kNN) approach. However, most of the previous work on missing data does not take into account the presence of the class label in the classification problem. Also, existing kNN imputation methods use variants of Minkowski distance as a measure of distance, which does not work well with heterogeneous data. 
In this paper, we propose a novel iterative kNN imputation technique based on class weighted grey distance between the missing datum and all the training data. Grey distance works well in heterogeneous data with missing instances. The distance is weighted by Mutual Information (MI) which is a measure of feature relevance between the features and the class label. This ensures that the imputation of the training data is directed towards improving the classification performance. This class weighted grey kNN imputation algorithm demonstrates improved performance when compared to other kNN imputation algorithms, as well as standard imputation algorithms such as MICE and missForest, in imputation and classification problems. These problems are based on simulated" -"---\nabstract: 'This study investigates the capacity region of a three-user cognitive radio network with two primary users and one cognitive user. A three-user Cognitive Interference Channel (C-IFC) is proposed by considering a three-user Interference Channel (IFC) where one of the transmitters has cognitive capabilities and knows the messages of the other two transmitters in a non-causal manner. First, two inner bounds on the capacity region of the three-user C-IFC are obtained based on using the schemes which allow all receivers to decode all messages with two different orders. Next, two sets of conditions are derived, under which the capacity region of the proposed model coincides with the capacity region of a three-user C-IFC in which all three messages are required at all receivers. Under these conditions, referred to as strong interference conditions, the capacity regions for the proposed three-user C-IFC are characterized. Moreover, the Gaussian three-user C-IFC is considered and the capacity results are derived for the Gaussian case. 
Some numerical examples are also provided.'\nauthor:\n- |\n Mahtab Mirmohseni, Bahareh Akhbari, and Mohammad Reza Aref\\\n Information Systems and Security Lab (ISSL)\\\n Department of Electrical Engineering, Sharif University of Technology, Tehran, Iran\\\n Email: mirmohseni@ee.sharif.edu, b\\_akhbari@ee.sharif.edu, and aref@sharif.edu [^1]\ntitle:" -"---\nabstract: 'The geometric phase can act as a signature for critical regions of interacting spin chains in the limit where the corresponding circuit in parameter space is shrunk to a point and the number of spins is extended to infinity; for finite circuit radii or finite spin chain lengths, the geometric phase is always trivial (a multiple of $2\\pi$). In this work, by contrast, two related signatures of criticality are proposed which obey finite-size scaling and which circumvent the need for assuming any unphysical limits. They are based on the notion of the *Bargmann invariant* whose phase may be regarded as a discretized version of Berry\u2019s phase. As circuits are considered which are composed of a discrete, finite set of vertices in parameter space, they are able to pass directly *through* a critical point, rather than having to circumnavigate it. The proposed mechanism is shown to provide a diagnostic tool for criticality in the case of a given non-solvable one-dimensional spin chain with nearest-neighbour interactions in the presence of an external magnetic field.'\nauthor:\n- 'Moritz E.\u00a0Reuter'\n- 'Michael J.\u00a0Hartmann'\n- 'Martin B.\u00a0Plenio'\ntitle: Geometric Phases and Critical Phenomena in a Chain of Interacting Spins\n---" -"---\nabstract: 'A relativistic effective charge model has been developed for computation of observable characteristics of multi-electron atoms and ions. 
A complete and orthogonal Dirac hydrogen basis set, depending on one parameter \u2014 effective nuclear charge $Z^{*}$ \u2014 identical for all single-electron wave functions of a given atom or ion, is employed for the construction of the secondary-quantized representation. The effective charge is uniquely determined by the charge of the nucleus and a set of electron occupation numbers for a given state. We thoroughly study the accuracy of the leading-order approximation for the total binding energy and demonstrate that it is independent of the number of electrons of a multi-electron atom. In addition, it is shown that the fully analytical leading-order approximation is especially suited for the description of highly charged ions since our wave functions are almost coincident with the Dirac-Hartree-Fock ones for the complete spectrum. Finally, we envision that our effective charge model is more accurate and thus can replace the Thomas-Fermi-Dirac model for all applications where it is still utilized.'\nauthor:\n- 'K. Dzikowski'\n- 'O. D. Skoromnik'\n- 'I. D. Feranchuk'\n- 'N. S. Oreshkina'\n- 'C. H. Keitel'\nbibliography:\n- 'relativistic\\_zeroth.bib'\ntitle: 'Relativistic effective charge" -"---\nabstract: 'We present high resolution interferometric observations of the cool atomic and cold molecular ISM of the TDG candidate Arp\u00a0245N, an object resembling a dwarf galaxy in the northern tidal tail of the interacting system NGC2992/3. We observed the HI line with the NRAO VLA and the CO(1$\\to$0) transition with the OVRO millimeter interferometer at $5''-6''$ angular resolution (750pc linear resolution). These datacubes offer the required spatial and velocity resolution to determine whether the mass concentration near the tip of the tail is a genuine feature, and hence a good TDG candidate, or an artefact caused by a fortuitous alignment of our line of sight with the direction of the tail. 
A preliminary analysis seems to confirm that Arp245N is a self\u2013gravitating entity.'\nauthor:\n- Elias Brinks\n- 'Pierre\u2013Alain Duc'\n- Fabian Walter\ntitle: VLA HI and OVRO CO Interferometry of a Tidal Dwarf Galaxy\n---\n\nIntroduction\n============\n\nTidal Dwarf Galaxies (TDGs) are objects resembling actively star forming dwarf galaxies and are assembled from the debris (tidal tails and bridges) launched into the IGM by violent galaxy interactions in which at least one member is a gas\u2013rich galaxy. They are composed
Pierre Larousse, 92245 Malakoff CEDEX, France]{}\ntitle: 'Pseudo-Bayesian Quantum Tomography with Rank-adaptation'\n---\n\nIntroduction\n============\n\nPlaying a vital role in quantum information processing, as well as being fundamental for characterizing quantum objects, quantum state tomography focuses on" -"---\nabstract: |\n In his *Foundations of a General Theory of Manifolds*, Georg Cantor praised Bernard Bolzano as a clear defender of actual infinity who had the courage to work with infinite numbers. At the same time, he sharply criticized the way Bolzano dealt with them. Cantor\u2019s concept was based on the existence of a *one-to-one correspondence*, while Bolzano insisted on Euclid\u2019s Axiom of *the whole being greater than a part*. Cantor\u2019s set theory eventually prevailed and became a formal basis of contemporary mathematics, while Bolzano\u2019s approach is generally considered a step in the wrong direction.\n\n In the present paper, we demonstrate that a fragment of Bolzano\u2019s theory of infinite quantities retaining the *part-whole principle* can be extended to a consistent mathematical structure. It can be interpreted in several possible ways. We obtain either a linearly ordered ring of finite and infinitely great quantities, or a partially ordered ring containing infinitely small, finite and infinitely great quantities. These structures can be used as a basis of the infinitesimal calculus as in Non-standard Analysis, whether in its full version employing ultrafilters due to Abraham Robinson, or in the recent \u201ccheap version\u201d avoiding ultrafilters due to Terence Tao.\nauthor:" -"---\nabstract: |\n This paper investigates the hedging performance of a pegged foreign exchange market in a regime switching (RS) model introduced in @drapeau2019. We compare two prices, an exact solution and a first-order approximation, and provide bounds for the error. 
We provide exact RS delta, approximated RS delta, as well as mean-variance hedging strategies for this specific model and compare their performance. To improve the efficiency of the pricing and calibration procedure, the Fourier approach of this regime-switching model is developed in our work. It turns out that: 1 \u2013 the calibration of the volatility surface with this regime switching model outperforms the classical SABR model on real data; 2 \u2013 the Fourier approach is significantly faster than the direct approach; 3 \u2013 in terms of hedging, the approximated RS delta hedge is a viable alternative to the exact RS delta hedge while being significantly faster.\\\n Pegged FX Markets; HKDUSD; Regime Switching; Mean-Variance Hedging; Fourier Approach.\naddress:\n- 'School of Mathematical Sciences & Shanghai Advanced Institute for Finance (CAFR), Shanghai Jiao Tong University, Shanghai, China'\n- 'School of Mathematical Sciences, Shanghai Jiao Tong University, Shanghai, China'\nauthor:\n- Samuel Drapeau\n- Yunbo Zhang\nbibliography:\n- 'biblio.bib'\ntitle: Pricing and Hedging Performance" -"---\nabstract: 'We start from the classical Hamiltonian constraint of general relativity to obtain the Einstein-Hamiltonian-Jacobi equation. We obtain a time parameter prescription demanding that geometry itself determines the time, not the matter field, such that the time so defined is equivalent to the time that enters into the Schroedinger equation. Without any reference to the Wheeler-DeWitt equation and without invoking the expansion of the exponent in the WKB wavefunction in powers of the Planck mass, we obtain an equation for quantum gravity in Schroedinger form containing time. We restrict ourselves to a minisuperspace description. 
Unlike the matter field equation, our equation is equivalent to the Wheeler-DeWitt equation in the sense that our solutions also reproduce the wavefunction of the Wheeler-DeWitt equation, provided one evaluates the normalization constant according to the wormhole dominance proposal recently proposed by us.'\nauthor:\n- |\n S.Biswas $^{*a),b)}$, A.Shaw $^{**a)}$, B.Modak$^{a)}$ and D.Biswas$^{a)}$\\\n a) Department of Physics, University of Kalyani, West Bengal, India, Pin.- 741235\\\n b) IUCAA, Post bag 4, Ganeshkhind, Pune 411 007, India\\\n $*$ email: sbiswas@klyuniv.ernet.in\\\n $**$ email:amita@klyuniv.ernet.in\ndate: today\ntitle: |\n Quantum Gravity Equation In Schroedinger Form\\\n In Minisuperspace Description\n---\n\nKeywords : Quantum Cosmology; Quantum Gravity; Time; Minisuperspace; Wavefunction of the Universe\n\nPACS No. - 04.60," -"---\nabstract: 'We describe a general procedure for associating a minimal informationally complete quantum measurement (or MIC) with a purely probabilistic representation of the Born Rule. Such representations provide a way to understand the Born Rule as a consistency condition between probabilities assigned to the outcomes of one experiment in terms of the probabilities assigned to the outcomes of other experiments. In this setting, the difference between quantum and classical physics is the way their physical assumptions augment bare probability theory: Classical physics corresponds to a trivial augmentation\u2014one just applies the Law of Total Probability (LTP) between the scenarios\u2014while quantum theory makes use of the Born Rule expressed in one or another of the forms of our general procedure. To mark the *irreducible* difference between quantum and classical, one should seek the representations that minimize the disparity between the expressions. 
We prove that the representation of the Born Rule obtained from a *symmetric* informationally complete measurement (or SIC) minimizes this distinction in at least two senses\u2014the first to do with unitarily invariant distance measures between the rules, and the second to do with available volume in a reference probability simplex (roughly speaking a new kind of uncertainty principle). Both of" -"---\nabstract: 'Quantum computers have the potential to be a profoundly transformative technology, particularly in the context of quantum chemistry. However, running a chemistry application that is demonstrably useful currently requires a prohibitive number of logical operations. For example, the canonical estimate of the number of operations required to simulate the molecule FeMoco, the key component in biological nitrogen fixation, requires around $10^{15}$ logical gates\u00a0[@Reiher2017]. A quantum computer that is capable of applying logical operations at 1\u00a0Mhz rates would require more than 30 years to complete such a calculation. It is imperative to reduce this prohibitive runtime, by better understanding and optimising quantum algorithms, if the technology is to have commercial utility. The purpose of this paper is to introduce such an optimisation. 
The gadget that we introduce below affords a 6x improvement in runtime for Trotterized quantum chemistry employing the Jordan-Wigner transformation, without altering the required number of qubits.'\nauthor:\n- Sam Pallister\nbibliography:\n- 'main.bib'\ntitle: 'A Jordan-Wigner gadget that reduces T count by more than 6x for quantum chemistry applications'\n---\n\nUpon completion of this manuscript, we became aware of the independent discovery of this result in\u00a0[@Wang2020].\n\nQuantum algorithms for quantum chemistry come in" -"---\nabstract: 'Using an atom\u2013vacancy exchange algorithm, we investigate the kinetics of the order\u2013disorder transition in an fcc $A_3B$ binary alloy model following a temperature quench from the disordered phase. We observe two clearly distinct ordering scenarios depending on whether the final temperature $T_f$ falls above or below the ordering spinodal $T_{sp}$, which is deduced from simulations at equilibrium. For shallow quenches ($T_f>T_{sp}$) we identify an incubation time $\\tau_{inc}$ which characterizes the onset of ordering through the formation of overcritical ordered nuclei. The algorithm we use together with experimental information on tracer diffusion in Cu$_3$Au alloys allows us to estimate the physical time scale connected with $\\tau_{inc}$ in that material. Deep quenches, $T_f 9.2$ min (2 per cent on $\\Delta t > 2.3$ min for many fluctuating regions). In addition, we study the afterglow VLT/FORS2 spectrum, the optical-to-X-ray spectral energy distribution (SED) and the time decay. The SED is best fit with a broken power law with slopes $\\beta_{\\mathrm{opt}}=0.71\\pm0.01$ and $\\beta_{X}=1.59\\pm0.07$, in disagreement with the fireball model, suggesting a non-standard afterglow for . We find $A_V=0.18\\pm0.03$ mag optical extinction due to SMC-like dust and an excess X-ray absorption of log $(N_{\\mathrm{H}}/$cm$^{-2})=21.58^{+0.18}_{-0.26}$ assuming Solar abundances. 
The spectral analysis reveals" -"---\nabstract: 'The electron-electron pair distribution functions (PDF) of the 2-D electron fluid (2DEF) in the quantum regime (at $T$=0) are calculated using a classical-map-hyper-netted-chain (CHNC) scheme and compared with currently available Quantum Monte-Carlo (QMC) simulations in the coupling range $r_s$=1 to 50. We iteratively extract the bridge function of the \u201cequivalent\u201d classical 2-D liquid in the quantum regime. These bridge functions $B(r)$ are relatively insensitive to spin-polarization effects. The structure of the bridge functions changes significantly for $r_s>6$, suggesting the onset of strongly correlated clusters. The new $B(r)$, appropriate for the long-range Coulomb potential, can be used to replace the hard-sphere $B(r)$ previously used in these calculations. They provide accurate classical representations of the QMC-PDFs even at very strong coupling, and probably at finite-$T$ near $T=0$.'\nauthor:\n- 'M.W.C. Dharma-wardana'\ntitle: ' The Coulomb bridge function and the Pair-distribution functions of the 2-dimensional electron liquid in the quantum regime. '\n---\n\nIntroduction\n============\n\nThe pair-distribution functions (PDFs) of strongly-coupled electron fluids contain all the physical information associated with the ground-state static properties of such systems. Exchange-correlation energies, phase-transitions, and Fermi-liquid parameters like the effective mass $m^*$ and the spin-susceptibility enhancement ($g^*$) can all be evaluated from the PDFs, as" -"---\nauthor:\n- 'E\u00a0Chimczak, T\u00a0Dunaj, M\u00a0Bertandt, A\u00a0Wieczorek, G\u00a0Neunert, G\u00a0Chimczak, M\u00a0Cie[\u017c]{}, M\u00a0[\u0141]{}ukasik'\ntitle: 'Spectral and kinetic properties of electroluminescence of ZnS:Cu powder in polymer structure'\n---\n\nIntroduction\n============\n\nElectroluminescence is a phenomenon that has long been of interest to many researchers. 
During several decades, many papers have been devoted to the phenomenon. The light emission from silicon carbide crystals excited by an applied voltage was first reported by Lossev in 1923\u00a0[@lossev23]. In 1936, Destriau made an electroluminescent cell based on zinc sulphide\u00a0[@destriau36]. At the end of the fifties, Thornton started work on electroluminescent devices with a vacuum-deposited semiconductor layer\u00a0[@thornton59]. In the seventies, much attention was paid to doubly insulated AC thin electroluminescent devices for flat panel displays\u00a0[@inoguchi74; @suyama82]. In 1990, many researchers focused on polymer light-emitting diodes\u00a0[@burroughes90]. Those studies are mainly concerned with the spectral, electrical and chemical properties of the diodes. In the present paper we are concerned with the spectral as well as the kinetic properties of the structure investigated.\n\nExperimental\n============\n\nFigure\u00a0\\[fig1\\] shows the structure of the cell investigated. The structure consists of a polymer substrate with a deposited transparent electrode, a luminophor in a polymer matrix, and a dielectric layer (${\\rm{BaTiO_{3}}}$" -"---\nabstract: 'Soft biometric information such as gender can contribute to many applications, including identification and security. This paper explores the use of a Binarized Statistical Image Features (BSIF) algorithm for classifying gender from iris texture images captured with NIR sensors. It uses the same pipeline as iris recognition systems, consisting of iris segmentation, normalisation and then classification. Experiments show that applying BSIF is not straightforward since it can create artificial textures causing misclassification. In order to overcome this limitation, a new set of filters was trained from eye images, and different sized filters with padding bands were tested on a subject-disjoint database. A Modified-BSIF (MBSIF) method was implemented. 
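The BSIF-style pipeline mentioned in the iris abstract above (filter-bank responses, sign binarization, bit packing into per-pixel codes, code histogram as a texture descriptor) can be illustrated in a few lines. This is a toy sketch only: random zero-mean filters stand in for the ICA-learned BSIF filter bank, and `filter_responses` is a hypothetical helper, not the authors' implementation.

```python
import numpy as np

def filter_responses(image, filt):
    """Valid-mode correlation of a 2-D image with one square filter."""
    k = filt.shape[0]
    H, W = image.shape
    out = np.empty((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = float(np.sum(image[i:i + k, j:j + k] * filt))
    return out

def bsif_codes(image, filters):
    """BSIF-style encoding: binarize each filter response by sign and
    pack the resulting bits into one integer code per pixel."""
    codes = None
    for bit, filt in enumerate(filters):
        bits = (filter_responses(image, filt) > 0).astype(np.int64)
        codes = bits << bit if codes is None else codes | (bits << bit)
    return codes

def code_histogram(codes, n_filters):
    """Normalized histogram over the 2**n_filters possible codes."""
    hist = np.bincount(codes.ravel(), minlength=2 ** n_filters)
    return hist / hist.sum()

rng = np.random.default_rng(0)
# Random zero-mean 7x7 filters stand in for the ICA-learned BSIF bank.
filters = [f - f.mean() for f in (rng.normal(size=(7, 7)) for _ in range(8))]
descriptor = code_histogram(bsif_codes(rng.random((64, 64)), filters), 8)
```

With 8 filters the descriptor is a 256-bin normalized histogram; a classifier for gender would then be trained on such descriptors.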
The latter achieved better gender classification results (94.6% and 91.33% for the left and right eye respectively). These results are competitive with the state of the art in gender classification. In an additional contribution, a novel gender labelled database was created and it will be available upon request.'\nauthor:\n- |\n Juan Tapia and Claudia Arellano\\\n Universidad Tecnologica de Chile - INACAP\\\n [j\\_tapiaf@inacap.cl]{}\\\n **A pre-print version of the paper accepted at 12th IAPR International Conference on Biometrics.**\nbibliography:\n- 'References\\_OC.bib'\ntitle: 'Gender Classification from Iris Texture Images Using a New Set" -"---\nabstract: |\n Compressed sensing (CS) demonstrates that sparse signals can be estimated from under-determined linear systems. Distributed CS (DCS) further reduces the number of measurements by considering joint sparsity within signal ensembles. DCS with jointly sparse signals has applications in multi-sensor acoustic sensing, magnetic resonance imaging with multiple coils, remote sensing, and array signal processing. Multi-measurement vector (MMV) problems consider the estimation of jointly sparse signals under the DCS framework. Two related MMV settings are studied. In the first setting, each signal vector is measured by a different independent and identically distributed (i.i.d.) measurement matrix, while in the second setting, all signal vectors are measured by the same i.i.d. matrix. Replica analysis is performed for these two MMV settings, and the minimum mean squared error (MMSE), which turns out to be identical for both settings, is obtained as a function of the noise variance and number of measurements. To showcase the application of MMV models, the MMSE\u2019s of complex CS problems with both real and complex measurement matrices are also analyzed. 
Multiple performance regions for MMV are identified where the MMSE behaves differently as a function of the noise variance and the number of measurements.\n\n Belief propagation (BP) is" -"---\nabstract: |\n The structures of order parameters which determine the bounds of the phase states in the framework of the $CP^{1}$ Ginzburg-Landau model were considered. Using the formulation of this model [@BFN] in terms of the gauged order parameters (the unit vector ${\\bf n}$, density $\\rho^{2}$ and momentum of particles ${\\bf c}$) we found that some universal properties of phases and field configurations are determined by the Hopf invariant, $Q$ and its generalizations. At a sufficiently high level of doping it was found that beyond the superconducting phase the charge distributions in the form of loops may be more preferable than those in the form of stripes. It was shown that in the phase with its mutual linking number $L 0.18 $ and $z_{OH} > 0.4$). Surveyors of nearby galaxies in the 21cm line may find that OH masers form a contaminant to deep, blind HI surveys for redshift velocities less than a few hundred kilometers per second. At" -"---\nabstract: 'We study how Reinforcement Learning can be employed to optimally control parameters in evolutionary algorithms. We control the mutation probability of a (1+1) evolutionary algorithm on the OneMax function. This problem is modeled as a Markov Decision Process and solved with Value Iteration via the known transition probabilities. It is then solved via $Q$-Learning, a Reinforcement Learning algorithm, where the exact transition probabilities are not needed. This approach also allows previous expert or empirical knowledge to be included into learning. 
It opens new perspectives, both formally and computationally, for the problem of parameter control in optimization.'\nauthor:\n- Luca Mossina$^1$\n- Emmanuel Rachelson$^1$\n- Daniel Delahaye$^2$\nbibliography:\n- 'paco\\_first.bib'\ndate: |\n $^1$ISAE-SUPAERO, Universit\u00e9 de Toulouse\\\n `name.surname@isae-supaero.fr`\\\n $^2$ENAC, Universit\u00e9 de Toulouse\\\ntitle: 'A Reinforcement Learning Perspective on the Optimal Control of Mutation Probabilities for the (1+1) Evolutionary Algorithm: First Results on the OneMax Problem'\n---\n\nProblem statement\n=================\n\nWe maximize the *OneMax* function: $OM(x) = \\sum_{i=1}^{n}x_i, \\forall x_i \\in \\{0,1\\}$ via the (1+1) Evolutionary Algorithm (EA) by which, given a random initialization of $x \\in \\{0,1\\}^n$, at every iteration, each of the bits is flipped (*mutated*) with probability $\\theta$, yielding a solution candidate $x'$. If $OM(x') > OM(x)$, $x'$" -"---\nabstract: |\n This paper presents a system capable of autonomously mapping the visible part of a bounded three-dimensional structure using a mobile ground robot equipped with a depth sensor. We describe motion planning strategies to determine appropriate successive viewpoints and attempt to fill holes automatically in a point cloud produced by the sensing and perception layer. We develop a local motion planner using potential fields to maintain a desired distance from the structure. The emphasis is on accurately reconstructing a 3D model of a structure of moderate size rather than mapping large open environments, with applications for example in architecture, construction and inspection. The proposed algorithms do not require any initialization in the form of a mesh model or a bounding box. We compare via simulations the performance of our policies to the classic frontier based exploration algorithm. 
We illustrate the efficacy of our approach for different structure sizes, levels of localization accuracy and range of the depth sensor.\n\n *Note to Practitioners\u2014* The objective of this work is to automate the process of building a 3D model of a structure of interest that is as complete as possible, using a mobile depth sensor, in the absence of any prior" -"---\nabstract: |\n The design of distributed gathering and convergence algorithms for tiny robots has recently received much attention. In particular, it has been shown that convergence problems can even be solved for very weak, *oblivious* robots: robots which cannot maintain state from one round to the next. The oblivious robot model is hence attractive from a self-stabilization perspective, where state is subject to adversarial manipulation. However, to the best of our knowledge, all existing robot convergence protocols rely on the assumption that robots, despite being \u201cweak\u201d, can measure distances.\n\n We in this paper initiate the study of convergence protocols for even simpler robots, called *monoculus robots*: robots which cannot measure distances. In particular, we introduce two natural models which relax the assumptions on the robots\u2019 cognitive capabilities: (1) a Locality Detection ($\\mathcal{LD}$) model in which a robot can only detect whether another robot is closer than a given constant distance or not, (2) an Orthogonal Line Agreement ($\\mathcal{OLA}$) model in which robots only agree on a pair of orthogonal lines (say North-South and West-East, but without knowing which is which).\n\n The problem turns out to be non-trivial, and simple median and angle bisection strategies can easily increase the distances" -"---\nabstract: 'The half filled Landau level is expected to be approximately particle-hole symmetric, which requires an extension of the Halperin-Lee-Read (HLR) theory of the compressible state observed at this filling. 
Recent work indicates that, when particle-hole symmetry is preserved, the composite Fermions experience a quantized $\\pi$-Berry phase upon winding around the composite Fermi-surface, analogous to Dirac fermions at the surface of a 3D topological insulator. In contrast, the effective low energy theory of the composite fermion liquid originally proposed by HLR lacks particle-hole symmetry and has vanishing Berry phase. In this paper, we explain how thermoelectric transport measurements can be used to test the Dirac nature of the composite Fermions by quantitatively extracting this Berry phase. First we point out that longitudinal thermopower (Seebeck effect) is non-vanishing due to the unusual nature of particle hole symmetry in this context and is not sensitive to the Berry phase. In contrast, we find that off-diagonal thermopower (Nernst effect) is directly related to the topological structure of the composite Fermi surface, vanishing for zero Berry phase and taking its maximal value for $\\pi$ Berry phase. In contrast, in purely electrical transport signatures the Berry phase contributions appear as small corrections to a" -"---\nabstract: 'Many automatically analyzable scientific questions are well-posed and offer a variety of information about the expected outcome *a priori*. Although often being neglected, this prior knowledge can be systematically exploited to make automated analysis operations sensitive to a desired phenomenon or to evaluate extracted content with respect to this prior knowledge. For instance, the performance of processing operators can be greatly enhanced by a more focused detection strategy and the direct information about the ambiguity inherent in the extracted data. We present a new concept for the estimation and propagation of uncertainty involved in image analysis operators. 
This allows using simple processing operators that are suitable for analyzing large-scale spatiotemporal (3D+t) microscopy images without compromising the result quality. On the foundation of fuzzy set theory, we transform available prior knowledge into a mathematical representation and extensively use it to enhance the result quality of various processing operators. All presented concepts are illustrated on a typical bioimage analysis pipeline comprising seed point detection, segmentation, multiview fusion and tracking. Furthermore, the functionality of the proposed approach is validated on a comprehensive simulated 3D+t benchmark data set that mimics embryonic development and on large-scale light-sheet microscopy data of a zebrafish embryo." -"---\nabstract: 'Vacancy-induced magnetization of a graphene layer is investigated by means of a first-principles DFT method. Calculations of the formation energy and the magnetization for different numbers of vacancies in a supercell show that clustering of a large number of vacancies is more favorable than isolated vacancies homogeneously distributed in the layer. The magnetic moment of a cluster with a large number of vacancies is shown not to be proportional to the vacancy concentration, which is in good agreement with recent experimental results. Our studies support the idea that although vacancies in graphene create a magnetic moment, they do not produce a magnetic ordering. It is shown that, although Lieb\u2019s rule for the magnetization in a hexagonal structure is violated, two vacancies in the supercell, including a di-vacancy, generate a quasi-localized state when they belong to different sublattices, and instead generate an extended state when they belong to the same sublattice. 
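Returning to the (1+1) Evolutionary Algorithm on the OneMax function described in the reinforcement-learning abstract above, a minimal sketch follows. The iteration cap and seed are arbitrary choices, and acceptance uses the strict improvement $OM(x') > OM(x)$ quoted in the text:

```python
import random

def one_max(x):
    # OneMax: the number of 1-bits in the bit string.
    return sum(x)

def one_plus_one_ea(n, theta, max_iters=200000, seed=0):
    """(1+1) EA on OneMax with a static mutation probability theta.
    Returns the best solution found and the iterations used."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    for t in range(max_iters):
        if one_max(x) == n:
            return x, t
        # Flip each bit independently with probability theta.
        y = [1 - b if rng.random() < theta else b for b in x]
        if one_max(y) > one_max(x):  # accept strict improvements, as in the text
            x = y
    return x, max_iters

best, iters = one_plus_one_ea(n=50, theta=1 / 50)
```

With the standard choice $\theta = 1/n$ the expected optimization time on OneMax is $O(n \log n)$, so the cap above is generous; a parameter-control method would adapt `theta` online instead of fixing it.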
Analytical investigation of the dynamics of carbon atom- and vacancy-concentrations according to the non-linear continuity equations shows that the vacancies, produced by irradiation at the middle of a graphene layer, migrate to the edge of the sample resulting" -"---\nabstract: 'We have studied the dynamics of an equal-mass magnetized neutron-star binary within a resistive magnetohydrodynamic (RMHD) approach in which the highly conducting stellar interior is matched to an electrovacuum exterior. Because our analysis is aimed at assessing the modifications introduced by resistive effects on the dynamics of the binary after the merger and through to collapse, we have carried out a close comparison with an equivalent simulation performed within the traditional ideal magnetohydrodynamic approximation. We have found that there are many similarities between the two evolutions but also one important difference: the survival time of the hypermassive neutron star increases in a RMHD simulation. This difference is due to a less efficient magnetic-braking mechanism in the resistive regime, in which matter can move across magnetic-field lines, thus reducing the outward transport of angular momentum. Both the RMHD and the ideal magnetohydrodynamic simulations carried here have been performed at higher resolutions and with a different grid structure than those in previous work of ours \\[L. Rezzolla, B. Giacomazzo, L. Baiotti, J. Granot, C. Kouveliotou, and M. A. Aloy, Astrophys. J. Letters 732, L6 (2011)\\], but confirm the formation of a low-density funnel with an ordered magnetic field produced by" -"---\nabstract: 'We construct a Galacto-Local Group rotation curve, combining the Galactic rotation curve with a diagram, where galacto-centric radial velocities of outer globular clusters and member galaxies of the Local Group are plotted against their galacto-centric distances. 
The high-velocity ends of this pseudo rotation curve within a radius $R\\sim150$ kpc are well traced by a rotation curve calculated for the NFW (Navarro, Frenk, White) and Burkert dark halo models. The NFW model indicates that the Galaxy\u2019s mass within 385 kpc, half the distance to M31, is $\\sim 4\\meleven$. High-velocity ends of the pseudo rotation curve for the entire Local Group up to 1.5 Mpc indicate an isothermal nature with a terminal velocity of $\\sim 200$ . In order for the Local Group to be gravitationally bound, an order of magnitude larger mass than those of the Galaxy and M31 is required. This fact suggests that the Local Group contains dark matter of mass $\\sim 5\\mtwelve$, filling the space between the Galaxy and M31. The mass density of the Galactic dark halo becomes equal to that of the Local Group\u2019s dark matter at $R \\sim 100$ kpc, beyond which the intracluster dark matter dominates. If we define the Galaxy\u2019s radius at this
The issue of the minimum intensity of wall forcing required to produce a non-zero drag reduction effect and the dependence of the drag reduction on the Reynolds number are also addressed. The drag reduction data available in the literature are compared with the prediction given by the scaling parameter, thus attaining a comprehensive view of the state of the art.'\naddress:\n- |\n Department of Mathematics, Imperial College London\\\n 180 Queen\u2019s Gate -" -"---\nabstract: 'Learning semantic segmentation models under image-level supervision is far more challenging than under fully supervised setting. Without knowing the exact pixel-label correspondence, most weakly-supervised methods rely on external models to infer pseudo pixel-level labels for training semantic segmentation models. In this paper, we aim to develop a single neural network without resorting to any external models. We propose a novel self-guided strategy to fully utilize features learned across multiple levels to progressively generate the dense pseudo labels. First, we use high-level features as class-specific localization maps to roughly locate the classes. Next, we propose an affinity-guided method to encourage each localization map to be consistent with their intermediate level features. Third, we adopt the training image itself as guidance and propose a self-guided refinement to further transfer the image\u2019s inherent structure into the maps. Finally, we derive pseudo pixel-level labels from these localization maps and use the pseudo labels as ground truth to train the semantic segmentation model. Our proposed self-guided strategy is a unified framework, which is built on a single network and alternatively updates the feature representation and refines localization maps during the training procedure. 
Experimental results on PASCAL VOC 2012 segmentation benchmark demonstrate that our" -"---\nabstract: 'We propose a new sparse regression method called the [*component lasso*]{}, based on a simple idea. The method uses the connected-components structure of the sample covariance matrix to split the problem into smaller ones. It then applies the lasso to each subproblem separately, obtaining a coefficient vector for each one. Finally, it uses non-negative least squares to recombine the different vectors into a single solution. This step is useful in selecting and reweighting components that are correlated with the response. Simulated and real data examples show that the component lasso can outperform standard regression methods such as the lasso and elastic net, achieving a lower mean squared error as well as better support recovery. The modular structure also lends itself naturally to parallel computation.'\nauthor:\n- Nadine Hussami and Robert Tibshirani\nbibliography:\n- 'tibs.bib'\ndate:\n- |\n Department of Electrical Engineering and\\\n Departments of Health Research and Policy, and Statistics,\\\n Stanford University, Stanford CA.;\\\n nadinehu@stanford.edu; tibs@stanford.edu\n- November 2013\ntitle: A Component Lasso\n---\n\n[**Keywords.**]{} Lasso, elastic net, graphical lasso, sparsity, connected components, $\\ell_1$-minimization, non-negative least squares, grouping effect.\n\nIntroduction {#sec:intro}\n============\n\nSuppose that we have a response vector $y\\in\\R^n$, a matrix $X \\in \\R^{n\\times p}$ of predictor" -"---\nabstract: 'The rapid development of the magnetic tunnel junction (MTJ) spin torque oscillator (STO) technology demands an analytical model to enable building MTJ STO-based circuits and systems so as to evaluate and utilize MTJ STOs in various applications. 
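The component-lasso recipe summarized in the abstract above (split the variables via the connected components of the sample covariance, run the lasso on each block separately, then recombine the per-block fits with non-negative least squares) can be sketched as follows. The covariance threshold `cov_tol` and the plain coordinate-descent lasso solver are simplifications for illustration, not the authors' implementation:

```python
import numpy as np
from scipy.optimize import nnls

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate-descent lasso for (1/2n)||y - Xb||^2 + lam*||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    r = y - X @ b
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * b[j]                # partial residual without j
            rho = X[:, j] @ r / n
            b[j] = soft_threshold(rho, lam) / col_sq[j]
            r -= X[:, j] * b[j]
    return b

def connected_components(adj):
    """Label the components of the graph whose edges are True in adj."""
    p = adj.shape[0]
    labels = -np.ones(p, dtype=int)
    comp = 0
    for s in range(p):
        if labels[s] >= 0:
            continue
        stack, labels[s] = [s], comp
        while stack:
            u = stack.pop()
            for v in np.nonzero(adj[u])[0]:
                if labels[v] < 0:
                    labels[v] = comp
                    stack.append(v)
        comp += 1
    return labels, comp

def component_lasso(X, y, lam, cov_tol=1e-8):
    """Sketch of the component lasso: lasso per covariance component,
    then non-negative least squares to reweight the components."""
    S = np.cov(X, rowvar=False)
    labels, k = connected_components(np.abs(S) > cov_tol)
    fits, betas = np.zeros((X.shape[0], k)), []
    for c in range(k):
        idx = np.where(labels == c)[0]
        b_c = lasso_cd(X[:, idx], y, lam)
        betas.append((idx, b_c))
        fits[:, c] = X[:, idx] @ b_c
    w, _ = nnls(fits, y)                       # non-negative reweighting
    beta = np.zeros(X.shape[1])
    for c, (idx, b_c) in enumerate(betas):
        beta[idx] = w[c] * b_c
    return beta

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))
y = 2.0 * X[:, 0] + 0.01 * rng.normal(size=300)
beta = component_lasso(X, y, lam=0.01, cov_tol=0.3)  # three singleton components here
```

The per-component subproblems are independent, which is what makes the method parallelize naturally.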
In Part I of this paper, an analytical model based on the macrospin approximation, has been introduced and verified by comparing it with the measurements of three different MTJ STOs. In Part II, the full Verilog-A implementation of the proposed model is presented. To achieve a reliable model, an approach to reproduce the phase noise generated by the MTJ STO has been proposed and successfully employed. The implemented model yields a time domain signal, which retains the characteristics of operating frequency, linewidth, oscillation amplitude and DC operating point, with respect to the magnetic field and applied DC current. The Verilog-A implementation is verified against the analytical model, providing equivalent device characteristics for the full range of biasing conditions. Furthermore, a system that includes an MTJ STO and CMOS RF circuits is simulated to validate the proposed model for system- and circuit-level designs. The simulation results demonstrate that the proposed model opens the possibility to explore STO technology in a wide range" -"---\nabstract: 'We present a model study of an alternative implementation of a two-port Hall-effect microwave gyrator. Our set-up involves three electrodes, one of which acts as a common ground for the others. Based on the capacitive-coupling model of Viola and DiVincenzo, we analyze the performance of the device and we predict that ideal gyration can be achieved at specific frequencies. Interestingly, the impedance of the three-terminal gyrator can be made arbitrarily small for certain coupling strengths, so that no auxiliary impedance matching is required. Although the bandwidth of the device shrinks as the impedance decreases, it can be improved by reducing the magnetic field; it can be realistically increased up to $ 150 \\mathrm{MHz}$ at $50\\mathrm{\\Omega}$ by working at filling factor $\\nu=10$. 
We also examine the effects of the parasitic capacitive coupling between electrodes, and we find that, although in general they strongly influence the response of the device, their effect is negligible at low impedance. Finally, we analyze an interferometric implementation of a circulator, which incorporates the gyrator in a Mach-Zehnder-like construction. Perfect circulation in both directions can be achieved, depending on frequency and on the details of the interferometer.'\nauthor:\n- 'S. Bosco[^1,3^]{}'\n- 'F. Haupt[^1,3^]{}'\n- 'D." -"---\nabstract: 'A detailed study of the effects of phase fluctuation and dephasing on the dynamics of the entanglement generated from a coherently pumped correlated emission laser is presented. It is found that the time evolution of the entanglement is significantly reliant on the phase fluctuation and dephasing, particularly at early stages of the lasing process. In the absence of external driving radiation, the degree of entanglement and the intensity turn out to attain a maximum value just before starting to exhibit an oscillation that dies out on longer time scales. However, when the driving mechanism is on, the oscillatory nature disappears due to the additional induced coherent superposition, and the degree of entanglement is larger at steady state. Moreover, the degree of entanglement as predicted by the logarithmic negativity and the Duan-Giedke-Cirac-Zoller criteria exhibits a similar nature when there is no driving radiation, although such a trend is eroded with increasing strength of the pumping radiation at longer time scales. 
The other important aspect of the phase fluctuation and dephasing is the possibility of relaxing the time at which the maximum entanglement is detected.'\nauthor:\n- Sintayehu Tesfa\ntitle: Effect of phase fluctuation and dephasing on the dynamics of entanglement" -"---\nabstract: |\n As the meta-analysis of more than one diagnostic tests can impact clinical decision making and patient health, there is an increasing body of research in models and methods for meta-analysis of diagnostic studies which compare the accuracy of more than one tests. The application of the existing models to compare the accuracy of three or more tests suffers from the curse of multi-dimensionality, i.e., either the number of model parameters increase rapidly or high dimensional integration is required. To overcome these issues in joint meta-analysis of studies comparing $T >2$ diagnostic tests, we propose a model that assumes the true positives and true negatives for each test are conditionally independent and binomially distributed given the $2T$-variate latent vector of sensitivities and specificities. For the random effects distribution, we employ an one-factor copula that provides flexible reflection asymmetric tail and non-linear dependence. Maximum likelihood estimation of the model is straightforward as the derivation of the likelihood requires bi-dimensional instead of $2T$-dimensional integration. Our methodology is demonstrated with an extensive simulation study and an application example that determines which is the best test for the diagnosis of rheumatoid arthritis.\n\n **Key Words:** Diagnostic tests; factor copulas; multivariate meta-analysis; mixed models;" -"---\nabstract: |\n Strongly lensed supernovae can be detected as multiply imaged or highly magnified transients. 
In order to compare the performances of these two observational strategies, we calculate expected discovery rates as a function of survey depth in five $grizy$ filters and for different classes of supernovae (types Ia, IIP, IIL, Ibc and IIn). We find that detection via magnification is the only effective strategy for relatively shallow pre-LSST surveys. For survey depths around the LSST capacity, both strategies yield comparable numbers of lensed supernovae. Supernova samples from the two methods are to a large extent independent, and combining them increases detection rates by about 50 per cent. While the number of lensed supernovae detectable via magnification saturates at the limiting magnitudes of LSST, detection rates of multiply imaged supernovae still rise drastically with increasing survey depth. Comparing potential discovery spaces, we find that lensed supernovae found via image multiplicity exhibit longer time delays and larger image separations, making them more suitable for cosmological constraints than their counterparts found via magnification.\n\n We provide useful fitting functions approximating the computed discovery rates for different supernova classes and detection methods. We find that the Zwicky Transient Facility will find about" -"---\nabstract: |\n We study the latitudinal distribution and evolution of sunspot areas from Solar Cycles 12\u2013Solar Cycles 23 (SC12-SC23) and of sunspot groups from Solar Cycles 8\u2013Solar Cycles 23 (SC8-SC23) for even and odd cycles. The Rician distribution is the best-fit function for both even and odd sunspot-group latitudinal occurrence. The mean and variance for even northern/southern butterfly wing sunspots are 14.94/14.76 and 58.62/56.08, respectively, and the mean and variance for odd northern/southern wing sunspots are 15.52/15.58 and 61.77/58.00, respectively. 
Sunspot groups of even cycle wings are thus at somewhat lower latitudes on average than sunspot groups of the odd cycle wings, i.e., about 0.6 degrees for northern hemisphere wings and 0.8 degrees for southern hemisphere wings.\n\n The spatial analysis of sunspot areas between SC12-SC23 shows that small sunspots are at lower solar latitudes than large sunspots for both odd and even cycles, and also for both hemispheres.\n\n Temporal evolution of sunspot areas shows a lack of large sunspots after four years (exactly between 4.2\u20134.5 years), i.e., about 40% of the way into the cycle, especially for even cycles. This is related to the Gnevyshev gap and is occurring at the time
This mechanism ensures that regardless of the physical atomic density, light at any given frequency only interacts with at most a few" -"---\nauthor:\n- 'M. Dwornik [^1] Zs. Horv\u00e1th,'\n- 'L.\u00c1. Gergely'\ntitle: 'Weak and strong field approximations and circular orbits of Kehagias-Sfetsos space-time'\n---\n\nIntroduction\n============\n\nGeneral relativity (GR) has been precisely tested on the Solar system scale, however the very small and very large distance behaviour of gravity is less well verified, leading to numerous proposed modifications of GR. Recently Ho\u0159ava proposed a modification of GR at high energies, motivated by the Lifshitz scalar field theory in solid state physics. The Ho\u0159ava-Lifshitz (HL) gravitational theory introduces anisotropy between space and time. A recent review of its Lorentz invariance violation, occurring at trans-Planckian energy\\\nscales is presented in ([@visser2011]). Among the several proposed versions of the HL theory, the infrared (IR)-modified Ho\u0159ava gravity is the one which seems to be consistent with the current observational data ([@kono2009; @chen2009; @chenwang]).\n\nThe spherically symmetric space-time in vacuum HL gravity is characterized by the family of metrics ([@radi]) $$ds^{2}=-f(r)dt^{2}+f^{-1}(r)dr^{2}+r^{2}(d\\theta ^{2}+\\sin ^{2}\\theta\nd\\varphi ^{2})\\;,$$ with $$f(r)=1+(\\omega -\\Lambda )r^{2}-\\sqrt{r\\left[ \\omega \\left( \\omega -2\\Lambda\n\\right) r^{3}+\\beta \\right] }\\;. \\label{metric}$$ Here $\\beta $ is an integration constant, while $\\Lambda $ and $\\omega $ are real parameters. Depending on the values of $\\beta $ , $\\omega $ and" -"---\nabstract: 'In this paper, the design and analysis of a new bandwidth-efficient signaling method over the bandlimited intensity-modulated direct-detection (IM/DD) channel is presented. The channel can be modeled as a bandlimited channel with nonnegative input and additive white Gaussian noise (AWGN). 
Due to the nonnegativity constraint, standard methods for coherent bandlimited channels cannot be applied here. Previously established techniques for the IM/DD channel require bandwidth twice the required bandwidth over the conventional coherent channel. We propose a method to transmit without intersymbol interference in a bandwidth no larger than the bit rate. This is done by combining Nyquist or root-Nyquist pulses with a constant bias and using higher-order modulation formats. In fact, we can transmit with a bandwidth equal to that of coherent transmission. A trade-off between the required average optical power and the bandwidth is investigated. Depending on the bandwidth required, the most power-efficient transmission is obtained by the parametric linear pulse, the so-called \u201cbetter than Nyquist\" pulse, or the root-raised cosine pulse.'\nauthor:\n- |\n \\\n [^1]\nbibliography:\n- 'MyDatabase\\_EA.bib'\ntitle: Bandlimited Intensity Modulation\n---\n\nIntensity-modulated direct-detection (IM/DD), strictly bandlimited signaling.\n\nIntroduction\n============\n\ndemand for high-speed data transmission systems has introduced new design paradigms for optical communications." -"---\nabstract: 'When smooth, zero-on-average, periodic magnetic and electric fields are applied to a carbon mono-layer (graphene), a gap between the valence and conduction band is introduced. Here this gapped state is studied analytically. It is found that it does not correspond to a band insulator: a constant electric field induces a quantized Hall current even though the magnetic flux through the sample is zero and there are no Landau levels. The phenomenon is of the same type as that discovered by Haldane for a graphene sample in a periodic magnetic field that is not smooth, i.e. varies rapidly on the scale of the graphene lattice constant. The effect can be explained in terms of the topological theory of Thouless, Kohmoto, Nightingale and den Nijs. 
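The biasing idea described in the IM/DD abstract above (a Nyquist pulse train shifted by a constant so the transmitted intensity stays nonnegative) can be illustrated with a raised-cosine pulse. The symbol alphabet, roll-off, and the per-realization bias below are illustrative simplifications; the paper works with root-Nyquist and "better than Nyquist" pulses and a worst-case constant bias.

```python
import numpy as np

def raised_cosine(t, T, beta):
    """Raised-cosine (Nyquist) pulse with symbol period T and roll-off beta,
    with the removable singularity at |t| = T/(2*beta) handled explicitly."""
    x = np.asarray(t, dtype=float) / T
    denom = 1.0 - (2.0 * beta * x) ** 2
    sing = (np.pi / 4.0) * np.sinc(1.0 / (2.0 * beta))  # limit value at the singularity
    safe = np.abs(denom) > 1e-8
    return np.where(safe,
                    np.sinc(x) * np.cos(np.pi * beta * x) / np.where(safe, denom, 1.0),
                    sing)

T, beta = 1.0, 0.25
t = np.arange(0.0, 16.0, 1.0 / 32.0)          # 16 symbol periods, 32 samples each
rng = np.random.default_rng(2)
symbols = rng.choice([0, 1, 2, 3], size=16)   # illustrative 4-PAM amplitudes

# Unbiased pulse-amplitude waveform, then shift it into the nonnegative range.
s = np.zeros_like(t)
for k, a in enumerate(symbols):
    s += a * raised_cosine(t - k * T, T, beta)
bias = max(0.0, -s.min())   # per-realization bias; the paper uses a worst-case constant
s_nonneg = s + bias
```

Because the pulse vanishes at nonzero integer multiples of $T$, sampling `s` at the symbol instants recovers the amplitudes without intersymbol interference, while the added bias costs average optical power, which is the trade-off the abstract studies.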
For the system studied in this paper, an explanation in terms of simple physical principles is also presented. Thus some of the mystery is taken out of the apparently strange phenomenon of a Hall effect without magnetic flux. Furthermore, Haldane\u2019s model requires control over external magnetic fields on length scales less than an angstrom and is therefore hard to realize experimentally. For the model studied here, control over external fields on length scales that are" -"---\naddress: |\n Department of Mathematics\\\n Polytechnic University\\\n Six Metrotech Center\\\n Brooklyn NY 11201\nauthor:\n- Deane Yang\ntitle: 'Gunther\u2019s proof of Nash\u2019s isometric embedding theorem'\n---\n\nPreface\n=======\n\nAround 1987 a German mathematician named Matthias Gunther found a new way of obtaining the existence of isometric embeddings of a Riemannian manifold. His proof appeared in [@Gun89b; @Gun91]. His approach avoids the so-called Nash-Moser iteration scheme and, therefore, the need to prove smooth tame or Moser-type estimates for the inverse of the linearized operator. This simplifies the proof of Nash\u2019s isometric embedding theorem [@Nas56] considerably.\n\nThis is an informal expository note describing his proof. It was originally written because, when I first learned Gunther\u2019s proof, it had not appeared either in preprint or published form, and I felt that everyone should know about it. Moreover, since he is at Leipzig, which at the time was part of East Germany, very few mathematicians in the U.S. 
knew about him or his proof.\n\nSince many still seem to be unaware of Gunther\u2019s proof, even after he gave a talk at the International Congress of Mathematicians at Kyoto in 1990 and published his proof in the proceedings [@Gun91], I have updated this note" -"---\nabstract: 'Whilst there are a plethora of algorithms for detecting changes in mean in univariate time-series, almost all struggle in real applications where there is autocorrelated noise or where the mean fluctuates locally between the abrupt changes that one wishes to detect. In these cases, default implementations, which are often based on assumptions of a constant mean between changes and independent noise, can lead to substantial over-estimation of the number of changes. We propose a principled approach to detect such abrupt changes that models local fluctuations as a random walk process and autocorrelated noise via an AR(1) process. We then estimate the number and location of changepoints by minimising a penalised cost based on this model. We develop a novel and efficient dynamic programming algorithm, DeCAFS, that can solve this minimisation problem; despite the additional challenge of dependence across segments, due to the autocorrelated noise, which makes existing algorithms inapplicable. Theory and empirical results show that our approach has greater power at detecting abrupt changes than existing approaches. We apply our method to measuring gene expression levels in bacteria.'\nauthor:\n- |\n Gaetano Romano\\\n Department of Mathematics and Statistics,\\\n Lancaster University, Lancaster, UK\\\n \u00a0\\\n Guillem Rigaill\\\n Universit\u00e9 Paris-Saclay, CNRS," -"---\nauthor:\n- Seth Lloyd\n- Vazrik Chiloyan\n- Yongjie Hu\n- Samuel Huberman\n- 'Zi-Wen Liu'\n- Gang Chen\ntitle: No energy transport without discord\n---\n\n[**Quantum systems can be correlated in ways that classical systems can not. 
A wide variety of non-classical forms of correlation exist [@nc; @zurek; @zurekmd; @hv; @ohhh; @phh; @wpm; @mpsvw; @datta; @dg; @luo; @hjpw; @bennett; @lcs; @ws]: amongst the best known are entanglement [@nc] and discord [@zurek; @zurekmd; @hv; @ohhh; @datta; @dg]. Quantum correlations can be used to enhance measurement accuracy [@glm] and energy transport [@reben; @ph; @goldi]. This paper shows that quantum correlations \u2013 in the form of discord \u2013 are mandatory for [*any*]{} energy transport. Without discord, energy transport cannot occur. Moreover, we show that the initial rate of heat transfer between two systems prepared at different temperatures is directly proportional to the rate of increase in diagonal/energetic discord [@lcs] between the systems. We measured the increase of energetic discord induced by nanoscale heat flow across an aluminum-sapphire interface. The rate of increase of discord is measured to be $\\bf{ 4.28 \\times 10^{24}}$ bits ${\\rm \\bf m^{-2}}$ ${\\rm\\bf K^{-1}}$ ${\\rm\\bf s^{-1}}$.**]{}\n\nDiscord measures the difference between quantum mutual information and the classical" -"---\nabstract: |\n The computation power of supercomputers grows faster than the bandwidth of their storage and network. In particular, applications using hardware accelerators like Nvidia GPUs cannot save enough data to be analyzed in a later step. There is a high risk of losing important scientific information. We introduce the in situ template library ISAAC which enables arbitrary applications like scientific simulations to live visualize their data without the need for deep copy operations or data transformation, using the very same compute node and hardware accelerator the data is already residing on. Arbitrary meta data can be added to the renderings and user-defined steering commands can be asynchronously sent back to the running application. 
Using an aggregating server, ISAAC streams the interactive visualization video and enables users to access their applications from everywhere.\n\n [^1]\n\n *Keywords: HPC, in situ, visualization, live rendering, petascale, particle-in-cell, C++11, CUDA, Alpaka, FOSS*\nauthor:\n- Alexander\u00a0Matthes\n- Axel\u00a0Huebl\n- Ren\u00e9\u00a0Widera\n- Sebastian\u00a0Grottel\n- Stefan\u00a0Gumhold\n- Michael\u00a0Bussmann\nbibliography:\n- 'citations.bib'\ndate: 'September 17, 2016'\ntitle: 'In situ, steerable, hardware-independent and data-structure agnostic visualization with ISAAC'\n---\n\n[ ]{}\n\n[ ]{}\n\n[^1]: This work is submitted for publication to Supercomputing frontiers" -"---\nabstract: 'We propose a two-terminal spin-orbit interferometer with a hot molecule inserted in one of its arms to generate pure spin currents. Local heating is achieved by coupling the vibrational modes of the molecule to a third (phononic) reservoir. We show that this spin calorimetric effect is due to the combined influence of spin-dependent wave interference and inelastic scattering. Remarkably, the device converts heat flow into spin-polarized current even without applying any voltage or temperature difference to the electronic terminals.'\nauthor:\n- 'Sun-Yong Hwang'\n- Jong Soo Lim\n- Rosa L\u00f3pez\n- Minchul Lee\n- David S\u00e1nchez\ntitle: Proposal for a local heating driven spin current generator\n---\n\nRecent experimental demonstrations of spin-polarized currents using thermal gradients only[@uch08; @sla10] have fueled the interest in finding synergies between thermoelectricity and spintronics. Thus, the field of spin caloritronics[@bau10] seeks new functionalities that exploit the coupling of charge, spin and energy degrees of freedom in nanostructures. Here we propose a molecule-based spin caloritronic device that extracts heat from a nearby phonon bath and transforms it into a spin current that flows out into coupled electronic reservoirs. 
Crucial to our setup is the presence of tunable spin-orbit interactions that cause traveling electrons to" -"---\nabstract: 'Deep Neural Networks (DNNs) have a significant impact on medical imaging. One significant problem with adopting DNNs for skin cancer classification is that the class frequencies in the existing datasets are imbalanced. This problem hinders the training of robust and well-generalizing models. Data augmentation addresses this by using existing data more effectively. However, standard data augmentation implementations are manually designed and produce only a limited range of plausible alternative data. Instead, Generative Adversarial Networks (GANs) are utilized to generate a much broader set of augmentations. This paper proposes a novel enhancement for the progressive generative adversarial network (PGAN) using a self-attention mechanism. The self-attention mechanism is used to directly model the long-range dependencies in the feature maps. Accordingly, self-attention complements PGAN to generate fine-grained samples that comprise clinically-meaningful information. Moreover, a stabilization technique was applied to the enhanced generative model. To train the generative models, the ISIC 2018 skin lesion challenge dataset was used to synthesize highly realistic skin lesion samples to further boost the classification result. We achieve an accuracy of 70.1%, which is 2.8% better than the non-augmented accuracy of 67.3%.'\naddress: 'Department of Computer Science, Faculty of Computer and Information, Assiut University, Assiut, 71511'\nauthor:\n- Ibrahim Saad Ali\n- Mamdouh Farouk Mohamed" -"---\nabstract: 'We present a high resolution polarimetry and variability study of the M87 jet using *[VLA]{} and *HST* data taken during 2002 to 2008. Both data-sets have an angular resolution as high as 0.06$\"$, which is 2-3 times better than previous observations. 
New morphological details are revealed in both the optical and radio, which can help to probe the energetic and magnetic field structure of the jet. By comparing the data with previously published *HST* and *VLA* observations, we show that the jet\u2019s morphology in total and polarized light is changing significantly on timescales of $\\sim$a decade. We compare the evolution of the inner jet (particularly the nucleus and knot HST-1), when our observations overlap with the multi-wavelength monitoring campaigns conducted with *HST* and *Chandra*. We use these data to comment on particle acceleration and main emission processes.*'\n---\n\nIntroduction\n============\n\nThe radio galaxy M87 hosts one of the best-known extragalactic jets. Because of its proximity (d=16 Mpc, translating to a projected scale of about 80 pc per arcsec) and high surface brightness from radio through X-rays, studies of its jet emission can be undertaken at the highest resolutions in more wavebands than any other object. Its close proximity" -"---\nabstract: |\n A neural-net-like model, which is realizable using quantum holography, is proposed for quantum associative memory and pattern recognition. This Hopfield-based mathematical model/algorithm, translated to quantum formalism, has been successfully tested in computer simulations of concrete pattern-recognition applications. In parallel, the same mathematics governs quantum dynamics which can be harnessed for information processing by proper (de)coding manipulation. 
Since we are able to give a quantum interpretation to all the elements (e.g., variables, couplings) of the model, and since we are able to show that processing, governed by that mathematics, is experimentally implementable in real quantum systems, we can expect efficient quantum computing \u2013 in our case pattern recognition based on quantum content-addressable associative memory.\\\n \\\n quantum, pattern recognition, Hopfield, neural net, holography, phase, associative memory\\\n \\\nauthor:\n- |\n Mitja Peru\u0161 [^1] \u00a0and Horst Bischof\\\n Graz University of Technology, Institute for Computer Vision and Graphics\\\n Inffeldgasse 16, 2.OG, A-8010 Graz, Austria\\\n www.icg.tu-graz.ac.at/$\\sim$perus & $\\sim$bischof\ntitle: |\n [**Quantum-wave pattern recognition:\\\n from simulations towards implementation**]{}\n---\n\n\\\nQuantum neural nets [@kasabov; @nq] are a branch of quantum computers needing no logic gates. It will be shown that the implementation of associative neural nets can be [*naturally*]{}-physical, i.e. no artificial" -"---\nabstract: |\n We investigate exact solutions for isothermal shock problems in different one-dimensional geometries. These solutions are given as analytical expressions if possible, or are computed using standard numerical methods for solving ordinary differential equations. We test the numerical solutions against the analytical expressions to verify the correctness of all numerical algorithms.\n\n We use similarity methods to derive a system of ordinary differential equations (ODE) yielding exact solutions for power law density distributions as initial conditions. Further, the system of ODEs accounts for implosion problems (IP) as well as explosion problems (EP) by changing the initial or boundary conditions, respectively.\n\n Taking genuinely isothermal approximations into account leads to additional insights into EPs in contrast to earlier models. 
We neglect a constant initial energy contribution but introduce a parameter to adjust the initial mass distribution of the system. Moreover, we show that due to this parameter a constant initial density is not allowed for isothermal EPs. Reasonable restrictions for this parameter are given.\n\n Both the (genuinely) isothermal implosion problem and the explosion problem are solved for the first time.\nauthor:\n- 'Stephan C. Deschner[^1]'\n- 'Tobias F. Illenseer[^2]'\n- 'Wolfgang J. Duschl[^3]'\nbibliography:\n- 'library\\_clean.bib'\ntitle: '[[Self-similar solutions to" -"---\nabstract: 'We propose to compute Wasserstein barycenters (WBs) by solving for Monge maps with a variational principle. We discuss the metric properties of WBs and explore their connections, especially the connections of Monge WBs, to K-means clustering and co-clustering. We also discuss the feasibility of Monge WBs on unbalanced measures and spherical domains. We propose two new problems \u2013 regularized K-means and Wasserstein barycenter compression. We demonstrate the use of VWBs in solving these clustering-related problems.'\nbibliography:\n- 'main.bib'\n---\n\nIntroduction {#sec:intro}\n============\n\nClustering distributional data according to their spatial similarities has been a core issue in machine learning.\n\nNumerous theories and algorithms for clustering problems have been developed to help understand the structure of the data and to discover homogeneous groups in their embedding spaces. Clustering algorithms also apply to unsupervised learning problems that pass information from known centroids to unknown empirical samples. Occasionally, researchers regard clustering as finding the optimal semi-discrete correspondence between distributional data or vice versa.\n\nOptimal transportation (OT) techniques have gained increasing popularity in the past two decades for measuring the distance between distributional data as well as aligning them together. 
Rooted in the OT theories, several OT-based clustering algorithms have emerged in" -"---\nabstract: 'In the Nastrom-Gage spectrum of atmospheric turbulence we observe a $k^{-3}$ energy spectrum that transitions into a $k^{-5/3}$ spectrum, with increasing wavenumber $k$. The transition occurs near a transition wavenumber $k_t$, located near the Rossby deformation wavenumber $k_R$. The Tung-Orlando theory interprets this spectrum as a double downscale cascade of potential enstrophy and energy, from large scales to small scales, in which the downscale potential enstrophy cascade coexists with the downscale energy cascade over the same length-scale range. We show that, in a temperature forced two-layer quasi-geostrophic model, the rates with which potential enstrophy and energy are injected place the transition wavenumber $k_t$ near $k_R$. We also show that if the potential energy dominates the kinetic energy in the forcing range, then the Ekman term suppresses the upscale cascading potential enstrophy more than it suppresses the upscale cascading energy, a behavior contrary to what occurs in two-dimensional turbulence. As a result, the ratio $\\gn/\\gee$ of injected potential enstrophy over injected energy, in the downscale direction, decreases, thereby tending to decrease the transition wavenumber $k_t$ further. Using a random Gaussian forcing model, we reach the same conclusion, under the modeling assumption that the asymmetric Ekman term predominantly suppresses the" -"---\nauthor:\n- DL Oberski\nbibliography:\n- 'identification.bib'\ntitle: 'Rank-deficiencies in a reduced information latent variable model'\n---\n\nIn a wide variety of fields, different data sources inform on the same phenomenon, the problem being to determine how these different sources should be combined, and how validly each measures the phenomenon of interest. 
For example, in official statistics, contradictory administrative registers and surveys may be available on citizens\u2019 employment contracts [@oberski_evaluating_2017; @pankowska_reconciliation_2018]; in family sociology, reports from different family members may not always match up [@kenny_dyadic_2006]; and in medicine, a hospital may have data on patients\u2019 condition from electrocardiograms, echocardiograms, radiological examinations and individual laboratory measurements simultaneously [@sammani_unravel:_2019]. In all such cases, latent variable models [@bartholomew_latent_2011] can prove powerful tools to combine different data sources measuring the same phenomenon in a principled manner [@hand_statistical_2018; @oberski2018research].\n\nA particularly useful approach is the \u201cmultitrait-multimethod\u201d design, which was introduced by @campbell_convergent_1959 to measure a single phenomenon (\u201ctrait\u201d) using different data sources (\u201cmethods\u201d), and to evaluate the sources\u2019 validity as measures of their underlying \u201ctraits\u201d. To analyze the resulting data, MTMM factor models were developed by @browne1984decomposition [@widaman_hierarchically_1985; @cudeck_msultiplicative_1988; @millsap_statistical_1995; @wothke1995covariance], and @eid_multitrait-multimethod_2000. Extensions to nonlinear and nonnormal latent variable models were recently developed" -"---\nabstract: 'The spin-orbit interaction of 2D electrons in the quantum wells grown from the III-V semiconductors consists of the two parts with different symmetry: the Bychkov-Rashba and the Dresselhaus terms. The last term is usually attributed to the bulk spin-orbit Hamiltonian which reflects the $T_d$ symmetry of the zincblende lattice. While it is known that the quantum well interfaces may also contribute to the Dresselhaus term, the exact structure and the relative importance of the interface and the bulk contributions are not well understood yet. 
To compare the bulk contribution with the interface one, we perform tight-binding calculations of the spin splittings of the electron levels in \[100\] GaAs/AlGaAs quantum wells and analyze the obtained spin splittings within the one-band effective mass electron Hamiltonian containing the two interface contributions to the Dresselhaus term. We show that the dependencies of the spin splittings on the quantum well width and the electric field along the growth direction are perfectly reproduced by the analytical one-band calculations, and that the magnitude of the interface contribution to the spin-orbit interaction for sufficiently narrow quantum wells is of the same order as the contribution from the bulk Dresselhaus Hamiltonian.'\nauthor:\n- 'P.\u00a0S.\u00a0Alekseev'\n- 'M." -"---\nabstract: 'In this paper, the performance of three deep learning methods for predicting short-term evolution and for reproducing the long-term statistics of a multi-scale spatio-temporal Lorenz 96 system is examined. The methods are: echo state network (a type of reservoir computing, RC-ESN), deep feed-forward artificial neural network (ANN), and recurrent neural network with long short-term memory (RNN-LSTM). This Lorenz 96 system has three tiers of nonlinearly interacting variables representing slow/large-scale ($X$), intermediate ($Y$), and fast/small-scale ($Z$) processes. For training or testing, only $X$ is available; $Y$ and $Z$ are never known or used. We show that RC-ESN substantially outperforms ANN and RNN-LSTM for short-term prediction, e.g., accurately forecasting the chaotic trajectories for hundreds of the numerical solver\u2019s time steps, equivalent to several Lyapunov timescales. The RNN-LSTM and ANN show some prediction skills as well; RNN-LSTM bests ANN. Furthermore, even after losing the trajectory, data predicted by RC-ESN and RNN-LSTM have probability density functions (PDFs) that closely match the true PDF, even at the tails. 
The PDF of the data predicted using ANN, however, deviates from the true PDF. Implications, caveats, and applications to data-driven and data-assisted surrogate modeling of complex nonlinear dynamical systems such as weather/climate are discussed.'\nauthor:\n-" -"---\nauthor:\n- |\n V.A. Petrov\\\n [Institute for High Energy Physics, Protvino, 142280 Russia]{}\ntitle: Hard Diffraction and Unitarity\n---\n\n- Unitarity of the $S$-matrix which stems from the postulate of asymptotic completeness (see e.g.\u00a0\\[1\\]) refers to asymptotic states, representing physical particles. In quantum field-theoretic terms it means that one deals with on-shell, truncated Green functions. Unitarity is tightly related to (but not exhausted by) the probabilistic interpretation of the scattering and production amplitudes, and effectively prevents these amplitudes from growing too fast with energy\u00a0\\[2\\].\n\n Hard processes in general, and hard diffraction in particular, are often related to off-shell amplitudes. Can unitarity, a seemingly on-shell property, lead to limitations in this case also? In fact, unitarity of the $S$-matrix, when considered in the axiomatic framework, is assumed to hold off mass shell\u00a0\\[3\\]; hence, e.g., the optical theorem holds when \u201cexternal\u201d particles are virtual.\n\n However, the bounds which were proven for the on-shell case cannot be derived for the more general off-shell case.\n\n This leads, in particular, to a possibility of a much faster rise with energy than in the on-shell case\u00a0\\[4\\].\n\n- In this talk we limit ourselves to the consideration of deeply inelastic scattering (DIS) at small\u00a0$x$, when" -"---\nabstract: 'We propose a sequential design method aiming at the estimation of an extreme quantile based on a sample of dichotomic data corresponding to peaks over a given threshold. 
This study is motivated by an industrial challenge in material reliability and consists in estimating a failure quantile from trials whose outcomes are reduced to indicators of whether the specimens have failed at the tested stress levels. The solution proposed is a sequential design making use of a splitting approach, decomposing the target probability level into a product of probabilities of conditional events of higher order. The method consists in gradually targeting the tail of the distribution and sampling under truncated distributions. The model is GEV or Weibull, and sequential estimation of its parameters involves an improved maximum likelihood procedure for binary data, due to the large uncertainty associated with such restricted information.'\nauthor:\n- |\n Michel Broniatowski and Emilie Miranda\\\n LPSM, CNRS UMR 8001, Sorbonne Universite, Paris\ntitle: 'A sequential design for extreme quantiles estimation under binary sampling.'\n---\n\nConsider a non-negative random variable $X$ with distribution function $G$. Let $X_{1},\\ldots,X_{n}$ be $n$ independent copies of $X$. The aim of this paper is to estimate
These observations suggest that the models for coronal heating must be complemented to account for the continuous replenishment of the lower corona by chromospheric material heated to coronal temperatures. The observed micro-events are secondary phenomena and do not represent the primary energy release, nor its total amount. Nevertheless, they are an interesting source of information on the heating process(es) of the corona. The micro-events are compared to events in [*quiet regions*]{}, termed here nanoflares, which seem to be a different population, well separated in temperature and emission measure from microflares.'\naddress:" -"---\nabstract: 'As is typical in other fields of application of high throughput systems, radiology is faced with the challenge of interpreting increasingly sophisticated predictive models such as those derived from radiomics analyses. Interpretation may be guided by the learning output from machine learning models, which may however vary greatly with each technique. Whatever this output model, it will raise some essential questions. How do we interpret the prognostic model for clinical implementation? How can we identify potential information structures within sets of radiomic features, in order to create clinically interpretable models? And how can we recombine or exploit potential relationships between features towards improved interpretability? 
A number of statistical techniques are explored to assess (possibly nonlinear) relationships between radiological features from different angles.'\nauthor:\n- 'Eric Wolsztynski[^1] [^2]'\nbibliography:\n- 'bib\\_pet.bib'\ntitle: Statistical Exploration of Relationships Between Routine and Agnostic Features Towards Interpretable Risk Characterization \n---\n\nIntroduction\n============\n\nBuilding and interpretation of radiomics-based predictive models is discussed in many reports [@Aerts14; @Soussan14; @Buvat15; @Desseroit16; @Gillies16; @Hatt17a; @Hatt17b], which all highlight the difficulty of converting the model-based risk assessment into practical decision-making pathways for routine implementation\u2013a necessary condition to the clinical implementation of machine learning and artificial intelligence solutions in" -"---\nabstract: 'We propose an image warping-based remote rendering technique for volumes that decouples the rendering and display phases. Our work builds on prior work that samples the volume on the client using ray casting and reconstructs a z-value based on some heuristic. The color and depth buffer are then sent to the client that reuses this depth image as a stand-in for subsequent frames by warping it according to the current camera position until new data was received from the server. We augment that method by implementing the client renderer using ray tracing. 
Representing the pixel contributions as spheres allows us to effectively vary their footprint based on the distance to the viewer, which we find to give better results than point-based rasterization when applied to volumetric data sets.'\nauthor:\n- \nbibliography:\n- 'egbibsample.bib'\ntitle: ' Augmenting Image Warping-Based Remote Volume Rendering with Ray Tracing'\n---\n\nIntroduction\n============\n\nRemote rendering is an important technique to overcome the typical bandwidth limitations in in-situ scenarios, or when accessing graphics workstations over LAN or WAN using thin clients. Remote rendering algorithms can be classified by the type of data\u2014image pixels, proxy geometry, etc.\u2014that is sent over the network, and by" -"---\nabstract: 'Text simplification (TS) can be viewed as a monolingual translation task, translating between text variations within a single language. Recent neural TS models draw on insights from neural machine translation to learn lexical simplification and content reduction using an encoder-decoder model. But unlike neural machine translation, we cannot obtain enough ordinary and simplified sentence pairs for TS, which are expensive and time-consuming to build. Target-side simplified sentences play an important role in boosting fluency for statistical TS, and we investigate the use of simplified sentences to train, with no changes to the network architecture. We propose to pair each simple training sentence with a synthetic ordinary sentence via back-translation, and to treat this synthetic data as additional training data. 
We train an encoder-decoder model using synthetic sentence pairs and original sentence pairs, obtaining substantial improvements on the available WikiLarge data and WikiSmall data compared with the state-of-the-art methods.'\nauthor:\n- |\n Jipeng Qiang\\\n Department of Computer Science, Yangzhou University/ Yangzhou, Jiangsu, China\\\n [jpqiang@yzu.edu.cn]{}\\\nbibliography:\n- 'coling2018.bib'\ntitle: Improving Neural Text Simplification Model with Simplified Corpora\n---\n\nIntroduction\n============\n\nText simplification aims to reduce the lexical and structural complexity of a text, while still retaining the semantic meaning, which can help" -"---\nabstract: 'The long-tail phenomenon tells us that there are many items in the tail. However, not all tail items are the same. Each item acquires different kinds of users. Some items are loved by the general public, while some items are consumed by eccentric fans. In this paper, we propose a novel metric, *item eccentricity*, to incorporate this difference between consumers of the items. Eccentric items are defined as items that are consumed by eccentric users. We used this metric to analyze two real-world datasets of music and movies and observed the characteristics of items in terms of eccentricity. The results showed that our defined eccentricity of an item does not change much over time, and that the classified eccentric and noneccentric items present significantly distinct characteristics. The proposed metric effectively separates the eccentric and noneccentric items mixed in the tail, which could not be done with the previous measures, which only consider the popularity of items.'\nauthor:\n- \n- \nbibliography:\n- 'SMC\\_2017\\_chan.bib'\ntitle: Measuring the Eccentricity of Items\n---\n\nIntroduction\n============\n\nIt is widely acknowledged that items in various markets follow a long-tailed distribution. 
A long-tailed distribution suggests the importance of tail items, which in aggregate" -"---\nauthor:\n- Igor Devetak$^1$ and Jon Yard$^2$\nbibliography:\n- 'qcmi.bib'\ntitle: The operational meaning of quantum conditional information\n---\n\nInformation might be regarded as an answer to a question. To know if it is raining, one need only look outside. However, a person living in a desert would expect a different answer than one living in a climate where, on average, it rains every other day. After looking outside, who gains \u201cmore\u201d information? The answer to this question has nothing to do with weather \u2013 statistically, the desert dweller learns less, owing to the general predictability of desert rain patterns.\n\nThe statistical approach to quantifying information was taken by Claude Shannon [@shannon], who found that entropy plays a central role. By modeling the weather as a random variable $X$ which is equal to \u201crain\u201d or \u201cshine\u201d with probabilities $p(x)$, the information gained by looking out the window (or rather, the *uncertainty* one has before looking) is the *Shannon entropy* $$H(X) = -\\sum_x p(x)\\log_2 p(x)$$ of $X$. Suppose that the weather on each day is independent of that on the previous day, and that the overall climate is the same each day. According to Shannon\u2019s theory, the weather for $n$
We propose a framework that combines recent advances in knowledge distillation (teacher-student framework), relational reasoning and probabilistic logical languages to incorporate such knowledge in existing neural networks for the task of Visual Question Answering. Specifically, for a question posed against an image, we use a probabilistic logical language to encode the spatial knowledge and the spatial understanding about the question in the form of a mask that is directly provided to the teacher network. The student network learns from the ground-truth information as well as the teacher\u2019s prediction via distillation. We also demonstrate the impact of predicting such a mask inside the teacher\u2019s network using attention. Empirically, we show" -"---\nabstract: 'We theoretically study the influence of impurity scattering on the electric and thermal transport of borophane layer, a two-dimensional anisotropic Dirac semi-metal with two tilted and anisotropic Dirac cones. In a systematic framework, we have calculated exactly the electrical conductivity and thermoelectric coefficients of borophane in the presence of the short-range, long-range charged impurity and the short-range electro-magnetic (SREM) scatterers, by using the exact solution of the Boltzmann transport equation within the linear-response theory. Contrary to the large electron-hole asymmetry in borophane, its electron-hole conductivity is nearly symmetric. Interestingly, for the short-range scatterers, just like graphene, the short-range conductivities of borophane have the constant values, independent of the chemical potential, while the conductivities of the SREM scatterers are linearly dependent on the chemical potential. Regardless of the impurity type, the electric conductivity of borophane is highly anisotropic, while the Seebeck coefficient and figure of merit (${\\it ZT}$) are isotropic. 
Along with the ambipolar nature of the borophane thermopower, a very high value of ${\\it ZT}$ around unity is obtained at room temperature, due to the large asymmetry between electrons and holes in borophane. More importantly, borophane attains its maximum value of ${\\it ZT}$ at very low chemical potentials," -"---\nabstract: 'We study the embedding of inflation with nilpotent multiplets in supergravity, in particular the decoupling of the sgoldstino scalar field. Instead of being imposed by hand, the nilpotency constraint on the goldstino multiplet arises in the low energy-effective theory by integrating out heavy degrees of freedom. We present explicit supergravity models in which a large but finite sgoldstino mass arises from Yukawa or gauge interactions. In both cases the inflaton potential receives two types of corrections. One is from the backreaction of the sgoldstino, the other from the heavy fields generating its mass. We show that these scale oppositely with the Volkov-Akulov cut-off scale, which makes a consistent decoupling of the sgoldstino nontrivial. 
Still, we identify a parameter window in which sgoldstino-less inflation can take place, up to corrections which flatten the inflaton potential.'\nauthor:\n- Emilian Dudas\n- Lucien Heurtier\n- Clemens Wieck\n- Martin Wolfgang Winkler\ntitle: 'UV Corrections in Sgoldstino-less Inflation'\n---\n\nULB-TH/16-01\\\nIFT-UAM/CSIC-16-004\\\nCPHT-RR001.012016\\\n\\\n\nIntroduction\n============\n\nConstrained chiral multiplets or, equivalently, nilpotent superfields and their application to cosmology have attracted a large amount of interest in recent years [@AlvarezGaume:2010rt; @Achucarro:2012hg; @Antoniadis:2014oya; @Buchmuller:2014pla; @Ferrara:2014kva; @Kallosh:2014via; @Dall'Agata:2014oka; @Kallosh:2014hxa; @Linde:2015uga; @Carrasco:2015uma; @Kahn:2015mla; @Scalisi:2015qga; @Carrasco:2015pla; @Dudas:2015eha; @Aparicio:2015psl;" -"---\nabstract: 'In this paper we propose and implement a general convolutional neural network (CNN) building framework for designing real-time CNNs. We validate our models by creating a real-time vision system which accomplishes the tasks of face detection, gender classification and emotion classification simultaneously in one blended step using our proposed CNN architecture. After presenting the details of the training procedure setup we proceed to evaluate on standard benchmark sets. We report accuracies of 96% in the IMDB gender dataset and 66% in the FER-2013 emotion dataset. Along with this we also introduced the very recent real-time enabled guided back-propagation visualization technique. Guided back-propagation uncovers the dynamics of the weight changes and evaluates the learned features. We argue that the careful implementation of modern CNN architectures, the use of the current regularization methods and the visualization of previously hidden features are necessary in order to reduce the gap between slow performances and real-time architectures. 
Our system has been validated by its deployment on a Care-O-bot 3 robot used during RoboCup@Home competitions. All our code, demos and pre-trained architectures have been released under an open-source license in our [public repository](https://github.com/oarriaga/face_classification/tree/master).'\nauthor:\n- \n- \n- \nbibliography:\n- 'references.bib'\ntitle: ' Real-time Convolutional" -"---\nabstract: |\n We analyze hydrodynamical and cosmological simulations of galaxy clusters to study scaling relations between the cluster total masses and observable quantities such as gas luminosity, gas mass, temperature, and $Y_X$, i.e., the product of the last two properties. Our simulations are performed with the Smoothed-Particle-Hydrodynamic GADGET-3 code and include different physical processes. The twofold aim of our study is to compare our simulated scaling relations with observations at low ($z\\thickapprox0$) and intermediate ($z\\thickapprox0.5$) redshifts and to explore their evolution over the redshift range $z=0-2$.\n\n The result of the comparative study shows a good agreement between our numerical models and real data. We find that AGN feedback significantly affects low-mass haloes at the highest redshifts resulting in a reduction of the slope of the mass $-$ gas mass relation $(\\sim13\\%)$ and the mass $- Y_X$ relation $(\\sim10\\%)$ at $z=2$ in comparison to $z=0$. The drop of the slope of the mass $-$ temperature relation at $z=2$ $(\\sim14\\%)$ is, instead, caused by early mergers. We investigate the impact of the slope variation on the study of the evolution of the normalization.\n\n We conclude that the observed scaling relations should be limited to the redshift range $z=0-1$ for cosmological studies" -"---\nabstract: 'The goal of data selection is to capture the most structural information from a set of data. This paper presents a fast and accurate data selection method, in which the selected samples are optimized to span the subspace of all data. 
We propose a new selection algorithm, referred to as iterative projection and matching (IPM), with linear complexity w.r.t. the number of data, and without any parameter to be tuned. In our algorithm, at each iteration, the maximum information from the structure of the data is captured by one selected sample, and the captured information is neglected in the next iterations by projection on the null-space of previously selected samples. The computational efficiency and the selection accuracy of our proposed algorithm outperform those of the conventional methods. Furthermore, the superiority of the proposed algorithm is shown on active learning for video action recognition dataset on UCF-101; learning using representatives on ImageNet; training a generative adversarial network (GAN) to generate multi-view images from a single-view input on CMU Multi-PIE dataset; and video summarization on UTE Egocentric dataset.'\nauthor:\n- |\n Mohsen Joneidi[^1] , Alireza Zaeemzadeh^[fnsymbol[1]{}]{}^, Nazanin Rahnavard, and Mubarak Shah\\\n University of Central Florida\\\n [{joneidi, zaeemzadeh, nazanin}@eecs.ucf.edu, shah@crcv.ucf.edu]{}\nbibliography:" -"---\nabstract: 'Many natural processes rely on optimizing the success ratio of a search process. We use an experimental setup consisting of a simple online game in which players have to find a target hidden on a board, to investigate how the rounds are influenced by the detection of cues. We focus on the search duration and the statistics of the trajectories traced on the board. The experimental data are explained by a family of random-walk-based models and probabilistic analytical approximations. If no initial information is given to the players, the search is optimized for cues that cover an intermediate spatial scale. In addition, initial information about the extension of the cues results, in general, in faster searches. 
Finally, strategies used by informed players turn into non-stationary processes in which the length of each displacement evolves to show a well-defined characteristic scale that is not found in non-informed searches.'\nauthor:\n- 'Ricardo Mart\u00ednez-Garc\u00eda'\n- 'Justin M. Calabrese'\n- Crist\u00f3bal L\u00f3pez\ntitle: 'Online games: a novel approach to explore how partial information influences human random searches'\n---\n\nIntroduction {#introduction .unnumbered}\n============\n\nThe problem of searching for targets whose location is unknown arises in many fields and at different scales [@MendezChap6;" -"---\nabstract: 'First-order factoid question answering assumes that the question can be answered by a single fact in a knowledge base (KB). While this does not seem like a challenging task, many recent attempts that apply either complex linguistic reasoning or deep neural networks achieve 65%\u201376% accuracy on benchmark sets. Our approach formulates the task as two machine learning problems:\u00a0detecting the entities in the question, and classifying the question as one of the relation types in the KB. We train a recurrent neural network to solve each problem. On the SimpleQuestions dataset, our approach yields substantial improvements over previously published results \u2014 even neural networks based on much more complex architectures. The simplicity of our approach also has practical advantages, such as efficiency and modularity, that are valuable especially in an industry setting. 
In fact, we present a preliminary analysis of the performance of our model on real queries from Comcast\u2019s X1 entertainment platform with millions of users every day.'\nauthor:\n- Ferhan Ture\n- |\n Oliver Jojic\\\n Comcast Labs, Washington, DC 20005\\\nbibliography:\n- 'qa.bib'\ntitle: |\n No Need to *Pay Attention*:\\\n Simple Recurrent Neural Networks Work!\\\n (for Answering \u201cSimple\u201d Questions)\n---\n\nIntroduction {#sec:intro}\n============\n\nFirst-order factoid question" -"---\nabstract: 'The potentially significant role of the surface of an elastic body in the overall response of the continuum can be described using the mature theory of surface elasticity. The objective of this contribution is to detail the finite element approximation of the underlying governing equations (both in the volume and on its surface) and their solution using the open-source finite element library [deal.II]{}. The fully-nonlinear (geometric and material) setting is considered. The nonlinear problem is solved using a Newton\u2013Raphson procedure wherein the tangent contributions from the volume and surface are computed exactly. The finite element formulation is implemented within the total Lagrangian framework and a Bubnov\u2013Galerkin spatial discretization of the volume and the surface employed. The surface is assumed material. A map between the degrees of freedom on the surface and on the boundary of the volume is used to allocate the contribution from the surface to the global system matrix and residual vector. The [deal.II]{} library greatly facilitates the computation of the various surface operators, allowing the numerical implementation to closely match the theory developed in a companion paper. Key features of the theory and the numerical implementation are elucidated using a series of benchmark example problems." 
-"---\nabstract: 'Using imaginary-time theory, it is shown that the triple-alpha reaction rate can be reliably calculated without the need to solve scattering problems involving three charged particles. The calculated reaction rate is found to agree well with the empirical NACRE rate, which is widely adopted in stellar evolution calculations. The reason for this is explained using $R$-matrix theory. Extremely slow convergence is found to occur when a coupled-channel expansion is introduced, which helps to explain the very different reaction rates obtained using different theoretical approaches.'\nauthor:\n- 'T. Akahori'\n- 'Y. Funaki'\n- 'K. Yabana'\ntitle: 'Imaginary-time theory for triple-alpha reaction rate'\n---\n\nThe triple-alpha reaction is a key process that influences the production of all heavy elements in the universe. Accurate knowledge of the reaction rate is essential for understanding stellar evolution and nucleosynthesis. Since experimental measurements are not feasible for this reaction, theoretical evaluation of the reaction rate is crucially important.\n\nIn the triple-alpha process, the importance of $^{12}$C and $^8$Be resonances is well recognized [@Sa52; @Ho54]. At high temperature, the reaction proceeds dominantly through a resonant $0^+$ state of $^{12}$C at 7.65 MeV, which is known as the Hoyle state. At lower temperatures, processes that do" -"---\nabstract: 'Toroidal Alfv\u00e9n eigenmodes (TAEs) are gap modes induced by the toroidicity of tokamak plasmas in absence of continuum damping. They can be excited by energetic particles (EPs) when the EP drive exceeds other dampings. A TAE benchmark case, which was proposed by the International Tokamak Physics Activity (ITPA) group, is studied in this work. Numerical calculations of linear growth of TAEs driven by EPs in a circular-shaped, large aspect ratio tokamak have been performed using the Hybrid Kinetic-MHD (HK-MHD) model implemented in the NIMROD code. 
This HK-MHD model couples a $\\delta f$ particle-in-cell (PIC) representation of EPs with the 3D MHD representation of the bulk plasma through moment closure for the momentum conservation equation. Both the excitation of TAEs and their transition to energetic particle modes (EPMs) have been observed. The influence of EP density, temperature, density gradient and position of the maximum relative density gradient, on the frequency and the growth rate of TAEs are obtained, which are consistent with those from eigen-analysis calculations and gyrokinetic simulations for an initial Maxwellian distribution of EPs. The relative pressure gradient of EP at the radial location of TAE gap, which represents the drive strength of EPs, can strongly affect" -"---\nabstract: 'The geometric median covariation matrix is a robust multivariate indicator of dispersion which can be extended without any difficulty to functional data. We define estimators, based on recursive algorithms, that can be simply updated at each new observation and are able to deal rapidly with large samples of high dimensional data without being obliged to store all the data in memory. Asymptotic convergence properties of the recursive algorithms are studied under weak conditions. The computation of the principal components can also be performed online and this approach can be useful for online outlier detection. A simulation study clearly shows that this robust indicator is a competitive alternative to minimum covariance determinant when the dimension of the data is small and robust principal components analysis based on projection pursuit and spherical projections for high dimension data. An illustration on a large sample and high dimensional dataset consisting of individual TV audiences measured at a minute scale over a period of 24 hours confirms the interest of considering the robust principal components analysis based on the median covariation matrix. 
All studied algorithms are available in the R package `Gmedian` on CRAN.'\nauthor:\n- |\n Herv\u00e9 Cardot, Antoine Godichon-Baggioni\\" -"---\nauthor:\n- |\n Lauren M. Childs and Steven H. Strogatz\\\n Center for Applied Mathematics,\\\n Cornell University, Ithaca, NY 14853 USA\\\n \\\n `lmchilds@cam.cornell.edu, strogatz@cornell.edu`\ntitle: Stability diagram for the forced Kuramoto model\n---\n\nAbbreviated title: Forced Kuramoto model\n\n**The study of synchronization is a classic topic in nonlinear science. Sometimes the concern is with mutual synchronization, as in Huygens\u2019s 1665 discovery of the sympathy of pendulum clocks. In other situations, one is more interested in forced synchronization, as in the injection locking of a laser or the entrainment of circadian rhythms by the daily light-dark cycle. Here we consider a simple mathematical model in which both types of synchronization are present simultaneously, creating a conflict between them. What happens when a network of dissimilar but mutually coupled oscillators is also driven by an external periodic force? For a natural generalization of the Kuramoto model, the interaction of forcing, coupling, and randomness leads to a rich set of collective states and bifurcations. We explain all of these phenomena analytically, using an ansatz recently introduced by Ott and Antonsen.**\n\nIntroduction\n============\n\nIn 1975 Kuramoto proposed an elegant model for an enormous population of coupled biological oscillators \\[Kuramoto 1975, 1984\\]. Each oscillator was" -"---\nabstract: |\n Removing speckle noise from medical ultrasound images while preserving image features without introducing artifact and distortion is a major challenge in ultrasound image restoration. In this paper, we propose a multiframe-based adaptive despeckling (MADS) algorithm to reconstruct a high-resolution B-mode image from raw radio-frequency (RF) data that is based on a multiple input single output (MISO) model. 
It utilizes the speckle patterns estimated using a novel multiframe-based adaptive approach for ultrasonic speckle noise estimation (MSNE) based on a single input multiple output (SIMO) modeling of consecutive deconvolved ultrasound image frames. The elegance of the proposed despeckling algorithm is that it addresses the despeckling problem by completely following the signal generation model unlike conventional ad-hoc smoothening or filtering based approaches, and therefore, it is likely to maximally preserve the image features. As deconvolution is a necessary pre-processing step to despeckling, we describe here a $2$-D extension of the SIMO model-based $1$-D deconvolution method. Finally, a complete framework for the generation of high-resolution ultrasound B-mode image has been also established in this paper. The results show $8.55-15.91$ dB, $8.24-14.94$ dB improvement in terms of SNR and PSNR, respectively, for simulation data and $2.22-3.17$ improvement in terms of NIQE for" -"---\nabstract: |\n Reducible codes for the rank metric were introduced for cryptographic purposes. They have fast encoding and decoding algorithms, include maximum rank distance (MRD) codes and can correct many rank errors beyond half of their minimum rank distance, which makes them suitable for error-correction in network coding. 
In this paper, we study their security behaviour against information leakage on networks when applied as coset coding schemes, giving the following main results: 1) we give lower and upper bounds on their generalized rank weights (GRWs), which measure worst-case information leakage to the wire-tapper, 2) we find new parameters for which these codes are MRD (meaning that their first GRW is optimal), and use the previous bounds to estimate their higher GRWs, 3) we show that all linear (over the extension field) codes whose GRWs are all optimal for fixed packet and code sizes but varying length are reducible codes up to rank equivalence, and 4) we show that the information leaked to a wire-tapper when using reducible codes is often much less than the worst case given by their (optimal in some cases) GRWs. We conclude with some secondary related properties: Conditions to be rank equivalent to cartesian products" -"---\nabstract: 'Detailed chemical abundances for five stars in two Galactic globular clusters, NGC 5466 and NGC 5024, are presented from high resolution optical (from the Hobby-Eberley Telescope) and infrared spectra (from the SDSS-III APOGEE survey). We find \\[Fe/H\\] = -1.97\u00a0$\\pm$\u00a00.13 dex for NGC 5466, and \\[Fe/H\\] = -2.06\u00a0$\\pm$\u00a00.13 dex for NGC 5024, and the typical abundance pattern for globular clusters for the remaining elements, e.g., both show evidence for mixing in their light element abundance ratios (C, N), and AGB contributions in their heavy element abundances (Y, Ba, and Eu). These clusters were selected to examine chemical trends that may correlate them with the Sgr dwarf galaxy remnant, but at these low metallicities no obvious differences from the Galactic abundance pattern are found. 
Regardless, we compare our results from the optical and infrared analyses to find that oxygen and silicon abundances from the infrared spectral lines are in better agreement with the other alpha-element ratios and with smaller random errors.'\nauthor:\n- |\n M.P. Lamb$^{1,5}$[^1], K.A. Venn$^{1}$, M.D. Shetrone$^{2}$, C.M Sakari$^{1,3}$, and B.J. Pritzl$^{4}$\\\n $^{1}$Department of Physics and Astronomy, University of Victoria, Victoria, British Columbia, V8W 3P2, Canada\\\n $^{2}$Mcdonald Observatory, University of Texas at Austin, HC75 Box" -"---\nabstract: 'We study a question answering problem on a social network, where a requester is seeking an answer from the agents on the network. The goal is to design reward mechanisms to incentivize the agents to propagate the requester\u2019s query to their neighbours if they don\u2019t have the answer. Existing mechanisms are vulnerable to Sybil-attacks, i.e., an agent may get more reward by creating fake identities. Hence, we combat this problem by first proving some impossibility results to resolve Sybil-attacks and then characterizing a class of mechanisms which satisfy Sybil-proofness (prevents Sybil-attacks) as well as other desirable properties. Except for Sybil-proofness, we also consider cost minimization for the requester and agents\u2019 collusions.'\nauthor:\n- Yao Zhang\n- 'Xiuzhen ZhangDengji ZhaoShanghai Engineering Research Center of Intelligent Vision and Imaging, ShanghaiTech University{zhangyao1, zhangxzh1, zhaodj}@shanghaitech.edu.cn'\nbibliography:\n- 'ijcai20.bib'\ntitle: 'Sybil-proof Answer Querying Mechanism'\n---\n\nIntroduction\n============\n\nThe development of online social networks has offered many opportunities for people to collaborate remotely in real time, such as P2P file-sharing network (e.g., BitTorrent) and Q&A platforms (e.g., Quora and Stack Overflow). 
Inspired by these applications, there are rich theoretical studies to look at the mechanism design problems on social networks\u00a0[@rahman2009survey; @emek2011mechanisms; @li2017mechanism].\n\nIn" -"---\nauthor:\n- Ralph Neuh\u00e4user\n- Markus Mugrauer\n- Andreas Seifahrt\n- 'Tobias O.B. Schmidt'\n- Nikolaus Vogt\ndate: 'Received 16 Aug 2007; accepted 4 Dec 2007 '\ntitle: 'Astrometric and photometric monitoring of GQ Lup and its sub-stellar companion[^1]'\n---\n\n[ Neuh\u00e4user et al. (2005) presented direct imaging evidence for a sub-stellar companion to the young T\u00a0Tauri star GQ Lup. Common proper motion was highly significant, but no orbital motion was detected. Faint luminosity, low gravity, and a late-M/early-L spectral type indicated that the companion is either a planet or a brown dwarf. ]{} [ We have monitored GQ Lup and its companion in order to detect orbital and parallactic motion and variability in its brightness. We also search for closer and fainter companions. ]{} [ We have taken six more images with the VLT Adaptive Optics instrument NACO from May 2005 to Feb 2007, always with the same calibration binary from Hipparcos for both astrometric and photometric calibration. By adding up all the images taken so far, we search for additional companions. ]{} [ The position of GQ Lup A and its companion compared to a nearby non-moving background object varies as expected for parallactic motion by" -"---\nabstract: 'Caching popular contents in advance is an important technique to achieve the low latency requirement and to reduce the backhaul costs in future wireless communications. Considering a network with base stations distributed as a Poisson point process (PPP), optimal content placement caching probabilities are derived for known popularity profile, which is unknown in practice. 
In this paper, online prediction (OP) and online learning (OL) methods are presented based on popularity prediction model (PPM) and Grassmannian prediction model (GPM), to predict the content profile for future time slots for time-varying popularities. In OP, the problem of finding the coefficients is modeled as a constrained non-negative least squares (NNLS) problem which is solved with a modified NNLS algorithm. In addition, these two models are compared with log-request prediction model (RPM), information prediction model (IPM) and average success probability (ASP) based model. Next, in OL methods for the time-varying case, the cumulative mean squared error (MSE) is minimized and the MSE regret is analyzed for each of the models. Moreover, for quasi-time varying case where the popularity changes block-wise, KWIK (know what it knows) learning method is modified for these models to improve the prediction MSE and ASP performance. Simulation results" -"---\nabstract: 'Based on the original idea of the density matrix renormalization group (DMRG) [@wh93], i.e. to include the missing boundary conditions between adjacent blocks of the blocked quantum system, we present a rigorous and nonperturbative mathematical formulation for the real-space renormalization group (RG) idea invented by L.P. Kadanoff [@ka66] and further developed by K.G. Wilson [@wi71]. This is achieved by using additional Hilbert spaces called auxiliary spaces in the construction of each single isolated block, which is then named a superblock according to the original nomenclature [@wh93]. On this superblock we define two maps called embedding and truncation for successively integrating out the small scale structure. Our method overcomes the known difficulties of the numerical DMRG, i.e. 
limitation to zero temperature and one space dimension.'\nauthor:\n- |\n Andreas Degenhard[^1]\\\n Department of Mathematical Physics, University of Bielefeld,\\\n Universit\u00e4tsstra\u00dfe 25, D-33615 Bielefeld, Germany\\\ntitle: |\n A nonperturbative\\\n Real-Space Renormalization Group scheme\n---\n\n\u00a0\u00a0\u00a0PACS: 75.10.Jm\n\nIntroduction\n============\n\nSoon after K.G. Wilson\u2019s dramatic success in applying a momentum space formulation of the renormalization group (RG) method [@ka66] to the Theory of Critical Phenomena and the Kondo Problem [@wi75] there was a considerable amount of efforts in applying the same type of approach" -"---\nabstract: 'Flares are known to restructure the magnetic field in the corona and to accelerate the gas between the field lines, but their effect on the photosphere is less well studied. New data of the Solar Optical Telescope (SOT) onboard Hinode provide unprecedented opportunity to uncover the photospheric effect of a solar flare, which associates with an active region NOAA AR 10930 on 2006 December 13. We find a clear lateral displacement of sunspot penumbral regions scanned by two flare ribbons. In the impulsive phase of the flare, the flare ribbons scan the sunspot at a speed of around 18 km s$^{-1}$, derived from Ca II and G-band images. We find instantaneous horizontal shear of penumbral fibrils, with initial velocities of about 1.6 km s$^{-1}$, produced when a flare ribbon passes over them. This velocity decreases rapidly at first, then gradually decays, so that about one hour later, the fibrils return to a new equilibrium. During the one hour interval, the total displacement of these fibrils is around 2.0 Mm, with an average shear velocity of 0.55 km s$^{-1}$. 
This lateral motion of the penumbral fibrils indicates that the magnetic footpoints of these field lines are being rearranged in the" -"---\nabstract: |\n In this paper we define an infinite-dimensional algebra and its representation, whose basis is naturally identified with semi-infinite configurations of the square ladder model.\n\n We also extrapolate the ideas for the cyclic 3-leg triangular ladder. All of these propose a way for generalization, which leads to representations of $N=2, \\dots$ algebras.\n\n **Keywords**: *2D lattice, square ladder, triangular ladder, conformal algebra, semi-infinite forms, fermions, quadratic algebra, superfrustration, graded Euler characteristic, cohomology, deformation, Jacobi triple product, superalgebras, operator algebras.*\nauthor:\n- Valerii Sopin\ntitle: ' Construction of an algebra corresponding to a statistical model of the square ladder (square lattice with two lines) '\n---\n\nIntroduction\n============\n\nFor each graph $\\Gamma$ we can construct a statistical model in which the set of configurations is the set of arrangements of particles at graph vertices such that at each vertex at most one particle is located and two particles cannot be located at vertices joined by an edge.\n\nThe previous paper $[1]$ discussed combinatorial properties of the set of configurations of the $2\\times n$ square lattice (or simply the square ladder) graph:\n\n![$2\\times n$ square lattice model[]{data-label=\"fig.0\"}](1.png){width=\"10.0cm\"}\n\nLet\u2019s assign the fermion algebra of anti-commuting elements $x_i$ and $y_i$ to the graph in
Due to an increased number of phonons with different momenta but lower electron-phonon scattering probabilities, we obtain a large enhancement of the high-voltage conductance and current sustainability in comparison with the nanotube without superlattice.'\nauthor:\n- J\u00fcrgen Dietel\n- Hagen Kleinert\ndate: Received \ntitle: Strong Enhancement of High Voltage Electronic Transport in Chiral Electrical Nanotube Superlattices\n---\n\nDepending on their chirality, carbon nanotubes (NT) behave either like a semi-conductor or a metal. In the first case, they offer interesting alternative for building logical circuits. In the second case, they can be used as nanometer-sized metallic wires in logical circuits. This is particularly useful since they can sustain very high currents before breaking. At low voltages ($U \\lesssim 0.17 V $) the effective electron scattering length at room temperature in metallic NTs is mainly governed by acoustical phonon and impurity scattering with a value of a few hundred nanometers" -"---\nabstract: |\n The internet era has generated a requirement for low cost, anonymous and rapidly verifiable transactions to be used for online barter, and fast settling money have emerged as a consequence. For the most part, e-money has fulfilled this role, but the last few years have seen two new types of money emerge. Centralised virtual currencies, usually for the purpose of transacting in social and gaming economies, and crypto-currencies, which aim to eliminate the need for financial intermediaries by offering direct peer-to-peer online payments.\n\n We describe the historical context which led to the development of these currencies and some modern and recent trends in their uptake, in terms of both usage in the real economy and as investment products. 
As these currencies are purely digital constructs, with no government or local authority backing, we then discuss them in the context of monetary theory, in order to determine how they may have value under each. Finally, we provide an overview of the state of regulatory readiness in terms of dealing with transactions in these currencies in various regions of the world.\nauthor:\n- 'Gareth W. Peters$\\ddag$ $\\star$ $\\ast$'\n- 'Efstathios Panayi$\\dag$ $\\ast$'\n- |\n Ariane Chapelle$\\dag$\\\n \\\n \\" -"---\nabstract: 'We report on the development and phase noise performance of a 9.1926 GHz microwave frequency synthesizer to be used as the local oscillator for a Cs fountain clock. It is based on frequency multiplication and synthesis from an ultralow phase noise 5 MHz Oven Controlled Crystal Oscillator (OCXO) and 100 MHz Voltage Controlled Crystal Oscillator (VCXO). The key component of the frequency multiplication is a non-linear transmission-line (NLTL) used as a frequency comb generator. The phase noise of the synthesizer is improved by carefully optimizing the input power, the input and output impedances of the NLTL. The absolute phase noises of the 9.1926 GHz output signal are measured to be $-64$ dBc/Hz, $-83$ dBc/Hz, $-92$ dBc/Hz, $-117$ dBc/Hz and $-119$ dBc/Hz at 1 Hz, 10 Hz, 100 Hz, 1 kHz and 10 kHz offset frequencies, respectively. The residual phase noise of the synthesizer is measured to be $-82$ dBc/Hz at 1 Hz offset frequency. The measurement result shows that the absolute phase noise at the frequency range of 1 - 100 Hz is mainly limited by the phase noise of the OCXO. The contribution of the absolute phase noise to the fountain clock short-term frequency stability is calculated to be $7.0" -"---\nabstract: |\n Attention is given to the interface of mathematics and physics, specifically noting that fundamental principles limit the usefulness of otherwise perfectly good mathematical general integral solutions.
A new set of multivector solutions to the meta-monogenic (massive) Dirac equation is constructed which form a Hilbert space. A new integral solution is proposed which involves application of a kernel to the right side of the function, instead of to the left as usual. This allows for the introduction of a multivector generalization of the Feynman Path Integral formulation, which shows that particular \u201cgeometric groupings\u201d of solutions evolve in the manner to which we ascribe the term \u201cquantum particle\u201d. Further, it is shown that the role of the usual $i$ is supplanted by the unit time basis vector, applied on the right side of the functions.\n\n Summary of talk, to appear in: [*Proceedings of the 17th Annual Lecture Series in the Mathematical Sciences, April 8-10, 1993, University of Arkansas*]{}, [Clifford Algebras in Analysis]{}, John Ryan editor (CRC Press 1994).\nauthor:\n- 'William M. Pezzaglia Jr.'\ndate: |\n Dec 15, 1993 (Ver 2.4b)\\\n Preprint: clf-alg/pezz9302\ntitle: |\n Multivector Solutions to the Hyper-\\\n Holomorphic Massive Dirac Equation\n---\n\n12.8cm .65in .65in 27.5pt\n\n\\#1[[**e**]{}\\_[\\#1]{}]{}" -"[**SYMMETRY CONSTRAINTS AND THE ELECTRONIC STRUCTURES OF A QUANTUM DOT WITH THIRTEEN ELECTRONS**]{}\n\n\u00a0\u00a0\u00a0\u00a0\u00a0 G.M. Huang, Y.M. Liu, and C.G. Bao\n\n\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 The State Key Laboratory of Optoelectronic Materials and Technologies, and\n\nDepartment of Physics, Zhongshan University, \u00a0Guangzhou, 510275, P.R. China\n\nABSTRACT: The symmetry constraints imposed on the quantum states of a dot with 13 electrons have been investigated. Based on this study, the favorable structures (FSs) of each state have been identified. Numerical calculations have been performed to inspect the role played by the FSs.
It was found that, if a first-state has a remarkably competitive FS, this FS would be pursued and the state would be crystal-like and have a specific core-ring structure associated with the FS. The magic numbers are found to be closely related to the FSs.\n\nPACS(numbers): 73.61.-r\n\n1. INTRODUCTION\n\nModern experimental techniques, e.g., by using electrostatic gates and by etching, allow a certain number of electrons to be confined in semiconductor heterostructures.$^{1-6}$\u00a0\u00a0Such many-electron systems have much in common with atoms, yet they are man-made structures and are usually called \u201cquantum dots\u201d. \u00a0The number of electrons contained in a dot ranges from a few to a few thousands; they are confined in a domain" -"---\nabstract: 'Single molecule force spectroscopy methods can be used to generate folding trajectories of biopolymers from arbitrary regions of the folding landscape. We illustrate the complexity of the folding kinetics and generic aspects of the collapse of RNA and proteins upon force quench, using simulations of an RNA hairpin and theory based on the de Gennes model for homopolymer collapse. The folding time, $\\tau_F$, depends asymmetrically on $\\delta f_S = f_S - f_m$ and $\\delta f_Q = f_m - f_Q$ where $f_S$ ($f_Q$) is the stretch (quench) force, and $f_m$ is the transition mid-force of the RNA hairpin. In accord with experiments, the relaxation kinetics of the molecular extension, $R(t)$, occurs in three stages: a rapid initial decrease in the extension is followed by a plateau, and finally an abrupt reduction in $R(t)$ that occurs as the native state is approached. The duration of the plateau increases as $\\lambda =\\tau_Q/\\tau_F$ decreases (where $\\tau_Q$ is the time in which the force is reduced from $f_S$ to $f_Q$).
Variations in the mechanisms of force quench relaxation as $\\lambda$ is altered are reflected in the experimentally measurable time-dependent entropy, which is computed directly from the folding trajectories. An analytical solution of the" -"---\nabstract: 'This paper proposes a solution to Stokes\u2019 paradox for asymptotically uniform viscous flow around a cylinder. The existence of a [*global*]{} stream function satisfying a perturbative form of the two-dimensional Navier-Stokes equations for low Reynolds number is established. This stream function satisfies the appropriate boundary conditions on both the cylinder and at infinity, but nevertheless agrees with Stokes\u2019 original results at finite radius as the Reynolds number tends to zero. The Navier-Stokes equations are satisfied to a power-log power of the Reynolds number. The drag on the cylinder is calculated from first principles and the free parameter of the approach can be chosen to give good agreement with data on drag. In this revised working paper we put our approach on a firmer mathematical basis using the Helmholtz-Laplace equation as a linear approximation to the Navier-Stokes system. In so doing we demonstrate the instability of the original paradox. We also demonstrate the absence of a paradox of Stokes-Whitehead class, and give further theoretical constraints on the free parameters of the model.'\nauthor:\n- 'William T. Shaw[^1]'\ntitle: 'A simple resolution of Stokes\u2019 paradox?[^2]'\n---\n\nKey Words: Stokes Paradox, Fluid dynamics, Stokes flow, Stream function, Biharmonic equation, Helmholtz equation,
Unlike most existing efforts, which study depression by analyzing textual content, we examine and exploit multimodal big data to discern depressive behavior using a wide variety of features including individual-level demographics. By developing a multimodal framework and employing statistical techniques for fusing heterogeneous sets of features obtained by processing visual, textual and user interaction data, we significantly enhance the current state-of-the-art approaches for identifying depressed individuals on Twitter (improving the average F1-Score by 5 percent) as well as facilitate demographic inference from social media for broader applications. Besides providing insights into the relationship between demographics and mental health, our research assists in the design of a new breed of demographic-aware health interventions.'\nauthor:\n- Amir Hossein Yazdavar\n- Mohammad Saeid Mahdavinejad\n- 'Goonmeet Bajaj\\'\n- William Romine\n- Amirhassan Monadjemi\n- 'Krishnaprasad Thirunarayan\\'\n- Amit Sheth\n- Jyotishman Pathak\nbibliography:\n- 'ref.bib'\ntitle: 'Fusing Visual, Textual and Connectivity Clues for Studying Mental Health'\n---\n\nIntroduction\n============\n\nDepression is a highly prevalent
This work addresses these questions by emulating an object storage used by a traditional scientific application and evaluating potential performance benefits. We show that scientific applications can benefit from the usage of object storage on large scales.'\nauthor:\n- 'Steven Wei-der Chien'\n- Stefano Markidis\n- Rami Karim\n- |\n \\\n Erwin Laure\n- Sai Narasimhamurthy\nbibliography:\n- 'main.bib'\ntitle: Exploring Scientific Application Performance Using Large Scale Object Storage\n---\n\n[***Keywords\u2014*** Scientific Applications, Object Storage, Parallel I/O, HPC, HDF5]{}\n\nAcknowledgments {#acknowledgments .unnumbered}\n===============\n\nFunding for the work is received
Proof-of-work has been formalized in cryptographic security models of Nakamoto consensus\u00a0[@garay2015BitcoinBackbone; @pass2017AnalysisBlockchain]. However, we are not aware of work pointing out the fundamental conflict between inclusiveness and security inherent to the way proof-of-work is used in the" -"---\nabstract: 'At present, Babcock-Leighton flux transport solar dynamo models appear as the most promising models for explaining diverse observational aspects of the sunspot cycle. The success of these flux transport dynamo models is largely dependent upon a single-cell meridional circulation with a deep equatorward component at the base of the Sun\u2019s convection zone. However, recent observations suggest that the meridional flow may in fact be very shallow (confined to the top 10% of the Sun) and more complex than previously thought. Taken together these observations raise serious concerns on the validity of the flux transport paradigm. By accounting for the turbulent pumping of magnetic flux as evidenced in magnetohydrodynamic simulations of solar convection, we demonstrate that flux transport dynamo models can generate solar-like magnetic cycles even if the meridional flow is shallow. Solar-like periodic reversals are recovered even when meridional circulation is altogether absent; however, in this case the solar surface magnetic field dynamics does not extend all the way to the polar regions.
Very importantly, our results demonstrate that the Parker-Yoshimura sign rule for dynamo wave propagation can be circumvented in Babcock-Leighton dynamo models by the latitudinal component of turbulent pumping \u2013 which can generate equatorward propagating sunspot" -"---\nabstract: 'Effective spin-spin interactions between $N+1$ qubits enable the determination of the eigenvalue of an arbitrary Pauli product of dimension $N$ with a constant, small number of multi-qubit gates that is independent of $N$ and encodes the eigenvalue in the measurement basis states of an extra ancilla qubit. Such interactions are available whenever qubits can be coupled to a shared harmonic oscillator, a situation that can be realized in several physical qubit implementations. For example, suitable interactions have already been realized for up to 14 qubits in ion traps. It should be possible to implement stabilizer codes for quantum error correction with a constant number of multi-qubit gates, in contrast to typical constructions using a number of two-qubit gates that increases as a function of $N$. The special case of finding the parity of $N$ qubits only requires a small number of operations that is independent of $N$. This compares favorably to algorithms for computing the parity on conventional machines, which implies a genuine quantum advantage.'\nauthor:\n- \ntitle: 'Efficient eigenvalue determination for arbitrary Pauli products based on generalized spin-spin interactions'\n---\n\n[*We dedicate this work to Danny Segal, scientist, community builder and friend. Your bold and cheerful way" -"---\nabstract: 'We report results from numerical simulations of star formation in the early universe that focus on the dynamical behavior of metal-free gas under different initial and environmental conditions. In particular we investigate the role of turbulence, which is thought to ubiquitously accompany the collapse of high-redshift halos. 
We distinguish between two main cases: the birth of Population III.1 stars \u2013 those which form in the pristine halos unaffected by prior star formation \u2013 and the formation of Population III.2 stars \u2013 those forming in halos where the gas has an increased ionization fraction. We find that turbulent primordial gas is highly susceptible to fragmentation in both cases, even for turbulence in the subsonic regime, i.e.\u00a0for rms velocity dispersions as low as 20 % of the sound speed. Fragmentation is more vigorous and more widespread in pristine halos compared to pre-ionized ones. If such levels of turbulent motions were indeed present in star-forming minihalos, Pop III.1 stars would be on average of somewhat lower mass, and form in larger groups, than Pop III.2 stars. We find that fragment masses cover over two orders of magnitude, suggesting that the Population\u00a0III initial mass function may have been much broader" -"---\nauthor:\n- Julien Chopin\n- Moumita Dasgupta\n- Arshad Kudrolli\ntitle: Dynamic wrinkling and strengthening of an elastic filament in a viscous fluid\n---\n\nSlender structures embedded in complex fluids which buckle and fold as a result of mechanical compression are commonly found as in F-actin and microtubules in cell mechanics\u00a0[@Gardel2004; @Chaudhuri2007; @Jiang2008] flagella in swimming organisms\u00a0[@Powers2010; @Goldstein2006; @Son2013], fibers in paper processing\u00a0[@Lindner2012], and the earth\u2019s crust in orogenesis\u00a0[@Biot1961]. A classical result dating back to Euler states that a thin sheet or filament will buckle under axial loading above a critical strain which is proportional to the square of the mode number and the square of the ratio of its thickness to length\u00a0[@timoshenko1940strength]. 
While buckling typically occurs in the fundamental mode corresponding to the lowest strain, higher modes can occur depending on the constraints along the filament which may be static or dynamic in nature\u00a0[@Chopin2013; @Chopin2015; @miller2015buckling; @lagrange2016wrinkling; @Audoly2005; @Vermorel2007; @gladden2005dynamic]. Although theoretical analyses of the problem are numerous, there are few experimental systems allowing close comparison with predictions. Traditional analyses of the wrinkling observed in elastic filaments consider linear stability analysis with instantaneous loading, which can be an oversimplification in many" -"---\nabstract: 'We present theoretical analysis and a suite of tests and procedures for addressing a broad class of redundant and misleading association rules we call *specious rules*. Specious dependencies, also known as *spurious*, *apparent*, or *illusory associations*, refer to a well-known phenomenon where marginal dependencies are merely products of interactions with other variables and disappear when conditioned on those variables. The most extreme example is Yule-Simpson\u2019s paradox where two variables present positive dependence in the marginal contingency table but negative in all partial tables defined by different levels of a confounding factor. It is accepted wisdom that in data of any nontrivial dimensionality it is infeasible to control for all of the exponentially many possible confounds of this nature. In this paper, we consider the problem of specious dependencies in the context of statistical association rule mining. We define specious rules and show they offer a unifying framework which covers many types of previously proposed redundant or misleading association rules. After theoretical analysis, we introduce practical algorithms for detecting and pruning out specious association rules efficiently under many key goodness measures, including mutual information and exact hypergeometric probabilities.
We demonstrate that the procedure greatly reduces the number of associations" -"---\nabstract: 'We consider theoretically effects of random charged impurity disorder on the [*quality*]{} of high-mobility two dimensional (2D) semiconductor structures, explicitly demonstrating that the sample mobility is not necessarily a reliable or universal indicator of the sample quality in high-mobility modulation-doped 2D GaAs structures because, depending on the specific system property of interest, mobility and quality may be controlled by different aspects of the underlying disorder distribution, particularly since these systems are dominated by long-range Coulomb disorder from both near and far random quenched charged impurities. We show that in the presence of both channel and remote charged impurity scattering, which is a generic situation in modulation-doped high-mobility 2D carrier systems, it is quite possible for higher (lower) mobility structures to have lower (higher) quality as measured by the disorder-induced single-particle level broadening. In particular, we establish that there is no reason to expect a unique relationship between mobility and quality in 2D semiconductor structures as both are independent functionals of the disorder distribution, and are therefore, in principle, independent of each other. Using a simple, but reasonably realistic, \u201c2-impurity\u201d minimal model of the disorder distribution, we provide concrete examples of situations where higher (lower) mobilities correspond to lower" -"---\nabstract: 'Program synthesis is challenging largely because of the difficulty of search in a large space of programs. Human programmers routinely tackle the task of writing complex programs by writing sub-programs and then analysing their intermediate results to compose them in appropriate ways. Motivated by this intuition, we present a new synthesis approach that leverages learning to guide a bottom-up search over programs. 
In particular, we train a model to prioritize compositions of intermediate values during search conditioned on a given set of input-output examples. This is a powerful combination because of several emergent properties: First, in bottom-up search, intermediate programs can be executed, providing semantic information to the neural network. Second, given the concrete values from those executions, we can exploit rich features based on recent work on property signatures. Finally, bottom-up search allows the system substantial flexibility in what order to generate the solution, allowing the synthesizer to build up a program from multiple smaller sub-programs. Overall, our empirical evaluation finds that the combination of learning and bottom-up search is remarkably effective, even with simple supervised learning approaches. We demonstrate the effectiveness of our technique on a new data set for synthesis of string transformation programs.'\nauthor:" -"---\nabstract: 'As of today abuse is a pressing issue to participants and administrators of Online Social Networks (OSN). Abuse in Twitter can spawn from arguments generated for influencing outcomes of a political election, the use of bots to automatically spread misinformation, and generally speaking, activities that [*deny*]{}, [*disrupt*]{}, [*degrade*]{} or [*deceive*]{} other participants and, or the network. Given the difficulty in finding and accessing a large enough sample of abuse ground truth from the Twitter platform, we built and deployed a custom crawler that we use to judiciously collect a new dataset from the Twitter platform with the aim of characterizing the nature of abusive users, a.k.a abusive \u201cbirds\u201d, in the wild. We provide a comprehensive set of features based on users\u2019 attributes, as well as social-graph metadata. The former includes metadata about the account itself, while the latter is computed from the social graph among the sender and the receiver of each message. 
Attribute-based features are useful to characterize user\u2019s accounts in OSN, while graph-based features can reveal the dynamics of information dissemination across the network. In particular, we derive the Jaccard index as a key feature to reveal the benign or malicious nature of directed messages in" -"The Letter [@1] predicts \u201can unusual decrease with temperature (or even nonmonotonic temperature dependence) of the Casimir attraction force between a thin metal film and a bulk plane ideal metal...\u201d According to [@1], \u201cfor bulk samples, the Casimir force [*increases*]{} slowly with temperature\u201d. On this basis the authors of [@1] propose the experimental observation of the decreasing temperature dependence of the Casimir force magnitude per unit area, $|f(T)|$, in the configuration of a bulk ideal metal with planar boundary and a thin metal film described by the Drude model. As we demonstrate below, the statement of [@1] that for bulk samples $|f(T)|$ increases with temperature is in error. What actually happens is that $|f(T)|$ decreases with $T$ in a wide temperature region for bulk samples described by the Drude model. Here, we show that this decrease is much larger than that predicted in [@1] for a thin film and that it has already been experimentally excluded.\n\nWe have computed $|f(T)|$ for an ideal metal semispace placed at $a=100\\,$nm from a semispace made of the virtual metal considered in [@1] using the Lifshitz formula. The computational results, as a function of temperature, are presented in Fig.\u00a01 and should be compared" -"---\nabstract: |\n Lov[\u00e1]{}sz Local Lemma (LLL) is a very powerful tool in combinatorics and probability theory to show the possibility of avoiding all \u201cbad\" events under some \u201cweakly dependent\" condition. 
Over the last decades, the algorithmic aspect of LLL has also attracted lots of attention in theoretical computer science \u00a0[@moser2010constructive; @kolipaka2011moser; @harvey2015algorithmic]. A tight criterion under which the *abstract* version LLL (ALLL) holds was given by Shearer \u00a0[@shearer1985problem]. It turns out that Shearer\u2019s bound is generally not tight for *variable* version LLL (VLLL)\u00a0[@he2017variable]. Recently, Ambainis et al. [@ambainis2012quantum] introduced a quantum version LLL (QLLL), which was then shown to be powerful for the quantum satisfiability problem.\n\n In this paper, we prove that Shearer\u2019s bound is tight for QLLL, i.e., the relative dimension of the smallest satisfying subspace is completely characterized by the independent set polynomial, affirming a conjecture proposed by Sattath et al.\u00a0[@pnas; @Morampudi2018Many]. Our result also shows the tightness of Gily[\u00e9]{}n and Sattath\u2019s algorithm [@gilyen2016preparing], and implies that the lattice gas partition function fully characterizes quantum satisfiability for almost all Hamiltonians with large enough qudits\u00a0[@pnas].\n\n Commuting LLL (CLLL), LLL for commuting local Hamiltonians which are widely studied in the literature, is also investigated here. We" -"---\nabstract: 'The Health and Retirement Study is a longitudinal study of US adults enrolled at age 50 and older. We were interested in investigating the effect of a sudden large decline in wealth on the cognitive score of subjects. Our analysis was complicated by the lack of randomization, confounding by indication, and a substantial fraction of the sample and population will die during follow-up leading to some of our outcomes being censored. 
Common methods to handle these problems, for example marginal structural models, may not be appropriate because they upweight subjects who are more likely to die, to obtain a population that over time resembles the one that would have been obtained in the absence of death. We propose a refined approach by comparing the treatment effect among subjects who would survive under both sets of treatment regimes being considered. We do so by viewing this as a large missing data problem and impute the survival status and outcomes of the counterfactual. To improve the robustness of our imputation, we used a modified version of the penalized spline of propensity methods in treatment comparisons approach. We found that our proposed method worked well in various simulation scenarios and our data analysis.'" -"---\nabstract: |\n Stock market prediction is one of the most attractive research topics since the successful prediction of the market\u2019s future movement leads to significant profit. Traditional short term stock market predictions are usually based on the analysis of historical market data, such as stock prices, moving averages or daily returns. However, financial news also contains useful information on public companies and the market.\n\n Existing methods in finance literature exploit sentiment signal features, which are limited by not considering factors such as events and the news context. We address this issue by leveraging deep neural models to extract rich semantic features from news text. In particular, a Bidirectional-LSTM is used to encode the news text and capture the context information, and a self-attention mechanism is applied to distribute attention over the most relevant words, news and days.
In terms of predicting directional changes in both Standard & Poor\u2019s 500 index and individual companies\u2019 stock prices, we show that this technique is competitive with other state-of-the-art approaches, demonstrating the effectiveness of recent NLP technology advances for computational finance.\naddress: |\n Department of Electrical and Computer Engineering\\\n Queen\u2019s University, Canada\\\n Kingston, ON, Canada K7L 2N8\nauthor:\n- Huicheng Liu\nbibliography:\n- 'references.bib'\ntitle:" -"---\nabstract: 'This paper presents a method for indexing human activities in videos captured from a wearable camera being worn by patients, for studies of the progression of dementia diseases. Our method aims to produce indexes to facilitate the navigation throughout the individual video recordings, which could help doctors search for early signs of the disease in the activities of daily living. The recorded videos have strong motion and sharp lighting changes, inducing noise for the analysis. The proposed approach is based on a two-step analysis. First, we propose a new approach to segment this type of video, based on apparent motion. Each segment is characterized by two original motion descriptors, as well as color and audio descriptors. Second, a Hidden-Markov Model formulation is used to merge the multimodal audio and video features, and classify the test segments.
Experiments show the good properties of the approach on real data.'\nauthor:\n- |\n Svebor Karaman, Jenny Benois-Pineau\\\n LaBRI, Universit\u00e9 de Bordeaux\\\n 351 Cours de la Lib\u00e9ration 33405 Talence cedex, France\\\n karaman@labri.fr, jenny.benois@labri.fr\\\n- |\n R\u00e9mi M\u00e9gret\\\n IMS, Universit\u00e9 de Bordeaux\\\n 351 Cours de la Lib\u00e9ration 33405 Talence cedex, France\\\n megret@enseirb-matmeca.fr\\\n- |\n Julien Pinquier\\\n IRIT, Universit\u00e9 de Toulouse\\\n 118 route" -"---\nabstract: 'Over the past decades, progress in deployable autonomous flight systems has slowly stagnated. This is reflected in today\u2019s production aircraft, where pilots only enable simple physics-based systems such as autopilot for takeoff, landing, navigation, and terrain/traffic avoidance. Evidently, autonomy has not gained the trust of the community where higher problem complexity and cognitive workload are required. To address trust, we must revisit the process for developing autonomous capabilities: modeling and simulation. Given the prohibitive costs for live tests, we need to prototype and evaluate autonomous aerial agents in a high fidelity flight simulator with autonomous learning capabilities applicable to flight systems: such an open-source development platform is not available. As a result, we have developed GymFG: GymFG couples and extends a high fidelity, open-source flight simulator and a robust agent learning framework to facilitate learning of more complex tasks. Furthermore, we have demonstrated the use of GymFG to train an autonomous aerial agent using Imitation Learning.
With GymFG, we can now deploy innovative ideas to address complex problems and build the trust necessary to move prototypes to the real-world.'\nauthor:\n- \nbibliography:\n- 'main.bib'\ntitle: 'GymFG: A Framework with a Gym Interface for FlightGear'\n---\n\nAcknowledgments\n===============\n\nThis" -"---\nabstract: 'A tight binding model is introduced to describe the strong interaction limit of excitonic ordering. At stoichiometry, the model reduces in the strong coupling limit to a pseudo-spin model with approximate U(4) symmetry. Excitonic order appears in the pseudo-spin model as in-plane pseudo-magnetism. The U(4) symmetry unifies all possible singlet and triplet order parameters describing such states. Super-exchange, Hunds-rule coupling, and other perturbations act as anisotropies splitting the U(4) manifold, ultimately stabilizing a paramagnetic triplet state. The tendency to ferromagnetism with doping (observed experimentally in the hexaborides) is explained as a spin-flop transition to a different orientation of the U(4) order parameter. The physical mechanism favoring such a reorientation is the enhanced coherence (and hence lower kinetic energy) of the doped electrons in a ferromagnetic background relative to the paramagnet. A discussion of the physical meaning of various excitonic states and their experimental consequences is also provided.'\nauthor:\n- |\n Leon Balents\\\n Physics Department, University of California, Santa Barbara, CA 93106\ntitle: 'Excitonic order at strong-coupling: pseudo-spins, doping, and ferromagnetism'\n---\n\nIntroduction {#sec:intro}\n============\n\nThe unexpected discovery of high-$T_c$ itinerant ferromagnetism in doped hexaborides[@hexaborides]\u00a0has re-ignited interest in the problem of excitonic ordering near the semiconductor\u2013metal transition.[@Keldysh; @HalperinRice]" -"---\nabstract: 'Dynamically varying system parameters along a path enclosing an exceptional point is known to lead to chiral mode conversion. 
But is it necessary to include this non-Hermitian degeneracy inside the contour for this process to take place? We show that a slow enough variation of parameters, even away from the system\u2019s exceptional point, can also lead to a robust asymmetric state exchange. To study this process, we consider a prototypical two-level non-Hermitian Hamiltonian with a constant coupling between elements. Closed form solutions are obtained when the amplification/attenuation coefficients in this arrangement are varied in conjunction with the resonance detuning along a circular contour. Using asymptotic expansions, this input-independent mode conversion is theoretically proven to take place irrespective of whether the exceptional point is enclosed or not upon encirclement. Our results significantly broaden the range of parameter space required for the experimental realization of such chiral mode conversion processes.'\nauthor:\n- 'Absar U. Hassan'\n- 'Gisela L. Galmiche'\n- Gal Harari\n- Patrick LiKamWa\n- Mercedeh Khajavikhan\n- Mordechai Segev\n- 'Demetrios N. Christodoulides'\nbibliography:\n- 'References.bib'\ntitle: 'Chiral state-conversion without encircling an exceptional point'\n---\n\nRecent years have seen a surging interest in non-Hermitian systems - settings where" -"---\naddress:\n- |\n GRASP, Institut de Physique B5, Universit\u00e9 de Li\u00e8ge,\\\n B-4000 Li\u00e8ge, Belgium.\n- |\n \u00a0\\\n PACS: [82.70.Rr, 83.70.Hq]{} \nauthor:\n- 'H.Caps, M.-L.Chevalier, H.Decauwer, G.Soyez, M.Ausloos and N.Vandewalle'\ntitle: Diffusive foam wetting process in microgravity\n---\n\nFoams are paradigms of disordered cellular systems. Bubbles composing foams are indeed characterized by a wide variety of side numbers and face areas [@weaire]. The complexity of the foam can only be described by statistical averages. 
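The two-level model described in the abstract above (constant coupling, with gain/loss and detuning swept along a circular contour) is easy to explore numerically. Below is a minimal sketch, not the paper's exact model or parameter values: it assumes a generic Hamiltonian with entries `delta + i*gamma` on the diagonal and coupling `kappa` off-diagonal, and a loop chosen so the exceptional points (at `delta = 0`, `gamma = ±kappa`) are not enclosed.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Generic two-level non-Hermitian Hamiltonian with constant coupling
# kappa; detuning delta(t) and gain/loss gamma(t) trace a circle of
# radius r centred at (d0, g0) over the encirclement time T.
# All numerical values here are illustrative assumptions.
kappa = 1.0
T = 200.0                      # slow (quasi-adiabatic) parameter variation
r, d0, g0 = 0.5, 2.0, 0.0      # loop stays away from the EPs at (0, +/-kappa)

def hamiltonian(t):
    phi = 2.0 * np.pi * t / T
    delta = d0 + r * np.cos(phi)
    gamma = g0 + r * np.sin(phi)
    return np.array([[delta + 1j * gamma, kappa],
                     [kappa, -(delta + 1j * gamma)]])

def rhs(t, psi):
    # i d(psi)/dt = H(t) psi
    return -1j * (hamiltonian(t) @ psi)

def encircle(psi0):
    sol = solve_ivp(rhs, (0.0, T), np.asarray(psi0, dtype=complex),
                    rtol=1e-8, atol=1e-10)
    psi = sol.y[:, -1]
    return psi / np.linalg.norm(psi)   # norm is not conserved; renormalise

# Launch each basis state around the same loop and compare the outputs.
out1 = encircle([1.0, 0.0])
out2 = encircle([0.0, 1.0])
overlap = abs(np.vdot(out1, out2))
print(f"|<out1|out2>| = {overlap:.3f}")
```

A large overlap between the two final states signals the input-independent (chiral) state conversion; sweeping `r`, `d0`, `g0` lets one compare loops that do or do not enclose an exceptional point.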
Among the physical properties of interest, one can cite the topological rearrangements [@rivier], the cascades of popping bubbles [@prlnico; @jeff], the rigidity loss transition [@rigidity], etc...\n\nIn aqueous foams, a fundamental process is the drainage [@drainage] which is due to the competition between gravity forces and the capillary pressure in channels separating adjacent bubbles. The drainage-capillary effects imply that the top of the foam becomes dry while the bottom of the foam remains wet. A dry foam is composed of polyhedral bubbles meeting on thin edges, while wet foams are composed of spherical bubbles which can sometimes move freely [@rigidity].\n\nWe report here the experimental study of foam wetting in microgravity. The aims of the present letter are [*(i)*]{} to report the behavior of a foam" -"Magneto-absorption spectra[@hl; @ul; @g1] of some bulk semiconductors displays strongly asymmetric Fano-like resonances. In a recent paper of Glutsch [*et al.*]{}[@g1] the observed profiles have been explained by coupling between the discrete and continuum states of magneto-excitons. The authors show that the necessary coupling may be induced by Coulomb interaction. Both experimental and theoretical profiles obtained have asymmetric form. However, the higher energy dip of the resonance seems to be much more pronounced in the data than in the results of the calculations. Moreover, the role of Coulomb interaction between electron and hole should be strongly weakened in the high Landau number $N$ range where the magnetic energy $N\\hbar\\omega_c$ is large compared to the excitonic Rydberg.\n\nWe suggest here that the qualitative explanation of the Fano profile in magneto-absorption stems from the quasi-one-dimensional character of the electronic excitations, and the corresponding singularity in the density of states (placed into continuum of lower Landau bands), rather than from the details of the interaction. 
We show that a very strong asymmetry, qualitatively resembling Fano profiles, can be found in the magneto-absorption by uncorrelated electron-hole pairs when [*strong enough elastic scattering*]{} by impurities or other defects is assumed to be the relevant mechanism. An" -"---\nabstract: 'Question Generation (QG) is fundamentally a simple syntactic transformation; however, many aspects of semantics influence what questions are good to form. We implement this observation by developing Syn-QG, a set of transparent syntactic rules leveraging universal dependencies, shallow semantic parsing, lexical resources, and custom rules which transform declarative sentences into question-answer pairs. We utilize PropBank argument descriptions and VerbNet state predicates to incorporate shallow semantic content, which helps generate questions of a descriptive nature and produce inferential and semantically richer questions than existing systems. In order to improve syntactic fluency and eliminate grammatically incorrect questions, we employ back-translation over the output of these syntactic rules. A set of crowd-sourced evaluations shows that our system can generate a larger number of highly grammatical and relevant questions than previous QG systems and that back-translation drastically improves grammaticality at a slight cost of generating irrelevant questions.'\nauthor:\n- |\n Kaustubh D. Dhole\\\n Amelia Science\\\n RnD, IPsoft\\\n New York, NY 10004\\\n `kdhole@ipsoft.com`\\\n Christopher D. Manning\\\n Department of Computer Science\\\n Stanford University\\\n Stanford, CA 94305\\\n `manning@stanford.edu`\\\nbibliography:\n- 'anthology.bib'\n- 'acl2020.bib'\ntitle: 'Syn-QG: Syntactic and Shallow Semantic Rules for Question Generation'\n---\n\nIntroduction\n============\n\nAutomatic Question Generation (QG) is the task of generating" -"---\nabstract: 'Since deep neural networks were developed, they have made huge contributions to people\u2019s everyday lives. 
Machine learning provides more rational advice than humans are capable of in almost every aspect of daily life. However, despite this achievement, the design and training of neural networks are still challenging and unpredictable procedures that have been alleged to be \u201calchemy\u201d. To lower the technical thresholds for common users, automated hyper-parameter optimization (HPO) has become a popular topic in both academic and industrial areas. This paper provides a review of the most essential topics on HPO. The first section introduces the key hyper-parameters related to model training and structure, and discusses their importance and methods to define the value range. Then, the research focuses on major optimization algorithms and their applicability, covering their efficiency and accuracy especially for deep learning networks. This study next reviews major services and tool-kits for HPO, comparing their support for state-of-the-art searching algorithms, feasibility with major deep-learning frameworks, and extensibility for new modules designed by users. The paper concludes with problems that exist when HPO is applied to deep learning, a comparison between optimization algorithms, and prominent approaches for model evaluation with limited computational resources.'\nauthor:\n-" -"---\nabstract: 'An image pyramid can extend many object detection algorithms to solve detection on multiple scales. However, interpolation during the resampling process of an image pyramid causes gradient variation, which is the difference of the gradients between the original image and the scaled images. Our key insight is that the increased variance of gradients makes the classifiers have difficulty in correctly assigning categories. We prove the existence of the gradient variation by formulating the ratio of gradient expectations between an original image and scaled images, then propose a simple and novel gradient normalization method to eliminate the effect of this variation. 
The proposed normalization method reduces the variance in an image pyramid and allows the classifier to focus on a smaller coverage. We show the improvement in three different visual recognition problems: pedestrian detection, pose estimation, and object detection. The method is generally applicable to many vision algorithms based on an image pyramid with gradients.'\naddress: 'Department of Computer Science and Engineering, POSTECH, Korea'\nbibliography:\n- 'egbib.bib'\ntitle: |\n Detector with Focus:\\\n Normalizing Gradient in Image Pyramid\n---\n\nnormalization, detection, gradient\n\nIntroduction {#sec:intro}\n============\n\nGradients and image pyramids are among the essential components of computer vision. Well-known methods" -"---\nabstract: |\n We improve on our earlier dynamical estimate of the virial masses of the haloes of Lyman-break galaxies (LBGs) at redshift $z=3$ by accounting for the effects of seeing, slit width, and observational uncertainties. From an analysis of the small number of available rotation curves for LBGs we determine a relation $V_{c7}=(1.9\\pm0.2)\\sigma$ between circular velocity at a radius of 7kpc, $V_{c7}$, and central line velocity width, $\\sigma$. We use this relation to transform the measured velocity widths of 32 LBGs to the distribution of circular velocities, $V_{c7}$, for the population of LBGs brighter than ${\\mathcal\n R=25.5}$. We compare this distribution against the predicted distribution for the \u2018massive\u2013halo\u2019 model in which LBGs pinpoint all of the highest mass dark matter haloes at that epoch. The observed LBG circular velocities are smaller than the predicted circular velocities by a factor $>1.4\\pm0.15$. This is a lower limit as we have ignored any increase of circular velocity caused by baryonic dissipation. The massive\u2013halo model predicts a median halo virial mass of $10^{12.3}$[${\\mathrm M_{\\odot}\\thinspace}$]{}, and a small spread of circular velocities, $V_{c7}$. 
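The gradient-variation effect described in the image-pyramid abstract above can be illustrated with a toy computation. This is a hedged sketch, not the paper's estimator: a 2x2 block average stands in for bilinear resampling, and the scaled gradients are corrected by the empirical ratio of gradient expectations.

```python
import numpy as np

# Downsampling smooths an image, so the expected gradient magnitude of
# a scaled pyramid level differs systematically from the original's.
# Multiplying the scaled gradients by the ratio of expectations removes
# that systematic shift before the features reach a classifier.

def grad_magnitude(img):
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def halve(img):
    # 2x2 block average as a stand-in for bilinear resampling
    h, w = img.shape
    return img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

rng = np.random.default_rng(0)
img = rng.random((64, 64))              # synthetic test image

g_full = grad_magnitude(img)
g_half = grad_magnitude(halve(img))

ratio = g_full.mean() / g_half.mean()   # ratio of gradient expectations
g_half_norm = g_half * ratio            # normalized gradients at the coarse scale
print(f"expectation ratio: {ratio:.3f}")
```

On noise-like input the ratio is well above 1, i.e. the coarse level's gradients are systematically weaker until normalized.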
Our median estimated dynamical mass is $<\n 10^{11.6\\pm0.3}$[${\\mathrm M_{\\odot}\\thinspace}$]{}, which is significantly smaller; furthermore, the spread of our derived circular velocities" -"---\nabstract: |\n Framed flow categories were introduced by Cohen\u2013Jones\u2013Segal as a way of encoding the flow data associated to a Floer functional. A framed flow category gives rise to a CW-complex with one cell for each object of the category. The idea is that the Floer invariant should take the form of the *stable homotopy type* of the resulting complex, recovering the Floer cohomology as its singular cohomology. Such a framed flow category was produced, for example, by Lipshitz\u2013Sarkar from the input of a knot diagram, resulting in a stable homotopy type generalizing Khovanov cohomology.\n\n In this paper we give moves that change a framed flow category without changing the associated stable homotopy type. These are inspired by moves that can be performed in the Morse\u2013Smale case without altering the underlying smooth manifold. We posit that if two framed flow categories represent the same stable homotopy type then a finite sequence of these moves is sufficient to connect the two categories. This is directed towards the goal of reducing the study of framed flow categories to a combinatorial calculus.\n\n We provide examples of calculations performed with these moves (related to the Khovanov framed flow category), and prove some general" -"---\nabstract: 'We study a three-dimensional dynamical system in two slow variables and one fast variable. We analyze the tangency of the unstable manifold of an equilibrium point with \u201cthe\u201d repelling slow manifold, in the presence of a stable periodic orbit emerging from a Hopf bifurcation. This tangency heralds complicated and chaotic mixed-mode oscillations. We classify these solutions by studying returns to a two-dimensional cross section. 
We use the intersections of the slow manifolds as a basis for partitioning the section according to the number and type of turns made by trajectory segments. Transverse homoclinic orbits are among the invariant sets serving as a substrate of the dynamics on this cross-section. We then turn to a one-dimensional approximation of the global returns in the system, identifying saddle-node and period-doubling bifurcations. These are interpreted in the full system as bifurcations of mixed-mode oscillations. Finally, we contrast the dynamics of our one-dimensional approximation to classical results of the quadratic family of maps. We describe the transient trajectory of a critical point of the map over a range of parameter values.'\nauthor:\n- Ian Lizarraga\nbibliography:\n- 'shnfsaa4-preprint3.bib'\ntitle: 'Tangency bifurcation of invariant manifolds in a slow-fast system'\n---\n\n> We study a" -"One of the fundamental properties characterizing a matter wave source is its degree of temporal coherence. Perfect coherence in the time domain would allow one to completely predict the phase evolution of the underlying field. In light optics, a laser comes closest to this ideal situation. The temporal coherence of a laser exceeds that of a thermal light source by far, which is central to many applications in spectroscopy, metrology and interferometry. Similarly, a matter wave source based on Bose-Einstein condensation [@bec; @atomlasers] is expected to have a substantially higher degree of temporal coherence than a thermal atom source. 
So far, experimental investigations of the coherence of Bose-Einstein condensates have focused on the spatial domain: The interference of two condensates has been observed [@Andrews97], the uniformity of the spatial phase has been demonstrated[@Hagley99; @Stenger99] and the spatial correlation function has been determined [@Bloch00].\n\nA measurement of the temporal coherence of Bose-Einstein condensates or atom laser beams has not yet been reported. However, there are prospects to realize matter wave sources with coherence times comparable to state-of-the-art optical lasers. Theoretically, the energy width of a matter wave beam extracted from a Bose-Einstein condensate should approach the Fourier limit which is determined" -"---\nabstract: 'We propose a method to identify and to locate \"repellers\u201d in quasi-periodically forced logistic map (QPLM), using a kind of Morse decomposition of nested attracting invariant sets. In order to obtain the invariant sets, we use an auxiliary 1+2-dimensional skew-product map system describing the evolution of a line segment in the phase space of QPLM. With this method, detailed structure of repellers can be visualized, and the emergence of a repeller in QPLM can be detected as an easily observable bifurcation in the auxiliary system. In addition to the method to detect the repellers, we propose a new numerical method for distinguishing a strange non-chaotic attractor (SNA) from a smooth torus attractor, using a correspondence between SNAs in QPLM and attractors with riddled basin in the auxiliary system.'\nauthor:\n- Tsuyoshi Chawanya\n- Takafumi Sakai\ndate: 3 March 2014\ntitle: 'On repellers in quasi-periodically forced logistic map system'\n---\n\n> The topological structure of invariant sets as well as its variation (bifurcation) is one of the most basic and essential issues for understanding the behavior of a dynamical system. 
However, it is sometimes not easy to obtain invariant sets in non-autonomous systems, and that might make our understanding" -"---\nabstract: 'We address the two fundamental problems of *spatial field reconstruction* and *sensor selection* in heterogeneous sensor networks: (i) how to efficiently perform *spatial field reconstruction* based on measurements obtained simultaneously from networks with both high and low quality sensors; and (ii) how to perform *query based sensor set selection with predictive MSE performance guarantee*. For the first problem, we developed a low complexity algorithm based on the *spatial best linear unbiased estimator* (S-BLUE). Next, building on the S-BLUE, we address the second problem, and develop an efficient algorithm for *query based sensor set selection with performance guarantee*. Our algorithm is based on the Cross Entropy method which solves the combinatorial optimization problem in an efficient manner.'\nauthor:\n- |\n Pengfei Zhang\\\n University of Oxford\\\n Oxford, UK\\\n Ido Nevat\\\n TUM CREATE\\\n Singapore\\\n Gareth W. Peters\\\n Heriot-Watt University\\\n Scotland, UK\\\n Wolfgang Fruehwirt\\\n University of Oxford\\\n Oxford, UK\\\n Yongchao Huang\\\n University of Oxford\\\n Oxford, UK\\\n Ivonne Anders\\\n ZAMG\\\n Vienna, Austria\\\n Michael Osborne\\\n University of Oxford\\\n Oxford, UK\\\nbibliography:\n- 'references.bib'\ntitle: 'Sensor Selection and Random Field Reconstruction for Robust and Cost-effective Heterogeneous Weather Sensor Networks for the Developing World'\n---\n\nIntroduction\n============\n\nWe consider the case where two types of sensors" -"*Suryakant Mishra, Haardik Pandey, Priyanka Yogi, Shailendra K. Saxena, Swaroop Roy, Pankaj R. 
Sagdeo and Rajesh Kumar*[^1]\n\nMaterial Research Laboratory, Discipline of Physics & MSEG, Indian Institute of Technology Indore, Simrol-453552, Madhya Pradesh, India\n\nABSTRACT\n\nFabrication and operation of simple solid state electrochromic devices using ethyl viologen diperchlorate in a polymer matrix are presented here. In-situ Raman and transmission/absorption studies have been done to establish the origin of the bias induced color change, between a transparent and navy blue color, in the electrochromic device. The origin of the bias induced color change has been attributed to bias induced redox switching between the viologen dication and free radical forms. The fundamental reason behind the colour change of the viologen molecule has thus been established. In-situ UV-Vis spectra reveal that the navy blue color of the device under biased conditions is not due to an increase in the transparency corresponding to the blue wavelength but due to suppression of the transparency corresponding to the complementary colors. Absorption modulation with a good ON/OFF contrast has been reported for the device.\n\n**Keywords:** Raman spectroscopy, UV-Vis spectroscopy, Viologen, Electrochromism\n\n![\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2013xxxxxxxxxxxxxxxxx\u2014\u2014\u2014\u2014\u2014\u2014\u2014\u2013](toc){width=\"8cm\"}\n\nIntroduction\n============\n\nAn electrochromic device, as the name suggests, is a device which changes color as a result of an electrical" -"---\nabstract: 'Convolutional operations have two limitations: (1) they do not explicitly model where to focus, as the same filter is applied to all positions, and (2) they are unsuitable for modeling long-range dependencies as they only operate on a small neighborhood. While both limitations can be alleviated by attention operations, many design choices remain to be determined to use attention, especially when applying attention to videos. 
Towards a principled way of applying attention to videos, we address the task of spatiotemporal attention cell search. We propose a novel search space for spatiotemporal attention cells, which allows the search algorithm to flexibly explore various design choices in the cell. The discovered attention cells can be seamlessly inserted into existing backbone networks, e.g., I3D or S3D, and improve video classification accuracy by more than 2% on both Kinetics-600 and MiT datasets. The discovered attention cells outperform non-local blocks on both datasets, and demonstrate strong generalization across different modalities, backbones, and datasets. Inserting our attention cells into I3D-R50 yields state-of-the-art performance on both datasets.'\nauthor:\n- 'Xiaofang Wang[^1]'\n- Xuehan Xiong\n- Maxim Neumann\n- AJ Piergiovanni\n- 'Michael S. Ryoo'\n- Anelia Angelova\n- 'Kris M. Kitani'\n- Wei Hua\nbibliography:\n-" -"---\nabstract: 'Non-local self-similarity in natural images has been well studied as an effective prior in image restoration. However, for single image super-resolution (SISR), most existing deep non-local methods (e.g., non-local neural networks) only exploit similar patches within the same scale of the low-resolution (LR) input image. Consequently, the restoration is limited to using the same-scale information while neglecting potential high-resolution (HR) cues from other scales. In this paper, we explore the cross-scale patch recurrence property of a natural image, i.e., similar patches tend to recur many times across different scales. This is achieved using a novel cross-scale internal graph neural network ([IGNN]{}). Specifically, we dynamically construct a cross-scale graph by searching $k$-nearest neighboring patches in the downsampled LR image for each query patch in the LR image. We then obtain the corresponding $k$ HR neighboring patches in the LR image and aggregate them adaptively in accordance to the edge label of the constructed graph. 
In this way, the HR information can be passed from $k$ HR neighboring patches to the LR query patch to help it recover more detailed textures. Besides, these internal image-specific LR/HR exemplars are also significant complements to the external information learned from the training dataset." -"Systems of reacting particles are used to model a whole gamut of phenomena relevant to fields ranging from chemical physics through statistical physics to mathematical biology. In some applications the particles represent chemical or biological species [@Kopelman; @HZ]; in other cases they are to be interpreted as composite objects such as aggregating traffic jams [@BKS]. Excitations can also be treated as interacting particles, one example being laser-induced excitons in certain crystals [@exp]. Furthermore, domain walls occurring in a number of different contexts such as growth and coarsening processes [@KS; @GK] have dynamics with a natural particle interpretation.\n\nGenerally these systems are defined through nonequilibrium dynamics. Given such a wide variety of nonequilibrium reaction systems, it is natural to ask if they can be divided into distinct groups akin to the universality classes known for equilibrium systems.\n\nTwo reactions that have been extensively studied are single species annihilation ($A{+}A{\\to}\\emptyset$) and coalescence ($A{+}A{\\to}A$). A particularly striking result is that if the reactant motion is diffusive, the two processes belong to the same universality class [@diffuse] and the density decay is independent of the reaction rate in two dimensions and below. Moreover, these diffusive systems have also served as prototypes for the development" -"---\nabstract: 'We empirically investigate the (negative) expected accuracy as an alternative loss function to cross entropy (negative log likelihood) for classification tasks. Coupled with softmax activation, it has small derivatives over most of its domain, and is therefore hard to optimize. 
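The cross-scale patch search underlying IGNN, as described in this abstract, can be sketched outside a neural network. The brute-force code below is illustrative only: raw pixels stand in for learned features, and the patch size, stride, and 2x scale factor are assumptions, not the paper's settings.

```python
import numpy as np

# Each query patch in the LR image is matched to its k nearest patches
# in a 2x-downsampled copy; a match at position (i, j) there corresponds
# to a 2x-larger region at (2i, 2j) in the LR image itself, which serves
# as the HR exemplar that the cross-scale graph aggregates.

def extract_patches(img, size, stride):
    patches, positions = [], []
    h, w = img.shape
    for i in range(0, h - size + 1, stride):
        for j in range(0, w - size + 1, stride):
            patches.append(img[i:i + size, j:j + size].ravel())
            positions.append((i, j))
    return np.array(patches), positions

def downsample2(img):
    # 2x2 block average as a simple downsampler
    h, w = img.shape
    return img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

rng = np.random.default_rng(1)
lr = rng.random((32, 32))                                           # toy LR image

queries, _ = extract_patches(lr, size=4, stride=4)                  # 64 query patches
keys, key_pos = extract_patches(downsample2(lr), size=4, stride=2)  # 49 candidates

k = 3
dist2 = ((queries[:, None, :] - keys[None, :, :]) ** 2).sum(-1)     # pairwise distances
knn = np.argsort(dist2, axis=1)[:, :k]                              # k nearest cross-scale neighbours

i, j = key_pos[knn[0, 0]]
hr_exemplar = lr[2 * i:2 * i + 8, 2 * j:2 * j + 8]                  # HR exemplar for query 0
print(knn.shape, hr_exemplar.shape)
```

In the actual network the hard `argsort` selection is replaced by learned feature distances and adaptive (edge-label-weighted) aggregation of the k exemplars.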
A modified, leaky version is evaluated on a variety of classification tasks, including digit recognition, image classification, sequence tagging and tree tagging, using a variety of neural architectures such as logistic regression, multilayer perceptron, CNN, LSTM and Tree-LSTM. We show that it yields comparable or better accuracy compared to cross entropy. Furthermore, the proposed objective is shown to be more robust to label noise.'\nauthor:\n- Ozan \u0130rsoy\nbibliography:\n- 'ref.bib'\ntitle: On Expected Accuracy\n---\n\nIntroduction\n============\n\nClassification is perhaps the most prominent supervised learning task in machine learning\u00a0[@alpaydin2009introduction]. In classification, we are interested in assigning a given instance to a set of predetermined categories, based on prior observations in our *training* data. Typically, in classification, we use the maximum likelihood approach to estimate model parameters\u00a0[@vapnik2013nature; @millar2011maximum]. In this approach, we aim to find the most likely model parameters that could explain the observations in our training set. This leads to the popular negative log" -"---\nabstract: 'This article reviews some aspects of local covariance and of the ambiguities and anomalies involved in the definition of the stress energy tensor of quantum field theory in curved spacetime. Then, a summary is given of the approach proposed by Buchholz et al.\u00a0to define local thermal equilibrium states in quantum field theory, i.e., non-equilibrium states to which, locally, one can assign thermal parameters, such as temperature or thermal stress-energy. The extension of that concept to curved spacetime is discussed and some related results are presented. Finally, the recent approach to cosmology by Dappiaggi, Fredenhagen and Pinamonti, based on a distinguished fixing of the stress-energy renormalization ambiguity in the setting of the semiclassical Einstein equations, is briefly described. 
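The optimization difficulty described in this abstract has a simple closed form: with softmax outputs, the expected accuracy on one example is the probability assigned to the gold label, so the loss is `-p_gold` (versus `-log p_gold` for cross entropy). A minimal numerical sketch with toy logits (assumed values, not the paper's experiments) shows why the plain objective's gradients vanish once the model is confident:

```python
import numpy as np

# The gradient of -p_gold w.r.t. the logits is p_gold * (p - onehot),
# which shrinks both when the model is confidently right and when it is
# confidently wrong -- the flat-derivative problem the leaky variant of
# the loss is meant to fix.

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def neg_expected_accuracy(logits, gold):
    return -softmax(logits)[gold]          # bounded in (-1, 0)

def grad_neg_expected_accuracy(logits, gold):
    p = softmax(logits)
    return p[gold] * (p - np.eye(len(p))[gold])

confident = np.array([10.0, 0.0, 0.0])     # model already (over)confident
uncertain = np.array([0.5, 0.0, 0.0])      # model still undecided

g_conf = np.abs(grad_neg_expected_accuracy(confident, 0)).max()
g_unc = np.abs(grad_neg_expected_accuracy(uncertain, 0)).max()
print(f"max |grad|: confident {g_conf:.2e}, uncertain {g_unc:.2e}")
```

The confident example's gradient is orders of magnitude smaller than the uncertain one's, whereas cross entropy's gradient `p - onehot` stays usable over the whole domain.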
The concept of local thermal equilibrium states is then applied, to yield the result that the temperature behaviour of a quantized, massless, conformally coupled linear scalar field at early cosmological times is more singular than that of classical radiation.'\naddress: |\n Institut f\u00fcr Theoretische Physik\\\n Universit\u00e4t Leipzig\\\n Vor dem Hospitaltore 1\\\n D-04103 Leipzig\\\n Germany\nauthor:\n- Rainer Verch\ntitle: 'Local Covariance, Renormalization Ambiguity, and Local Thermal Equilibrium in Cosmology'\n---\n\nLocal Covariant Quantum Field Theory\n====================================\n\nThe main theme of" -"---\nabstract: |\n We have found strong selective emission of the N\u00a0II 5000\u00a0\u00c5\u00a0complex in the spectrum of the LMC hypergiant HDE\u00a0269896, ON9.7\u00a0Ia$^+$. Since this object also has anomalously strong He\u00a0II $\\lambda$4686 emission for its spectral type, an unusually wide range of ionization in its extended atmosphere is indicated. The published model of this spectrum does not reproduce these emission features, but we show that increased nitrogen and helium abundances, together with small changes in other model parameters, can do so. The morphological and possible evolutionary relationships of HDE\u00a0269896, as illuminated by the new spectral features, to other denizens of the OB Zoo are discussed. This object may be in an immediate pre-WNVL (Very Late WN) state, which is in turn the quiescent state of at least some Luminous Blue Variables.\n\n More generally, the N\u00a0II spectrum in HDE\u00a0269896 provides a striking demonstration of the occurrence of two distinctly different kinds of line behavior in O-type spectra: normal absorption lines that develop P\u00a0Cygni profiles at high wind densities, and selective emission lines from the same ions that do not. 
Further analysis of these features will advance understanding of both atomic physics and" -"---\nabstract: |\n We consider multi-variable sigma function of a genus $g$ hyperelliptic curve as a function of two group of variables - jacobian variables and parameters of the curve. In the theta-functional representation of sigma-function, the second group arises as periods of first and second kind differentials of the curve. We develop representation of periods in terms of theta-constants. For the first kind periods, generalizations of Rosenhain type formulae are obtained, whilst for the second kind periods theta-constant expressions are presented which are explicitly related to the fixed co-homology basis.\\\n We describe a method of constructing differentiation operators for hyperelliptic analogues of $\\zeta$- and $\\wp$-functions on the parameters of the hyperelliptic curve. To demonstrate this method, we gave the detailed construction of these operators in the cases of genus 1 and 2.\naddress:\n- 'Steklov Mathematical Institute, Moscow'\n- 'National University of Kyiv-Mohyla Academy'\n- Institute of Magnetism NASU\nauthor:\n- 'V.M.Buchstaber'\n- 'V.Z. Enolski'\n- 'D.V.Leykin'\ntitle: 'Multi-variable sigma-functions: old and new results'\n---\n\n..\n\nIntroduction\n============\n\nOur note belongs to an area in which Emma Previato took active part in the development. Since the time of first publication of the present authors [@bel97] she has inspired them," -"---\nabstract: 'Biological systems transduce signals from their surroundings through a myriad of pathways. In this paper, we describe signal transduction as a communication system: the signal transduction receptor acts as the receiver in this system, and can be modeled as a finite-state Markov chain with transition rates governed by the input signal. Using this general model, we give the mutual information under IID inputs in discrete time, and obtain the mutual information in the continuous-time limit. 
We show that the mutual information has a concise closed-form expression with clear physical significance. We also give a sufficient condition under which the Shannon capacity is achieved with IID inputs. We illustrate our results with three examples: the light-gated Channelrhodopsin-2 (ChR2) receptor; the ligand-gated nicotinic acetylcholine (ACh) receptor; and the ligand-gated calmodulin (CaM) receptor. In particular, we show that the IID capacity of the ChR2 receptor is equal to its Shannon capacity. We finally discuss how the results change if only certain properties of each state can be observed, such as whether an ion channel is open or closed.'\nauthor:\n- 'Andrew W. Eckford, , and Peter J. Thomas[^1][^2][^3][^4]'\nbibliography:\n- 'MolecularInfoTheory.bib'\n- 'infotheory.bib'\n- 'signaling.bib'\n- 'Cowan.bib'\n- 'PJT.bib'\n- 'neuroscience.bib'" -"---\nabstract: 'We investigate the degree of indistinguishability of cascaded photons emitted from a 3\u2013level quantum ladder system; in our case the biexciton\u2013exciton cascade of semiconductor quantum dots. For the 3\u2013level quantum ladder system we theoretically demonstrate that the indistinguishability is inherently limited for both emitted photons and determined by the ratio of the lifetimes of the excited and intermediate states. We experimentally confirm this finding by comparing the quantum interference visibility of non\u2013cascaded emission and cascaded emission from the same semiconductor quantum dot. Quantum optical simulations produce very good agreement with the measurements and allow to explore a large parameter space. 
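The mutual-information quantity discussed in this abstract can be illustrated for a single observation step. The sketch below uses hypothetical transition probabilities (not the paper's ChR2 model), treating the receptor's open/closed state after one step as the output of a discrete channel driven by an IID binary input:

```python
import numpy as np

# Discrete mutual information I(X;Y) in bits, with X the IID input
# (e.g. light off/on) and Y the observed state (closed/open).
# All probabilities below are illustrative assumptions.

def mutual_information(p_x, p_y_given_x):
    p_xy = p_x[:, None] * p_y_given_x            # joint P(x, y)
    p_y = p_xy.sum(axis=0)                       # marginal P(y)
    indep = p_x[:, None] * p_y[None, :]          # product of marginals
    mask = p_xy > 0                              # skip zero-probability cells
    return float((p_xy[mask] * np.log2(p_xy[mask] / indep[mask])).sum())

p_x = np.array([0.5, 0.5])                       # IID input distribution
p_y_given_x = np.array([[0.9, 0.1],              # P(y | x = off)
                        [0.2, 0.8]])             # P(y | x = on)

mi = mutual_information(p_x, p_y_given_x)
print(f"I(X;Y) = {mi:.4f} bits")
```

Maximizing this expression over `p_x` gives the IID-input capacity; the paper's point is that for certain receptors (such as ChR2) this already equals the Shannon capacity.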
Based on our model, we propose photonic structures to optimize the lifetime ratio and overcome the limited indistinguishability of cascaded photon emission from a 3\u2013level quantum ladder system.'\nauthor:\n- Eva\u00a0Sch\u00f6ll\n- Lucas\u00a0Schweickert\n- Lukas\u00a0Hanschke\n- 'Katharina\u00a0D.\u00a0Zeuner'\n- Friedrich\u00a0Sbresny\n- Thomas\u00a0Lettner\n- Rahul\u00a0Trivedi\n- Marcus\u00a0Reindl\n- Saimon\u00a0Filipe\u00a0Covre\u00a0da\u00a0Silva\n- Rinaldo\u00a0Trotta\n- 'Jonathan\u00a0J.\u00a0Finley'\n- Jelena\u00a0Vu\u010dkovi\u0107\n- Kai\u00a0M\u00fcller\n- Armando\u00a0Rastelli\n- Val\u00a0Zwiller\n- 'Klaus\u00a0D.\u00a0J\u00f6ns'\ntitle: 'The crux of using the cascaded emission of a 3\u2013level quantum ladder system to" -"We consider Location-based Service (LBS) settings, where a LBS provider logs the requests sent by mobile device users over a period of time and later wants to publish/share these logs. Log sharing can be extremely valuable for advertising, data mining research and network management, but it poses a serious threat to the privacy of LBS users. Sender anonymity solutions prevent a malicious attacker from inferring the interests of LBS users by associating them with their service requests after gaining access to the anonymized logs. With the fast-increasing adoption of smartphones and the concern that historic user trajectories are becoming more accessible, it becomes necessary for any sender anonymity solution to protect against attackers that are trajectory-aware (i.e. have access to historic user trajectories) as well as policy-aware (i.e they know the log anonymization policy). We call such attackers TP-aware.\n\nThis paper introduces a first privacy guarantee against TP-aware attackers, called [*TP-aware sender k-anonymity*]{}. It turns out that there are many possible TP-aware anonymizations for the same LBS log, each with a different utility to the consumer of the anonymized log. 
The problem of finding the optimal TP-aware anonymization is investigated. We show that trajectory-awareness renders the problem computationally harder than" -"---\nabstract: |\n We present *Hubble Space Telescope* ultraviolet spectroscopy of the white dwarfs PG0843+516, PG1015+161, SDSS1228+1040, and GALEX1931+0117, which accrete circumstellar planetary debris formed from the destruction of asteroids. Combined with optical data, a minimum of five and a maximum of eleven different metals are detected in their photospheres. With metal sinking time scales of only a few days, these stars are in accretion/diffusion equilibrium, and the photospheric abundances closely reflect those of the circumstellar material. We find C/Si ratios that are consistent with that of the bulk Earth, corroborating the rocky nature of the debris. Their C/O values are also very similar to those of bulk Earth, implying that the planetary debris is dominated by Mg and Fe silicates. The abundances found for the debris at the four white dwarfs show substantial diversity, comparable at least to that seen across different meteorite classes in the solar system. PG0843+516 exhibits significant over-abundances of Fe and Ni, as well as of S and Cr, which suggests the accretion of material that has undergone melting, and possibly differentiation. PG1015+161 stands out by having the lowest Si abundance relative to all other detected elements. The Al/Ca ratio determined for the planetary debris" -"---\nabstract: 'We discuss the recent claim that the thermohaline (\u201cfingering\u201d) instability is important in accreting white dwarfs, increasing the derived accretion fluxes potentially by orders of magnitude. 
We present an alternative view and conclude that at least in the steady state this is not the case and the current method of estimating accretion fluxes is correct.'\nauthor:\n- Detlev\u00a0Koester\ntitle: On Thermohaline Mixing in Accreting White Dwarfs\n---\n\nIntroduction\n============\n\nThe thermohaline (saltfinger, fingering, double-diffusive) instability is well-known in Oceanography: a warm layer of saltwater on top of a cold body of freshwater may be dynamically stable but may nevertheless lead to complete mixing, if the diffusion of heat is faster than that of salt. In the case of stars, the r\u00f4le of salt is played by the molecular weight. A layer with higher molecular weight on top of a layer with smaller weight may be dynamically stable (no convection), but subject to a similar double-diffusive instability. Classical papers in the astrophysical context are [@Ulrich72] and @Kippenhahn.Ruschenplatt.ea80 [=KRT80].\n\nThe instability in the scenario of KRT80\n========================================\n\nThe instability starts from a boundary layer separating the two layers. A front of a molecular weight gradient expands into the homogenous" -"**Principal Process Analysis of dynamic GlucoCEST MRI data**\\\nStefano Casagranda[^1], Marco Pizzolato[^2], Francisco Torrealdea[^3], Xavier Golay[^4], and Timoth\u00e9 Boutelier\\\n\n**Synopsis.** GlucoCEST is an MRI contrast enhancement technique sensitive to the concentration of sugar in the tissue. Because of a difference in metabolism, it is thought that tumors consume more sugar than normal tissue. However, glucose metabolism is complex and depends on many processes, which are all important to understand the origin of the measured signal. 
To achieve this goal we apply here a process analysis method to a deterministic system describing the metabolism of glucose in the tissue.\n\n**Introduction.** Chemical Exchange Saturation Transfer (CEST) is an MRI contrast enhancement technique that enables the indirect detection of molecules with exchangeable protons [@c1]. GlucoCEST is a CEST technique that measures a signal related to the concentration of injected glucose and its derivatives [@c1]. It is expected that metabolic anomalies due to the presence of cancerous tissue could be measured or characterized by means of glucoCEST, in particular through its dynamic characteristics. There, the signal is analyzed as a function of time following glucose metabolism, which is paramount to pathological tissue assessment. Different subvoxel compartments contribute to the signal. Particularly, a simplified" -"---\nabstract: 'In this work we contribute a novel pipeline to automatically generate training data, and to improve over state-of-the-art multi-object tracking and segmentation (MOTS) methods. Our proposed tracklet mining algorithm turns raw street-level videos into high-fidelity MOTS training data, is scalable and overcomes the need for expensive and time-consuming manual annotation approaches. We leverage state-of-the-art instance segmentation results in combination with optical flow obtained from models also trained on automatically harvested training data. Our second major contribution is [MOTSNet]{} \u2013 a deep learning, tracking-by-detection architecture for MOTS \u2013 deploying a novel mask-pooling layer for improved object association over time. Training [MOTSNet]{} with our automatically extracted data leads to significantly improved sMOTSA scores on the novel [KITTI MOTS]{} dataset (+1.9%/+7.5% on cars/pedestrians). Even without learning from a single, manually annotated MOTS training example we still improve over prior state-of-the-art, confirming the compelling properties of our pipeline. 
On the MOTSChallenge dataset we improve by +4.1%, further confirming the efficacy of our proposed [MOTSNet]{}.'\nauthor:\n- |\n Lorenzo Porzi$^\dagger$, Markus Hofinger$^\ddagger$, Idoia Ruiz$^\ast$, Joan Serrat$^\ast$, Samuel Rota Bul\u00f2$^\dagger$, Peter Kontschieder$^\dagger$\\\n Mapillary Research$^\dagger$, Graz University of Technology$^\ddagger$, Computer Vision Center$^\ast$\\\n [research@mapillary.com$^\dagger$, markus.hofinger@icg.tugraz.at$^\ddagger$, {iruiz,joans}@cvc.uab.es$^\ast$]{}\nbibliography:\n- 'arxiv.bib'\ntitle: 'Learning Multi-Object Tracking and Segmentation from Automatic Annotations'\n---" -"---\nabstract: 'We examine two means by which wind can impart energy to waves: sheltering and deposition of material upwards from windward surface shear. The shear-driven deposition is shown to be the more efficient process. Lengthening of waves to match the wind speed is shown to be very inefficient and to consume a large fraction of the energy imparted by the wind. The surface shear provides a low energy sink that absorbs most of the momentum from the wind. These produce bounds on the efficiency of wave growth. The results here are computed in a model-independent and perturbation-free fashion by a careful consideration of conservation laws. By combining these effects we can place bounds on the rates waves can grow in a given fetch and the relative amount of shear flow versus the, relatively small, Stokes drift that must arise.'\nauthor:\n- |\n Clifford Chafin\\\n \u00a0 [^1]\ntitle: 'Conservation Laws and Bounds on the Efficiency of Wind-Wave Growth'\n---\n\nThe generation of waves by wind is still not completely understood. It is known that wind blowing over a flat surface first generates instabilities that lead to progressive capillary waves that move at oblique angles until the component of their" -"---\nabstract: 'This paper explores the use of language models to predict 20 human traits from users\u2019 Facebook status updates. 
The data was collected by the myPersonality project, and includes user statuses along with their personality, gender, political identification, religion, race, satisfaction with life, IQ, self-disclosure, fair-mindedness, and belief in astrology. A single interpretable model matches state-of-the-art results for well-studied tasks such as predicting gender and personality, and sets the standard on other traits such as IQ, sensational interests, political identity, and satisfaction with life. Additionally, highly weighted words are published for each trait. These lists are valuable for creating hypotheses about human behavior, as well as for understanding what information a model is extracting. Using performance and extracted features we analyze models built on social media. The real-world problems we explore include gendered classification bias and Cambridge Analytica\u2019s use of psychographic models.'\nauthor:\n- Andrew Cutler\n- Brian Kulis\nbibliography:\n- 'big5.bib'\ntitle: Inferring Human Traits From Facebook Statuses\n---\n\nIntroduction\n============\n\nFacebook\u2019s 2 billion users spend an average of 50 minutes a day on Facebook, Messenger, or Instagram [@stewart2016facebook]. Industry seeks to obtain, model and actualize this mountain of data in a variety of" -"---\nabstract: 'When performing a national research assessment, some countries rely on citation metrics whereas others, such as the UK, primarily use peer review. In the influential *Metric Tide* report, a low agreement between metrics and peer review in the UK Research Excellence Framework (REF) was found. However, earlier studies observed much higher agreement between metrics and peer review in the REF and argued in favour of using metrics. This shows that there is considerable ambiguity in the discussion on agreement between metrics and peer review. 
We provide clarity in this discussion by considering four important points: (1) the level of aggregation of the analysis; (2) the use of either a size-dependent or a size-independent perspective; (3) the suitability of different measures of agreement; and (4) the uncertainty in peer review. In the context of the REF, we argue that agreement between metrics and peer review should be assessed at the institutional level rather than at the publication level. Both a size-dependent and a size-independent perspective are relevant in the REF. The interpretation of correlations may be problematic and as an alternative we therefore use measures of agreement that are based on the absolute or relative differences between metrics and" -"---\nabstract: 'We study the effect of stimulated photon emission from the vacuum in strong space-time dependent electromagnetic fields. We emphasize the viewpoint that the vacuum subjected to macroscopic electromagnetic fields with at least one nonzero electromagnetic field invariant, as, e.g., attainable by superimposing two laser beams, can represent a source term for outgoing photons. We believe that this view is particularly intuitive and allows for a straightforward and intuitive study of optical signatures of quantum vacuum nonlinearity in realistic experiments involving the collision of high-intensity laser pulses, and exemplify this view for the vacuum subjected to a strong standing electromagnetic wave as generated in the focal spot of two counter-propagating, linearly polarized high-intensity laser pulses. 
Focusing on a comparatively simple electromagnetic field profile, which should nevertheless capture the essential features of the electromagnetic fields generated in the focal spots of real high-intensity laser beams, we provide estimates for emission characteristics and the numbers of emitted photons attainable with present and near-future high-intensity laser facilities.'\nauthor:\n- Felix Karbstein\n- Rashid Shaisultanov\ntitle: Stimulated photon emission from the vacuum\n---\n\nIntroduction {#sec:intro}\n============\n\nThe fluctuations of virtual charged particles in the vacuum of quantum electrodynamics (QED) give rise to" -"---\nabstract: 'Talaia is a platform for monitoring social media and digital press. A configurable crawler gathers content with respect to user-defined domains or topics. Crawled data is processed by means of the EliXa Sentiment Analysis system. A Django-powered interface provides data visualization for a user-based analysis of the data. This paper presents the architecture of the system and describes in detail its different components. To prove the validity of the approach, two real use cases are accounted for: one in the cultural domain and one in the political domain. Evaluation for the sentiment analysis task in both scenarios is also provided, showing the capacity for domain adaptation.'\naddress:\n- Elhuyar Foundation\n- 'IXA NLP Group, University of the Basque Country UPV/EHU'\nauthor:\n- 'I\u00f1aki San Vicente\\*, Xabier Saralegi'\n- Rodrigo Agerri\nbibliography:\n- 'talaia\\_eaai.bib'\ntitle: Real Time Monitoring of Social Media and Digital Press\n---\n\nSentiment Analysis, Social Media Analysis, Crawling, Natural Language Processing, Digital Media Monitoring\n\nIntroduction {#isv2018:intro}\n============\n\nThe Internet is a very rich source of user-generated information. 
As knowledge management technologies have evolved, many organizations have turned their eyes to such information, as a way of obtaining global feedback on their activities [@chen2012business]. Some" -"---\nabstract: 'Fairness in algorithmic decision-making processes is attracting increasing concern. When an algorithm is applied to human-related decision-making, an estimator solely optimizing its predictive power can learn biases from the existing data, which motivates the notion of fairness in machine learning. While several different notions are studied in the literature, few studies examine how these notions affect the individuals. We demonstrate such a comparison between several policies induced by well-known fairness criteria, including the color-blind (CB), the demographic parity (DP), and the equalized odds (EO). We show that the EO is the only criterion among them that removes group-level disparity. Empirical studies on the social welfare and disparity of these policies are conducted.'\nauthor:\n- |\n Junpei Komiyama\\\n The University of Tokyo\\\n `junpei@komiyama.info`\\\n- |\n Hajime Shimao\\\n Santa Fe Institute\\\n `hajime.fr@gmail.com`\\\nbibliography:\n- 'main.bib'\ntitle: Comparing Fairness Criteria Based on Social Outcome\n---\n\n[UTF8]{}[zhsong]{}\n\nIntroduction {#sec_intro}\n============\n\nThe goal of supervised learning is to estimate label $y$ by learning an estimator ${\hat{y}}(X)$ as a function of associated feature $X$. Arguably, an estimator of better predictive power is preferred, and a standard supervised learning algorithm learns ${\hat{y}}(X)$ from existing data. However, when it is applied to human-related
For cases where the allowed intensity values in the reconstruction are known a priori, the discrete algebraic reconstruction technique (DART) has been shown to yield accurate reconstructions from few projections. However, a key limitation is that the benefit of DART diminishes as the number of different materials increases. Many tomographic imaging techniques can simultaneously record tomographic data at multiple *channels*, each corresponding to a different weighting of the materials in the object. Whenever projection data from more than one channel is available, this additional information can potentially be exploited by the reconstruction algorithm. In this paper we present Multi-Channel DART (MC-DART), which deals effectively with multi-channel data. This class of algorithms is a generalization of DART to multiple channels and combines the information for each separate channel-reconstruction in a multi-channel segmentation step. We demonstrate that in a range of simulation experiments, MC-DART is capable of producing more accurate reconstructions compared to single-channel DART.'\nauthor:\n- Math\u00e9" -"---\nabstract: 'We investigate numerically and theoretically the effect of spatial disorder on two-dimensional split-step discrete-time quantum walks with two internal \u201ccoin\u201d states. Spatial disorder can lead to Anderson localization, inhibiting the spread of quantum walks, putting them at a disadvantage against their diffusively spreading classical counterparts. We find that spatial disorder of the most general type, i.e., position-dependent Haar random coin operators, does not lead to Anderson localization, but to a diffusive spread instead. This is a delocalization, which happens because disorder places the quantum walk at a critical point between different anomalous Floquet-Anderson insulating topological phases. 
We base this explanation on the relationship of this general quantum walk to a simpler case more studied in the literature, and for which disorder-induced delocalization of a topological origin has been observed. We review topological delocalization for the simpler quantum walk, using time-evolution of the wavefunctions and level spacing statistics. We apply scattering theory to two-dimensional quantum walks, and thus calculate the topological invariants of disordered quantum walks, substantiating the topological interpretation of the delocalization, and finding signatures of the delocalization in the finite-size scaling of transmission. Our results showcase how theoretical ideas and numerical tools from solid-state physics can help" -"---\nabstract: 'Here we present previously unpublished optical spectra of supernova (SN) 2001ig, a Type\u00a0IIb SN, from about a week after explosion until nearly one year later. The earliest spectrum consists of only a few broad absorption features, but soon more common Type\u00a0II SN features including hydrogen P-Cygni profiles and helium absorption become apparent. At later times, as the H features fade and the absorption becomes more prominent, we observe the SN to transition from a Type\u00a0II to a Type\u00a0Ib. Finally, observations after 250 days past explosion show a nebular-phase SN spectrum with one of the largest magnesium to oxygen intensity ratios ever seen. Additionally, we present models of the late-time spectra which indicate that the inner ejecta consist of [$\\sim\\!\\!$\u00a0]{}1.15\u00a0[M$_\\sun$]{}\u00a0of material, most of which (by mass) is in the form of oxygen, with [$\\sim\\!\\!$\u00a0]{}0.13\u00a0[M$_\\sun$]{}\u00a0of $^{56}$Ni and essentially no hydrogen.'\nauthor:\n- 'Jeffrey M. Silverman, Paolo Mazzali, Ryan Chornock, Alexei V. Filippenko, Alejandro Clocchiatti, Mark M. Phillips, Mohan Ganeshalingam, and Ryan J. 
Foley'\nbibliography:\n- 'astro\\_refs.bib'\ntitle: Optical Spectroscopy of the Somewhat Peculiar Type\u00a0IIb Supernova 2001ig\n---\n\nIntroduction {#s:intro}\n============\n\nIt is thought that most high-mass stars ($\\gtrsim8$\u00a0[M$_\\sun$]{})" -"---\nabstract: |\n This article is a brief review of \u201cnonfreeness\" and related measures of \u201ccorrelation\" for many-fermion systems.\n\n The many-fermion states we deem \u201cuncorrelated\" are the gauge-invariant quasi-free states. Uncorrelated states of systems of finitely many fermions we call simply \u201cfree\" states. Slater determinant states are free; all other free states are \u201csubstates\" of Slater determinant states or limits of such.\n\n The nonfreeness of a many-fermion state equals the minimum of its entropy relative to all free states. Correlation functionals closely related to nonfreeness can be defined in terms of R\u00e9nyi entropies; nonfreeness is the one that uses Shannon entropy. These correlation functionals all share desirable additivity and monotonicity properties, but nonfreeness has some additional attractive properties.\nauthor:\n- 'Alex D. Gottlieb[^1] \u00a0and Norbert J. Mauser'\ntitle: 'Nonfreeness and related functionals for measuring correlation in many-fermion states'\n---\n\nIntroduction\n============\n\n\u201cNonfreeness\" is an entropy functional of states of many-electron systems. It was introduced as a \u201cmeasure of electron correlation\" [@GottliebMauser2007; @GottliebMauserArchived] that is purely a functional of the many-electron state, depending only on the structure of the state and not upon the physical circumstances attending it, e.g., the Hamiltonian operator for the system [@correlation].\n\nBy definition, the nonfreeness of" -"---\nabstract: 'The human ability to recognize objects is impaired when the object is not shown in full. \u201cMinimal images\u201d are the smallest regions of an image that remain recognizable for humans. 
[@Ullman_etal_2016_PNAS] show that a slight modification of the location and size of the visible region of the minimal image produces a sharp drop in human recognition accuracy. In this paper, we demonstrate that such drops in accuracy due to changes of the visible region are a common phenomenon between humans and existing state-of-the-art deep neural networks (DNNs), and are much more prominent in DNNs. We found many cases where DNNs classified one region correctly and the other incorrectly, though they only differed by one row or column of pixels, and were often bigger than the average human minimal image size. We show that this phenomenon is independent of previous works that have reported lack of invariance to minor modifications in object location in DNNs. Our results thus reveal a new failure mode of DNNs that also affects humans to a much lesser degree. They expose how fragile DNN recognition ability is for natural images even without adversarial patterns being introduced. Bringing the robustness of DNNs in natural images" -"---\nabstract: 'Instance search is an interesting task as well as a challenging issue due to the lack of effective feature representation. In this paper, an instance level feature representation built upon fully convolutional instance-aware segmentation is proposed. The feature is ROI-pooled from the segmented instance region, so that instances of various sizes and layouts are represented by deep features of uniform length. This representation is further enhanced by the use of deformable ResNeXt blocks. Superior performance is observed in terms of its distinctiveness and scalability on a challenging evaluation dataset built by ourselves. In addition, the proposed enhancement on the network structure also shows superior performance on the instance segmentation task.'\naddress: |\n The School of Information Science and Technology,\\\n Xiamen University\\\n Xiamen, 361005, P. R. 
China.\\\nauthor:\n- Yu Zhan\n- 'Wan-Lei Zhao'\nbibliography:\n- 'yzhan.bib'\ntitle: |\n Instance Search via Instance Level\\\n Segmentation and Feature Representation\n---\n\nInstance search, Instance segmentation, CNN\n\nIntroduction\n============\n\nWith the proliferation of massive multimedia content in our daily life, users increasingly wish to browse relevant images/videos in which a specified visual instance (e.g., an object, a landmark, or a person) appears. This is known as" -"---\nauthor:\n- 'Akul\u00a0Malhotra, Sen\u00a0Lu, Kezhou\u00a0Yang, and\u00a0Abhronil\u00a0Sengupta,\u00a0[^1][^2]'\ntitle: Exploiting Oxide Based Resistive RAM Variability for Bayesian Neural Network Hardware Design\n---\n\nNeuromorphic Computing, Bayesian Neural Networks, Resistive Random Access Memory.\n\nIntroduction\n============\n\nWhile Bayesian deep learning has shown promise to serve as a pathway for enabling Probabilistic Machine Learning, the algorithms have been primarily developed without any insights regarding the underlying hardware implementation. Bayesian techniques are more computationally expensive than their non-Bayesian counterparts, thereby limiting their training and deployment in resource-constrained environments like wearables and mobile edge devices. In addition to the standard von-Neumann bottleneck [@zidan2018future] prevalent in current deep learning networks (where memory access and memory leakage can account for a significant portion of the total energy consumption profile), Bayesian deep learning involves repeated sampling of network weights from learnt probability distributions (in most cases, Gaussian distributions are used which are much more hardware-expensive than uniform probability distributions) and inference based on the sampled weights. For instance, implementing just a single synapse would involve a costly CMOS Gaussian random number generator circuit. 
Repeated parameter sampling and evaluation for just a single inference operation worsens the von-Neumann bottleneck issue further. With deep networks involving" -"---\nabstract: 'Atomistic molecular dynamics simulation is an important tool for predicting materials properties. Accuracy depends crucially on the model for the interatomic potential. The gold standard would be quantum mechanics (QM) based force calculations, but such a first-principles approach becomes prohibitively expensive at large system sizes. Efficient machine learning (ML) models have become increasingly popular as surrogates for QM. Neural networks with many thousands of parameters excel in capturing structure within a large dataset, but may struggle to extrapolate beyond the scope of the available data. Here we present a highly automated active learning approach to iteratively collect new QM data that best resolves weaknesses in the existing ML model. We exemplify our approach by developing a general potential for elemental aluminum. At each active learning iteration, the method (1) trains an ANI-style neural network potential from the available data, (2) uses this potential to drive molecular dynamics simulations, and (3) collects new QM data whenever the neural network identifies an atomic configuration for which it cannot make a good prediction. All molecular dynamics simulations are initialized to a disordered configuration, and then driven according to randomized, time-varying temperatures. This nonequilibrium molecular dynamics forms a variety of crystalline and
First, we determine the tail index associated with the asymptotic distribution of the sum of all power-weighted incoming and outgoing edge lengths at a randomly chosen vertex. Second, we study the behavior of chemical distances on scale-free Gilbert graphs and show the existence of different regimes depending on the tail index of the radius distribution. Despite some similarities to long-range percolation and ultra-small scale-free geometric networks, scale-free Gilbert graphs are actually more closely related to fractal percolation and this connection gives rise to different scaling limits. We also propose a modification of the graph, where the total number of edges can be reduced substantially at the cost of introducing a logarithmic factor in the chemical distances.'\nauthor:\n- Christian Hirsch\ntitle: 'From heavy-tailed Boolean models to scale-free Gilbert graphs'\n---\n\n" -"---\nabstract: 'We present a multi-wavelength analysis of the history of star formation in the W3 complex. Using deep, near-infrared ground-based images, combined with images obtained with Spitzer and Chandra observatories, we identified and classified young embedded sources. We identified the principal clusters in the complex, and determined their structure and extension. We constructed extinction-limited samples for five principal clusters, and constructed K-band luminosity functions (KLF) that we compare with those of artificial clusters with varying ages. This analysis provided mean ages and possible age spreads for the clusters. We found that IC 1795, the centermost cluster of the complex, still hosts a large fraction of young sources with circumstellar disks. This indicates that star formation was active in IC 1795 as recently as 2 Myr ago, simultaneous to the star forming activity in the flanking embedded clusters, W3-Main and W3(OH). 
A comparison with carbon monoxide emission maps indicates strong velocity gradients in the gas clumps hosting W3-Main and W3(OH) and shows small receding clumps of gas at IC 1795, suggestive of rapid gas removal (faster than the T Tauri timescale) in the cluster forming regions. We discuss one possible scenario for the progression of cluster formation in the W3" -"---\nabstract: |\n Interactive Educational Systems (IESs) have developed rapidly in recent years to address the issue of quality and affordability of education. Analogous to many other domains in Artificial Intelligence (AI), there are specific tasks of AI in Education (AIEd) for which labels are scarce and expensive. For instance, labels like exam score and grade are considered important in educational and social contexts. However, unlike interactive features automatically collected by IESs, obtaining the labels is costly as they require student actions taken outside the system. Likewise, while student events like course dropout and review correctness are automatically recorded by IESs, they are few in number as the events occur sporadically in practice. A common way of circumventing the label-scarcity problem is the pre-train/fine-tune method, where a model is trained on a relevant auxiliary task with a large amount of data before the main task. Accordingly, existing works pre-train a model to learn representations of contents in learning items (e.g. exercises). However, such methods fail to utilize the student interaction data available and model student learning behavior.\n\n To this end, we propose assessment modeling, fundamental pre-training tasks for general IESs. An assessment is a feature of student-system interactions which can" -"---\nabstract: 'We show a method, for direct numerical simulations, to trigger and maintain turbulent bands directly at low Reynolds numbers in channel flow. 
The key is to impose a moving localised force which induces a local flow with sufficiently strong inflectional instability. With the method, we can trigger and maintain turbulent bands at Reynolds numbers down to $Re\simeq 500$. More importantly, we can generate any band pattern with desired relative position and orientation. The usual perturbation approach resorts to turbulent fields simulated at higher Reynolds numbers, random noise, or localised vortical perturbation, which neither assures a successful generation of bands at low Reynolds numbers nor offers a control on the orientation of the generated bands. [[A precise control on the position and orientation of turbulent bands is important for the investigation of all possible types of band interaction and for understanding the transition in channel flow at low Reynolds numbers.]{}]{}'\nauthor:\n- 'Baofang Song [^1] , Xiangkai Xiao'\ndate: '?; revised ?; accepted ?. - To be entered by editorial office'\ntitle: ' Trigger turbulent bands directly at low Reynolds numbers in channel flow using a moving-force technique '\n---\n\nIntroduction\n============\n\nSince the work of @Tsukahara2005, it has" -"---\nauthor:\n- |\n Wei Xiao$^\dagger$, Hao Helen Zhang$^\ddagger$, and Wenbin Lu$^\dagger$\\\n \\\n \\\nbibliography:\n- 'OTR.bib'\ntitle: Robust regression for optimal individualized treatment rules\n---\n\n> [*Abstract:*]{} Because different patients may respond quite differently to the same drug or treatment, there is increasing interest in discovering individualized treatment rules. In particular, people are eager to find the optimal individualized treatment rules, which if followed by the whole patient population would lead to the \u201cbest\u201d outcome. In this paper, we propose new estimators based on robust regression with general loss functions to estimate the optimal individualized treatment rules. 
The new estimators possess the following nice properties: first, they are robust against skewed, heterogeneous, heavy-tailed errors or outliers; second, they are robust against misspecification of the baseline function; third, under certain situations, the new estimator coupled with pinball loss approximately maximizes the outcome\u2019s conditional quantile instead of conditional mean, which leads to a different optimal individualized treatment rule compared with traditional Q- and A-learning. Consistency and asymptotic normality of the proposed estimators are established. Their empirical performance is demonstrated via extensive simulation studies and an analysis of an AIDS dataset.\n>\n> [*Key words and phrases:*]{} Optimal individualized treatment rules; Personalized" -"---\nauthor:\n- |\n Tsung-Yi\u00a0Lin Michael\u00a0Maire Serge\u00a0Belongie Lubomir\u00a0Bourdev Ross\u00a0Girshick\\\n James\u00a0Hays Pietro\u00a0Perona Deva\u00a0Ramanan C.\u00a0Lawrence\u00a0Zitnick Piotr\u00a0Doll\u00e1r\nbibliography:\n- 'coco.bib'\ntitle: 'Microsoft COCO: Common Objects in Context'\n---\n\nIntroduction\n============\n\nOne of the primary goals of computer vision is the understanding of visual scenes. Scene understanding involves numerous tasks including recognizing what objects are present, localizing the objects in 2D and 3D, determining the objects\u2019 and scene\u2019s attributes, characterizing relationships between objects and providing a semantic description of the scene. The current object classification and detection datasets [@Imagenet; @PASCAL; @SUN; @Dollar2012PAMI] help us explore the first challenges related to scene understanding. For instance, the ImageNet dataset [@Imagenet], which contains an unprecedented number of images, has recently enabled breakthroughs in both object classification and detection research [@Hinton; @GirshickDDM13; @OverFeat]. 
The community has also created datasets containing object attributes [@farhadi2009describing], scene attributes [@Patterson2012SunAttributes], keypoints [@bourdev2009poselets], and 3D scene information [@NYUDepth]. This leads us to the obvious question: what datasets will best continue our advance towards our ultimate goal of scene understanding?\n\n![While previous object recognition datasets have focused on (a) image classification, (b) object bounding box localization or (c) semantic pixel-level segmentation, we focus" -"---\nabstract: 'Exactification is the process of obtaining exact values of a function from its complete asymptotic expansion. Here Stirling\u2019s approximation for the logarithm of the gamma function or $\\ln \\Gamma(z)$ is derived completely whereby it is composed of the standard leading terms and an asymptotic series that is generally truncated. Nevertheless, to obtain values of $\\ln \\Gamma(z)$, the remainder must undergo regularization. Two regularization techniques are then applied: Borel summation and Mellin-Barnes (MB) regularization. The Borel-summed remainder possesses an infinite convergent sum of exponential integrals and discontinuous logarithmic terms across Stokes sectors and lines, while the MB-regularized remainders possess one MB integral, which is better to compute, and similar logarithmic terms. The MB integrals are valid over overlapping domains of convergence. Hence, two MB-regularized asymptotic forms can be used to evaluate $\\ln \\Gamma(z)$. Despite having to truncate the Borel-summed remainder, it is found that all the remainders combined with (1) the truncated asymptotic series, (2) the leading terms of Stirling\u2019s approximation and (3) their logarithmic terms yield identical values of $\\ln \\Gamma(z)$. 
In the few cases where the accuracy falls away, it is mostly due to a very high value for the truncation parameter, which results in the cancellation" -"---\nabstract: 'Falling liquid films become unstable due to inertial effects when the fluid layer is sufficiently thick or the slope sufficiently steep. This free surface flow of a single fluid layer has industrial applications including coating and heat transfer, which benefit from smooth and wavy interfaces, respectively. Here we discuss how the dynamics of the system are altered by feedback controls based on observations of the interface height, and supplied to the system via the perpendicular injection and suction of fluid through the wall. In this study, we model the system using both Benney and weighted-residual models that account for the fluid injection through the wall. We find that feedback using injection and suction is a remarkably effective control mechanism: the controls can be used to drive the system towards arbitrary steady states and travelling waves, and the qualitative effects are independent of the details of the flow modelling. Furthermore, we show that the system can still be successfully controlled when the feedback is applied via a set of localised actuators and only a small number of system observations are available, and that this is possible using both static (where the controls are based on only the most recent" -"---\nabstract: 'We consider a dilute system of small hard beads or hard fibers immersed in a very soft gel able to withstand large elastic deformations. Because of its low to very low shear modulus, this system is very sensitive to small forces. We calculate the local deformation induced by a constant volume force, the inclusion weight. 
We explain how this deformation could be revealed by using techniques similar to the PIV method (particle image velocimetry) used to show complex velocity fields in transparent fluids.'\nauthor:\n- Serge Mora\n- Yves Pomeau\ntitle: Soft granular matter\n---\n\nIntroduction\n============\n\nThis paper introduces a new kind of material. We call it \u201csoft granular matter\u201d because it is made of grains (beads or fibers of hard solid, basically non-deformable in the conditions we shall consider) immersed in a very soft elastic and transparent solid which is able to withstand large deformations. Hard beads immersed in a gel have been considered before by Chaudhury and collaborators [@Chaud] in the case of beads entering the gel from its free surface under the influence of gravity. In this case, surface tension also plays an important role, which is a priori" -"---\nabstract: 'Two commonly arising computational tasks in Bayesian learning are Optimization (Maximum A Posteriori estimation) and Sampling (from the posterior distribution). In the convex case these two problems are efficiently reducible to each other. Recent work\u00a0[@MaCJFJ18] shows that in the non-convex case, sampling can sometimes be provably faster. We present a simpler and stronger separation. We then compare sampling and optimization in more detail and show that they are provably incomparable: there are families of continuous functions for which optimization is easy but sampling is NP-hard, and vice versa. Further, we show function families that exhibit a sharp phase transition in the computational complexity of sampling, as one varies the natural temperature parameter. 
Our results draw on a connection to analogous separations in the discrete setting which are well-studied.'\nauthor:\n- |\n Kunal Talwar\\\n Google Brain\\\n Mountain View, CA\\\n `kunal@google.com`\\\nbibliography:\n- 'refs.bib'\ntitle: Computational Separations between Sampling and Optimization\n---\n\nIntroduction\n============\n\nGiven a compact set ${\\mathcal{X}}\\subseteq \\Re^d$ and a function $f : {\\mathcal{X}}\\rightarrow \\Re$, one can define two natural problems:\n\nOptimize($f, {\\mathcal{X}}, \\eps$)\n\n: : Find $\\vx \\in {\\mathcal{X}}$ such that $f(\\vx) \\leq f(\\vx') + \\eps$ for all $\\vx' \\in {\\mathcal{X}}$.\n\nSample($f, {\\mathcal{X}}, \\eta$)\n\n: :" -"---\nabstract: 'Learning based approaches have not yet achieved their full potential in optical flow estimation, where their performance still trails heuristic approaches. In this paper, we present a CNN based patch matching approach for optical flow estimation. An important contribution of our approach is a novel thresholded loss for Siamese networks. We demonstrate that our loss performs clearly better than existing losses. It also allows us to speed up training by a factor of 2 in our tests. Furthermore, we present a novel way for calculating CNN based features for different image scales, which performs better than existing methods. We also discuss new ways of evaluating the robustness of trained features for the application of patch matching for optical flow. An interesting discovery in our paper is that low-pass filtering of feature maps can increase the robustness of features created by CNNs. 
We demonstrated the competitive performance of our approach by submitting it to the KITTI 2012, KITTI 2015 and MPI-Sintel evaluation portals, where we obtained state-of-the-art results on all three datasets.'\nauthor:\n- |\n \u00a0\u00a0Christian Bailer$^1$\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Kiran Varanasi$^{1}$\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0Didier Stricker$^{1,2}$\\\n [Christian.Bailer@dfki.de]{}\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0[Kiran.Varanasi@dfki.de ]{}\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0[Didier.Stricker@dfki.de]{}\\\n $^1$German Research Center for Artificial Intelligence (DFKI), $^2$University of Kaiserslautern\\\nbibliography:\n- 'arxiv\\_2.bib'\ntitle: 'CNN-based" -"---\nabstract: 'We theoretically study the pump-probe response of nonequilibrium BCS superconductors coupled to optical phonons. For ultrashort pump pulses a nonadiabatic regime emerges, which is characterized by oscillations of the superconducting order parameter as well as by the generation of coherent phonons. Using the density-matrix formalism, we compute the pump-probe response in the nonadiabatic regime of the coupled Bogoliubov quasiparticle-phonon system and determine the signatures of the order parameter and the phonon oscillations in the pump-probe conductivity. We find that the nonadiabatic dynamics of the BCS superconductor manifests itself in oscillations of the pump-probe response as functions of delay time $\\delta t$ between pump and probe pulses. We argue that from the analysis of this oscillatory behavior both frequency and decay time of the algebraically decaying order-parameter oscillations can be inferred. Similarly, the coherent phonons are evidenced in the pump-probe conductivity by oscillations with the frequency of the phonons. 
Remarkably, we find that the oscillatory response in the pump-probe conductivity is resonantly enhanced when the frequency of the order-parameter oscillations is tuned to the phonon energy.'\nauthor:\n- 'H. Krull'\n- 'D. Manske'\n- 'G. S. Uhrig'\n- 'A. P. Schnyder'\nbibliography:\n- 'pumpprobe.bib'\ntitle: 'Signatures of nonadiabatic BCS" -"---\nabstract: |\n With recent advances in high throughput technology, researchers often find themselves running a large number of hypothesis tests (thousands+) and estimating a large number of effect-sizes. Generally there is particular interest in those effects estimated to be most extreme. Unfortunately, naive estimates of these effect-sizes (even after potentially accounting for multiplicity in a testing procedure) can be severely biased. In this manuscript we explore this bias from a frequentist perspective: we give a formal definition, and show that an oracle estimator using this bias dominates the naive maximum likelihood estimate. We give a resampling estimator to approximate this oracle, and show that it works well on simulated data. We also connect this to ideas in empirical Bayes.\n\n [: bootstrap, shrinkage, mean, empirical Bayes, James-Stein, regression to the mean, selection bias, compound decision theory]{}\nauthor:\n- 'Noah Simon [^1]'\n- 'Richard Simon[^2]'\nbibliography:\n- 'man.bib'\ntitle: 'On Estimating Many Means, Selection Bias, and the Bootstrap'\n---\n\nIntroduction {#sec:intro}\n============\n\nOften, in modern applications, researchers are interested in testing and estimating effect sizes for many different features at once. In the simplest cases one is interested in estimating population means from a sample (often with the most extreme means" -"---\nabstract: 'Dynamical Sauter-Schwinger mechanism of pair creation by a time-dependent electric field comprising $N_{\\rm rep}$ identical pulses is analyzed within the framework of spinor and scalar quantum electrodynamics. 
For linearly polarized pulses, both theories predict that a single eigenmode of the matter wave follows the dynamics of a two-level system. This dynamics, however, is governed by either a Hermitian (for spin 1/2 particles) or a pseudo-Hermitian (for spin 0 particles) Hamiltonian. Essentially, both theories lead to a Fraunhofer-type enhancement of the momentum distributions of created pairs. While in the fermionic case the enhancement is never perfect and it deteriorates as the number of pulses in a train, $N_{\\rm rep}$, increases, in the bosonic case we observe the opposite. More specifically, it is at exceptional points where the spectra of bosonic pairs scale exactly as $N_{\\rm rep}^2$, and this scaling is even enhanced as the number of pulses in a train increases.'\nauthor:\n- 'K. Krajewska'\n- 'J. Z. Kami\u0144ski'\ntitle: 'Unitary vs pseudo-unitary time evolution and statistical effects in the dynamical Sauter-Schwinger process'\n---\n\nIntroduction {#sec::intro}\n============\n\nDiffraction and interference of waves\u00a0[@Crawford] have played a fundamental role in the development of science. While both phenomena have been" -"---\nabstract: 'M4 and NGC 6397 are two very similar galactic globular clusters, which differ mainly in their surface brightness profile. M4 has a classic King-like profile, whereas NGC 6397 has a more concentrated profile, which is often interpreted as that of a post-core collapse cluster. @HG2008, however, found that M4 is also a post-core collapse cluster, and @GH2009 concluded that the main reason for the difference between the two surface brightness profiles is fluctuations. This conclusion was reached on the basis of Monte Carlo models, however, and in the present Letter we verify that similar fluctuations occur in $N$-body models. The models were initialised by generating initial conditions from the Monte Carlo model of NGC 6397 at the simulated age of 12 Gyr, and one was followed for 1 Gyr. 
The new models help to clarify the nature of the fluctuations, which take the form of semi-regular oscillations with a time scale of order $10^8$ yr. They are influenced by the dynamical role played by primordial binaries in the evolution of the core.'\nauthor:\n- |\n Douglas C. Heggie$^{1}$ [^1] and Mirek Giersz$^{2}$\\\n $^1$School of Mathematics and Maxwell Institute for Mathematical Sciences, University of Edinburgh, King\u2019s Buildings, Edinburgh EH9 3JZ, UK\\" -"---\nabstract: 'The problem of inferring a clustering of a data set has been the subject of much research in Bayesian analysis, and there currently exists a solid mathematical foundation for Bayesian approaches to clustering. In particular, the class of probability distributions over partitions of a data set has been characterized in a number of ways, including via exchangeable partition probability functions (EPPFs) and the Kingman paintbox. Here, we develop a generalization of the clustering problem, called feature allocation, where we allow each data point to belong to an arbitrary, non-negative integer number of groups, now called features or topics. We define and study an \u201cexchangeable feature probability function\u201d (EFPF)\u2014analogous to the EPPF in the clustering setting\u2014for certain types of feature models. Moreover, we introduce a \u201cfeature paintbox\u201d characterization\u2014analogous to the Kingman paintbox for clustering\u2014of the class of exchangeable feature models. We provide a further characterization of the subclass of feature allocations that have EFPF representations.'\nauthor:\n- 'Tamara Broderick, Jim Pitman, Michael I. Jordan'\nbibliography:\n- 'bnp.bib'\ntitle: 'Feature allocations, probability functions, and paintboxes'\n---\n\nIntroduction {#sec:introduction}\n============\n\nExchangeability has played a key role in the development of Bayesian analysis in general and Bayesian nonparametric analysis in particular. 
Exchangeability" -"---\nabstract: 'We present a new algorithm for identifying the transition and emission probabilities of a hidden Markov model (HMM) from the emitted data. Expectation-maximization becomes computationally prohibitive for long observation records, which are often required for identification. The new algorithm is particularly suitable for cases where the available sample size is large enough to accurately estimate second-order output probabilities, but not higher-order ones. We show that if one is only able to obtain a reliable estimate of the pairwise co-occurrence probabilities of the emissions, it is still possible to uniquely identify the HMM if the emission probability is *sufficiently scattered*. We apply our method to hidden topic Markov modeling, and demonstrate that we can learn topics with higher quality if documents are modeled as observations of HMMs sharing the same emission (topic) probability, compared to the simple but widely used bag-of-words model.'\nbibliography:\n- 'hmm\\_refs.bib'\ntitle: |\n **Learning Hidden Markov Models from Pairwise Co-occurrences\\\n with Application to Topic Modeling**\n---\n\nIntroduction\n============\n\nHidden Markov models (HMMs) are widely used in machine learning when the data samples are time *dependent*, for example in speech recognition, language processing, and video analysis. The graphical model of a HMM is shown in Figure" -"---\nabstract: 'Although no individual piece of experimental evidence for supersymmetry is compelling so far, several are about as good as they can be with present errors. Most important, all pieces of evidence imply the same values for common parameters \u2014 a necessary condition, and one unlikely to hold if the hints from data are misleading. The parameters are sparticle or soft-breaking masses and $\\tan\\beta.$ For the parameter ranges reported here, there are so far no signals that should have occurred but did not. 
Given those parameters, a number of predictions can be made to test whether the evidence is real. It turns out that the predictions are mostly different from the conventional ones, and might have been difficult to recognize as signals of superpartners. They are testable at LEP2, where neutralinos and charginos will appear mainly as $\\gamma\\gamma +$ large $\\slashchar{E}$ events, $\\gamma +$ very large $\\slashchar{E}$ events, and very soft lepton pairs of the same or mixed flavor. The results demonstrate that we understand a lot about how to extract an effective SUSY Lagrangian from limited data, and that we can reasonably hope to learn about the theory near the Planck scale from the data at the electroweak scale.'\nauthor:\n- |
This procedure also enables an additional classification between tensile and compressive solitary waves, according to the way that the axial strain changes as the waves propagate.'\nauthor:\n- |\n Ron Ziv and Gal Shmuel[^1]\\\n [Faculty of Mechanical Engineering, Technion - Israel Institute of Technology, Haifa 32000, Israel]{}\\\ntitle: Oscillating vector solitary waves in soft laminates\n---\n\n\\#1[\\#1]{} \\#1[\\#1]{} \\#1[\\#1\\^]{} \\#1[\\_\\#1]{} \\#1[\\#1\\^]{} \\#1[\\#1\\^[(m)]{}]{} \\#1[\\#1\\^[(f)]{}]{} \\#1[\\#1\\^[(p)]{}]{}" -"---\nabstract: 'Observation of the 21cm line signal from neutral hydrogen during the Epoch of Reionization is challenging due to extremely bright Galactic and extragalactic foregrounds and complicated instrumental calibration. A reasonable approach for mitigating these problems is the cross correlation with other observables. In this work, we present the first results of the cross power spectrum (CPS) between radio images observed by the Murchison Widefield Array and the cosmic microwave background (CMB), measured by the Planck experiment. [We study the systematics due to the ionospheric activity, the dependence of CPS on group of pointings, and frequency.]{} The resulting CPS is consistent with zero because the error is dominated by the foregrounds in the 21cm observation. Additionally, the variance of the signal indicates the presence of [unexpected systematics]{} error at small scales. Furthermore, we reduce the error by one order of magnitude with application of a foreground removal using a polynomial fitting method. Based on the results, we find that the detection of the 21cm-CMB CPS with the MWA Phase I requires more than 99.95% of the foreground signal removed, 2000 hours of deep observation and 50% of the sky fraction coverage.'\nauthor:\n- |\n S. Yoshiura$^1$[^1], K. Ichiki$^{2,3}$, B." 
-"---\nabstract: 'The paper develops an abstract (over-approximating) semantics for double-pushout rewriting of graphs and graph-like objects. The focus is on the so-called materialization of left-hand sides from abstract graphs, a central concept in previous work. The first contribution is an accessible, general explanation of how materializations arise from universal properties and categorical constructions, in particular partial map classifiers, in a topos. Second, we introduce an extension by enriching objects with annotations and give a precise characterization of strongest post-conditions, which are effectively computable under certain assumptions.'\nauthor:\n- Andrea Corradini\n- Tobias Heindel\n- Barbara K\u00f6nig\n- |\n \\\n Dennis Nolte\n- Arend\u00a0Rensink\nbibliography:\n- 'references.bib'\ntitle: |\n Rewriting Abstract Structures:\\\n Materialization Explained Categorically[^1]\n---\n\nIntroduction {#sec:introduction}\n============\n\nAbstract interpretation [@c:abstract-interpretation] is a fundamental static analysis technique that applies not only to conventional programs but also to general infinite-state systems. Shape analysis\u00a0[@srw:shape-analysis-3vl], a specific instance of abstract interpretation, pioneered an approach for analyzing pointer structures that keeps track of information about the \u201cheap topology\u201d, e.g., out-degrees or existence of certain paths. One central idea of shape analysis is *materialization*, which arises as companion operation to summarizing distinct objects that share relevant properties. Materialization, a.k.a.\u00a0partial concretization, is" -"---\nabstract: |\n We show that the $N=2$ superextended 1D quantum Dirac delta potential problem is characterized by the hidden nonlinear $su(2|2)$ superunitary symmetry. 
The unexpected feature of this simple supersymmetric system is that it admits three different $\\mathbb Z_2$-gradings, which produce a separation of 16 integrals of motion into three different sets of 8 bosonic and 8 fermionic operators. These three different graded sets of integrals generate two different nonlinear, deformed forms of $su(2|2)$, in which the Hamiltonian plays a role of a multiplicative central charge. On the ground state, the nonlinear superalgebra is reduced to the two distinct 2D Euclidean analogs of a superextended Poincar\u00e9 algebra used earlier in the literature for investigation of spontaneous supersymmetry breaking. We indicate that the observed exotic supersymmetric structure with three different $\\mathbb\n Z_2$-gradings can be useful for the search of hidden symmetries in some other quantum systems, in particular, related to the Lam\u00e9 equation.\nauthor:\n- |\n , ,\\\n [*$^{1}$Departamento de F\u00edsica, Universidad de Santiago de Chile, Casilla 307, Santiago 2, Chile*]{}\\\n [*$^2$Departamento de F\u00edsica Te\u00f3rica, At\u00f3mica y \u00d3ptica, Universidad de Valladolid, 47071, Valladolid, Spain*]{}\\\n [*E-mails: fco.correa.s@gmail.com, luismi@metodos.fam.cie.uva.es, mplyushc@lauca.usach.cl*]{}\ntitle: 'Hidden nonlinear $su(2|2)$ superunitary symmetry of $N=2$ superextended 1D Dirac delta" -"---\nabstract: |\n Monocular depth prediction plays a crucial role in understanding $3$D scene geometry. Although recent methods have achieved impressive progress in evaluation metrics such as the pixel-wise relative error, most methods neglect the geometric constraints in the 3D space. In this work, we show the importance of the high-order 3D geometric constraints for depth prediction. By designing a loss term that enforces one simple type of geometric constraints, namely, *virtual normal* directions determined by randomly sampled three points in the reconstructed 3D space, we can considerably improve the depth prediction accuracy. 
Significantly, a byproduct of the predicted depth being sufficiently accurate is that we are now able to recover good 3D structures of the scene, such as the point cloud and surface normals, directly from the depth, eliminating the need to train new sub-models as was previously done. Experiments on two benchmarks, NYU Depth-V2 and KITTI, demonstrate the effectiveness of our method and its state-of-the-art performance. Code is available at:\n\n \nauthor:\n- |\n Wei Yin$ ^1$ \u00a0\u00a0\u00a0\u00a0 Yifan Liu$ ^1$ \u00a0\u00a0\u00a0\u00a0 Chunhua Shen$ ^1$[^1] \u00a0\u00a0\u00a0\u00a0 Youliang Yan$ ^2$\\\n $ ^1 $The University of Adelaide, Australia \u00a0\u00a0\u00a0 \u00a0 \u00a0 $ ^2 $Noah\u2019s Ark Lab, Huawei Technologies\nbibliography:\n- 'arxiv\\_edition.bib'\ntitle: Enforcing geometric constraints of virtual
In this article, we derive and discuss research strategies, based on analyses of fluid adaptivity in biological systems and its neuronal modeling, that might aid in equipping future artificially intelligent systems with capabilities of fluid adaptivity more similar to those seen in some biologically intelligent systems. A key component of this research strategy is the dynamization of the problem space itself and the implementation of this dynamization" -"---\nabstract: |\n This paper aims to establish theoretical foundations of graph product multilayer networks (GPMNs), a family of multilayer networks that can be obtained as a graph product of two or more factor networks. Cartesian, direct (tensor), and strong product operators are considered, and then generalized. We first describe mathematical relationships between GPMNs and their factor networks regarding their degree/strength, adjacency, and Laplacian spectra, and then show that those relationships can still hold for nonsimple and generalized GPMNs. Applications of GPMNs are discussed in three areas: predicting epidemic thresholds, modeling propagation in nontrivial space and time, and analyzing higher-order properties of self-similar networks. 
Directions of future research are also discussed.\\\n Keywords: graph product, multilayer networks, degree/adjacency/Laplacian spectra, epidemic thresholds, propagation, self-similar networks\nauthor:\n- |\n Hiroki Sayama$^{1,2,3,4}$\\\n $^1$ Center for Collective Dynamics of Complex Systems,\\\n Binghamton University, Binghamton, New York 13902, USA\\\n $^2$ Max Planck Institute for the Physics of Complex Systems,\\\n D-01187 Dresden, Germany\\\n $^3$ Center for Complex Network Research and Department of Physics,\\\n Northeastern University, Boston, Massachusetts 02115, USA\\\n $^4$ Faculty of Commerce, Waseda University, Shinjuku, Tokyo 169-8050, Japan\nbibliography:\n- 'sayama.bib'\ntitle: 'Graph Product Multilayer Networks: Spectral Properties and Applications'\n---\n\nIntroduction {#sec:intro}\n============\n\nMultilayer networks" -"**[In defence of non-ontic accounts of quantum states]{}**\\\nSimon Friederich\\\n`email@simonfriederich.eu`\\\nPhilosophisches Seminar, Universit\u00e4t G\u00f6ttingen, Humboldtallee 19, D-37073 G\u00f6ttingen, Germany\n\n\\\nKeywords: quantum states, quantum probabilities, anthropocentric notions, micro/macro divide, explanation and prediction\\\n\nIntroduction\n============\n\nThe measurement problem and the problem of quantum \u201cnon-locality\u201d, that is, the claimed tension between quantum theory and relativity theory, are widely regarded as the most outstanding difficulties in the foundations of quantum mechanics. Possible ways to react to these problems (or \u201cparadoxes\u201d) range from changing the dynamics (as in GRW theory) to adding determinate particle and field configurations (as in pilot wave approaches) to adopting a non-standard picture of our world according to which our universe (or our mind) constantly splits into an immense number of branches (as in variants of the Everett interpretation). 
These are attempts to *solve* the paradoxes, either by altering the formalism of the theory or by radically altering our picture of the world so that at least one of the assumptions necessary to derive the paradoxes no longer holds.\n\nThe present paper investigates accounts of quantum theory which approach the paradoxes from an entirely different perspective. Their main ambition is to *dissolve* the paradoxes by proposing a perspective on" -"---\nauthor:\n- 'Zhanfeng Wang Yuan-chin Ivan Chang[^1]'\ntitle: Distributed sequential method for analyzing massive data \n---\n\n**Abstract**: To analyse a very large data set containing lengthy variables, we adopt a sequential estimation idea and propose a parallel divide-and-conquer method. We conduct several conventional sequential estimation procedures separately, and properly integrate their results while maintaining the desired statistical properties. Additionally, using a criterion from the statistical experiment design, we adopt an adaptive sample selection, together with an adaptive shrinkage estimation method, to simultaneously accelerate the estimation procedure and identify the effective variables. We confirm the cogency of our methods through theoretical justifications and numerical results derived from synthesized data sets. We then apply the proposed method to three real data sets, including those pertaining to appliance energy use and particulate matter concentration.\n\n[**Keywords**]{}: Sequential sampling; Stopping rule; Confidence set; Distributed/Parallel computation\n\n[[**[AMS Subject Classification (2000)]{}**]{}: Primary 62F12; Secondary 62E20]{}\n\nIntroduction\n============\n\nWhile the development of modern measurement and communication technologies has frequently made data collection procedures more efficient, we as researchers have been hard-pressed to analyse and extract information from large data sets and to keep up with our data collection capacity. 
Although we can leverage concepts from the divide-and-conquer" -"---\nabstract: |\n The ability to detect anomalies in time series is considered as highly valuable within plenty of application domains. The sequential nature of time series objects is responsible for an additional feature complexity, ultimately requiring specialized approaches for solving the task. Essential characteristics of time series, laying outside the time domain, are often difficult to capture with state-of-the-art anomaly detection methods, when no transformations on the time series have been applied. Inspired by the success of deep learning methods in computer vision, several studies have proposed to transform time-series into image-like representations, leading to very promising results. However, most of the previous studies implementing time-series to image encodings have focused on the supervised classification. The application to unsupervised anomaly detection tasks has been limited.\n\n The paper has the following contributions: First, we evaluate the application of six time-series to image encodings to DL algorithms: Gramian Angular Field, Markov Transition Field, Recurrence Plot, Grey Scale Encoding, Spectrogram and Scalogram. Second, we propose modifications of the original encoding definitions, to make them more robust to the variability in large datasets. And third, we provide a comprehensive comparison between using the raw time series directly and the different encodings, with and" -"---\nabstract: 'Electric scooters are becoming immensely popular across the world as a means of reliable transportation around many cities. As these e-scooters rely on batteries, it is important to understand how many of these e-scooters have enough battery life to transport riders and when these e-scooters might require a battery replacement. To this end, we develop the first stochastic model to capture the battery life dynamics of e-scooters of a large scooter network. 
In our model, we assume that e-scooter batteries are removable and replaced by agents called **swappers**. Thus, to gain some insight about the large scale dynamics of the system, we prove a mean field limit theorem and a functional central limit theorem for the fraction of e-scooters that lie in a particular interval of battery life. Exploiting the mean field limit and the functional central limit theorems, we develop an algorithm for determining the number of **swappers** that are needed to guarantee levels of probabilistic performance of the system. Finally, we show through a stochastic simulation and real data that our stochastic model captures the relevant dynamics.'\nauthor:\n- |\n Jamol Pender\\\n School of Operations Research and Information Engineering\\\n Cornell University\\\n 228 Rhodes Hall, Ithaca, NY" -"---\nabstract: 'The Holevo bound is a bound on the mutual information for a given quantum encoding. In 1996 Schumacher, Westmoreland and Wootters \\[Schumacher, Westmoreland and Wootters, Phys. Rev. Lett. [**76**]{}, 3452 (1996)\\] derived a bound which reduces to the Holevo bound for complete measurements, but which is tighter for incomplete measurements. The most general quantum operations may be both incomplete and inefficient. Here we show that the bound derived by SWW can be further extended to obtain one which is yet again tighter for inefficient measurements. This allows us in addition to obtain a generalization of a bound derived by Hall, and to show that the average reduction in the von Neumann entropy during a quantum operation is concave in the initial state, for all quantum operations. This is a quantum version of the concavity of the mutual information. 
We also show that both this average entropy reduction and the mutual information for pure state ensembles, are Schur-concave for unitarily covariant measurements; that is, for these measurements, information gain increases with initial uncertainty.'\nauthor:\n- Kurt Jacobs\ntitle: 'A bound on the mutual information, and properties of entropy reduction, for quantum channels with inefficient measurements'\n---\n\nIntroduction\n============\n\nThe" -"---\nabstract: 'Measurements of azimuthal differences between forward di-pions are sensitive to the low-${\\it x}$ gluon content of the proton and provide the best opportunity to probe for gluon saturation in nuclei. Previously reported analyses have shown that the gluon saturation regime may have been reached at STAR by looking at forward di-pions in d+Au collisions. Further insight into the uncorrelated pedestal below the near-side and away-side peaks in azimuthal correlations may be provided by differentiating between d+Au and p+Au collisions, by tagging on intact neutrons in the deuteron beam in d+Au collisions. Comparisons to recent theories indicate that multi-parton interactions play a more significant role in d+Au collisions than p+Au collisions and offer a unique opportunity to study correlations between leading partons inside nucleons. 
The general features found for the peaks in forward di-pion azimuthal correlations in d+Au collisions are also present in p+Au collisions.'\nauthor:\n- Chris Perkins for the STAR Collaboration\nbibliography:\n- 'sample.bib'\ntitle: 'Small-${\\it x}$ and Forward Measurements at STAR'\n---\n\n[ address=[UC Berkeley/Space Sciences Lab, Stony Brook University]{} ]{}\n\nIntroduction\n============\n\nIt is known that gluon densities in the proton rise for decreasing longitudinal partonic momentum fractions, ${\\it x}$, however this rise cannot continue" -"---\nabstract: 'In this paper, we apply a general deep learning (DL) framework for the answer selection task, which does not depend on manually defined features or linguistic tools. The basic framework is to build the embeddings of questions and answers based on bidirectional long short-term memory (biLSTM) models, and measure their closeness by cosine similarity. We further extend this basic model in two directions. One direction is to define a more composite representation for questions and answers by combining convolutional neural network with the basic framework. The other direction is to utilize a simple but efficient attention mechanism in order to generate the answer representation according to the question context. Several variations of models are provided. The models are examined by two datasets, including TREC-QA and InsuranceQA. 
Experimental results demonstrate that the proposed models substantially outperform several strong baselines.'\nauthor:\n- |\n Ming Tan, Cicero dos Santos, Bing Xiang & Bowen Zhou\\\n IBM Watson Core Technologies\\\n Yorktown Heights, NY, USA\\\n `{mingtan,cicerons,bingxia,zhou}@us.ibm.com`\\\nbibliography:\n- 'iclr2016\\_conference.bib'\ntitle: ' LSTM-based Deep Learning Models for non-factoid answer selection'\n---\n\nIntroduction\n============\n\nThe answer selection problem can be formulated as follows: Given a question $q$ and an answer candidate pool $\\{a_1, a_2, \\cdots ," -"---\nabstract: 'The sensitivity of a low-noise superconducting transition edge sensor (TES) is determined by the thermal conductance of the support structure that connects the active elements of the device to the heat bath. Low-noise devices require conductances in the range 0.1 to 10pWK$^{-1}$, and so have to rely on diffusive phonon scattering in long, narrow, amorphous SiN$_{\\rm x}$ legs. We show that it is possible to manufacture and operate TESs having short, ballistic low-dimensional legs (cross section 500 $\\times$ 200nm) that contain multi-element phononic interferometers and ring resonators. These legs transport heat in effectively just 5 elastic modes at the TES\u2019s operating temperature ($<$ 150mK), which is close to the quantised limit of 4. The phononic filters then reduce the thermal flux further by frequency-domain filtering. For example, a micromachined 3-element ring resonator reduced the flux to 19 % of a straight-legged ballistic device operating at the quantised limit, and 38 % of a straight-legged diffusive reference device. This work opens the way to manufacturing TESs where performance is determined entirely by filtered, few-mode, ballistic thermal transport in short, low-heat capacity legs, free from the artifacts of two level systems.'\nauthor:\n- 'E.A. Williams'\n- 'S. Withington'\n- 'C.N." 
-"---\nabstract: 'Neuromorphic event-based dynamic vision sensors (DVS) have much faster sampling rates and a higher dynamic range than frame-based imaging sensors. However, they are sensitive to background activity (BA) events that are unwanted. There are some filters for tackling this problem based on spatio-temporal correlation. However, they are either memory-intensive or computing-intensive. We propose *SeqXFilter*, a spatio-temporal correlation filter with only a past event window that has an O(1) space complexity and has simple computations. We explore the spatial correlation of an event with its past few events by analyzing the distribution of the events when applying different functions on the spatial distances. We find the best function to check the spatio-temporal correlation for an event for *SeqXFilter*, best separating real events and noise events. We not only give the visual denoising effect of the filter but also use two metrics for quantitatively analyzing the filter\u2019s performance. Four neuromorphic event-based datasets, recorded from four DVS with different output sizes, are used for validation of our method. The experimental results show that *SeqXFilter* achieves similar performance as baseline NNb filters, but with extremely small memory cost and simple computation logic.'\nauthor:\n- |\n Shasha Guo, Lei Wang, Xiaofan Chen, Limeng" -"---\nabstract: 'Stochastic Model Predictive Control has proved to be an efficient method to plan trajectories in uncertain environments, e.g., for autonomous vehicles. Chance constraints ensure that the probability of collision is bounded by a predefined risk parameter. However, considering chance constraints in an optimization problem can be challenging and computationally demanding. In this paper, we present a grid-based Stochastic Model Predictive Control approach. 
This approach allows us to determine a simple deterministic reformulation of the chance constraints and reduces the computational effort, while considering the stochastic nature of the environment. Within the proposed method, we first divide the environment into a grid and, for each predicted step, assign each cell a probability value, which represents the probability that this cell will be occupied by surrounding vehicles. Then, the probabilistic grid is transformed into a binary grid of admissible and inadmissible cells by applying a threshold, representing a risk parameter. Only cells with an occupancy probability lower than the threshold are admissible for the controlled vehicle. Given the admissible cells, a convex hull is generated, which can then be used for trajectory planning. Simulations of an autonomous driving highway scenario show the benefits of the proposed grid-based Stochastic Model Predictive Control"
This paper proposes constructing foreign keys based on these two cases, and suggests that the method promotes intuitive data modeling and normalization.'\nauthor:\n- |\n Nassib Nassar\\\n RENCI, University of North Carolina at Chapel Hill\\\n nassar@renci.org\nbibliography:\n- 'nassar\\_draft.bib'\ndate: 'October 4, 2010'\ntitle: A Simple Abstraction for Data Modeling\n---\n\nIntroduction\n============\n\nScience has become dependent on the ability to share and reuse data sets that can be very large in scale and quite heterogeneous[@bell09]. In scientific communities and in digital library communities, which are concerned with dissemination and preservation of scientific and scholarly output, there has been a growing interest in many aspects" -"---\nabstract: 'Usage control models provide an integration of access control, digital rights, and trust management. To achieve this integration, usage control models support additional concepts such as attribute mutability and continuity of decision. However, these concepts may introduce an additional level of complexity to the underlying model, rendering its definition a cumbersome and prone to errors process. Applying a formal verification technique allows for a rigorous analysis of the interactions amongst the components, and thus for formal guarantees in respect of the correctness of a model. 
In this paper, we elaborate on a case study, where we express the high-level functional model of the UseCON usage control model in the TLA+ formal specification language, and verify its correctness for $\\leq12$ uses in both of its supporting authorisation models.'\nauthor:\n- Antonios Gouglidis\n- Christos Grompanopoulos\n- Anastasia Mavridou\nbibliography:\n- 'generic.bib'\nnocite: '[@*]'\ntitle: |\n Formal Verification of Usage Control Models:\\\n A Case Study of UseCON Using TLA+\n---\n\nIntroduction\n============\n\nAccess control systems offer the mechanisms to control and limit the actions or operations that are performed by a user or process \u2013 referred to as *subjects* \u2013 on a set of system *objects*. Specifically, an authorisation process" -"---\nabstract: 'We summarize and compare recent Molecular Dynamics simulations on the interactions of dipalmitoylphosphatidylcholine (DPPC) bilayers in the liquid crystalline phase with a number of small molecules including trehalose, a disaccharide of glucose, alcohols, and dimethylsulfoxide (DMSO). The sugar molecules tend to stabilize the structure of the bilayer as they bridge adjacent lipid headgroups. They do not strongly change the structure of the bilayer. Alcohols and DMSO destabilize the bilayer as they increase its area per molecule in the bilayer plane and decrease the order parameter. Alcohols have a stronger detrimental effect than DMSO. The observables which we compare are the area per molecule in the plane of the bilayer, the membrane thickness, and the NMR order parameter of DPPC hydrocarbon tails. The area per molecule and the order parameter are very well correlated whereas the bilayer thickness is not necessarily correlated with them.'\nauthor:\n- 'Bryan W. Lee'\n- 'Roland Faller[^1]'\n- 'Amadeu K. 
Sum'\n- Ilpo Vattulainen\n- Michael Patra\n- Mikko Karttunen\nbibliography:\n- 'standard.bib'\ntitle: Structural Effects of Small Molecules on Phospholipid Bilayers Investigated by Molecular Simulations\n---\n\nIntroduction\n============\n\nPhospholipid bilayers have been the focus of research for a long time due to their" -"---\nabstract: 'We present a concept for control of the ion polarization, called a transparent spin method. The spin transparency is achieved by designing such a synchrotron structure that the net spin rotation angle in one particle turn is zero. The polarization direction of any ions including deuterons can be efficiently controlled using weak quasi-static fields. These fields allow for dynamic adjustment of the polarization direction during an experiment. The main features of the Transparent Spin method are illustrated in a figure-8 collider. The results are relevant to the Electron-Ion Collider considered in the US, the ion-ion collider NICA constructed in Russia, and a polarized Electron-ion collider planned in China.'\nauthor:\n- 'Yu.N. Filatov'\n- 'A.M. Kondratenko'\n- 'M.A. Kondratenko'\n- 'Ya.S. Derbenev'\n- 'V.S. Morozov'\ntitle: Transparent Spin Method for Spin Control of Hadron Beams in Colliders\n---\n\n*Introduction.*\u00a0\u2014 Polarized beam experiments have been and remain a crucial tool in understanding particle and nuclear structure and reactions from the first principles [@b:principles]. In particular, polarized light ion ($p$, $d$, ${}^3He$) and electron beams are necessary for the successful operation of a proposed high-luminosity polarized *Electron-Ion Collider* (EIC) that is currently under active design [@b:EIC_MEIC; @b:EIC_eRHIC; @b:EIC_China]. Technologies for" -"---\nabstract: 'The encoding of quantum information in photonic time-bin qubits is apt for long distance quantum communication schemes. 
In practice, due to technical constraints such as detector response time, or the speed with which co-polarized time-bins can be switched, other encodings, e.g. polarization, are often preferred for operations like state detection. Here, we present the conversion of qubits between polarization and time-bin encodings using a method that is based on an ultrafast optical Kerr shutter and attain efficiencies of 97% and an average fidelity of 0.827$\\pm$0.003 with shutter speeds near 1 ps. Our demonstration delineates an essential requirement for the development of hybrid and high-rate optical quantum networks.'\nauthor:\n- Connor\u00a0Kupchak\n- 'Philip\u00a0J.\u00a0Bustard'\n- Khabat\u00a0Heshami\n- Jennifer\u00a0Erskine\n- Michael\u00a0Spanner\n- 'Duncan\u00a0G.\u00a0England'\n- 'Benjamin\u00a0J.\u00a0Sussman'\ntitle: 'Time-bin to Polarization Conversion of Ultrafast Photonic Qubits'\n---\n\nThe encoding of quantum information (QI) into photons holds much promise in numerous future technologies. The QI can be mapped onto various degrees of freedom that are used as basis-states. One attractive option is to encode onto qubits composed of two co-polarized but temporally distinct wave packets, or time-bins; these basis states are often labelled by" -"---\nabstract: 'The hierarchical nature of $\\Lambda$CDM suggests that the Magellanic Clouds must have been surrounded by a number of satellites before their infall into the Milky Way. Many of those satellites should still be in close proximity to the Clouds, but some could have dispersed ahead/behind the Clouds along their Galactic orbit. Either way, prior association with the Clouds results in strong restrictions on the present-day positions and velocities of candidate Magellanic satellites: they must lie close to the nearly-polar orbital plane of the Magellanic stream, and their distances and radial velocities must follow the latitude dependence expected for a tidal stream with the Clouds at pericenter. 
We use a cosmological numerical simulation of the disruption of a massive subhalo in a Milky Way-sized $\\Lambda$CDM halo to test whether any of the $20$ dwarfs recently-discovered in the DES, SMASH, Pan-STARRS, and ATLAS surveys are truly associated with the Clouds. Of the $6$ systems with kinematic data, only Hydra\u00a0II and Hor\u00a01 have distances and radial velocities consistent with a Magellanic origin. Of the remaining dwarfs, six (Hor\u00a02, Eri\u00a03, Ret\u00a03, Tuc\u00a04, Tuc\u00a05, and Phx\u00a02) have positions and distances consistent with a Magellanic origin," -"---\nabstract: |\n Based on extensive air shower simulations it is shown that the electron distributions with respect to the two angles, determining electron direction at a given shower age, for a fixed electron energy and lateral distance, are universal. It means that the distributions do not depend on the primary particle energy or mass (thus, neither on the interaction model), shower zenith angle or shower to shower fluctuations, if they are taken at the same shower age. Together with previous work showing the universality of the distributions of the electron energy, of the lateral distance (integrated over angles) and of the angle (integrated over lateral distance) for fixed electron energy this paper completes a full universal description of the electron states at various shower ages. Analytical parametrizations of the full electron states are given. It is also shown that some distributions can be described by a smaller than five numbers of variables, the new ones being products of the old ones raised to some powers.\\\n The accuracy of the present parametrization is sufficiently good for applying to showers with the primary energy uncertainty of 14$\\%$ (as it is at the Pierre Auger Observatory). 
The shower fluctuations in the chosen" -"---\nabstract: 'We consider quantum transition amplitudes, partition functions and observables for 3D spin foam models within $SU(2)$ quantum group deformation symmetry, where the deformation parameter is a complex fifth root of unity. By considering \u201cfermionic\u201d cycles through the foam we couple this $SU(2)$ quantum group with the same deformation of $SU(3)$, so that we have quantum numbers linked with spacetime symmetry and charge gauge symmetry in the computation of observables. The generalization to higher-dimensional Lie groups $SU(N)$, $G_2$ and $E_8$ is suggested. On this basis we discuss a unifying framework for quantum gravity. Inside the transition amplitude or partition function for geometries, we have the quantum numbers of particles and fields interacting in the form of a spin foam network $-$ in the framework of state sum models, we have a sum over quantum computations driven by the interplay between aperiodic order and topological order.'\naddress: |\n Quantum Gravity Research\\\n [*Los Angeles, CA*]{}\nauthor:\n- 'Marcelo Amaral, Raymond Aschheim, Klee Irwin'\ntitle: Quantum Gravity at the Fifth Root of Unity\n---\n\n[*Keywords*]{}: Quantum Gravity, Spin Foam, Unification Physics, Aperiodic Order, Topological Order\n\nIntroduction {#intro}\n============\n\nQuantum gravity and unification physics programs, in the absence of more concrete experimental results," -"---\nabstract: 'In this paper, we present an acoustic scene classification framework based on a large-margin factorized convolutional neural network (CNN). We adopt the factorized CNN to learn the patterns in the time-frequency domain by factorizing the 2D kernel into two separate 1D kernels. The factorized kernel leads to learn the main component of two patterns: the long-term ambient and short-term event sounds which are the key patterns of the audio scene classification. 
In training our model, we consider a loss function based on triplet sampling such that distances between samples of the same audio scene recorded in different environments are minimized, while distances between samples of different audio scenes are simultaneously maximized. With this loss function, the samples from the same audio scene are clustered independently of the environment, and thus we can obtain a classifier with better generalization ability in an unseen environment. We evaluated our audio scene classification framework using the dataset of the DCASE challenge 2019 task1A. Experimental results show that the proposed algorithm improves the performance of the baseline network and reduces the number of parameters to one third. Furthermore, the performance gain is higher on unseen data, and it shows that the proposed algorithm has better generalization ability.'\naddress:"
Finally, we estimate the impact of the preferred orientation of lensing galaxies on their projected substructure mass fraction, and find that the observed alignment between the substructure distribution and the mass distribution of halos result in a negligible bias.'\nauthor:\n- 'Eduardo Rozo, Jacqueline Chen, Andrew" -"---\nabstract: 'In a multi-objective game, each individual\u2019s payoff is a *vector-valued* function of everyone\u2019s actions. Under such vectorial payoffs, Pareto-efficiency is used to formulate each individual\u2019s best-response condition, inducing Pareto-Nash equilibria as the fundamental solution concept. In this work, we follow a classical game-theoretic agenda to study equilibria. Firstly, we show in several ways that numerous pure-strategy Pareto-Nash equilibria exist. Secondly, we propose a more consistent extension to mixed-strategy equilibria. Thirdly, we introduce a measurement of the efficiency of multiple objectives games, which purpose is to keep the information on each objective: the multi-objective coordination ratio. Finally, we provide algorithms that compute Pareto-Nash equilibria and that compute or approximate the multi-objective coordination ratio.'\nauthor:\n- Anisse Ismaili\nbibliography:\n- 'newMOG.bib'\ntitle: 'On Existence, Mixtures, Computation and Efficiency in Multi-objective Games'\n---\n\nIntroduction\n============\n\nGame theory and microeconomics assume that individuals evaluate outcomes into scalars. However, bounded rationality can hardly be modeled consistently by agents simply comparing scalars: *\u201cThe classical theory does not tolerate the incomparability of oranges and apples.\u201d* [@simon1955behavioral]. Money is another case of scalarization of the values of outcomes. 
For instance, while \u2018making money\u2019 theoretically creates value [@adam1776inquiry], the tobacco industry making money and killing approximately six" -"---\nabstract: |\n Kernel methods have been widely applied to machine learning and other questions of approximating an unknown function from its finite sample data. To ensure arbitrary accuracy of such approximation, various denseness conditions are imposed on the selected kernel. This note contributes to the study of universal, characteristic, and $C_0$-universal kernels. We first give simple and direct description of the difference and relation among these three kinds of universalities of kernels. We then focus on translation-invariant and weighted polynomial kernels. A simple and shorter proof of the known characterization of characteristic translation-invariant kernels will be presented. The main purpose of the note is to give a delicate discussion on the universalities of weighted polynomial kernels.\n\n [**Keywords:**]{} kernel methods, universal kernels, characteristic kernels, density, translation-invariant kernels, weighted polynomial kernels.\nauthor:\n- 'Benxun Wang[^1]andHaizhang Zhang[^2]'\ntitle: Universalities of Reproducing Kernels Revisited\n---\n\nIntroduction\n============\n\nMany scientific questions can be mathematically formulated as the learning of an unknown function from its finite sample data. Suppose the unknown target function $f_0$ lives on the input space $X$ and its sample data on the finite sampling points $x_1,x_2,\\cdots, x_n\\in X$ are available. We human beings learn from experience. By this intuition, a predictor" -"---\nabstract: 'Network densification, massive multiple-input multiple-output (MIMO) and millimeter-wave (mmWave) bands have recently emerged as some of the physical layer enablers for the future generations of wireless communication networks (5G and beyond). 
Grounded on prior work on sub-6\u00a0GHz cell-free massive MIMO architectures, a novel framework for cell-free mmWave massive MIMO systems is introduced that considers the use of low-complexity hybrid precoders/decoders while factoring in the impact of using capacity-constrained fronthaul links. A suboptimal pilot allocation strategy is proposed that is grounded on the idea of clustering by dissimilarity. Furthermore, based on mathematically tractable expressions for the per-user achievable rates and the fronthaul capacity consumption, max-min power allocation and fronthaul quantization optimization algorithms are proposed that, combining the use of block coordinate descent methods with sequential linear optimization programs, ensure a uniformly good quality of service over the whole coverage area of the network. Simulation results show that the proposed pilot allocation strategy eludes the computational burden of the optimal small-scale CSI-based scheme while clearly outperforming the classical random pilot allocation approaches. Moreover, they also reveal the various existing trade-offs among the achievable max-min per-user rate, the fronthaul requirements and the optimal hardware complexity (i.e., number of antennas, number"
This has important consequences since it has been widely communicated that non-local transport through edge channels in topological insulators will have potential applications in low power information processing.'\nauthor:\n- Arjun Mani\n- Colin Benjamin\ntitle: 'Fragility of non-local edge mode transport in the quantum spin Hall state'\n---\n\n1D edge modes are the hallmark of quantum Hall(QH) and quantum spin Hall(QSH) set-ups[@sanvito; @chulkov; @Arjun; @buti-sci]. These arise in quantum Hall case at high magnetic fields, however, in QSH case they arise at zero magnetic fields because of bulk" -"---\nabstract: 'We investigate anomalies in liquid silica with molecular dynamics simulations and present evidence for a fragile-to-strong transition at around 3100K-3300K. To this purpose, we studied the structure and dynamical properties of silica over a wide temperature range, finding four indicators of a fragile-to-strong transition. First, there is a density minimum at around 3000K and a density maximum at 4700K. The turning point is at 3400K. Second, the local structure characterized by the tetrahedral order parameter changes dramatically around 3000K from a higher-ordered, lower-density phase to a less ordered, higher-density phase. Third, the correlation time $\\tau$ changes from an Arrhenius behavior below 3300K to a Vogel-Fulcher-Tammann behavior at higher temperatures. Fourth, the Stokes-Einstein relation holds for temperatures below 3000K, but is replaced by a fractional relation above this temperature. Furthermore, our data indicate that dynamics become again simple above 5000K, with Arrhenius behavior and a classical Stokes-Einstein relation.'\naddress:\n- 'Institut f\u00fcr Festk\u00f6rperphysik, Technische Universit\u00e4t Darmstadt, Hochschulstr. 6, 64289 Darmstadt, Germany'\n- 'Institut f\u00fcr Festk\u00f6rperphysik, Technische Universit\u00e4t Darmstadt, Hochschulstr. 
6, 64289 Darmstadt, Germany'\nauthor:\n- Julian Geske\n- Barbara Drossel\n- Michael Vogel\ntitle: 'Fragile-to-strong transition in liquid silica'\n---\n\nIntroduction\n============\n\nNetwork-forming liquids, such as H$_2$O, SiO$_2$, Si," -"---\nabstract: 'In this article we consider integrable systems on manifolds endowed with singular symplectic structures of order one. By singular symplectic structures of order one we mean structures which are symplectic away from an hypersurface along which the symplectic volume either goes to infinity or to zero in a transversal way (singularity of order one) resulting either in a $b$-symplectic form or a folded symplectic forms. The hypersurface where the form degenerates is called critical set. In this article we give a new impulse to the investigation of action-angle coordinates for this structures initiated in [@KM] and [@KMS] by proving an action-angle theorem for folded symplectic integrable systems, establishing new cotangent models for these systems and investigating duality with $b$-integrable systems via desingularization. We provide global constructions of integrable systems and investigate obstructions for global existence of action-angle coordinates in both scenarios. The new topological obstructions found emanate from the topology of the critical set of the singular symplectic manifold $Z$. The existence of these obstructions in turn implies the existence of singularities for the integrable system on $Z$.'\naddress:\n- '[Laboratory of Geometry and Dynamical Systems, Department of Mathematics]{}, Universitat Polit\u00e8cnica de Catalunya and BGSMath, Barcelona, Spain '" -"---\nabstract: 'Business Process Management (BPM) is a central element of today organizations. 
Although over the years its main focus has been the support of processes in highly controlled domains, nowadays many domains of interest to the BPM community are characterized by ever-changing requirements, unpredictable environments and increasing amounts of data that influence the execution of process instances. Under such dynamic conditions, BPM systems must increase their level of automation to provide the reactivity and flexibility necessary for process management. On the other hand, the Artificial Intelligence (AI) community has concentrated its efforts on investigating dynamic domains that involve active control of computational entities and physical devices (e.g., robots, software agents, etc.). In this context, Automated Planning, which is one of the oldest areas in AI, is conceived as a model-based approach to synthesize autonomous behaviours in an automated way from a model. In this paper, we discuss how automated planning techniques can be leveraged to enable new levels of automation and support for business processing, and we show some concrete examples of their successful application to the different stages of the BPM life cycle.'\nauthor:\n- Andrea Marrella\nbibliography:\n- 'bibliography.bib'\ntitle: |\n What Automated Planning can do for\\\n Business"
Lancaster'\n- Colin McGuire\n- 'Aaron P. Titus'\ntitle: 'Classical particle exchange: a quantitative treatment'\n---\n\nIntroduction\n============\n\nCountless students in introductory physics learn that the \u201cexchange of virtual particles\u201d is responsible for the fundamental forces of nature. Several popular introductory textbooks contain diagrams which sketch how classical particle exchange could plausibly explain the qualitative nature of repulsive forces.[@BauerWestfall; @Mazur] Furthermore, some texts even attempt to construct analogies for how attractive forces could arise from complicated exchanges of classical objects.[@Giancoli; @YoungFreedman] In this paper, we wish to address the gaping hole in the literature regarding how such pictures may be quantitatively useful in understanding the connection between" -"---\nabstract: 'This paper studies the problem of detecting the information source in a network in which the spread of information follows the popular Susceptible-Infected-Recovered (SIR) model. We assume all nodes in the network are in the susceptible state initially except the information source, which is in the infected state. Susceptible nodes may then be infected by infected nodes, and infected nodes may recover and will not be infected again after recovery. Given a snapshot of the network, from which we know all infected nodes but cannot distinguish susceptible nodes and recovered nodes, the problem is to find the information source based on the snapshot and the network topology. We develop a sample path based approach where the estimator of the information source is chosen to be the root node associated with the sample path that most likely leads to the observed snapshot. We prove that, for infinite trees, the estimator is a node that minimizes the maximum distance to the infected nodes. A reverse-infection algorithm is proposed to find such an estimator in general graphs. 
We prove that for $g$-regular trees such that $gq>1,$ where $g$ is the node degree and $q$ is the infection probability, the estimator is within a" -"---\nabstract: |\n The mutual-exclusion property of locks stands in the way to scalability of parallel programs on many-core architectures. Locks do not allow progress guarantees, because a task may fail inside a critical section and keep holding a lock that blocks other tasks from accessing shared data. With non-blocking synchronization, the drawbacks of locks are avoided by synchronizing access to shared data by atomic read-modify-write operations.\n\n To incorporate non-blocking synchronization in Ada\u00a0202x, programmers must be able to reason about the behavior and performance of tasks in the absence of protected objects and rendezvous. We therefore extend Ada\u2019s memory model by synchronized types, which support the expression of memory ordering operations at a sufficient level of detail. To mitigate the complexity associated with non-blocking synchronization, we propose concurrent objects as a novel high-level language construct. Entities of a concurrent object execute in parallel, due to a fine-grained, optimistic synchronization mechanism. Synchronization is framed by the semantics of concurrent entry execution. The programmer is only required to label shared data accesses in the code of concurrent entries. Labels constitute memory-ordering operations expressed through attributes. To the best of our knowledge, this is the first approach to provide a non-blocking synchronization" -"---\nabstract: 'While there are a number of models that tackle the problem of calculating friction forces on the atomic level, providing a completely parameter-free approach remains a challenge. Here we present a quasi-static model to obtain an approximation to the nanofrictional response of dry, wearless systems based on quantum mechanical all-electron calculations. 
We propose a mechanism to allow dissipative sliding, which relies on atomic relaxations. We define two different ways of calculating the mean nanofriction force, both leading to an exponential friction-versus-load behavior for all sliding directions. Since our approach does not impose any limits on lengths and directions of the sliding paths, we investigate arbitrary sliding directions for an fcc Cu(111) interface and detect two periodic paths which form the upper and lower bound of nanofriction. For long aperiodic paths the friction force converges to a value in between these limits. For low loads we retrieve the Derjaguin generalization of the Amontons-Coulomb kinetic friction law, which appears to be valid all the way down to the nanoscale. We observe a non-vanishing Derjaguin-offset even for atomically flat surfaces in dry contact.'\nauthor:\n- M Wolloch\n- G Feldbauer\n- P Mohn\n- J Redinger\n- A Vernes\nbibliography:\n- '../../PhD\\_Papers/Bib.bib'" -"---\nabstract: 'As far as we know, a useful quantum computer will require fault-tolerant gates, and existing schemes demand a prohibitively large space and time overhead. We argue that a first generation quantum computer will be very valuable to design, test, and optimize fault-tolerant protocols tailored to the noise processes of the hardware. Our argument is essentially a critical analysis of the current methods envisioned to optimize fault-tolerant schemes, which rely on hardware characterization, noise modelling, and numerical simulations. We show that, even within a very restricted set of noise models, error correction protocols depend strongly on the details of the noise model. Combined with the intrinsic difficulty of hardware characterization and of numerical simulations of fault-tolerant protocols, we arrive at the conclusion that the currently envisioned optimization cycle is of very limited scope. 
On the other hand, the direct characterization of a fault-tolerant scheme on a small quantum computer bypasses these difficulties, and could provide a bootstrapping path to full-scale fault-tolerant quantum computation.'\nauthor:\n- 'Pavithran S. Iyer'\n- David Poulin\nbibliography:\n- 'refs.bib'\ntitle: 'A Small Quantum Computer is Needed to Optimize Fault-Tolerant Protocols'\n---\n\n=1\n\nMotivation\n==========\n\nWhile we know that a quantum computer can in principle" -"---\nabstract: 'We study the propagation of star formation based on the investigation of the separation of young star clusters from H[ii]{} regions nearest to them. The relation between the separation and $U-B$ colour index (or age) of a star cluster was found. The average age of star clusters increases with the separation as the 1.0-1.2 power in the separation range from 40 to 200 pc and as the 0.4-0.9 power in the range of 100-500 pc in the galaxies with symmetric morphology. The galaxies with distorted asymmetric disc structure show a more complex and steeper (power $>1.2$ at separations from 40 to 500 pc) dependence between the age and the separation. Our results confirm the findings of previous studies on the dominant role of turbulence in the propagation of the star formation process on spatial scales up to 500 pc and on time scales up to 300 Myr. On a smaller scale ($\\le100$ pc), other physical processes, such as stellar winds and supernova explosions, play an important role along with turbulence. On the scale of stellar associations (100-200 pc and smaller), the velocity of star formation propagation is almost constant and it has a typical value of a few km s$^{-1}$.'\ndate:" -"---\nabstract: 'Spin off events and impacts can eject boulders from an asteroid surface and rubble pile asteroids can accumulate from debris following a collision between large asteroids. 
These processes produce a population of gravitationally bound objects in orbit that can impact an asteroid surface at low velocity and with a distribution of impact angles. We present laboratory experiments of low velocity spherical projectiles into a fine granular medium, sand. We delineate velocity and impact angles giving ricochets, those giving projectiles that roll-out from the impact crater and those that stop within their impact crater. With high speed camera images and fluorescent markers on the projectiles, we track spin and projectile trajectories during impact. We find that the projectile only reaches a rolling without slipping condition well after the projectile has reached peak penetration depth. The required friction coefficient during the penetration phase of impact is 4-5 times lower than that of the sand, suggesting that the sand is fluidized near the projectile surface during penetration. We find that the critical grazing angle dividing ricochets from roll-outs increases with increasing impact velocity. The critical angles for ricochet and for roll-out as a function of velocity can be matched" -"---\nabstract: 'We study Schwinger pair creation of charged particles due to the inhomogeneous electric field created by the thin electron layer at the surface of quark stars (the electrosphere). As suggested earlier, due to the low photon emissivity of the quark-gluon plasma and of the electrosphere, electron-positron pair emission could be the main observational signature of quark stars. To obtain the electron-positron pair creation rate we use the tunnelling approach. Explicit expressions for the fermion creation rate per unit time per unit volume are derived, which generalize the classical Schwinger result. The finite size effects in pair production, due to the presence of a boundary (the surface of the quark star), are also considered in the framework of a simple approach. 
It is shown that the boundary effects induce large quantitative and qualitative deviations of the particle production rate from what one deduces with the Schwinger formula and its generalization for the electric field of the electrosphere. The electron-positron pair emissivity and flux of the electrosphere of quark stars due to pair creation are considered, and the magnitude of the boundary effects for these parameters is estimated. Due to the inhomogeneity of the electric field distribution in the electrosphere" -"---\nabstract: 'Fast and accurate solution of time-dependent partial differential equations (PDEs) is of key interest in many research fields including physics, engineering, and biology. Generally, implicit schemes are preferred over the explicit ones for better stability and correctness. The existing implicit schemes are usually iterative and employ a general-purpose solver which may be sub-optimal for a specific class of PDEs. In this paper, we propose a neural solver to learn an optimal iterative scheme for a class of PDEs, in a data-driven fashion. We attain this objective by modifying an iteration of an existing semi-implicit solver using a deep neural network. Further, we prove theoretically that our approach preserves the correctness and convergence guarantees provided by the existing iterative solvers. 
We also demonstrate that our model generalizes to a different parameter setting than the one seen during training and achieves faster convergence compared to the semi-implicit schemes.'\nauthor:\n- |\n Suprosanna Shit\\\n Technical University Munich\\\n `suprosanna.shit@tum.de`\\\n Abinav R.\\\n Technical University Munich\\\n `abinav.ravi@tum.de`\\\n Ivan Ezhov\\\n Technical University Munich\\\n `ivan.ezhov@tum.de`\\\n Jana Lipkova\\\n Technical University Munich\\\n `jana.lipkova@tum.de`\\\n Marie Piraud\\\n Technical University Munich\\\n `marie.piraud@tum.de`\\\n Bjoern Menze\\\n Technical University Munich\\\n `bjoern.menze@tum.de`\nbibliography:\n- 'neurips\\_2019.bib'\ntitle: 'Implicit Neural Solver for Time-dependent Linear PDEs with Convergence Guarantee'" -"---\nabstract: 'We report the discovery of 31 blue, short-period pulsators made using data taken as part of the Rapid Temporal Survey (RATS). We find they have periods between 51\u201383 mins and full-amplitudes between 0.05\u20130.65 mag. Using the period-luminosity relationship for short period pulsating stars we determine their distance. Assuming they are pulsating in either the fundamental or first overtone radial mode, the majority are located at a distance greater than 3kpc, with several being more than 20 kpc distant. Most stars are at least 1 kpc from the Galactic plane, with three being more than 10 kpc. One is located in the direction of the Galactic anti-center and has a Galactocentric distance of $\\sim$30 kpc and is $\\sim$20 kpc below the plane: they are therefore potential tracers of Galactic structure. We have obtained low-resolution spectra for a small number of our targets and find they have temperatures between 7200\u20137900K and a metal content less than Solar. The colours of the pulsators and the spectral fits to those stars for which we have spectra indicate that they are either SX Phe or $\\delta$ Scuti stars. 
We estimate the number of SX Phe stars in our Galaxy and find significantly fewer per" -"---\nabstract: 'Spectroscopy during planetary transits is a powerful tool to probe exoplanet atmospheres. We present the near-infrared transit spectroscopy of XO-2b obtained with HST NICMOS. Uniquely for NICMOS transit spectroscopy, a companion star of similar properties to XO-2 is present in the field of view. We derive improved star and planet parameters through a photometric white-light analysis. We show a clear correlation of the spectrum noise with instrumental parameters, in particular the angle of the spectral trace on the detector. An MCMC method using a decorrelation from instrumental parameters is used to extract the planetary spectrum. Spectra derived independently from each of the 3 visits have an RMS of 430, 510, and 1000\u00a0ppm, respectively. The same analysis is performed on the companion star after numerical injection of a transit with a depth constant at all wavelengths. The extracted spectra exhibit residuals of similar amplitude as for XO-2, which represent the level of remaining NICMOS systematics. This shows that extracting planetary spectra is at the limit of NICMOS\u2019 capability. We derive a spectrum for the planet XO-2b using the companion star as a reference. [The derived spectrum can be represented by a theoretical model including atmospheric water vapor or" -"---\nabstract: 'We present a new computational model for gaze prediction in egocentric videos by exploring patterns in temporal shift of gaze fixations (attention transition) that are dependent on egocentric manipulation tasks. Our assumption is that the high-level context of how a task is completed in a certain way has a strong influence on attention transition and should be modeled for gaze prediction in natural dynamic scenes. Specifically, we propose a hybrid model based on deep neural networks which integrates task-dependent attention transition with bottom-up saliency prediction. 
In particular, the task-dependent attention transition is learned with a recurrent neural network to exploit the temporal context of gaze fixations, *e.g.* looking at a cup after moving gaze away from a grasped bottle. Experiments on public egocentric activity datasets show that our model significantly outperforms state-of-the-art gaze prediction methods and is able to learn meaningful transition of human attention.'\nauthor:\n- Yifei Huang\n- 'Minjie Cai[^1]'\n- Zhenqiang Li\n- Yoichi Sato\nbibliography:\n- 'bib1321.bib'\ntitle: 'Predicting Gaze in Egocentric Video by Learning Task-dependent Attention Transition'\n---\n\nIntroduction\n============\n\nWith the increasing popularity of wearable or action cameras in recording our life experience, egocentric vision [@betancourt2015evolution], which aims at automatic analysis of" -"---\nabstract: 'The search strategy or the discovery of new effects for heavy neutrinos often rely on their different decay channels to detectable particles. In particular in this work we study the decay of a Majorana neutrino with interactions obtained from an effective general theory modeling new physics at the scale $\\Lambda$. The results obtained are general because they are based in an effective theory and not in specific models. We are interested in relatively light heavy Majorana neutrinos, with masses lower than the $W$ mass ($m_N.'\nauthor:\n- Uri Shaham\n- Tom Zahavy\n- Cesar Caraballo\n- Shiwani Mahajan\n- Daisy Massey\n- Harlan Krumholz\nbibliography:\n- 'questionnaire.bib'\ntitle: Learning to Ask Medical Questions using Reinforcement Learning\n---\n\nIntroduction {#sec:intro}\n============\n\nFeature selection is an important topic in traditional machine learning\u00a0[@li2018feature], which motivated a large number of widely adopted works, e.g., Lasso\u00a0[@tibshirani1996regression]. 
In various cases, the process of obtaining input measurements requires considerable effort" -"---\nabstract: 'We detect a novel radiative cascade from a neutral semiconductor quantum dot. The cascade initiates from a metastable biexciton state in which the holes form a spin-triplet configuration, Pauli-blockaded from relaxation to the spin-singlet ground state. The triplet biexciton has two photon-phonon-photon decay paths. Unlike in the singlet-ground state biexciton radiative cascade, in which the two photons are co-linearly polarized, in the triplet biexciton cascade they are cross-linearly polarized. We measured the two-photon polarization density matrix and show that the phonon emitted when the intermediate exciton relaxes from excited to ground state, preserves the exciton\u2019s spin. The phonon, thus, does not carry with it any which-path information other than its energy. Nevertheless, entanglement distillation by spectral filtering was found to be rather ineffective for this cascade. This deficiency results from the opposite sign of the anisotropic electron-hole exchange interaction in the excited exciton relative to that in the ground exciton.'\nauthor:\n- 'Y.\u00a0Kodriano'\n- 'E.\u00a0Poem'\n- 'N.\u00a0H.\u00a0Lindner'\n- 'C.\u00a0Tradonsky'\n- 'B.\u00a0D.\u00a0Gerardot'\n- 'P.\u00a0M.\u00a0Petroff'\n- 'J.\u00a0E.\u00a0Avron'\n- 'D.\u00a0Gershoni'\ntitle: 'Radiative cascade from quantum dot metastable spin-blockaded biexciton'\n---\n\nIntroduction\n============\n\nA quantum dot (QD) containing two electron-hole" -"---\nabstract: 'We have performed a comprehensive study of the UV emission detected from AGB stars by the Galaxy Evolution Explorer (GALEX). Of the 468 AGB stars in our sample, 316 were observed by GALEX. In the NUV bandpass ($\\lambda_{\\rm eff} \\sim 2310~\\AA$), 179 AGB stars were detected and 137 were not detected. Only 38 AGB stars were detected in the FUV bandpass ($\\lambda_{\\rm eff} \\sim1528~\\AA$). 
We find that NUV emission is correlated with optical to near-infrared emission, leading to higher detection fractions among the brightest, hence closest, AGB stars. Comparing the AGB time-variable visible phased light curves to corresponding GALEX NUV phased light curves, we find evidence that for some AGB stars the NUV emission varies in phase with the visible light curves. We also find evidence that the NUV emission, and, possibly, the FUV emission are anti-correlated with the circumstellar envelope density. These results suggest that the origin of the GALEX-detected UV emission is an inherent characteristic of the AGB stars that can most likely be traced to a combination of photospheric and chromospheric emission. In most cases, UV detections of AGB stars are not likely to be indicative of the presence of binary companions.'\nauthor:\n-
As an example, we discuss how to achieve a topological gapped phase with surface Dirac points.'\naddress: 'Research Laboratory for Quantum Materials, Singapore University of Technology and Design, Singapore 487372, Singapore'\nauthor:\n- 'Y. X. Zhao'\n- 'Y. Lu'\n- 'Shengyuan A. Yang'\nbibliography:\n- 'Second-order\\_Spin\\_Liquids.bib'\ntitle: 'Topological second-order spin-$3/2$ liquids with hinge Fermi arcs'\n---" -"---\nabstract: 'For the explosion mechanism of Type Ia supernovae (SNe Ia), different scenarios have been suggested. In these, the propagation of the burning front through the exploding white dwarf star proceeds in different modes, and consequently imprints of the explosion model on the nucleosynthetic yields can be expected. The nucleosynthetic characteristics of various explosion mechanisms is explored based on three two-dimensional explosion simulations representing extreme cases: a pure turbulent deflagration, a delayed detonation following an approximately spherical ignition of the initial deflagration, and a delayed detonation arising from a highly asymmetric deflagration ignition. Apart from this initial condition, the deflagration stage is treated in a parameter-free approach. The detonation is initiated when the turbulent burning enters the distributed burning regime. This occurs at densities around $10^{7}$ g cm$^{-3}$ \u2013 relatively low as compared to existing nucleosynthesis studies for one-dimensional spherically symmetric models. The burning in these multidimensional models is different from that in one-dimensional simulations as the detonation wave propagates both into unburned material in the high density region near the center of a white dwarf and into the low density region near the surface. Thus, the resulting yield is a mixture of different explosive burning products, from carbon-burning" -"---\nabstract: 'The toughness of a polymer material can increase significantly if two networks are combined into one material. 
This toughening effect is a consequence of a transition from a brittle to a ductile failure response. Although this transition and the accompanying toughening effect were first demonstrated in hydrogels, the concept has been proven effective in elastomers and in macroscopic composites as well. This suggests that the transition is not caused by a specific molecular architecture, but rather by a general physical principle related to the mechanical interplay between two interpenetrating networks. Here we employ theory and computer simulations, inspired by this general principle, to investigate how disorder controls the brittle-to-ductile transition both at the macroscopic and the microscopic level. A random spring network model featuring two different spring types enables us to study the joint effect of initial disorder and network-induced stress heterogeneity on this transition. We reveal that a mechanical force balance gives a good description of the brittle-to-ductile transition. In addition, the inclusion of disorder in the spring model predicts four different failure regimes along the brittle-to-ductile response in agreement with experimental findings. Finally, we show that the network structure can result in stress concentration, diffuse" -"---\nabstract: 'Even though a lot of effort has been invested in analyzing client-side web applications during the past decade, the existing tools often fail to deal with the complexity of modern JavaScript applications. However, from an attacker point of view, the client side of such web applications can reveal invaluable information about the server side. In this paper, first we study the existing tools and enumerate the most crucial features a security-aware client-side analysis should be supporting. Next, we propose to detect vulnerabilities in modern client-side JavaScript applications that are built upon complex libraries and frameworks. 
In particular, we take the first step in closing the gap between state-aware crawling and client-side security analysis by proposing a feedback-driven security-aware guided crawler that is able to analyze complex frameworks automatically, and increase the coverage of security-sensitive parts of the program efficiently. Moreover, we propose a new lightweight client-side taint analysis that outperforms the state-of-the-art tools, requires no modification to browsers, and reports non-trivial taint flows on modern JavaScript applications.'\nauthor:\n- \nbibliography:\n- 'references.bib'\ntitle: 'Gelato: Feedback-driven and Guided Security Analysis of Client-side Web Applications'\n---\n\n=1
Lindinger $^1$'\ndate: 'June 7, 2018'\ntitle: 'Modifications of filament spectra by shaped octave-spanning laser pulses'\n---\n\nIntroduction\n============\n\nA broad spectrum is required when generating ultrashort laser" -"---\nabstract: |\n We propose a general framework to study constructions of Euclidean lattices from linear codes over finite fields. In particular, we prove general conditions for an ensemble constructed using linear codes to contain dense lattices (i.e., with packing density comparable to the Minkowski-Hlawka lower bound). Specializing to number field lattices, we obtain a number of interesting corollaries - for instance, the best known packing density of ideal lattices, and an elementary coding-theoretic construction of asymptotically dense Hurwitz lattices. All results are algorithmically effective, in the sense that, for any dimension, a finite family containing dense lattices is exhibited. For suitable constructions based on Craig\u2019s lattices, this family is smaller, in terms of alphabet-size, than previous ones in the literature.\\\n **Keywords:** Lattices, sphere packings, random codes, ideal lattices, codes over matrix rings\nauthor:\n- 'Antonio Campello [^1]'\nbibliography:\n- 'campbel.bib'\ntitle: |\n Random Ensembles of Lattices\\\n from Generalized Reductions\n---\n\nIntroduction\n============\n\nThere has been a renewed interest in the search for new constructions of lattices from error-correcting codes due to their various recent applications, such as coding for fading wiretap channels [@KosiOngOggier], Gaussian relay networks [@Adaptative], compound fading channels [@Our] and index codes [@Index], to name only a" -"---\nabstract: 'Neural codes are collections of binary vectors that represent the firing patterns of neurons. The information given by a neural code $C$ can be represented by its neural ideal $J_C$. 
In turn, the polynomials in $J_C$ can be used to determine the relationships among the receptive fields of the neurons. In a paper by Curto et al., three such relationships, known as the Type 1-3 relations, were linked to the neural ideal by three if-and-only-if statements. Later, Garcia et al. discovered the Type 4-6 relations. These new relations differed from the first three in that they were related to $J_C$ by one-way implications. In this paper, we first show that the converses of these new implications are false at the level of both the neural ideal $J_C$ and the larger ideal $I(C)$ of a code. We then present modified statements of these relations that, like the first three, can be related by if-and-only-if statements to both $J_C$ and $I(C)$. Using the modified relations, we uncover a new relationship involving $J_C$, $I(C)$, and the Type 1-6 relations.'\nauthor:\n- Angelique Morvant\nbibliography:\n- 'MasterBibTeX.bib'\ndate: 'March 8, 2018'\ntitle: Strengthening Relationships between Neural Ideals and Receptive Fields\n---\n\nIntroduction" -"---\nabstract: 'We show that early stellar encounters can explain the high eccentricities and inclinations observed in the outer part ($>42$AU) of the Edgeworth-Kuiper Belt (EKB). We consider the proto-sun as a member of a stellar aggregation that undergoes dissolution on a timescale $\\sim 10^8$ yrs, such that the solar nebula experiences a flyby encounter at pericenter distance ($q$) on the order of $100$AU. Using numerical simulations we show that a stellar encounter pumps the velocity dispersion in the young solar nebula in the outer parts. In the case of a nearly parabolic encounter with a solar-mass companion the velocity dispersion at $a \\simg 0.25q$ is pumped up to such an extent that collisions between planetesimals would be expected to become highly disruptive, halting further growth of planetesimals. This has the consequence that planet formation is forestalled in that region. 
We also find that a stellar encounter with pericenter distance $q \\sim 100$\u2013$200$AU could have pumped up the velocity dispersion of EKB objects outside 42AU to the observed magnitude while preserving that inside Neptune\u2019s 3:2 mean-motion resonance (located at 39.5AU). This allows for the efficient capture of objects by the resonance during a phase of orbital migration by proto-Neptune, which" -"---\nabstract: 'Correspondence selection aiming at seeking correct feature correspondences from raw feature matches is pivotal for a number of feature-matching-based tasks. Various 2D (image) correspondence selection algorithms have been presented with decades of progress. Unfortunately, the lack of an in-depth evaluation makes it difficult for developers to choose a proper algorithm given a specific application. This paper fills this gap by evaluating eight 2D correspondence selection algorithms ranging from classical methods to the most recent ones on four standard datasets. The diversity of experimental datasets brings various nuisances including zoom, rotation, blur, viewpoint change, JPEG compression, light change, different rendering styles and multi-structures for comprehensive test. To further create different distributions of initial matches, a set of combinations of detector and descriptor is also taken into consideration. We measure the quality of a correspondence selection algorithm from four perspectives, i.e., precision, recall, F-measure and efficiency. 
Based on the evaluation results, the current advantages and limitations of all considered algorithms are summarized, serving as a \u201cuser guide\u201d for future developers.'\nauthor:\n- 'Chen\u00a0Zhao,\u00a0Jiaqi\u00a0Yang,\u00a0Yang\u00a0Xiao,\u00a0and Zhiguo\u00a0Cao [^1]'\nbibliography:\n- 'mybibfile.bib'\ntitle: Comparative evaluation of 2D feature correspondence selection algorithms\n---\n\n[Shell
Yet, efficient approaches for most of the decision and enumeration problems associated with such frameworks are missing, thus potentially limiting the efficacy of argumentation-based approaches in real domains. In this paper, we present an algorithm for enumerating the preferred extensions of abstract argumentation frameworks which exploits parallel computation. To this purpose, the SCC-recursive semantics definition schema is adopted, where extensions are defined at the level of specific sub-frameworks. The algorithm shows significant performance improvements in large frameworks, in terms of the number of solutions found and speedup.'\nauthor:\n- 'Federico Cerutti[^1]'\n- 'Ilias Tachmazidis[^2]'\n- 'Mauro Vallati[^3]'\n- 'Sotirios Batsakis[^4]'\n- 'Massimiliano Giacomin[^5]'\n- 'Grigoris Antoniou[^6]'\nbibliography:\n- 'ref.bib'\ntitle: 'Exploiting Parallelism for Hard Problems in Abstract Argumentation: Technical Report'\n---\n\nIntroduction\n============\n\nDung\u2019s theory of abstract argumentation [@dung1995] is a unifying framework able to encompass a large variety of specific formalisms in the areas of nonmonotonic reasoning, logic programming and computational argumentation. It is based on the notion of argumentation framework (), consisting of a set of arguments and an *attack* relation between them. Different *argumentation" -"---\nauthor:\n- 'G. Maciejewski'\n- 'A. Niedzielski'\ndate: 'Received 10 January 2006 / Accepted 10 January 2006'\ntitle: CCD BV survey of 42 open clusters\n---\n\n[We present results of a photometric survey whose aim was to derive structural and astrophysical parameters for 42 open clusters. 
While our sample is definitely not representative of the total open cluster sample in the Galaxy, it does cover a wide range of cluster parameters and is uniform enough to allow for simple statistical considerations.]{} [BV wide-field CCD photometry was obtained for open clusters for which photometric, structural, and dynamical evolution parameters were determined. The limiting and core radii were determined by analyzing radial density profiles. The ages, reddenings, and distances were obtained from the solar metallicity isochrone fitting. The mass function was used to study the dynamical state of the systems, the mass segregation effect, and to estimate the total mass and number of cluster members.]{} [This study reports on the first determination of basic parameters for 11 out of 42 observed open clusters. The angular sizes for the majority of the observed clusters appear to be several times larger than the catalogue data indicate. The core and limiting cluster radii are correlated" -"---\nabstract: 'The ever-increasing power of the personal computer has led to easy parallel implementations of Markov chain Monte Carlo (MCMC). However, almost all work in estimating the variance of Monte Carlo averages, including the efficient batch means (BM) estimator, focuses on a single-chain MCMC run. We demonstrate that simply averaging covariance matrix estimators from multiple chains (average BM) can yield critical underestimates in small sample sizes, especially for slow mixing Markov chains. We propose a multivariate replicated batch means (RBM) estimator that utilizes information from parallel chains, thereby correcting for the underestimation. Under weak conditions on the mixing rate of the process, the RBM and ABM estimator are both strongly consistent and exhibit similar large-sample bias and variance. However, in small runs the RBM estimator can be dramatically superior. 
This is demonstrated through a variety of examples, including a two-variable Gibbs sampler for a bivariate Gaussian target distribution. Here, we obtain a closed-form expression for the asymptotic covariance matrix of the Monte Carlo estimator, a useful result for benchmarking in the future.'\nauthor:\n- |\n Kushagra Gupta\\\n Department of Mathematics and Statistics\\\n IIT Kanpur\\\n `kushgpt@iitk.ac.in`\n- |\n Dootika Vats[^1]\\\n Department of Mathematics and Statistics\\\n IIT Kanpur\\\n `dootika@iitk.ac.in`\\\nbibliography:\n-" -"---\nauthor:\n- Moumita Das\n- 'Alex J. Levine'\n- 'F.C. MacKintosh'\ntitle: Buckling and force propagation along intracellular microtubules\n---\n\nThe mechanical response of most eukaryotic cells depends on their *cytoskeleton*, a composite network of filamentous proteins \u00a0[@alberts]. Microtubules (MTs) are the stiffest of these cytoskeletal filaments, and they play an important role in organization of, and transport within the cell. Their mechanical rigidity allows them to support significant stresses in the cytoplasm. These stresses can be highly inhomogeneous, with compressive/tensile forces directed along stiff MTs, permitting directed force transmission and mechanical signaling over several microns within the cell. As with macroscopic elastic rods, however, even the comparatively rigid MTs cannot, on their own, withstand as large *compressive* loads as *tensile* loads. This is because of the classical Euler buckling instability limiting the compressive force to a maximum value, which actually vanishes for long rods. It was recently shown, however, that even long MTs *can* bear large compressive loads, as a result of their coupling to the surrounding elastic matrix of the cytoskeleton [@cliff]. 
This composite aspect of the cytoskeleton has important consequences for cell mechanics and *mechanotransduction* [@ingber; @janmey:2004; @janmey:1998; @wang]\u2014the generation, transmission, and sensing of forces by" -"---\nauthor:\n- 'Prosenjit Kundu, Pitambar Khanra, Chittaranjan Hens,'\n- Pinaki Pal\ntitle: Optimizing synchronization in multiplex networks of phase oscillators\n---\n\nDiverse collective phenomena can emerge in complex systems consisting of interacting dynamical units on complex network topology. One such emergent collective phenomenon is synchronization [@Pikovsky_synchronization_book; @Arenas_PhysReport2008], observed and tested in different real world systems including groups of fireflies, power grid networks, brain, cellular and chemical oscillators [@Strogatz_synchronization_book; @Pikovsky_synchronization_book; @Arenas_PhysReport2008; @Motter_NatPhys2013; @Belykh_PRL2005; @Dorogovtsev_RMP2008]. On the other hand, significant advancement has been made in characterizing the statistical scaling of diverse complex network topologies and its profound applications to real situations [@Dorogovtsev_RMP2008; @Albert_RMP2002; @Cohen_complex_book]. Therefore, it has become imperative to understand how the interplay between network topology and nodal states influences the emergent dynamics in complex networks [@Barzel_NatPhys2013; @Hens_NatPhysics2019]. 
Researchers have been trying to interlink collective macroscopic properties such as synchronization with network structure [@Arenas_PhysReport2008; @Ichinomiya_PRE2004; @Restrepo_PRE2005; @Arenas_PRL2006; @Jesus_PRL2007; @Skardal_PRL2014] for a long time, yet it is not fully understood how structural or degree heterogeneity affects the collective emergent behavior (say synchronization) of coupled oscillators or vice versa.\n\nCurrently, the multiplex network [@Boccaletti_PhysReport2016; @Domenico_NatPhys2016; @Danziger_NatPhys2019; @Jalan_PRE2019] has become an interesting topic for researchers due to its diverse applications in the real world, ranging from transportation" -"---\nabstract: 'We present a new asynchronous model of computation named *Stellar Resolution* based on first-order unification [@herbrand1930recherches; @robinson1965machine]. This model of computation is obtained as a formalisation of Girard\u2019s transcendental syntax programme, sketched in a series of three articles [@girard2017transcendental; @girard2016transcendental; @girard2016transcendental3]. As such, it is the first step towards a proper formal treatment of Girard\u2019s proposal to tackle first-order logic in a proofs-as-program approach [@girard2016transcendental3]. After establishing formal definitions and basic properties of stellar resolution, we explain how it generalises traditional models of computation, such as logic programming and combinatorial models such as Wang tilings. We then explain how it can represent multiplicative proof-structures [@girard1987linear], their cut-elimination and the correctness criterion of Danos-Regnier [@danos1989structure]. 
Further use of realisability techniques leads to dynamic semantics for Multiplicative Linear Logic, following previous Geometry of Interaction models.'\nauthor:\n- |\n Boris Eng\\\n Universit\u00e9 Sorbonne Paris Nord\\\n LIPN \u2013 UMR 7030\\\n `engboris@hotmail.fr`\n- |\n Thomas Seiller\\\n CNRS\\\n LIPN \u2013 UMR 7030\\\n `seiller@lipn.fr`\nbibliography:\n- 'references.bib'\ntitle: 'Stellar Resolution: Multiplicatives'\n---\n\nIntroduction {#sec:intro}\n============\n\nWe present a new asynchronous model of computation named *Stellar Resolution* based on first-order unification [@herbrand1930recherches; @robinson1965machine]. This model arises from work in proof theory, and more precisely proof-theoretic semantics" -"---\nabstract: |\n We present RON, an efficient and effective framework for generic object detection. Our motivation is to smartly associate the best of the region-based (e.g., Faster R-CNN) and region-free (e.g., SSD) methodologies. Under fully convolutional architecture, RON mainly focuses on two fundamental problems: (a) multi-scale object localization and (b) negative sample mining. To address (a), we design the reverse connection, which enables the network to detect objects on multi-levels of CNNs. To deal with (b), we propose the objectness prior to significantly reduce the searching space of objects. We optimize the reverse connection, objectness prior and object detector jointly by a multi-task loss function, thus RON can directly predict final detection results from all locations of various feature maps.\n\n Extensive experiments on the challenging PASCAL VOC 2007, PASCAL VOC 2012 and MS COCO benchmarks demonstrate the competitive performance of RON. Specifically, with VGG-16 and low resolution 384$\\times$384 input size, the network gets 81.3% mAP on PASCAL VOC 2007, 80.7% mAP on PASCAL VOC 2012 datasets. Its superiority increases when datasets become larger and more difficult, as demonstrated by the results on the MS COCO dataset. 
With 1.5G GPU memory at test phase, the speed of the network is" -"---\nabstract: 'One scenario proposed to explain the million degrees solar corona is a finely-stranded corona where each strand is heated by a rapid pulse. However, such fine structure has neither been resolved through direct imaging observations nor conclusively shown through indirect observations of extended superhot plasma. Recently it has been shown that the observed difference in appearance of cool and warm coronal loops ($\\sim1$\u00a0MK, $\\sim2-3$\u00a0MK, respectively) \u2013 warm loops appearing \u201cfuzzier\u201d than cool loops \u2013 can be explained by models of loops composed of subarcsecond strands, which are impulsively heated up to $\\sim10$\u00a0MK. That work predicts that images of hot coronal loops ($\\gtrsim6$\u00a0MK) should again show fine structure. Here we show that the predicted effect is indeed widely observed in an active region with the Solar Dynamics Observatory, thus supporting a scenario where impulsive heating of fine loop strands plays an important role in powering the active corona.'\nauthor:\n- 'Fabio Reale, Massimiliano Guarrasi, Paola Testa, Edward E. DeLuca, Giovanni Peres, Leon Golub'\ntitle: Solar Dynamics Observatory discovers thin high temperature strands in coronal active regions\n---\n\nIntroduction\n============\n\nThe bright corona consists of magnetic loop-like tubes which confine the heated plasma. It has been" -"---\nabstract: 'A color image contains luminance and chrominance components representing the intensity and color information respectively. The objective of the work presented in this paper is to show the significance of incorporating the chrominance information for the task of scene classification. 
An improved color-to-grayscale image conversion algorithm by effectively incorporating the chrominance information is proposed using the color-to-gray structure similarity index (C2G-SSIM) and singular value decomposition (SVD) to improve the perceptual quality of the converted grayscale images. The experimental result analysis based on the image quality assessment for image decolorization called C2G-SSIM and success rate (Cadik and COLOR250 datasets) shows that the proposed image decolorization technique performs better than 8 existing benchmark algorithms for image decolorization. In the second part of the paper, the effectiveness of incorporating the chrominance component in the scene classification task is demonstrated using the deep belief network (DBN) based image classification system developed using dense scale invariant feature transform (SIFT) as features. The levels of chrominance information incorporated by the proposed image decolorization technique are confirmed by the improvement in the overall scene classification accuracy. Also, the overall scene classification performance is improved by the combination of models obtained using the proposed and the conventional" -"---\nabstract: 'It was recently pointed out that topological liquid phases arising in the fractional quantum Hall effect (FQHE) are not required to be rotationally invariant, as most variational wavefunctions proposed to date have been. Instead, they possess a geometric degree of freedom corresponding to a shear deformation that acts like an intrinsic metric. We apply this idea to a system with an anisotropic band mass, as is intrinsically the case in many-valley semiconductors such as AlAs and Si, or in isotropic systems like GaAs in the presence of a tilted magnetic field, which breaks the rotational invariance. We perform exact diagonalization calculations with periodic boundary conditions (torus geometry) for various filling fractions in the lowest, first and second Landau levels. 
In the lowest Landau level, we demonstrate that FQHE states generally survive the breakdown of rotational invariance by moderate values of the band mass anisotropy. At 1/3 filling, we generate a variational family of Laughlin wavefunctions parametrized by the metric degree of freedom. We show that the intrinsic metric of the Laughlin state adjusts as the band mass anisotropy or the dielectric tensor are varied, while the phase remains robust. In the $n=1$ Landau level, mass anisotropy drives transitions" -"---\nabstract: 'We propose a novel approach to estimate the Cox model with temporal covariates. Our new approach treats the temporal covariates as arising from a longitudinal process which is modeled jointly with the event time. Different from the literature, the longitudinal process in our model is specified as a bounded variational process and determined by a family of Initial Value Problems associated with an Ordinary Differential Equation. Our specification has the advantage that only the observation of the temporal covariates at the time to event and the time to event itself are required to fit the model, while it is fine but not necessary to have more longitudinal observations. This fact makes our approach very useful for many medical outcome datasets, like the New York State\u2019s Statewide Planning and Research Cooperative System (SPARCS) and the National Inpatient Sample (NIS), where it is important to find the hazard rate of being discharged given the accumulative cost but only the total cost at the discharge time is available due to the protection of patients\u2019 information. Our estimation procedure is based on maximizing the full information likelihood function. The resulting estimators are shown to be consistent and asymptotically normally distributed. 
Variable selection" -"---\nabstract: 'In this paper we are concerned with the challenging problem of producing a full image sequence of a deformable face given only an image and generic facial motions encoded by a set of sparse landmarks. To this end we build upon recent breakthroughs in image-to-image translation such as pix2pix, CycleGAN and StarGAN which learn Deep Convolutional Neural Networks (DCNNs) that learn to map aligned pairs of images between different domains (, having different labels) and propose a new architecture which is no longer driven by labels but by spatial maps, namely facial landmarks. In particular, we propose the MotionGAN which transforms an input face image into a new one according to a heatmap of target landmarks. We show that it is possible to create very realistic face videos using a single image and a set of target landmarks. Furthermore, our method can be used to edit a facial image with arbitrary motions according to landmarks (, expression, speech, ). This provides much more flexibility to face editing, expression transfer, facial video creation, . than models based on discrete expressions, audio or action units.'\nauthor:\n- |\n \\\n [{kritaphat.songsri-in11, s.zafeiriou}@imperial.ac.uk]{}\nbibliography:\n- 'egbib.bib'\ntitle: Face Video Generation from a" -"---\nabstract: 'This manuscript investigates unconditional and conditional-on-stopping maximum likelihood estimators (MLEs), information measures and information loss associated with conditioning in group sequential designs (GSDs). The possibility of early stopping brings truncation to the distributional form of MLEs; sequentially, GSD decisions eliminate some events from the sample space. Multiple testing induces mixtures on the adapted sample space. Distributions of MLEs are mixtures of truncated distributions. 
Test statistics that are asymptotically normal without GSD have asymptotic distributions, under GSD, that are non-normal mixtures of truncated normal distributions under local alternatives; under fixed alternatives, asymptotic distributions of test statistics are degenerate. Estimation of various statistical quantities such as information, information fractions, and confidence intervals should account for the effect of planned adaptations. Calculation of adapted information fractions requires substantial computational effort. Therefore, a new GSD is proposed in which stage-specific sample sizes are fully determined by desired operational characteristics, and calculation of information fractions is not needed.'\nauthor:\n- |\n Sergey Tarima\\\n Institute for Health and Society, Medical College of Wisconsin\\\n and\\\n Nancy Flournoy\\\n Department of Statistics, University of Missouri-Columbia\nbibliography:\n- 'bib.bib'\ntitle: '**Effect of Interim Adaptations in Group Sequential Designs**'\n---\n\n" -"---\nabstract: 'Geochemical data provide key information on the timing of accretion and on the prevailing physical conditions during core/mantle differentiation. However, their interpretation depends critically on the efficiency of metal/silicate chemical equilibration, which is poorly constrained. Fluid dynamics experiments suggest that, before its fragmentation, a volume of liquid metal falling into a magma ocean undergoes a change of topology from a compact volume of metal toward a collection of sheets and ligaments. We investigate here to what extent the vigorous stretching of the metal phase by the turbulent flow can increase the equilibration efficiency through what is known as stretching enhanced diffusion. We obtain scaling laws giving the equilibration times of sheets and ligaments as functions of a P\u00e9clet number based on the stretching rate. 
At large P\u00e9clet, stretching drastically decreases the equilibration time, which in this limit depends only weakly on the diffusivity. We also perform 2D numerical simulations of the evolution of a volume of metal falling into a magma ocean, from which we identify several equilibration regimes depending on the values of the P\u00e9clet (Pe), Reynolds (Re), and Bond (Bo) numbers. At large Pe, Re and Bo, the metal phase is vigorously stretched and convoluted in" -"---\nabstract: 'We suggest to subject anharmonically trapped Bose-Einstein condensates to sinusoidal forcing with a smooth, slowly changing envelope, and to measure the coherence of the system after such pulses. In a series of measurements with successively increased maximum forcing strength one then expects an adiabatic return of the condensate to its initial state as long as the pulses remain sufficiently weak. In contrast, once the maximum driving amplitude exceeds a certain critical value there should be a drastic loss of coherence, reflecting significant heating induced by the pulse. This predicted experimental signature is traced to the loss of an effective adiabatic invariant, and to the ensuing breakdown of adiabatic motion of the system\u2019s Floquet state when the many-body dynamics become chaotic. Our scenario is illustrated with the help of a two-site model of a forced bosonic Josephson junction, but should also hold for other, experimentally accessible configurations.'\nauthor:\n- Christoph Heinisch\n- Martin Holthaus\ndate: 'April 27, 2016'\ntitle: 'Entropy production within a pulsed Bose-Einstein condensate'\n---\n\nIntroduction {#sec:1}\n============\n\nIn 1974 an influential series of experiments on the microwave-induced multiphoton ionization of highly excited Hydrogen atoms was initiated by J.\u00a0E.\u00a0Bayfield and P.\u00a0M.\u00a0Koch\u00a0[@BayfieldKoch74]. Sending" -"---\nabstract: |\n In his lectures at [*College de France*]{}, P.L. 
Lions introduced the concept of Master equation, see [@PLL] for Mean Field Games. It is introduced in a heuristic fashion, from the perspective of a system of partial differential equations, that the equation is associated with a Nash equilibrium for a large, but finite, number of players. The method, also explained in [@PCA], is composed of a formalism of derivations. The interest of this equation is that it contains interesting particular cases, which can be studied directly, in particular the system of HJB-FP (Hamilton-Jacobi-Bellman, Fokker-Planck) equations obtained as the limit of the finite Nash equilibrium game, when the trajectories are independent, see [@LAL]. Usually, in mean field theory, one can bypass the large Nash equilibrium, by introducing the concept of representative agent, whose action is influenced by a distribution of similar agents, and obtains directly the system of HJB-FP equations of interest, see for instance [@BFY]. Apparently, there is no such approach for the Master equation. We show here that it is possible. We first do it for the Mean Field type control problem, for which we interpret completely the Master equation. For the Mean Field Games itself, we solve" -"---\nabstract: 'Ultrafast electronic dynamics are typically studied using pulsed lasers. We demonstrate a complementary experimental approach: quantum simulation of ultrafast dynamics using trapped ultracold atoms. Counter-intuitively, this technique emulates some of the fastest processes in atomic physics with some of the slowest, leading to a temporal magnification factor of up to twelve orders of magnitude. In these experiments, time-varying forces on neutral atoms in the ground state of a tunable optical trap emulate the electric fields of a pulsed laser acting on bound charged particles. 
We demonstrate the correspondence with ultrafast science by a sequence of experiments: nonlinear spectroscopy of a many-body bound state, control of the excitation spectrum by potential shaping, observation of sub-cycle unbinding dynamics during strong few-cycle pulses, and direct measurement of carrier-envelope phase dependence of the response to an ultrafast-equivalent pulse. These results establish cold atom quantum simulation as a complementary tool for studying ultrafast dynamics.'\nauthor:\n- Ruwan Senaratne\n- 'Shankari V. Rajagopal'\n- Toshihiko Shimasaki\n- 'Peter E. Dotti'\n- 'Kurt M. Fujiwara'\n- Kevin Singh\n- 'Zachary A. Geiger'\n- 'David M.\u00a0Weld'\ntitle: Quantum Simulation of Ultrafast Dynamics Using Trapped Ultracold Atoms\n---\n\n[^1]\n\n[^2]\n\nThe study of ultrafast-equivalent electronic and" -"---\nabstract: 'The ability to simultaneously leverage multiple modes of sensor information is critical for perception of an automated vehicle\u2019s physical surroundings. Spatio-temporal alignment of registration of the incoming information is often a prerequisite to analyzing the fused data. The persistence and reliability of multi-modal registration is therefore the key to the stability of decision support systems ingesting the fused information. LiDAR-video systems like on those many driverless cars are a common example of where keeping the LiDAR and video channels registered to common physical features is important. We develop a deep learning method that takes multiple channels of heterogeneous data, to detect the misalignment of the LiDAR-video inputs. A number of variations were tested on the Ford LiDAR-video driving test data set and will be discussed. To the best of our knowledge the use of multi-modal deep convolutional neural networks for dynamic real-time LiDAR-video registration has not been presented.'\nauthor:\n- |\n Michael Giering, Vivek Venugopalan and Kishore Reddy\\\n United Technologies Research Center\\\n E. 
Hartford, CT 06018, USA\\\n Email: {gierinmj, venugov, reddykk}@utrc.utc.com\nbibliography:\n- 'references.bib'\ntitle: 'Multi-modal Sensor Registration for Vehicle Perception via Deep Neural Networks'\n---\n\nMotivation {#sec:motivation}\n==========\n\nNavigation and situational awareness of optionally manned vehicles requires" -"---\nabstract: 'We report the discovery of a relatively faint ($V=15.5$) early-type WN star in the SMC. The line strength and width of He\u00a0II $\\lambda 4686$ emission is similar to that of the other SMC WNs, and the presence of N\u00a0V $\\lambda 4603,19$ emission (coupled with the lack of N\u00a0III) suggests this star is of spectral type WN3-4.5, and thus is similar in type to the other SMC WRs. Also like the other SMC WN stars, an early-type absorption spectrum is weakly present. The absolute magnitude is comparable to that of other (single) Galactic early-type WNs. The star is located in the Hodge\u00a053 OB association, which is also the home of two other SMC WNs. This star, which we designate SMC-WR12, was actually detected at a high significance level in an earlier interference-filter survey, but the wrong star was observed as part of a spectroscopic followup, and this case of mistaken identity resulted in its Wolf-Rayet nature not being recognized until now.'\nauthor:\n- Philip Massey\n- 'K. A. G. Olsen'\n- 'J. Wm. Parker'\ntitle: |\n The Discovery of a Twelfth Wolf-Rayet Star\\\n in the Small Magellanic Cloud\n---\n\nIntroduction\n============\n\nWolf-Rayet stars (WRs) are" -"---\nabstract: 'Self-organizing map(SOM) have been widely applied in clustering, this paper focused on centroids of clusters and what they reveal. When the input vectors consists of time, latitude and longitude, the map can be strongly linked to physical world, providing valuable information. Beyond basic clustering, a novel approach to address the temporal element is developed, enabling 3D SOM to track behaviors in multiple periods concurrently. 
Combined with adaptations targeting the processing of heterogeneous data relating to distribution in time and space, the paper offers a fresh scope for business and services based on temporal-spatial patterns.'\naddress: |\n Huazhong University of Science and Technology\\\n Luoyu Road 1037, Wuhan, China\\\n dingy@hust.edu.cn\nauthor:\n- Yu Ding\nbibliography:\n- 'mybibfile.bib'\ntitle: 'Analysis of Massive Heterogeneous Temporal-Spatial Data with 3D Self-Organizing Map and Time Vector '\n---\n\nSelf-Organizing Map, Multi-Period Pattern, Heterogeneous Data\n\nIntroduction\n============\n\nBackground of Research\n----------------------\n\nWith the development of information gathering technology, people can access a tremendous amount of real-time occurrence data consisting of coordinates both in time and space, such as e-commerce orders, Uber requests[@Uberrequest], crime incident reports[@7MajorFelony], and vehicle collisions[@carcollision]. This massiveness conceals patterns, requiring a feasible tool to identify them. The following research is tightly related to their features" -"---\nabstract: 'We propose a variation to the commonly used Word Error Rate (WER) metric for speech recognition evaluation which incorporates the alignment of phonemes, in the absence of time boundary information. After computing the Levenshtein alignment on words in the reference and hypothesis transcripts, spans of adjacent errors are converted into phonemes with word and syllable boundaries and a phonetic Levenshtein alignment is performed. The phoneme alignment information is used to correct the word alignment labels in each error region. We demonstrate that our Phonetically-Oriented Word Error Rate (POWER) yields similar scores to WER with the added advantages of better word alignments and the ability to capture one-to-many alignments corresponding to homophonic errors in speech recognition hypotheses. 
These improved alignments allow us to better trace the impact of Levenshtein error types in speech recognition on downstream tasks such as speech translation.'\naddress: |\n Fondazione Bruno Kessler\\\n Trento, Italy\nbibliography:\n- 'paper.bib'\ntitle: |\n Phonetically-Oriented Word Error Alignment for\\\n Speech Recognition Error Analysis in Speech Translation\n---\n\nautomatic speech recognition, speech translation, mixed-effects models, error analysis\n\nIntroduction {#sec:intro}\n============\n\nSpoken language translation (SLT) systems are composed of, at minimum, two components: an automatic speech recognition (ASR) system which provides audio" -"---\nabstract: 'Universal grasping of a diverse range of previously unseen objects from heaps is a grand challenge in e-commerce order fulfillment, manufacturing, and home service robotics. Recently, deep learning based grasping approaches have demonstrated results that make them increasingly interesting for industrial deployments. This paper explores the problem from an automation systems point-of-view. We develop a robotics grasping system using Dex-Net, which is fully integrated at the controller level. Two neural networks are deployed on a novel industrial AI hardware acceleration module close to a PLC with a power footprint of less than 10 W for the overall system. The software is tightly integrated with the hardware allowing for fast and efficient data processing and real-time communication. The success rate of grasping an object form a bin is up to 95% with more than 350 picks per hour, if object and receptive bins are in close proximity. 
The system was presented at the Hannover Fair 2019 (world\u2019s largest industrial trade fair) and other events, where it performed over 5,000 grasps per event.'\nauthor:\n- |\n Eugen Solowjow$^{1}$, Ines Ugalde$^{1}$, Yash Shahapurkar$^{1}$, Juan Aparicio$^{1}$,\\\n Jeff Mahler$^{2,3}$, Vishal Satish$^{2}$, Ken Goldberg$^{2}$, Heiko Claussen$^{1}$[^1][^2][^3]\ntitle: |\n **Industrial Robot Grasping with Deep Learning\\" -"---\nabstract: |\n Mapping resolution has recently been identified as a key limitation in successfully locating the drivers of atrial fibrillation. Using a simple cellular automata model of atrial fibrillation, we demonstrate a method by which re-entrant drivers can be located quickly and accurately using a collection of indirect electrogram measurements. The method proposed employs simple, out of the box machine learning algorithms to correlate characteristic electrogram gradients with the displacement of an electrogram recording from a re-entrant driver. Such a method is less sensitive to local fluctuations in electrical activity. As a result, the method successfully locates 95.4% of drivers in tissues containing a single driver, and 94.8% (92.5%) for the first (second) driver in tissues containing two drivers of atrial fibrillation. Additionally, we demonstrate how the technique can be applied to tissues with an arbitrary number of drivers. Extending the technique for use in clinical practice could alleviate the limitations in current ablation techniques that arise from limited mapping resolution.\n\n **Keywords:** atrial fibrillation, arrhythmia, cellular automata, targeted ablation, machine learning, electrograms\nauthor:\n- 'Max Falkenberg McGillivray$^{1,2,\\dagger, *}$'\n- 'William Cheng$^{1,2,\\dagger}$'\n- 'Nicholas S. 
Peters$^{3}$'\n- 'Kim Christensen$^{1,2,3}$'\nbibliography:\n- 'ref.bib'\ntitle: 'Machine learning methods for locating re-entrant drivers" -"---\nabstract: 'Eigenmode analysis is one of the most promising methods of analyzing large data sets in ongoing and near-future galaxy surveys. In such analyses, a fast evaluation of the correlation matrix in arbitrary cosmological models is crucial. The observational effects, including peculiar velocity distortions in redshift space, light-cone effects, selection effects, and effects of the complex shape of the survey geometry, should be taken into account in the analysis. In the framework of the linear theory of gravitational instability, we provide the methodology to quickly compute the correlation matrix. Our methods are not restricted to shallow redshift surveys; arbitrarily deep samples can be dealt with as well. Therefore, our methods are useful in constraining the geometry of the universe and the dark energy component, as well as the power spectrum of galaxies, since ongoing and near-future galaxy surveys probe the universe at intermediate to deep redshifts, $z \\sim$ 0.2\u20135. In addition to the detailed methods to compute the correlation matrix in 3-dimensional redshift surveys, methods to calculate the matrix in 2-dimensional projected samples are also provided. Prospects of applying our methods to likelihood estimation of the cosmological parameters are discussed.'\nauthor:\n- Takahiko Matsubara\n- 'Alexander S. Szalay, Adrian" -"---\nabstract: 'For the first time, we construct a catalog of compact groups selected from a complete, magnitude-limited redshift survey. We select groups with $N \\geq 3$ members based on projected separation and association in redshift space alone. We evaluate the characteristics of the Redshift Survey Compact Groups (RSCG\u2019s). 
Their physical properties (membership frequency, velocity dispersion, density) are similar to those of the Hickson \\[ApJ, 255, 382 (1982)\\] Compact Groups. Hickson\u2019s isolation criterion is a strong function of the physical and angular group radii and is a poor predictor of the group environment. In fact, most RSCG\u2019s are embedded in dense environments. The luminosity function for RSCG\u2019s is mildly inconsistent with the survey luminosity function \u2014 the characteristic luminosity is brighter and the faint end shallower for the RSCG galaxies. We construct a model of the selection function of compact groups. Using this selection function, we estimate the abundance of RSCG\u2019s; for groups with $N \\geq 4$ members the abundance is $3.8 \\times 10^{-5}\\ {h}^3\\ {\\rm Mpc}^{-3}$. For all RSCG\u2019s ($N \\geq 3$) the abundance is $1.4 \\times 10^{-4}\\ {h}^3\\ {\\rm Mpc}^{-3}$.'\nauthor:\n- 'Elizabeth Barton and Margaret J. Geller'\n- Massimo Ramella\n- 'Ronald O. Marzke'\n- 'L. Nicolaci" -"---\nabstract: 'Deep learning models have recently shown to be vulnerable to backdoor poisoning, an insidious attack where the victim model predicts clean images correctly but classifies the same images as the target class when a trigger poison pattern is added. This poison pattern can be embedded in the training dataset by the adversary. Existing defenses are effective under certain conditions such as a small size of the poison pattern, knowledge about the ratio of poisoned training samples or when a validated clean dataset is available. Since a defender may not have such prior knowledge or resources, we propose a defense against backdoor poisoning that is effective even when those prerequisites are not met. It is made up of several parts: one to extract a backdoor poison signal, detect poison target and base classes, and filter out poisoned from clean samples with proven guarantees. 
The final part of our defense involves retraining the poisoned model on a dataset augmented with the extracted poison signal and corrective relabeling of poisoned samples to neutralize the backdoor. Our approach has been shown to be effective in defending against backdoor attacks that use both small and large-sized poison patterns on nine different target-base class pairs" -"---\nauthor:\n- 'P. E. Bett'\n- 'H. E. Thornton'\n- 'R. T. Clark'\nbibliography:\n- 'philipbett\\_ems2012\\_article.bib'\ntitle: European wind variability over 140 yr\n---\n\nKnowing the form of the wind speed distribution is of critical importance when assessing the wind energy potential at a site. Typically, when wind farm developers or investors consider a site, they assess it using (at best) the past 20\u201330 yr, with data from direct observations, NWP models, and reanalyses. These recent decades reflect our personal experience of wind speeds, but they do not show the longer-term historical context. Understanding whether the most recent decades were more or less windy than normal, or if there are any significant long-term trends, is key to understanding the range of possible future wind speeds we might experience over the coming $\\sim\n5$ yr, or over the lifetime of a wind farm ($\\sim 25$ yr). This information is important not just for managing wind farms, but also for planning investment in future wind energy projects.\n\nIn this study, we show wind speed distributions for Europe over 140 yr (1871\u20132010), utilising the *Twentieth Century Reanalysis* data set [20CR, @Compo2011]. This reanalysis incorporates observations of sea-level pressure and surface pressure alone, with" -"---\nabstract: 'We discuss and test possible evolutionary connections between Blue Compact Dwarf galaxies (BCDs) and other types of dwarf galaxies. 
BCDs provide ideal laboratories to study intense star formation episodes in low mass dwarf galaxies, and have sometimes been considered a short-lived evolutionary stage between types of dwarf galaxies. To test these connections, we consider a sample of BCDs as well as a comparison sample of nearby galaxies from the Local Volume Legacy (LVL) survey for context. We fit the multi-wavelength spectral energy distributions (SED, far-ultra-violet to far-infrared) of each galaxy with a grid of theoretical models to determine their stellar masses and star formation properties. We compare our results for BCDs with the LVL galaxies to put BCDs in the context of normal galaxy evolution. The SED fits demonstrate that the star formation events currently underway in BCDs are at the extreme of the continuum of normal dwarf galaxies, both in terms of the relative mass involved and in the relative increase over previous star formation rates. Today\u2019s BCDs are distinctive objects in a state of extreme star formation which is rapidly transforming them. This study also suggests ways to identify former BCDs whose star formation episodes have" -"---\nabstract: 'FNCMa is visually double with a separation of $\\sim$0.6arcsec. Sixty high-cadence VLT/[*UVES*]{} spectra permit the A and B components to be disentangled, as the relative contribution of each star to the total light entering the spectrograph fluctuates between exposures due to changes in seeing. Component A exhibits rapid line-profile variations, leading us to attribute the photometric variability seen by HIPPARCOS (with a derived $P=0.08866$d) to this component. From a total of 122 archival and new echelle spectra it is shown that component A is an SB1 binary with an orbital period of 117.55 days. The eccentricity of 0.6 may result in tidal modulation of the pulsation(s) of component Aa.'\n---\n\nIntroducing FNCMa\n=================\n\nFNCMa (HD53974) is a bright ($V=5.4$mag) B0.5III star and visually double. 
Within about a century, the relative position of components A and B, which are separated by $\\sim$0.6 arcsec, has changed marginally at most. A is brighter than B by about 1.2 mag.\n\nObservations and data reduction\n===============================\n\nThe ESO Science Archive contains 60 VLT/[*UVES*]{} echelle spectra of FNCMa obtained within 1.4 hours for a study of the interstellar medium, and three more spectra from [*FEROS*]{} at the 2.2-m ESO/MPG telescope, La Silla. In 2009 and" -"---\nabstract: |\n This article proposes an efficient Bayesian inference approach for piecewise exponential hazard (PEH) models, which allow the effect of a covariate on the survival time to vary over time. The proposed inference methodology is based on a particle smoothing (PS) algorithm that depends on three particle filters. Efficient proposal (importance) distributions for the particle filters tailored to the nature of survival data and PEH models are developed using the Laplace approximation of the posterior distribution and linear Bayes theory. 
The algorithm is applied to both simulated and real data, and the results show that it generates an effective sample size that is more than two orders of magnitude larger than that of a state-of-the-art MCMC sampler for the same computing time, and scales well in high-dimensional and relatively large data.\n\n **Key words**: Hazard function, Linear Bayes, particle filter, particle smoothing, piecewise exponential, Survival function.\nauthor:\n- Parfait Munezero\nbibliography:\n- 'Bibliography.bib'\ntitle: Efficient Particle Smoothing for Bayesian Inference in Dynamic Survival Models\n---\n\n\\\n[Parfait.Munezero@stat.su.se]{}\n\nIntroduction\n============\n\nThe standard model for analysing survival data is the proportional hazards model which specifies the hazard function as a product of a baseline hazard (an unknown function of time, $t$) and a relative" -"---\nabstract: 'Time resolved spectroscopy of the Intermediate Polar FOAqr reveals the presence of multiple periodicities in the UV range. A strong orbital modulation dominates both continuum and emission line flux variabilities, while line velocity motions are only detected at the rotational frequency. A prominent orbital periodicity is also observed in coordinated optical photometry, where FOAqr was previously found to be spin dominated. The spectral dependence of the main periodicities shows the presence of multi-temperature components in FOAqr and for the first time a hot and a cool component in the rotational modulation. From a comparison with previous UV and optical data obtained in 1990, no spectral variations in the orbital and rotational variabilities are detected, indicating no significant changes in the effects of X-ray illumination but rather a shrinking of the accretion curtain accompanied by an increase in size of the thickened part of the accretion disc. 
These observations, consistent with the recently discovered long term trend in the X-ray pulsation amplitudes, independently confirm a change in the accretion mode in FOAqr, which switched from a disc-fed into a disc-overflow state, likely triggered by mass accretion variations.'\nauthor:\n- 'D. de Martino, R. Silvotti, D.A.H Buckley, B.T. G\u00e4nsicke M." -"---\nabstract: |\n We review some recent progress in studying the nuclear physics especially nucleon-nucleon (NN) force within the gauge-gravity duality, in context of noncritical string theory. Our main focus is on the holographic QCD model based on the $AdS_6$ background. We explain the noncritical holography model and obtain the vector-meson spectrum and pion decay constant. Also, we study the NN interaction in this frame and calculate the nucleon-meson coupling constants. A further topic covered is a toy model for calculating the light nuclei potential. In particular, we calculate the light nuclei binding energies and also excited energies of some available excited states. We compare our results with the results of other nuclear models and also with the experimental data. Moreover, we describe some other issues which are studied using the gauge-gravity duality.\n\n [ **Key words:** 11.25.-w Strings and branes ;11.25.Pm Noncritical string theory; 11.25.Tq Gauge/string duality ;21.10.Dr Binding energies and masses ; 21.45.-v Few-body systems ]{}\nauthor:\n- |\n M. R. Pahlavani[^1]\\\n \\\n \\\n R. Morad[^2]\\\ntitle: 'Application of AdS/CFT in Nuclear Physics'\n---\n\nIntroduction\n============\n\nOne of the fundamental ingredients of nuclear physics is the nuclear force with which point-like nucleons interact with each other. Since Yukawa, many" -"---\nabstract: 'We propose the PeerRank method for peer assessment. This constructs a grade for an agent based on the grades proposed by the agents evaluating the agent. 
Since the grade of an agent is a measure of their ability to grade correctly, the PeerRank method weights grades by the grades of the grading agent. The PeerRank method also provides an incentive for agents to grade correctly. As the grades of an agent depend on the grades of the grading agents, and as these grades themselves depend on the grades of other agents, we define the PeerRank method by a fixed point equation similar to the PageRank method for ranking web-pages. We identify some formal properties of the PeerRank method (for example, it satisfies axioms of unanimity, no dummy, no discrimination and symmetry), discuss some examples, compare with related work and evaluate the performance on some synthetic data. Our results show considerable promise, reducing the error in grade predictions by a factor of 2 or more in many cases over the natural baseline of averaging peer grades.'\nauthor:\n- Toby Walsh\nbibliography:\n- '/Users/twalsh/Documents/biblio/a-z.bib'\n- '/Users/twalsh/Documents/biblio/a-z2.bib'\n- '/Users/twalsh/Documents/biblio/pub.bib'\n- '/Users/twalsh/Documents/biblio/pub2.bib'\ntitle: 'The [PeerRank]{} Method for Peer Assessment'\n---\n\nINTRODUCTION\n============" -"---\nabstract: 'The optical properties of the $z = 0.435$ quasar PKS 1222+216 (4C+21.35) are summarized since the discovery of impressive $\\gamma\\/$-ray activity in this source by [*Fermi*]{}/LAT. Unlike several other $\\gamma\\/$-ray-bright blazars, there appears to be little connection between optical and $\\gamma\\/$-ray activity. Spectropolarimetry shows this object to be a composite system with optical emission from both a polarized, variable synchrotron power-law and unpolarized light from a stable blue continuum source (+broad emission-line region) contributing to the observed spectrum. Spectrophotometry over a period of about two years does not detect significant variability in the strong, broad emission lines, despite large optical continuum variations. 
This suggests that the relativistic jet has little influence on the output of the broad emission-line region, possibly either because the highly beamed continuum ionizes only a small portion of the line-emitting gas, or the observed non-thermal continuum originates parsecs downstream from the base of the jet, further away from the central engine than the broad emission-line region.'\nauthor:\n- 'P. S. Smith'\n- 'G. D. Schmidt'\n- 'B. T. Jannuzi'\ntitle: The Optical Properties of PKS 1222+216 During the Fermi Mission\n---\n\nINTRODUCTION AND OBSERVATIONS\n=============================\n\nSince the announcement on 2009 April 17\u00a0[@ref1] that" -"---\nabstract: 'With the wide adoption of mobile devices, today\u2019s location tracking systems such as satellites, cellular base stations and wireless access points are continuously producing tremendous amounts of location data of moving objects. The ability to discover moving objects that travel together, i.e., traveling companions, from their trajectories is desired by many applications such as intelligent transportation systems and location-based services. Existing algorithms are either based on pattern mining methods that define a particular pattern of traveling companions or based on representation learning methods that learn similar representations for similar trajectories. The former methods suffer from the pairwise point-matching problem and the latter often ignore the temporal proximity between trajectories. In this work, we propose a generic deep representation learning model using autoencoders, namely, ATTN-MEAN, for the discovery of traveling companions. ATTN-MEAN collectively injects spatial and temporal information into its input embeddings using skip-gram, positional encoding techniques, respectively. Besides, our model further encourages trajectories to learn from their neighbours by leveraging the Sort-Tile-Recursive algorithm, mean operation and global attention mechanism. 
After obtaining the representations from the encoders, we run DBSCAN to cluster the representations to find traveling companions. The corresponding trajectories in the same cluster are considered as traveling" -"---\nabstract: 'Signal models based on sparse representation have received considerable attention in recent years. Compared to synthesis dictionary learning, sparsifying transform learning involves highly efficient sparse coding and operator update steps. In this work, we propose a Multi-layer Residual Sparsifying Transform (MRST) learning model wherein the transform domain residuals are jointly sparsified over layers. In particular, the transforms for the deeper layers exploit the more intricate properties of the residual maps. We investigate the application of the learned MRST model for low-dose CT reconstruction using Penalized Weighted Least Squares (PWLS) optimization. Experimental results on Mayo Clinic data show that the MRST model outperforms conventional methods such as FBP and PWLS methods based on an edge-preserving (EP) regularizer and a single-layer transform (ST) model, especially for maintaining some subtle details.'\nauthor:\n- 'Xikai Yang, Xuehang Zheng, Yong Long$^\\star$, Saiprasad Ravishankar, [^1] [^2] [^3]'\nbibliography:\n- 'refs.bib'\ntitle: 'Learned Multi-layer Residual Sparsifying Transform Model for Low-dose CT Reconstruction'\n---\n\nLow-dose CT, Statistical image reconstruction, Sparse representation, Transform learning, Unsupervised learning.\n\nIntroduction\n============\n\nSignal models exploiting sparsity have been shown to be useful in a variety of applications such as compression, restoration, denoising, reconstruction, etc. Natural signals can be modeled as sparse in a" -"---\nabstract: 'With the abundance of machine learning methods available and the temptation of using them all in an ensemble method, having a model-agnostic method of feature selection is incredibly alluring. 
Principal component analysis was developed in 1901 and has been a strong contender in this role since, but in the end is an unsupervised method. It offers no guarantee that the features that are selected have good predictive power because it does not know what is being predicted. To this end, Peng et al. developed the minimum redundancy-maximum relevance (mRMR) method in 2005. It uses the mutual information not only between predictors but also includes the mutual information with the response in its calculation. Estimating mutual information and entropy tends to be an expensive and problematic endeavor, which leads to excessive processing times even for a dataset that is approximately 750 by 750 in a Leave-One-Subject-Out jackknife situation. To remedy this, we use a method from 2012 called Distance Correlation Sure Independence Screening (DC-SIS) which uses the distance correlation measure of Sz\u00e9kely et al. to select features that have the greatest dependence with the response. We show that this method produces results statistically indistinguishable from those of the mRMR selection method on Parkinson\u2019s
This implied low dense gas mass fraction explains the low star formation rates relative to the CO-traced molecular gas and suggests the state of the gas in post-starburst galaxies is unusual, with some mechanism inhibiting its collapse to denser states. We conclude that post-starburst galaxies are now quiescent because little dense gas is available, in contrast to the significant CO-traced lower density gas reservoirs that still remain.'\nauthor:\n- 'K. Decker French$^\dagger$'\n- 'Ann I. Zabludoff'" -"---\nauthor:\n- 'Arash Azari [^1]'\n- 'Kristian K. M\u00fcller-Nedebock [^2]'\ntitle: Entropic competition in polymeric systems under geometrical confinement\n---\n\nIntroduction \n=============\n\nSpatial organization is one of the key features in nature and the evolution of species where, in general, the balance between size, shape, efficiency, environmental parameters, and energy consumption does matter [@harold2005molecules]. In other words, one expects and does see many examples of spatial organization, crowding, and geometrical confinement everywhere, especially inside the living cell where we have a large number of components within a very compact space; examples include the DNA compaction and multiple chromosomes organization inside the nucleus of the cell where very long biopolymers are organized inside a very small space [@zhou2008; @cremer2001; @Lanctot; @Bickmore], assembly or disassembly of proteins like polypeptide chain in chaperone [@hartl2002; @MitBest2008], besides some theoretical examples of polymer confinement [@Kindt20112001; @Jun15082006; @Cook21092009; @TakaElie2006; @Taka; @Nano2006; @SuAxel2007; @MorThir2009; @smyhar; @Gao; @halverson]. 
In addition to the theoretical and biological implications, these ideas are relevant to nanodevices and their fabrication [@wu2004composite; @claessens; @Xu2014; @Reisner].\n\n![(color online) Schematic representation of the segmented chains and the confining geometry: (a) stiff segmented polymer consists of linear close-packing of united monomers $n_{m}$, which creates the segment," -"---\nabstract: 'We investigate the behavior of dissipative particle dynamics (DPD) with time-correlated random noise. A new stochastic force for DPD is proposed which consists of a random force whose noise has an algebraic correlation proportional to $1/t$ and is generated by the so-called Kangaroo process. We stress the benefits of time-correlated noise in stochastic systems. We show that the system exhibits significantly different properties from classical DPD, driven by Wiener noise. While the probability distribution function of the velocity is Gaussian, the acceleration develops a bi-modal character. Although the fluctuation dissipation theorem may not strictly hold, we demonstrate that the system reaches equilibrium states with fluctuation-dissipation balance. We believe that our explorative research on the DPD model may stimulate the application of modified DPD to unconventional problems beyond molecular modeling.'\nauthor:\n- 'Morgane Borreguero $^{(a)}$'\n- 'Marco Ellero $^{(b)}$'\n- 'N.A. Adams $^{(a)}$'\nbibliography:\n- 'Kangaroo2.bib'\ntitle: Investigation of dissipative particle dynamics with colored noise\n---\n\nIntroduction\n============\n\nAn important aspect in the choice of a numerical model of a physical problem is the length and time-scale defining the quantities of interest. At macroscopic scales continuum-based methods are appropriate, while Molecular Dynamics (MD) models can capture
We are working with a two qubit system and choose the generators to be projection operators on the eigenstates of the system and unitary bilocal rotations of them. The resulting decoherence modes are studied in detail. Besides the general solutions we investigate the special case of maximally entangled states \u2013 the Bell singlet states. The results are depicted in the so-called spin geometry picture which allows to illustrate the evolution of the (nonlocal) correlations stored in a certain state. The question for which conditions the path traced out in the geometric picture depends only on the relative angle between the bilocal rotations is addressed.'\nauthor:\n- Katharina Durstberger\nbibliography:\n- '/Users/kadu/Library/texmf/tex/latex/bibliography/bibliography-kadu.bib'\ntitle: Spin geometry of entangled qubits under bilocal decoherence modes\n---\n\nIntroduction\n============\n\nThe theory of open quantum systems plays a major role in many applications of quantum physics since perfect isolation of a quantum system is never possible. Because the environmental degrees of freedom are not accessible the dynamics of open quantum systems are described by effective dynamics: the quantum master equation [@BreuerPetruccione]. The notion of decoherence is" -"---\nabstract: 'We investigate the problem of facial expression recognition using 3D data. Building from one of the most successful frameworks for facial analysis using exclusively 3D geometry, we extend the analysis from a curve-based representation into a spectral representation, which allows a complete description of the underlying surface that can be further tuned to the desired level of detail. Spectral representations are based on the decomposition of the geometry in its spatial frequency components, much like a Fourier transform, which are related to intrinsic characteristics of the surface. 
In this work, we propose the use of Graph Laplacian Features (GLF), which results from the projection of local surface patches into a common basis obtained from the Graph Laplacian eigenspace. We test the proposed approach on the BU-3DFE database in terms of expression and Action Unit recognition. Our results confirm that the proposed GLF produces consistently higher recognition rates than the curves-based approach, thanks to a more complete description of the surface, while requiring a lower computational complexity. We also show that the GLF outperforms the most popular alternative approach for spectral representation, Shape-DNA, which is based on the Laplace Beltrami Operator and cannot provide a stable basis that guarantee" -"---\nabstract: 'Risk is part of the fabric of every business; surprisingly, there is little work on establishing best practices for systematic, repeatable risk identification, arguably the first step of any risk management process. In this paper, as part of a more holistic risk management approach, we propose a methodology for computer-supported risk identification that may lead to more consistent (objective, repeatable) risk analysis.'\nauthor:\n- |\n Jochen L.\u00a0Leidner, Ph.D.\\\n Director of Research\\\n Thomson Reuters\nbibliography:\n- 'risk-mining.bib'\ndate: '2015-10-27'\ntitle: 'Computer-Supported Risk Identification for the Holistic Management of Risks'\n---\n\nIntroduction\n============\n\nPursuing any kind of business activity is inseparably interwoven with being exposed to different kinds of risk [@Beck:1992; @Adams:1995; @Bernstein:1998; @Taleb:2007; @Gigerenzer:2013]: Is the customer I am dealing with liquid and honest, i.e.\u00a0can I rely on being paid? Are my vendors delivering my supplies punctually, and to the quality I need? Am I in compliance with all applicable laws and regulations (commercial law, health & safety, financial reporting, tax, human resources etc.)? 
Are my products and services still relevant, or is demand shrinking or are markets disrupted by new inventions or commoditization of technologies? Are my competitors outperforming my product or undercutting" -"---\nabstract: '[The exploration of the notion of observability exhibits transparently the rich interplay between algebraic and geometric ideas in *geometric invariant theory*. The concept of *observable subgroup* was introduced in the early 1960s with the purpose of studying extensions of representations from an affine algebraic subgroup to the whole group. The extent of its importance in *representation and invariant theory* in particular for Hilbert\u2019s $14^{\\text{th}}$ problem was noticed almost immediately. An important strengthening appeared in the mid 1970s when the concept of *strong observability* was introduced and it was shown that the notion of observability can be understood as an intermediate step in the notion of reductivity (or semisimplicity), when adequately generalized. More recently, starting in 2010, the concept of observable subgroup was expanded to include the concept of *observable action* of an affine algebraic group on an affine variety, launching a series of new applications. In 2006 the related concept of *observable adjunction* was introduced, and its application to module categories over tensor categories was noticed. In the current survey, we follow (approximately) the historical development of the subject, introducing along the way the definitions and some of the main results, including some of the proofs. For the" -"---\nabstract: 'A large number of natural language processing tasks exist to analyze syntax, semantics, and information content of human language. These seemingly very different tasks are usually solved by specially designed architectures. 
In this paper, we provide the simple insight that a great variety of tasks can be represented in a single unified format consisting of labeling spans and relations between spans, so that a single task-independent model can be used across different tasks. We perform extensive experiments to test this insight on 10 disparate tasks as broad as dependency parsing (syntax), semantic role labeling (semantics), relation extraction (information content), aspect-based sentiment analysis (sentiment), and many others, achieving performance comparable to that of state-of-the-art specialized models. We further demonstrate benefits in multi-task learning. We convert these datasets into a unified format to build a benchmark, which provides a holistic testbed for evaluating future models for generalized natural language analysis.'\nauthor:\n- |\n Zhengbao Jiang\\\n Language Technologies Institute\\\n Carnegie Mellon University\\\n `zhengbaj@cs.cmu.edu`\\\n Wei Xu\\\n Department of Computer Science and Engineering\\\n Ohio State University\\\n `xu.1265@osu.edu`\\\n Jun Araki\\\n Bosch Research North America\\\n `jun.araki@us.bosch.com`\\\n Graham Neubig\\\n Language Technologies Institute\\\n Carnegie Mellon University\\\n `gneubig@cs.cmu.edu`\\\nbibliography:\n- 'iclr2020\\_conference.bib'\ntitle: |\n Generalizing Natural Language Analysis\\\n through Span-relation Representations" -"---\nabstract: 'Solar flares are a 3D phenomenon but modelling a flare in 3D, including many of the important processes in the chromosphere, is a computational challenge. Accurately modelling the chromosphere is important, even if the transition region and corona are the areas of interest, due to the flow of energy, mass, and radiation through the interconnected layers. We present a solar flare arcade model that aims to bridge the gap between 1D and 3D modelling. Our approach is limited to the synthesis of optically thin emission. 
Using observed active region loop structures in a 3D domain we graft simulated 1D flare atmospheres onto each loop, synthesise the emission and then project that emission onto the 2D observational plane. Emission from SDO/AIA, GOES/XRS, and IRIS/SG Fe xxi 1354.1\u00c5\u00a0was forward modelled. We analyse the temperatures, durations, mass flows, and line widths associated with the flare, finding qualitative agreement but certain quantitative differences. Compared to observations, the Doppler shifts are of similar magnitude but decay too quickly. They are not as ordered, containing a larger amount of scatter compared to observations. The duration of gradual phase emission from GOES and AIA is also too short. Fe xxi lines" -"---\nabstract: 'In this paper, Sphere Decoding (SD) algorithms for Spatial Modulation (SM) are developed to reduce the computational complexity of Maximum\u2013Likelihood (ML) detectors. Two SDs specifically designed for SM are proposed and analysed in terms of Bit Error Ratio (BER) and computational complexity. Using Monte Carlo simulations and mathematical analysis, it is shown that by carefully choosing the initial radius the proposed sphere decoder algorithms offer the same BER as ML detection, with a significant reduction in the computational complexity. A tight closed form expression for the BER performance of SM\u2013SD is derived in the paper, along with an algorithm for choosing the initial radius which provides near to optimum performance. Also, it is shown that none of the proposed SDs are always superior to the others, but the best SD to use depends on the target spectral efficiency. The computational complexity trade\u2013off offered by the proposed solutions is studied via analysis and simulation, and is shown to validate our findings. Finally, the performance of SM\u2013SDs is compared to Spatial Multiplexing (SMX) applying ML decoder and applying SD. 
It is shown that for the same spectral efficiency, SM\u2013SD offers up to\u00a0$84\\%$ reduction in complexity compared to SMX\u2013SD, with" -"---\nabstract: 'What would a cellular network designed for maximal energy efficiency look like? To answer this fundamental question, tools from stochastic geometry are used in this paper to model future cellular networks and obtain a new lower bound on the average uplink spectral efficiency. This enables us to formulate a tractable uplink energy efficiency (EE) maximization problem and solve it analytically with respect to the density of base stations (BSs), the transmit power levels, the number of BS antennas and users per cell, and the pilot reuse factor. The closed-form expressions obtained from this general EE maximization framework provide valuable insights on the interplay between the optimization variables, hardware characteristics, and propagation environment. Small cells are proved to give high EE, but the EE improvement saturates quickly with the BS density. Interestingly, the maximal EE is achieved by also equipping the BSs with multiple antennas and operating in a \u201cmassive MIMO\u201d fashion, where the array gain from coherent detection mitigates interference and the multiplexing of many users reduces the energy cost per user.'\nauthor:\n- |\n Emil Bj[\u00f6]{}rnson, *Member, IEEE*, Luca Sanguinetti, *Senior Member, IEEE*,\\\n and Marios Kountouris, *Senior Member, IEEE* [^1]\nbibliography:\n- 'IEEEabrv.bib'\n- 'refs.bib'\ntitle: 'Deploying
The general criterion for the localized graviton to be massless is derived when $\\xi$ is arbitrary but non-zero. When $\\xi=0$, the massless graviton is obtained via a coupling constant renormalization. For the two-brane picture the fixed-energy amplitude is in general dependent on the two free parameters. The numerical test indicates that there is no massless graviton in this picture. For the positive-tension brane, however, the localized graviton becomes massless when the distance between the branes is infinitely large, which is essentially identical to the single brane picture. For the negative-tension brane there is no massless graviton regardless of the distance between the branes and the choice of boundary conditions.'\naddress: 'Department of Physics, Kyungnam University, Masan, 631-701, Korea.'\nauthor:\n- 'D. K. Park and Hungsoo Kim'\ntitle: 'Singular Quantum Mechanical Viewpoint of Localized Gravity in Brane-World Scenario'\n---\n\nIntroduction\n============\n\nThe first Randall-Sundrum (RS1) brane-world scenario[@rs99-1] was designed to solve the gauge" -"---\nabstract: |\n In the last decades there has been increasing interest in improving the accuracy of spacecraft navigation and trajectory data. In the course of this effort some anomalies have been found that cannot, in principle, be explained in the context of the most accurate orbital models including all known effects from classical dynamics and general relativity. Of particular interest for its puzzling nature, and the lack of any accepted explanation for the moment, is the flyby anomaly discovered in some spacecraft flybys of the Earth over the course of twenty years. 
This anomaly manifests itself as the impossibility of matching the pre- and post-encounter Doppler tracking and ranging data within a single orbit; on the contrary, a difference of a few mm$/$s in the asymptotic velocities is required to perform the fitting.\n\n Nevertheless, no dedicated missions have been carried out to elucidate the origin of this phenomenon with the objective either of revising our understanding of gravity or of improving the accuracy of spacecraft Doppler tracking by revealing a conventional origin.\n\n On the occasion of the Juno mission's arrival at Jupiter and the close flybys of this planet, which are currently being performed, we have developed" -"---\nabstract: 'A new experimental technique to control crystal growth in confinement and the pressure of a solid wall on the growing crystal has been developed and applied to calcite. At low contact pressure a cavity forms and the growth rim undergoes a transition from smooth to stepwise dynamics, causing fast, wobbling growth at the confined surface. When the contact pressure is increased to 10\u00a0kPa the wobbling growth stops, the growth rim becomes smooth again, and the growth of the confined surface relaxes to a rate below the detection limit of the measurements. 
A new, complete theoretical description of the process is presented.'\nauthor:\n- 'Lei Li, Felix Kohler, Anja R[\u00f8]{}yne, Dag Kristian Dysthe'\nbibliography:\n- 'limits.bib'\ntitle: Limits to crystallization pressure\n---\n\nIntroduction\n============\n\nCrystallization pressure has been studied for over 150 years, since Jean Lavalle first observed, in 1853, a growing crystal exerting a pressure along its growth direction.\u00a0[@Jean1853] Field observations have also indicated that growing crystals are able to exert a pressure which can break mineral rocks and building stones.\u00a0[@Watts1978] The crystallization force is important in weathering engineering\u00a0[@Scherer2004a; @Rodriguez-Navarro1999], cement formation\u00a0[@Flatt2007] and geophysics\u00a0[@Maliva1988; @Fletcher2001]. But the process of crystal growth under confinement" -"---\nabstract: |\n A wireless sensor network (WSN) consists of multiple wireless sensor nodes that communicate with each other to fulfill a particular task. In this paper, we focus on networks whose deployments admit lower dimensional substructures, such as collinear groups in 2D, or coplanar groups in 3D. When these groups are given as a part of the input, we describe an algorithm that utilizes this information to perform low-cost localization.\n\n In emergency situations such as fire or earthquake inside a building, wireless sensor networks can be crucial for providing critical information and helping rescue teams move quickly by decreasing their burden of exploring the environment. Thus, it is very important to develop a system that provides information quickly and without consuming too much energy. We observe that in these types of environments, sensor nodes tend to form *hyperplanar groups*. A hyperplane is a subspace of one dimension less than its ambient space, and accordingly, a *hyperplanar group* of sensor nodes is a group of nodes that sit on the same hyperplane. 
When we consider a floor of a building, the nodes can be deployed on the corridors to form collinear groups, and when we" -"---\nabstract: 'This paper presents a new method for automatically generating numerical invariants for imperative programs. Given a program, our procedure computes a binary input/output relation on program states which over-approximates the behaviour of the program. It is compositional in the sense that it operates by decomposing the program into parts, computing an abstract meaning of each part, and then composing the meanings. Our method for approximating loop behaviour is based on first approximating the meaning of the loop body, extracting recurrence relations from that approximation, and then using the closed forms to approximate the loop. Our experiments demonstrate that on verification tasks, our method is competitive with leading invariant generation and verification tools.'\nauthor:\n- Azadeh Farzan and Zachary Kincaid\ntitle: Compositional Invariant Generation via Linear Recurrence Analysis\n---\n\nIntroduction {#sec:intro}\n============\n\nCompositional program analyses operate by decomposing a program into parts, computing an abstract meaning of each part, and then composing the meanings. Compositional analyses have a number of desirable properties, including scalability, parallelizability, and applicability to incomplete programs. However, compositionality comes with a price: since each program fragment is analyzed independently of its context, the analysis cannot benefit from contextual information. This paper presents a compositional method" -"---\nabstract: 'We develop a theory describing the density profile of semi-flexible polymers adsorbed onto a planar surface. The theoretical analysis consists of two parts. In the first part, we calculate the density profile of the adsorbed polymers by developing an extension of the Bethe-Peierls approximation to the case of nonhomogeneous systems. 
This approach relies on the combination of the single chain adsorption theory and the lattice version of the self-consistent field theory. Semi-flexibility of a chain is described by incorporating a finite coordination number of the lattice into consideration, in the spirit of the earlier Silberberg approach. The developed lattice theory incorporates the interaction between nearest-neighbor pairs of segments and finite chain length. The theory maps completely onto the Scheutjens-Fleer theory in the limit of infinite coordination number. In the second part of the developed approach, we calculate the configurational entropy to investigate how the density structure of the semi-flexible polymers near the surface relates to the possible reduction of the glass transition temperature near a nonadsorbing surface and its enhancement near a strongly attractive surface.'\nauthor:\n- 'F. Semeriyanov, A.I. Chervanyov, G. Heinrich'\ntitle: 'Theoretical estimation of density profile of semiflexible polymers adsorbed on a surface and thermodynamic glass transition" -"---\nabstract: 'Investigating the spin parameter distribution of sub[haloes]{}\u00a0in two high resolution isolated halo simulations, recent work by Onions et al. suggested that typical subhalo spins are consistently lower than the spin distribution found for field [haloes]{}. To further examine this puzzle, we have analyzed simulations of a cosmological volume with sufficient resolution to resolve a significant subhalo population. We confirm the result of Onions et al. and show that the typical spin of a subhalo decreases with decreasing mass and increasing proximity to the host halo center. We interpret this as the growing influence of tidal stripping in removing the outer layers, and hence the higher angular momentum particles, of the sub[haloes]{}\u00a0as they move within the host potential. Investigating the redshift dependence of this effect, we find that the typical subhalo spin decreases with decreasing redshift. 
This indicates a temporal evolution as expected in the tidal stripping scenario.'\nauthor:\n- 'Yang Wang, Weipeng Lin, Frazer R. Pearce, Hanni Lux, Stuart I. Muldrew & Julian Onions'\nbibliography:\n- 'mn-jour.bib'\n- 'Haloes.bib'\ntitle: '[Solving the puzzle of subhalo spins]{}'\n---\n\nIntroduction {#sec:introduction}\n============\n\nIn the standard model of structure formation, the rotation velocities of disc galaxies are correlated" -"---\nabstract: 'We consider the relativistic electron-positron field interacting with itself via the Coulomb potential defined with the physically motivated, positive, density-density quartic interaction. The more usual normal-ordered Hamiltonian differs from the bare Hamiltonian by a quadratic term and, by choosing the normal ordering in a suitable, self-consistent manner, the quadratic term can be seen to be equivalent to a renormalization of the Dirac operator. Formally, this amounts to a Bogolubov-Valatin transformation, but in reality it is non-perturbative, for it leads to an inequivalent, fine-structure dependent representation of the canonical anticommutation relations. This non-perturbative redefinition of the electron/positron states can be interpreted as a mass, wave-function and charge renormalization, among other possibilities, but the main point is that a non-perturbative definition of normal ordering might be a useful starting point for developing a consistent quantum electrodynamics.'\naddress:\n- |\n Departments of Mathematics and Physics\\\n Jadwin Hall\\\n Princeton University\\\n P.O.B. 
708\\\n Princeton, NJ 08544-0708\\\n USA\n- |\n Mathematik I\\\n Universit\u00e4t Regensburg\\\n 93040 Regensburg\\\n Germany\nauthor:\n- 'Elliott H.\u00a0Lieb'\n- Heinz Siedentop\ndate: 'March 2, 2000'\ntitle: 'Renormalization of the Regularized Relativistic Electron-Positron Field'\n---\n\nIntroduction\\[S1\\]\n==================\n\nIn relativistic quantum electrodynamics (QED) the quantized electron-positron field $\\Psi(x)$, which is an operator-valued" -"---\nabstract: 'Though the neutrino-driven convection model for the core-collapse explosion mechanism has received strong support in recent years, there are still many uncertainties in the explosion parameters \u2013 such as explosion energy, remnant mass, and end-of-life stellar abundances as initial conditions. Using a broad set of spherically symmetric core-collapse simulations we examine the effects of these key parameters on explosive nucleosynthesis and final explosion yields. Post-bounce temperature and density evolution of ZAMS 15, 20, and 25 solar mass progenitors are post-processed through the Nucleosynthesis Grid (NuGrid) nuclear network to obtain detailed explosive yields. In particular, this study focuses on radio-isotopes that are of particular interest to the next generation of gamma-ray astronomical observations; $\\isotope[43]{K}$, $\\isotope[47]{Ca}$, $\\isotope[44]{Sc}$, $\\isotope[47]{Sc}$, $\\isotope[48]{V}$, $\\isotope[48]{Cr}$, $\\isotope[51]{Cr}$, $\\isotope[52]{Mn}$, $\\isotope[59]{Fe}$, $\\isotope[56]{Co}$, $\\isotope[57]{Co}$, and $\\isotope[57]{Ni}$. These nuclides may be key in advancing our understanding of the inner workings of core-collapse supernovae by probing the parameters of the explosion engine. We find that the isotopes that are strong indicators of explosion energy are , , , , and , those that are dependent on the progenitor structure are , , and , and those that probe neither are , , , and . 
We discuss prospects of observing these" -"---\nabstract: 'The question of whether a singularity can form in an initially regular flow, described by the 3D incompressible Navier-Stokes (NS) equations, is a fundamental problem in mathematical physics. The NS regularity problem is super-critical, i.e., there is a \u2018scaling gap\u2019 between what can be established by mathematical analysis and what is needed to rule out a singularity. A recently introduced mathematical framework\u2013based on a suitably defined \u2018scale of sparseness\u2019 of the regions of intense vorticity\u2013brought the first scaling reduction of the NS super-criticality since the 1960s. Here, we put this framework to the first numerical test using a spatially highly resolved computational simulation performed near a \u2018burst\u2019 of the vorticity magnitude. The results confirm that the scale is well suited to detect the onset of dissipation and provide strong numerical evidence that ongoing mathematical efforts may succeed in closing the scaling gap.'\nauthor:\n- Janet Rafner\n- Zoran Gruji\u0107\n- Christian Bach\n- Jakob Andreas B\u00e6rentzen\n- Bo Gervang\n- Ruo Jia\n- Scott Leinweber\n- Marek Misztal\n- Jacob Sherson\ntitle: ' Geometry of turbulent dissipation and the Navier-Stokes regularity problem '\n---\n\nIntroduction\n============\n\nHumans have been fascinated with the geometry of fluid flows for centuries." -"---\nabstract: 'For a unital ring, it is an open question whether flatness of simple modules implies that all modules are flat and thus the ring is von Neumann regular. The question was raised by Ramamurthi over 40 years ago\u00a0[@ram], who called such rings SF-rings (i.e., rings whose simple modules are flat). In this note we show that an SF Steinberg algebra of an ample Hausdorff groupoid, graded by an ordered group, has an aperiodic unit space. For graph groupoids this implies that the graphs are acyclic. 
Combining this with the Abrams-Rangaswamy Theorem\u00a0[@abramsranga], it follows that SF Leavitt path algebras are regular, answering Ramamurthi\u2019s question in the positive for the class of Leavitt path algebras.'\naddress:\n- |\n Cochin University of Science and Technology, India\\\n Current address: Western Sydney University\\\n Australia\n- |\n Western Sydney University\\\n Australia\n- |\n Western Sydney University\\\n Australia\nauthor:\n- 'A.A. Ambily'\n- Roozbeh Hazrat\n- Huanhuan Li\ntitle: Simple flat Leavitt path algebras are von Neumann regular\n---\n\n*On the occasion of his 80th birthday\\\nto Kulumani M. Rangaswamy\\\nwhose passion for Mathematics is contagious*\n\nIntroduction\n============\n\nThere is a substantial amount of literature on characterising a ring in terms of certain properties of its simple modules