| id | text | source |
|---|---|---|
arxiv_dataset-97001804.08307 | Multimode theory of Gaussian states in uniformly accelerated frames
quant-ph
We use the formalism of noisy Gaussian channels to derive explicit
transformation laws describing how an arbitrary multimode Gaussian state of a
scalar quantum field is perceived by a number of accelerating observers, each
having access to at least one of the modes. Our work, which generalizes earlier
results of Ahmadi et al., Phys. Rev. D 93, 124031 (2016), is the next step
towards a better understanding of the effect of gravity on the states of
quantum fields.
| arxiv topic:quant-ph |
arxiv_dataset-97011804.08407 | Robust Safety for Autonomous Vehicles through Reconfigurable Networking
cs.NI cs.PF cs.SY
Autonomous vehicles bring the promise of enhancing the consumer experience in
terms of comfort and convenience and, in particular, the safety of the
autonomous vehicle. Safety functions in autonomous vehicles such as Automatic
Emergency Braking and Lane Centering Assist rely on computation, information
sharing, and the timely actuation of the safety functions. One opportunity to
achieve robust autonomous vehicle safety is by enhancing the robustness of
in-vehicle networking architectures that support built-in resiliency
mechanisms. Software Defined Networking (SDN) is an advanced networking
paradigm that allows fine-grained manipulation of routing tables and routing
engines and the implementation of complex features such as failover, which is a
mechanism of protecting in-vehicle networks from failure, and in which a
standby link automatically takes over once the main link fails. In this paper,
we leverage SDN network programmability features to enable resiliency in the
autonomous vehicle realm. We demonstrate that a Software Defined In-Vehicle
Networking (SDIVN) does not add overhead compared to Legacy In-Vehicle Networks
(LIVNs) under non-failure conditions and we highlight its superiority in the
case of a link failure and its timely delivery of messages. We verify the
proposed architecture's benefits using a simulation environment that we have
developed, and we validate our design choices through testing and simulations.
| arxiv topic:cs.NI cs.PF cs.SY |
arxiv_dataset-97021804.08507 | Standard versus Strict Bounded Real Lemma with infinite-dimensional
state space I: The State-Space-Similarity Approach
math.FA
The Bounded Real Lemma, i.e., the state-space linear matrix inequality
characterization (referred to as Kalman-Yakubovich-Popov or KYP inequality) of
when an input/state/output linear system satisfies a dissipation inequality,
has recently been studied for infinite-dimensional discrete-time systems in a
number of different settings: with or without stability assumptions, with or
without controllability/observability assumptions, with or without strict
inequalities. In these various settings, sometimes unbounded solutions of the
KYP inequality are required while in other instances bounded solutions suffice.
In a series of reports we show how these diverse results can be reconciled and
unified. This first instalment focusses on the state-space-similarity approach
to the bounded real lemma. We shall show how these results can be seen as
corollaries of a new State-Space-Similarity theorem for infinite-dimensional
linear systems.
| arxiv topic:math.FA |
arxiv_dataset-97031804.08607 | Benchmarking projective simulation in navigation problems
cs.LG cs.AI stat.ML
Projective simulation (PS) is a model for intelligent agents with a
deliberation capacity that is based on episodic memory. The model has been
shown to provide a flexible framework for constructing reinforcement-learning
agents, and it allows for quantum mechanical generalization, which leads to a
speed-up in deliberation time. PS agents have been applied successfully in the
context of complex skill learning in robotics, and in the design of
state-of-the-art quantum experiments. In this paper, we study the performance
of projective simulation in two benchmarking problems in navigation, namely the
grid world and the mountain car problem. The performance of PS is compared to
standard tabular reinforcement learning approaches, Q-learning and SARSA. Our
comparison demonstrates that the performance of PS and of the standard learning
approaches is qualitatively and quantitatively similar, while it is much
easier to choose optimal model parameters in the case of projective simulation,
with a reduced computational effort of one to two orders of magnitude. Our
results show that the projective simulation model stands out for its simplicity
in terms of the number of model parameters, which makes it simple to set up the
learning agent in unknown task environments.
| arxiv topic:cs.LG cs.AI stat.ML |
arxiv_dataset-97041804.08707 | Pseudospin-valley coupled edge states in a photonic topological
insulator
physics.app-ph cond-mat.mes-hall physics.optics
Pseudo-spin and valley degrees of freedom (DOFs) engineered in photonic
analogues of topological insulators (TI) provide potential approaches to
optical encoding and robust signal transport. Here we observe a ballistic edge
state whose spin-valley indices are locked to the direction of propagation
along the interface between a valley photonic crystal and a metacrystal
emulating the quantum spin Hall effect. We demonstrate the inhibition of
inter-valley scattering at a Y-junction formed at the interfaces between
photonic TIs carrying different spin-valley Chern numbers. These results open
up the possibility of using the valley DOF to control the flow of optical
signals in 2D structures.
| arxiv topic:physics.app-ph cond-mat.mes-hall physics.optics |
arxiv_dataset-97051804.08807 | A comparison of methods for modeling marginal non-zero daily rainfall
across the Australian continent
stat.AP
Naveau et al. (2016) have recently developed a class of methods, based on
extreme-value theory (EVT), for capturing low, moderate, and heavy rainfall
simultaneously, without the need to choose a threshold typical to EVT methods.
We analyse the performance of Naveau et al.'s methods, along with mixtures of
gamma distributions, by fitting them to marginal non-zero rainfall from 16,968
sites spanning the Australian continent and which represent a wide variety of
rainfall patterns. Performance is assessed by the distribution across sites of
the log ratios of each method's estimated quantiles and the empirical
quantiles. We do so for quantiles corresponding to low, moderate, and heavy
rainfall. Under this metric, mixtures of three and four gamma distributions
outperform Naveau et al.'s methods for small and moderate rainfall, and provide
equivalent fits for heavy rainfall.
| arxiv topic:stat.AP |
arxiv_dataset-97061804.08907 | Exoplanets: past, present, and future
astro-ph.EP
Our understanding of extra-solar planet systems has been driven largely by
observational advances in the past decade. Thanks to high-precision
spectrographs, we are able to reveal unseen companions to stars with the radial
velocity method. High-precision photometry from space, especially with the
Kepler mission, enables us to detect planets when they transit their stars and
dim the stellar light by merely one percent or less. Ultra-wide-field,
high-cadence,
continuous monitoring of the Galactic bulge from different sites around the
southern hemisphere provides us the opportunity to observe microlensing effects
caused by planetary systems from the solar neighborhood, all the way to the
Milky Way center. Exquisite adaptive-optics imaging from large ground-based
telescopes, coupled with high-contrast coronagraphs, has captured photons
directly emitted by planets around other stars. In this article, I present a concise review of
the extra-solar planet discoveries, discussing the strengths and weaknesses of
the major planetary detection methods, providing an overview of our current
understanding of planetary formation and evolution given the tremendous
observations delivered by various methods, as well as on-going and planned
observation endeavors to provide a clear picture of extra-solar planetary
systems.
| arxiv topic:astro-ph.EP |
arxiv_dataset-97071804.09007 | Solving Horn Clauses on Inductive Data Types Without Induction
cs.LO cs.PL
We address the problem of verifying the satisfiability of Constrained Horn
Clauses (CHCs) based on theories of inductively defined data structures, such
as lists and trees. We propose a transformation technique whose objective is
the removal of these data structures from CHCs, hence reducing their
satisfiability to a satisfiability problem for CHCs on integers and booleans.
We propose a transformation algorithm and identify a class of clauses where it
always succeeds. We also consider an extension of that algorithm, which
combines clause transformation with reasoning on integer constraints. Via an
experimental evaluation we show that our technique greatly improves the
effectiveness of applying the Z3 solver to CHCs. We also show that our
verification technique, based on CHC transformation followed by CHC solving, is
competitive with respect to CHC solvers extended with induction. This paper is
under consideration for acceptance in TPLP.
| arxiv topic:cs.LO cs.PL |
arxiv_dataset-97081804.09107 | SITAN: Services for Fault-Tolerant Ad Hoc Networks with Unknown
Participants
cs.DC
The evolution of mobile devices with various capabilities (e.g., smartphones
and tablets), together with their ability to collaborate in impromptu ad hoc
networks, opens new opportunities for the design of innovative distributed
applications. The development of these applications needs to address several
difficulties, such as the unreliability of the network, the imprecise set of
participants, or the presence of malicious nodes. In this paper we describe a
middleware, called SITAN, that offers a number of communication, group
membership and coordination services specially conceived for these settings.
These services are implemented by a stack of Byzantine fault-tolerant
protocols, enabling applications that are built on top of them to operate
correctly despite the uncertainty of the environment. The protocol stack was
implemented in Android and NS-3, which allowed experimentation in
representative scenarios. Overall, the results show that the protocols are able
to finish their execution within a small time window, which is acceptable for
various kinds of applications.
| arxiv topic:cs.DC |
arxiv_dataset-97091804.09207 | Finitely $\mathcal{F}$-amenable actions and Decomposition Complexity of
Groups
math.GT math.GR
In his work on the Farrell-Jones Conjecture, Arthur Bartels introduced the
concept of a "finitely $\mathcal{F}$-amenable" group action, where
$\mathcal{F}$ is a family of subgroups. We show how a finitely
$\mathcal{F}$-amenable action of a countable group $G$ on a compact metric
space, where the asymptotic dimensions of the elements of $\mathcal{F}$ are
bounded from above, gives an upper bound for the asymptotic dimension of $G$
viewed as a metric space with a proper left invariant metric. We generalize
this to families $\mathcal{F}$ whose elements are contained in a collection,
$\mathfrak{C}$, of metric families that satisfies some basic permanence
properties: If $G$ is a countable group and each element of $\mathcal{F}$
belongs to $\mathfrak{C}$ and there exists a finitely $\mathcal{F}$-amenable
action of $G$ on a compact metrizable space, then $G$ is in $\mathfrak{C}$.
Examples of such collections of metric families include: metric families with
weak finite decomposition complexity, exact metric families, and metric
families that coarsely embed into Hilbert space.
| arxiv topic:math.GT math.GR |
arxiv_dataset-97101804.09307 | Ambient Backscatter Systems: Exact Average Bit Error Rate under Fading
Channels
cs.IT cs.NI math.IT
The success of the Internet-of-Things (IoT) paradigm relies on, among other
things, developing energy-efficient communication techniques that can enable
information exchange among billions of battery-operated IoT devices. With its
technological capability of simultaneous information and energy transfer,
ambient backscatter is quickly emerging as an appealing solution for this
communication paradigm, especially for the links with low data rate
requirement. In this paper, we study signal detection and characterize exact
bit error rate for the ambient backscatter system. In particular, we formulate
a binary hypothesis testing problem at the receiver and analyze system
performance under three detection techniques: a) mean threshold (MT), b)
maximum likelihood threshold (MLT), and c) approximate MLT. Motivated by the
energy-constrained nature of IoT devices, we perform the above analyses for two
receiver types: i) the ones that can accurately track channel state information
(CSI), and ii) the ones that cannot. Two main features of the analysis that
distinguish this work from the prior art are the characterization of the exact
conditional density functions of the average received signal energy, and the
characterization of exact average bit error rate (BER) for this setup. The key
challenge lies in the handling of correlation between channel gains of two
hypotheses for the derivation of joint probability distribution of magnitude
squared channel gains that is needed for the BER analysis.
| arxiv topic:cs.IT cs.NI math.IT |
arxiv_dataset-97111804.09407 | The use of a pruned modular decomposition for Maximum Matching
algorithms on some graph classes
cs.DS
We address the following general question: given a graph class C on which we
can solve Maximum Matching in (quasi) linear time, does the same hold true for
the class of graphs that can be modularly decomposed into C ? A major
difficulty in this task is that the Maximum Matching problem is not preserved
by quotient, thereby making it difficult to exploit the structural properties
of the quotient subgraphs of the modular decomposition. So far, we are only
aware of a recent framework in [Coudert et al., SODA'18], which applies only
when the quotient subgraphs have bounded order and/or under additional
assumptions on the nontrivial modules in the graph. As a first attempt toward
improving this
framework we study the combined effect of modular decomposition with a pruning
process over the quotient subgraphs. More precisely, we remove sequentially
from all such subgraphs their so-called one-vertex extensions (i.e., pendant,
anti-pendant, twin, universal and isolated vertices). Doing so, we obtain a
"pruned modular decomposition", that can be computed in O(m log n)-time. Our
main result is that if all the pruned quotient subgraphs have bounded order
then a maximum matching can be computed in linear time. This result is mostly
based on two pruning rules on pendant and anti-pendant modules -- modules that
are adjacent, respectively, to one or to all but one of the other modules in
the graph.
Furthermore, these two latter rules are surprisingly intricate and we consider
them as our main technical contribution in the paper. We stress that the class
of graphs that can be totally decomposed by the pruned modular decomposition
contains all the distance-hereditary graphs, and so, it is larger than
cographs. In particular, as a byproduct of our approach we also obtain the
first known linear-time algorithms for Maximum Matching on distance-hereditary
graphs and graphs with modular-treewidth at most one. Finally, we can use an
extended version of our framework in order to compute a maximum matching, in
linear-time, for all graph classes that can be modularly decomposed into
cycles. Our work is the first to explain why the existence of some nice
ordering over the modules of a graph, instead of just over its vertices, can
help to speed up the computation of maximum matchings on some graph classes.
| arxiv topic:cs.DS |
arxiv_dataset-97121804.09507 | Gate-tunable Hall sensors on large area CVD graphene protected by h-BN
with 1D edge contacts
physics.app-ph
Graphene is an excellent material for Hall sensors due to its atomically thin
structure, high carrier mobility and low carrier density. However, graphene
devices need to be protected from the environment for reliable and durable
performance in different environmental conditions. Here we present magnetic
Hall sensors fabricated on large area commercially available CVD graphene
protected by exfoliated hexagonal boron nitride (h-BN). To connect the graphene
active regions of the Hall samples to the outputs, 1D edge contacts were
utilized, which show reliable and stable electrical properties. The Hall
sensors show a current-related sensitivity of up to 345 V/(AT). By
changing the carrier concentration and type in graphene through the
application of a gate voltage, we are able to tune the Hall sensitivity.
| arxiv topic:physics.app-ph |
arxiv_dataset-97131804.09607 | The Assouad spectrum and the quasi-Assouad dimension: a tale of two
spectra
math.CA math.MG
We consider the Assouad spectrum, introduced by Fraser and Yu, along with a
natural variant that we call the `upper Assouad spectrum'. These spectra are
designed to interpolate between the upper box-counting and Assouad dimensions.
It is known that the Assouad spectrum approaches the upper box-counting
dimension at the left hand side of its domain, but does not necessarily
approach the Assouad dimension on the right. Here we show that it necessarily
approaches the \emph{quasi-Assouad dimension} at the right hand side of its
domain. We further show that the upper Assouad spectrum can be expressed in
terms of the Assouad spectrum, thus motivating the definition used by
Fraser-Yu.
We also provide a large family of examples demonstrating new phenomena
relating to the form of the Assouad spectrum. For example, we prove that it can
be strictly concave, exhibit phase transitions of any order, and need not be
piecewise differentiable.
| arxiv topic:math.CA math.MG |
arxiv_dataset-97141804.09707 | Transitive PSL(2,11)-invariant k-arcs in PG(4,q)
math.CO
A \textit{k}-arc in the projective space ${\rm PG}(n,q)$ is a set of $k$
projective points such that no subcollection of $n+1$ points is contained in a
hyperplane. In this paper, we construct new $60$-arcs and $110$-arcs in ${\rm
PG}(4,q)$ that do not arise from rational or elliptic curves. We introduce
computational methods that, when given a set $\mathcal{P}$ of projective points
in the projective space of dimension $n$ over an algebraic number field
$\mathbb{Q}(\xi)$, determine a complete list of primes $p$ for which the
reduction modulo $p$ of $\mathcal{P}$ to the projective space ${\rm PG}(n,p^h)$
may fail to be a $k$-arc. Using these methods, we prove that there are
infinitely many primes $p$ such that ${\rm PG}(4,p)$ contains a ${\rm
PSL}(2,11)$-invariant $110$-arc, where ${\rm PSL}(2,11)$ is given in one of its
natural irreducible representations as a subgroup of ${\rm PGL}(5,p)$.
Similarly, we show that there exist ${\rm PSL}(2,11)$-invariant $110$-arcs in
${\rm PG}(4,p^2)$ and ${\rm PSL}(2,11)$-invariant $60$-arcs in ${\rm PG}(4,p)$
for infinitely many primes $p$.
| arxiv topic:math.CO |
arxiv_dataset-97151804.09807 | Generalizations of Stillman's conjecture via twisted commutative
algebras
math.AC math.AG
Combining recent results on noetherianity of twisted commutative algebras by
Draisma and the resolution of Stillman's conjecture by Ananyan-Hochster, we
prove a broad generalization of Stillman's conjecture. Our theorem yields an
array of boundedness results in commutative algebra that only depend on the
degrees of the generators of an ideal, and not the number of variables in the
ambient polynomial ring.
| arxiv topic:math.AC math.AG |
arxiv_dataset-97161804.09907 | On Estimating Edit Distance: Alignment, Dimension Reduction, and
Embeddings
cs.DS
Edit distance is a fundamental measure of distance between strings and has
been widely studied in computer science. While the problem of estimating edit
distance has been studied extensively, the equally important question of
actually producing an alignment (i.e., the sequence of edits) has received far
less attention. Somewhat surprisingly, we show that any algorithm to estimate
edit distance can be used in a black-box fashion to produce an approximate
alignment of strings, with modest loss in approximation factor and small loss
in run time. Plugging in the result of Andoni, Krauthgamer, and Onak, we obtain
an alignment that is a $(\log n)^{O(1/\varepsilon^2)}$ approximation in time
$\tilde{O}(n^{1 + \varepsilon})$.
Closely related to the study of approximation algorithms is the study of
metric embeddings for edit distance. We show that min-hash techniques can be
useful in designing edit distance embeddings through three results: (1) An
embedding from Ulam distance (edit distance over permutations) to Hamming space
that matches the best known distortion of $O(\log n)$ and also implicitly
encodes a sequence of edits between the strings; (2) In the case where the edit
distance between the input strings is known to have an upper bound $K$, we show
that embeddings of edit distance into Hamming space with distortion $f(n)$ can
be modified in a black-box fashion to give distortion
$O(f(\operatorname{poly}(K)))$ for a class of periodic-free strings; (3) A
randomized dimension-reduction map with contraction $c$ and asymptotically
optimal expected distortion $O(c)$, improving on the previous $\tilde{O}(c^{1 +
2 / \log \log \log n})$ distortion result of Batu, Ergun, and Sahinalp.
| arxiv topic:cs.DS |
arxiv_dataset-97171804.10007 | On right coideal subalgebras of quantum groups
math.QA math.RT
Right coideal subalgebras are interesting substructures of Hopf algebras such
as quantum groups. Examples of right coideal subalgebras are the quantum Borel
part as well as quantum symmetric pairs. Classifying right coideal subalgebras
is a difficult question with notable results by Schneider, Heckenberger and
Kolb. After reviewing these results, we prove, as our main result, that an
arbitrary right coideal subalgebra has a particularly nice set of generators.
This allows one, in principle, to specify the set of right coideal subalgebras
in a given case. As an application, we determine the right coideal subalgebras
of the quantum groups Uq(sl2) and Uq(sl3) and discuss their representation
theoretic
properties.
| arxiv topic:math.QA math.RT |
arxiv_dataset-97181804.10107 | New member candidates of Upper Scorpius from Gaia DR1
astro-ph.SR astro-ph.GA
Context. Selecting a cluster in proper motion space is an established method
for identifying members of a star forming region. The first data release from
Gaia (DR1) provides an extremely large and precise stellar catalogue, which
when combined with the Tycho-2 catalogue gives the 2.5 million parallaxes and
proper motions contained within the Tycho-Gaia Astrometric Solution (TGAS).
Aims. We aim to identify new member candidates of the nearby Upper Scorpius
subgroup of the Scorpius-Centaurus Complex within the TGAS catalogue. In doing
so, we also aim to validate the use of the DBSCAN clustering algorithm on
spatial and kinematic data as a robust member selection method. Methods. We
constructed a method for member selection using a density-based clustering
algorithm (DBSCAN) applied over proper motion and distance. We then applied
this method to Upper Scorpius, and evaluated the results and performance of the
method. Results. We identified 167 member candidates of Upper Scorpius, of
which 78 are new, distributed within a 10$^{\circ}$ radius from its core. These
member candidates have a mean distance of 145.6 $\pm$ 7.5 pc, and a mean proper
motion of (-11.4, -23.5) $\pm$ (0.7, 0.4) mas/yr. These values are consistent
with measured distances and proper motions of previously identified bona-fide
members of the Upper Scorpius association.
| arxiv topic:astro-ph.SR astro-ph.GA |
arxiv_dataset-97191804.10207 | Modelling dust rings in early-type galaxies through a sequence of
radiative transfer simulations and 2D image fitting
astro-ph.GA
A large fraction of early-type galaxies (ETGs) host prominent dust features,
and central dust rings are arguably the most interesting among them. We present
here `Lord Of The Rings' (LOTR), a new methodology which allows the extinction
by dust rings to be integrated into a 2D fitting model of the surface
brightness distribution. Our pipeline acts in two steps, first using the
surface fitting software GALFIT to determine the unabsorbed stellar emission,
and then adopting the radiative transfer code SKIRT to apply dust extinction.
We apply our technique to NGC 4552 and NGC 4494, two nearby ETGs. We show that
the extinction by a dust ring can mimic, in a surface brightness profile, a
central point source (e.g. an unresolved nuclear stellar cluster or an active
galactic nucleus; AGN) superimposed on a `core' (i.e. a central flattening of
the stellar light commonly observed in massive ETGs). We discuss how properly
accounting for dust features is of paramount importance to derive correct
fluxes especially for low luminosity AGNs (LLAGNs). We suggest that the
geometries of dust features are strictly connected with how relaxed the
gravitational potential is, i.e. with the evolutionary stage of the host galaxy.
Additionally, we find hints that the dust mass contained in the ring relates to
the AGN activity.
| arxiv topic:astro-ph.GA |
arxiv_dataset-97201804.10307 | Optimal energy-conserving discontinuous Galerkin methods for linear
symmetric hyperbolic systems
math.NA
We propose energy-conserving discontinuous Galerkin (DG) methods for
symmetric linear hyperbolic systems on general unstructured meshes. Optimal a
priori error estimates of order $k+1$ are obtained for the semi-discrete scheme
in one dimension, and in multi-dimensions on Cartesian meshes when
tensor-product polynomials of degree $k$ are used. A high-order
energy-conserving Lax-Wendroff time discretization is also presented.
Extensive numerical results in one dimension, and two dimensions on both
rectangular and triangular meshes are presented to support the theoretical
findings and to assess the new methods. One particular method (with the
doubling of unknowns) is found to be optimally convergent on triangular meshes
for all the examples considered in this paper. The method is also compared with
the classical (dissipative) upwinding DG method and (conservative) DG method
with a central flux. The new method is numerically observed to have superior
performance in long-time simulations.
| arxiv topic:math.NA |
arxiv_dataset-97211804.10407 | Characterization of half-radial matrices
math.NA
The numerical radius $r(A)$ is the radius of the smallest ball with center at
zero containing the field of values of a given square matrix $A$. It is well
known that $r(A)\leq \|A\| \leq 2r(A)$, where $\| \cdot \|$ is the matrix
2-norm. Matrices attaining the lower bound are called radial, and have been
analyzed thoroughly. This is not the case for matrices attaining the upper
bound where only partial results are available. In this paper we consider
matrices satisfying $r(A)=\|A\|/2$, and call them half-radial. We summarize the
existing results and formulate new ones. In particular, we investigate their
singular value decomposition and algebraic structure, and provide other
necessary and sufficient conditions for a matrix to be half-radial. Based on
that, we study the extreme case of the attainable constant~$2$ in Crouzeix's
conjecture. The presented results support the conjecture of Greenbaum and
Overton, that the Crabb-Choi-Crouzeix matrix always plays an important role in
this extreme case.
| arxiv topic:math.NA |
arxiv_dataset-97221804.10507 | Sound up-to techniques and Complete abstract domains
cs.LO
Abstract interpretation is a method to automatically find invariants of
programs or pieces of code whose semantics is given via least fixed-points.
Up-to techniques have been introduced as enhancements of coinduction, an
abstract principle to prove properties expressed via greatest fixed-points.
While abstract interpretation is always sound by definition, the soundness of
up-to techniques needs some ingenuity to be proven. For completeness, the
setting is switched: up-to techniques are always complete, while abstract
domains are not.
In this work we show that, under reasonable assumptions, there is an evident
connection between sound up-to techniques and complete abstract domains.
| arxiv topic:cs.LO |
arxiv_dataset-97231804.10607 | Gaia DR2 in 6D: Searching for the fastest stars in the Galaxy
astro-ph.GA astro-ph.SR
We search for the fastest stars in the subset of stars with radial velocity
measurements of the second data release (DR2) of the European Space Agency
mission Gaia. Starting from the observed positions, parallaxes, proper motions,
and radial velocities, we construct the distance and total velocity
distribution of more than $7$ million stars in our Milky Way, deriving the full
6D phase space information in Galactocentric coordinates. This information is
shared in a catalogue, publicly available at
http://home.strw.leidenuniv.nl/~marchetti/research.html. To search for unbound
stars, we then focus on stars with a probability greater than $50 \%$ of being
unbound from the Milky Way. This cut results in a clean sample of $125$ sources
with reliable astrometric parameters and radial velocities. Of these, $20$
stars have probabilities greater than 80 $\%$ of being unbound from the Galaxy.
On this latter sub-sample, we perform orbit integration to characterize the
stars' orbital parameter distributions. As expected given the relatively small
sample size of bright stars, we find no hypervelocity star candidates, stars
that are moving on orbits consistent with coming from the Galactic Centre.
Instead, we find $7$ hyper-runaway star candidates, coming from the Galactic
disk. Surprisingly, the remaining $13$ unbound stars cannot be traced back to
the Galaxy, including two of the fastest stars (around $700$ km/s). If
confirmed, these may constitute the tip of the iceberg of a large extragalactic
population or the extreme velocity tail of stellar streams.
| arxiv topic:astro-ph.GA astro-ph.SR |
arxiv_dataset-97241804.10707 | Remote Credential Management with Mutual Attestation for Trusted
Execution Environments
cs.CR
Trusted Execution Environments (TEEs) are rapidly emerging as a root-of-trust
for protecting sensitive applications and data using hardware-backed isolated
worlds of execution. TEEs provide robust assurances regarding critical
algorithm execution, tamper-resistant credential storage, and platform
integrity using remote attestation. However, the challenge of remotely managing
credentials between TEEs remains largely unaddressed in existing literature. In
this work, we present novel protocols using mutual attestation for supporting
four aspects of secure remote credential management with TEEs: backups,
updates, migration, and revocation. The proposed protocols are agnostic to the
underlying TEE implementation and subjected to formal verification using
Scyther, which found no attacks.
| arxiv topic:cs.CR |
arxiv_dataset-97251804.10807 | Sojourn-time distribution of virus capsid in interchromatin corrals of a
cell nucleus
physics.bio-ph cond-mat.stat-mech q-bio.SC
Virus capsids in interchromatin corrals of a cell nucleus are experimentally
known to exhibit anomalous diffusion as well as normal diffusion, leading to
the Gaussian distribution of the diffusion-exponent fluctuations over the
corrals. Here, the sojourn-time distribution of the virus capsid in local areas
of the corral, i.e., probability distribution of the sojourn time
characterizing diffusion in the local areas, is examined. Such an area is
regarded as a virtual cubic block, the diffusion property in which is normal or
anomalous. The distribution, in which the Gaussian fluctuation is incorporated,
is shown to tend to slowly decay. Then, the block-size dependence of average
sojourn time is discussed. A comment is also made on (non-)Markovianity of the
process of moving through the blocks.
| arxiv topic:physics.bio-ph cond-mat.stat-mech q-bio.SC |
arxiv_dataset-97261804.10907 | On the origin of surfaces-dependent growth of benzoic acid crystal
inferred through the droplet evaporation method
cond-mat.mtrl-sci
Crystal growth behavior of benzoic acid crystals on different surfaces was
examined. The performed experiments documented the existence of very strong
influence introduced by polar surfaces as glass, gelatin, and polyvinyl alcohol
(PVA) on the growth of benzoic acid crystals. These surfaces impose a strong
orientation effect, resulting in a dramatic reduction in the number of faces
seen in X-ray powder diffraction (XRPD). However, scraping the crystal off the
surface leads to a morphology that is similar to the one observed for bulk
crystallization. Surfaces of low wettability (paraffin) seem to be useful for
the preparation of amorphous powders, even for well-crystallizable compounds.
The performed quantum chemistry computations characterized the energetic
contributions stabilizing the morphology-related faces. It has been
demonstrated that the dominant face (002) of benzoic acid crystal, growing on
polar surfaces, is characterized by the highest densities of intermolecular
interaction energies determining the highest cohesive properties among all
studied faces. Additionally, the inter-layer interactions, which stand for
adhesive properties, are also the strongest in the case of this face. Thus,
quantum chemistry computations providing a detailed description of energetic
contributions can be successfully used to clarify the adhesive and cohesive
nature of benzoic acid crystal faces.
| arxiv topic:cond-mat.mtrl-sci |
arxiv_dataset-97271804.11007 | Triangle Inscribed-Triangle Picking
math.GM
Given a triangle ABC, we derive the probability distribution function and the
moments of the area of an inscribed triangle RST whose vertices are uniformly
distributed on AB, BC, and CA. The theoretical results are confirmed by a Monte
Carlo simulation.
| arxiv topic:math.GM |
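The paper's distribution function is not reproduced here, but the classical mean can be checked directly: for R, S, T uniform on the three sides, linearity of expectation gives E[area(RST)/area(ABC)] = 1 - 3(1/4) = 1/4. A minimal Monte Carlo sketch (the vertex coordinates of ABC are arbitrary illustrative choices):

```python
import random

def inscribed_area_ratio(rng):
    # R on AB, S on BC, T on CA, each chosen uniformly along its side.
    u, v, w = rng.random(), rng.random(), rng.random()
    A, B, C = (0.0, 0.0), (1.0, 0.0), (0.3, 0.9)  # arbitrary fixed triangle
    lerp = lambda P, Q, t: (P[0] + t * (Q[0] - P[0]), P[1] + t * (Q[1] - P[1]))
    R, S, T = lerp(A, B, u), lerp(B, C, v), lerp(C, A, w)
    # Triangle area via the cross product of two edge vectors.
    area = lambda P, Q, Z: abs((Q[0] - P[0]) * (Z[1] - P[1])
                               - (Z[0] - P[0]) * (Q[1] - P[1])) / 2
    return area(R, S, T) / area(A, B, C)

rng = random.Random(0)
n = 200_000
mean_ratio = sum(inscribed_area_ratio(rng) for _ in range(n)) / n
print(mean_ratio)  # close to the theoretical mean 1/4
```

The mean is independent of the shape of ABC because the area ratio depends only on the side parameters u, v, w.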
arxiv_dataset-97281804.11107 | An existence result for dissipative nonhomogeneous hyperbolic equations
via a minimization approach
math.AP math.FA
We discuss a purely variational approach to the study of a wide class of
second order nonhomogeneous dissipative hyperbolic PDEs. Precisely, we focus on
wave-like equations that also feature a nonzero source term and a
first-order-in-time linear term. The paper carries on the research program
initiated in (Serra&Tilli'12), and developed in (Serra&Tilli'16),
(Tentarelli&Tilli '18), on the De Giorgi approach to hyperbolic equations.
| arxiv topic:math.AP math.FA |
arxiv_dataset-97291804.11207 | An Anti-fraud System for Car Insurance Claim Based on Visual Evidence
cs.CV
Automatic scene understanding using machine learning algorithms has been
widely applied across industries to reduce the cost of manual labor.
Nowadays, insurance companies offer express vehicle insurance claim and
settlement by allowing customers to upload pictures taken with mobile devices.
This kind of claim is treated as a small claim and can be processed quickly,
either manually or automatically. However, due to the increasing number of
claims every day, systems or people are likely to be fooled by repeated claims
for an identical case, leading to large losses for insurance companies. Thus,
anti-fraud checking before processing a claim is necessary. We create the
first dataset of car damage images collected from the internet and local
parking lots. In addition, we propose an approach to generate robust deep
features by locating the damage accurately and efficiently in the images. The
state-of-the-art real-time object detector YOLO \cite{redmon2016you} is
modified to detect damage regions as an important part of the pipeline. Both
local and global deep features are extracted using the VGG model
\cite{Simonyan14c} and later fused for more robust system performance.
Experiments show that our approach is effective in preventing fraudulent
claims and meets the requirement of speeding up insurance claim
preprocessing.
| arxiv topic:cs.CV |
arxiv_dataset-97301804.11307 | Practical Low-Dimensional Halfspace Range Space Sampling
cs.CG
We develop, analyze, implement, and compare new algorithms for creating
$\varepsilon$-samples of range spaces defined by halfspaces which have size
sub-quadratic in $1/\varepsilon$, and have runtime linear in the input size and
near-quadratic in $1/\varepsilon$. The key to our solution is an efficient
construction of partition trees. Despite not requiring any techniques developed
after the early 1990s, such a result has apparently never been explicitly
described. We demonstrate that our implementations, including new
implementations of several variants of partition trees, do indeed run in time
linear in the input size, appear to run in time linear in the output size, and
achieve smaller error for the same sample size than the ubiquitous random sample (which
requires size quadratic in $1/\varepsilon$). This result has direct
applications in speeding up discrepancy evaluation, approximate range counting,
and spatial anomaly detection.
| arxiv topic:cs.CG |
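The partition-tree construction itself is not sketched here; the following minimal check only illustrates the baseline the abstract compares against, namely that a plain random sample of size roughly $1/\varepsilon^2$ already approximates all halfspace ranges. The point set, sample size, and number of test halfspaces are arbitrary illustrative choices:

```python
import math
import random

rng = random.Random(1)

# Ground set: n points in the unit square; m-point random sample as the
# candidate epsilon-sample.
n, m = 5000, 500
points = [(rng.random(), rng.random()) for _ in range(n)]
sample = rng.sample(points, m)

def halfspace_fraction(pts, a, b, c):
    """Fraction of pts in the halfspace a*x + b*y <= c."""
    return sum(1 for (x, y) in pts if a * x + b * y <= c) / len(pts)

# Estimate the discrepancy over many random halfspaces.
max_err = 0.0
for _ in range(200):
    theta = rng.uniform(0, 2 * math.pi)
    a, b = math.cos(theta), math.sin(theta)
    c = rng.uniform(-1.5, 1.5)
    err = abs(halfspace_fraction(points, a, b, c)
              - halfspace_fraction(sample, a, b, c))
    max_err = max(max_err, err)

print(max_err)  # typically well below 0.1 for m = 500
```

The sub-quadratic-size samples in the paper improve on this baseline; the sketch only sets up the quantity (range-counting discrepancy) being bounded.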
arxiv_dataset-97311805.00055 | Efficient numerical simulations with Tensor Networks: Tensor Network
Python (TeNPy)
cond-mat.str-el
Tensor product state (TPS) based methods are powerful tools to efficiently
simulate quantum many-body systems in and out of equilibrium. In particular,
the one-dimensional matrix-product (MPS) formalism is by now an established
tool in condensed matter theory and quantum chemistry. In these lecture notes,
we combine a compact review of basic TPS concepts with the introduction of a
versatile tensor library for Python (TeNPy) [https://github.com/tenpy/tenpy].
As concrete examples, we consider the MPS based time-evolving block decimation
and the density matrix renormalization group algorithm. Moreover, we provide a
practical guide on how to implement abelian symmetries (e.g., a particle number
conservation) to accelerate tensor operations.
| arxiv topic:cond-mat.str-el |
arxiv_dataset-97321805.00155 | Live Functional Programming with Typed Holes
cs.PL
This paper develops a dynamic semantics for incomplete functional programs,
starting from the static semantics developed in recent work on Hazelnut. We
model incomplete functional programs as expressions with holes, with empty
holes standing for missing expressions or types, and non-empty holes operating
as membranes around static and dynamic type inconsistencies. Rather than
aborting when evaluation encounters any of these holes as in some existing
systems, evaluation proceeds around holes, tracking the closure around each
hole instance as it flows through the remainder of the program. Editor services
can use the information in these hole closures to help the programmer develop
and confirm their mental model of the behavior of the complete portions of the
program as they decide how to fill the remaining holes. Hole closures also
enable a fill-and-resume operation that avoids the need to restart evaluation
after edits that amount to hole filling. Formally, the semantics borrows
machinery from both gradual type theory (which supplies the basis for handling
unfilled type holes) and contextual modal type theory (which supplies a logical
basis for hole closures), combining these and developing additional machinery
necessary to continue evaluation past holes while maintaining type safety. We
have mechanized the metatheory of the core calculus, called Hazelnut Live,
using the Agda proof assistant.
We have also implemented these ideas into the Hazel programming environment.
The implementation inserts holes automatically, following the Hazelnut edit
action calculus, to guarantee that every editor state has some (possibly
incomplete) type. Taken together with this paper's type safety property, the
result is a proof-of-concept live programming environment where rich dynamic
feedback is truly available without gaps, i.e. for every reachable editor
state.
| arxiv topic:cs.PL |
arxiv_dataset-97331805.00255 | A proof of the Murnaghan--Nakayama rule using Specht modules and tableau
combinatorics
math.RT math.CO
The Murnaghan--Nakayama rule is a combinatorial rule for the character values
of symmetric groups. We give a new combinatorial proof by explicitly finding
the trace of the representing matrices in the standard basis of Specht modules.
This gives an essentially bijective proof of the rule. A key lemma is an
extension of a straightening result proved by the second author to
skew-tableaux. Our module theoretic methods also give short proofs of Pieri's
rule and Young's rule.
| arxiv topic:math.RT math.CO |
arxiv_dataset-97341805.00355 | Sample-to-Sample Correspondence for Unsupervised Domain Adaptation
cs.LG cs.CV stat.ML
The assumption that training and testing samples are generated from the same
distribution does not always hold for real-world machine-learning applications.
The procedure of tackling this discrepancy between the training (source) and
testing (target) domains is known as domain adaptation. We propose an
unsupervised version of domain adaptation that considers the presence of only
unlabelled data in the target domain. Our approach centers on finding
correspondences between samples of each domain. The correspondences are
obtained by treating the source and target samples as graphs and using a convex
criterion to match them. The criteria used are first-order and second-order
similarities between the graphs as well as a class-based regularization. We
have also developed a computationally efficient routine for the convex
optimization, thus allowing the proposed method to be used widely. To verify
the effectiveness of the proposed method, computer simulations were conducted
on synthetic, image classification and sentiment classification datasets.
Results validated that the proposed local sample-to-sample matching method
out-performs traditional moment-matching methods and is competitive with
respect to current local domain-adaptation methods.
| arxiv topic:cs.LG cs.CV stat.ML |
arxiv_dataset-97351805.00455 | Band Alignment in Quantum Wells from Automatically Tuned DFT+$U$
cond-mat.mtrl-sci
Band alignment between two materials is of fundamental importance for
multitude of applications. However, density functional theory (DFT) either
underestimates the bandgap - as is the case with local density approximation
(LDA) or generalized gradient approximation (GGA) - or is highly
computationally demanding, as is the case with hybrid-functional methods. The
latter can become prohibitive in electronic-structure calculations of
supercells which describe quantum wells. We propose to apply the DFT$+U$
method, with $U$ for each atomic shell treated as a set of tuning
parameters, to automatically fit the bulk bandgap and the lattice constant, and
then use the thus-obtained $U$ parameters in large supercell calculations to
determine the band alignment. We apply this procedure to
InP/In$_{0.5}$Ga$_{0.5}$As, In$_{0.5}$Ga$_{0.5}$As/In$_{0.5}$Al$_{0.5}$As and
InP/In$_{0.5}$Al$_{0.5}$As quantum wells, and obtain good agreement with
experimental results. Although this procedure requires some experimental input,
it provides both meaningful valence and conduction band offsets while,
crucially, lattice relaxation is taken into account. The computational cost of
this procedure is comparable to that of LDA. We believe that this is a
practical procedure that can be useful for providing accurate estimates of band
alignments between more complicated alloys.
| arxiv topic:cond-mat.mtrl-sci |
arxiv_dataset-97361805.00555 | A general framework for modelling zero inflation
stat.ME
We propose a new framework for the modelling of count data exhibiting zero
inflation (ZI). The main part of this framework includes a new and more general
parameterisation for ZI models which naturally includes both over- and
under-inflation. It further sheds new theoretical light on modelling and
inference and permits a simpler alternative, which we term multiplicative,
in contrast to the dominant mixture and hurdle models. Our approach gives the
statistician access to new types of ZI of which mixture and hurdle are special
cases. We outline a simple parameterised modelling approach which can help to
infer both ZI type and degree and provide an underlying treatment that shows
that current ZI models are themselves typically within the exponential family,
thus permitting much simpler theory, computation and classical inference. We
outline some possibilities for a natural Bayesian framework for inference; and
a rich basis for work on correlated ZI counts.
The present paper is an incomplete report on the underlying theory. A later
version will include computational issues and provide further examples.
| arxiv topic:stat.ME |
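The paper's general ZI parameterisation is not reproduced here; as a point of reference, the dominant mixture formulation mentioned in the abstract can be sketched as a zero-inflated Poisson, whose zero probability exceeds the plain Poisson zero mass $e^{-\lambda}$. The parameter values are arbitrary illustrative choices:

```python
import math
import random

rng = random.Random(0)

def poisson(rng, lam):
    # Knuth's multiplicative inversion sampler (adequate for small lam).
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def zip_sample(rng, pi, lam):
    """Mixture zero-inflated Poisson: structural zero w.p. pi, else Poisson(lam)."""
    return 0 if rng.random() < pi else poisson(rng, lam)

pi, lam, n = 0.3, 2.0, 100_000
zeros = sum(1 for _ in range(n) if zip_sample(rng, pi, lam) == 0) / n

# P(X = 0) = pi + (1 - pi) * exp(-lam): inflated above the Poisson zero mass.
expected = pi + (1 - pi) * math.exp(-lam)
print(zeros, expected)
```

The hurdle and multiplicative variants discussed in the paper change how this excess zero mass is parameterised, not the basic quantity being modelled.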
arxiv_dataset-97371805.00655 | Convolutional Sequence to Sequence Model for Human Dynamics
cs.CV
Human motion modeling is a classic problem in computer vision and graphics.
Challenges in modeling human motion include high dimensional prediction as well
as extremely complicated dynamics. We present a novel approach to human motion
modeling based on convolutional neural networks (CNN). The hierarchical
structure of CNN makes it capable of capturing both spatial and temporal
correlations effectively. In our proposed approach, a convolutional long-term
encoder is used to encode the whole given motion sequence into a long-term
hidden variable, which is used with a decoder to predict the remainder of the
sequence. The decoder itself also has an encoder-decoder structure, in which
the short-term encoder encodes a shorter sequence to a short-term hidden
variable, and the spatial decoder maps the long and short-term hidden variable
to motion predictions. By using such a model, we are able to capture both
invariant and dynamic information of human motion, which results in more
accurate predictions. Experiments show that our algorithm outperforms the
state-of-the-art methods on the Human3.6M and CMU Motion Capture datasets. Our
code is available at the project website.
| arxiv topic:cs.CV |
arxiv_dataset-97381805.00755 | Milling and meandering: Flocking dynamics of stochastically interacting
agents with a field of view
cond-mat.stat-mech nlin.AO
We introduce a stochastic agent-based model for the flocking dynamics of
self-propelled particles that exhibit velocity-alignment interactions with
neighbours within their field of view. The stochasticity in the dynamics of the
model arises purely from the uncertainties at the level of interactions.
Despite the absence of attractive forces, this model gives rise to a wide array
of emergent patterns that exhibit long-time spatial cohesion. In order to gain
further insights into the dynamical nature of the resulting patterns, we
investigate the system behaviour using an algorithm that identifies spatially
distinct clusters of the flock and computes their corresponding angular
momenta. Our results suggest that the choice of field of view is crucial in
determining the resulting emergent dynamics of stochastically interacting
particles.
| arxiv topic:cond-mat.stat-mech nlin.AO |
arxiv_dataset-97391805.00855 | Noise Suppression in X-Ray Fourier-Transform Holography Based on
Two-Block Fresnel Zone Plate Interferometer with Common Optical Axis
physics.optics
Strict requirements were imposed on the size of the test sample in the
previously suggested scheme of hard X-ray Fourier-transform holography based on
a two-block Fresnel zone plate interferometer with a common optical axis. The
failure to meet these requirements leads to the appearance of noise in the
reconstructed image. In this work, the mechanism of noise formation, as well as
the possibility of its suppression, is considered.
| arxiv topic:physics.optics |
arxiv_dataset-97401805.00955 | Rough wall turbulent Taylor-Couette flow: the effect of the rib height
physics.flu-dyn
In this study, we combine experiments and direct numerical simulations to
investigate the effects of the height of transverse ribs at the walls on both
global and local flow properties in turbulent Taylor-Couette flow. We create
rib roughness by attaching up to 6 axial obstacles to the surfaces of the
cylinders over an extensive range of rib heights, up to blockages of 25% of the
gap width. In the asymptotic ultimate regime, where the transport is
independent of viscosity, we empirically find that the prefactor of the
$Nu_{\omega} \propto Ta^{1/2}$ scaling (corresponding to the drag coefficient
$C_f(Re)$ being constant) scales with the number of ribs $N_r$ and with the rib
height as $h^{1.71}$. The physical mechanism behind this is that the dominant
contribution to the torque originates from the pressure forces acting on the
rib which scale with rib height. The measured scaling relation of $N_r
h^{1.71}$ is slightly smaller than the expected $N_r h^2$ scaling, presumably
because the ribs cannot be regarded as completely isolated but interact. In the
counter-rotating regime with smooth walls, the momentum transport is increased
by turbulent Taylor vortices. We find that also in the presence of transverse
ribs these vortices persist. In the counter-rotating regime, even for large
roughness heights, the momentum transport is enhanced by these vortices.
| arxiv topic:physics.flu-dyn |
arxiv_dataset-97411805.01055 | Vision-based Structural Inspection using Multiscale Deep Convolutional
Neural Networks
cs.CV
Current methods of practice for inspection of civil infrastructure typically
involve visual assessments conducted manually by trained inspectors. For
post-earthquake structural inspections, the number of structures to be
inspected often far exceeds the capability of the available inspectors. The
labor-intensive and time-consuming nature of manual inspection has engendered
research into the development of algorithms for automated damage identification
using computer vision techniques. In this paper, a novel damage localization
and classification technique based on a state-of-the-art computer vision
algorithm is presented to address several key limitations of current computer
vision techniques. The proposed algorithm carries out a pixel-wise
classification of each image at multiple scales using a deep convolutional
neural network and can recognize 6 different types of damage. The resulting
output is a segmented image where the portion of the image representing damage
is outlined and classified as one of the trained damage categories. The
proposed method is evaluated in terms of pixel accuracy and the application of
the method to real world images is shown.
| arxiv topic:cs.CV |
arxiv_dataset-97421805.01155 | Angular dependence of eta photoproduction in photon-induced reaction
nucl-th
Photoproduction of eta mesons from nucleons can provide valuable information
about the excitation spectrum of the nucleons. The angular dependence of
photoproduction in the photon-induced reaction is investigated in the
multi-source thermal model. The results are compared with experimental data
from the decay mode and are in good agreement. It is shown that the movement
factor increases linearly with the photon beam energy. Moreover, the
deformation and translation of the emission sources are visualized within the
formalism.
| arxiv topic:nucl-th |
arxiv_dataset-97431805.01255 | Constant slope, entropy and horseshoes for a map on a tame graph
math.DS
We study continuous countably (strictly) monotone maps defined on a tame
graph, i.e., a special Peano continuum for which the set containing
branchpoints and endpoints has a countable closure. In our investigation we
confine ourselves to the countable Markov case. We show a necessary and
sufficient condition under which a locally eventually onto, countably Markov
map $f$ of a tame graph $G$ is conjugate to a constant slope map $g$ of a
countably affine tame graph. In particular, we show that in the case of a
Markov map $f$ that corresponds to a recurrent transition matrix, the condition
is satisfied for constant slope $e^{h_{\operatorname{top}}(f)}$, where
$h_{\operatorname{top}}(f)$ is the topological entropy of $f$. Moreover, we
show that in our class the topological entropy $h_{\operatorname{top}}(f)$ is
achievable through horseshoes of the map $f$.
| arxiv topic:math.DS |
arxiv_dataset-97441805.01355 | Minimax redundancy for Markov chains with large state space
cs.IT eess.SP math.IT
For any Markov source, there exist universal codes whose normalized
codelength approaches the Shannon limit asymptotically as the number of samples
goes to infinity. This paper investigates how fast the gap between the
normalized codelength of the "best" universal compressor and the Shannon limit
(i.e. the compression redundancy) vanishes non-asymptotically in terms of the
alphabet size and mixing time of the Markov source. We show that, for Markov
sources whose relaxation time is at least $1 + \frac{(2+c)}{\sqrt{k}}$, where
$k$ is the state space size (and $c>0$ is a constant), the phase transition for
the number of samples required to achieve vanishing compression redundancy is
precisely $\Theta(k^2)$.
| arxiv topic:cs.IT eess.SP math.IT |
arxiv_dataset-97451805.01455 | Geometrical control of active turbulence in curved topographies
cond-mat.soft
We investigate the turbulent dynamics of a two-dimensional active nematic
liquid crystal constrained on a curved surface. Using a combination of
hydrodynamic and particle-based simulations, we demonstrate that the
fundamental structural features of the fluid, such as the topological charge
density, the defect number density, the nematic order parameter and defect
creation and annihilation rates, are simple linear functions of the substrate
Gaussian curvature, which then acts as a control parameter for the chaotic
flow. Our theoretical predictions are then compared with experiments on
microtubule-kinesin suspensions confined on toroidal active droplets, finding
excellent qualitative agreement.
| arxiv topic:cond-mat.soft |
arxiv_dataset-97461805.01555 | An End-to-end Approach for Handling Unknown Slot Values in Dialogue
State Tracking
cs.CL
We highlight a practical yet rarely discussed problem in dialogue state
tracking (DST), namely handling unknown slot values. Previous approaches
generally assume predefined candidate lists and thus are not designed to output
unknown values, especially when the spoken language understanding (SLU) module
is absent as in many end-to-end (E2E) systems. We describe in this paper an E2E
architecture based on the pointer network (PtrNet) that can effectively extract
unknown slot values while still obtaining state-of-the-art accuracy on the
standard DSTC2 benchmark. We also provide extensive empirical evidence to show
that tracking unknown values can be challenging and our approach can bring
significant improvement with the help of an effective feature dropout
technique.
| arxiv topic:cs.CL |
arxiv_dataset-97471805.01655 | Parametrization of cross sections for elementary hadronic collisions
involving strange particles
nucl-th
The production of strange particles (kaons, hyperons) and hypernuclei in
light charged particle induced reactions in the energy range of a few GeV (2-15
GeV) has become a topic of active research in several facilities (e.g., HypHI
and PANDA at GSI and/or FAIR (Germany), JLab (USA), and JPARC (Japan)). This
energy range represents the low-energy limit of the string models (degree of
freedom: quark and gluon) or the high-energy limit of the so-called spallation
models (degree of freedom: hadrons). A well known spallation model is INCL, the
Li\`ege intranuclear cascade model (combined with a de-excitation model to
complete the reaction). INCL, known to give good results up to 2-3 GeV, was
recently upgraded by the implementation of multiple pion emission to extend the
energy range of applicability up to roughly 15 GeV. The next step, to account
also for strange particle production, both for refining the high energy domain
and making it usable when strangeness appears, requires the following main
ingredients: i) the relevant elementary cross sections (production, scattering,
and absorption) and ii) the characteristics of the associated final states.
Some of those ingredients are already known and, sometimes, are already used in
models of the same type (e.g., Bertini, GiBUU), but this paper aims at
reviewing the situation by compiling, updating, and comparing the necessary
elementary information, which is independent of the model used.
| arxiv topic:nucl-th |
arxiv_dataset-97481805.01755 | Two theorems about the P versus NP problem
cs.CC math.LO
Two theorems about the P versus NP problem are proved in this article: (1)
there exists a language $L$ such that the statement $L \in \textbf{P}$ is
independent of ZFC; (2) there exists a language $L \in \textbf{NP}$ such that,
for any polynomial-time deterministic Turing machine $M$, we cannot prove that
$L$ is decidable by $M$.
| arxiv topic:cs.CC math.LO |
arxiv_dataset-97491805.01855 | Consideration of learning orientations as an application of achievement
goals in evaluating life science majors in introductory physics
physics.ed-ph
When considering performing an Introductory Physics for Life Sciences course
transformation for one's own institution, life science majors' achievement
goals are a necessary consideration to ensure the pedagogical transformation
will be effective. However, achievement goals are rarely an explicit
consideration in physics education research topics such as metacognition. We
investigate a sample population of 218 students in a first-semester
introductory algebra-based physics course, drawn from 14 laboratory sections
within six semesters of course sections, to determine the influence of
achievement goals on life science majors' attitudes towards physics. Learning
orientations that, respectively, pertain to mastery goals and performance
goals, in addition to a learning orientation that does not report a performance
goal, were recorded from students in the specific context of learning a
problem-solving framework during an in-class exercise. Students' learning
orientations, defined within the context of students' self-reported statements
in the specific context of a problem-solving-related research-based course
implementation, are compared to pre-post results on physics problem-solving
items in a well-established attitudinal survey instrument, in order to
establish the categories' validity. In addition, mastery-related and
performance-related orientations appear to extend to overall pre-post
attitudinal shifts, but not to force and motion concepts or to overall course
grade, within the scope of an introductory physics course. There also appears
to be differentiation regarding overall course performance within health
science majors, but not within biology majors, in terms of learning
orientations; however, health science majors generally appear to fare less well
on all measurements in the study than do biology majors, regardless of learning
orientations.
| arxiv topic:physics.ed-ph |
arxiv_dataset-97501805.01955 | Improve Uncertainty Estimation for Unknown Classes in Bayesian Neural
Networks with Semi-Supervised/One-Set Classification
cs.LG stat.ML
Although deep neural networks (DNNs) have achieved many state-of-the-art
results, estimating the uncertainty present in the DNN model and the data is a
challenging task. Uncertainty-related problems, such as classifying data from
unknown classes (classes that do not appear in the training data) as a known
class with high confidence, are a critical concern in safety domains (e.g.,
autonomous driving, medical diagnosis). In this paper, we show that applying
current Bayesian Neural Network (BNN) techniques alone does not effectively
capture the uncertainty. To tackle this problem, we introduce a simple way to
improve the BNN by using one class classification (in this paper, we use the
term "set classification" instead). We empirically show the results of our
method in an experiment involving three datasets: MNIST, notMNIST and
FMNIST.
| arxiv topic:cs.LG stat.ML |
arxiv_dataset-97511805.02055 | Second order Sobolev type inequalities in the hyperbolic spaces
math.FA math.AP
We establish several Poincar\'e--Sobolev type inequalities for the
Laplace--Beltrami operator $\Delta_g$ in the hyperbolic space $\mathbb H^n$
with $n\geq 5$. These inequalities can be seen as improved second order
Poincar\'e inequalities with remainder terms involving the sharp Rellich
inequality or sharp Sobolev inequality in $\mathbb H^n$. The novelty of these
inequalities is that they combine both the sharp Poincar\'e inequality and the
sharp Rellich inequality or the sharp Sobolev inequality for $\Delta_g$ in
$\mathbb H^n$. As a consequence, we obtain the Poincar\'e--Sobolev inequality
for the second order GJMS operator $P_2$ in $\mathbb H^n$. In dimension $4$, we
obtain an improvement of the sharp Adams inequality and an Adams inequality
with exact growth for radial functions in $\mathbb H^4$.
| arxiv topic:math.FA math.AP |
arxiv_dataset-97521805.02155 | Comparison Study of Nonlinear Optimization of Step Durations and Foot
Placement for Dynamic Walking
cs.RO
This paper studies bipedal locomotion as a nonlinear optimization problem
based on continuous and discrete dynamics, by simultaneously optimizing the
remaining step duration, the next step duration and the foot location to
achieve robustness. The linear inverted pendulum as the motion model captures
the center of mass dynamics and its low-dimensionality makes the problem more
tractable. We first formulate a holistic approach to search for optimality in
the three-dimensional parametric space and use these results as baseline. To
further improve computational efficiency, our study investigates a sequential
approach with two stages of customized optimization that first optimizes the
current step duration, and subsequently the duration and location of the next
step. The effectiveness of both approaches is successfully demonstrated in
simulation by applying different perturbations. The comparison study shows that
these two approaches find mostly the same optimal solutions, but the latter
requires considerably less computational time, which suggests that the proposed
sequential approach is well suited for real-time implementation with a minor
trade-off in optimality.
| arxiv topic:cs.RO |
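The nonlinear step-timing optimization itself is not reproduced here; the sketch below only illustrates the linear-inverted-pendulum (LIP) motion model from the abstract, using its closed-form solution together with the classical capture-point foot placement $p = x + v/\omega$. Gravity, CoM height, and the initial state are arbitrary illustrative choices:

```python
import math

G, Z0 = 9.81, 0.9            # gravity and constant CoM height (assumed values)
OMEGA = math.sqrt(G / Z0)    # natural frequency of the linear inverted pendulum

def lip_state(x0, v0, p, t):
    """Closed-form solution of the LIP dynamics xdd = omega^2 * (x - p)
    for a fixed foot location p."""
    c, s = math.cosh(OMEGA * t), math.sinh(OMEGA * t)
    x = p + (x0 - p) * c + (v0 / OMEGA) * s
    v = (x0 - p) * OMEGA * s + v0 * c
    return x, v

# Capture-point foot placement: stepping to p = x + v/omega cancels the
# divergent component of the LIP, so the CoM comes to rest over the foothold.
x0, v0 = 0.0, 0.5
p = x0 + v0 / OMEGA
x1, v1 = lip_state(x0, v0, p, t=2.0)
print(abs(x1 - p), abs(v1))  # both decay like exp(-omega * t)
```

The optimization in the paper searches over step durations and the next foot location rather than using this one-shot placement, but the low-dimensional LIP state above is what makes that search tractable.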
arxiv_dataset-97531805.02255 | Identities involving Narayana numbers
math.CO
Narayana's cows problem is similar to Fibonacci's rabbit problem. We define
the numbers that solve this problem as Narayana's cows numbers. The Narayana's
cows sequence satisfies the third order
recurrence relation $N_{r}=N_{r-1}+N_{r-3}$ ($r\geq3$) with initial condition
$N_{0}=0$, $N_{1}=N_{2}=1$. In this paper, the Narayana numbers with subscript
$ar+b$ will be expressed in terms of three Narayana numbers spaced $a$ steps
apart, for any $1\leq b\leq a$ ($a\in \mathbb{Z}$). Furthermore, we study the
sum $S_{N,r}^{(4,b)}=\sum_{k=0}^{r}N_{4k+b}$ of Narayana numbers spaced $4$
steps apart, for any $1\leq b\leq 4$.
| arxiv topic:math.CO |
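A quick sketch of the sequence and the sums studied (the recurrence and initial conditions are as given in the abstract; the code only tabulates values, it does not prove the identities):

```python
def narayana(n_max):
    """Narayana's cows numbers: N_0 = 0, N_1 = N_2 = 1, N_r = N_{r-1} + N_{r-3}."""
    N = [0, 1, 1]
    for r in range(3, n_max + 1):
        N.append(N[r - 1] + N[r - 3])
    return N

def S(N, r, a, b):
    """Sum of Narayana numbers spaced a steps apart: sum_{k=0}^{r} N_{a*k + b}."""
    return sum(N[a * k + b] for k in range(r + 1))

N = narayana(30)
print(N[:12])         # [0, 1, 1, 1, 2, 3, 4, 6, 9, 13, 19, 28]
print(S(N, 3, 4, 1))  # N_1 + N_5 + N_9 + N_13
```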
arxiv_dataset-97541805.02355 | Adaptive Polarization Control for Coherent Optical Links with
Polarization Multiplexed Carrier
eess.SP
Self-homodyne systems with polarization multiplexed carrier offer an LO-less
coherent receiver with simplified signal processing requirements that can be a
good candidate for high-speed short-reach data center interconnects. The
practical implementation of these systems is limited by the requirement of
polarization control at the receiver end for separating the carrier and the
modulated signal. In this paper, the effect of polarization impairments in
polarization-diversity-based systems is studied and modeled. A novel and
practical adaptive polarization control technique based on optical power
feedback from one polarization is proposed for polarization multiplexed carrier
based systems and verified through simulation results. The application of the
proposed concept is experimentally demonstrated also for a QPSK system with
polarization multiplexed carrier.
| arxiv topic:eess.SP |
arxiv_dataset-97551805.02455 | Positive Gaussian kernels also have Gaussian minimizers
math.FA
We study lower bounds on multilinear operators with Gaussian kernels acting
on Lebesgue spaces, with exponents below one. We put forward natural conditions
when the optimal constant can be computed by inspecting centered Gaussian
functions only, and we give necessary and sufficient conditions for this
constant to be positive. Our work provides a counterpart to Lieb's results on
maximizers of multilinear operators with real Gaussian kernels, also known as
the multidimensional Brascamp-Lieb inequality. It unifies and extends several
inverse inequalities.
| arxiv topic:math.FA |
arxiv_dataset-97561805.02555 | Bifurcation structure of periodic patterns in the Lugiato-Lefever
equation with anomalous dispersion
nlin.PS
We study the stability and bifurcation structure of spatially extended
patterns arising in nonlinear optical resonators with a Kerr-type
nonlinearity and anomalous group velocity dispersion, as described by the
Lugiato-Lefever equation. While there exists a one-parameter family of patterns
with different wavelengths, we focus our attention on the pattern with critical
wave number $k_c$ arising from the modulational instability of the homogeneous
state. We find that the branch of solutions associated with this pattern
connects to a branch of patterns with wave number $2k_c$. This next branch
also connects to a branch of patterns with double wave number, this time
$4k_c$, and this process repeats through a series of 2:1 spatial resonances.
For values of the detuning parameter approaching $\theta = 2$ from below, the
critical wave number $k_c$ approaches zero and this bifurcation structure is
related to the foliated snaking bifurcation structure organizing spatially
localized bright solitons. Secondary bifurcations that these patterns undergo
and the resulting temporal dynamics are also studied.
| arxiv topic:nlin.PS |
arxiv_dataset-97571805.02655 | On the physical mechanisms governing the cloud lifecycle in the Central
Molecular Zone of the Milky Way
astro-ph.GA
We apply an analytic theory for environmentally-dependent molecular cloud
lifetimes to the Central Molecular Zone of the Milky Way. Within this theory,
the cloud lifetime in the Galactic centre is obtained by combining the
time-scales for gravitational instability, galactic shear, epicyclic
perturbations and cloud-cloud collisions. We find that at galactocentric radii
$\sim 45$-$120$ pc, corresponding to the location of the '100-pc stream', cloud
evolution is primarily dominated by gravitational collapse, with median cloud
lifetimes between 1.4 and 3.9 Myr. At all other galactocentric radii, galactic
shear dominates the cloud lifecycle, and we predict that molecular clouds are
dispersed on time-scales between 3 and 9 Myr, without a significant degree of
star formation. Along the outer edge of the 100-pc stream, between radii of 100
and 120 pc, the time-scales for epicyclic perturbations and gravitational
free-fall are similar. This similarity of time-scales lends support to the
hypothesis that, depending on the orbital geometry and timing of the orbital
phase, cloud collapse and star formation in the 100-pc stream may be triggered
by a tidal compression at pericentre. Based on the derived time-scales, this
should happen in approximately 20 per cent of all accretion events onto the
100-pc stream.
| arxiv topic:astro-ph.GA |
arxiv_dataset-97581805.02755 | EngineCL: Usability and Performance in Heterogeneous Computing
cs.DC
Heterogeneous systems have become one of the most common architectures today,
thanks to their excellent performance and energy efficiency. However, their
heterogeneity makes them very complex to program, and even more so to achieve
performance portability across different devices. This paper presents
EngineCL, a new OpenCL-based runtime system that greatly simplifies the
co-execution of a single massive data-parallel kernel on all the devices of a
heterogeneous system. It performs a set of low-level tasks, including managing
the devices and their disjoint memory spaces and scheduling the workload
between the system devices, while providing a layered API. EngineCL has been
validated on two compute nodes (an HPC system and a commodity system) that
combine six devices with different architectures. Experimental results show
that it has excellent usability compared with OpenCL; a maximum overhead of
2.8% compared to the native version under loads of less than a second of
execution, with a tendency towards zero for longer execution times; and it can
reach an average efficiency of 0.89 when balancing the load.
| arxiv topic:cs.DC |
arxiv_dataset-97591805.02855 | Tile2Vec: Unsupervised representation learning for spatially distributed
data
cs.CV cs.LG stat.ML
Geospatial analysis lacks methods like the word vector representations and
pre-trained networks that significantly boost performance across a wide range
of natural language and computer vision tasks. To fill this gap, we introduce
Tile2Vec, an unsupervised representation learning algorithm that extends the
distributional hypothesis from natural language -- words appearing in similar
contexts tend to have similar meanings -- to spatially distributed data. We
demonstrate empirically that Tile2Vec learns semantically meaningful
representations on three datasets. Our learned representations significantly
improve performance in downstream classification tasks and, similar to word
vectors, visual analogies can be obtained via simple arithmetic in the latent
space.
| arxiv topic:cs.CV cs.LG stat.ML |
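The latent-space analogy claim above can be illustrated with toy embeddings. Everything below (the tile names, the vectors, and their additive structure) is hypothetical and hand-constructed to mimic the word-vector analogy mechanism; Tile2Vec learns such vectors from imagery rather than building them by hand:

```python
import numpy as np

# Hypothetical tile embeddings with an additive structure, for illustration.
rng = np.random.default_rng(0)
urban, forest, coast, desert = rng.normal(size=(4, 16))
tiles = {
    "urban_coast": urban + coast,
    "urban_forest": urban + forest,
    "urban_desert": urban + desert,
    "rural_coast": coast,
    "rural_forest": forest,
    "rural_desert": desert,
}

def analogy(a, b, c, tiles):
    """Nearest tile to tiles[a] - tiles[b] + tiles[c] (excluding a, b, c)."""
    q = tiles[a] - tiles[b] + tiles[c]
    return min(
        (k for k in tiles if k not in (a, b, c)),
        key=lambda k: np.linalg.norm(tiles[k] - q),
    )

# "urban coast" - "rural coast" + "rural forest" lands near "urban forest"
```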
arxiv_dataset-97601805.02955 | Selective correlations in finite quantum systems and the Desargues
property
math-ph math.MP quant-ph
The Desargues property is well known in the context of projective geometry.
An analogous property is presented in the context of both classical and
quantum physics. In a classical context, the Desargues property implies that
two logical circuits with the same input show selective correlations in their
outputs. In general their outputs are uncorrelated, but if the output of
one has a particular value, then the output of the other has another particular
value. In a quantum context, the Desargues property implies that two
experiments, each of which involves two successive projective measurements,
have selective correlations. For a particular set of projectors, if in one
experiment the second measurement does not change the output of the first
measurement, then the same is true in the other experiment.
| arxiv topic:math-ph math.MP quant-ph |
arxiv_dataset-97611805.03055 | Parallel Graph Connectivity in Log Diameter Rounds
cs.DS cs.DC
We study the graph connectivity problem in the MPC model. On an undirected
graph with $n$ nodes and $m$ edges, $O(\log n)$-round connectivity algorithms
have been known for over 35 years. However, no algorithms with better
complexity bounds were known. In this work, we give fully scalable, faster
algorithms for the connectivity problem by parameterizing the time complexity
as a function of the diameter of the graph. Our main result is an $O(\log D
\log\log_{m/n} n)$-time connectivity algorithm for diameter-$D$ graphs, using
$\Theta(m)$ total memory. If our algorithm can use more memory, it can
terminate in fewer rounds, and there is no lower bound on the memory per
processor.
We extend our results to related graph problems such as spanning forest,
finding a DFS sequence, exact/approximate minimum spanning forest, and
bottleneck spanning forest. We also show that achieving similar bounds for
reachability in directed graphs would imply faster boolean matrix
multiplication algorithms.
We introduce several new algorithmic ideas. We describe a general technique
called double exponential speed problem size reduction which roughly means that
if we can use total memory $N$ to reduce a problem from size $n$ to $n/k$, for
$k=(N/n)^{\Theta(1)}$ in one phase, then we can solve the problem in
$O(\log\log_{N/n} n)$ phases. In order to achieve this fast reduction for graph
connectivity, we use a multistep algorithm. One key step is a carefully
constructed truncated broadcasting scheme where each node broadcasts neighbor
sets to its neighbors in a way that limits the size of the resulting neighbor
sets. Another key step is random leader contraction, where we choose a smaller
set of leaders than many previous works do.
| arxiv topic:cs.DS cs.DC |
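A back-of-the-envelope sketch of the "double exponential speed problem size reduction" argument above: if each phase shrinks the problem from $n$ to $n/k$ with $k=(N/n)^{\Theta(1)}$, the remaining size falls double-exponentially, so only $O(\log\log_{N/n} n)$ phases are needed. The exponent $c = 0.5$ and the toy sizes below are illustrative choices, not values from the paper:

```python
# Toy model: with total memory N, one phase shrinks the problem
# from n to n/k, where k = (N/n)**c.
def phases(n, N, c=0.5):
    count = 0
    while n > 1:
        k = (N / n) ** c
        assert k > 1, "need N > n for the reduction to make progress"
        n /= k
        count += 1
    return count

# More total memory relative to n means drastically fewer phases.
```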
arxiv_dataset-97621805.03155 | Symplectic Pseudospectral Time-Domain Scheme for Solving Time-Dependent
Schrodinger Equation
physics.comp-ph math-ph math.MP math.NA math.SG
A symplectic pseudospectral time-domain (SPSTD) scheme is developed to solve
the Schrodinger equation. Instead of the spatial finite differences of the
conventional finite-difference time-domain (FDTD) method, the fast Fourier
transform is used to calculate the spatial derivatives. In the time domain,
the scheme adopts high-order symplectic integrators to simulate the time
evolution of the Schrodinger equation. A detailed numerical study of the
eigenvalue problems of a 1D quantum well and a 3D harmonic oscillator is
carried out. The simulation results strongly confirm the advantages of the
SPSTD scheme over the traditional PSTD method and the FDTD approach.
Furthermore, comparison with the traditional PSTD method and the non-symplectic
Runge-Kutta (RK) method shows that the explicit SPSTD scheme, which has
infinite order of accuracy in the space domain and is energy-conserving in the
time domain, is well suited for long-term simulation.
| arxiv topic:physics.comp-ph math-ph math.MP math.NA math.SG |
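The pseudospectral idea of replacing finite differences by FFT-based spatial derivatives can be sketched in a few lines (a generic illustration, not the authors' code):

```python
import numpy as np

# Spectral derivative on a periodic grid: transform, multiply by i*k,
# transform back -- the PSTD replacement for spatial finite differences.
n, L = 64, 2 * np.pi
x = np.arange(n) * L / n
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers

def spectral_derivative(f):
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

err = np.max(np.abs(spectral_derivative(np.sin(x)) - np.cos(x)))
# err is at machine-precision level for band-limited functions
```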
arxiv_dataset-97631805.03255 | A Fixed Mesh Method With Immersed Finite Elements for Solving Interface
Inverse Problems
math.NA
We present a new fixed-mesh algorithm for solving a class of interface
inverse problems for typical elliptic interface problems. These interface
inverse problems are formulated as shape optimization problems whose
objective functionals depend on the shape of the interface. Regardless of the
location of the interface, both the governing partial differential equations
and the objective functional are discretized optimally, with respect to the
involved polynomial space, by an immersed finite element (IFE) method on a
fixed mesh. Furthermore, the formula for the gradient of the discretized
objective function is derived within the IFE framework; it can be computed
accurately and efficiently through the discretized adjoint procedure. Features
of the proposed IFE method on a fixed mesh are demonstrated by its
application to three representative interface inverse problems: an interface
inverse problem with an internal measurement on a sub-domain, a
Dirichlet-Neumann type inverse problem whose data are given on the boundary,
and a heat dissipation design problem.
| arxiv topic:math.NA |
arxiv_dataset-97641805.03355 | Exponential Stability Estimate of Symplectic Integrators for Integrable
Hamiltonian Systems
math.DS math.NA
We prove a Nekhoroshev-type theorem for nearly integrable symplectic maps. As
an application of the theorem, we obtain the exponential stability of
symplectic algorithms. In addition, we obtain bounds on the perturbation, the
variation of the action variables, and the exponential time, respectively.
These results provide new insight into the nonlinear stability analysis of
symplectic algorithms. Combined with our previous results on the numerical KAM
theorem for symplectic algorithms (2018), we give a more complete
characterization of the complex nonlinear dynamical behavior of symplectic
algorithms.
| arxiv topic:math.DS math.NA |
arxiv_dataset-97651805.03455 | Homology spheres yielding lens spaces
math.GT
It was previously shown by the author that there exist 20 families of Dehn
surgeries in the Poincar\'e homology sphere yielding lens spaces. In this
paper, we give the concrete knot diagrams of these families and extend them to
families of lens
space surgeries in Brieskorn homology spheres. We illustrate families of lens
space surgeries in $\Sigma(2,3,6n\pm1)$ and $\Sigma(2,2s+1,2(2s+1)n\pm1)$ and
so on. As other examples, we give lens space surgeries in graph homology
spheres, which are obtained by splicing two Brieskorn homology spheres.
| arxiv topic:math.GT |
arxiv_dataset-97661805.03555 | Photocarrier extraction in GaAsSb/GaAsN type-II QW superlattice solar
cells
cond-mat.mes-hall cond-mat.mtrl-sci physics.app-ph physics.optics
Photocarrier transport and extraction in GaAsSb/GaAsN type-II quantum well
superlattices are investigated by means of inelastic quantum transport
calculations based on the non-equilibrium Green's function formalism.
Evaluation of the local density of states and of the spectral current flow
enables the identification of different regimes for carrier localization,
transport, and extraction as a function of configurational parameters. These
include the number of periods, the thicknesses of the individual layers in one
period, the built-in electric field, and the temperature of operation. The
results for the carrier extraction efficiency are related to experimental data
for different symmetric GaAsSb/GaAsN type-II quantum well superlattice solar
cell devices and provide a qualitative explanation for the experimentally
observed dependence of photovoltaic device performance on period thickness.
| arxiv topic:cond-mat.mes-hall cond-mat.mtrl-sci physics.app-ph physics.optics |
arxiv_dataset-97671805.03655 | Peculiar Supernovae
astro-ph.HE
What makes a supernova truly `peculiar'? In this chapter we attempt to
address this question by tracing the history of the use of `peculiar' as a
descriptor of non-standard supernovae back to the original binary spectroscopic
classification of Type I vs. Type II proposed by Minkowski (1941). A handful of
noteworthy examples (including SN 2012au, SN 2014C, iPTF14hls, and iPTF15eqv)
are highlighted to illustrate a general theme: classes of supernovae that were
once thought to be peculiar are later seen as logical branches of standard
events. This is not always the case, however, and we discuss ASASSN-15lh as an
example of a transient with an origin that remains contentious. We remark on
how late-time observations at all wavelengths (radio-through-X-ray) that probe
1) the kinematic and chemical properties of the supernova ejecta and 2) the
progenitor star system's mass loss in the terminal phases preceding the
explosion, have often been critical in understanding the nature of seemingly
unusual events.
| arxiv topic:astro-ph.HE |
arxiv_dataset-97681805.03755 | EPA-RIMM: A Framework for Dynamic SMM-based Runtime Integrity
Measurement
cs.CR
Runtime integrity measurements identify unexpected changes in operating
systems and hypervisors during operation, enabling early detection of
persistent threats. System Management Mode, a privileged x86 CPU mode, has the
potential to effectively perform such rootkit detection. Previously proposed
SMM-based approaches demonstrated effective detection capabilities, but at a
cost of performance degradation and software side effects. In this paper we
introduce our solution to these problems, an SMM-based Extensible, Performance
Aware Runtime Integrity Measurement Mechanism called EPA-RIMM. The EPA-RIMM
architecture features a performance-sensitive design that decomposes large
integrity measurements and schedules them to control perturbation and side
effects. EPA-RIMM's decomposition of long-running measurements into shorter
tasks, extensibility, and use of SMM complicates the efforts of malicious code
to detect or avoid the integrity measurements. Using a Minnowboard-based
prototype, we demonstrate its detection capabilities and performance impacts.
Early results are promising, and suggest that EPA-RIMM will meet
production-level performance constraints while continuously monitoring key OS
and hypervisor data structures for signs of attack.
| arxiv topic:cs.CR |
arxiv_dataset-97691805.03855 | Modelling the matter bispectrum towards nonlinear scales - two and three
loops in perturbation theories
astro-ph.CO
I compute the matter bispectrum of large-scale structure up to two loops in
the Standard Perturbation Theory and up to three loops in the MPTbreeze
renormalised perturbation theory, determining the contributing loop diagrams
and evaluating them numerically. In the process I remove the leading
divergences in the integrands, thus making them infrared-safe. By comparing the
results to numerical simulations, I show that in the case of the Standard
Perturbation Theory, the bispectrum at two loops is more accurate than at one
loop, up to $k_{\textrm{max}} \sim 0.09 \, h/\textrm{Mpc}$ at $z=0$ and
$k_{\textrm{max}} \sim 0.11 \, h/\textrm{Mpc}$ at $z=1$. The MPTbreeze can be
employed to accurately model the matter bispectrum up to $k_{\textrm{max}} \sim
0.17 \, h/\textrm{Mpc}$ at $z=0$ and $k_{\textrm{max}} \sim 0.24 \,
h/\textrm{Mpc}$ at $z=1$ using the results at three loops.
| arxiv topic:astro-ph.CO |
arxiv_dataset-97701805.03955 | Enhanced entanglement criterion via symmetric informationally complete
measurements
quant-ph
We show that a special type of measurements, called symmetric informationally
complete positive operator-valued measures (SIC POVMs), provide a stronger
entanglement detection criterion than the computable cross-norm or realignment
criterion based on local orthogonal observables. As an illustration, we
demonstrate the enhanced entanglement detection power in simple systems of
qubit and qutrit pairs. This observation highlights the significance of SIC
POVMs for entanglement detection.
| arxiv topic:quant-ph |
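For the qubit case mentioned above, the (essentially unique) four-outcome SIC POVM can be built from tetrahedral Bloch vectors and its defining properties verified directly. This is the standard textbook construction, shown here only as an illustration:

```python
import numpy as np

# Qubit SIC POVM: four effects E_k = (I + b_k . sigma)/4, with Bloch
# vectors b_k at the vertices of a regular tetrahedron.
I2 = np.eye(2)
sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]])
bloch = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
E = [(I2 + np.einsum("i,ijk->jk", b, sigma)) / 4 for b in bloch]

completeness = sum(E)                                  # equals the identity
overlaps = np.array([[np.trace(Ej @ Ek).real for Ek in E] for Ej in E])
# SIC condition: Tr(E_j E_k) = (2*delta_jk + 1)/12
```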
arxiv_dataset-97711805.04055 | Reconfiguration of Satisfying Assignments and Subset Sums: Easy to Find,
Hard to Connect
cs.CC
We consider the computational complexity of reconfiguration problems, in
which one is given two combinatorial configurations satisfying some
constraints, and is asked to transform one into the other using elementary
transformations, while satisfying the constraints at all times. Such problems
appear naturally in many contexts, such as model checking, motion planning,
enumeration and sampling, and recreational mathematics. We provide hardness
results for problems in this family, in which the constraints and operations
are particularly simple. More precisely, we prove the PSPACE-completeness of
the following decision problems:
$\bullet$ Given two satisfying assignments to a planar monotone instance of
Not-All-Equal 3-SAT, can one assignment be transformed into the other by single
variable `flips' (assignment changes), preserving satisfiability at every step?
$\bullet$ Given two subsets of a set S of integers with the same sum, can one
subset be transformed into the other by adding or removing at most three
elements of S at a time, such that the intermediate subsets also have the same
sum?
$\bullet$ Given two points in $\{0,1\}^n$ contained in a polytope P specified
by a constant number of linear inequalities, is there a path in the n-hypercube
connecting the two points and contained in P?
These problems can be interpreted as reconfiguration analogues of standard
problems in NP. Interestingly, the instances of the NP problems that appear as
input to the reconfiguration problems in our reductions can be shown to lie in
P. In particular, the elements of S and the coefficients of the inequalities
defining P can be restricted to have logarithmic bit-length.
| arxiv topic:cs.CC |
arxiv_dataset-97721805.04155 | Efficient and flexible MATLAB implementation of 2D and 3D elastoplastic
problems
math.NA
We propose an effective and flexible way to implement 2D and 3D elastoplastic
problems in MATLAB using fully vectorized codes. Our technique is applied to a
broad class of the problems including perfect plasticity or plasticity with
hardening and several yield criteria. The problems are formulated in terms of
displacements, discretized by the implicit Euler method in time and the finite
element method in space, and solved by the semismooth Newton method. We discuss
in detail selected models with the von Mises and Prager-Drucker yield criteria
and four types of finite elements. The related codes are available for
download. Particular attention is devoted to the assembly of tangential
stiffness matrices. Since these matrices are repeatedly constructed in each
Newton iteration and in each time step, we propose a vectorized assembly
procedure different from those currently known for elastic stiffness matrices. The main
idea is based on a construction of two large and sparse matrices representing
the strain-displacement and tangent operators, respectively, where the former
matrix remains fixed and the latter one is updated only at some integration
points. Comparisons with other available MATLAB codes show that our technique
is also efficient for purely elastic problems. In elastoplasticity, the
assembly times are linearly proportional to the number of integration points
in the plastic phase, and the additional time due to plasticity never exceeds
the assembly time of the elastic stiffness matrix.
| arxiv topic:math.NA |
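The two-matrix assembly idea above, a fixed strain-displacement matrix $B$ and a block-diagonal tangent operator $D$ updated only at plastic integration points, with $K = B^T D B$, can be sketched as follows. This is a Python illustration of the MATLAB strategy; all sizes and the toy tangent blocks are made up:

```python
import numpy as np
import scipy.sparse as sp

# K = B^T D B: sparse B is built once; block-diagonal D is updated
# only at integration points that are in the plastic phase.
n_ip, n_dof, n_strain = 5, 8, 3   # toy sizes: integration pts, dofs, strain comps
B = sp.random(n_ip * n_strain, n_dof, density=0.3, random_state=1, format="csr")

def tangent_operator(plastic):
    """Elastic identity blocks everywhere; softened blocks where plastic."""
    blocks = []
    for p in plastic:
        M = np.eye(n_strain)
        if p:
            M -= 0.1 * np.ones((n_strain, n_strain)) / n_strain
        blocks.append(M)
    return sp.block_diag(blocks, format="csr")

D = tangent_operator([False, True, False, False, True])
K = (B.T @ D @ B).toarray()   # tangential stiffness matrix
```

Only the blocks of `D` at plastic points change between Newton iterations, so the expensive construction of `B` is paid once.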
arxiv_dataset-97731805.04255 | Proceedings Sixth Workshop on Trends in Functional Programming in
Education
cs.PL
The Sixth International Workshop on Trends in Functional Programming in
Education, TFPIE 2017, was held on 22 June 2017 at the University of Kent in
Canterbury, UK, and was co-located with TFP, the Symposium on Trends in
Functional Programming.
The goal of TFPIE is to gather researchers, professors, teachers, and all
professionals interested in functional programming in education. This includes
the teaching of functional programming, but also the application of functional
programming as a tool for teaching other topics and disciplines.
A particular topic of this year's TFPIE was MOOCs and other forms of online
learning. As well as a session on this topic, we were delighted to welcome
Heather Miller of EPFL and Northeastern University to give a keynote entitled
"Functional Programming for All! Scaling a MOOC for Students and
Professionals Alike". Heather works on and around the Scala programming
language and is Executive Director of the Scala Center.
| arxiv topic:cs.PL |
arxiv_dataset-97741805.04355 | Mixing and formation of layers by internal wave forcing
physics.flu-dyn
The energy pathways from propagating internal waves to the scales of
irreversible mixing in the ocean are not fully described. In the ocean
interior, the triadic resonant instability is an intrinsic destabilization
process that may enhance the energy cascade away from topographies. The present
study focuses on the integrated impact of mixing processes induced by a
propagative normal mode-1 over long term experiments in an idealised setup. The
internal wave dynamics and the evolution of the density profile are followed
using the light attenuation technique. Diagnostics of the turbulent diffusivity
$K_{T}$ and background potential energy $BPE$ are provided. Mixing effects
result in a partially mixed layer colocated with the region of maximum shear
induced by the forcing normal mode. The maximum measured turbulent diffusivity
is 250 times larger than the molecular value, showing that diapycnal mixing is
largely enhanced by small scale turbulent processes. Intermittency and
reversible energy transfers are discussed to bridge the gap between the present
diagnostic and the larger values measured in Dossmann et al, Experiments in
Fluids, 57(8), 132 (2016). The mixing efficiency $\eta$ is assessed by relating
the $BPE$ growth to the linearized $KE$ input. One finds a value of
$\Gamma=12-19\%$ larger than the mixing efficiency in the case of breaking
interfacial wave. After several hours of forcing, the development of staircases
in the density profile is observed. This mechanism has been previously observed
in experiments with weak homogeneous turbulence and explained by argument. The
present experiments suggest that internal wave forcing could also induce the
formation of density interfaces in the ocean.
| arxiv topic:physics.flu-dyn |
arxiv_dataset-97751805.04455 | On the distribution of spontaneous potentials intervals in nervous
transmission
physics.bio-ph q-bio.NC
One of the main challenges in biophysics teaching is how to motivate
students to appreciate the beauty of theoretical formulations. This is crucial
when modeling a system requires numerical calculations to achieve realistic
results. Because of the massive use of software, students often become mere
users of computational tools without capturing the essence of the formulation
and of the subsequent problem solution. It is therefore necessary for
instructors to find innovative ways of letting students develop their ability
to deal with mathematical modelling. To address this issue, one can highlight
Benford's law, thanks to its simple formulation, easy computational
implementation and wide range of applications. Indeed, this law enables
students to carry out their own data analysis using free software packages. It
is among the several power or scaling laws found in biological systems.
However, to the best of our knowledge, it has not yet been applied in cell
biophysics. Beyond its many applications in other fields, the neuromuscular
junction represents a remarkable substrate for the learning and teaching of
complex systems. Thus, in this work, we applied both the classical and a
generalized form of Benford's law to examine whether electrophysiological data
recorded from the neuromuscular junction conform to the anomalous number law.
The results indicate that nerve-muscle communications conform to the
generalized Benford's law better than to the seminal formulation. A biological
scenario based on our electrophysiological measurements is used to interpret
the theoretical analysis.
| arxiv topic:physics.bio-ph q-bio.NC |
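The classical form of Benford's law referenced above assigns leading digit $d$ the probability $\log_{10}(1+1/d)$; a quick numerical check against a standard Benford-conforming sequence (powers of 2, our choice of example, not the paper's data):

```python
import math
from collections import Counter

# Classical Benford's law: P(d) = log10(1 + 1/d) for leading digit d.
benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

# Empirical leading-digit distribution of the first 10000 powers of 2.
leading = Counter(int(str(2 ** n)[0]) for n in range(1, 10001))
empirical = {d: leading[d] / 10000 for d in range(1, 10)}
max_dev = max(abs(empirical[d] - benford[d]) for d in range(1, 10))
# max_dev is small: powers of 2 follow Benford's law closely
```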
arxiv_dataset-97761805.04555 | Detection of intercluster gas in superclusters using the thermal
Sunyaev-Zel'dovich effect
astro-ph.CO
Using a thermal Sunyaev-Zel'dovich (tSZ) signal, we search for hot gas in
superclusters identified using the Sloan Digital Sky Survey Data Release 7
(SDSS/DR7) galaxies. We stack a Comptonization y map produced by the Planck
Collaboration around the superclusters and detect the tSZ signal at a
significance of 6.4 sigma. We further search for an intercluster component of
gas in the superclusters. For this, we remove the intracluster gas in the
superclusters by masking all galaxy groups/clusters detected by the Planck tSZ,
ROSAT X-ray, and SDSS optical surveys down to a total mass of 10^13 Msun. We
report the first detection of intercluster gas in superclusters with y = (3.5
+- 1.4) * 10^(-8) at a significance of 2.5 sigma. Assuming a simple isothermal
and flat density distribution of intercluster gas over superclusters, the
estimated baryon density is (Omega_gas / Omega_b) * (T_e/(8*10^6 K)) = 0.067 +-
0.006 +- 0.025. This quantity is inversely proportional to the temperature,
therefore taking values from simulations and observations, we find that the gas
density in superclusters may account for 17 - 52 % of missing baryons at low
redshifts. A better understanding of the physical state of gas in the
superclusters is required to accurately estimate the contribution of our
measurements to missing baryons.
| arxiv topic:astro-ph.CO |
arxiv_dataset-97771805.04655 | Learning to Ask Good Questions: Ranking Clarification Questions using
Neural Expected Value of Perfect Information
cs.CL
Inquiry is fundamental to communication, and machines cannot effectively
collaborate with humans unless they can ask questions. In this work, we build a
neural network model for the task of ranking clarification questions. Our model
is inspired by the idea of expected value of perfect information: a good
question is one whose expected answer will be useful. We study this problem
using data from StackExchange, a plentiful online resource in which people
routinely ask clarifying questions to posts so that they can better offer
assistance to the original poster. We create a dataset of clarification
questions consisting of ~77K posts paired with a clarification question (and
answer) from three domains of StackExchange: askubuntu, unix and superuser. We
evaluate our model on 500 samples of this dataset against expert human
judgments and demonstrate significant improvements over controlled baselines.
| arxiv topic:cs.CL |
arxiv_dataset-97781805.04755 | A Simple and Effective Model-Based Variable Importance Measure
stat.ML cs.LG
In the era of "big data", it is becoming more of a challenge to not only
build state-of-the-art predictive models, but also gain an understanding of
what's really going on in the data. For example, it is often of interest to
know which, if any, of the predictors in a fitted model are relatively
influential on the predicted outcome. Some modern algorithms---like random
forests and gradient boosted decision trees---have a natural way of quantifying
the importance or relative influence of each feature. Other algorithms---like
naive Bayes classifiers and support vector machines---are not capable of doing
so and model-free approaches are generally used to measure each predictor's
importance. In this paper, we propose a standardized, model-based approach to
measuring predictor importance across the growing spectrum of supervised
learning algorithms. Our proposed method is illustrated through both simulated
and real data examples. The R code to reproduce all of the figures in this
paper is available in the supplementary materials.
| arxiv topic:stat.ML cs.LG |
arxiv_dataset-97791805.04855 | Covariance Pooling For Facial Expression Recognition
cs.CV
Classifying facial expressions into different categories requires capturing
regional distortions of facial landmarks. We believe that second-order
statistics such as covariance are better able to capture such distortions in
regional facial features. In this work, we explore the benefits of using a
manifold network structure for covariance pooling to improve facial
expression recognition. In particular, we first employ this kind of manifold
network in conjunction with traditional convolutional networks for spatial
pooling within individual image feature maps in an end-to-end deep learning
manner. By doing so, we are able to achieve a recognition accuracy of 58.14% on
the validation set of Static Facial Expressions in the Wild (SFEW 2.0) and
87.0% on the validation set of the Real-World Affective Faces (RAF) database.
Both of these results are the best results we are aware of. Besides, we
leverage covariance pooling to capture the temporal evolution of per-frame
features for video-based facial expression recognition. Our reported results
demonstrate the advantage of pooling image-set features temporally by stacking
the designed manifold network for covariance pooling on top of convolutional
network layers.
| arxiv topic:cs.CV |
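Covariance pooling of a feature map reduces to computing the $C \times C$ covariance of the per-location $C$-dimensional descriptors; a minimal sketch (shapes and names are illustrative, and the manifold-network layers on top are omitted):

```python
import numpy as np

# Covariance pooling: summarize an H x W x C feature map by the C x C
# covariance of its per-location C-dimensional descriptors, i.e. the
# second-order statistics of the regional features.
def covariance_pool(fmap):
    h, w, c = fmap.shape
    X = fmap.reshape(h * w, c)
    X = X - X.mean(axis=0)
    return (X.T @ X) / (h * w - 1)

fmap = np.random.default_rng(0).normal(size=(7, 7, 16))  # toy feature map
S = covariance_pool(fmap)   # symmetric positive semi-definite, 16 x 16
```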
arxiv_dataset-97801805.04955 | Low-pass Recurrent Neural Networks - A memory architecture for
longer-term correlation discovery
cs.LG cs.AI stat.ML
Reinforcement learning (RL) agents performing complex tasks must be able to
remember observations and actions across sizable time intervals. This is
especially true during the initial learning stages, when exploratory behaviour
can increase the delay between specific actions and their effects. Many new or
popular approaches for learning these distant correlations employ
backpropagation through time (BPTT), but this technique requires storing
observation traces long enough to span the interval between cause and effect.
Besides memory demands, learning dynamics like vanishing gradients and slow
convergence due to infrequent weight updates can reduce BPTT's practicality;
meanwhile, although online recurrent network learning is a developing topic,
most approaches are not efficient enough to use as replacements. We propose a
simple, effective memory strategy that can extend the window over which BPTT
can learn without requiring longer traces. We explore this approach empirically
on a few tasks and discuss its implications.
| arxiv topic:cs.LG cs.AI stat.ML |
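The abstract does not spell out the memory mechanism, but the "low-pass" idea suggested by the title can be illustrated, purely as a sketch, by an exponential moving average of feature activations: a summary that carries long-range information in constant memory, without storing the full observation trace BPTT would need. The function name and decay parameter below are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def lowpass_memory(features, alpha=0.99):
    """Exponential moving average over a feature sequence (illustrative).

    Each step keeps m_t = alpha * m_{t-1} + (1 - alpha) * x_t, so
    information decays slowly (time constant ~ 1/(1 - alpha)) while
    only the current summary vector is stored.
    """
    m = np.zeros_like(features[0], dtype=float)
    summaries = []
    for x in features:
        m = alpha * m + (1.0 - alpha) * np.asarray(x, dtype=float)
        summaries.append(m.copy())
    return summaries

# A constant input: the summary gradually converges toward the input value.
out = lowpass_memory([np.ones(3)] * 500, alpha=0.99)
```

With alpha = 0.99 the summary still reflects observations hundreds of steps old, which is the kind of longer-term correlation the abstract targets.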
arxiv_dataset-97811805.05055 | A ferroelectric problem beyond the conventional scaling law
cond-mat.mtrl-sci
Ferroelectric (FE) size effects against the scaling law were reported
recently in ultrathin group-IV monochalcogenides, and extrinsic effects (e.g.
defects and lattice strains) were often invoked to explain them. Via first-principles based
finite-temperature ($T$) simulations, we reveal that these abnormalities are
intrinsic to their unusual symmetry breaking from bulk to thin film. Changes of
the electronic structures result in different order parameters characterizing
the FE phase transition in bulk and in thin films, and invalidation of the
scaling law. Beyond the scaling-law $T_{\text{c}}$ limit, this mechanism can
help predict materials promising for room-$T$ ultrathin FE devices of broad
interest.
| arxiv topic:cond-mat.mtrl-sci |
arxiv_dataset-97821805.05155 | Boundary rigidity of negatively-curved asymptotically hyperbolic
surfaces
math.DG math-ph math.AP math.MP
In the spirit of Otal and Croke, we prove that a negatively-curved
asymptotically hyperbolic surface is boundary distance rigid, where the
distance between two points on the boundary at infinity is defined by a
renormalized quantity.
| arxiv topic:math.DG math-ph math.AP math.MP |
arxiv_dataset-97831805.05255 | Irreducible Characters for the Symmetric Groups and Kostka Matrices
math.RT
In an earlier paper [1] it was shown that the Frobenius compound characters
for the symmetric groups are related to the irreducible characters by a linear
relation that involves a unitriangular coupling matrix that gives the Frobenius
characters in terms of linear combinations of the irreducible characters. It is
desirable to invert this relationship since we have formulas for the Frobenius
characters and want the values for the irreducible characters. This inversion
is straightforward and yields not only the irreducible characters but also the
coupling matrix, which turns out to be the Kostka matrix in the original
direction. We show that a modification of the Frobenius monomial identity,
equation (22), produces a monomial formula that yields the inverse of the
Kostka matrix without involving the characters of either type.
However, it is a formidable task to execute this procedure for symmetric groups
of even modest order. Alternatively, the inversion by means of the unitriangular
coupling matrix produces the Kostka matrix and the irreducible characters
simultaneously and with much less effort than required for the monomial
approach. Moreover there is a surprise.
| arxiv topic:math.RT |
arxiv_dataset-97841805.05355 | Discovery and Dynamical Analysis of an Extreme Trans-Neptunian Object
with a High Orbital Inclination
astro-ph.EP
We report the discovery and dynamical analysis of 2015 BP$_{519}$, an extreme
Trans-Neptunian Object detected by the Dark Energy Survey at a
heliocentric distance of 55 AU and absolute magnitude Hr= 4.3. The current
orbit, determined from a 1110-day observational arc, has semi-major axis
$a\approx$ 450 AU, eccentricity $e\approx$ 0.92 and inclination $i\approx$ 54
degrees. With these orbital elements, 2015 BP$_{519}$ is the most extreme TNO
discovered to date, as quantified by the reduced Kozai action, which is a
conserved quantity at fixed semi-major axis $a$ for axisymmetric perturbations.
We discuss the orbital stability and evolution of this object in the context of
the known Solar System, and find that 2015 BP$_{519}$ displays rich dynamical
behavior, including rapid diffusion in semi-major axis and more constrained
variations in eccentricity and inclination. We also consider the long term
orbital stability and evolutionary behavior within the context of the Planet
Nine Hypothesis, and find that BP$_{519}$ adds to the circumstantial evidence
for the existence of this proposed new member of the Solar System, as it would
represent the first member of the population of high-i, $\varpi$-shepherded
TNOs.
| arxiv topic:astro-ph.EP |
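The reduced Kozai action mentioned above is, up to normalization, commonly taken to be the Kozai-Lidov quantity $\sqrt{1-e^2}\cos i$, conserved at fixed semi-major axis for axisymmetric perturbations. Plugging in the elements reported in the abstract gives a quick sense of how extreme the orbit is (values near zero are extreme); this is an independent back-of-the-envelope check, not a reproduction of the paper's analysis.

```python
import math

def reduced_kozai_action(e, inc_deg):
    """Kozai-Lidov quantity sqrt(1 - e^2) * cos(i), conserved at fixed
    semi-major axis under axisymmetric perturbations (common convention)."""
    return math.sqrt(1.0 - e**2) * math.cos(math.radians(inc_deg))

# Approximate elements of 2015 BP519 from the abstract: e ~ 0.92, i ~ 54 deg.
theta = reduced_kozai_action(e=0.92, inc_deg=54.0)  # ~ 0.23, far below 1
```

For a circular, uninclined orbit the quantity equals 1, so a value around 0.23 underlines how extreme this orbit is.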
arxiv_dataset-97851805.05455 | Experimental assignment of many-electron excitations in the
photo-ionization of NiO
cond-mat.str-el
The absorption of a photon and the emission of an electron is not a simple,
two-particle process. The complicated many-electron features observed during
core photo-ionization can therefore reveal many of the hidden secrets about the
ground and excited-state electronic structures of a material. Careful analysis
of the photon-energy dependence of the Ni KLL Auger de-excitation spectra at
and above the Ni 1s photo-ionization threshold has identified the satellite
structure that appears in both the photo-electron emission and the x-ray
absorption spectra of NiO as Ni metal 3d eg -> Ni metal 3d eg and O ligand 2p
eg -> Ni metal 3d eg charge-transfer excitations, respectively. These
assignments elucidate the conflicting theoretical predictions of the last five
decades in addition to other anomalous effects in the spectroscopy of this
unique material.
| arxiv topic:cond-mat.str-el |
arxiv_dataset-97861805.05555 | Cellular automata approach to synchronized traffic flow modelling
nlin.CG
The cellular automaton (CA) approach is an important theoretical framework for
studying complex system behavior and has been widely applied in various
research fields. CA traffic flow models have the advantages of flexible
evolution rules and high computational efficiency; consequently, CA modelling
has developed very quickly and has been widely applied in the transportation
field. In the last two decades, traffic flow research has developed rapidly,
and "synchronized flow" is perhaps one of its most important concepts and
findings. Many new CA models have been proposed in this direction. This paper
reviews the development of CA models with respect to their ability to reproduce
synchronized flow as well as the traffic breakdown from free flow to
synchronized flow. Finally, future directions are discussed.
| arxiv topic:nlin.CG |
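As a concrete reference point, the classic Nagel-Schreckenberg update, the baseline that many of the reviewed synchronized-flow CA models extend, can be sketched as follows. The parameter values and the circular-road setup are illustrative.

```python
import random

def nasch_step(positions, velocities, road_length, v_max=5, p_slow=0.3, rng=None):
    """One parallel Nagel-Schreckenberg update on a circular road.

    Rules per vehicle: (1) accelerate by 1 up to v_max, (2) brake to the
    number of empty cells ahead, (3) randomly slow by 1 with probability
    p_slow, (4) advance by the resulting velocity.
    """
    rng = rng if rng is not None else random
    n = len(positions)
    order = sorted(range(n), key=lambda i: positions[i])
    new_pos, new_vel = list(positions), list(velocities)
    for k, i in enumerate(order):
        ahead = order[(k + 1) % n]
        gap = (positions[ahead] - positions[i]) % road_length
        v = min(velocities[i] + 1, v_max)      # 1. accelerate
        v = max(min(v, gap - 1), 0)            # 2. brake: gap-1 empty cells
        if v > 0 and rng.random() < p_slow:    # 3. random slowdown
            v -= 1
        new_vel[i] = v
        new_pos[i] = (positions[i] + v) % road_length
    return new_pos, new_vel

rng = random.Random(42)
pos, vel = [0, 10, 20, 30], [0, 0, 0, 0]
for _ in range(100):
    pos, vel = nasch_step(pos, vel, road_length=40, rng=rng)
```

Because braking uses the gap computed from the old positions, the update is parallel and collision-free by construction.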
arxiv_dataset-97871805.05655 | Electron acceleration in a JET disruption simulation
physics.plasm-ph
Runaways are suprathermal electrons having sufficiently high energy to be
continuously accelerated up to tens of MeV by a driving electric field [1].
Highly energetic runaway electron (RE) beams capable of damaging the tokamak
first wall can be observed after a plasma disruption [2]. Therefore, it is of
primary importance to fully understand their generation mechanisms in order to
design mitigation systems able to guarantee safe tokamak operations. In a
previous work, [3], a test particle tracker was introduced in the JOREK 3D
non-linear MHD code and used for studying the electron confinement during a
simulated JET-like disruption. It was found in [3] that relativistic electrons
are not completely deconfined by the stochastic magnetic field that arises
during the disruption thermal quench (TQ). This is due to the reformation of
closed magnetic surfaces at the beginning of the current quench (CQ). This
result was obtained neglecting the inductive electric field in order to avoid
the unrealistic particle acceleration which otherwise would have happened due
to the absence of collision effects. The present paper extends [3] analysing
test electron dynamics in the same simulated JET-like disruption using the
complete electric field. To do so, a simplified collision model is
introduced in the particle tracker guiding center equations. We show that
electrons at thermal energies can become REs during or promptly after the TQ due
to a combination of three phenomena: a first RE acceleration during the TQ due
to the presence of a complex MHD-induced electric field, particle reconfinement
caused by the fast reformation of closed magnetic surfaces after the TQ and a
secondary acceleration induced by the CQ electric field.
| arxiv topic:physics.plasm-ph |
arxiv_dataset-97881805.05755 | Radial perturbations of the scalarized EGB black holes
gr-qc
Recently a new class of scalarized black holes in Einstein-Gauss-Bonnet (EGB)
theories was discovered. What is special for these black hole solutions is that
the scalarization is not due to the presence of matter, but \emph{is induced}
by the curvature of spacetime itself. Moreover, more than one branch of scalarized
solutions can bifurcate from the Schwarzschild branch, and these scalarized
branches are characterized by the number of nodes of the scalar field. The next
step is to consider the linear stability of these solutions, which is
particularly important due to the fact that the Schwarzschild black holes lose
stability at the first point of bifurcation. Therefore we here study in detail
the radial perturbations of the scalarized EGB black holes. The results show
that all branches with a nontrivial scalar field with one or more nodes are
unstable. The stability of the solutions on the fundamental branch, whose
scalar field has no radial nodes, depends on the particular choice of the
coupling function between the scalar field and the Gauss-Bonnet invariant. We
consider two particular cases based on the previous studies of the background
solutions. If this coupling has the form used in \cite{Doneva:2017bvd} the
fundamental branch of solutions is stable, except for very small masses. In the
case of a coupling function quadratic in the scalar field \cite{Silva:2017uqg},
though, the whole fundamental branch is unstable.
| arxiv topic:gr-qc |
arxiv_dataset-97891805.05855 | Social Algorithms
cs.NE cs.AI cs.CC math.OC
This article reviews a special class of swarm intelligence based algorithms
for solving optimization problems, which can be referred to as social
algorithms. Social algorithms use multiple agents and social interactions to
design rules for algorithms so as to mimic certain successful characteristics
of social/biological systems such as ants, bees, bats, birds, and other
animals.
| arxiv topic:cs.NE cs.AI cs.CC math.OC |
arxiv_dataset-97901805.05955 | Low-dissipation edge currents without edge states
cond-mat.mes-hall
We show that bulk free carriers in topologically trivial multi-valley
insulators with non-vanishing Berry curvature give rise to low-dissipation edge
currents, which are squeezed within a distance of the order of the valley
diffusion length from the edge. This happens even in the absence of edge states
[topological (gapless) or otherwise], and when the bulk equilibrium carrier
concentration is thermally activated across the gap. Physically, the squeezed
edge current arises from the spatially inhomogeneous orbital magnetization that
develops from valley-density accumulation near the edge. While this current
possesses neither topology nor symmetry protection and, as a result, is not
immune to dissipation, in clean enough devices it can mimic low-loss ballistic
transport.
| arxiv topic:cond-mat.mes-hall |
arxiv_dataset-97911805.06055 | The Hadwiger-Nelson problem with two forbidden distances
math.CO
In 1950 Edward Nelson asked the following simple-sounding question:
\emph{How many colors are needed to color the Euclidean plane $\mathbb{E}^2$
such that no two points distance $1$ apart are identically colored?}
We say that $1$ is a \emph{forbidden} distance. For many years, we only knew
that the answer was $4$, $5$, $6$, or $7$. In a recent breakthrough, de Grey
\cite{degrey} proved that at least five colors are necessary.
In this paper we consider a related problem in which we require \emph{two}
forbidden distances, $1$ and $d$. In other words, for a given positive number
$d\neq 1$, how many colors are needed to color the plane such that no two
points distance $1$ \underline{or} $d$ apart are assigned the same color? We
find several values of $d$ for which the answer to the previous question is at
least $5$. These results and graphs may be useful in constructing simpler
$5$-chromatic unit distance graphs.
| arxiv topic:math.CO |
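For readers who want to experiment with such colorings, the setup above can be probed with a small brute-force script: build the graph on a finite point set with edges at the forbidden distances, then exhaustively search for a proper k-coloring. The helper names, tolerance, and the unit-triangle example below are illustrative, not from the paper.

```python
import itertools
import math

def forbidden_distance_graph(points, dists=(1.0,), tol=1e-9):
    """Edges join point pairs whose Euclidean distance is (within tol)
    one of the forbidden distances."""
    edges = []
    for (i, p), (j, q) in itertools.combinations(list(enumerate(points)), 2):
        d = math.dist(p, q)
        if any(abs(d - f) < tol for f in dists):
            edges.append((i, j))
    return edges

def is_k_colorable(n, edges, k):
    """Exhaustively test whether the graph admits a proper k-coloring."""
    for coloring in itertools.product(range(k), repeat=n):
        if all(coloring[i] != coloring[j] for i, j in edges):
            return True
    return False

# Unit equilateral triangle: all three pairwise distances equal 1,
# so two colors fail and three suffice.
tri = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]
E = forbidden_distance_graph(tri, dists=(1.0,))
```

Passing `dists=(1.0, d)` for a chosen $d$ turns this into the two-forbidden-distance version studied in the paper; the exhaustive search is of course only feasible for small point sets.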
arxiv_dataset-97921805.06155 | Monocular Vehicle Self-localization method based on Compact Semantic Map
cs.RO
High precision localization is a crucial requirement for the autonomous
driving system. Traditional positioning methods have some limitations in
providing stable and accurate vehicle poses, especially in an urban
environment. Herein, we propose a novel self-localizing method using a
monocular camera and a 3D compact semantic map. Pre-collected information of
the road landmarks is stored in a self-defined map with a minimal amount of
data. We recognize landmarks using a deep neural network, followed by a
geometric feature extraction process that improves the measurement accuracy.
The vehicle location and posture are estimated by minimizing a self-defined
re-projection residual error to evaluate the map-to-image registration,
together with a robust association method. We validate the effectiveness of our
approach by applying this method to localize a vehicle in an open dataset,
achieving an RMS accuracy of 0.345 m with a reduced sensor setup and map
storage compared to state-of-the-art approaches. We also evaluate some key
steps and discuss the contribution of the subsystems.
| arxiv topic:cs.RO |
arxiv_dataset-97931805.06255 | A penalty scheme and policy iteration for nonlocal HJB variational
inequalities with monotone drivers
math.NA math.OC
We propose a class of numerical schemes for nonlocal HJB variational
inequalities (HJBVIs) with monotone drivers. The solution and free boundary of
the HJBVI are constructed from a sequence of penalized equations, for which a
continuous dependence result is derived and the penalization error is
estimated. The penalized equation is then discretized by a class of
semi-implicit monotone approximations. We present a novel analysis technique
for the well-posedness of the discrete equation, and demonstrate the
convergence of the scheme, which subsequently gives a constructive proof for
the existence of a solution to the penalized equation and variational
inequality. We further propose an efficient iterative algorithm with local
superlinear convergence for solving the discrete equation. Numerical
experiments are presented for an optimal investment problem under ambiguity and
a recursive consumption-portfolio allocation problem.
| arxiv topic:math.NA math.OC |
arxiv_dataset-97941805.06355 | Sequence Lorentz spaces and their geometric structure
math.FA
This article is dedicated to the geometric structure of the Lorentz and
Marcinkiewicz spaces in the case of a purely atomic measure. We study complete
criteria for order continuity, the Fatou property, strict monotonicity and
strict convexity in the sequence Lorentz spaces $\gamma_{p,w}$. Next, we
present a full characterization of extreme points of the unit ball in the
sequence Lorentz space $\gamma_{1,w}$. We also establish a complete isometric
description of the dual and predual spaces of the sequence Lorentz space
$\gamma_{1,w}$, written in terms of the Marcinkiewicz spaces. Finally, we show a
fundamental application of geometric structure of $\gamma_{1,w}$ to
one-complemented subspaces of $\gamma_{1,w}$.
| arxiv topic:math.FA |
arxiv_dataset-97951805.06455 | Towards a Theory of Additive Eigenvectors
cond-mat.stat-mech physics.chem-ph quant-ph
The standard approach in solving stochastic equations is eigenvector
decomposition. Using the separation ansatz $P(i,t)=u(i)e^{\mu t}$, one obtains
the standard eigenvector equation $Ku=\mu u$, where $K$ is the rate matrix of
the master equation. While universally accepted, the standard approach is not
the only possibility. Using the additive separation ansatz $S(i,t)=W(i)-\nu t$,
one arrives at additive eigenvectors. Here we suggest a theory of such
eigenvectors. We argue that additive eigenvectors describe conditioned Markov
processes and derive the corresponding equations. The formalism is applied to
a one-dimensional stochastic process corresponding to the telegraph equation. We
derive differential equations for additive eigenvectors and explore their
properties. The proposed theory of additive eigenvectors provides a new
description of stochastic processes with peculiar properties.
| arxiv topic:cond-mat.stat-mech physics.chem-ph quant-ph |
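For concreteness, the two separation ansätze contrasted in the abstract can be written side by side, following the abstract's notation. The componentwise form of the master equation is a standard convention assumed here; the equation obeyed by the additive eigenvectors themselves is derived in the paper and is only indicated by the ansatz.

```latex
% Master equation for the probability P(i,t) with rate matrix K:
\frac{d P(i,t)}{dt} \;=\; \sum_j K_{ij}\, P(j,t).
% Multiplicative (standard) ansatz and the eigenvalue problem it yields:
P(i,t) \;=\; u(i)\, e^{\mu t}
\quad\Longrightarrow\quad
\sum_j K_{ij}\, u(j) \;=\; \mu\, u(i), \qquad \text{i.e. } K u = \mu u .
% Additive ansatz, separating at the level of an action-like quantity:
S(i,t) \;=\; W(i) \;-\; \nu\, t .
```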
arxiv_dataset-97961805.06555 | Shortening time scale to reduce thermal effects in quantum transistors
quant-ph
In this article, we present a quantum transistor model based on a network of
coupled quantum oscillators, intended for quantum information processing tasks
in linear optics. To this end, we show analytically how a set of $N$
quantum oscillators (data-bus) can be used as an optical quantum switch, in
which the energy gap of the data bus oscillators plays the role of an
adjustable "potential barrier". This enables us to "block or allow" the quantum
information to flow from the source to the drain. In addition, we discuss how
this device can be useful for implementing single-qubit phase-shift quantum
gates with high fidelity. Finally, in studying the performance of our device
interacting with a thermal reservoir, we highlight the important role
played by the set of oscillators which constitute the data-bus in reducing the
unwanted effects of the thermal reservoir. This is achieved by reducing the
information exchange time (shortening time scale) between the desired
oscillators. In particular, we have identified a non-trivial criterion in which
the ideal size of the data-bus can be obtained so that it presents the best
possible performance. We believe that our study can be perfectly adapted to a
large number of thermal reservoir models.
| arxiv topic:quant-ph |
arxiv_dataset-97971805.06655 | Payload-size and Deadline-aware Scheduling for Upcoming 5G Networks:
Experimental Validation in High-load Scenarios
cs.NI
High data rates, low latencies, and widespread availability are the key
reasons why current cellular network technologies are used for many
different applications. However, the coexistence of different data traffic
types in the same 4G/5G-based public mobile network results in a significant
growth of interfering data traffic competing for transmission. Particularly in
the context of time-critical and highly dynamic Cyber Physical Systems (CPS)
and Vehicle-to-Everything (V2X) applications, the compliance with deadlines and
therefore the efficient allocation of scarce mobile radio resources is of high
importance. Hence, scheduling solutions are required offering a good trade-off
between the compliance with deadlines and a spectrum-efficient allocation of
resources in mobile networks. In this paper, we present the results of an
experimental validation of the Payload-size and Deadline-aware (PayDA)
scheduling algorithm using a Software-Defined Radio (SDR)-based eNodeB. The
results of the experimental validation prove the high efficiency of the
proposed PayDA scheduling algorithm for time-critical applications in both
mixed and homogeneous data traffic scenarios.
| arxiv topic:cs.NI |
arxiv_dataset-97981805.06755 | Laplace transforms based some novel integrals via hypergeometric
technique
math.CA
In this paper, we obtain analytical evaluations of some novel
Laplace-transform based integrals, under suitable convergence conditions, by
using a hypergeometric approach (algebraic properties of the Pochhammer symbol
and classical summation theorems for the hypergeometric series
${}_{2}F_{1}(1)$, ${}_{2}F_{1}(-1)$, and ${}_{4}F_{3}(-1)$). We also obtain the
Laplace transforms of arbitrary powers of some finite series containing
hyperbolic sine and cosine functions with different arguments, in terms of
hypergeometric and Beta functions. Moreover, the Laplace transforms of even and
odd positive integral powers of sine and cosine functions with different
arguments, and of their products (taking two, three, or four functions at a
time), are obtained. In addition, some special cases follow from the main results.
| arxiv topic:math.CA |
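As a quick sanity check of the kind of closed forms involved, one transform of an even power of sine can be verified symbolically. This uses SymPy rather than the paper's hypergeometric technique, and serves only as an independent cross-check of a textbook case.

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# L{sin^2 t} = 2 / (s (s^2 + 4)), obtainable by hand from
# sin^2 t = (1 - cos 2t)/2 and linearity of the Laplace transform.
F = sp.laplace_transform(sp.sin(t)**2, t, s, noconds=True)
```

The same pattern (even powers reducing to rational functions of $s$) is what the hypergeometric summation theorems systematize for arbitrary powers and mixed products.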
arxiv_dataset-97991805.06855 | Learning non-smooth models: instrumental variable quantile regressions
and related problems
econ.EM stat.CO stat.ME
This paper proposes computationally efficient methods that can be used for
instrumental variable quantile regressions (IVQR) and related methods with
statistical guarantees. This is much needed when we investigate heterogenous
treatment effects since interactions between the endogenous treatment and
control variables lead to an increased number of endogenous covariates. We
prove that the GMM formulation of IVQR is NP-hard and finding an approximate
solution is also NP-hard. Hence, solving the problem exactly from a purely
computational perspective seems infeasible. Instead, we aim to obtain an estimate
that has good statistical properties and is not necessarily the global solution
of any optimization problem.
The proposal consists of employing $k$-step correction on an initial
estimate. The initial estimate exploits the latest advances in mixed integer
linear programming and can be computed within seconds. One theoretical
contribution is that such initial estimators and the Jacobian of the moment
condition used in the $k$-step correction need not even be consistent, and merely
$k=4\log n$ fast iterations are needed to obtain an efficient estimator. The
overall proposal scales well to extremely large sample sizes because the lack
of a consistency requirement allows one to use a very small subsample to
obtain the initial estimate, and the $k$-step iterations on the full sample can be
implemented efficiently. Another contribution that is of independent interest
is to propose a tuning-free estimator of the Jacobian matrix, whose
definition involves conditional densities. This Jacobian estimator generalizes
bootstrap quantile standard errors and can be efficiently computed via
closed-form expressions. We evaluate the performance of the proposal in
simulations and in an empirical example on the heterogeneous treatment effect
of the Job Training Partnership Act.
| arxiv topic:econ.EM stat.CO stat.ME |