A fundamental solution is an operator-valued function which can be found with the help of Laplace transform technique.
Let {{formula:a23090ea-b8ed-4e05-9741-12b138555df9}} . Then,
applying Laplace integral transform each side of (REF ) with unit initial conditions and using integration by substitution formula for operator-valued functions {{cite:59eefdeb418b48ab503ddb8d45755eca19145a88}}, we obtain
{{formula:3ed438b4-21b8-4ad6-929b-e59451c9fed5}}
Despite being a seemingly minor change, LDET demonstrates remarkable gains in open-world instance segmentation and detection. On COCO {{cite:0b83d82126ddbb27485908f4e8ccf32a37cb6258}}, LDET trained on VOC categories improves the average recall by 14.1 points when evaluated on non-VOC categories. Surprisingly, LDET achieves significant improvements in detecting novel objects without requiring additional annotation: LDET trained only on VOC categories (20 classes) in COCO outperforms Mask RCNN trained on all COCO categories (80 classes) when evaluating average recall on UVO {{cite:de042444724f8244d0905b1aa168b75e2f47c761}}. As shown in Fig. REF , LDET generates precise object proposals while covering many objects in the scene.
For dimensions {{formula:2e7cd601-ae49-4817-9f5c-7b0d541e8a4d}} and any particle density {{formula:6e5bbd3f-c2db-4f13-912d-a3043e655d1e}} , the excess particle mass density {{formula:b780417b-6bd3-44c3-a324-f48f14c4d7db}} is interpreted as the density of the Bose-Einstein condensate. One of the most prominent open problems in mathematical physics is the understanding of Bose-Einstein condensation (BEC) for interacting Bosons. This phase transition is characterised by the fact that a macroscopic part of the system condenses to a state which is highly correlated.
Only partial successes in proving BEC have been achieved, like the description of the free energy of the non-interacting system (already contained in Bose's and Einstein's seminal paper in 1925), the analysis of mean-field models (e.g. {{cite:55c2b17b19bbc28fbaf489ee7969467a943606f9}}), the analysis of dilute systems at vanishing temperature {{cite:45503e276abcff39bfe56077adb98ca63e195a0f}}, and the proof of BEC in lattice systems with half-filling {{cite:45503e276abcff39bfe56077adb98ca63e195a0f}}. In {{cite:a88efb8141d58ae5901a359075609958257e3d18}} the authors provide a formula for the limiting free energy in the so-called canonical ensemble, where the particle density is fixed. It turns out that the formula is difficult to analyse, and it is one of the main aims of the current work to provide an alternative approach in terms of space-time Green functions.
A sweet spot of the bias-variance trade-off is to perform TTT while remembering the training data in some way.
In deep learning, this is usually done by initializing SGD with a trained model, as in our paper and {{cite:7b51f35fa4865077dfc0314b49ac8ad5d85e41ac}}.
In this section, we retain memory by using part of the covariance matrix derived from training data.
The number of AI systems put into production has steadily increased over the last few years. This is because advanced techniques of Machine Learning (ML) and Deep Learning (DL) have brought significant improvements over previous state-of-the-art (SOTA) methods in many areas such as finance {{cite:9967b805fcf718a913e5c9a4efb90c6f3b2877bb}}, {{cite:2d9fa89afa470931ccd6724798bd201b65cb014e}}, transportation {{cite:89bb99635092df42f4659af66c3dfbec2f451f74}}, and medicine {{cite:55367771ab41f1e107e9bf12ad8ad96997334e42}}, {{cite:5ce34a637977e37d2fc60e518a2a489928aed552}}. However, the increasing use of black-box models has raised concerns about their societal impact: privacy {{cite:9933a9bf29851dc5ff8ca246fd30ce6e72fbc976}}, {{cite:f1ffd76761a5f67f32f4b8b802d94d2cb3c6debd}}, {{cite:bfc71d41172e82d04f71f5949817369632f38c01}}, {{cite:8d9e186c4297cee2a9ad2ad21103c71db43fe683}}, security {{cite:11ebb9653ddb72207f7864cc4c6d9c663313f11d}}, {{cite:737777109c3d0fd40a7f8efc72094732b99addd6}}, {{cite:0031e4a8a5990f8a780d3ade8157ace1d38fb712}}, safety {{cite:53bdef10c833e7f00720b0f7a1713b0fe4514834}}, {{cite:52980cd0af28646c5aef082d9107040c8ee7e570}}, {{cite:0344969800875e7e38c074857998b04b6b0fd0c2}}, fairness {{cite:136fec80f82509df332b78fd7786d8576045aecc}}, {{cite:01db32d59735af6fc17d1aad6105df009f4b4427}}, and explainability {{cite:a160af23f387d7f344b7c38b78af84444902a159}}, {{cite:3b40b3ea09f62d81d3f577aa72b3be534e858553}}, all of which have become areas of active research in the ML community.
We evaluate our method, delft, with glove embeddings {{cite:7eb6109e51d3dc3389deede52212f1362c8d9d1a}} and bert embeddings.
{{table:657f29cf-9bd1-45de-90b2-504417e45d84}}
Implementation
{{table:4abfd39e-1514-4594-a1d9-5c70c6792578}}
Our implementation uses PyTorch {{cite:10d81a13b6f222941bf8069ade5f53a0913ee7c7}} and its dgl gnn library (https://github.com/dmlc/dgl).
We keep the top twenty candidate entity nodes during training and the top fifty for testing; the top five sentences for each edge are kept.
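A minimal sketch of this pruning step (pure Python; the data layout and scores are illustrative assumptions, not delft's actual implementation):

```python
def prune_graph(candidates, edge_sentences, k_nodes, k_sents):
    """Keep the k_nodes highest-scored candidate entity nodes and, for each
    surviving edge, the k_sents highest-scored evidence sentences.

    candidates:     list of (node_id, score) pairs
    edge_sentences: dict mapping (question_entity, candidate) -> list of
                    (sentence, score) pairs
    """
    kept = sorted(candidates, key=lambda p: p[1], reverse=True)[:k_nodes]
    kept_ids = {node_id for node_id, _ in kept}
    pruned_edges = {
        edge: sorted(sents, key=lambda p: p[1], reverse=True)[:k_sents]
        for edge, sents in edge_sentences.items()
        if edge[1] in kept_ids  # drop edges whose candidate was pruned
    }
    return kept, pruned_edges
```

With the settings above, one would call this with k_nodes=20 during training and k_nodes=50 at test time, keeping k_sents=5 in both cases.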
The parameters of delft's gnn layers are listed in Table REF . For delft-bert, we use bert
output as contextualized embeddings.
For bert-entity and bert-sent, we concatenate the question and entity
name (entity gloss for bert-sent) as the bert input and
apply an affine layer and sigmoid activation to the last bert layer of the [cls] token; the model
outputs a scalar relevance score.
bert-memnn concatenates all evidence sentences and the node gloss, and combines them with the question as the bert input. As with bert-entity and bert-sent, an affine layer and sigmoid activation are applied to the bert output to produce the answer score.
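All three bert-based baselines share the same scoring head: an affine layer on the final [cls] representation followed by a sigmoid. A dependency-free sketch of that head (the real models of course operate on bert's learned hidden vectors):

```python
import math

def relevance_score(cls_vec, weights, bias):
    """Affine layer + sigmoid on a pooled [CLS] vector -> scalar in (0, 1)."""
    logit = sum(w * h for w, h in zip(weights, cls_vec)) + bias
    return 1.0 / (1.0 + math.exp(-logit))
```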
drqa retrieves 10 documents and then 10 paragraphs from them; we use the default setting for training. During inference, we apply TagMe to each retrieved paragraph and limit the candidate spans to tagged entities.
docqa uses the model pre-trained on the Triviaqa-unfiltered dataset, with the default configuration applied to our subset.
Evaluation Results
Three experiments evaluate delft's graph coverage,
answer accuracy, and source of effectiveness. Then we visualize the
gnn attention and examine individual examples.
{{figure:6dc86898-52b2-4a63-a4e1-1d435d7151cf}}Graph Coverage
delft's graph has high coverage (Table REF ).
Each question is connected to an average of 1500+ candidate nodes, and 90% of questions can be answered by one of the connected nodes. After filtering to 50 candidates, more than 80% of questions remain answerable.
In comparison, we manually examined 50 randomly sampled qbLink
questions: only 38% of them are reachable within two hops in the
DBpedia graph.
delft's graph is dense.
On average there are five (qbLink) and twelve (qanta) edges connecting the
correct answer nodes to the question entity nodes.
Triviaqa questions have two entities on average, more than one of which is connected to the correct answer. Each edge has eight to fifteen evidence sentences.
The free-text knowledge graph naturally separates the correct answer by its structure.
Compared to incorrect answers (-), the correct ones (+) are connected by significantly more evidence edges.
The edges also have more evidence sentences.
Aided by free-text evidence, the coverage of the structured graph is
no longer the bottleneck.
The free-text knowledge graph provides enough evidence and unlocks the potential of structured qa.
At the same time, the rich evidence also inevitably introduces noise.
The next experiment examines whether delft—given the answer
somewhere in the graph—can find the single correct answer.
{{table:41a0f4d8-adbb-47cd-acce-ab602db7ed1e}}
Answer Accuracy
delft outperforms all baselines on both full datasets and dataset subsets based on the number of entities in a question (Table REF ). (Recall, however, that we exclude questions with no entities or whose answer is not an entity.)
quest falters on these questions, suggesting that some level of
supervision is required.
On more complicated factoid qa datasets qbLink and qanta,
delft improves over drqa,
the machine reading baseline.
These datasets require reasoning over multiple sentences (either within a long question or across multiple questions); however, drqa is tuned for single-sentence questions.
delft—by design—focuses on matching questions' text with
disparate evidence.
In the mr benchmark dataset Triviaqa, delft still beats
drqa (albeit on an entity-focused subset).
It is also better than docqa, one of the strongest models
on Triviaqa.
With our Free-Text Knowledge Graph, delft better locates
necessary evidence sentences via graph structure, while
mr only uses retrieved paragraphs.
bert-entity fares poorly because it only has entity name information; even with the help of a strong pre-trained model, this is too limited to answer complex questions.
bert-sent incorporates the gloss information but lags behind other methods.
delft outperforms both baselines, since it combines useful
text evidence and kg connections to answer
the question.
Compared to bert-memnn, which uses the same evidence and bert but
without structure, delft's structured reasoning thrives on complex
questions in qbLink and qanta.
On Triviaqa, which has fewer than two edges per candidate entity node, delft's accuracy is close to bert-memnn's, as there is not much structure to exploit.
As questions have more entities, delft's relative accuracy
increases. In comparison, almost all other methods' effectiveness
stays flat, even with more evidence from additional question entities.
Ablation Study
We ablate both delft's graph construction and gnn components
to see which components are most useful.
We use the qbLink dataset and delft-glove embeddings for these experiments.
For each ablation, one component is removed while keeping the other
settings constant.
{{table:c4b7eae1-31fa-45b5-9eca-4d91ee0a6587}}Graph Ablation
The accuracy grows with more sentences per edge until reaching diminishing returns at six sentences (Figure REF (a)). Keeping fewer than three sentences significantly decreases accuracy, prematurely removing useful information.
It's more effective to leave the gnn model to distinguish the
signal from the noise.
We choose five sentences per edge.
Because we retrieve entities automatically (rather than relying on gold
annotations), the threshold of the entity linking process can also be tuned: are more (but noisier) entities better than fewer (but more confident) entities?
Using all tagged entities slightly hurts the result
(Figure REF (b)), since it brings in uninformative
entities (e.g., linking “name the first woman in space” to
“Name”).
Filtering too aggressively, however, is also not a good idea, as the accuracy drops with aggressive thresholds ({{formula:e4dabc2e-6b02-4636-9f59-f042b9ac21e2}} ), removing useful connections. We choose {{formula:4780ac02-9b91-4567-bfaa-be648b933454}} as the threshold.
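The thresholding itself is a one-line filter; a sketch (the triple format is illustrative, not the actual TagMe output schema):

```python
def filter_entities(annotations, rho):
    """Keep linked entities whose linker confidence is at least rho.

    annotations: list of (mention, entity, confidence) triples from an
    entity linker such as TagMe; a higher rho trades recall for precision.
    """
    return [(m, e, c) for m, e, c in annotations if c >= rho]
```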
To see how sensitive delft is to automatic entity linking, we manually annotate twenty questions to measure accuracy under perfect linking. The accuracy is on par with TagMe-linked questions: both get fifteen right.
We would need a larger set to more thoroughly examine the role of
linker accuracy.
Recall that delft uses not just the entities in the question but
also searches for edges similar to question text.
Figure REF (c) shows the accuracy when retrieving {{formula:48cdd673-f184-4c2b-9d75-8726ea51d88d}} additional entities.
The benefits plateau after three entities.
{{figure:a64a6222-c9ac-406f-9945-33a33b9a288e}}
Model Ablation
In addition to different data sources and preprocessing, delft
also has several model components.
We ablate these in Table REF .
As expected, both node (gloss) and edge evidence
help; each contributes {{formula:0e427c18-5f78-476f-bfce-d3e104e4e964}} accuracy.
Edge importance scoring, which controls the weights of the information
flow from Question Entity Node to Candidate Entity Nodes, provides {{formula:8229dd40-18f2-4ab1-b78a-98e91473f415}}
accuracy.
The input representation is important as well; the self-attention
layer contributes {{formula:5576137b-9889-4c10-9d06-fd933f965d33}} accuracy.
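Edge importance scoring amounts to attention over edges: each incoming message is weighted by a normalized score before aggregation. A minimal sketch of that aggregation step (pure Python; the actual gnn computes scores and messages from learned representations):

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of edge importance scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def aggregate_edges(edge_scores, edge_msgs):
    """Sum incoming edge messages, each weighted by its softmax-normalized
    importance score (the gist of attention-based gnn aggregation)."""
    w = softmax(edge_scores)
    dim = len(edge_msgs[0])
    return [sum(wi * msg[d] for wi, msg in zip(w, edge_msgs))
            for d in range(dim)]
```

A highly informative edge thus receives a large weight and dominates the aggregated representation of its candidate node.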
Graph Visualization
Figure REF shows a question with gnn output:
the correct Candidate Entity Node Ronald Reagan
connects to all three Question Entity Nodes.
The edge from Bedtime for Bonzo to Reagan is
informative—other Candidate Entity Nodes (e.g., politicians like
Jerry Brown) lack ties to this cinema masterpiece.
The gnn model correctly (weight {{formula:bfdf3aba-f17b-4d5e-b0d5-a28e19fa893a}} ) favors this edge.
The other edges are less distinguishable.
For example, the edges from Governor of California to Ronald Reagan and Jerry Brown are both relevant (both were governors) but unhelpful.
Thus, the gnn has similar weights ({{formula:c1516993-a48a-4117-8156-6f0b02dcc909}} and {{formula:fab50c7a-ead4-4e8d-9b30-9492b4b66b25}} ) for
both edges, far less than Bedtime for Bonzo.
By aggregating edges, delft's gnn selects the correct
answer.
Case Study
To gain more insight into the delft model's behavior, we further sample some examples from qbLink. Table REF shows two positive examples (1 and 2) and two negative examples (3 and 4). delft succeeds on the positive cases: with direct evidence sentences, delft finds the correct answer with high confidence (Example 1), and with multiple pieces of evidence, delft aggregates the different pieces to make more accurate predictions (Example 2). Common sources of error include too few informative entities in the question (Example 3) and evidence that overlaps too much between two Candidate Entity Nodes.
Related Work: Knowledge Representation for QA
delft would be impossible without the insights of two subfields: traditional knowledge bases for question answering and question answering from natural language, which we combine using graph neural networks. This section describes how we build on these subfields.
Knowledge Graph Question Answering
Knowledge graphs (kg) like Freebase {{cite:626a5e3fb62d85d800a24701f54da392aeeef9f1}} and DBpedia {{cite:02386812d8a205827b594f9943af99ff913532e8}} enable question answering using their rich, dependable structure.
This has spurred kg-specific qa datasets on general domain
large scale knowledge graphs: WebQuestions {{cite:a63c307d28fb4de995322d6ab12b9b6a1bbf5f48}},
SimpleQuestions {{cite:4eddf6ed21a4c2e0ef7d0120b4d6a557fa5f5e60}}, and special-domain kgs,
such as WikiMovies {{cite:ea61109dbfad1e22c0ebaff1f0824a2f991dc093}}.
In turn, these new datasets have prompted special-purpose
kgqa algorithms.
Some convert questions to semantic parsing problems and execute the
logical forms on the graph {{cite:4d64379cebeaae2996030d586c631a41e0232fe1}}, {{cite:cb6aea3ecad111c0187261c97f147b84171628a5}}, {{cite:80680d1ecd4ac6673881ee573ef472daa13a7fcb}}, {{cite:862955c38ddd7c6ccfed71813bdf95f68d5bd42d}}.
Others use information extraction to first extract question-related information from the kg and then find the answer {{cite:ed1a09c868b994378ddf4e443d4d68332a79febb}}, {{cite:6050820688400a9fdff22b811c1f1c261eb80144}}, {{cite:487192b94d9e2f12c46093d9a5e5ee0e884326dc}}.
These approaches work well on questions tailored for the underlying kg. For example, WebQuestions guarantees that its questions can be answered by Freebase {{cite:a63c307d28fb4de995322d6ab12b9b6a1bbf5f48}}.
Though modern knowledge graphs have good coverage on
entities {{cite:87b73ec39cbefea3aa69e18253a9eb6e48dd1c1a}}, adding relations takes time and
money {{cite:b3a57c8fdee9871289f845ea4c00dfb54d3c6535}}, often requiring human
effort {{cite:626a5e3fb62d85d800a24701f54da392aeeef9f1}} or scraping human-edited
structured resources {{cite:02386812d8a205827b594f9943af99ff913532e8}}.
These lacunae impede broader use and adoption.
Like delft, quest {{cite:1b86cdff48b17f22b5de727777bfd16a6265ce95}} seeks to address this by
building a noisy quasi-kg with nodes and edges, consisting of
dynamically retrieved entity names and relational phrases from raw
text.
Unlike delft, this graph is built using existing Open Information
Extraction (ie).
Then it answers questions on the extracted graph.
Unlike delft, which is geared toward recall, ie errs toward precision and requires regular, clean text.
In contrast, many real-world factoid questions contain linguistically
rich structures, making relation extraction challenging.
We instead directly extract free-text sentences as indirect relations
between entities, which ensures high coverage of evidence information
to the question.
Similarly, graft-net {{cite:0821b02b49ba10f8e1459997d1fda303d300be6e}} extends an existing kg with
text information.
It grafts text evidence onto kg nodes but retains the original kg relations.
It then reasons over this graph to answer kg-specific questions.
delft, in contrast, grafts text evidence onto both nodes and edges
to enrich the relationships between nodes, building on
the success of unconstrained “machine reading” qa systems.
Question Answering over Text
Compared with highly structured kg, unstructured text collections (e.g., Wikipedia, newswire, or Web scrapes) are cheaper but noisier for qa {{cite:bb8b94c6f1dc45279a8d344bbbf3a3f9adfc4edb}}.
Recent datasets such as squad {{cite:aeb76b5604144e19ae0b331817a4ebdff8ab4182}},
Triviaqa {{cite:530545cd11f22f126c359527a9be9c75f4386b32}}, ms marco {{cite:644aa32fe4c56532ec800dc6adb09ff7ade79e93}} and
natural questions {{cite:423984c92869eb96bdab07b40ff6858e90c7cc6a}} are typically solved
via a coarse search for a passage (if the passage isn't given) and
then finding a fine-grained span to answer the question.
A rich vein of neural readers matches questions to the given passages and extracts answer spans from them {{cite:b1b64a0c56324b985e85489e4bcda8a7a51dbffc}}, {{cite:7c64003cdf18733b158f0ff229ec004b945f0a76}}. Popular solutions
include bidaf, which matches the question and document passages
by bi-directional attention flows {{cite:b1b64a0c56324b985e85489e4bcda8a7a51dbffc}},
qanet, which enriches the local contexts with global
self-attention {{cite:7c64003cdf18733b158f0ff229ec004b945f0a76}}, and pre-training methods such as
bert {{cite:5aff99f328faf5c8c45741decf4af1e09c4f118b}} and xlnet {{cite:6c16654c95836109fd685586b2cb1f78f6bab110}}.
The most realistic models are those that also search for a passage:
drqa {{cite:3dc4f30cffeb941b353dc1356d6be341c46dd898}} retrieves documents from Wikipedia and uses mr to predict the top span as the answer, and
orca {{cite:e3b0b61e31a96a65c8de6becbe829b6a09d48883}} trains the retriever via an inverse cloze
task.
Nonetheless, questions mainly answerable by drqa and orca require only a single piece of evidence from the candidate sets {{cite:d5b8c3190b914563545930fac13b7a111c2d0b0f}}.
delft in contrast searches for edge evidence and nodes that can
answer the question; this subgraph often corresponds to the same
documents found by machine reading models.
Ideally, it would help synthesize information across multiple
passages.
Multi-hop qa, where answers require assembling
information {{cite:6e1cacf961d89982e12455f4f5da946942a76c0b}}, {{cite:ac40577e3c1a192b5e10c7406f574687fe22d348}}, is a
task to test whether machine reading systems can
synthesize information.
hotpotqa {{cite:ac40577e3c1a192b5e10c7406f574687fe22d348}} is the
multi-hop qa benchmark: each answer is a text span requiring
one or two hops.
Several models {{cite:b98d7b9d15c7596971ba834a92142f5f17c9f9de}}, {{cite:634c0051850ea604d56e5441363d3a389012d268}}, {{cite:65c625d91c803ec67c9a221db1ae3145a8663643}} solve
this problem using multiple mr models to extract multi-hop
evidence.
While we focus on datasets with Wikipedia entities as
answers, expanding delft to span-based answers (like hotpotqa) is a
natural future direction.
Graph Networks for qa
delft is not the first to use graph neural networks {{cite:1e49f18ac9648ef5f4deb5d1879825d78f39a0bd}}, {{cite:4fe95f900777fff8362875c281a1263401262c03}}, {{cite:306b4e9a84a1ec4c3ec0b945aa16660bf5a2db2f}}
for question answering.
entity-gcn {{cite:983cc4e9d23d53ba53ec1b1130b576098f203788}},
dfgn {{cite:b98d7b9d15c7596971ba834a92142f5f17c9f9de}}, and
hde {{cite:e71270d45db690f29e0c2cf165e7fe2bbb0e347a}} build the entity graph with entity
co-reference and co-occurrence in documents and apply a gnn to the graph to rank the top entity as the answer.
cogqa {{cite:634c0051850ea604d56e5441363d3a389012d268}} builds the graph starting with entities from the question, then expands it using spans extracted by multiple mr models as candidate span nodes, and adopts a gnn over the graph to predict the answer from the span nodes.
In all these methods, edges are co-references between entities or binary scores for their co-occurrence in documents; delft's primary distinction is using free text as graph edges, which we then represent and aggregate via a gnn.
Other methods have learned representations of relationships between entities.
nubbi {{cite:9966091094506e9f2f9c3d1911c5580a9f8c964d}} used an admixture over relationship
prototypes, while Iyyer et al. {{cite:4b6d1539e717ca27f1d6e6dc36e0cc7706ba59cc}} used neural dictionary
learning for analyzing literature.
delft draws on these ideas to find similarities between
passages and questions.
The View Beyond delft
Real-world factoid qa requires answering diverse questions across domains.
Relying on existing knowledge graph relations to answer these questions often leads to highly accurate but brittle systems:
they suffer from low coverage.
To overcome the bottleneck of structure sparsity in existing knowledge graphs, delft combines kgqa-style reasoning with widely available free-text evidence.
delft builds a high coverage and dense free-text knowledge graph, using
natural language sentences as edges.
To answer questions, delft grounds the question into the related
subgraph connecting entities with free-text graph edges and then uses
a graph neural network to represent, reason, and select the answer
using evidence from both free-text and the graph structure.
Combining natural language and knowledge-rich graphs is a common
problem:
e-mail and contact lists,
semantic ontologies and sense disambiguation,
and semantic parsing.
Future work should explore whether these approaches are also useful for
dialog, language modeling, or ad hoc search.
More directly for question answering, more fine-grained reasoning
could help solve the example of Table REF :
while both Odysseus and Penelope have Telemachus as a son, only
Odysseus made a long journey and should thus be the answer.
Recognizing specific properties of nodes either in a traditional
kg or in free text could resolve these issues.
Acknowledgements
This work was supported by NSF Grants IIS-1748663 (Zhao) and
IIS-1822494 (Boyd-Graber). Any opinions, findings, conclusions, or
recommendations expressed here are those of the authors and do not
necessarily reflect the view of the sponsor.
Recently, multi-task neural networks have shown superior performance to other individual neural network architectures on different medical imaging applications {{cite:252ce3c55f813ca499698f9c952dc50c9cddc4d3}}, {{cite:e73e8f7982b17c8879695f23d5c159f2066a8e3d}}. This type of neural networks simultaneously integrates different pieces of information from diverse tasks to improve the overall performance of the network and leads to better generalization under real-life conditions {{cite:10df610febfae8b2e6c6c9bf65a4b9ca6fe2903f}}. In this study, we developed a multi-task CNN architecture for classifying cerebrovascular diseases and synthesizing high-quality PET images from multi-contrast MRI. The proposed joint dual-task model comprises two branches; the first branch adopts a 3D convolutional encoder-decoder network with attention mechanisms to predict the gold standard {{formula:28cdd446-c2e9-4649-a938-2c323920b24c}} O-water PET CBF maps from the combination of structural MRI and arterial spin labeling (ASL) MRI perfusion images without using radiotracers. The second branch comprises a multi-scale 3D convolutional network that integrates multi-parametric MRI images to distinguish between healthy controls and people with cerebrovascular diseases. Results show that the proposed multi-task deep learning model can efficiently improve the MRI-to-PET translation performance and the diagnostic accuracy for identifying cerebrovascular disorders.
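Training the two branches jointly means optimizing a weighted sum of a synthesis (regression) loss and a diagnosis (classification) loss. A minimal dependency-free sketch of such a joint objective (the loss choices and weights here are illustrative assumptions, not the paper's exact objective):

```python
import math

def mse(pred, target):
    """Mean squared error for the PET-synthesis branch."""
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def bce(prob, label):
    """Binary cross-entropy for the diagnosis branch."""
    return -(label * math.log(prob) + (1 - label) * math.log(1 - prob))

def joint_loss(pet_pred, pet_target, diag_prob, diag_label,
               w_syn=1.0, w_cls=1.0):
    """Weighted multi-task objective combining both branches."""
    return w_syn * mse(pet_pred, pet_target) + w_cls * bce(diag_prob, diag_label)
```

The task weights w_syn and w_cls are hyperparameters that balance how strongly the shared features are shaped by each task.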
we can readily factor the logarithms in the binding energy out of the integral, yielding a contribution to the radial action proportional to (more generally, within the realm of the PM expansion, this is due to the fact that the pole from radiation modes in the tail effect is accompanied by a factor of {{formula:17d8519d-eec2-4637-9cec-466bc7191afe}}. The latter is ultimately responsible for the {{formula:e604b253-24e7-4ab1-a530-786e980e6344}} in the effective action {{cite:614ecaed3ba19f65c99108e31ea075906a9c8264}}. Hence, the universal character of the divergent part of the tail implies that both appear multiplied by the total radiated energy at the end of the day, see §REF .)
{{formula:31b9e951-2ff2-449b-b5e7-1615c9f15ca0}}
The investigation discussed in this paper could be further extended in several directions. Previous studies have shown that different kinds of networks, such as directed {{cite:ae48c0f2286dc9680187404c2b93db0e8ce2d84b}} or non-normal ones {{cite:954fda75bb28c59b22d3782de4ce91ae70a373d6}}, {{cite:e30d593ab65fa81cc2aca05d01f572d660adbf5d}}, extend the conditions for the emergence of patterns and allow for a richer spectrum of instabilities. Moreover, it has been shown that an instability similar to the Turing mechanism can be obtained by perturbing a stable limit cycle {{cite:32f749f1fd60f6052f9810251debe87ac14c182b}} and that non-normal networks further enhance such instability {{cite:689f11aec47d422dec98538d64aff973d0faa750}}. Given the oscillatory behaviours of neurones {{cite:c24048eccab284491e7514e8293e53e52bca00ef}}, {{cite:64ea5d2fa2fdc9bc50b6ec282c0bfcd0f0f9860a}} and the non-normal nature of neural networks {{cite:954fda75bb28c59b22d3782de4ce91ae70a373d6}}, an extension in this direction would open the way to interesting new results and applications.
We conclude this section with three preliminary results: the folklore sampling Lemma by {{cite:5a696d596f0e03df7d2642cd101ebd774b1c8b7e}}, a combinatorial Lemma we derived from the well-known Hall's Marriage Theorem, and a graph-theoretical result from {{cite:ff11e6d74c40017a6029a568c1021da97f9e5655}}. While the first Lemma is our main tool to address non-monotonicity, the other two results are used for the charging arguments in the analysis of our algorithms.
In order to corroborate the conclusions of our DFT-rVV10 calculations
and better elucidate screening effects,
we have also studied some of our systems by an alternative approach, namely
the Self-Consistent Screening scheme (SCS) {{cite:8a79d679d9ce8fdb79cb48e224b895329b80286a}}.
The SCS approach maps the dipole polarizability of the system into a set of
coupled atom-centered Drude oscillators. The oscillators are parametrized
according to the Tkatchenko-Scheffler {{cite:a56649c07c9741e040e0554971352a5286b3dc1d}} approach
to account for charge hybridization. Moreover, SCS includes long-range many-body contributions
up to infinite order, at an effective {{cite:088801776a19f4f5b7b26b336b8f2a7d7d0c9619}}
Random Phase Approximation (RPA) level. In practice, one solves a discrete
self-consistent electrostatic equation,
that describes the coupled atomic polarizabilities in the presence of an external field.
We also note that since SCS relies on atomic polarizabilities, it is a linear
response theory by construction.
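A common way to write the discrete self-consistent screening equation is the following sketch (notation follows the usual SCS literature, with $\alpha_p$ the input polarizability of oscillator $p$ and $\mathcal{T}_{pq}$ the dipole-dipole interaction tensor; the cited papers' exact conventions may differ):

```latex
\bar{\alpha}_p(i\omega) \;=\; \alpha_p(i\omega)
\;-\; \alpha_p(i\omega) \sum_{q \neq p} \mathcal{T}_{pq}\, \bar{\alpha}_q(i\omega)
```

Solving this linear system self-consistently yields the screened polarizabilities $\bar{\alpha}_p$, which incorporate the many-body response of all surrounding oscillators.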
Outlier Exposure (OE) {{cite:f742a4a5714b034cae7963353160cf0e3428b85d}} corresponds to Exp{{formula:2959e90c-29ce-4561-ba94-c07a855ae2d1}} in Section REF .
The results show that using OOD data in training with this mechanism yields an advantage over the other baselines.
In the early studies of Poincaré series for gravity {{cite:0c1065503f396561311998ca3862de176aa8adaa}}, it was assumed that the “seed” for the series is the identity module, corresponding to thermal AdS{{formula:be6d1633-75ab-48f3-b9ce-1f6966900ca3}} on the gravity side. Later studies, notably {{cite:20df17f568c2aef35a07db03e505173cfd148177}}, allowed for more general primary seeds and studied their contribution. The recent work {{cite:604bc2d2176c20a6802abd7eafa1e55ec9acbaab}} also invokes additional seeds associated to conical singularities. Thus there is no real consensus on what should be the seed for a Poincaré series. In the present work we have similarly taken an agnostic view on this. However, as seen in many of our tables, the identity seed often seems to return a positive linear combination of CFT partition functions, while other primary seeds often do not. (Amusingly, this seems to be the opposite of what is found in pure AdS{{formula:8310fd0f-3460-4413-a3a2-0f8bad3c3291}} gravity, where the identity seed leads to negative coefficients and other primary seeds are added to try to cure this.) Related to this is the question of additional seeds that must be added to account for analogue wormholes in the multi-boundary case. We have found candidates that work correctly, which we think is remarkable, but we do not have a first-principles reason why the wormhole seeds should be what we propose. Progress on this issue would be most helpful and could illuminate the more general case of pure AdS gravity.
Lemma 14 (Chernoff bound for Bernoulli variables {{cite:9136671d0b712488abe2bb8f512bf917d188cf24}})
Let {{formula:324e239f-d80a-47dc-8357-06abc9536b36}} be independent random variables taking values in {{formula:14e33774-6f7f-4fdc-804b-6c56ab7c1b28}} . Let {{formula:cdcd941b-7834-41f1-bd6e-32d481de1ff4}} and {{formula:62a4e32d-7856-47c6-98b2-22c2459c8fb2}} . The following statements hold.
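For context, the standard multiplicative Chernoff bounds that such a lemma typically states are the following (the constants in the cited version may differ slightly):

```latex
\Pr\bigl[X \ge (1+\delta)\mu\bigr] \le \exp\!\left(-\tfrac{\delta^2 \mu}{3}\right),
\qquad
\Pr\bigl[X \le (1-\delta)\mu\bigr] \le \exp\!\left(-\tfrac{\delta^2 \mu}{2}\right),
\qquad 0 < \delta < 1,
```

where $X = \sum_i X_i$ and $\mu = \mathbb{E}[X]$.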
To solve this problem, many recent works focus on deep neural networks with hierarchical structures to model the conversational data {{cite:95e11061049f47c73ed29c4798282af3615d7983}}, {{cite:0f8570d1dab1427a9153e1c8240de28bbcb8c2f1}}, {{cite:6096303146b1e06f58fbe1aceb7129cf97f121dc}}, {{cite:48ac74c6f7ae6f23786f387b74b27d0432603492}}. In these works, each utterance is firstly encoded separately into an utterance representation, which is then modeled sequentially and hierarchically. Although the structures seem to comply with the organization of utterances, they ignore the direct dependencies between words in different utterances. In addition, they are not conducive to the application of pre-trained language models such as BERT {{cite:72e2ac4df4ee6ece71b1e94ba64ef8ac9a2022bb}} and XLNet {{cite:e8a6e2c21e48370e0a391c062a0f054ce3cc342b}}, which have achieved superior performance in many dialogue system tasks other than ERC {{cite:8b08624fb7fabd5e4c46fa1af0e26bc4eec94b28}}, {{cite:620e1a4167e44f13b46c30a2021bc48246e9fcc2}}, {{cite:eb6a9a688591618f812c3803ff223680bacb31f3}}.
The landscape illustrated by deep neural networks
A comprehensive understanding would help explain why the parameters do not stray far from initialization when a zero-training-risk solution clearly exists but EMC is below the data complexity.
How do redundant parameters in deep neural network help in learning?
We hypothesize that the redundant parameters and their initialization values help construct a prior that makes learning “easy” in over-parameterized settings. Additionally, deviation from initialization in deeper layers has a more critical impact on generalization. This hypothesis is similar to the lottery ticket hypothesis proposed in {{cite:7cfb262758411e997196c83fa4f06668ed06bf96}}. However, more detailed research needs to be conducted to support this proposition.
| d | 7e5f6243445e810b51a9ee252024f6c2 |
Recently, we introduced a novel synchronous technique called Anytime
Minibatch (AMB) in {{cite:3e3641bb324754b75e4bf70cdf1f3fed4337b875}}. Our main idea was to fix the
per-epoch compute time across all workers rather than fixing the job
size as is typical. Since workers progress at different rates, fixing
the compute time results in a variable amount of work being completed
by each worker in each epoch. A drawback of AMB is that when workers
submit their results to the master, they idle while waiting to receive
back the updated optimization parameters. When this wait time is long,
the wall time convergence can be greatly prolonged. To alleviate this
deficiency, in this paper we propose Anytime Minibatch with
Delayed Gradients (AMB-DG). In AMB-DG, workers keep processing data
points and calculating gradients at all times while the master updates
the parameters using delayed gradients. We analyze the performance
of AMB-DG for a convex smooth objective function under the assumption
of fixed gradient staleness. Under these assumptions, we prove that
the expected regret bound attained is {{formula:17655639-b152-4f60-aaa8-5753c1144927}} where {{formula:f4d70f67-c788-4305-9c4e-833303ff3ad8}}
is the number of samples observed across all nodes. This bound is
well-known to be the optimal regret bound achievable by gradient-based
methods on arbitrary convex objective functions {{cite:371ab6f785a3898f1cc2f288c5203dcee5cf5c0f}}.
We also show that under the same assumptions the expected convergence
rate (i.e., optimality gap) achieved is {{formula:b1740163-1b1a-4f71-b67c-393c2632adcb}} . Our theoretical
derivations show that asymptotically in the number of data points
observed, the impact of the fixed delay on convergence is negligible
for smooth convex loss functions. We compare the performance of AMB-DG
with that of AMB and observe that under long communication delays
AMB-DG converges faster than AMB in terms of wall clock time. We also
implement AMB-DG on the SciNet high performance computing platform
and compare the performance of AMB-DG with existing asynchronous delayed-gradient-based
methods. In our examples, AMB-DG converges almost twice as fast.
| i | 7414adf94735ef9d7309ab7bcece6416 |
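The update rule described above (workers compute continuously, the master applies gradients with fixed staleness) can be sketched on a toy one-dimensional quadratic. This is an illustrative sketch, not the paper's implementation: the objective, step size, and staleness `tau = 2` are assumptions made purely for the example.

```python
def delayed_gradient_descent(grad, w0, steps=200, tau=2, lr=0.05):
    """Master update with gradients that are tau steps stale, mimicking
    workers that compute continuously and never idle (AMB-DG-style)."""
    history = [w0]   # parameter value at each step
    gradients = []   # gradients "in flight", one produced per step
    w = w0
    for t in range(steps):
        gradients.append(grad(history[t]))   # worker computes at step-t params
        if t >= tau:
            w = w - lr * gradients[t - tau]  # master applies a stale gradient
        history.append(w)
    return w

# Toy smooth convex objective f(w) = (w - 3)^2, gradient 2 (w - 3).
w_star = delayed_gradient_descent(lambda w: 2.0 * (w - 3.0), w0=0.0)
```

Despite the fixed delay, the iterates still converge to the minimizer, consistent with the claim that the impact of a fixed delay is asymptotically negligible for smooth convex losses.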
Flow-based models {{cite:b19dd15db1c2fc9358457b9ab90a1784854c2079}}, or normalizing-flow-based methods, consist of a sequence of invertible transformations that can convert a simple distribution (e.g., Gaussian) into a more complex one with the same dimension. While flow-based methods have not been widely applied to image synthesis with intuitive user inputs, a few works {{cite:6a60d3b6cad1066055dc56b7efcdcdc93cb84244}} show that they have great potential in visual-attribute-guided synthesis and may be applicable to broader scenarios.
| m | 643bc01bb7e610ed310646f51c830760 |
Social media provide crucial data for understanding how large audiences perceive and react to events {{cite:b0c45cb5fa30f97b2288920884f4444329863cd8}}, {{cite:88f42bb171339cd00951fe82037ca51725e633bf}}. Past approaches used social media traffic to infer electoral outcomes in massive voting events {{cite:3dedd232d750a23c05dea7732574a365fac800b5}}, {{cite:1b5ecb30092f473d3d2d83da0f247aa8d4382e68}}, and more recent approaches adopted social media language to understand how massive populations coped with the COVID-19 pandemic {{cite:5ad15a6bae21a911c703d45eb7a2b70a6e46ec99}}, {{cite:f6bfb10837dcdb77aca0215f5e89a63589b69c34}}, {{cite:7febae8ebd40682d19468dad386f0664b37eddb7}}, {{cite:8d1efb8a1cfc9aec40da2b81679efe7f0889cc3b}}. The overarching theme of these approaches, including the current one, is that language is a driver of emotional content and conceptual knowledge that is transferred from people's minds into digital posts {{cite:88f42bb171339cd00951fe82037ca51725e633bf}}. Accessing and modelling knowledge in social discourse thus becomes a proxy for reconstructing how massive audiences framed events, semantically and emotionally, in their cognitive representations of the world {{cite:316aa13ca0ed32bea55704ecc494ea708cc21882}}. In particular, understanding how popular posts frame ideas and emotions is crucial because popularity can lead to a single tweet being read and endorsed by up to 500k users. While not every user might be human {{cite:741b321142cd2fe33d08841ff8c8c2e6faef6e9b}}, {{cite:a291dd9f00876497558d6d327be6a77a6b4487f3}}, these numbers indicate how influential popular content can be in shaping people's perceptions of real-world and online events, as confirmed by recent works {{cite:3dedd232d750a23c05dea7732574a365fac800b5}}, {{cite:1b5ecb30092f473d3d2d83da0f247aa8d4382e68}}.
| d | e83253081d98bcac23f00f665f67e988 |
In our experiment, high-order harmonics are generated by focusing a 30 fs IR pulse in a cell filled with neon gas, resulting in the emission of an XUV comb of odd harmonics of the laser central frequency. The central wavelength of the IR field is chosen so that the 39th harmonic is resonant with the 2s2p resonance in He, which is located at {{formula:69d87baa-6ac7-43c1-8738-ced37c491c9e}} eV above the ground state {{cite:d8e53b3a1084297b9f06a91e0ac21d1cba37f873}} [see Fig. REF (a)]. A 2m-long magnetic bottle electron spectrometer (MBES), with a spectral resolution below 100 meV in the 0-5 eV spectral range, is used to detect the photoelectrons. To benefit from this resolution, a retarding voltage is applied so that electrons created by absorption of the 39th harmonic and the adjacent sidebands called SB{{formula:1b314275-ec1e-48ec-8008-9f062c90ceaf}} and SB{{formula:fbc1b79b-2006-4c52-a12e-3e28e11e7306}} are in the 0-5 eV range. SB{{formula:52e2e148-15b3-4d82-89a0-a740853b9224}} originates from the interference of two quantum paths: absorption of harmonic {{formula:7a569fbb-75ec-48d2-8680-4d1ca03a4272}} and emission of an IR photon or absorption of harmonic {{formula:d475ed56-76e8-4c3e-8f4f-33b295e55f60}} and an IR photon. The spectral resolution of our measurements is further improved using a blind Lucy-Richardson deconvolution algorithm {{cite:bd252464ea9a7ec8c3796f9509c8146c686b99c3}}, {{cite:ce732f025bd2792b1a09d9641e5451acdac5e25d}}. In a traditional RABBIT setup, both the XUV and IR pulses have a broad bandwidth. Consequently, several combinations of XUV and IR frequencies lead to the same final energy and interfere {{cite:d52fd1f7c84ad6b3eea3260e84effaba525898c4}}. This finite pulse effect induces distortions of resonant two-photon ionization spectra and thus a loss of spectral resolution. To minimize this effect, the spectral bandwidth of the probe pulse is reduced to 10 nm (full width at half maximum) using a band-pass filter. 
By comparison, the bandwidth of the IR pulse used to generate the harmonics is approximately 50 nm.
{{figure:3feab85c-09c6-41bf-bb74-99db4387e658}} | r | e41d1a42e1cf2a1da8939124d50b04c5 |
We also emphasize that, in practice, storing the Hessian approximation as a dense {{formula:e878039e-68dc-4aa1-94f4-2916e89d2370}} matrix can become a storage problem for the BFGS method.
For these cases, there exist limited-memory variants of the BFGS method (L-BFGS) {{cite:c79cedd285ce7824a23b2324cac625ae902ab901}}, whose idea is to store only the vectors that are used for an implicit representation of the Hessian.
| m | 2f70113aae2a626709de70606708570f |
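The implicit Hessian representation behind L-BFGS is typically realized with the standard two-loop recursion, which applies the inverse-Hessian approximation to a gradient using only stored curvature pairs (s, y). The sketch below uses plain Python lists and deliberately omits the line search and curvature-pair maintenance of a full implementation:

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def lbfgs_direction(grad, s_list, y_list):
    """Two-loop recursion: apply the implicit inverse-Hessian approximation
    to `grad`, storing only the curvature pairs (s_k, y_k)."""
    q = list(grad)
    rhos = [1.0 / dot(y, s) for s, y in zip(s_list, y_list)]
    alphas = []
    for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):
        a = rho * dot(s, q)
        alphas.append(a)
        q = [qi - a * yi for qi, yi in zip(q, y)]
    if s_list:  # initial Hessian scaling H0 = gamma * I
        gamma = dot(s_list[-1], y_list[-1]) / dot(y_list[-1], y_list[-1])
        q = [gamma * qi for qi in q]
    alphas.reverse()
    for (s, y, rho), a in zip(zip(s_list, y_list, rhos), alphas):
        b = rho * dot(y, q)
        q = [qi + (a - b) * si for qi, si in zip(q, s)]
    return q  # approximates H^{-1} grad without forming an n-by-n matrix

# With curvature pairs drawn from the exact Hessian diag(2, 2),
# the recursion reproduces the Newton direction H^{-1} g = (1, 1).
d = lbfgs_direction([2.0, 2.0], [[1.0, 0.0], [0.0, 1.0]],
                    [[2.0, 0.0], [0.0, 2.0]])
```

Only the (s, y) vectors are stored, so the memory cost is linear in the dimension rather than quadratic.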
Thus, for a system of {{formula:39536c70-75c1-419a-861e-7f96a7ef8c45}} point particles (positions {{formula:1c344bd3-3390-4113-a7ac-b7df355d0e87}} , linear momenta
{{formula:55bd146e-4bb9-4a7f-884b-79ba19170116}} (as usual, dots denote time derivative), {{formula:140792b8-e7df-48c1-bc58-d66eb9670b2f}} of {{formula:03ebe9d5-05ee-4feb-b2f3-411ef159786a}} ), we are led to define the angular momentum as {{cite:b1467a076ede01a6bcdfb63b21ef10fe13bb692b}}, {{cite:c0e1ffcb1891131c885be3f7b2152b46853bbca6}}
\(K = \sum_{\ell=1}^{N} q_\ell \wedge p_\ell.\)
{{formula:96c360e4-81fd-4314-ba85-b111d7a2a5af}} is a bivector of the GA algebra {{formula:2480053d-39f3-46d3-a229-9a8a8e5c69b7}} . In dimension {{formula:fdbe6c24-1fea-43dd-b13f-1eafa02cdea2}} , the dual of {{formula:6cd3c039-edae-4ea4-afdc-1b231354e1d5}} is the usual 1-vector angular momentum.
{{formula:e52c4e43-bdb5-4c1e-b6ae-907b184d855d}}
| d | 8405c43af0b32dbe71320d411f122267 |
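In coordinates, the bivector definition above reads K[i][j] = Σ_ℓ (q_ℓ[i] p_ℓ[j] − q_ℓ[j] p_ℓ[i]); the small numeric sketch below (the particle data are arbitrary illustrative values) also reads off the dual 1-vector in dimension 3:

```python
def angular_momentum_bivector(qs, ps):
    """Component form K[i][j] = sum_l (q_l[i] p_l[j] - q_l[j] p_l[i])
    of the total angular-momentum bivector K = sum_l q_l ^ p_l."""
    n = len(qs[0])
    K = [[0.0] * n for _ in range(n)]
    for q, p in zip(qs, ps):
        for i in range(n):
            for j in range(n):
                K[i][j] += q[i] * p[j] - q[j] * p[i]
    return K

# Single particle at q = (1, 0, 0) with p = (0, 2, 0): in dimension 3 the
# Hodge dual of K is the familiar vector L = q x p = (0, 0, 2).
K = angular_momentum_bivector([[1.0, 0.0, 0.0]], [[0.0, 2.0, 0.0]])
L = (K[1][2], K[2][0], K[0][1])
```

The antisymmetry K[i][j] = −K[j][i] holds by construction, and in dimension 3 the three independent components are exactly the cross product.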
The distribution function {{formula:21499842-83f3-4e6c-aa46-fad507e92f84}} determines the polaron shift
{{formula:b5a8d12c-e6c9-4e21-911d-000036d893dd}} defined in Eq. (REF ). In the lower panel of Fig.
REF , we plot {{formula:2ffdb278-056c-4d3e-ac5b-6d1e1dbfd444}} as a function of the particle density
by using different treatments of the polaron-polaron interactions.
In particular, in the limit of small densities, as expected,
{{formula:c5493e07-5c7b-4b5b-a01a-16cf7228cb54}} {{cite:53108d1834539e3b57e72bd4ee6fd7f52a90c3b5}}. As the density
increases, {{formula:6f871bbb-693d-4d3c-9dbc-c414aecc3cbc}} becomes less negative, and, in the limit of high
densities, it goes to zero. As shown in the lower panel of Fig.
REF , the Hartree-Fock approach completely fails to describe
the behavior of {{formula:d8bba086-b7b1-4681-ae2f-724fc6bbfbf2}} , systematically overestimating
the modulus of {{formula:072bf615-b3d3-48b5-95f8-3aace9eb82e2}} . In fact, polaron-polaron correlations have
to be included in order to correctly describe the
renormalization of the polaronic band, which, as discussed in
Appendix B, provides the correction to the chemical potential
{{formula:136d7be7-7a8d-4e92-8a28-57f182b26fd4}} due to many-body interactions. Finally, we point out that
polaronic effects are relevant for densities up to {{formula:a60d93f2-a974-44a0-9fc8-ba6378d5264c}} ,
which, as discussed in the following sections, can be considered
as the cut-off density for the manifestation of electron-phonon
effects.
| r | 41de40302b31e5473b199a3876ad138d |
\(\Big(\int_{B_i} |w_i|^{p^*_{s_1}}\,dx\Big)^{\frac{p}{p^*_{s_1}}} \le c\, 2^{\,i\,(N+s_1 p+p-1)\,\frac{1}{p-1}} \Big(\int_{B_i} (w_i)^{p}\,dx\Big),\)
where we have used the relation
{{formula:029f0ef5-aba6-4e7f-b68d-777e2a650129}} .
Now, we estimate the left-hand side as in {{cite:33516d5cb257d8332d2bca07154b88928e48e7ab}}:
{{formula:47dbdbf2-180e-4b7b-b219-fb7511cb0d4b}}
| r | 9b27680840d82a254c73768537e91702 |
From the application perspective, the protein embeddings learned by S2F can easily be adapted to other protein problems, such as protein annotation and protein stability prediction. For future work, it is possible to use the high-quality computed protein structures from AlphaFold2 to increase the size of the multimodal data for pre-training our model {{cite:0b90eecd31e24900593900697c8f9ddebc1e65a1}}. Besides, one may use a pre-trained language model to better encode the textual function data for the function modality. We hope to learn richer knowledge and build better representations for proteins via larger and better datasets, and then apply these representations to PPI-related drug discovery.
| d | ccbf00794eb66e013e9bc7e27d3ed8c0 |
A value for {{formula:25ed730a-0f46-449e-8d58-d1d2a5a52fd4}} has to be predetermined for use in {{formula:ddcb755a-231e-4813-ab8e-5148bb2af0aa}} -means. Traditional indices for selecting the number of clusters for k-means clustering, such as the silhouette index {{cite:a29c7037eb9f994f9f287a5965b260ae83bbddbe}} and the sum-of-squared-error plot, suggest an ideal value of 5. However, with this value the cluster sizes are greatly skewed. We therefore manually increase {{formula:8c935ece-0b79-422a-8f19-974ebd6a85b2}} to find a suitable value for clustering and end up with a value of 8. When the value of {{formula:1a4afecf-56f3-4226-be07-7a583b13106c}} was originally set to 5, the people who stayed for a longer time at the shopping malls were grouped together into a very large cluster. After the value of {{formula:d2f2c6fc-ebaf-4d8c-b6a8-009dbc9e80a8}} was increased to 8, this large cluster was further divided into 3 smaller clusters, each representing a more specific type of person. Similarly, the people who stayed the longest at the hospital were originally grouped together in a large cluster in the {{formula:ce65a8c3-26c8-41c9-ada9-5b133dd1b55d}} = 5 case, and this large cluster was divided into 2 smaller clusters after {{formula:5f481ed5-492d-4474-affa-d9d97dddc9da}} was increased to 8. As a result, {{formula:ed8c5824-e00a-45bb-9f8c-a7576fbb6fbc}} = 8 is selected over 5 because it produces a more balanced and informative clustering result. This value was used to perform {{formula:de905412-11f2-4d36-86f5-bc82e54e87f9}} -means clustering on the feature vectors. The aim of this analysis is to search for clusters of device trajectories and, from there, infer insights about a given device based on the cluster that its trajectory is assigned to. The results of the clustering and each part of our proposed analysis are shown in Fig. REF . 
Each cluster is labeled as `CP', which stands for Cluster of People, together with its corresponding number.
| r | 4a91640eff1333c0d4e5c2bf56b05321 |
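The preference for k = 8 over the index-suggested k = 5 can be quantified with a simple cluster-size balance score, e.g. the normalized entropy of the size distribution. The cluster sizes below are hypothetical numbers chosen to mimic the skew described above, not values from the actual data:

```python
import math

def size_balance(cluster_sizes):
    """Normalized entropy of the cluster-size distribution:
    1.0 = perfectly even clusters, near 0.0 = one dominant cluster."""
    total = sum(cluster_sizes)
    probs = [s / total for s in cluster_sizes if s > 0]
    entropy = -sum(p * math.log(p) for p in probs)
    return entropy / math.log(len(cluster_sizes))

# Hypothetical size profiles illustrating the skew described above.
balance_k5 = size_balance([520, 40, 35, 30, 25])  # one huge cluster
balance_k8 = size_balance([150, 120, 110, 90, 80, 70, 20, 10])
```

Under this score the k = 8 partition is markedly more balanced, which matches the qualitative argument for selecting it.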
Related to the problem of generative modeling, our model does not incorporate sampling into the learning process.
Generative variational models use sampling during training to regularize the learned latent spaces toward a simplified prior distribution {{cite:4e788fd628499381b0bd7805a907040da5175cbb}}.
As there were no latent spaces in this model, the weights could be learned directly from observed input-output pairings.
A reasonable next step for this theory is to generalize the abstract model to the case of one or more hidden layers.
The role of sampling and uncertainty in the learning process with regard to the learned structure of the hidden layers and relevance to variational inference can then be studied.
| d | 34b750b94f0ce6718356ad844761af28 |
To justify the proposed benchmark FedChem, we compare our results with MoleculeNet (MolNet) for centralized training {{cite:17da75fa2e39e7bdd6731b212fc16552121b8574}}. To validate the effectiveness of FLIT(+), we compare FLIT(+) with Federated Averaging (FedAvg) {{cite:825d6fd5e543e918b06420287acb993d6dd4a462}} and Federated Proximal (FedProx) {{cite:23b971c702df520f161b92fee752d9f89ccc571e}}. Moreover, we also implement two variants of FLIT(+) as Federated Averaging with Focal loss for client training (FedFocal) and Federated Averaging with VAT for client training (FedVAT). FedProx, FedFocal, and FedVAT are also proposed to alleviate the heterogeneity problem.
{{figure:3b7c900f-09d0-4e59-87db-a33184871117}}{{figure:f3ba0c5e-8d27-4b31-984e-d0b6690384dc}} | m | e64d32040365d43c73f14bbdeb120c9a |
In conclusion, we hope that the results of our present theoretical investigation may be helpful in understanding nonlinear
phenomena in astrophysical compact objects (viz. white dwarfs and neutron stars {{cite:9e46c83a9765763e2f182a06b22601c331a3eaa9}}, {{cite:845854db48cd9eccc5ffb79683ddc333cc1a6fec}}, {{cite:6d956960e9bd07b17043c7d0a805fb1a146af2da}}, {{cite:2ba87b81d8ea889701de8fd1a0222c3d3bd60fbc}}, {{cite:b101f738823c25c38aedaa07a72b192c4bfcf09c}}, {{cite:0603ee3ee6cfa0e6f26c8bddc49bce86c017e92c}}).
| d | 8d6896e146c201904a524380863a3803 |
We now show that ultracold Bose gases of {{formula:93ccb328-d89e-4522-8e0b-b8db8dc3f860}} Rb atoms are a candidate for observing the predicted atomic-molecular vortices. Starting from a large atomic BEC of {{formula:3b870831-6bfd-4372-9969-39df62269b13}} Rb (the atom number reaches {{formula:7bd9ef53-99dd-48ef-851c-3bb20e636802}} in Wynar's experiment {{cite:b88db28ac1ca15b19418bba04c5afc125b8c44af}}), Raman photoassociation of atoms {{cite:312e27ecb6601a41ef1bc15aa9ac295c4543bee9}}, {{cite:b88db28ac1ca15b19418bba04c5afc125b8c44af}}, {{cite:3cd8fc17096670f5c5d3dbe18acc84914c792d89}}, {{cite:f4a66aec03896a08c45526f7389a0aa39cdcc700}}, {{cite:86ab503bd995a38411fe1b5d2cc7b0531deaa9ec}} can produce the corresponding molecular BEC from a fraction of the atoms. By loading the gases into a pancake-shaped optical trap {{formula:77ca76d2-5d97-473e-85d5-48b3151e57af}} , with trapping frequencies {{formula:36991560-3b31-434b-a7e9-9f09d9891767}} {{cite:b349aa5f759c691a5d9e7c8b4eb730e851bdcbc6}}, {{cite:905dfafa5f9f0a7c1ce66f7458ceb7292f8f85e0}}, {{cite:b5108388956608c2bac40b2bba7f9bba124a5475}}, the 2D atomic-molecular BECs may be prepared. It is convenient to use a laser to rotate the atomic-molecular BECs and induce the atomic-molecular vortices. Meanwhile, the whole system should be further quenched to a lower temperature to approach the ground state by evaporative cooling techniques. The resulting atomic-molecular vortices may be visualized using scanning-probe imaging techniques. All the required techniques are therefore within the reach of current experiments.
| d | 1a91e5bce5c4c1bdbd59237770537d12 |
Smart voice assistants have come into our lives. Many smart devices now integrate smart voice assistants, such as Samsung's Bixby, Apple's Siri, and Microsoft's Cortana. These smart voice assistants can provide personalized services to users by recognizing their voices. Automatic speaker verification (ASV) is the main technique used to recognize a speaker's voice {{cite:e12ae428282750703b5331494c1eaa762198b7a0}}. It is a convenient biometric person-authentication system that verifies a speaker's identity based on speech recordings. With the development of deep neural network technology, automatic speaker verification systems have achieved excellent performance and have been applied to many real-world scenarios, such as intelligent voice assistants, secure building access, e-commerce, and speech emotion recognition. However, ASV systems are still subject to many attacks {{cite:918d630ea4d20b36e99fdea9d0fed062b47e919f}}. The most common attacks are voice conversion (VC), text-to-speech (TTS), and replay attacks. VC can convert speech without losing the target speaker's distinctive characteristics, making it one of the most accessible attack methods. TTS can intelligently convert text into natural speech that sounds smooth, so that the listener perceives it as natural, without the coldness and flatness of older computer-generated speech. A replay attack is when the attacker sends a previously recorded message to the target host in order to deceive the system; it is mainly used to subvert the correctness of identity authentication. In an ASV system, a replay attack occurs when the attacker records the voice of the target speaker and tries to pass ASV authentication as the target speaker. Owing to the rapid development of deep learning technology, these attacks can already produce speech very similar to genuine speech, which poses a great challenge to ASV systems.
| i | ce251d5b5581c5da5863ecfad9d17e43 |
Limitations.
The proposed method for building models with better generalization is only a partial solution since it requires an external model selection procedure.
New methods for model selection {{cite:dbecb567c243986c988344171466fe2086705dca}}, {{cite:0076bf333934dff26087c4d898eaa6e0ffa92cc5}}, {{cite:0e5d8d954832ffa917fe7c739861589710820e26}}, {{cite:5d92fde2dc18f8a91e41b2b6240970b74e154618}}, robust evaluation {{cite:bf8bf74af6c188a9dfb4565c94a04c5ba084d8a2}}, {{cite:5a8ecf535e1a94ec3339ceeeb62c9dc076b27fe1}}, and explainability {{cite:e7d2c8610daf174323f358e77dc604aa4320afe3}}, {{cite:21bfeb7cdc69bcfd40a085c72d123b434aa21b70}} are all suitable to implement this selection.
Interactive approaches {{cite:b2036c7ed4a6e9377037dade6d4b96e1e1498cb8}} are another option that injects expert knowledge.
Another possible extension is to apply the method to the end-to-end training of larger models.
Finally, this work focused on i.i.d. training data.
We hope to extend the analysis to forms of data known to be valuable for OOD generalization such as multiple environments {{cite:c0f19ad907ecdf5a6e604fbe4c2b833c9db1eb4f}}, {{cite:1b90b88cd7a673021003f0c81e6480a7ca63afdb}}, {{cite:b9fadf3c94a0cf2648dcb421b5828d1d0af97e6a}}, counterfactual examples {{cite:5a8ecf535e1a94ec3339ceeeb62c9dc076b27fe1}}, {{cite:ec12a9510934120d2a6816ed655733c16d980318}}, and non-stationary data {{cite:193373143233ab523f1d50893542fca0f2fdc349}}, {{cite:65122d454eb8dfc298ab558ff205c55d85849e79}}, {{cite:e07debcf3451399381adb89d18871631acf0de50}}, {{cite:6baaf8e602fefc17da7ef72633a52c515d904f68}}.
The analysis of multi-environment training, as used in domain generalization, may elucidate why these methods are often ineffective in practice {{cite:f06e651e7f959d312f6396c2d33619909e6adba0}}.
| d | eb16a5b412dae6061f613e44d4749e7c |
Lemma 2 (Hoeffding's Inequality, {{cite:0709b88a080a9c1ccb86a2516a9c32e61cebd4ba}})
Let {{formula:b5ed2997-0101-40b6-81f1-9bce11adc420}} be i.i.d. random variables in [0,1]. Then, we have,
{{formula:a8d7be03-aeaa-4a36-ac8d-4ecd95874815}}
| r | 88a5cc7918445a339f897b307f017902 |
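A quick numeric sanity check of the stated bound: for i.i.d. Bernoulli(1/2) variables in [0, 1], the one-sided tail P(mean − E[X] ≥ t) should sit below exp(−2nt²). The sample size, threshold, and trial count are arbitrary illustrative choices:

```python
import math
import random

def tail_probability(n, t, trials=20000, seed=0):
    """Monte Carlo estimate of P( mean(X_1..X_n) - E[X] >= t ) for
    i.i.d. Bernoulli(1/2) variables taking values in [0, 1]."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        mean = sum(rng.random() < 0.5 for _ in range(n)) / n
        if mean - 0.5 >= t:
            hits += 1
    return hits / trials

n, t = 50, 0.15
empirical = tail_probability(n, t)
hoeffding_bound = math.exp(-2.0 * n * t * t)  # exp(-2 n t^2)
```

The empirical tail is well below the Hoeffding bound, as the inequality guarantees (the bound is not tight for this distribution).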
Most of the existing BED studies focus on explicit models {{cite:6ba7dfaceaf69b316d49346a3590c331cab61bb9}}, {{cite:0e8f76b98e91236d1960d69e766e69de2311d80c}}, {{cite:d9dbde520ebcddfc881b3daf67536bd10711be44}}, {{cite:2e5211ab92cf38ddce29b37cf9c465db37cd0f04}}, in which the likelihood is analytically known, but in the natural and physical sciences a more common scenario is implicit models {{cite:ba4e120b78b1b8b66442eaa6b0404dac7216fb6b}}, {{cite:f0afd2dc92db96d6b641f8845b7ab4421d7ab975}}, in which the likelihood is intractable but sampling is possible. In other words, an implicit model is specified through a stochastic data-generating simulator and typically provides no access to the analytical form or the gradients of the joint density {{formula:efa57082-335b-4c92-a56a-432bf56d3c62}} and marginal density {{formula:ffbf56c1-f221-4969-a41c-51e4043383ce}} . The resulting BED scheme has a two-stage structure: build a pointwise estimator of {{formula:1ac95260-94af-4f82-ba54-0074765c958c}} and then feed this “black-box" estimator to a separate outer-level optimizer, such as Bayesian optimization, to find the optimal design {{formula:f2d990ea-6266-4754-97dd-89cc94445166}} . This scheme substantially increases the overall computational cost and makes it challenging to scale BED to high-dimensional design spaces.
| i | e11f36c3c1fb988f79ce9b2ee730853f |
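The two-stage scheme (pointwise EIG estimation handed to an outer optimizer) can be illustrated on an explicit toy model where the nested Monte Carlo estimator is easy to write down. The linear-Gaussian model y = θd + ε below is assumed purely for illustration because its EIG is known in closed form; for a genuinely implicit model, the inner likelihood evaluations would have to be replaced by a likelihood-free density estimate:

```python
import math
import random

def nested_mc_eig(d, sigma=1.0, n_outer=1000, n_inner=200, seed=0):
    """Nested Monte Carlo estimate of the expected information gain
    EIG(d) = E_{theta, y}[ log p(y | theta, d) - log p(y | d) ]
    for the toy model y = theta * d + noise, theta ~ N(0, 1),
    noise ~ N(0, sigma^2)."""
    rng = random.Random(seed)
    log_norm = -0.5 * math.log(2.0 * math.pi * sigma * sigma)

    def log_lik(y, theta):
        return log_norm - 0.5 * ((y - theta * d) / sigma) ** 2

    total = 0.0
    for _ in range(n_outer):
        theta = rng.gauss(0.0, 1.0)
        y = theta * d + rng.gauss(0.0, sigma)
        # Inner loop: marginal p(y | d) via fresh prior samples.
        inner = [math.exp(log_lik(y, rng.gauss(0.0, 1.0)))
                 for _ in range(n_inner)]
        total += log_lik(y, theta) - math.log(sum(inner) / n_inner)
    return total / n_outer

eig_hat = nested_mc_eig(d=1.0)
eig_true = 0.5 * math.log(2.0)  # analytic EIG for this linear-Gaussian model
```

This pointwise estimator is exactly the kind of "black box" that an outer-level optimizer would query repeatedly over designs d, which is why the two-stage scheme is expensive.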
At each iteration, the communication cost includes the broadcast of {{formula:7ceadeb9-d49b-4fa3-86c8-870804e9e815}} and {{formula:1bc6e396-bfaa-4764-b757-f8d0e48c42d9}} to every {{formula:dd096878-f090-4338-8485-6d7969daf98b}} and the transmissions of {{formula:c4195db0-0f91-40d7-9f85-096ab529c267}} between neighboring nodes {{formula:a70191e4-250f-4aaf-b097-448807b26a5c}} , and the main computational burden is to solve (REF ), which can be efficiently addressed by the interior point method {{cite:e549cabd4b41fec18e35f366894de8260bdd6169}}.
| m | 6d981cef40e0727597e33e897b64784c |
Theoretical analysis of policy gradient methods has a long history {{cite:8cdfe57b42f197f7830e2dac9d65ac3202ecbe9b}}, {{cite:5b28969645c7ee2e9ea4c77c5c1d2b07f0a07daa}}, {{cite:494ec34932ca2b4abe472b994fcdef8ce92c5fef}}, {{cite:a55e8367c802e7d7e204257ec12ee03e775a049d}}, {{cite:ec1960e8269150952e30a7e7edc6240fc481004a}}. Motivated by the recent empirical success {{cite:ffd98636e4900a231214ef0d9c162603827c8afc}}, {{cite:ae67f45f846ebbda20ef6281d540a160f1953b6e}} in policy gradient (PG) methods, the theory community has extensively studied the convergence of PG in various settings {{cite:8750ba4b709ea66fcb4e5feb466a10b84447d725}}, {{cite:124320f84c8f2d9a04853c979de6c5e08f4a316f}}, {{cite:e5372c5f98db7e594379820221f2b06fe5eabbcd}}, {{cite:c234128f5338c64f1bb4d0ed6f151c731af2840e}}, {{cite:06580f1fe17db8281be43d1fd717671ff2321d8f}}, {{cite:8d498362693f1c18e5a2b4dd05016a0e90dd514b}}, {{cite:e5372c5f98db7e594379820221f2b06fe5eabbcd}}, {{cite:2ebc57ac0b3a7a79b0306930dc7286a34fe0a7c1}}, {{cite:bda03d65e996c4513fde88125064b94e442aeba2}}, {{cite:f066a3608c46385d171aedaf4950c8b788daca5d}}, {{cite:e5c3d4c097bbbf772fa7a4449c8948465e26506d}}, {{cite:57a6b047300ef9092c9f6caafd3f89d4aa7d8fdf}}, {{cite:4d3b49ae7053928f0aa160a6d36e1dcdf1b44c65}}. {{cite:124320f84c8f2d9a04853c979de6c5e08f4a316f}} established the asymptotic global convergence of policy gradient under different policy parameterizations. We extend the result of entropy regularized PG with stochastic gradient {{cite:e5c3d4c097bbbf772fa7a4449c8948465e26506d}} to the contextual MDP setting. In particular, our contextual MDP setting reduces the exponential state space dependency w.r.t. the iteration number and per iteration sample complexity suggested by {{cite:e5c3d4c097bbbf772fa7a4449c8948465e26506d}} to a polynomial dependency. 
We also note that much existing convergence analysis of other PG variants yields an iteration count that does not suffer from an exponential state-space dependency {{cite:124320f84c8f2d9a04853c979de6c5e08f4a316f}}, {{cite:06580f1fe17db8281be43d1fd717671ff2321d8f}}; however, those analyses assume access to the exact gradient at each PG update, whereas we assume a stochastic estimate of the gradient, which is arguably more practical.
| m | 033400988c3ca3a2ac5d5fa8ba384d13 |
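A minimal sketch of entropy-regularized policy gradient with stochastic (single-sample) gradient estimates, reduced to the simplest one-state case (a bandit) so the contextual-MDP machinery is stripped away; the rewards, temperature, and step size below are illustrative assumptions:

```python
import math
import random

def softmax(logits):
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    z = sum(exps)
    return [e / z for e in exps]

def entropy_reg_pg(rewards, tau=0.1, lr=0.1, steps=5000, seed=0):
    """Stochastic entropy-regularized PG on a one-state MDP (bandit):
    theta_i += lr * (r_a - tau*log pi_a) * (1{i=a} - pi_i), an unbiased
    single-sample estimate of the gradient of E[r] + tau * H(pi)."""
    rng = random.Random(seed)
    theta = [0.0] * len(rewards)
    for _ in range(steps):
        pi = softmax(theta)
        a = rng.choices(range(len(rewards)), weights=pi)[0]
        adv = rewards[a] - tau * math.log(pi[a])
        for i in range(len(theta)):
            theta[i] += lr * adv * ((1.0 if i == a else 0.0) - pi[i])
    return softmax(theta)

pi_hat = entropy_reg_pg([1.0, 0.5, 0.2])
```

The entropy term keeps the policy stochastic and exploring, while the learned policy still concentrates on the highest-reward action, mirroring the global-convergence behavior of entropy-regularized PG.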
We have established a variational formulation for the role of depth in random neural networks with batch normalization: The entropy of hidden representations increases with depth up to constants. Is this entropy increase achieved by a gradient flow in the space of probability measures? This question is inspired by the variational formulation for Ito processes established by {{cite:dcee86e310bcef5462fb1e8437c4de008b09ee8d}}. According to this formulation, the distribution of Ito processes, which obey Fokker–Planck equation, can be viewed as a gradient flow minimizing a free energy functional.
| d | fc3d941f0245e14449bb872e1abc837c |
To handle the black-box constraints, we need to modify the acquisition function to show improvement only when {{formula:81f91a92-cac9-4c5e-96c9-4e0e14976828}} holds. Similarly to the objective, we model the constraint function {{formula:4fe1e10d-4b85-4eac-8deb-142f7bfe9cfd}} with a GP prior whose evaluations are corrupted with Gaussian noise. We must then weight the original EI in (REF ) by the probability of the constraints being satisfied. This results in the expected improvement with constraints (EIC) that can be analytically computed as follows {{cite:3f80d78da7033fb56bcd13e869704193f71b6557}}:
{{formula:35634eb3-b73a-495d-9658-1b6b73979c0a}}
| m | 921279c012018c762cfa7fbfa1956b3f |
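The analytic form of EIC referenced above multiplies the usual closed-form EI by the posterior probability that the constraint holds; below is a sketch for minimization with a single constraint g(x) ≤ 0, where the GP posterior means and standard deviations are illustrative numbers:

```python
import math

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_improvement_with_constraints(mu, sigma, best, mu_c, sigma_c):
    """EIC = EI(x) * P(g(x) <= 0), for minimization with GP posteriors
    (mu, sigma) on the objective and (mu_c, sigma_c) on the constraint."""
    z = (best - mu) / sigma
    ei = (best - mu) * norm_cdf(z) + sigma * norm_pdf(z)
    prob_feasible = norm_cdf(-mu_c / sigma_c)  # P(g(x) <= 0)
    return ei * prob_feasible

eic = expected_improvement_with_constraints(mu=0.0, sigma=1.0, best=0.5,
                                            mu_c=-1.0, sigma_c=0.5)
```

When the constraint posterior is confidently feasible (mu_c well below 0), the weight approaches 1 and EIC reduces to plain EI; near the constraint boundary the weight shrinks the acquisition value.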
Block ciphers should be designed to resist all classical attacks. S-boxes are the core components of block ciphers. The primary purpose of S-boxes is to produce confusion inside block ciphers.
These S-boxes are nonlinear functions over {{formula:84255e03-0e1d-4b82-afed-33c43ea5132d}} . They should have low differential uniformity to resist differential attacks {{cite:e1e78fc05e370df7ff9aed50a24639ca7ff7ce5a}}, high nonlinearity to avoid linear cryptanalysis {{cite:6c89ead7c46f4d06ff1d59864624ccf9d43a4f21}}, and high algebraic degree to prevent higher-order differential attacks {{cite:f31cd2fffe83dcb44537de84bd198bc983bab2c5}}.
It is well known that the lowest differential uniformity achieved by a function {{formula:a46d3eb5-8bbe-4d06-b219-25341c5e70b5}} over {{formula:2a4d0bdf-72f1-47dc-b7c9-c3a5d8ba9b1f}} is {{formula:88a1afbe-589b-465e-99da-35a0188d55f4}} and such functions {{formula:66707da7-cfb5-4f8a-be26-ad28b45757eb}} are called almost perfect nonlinear (APN) functions.
APN functions have important applications in block ciphers. For example, the APN functions {{formula:b2faa8ad-1f64-4d36-ab96-ee883aea1ba8}} over {{formula:5c88aabf-7e7b-42eb-81c9-7d02bbff3faa}} and {{formula:25cb14d2-c4c5-479e-a9c2-92c977a5f880}} over {{formula:3c5bccef-6f70-464c-b066-0aaa0db4a866}} have been respectively used in MISTY and KASUMI block ciphers.
For ease of implementation in both hardware and software, such functions are required to be defined on {{formula:ead6151e-5f48-46b8-9dcd-1257f074e075}} for even {{formula:7e7bd17b-5625-4b3c-a33c-0070d766716f}} .
It is known that no APN permutations exist over {{formula:b2d3cb22-9d5c-4032-92dc-a8c516eefc6a}} for {{formula:3c7e278d-54f7-4b69-81cf-7eca4542097d}} . Instead, an APN permutation of the field {{formula:e3985a26-2552-44ec-9e14-be8bb9c76baa}} was discovered by Dillon et al. in {{cite:74ccc412ee2c880edc28a7addd3ba3833eee01c5}}.
It is an open problem whether there exists an APN permutation over {{formula:b710d96b-18e5-4ae6-b632-5bd45e23772a}} for even {{formula:8ab31b1a-bbf7-47b2-a8ff-0a15eefe946a}} .
So, to resist differential attacks in even dimensions, we can choose differentially 4-uniform permutations as S-boxes.
A well known example is the multiplicative inverse functions used in the S-box of the Advanced Encryption Standard (AES). This function is differentially 4-uniform with known maximum nonlinearity and optimal algebraic degree.
A number of recent research works have been devoted to constructing differentially 4- and 6-uniform permutations with high nonlinearity and high algebraic degree over {{formula:29d25587-fba1-449f-a139-84301d3c41ce}} .
| i | a0a1d0206fcdb6c08acb83a2379e5369 |
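The notions above (differential uniformity, APN, the 4-uniform inverse map in even dimension) can be checked directly on a small field. The sketch below brute-forces the difference distribution over GF(2⁴) with the irreducible polynomial x⁴ + x + 1: the Gold function x³ comes out APN, while the inverse-type map x¹⁴ is differentially 4-uniform, as expected for even n:

```python
def gf16_mul(a, b, poly=0b10011):  # modulus x^4 + x + 1
    """Carry-less (polynomial) multiplication in GF(2^4)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b10000:
            a ^= poly
        b >>= 1
    return r

def gf16_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf16_mul(r, a)
    return r

def differential_uniformity(F):
    """max over a != 0 and all b of #{x : F(x ^ a) ^ F(x) == b}."""
    best = 0
    for a in range(1, 16):
        counts = [0] * 16
        for x in range(16):
            counts[F(x ^ a) ^ F(x)] += 1
        best = max(best, max(counts))
    return best

cube = lambda x: gf16_pow(x, 3)      # Gold function x^3: APN for all n
inverse = lambda x: gf16_pow(x, 14)  # x^(2^4 - 2): AES-style inverse map
du_cube = differential_uniformity(cube)
du_inverse = differential_uniformity(inverse)
```

This makes the contrast in the text concrete: x³ achieves the optimal value 2 but is not a permutation in even dimension, while the inverse map is a permutation with differential uniformity 4.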
Different constraints are used in the matching-based methods instead of traditional stereo matching. Heber et al. {{cite:31c489341aec78ae8e73a01f084f784df51513df}} estimated depth by matching the central view with the other sub-aperture images, but they did not use all the sub-aperture image pairs. To improve the depth estimation, Heber et al. {{cite:a31058f0e219befa819a7e2d0562650fe9e30470}} further proposed a novel principal component analysis (PCA) scheme to align the sub-aperture images by transforming the depth estimation problem into a rank-minimization problem. In addition, Jeon et al. {{cite:1f3a19b9727c10ac121c1615f3baddd5780301fc}} estimated the shifts of sub-aperture images with sub-pixel accuracy by applying the phase-shift theorem in the Fourier domain. Yucer et al. {{cite:5503242794582598040fc9c561853167352f4d5f}} proposed using the LF gradient to locally match patches between adjacent sub-aperture images. Because of the narrow baseline, stereo matching methods inevitably involve interpolation, which makes the depth estimates uncertain and ambiguous.
| m | b7d0c3e6c46473024097c92956859826 |
Here, the CNN parameters {{formula:65aacb78-4966-41ce-b4e2-c11e0023a921}} are initialized with random values and are optimized such that (REF ) is minimized. We note that CNN models often have high representation power; they can fit noise when trained for enough epochs. Because the measurements {{formula:28458c74-de3f-4554-8c36-67e75e1dbd6b}} in (REF ) are noisy, the DIP scheme is vulnerable to overfitting. Early termination is used in {{cite:f4cf299c006a31b426fa1d133fa7b1b54e42a74d}} to minimize the risk of overfitting.
{{figure:6f8389c3-7210-4738-b3f3-d2ab40921e96}}{{figure:48dcd857-4b3f-474e-b82d-ae0aa9ea9fd5}} | m | 886836d12b9e93798cb454d02787d5c6 |
Using frequency domain decomposition and residual connections, {{cite:4dae6b645b624326063bcd16d98be820dec2dcd7}} first employs a three-layer CNN to extract rain streaks form the detail layer. Thereafter, advancing network modules are introduced, such as residual block {{cite:1e4b528efc84eeb4512d5c70cf0cb7616530c575}}, dilated convolution {{cite:1e4b528efc84eeb4512d5c70cf0cb7616530c575}} and recursive block {{cite:784e401d1ef4105dc3b939a861753f1bbb2a57c2}}. Among them, {{cite:a76de2996969e0eec0abda3cd8db1c7453bfccb6}} and adopts a coarse-to-fine strategy by adding supervision on different learning stages. Due to the complexity of rain streaks and their composition with the background, several methods are proposed to separate the task using dual-path networks {{cite:1e4b528efc84eeb4512d5c70cf0cb7616530c575}}, {{cite:cb9c764674fada182421867af6c90c7cf32d18b0}}, or adopt a multi-stage strategy using recurrent neural networks to progressively recover the clean image {{cite:759c7553c5187f0791fffd956e7e50a78424c7f8}}, {{cite:784e401d1ef4105dc3b939a861753f1bbb2a57c2}}. {{cite:cb9c764674fada182421867af6c90c7cf32d18b0}} proposes to recover low frequency image structures and high frequency image details separately using two parallel network branches. {{cite:1e4b528efc84eeb4512d5c70cf0cb7616530c575}} takes advantage of another network branch to find back lost details. GAN is also exploited by {{cite:e961bc9e5cfd4bf40858b822901a94b92cb8fa01}}, {{cite:53f4a8cb092d876024beeefe5cdea06978f85a49}} to refine the deraining results for more visual appealing effects. Besides, {{cite:3f33012f01d4c7832db383bdf5db2d38506437b3}}
builds a dataset to describe heavy rainy scenes, using depth images to associate rain streaks and rainy haze. {{cite:4d5a942ebe2c28a746009a0917b8345b0319c0a9}} proposes a real rain dataset using video-based deraining results and adopts a directional IRNN to learn spatial attention for guiding the network. {{cite:da8e436ee233702551723e90394d0bb53de4230b}} presents a comprehensive benchmark named MPID for the evaluation of various deraining methods. {{cite:07108ec6d167505953df9d999519dc9fb5ede27b}} first utilizes CycleGAN for single image deraining. For removing different scales of rain streaks, {{cite:e1b4d995c21b8fbe916d7771846a84a8a103792d}} designs a fractal band learning network trained with self-supervision for scale-robust rain streak removal.
| m | dc178d39c5b8d3c3d265cdffa4a9d379 |
Separating.
In terms of sample complexity, Corollary REF gives that a generic max filter bank of size {{formula:535ddae2-81f5-464a-bbca-a0bc238ae50c}} separates all {{formula:1d4cb5ef-fcb0-4bc7-a2e8-1033411b0e49}} -orbits in {{formula:1d010072-3852-4412-ae40-243812cc052f}} provided {{formula:9a8b4d6b-efb5-480c-a5a7-00d53e7de6eb}} is finite.
If {{formula:ca51aa78-e69d-48af-b17f-baebf0d13c05}} is not topologically closed, then no continuous invariant (such as a max filter bank) can separate all {{formula:1c13a6fe-2082-4337-989e-b36bc3fca8fe}} -orbits.
If {{formula:11d3007c-a520-470c-a988-15940384b628}} is topologically closed, then by Theorem 3.4.5 in {{cite:5afe5ef0bac14cb221591ef429599b394193159b}}, {{formula:6a030d9f-f674-4b7f-9ff4-2ceb367036e8}} is algebraic, and so Theorem REF applies.
We suspect that max filtering separates orbits in such cases, but progress on this front will likely factor through Problem REF (a).
For computational complexity, how well does the max filtering separation hierarchy (REF ) separate isomorphism classes of weighted graphs?
Judging by {{cite:80a4dc7e3d668d52ca71029e815399b83a8bb42a}}, we suspect that max filtering with a template of order {{formula:ea949753-8876-4a4c-ae91-5e992b06c0f5}} and treewidth {{formula:b4c3890d-e038-4387-9a67-f1e1e7417074}} can be computed with runtime {{formula:b96b5f6b-09c6-4546-83ae-4e11c3e10349}} , which is polynomial in {{formula:90e8dce4-ddf2-45bd-bd24-8cc0d0992a7d}} when {{formula:195f8ff0-ae8e-4470-8f83-1f6c61034e80}} is logarithmic and {{formula:ac71a629-3b29-49a5-b53d-61b67aa62f54}} is bounded.
Which classes of graphs are separated by such max filters?
Also, can max filtering be used to solve real-world problems involving graph data, or is the exponent too large to be practical?
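For a concrete sense of what a max filter computes, here is a minimal sketch for the cyclic group acting on vectors by circular shifts. The group, the templates, and all names are illustrative choices for exposition, not the construction or the graph setting of the text above:

```python
import numpy as np

def max_filter(x, template):
    """Max filtering value <<x, t>> = max over group elements g of <g.x, t>,
    here with the group G of circular shifts acting on R^n."""
    n = x.size
    return max(float(np.dot(np.roll(x, k), template)) for k in range(n))

def max_filter_bank(x, templates):
    # A bank of p templates yields a p-dimensional shift-invariant feature.
    return np.array([max_filter(x, t) for t in templates])

x = np.array([1.0, -2.0, 3.0, 0.5])
templates = [np.array([1.0, 0.0, 0.0, 0.0]),
             np.array([0.5, 0.5, -0.5, 0.0])]
feat = max_filter_bank(x, templates)
# Invariance: shifting x does not change the feature vector.
feat_shifted = max_filter_bank(np.roll(x, 2), templates)
```

With the first (indicator) template, the max filter simply returns the largest entry of `x`; richer templates produce richer invariants, and separation results concern how many generic templates are needed.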
| d | f7f12f7b3035bc654789406b6598b911 |
In this work, we show that this simple method can be a very competitive baseline that either outperforms or is on par with most state-of-the-art methods on two popular benchmarks (DomainNet {{cite:102ed3b60a275ba7a703ec2f9f4d22a6d603e56c}} and VisDA-17 {{cite:1ebfc7ad8810ee04fc2bfce2e216feebaa8defe9}}). Note that most state-of-the-art DA methods employ complicated algorithms, such as adversarial learning, that are not straightforward to optimize and reproduce. Therefore, we believe our simple baseline can have a positive impact on the design of future DA methods.
| i | 836d00d85336b513b064a53431bfd598 |
Blind demixing with multiple brain graphs. fig:brain depicts the rate of successful recovery using up to five different brain graphs corresponding to different people. The graphs belong to a dataset of six undirected graphs of the human brain, consisting of {{formula:27a5e093-d666-4430-ad62-2b9982f6234f}} regions of interest (ROI) and edges given by the density of anatomical connections between regions {{cite:d002148e48bed67434aa0b3e821725f39ff3231d}}. The level of activity of each ROI in a brain graph can be represented by a graph signal {{formula:88dfb110-35de-40c0-9cc5-e8c28cbb3aca}} , thus, successive applications of {{formula:8ab6b306-46a7-4b8a-a3b3-725f7d024915}} model a linear evolution of the brain activity pattern. Under the hypothesis that we observe a sum of linear combinations (filtered signals) resulting from diffusing originally sparse brain signals, then blind demixing jointly estimates which regions of the brains were originally active, the activity in these regions, and the diffusing coefficients {{formula:76577c57-98ff-4b5f-86a8-e0ff7673f94b}} of the linear combinations. In this experiment, {{formula:227276b5-d553-44b9-a86d-dc3edca2c9ec}} and {{formula:4effe1f6-f11e-4ae8-825f-f4b8f02c98d1}} vary, {{formula:f189ce21-86eb-484f-af23-ccfc3bf0e6c5}} , and the GSO {{formula:979a277f-1fff-439f-b745-a60e692df4b0}} is chosen to be the Laplacian matrix. The results show that demixing is indeed feasible, although the performance decreases quickly as {{formula:abe3a0aa-9639-4fb3-898e-1c82c2c5b48a}} increase. This is not surprising since the brain graphs at hand exhibit strong similarities. On the other hand, for a fixed value of {{formula:ed83e04d-96a7-4e09-ae30-12f71efb053e}} it is interesting that the performance increases with {{formula:59a74171-1e34-49f2-92f3-481771c82e3d}} growing from 2 to 4, and it starts to drop afterwards. 
This reveals a trade-off between the number of parameters to estimate (increasing with {{formula:2a887be7-f27c-427c-b03c-52491191cca1}} ) and the amount of information of {{formula:34d6f0a1-6399-42cd-b2cc-17b7c923ff1c}} in {{formula:116f55ea-56cd-4d50-9946-2e36b499fbac}} (which increases with the successive diffusions).
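The forward model described above, a sum of polynomial graph filters applied to sparse seed signals, can be sketched in a few lines. The toy graph, the sparsity pattern, and the function names below are illustrative assumptions, not the brain data of the experiment:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy undirected graph (5 nodes) and its Laplacian as graph-shift operator S.
A = np.array([[0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [1, 0, 0, 1, 0]], dtype=float)
S = np.diag(A.sum(1)) - A          # combinatorial Laplacian

def diffuse(x, h, S):
    """Graph filter output sum_k h[k] * S^k @ x (a polynomial in S)."""
    y, Sk = np.zeros_like(x), np.eye(S.shape[0])
    for hk in h:
        y += hk * (Sk @ x)
        Sk = S @ Sk
    return y

# Demixing forward model: we only observe the SUM of P filtered sparse signals.
P = 2
xs = [np.zeros(5) for _ in range(P)]
xs[0][1], xs[1][3] = 1.0, -0.5      # sparse seeds (one active node each)
hs = [rng.standard_normal(3) for _ in range(P)]
y = sum(diffuse(x, h, S) for x, h in zip(xs, hs))
```

Blind demixing then amounts to recovering the seed locations `xs` and filter coefficients `hs` from `y` alone, which is harder the more similar the underlying graphs (here, filters) are.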
| r | f11c7568c3f3d5525b39f18f6f030225 |
To validate the effectiveness of MAE pretraining for the transformer denoising model, the following state-of-the-art models are investigated: REDCNN {{cite:f31d3fe6a651a1e53ade2dd9bafee6fbda25a7db}}, MAPNN {{cite:00154e1f95131de3a31c1c30341360f936de4da5}}, and SwinIR {{cite:c95e5c000af5af31dfd5347d4d94692dfe8ad5f6}}. In particular, all these methods are implemented and optimized according to their officially released code. In addition, MAE is applied as a pretraining strategy to SwinIR (SwinIR+MAE). These four methods are trained on nine labeled patient datasets in a fully supervised way. Moreover, since ground-truth data are not always attainable in real-world clinical applications, we also explore SwinIR+MAE in a semi-supervised setting, where three labeled and six unlabeled patient datasets are used for training.
{{figure:233630ca-35ed-4f8a-9a54-ff9d09b697ee}} | r | f420af421281a852ab2cb0e0b3e2232d |
It is well-known that there is a relationship between an asymptotically hyperbolic Einstein metric on a smooth manifold and the conformal structure on the boundary. This relationship has become a central theme in the study of conformal geometry since Maldacena introduced the AdS/CFT correspondence in the quantum theory of gravity in {{cite:169052f4f883247c41be56b975f6c1d37fd6ef50}}.
| i | 4a7320be25183472124e248c429858bc |
We note that this result also begins to give us an explanation of the criticality hypothesis vis-à-vis neuroscience {{cite:f4453718fbe48cbe2ab2cef9eb0f0ad90e59c17c}}, {{cite:6a288db8c7896666984f94fa260721c515c84efb}}, {{cite:205c2753e0e69bfd96f5983cff1811bf6212499b}}. That is, at the critical threshold, with the emergence of the giant component, the number of unique functions spontaneously increases. Along with that comes an increase in the number of complex functions. As neuronal networks need to compute integrative complex functions on sensory information, or on information passed between modular areas in the brain, the utility of this complexity is self-evident {{cite:029e21fe493dd52980398ed3dbbf6409b4eee775}}. We note that in computational neuroscience, there is also discussion of the integration of information and complexity or consciousness {{cite:14ef8c6b088b7b9f2064a624f0ad78fb07834b78}}, {{cite:3dab5609dbabd4448d5da971ebbadc9fb4bcc09a}}. These motifs therefore give us a starting point for the relationship between structure and function as well.
| d | 1b551445c76c4ae9422af06c28757e2e |
Towards an ultimate visual object classification, this paper addresses three inherent handicaps of supervised learning approaches. The first one is the dependence on the availability of labeled training data. When object categories grow in number, sufficient annotations cannot be guaranteed for all objects beyond simpler and frequent single-noun classes. For composite and exotic concepts (such as American crow and auto racing paddock), not only do the available images not suffice, as the number of combinations would be unbounded, but often the annotations can be made only by experts {{cite:ca13319271784745abee9b711e5cfe718e5b48aa}}, {{cite:613f8d0fd34b8f2ad9d04b84ac215ae4d9827482}}. The second challenge is the appearance of new classes after the learning stage. In real-world situations, we often need to deal with an ever-growing set of classes without representative images. Conventional approaches, in general, cannot tackle such recognition tasks in the wild. The last shortcoming is that supervised learning, in its customarily contrived forms, disregards the notion of wisdom. This is exposed by the fact that we can identify a new object by just having a description of it, possibly leveraging its similarities with previously learned concepts, without requiring an image of the new object {{cite:5a64600f207087bd227931a84b61c8a9203b4bcf}}.
| i | 483395bc52da689c73890426094100d9 |
We compare our proposed FairReprogram with four additional baselines,
and we show the full results with variance in Tab. REF .
We further compare our method with MMD methods where {{formula:d84114d6-7936-4618-93f8-cf5090d40942}} in Eq. (REF ) is replaced with Maximum Mean Discrepancy regularization {{cite:92fbb54dd140ba11390a1537b62b97dba05d7318}} to partial out the instability of adversarial training, and the results are shown in Fig. REF .
We also implement the fairness reprogramming in the black-box setting on CelebA dataset, where the model parameters are not available for training the reprogram, and the results are shown in Fig. REF .
Besides, we show that FairReprogram could also be used in tabular data, and the corresponding experiment results on the Adult dataset are shown in Fig. REF .
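As a reference point for the MMD variant mentioned above, a biased RBF-kernel estimator of the (squared) Maximum Mean Discrepancy takes only a few lines; the kernel bandwidth, sample sizes, and names below are illustrative assumptions:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # k(x, y) = exp(-gamma * ||x - y||^2), evaluated pairwise via broadcasting
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(X, Y, gamma=1.0):
    """Biased squared-MMD estimate between samples X and Y."""
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2.0 * rbf_kernel(X, Y, gamma).mean())

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 2))
Y = rng.standard_normal((64, 2)) + 2.0   # samples from a shifted distribution
```

Unlike an adversarially trained discrepancy, this estimator has no trainable parameters, which is what makes it a useful control for the instability of adversarial training.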
| r | 9cecbf86c19eecb1be6311d131f2c0da |
The numerical results, obtained with a homemade Julia {{cite:b0b5ed7f50d77e58e194c0a2ee7da1035cb17196}} code, are summarized in the following figures with {{formula:fabb9a61-73f5-4a3f-af1a-5150b32236d0}}, {{formula:0d7f47c0-6009-41f7-81a2-1150d36671e8}} and {{formula:ee1f974f-dae2-426b-bbc2-1b5f9bdfa8ab}}. The atomic PAW functions {{formula:c2de7244-d086-46f7-82d9-6c14fb3a82c8}} are the eigenfunctions of the hydrogenoid atom.
For the pseudo atomic functions {{formula:7c6bfe23-0011-4d73-9fb4-c46cf52c0168}}, continuity of the function and of its first four derivatives is enforced (i.e. {{formula:1cef63ff-8cc9-4e81-a530-d2d1a30bec94}}).
The lowest eigenvalue is computed using a conjugate-gradient algorithm stopped when the norm of the residual is less than {{formula:d550ebbf-6b7c-4a5f-9c8c-84785ae3cb80}} .
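As a self-contained illustration of the eigensolver step (not the actual PAW code, which is in Julia and uses a conjugate-gradient scheme), the lowest eigenvalue of a symmetric matrix can be found by descending the Rayleigh quotient with the same residual-norm stopping criterion:

```python
import numpy as np

def lowest_eigenpair(H, tol=1e-8, max_iter=5000, seed=0):
    """Minimize the Rayleigh quotient x'Hx / x'x by steepest descent;
    stop when the residual ||Hx - lam*x|| drops below tol."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(H.shape[0])
    x /= np.linalg.norm(x)
    lam = x @ H @ x
    for _ in range(max_iter):
        lam = x @ H @ x
        r = H @ x - lam * x            # residual of the eigen-equation
        if np.linalg.norm(r) < tol:
            break
        x = x - 0.5 * r                # descent step on the Rayleigh quotient
        x /= np.linalg.norm(x)
    return lam, x

# Small symmetric test matrix (illustrative).
H = np.diag([1.0, 2.0, 3.0, 4.0]) + 0.1 * np.ones((4, 4))
lam, x = lowest_eigenpair(H)
```

A conjugate-gradient update would replace the plain steepest-descent step with a conjugate search direction, but the residual-based stopping rule quoted in the text is the same.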
| r | 7e17f70d82af8f82411d4b1c35ada0ad |
Style Discriminator
We employ a Siamese network as the primary discriminator for the GAN. The function of the discriminator is authorship verification – taking in two pieces of text and identifying whether they are by the same author. We select an authorship-verification approach for the discriminator over an authorship-classification approach for two reasons. First, anonymization with a classification model as the adversary may not generalize to authors outside the training set. {{cite:021bfa4764e1426ed3a057233e658cc349104cf9}} uses the analogy of fruit classification to explain: color helps differentiate apples from bananas, but if a model based on color is then used to distinguish pears and lemons, it will fail. Instead, if the generator learns to anonymize sentences from a discriminator that embeds the sentence style as a whole, it will likely be able to generalize better. Second, because this approach requires capturing style as a whole rather than learning features specific to authors, the discriminator’s task becomes more difficult. This stabilizes GAN training – an overly easy task for the discriminator leads to high discriminator accuracy and vanishing gradients, preventing the generator from learning {{cite:5bd0573e1db51d5e11ee4d1801d497fa47659b4e}}.
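The defining property of the Siamese verifier is that both inputs pass through the same encoder, and a similarity score on the two embeddings decides same-author vs. different-author. A minimal numpy sketch (the random linear encoder and cosine score are stand-ins for the actual trained network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared-weight "encoder": both inputs go through the SAME projection,
# which is what makes the architecture Siamese.
W = rng.standard_normal((16, 64)) * 0.1

def encode(x):
    return np.tanh(W @ x)

def same_author_score(x1, x2):
    """Cosine similarity of the two embeddings; thresholding this score
    turns the network into an authorship verifier."""
    e1, e2 = encode(x1), encode(x2)
    return float(e1 @ e2 / (np.linalg.norm(e1) * np.linalg.norm(e2)))

a = rng.standard_normal(64)   # stand-in for a document feature vector
b = rng.standard_normal(64)
```

Because the score depends only on the learned style embedding, the verifier can compare authors never seen during training, which is the generalization argument made above.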
| m | aba4dceabe89d86786076456c1d1d32a |
In this paper we have discussed a generalization of the notion
of the quantum operations and the quantum time evolution.
The generalization depends on extending the linearity of quantum operations
to the quasi-linearity condition.
This condition is motivated by the appearance of such operations in a
“hidden” form (e.g. selective measurement {{cite:fe2ef17020aa8065ec491b841c1f970b567a2c5f}})
in quantum formalism.
On the other hand, convex quasi-linearity guarantees the absence of
superluminal communication.
We have identified a natural class of operations satisfying this condition.
Moreover, we have generalized the GKSL master equation for
quasi-linear evolutions.
As an example we have considered nonlinear qubit evolution.
It is worth stressing
that some of these qubit evolutions were discussed
independently in a different
context of non-Hermitian quantum mechanics
{{cite:f01ecb40727675f8e63d69422d1d03733ae44c0c}}, {{cite:7e0b6688ff08bec60f007b81915ad43b86b5853a}}, {{cite:4f388df29b7579d19e3e38be48c6bd2493b2ef06}}.
We also discussed an appropriate modification of the Jaynes–Cummings model
{{cite:8ca80222d4440df5dea989d97a3c82f61ed9ea8e}}, {{cite:e574b2b7e7b6b33ea2054f5685bf6a5469f9b8a1}} describing the interaction
of a two-level atom with a single mode of the electromagnetic field.
In general, the nonlinear time development of qubit is related to an interaction
of the quantum system with an environment.
Depending on the interrelation with environment the evolution
of qubit can demonstrate different time dependence
but mostly it can be interpreted as a damping of oscillations
together with gain and loss of energy (Sec. REF ).
This is especially evident in presented physical applications of the introduced formalism:
evolution of spin of a relativistic Dirac particle in external electromagnetic field
and evolution of the flavor state of solar neutrinos propagating through the Sun.
In both cases we observe damping of oscillations of the state and its asymptotic
evolution to a stationary state. Moreover, the latter case offers a new physical
interpretation of the process of transmutation of neutrinos belonging to different
flavor generations inside the Sun.
| d | 1713396a4057dcb09935b3cf14292971 |
The structure and properties of the toposes {{formula:947b57f3-8ea3-446b-8b78-1468e282d1e6}} remain
mysterious for the moment, and in future work we want to explore which kind
of properties of {{formula:0c8dbfd3-e2d6-44a7-a09d-d3a2cad1b4a2}} are reflected in {{formula:02e70fba-facb-42c1-8769-287ad07fe613}} . In the spirit of
Grothendieck {{cite:721bccf63b6a321c8b7686bac76e557f1d8d2dc6}} we want to view the
toposes {{formula:d05c695e-0c4a-499a-a68d-8f49fc4d80a9}} as geometric rather than logical objects,
the guiding intuition being that {{formula:5b30b5d7-e9b7-41b9-bcfb-77906d42762a}} can be seen as
representation of
`the space of solutions
to the algorithmic problem of computing {{formula:6972f57e-9ab3-44b4-9e48-fa0167672e41}} ',
encoding e.g. information on how algorithms computing {{formula:126d2e95-fbe2-4d62-afad-f1efda5bc5a8}} can be decomposed
into simpler parts.
| d | cc0dff5f24f06c6001345a7ccb635d5f |
Tab. REF compares detection measures (mAP, Recall, Precision) and tracking measures (HOTA, ATA, TEM) in the case of six sequences extracted from the MOT17 and MOT20 datasets for two detection sets of Faster R-CNN {{cite:03a17e3aeb73d93b96ccf3fe5abd53a0207f67d6}} and the SORT tracker {{cite:1935a603f494c4e6d10dfa8c31ce52f334ab37f9}}.
HOTA's main trend is related to Recall/mAP (i.e., the higher the Recall/mAP, the lower the HOTA). However, some sequences like MOT17-02 and MOT17-11 show an opposite outcome with respect to mAP.
We can observe a non-consistent behavior of HOTA with respect to Precision. Although most of the sequences are directly correlated, some sequences like MOT20-01 and MOT20-02 show an opposite relationship between Precision and HOTA.
The proposed TEM measure shows a consistent trend for all sequences, being positively correlated with mAP. Moreover, the improvement in precision does not seem to affect the measure as much as it does in HOTA, giving higher scores when the detector improves mAP.
Moreover, detection results for each sequence exhibit similar mAP for sets #1 and #3. However, Precision is always higher for set #3, which means fewer FPs. A lower number of FPs implies that the effort of the tracker should be smaller, as the mAP is similar in the two cases. Unlike HOTA/ATA, the proposed TEM is able to capture such effort, giving lower scores when Precision is higher.
{{table:b8aacfee-20e7-44c0-ab75-6c091b02e928}} | r | 87386947cdb879e842e47deb810b5f7e |
Table REF presents the performance comparison of our proposed system with several prior approaches for the three experimental setups (i.e., Exp 1, 2, and 3) described in Section . The results are obtained using the combined system that utilizes the ResNet model with the statistics pooling layer trained using the transfer learning and spectrogram augmentation approaches described in Section . All studies referenced in the table adopt the LOSO strategy to conduct experiments on both the improvised and scripted portions of the IEMOCAP dataset. (There are other related studies in the literature that only use the improvised portion of the IEMOCAP dataset {{cite:6d2cdfc733ac2931e987c4e06940393e29292405}}, {{cite:b995cd5a6ec0584332ae2be481704b2496065119}}, {{cite:7578b53e34da0943ed872fc9d952fd21836814e2}}. In our experiments, on the other hand, we use both the improvised and scripted portions of IEMOCAP, which is approximately twice the size of the improvised portion alone. Because the experimental setups and the amount of data used for model training and evaluation in those studies differ from ours, we have not included them in Table REF for comparison. The SER performance on the improvised portion is known to be better than that on the full dataset; see, e.g., {{cite:38a7852cc953cb0f8f7717222c3d65ba02c0f4e4}}, {{cite:49f4daf826c4f542b4301fee60734c1fc7696439}}, {{cite:0e7acb7b0fef5ebaeb6c14f8516d666fd0f84b0e}}, {{cite:162148bd0e98bd46a70f9b2de59526da4a949efd}}, {{cite:1a1f63bd407cd86b165685d9a05c593ddddc8480}}.)
It can be seen from the table that the proposed system consistently provides competitive performance across the three experiments, achieving state-of-the-art results. In the case of Exp 2, the proposed system outperforms a system that uses 384 engineered features {{cite:0e7acb7b0fef5ebaeb6c14f8516d666fd0f84b0e}}, while for the other two experiments, our proposed system outperforms systems that use a large set of engineered features (e.g., {{cite:09689e6e7a2c67e29d38444ebc19e93cde71268e}} and {{cite:49f4daf826c4f542b4301fee60734c1fc7696439}}).
| r | 1ac473694d7932f82c1b476baeeaf50f |
where {{formula:18a6f9f5-5af0-4986-a36d-7751780c9ed9}} denotes an under-sampled Fourier encoding matrix controlling the acceleration factor (AF). {{formula:67dd3447-986e-4dc0-8777-bc09cfa67cd4}} represents the additive acquisition noise. {{formula:ed427dc7-0978-4299-a40a-524d95d5ca0d}} and {{formula:0291ad7d-e9e6-4869-82ef-a18851f72ba3}} represent the Fourier transform and its inverse. The MRI reconstruction is commonly formulated as an unconstrained optimization problem as follows {{cite:5b0ac2d6dab4a89ee47b44b6c0333192230ad524}}, {{cite:028487d79084dad94d3faa2d2e16a72a22d396f7}}, {{cite:81c5bba3c7738df2f5cda658a3bb69c1a32c369d}}:
{{formula:5b038784-2028-407b-abd1-d56a8eb73ec5}}
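A rough numpy sketch of this forward model with a random Cartesian undersampling mask (the mask pattern, image size, and function names are illustrative assumptions, not the notation of the cited works):

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, mask, noise_std=0.0):
    """y = M F x + n : masked 2-D Fourier measurements of image x."""
    y = mask * np.fft.fft2(x)
    if noise_std > 0:
        y = y + noise_std * (rng.standard_normal(x.shape)
                             + 1j * rng.standard_normal(x.shape))
    return y

def zero_filled_recon(y):
    """Naive reconstruction F^{-1} y; the usual starting point before
    solving the regularized least-squares problem."""
    return np.fft.ifft2(y)

x = rng.random((32, 32))
# Keep ~30% of the phase-encode lines (rows of k-space), a Cartesian pattern.
mask = (rng.random((32, 1)) < 0.3) * np.ones((1, 32))
y = forward(x, mask)
x_zf = zero_filled_recon(y)
```

With a full mask the zero-filled reconstruction is exact; with undersampling it exhibits aliasing, which is what the regularization term in the optimization problem above is meant to remove.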
| m | 733b09ddd05e075c571399dcbf63b84e |
Here, each {{formula:f06c12a0-4ea6-4889-b16b-434b36e399c6}} is a numerical {{formula:10f54430-219d-47b4-842b-59b4cea6be88}} -matrix, and {{formula:0afafa36-299f-4398-a6f3-ad9d3550bb8a}} is an
{{formula:097e09fd-0d5c-42f8-a1bd-8d3edd21257b}} -matrix-valued function formed by scalar functions that are of bounded variation on {{formula:26ec1952-34fe-4bae-93ef-4aa6fcccbb8d}} ,
right-continuous on {{formula:9e962644-6c2d-4fc1-a83d-feb450ca2467}} , and equal to zero at {{formula:4c1bcadb-5712-4ed6-a71f-cb03a7d21c99}} . (Certainly, the integral is understood in the
Riemann-Stieltjes sense.) Representation (REF ) follows from the known description of the dual of {{formula:e59ce3cc-691e-4fd0-b7f5-56233eae6ad4}} ; see, e.g., {{cite:d7e1ae11069c617c750999da30669b40968f4197}}. Using this representation, we can reformulate Limit Condition (II) in an explicit form. Namely, Limit Condition (II) is equivalent to that the following four conditions are fulfilled as {{formula:a48723fb-368a-47f2-9050-ff1b9be1daff}} :
| r | 03699ee3726a03db721f4edb034cf254 |
For the model architecture itself, we use stacked ResNet blocks with convolutional layers {{cite:96ea4baae6ed612d66bb6e80e7e71ff3158b3c64}}. CNN-based architectures have proven effective at natural language tasks including sentiment analysis and question classification {{cite:34526a73bff2ace5ba81ccfcaf6f023808d5a721}}. The authors of {{cite:890ca0e9ddc45940f13d803f8e6430790d740927}} design a Siamese network for authorship classification and verification using stacked convolutional layers, outperforming previous authorship verification methods and achieving comparable results on n-way classification to other baselines. The convolutional layers of the discriminator as well as the residual connections within each ResNet block allow easy backpropagation of gradients and prevent vanishing gradients.
| m | d23fcb49534c0eb469b2cf0e07c75dd4 |
The composite systems (Complexes –A, –B, and –C) were equilibrated/thermalized within an NVT ensemble at 300, 600, and 1000 K using the Nosé-Hoover thermostat {{cite:7697aec3cd0134eb86e200b5359db9770bd0d1f5}}. The equilibration is initially performed to eliminate any residual stress. The atomic arrangement during the system dynamics was updated every {{formula:2275436b-64ec-42b9-bea3-7db4a775b1f0}} fs. The total simulation time for the equilibration and thermalization procedures was 100 ps. The MD snapshots and trajectories were obtained using the free visualization and analysis program VMD {{cite:7e895968e5ebd65fe74a27e24c2e780c43b1485a}}.
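Setting the Nosé-Hoover thermostat aside, the core fixed-time-step update that such MD codes repeat is velocity Verlet; a one-particle harmonic-oscillator sketch (an illustrative stand-in, not the force field of the simulated complexes) makes the energy-conserving character of the integrator easy to check:

```python
def velocity_verlet(x, v, force, dt, n_steps, m=1.0):
    """Standard velocity-Verlet integrator (NVE, i.e. no thermostat)."""
    a = force(x) / m
    for _ in range(n_steps):
        x = x + v * dt + 0.5 * a * dt * dt
        a_new = force(x) / m
        v = v + 0.5 * (a + a_new) * dt
        a = a_new
    return x, v

k = 1.0
force = lambda x: -k * x                 # harmonic bond as a toy potential
x, v = velocity_verlet(1.0, 0.0, force, dt=0.01, n_steps=10000)
energy = 0.5 * v * v + 0.5 * k * x * x   # should stay near the initial 0.5
```

A thermostat such as Nosé-Hoover adds an auxiliary friction variable to this loop so that the time average samples the NVT rather than the NVE ensemble.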
| m | 8afc74e65bd7a6b6409818d179c0fe74 |
Having defined network modules, we turn to specifying the inputs and outputs of the modules. The latent code {{formula:a8e23d71-23f4-472d-8399-c24f09237c42}} comes from a simple prior distribution {{formula:d3305206-fafe-4c64-b8dd-707b03a32795}} (multivariate uniform in our case) – it makes sampling random codes {{formula:95ff38fa-ead5-4f5f-a009-895111d0366e}} easy and lets us design {{formula:d6c797c3-15ed-4c4e-8ca3-921d10fd1c85}} such that it can encode any input image {{formula:db79385d-f83d-44a8-b873-f9485ad68b52}} into some {{formula:62fd5e17-069f-4006-9408-9133a1a71e2e}} within the support of {{formula:1fcb8bc8-4f33-4b8b-b7bb-0c21c46395cb}}. Following prior art {{cite:9e7baee83e473f5f6bcbffcfb3000decffd7c0f2}}, {{cite:b31f1fa65098ea3fd1595e2df3ba1dc3a3cf2deb}}, the unsupervised setting we operate in assumes we have access to the prior distribution of poses {{formula:c79b063a-a475-44e2-8cb5-612db4cc3132}} of real images {{formula:67463e32-c469-4fc6-88e6-dba4f8bde435}} used for training. Depending on the dataset and the choice of pose coordinates, it can be multivariate Gaussian with diagonal covariance (for images of faces) or uniform on a (hemi-)sphere (for images of cars). The parameters of this distribution must be known so that random poses {{formula:0850d139-dc07-40e8-a5c0-ed501ac02de5}} can easily be sampled for the generator and so that {{formula:88ac687f-4da2-49b9-aa21-1d875b439e97}} is representative of the poses of real images {{formula:df5be1cd-2623-4676-b9a7-b7fad47e5dba}}.
| m | 42a50360ec014e1248a90fe4e09b25ca |
One principled model-free framework for learning-based control is reinforcement learning (RL) {{cite:6e7db3a5cd0e9bb1fcbb2ae5c35c762cc4823c0c}}.
In RL, the agent observes feedback in the forms of costs from interactions with an environment, uses this information to update its current behaviour, and aims to discover the best course of action based on a certain objective.
In recent years, deep learning – i.e. relying on neural network structures to approximate complex functions – has shown remarkable success in RL applications, ranging from mastering Atari 2600 video games {{cite:13efb5a081623dd19d539c93d77ff0b6e89f926b}} to developing autonomous image-learning robots {{cite:f1d2a0ab1923642f59394119dc07726dfa126a67}} and defeating world champions Go players {{cite:eedd681f3c18e30dfc0c03e7d43cc58b39aa12a4}}.
It also has become an appealing alternative in several financial decision making problems, where one wishes to learn optimal strategies with no explicit assumptions about the environment.
For a thorough survey of recent advances in RL applied to financial problems, see e.g. {{cite:48ab4c4c2dc817abc231198b80678057f876d612}}, {{cite:01aba445d10f2e499ad9c4df146f1b88bca35afd}}, {{cite:1ee8f77d541d1e7b0b89979bff680d8cc6f0c544}}.
| i | 24bf2346a27a7a27cfb8340da7c56bb3 |
In this section we present our algorithm, {{formula:4d7445ea-1257-4968-b189-ff5fae0c4503}} -learning with UCB-Hoeffding and Max-Optimal Initialization, a modified version of Algorithm 1 from {{cite:c845c5942897c06a5967309846ef016a73ab3ce0}}. We also introduce a theorem that shows the total regret of our algorithm is {{formula:8c0681cc-04fc-4e4c-aa52-5c80ae925941}} .
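The flavor of such an update, a decaying (H+1)/(H+t) learning rate, a Hoeffding-style exploration bonus, and an optimistic default value, can be sketched as follows. The constants, the dictionary-based tabular layout, and the one-step "bandit" check are illustrative assumptions, not the tuned quantities of the regret analysis:

```python
import math

def ucb_hoeffding_update(Q, N, s, a, r, v_next, H, c=0.1, iota=1.0):
    """One Q-learning step with learning rate (H+1)/(H+t) and a
    Hoeffding-style bonus b_t ~ c * sqrt(H^3 * iota / t)."""
    N[(s, a)] = N.get((s, a), 0) + 1
    t = N[(s, a)]
    alpha = (H + 1) / (H + t)           # alpha_1 = 1 overwrites the init value
    bonus = c * math.sqrt(H ** 3 * iota / t)
    # Optimistic initialization: unseen pairs default to the max value H.
    Q[(s, a)] = (1 - alpha) * Q.get((s, a), float(H)) \
                + alpha * (r + bonus + v_next)
    return Q[(s, a)]

# Toy check on a one-step problem (H = 1, terminal value 0, reward 0.5):
Q, N = {}, {}
for _ in range(5000):
    q = ucb_hoeffding_update(Q, N, 0, 0, r=0.5, v_next=0.0, H=1)
```

Since every update target lies in (0.5, 0.6] and the bonuses shrink like 1/sqrt(t), the estimate settles just above the true value 0.5, illustrating the optimism-then-concentration mechanism behind the regret bound.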
| r | 498d946b3f006cecf224a99e8090f06d |
However, local energy conditions, including the NEC, are generically violated in quantum field theory (QFT), even though they can be satisfied in an averaged sense (see, e.g., {{cite:2ac95528028e1c05c82f7089937964b9f5e64de5}}, {{cite:1bae249ad0ec8ce8e46eee7efa08466f4cd8525b}}, {{cite:bd173efb15e7478f19e7873d32273e269609b263}} for reviews). This raises the question of whether the NEC is a condition that should be satisfied in any classical supergravity theory with a holographic dual, or whether a weaker constraint (if any) is sufficient. This question becomes notably pertinent once quantum corrections are taken into account. Conversely, NEC violations in the gravitational energy-momentum tensor would then, in principle, be translated into the RG flow of the dual quantum field theory; see, e.g., {{cite:ff2b447be823fb9a43914726f01941c6b9f6476a}}. We note that violations of the NEC in holography have been considered in the search for traversable wormholes (see, e.g., {{cite:398fb3dfb5897befafad37a593bf071da80a7f57}}), though their existence in {{formula:fdd0c14b-997d-41a3-b381-0ea9470edec5}} spacetimes does not seem to necessitate NEC-violating matter content {{cite:537f7df30adef76e9a3b26ca06e87ae565412b4f}}.
| i | 26e644465404df1daa635a08c8f2ceb9 |
To ensure the reliability of our main results, we report results as described in {{cite:e5124768a2800af883f5f9bc265789e673ac885f}} to account for the uncertainty. We present results in Fig. REF , where, surprisingly, our method obtains state-of-the-art performance on the URLB benchmark {{cite:ac00cb384f35e97638fa7996a4261c9576b58334}} on our main evaluation metrics {{cite:e5124768a2800af883f5f9bc265789e673ac885f}}. In particular, the performance profile in Fig. REF (b) reveals that MOSS stochastically dominates all other methods. We also present numerical results in Tab. REF for a detailed breakdown over individual tasks.
| r | 6a8b59a41df991f8b452c448a5bde0b4 |
Based on our analysis, it is clear that state-of-the-art models have a high training cost in conversational recommendation systems. The high computational cost of training complex neural architectures also exists in other research domains.
This has led to an increasing interest in knowledge distillation {{cite:3905761ff92d055d8ff5685d7fe2048b6fe2bc45}} models that can achieve comparable performance with far fewer model parameters. In practice, this means that a shallow student network can learn to imitate a deep teacher network, reaching a performance similar to the teacher's while significantly decreasing the number of model parameters and the online latency {{cite:6f67c0604bc87c8c48fb4fcb1de87f47387601d2}}.
Knowledge distillation (aka. knowledge transfer) has been investigated in Information Retrieval {{cite:998afe864e1c6a4da7fdf4837da0f8fd266a3d03}}, Natural Language Processing {{cite:c0a42a1128714f12794f33eefbc079652c49350c}}, Reinforcement Learning {{cite:d49f4fbdcf9069b8c5c71947048e10f8d870e530}} and compression {{cite:1032c1b5b33c2ffbc47d4b405c3372544cddf002}}. In the context of the classification problem, knowledge distillation models have followed the learning strategy of teacher and student models on the training and test data, respectively {{cite:b2774de373ed34a02d6240e4192ae6994a97b487}}, {{cite:6beb777eb1dffb34c5cb85c65254ef9bbc147e53}}, {{cite:ff706420a2dd9cacf3fadbb21f98e6420df23d81}}, {{cite:13c9da8fd8f3ca3578cee769f5ba0afb444a33ae}}. The larger teacher model is first trained on the training data and then used to supervise the smaller student model's learning using its output as soft labels. This is achieved by designing a distillation loss function that exploits the teacher model's trained parameters to guide the student model's learning. Recently, a few attempts have been made to design knowledge distillation models in recommendation systems, following the collaborative filtering strategy and focusing on the ranking performance in the top-n recommendation task {{cite:0757eb2ad4918b903fcc6bd8c4518d0d03674729}}, {{cite:85cfab736a3ac87609284796558dde358d3b6884}}, {{cite:1450439f8e624ca29e4e920527cbc9065a4024c5}}, {{cite:eefe0a1ff95af8cc00d360c01e48e8d7e18b19be}}. However, these studies focus on the conventional recommendation task and are not suitable for conversational systems, where users progressively express their personalized preferences over conversational turns. This means that these studies do not account for the interactive way of user feedback over conversational turns.
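The distillation loss that lets the teacher's output supervise the student as soft labels can be written down directly; the temperature value and logits below are illustrative, with the T^2 scaling following the classic knowledge-distillation formulation:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # numerically stable softmax
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 so gradients keep a comparable magnitude across T."""
    p = softmax(teacher_logits / T)          # soft labels from the teacher
    q = softmax(student_logits / T)
    return float(T * T * np.sum(p * (np.log(p) - np.log(q))))

teacher = np.array([3.0, 1.0, 0.2])          # trained teacher's logits
student = np.array([2.5, 1.2, 0.1])          # student's current logits
loss = distillation_loss(student, teacher)
```

In training, this term is typically combined with the ordinary cross-entropy on hard labels; the conversational-recommendation setting discussed next would additionally need to handle feedback arriving turn by turn.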
| d | e5d182e518763b1198050cd3a08dd41c |
One can distinguish dark{{cite:f324e3cc8e0c6d8bd3e214df89e2baa99344677a}}, bright{{cite:f324e3cc8e0c6d8bd3e214df89e2baa99344677a}}, and semidark{{cite:25756e4578860aacbe8cd59381bddab8b86359ff}} charge-carrier complexes in TMDs. In dark complexes (Fig.REF a), radiative e-h recombination is not allowed due to spin and/or momentum mismatch between the constituent e/h{{cite:fd08262e968091fe6e398b199fc07bb0450af610}}, while in bright complexes (Fig.REF b), direct radiative e-h recombination is allowed by conservation of linear and angular momentum{{cite:9a94d1ecd2156d1b6287fbefbd99076c54adb3cd}}. In semi-dark complexes (Fig.REF c), radiative recombination can take place following an intervalley scattering event assisted by a phonon that maintains spin, but swaps an e, e.g., from valley K{{formula:bdc5609e-0c2e-41ec-b92b-6cc381d7e84f}} to K{{cite:25756e4578860aacbe8cd59381bddab8b86359ff}}, accompanied by an energy shift due to the change in occupation of the upper and lower spin-split bands{{cite:25756e4578860aacbe8cd59381bddab8b86359ff}}.
{{figure:ad1413de-fd0f-436e-88e1-d34def044e06}} | i | 30f9c46a94da536dfd331b947b7a78b9 |
In {{cite:7e6d456bdcf41b990e4e694ee35c35c3e20608bb}} and {{cite:020b1a082c7d3493179f10191d48daa84feb70b4}}, Ringel realized {{formula:f9746b23-86d1-48cf-bc3d-cd7b1cb07aac}} by (twisted) Hall algebras. Let {{formula:009399bf-beef-4aa5-a1cd-0a21e6a3dc05}} be the finite field of order {{formula:08f52f39-4868-40a2-b4eb-f0282590ad75}} and {{formula:6d619861-2341-4248-b3c9-14eb7470e775}} be the set of isomorphism classes of finite-dimensional representations of {{formula:591acb17-de86-4574-8746-5091127c09e1}}. For any {{formula:9309a83c-c32f-4f1f-8b46-dc574a5ca946}}, let {{formula:878906e7-c487-406a-ad2c-63a7aa6b3931}} be the element with a fixed representation {{formula:06aabd47-df7f-455f-90f2-d0948f88874b}}. The twisted Hall algebra {{formula:fe5d6924-59aa-4ba1-bdd6-13b87de03186}} is a {{formula:2df40841-f397-4a57-a837-61abcb471f01}}-vector space with the basis {{formula:ab4fdad2-fcff-4233-9b34-57a3924a6682}}. It has a multiplication
{{formula:53dbc859-a9d1-4c0c-846e-763da3e8974f}}
| i | fb56af14f06e0120a2e700d6e6c4c3e4 |
Moreover, considering the Swift-XRT monitoring of 10 other bright {{formula:5e358845-06d8-4989-8357-a1fe32db3545}} -ray BL Lacs during 2004 December–2012 August, only Mrk 421 has shown a variability amplitude {{cite:e28ef95b622e3899ed0e0b5a20b87a43ea3a609c}} larger than the value estimated in 2020 for BL Lacertae. However, Mrk 421 is an HBL with the peak of its synchrotron emission usually in the soft X-ray energy range; therefore, even assuming the same level of activity, a larger variability amplitude is expected in that energy range for Mrk 421 with respect to IBL/LBL sources like BL Lacertae.
| d | a92b4597a2950cf011e426b602db1ea2 |
For {{cite:f6852e9b5e4ce7dcd53322482ff565ea2baaac4d}}, semantic communications must be shaped to effectively compress the exchanged data between communicating parties and to improve communication robustness by incorporating semantic information into the classical Level A communication scheme. This is possible by exploiting the knowledge shared a priori between communicating parties, such as a shared language or logic, shared background and contextual knowledge, and possibly a shared view of the goal of communication. In {{cite:8c7142eb91e7d7dba3ea5b89a6f360b72ec1bed9}}, the authors provide tentative definitions of semantic capacity, semantic noise, and a semantic channel from the perspective of Shannon's statistical measurement of information. In {{cite:2b68b0561f7ed2759547a578e6a2df34a49f8119}}, the authors understand semantics as the semantics of information, addressing the significance and usefulness of messages by considering the contextual attributes (semantics) of information {{cite:5fb89d7a3eddbc7c7f2ba6eb9b45b0879a11b88f}}. In this approach, the age of information (AoI) is key to identifying the relevance of semantic information for the effectiveness of the exchange between communicating parties. Nevertheless, AoI does not necessarily define the meaning of a message in many applications, but rather how pertinent a message still is for an application given its age.
In {{cite:5c459768d22ba876fefdf3cfecb350e14a353e85}}, an end-to-end (E2E) neural architecture is presented that enables semantic transmission of sentences. However, the proposed architecture is limited in flexibility: each word in a transmitted sentence is represented with the same, fixed number of semantic symbols irrespective of the conveyed meaning. The authors of {{cite:39a90c9a02e94b568c07c65c556d763c601c5a68}} apply the same architecture to speech-signal transmission. Similarly, the work in {{cite:cbd0ebad0a7301207028d10557119cc15e0b25b2}} presents a deep source-channel coding scheme, which exploits hybrid automatic repeat request (HARQ) to reduce semantic transmission errors.
In this work, we focus on the benefit of semantic compression. We refer to a semantic as a “meaningful” message (a sequence of well-formed symbols, possibly learned from data) that has to be interpreted at the receiver. This requires a reasoning unit (natural or artificial) able to interpret it based on a knowledge base: a symbolic knowledge representation of the specific application.
| i | 4fa5d85b43542ef908864b68c32b7e20 |
In this section, we discuss the feasibility of detecting the
quantumness of gravity in our proposal. Let us suppose an experiment
being performed using an {{formula:7a2ecf8d-662c-4ceb-bbc8-f5b104d88dd9}} quantum clock
{{cite:500d5bc5b4206e65c01a3066b6e1562ac408b337}} with a probe laser wavelength
{{formula:8435b027-70fc-4e05-ad5b-f7bef3c817f3}} . A coherent state of the
mesoscopic mass source with {{formula:8ee6538b-ab07-4798-b7e4-8acd48b6a340}} may be realized in
the near future. In addition, the coherent state can be experimentally
realized for {{formula:61df556e-f2e6-4a3d-bc02-09751c63a181}} {{cite:aec2071f0eb15828d72e0685395cc645c57f5e57}}. Apart from that, we
assume {{formula:21c5c8d1-3a0d-438f-a4ed-9d7396936441}} and
{{formula:efc06129-6589-4ce1-9ceb-e283a5ec9874}} to compare setups of the BMV proposal
{{cite:466f3d1369cc2f9e9a6992e25ebc2440f2a85085}}, {{cite:54ae558687f06af60b7b302ad1c525a68f4cf371}}, {{cite:af0cd101239332afa3700cae9908f7cad2cff6ad}}. Therefore, for a
duration of coherence time scale {{formula:c90ebe2b-4255-4e75-a7c9-5d79cae36f95}} , the fractional change of the
decoherence factor in {{formula:ff313ffb-6457-4de7-aa5f-31c0a33f281f}} of Eq. () can be
estimated as
$$\left(\frac{1}{2}\,\frac{G M E\,\delta d\,t}{\hbar c^{2} d^{2}}\right)^{2}
= 1.7\times 10^{-34}\,
\left(\frac{M}{10~\mathrm{ng}}\right)^{2}
\left(\frac{\lambda}{267~\mathrm{nm}}\right)^{-2}
\left(\frac{d}{200~\mu\mathrm{m}}\right)^{-4}
\left(\frac{\delta d}{1~\mathrm{mm}}\right)^{2}
\left(\frac{t}{20~\mathrm{sec}}\right)^{2},$$
| d | df48c5b85b97026669ac6b131bcc5db1 |
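The quoted 1.7×10⁻³⁴ can be reproduced numerically. The sketch below assumes the fractional change of the decoherence factor takes the form (½·GMEδd·t/(ħc²d²))², with E = hc/λ the clock-transition energy and δd ≈ 1 mm a superposition separation; this reading of the garbled estimate, and the symbol δd itself, are assumptions, chosen because they reproduce the quoted prefactor.

```python
import math

# Physical constants (SI units)
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
h = 6.626e-34          # Planck constant, J s
hbar = h / (2 * math.pi)
c = 2.998e8            # speed of light, m/s

# Reference parameters quoted in the text
M = 10e-12             # source mass: 10 ng in kg
lam = 267e-9           # probe-laser wavelength of the Al+ clock, m
d = 200e-6             # mass-clock separation, m
delta_d = 1e-3         # assumed superposition separation, m
t = 20.0               # coherence time, s

E = h * c / lam        # clock-transition energy, J

# Fractional change of the decoherence factor (assumed form)
frac = (0.5 * G * M * E * delta_d * t / (hbar * c**2 * d**2)) ** 2
print(f"fractional change ~ {frac:.1e}")   # ~1.7e-34
```

Plugging in the reference values reproduces the 1.7×10⁻³⁴ prefactor, which is a useful self-consistency check on the scalings.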
From subsection REF , we know that the solution {{formula:43c69274-97a8-49d5-9a87-91417d49fef6}} of the generalized continuous Newton flow (REF ) has a nice global convergence property. On the other hand, when the Jacobian matrix {{formula:7e8e3101-9a4c-4079-958c-c991cd60abcf}} is singular or nearly singular, the ODE (REF ) becomes a system of differential-algebraic equations (DAEs) and its trajectory cannot be efficiently followed by general ODE methods such as the backward differentiation formulas (the built-in subroutine ode15s.m of the MATLAB environment {{cite:c7c89a05252a7b0d8780039d8f15bf2155aa8cf7}}, {{cite:b65add9d87a1b0f2fd426c46fa1708b0b4773852}}, {{cite:5879a62b22bdd3976241764fd3d45e765117c52a}}, {{cite:cbb5b85c4ecacac4ae4f2cf997083af83dc91ea3}}, {{cite:b99cd7e13ac5aa7352ed7328e12874a1cecd096b}}). Thus, we need to construct a special method to handle this problem.
| m | 2eaed511deb408598c549df994fc0d6f |
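To illustrate why the flow is attractive while the Jacobian stays well-conditioned, the hypothetical sketch below follows dx/dt = −J(x)⁻¹F(x) with plain explicit Euler on a toy 2-D system; it is not the special method the excerpt refers to, and the example system is invented for illustration.

```python
def F(x, y):
    # A toy nonlinear system with a root at (sqrt(2), sqrt(2))
    return (x * x + y * y - 4.0, x - y)

def newton_flow_step(x, y, dt):
    """One explicit-Euler step of the continuous Newton flow
    dx/dt = -J(x)^{-1} F(x), valid while J(x) is nonsingular."""
    f1, f2 = F(x, y)
    # Jacobian of F: [[2x, 2y], [1, -1]]
    a, b = 2.0 * x, 2.0 * y
    c, d = 1.0, -1.0
    det = a * d - b * c
    # Solve J s = F by Cramer's rule
    s1 = (f1 * d - b * f2) / det
    s2 = (a * f2 - f1 * c) / det
    return x - dt * s1, y - dt * s2

x, y = 3.0, 1.0
for _ in range(2000):
    x, y = newton_flow_step(x, y, 0.01)
print(x, y)   # both converge to sqrt(2) ~ 1.41421
```

Along the exact flow the residual decays as F(x(t)) = F(x₀)e⁻ᵗ, which is the global convergence property; near a singular Jacobian the Cramer step above blows up, motivating the DAE viewpoint.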
In fact, two further aspects concerning the estimation of {{formula:0d2d4f9f-1af6-4ad4-a65f-eff0a50bfd73}} need to be accounted for, namely the selection of the kernel function {{formula:f5f6728b-ece7-4cf4-b930-757be1107db6}} and of the smoothing vector {{formula:a6cb0025-02fe-4d88-ad43-cd0b56847589}} . With respect to the former choice, it is well-established that it does not have a strong impact on the density estimate {{cite:782613efaff746677c0dcab8a438a140063dc867}}. However, the use of a bounded-support kernel, such as the uniform one (which would be more easily interpretable in this context), is in general not advisable for image segmentation, as it entails that, in the evaluation of the density of a pixel, other pixels have weight either constant and positive, or null, depending on {{formula:6b737cf8-1cae-4ee0-8a03-07aadce06a15}} and on the difference between color intensities.
In this sense, different hues of the same color could be evaluated either as the same color or as completely different colors. For this reason, and especially when colors are not homogeneous within the image, a longer-tailed kernel is safer. The choice of {{formula:d55e2c38-0c72-4786-9412-55258caee913}} , on the other hand, is not as critical in clustering as it is in density estimation, since a rough indication of the high-density location may suffice. To this aim, one may resort to one of the numerous bandwidth selectors proposed in the literature on kernel density estimation, such as the ones based on cross-validation or plug-in strategies. The reader may refer to {{cite:fb70ff6a9d6ab8bbf458eb4b0f1c9e090efd5e8f}} for a recent review.
In any case, both choices are certainly issues to be tackled, and will represent object of empirical investigation in the next section.
| d | 4cb83ea893bc24ad5f9a0643d122a263 |
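The point about bounded-support kernels can be made concrete: with a uniform kernel two hues either contribute a constant weight or nothing at all, while a Gaussian kernel down-weights them smoothly. A minimal sketch with hypothetical gray-level pixel intensities and a single scalar bandwidth h:

```python
import math

def uniform_kernel(u):
    # Bounded support: constant weight inside |u| <= 1, zero outside
    return 0.5 if abs(u) <= 1.0 else 0.0

def gaussian_kernel(u):
    # Unbounded support: nearby hues always keep a positive, decaying weight
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def kde(x, sample, h, kernel):
    """Kernel density estimate at intensity x from pixel intensities."""
    return sum(kernel((x - xi) / h) for xi in sample) / (len(sample) * h)

pixels = [10, 12, 14, 60, 62, 64]   # two hypothetical colour clusters
h = 5.0
# Under the uniform kernel, intensity 20 sees no pixel at all ...
print(kde(20, pixels, h, uniform_kernel))    # 0.0
# ... while the Gaussian kernel still ranks 20 closer to the 10-14 cluster
print(kde(20, pixels, h, gaussian_kernel) > kde(40, pixels, h, gaussian_kernel))  # True
```

The bounded kernel assigns a hue at distance just beyond h literally zero density, whereas the longer-tailed kernel preserves a graded notion of similarity between hues.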
In solving the problem of Thomas precession, we used a coordinate-free approach, which allows us to identify a skew-symmetric tensor {{formula:0218afbb-dbe9-4308-91a6-0104a183749f}} , the generator of Fermi–Walker transport, depending only on the intrinsic geometry of the worldline of the accelerated particle, which describes a kinematic rotation in Minkowski space. The coordinate-free approach is extremely valuable, in that it is robust against error, and in that it provides a manifestly invariant picture, in the spirit of Minkowski {{cite:b5878f9bb29e92d892c2c6aa62d5f21a2dca9b2c}} and of the reference book by Misner, Thorne, and Wheeler {{cite:7e14a556738c95d735acea4fdaa61346052bc215}}.
Often we read that the equations of physics must be covariant. Actually, covariance already implies the choice of a representation, in which physical quantities are represented by lists with upper or lower indexes that transform contravariantly or covariantly. Vectors and tensors themselves, however, are invariant objects; only their representations are contravariant or covariant.
| r | 4be2cb7079f3ad8c2092a20449205d71 |
In Fig. REF we present, for scenario 1, scatter plots showing the GW peak position as a function of the strength of the phase transitions {{formula:18cef857-e253-45e9-a871-975feaa243c1}} in the colour scale (left) and the corresponding signal-to-noise ratio (SNR) for the phase transition (right) for a mission profile of 3 years. The colour grade scale is the same on both plots. The right panel was generated using PTPlot 1.0.1 {{cite:9bcb0c81b11119a2a6b6697802bfbe76672bbf8e}}. The coloured isolines display the expected values of the SNR, which depend on {{formula:61a015b9-0694-4916-af5a-03eb66860aff}} , {{formula:73e4fb8e-d4e2-431a-801f-fba979eedfc9}} and {{formula:dbe19171-8a9c-498f-b7e4-6f4cd57b02db}} , while the dashed black contour lines represent the shock formation time {{formula:95f22f40-9842-4ec6-b46c-a6933ed2c546}} (see Eq. ()). The grey shaded region corresponds to an acoustic period lasting longer than a Hubble time, and it is where the sound-wave treatment is most reliable {{cite:0afeac52d836740485ac1b1dcdc67df068959a19}}, {{cite:14bd11428cf6d9c75e5c78e9b6bb2296d9f00985}}. For {{formula:c5e3e7e2-6e5c-4268-be21-522c93676f7e}} , turbulence effects may become important, damping the acoustic contribution. However, none of our points features too small a shock formation time. Using the formula for turbulence effects in Refs. {{cite:39eec6a81f60b13352311755e9291303ad2e95bc}}, {{cite:0afeac52d836740485ac1b1dcdc67df068959a19}} for an estimate, we find that it indeed has very little impact on the peak position in the left panel.
Note that the actual role of turbulence is not yet well understood {{cite:d11467dec6a887c8b692ff30f164a05c3bf13e65}}, {{cite:2af0b5561811618df1c074a2e4f5f77c3076fb00}}, {{cite:b668f05d95c580912338fd1c25b013b4396aabc4}} and further studies are needed for a more reliable calculation of such a component. The SNR contours on the right panel take into account the effect of an increasingly short-lasting shock formation at the cost of a decreasing SNR value. As previously discussed there are good chances of probing this model in some regions of the parameter space. In particular, we have found nineteen points with SNR larger than 5 out of which eight feature a SNR above 100. Note also that we have found in our scan one point within the grey region which is typically very challenging in conventional BSM models.
{{figure:1064cf87-e5d0-48c2-876b-97200e7c958b}} | r | 521b0c47c83ee4be7d4d2e03e3ea77b2 |
Actually (see, e.g., {{cite:e37620466d933a624807c72a559ab881240ba3f3}}), for any non-zero {{formula:cd350cb7-23e1-469d-aa2f-7d28d9e7b20d}} satisfying {{formula:e02d8650-979f-4809-82a5-ee639e954a4c}} (under (REF )-(REF )) the corresponding {{formula:d823cff2-95ad-45a9-98e0-5067c175fbe0}} has the properties {{formula:4a61e9dc-54a2-4fcd-ad8f-348595a578b0}} , {{formula:ec8f83bd-26bb-4783-8a74-c171c0520330}} in {{formula:c1f093b9-4302-4762-a29a-78766b2da198}} , and
{{formula:06b10f33-d309-410f-8bbc-c989ba7d037a}}
| r | f69014a487859d41bd53bb04423294ef |
In this work we consider the model-free null hypothesis of conditional mean independence, that is {{formula:5e6a2d34-1c77-4d5e-89a9-770fd05ecb05}} ; in words, {{formula:be7bae1d-37cb-45aa-9378-d0a07a76a0dd}} does not feature in the regression function of {{formula:d7deb09c-019a-4941-911a-82af59d4b0ed}} on {{formula:04beb0c5-d938-44a0-8840-b08bde7de8f1}} and {{formula:c4c720dd-52f3-43f1-a9d6-c7f63a384f9f}} . It is interesting to compare this to the conditional independence null {{formula:46b561d7-7d55-4542-922b-56259e9c8a41}} , which has attracted much attention in recent years. The latter asks not just for the regression function to be expressible as a function of {{formula:4ca141dc-0803-4571-aa30-41aa7f32ce32}} alone, but in fact for the entire conditional distribution of {{formula:526ee3ad-fd9a-494f-9b9b-c64b14a63eaf}} given {{formula:a64d141f-cce0-4415-aa43-15bd152f04e8}} to equal the conditional distribution of {{formula:72310d00-a5cd-4ddb-85f2-d5cb36a56889}} given {{formula:14f87680-0666-4699-a075-6722171e1a17}} . Any valid test of conditional mean independence may thus also be used as a test of conditional independence: its size over the smaller conditional independence null can be no larger than its size over the larger conditional mean independence null.
The two nulls in fact coincide when {{formula:63f16d6a-505f-4dc6-918a-c40524e7bb22}} is binary, but more generally there are important differences. One attractive property of the conditional mean independence null is that the alternative of conditional mean dependence may be characterised by the property that {{formula:b4cd7da0-8f73-400f-8245-fefa2bd50abb}} can improve the prediction of {{formula:ffd2e01a-91ab-4ee0-bf82-7f455abc59f2}} in a mean-squared error sense, given knowledge of {{formula:21ae5105-5728-42b9-9dcb-06ce74b22e6e}} . For example, consider the setting where {{formula:b4fc3670-d403-475e-8b58-0626e1309a8f}} is a binary treatment variable, {{formula:f7a02bea-a4d0-4bd0-a4a7-e02cb186b16c}} contains all pre-treatment confounders and {{formula:4a45bc98-1711-49b2-84b6-c89dc90c86bd}} is the observed outcome. Under assumptions (including the absence of unmeasured confounders) that are standard in the causal inference literature {{cite:71912ade6b1ca696a3dced24fec42ebc2f22a060}}, {{cite:cad1a56854f8f97f68d1a1f9b122a440a6fd8df1}}, conditional mean dependence is equivalent to the existence of a subgroup average treatment effect, that is a (measurable) subset {{formula:9c51b5b0-744f-47aa-8169-f7bfaa1a9d92}} where {{formula:b1f4fd81-7576-4f4e-9e91-2a33b3ed7c3e}} . On the other hand, rejection of the conditional independence null does not in general have an immediate interpretation in terms of its predictive implications.
| i | ef151aa2c0e7ceef8d3bb1b6014893da |
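The predictive characterisation of conditional mean dependence can be illustrated directly: when the null fails, adding X to a regression on Z lowers the best attainable mean-squared error. A hypothetical toy check with binary X and binary Z, where the treatment effect exists only in the subgroup Z = 1:

```python
from collections import defaultdict

# Toy population: Y = Z, plus a treatment effect of X only when Z == 1
data = [(x, z, z + (1.0 if (x == 1 and z == 1) else 0.0))
        for x in (0, 1) for z in (0, 1) for _ in range(50)]

def mse_given(key):
    """Best MSE when predicting Y by the group mean of the given key."""
    buckets = defaultdict(list)
    for x, z, y in data:
        buckets[key(x, z)].append(y)
    means = {k: sum(v) / len(v) for k, v in buckets.items()}
    return sum((y - means[key(x, z)]) ** 2 for x, z, y in data) / len(data)

mse_z = mse_given(lambda x, z: z)          # predicting from E[Y | Z]
mse_xz = mse_given(lambda x, z: (x, z))    # predicting from E[Y | X, Z]
print(mse_z, mse_xz)   # 0.125 0.0 -- knowing X strictly improves prediction
```

Here the subgroup {Z = 1} plays the role of the set A in the text: on it the two treatment arms have different conditional means, so the conditional mean independence null fails and X carries predictive value beyond Z.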
Other possible future directions can be found by revisiting our modeling choices. While GHL consensus is defined for higher dimensions, we restricted our simulations to the {{formula:fcbddabf-410b-475d-bb76-da0d160203b9}} case for simplicity. It is worth investigating not only GHL-{{formula:8f6a7a56-4dc0-42db-9f30-bb434e0cc0b8}} consensus for {{formula:df4b9e6e-bbcb-4f62-8179-faac97b67748}} , but also properties of the gradient, curl, and harmonic subspaces for these higher dimensions.
At the same time, we introduced a balancing based on the combinatorial Hodge Laplacian, and it would also be beneficial to balance normalized Hodge Laplacians {{cite:c52e6be3077de4755f3d158a81e3aff515a83f1f}} (which were recently introduced to model random walks on simplicial complexes). Our generalized Hodge Laplacian may be useful for other dynamics as well, and we are currently investigating the relation between our work and explosive synchronization on simplicial complexes {{cite:a4ad329a4481e0f86e5985131367e900392370bf}}.
Another interesting direction would be to consider adaptive dynamics in which the balancing parameter is allowed to change with time, i.e., {{formula:2347e8b0-945f-43b5-be2e-e96d3a357e82}} .
Alternatively,
we could introduce a learning rate {{formula:2bf52c69-b64a-48d7-885b-6b1d34366517}}
and replace Eq. (REF ) by
{{formula:c7f5c317-aee6-4595-bbb2-298a3d2ce1c7}} . We could
allow the learning rate to change over time, which is commonly done in decentralized machine learning.
| d | 6fa8e918d6881171133d8b86188dfa8d |
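For ordinary node-level consensus, the learning-rate variant mentioned above reduces to the update x ← x − εLx with L a graph Laplacian. The hypothetical sketch below runs it on a 4-node path graph (the generalized Hodge Laplacian of the text is not implemented here) and shows convergence to the initial average:

```python
# Combinatorial Laplacian of the path graph 1-2-3-4
# (degree on the diagonal, -1 for each edge)
L = [[ 1, -1,  0,  0],
     [-1,  2, -1,  0],
     [ 0, -1,  2, -1],
     [ 0,  0, -1,  1]]

def step(x, eps):
    """One discrete consensus step x <- x - eps * L x."""
    Lx = [sum(L[i][j] * x[j] for j in range(4)) for i in range(4)]
    return [x[i] - eps * Lx[i] for i in range(4)]

x = [4.0, 0.0, 2.0, 6.0]
eps = 0.25           # stable whenever eps < 2 / lambda_max(L)
for _ in range(300):
    x = step(x, eps)
print(x)   # every entry converges to 3.0, the initial average
```

The learning rate ε trades convergence speed against stability exactly as in decentralized machine learning: the update is stable for ε below 2 divided by the largest Laplacian eigenvalue, and a time-varying ε(t) would simply replace the constant in the loop.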
Other practical use cases of WAFL may be realized in the context of transfer learning with very deep neural networks such as VGG{{cite:d596a11da977000d8404619cf6701009807e3386}} and ResNet{{cite:8f25a97475b44687da1f978022a38fbe420a6701}}.
Transfer learning uses pre-trained models, and only the layers near the output are expected to be trained. From WAFL's point of view, the nodes then exchange only this small number of trainable parameters, which could be acceptable in today's wireless ad hoc scenarios.
| d | 4f08e595848870ba2382c0ee41defc31 |
The calculations discussed in the foregoing section for spin waves were confined to the simple cubic lattice. It is, however, a straightforward task to extend these evaluations to the body-centered cubic and face-centered cubic lattices with nearest-neighbor interaction. Also, the computations presented in Section REF correspond to the low-temperature limit of a ferromagnet, assuming at the outset that the contribution of the magnon-magnon interaction in the Heisenberg hamiltonian is negligible. The Green function formalism described in Section , however, is not per se restricted to the low-temperature limit. Hence, this surmise can be avoided at the expense of some additional calculations. Indeed, the effect of the contribution of the magnon-magnon interaction through the Heisenberg hamiltonian of a pure ferromagnet has been the subject of extensive studies using various techniques over the years; see e.g. {{cite:1e9efdcc143c081e60d3f324d27c749eac4473e6}}, {{cite:580830437622cbfd5796d2295938e783f1f77413}}, {{cite:36bff925f27cf75fcce4b88a1fab91e4d6626cbf}}, {{cite:4d7a25e070cac65da79543cd86cd51318373ac2e}}, {{cite:319a30463b43dbae167ff10c068b0ed2f474883c}}, {{cite:8aa71d6fa58ea9297c0e2f4b08539f11c0f50dab}}. Nevertheless, as argued in {{cite:581307097476347ec4032d33d0c7a1731c6b4138}}, magnon-magnon interactions do not have much effect on the temperature dependence of the saturation magnetization, except near the Curie temperature. Similarly, our calculations can be extended to the case of low-dimensional lattices such as square and triangular lattices through their corresponding density of states or lattice Green functions, cf. Eqs. (REF )-(REF ) of .
For the pure Heisenberg ferromagnet this case has been studied over the years {{cite:d91fc46672dcf509a3c68be1b0cf1e7e7eb78fa4}}, {{cite:eb1f10e4243dc074b790d63f9a190bbb1e9165e1}}, {{cite:68245ba2388d8de60db714cb81b2e5157a05dcd5}}, {{cite:4c6358ec1f855e9a5732ef1f8785bb83a91070ad}}, {{cite:c982126c6d6254376c9ee6afaeac3b34a407d800}}, {{cite:002a96ad731beb8fdcc66975e5f6992b26dd1d0d}}, {{cite:373a6dd013691cbaa3dfe52f8aa225458250531c}}.
| d | e0554d82bb4598cdbdef097e77cefa6f |
This section compares the neural network against the perception algorithm {{cite:784a096ed85826b120bddbe7be85e99d74f85236}}, the brilliantly performing isolation forest algorithm {{cite:532454d4b0c774b13dbdb500603fd0a6d237e7dc}} (sklearn implementation {{cite:b4d83b65602c972d210769f68c3b52d6f104938a}}), and the fast HBOS algorithm {{cite:4cc29d17ff9cf4e92270030ff3b419afb953740e}} (pyod implementation {{cite:8a6c8178287e3495ffb32cefcc3985b7c220e559}}). All algorithms are used with their default parameters, and the experiments are carried out on a set of publicly available data sets provided by {{cite:cfdd2cb4064cb22381355423788c216033fc51a6}} and {{cite:5c462f23793edcc2dbd071e15332cc5fc6a8344e}}; the names and properties are shown in Table REF . The chosen metric is the AUC-score due to its widespread use in evaluation and because it corresponds to how well anomalies are ranked at the top of a list (which is thought to be beneficial to end users). It is important to note, however, that this is not necessarily the best or most appropriate way to measure algorithm performance. Indeed, better methods need to be investigated in the future. For additional comparison, the F1-scores are also provided as a summary of the decision boundary resulting in precision and recall scores. The runtime is also given because it is an important factor in many applications and for usability. The results are shown in Tables REF , REF and REF with the best performing algorithm for a given data set highlighted in bold.
| r | 90deb8726d57cd3ab40adf205a7a8ad1 |
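Since the AUC-score used above corresponds to how well anomalies are ranked at the top of the list, it can be computed directly from pairwise rank comparisons. A minimal self-contained implementation with hypothetical detector scores:

```python
def auc_score(labels, scores):
    """AUC = probability that a random anomaly outranks a random normal
    point; equivalent to the normalized Mann-Whitney U statistic,
    with ties counted as 1/2."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [0, 0, 0, 0, 1, 1]                 # 1 marks an anomaly
scores = [0.1, 0.2, 0.3, 0.8, 0.7, 0.9]     # hypothetical anomaly scores
print(auc_score(labels, scores))  # 0.875: one normal point outranks one anomaly
```

This ranking view also makes the metric's limitation visible: AUC says nothing about any particular decision threshold, which is why the F1-scores are reported alongside it.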
Note that any subset of an Alexander system is also an Alexander system and that we do not require the surface {{formula:bd01e8b9-f83c-4788-b796-49812c20b898}} in the preceding definition to be of infinite type. The following result is the infinite-type surface version of Proposition 2.8 in {{cite:e37dd2e10afcc06c09d725745c74b240fda7a7ab}}, to the point that its proof is also completely analogous.
| m | 20cc29b49e8db650b042609831e93df3 |
An iso-surface has a fractal dimension given by {{formula:4f0d7dac-44ea-4432-978d-229e7fa9aad0}} =(Euclidean dimension)- 1/2 (exponent of the variance) proposed by {{cite:9cf626fdd76df797846c8a7a073a8b34cf40b947}} . Thus {{formula:efe29a85-c20b-42b8-a803-e58c63107261}} for an isotherm, considering a two-dimensional surface of supergranulation. On the other hand, the pressure variance {{formula:3277b342-b3b6-40b0-9820-91ad4b18c41a}} is proportional to the square of the velocity variance i.e. {{formula:4cdb3814-2871-4927-a45d-3b172dc58bca}} {{cite:f95f4fc72f636bd5aa51992339590bf06c58a66c}} and hence for an isobar it is {{formula:318b3f92-d747-46ec-9c71-2d48d0fdaf94}} . Considering the entire solar cycle, our analysis gives an averaged fractal dimension value closer to {{formula:19c08637-eaeb-4ae1-b712-4acec2caa9dd}} than to {{formula:5df05d1a-5196-4557-b259-cfe2c8dbe10e}} (cf. Table REF ), which indicates that the supergranular network is closer to being isobaric than isothermal. This is consistent also with the fractal dimension data derived by {{cite:7a0a0d2f206d8ebf324930aacd0b197a31d92764}} for supergranulation in both QRs and ARs. Our fractal dimension data is consistent with a turbulent origin of the supergranules.
| d | 43065a10de913bf9a48f04e34d382402 |
The current approaches{{cite:25daa560ff158c4f9ebb5804efd31e0f43c4759c}}, {{cite:ac2054723ffcece165da9c054dc2f3013ff88911}} to pose estimation typically fail when faced with real-world challenges like cluttered backgrounds, occlusions, truncation, different lighting, dark objects, glossiness, and shiny objects. Figure REF shows some of the real-world challenges.
| i | 305d794067d11bcbfc4bdc8b881db571 |
Figures REF and REF illustrate the estimations of the original distributions of location data from San Francisco and Paris, respectively. We sanitize the original distribution using the shuffle model, giving a tight differential privacy guarantee with parameters {{formula:ac99ed9f-24c9-498d-9225-3c2bfaf2f132}} and {{formula:e588b29d-8bed-4b1c-870a-42ada296739e}} , as in (REF ). We use the same {{formula:7dbe651d-1d51-44bf-979f-15f2f767fc18}} and {{formula:2efb54a6-8f27-45fe-9164-07026677e535}} to privatize the original data using the Gaussian mechanism {{cite:f72ffd7bd75d14d2a6f0043bc4e10b58cb7e7408}}, {{cite:305dff955621d7db819756ff5a9aa938785df639}}, as in the previous experiment, thus obtaining a {{formula:2c8af193-96be-45ac-bfa4-c64653d3fec1}} -DP guarantee in both cases.
{{figure:d4d76443-dd07-4943-9021-edd89d183273}}{{figure:37008595-6b96-4710-90da-83c9e67fd6e3}} | r | af313eeccf55deda3cc731549fc9c646 |
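The classical Gaussian mechanism referenced above calibrates its noise scale to σ = Δ·√(2 ln(1.25/δ))/ε, where Δ is the ℓ2-sensitivity of the query (the bound is valid for ε < 1). A minimal sketch for a hypothetical count in one histogram cell:

```python
import math
import random

def gaussian_mechanism(value, sensitivity, eps, delta, rng):
    """Release value + N(0, sigma^2), with sigma calibrated so the
    release satisfies (eps, delta)-DP (classical analytic bound,
    valid for eps < 1)."""
    sigma = sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / eps
    return value + rng.gauss(0.0, sigma), sigma

rng = random.Random(0)
true_count = 120                  # hypothetical count of users in one cell
noisy, sigma = gaussian_mechanism(true_count, sensitivity=1.0,
                                  eps=0.5, delta=1e-5, rng=rng)
print(sigma)    # ~9.69 for eps = 0.5, delta = 1e-5
print(noisy)
```

The experiment's parameter values are not reproduced here; the point is only that fixing the same (ε, δ) pins down the same σ, which is what makes the comparison with the shuffle-model estimate fair.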
Networked control systems have been playing a crucial role in modeling, analysis, and operation of real-world large-scale interconnected systems such as power systems, transportation networks, and water distribution networks.
Those systems consist of multiple interconnected subsystems which generally communicate with each other via insecure communication channels to share their information.
This insecure protocol may leave the networked control systems vulnerable to cyber-attacks such as denial-of-service and false-data injection attacks {{cite:93223f8ddebe41afbc889a3fbf07ef660a1467af}}, inflicting serious financial loss and civil damages.
Reports on actual damages such as Stuxnet {{cite:85618b0aac58903179af8eca6fb991d462a0e80b}} and Industroyer {{cite:6a5a57ad732137b20a5d023c83863e2419813dd0}} have described the catastrophic consequences of such cyber-attacks for an Iranian nuclear program and a Ukrainian power grid, respectively.
Motivated by the above observation, cyber-physical security has received increasing attention from the
control community in recent years.
| i | 33476816148cc34739430a70a65d7aec |
Vanilla ViT. For the Lottery Ticket, we use the DeiT-Small and DeiT-Tiny models {{cite:a53396e5cc66b17febd8518b92e86f6e0283dfad}} to validate the existence of winning tickets. We test our proposed LTH-ViTs definition using DeiT-Tiny and DeiT-Small.
The overview of our method is shown in Figure REF .
① We pretrain the model to get {{formula:dcc04458-cd0a-4692-8f67-0b15aa95620b}} , which is the same as conventional LTH. ② We incorporate the token identification module {{formula:eecb9a49-3b6c-4857-b145-430e56c847c5}} into the model to build the ticket selector {{formula:6898267a-dc8a-4f4b-b00f-615af00235ec}} . ③ We can use the ticket selector to produce the index of inattentive tokens for the mask {{formula:e44401e2-56bc-4f62-bc23-a0ce02e72a9e}} to build the winning tickets as {{formula:acbcdef6-50ba-4ed0-85a4-a92779030adf}} . ④ We train the model only with the winning tickets for the same iterations as pretrained model.
We evaluate the performance of the model trained with the full image {{formula:04e605b5-22eb-4bf2-bdee-537b0eaf2efa}} or winning tickets {{formula:3fe3e5eb-b950-46fb-9169-3e8dfe23b06a}} to see if the test accuracy of the lottery ticket can match that of the model trained and tested using full image patches. For the Random Choice, we replace the winning tickets with randomly selected patches with the same sparsity ratio during training, and the accuracy of the Random Choice is evaluated by using the full image or identical test image patches generated by {{formula:f1196528-4957-4d9e-8255-4e933a0e40cc}} to ensure fairness.
{{figure:ca9a3dca-c0a3-4bb1-a5a5-869313b948d3}} | m | 8891e7d8b45b9248ef980102999c3730 |
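Step ③ above, building winning tickets by masking inattentive tokens, amounts to keeping a top-scoring subset of patch tokens. In the hypothetical sketch below, the score list stands in for the output of the token identification module, which is not reproduced here:

```python
def select_winning_tokens(tokens, scores, keep_ratio):
    """Keep the top-scoring fraction of patch tokens (the 'winning
    ticket') and emit a 0/1 mask over all tokens; `scores` is a
    stand-in for the token identification module's output."""
    k = max(1, int(len(tokens) * keep_ratio))
    keep = set(sorted(range(len(tokens)),
                      key=lambda i: scores[i], reverse=True)[:k])
    mask = [1 if i in keep else 0 for i in range(len(tokens))]
    winning = [tokens[i] for i in sorted(keep)]
    return winning, mask

tokens = ["patch0", "patch1", "patch2", "patch3"]
scores = [0.9, 0.1, 0.8, 0.2]        # hypothetical attentiveness scores
winning, mask = select_winning_tokens(tokens, scores, keep_ratio=0.5)
print(winning)   # ['patch0', 'patch2']
print(mask)      # [1, 0, 1, 0]
```

Replacing the score-based selection with a random permutation of indices at the same keep ratio gives exactly the Random Choice baseline described in the text.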
We have presented the concept of Fourier-domain dedispersion and
shown that this direct incoherent dedispersion algorithm is a
viable alternative to the traditional time-domain dedispersion. We
have implemented the Fourier-domain dedispersion algorithm (fdd)
in the dedisp library by {{cite:61a3a3cbaad5f621cb1a5bd52f8cc644c53345db}}. An improved
version of the dedisp time-domain dedispersion algorithm (tdd)
was also added to allow for a fair comparison.
| d | b827c296cf6afa11f6b7014e3e36bec5 |
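The primitive behind Fourier-domain dedispersion is that a time delay becomes a per-frequency phase rotation after a Fourier transform, so a channel's dispersive delay can be removed by multiplying its spectrum by a linear phase ramp. The hypothetical toy below demonstrates the shift theorem with a plain DFT; the real fdd implementation in dedisp is, of course, far more elaborate:

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)) / N for n in range(N)]

def shift(x, delay_samples):
    """Circularly shift a time series by `delay_samples` by applying
    the phase ramp exp(-2*pi*i*k*delay/N) in the Fourier domain."""
    N = len(x)
    X = dft(x)
    Xs = [X[k] * cmath.exp(-2j * cmath.pi * k * delay_samples / N)
          for k in range(N)]
    return [v.real for v in idft(Xs)]

pulse = [0.0] * 8
pulse[1] = 1.0              # a pulse arriving one sample "late"
print(shift(pulse, -1))     # a delay of -1 moves the pulse to index 0
```

Dedispersing a real filterbank amounts to applying one such ramp per frequency channel, with the delay given by the cold-plasma dispersion law for the trial DM, before summing the channels.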
We briefly recall the situation for scalar drift-diffusion equations.
The first proof of exponential convergence to equilibrium in a degenerate parabolic equations
of the type {{formula:b38e9e8b-81cb-4b11-8af6-278d12de040f}} has been given
in the case {{formula:6d54cb82-6212-46a5-a6f5-a0d0ec09efcc}} and {{formula:42d810a2-c269-4600-bca8-101aa7f0801a}} on {{formula:08864b1e-49ae-4678-a575-2046e17362c0}}
by three different methods:
by a nonlinear extension of the Bakry–Emery method {{cite:20c96120e5e9c7ed8f447fe866fa66b723cd4072}},
by a variational proof of the entropy-dissipation inequality {{cite:2a26ad9074e6a6d83a0eecae9aa03f50c0c265a7}},
and by virtue of gradient flows in the {{formula:c9e02daf-ae57-4584-81b4-429da332c618}} -Wasserstein metric {{cite:ddc62560d2ae13e6ecf91c4b61bb5c7a4797b49d}}.
These methods have been extended later on to more general {{formula:ecb0e665-224a-44b6-b465-958a37ae1d8f}} 's and {{formula:d32d26a5-23c0-40ae-bd85-607cff5eebc0}} 's,
and also to bounded domains {{formula:7510a9bb-d6a7-4e95-8695-3c970e832d93}} ,
see e.g. {{cite:633f56847ac3cf264384fb590d505e22332ee42d}} and {{cite:d6799f70db40b8f5b43ea70d35498d24a5e9d23c}} for such generalizations of the first and of the third approach, respectively.
The common fundamental result is that if {{formula:3528e816-79e7-4732-8aa9-53546e1adcf9}} has certain convexity properties,
and if {{formula:adda5e8b-bf12-432a-8a4e-bebdb3e42aab}} is uniformly convex of modulus {{formula:3e27daf0-496e-4001-872e-cb012b6d8efd}} ,
then solutions {{formula:5b559a5f-ea88-43d6-95c5-84fe8cd0b64e}} converge to the unique equilibrium in {{formula:4e561896-962d-4099-bc58-164a35a51628}} at exponential rate {{formula:6d5511ed-f3ea-4109-ac4b-3a41c5b7d9c1}} .
| r | 8ca3c37ec32ba077d9a0150bd2d5cec5 |
Existing literature in few-shot object segmentation has mainly relied on manually labelled segmentation masks. A few recent works {{cite:e299c8528c1df4f84e09eb906d691fb9a393d630}}, {{cite:b0bcb2551f498fb4d00019f90e9dfe5b167f9253}}, {{cite:88fa8f64bd7e974677b984dc46e2826993c8812c}} started to conduct experiments using weak annotations such as scribbles or bounding boxes. However, these weak forms of supervision involve more manual work compared to image level labels, which can be collected from text and images publicly available on the web. Limited research has been conducted on using image-level supervision for few-shot segmentation {{cite:291ec09694ff2f4a2153526ae8e8deacae1d07cb}}.
| i | 9a2902d7f47ba6f5667ec8ff5124b74f |
Comparison with I2L-MeshNet {{cite:879257d7c465fdaa3fdf24dca934a618bbdbf3e0}} and Pose2Mesh {{cite:f7cf39110c14621ab049316f4ad171a62c5d1947}}.
Figure REF shows the qualitative comparison between 3DCrowdNet, I2L-MeshNet {{cite:879257d7c465fdaa3fdf24dca934a618bbdbf3e0}}, and Pose2Mesh {{cite:f7cf39110c14621ab049316f4ad171a62c5d1947}}.
Among the three methods, 3DCrowdNet apparently produces much more robust 3D shapes on the crowd scenes.
I2L-MeshNet tends to provide noisy body shape estimation in crowd scenes.
It misses a person in images with overlapping bounding boxes as depicted in the fifth and sixth rows.
Limb outputs in the second, fourth, and sixth rows are inaccurately estimated or swapped with other people.
Overall, I2L-MeshNet struggles to disentangle people in overlapping bounding boxes and provides inaccurate outputs when a body is occluded in crowd scenes.
| r | 271c58df9ac05529705838d3df1fa177 |
An event {{formula:00f5c8dc-cabc-434e-bc80-9fb84fcb2f87}} is interpreted as a tuple {{formula:303b73ac-000b-4ead-9354-9768d31d7808}} , where {{formula:e928af70-fc6a-4cb3-9870-befc51cd79b7}} is the pixel coordinate, {{formula:1a0d53ec-c0f6-45df-ab1c-858e5534bf91}} is the timestamp, and {{formula:c1e8f06b-063f-477c-8ec1-643ba3e9ac3d}} is the polarity indicating the sign of brightness change. An event occurs whenever a change in log-scale intensity exceeds a threshold.
A natural choice is to encode events in a spatial-temporal 3D volume to a voxel grid {{cite:6e88bb8a36f69feb1e3e2aef4070037f9727e605}}, {{cite:95fb3bf4729f97814bda2f6b41a9aa37e07ebf93}}, {{cite:9bed13c24830069f5bf12d627285e6af5d423d82}} or event frame {{cite:e27830fa9a777ca1d2d6f95400ec19a1c654e4cf}}, {{cite:a09f93f066dd176d07bc0af8598ebc4e1c02477a}} or multi-channel images {{cite:98c41065b8aa9b7c141299473011fa7d26b6532f}}, {{cite:cfc92d26a3d1fe4bf48b774a6c78ec51f8832a72}}, {{cite:f948bdcca165569ba775a98c4994dff7b509c44f}}. In this paper, we represent events to multi-channel event images as the inputs to the DNNs. Details are provided in the suppl. material.
| m | 274969aa309df489352ff1f0ff6d1ade |
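A multi-channel event image of the kind described can be built by accumulating per-pixel counts per polarity. The sketch below is a hypothetical minimal version (the paper's exact representation is detailed in its supplementary material):

```python
def events_to_image(events, height, width):
    """Accumulate events (x, y, t, p) into a 2-channel count image:
    channel 0 counts positive-polarity events, channel 1 negative."""
    img = [[[0, 0] for _ in range(width)] for _ in range(height)]
    for x, y, t, p in events:
        channel = 0 if p > 0 else 1
        img[y][x][channel] += 1
    return img

# Hypothetical event stream: (x, y, timestamp, polarity)
events = [(0, 0, 0.01, +1), (0, 0, 0.02, +1), (1, 0, 0.03, -1)]
img = events_to_image(events, height=2, width=2)
print(img[0][0])   # [2, 0]: two positive events at pixel (0, 0)
print(img[0][1])   # [0, 1]: one negative event at pixel (1, 0)
```

A voxel-grid encoding differs only in that the time axis is additionally binned, so each (x, y) accumulates into one of several temporal slices instead of a single channel pair.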
Thus, for every {{formula:48cbaac1-d863-4f92-8958-d5e6879356c8}} , {{formula:40fbafff-4f7b-41d5-8d5b-4a66dd0e4564}} is a {{formula:0933b684-380f-4e86-9a4c-f2ff3073c5c6}} -semigroup of operators on {{formula:744c5ee5-dbd4-4dc8-85c0-6ac35c3e3435}} whose infinitesimal generator is {{formula:598265d8-28e1-4986-8818-17924457483d}} ({{cite:c8d604fa91548e6210b5301dbc5209814293d63c}}). Then, for every {{formula:d0a73f30-a20d-4c8e-9f19-437b874812d3}} , the function {{formula:557be055-0aee-47a0-bf31-a8b0a80663ef}} , {{formula:7d889c54-c1d0-4965-84ae-9bd958dbaba0}} , {{formula:b997e76c-6320-40d3-a4d0-cc8d94edec70}} , solves the initial value problem
{{formula:8c04ef57-462f-4aed-9351-f0733afbf69c}}
| i | d88d0b661259bc17cce6ac158eb83986 |
Furthermore, given that our bounds for universal unforgeability are not tight, there remains room for exploring universal unforgeability under more efficient challenge sets; we will show an example of this in the next chapter. Finding tight bounds for the unforgeability of qPUFs and, more generally, quantum primitives is yet another attractive and challenging open problem. Later on, in [chap:application]Chapter , we will see that toolkits from quantum information theory, such as entropic uncertainty relations and data processing inequalities, are useful and powerful tools for proving similar results for specific constructions, and we therefore suggest that the proposed problem and supervised learning can both be attacked using similar tools. Information-theoretic bounds for quantum vs. classical machine learning have also been studied in {{cite:f4796c193dbf04c72f79f4e1107b37387766b653}}. Considering the close relationship between unforgeability and learning problems, we believe this work can help uncover tighter universal forgery bounds for qPUFs and other general quantum primitives.
| d | b4d9d91e44a6b99d2f72c064f606e9b8 |
In conclusion, we look at the issue of obtaining unconditional convergence rates/concentration bounds, as opposed to ours, which are conditioned on the iterate being in the domain of attraction of a given equilibrium. An unconditional estimate will be the product of our estimate and the probability that the conditioning event occurs, i.e., that the domain of attraction is indeed reached (one might add the qualifier `after a specified time'); see, e.g., Proposition 7.5 of {{cite:90dc41a70a1c31f186c1b97d74518b5e2e37bc9a}}. As already noted, the latter probability is strictly positive for any stable equilibrium under reasonable hypotheses; hence the primary task is to find a good estimate thereof.
| d | 0a07e5a388d0bec808959348fe7ea5fd |
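The decomposition behind this remark can be written out in generic notation as follows ($E$ denotes the event that the domain of attraction is reached; the symbols are illustrative, not the excerpt's own):

```latex
% Conditioning on the event E that the domain of attraction is reached:
\mathbb{P}(\text{convergence})
  \;\ge\; \mathbb{P}(\text{convergence} \mid E)\,\mathbb{P}(E),
% so a conditional rate estimate times a lower bound on P(E)
% yields an unconditional guarantee.
```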
Quantitative results on the COCO WholeBody dataset {{cite:1db5801f079434c2aab1c6d31041e7ca67d322ec}} are
shown in Table REF .
Our result is based on a single model that is evaluated for all (WB) or
a subset of the predicted keypoints.
Our method outperforms previous methods and achieves
especially high precision on fine-grained regions such as the face or hands.
The “body” task is equivalent to the COCO keypoint task on the val
set {{cite:1db417257e2507e5dfac712d4de28e901d6cdde2}}.
Our method is based on OpenPifPaf {{cite:fd5126f541df15900321e788b24f7b66c5b78a77}}, which achieves only
71.6% on the COCO val set with a model trained on that specific task,
and we therefore did not expect it to outperform ZoomNet {{cite:1db5801f079434c2aab1c6d31041e7ca67d322ec}}.
Our method makes up for its lower body AP with excellent results for face
and hand AP and is nearly twice as precise as any other bottom-up method.
| r | de6e250770d16a2df2f412f25e20d723 |
The multi-agent PPO reinforcement-learning actor-critic framework is based on the traditional actor-critic framework proposed by Sutton et al. {{cite:86923f31e6f5fd2783ef31e311457f2006c7e1de}}.
Specifically, the decision-making of each agent in the environment is based on local information. In particular, each agent uses the forward and backward headways to determine the holding time through its "actor" component (usually a neural network) and receives a reward from the environment. The "critic" component is responsible for estimating the state value given global information (i.e., the system headways and the other agents' actions). The state-value estimate is then used by the actor to update its policy. Wang et al. {{cite:1424eb94ab47d368bc1ed11395247dfa080bd67a}} point out that limiting actors to local information can be beneficial in avoiding extra approximation during real-world application.
| m | 003a3da391f389d330acca9f39bb1720 |
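The division of labor between local actors and a centralized critic can be sketched as follows (a toy NumPy stand-in with untrained, illustrative weights; the class and parameter names are assumptions, not the cited papers' code):

```python
import numpy as np

rng = np.random.default_rng(0)

class LocalActor:
    """Toy actor: maps a bus's local observation (forward headway,
    backward headway) to a distribution over discrete holding times.
    A stand-in for the neural-network actor described in the text."""
    def __init__(self, n_actions=3, hidden=8):
        self.W1 = rng.normal(0.0, 0.1, (2, hidden))
        self.W2 = rng.normal(0.0, 0.1, (hidden, n_actions))

    def policy(self, obs):
        # Two-layer network followed by a softmax over holding times.
        h = np.tanh(obs @ self.W1)
        logits = h @ self.W2
        e = np.exp(logits - logits.max())
        return e / e.sum()

class CentralCritic:
    """Toy critic: sees global information (all agents' headway pairs
    plus all agents' actions) and returns a scalar state-value estimate."""
    def __init__(self, n_agents, hidden=8):
        self.W1 = rng.normal(0.0, 0.1, (3 * n_agents, hidden))
        self.w2 = rng.normal(0.0, 0.1, hidden)

    def value(self, global_state):
        return float(np.tanh(global_state @ self.W1) @ self.w2)
```

At training time the critic's value estimate supplies the advantage signal for each actor's PPO update, while at deployment each actor needs only its own two headways, matching the locality argument above.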
IDL (https://www.l3harrisgeospatial.com/Software-Technology/IDL),
SolarSoft {{cite:f00003977afe2e0b142443e8cef974498116d7ba}},
PINTofALE {{cite:42c16c8a2e4d3c89a3e0840b1ce5fb75d0a89db6}} and
EBTEL {{cite:a8f75fc56fc1059ab0eecb11fcba45748613bc07}}, {{cite:e5771080f06925642041945a145e2675b394de1d}}
| d | efc38c8e441754bb1faed86ed28e8097 |
The proof of Theorem REF relies on using Lemma REF along with {{cite:8532ae934d4306af37b219c34726ca4a586ac959}}. Inspecting the proof of Theorem REF , one can see that any {{formula:00ebe4e1-bf00-4515-b67a-6699e0a1d4c2}} can replace {{formula:7b178ffe-6b46-4f38-9925-e7a463f6f56e}} (the proof does not rely on specific properties of the latter, such as being primal-dual optimal). The situation is similar to that of primal-dual algorithms in optimization {{cite:11330ef727e83aaa68484d0f0dfa966906c7d380}}, {{cite:9519ff434fc22fa95c19ebff663f4abede8b9bda}} and of Evolution Variational Inequalities in optimal transport {{cite:eac6ccdc0f49364ffe2588a2c4eaf91b4a6a3849}}.
| r | 8b312cb3b43e550807adcd0588404a0c |
There are several ongoing observations of neutron stars,
such as those of two-solar-mass stars {{cite:b1867b49e3d88268e4c5188c131f457363bc1efe}}, {{cite:4ad1a205e497a3dfe16970f559f64b44cf1df699}}, {{cite:57c3b046f1e01e6792f5c8f086b02b2d73b35e5b}}, those probed by gravitational-wave
detectors {{cite:84a0457401a8da72ef95eb54efa31631a6335254}}, {{cite:bb88bd7c73d435ca1f280f7334923705365a63b1}}, and those
investigated in the NICER mission {{cite:716e8a0107288f86651b55f210dab01fb23e42d1}}, {{cite:ac948499eea32a3ae9e19d39c89435931667600d}}, and they
have led to a flurry of studies on neutron stars in diverse fields of
research.
Amongst such efforts, numerous articles focus on the equation of state
(EoS).
A recent development in this direction was the construction of a
phenomenological hybrid EoS with a smooth crossover for the
hadron-to-quark phase transition {{cite:2236175bea0b2e719617c5e683ab881eeeed6f98}}, {{cite:ad3671f91749a6c8486b63d5b46e6feb046dcfab}}, {{cite:f87c3ce6ede3cbc3281779a7bcfd11b79e7c5a31}}, {{cite:b0e85270f6e2594f94cc9cfd4a68694a7bff53eb}}, {{cite:be0f818e8531a6907492b68e3b5481e07e7b28af}}, rather than
a first-order phase transition as conventionally assumed in much of the
literature.
The crossover construction enables neutron stars to have a sizable
quark core consistently with the constraints imposed by
observations.
Such a construction should not be regarded as an exotic alternative,
because a substantial quark core inside a heavy neutron star may
indeed be realized, as suggested by the
model-independent analysis {{cite:4550c94701387e44c8976463e70bbd5a602e7f96}}.
The plausible ground state of such cold dense quark matter is a color
superconductor {{cite:51e1c3084f4d06801dfa4065727e91158557f777}}, {{cite:81f66d323b5c6c0f6dba05ed61fb6d9068a4f004}}, with various
patterns of diquark pairing known, such as the color-flavor-locked
(CFL) phase {{cite:24a488c691acebf60c3bc3dd34f7a4fdf146afe0}} in three-flavor symmetric
matter and the two-flavor superconducting (2SC) phase {{cite:a1ec63ec7bccf0e8437188ada841a93b1682532e}}, {{cite:16d7a2d4f25d8ce56f96abc4369200c4bc196cf2}} in two-flavor symmetric matter.
| i | a5f436fa51058a485bbd50ae9c774be4 |