id (string, 9–10 chars) | submitter (string, 1–64 chars, nullable) | authors (string, 4–20.7k chars) | title (string, 4–246 chars) | comments (string, 1–523 chars, nullable) | journal-ref (string, 4–404 chars, nullable) | doi (string, 11–153 chars, nullable) | report-no (string, 2–254 chars, nullable) | categories (string, 5–98 chars) | license (string, 9 classes) | orig_abstract (string, 14–3.35k chars) | versions (list, length 1–60) | update_date (string, 10 chars) | authors_parsed (list, length 1–1.35k) | abstract (string, 11–3.34k chars) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1711.02860 | Kasper Green Larsen | Kasper Green Larsen | Constructive Discrepancy Minimization with Hereditary L2 Guarantees | null | null | null | null | cs.DS cs.DM math.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In discrepancy minimization problems, we are given a family of sets
$\mathcal{S} = \{S_1,\dots,S_m\}$, with each $S_i \in \mathcal{S}$ a subset of
some universe $U = \{u_1,\dots,u_n\}$ of $n$ elements. The goal is to find a
coloring $\chi : U \to \{-1,+1\}$ of the elements of $U$ such that each set $S
\in \mathcal{S}$ is colored as evenly as possible. Two classic measures of
discrepancy are $\ell_\infty$-discrepancy defined as
$\textrm{disc}_\infty(\mathcal{S},\chi):=\max_{S \in \mathcal{S}} | \sum_{u_i
\in S} \chi(u_i) |$ and $\ell_2$-discrepancy defined as
$\textrm{disc}_2(\mathcal{S},\chi):=\sqrt{(1/|\mathcal{S}|)\sum_{S \in
\mathcal{S}} \left(\sum_{u_i \in S}\chi(u_i)\right)^2}$. Breakthrough work by
Bansal gave a polynomial time algorithm, based on rounding an SDP, for finding
a coloring $\chi$ such that $\textrm{disc}_\infty(\mathcal{S},\chi) = O(\lg n
\cdot \textrm{herdisc}_\infty(\mathcal{S}))$ where
$\textrm{herdisc}_\infty(\mathcal{S})$ is the hereditary
$\ell_\infty$-discrepancy of $\mathcal{S}$. We complement his work by giving a
simple $O((m+n)n^2)$ time algorithm for finding a coloring $\chi$ such that
$\textrm{disc}_2(\mathcal{S},\chi) = O(\sqrt{\lg n} \cdot
\textrm{herdisc}_2(\mathcal{S}))$ where $\textrm{herdisc}_2(\mathcal{S})$ is
the hereditary $\ell_2$-discrepancy of $\mathcal{S}$. Interestingly, our
algorithm avoids solving an SDP and instead relies on computing
eigendecompositions of matrices. Moreover, we use our ideas to speed up the
Edge-Walk algorithm by Lovett and Meka [SICOMP'15]. To prove that our algorithm
has the claimed guarantees, we show new inequalities relating
$\textrm{herdisc}_\infty$ and $\textrm{herdisc}_2$ to the eigenvalues of the
matrix corresponding to $\mathcal{S}$. Our inequalities improve over previous
work by Chazelle and Lvov, and by Matousek et al. Finally, we also implement
our algorithm and show that it far outperforms random sampling.
| [
{
"created": "Wed, 8 Nov 2017 08:05:42 GMT",
"version": "v1"
},
{
"created": "Tue, 28 Nov 2017 13:48:15 GMT",
"version": "v2"
},
{
"created": "Tue, 19 Jun 2018 09:21:00 GMT",
"version": "v3"
},
{
"created": "Thu, 13 Dec 2018 09:06:01 GMT",
"version": "v4"
}
] | 2018-12-14 | [
[
"Larsen",
"Kasper Green",
""
]
] ] | In discrepancy minimization problems, we are given a family of sets $\mathcal{S} = \{S_1,\dots,S_m\}$, with each $S_i \in \mathcal{S}$ a subset of some universe $U = \{u_1,\dots,u_n\}$ of $n$ elements. The goal is to find a coloring $\chi : U \to \{-1,+1\}$ of the elements of $U$ such that each set $S \in \mathcal{S}$ is colored as evenly as possible. Two classic measures of discrepancy are $\ell_\infty$-discrepancy defined as $\textrm{disc}_\infty(\mathcal{S},\chi):=\max_{S \in \mathcal{S}} | \sum_{u_i \in S} \chi(u_i) |$ and $\ell_2$-discrepancy defined as $\textrm{disc}_2(\mathcal{S},\chi):=\sqrt{(1/|\mathcal{S}|)\sum_{S \in \mathcal{S}} \left(\sum_{u_i \in S}\chi(u_i)\right)^2}$. Breakthrough work by Bansal gave a polynomial time algorithm, based on rounding an SDP, for finding a coloring $\chi$ such that $\textrm{disc}_\infty(\mathcal{S},\chi) = O(\lg n \cdot \textrm{herdisc}_\infty(\mathcal{S}))$ where $\textrm{herdisc}_\infty(\mathcal{S})$ is the hereditary $\ell_\infty$-discrepancy of $\mathcal{S}$. We complement his work by giving a simple $O((m+n)n^2)$ time algorithm for finding a coloring $\chi$ such that $\textrm{disc}_2(\mathcal{S},\chi) = O(\sqrt{\lg n} \cdot \textrm{herdisc}_2(\mathcal{S}))$ where $\textrm{herdisc}_2(\mathcal{S})$ is the hereditary $\ell_2$-discrepancy of $\mathcal{S}$. Interestingly, our algorithm avoids solving an SDP and instead relies on computing eigendecompositions of matrices. Moreover, we use our ideas to speed up the Edge-Walk algorithm by Lovett and Meka [SICOMP'15]. To prove that our algorithm has the claimed guarantees, we show new inequalities relating $\textrm{herdisc}_\infty$ and $\textrm{herdisc}_2$ to the eigenvalues of the matrix corresponding to $\mathcal{S}$. Our inequalities improve over previous work by Chazelle and Lvov, and by Matousek et al. Finally, we also implement our algorithm and show that it far outperforms random sampling. |
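The two discrepancy measures defined in this abstract can be computed directly from their formulas. The sketch below illustrates only the definitions, not the paper's eigendecomposition-based algorithm; the set and coloring representations are assumptions for illustration:

```python
import math

def disc_inf(sets, chi):
    # l_infinity-discrepancy: max over sets of |sum of element colors|
    return max(abs(sum(chi[u] for u in S)) for S in sets)

def disc_2(sets, chi):
    # l_2-discrepancy: root mean square of the per-set color sums
    return math.sqrt(sum(sum(chi[u] for u in S) ** 2 for S in sets) / len(sets))

# Toy instance: universe {0,1,2,3}, three sets, one +/-1 coloring.
sets = [{0, 1}, {1, 2}, {0, 2, 3}]
chi = {0: +1, 1: -1, 2: +1, 3: -1}
print(disc_inf(sets, chi))  # 1
print(disc_2(sets, chi))    # sqrt(1/3) ~ 0.577
```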
2103.08590 | Adrianna Janik | Adrianna Janik, Jonathan Dodd, Georgiana Ifrim, Kris Sankaran,
Kathleen Curran | Interpretability of a Deep Learning Model in the Application of Cardiac
MRI Segmentation with an ACDC Challenge Dataset | null | null | 10.1117/12.2582227 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Cardiac Magnetic Resonance (CMR) is the most effective tool for the
assessment and diagnosis of heart conditions, the world's leading cause of
death. Software tools leveraging Artificial Intelligence already assist
radiologists and cardiologists in heart condition assessment, but their lack
of transparency is a problem. This project investigates whether it is possible
to discover concepts representative of different cardiac conditions from a
deep network trained to segment cardiac structures: the Left Ventricle (LV),
the Right Ventricle (RV), and the Myocardium (MYO), using explainability
methods that enhance the classification system by providing score-based values
of qualitative concepts, along with the key performance metrics. With the GDPR
introducing a need for explanations, explainability of AI systems is
necessary. This study applies Discovering and Testing with Concept Activation
Vectors (D-TCAV), an interpretability method, to extract underlying features
important for cardiac disease diagnosis from MRI data. The method provides a
quantitative notion of concept importance for the classified disease. In
previous studies, the base method was applied to the classification of cardiac
disease and provided clinically meaningful explanations for the predictions of
a black-box deep learning classifier. This study applies a method extending
TCAV with a Discovering phase (D-TCAV) to cardiac MRI analysis. The advantage
of the D-TCAV method over the base method is that it is user-independent. The
contribution of this study is a novel application of the explainability method
D-TCAV to cardiac MRI analysis. D-TCAV requires a shorter pre-processing time
for clinicians than the base method.
| [
{
"created": "Mon, 15 Mar 2021 17:57:40 GMT",
"version": "v1"
}
] | 2021-03-16 | [
[
"Janik",
"Adrianna",
""
],
[
"Dodd",
"Jonathan",
""
],
[
"Ifrim",
"Georgiana",
""
],
[
"Sankaran",
"Kris",
""
],
[
"Curran",
"Kathleen",
""
]
] ] | Cardiac Magnetic Resonance (CMR) is the most effective tool for the assessment and diagnosis of heart conditions, the world's leading cause of death. Software tools leveraging Artificial Intelligence already assist radiologists and cardiologists in heart condition assessment, but their lack of transparency is a problem. This project investigates whether it is possible to discover concepts representative of different cardiac conditions from a deep network trained to segment cardiac structures: the Left Ventricle (LV), the Right Ventricle (RV), and the Myocardium (MYO), using explainability methods that enhance the classification system by providing score-based values of qualitative concepts, along with the key performance metrics. With the GDPR introducing a need for explanations, explainability of AI systems is necessary. This study applies Discovering and Testing with Concept Activation Vectors (D-TCAV), an interpretability method, to extract underlying features important for cardiac disease diagnosis from MRI data. The method provides a quantitative notion of concept importance for the classified disease. In previous studies, the base method was applied to the classification of cardiac disease and provided clinically meaningful explanations for the predictions of a black-box deep learning classifier. This study applies a method extending TCAV with a Discovering phase (D-TCAV) to cardiac MRI analysis. The advantage of the D-TCAV method over the base method is that it is user-independent. The contribution of this study is a novel application of the explainability method D-TCAV to cardiac MRI analysis. D-TCAV requires a shorter pre-processing time for clinicians than the base method. |
1201.1363 | Anisur Molla Rahaman | Atish Das Sarma, Anisur Rahaman Molla and Gopal Pandurangan | Near-Optimal Random Walk Sampling in Distributed Networks | null | null | null | null | cs.DC cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Performing random walks in networks is a fundamental primitive that has found
numerous applications in communication networks such as token management, load
balancing, network topology discovery and construction, search, and
peer-to-peer membership management. While several such algorithms are
ubiquitous, and use numerous random walk samples, the walks themselves have
always been performed naively.
In this paper, we focus on the problem of performing random walk sampling
efficiently in a distributed network. Given bandwidth constraints, the goal is
to minimize the number of rounds and messages required to obtain several random
walk samples in a continuous online fashion. We present the first round- and
message-optimal distributed algorithms, which constitute a significant
improvement over all previous approaches. The theoretical analysis and comprehensive
experimental evaluation of our algorithms show that they perform very well in
different types of networks of differing topologies.
In particular, our results show how several random walks can be performed
continuously (when source nodes are provided only at runtime, i.e., online),
such that each walk of length $\ell$ can be performed exactly in just
$\tilde{O}(\sqrt{\ell D})$ rounds (where $D$ is the diameter of the network)
and $O(\ell)$ messages. This significantly improves upon both the naive
technique, which requires $O(\ell)$ rounds and $O(\ell)$ messages, and the
sophisticated algorithm of [DasSarma et al. PODC 2010] that has the same round
complexity as this paper but requires $\Omega(m\sqrt{\ell})$ messages (where
$m$ is the number of edges in the network). Our theoretical results are
corroborated through extensive experiments on various topological data sets.
Our algorithms are fully decentralized, lightweight, and easily implementable,
and can serve as building blocks in the design of topologically-aware networks.
| [
{
"created": "Fri, 6 Jan 2012 08:16:45 GMT",
"version": "v1"
},
{
"created": "Wed, 11 Jan 2012 16:32:42 GMT",
"version": "v2"
}
] | 2012-01-12 | [
[
"Sarma",
"Atish Das",
""
],
[
"Molla",
"Anisur Rahaman",
""
],
[
"Pandurangan",
"Gopal",
""
]
] ] | Performing random walks in networks is a fundamental primitive that has found numerous applications in communication networks such as token management, load balancing, network topology discovery and construction, search, and peer-to-peer membership management. While several such algorithms are ubiquitous, and use numerous random walk samples, the walks themselves have always been performed naively. In this paper, we focus on the problem of performing random walk sampling efficiently in a distributed network. Given bandwidth constraints, the goal is to minimize the number of rounds and messages required to obtain several random walk samples in a continuous online fashion. We present the first round- and message-optimal distributed algorithms, which constitute a significant improvement over all previous approaches. The theoretical analysis and comprehensive experimental evaluation of our algorithms show that they perform very well in different types of networks of differing topologies. In particular, our results show how several random walks can be performed continuously (when source nodes are provided only at runtime, i.e., online), such that each walk of length $\ell$ can be performed exactly in just $\tilde{O}(\sqrt{\ell D})$ rounds (where $D$ is the diameter of the network) and $O(\ell)$ messages. This significantly improves upon both the naive technique, which requires $O(\ell)$ rounds and $O(\ell)$ messages, and the sophisticated algorithm of [DasSarma et al. PODC 2010] that has the same round complexity as this paper but requires $\Omega(m\sqrt{\ell})$ messages (where $m$ is the number of edges in the network). Our theoretical results are corroborated through extensive experiments on various topological data sets. Our algorithms are fully decentralized, lightweight, and easily implementable, and can serve as building blocks in the design of topologically-aware networks. |
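For contrast with the paper's $\tilde{O}(\sqrt{\ell D})$-round result, the naive baseline mentioned in this abstract (forwarding a token one hop per round) can be sketched as follows; the adjacency-list representation and function name are assumptions for illustration:

```python
import random

def naive_random_walk(adj, source, length, seed=None):
    """Naive baseline: the token is forwarded to a uniformly random
    neighbor once per round, so a walk of length l costs O(l) rounds
    and O(l) messages."""
    rng = random.Random(seed)
    node, messages = source, 0
    for _ in range(length):
        node = rng.choice(adj[node])  # one message sent, one round spent
        messages += 1
    return node, messages

# 4-cycle: 0-1-2-3-0
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
end, msgs = naive_random_walk(adj, 0, 10, seed=1)
print(end, msgs)
```

Since the 4-cycle is bipartite, a walk of even length from node 0 must end on an even-numbered node.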
cs/0305007 | Chris Johnson | C. A. Johnson | Computing only minimal answers in disjunctive deductive databases | 48 pages | null | null | null | cs.LO | null | A method is presented for computing minimal answers in disjunctive deductive
databases under the disjunctive stable model semantics. Such answers are
constructed by repeatedly extending partial answers. Our method is complete (in
that every minimal answer can be computed) and does not admit redundancy (in
the sense that every partial answer generated can be extended to a minimal
answer), whence no non-minimal answer is generated. For stratified databases,
the method does not (necessarily) require the computation of models of the
database in their entirety. Compilation is proposed as a tool by which problems
relating to computational efficiency and the non-existence of disjunctive
stable models can be overcome. The extension of our method to other semantics
is also considered.
| [
{
"created": "Tue, 13 May 2003 08:27:45 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Johnson",
"C. A.",
""
]
] | A method is presented for computing minimal answers in disjunctive deductive databases under the disjunctive stable model semantics. Such answers are constructed by repeatedly extending partial answers. Our method is complete (in that every minimal answer can be computed) and does not admit redundancy (in the sense that every partial answer generated can be extended to a minimal answer), whence no non-minimal answer is generated. For stratified databases, the method does not (necessarily) require the computation of models of the database in their entirety. Compilation is proposed as a tool by which problems relating to computational efficiency and the non-existence of disjunctive stable models can be overcome. The extension of our method to other semantics is also considered. |
2406.19087 | Florian Mahner | Florian P. Mahner, Lukas Muttenthaler, Umut G\"u\c{c}l\"u, Martin N.
Hebart | Dimensions underlying the representational alignment of deep neural
networks with humans | null | null | null | null | cs.CV cs.AI cs.LG q-bio.QM | http://creativecommons.org/publicdomain/zero/1.0/ | Determining the similarities and differences between humans and artificial
intelligence is an important goal both in machine learning and cognitive
neuroscience. However, similarities in representations only inform us about the
degree of alignment, not the factors that determine it. Drawing upon recent
developments in cognitive science, we propose a generic framework for yielding
comparable representations in humans and deep neural networks (DNN). Applying
this framework to humans and a DNN model of natural images revealed a
low-dimensional DNN embedding of both visual and semantic dimensions. In
contrast to humans, DNNs exhibited a clear dominance of visual over semantic
features, indicating divergent strategies for representing images. While
in-silico experiments showed seemingly-consistent interpretability of DNN
dimensions, a direct comparison between human and DNN representations revealed
substantial differences in how they process images. By making representations
directly comparable, our results reveal important challenges for
representational alignment, offering a means for improving their comparability.
| [
{
"created": "Thu, 27 Jun 2024 11:14:14 GMT",
"version": "v1"
}
] | 2024-06-28 | [
[
"Mahner",
"Florian P.",
""
],
[
"Muttenthaler",
"Lukas",
""
],
[
"Güçlü",
"Umut",
""
],
[
"Hebart",
"Martin N.",
""
]
] | Determining the similarities and differences between humans and artificial intelligence is an important goal both in machine learning and cognitive neuroscience. However, similarities in representations only inform us about the degree of alignment, not the factors that determine it. Drawing upon recent developments in cognitive science, we propose a generic framework for yielding comparable representations in humans and deep neural networks (DNN). Applying this framework to humans and a DNN model of natural images revealed a low-dimensional DNN embedding of both visual and semantic dimensions. In contrast to humans, DNNs exhibited a clear dominance of visual over semantic features, indicating divergent strategies for representing images. While in-silico experiments showed seemingly-consistent interpretability of DNN dimensions, a direct comparison between human and DNN representations revealed substantial differences in how they process images. By making representations directly comparable, our results reveal important challenges for representational alignment, offering a means for improving their comparability. |
1703.05260 | Ashutosh Modi | Ashutosh Modi and Tatjana Anikina and Simon Ostermann and Manfred
Pinkal | InScript: Narrative texts annotated with script information | Paper accepted at LREC 2016, 9 pages, The corpus can be downloaded
at: http://www.sfb1102.uni-saarland.de/?page_id=2582 | LREC 2016 | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents the InScript corpus (Narrative Texts Instantiating Script
structure). InScript is a corpus of 1,000 stories centered around 10 different
scenarios. Verbs and noun phrases are annotated with event and participant
types, respectively. Additionally, the text is annotated with coreference
information. The corpus shows rich lexical variation and will serve as a unique
resource for the study of the role of script knowledge in natural language
processing.
| [
{
"created": "Wed, 15 Mar 2017 17:01:20 GMT",
"version": "v1"
}
] | 2017-03-16 | [
[
"Modi",
"Ashutosh",
""
],
[
"Anikina",
"Tatjana",
""
],
[
"Ostermann",
"Simon",
""
],
[
"Pinkal",
"Manfred",
""
]
] | This paper presents the InScript corpus (Narrative Texts Instantiating Script structure). InScript is a corpus of 1,000 stories centered around 10 different scenarios. Verbs and noun phrases are annotated with event and participant types, respectively. Additionally, the text is annotated with coreference information. The corpus shows rich lexical variation and will serve as a unique resource for the study of the role of script knowledge in natural language processing. |
1007.1717 | Petros Petrosyan | R.R. Kamalian, P.A. Petrosyan | A note on interval edge-colorings of graphs | 4 pages, minor changes | null | null | null | cs.DM | http://creativecommons.org/licenses/by/3.0/ | An edge-coloring of a graph $G$ with colors $1,2,\ldots,t$ is called an
interval $t$-coloring if for each $i\in \{1,2,\ldots,t\}$ there is at least one
edge of $G$ colored by $i$, and the colors of edges incident to any vertex of
$G$ are distinct and form an interval of integers. In this paper we prove that
if a connected graph $G$ with $n$ vertices admits an interval $t$-coloring,
then $t\leq 2n-3$. We also show that if a connected $r$-regular graph $G$
with $n$ vertices has an interval $t$-coloring and $n\geq 2r+2$, then this
upper bound can be improved to $2n-5$.
| [
{
"created": "Sat, 10 Jul 2010 12:25:17 GMT",
"version": "v1"
},
{
"created": "Thu, 12 Aug 2010 06:25:51 GMT",
"version": "v2"
}
] | 2010-08-13 | [
[
"Kamalian",
"R. R.",
""
],
[
"Petrosyan",
"P. A.",
""
]
] ] | An edge-coloring of a graph $G$ with colors $1,2,\ldots,t$ is called an interval $t$-coloring if for each $i\in \{1,2,\ldots,t\}$ there is at least one edge of $G$ colored by $i$, and the colors of edges incident to any vertex of $G$ are distinct and form an interval of integers. In this paper we prove that if a connected graph $G$ with $n$ vertices admits an interval $t$-coloring, then $t\leq 2n-3$. We also show that if a connected $r$-regular graph $G$ with $n$ vertices has an interval $t$-coloring and $n\geq 2r+2$, then this upper bound can be improved to $2n-5$. |
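The definition in this abstract translates directly into a checker; the sketch below is a minimal illustration (the edge and coloring representations are assumptions, not from the paper):

```python
def is_interval_coloring(edges, coloring, t):
    """Check the abstract's definition: every color 1..t is used on some
    edge, and the colors incident to each vertex are distinct and form
    an interval of consecutive integers."""
    if not set(range(1, t + 1)) <= {coloring[e] for e in edges}:
        return False
    incident = {}
    for e in edges:
        u, v = e
        incident.setdefault(u, []).append(coloring[e])
        incident.setdefault(v, []).append(coloring[e])
    for cols in incident.values():
        if len(set(cols)) != len(cols):               # colors must be distinct
            return False
        if max(cols) - min(cols) + 1 != len(cols):    # and consecutive
            return False
    return True

# Path a-b-c with colors 1, 2 is an interval 2-coloring.
edges = [("a", "b"), ("b", "c")]
coloring = {("a", "b"): 1, ("b", "c"): 2}
print(is_interval_coloring(edges, coloring, 2))  # True
```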
2406.09984 | Simone Lionetti | Adrian Willi, Pascal Baumann, Sophie Erb, Fabian Gr\"oger, Yanick
Zeder, Simone Lionetti | Self-Supervised and Few-Shot Learning for Robust Bioaerosol Monitoring | Short communication, 8 pages, 2 figures, 1 table | null | null | null | cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | Real-time bioaerosol monitoring is improving the quality of life for people
affected by allergies, but it often relies on deep-learning models which pose
challenges for widespread adoption. These models are typically trained in a
supervised fashion and require considerable effort to produce large amounts of
annotated data, an effort that must be repeated for new particles, geographical
regions, or measurement systems. In this work, we show that self-supervised
learning and few-shot learning can be combined to classify holographic images
of bioaerosol particles using a large collection of unlabelled data and only a
few examples for each particle type. We first demonstrate that self-supervision
on pictures of unidentified particles from ambient air measurements enhances
identification even when labelled data is abundant. Most importantly, it
greatly improves few-shot classification when only a handful of labelled images
are available. Our findings suggest that real-time bioaerosol monitoring
workflows can be substantially optimized, and the effort required to adapt
models for different situations considerably reduced.
| [
{
"created": "Fri, 14 Jun 2024 12:48:26 GMT",
"version": "v1"
}
] | 2024-06-17 | [
[
"Willi",
"Adrian",
""
],
[
"Baumann",
"Pascal",
""
],
[
"Erb",
"Sophie",
""
],
[
"Gröger",
"Fabian",
""
],
[
"Zeder",
"Yanick",
""
],
[
"Lionetti",
"Simone",
""
]
] | Real-time bioaerosol monitoring is improving the quality of life for people affected by allergies, but it often relies on deep-learning models which pose challenges for widespread adoption. These models are typically trained in a supervised fashion and require considerable effort to produce large amounts of annotated data, an effort that must be repeated for new particles, geographical regions, or measurement systems. In this work, we show that self-supervised learning and few-shot learning can be combined to classify holographic images of bioaerosol particles using a large collection of unlabelled data and only a few examples for each particle type. We first demonstrate that self-supervision on pictures of unidentified particles from ambient air measurements enhances identification even when labelled data is abundant. Most importantly, it greatly improves few-shot classification when only a handful of labelled images are available. Our findings suggest that real-time bioaerosol monitoring workflows can be substantially optimized, and the effort required to adapt models for different situations considerably reduced. |
1910.08283 | Muhammad Irfan Yousuf Dr. | Muhammad Irfan Yousuf, Raheel Anwar | Weighted Edge Sampling for Static Graphs | 9 pages, 3 figures, Pre-print | null | null | null | cs.DS cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph Sampling provides an efficient yet inexpensive solution for analyzing
large graphs. While extracting small representative subgraphs from large
graphs, the challenge is to capture the properties of the original graph.
Several sampling algorithms have been proposed in previous studies, but they
fall short of extracting good samples. In this paper, we propose a new sampling
method called Weighted Edge Sampling. In this method, we give equal weight to
all the edges in the beginning. During the sampling process, we sample an edge
with the probability proportional to its weight. When an edge is sampled, we
increase the weight of its neighboring edges, which increases their
probability of being sampled. Our method extracts the neighborhood of a sampled
edge more efficiently than previous approaches. We evaluate the efficacy of our
sampling approach empirically using several real-world data sets and compare it
with some of the previous approaches. We find that our method produces samples
that better match the original graphs. We also calculate the Root Mean Square
Error and Kolmogorov-Smirnov distance to compare the results quantitatively.
| [
{
"created": "Fri, 18 Oct 2019 06:51:46 GMT",
"version": "v1"
}
] | 2019-10-21 | [
[
"Yousuf",
"Muhammad Irfan",
""
],
[
"Anwar",
"Raheel",
""
]
] ] | Graph Sampling provides an efficient yet inexpensive solution for analyzing large graphs. While extracting small representative subgraphs from large graphs, the challenge is to capture the properties of the original graph. Several sampling algorithms have been proposed in previous studies, but they fall short of extracting good samples. In this paper, we propose a new sampling method called Weighted Edge Sampling. In this method, we give equal weight to all the edges in the beginning. During the sampling process, we sample an edge with the probability proportional to its weight. When an edge is sampled, we increase the weight of its neighboring edges, which increases their probability of being sampled. Our method extracts the neighborhood of a sampled edge more efficiently than previous approaches. We evaluate the efficacy of our sampling approach empirically using several real-world data sets and compare it with some of the previous approaches. We find that our method produces samples that better match the original graphs. We also calculate the Root Mean Square Error and Kolmogorov-Smirnov distance to compare the results quantitatively. |
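The procedure described in this abstract can be sketched directly: equal initial weights, draws proportional to weight, and a weight boost for edges adjacent to each sampled edge. The boost amount and sampling without replacement are assumptions not fixed by the abstract:

```python
import random

def weighted_edge_sample(edges, k, boost=1.0, seed=None):
    """Sample up to k distinct edges; after each draw, add `boost` to
    the weight of every remaining edge sharing a vertex with it."""
    rng = random.Random(seed)
    weights = {e: 1.0 for e in edges}   # equal weights initially
    remaining = set(edges)
    sample = []
    for _ in range(min(k, len(edges))):
        pool = list(remaining)
        e = rng.choices(pool, weights=[weights[x] for x in pool])[0]
        sample.append(e)
        remaining.discard(e)
        u, v = e
        for f in remaining:             # boost neighboring edges
            if u in f or v in f:
                weights[f] += boost
    return sample

edges = [("a", "b"), ("b", "c"), ("a", "c")]
print(weighted_edge_sample(edges, 2, seed=0))
```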
2010.03190 | Yijun Zhou | Yijun Zhou, Yuki Koyama, Masataka Goto, Takeo Igarashi | Generative Melody Composition with Human-in-the-Loop Bayesian
Optimization | 10 pages, 2 figures, Proceedings of the 2020 Joint Conference on AI
Music Creativity (CSMC-MuMe 2020) | null | null | null | cs.SD cs.HC eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep generative models allow even novice composers to generate various
melodies by sampling latent vectors. However, finding the desired melody is
challenging since the latent space is unintuitive and high-dimensional. In this
work, we present an interactive system that supports generative melody
composition with human-in-the-loop Bayesian optimization (BO). This system
takes a mixed-initiative approach; the system generates candidate melodies to
evaluate, and the user evaluates them and provides preferential feedback (i.e.,
picking the best melody among the candidates) to the system. This process is
iteratively performed based on BO techniques until the user finds the desired
melody. We conducted a pilot study using our prototype system, suggesting the
potential of this approach.
| [
{
"created": "Wed, 7 Oct 2020 05:54:20 GMT",
"version": "v1"
}
] | 2020-10-08 | [
[
"Zhou",
"Yijun",
""
],
[
"Koyama",
"Yuki",
""
],
[
"Goto",
"Masataka",
""
],
[
"Igarashi",
"Takeo",
""
]
] | Deep generative models allow even novice composers to generate various melodies by sampling latent vectors. However, finding the desired melody is challenging since the latent space is unintuitive and high-dimensional. In this work, we present an interactive system that supports generative melody composition with human-in-the-loop Bayesian optimization (BO). This system takes a mixed-initiative approach; the system generates candidate melodies to evaluate, and the user evaluates them and provides preferential feedback (i.e., picking the best melody among the candidates) to the system. This process is iteratively performed based on BO techniques until the user finds the desired melody. We conducted a pilot study using our prototype system, suggesting the potential of this approach. |
2311.08473 | Matteo Torzoni | Gabriel Garayalde, Matteo Torzoni, Matteo Bruggi, Alberto Corigliano | Real-time topology optimization via learnable mappings | null | null | 10.1002/nme.7502 | null | cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In traditional topology optimization, the computing time required to
iteratively update the material distribution within a design domain strongly
depends on the complexity or size of the problem, limiting its application in
real engineering contexts. This work proposes a multi-stage machine learning
strategy that aims to predict an optimal topology and the related stress fields
of interest, either in 2D or 3D, without resorting to any iterative analysis
and design process. The overall topology optimization is treated as a
regression task in a low-dimensional latent space that encodes the variability
of the target designs. First, a fully-connected model is employed to surrogate the
functional link between the parametric input space characterizing the design
problem and the latent space representation of the corresponding optimal
topology. The decoder branch of an autoencoder is then exploited to reconstruct
the desired optimal topology from its latent representation. The deep learning
models are trained on a dataset generated through a standard method of topology
optimization implementing the solid isotropic material with penalization, for
varying boundary and loading conditions. The underlying hypothesis behind the
proposed strategy is that optimal topologies share enough common patterns to be
compressed into small latent space representations without significant
information loss. Results relevant to a 2D Messerschmitt-B\"olkow-Blohm beam
and a 3D bridge case demonstrate the capabilities of the proposed framework to
provide accurate optimal topology predictions in a fraction of a second.
| [
{
"created": "Tue, 14 Nov 2023 19:04:16 GMT",
"version": "v1"
},
{
"created": "Mon, 13 May 2024 12:12:51 GMT",
"version": "v2"
}
] | 2024-05-14 | [
[
"Garayalde",
"Gabriel",
""
],
[
"Torzoni",
"Matteo",
""
],
[
"Bruggi",
"Matteo",
""
],
[
"Corigliano",
"Alberto",
""
]
] ] | In traditional topology optimization, the computing time required to iteratively update the material distribution within a design domain strongly depends on the complexity or size of the problem, limiting its application in real engineering contexts. This work proposes a multi-stage machine learning strategy that aims to predict an optimal topology and the related stress fields of interest, either in 2D or 3D, without resorting to any iterative analysis and design process. The overall topology optimization is treated as a regression task in a low-dimensional latent space that encodes the variability of the target designs. First, a fully-connected model is employed to surrogate the functional link between the parametric input space characterizing the design problem and the latent space representation of the corresponding optimal topology. The decoder branch of an autoencoder is then exploited to reconstruct the desired optimal topology from its latent representation. The deep learning models are trained on a dataset generated through a standard method of topology optimization implementing the solid isotropic material with penalization, for varying boundary and loading conditions. The underlying hypothesis behind the proposed strategy is that optimal topologies share enough common patterns to be compressed into small latent space representations without significant information loss. Results relevant to a 2D Messerschmitt-B\"olkow-Blohm beam and a 3D bridge case demonstrate the capabilities of the proposed framework to provide accurate optimal topology predictions in a fraction of a second. |
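The two-stage inference pipeline this abstract describes (a fully-connected surrogate from design parameters to a latent code, then a decoder reconstructing the topology) can be sketched structurally. This is an untrained toy with random weights; all layer sizes, parameter meanings, and names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, layers):
    # tiny fully-connected network: tanh hidden layers, linear output
    for W, b in layers[:-1]:
        x = np.tanh(W @ x + b)
    W, b = layers[-1]
    return W @ x + b

def make_layers(sizes):
    return [(0.1 * rng.standard_normal((m, n)), np.zeros(m))
            for n, m in zip(sizes, sizes[1:])]

# Hypothetical sizes: 3 design parameters -> 8-d latent -> 64-pixel topology.
surrogate = make_layers([3, 16, 8])    # stage 1: parameters -> latent code
decoder = make_layers([8, 32, 64])     # stage 2: latent code -> topology

params = np.array([0.5, -0.2, 1.0])    # e.g. load position, angle, volume fraction
z = mlp(params, surrogate)             # predict latent representation
density = 1 / (1 + np.exp(-mlp(z, decoder)))  # decode to material densities in (0,1)
print(density.shape)  # (64,)
```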
1304.6000 | Jin Tan | Jin Tan, Dror Baron, and Liyi Dai | Mixture Gaussian Signal Estimation with L_infty Error Metric | null | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of estimating an input signal from noisy measurements
in both parallel scalar Gaussian channels and linear mixing systems. The
performance of the estimation process is quantified by the $\ell_\infty$ norm
error metric. We first study the minimum mean $\ell_\infty$ error estimator in
parallel scalar Gaussian channels, and verify that, when the input is
independent and identically distributed (i.i.d.) mixture Gaussian, the Wiener
filter is asymptotically optimal with probability 1. For linear mixing systems
with i.i.d. sparse Gaussian or mixture Gaussian inputs, under the assumption
that the relaxed belief propagation (BP) algorithm matches Tanaka's fixed point
equation, applying the Wiener filter to the output of relaxed BP is also
asymptotically optimal with probability 1. However, in order to solve the
practical problem where the signal dimension is finite, we apply an estimation
algorithm that has been proposed in our previous work, and illustrate that an
$\ell_\infty$ error minimizer can be approximated by an $\ell_p$ error
minimizer provided the value of $p$ is properly chosen.
| [
{
"created": "Mon, 22 Apr 2013 16:09:33 GMT",
"version": "v1"
}
] | 2013-04-23 | [
[
"Tan",
"Jin",
""
],
[
"Baron",
"Dror",
""
],
[
"Dai",
"Liyi",
""
]
] | We consider the problem of estimating an input signal from noisy measurements in both parallel scalar Gaussian channels and linear mixing systems. The performance of the estimation process is quantified by the $\ell_\infty$ norm error metric. We first study the minimum mean $\ell_\infty$ error estimator in parallel scalar Gaussian channels, and verify that, when the input is independent and identically distributed (i.i.d.) mixture Gaussian, the Wiener filter is asymptotically optimal with probability 1. For linear mixing systems with i.i.d. sparse Gaussian or mixture Gaussian inputs, under the assumption that the relaxed belief propagation (BP) algorithm matches Tanaka's fixed point equation, applying the Wiener filter to the output of relaxed BP is also asymptotically optimal with probability 1. However, in order to solve the practical problem where the signal dimension is finite, we apply an estimation algorithm that has been proposed in our previous work, and illustrate that an $\ell_\infty$ error minimizer can be approximated by an $\ell_p$ error minimizer provided the value of $p$ is properly chosen. |
2110.12091 | Junwen Bai | Junwen Bai, Weiran Wang, Carla Gomes | Contrastively Disentangled Sequential Variational Autoencoder | Accepted by NeurIPS 2021 | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Self-supervised disentangled representation learning is a critical task in
sequence modeling. The learnt representations contribute to better model
interpretability as well as data generation, and improve the sample
efficiency for downstream tasks. We propose a novel sequence representation
learning method, named Contrastively Disentangled Sequential Variational
Autoencoder (C-DSVAE), to extract and separate the static (time-invariant) and
dynamic (time-variant) factors in the latent space. Different from previous
sequential variational autoencoder methods, we use a novel evidence lower bound
which maximizes the mutual information between the input and the latent
factors, while penalizing the mutual information between the static and dynamic
factors. We leverage contrastive estimations of the mutual information terms in
training, together with simple yet effective augmentation techniques, to
introduce additional inductive biases. Our experiments show that C-DSVAE
significantly outperforms the previous state-of-the-art methods on multiple
metrics.
| [
{
"created": "Fri, 22 Oct 2021 23:00:32 GMT",
"version": "v1"
}
] | 2021-10-26 | [
[
"Bai",
"Junwen",
""
],
[
"Wang",
"Weiran",
""
],
[
"Gomes",
"Carla",
""
]
] | Self-supervised disentangled representation learning is a critical task in sequence modeling. The learnt representations contribute to better model interpretability as well as the data generation, and improve the sample efficiency for downstream tasks. We propose a novel sequence representation learning method, named Contrastively Disentangled Sequential Variational Autoencoder (C-DSVAE), to extract and separate the static (time-invariant) and dynamic (time-variant) factors in the latent space. Different from previous sequential variational autoencoder methods, we use a novel evidence lower bound which maximizes the mutual information between the input and the latent factors, while penalizes the mutual information between the static and dynamic factors. We leverage contrastive estimations of the mutual information terms in training, together with simple yet effective augmentation techniques, to introduce additional inductive biases. Our experiments show that C-DSVAE significantly outperforms the previous state-of-the-art methods on multiple metrics. |
1707.01603 | Fahad Alsifiany | Fahad Alsifiany, Aissa Ikhlef, Jonathon Chambers | On Differential Modulation in Downlink Multiuser MIMO Systems | 5 pages, 4 figures | null | null | null | cs.IT math.IT math.NA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we consider a space time block coded multiuser multiple-input
multiple-output (MU-MIMO) system with downlink transmission. Specifically, we
propose to use downlink precoding combined with differential modulation (DM) to
shift the complexity from the receivers to the transmitter. The block
diagonalization (BD) precoding scheme is used to cancel co-channel interference
(CCI) in addition to exploiting its advantage of enhancing diversity. Since the
BD scheme requires channel knowledge at the transmitter, we propose to use
downlink spreading along with DM, which requires channel knowledge at neither
the transmitter nor the receivers. The orthogonal spreading (OS)
scheme is employed in order to separate the data streams of different users. As
a space time block code, we use the Alamouti code that can be encoded/decoded
using DM thereby eliminating the need for channel knowledge at the receiver.
The proposed schemes yield low complexity transceivers while providing good
performance. Monte Carlo simulation results demonstrate the effectiveness of
the proposed schemes.
| [
{
"created": "Thu, 6 Jul 2017 01:04:25 GMT",
"version": "v1"
}
] | 2017-07-07 | [
[
"Alsifiany",
"Fahad",
""
],
[
"Ikhlef",
"Aissa",
""
],
[
"Chambers",
"Jonathon",
""
]
] | In this paper, we consider a space time block coded multiuser multiple-input multiple-output (MU-MIMO) system with downlink transmission. Specifically, we propose to use downlink precoding combined with differential modulation (DM) to shift the complexity from the receivers to the transmitter. The block diagonalization (BD) precoding scheme is used to cancel co-channel interference (CCI) in addition to exploiting its advantage of enhancing diversity. Since the BD scheme requires channel knowledge at the transmitter, we propose to use downlink spreading along with DM, which does not require channel knowledge neither at the transmitter nor at the receivers. The orthogonal spreading (OS) scheme is employed in order to separate the data streams of different users. As a space time block code, we use the Alamouti code that can be encoded/decoded using DM thereby eliminating the need for channel knowledge at the receiver. The proposed schemes yield low complexity transceivers while providing good performance. Monte Carlo simulation results demonstrate the effectiveness of the proposed schemes. |
2401.07120 | Minrui Xu | Minrui Xu, Dusit Niyato, Jiawen Kang, Zehui Xiong, Yuan Cao, Yulan
Gao, Chao Ren, Han Yu | Generative AI-enabled Quantum Computing Networks and Intelligent
Resource Allocation | null | null | null | null | cs.NI eess.SP quant-ph | http://creativecommons.org/licenses/by/4.0/ | Quantum computing networks enable scalable collaboration and secure
information exchange among multiple classical and quantum computing nodes while
executing large-scale generative AI computation tasks and advanced quantum
algorithms. Quantum computing networks overcome limitations such as the number
of qubits and coherence time of entangled pairs and offer advantages for
generative AI infrastructure, including enhanced noise reduction through
distributed processing and improved scalability by connecting multiple quantum
devices. However, efficient resource allocation in quantum computing networks
is a critical challenge due to factors including qubit variability and network
complexity. In this article, we propose an intelligent resource allocation
framework for quantum computing networks to improve network scalability with
minimized resource costs. To achieve scalability in quantum computing networks,
we formulate the resource allocation problem as stochastic programming,
accounting for the uncertain fidelities of qubits and entangled pairs.
Furthermore, we introduce state-of-the-art reinforcement learning (RL)
algorithms, from generative learning to quantum machine learning, for optimal
quantum resource allocation to resolve the proposed stochastic resource
allocation problem efficiently. Finally, we optimize the resource allocation in
heterogeneous quantum computing networks supporting quantum generative learning
applications and propose a multi-agent RL-based algorithm to learn the optimal
resource allocation policies without prior knowledge.
| [
{
"created": "Sat, 13 Jan 2024 17:16:38 GMT",
"version": "v1"
}
] | 2024-01-17 | [
[
"Xu",
"Minrui",
""
],
[
"Niyato",
"Dusit",
""
],
[
"Kang",
"Jiawen",
""
],
[
"Xiong",
"Zehui",
""
],
[
"Cao",
"Yuan",
""
],
[
"Gao",
"Yulan",
""
],
[
"Ren",
"Chao",
""
],
[
"Yu",
"Han",
""
]
] | Quantum computing networks enable scalable collaboration and secure information exchange among multiple classical and quantum computing nodes while executing large-scale generative AI computation tasks and advanced quantum algorithms. Quantum computing networks overcome limitations such as the number of qubits and coherence time of entangled pairs and offer advantages for generative AI infrastructure, including enhanced noise reduction through distributed processing and improved scalability by connecting multiple quantum devices. However, efficient resource allocation in quantum computing networks is a critical challenge due to factors including qubit variability and network complexity. In this article, we propose an intelligent resource allocation framework for quantum computing networks to improve network scalability with minimized resource costs. To achieve scalability in quantum computing networks, we formulate the resource allocation problem as stochastic programming, accounting for the uncertain fidelities of qubits and entangled pairs. Furthermore, we introduce state-of-the-art reinforcement learning (RL) algorithms, from generative learning to quantum machine learning for optimal quantum resource allocation to resolve the proposed stochastic resource allocation problem efficiently. Finally, we optimize the resource allocation in heterogeneous quantum computing networks supporting quantum generative learning applications and propose a multi-agent RL-based algorithm to learn the optimal resource allocation policies without prior knowledge. |
1911.12884 | Graham Campbell | Graham Campbell and Detlef Plump | Efficient Recognition of Graph Languages | Project Report, Department of Computer Science, University of York,
83 pages, 2019. arXiv admin note: substantial text overlap with
arXiv:1906.05170 | null | null | null | cs.LO cs.CC cs.SC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Graph transformation is the rule-based modification of graphs, and is a
discipline dating back to the 1970s. In general, to match the left-hand graph
of a fixed rule within a host graph requires polynomial time, but to improve
matching performance, D\"orr proposed to equip rules and host graphs with
distinguished root nodes. This model was implemented by Plump and Bak, but
unfortunately, such rules are not invertible. We address this problem by
defining rootedness using a partial function into a two-point set rather than
pointing graphs with root nodes, meaning derivations are natural double
pushouts. Moreover, we give a sufficient condition on rules to achieve constant
time rule application on graphs of bounded degree, and show that the graph class
of trees can be recognised in linear time, given an input graph of bounded degree.
Finally, we define a new notion of confluence up to garbage and non-garbage
critical pairs, showing it is sufficient to require strong joinability of only
the non-garbage critical pairs to establish confluence up to garbage. This new
result, presented for conventional graph transformation systems, can
be lifted to our rooted setting by encoding node labels and rootedness as
looped edges.
| [
{
"created": "Thu, 28 Nov 2019 22:32:41 GMT",
"version": "v1"
},
{
"created": "Wed, 11 Dec 2019 20:42:37 GMT",
"version": "v2"
},
{
"created": "Fri, 1 Jan 2021 12:34:22 GMT",
"version": "v3"
}
] | 2021-01-05 | [
[
"Campbell",
"Graham",
""
],
[
"Plump",
"Detlef",
""
]
] | Graph transformation is the rule-based modification of graphs, and is a discipline dating back to the 1970s. In general, to match the left-hand graph of a fixed rule within a host graph requires polynomial time, but to improve matching performance, D\"orr proposed to equip rules and host graphs with distinguished root nodes. This model was implemented by Plump and Bak, but unfortunately, such rules are not invertible. We address this problem by defining rootedness using a partial function into a two-point set rather than pointing graphs with root nodes, meaning derivations are natural double pushouts. Moreover, we give a sufficient condition on rules to give constant time rule application on graphs of bounded degree, and that, the graph class of trees can be recognised in linear time, given an input graph of bounded degree. Finally, we define a new notion of confluence up to garbage and non-garbage critical pairs, showing it is sufficient to require strong joinability of only the non-garbage critical pairs to establish confluence up to garbage. Finally, this new result, presented for conventional graph transformation systems, can be lifted to our rooted setting by encoding node labels and rootedness as looped edges. |
2205.00165 | Zhijie Deng | Zhijie Deng, Jiaxin Shi, Jun Zhu | NeuralEF: Deconstructing Kernels by Deep Neural Networks | International Conference on Machine Learning (ICML), 2022 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning the principal eigenfunctions of an integral operator defined by a
kernel and a data distribution is at the core of many machine learning
problems. Traditional nonparametric solutions based on the Nystr{\"o}m formula
suffer from scalability issues. Recent work has resorted to a parametric
approach, i.e., training neural networks to approximate the eigenfunctions.
However, the existing method relies on an expensive orthogonalization step and
is difficult to implement. We show that these problems can be fixed by using a
new series of objective functions that generalizes the
EigenGame~\citep{gemp2020eigengame} to function space. We test our method on a
variety of supervised and unsupervised learning problems and show it provides
accurate approximations to the eigenfunctions of polynomial, radial basis,
neural network Gaussian process, and neural tangent kernels. Finally, we
demonstrate our method can scale up linearised Laplace approximation of deep
neural networks to modern image classification datasets through approximating
the Gauss-Newton matrix. Code is available at
\url{https://github.com/thudzj/neuraleigenfunction}.
| [
{
"created": "Sat, 30 Apr 2022 05:31:07 GMT",
"version": "v1"
},
{
"created": "Mon, 13 Jun 2022 03:03:16 GMT",
"version": "v2"
},
{
"created": "Fri, 17 Jun 2022 12:26:42 GMT",
"version": "v3"
},
{
"created": "Sun, 23 Oct 2022 07:23:14 GMT",
"version": "v4"
}
] | 2022-10-25 | [
[
"Deng",
"Zhijie",
""
],
[
"Shi",
"Jiaxin",
""
],
[
"Zhu",
"Jun",
""
]
] | Learning the principal eigenfunctions of an integral operator defined by a kernel and a data distribution is at the core of many machine learning problems. Traditional nonparametric solutions based on the Nystr{\"o}m formula suffer from scalability issues. Recent work has resorted to a parametric approach, i.e., training neural networks to approximate the eigenfunctions. However, the existing method relies on an expensive orthogonalization step and is difficult to implement. We show that these problems can be fixed by using a new series of objective functions that generalizes the EigenGame~\citep{gemp2020eigengame} to function space. We test our method on a variety of supervised and unsupervised learning problems and show it provides accurate approximations to the eigenfunctions of polynomial, radial basis, neural network Gaussian process, and neural tangent kernels. Finally, we demonstrate our method can scale up linearised Laplace approximation of deep neural networks to modern image classification datasets through approximating the Gauss-Newton matrix. Code is available at \url{https://github.com/thudzj/neuraleigenfunction}. |
2302.11793 | Callum Rhys Tilbury | Callum Rhys Tilbury, Filippos Christianos, Stefano V. Albrecht | Revisiting the Gumbel-Softmax in MADDPG | Presented at AAMAS Workshop on Adaptive and Learning Agents, 2023 | null | null | null | cs.LG cs.AI cs.MA stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | MADDPG is an algorithm in multi-agent reinforcement learning (MARL) that
extends the popular single-agent method, DDPG, to multi-agent scenarios.
Importantly, DDPG is an algorithm designed for continuous action spaces, where
the gradient of the state-action value function exists. For this algorithm to
work in discrete action spaces, discrete gradient estimation must be performed.
For MADDPG, the Gumbel-Softmax (GS) estimator is used -- a reparameterisation
which relaxes a discrete distribution into a similar continuous one. This
method, however, is statistically biased, and a recent MARL benchmarking paper
suggests that this bias makes MADDPG perform poorly in grid-world situations,
where the action space is discrete. Fortunately, many alternatives to the GS
exist, boasting a wide range of properties. This paper explores several of
these alternatives and integrates them into MADDPG for discrete grid-world
scenarios. The corresponding impact on various performance metrics is then
measured and analysed. It is found that one of the proposed estimators performs
significantly better than the original GS in several tasks, achieving up to 55%
higher returns, along with faster convergence.
| [
{
"created": "Thu, 23 Feb 2023 06:13:51 GMT",
"version": "v1"
},
{
"created": "Wed, 14 Jun 2023 13:43:44 GMT",
"version": "v2"
}
] | 2023-06-16 | [
[
"Tilbury",
"Callum Rhys",
""
],
[
"Christianos",
"Filippos",
""
],
[
"Albrecht",
"Stefano V.",
""
]
] | MADDPG is an algorithm in multi-agent reinforcement learning (MARL) that extends the popular single-agent method, DDPG, to multi-agent scenarios. Importantly, DDPG is an algorithm designed for continuous action spaces, where the gradient of the state-action value function exists. For this algorithm to work in discrete action spaces, discrete gradient estimation must be performed. For MADDPG, the Gumbel-Softmax (GS) estimator is used -- a reparameterisation which relaxes a discrete distribution into a similar continuous one. This method, however, is statistically biased, and a recent MARL benchmarking paper suggests that this bias makes MADDPG perform poorly in grid-world situations, where the action space is discrete. Fortunately, many alternatives to the GS exist, boasting a wide range of properties. This paper explores several of these alternatives and integrates them into MADDPG for discrete grid-world scenarios. The corresponding impact on various performance metrics is then measured and analysed. It is found that one of the proposed estimators performs significantly better than the original GS in several tasks, achieving up to 55% higher returns, along with faster convergence. |
2102.00266 | Joanna Grzyb | Joanna Grzyb, Jakub Klikowski, Micha{\l} Wo\'zniak | Hellinger Distance Weighted Ensemble for Imbalanced Data Stream
Classification | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The imbalanced data classification remains a vital problem. The key is to
find such methods that classify both the minority and majority class correctly.
The paper presents the classifier ensemble for classifying binary,
non-stationary and imbalanced data streams where the Hellinger Distance is used
to prune the ensemble. The paper includes an experimental evaluation of the
method based on the conducted experiments. The first one checks the impact of
the base classifier type on the quality of the classification. In the second
experiment, the Hellinger Distance Weighted Ensemble (HDWE) method is compared
to selected state-of-the-art methods using a statistical test with two base
classifiers. The method was thoroughly tested on many imbalanced data
streams, and the obtained results proved the HDWE method's usefulness.
| [
{
"created": "Sat, 30 Jan 2021 16:38:42 GMT",
"version": "v1"
}
] | 2021-02-02 | [
[
"Grzyb",
"Joanna",
""
],
[
"Klikowski",
"Jakub",
""
],
[
"Woźniak",
"Michał",
""
]
] | The imbalanced data classification remains a vital problem. The key is to find such methods that classify both the minority and majority class correctly. The paper presents the classifier ensemble for classifying binary, non-stationary and imbalanced data streams where the Hellinger Distance is used to prune the ensemble. The paper includes an experimental evaluation of the method based on the conducted experiments. The first one checks the impact of the base classifier type on the quality of the classification. In the second experiment, the Hellinger Distance Weighted Ensemble (HDWE) method is compared to selected state-of-the-art methods using a statistical test with two base classifiers. The method was profoundly tested based on many imbalanced data streams and obtained results proved the HDWE method's usefulness. |
2406.02222 | Ran Wei PhD | Ran Wei, Ruizhe Yang, Shijun Liu, Chongsheng Fan, Rong Zhou, Zekun Wu,
Haochi Wang, Yifan Cai, Zhe Jiang | Towards an Extensible Model-Based Digital Twin Framework for Space
Launch Vehicles | null | null | null | null | cs.SE | http://creativecommons.org/licenses/by/4.0/ | The concept of Digital Twin (DT) is increasingly applied to systems on
different levels of abstraction across domains, to support monitoring,
analysis, diagnosis, decision making and automated control. Whilst the interest
in applying DT is growing, the definition of DT is unclear, nor is there a
clear pathway to develop DT to fully realise its capacities. In this paper, we
revise the concept of DT and its categorisation. We propose a DT maturity
matrix, based on which we propose a model-based DT development methodology. We
also discuss how model-based tools can be used to support the methodology and
present our own supporting tool. We report our preliminary findings with a
discussion on a case study, in which we use our proposed methodology and our
supporting tool to develop an extensible DT platform for the assurance of
Electrical and Electronics systems of space launch vehicles.
| [
{
"created": "Tue, 4 Jun 2024 11:31:00 GMT",
"version": "v1"
}
] | 2024-06-05 | [
[
"Wei",
"Ran",
""
],
[
"Yang",
"Ruizhe",
""
],
[
"Liu",
"Shijun",
""
],
[
"Fan",
"Chongsheng",
""
],
[
"Zhou",
"Rong",
""
],
[
"Wu",
"Zekun",
""
],
[
"Wang",
"Haochi",
""
],
[
"Cai",
"Yifan",
""
],
[
"Jiang",
"Zhe",
""
]
] | The concept of Digital Twin (DT) is increasingly applied to systems on different levels of abstraction across domains, to support monitoring, analysis, diagnosis, decision making and automated control. Whilst the interest in applying DT is growing, the definition of DT is unclear, neither is there a clear pathway to develop DT to fully realise its capacities. In this paper, we revise the concept of DT and its categorisation. We propose a DT maturity matrix, based on which we propose a model-based DT development methodology. We also discuss how model-based tools can be used to support the methodology and present our own supporting tool. We report our preliminary findings with a discussion on a case study, in which we use our proposed methodology and our supporting tool to develop an extensible DT platform for the assurance of Electrical and Electronics systems of space launch vehicles. |
2203.11014 | Xi Liu | Buyun Zhang, Liang Luo, Xi Liu, Jay Li, Zeliang Chen, Weilin Zhang,
Xiaohan Wei, Yuchen Hao, Michael Tsang, Wenjun Wang, Yang Liu, Huayu Li,
Yasmine Badr, Jongsoo Park, Jiyan Yang, Dheevatsa Mudigere, Ellie Wen | DHEN: A Deep and Hierarchical Ensemble Network for Large-Scale
Click-Through Rate Prediction | null | null | null | null | cs.IR cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning feature interactions is important to the model performance of online
advertising services. As a result, extensive efforts have been devoted to
designing effective architectures to learn feature interactions. However, we
observe that the practical performance of those designs can vary from dataset
to dataset, even when the order of interactions claimed to be captured is the
same. That indicates different designs may have different advantages and the
interactions captured by them have non-overlapping information. Motivated by
this observation, we propose DHEN - a deep and hierarchical ensemble
architecture that can leverage strengths of heterogeneous interaction modules
and learn a hierarchy of the interactions under different orders. To overcome
the challenge brought by DHEN's deeper and multi-layer structure in training,
we propose a novel co-designed training system that can further improve the
training efficiency of DHEN. Experiments of DHEN on a large-scale dataset from
CTR prediction tasks attained a 0.27\% improvement on the Normalized Entropy (NE)
of prediction and 1.2x better training throughput than the state-of-the-art
baseline, demonstrating its effectiveness in practice.
| [
{
"created": "Fri, 11 Mar 2022 21:19:31 GMT",
"version": "v1"
}
] | 2022-03-22 | [
[
"Zhang",
"Buyun",
""
],
[
"Luo",
"Liang",
""
],
[
"Liu",
"Xi",
""
],
[
"Li",
"Jay",
""
],
[
"Chen",
"Zeliang",
""
],
[
"Zhang",
"Weilin",
""
],
[
"Wei",
"Xiaohan",
""
],
[
"Hao",
"Yuchen",
""
],
[
"Tsang",
"Michael",
""
],
[
"Wang",
"Wenjun",
""
],
[
"Liu",
"Yang",
""
],
[
"Li",
"Huayu",
""
],
[
"Badr",
"Yasmine",
""
],
[
"Park",
"Jongsoo",
""
],
[
"Yang",
"Jiyan",
""
],
[
"Mudigere",
"Dheevatsa",
""
],
[
"Wen",
"Ellie",
""
]
] | Learning feature interactions is important to the model performance of online advertising services. As a result, extensive efforts have been devoted to designing effective architectures to learn feature interactions. However, we observe that the practical performance of those designs can vary from dataset to dataset, even when the order of interactions claimed to be captured is the same. That indicates different designs may have different advantages and the interactions captured by them have non-overlapping information. Motivated by this observation, we propose DHEN - a deep and hierarchical ensemble architecture that can leverage strengths of heterogeneous interaction modules and learn a hierarchy of the interactions under different orders. To overcome the challenge brought by DHEN's deeper and multi-layer structure in training, we propose a novel co-designed training system that can further improve the training efficiency of DHEN. Experiments of DHEN on large-scale dataset from CTR prediction tasks attained 0.27\% improvement on the Normalized Entropy (NE) of prediction and 1.2x better training throughput than state-of-the-art baseline, demonstrating their effectiveness in practice. |
1909.03325 | James Davenport | James H. Davenport | Formal Methods and CyberSecurity | To appear in "Short Papers FROM 2019" | null | null | null | cs.CR cs.SE | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Formal methods have been largely thought of in the context of safety-critical
systems, where they have achieved major acceptance. Tens of millions of people
trust their lives every day to such systems, based on formal proofs rather than
``we haven't found a bug'' (yet!). Why is ``we haven't found a bug'' an
acceptable basis for systems trusted with hundreds of millions of people's
personal data?
This paper looks at some of the issues in CyberSecurity, and the extent to
which formal methods, ranging from ``fully verified'' to better tool support,
could help. Alas, The Royal Society (2016) recommended formal methods only in
the limited context of ``safety critical applications'': we suggest this is too
limited.
| [
{
"created": "Sat, 7 Sep 2019 19:49:19 GMT",
"version": "v1"
}
] | 2019-09-10 | [
[
"Davenport",
"James H.",
""
]
] | Formal methods have been largely thought of in the context of safety-critical systems, where they have achieved major acceptance. Tens of millions of people trust their lives every day to such systems, based on formal proofs rather than ``we haven't found a bug'' (yet!). Why is ``we haven't found a bug'' an acceptable basis for systems trusted with hundreds of millions of people's personal data? This paper looks at some of the issues in CyberSecurity, and the extent to which formal methods, ranging from ``fully verified'' to better tool support, could help. Alas The Royal Society (2016) only recommended formal methods in the limited context of ``safety critical applications'': we suggest this is too limited. |
2207.04075 | Sara Fridovich-Keil | Sara Fridovich-Keil, Brian R. Bartoldson, James Diffenderfer, Bhavya
Kailkhura, Peer-Timo Bremer | Models Out of Line: A Fourier Lens on Distribution Shift Robustness | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Improving the accuracy of deep neural networks (DNNs) on out-of-distribution
(OOD) data is critical to an acceptance of deep learning (DL) in real world
applications. It has been observed that accuracies on in-distribution (ID)
versus OOD data follow a linear trend and models that outperform this baseline
are exceptionally rare (and referred to as "effectively robust"). Recently,
some promising approaches have been developed to improve OOD robustness: model
pruning, data augmentation, and ensembling or zero-shot evaluating large
pretrained models. However, there still is no clear understanding of the
conditions on OOD data and model properties that are required to observe
effective robustness. We approach this issue by conducting a comprehensive
empirical study of diverse approaches that are known to impact OOD robustness
on a broad range of natural and synthetic distribution shifts of CIFAR-10 and
ImageNet. In particular, we view the "effective robustness puzzle" through a
Fourier lens and ask how spectral properties of both models and OOD data
influence the corresponding effective robustness. We find this Fourier lens
offers some insight into why certain robust models, particularly those from the
CLIP family, achieve OOD robustness. However, our analysis also makes clear
that no known metric is consistently the best explanation (or even a strong
explanation) of OOD robustness. Thus, to aid future research into the OOD
puzzle, we address the gap in publicly-available models with effective
robustness by introducing a set of pretrained models--RobustNets--with varying
levels of OOD robustness.
| [
{
"created": "Fri, 8 Jul 2022 18:05:58 GMT",
"version": "v1"
}
] | 2022-07-12 | [
[
"Fridovich-Keil",
"Sara",
""
],
[
"Bartoldson",
"Brian R.",
""
],
[
"Diffenderfer",
"James",
""
],
[
"Kailkhura",
"Bhavya",
""
],
[
"Bremer",
"Peer-Timo",
""
]
] | Improving the accuracy of deep neural networks (DNNs) on out-of-distribution (OOD) data is critical to an acceptance of deep learning (DL) in real world applications. It has been observed that accuracies on in-distribution (ID) versus OOD data follow a linear trend and models that outperform this baseline are exceptionally rare (and referred to as "effectively robust"). Recently, some promising approaches have been developed to improve OOD robustness: model pruning, data augmentation, and ensembling or zero-shot evaluating large pretrained models. However, there still is no clear understanding of the conditions on OOD data and model properties that are required to observe effective robustness. We approach this issue by conducting a comprehensive empirical study of diverse approaches that are known to impact OOD robustness on a broad range of natural and synthetic distribution shifts of CIFAR-10 and ImageNet. In particular, we view the "effective robustness puzzle" through a Fourier lens and ask how spectral properties of both models and OOD data influence the corresponding effective robustness. We find this Fourier lens offers some insight into why certain robust models, particularly those from the CLIP family, achieve OOD robustness. However, our analysis also makes clear that no known metric is consistently the best explanation (or even a strong explanation) of OOD robustness. Thus, to aid future research into the OOD puzzle, we address the gap in publicly-available models with effective robustness by introducing a set of pretrained models--RobustNets--with varying levels of OOD robustness. |
2403.11572 | Chia-Ming Lee | Chih-Chung Hsu and Chia-Ming Lee and Ming-Shyen Wu | Augment Before Copy-Paste: Data and Memory Efficiency-Oriented Instance
Segmentation Framework for Sport-scenes | null | null | null | null | cs.CV cs.MM | http://creativecommons.org/licenses/by/4.0/ | Instance segmentation is a fundamental task in computer vision with broad
applications across various industries. In recent years, with the proliferation
of deep learning and artificial intelligence applications, how to train
effective models with limited data has become a pressing issue for both
academia and industry. In the Visual Inductive Priors challenge (VIPriors2023),
participants must train a model capable of precisely locating individuals on a
basketball court, all while working with limited data and without the use of
transfer learning or pre-trained models. We propose Memory effIciency inStance
Segmentation framework based on visual inductive prior flow propagation that
effectively incorporates inherent prior information from the dataset into both
the data preprocessing and data augmentation stages, as well as the inference
phase. Our team (ACVLAB) experiments demonstrate that our model achieves
promising performance (0.509 AP@0.50:0.95) even under limited data and memory
constraints.
| [
{
"created": "Mon, 18 Mar 2024 08:44:40 GMT",
"version": "v1"
}
] | 2024-03-19 | [
[
"Hsu",
"Chih-Chung",
""
],
[
"Lee",
"Chia-Ming",
""
],
[
"Wu",
"Ming-Shyen",
""
]
] | Instance segmentation is a fundamental task in computer vision with broad applications across various industries. In recent years, with the proliferation of deep learning and artificial intelligence applications, how to train effective models with limited data has become a pressing issue for both academia and industry. In the Visual Inductive Priors challenge (VIPriors2023), participants must train a model capable of precisely locating individuals on a basketball court, all while working with limited data and without the use of transfer learning or pre-trained models. We propose Memory effIciency inStance Segmentation framework based on visual inductive prior flow propagation that effectively incorporates inherent prior information from the dataset into both the data preprocessing and data augmentation stages, as well as the inference phase. Our team (ACVLAB) experiments demonstrate that our model achieves promising performance (0.509 AP@0.50:0.95) even under limited data and memory constraints. |
1412.3701 | Thorsten Wissmann | Felix Klein (1) and Martin Zimmermann (1) ((1) Reactive Systems Group,
Saarland University, Germany) | How Much Lookahead is Needed to Win Infinite Games? | null | Logical Methods in Computer Science, Volume 12, Issue 3 (April 27,
2017) lmcs:2011 | 10.2168/LMCS-12(3:4)2016 | null | cs.GT cs.FL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Delay games are two-player games of infinite duration in which one player may
delay her moves to obtain a lookahead on her opponent's moves. For
$\omega$-regular winning conditions it is known that such games can be solved
in doubly-exponential time and that doubly-exponential lookahead is sufficient.
We improve upon both results by giving an exponential time algorithm and an
exponential upper bound on the necessary lookahead. This is complemented by
showing EXPTIME-hardness of the solution problem and tight exponential lower
bounds on the lookahead. Both lower bounds already hold for safety conditions.
Furthermore, solving delay games with reachability conditions is shown to be
PSPACE-complete.
This is a corrected version of the paper https://arxiv.org/abs/1412.3701v4
published originally on August 26, 2016.
| [
{
"created": "Thu, 11 Dec 2014 16:13:12 GMT",
"version": "v1"
},
{
"created": "Thu, 29 Oct 2015 10:44:14 GMT",
"version": "v2"
},
{
"created": "Thu, 19 May 2016 20:28:42 GMT",
"version": "v3"
},
{
"created": "Thu, 25 Aug 2016 12:29:45 GMT",
"version": "v4"
},
{
"created": "Tue, 25 Jul 2017 17:53:58 GMT",
"version": "v5"
}
] | 2018-03-30 | [
[
"Klein",
"Felix",
""
],
[
"Zimmermann",
"Martin",
""
]
] | Delay games are two-player games of infinite duration in which one player may delay her moves to obtain a lookahead on her opponent's moves. For $\omega$-regular winning conditions it is known that such games can be solved in doubly-exponential time and that doubly-exponential lookahead is sufficient. We improve upon both results by giving an exponential time algorithm and an exponential upper bound on the necessary lookahead. This is complemented by showing EXPTIME-hardness of the solution problem and tight exponential lower bounds on the lookahead. Both lower bounds already hold for safety conditions. Furthermore, solving delay games with reachability conditions is shown to be PSPACE-complete. This is a corrected version of the paper https://arxiv.org/abs/1412.3701v4 published originally on August 26, 2016. |
1003.5196 | Christoph Lange | Christoph Lange | SWiM -- A Semantic Wiki for Mathematical Knowledge Management | null | S. Bechhofer, M. Hauswirth, J. Hoffmann, M. Koubarakis. The
Semantic Web: Research and Applications. ESWC 2008. LNCS 5021, Springer 2008 | 10.1007/978-3-540-68234-9_68 | null | cs.DL cs.MS math.HO | http://creativecommons.org/licenses/by/3.0/ | SWiM is a semantic wiki for collaboratively building, editing and browsing
mathematical knowledge represented in the domain-specific structural semantic
markup language OMDoc. It motivates users to contribute to collections of
mathematical knowledge by instantly sharing the benefits of knowledge-powered
services with them. SWiM is currently being used for authoring content
dictionaries, i. e. collections of uniquely identified mathematical symbols,
and prepared for managing a large-scale proof formalisation effort.
| [
{
"created": "Fri, 26 Mar 2010 18:17:01 GMT",
"version": "v1"
}
] | 2010-03-29 | [
[
"Lange",
"Christoph",
""
]
] | SWiM is a semantic wiki for collaboratively building, editing and browsing mathematical knowledge represented in the domain-specific structural semantic markup language OMDoc. It motivates users to contribute to collections of mathematical knowledge by instantly sharing the benefits of knowledge-powered services with them. SWiM is currently being used for authoring content dictionaries, i. e. collections of uniquely identified mathematical symbols, and prepared for managing a large-scale proof formalisation effort. |
2302.09149 | Niklas K\"uhl | Niklas K\"uhl, Hendrik Fischer, Michael Hinze, Thomas Rung | An Incremental Singular Value Decomposition Approach for Large-Scale
Spatially Parallel & Distributed but Temporally Serial Data -- Applied to
Technical Flows | null | null | null | null | cs.MS cs.DC physics.comp-ph physics.flu-dyn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The paper presents a strategy to construct an incremental Singular Value
Decomposition (SVD) for time-evolving, spatially 3D discrete data sets. A low
memory access procedure for reducing and deploying the snapshot data is
presented. Considered examples refer to Computational Fluid Dynamic (CFD)
results extracted from unsteady flow simulations, which are computed spatially
parallel using domain decomposition strategies. The framework addresses state
of the art PDE-solvers dedicated to practical applications. Although the
approach is applied to technical flows, it is applicable in similar
applications under the umbrella of Computational Science and Engineering (CSE).
To this end, we introduce a bunch matrix that allows the aggregation of
multiple time steps and SVD updates, and significantly increases the
computational efficiency. The incremental SVD strategy is initially verified
and validated by simulating the 2D laminar single-phase flow around a circular
cylinder. Subsequent studies analyze the proposed strategy for a 2D submerged
hydrofoil located in turbulent two-phase flows. Attention is directed to the
accuracy of the SVD-based reconstruction based on local and global flow
quantities, their physical realizability, the independence of the domain
partitioning, and related implementation aspects. Moreover, the influence of
lower and (adaptive) upper construction rank thresholds on both the effort and
the accuracy are assessed. The incremental SVD process is applied to analyze
and compress the predicted flow field around a Kriso container ship in harmonic
head waves at Fn = 0.26 and ReL = 1.4E+07. With a numerical overhead of O(10%),
the snapshot matrix of size O(R10E+08 x 10E+04) computed on approximately 3000
processors can be incrementally compressed by O(95%). The storage reduction is
accompanied by errors in integral force and local wave elevation quantities of
O(1E-02%).
| [
{
"created": "Fri, 17 Feb 2023 21:19:54 GMT",
"version": "v1"
}
] | 2023-02-21 | [
[
"Kühl",
"Niklas",
""
],
[
"Fischer",
"Hendrik",
""
],
[
"Hinze",
"Michael",
""
],
[
"Rung",
"Thomas",
""
]
] | The paper presents a strategy to construct an incremental Singular Value Decomposition (SVD) for time-evolving, spatially 3D discrete data sets. A low memory access procedure for reducing and deploying the snapshot data is presented. Considered examples refer to Computational Fluid Dynamic (CFD) results extracted from unsteady flow simulations, which are computed spatially parallel using domain decomposition strategies. The framework addresses state of the art PDE-solvers dedicated to practical applications. Although the approach is applied to technical flows, it is applicable in similar applications under the umbrella of Computational Science and Engineering (CSE). To this end, we introduce a bunch matrix that allows the aggregation of multiple time steps and SVD updates, and significantly increases the computational efficiency. The incremental SVD strategy is initially verified and validated by simulating the 2D laminar single-phase flow around a circular cylinder. Subsequent studies analyze the proposed strategy for a 2D submerged hydrofoil located in turbulent two-phase flows. Attention is directed to the accuracy of the SVD-based reconstruction based on local and global flow quantities, their physical realizability, the independence of the domain partitioning, and related implementation aspects. Moreover, the influence of lower and (adaptive) upper construction rank thresholds on both the effort and the accuracy are assessed. The incremental SVD process is applied to analyze and compress the predicted flow field around a Kriso container ship in harmonic head waves at Fn = 0.26 and ReL = 1.4E+07. With a numerical overhead of O(10%), the snapshot matrix of size O(R10E+08 x 10E+04) computed on approximately 3000 processors can be incrementally compressed by O(95%). The storage reduction is accompanied by errors in integral force and local wave elevation quantities of O(1E-02%). |
2108.04230 | Songyang Zhang | Songyang Zhang and Lin Song and Songtao Liu and Zheng Ge and Zeming Li
and Xuming He and Jian Sun | Workshop on Autonomous Driving at CVPR 2021: Technical Report for
Streaming Perception Challenge | Report of the 1st Place of Streaming Perception Challenge(Workshop on
Autonomous Driving at CVPR 2021) | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this report, we introduce our real-time 2D object detection system for the
realistic autonomous driving scenario. Our detector is built on a newly
designed YOLO model, called YOLOX. On the Argoverse-HD dataset, our system
achieves 41.0 streaming AP, which surpassed second place by 7.8/6.1 on
detection-only track/fully track, respectively. Moreover, equipped with
TensorRT, our model achieves the 30FPS inference speed with a high-resolution
input size (e.g., 1440-2304). Code and models will be available at
https://github.com/Megvii-BaseDetection/YOLOX
| [
{
"created": "Tue, 27 Jul 2021 06:36:06 GMT",
"version": "v1"
}
] | 2021-08-10 | [
[
"Zhang",
"Songyang",
""
],
[
"Song",
"Lin",
""
],
[
"Liu",
"Songtao",
""
],
[
"Ge",
"Zheng",
""
],
[
"Li",
"Zeming",
""
],
[
"He",
"Xuming",
""
],
[
"Sun",
"Jian",
""
]
] | In this report, we introduce our real-time 2D object detection system for the realistic autonomous driving scenario. Our detector is built on a newly designed YOLO model, called YOLOX. On the Argoverse-HD dataset, our system achieves 41.0 streaming AP, which surpassed second place by 7.8/6.1 on detection-only track/fully track, respectively. Moreover, equipped with TensorRT, our model achieves the 30FPS inference speed with a high-resolution input size (e.g., 1440-2304). Code and models will be available at https://github.com/Megvii-BaseDetection/YOLOX |
cs/0211041 | Lyubov Vassilevskaya | A.V. Averin (NSI, Moscow), L.A. Vassilevskaya (DESY, Hamburg) | An Approach to Automatic Indexing of Scientific Publications in High
Energy Physics for Database SPIRES HEP | 23 pages, 4 figures | null | null | DESY L-02-02 (November 2002) | cs.IR cs.DL | null | We introduce an approach to automatic indexing of e-prints based on a
pattern-matching technique making extensive use of an Associative Patterns
Dictionary (APD), developed by us. Entries in the APD consist of natural
language phrases with the same semantic interpretation as a set of keywords
from a controlled vocabulary. The method also makes it possible to recognize,
within e-prints, formulae written in TeX notations that might also appear as keywords.
We present an automatic indexing system, AUTEX, which we have applied to
keyword-index e-prints in selected areas in high energy physics (HEP) making
use of the DESY-HEPI thesaurus as a controlled vocabulary.
| [
{
"created": "Thu, 28 Nov 2002 17:33:19 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Averin",
"A. V.",
"",
"NSI, Moscow"
],
[
"Vassilevskaya",
"L. A.",
"",
"DESY, Hamburg"
]
] | We introduce an approach to automatic indexing of e-prints based on a pattern-matching technique making extensive use of an Associative Patterns Dictionary (APD), developed by us. Entries in the APD consist of natural language phrases with the same semantic interpretation as a set of keywords from a controlled vocabulary. The method also makes it possible to recognize, within e-prints, formulae written in TeX notations that might also appear as keywords. We present an automatic indexing system, AUTEX, which we have applied to keyword-index e-prints in selected areas in high energy physics (HEP) making use of the DESY-HEPI thesaurus as a controlled vocabulary. |
2007.06559 | Jiaxuan You | Jiaxuan You, Jure Leskovec, Kaiming He, Saining Xie | Graph Structure of Neural Networks | ICML 2020, with open-source code | null | null | null | cs.LG cs.CV cs.SI stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neural networks are often represented as graphs of connections between
neurons. However, despite their wide use, there is currently little
understanding of the relationship between the graph structure of the neural
network and its predictive performance. Here we systematically investigate how
the graph structure of neural networks affects their predictive performance.
To this end, we develop a novel graph-based representation of
neural networks called relational graph, where layers of neural network
computation correspond to rounds of message exchange along the graph structure.
Using this representation we show that: (1) a "sweet spot" of relational graphs
leads to neural networks with significantly improved predictive performance;
(2) neural network's performance is approximately a smooth function of the
clustering coefficient and average path length of its relational graph; (3) our
findings are consistent across many different tasks and datasets; (4) the sweet
spot can be identified efficiently; (5) top-performing neural networks have
graph structure surprisingly similar to those of real biological neural
networks. Our work opens new directions for the design of neural architectures
and the understanding of neural networks in general.
| [
{
"created": "Mon, 13 Jul 2020 17:59:31 GMT",
"version": "v1"
},
{
"created": "Thu, 27 Aug 2020 17:58:07 GMT",
"version": "v2"
}
] | 2020-08-28 | [
[
"You",
"Jiaxuan",
""
],
[
"Leskovec",
"Jure",
""
],
[
"He",
"Kaiming",
""
],
[
"Xie",
"Saining",
""
]
] | Neural networks are often represented as graphs of connections between neurons. However, despite their wide use, there is currently little understanding of the relationship between the graph structure of the neural network and its predictive performance. Here we systematically investigate how the graph structure of neural networks affects their predictive performance. To this end, we develop a novel graph-based representation of neural networks called relational graph, where layers of neural network computation correspond to rounds of message exchange along the graph structure. Using this representation we show that: (1) a "sweet spot" of relational graphs leads to neural networks with significantly improved predictive performance; (2) neural network's performance is approximately a smooth function of the clustering coefficient and average path length of its relational graph; (3) our findings are consistent across many different tasks and datasets; (4) the sweet spot can be identified efficiently; (5) top-performing neural networks have graph structure surprisingly similar to those of real biological neural networks. Our work opens new directions for the design of neural architectures and the understanding of neural networks in general. |
1102.1139 | Zoltan Esik | Zoltan Esik | Residuated Park Theories | null | null | null | null | cs.LO math.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | When $L$ is a complete lattice, the collection $\Mon_L$ of all monotone
functions $L^p \to L^n$, $n,p \geq 0$, forms a Lawvere theory. We enrich this
Lawvere theory with the binary supremum operation $\vee$, an operation of
(left) residuation $\res$ and the parameterized least fixed point operation
$^\dagger$. We exhibit a system of \emph{equational} axioms which is sound and
proves all valid equations of the theories $\Mon_L$ involving only the theory
operations, $\vee$ and $^\dagger$, i.e., all valid equations not involving
residuation. We also present an alternative axiomatization, where $^\dagger$ is
replaced by a star operation, and provide an application to regular tree
languages.
| [
{
"created": "Sun, 6 Feb 2011 11:08:57 GMT",
"version": "v1"
}
] | 2015-03-18 | [
[
"Esik",
"Zoltan",
""
]
] | When $L$ is a complete lattice, the collection $\Mon_L$ of all monotone functions $L^p \to L^n$, $n,p \geq 0$, forms a Lawvere theory. We enrich this Lawvere theory with the binary supremum operation $\vee$, an operation of (left) residuation $\res$ and the parameterized least fixed point operation $^\dagger$. We exhibit a system of \emph{equational} axioms which is sound and proves all valid equations of the theories $\Mon_L$ involving only the theory operations, $\vee$ and $^\dagger$, i.e., all valid equations not involving residuation. We also present an alternative axiomatization, where $^\dagger$ is replaced by a star operation, and provide an application to regular tree languages. |
2206.06658 | Xiaoyuan Zhang | Yukun Bao, Liang Shen, Xiaoyuan Zhang, Yanmei Huang and Changrui Deng | A novel MDPSO-SVR hybrid model for feature selection in electricity
consumption forecasting | null | null | null | null | cs.NE | http://creativecommons.org/licenses/by/4.0/ | Electricity consumption forecasting has vital importance for the energy
planning of a country. Of the enabling machine learning models, support vector
regression (SVR) has been widely used to set up forecasting models due to its
superior generalization for unseen data. However, one key procedure for the
predictive modeling is feature selection, which might hurt the prediction
accuracy if improper features were selected. In this regard, a modified
discrete particle swarm optimization (MDPSO) was employed for feature selection
in this study, and then an MDPSO-SVR hybrid model was built to predict future
electricity consumption. Compared with other well-established counterparts,
the MDPSO-SVR model consistently performs best in two real-world electricity
consumption datasets, which indicates that MDPSO for feature selection can
improve the prediction accuracy and that the SVR equipped with the MDPSO can be
a promising alternative for electricity consumption forecasting.
| [
{
"created": "Tue, 14 Jun 2022 07:50:04 GMT",
"version": "v1"
},
{
"created": "Thu, 15 Sep 2022 07:35:16 GMT",
"version": "v2"
}
] | 2022-09-16 | [
[
"Bao",
"Yukun",
""
],
[
"Shen",
"Liang",
""
],
[
"Zhang",
"Xiaoyuan",
""
],
[
"Huang",
"Yanmei",
""
],
[
"Deng",
"Changrui",
""
]
] | Electricity consumption forecasting has vital importance for the energy planning of a country. Of the enabling machine learning models, support vector regression (SVR) has been widely used to set up forecasting models due to its superior generalization for unseen data. However, one key procedure for the predictive modeling is feature selection, which might hurt the prediction accuracy if improper features were selected. In this regard, a modified discrete particle swarm optimization (MDPSO) was employed for feature selection in this study, and then an MDPSO-SVR hybrid model was built to predict future electricity consumption. Compared with other well-established counterparts, the MDPSO-SVR model consistently performs best in two real-world electricity consumption datasets, which indicates that MDPSO for feature selection can improve the prediction accuracy and that the SVR equipped with the MDPSO can be a promising alternative for electricity consumption forecasting. |
2212.06921 | Dylan Sam | Dylan Sam, J. Zico Kolter | Losses over Labels: Weakly Supervised Learning via Direct Loss
Construction | 13 pages, 3 figures, AAAI 2023 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Owing to the prohibitive costs of generating large amounts of labeled data,
programmatic weak supervision is a growing paradigm within machine learning. In
this setting, users design heuristics that provide noisy labels for subsets of
the data. These weak labels are combined (typically via a graphical model) to
form pseudolabels, which are then used to train a downstream model. In this
work, we question a foundational premise of the typical weakly supervised
learning pipeline: given that the heuristic provides all ``label" information,
why do we need to generate pseudolabels at all? Instead, we propose to directly
transform the heuristics themselves into corresponding loss functions that
penalize differences between our model and the heuristic. By constructing
losses directly from the heuristics, we can incorporate more information than
is used in the standard weakly supervised pipeline, such as how the heuristics
make their decisions, which explicitly informs feature selection during
training. We call our method Losses over Labels (LoL) as it creates losses
directly from heuristics without going through the intermediate step of a
label. We show that LoL improves upon existing weak supervision methods on
several benchmark text and image classification tasks and further demonstrate
that incorporating gradient information leads to better performance on almost
every task.
| [
{
"created": "Tue, 13 Dec 2022 22:29:14 GMT",
"version": "v1"
},
{
"created": "Wed, 4 Oct 2023 23:32:44 GMT",
"version": "v2"
}
] | 2023-10-06 | [
[
"Sam",
"Dylan",
""
],
[
"Kolter",
"J. Zico",
""
]
] | Owing to the prohibitive costs of generating large amounts of labeled data, programmatic weak supervision is a growing paradigm within machine learning. In this setting, users design heuristics that provide noisy labels for subsets of the data. These weak labels are combined (typically via a graphical model) to form pseudolabels, which are then used to train a downstream model. In this work, we question a foundational premise of the typical weakly supervised learning pipeline: given that the heuristic provides all ``label" information, why do we need to generate pseudolabels at all? Instead, we propose to directly transform the heuristics themselves into corresponding loss functions that penalize differences between our model and the heuristic. By constructing losses directly from the heuristics, we can incorporate more information than is used in the standard weakly supervised pipeline, such as how the heuristics make their decisions, which explicitly informs feature selection during training. We call our method Losses over Labels (LoL) as it creates losses directly from heuristics without going through the intermediate step of a label. We show that LoL improves upon existing weak supervision methods on several benchmark text and image classification tasks and further demonstrate that incorporating gradient information leads to better performance on almost every task. |
1701.03360 | Jaeyoung Kim | Jaeyoung Kim, Mostafa El-Khamy, and Jungwon Lee | Residual LSTM: Design of a Deep Recurrent Architecture for Distant
Speech Recognition | null | null | null | null | cs.LG cs.AI cs.SD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, a novel architecture for a deep recurrent neural network,
residual LSTM is introduced. A plain LSTM has an internal memory cell that can
learn long term dependencies of sequential data. It also provides a temporal
shortcut path to avoid vanishing or exploding gradients in the temporal domain.
The residual LSTM provides an additional spatial shortcut path from lower
layers for efficient training of deep networks with multiple LSTM layers.
Compared with the previous work, highway LSTM, residual LSTM separates a
spatial shortcut path with temporal one by using output layers, which can help
to avoid a conflict between spatial and temporal-domain gradient flows.
Furthermore, residual LSTM reuses the output projection matrix and the output
gate of LSTM to control the spatial information flow instead of additional gate
networks, which effectively reduces more than 10% of network parameters. An
experiment for distant speech recognition on the AMI SDM corpus shows that
10-layer plain and highway LSTM networks presented 13.7% and 6.2% increase in
WER over 3-layer baselines, respectively. On the contrary, 10-layer residual
LSTM networks provided the lowest WER 41.0%, which corresponds to 3.3% and 2.8%
WER reduction over plain and highway LSTM networks, respectively.
| [
{
"created": "Tue, 10 Jan 2017 20:03:37 GMT",
"version": "v1"
},
{
"created": "Wed, 15 Mar 2017 00:23:45 GMT",
"version": "v2"
},
{
"created": "Mon, 5 Jun 2017 18:51:08 GMT",
"version": "v3"
}
] | 2017-06-07 | [
[
"Kim",
"Jaeyoung",
""
],
[
"El-Khamy",
"Mostafa",
""
],
[
"Lee",
"Jungwon",
""
]
] | In this paper, a novel architecture for a deep recurrent neural network, residual LSTM is introduced. A plain LSTM has an internal memory cell that can learn long term dependencies of sequential data. It also provides a temporal shortcut path to avoid vanishing or exploding gradients in the temporal domain. The residual LSTM provides an additional spatial shortcut path from lower layers for efficient training of deep networks with multiple LSTM layers. Compared with the previous work, highway LSTM, residual LSTM separates a spatial shortcut path with temporal one by using output layers, which can help to avoid a conflict between spatial and temporal-domain gradient flows. Furthermore, residual LSTM reuses the output projection matrix and the output gate of LSTM to control the spatial information flow instead of additional gate networks, which effectively reduces more than 10% of network parameters. An experiment for distant speech recognition on the AMI SDM corpus shows that 10-layer plain and highway LSTM networks presented 13.7% and 6.2% increase in WER over 3-layer baselines, respectively. On the contrary, 10-layer residual LSTM networks provided the lowest WER 41.0%, which corresponds to 3.3% and 2.8% WER reduction over plain and highway LSTM networks, respectively. |
2307.15164 | Vivek Kumar Dr. | Vivek Kumar, Sushmita Singh and Prayag Tiwari | VISU at WASSA 2023 Shared Task: Detecting Emotions in Reaction to News
Stories Leveraging BERT and Stacked Embeddings | null | null | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Our system, VISU, participated in the WASSA 2023 Shared Task (3) of Emotion
Classification from essays written in reaction to news articles. Emotion
detection from complex dialogues is challenging and often requires
context/domain understanding. Therefore in this research, we have focused on
developing deep learning (DL) models using the combination of word embedding
representations with tailored preprocessing strategies to capture the nuances
of emotions expressed. Our experiments used static and contextual embeddings
(individual and stacked) with Bidirectional Long short-term memory (BiLSTM) and
Transformer based models. We occupied rank tenth in the emotion detection task
by scoring a Macro F1-Score of 0.2717, validating the efficacy of our
implemented approaches for small and imbalanced datasets with mixed categories
of target emotions.
| [
{
"created": "Thu, 27 Jul 2023 19:42:22 GMT",
"version": "v1"
}
] | 2023-07-31 | [
[
"Kumar",
"Vivek",
""
],
[
"Singh",
"Sushmita",
""
],
[
"Tiwari",
"Prayag",
""
]
] | Our system, VISU, participated in the WASSA 2023 Shared Task (3) of Emotion Classification from essays written in reaction to news articles. Emotion detection from complex dialogues is challenging and often requires context/domain understanding. Therefore in this research, we have focused on developing deep learning (DL) models using the combination of word embedding representations with tailored preprocessing strategies to capture the nuances of emotions expressed. Our experiments used static and contextual embeddings (individual and stacked) with Bidirectional Long short-term memory (BiLSTM) and Transformer based models. We occupied rank tenth in the emotion detection task by scoring a Macro F1-Score of 0.2717, validating the efficacy of our implemented approaches for small and imbalanced datasets with mixed categories of target emotions. |
2305.08183 | Wei Yuan | Wei Yuan, Shilong Yuan, Chaoqun Yang, Quoc Viet Hung Nguyen, Hongzhi
Yin | Manipulating Visually-aware Federated Recommender Systems and Its
Countermeasures | null | null | null | null | cs.IR | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Federated recommender systems (FedRecs) have been widely explored recently
due to their ability to protect user data privacy. In FedRecs, a central server
collaboratively learns recommendation models by sharing model public parameters
with clients, thereby offering a privacy-preserving solution. Unfortunately,
the exposure of model parameters leaves a backdoor for adversaries to
manipulate FedRecs. Existing works about FedRec security already reveal that
items can easily be promoted by malicious users via model poisoning attacks,
but all of them mainly focus on FedRecs with only collaborative information
(i.e., user-item interactions). We argue that these attacks are effective
because of the data sparsity of collaborative signals. In practice, auxiliary
information, such as products' visual descriptions, is used to alleviate
collaborative filtering data's sparsity. Therefore, when incorporating visual
information in FedRecs, all existing model poisoning attacks' effectiveness
becomes questionable. In this paper, we conduct extensive experiments to verify
that incorporating visual information can beat existing state-of-the-art
attacks in reasonable settings. However, since visual information is usually
provided by external sources, simply including it will create new security
problems. Specifically, we propose a new kind of poisoning attack for
visually-aware FedRecs, namely image poisoning attacks, where adversaries can
gradually modify the uploaded image to manipulate item ranks during FedRecs'
training process. Furthermore, we reveal that the potential collaboration
between image poisoning attacks and model poisoning attacks will make
visually-aware FedRecs more vulnerable to being manipulated. To safely use
visual information, we employ a diffusion model in visually-aware FedRecs to
purify each uploaded image and detect the adversarial images.
| [
{
"created": "Sun, 14 May 2023 15:22:52 GMT",
"version": "v1"
},
{
"created": "Tue, 16 May 2023 22:26:01 GMT",
"version": "v2"
}
] | 2023-05-18 | [
[
"Yuan",
"Wei",
""
],
[
"Yuan",
"Shilong",
""
],
[
"Yang",
"Chaoqun",
""
],
[
"Nguyen",
"Quoc Viet Hung",
""
],
[
"Yin",
"Hongzhi",
""
]
] | Federated recommender systems (FedRecs) have been widely explored recently due to their ability to protect user data privacy. In FedRecs, a central server collaboratively learns recommendation models by sharing model public parameters with clients, thereby offering a privacy-preserving solution. Unfortunately, the exposure of model parameters leaves a backdoor for adversaries to manipulate FedRecs. Existing works about FedRec security already reveal that items can easily be promoted by malicious users via model poisoning attacks, but all of them mainly focus on FedRecs with only collaborative information (i.e., user-item interactions). We argue that these attacks are effective because of the data sparsity of collaborative signals. In practice, auxiliary information, such as products' visual descriptions, is used to alleviate collaborative filtering data's sparsity. Therefore, when incorporating visual information in FedRecs, all existing model poisoning attacks' effectiveness becomes questionable. In this paper, we conduct extensive experiments to verify that incorporating visual information can beat existing state-of-the-art attacks in reasonable settings. However, since visual information is usually provided by external sources, simply including it will create new security problems. Specifically, we propose a new kind of poisoning attack for visually-aware FedRecs, namely image poisoning attacks, where adversaries can gradually modify the uploaded image to manipulate item ranks during FedRecs' training process. Furthermore, we reveal that the potential collaboration between image poisoning attacks and model poisoning attacks will make visually-aware FedRecs more vulnerable to being manipulated. To safely use visual information, we employ a diffusion model in visually-aware FedRecs to purify each uploaded image and detect the adversarial images. |
1906.04526 | Lukas Lindenroth | Lukas Lindenroth, Richard James Housden, Shuangyi Wang, Junghwan Back,
Kawal Rhode and Hongbin Liu | Design and integration of a parallel, soft robotic end-effector for
extracorporeal ultrasound | null | IEEE Transactions on Biomedical Engineering, vol. 67, no. 8, pp.
2215-2229, Aug. 2020 | 10.1109/TBME.2019.2957609 | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Objective: In this work we address limitations in state-of-the-art ultrasound
robots by designing and integrating a novel soft robotic system for ultrasound
imaging. It employs the inherent qualities of soft fluidic actuators to
establish safe, adaptable interaction between ultrasound probe and patient.
Methods: We acquire clinical data to determine the movement ranges and force
levels required in prenatal foetal ultrasound imaging and design the soft
robotic end-effector accordingly. We verify its mechanical characteristics,
derive and validate a kinetostatic model and demonstrate controllability and
imaging capabilities on an ultrasound phantom. Results: The soft robot exhibits
the desired stiffness characteristics and is able to reach 100% of the required
workspace when no external force is present, and 95% of the workspace when
considering its compliance. The model can accurately predict the end-effector
pose with a mean error of 1.18+/-0.29mm in position and 0.92+/-0.47deg in
orientation. The derived controller is, with an average position error of
0.39mm, able to track a target pose efficiently without and with externally
applied loads. Ultrasound images acquired with the system are of equally good
quality compared to a manual sonographer scan. Conclusion: The system is able
to withstand loads commonly applied during foetal ultrasound scans and remains
controllable with a motion range similar to manual scanning. Significance: The
proposed soft robot presents a safe, cost-effective solution to offloading
sonographers in day-to-day scanning routines. The design and modelling
paradigms are greatly generalizable and particularly suitable for designing
soft robots for physical interaction tasks.
| [
{
"created": "Tue, 11 Jun 2019 12:26:53 GMT",
"version": "v1"
},
{
"created": "Mon, 20 Jul 2020 22:31:59 GMT",
"version": "v2"
},
{
"created": "Wed, 22 Jul 2020 20:32:48 GMT",
"version": "v3"
}
] | 2020-07-24 | [
[
"Lindenroth",
"Lukas",
""
],
[
"Housden",
"Richard James",
""
],
[
"Wang",
"Shuangyi",
""
],
[
"Back",
"Junghwan",
""
],
[
"Rhode",
"Kawal",
""
],
[
"Liu",
"Hongbin",
""
]
] | Objective: In this work we address limitations in state-of-the-art ultrasound robots by designing and integrating a novel soft robotic system for ultrasound imaging. It employs the inherent qualities of soft fluidic actuators to establish safe, adaptable interaction between ultrasound probe and patient. Methods: We acquire clinical data to determine the movement ranges and force levels required in prenatal foetal ultrasound imaging and design the soft robotic end-effector accordingly. We verify its mechanical characteristics, derive and validate a kinetostatic model and demonstrate controllability and imaging capabilities on an ultrasound phantom. Results: The soft robot exhibits the desired stiffness characteristics and is able to reach 100% of the required workspace when no external force is present, and 95% of the workspace when considering its compliance. The model can accurately predict the end-effector pose with a mean error of 1.18+/-0.29mm in position and 0.92+/-0.47deg in orientation. The derived controller is, with an average position error of 0.39mm, able to track a target pose efficiently without and with externally applied loads. Ultrasound images acquired with the system are of equally good quality compared to a manual sonographer scan. Conclusion: The system is able to withstand loads commonly applied during foetal ultrasound scans and remains controllable with a motion range similar to manual scanning. Significance: The proposed soft robot presents a safe, cost-effective solution to offloading sonographers in day-to-day scanning routines. The design and modelling paradigms are greatly generalizable and particularly suitable for designing soft robots for physical interaction tasks. |
2106.02968 | Rafid Mahmood | Rafid Mahmood, Sanja Fidler, Marc T. Law | Low Budget Active Learning via Wasserstein Distance: An Integer
Programming Approach | null | null | null | null | cs.LG math.OC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Active learning is the process of training a model with limited labeled data
by selecting a core subset of an unlabeled data pool to label. The large scale
of data sets used in deep learning forces most sample selection strategies to
employ efficient heuristics. This paper introduces an integer optimization
problem for selecting a core set that minimizes the discrete Wasserstein
distance from the unlabeled pool. We demonstrate that this problem can be
tractably solved with a Generalized Benders Decomposition algorithm. Our
strategy uses high-quality latent features that can be obtained by unsupervised
learning on the unlabeled pool. Numerical results on several data sets show
that our optimization approach is competitive with baselines and particularly
outperforms them in the low budget regime where less than one percent of the
data set is labeled.
| [
{
"created": "Sat, 5 Jun 2021 21:25:03 GMT",
"version": "v1"
},
{
"created": "Sat, 12 Jun 2021 23:04:04 GMT",
"version": "v2"
},
{
"created": "Sat, 5 Mar 2022 20:43:26 GMT",
"version": "v3"
},
{
"created": "Tue, 7 Mar 2023 00:09:11 GMT",
"version": "v4"
}
] | 2023-03-08 | [
[
"Mahmood",
"Rafid",
""
],
[
"Fidler",
"Sanja",
""
],
[
"Law",
"Marc T.",
""
]
] | Active learning is the process of training a model with limited labeled data by selecting a core subset of an unlabeled data pool to label. The large scale of data sets used in deep learning forces most sample selection strategies to employ efficient heuristics. This paper introduces an integer optimization problem for selecting a core set that minimizes the discrete Wasserstein distance from the unlabeled pool. We demonstrate that this problem can be tractably solved with a Generalized Benders Decomposition algorithm. Our strategy uses high-quality latent features that can be obtained by unsupervised learning on the unlabeled pool. Numerical results on several data sets show that our optimization approach is competitive with baselines and particularly outperforms them in the low budget regime where less than one percent of the data set is labeled. |
2212.07206 | Cem Suulker | Cem Suulker, Sophie Skach, Kaspar Althoefer | A Fabric Soft Robotic Exoskeleton with Novel Elastic Band Integrated
Actuators for Hand Rehabilitation | 2 pages, 4 figures, conference | Conference on New Technologies for Computer and Robot Assisted
Surgery (CRAS 2022) | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | Common disabilities like stroke and spinal cord injuries may cause loss of
motor function in hands. They can be treated with robot-assisted rehabilitation
techniques, like continuously opening and closing the hand with the help of a
robot, in a cheaper and less time-consuming manner than traditional methods.
Hand exoskeletons are developed to assist rehabilitation, but their bulky
nature brings with it certain challenges. As soft robots use elastomeric and
fabric elements rather than heavy links, and operate with pneumatic, hydraulic,
or tendon-based actuation rather than traditional rotary or linear motors, soft hand
exoskeletons are deemed a better option in relation to rehabilitation.
| [
{
"created": "Wed, 14 Dec 2022 13:08:19 GMT",
"version": "v1"
}
] | 2022-12-15 | [
[
"Suulker",
"Cem",
""
],
[
"Skach",
"Sophie",
""
],
[
"Althoefer",
"Kaspar",
""
]
] | Common disabilities like stroke and spinal cord injuries may cause loss of motor function in hands. They can be treated with robot-assisted rehabilitation techniques, like continuously opening and closing the hand with the help of a robot, in a cheaper and less time-consuming manner than traditional methods. Hand exoskeletons are developed to assist rehabilitation, but their bulky nature brings with it certain challenges. As soft robots use elastomeric and fabric elements rather than heavy links, and operate with pneumatic, hydraulic, or tendon-based actuation rather than traditional rotary or linear motors, soft hand exoskeletons are deemed a better option in relation to rehabilitation. |
2407.14249 | Martin Menabue | Martin Menabue, Emanuele Frascaroli, Matteo Boschini, Lorenzo
Bonicelli, Angelo Porrello, Simone Calderara | An Attention-based Representation Distillation Baseline for Multi-Label
Continual Learning | Accepted at LOD 2024 | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The field of Continual Learning (CL) has inspired numerous researchers over
the years, leading to increasingly advanced countermeasures to the issue of
catastrophic forgetting. Most studies have focused on the single-class
scenario, where each example comes with a single label. The recent literature
has successfully tackled such a setting, with impressive results. Differently,
we shift our attention to the multi-label scenario, as we feel it to be more
representative of real-world open problems. In our work, we show that existing
state-of-the-art CL methods fail to achieve satisfactory performance, thus
questioning the real advance claimed in recent years. Therefore, we assess both
old-style and novel strategies and propose, on top of them, an approach called
Selective Class Attention Distillation (SCAD). It relies on a knowledge
transfer technique that seeks to align the representations of the student
network -- which trains continuously and is subject to forgetting -- with those
of the teacher, which is pretrained and kept frozen. Importantly, our method is
able to selectively transfer the relevant information from the teacher to the
student, thereby preventing irrelevant information from harming the student's
performance during online training. To demonstrate the merits of our approach,
we conduct experiments on two different multi-label datasets, showing that our
method outperforms the current state-of-the-art Continual Learning methods. Our
findings highlight the importance of addressing the unique challenges posed by
multi-label environments in the field of Continual Learning. The code of SCAD
is available at https://github.com/aimagelab/SCAD-LOD-2024.
| [
{
"created": "Fri, 19 Jul 2024 12:30:03 GMT",
"version": "v1"
}
] | 2024-07-22 | [
[
"Menabue",
"Martin",
""
],
[
"Frascaroli",
"Emanuele",
""
],
[
"Boschini",
"Matteo",
""
],
[
"Bonicelli",
"Lorenzo",
""
],
[
"Porrello",
"Angelo",
""
],
[
"Calderara",
"Simone",
""
]
] | The field of Continual Learning (CL) has inspired numerous researchers over the years, leading to increasingly advanced countermeasures to the issue of catastrophic forgetting. Most studies have focused on the single-class scenario, where each example comes with a single label. The recent literature has successfully tackled such a setting, with impressive results. Differently, we shift our attention to the multi-label scenario, as we feel it to be more representative of real-world open problems. In our work, we show that existing state-of-the-art CL methods fail to achieve satisfactory performance, thus questioning the real advance claimed in recent years. Therefore, we assess both old-style and novel strategies and propose, on top of them, an approach called Selective Class Attention Distillation (SCAD). It relies on a knowledge transfer technique that seeks to align the representations of the student network -- which trains continuously and is subject to forgetting -- with those of the teacher, which is pretrained and kept frozen. Importantly, our method is able to selectively transfer the relevant information from the teacher to the student, thereby preventing irrelevant information from harming the student's performance during online training. To demonstrate the merits of our approach, we conduct experiments on two different multi-label datasets, showing that our method outperforms the current state-of-the-art Continual Learning methods. Our findings highlight the importance of addressing the unique challenges posed by multi-label environments in the field of Continual Learning. The code of SCAD is available at https://github.com/aimagelab/SCAD-LOD-2024. |
2207.07522 | Wencan Cheng | Wencan Cheng and Jong Hwan Ko | Bi-PointFlowNet: Bidirectional Learning for Point Cloud Based Scene Flow
Estimation | Accepted as a conference paper at European Conference on Computer
Vision (ECCV) 2022 | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Scene flow estimation, which extracts point-wise motion between scenes, is
becoming a crucial task in many computer vision applications. However, all
existing estimation methods utilize only unidirectional features,
restricting the accuracy and generality. This paper presents a novel scene flow
estimation architecture using bidirectional flow embedding layers. The proposed
bidirectional layer learns features along both forward and backward directions,
enhancing the estimation performance. In addition, hierarchical feature
extraction and warping improve the performance and reduce computational
overhead. Experimental results show that the proposed architecture achieved a
new state-of-the-art record by outperforming other approaches by a large margin
on both the FlyingThings3D and KITTI benchmarks. Code is available at
https://github.com/cwc1260/BiFlow.
| [
{
"created": "Fri, 15 Jul 2022 15:14:53 GMT",
"version": "v1"
}
] | 2022-07-18 | [
[
"Cheng",
"Wencan",
""
],
[
"Ko",
"Jong Hwan",
""
]
] | Scene flow estimation, which extracts point-wise motion between scenes, is becoming a crucial task in many computer vision applications. However, all existing estimation methods utilize only unidirectional features, restricting the accuracy and generality. This paper presents a novel scene flow estimation architecture using bidirectional flow embedding layers. The proposed bidirectional layer learns features along both forward and backward directions, enhancing the estimation performance. In addition, hierarchical feature extraction and warping improve the performance and reduce computational overhead. Experimental results show that the proposed architecture achieved a new state-of-the-art record by outperforming other approaches by a large margin on both the FlyingThings3D and KITTI benchmarks. Code is available at https://github.com/cwc1260/BiFlow. |
2201.08368 | Lloyd Montgomery | Lloyd Montgomery, Clara L\"uders, Walid Maalej | An Alternative Issue Tracking Dataset of Public Jira Repositories | 5 pages | null | 10.1145/3524842.3528486 | null | cs.SE | http://creativecommons.org/licenses/by/4.0/ | Organisations use issue tracking systems (ITSs) to track and document their
projects' work in units called issues. This style of documentation encourages
evolutionary refinement, as each issue can be independently improved, commented
on, linked to other issues, and progressed through the organisational workflow.
Commonly studied ITSs so far include GitHub, GitLab, and Bugzilla, while Jira,
one of the most popular ITSs in practice with a wealth of additional
information, has yet to receive similar attention. Unfortunately, diverse
public Jira datasets are rare, likely due to the difficulty in finding and
accessing these repositories. With this paper, we release a dataset of 16
public Jiras with 1822 projects, spanning 2.7 million issues with a combined
total of 32 million changes, 9 million comments, and 1 million issue links. We
believe this Jira dataset will lead to many fruitful research projects
investigating issue evolution, issue linking, cross-project analysis, as well
as cross-tool analysis when combined with existing well-studied ITS datasets.
| [
{
"created": "Thu, 20 Jan 2022 18:52:36 GMT",
"version": "v1"
},
{
"created": "Mon, 31 Jan 2022 16:09:20 GMT",
"version": "v2"
},
{
"created": "Fri, 25 Mar 2022 16:17:18 GMT",
"version": "v3"
}
] | 2022-03-28 | [
[
"Montgomery",
"Lloyd",
""
],
[
"Lüders",
"Clara",
""
],
[
"Maalej",
"Walid",
""
]
] | Organisations use issue tracking systems (ITSs) to track and document their projects' work in units called issues. This style of documentation encourages evolutionary refinement, as each issue can be independently improved, commented on, linked to other issues, and progressed through the organisational workflow. Commonly studied ITSs so far include GitHub, GitLab, and Bugzilla, while Jira, one of the most popular ITSs in practice with a wealth of additional information, has yet to receive similar attention. Unfortunately, diverse public Jira datasets are rare, likely due to the difficulty in finding and accessing these repositories. With this paper, we release a dataset of 16 public Jiras with 1822 projects, spanning 2.7 million issues with a combined total of 32 million changes, 9 million comments, and 1 million issue links. We believe this Jira dataset will lead to many fruitful research projects investigating issue evolution, issue linking, cross-project analysis, as well as cross-tool analysis when combined with existing well-studied ITS datasets. |
2003.07907 | Ozan K. Tonguz | Keith Shannon, Elias Towe, and Ozan K. Tonguz | On the Use of Quantum Entanglement in Secure Communications: A Survey | null | null | null | null | cs.CR quant-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Quantum computing and quantum communications are exciting new frontiers in
computing and communications. Indeed, the massive investments made by the
governments of the US, China, and EU in these new technologies are not a secret
and are based on the expected potential of these technologies to revolutionize
communications, computing, and security. In addition to several field trials
and hero experiments, a number of companies such as Google and IBM are actively
working in these areas and some have already reported impressive demonstrations
in the past few years. While there is some skepticism about whether quantum
cryptography will eventually replace classical cryptography, the advent of
quantum computing could necessitate the use of quantum cryptography as the
ultimate frontier of secure communications. This is because, with the amazing
speeds demonstrated with quantum computers, breaking cryptographic keys might
no longer be a daunting task in the next decade or so. Hence, quantum
cryptography as the ultimate frontier in secure communications might not be
such a far-fetched idea. It is well known that Heisenberg's Uncertainty
Principle is essentially a "negative result" in Physics and Quantum Mechanics.
It turns out that Heisenberg's Uncertainty Principle, one of the most
interesting results in Quantum Mechanics, could be the theoretical basis and
the main scientific principle behind the ultimate frontier in quantum
cryptography or secure communications in conjunction with Quantum Entanglement.
| [
{
"created": "Tue, 17 Mar 2020 19:32:40 GMT",
"version": "v1"
}
] | 2020-03-20 | [
[
"Shannon",
"Keith",
""
],
[
"Towe",
"Elias",
""
],
[
"Tonguz",
"Ozan K.",
""
]
] | Quantum computing and quantum communications are exciting new frontiers in computing and communications. Indeed, the massive investments made by the governments of the US, China, and EU in these new technologies are not a secret and are based on the expected potential of these technologies to revolutionize communications, computing, and security. In addition to several field trials and hero experiments, a number of companies such as Google and IBM are actively working in these areas and some have already reported impressive demonstrations in the past few years. While there is some skepticism about whether quantum cryptography will eventually replace classical cryptography, the advent of quantum computing could necessitate the use of quantum cryptography as the ultimate frontier of secure communications. This is because, with the amazing speeds demonstrated with quantum computers, breaking cryptographic keys might no longer be a daunting task in the next decade or so. Hence, quantum cryptography as the ultimate frontier in secure communications might not be such a far-fetched idea. It is well known that Heisenberg's Uncertainty Principle is essentially a "negative result" in Physics and Quantum Mechanics. It turns out that Heisenberg's Uncertainty Principle, one of the most interesting results in Quantum Mechanics, could be the theoretical basis and the main scientific principle behind the ultimate frontier in quantum cryptography or secure communications in conjunction with Quantum Entanglement. |
2011.08366 | Hiroto Yasumi | Hiroto Yasumi, Fukuhito Ooshita, Michiko Inoue, S\'ebastien Tixeuil | Uniform Bipartition in the Population Protocol Model with Arbitrary
Communication Graphs | null | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we focus on the uniform bipartition problem in the population
protocol model. This problem aims to divide a population into two groups of
equal size. In particular, we consider the problem in the context of
\emph{arbitrary} communication graphs. As a result, we clarify the solvability
of the uniform bipartition problem with arbitrary communication graphs when
agents in the population have designated initial states, under various
assumptions such as the existence of a base station, symmetry of the protocol,
and fairness of the execution. When the problem is solvable, we present
protocols for uniform bipartition. When global fairness is assumed, the space
complexity of our solutions is tight.
| [
{
"created": "Tue, 17 Nov 2020 02:06:21 GMT",
"version": "v1"
},
{
"created": "Wed, 18 Nov 2020 02:18:22 GMT",
"version": "v2"
}
] | 2020-11-19 | [
[
"Yasumi",
"Hiroto",
""
],
[
"Ooshita",
"Fukuhito",
""
],
[
"Inoue",
"Michiko",
""
],
[
"Tixeuil",
"Sébastien",
""
]
] | In this paper, we focus on the uniform bipartition problem in the population protocol model. This problem aims to divide a population into two groups of equal size. In particular, we consider the problem in the context of \emph{arbitrary} communication graphs. As a result, we clarify the solvability of the uniform bipartition problem with arbitrary communication graphs when agents in the population have designated initial states, under various assumptions such as the existence of a base station, symmetry of the protocol, and fairness of the execution. When the problem is solvable, we present protocols for uniform bipartition. When global fairness is assumed, the space complexity of our solutions is tight. |
2310.17752 | Ligeng Zhu | Ligeng Zhu, Lanxiang Hu, Ji Lin, Wei-Chen Wang, Wei-Ming Chen, Chuang
Gan, Song Han | PockEngine: Sparse and Efficient Fine-tuning in a Pocket | null | 56th IEEE/ACM International Symposium on Microarchitecture (MICRO
2023) | 10.1145/3613424.3614307 | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | On-device learning and efficient fine-tuning enable continuous and
privacy-preserving customization (e.g., locally fine-tuning large language
models on personalized data). However, existing training frameworks are
designed for cloud servers with powerful accelerators (e.g., GPUs, TPUs) and
lack the optimizations for learning on the edge, which faces challenges of
resource limitations and edge hardware diversity. We introduce PockEngine: a
tiny, sparse and efficient engine to enable fine-tuning on various edge
devices. PockEngine supports sparse backpropagation: it prunes the backward
graph and sparsely updates the model with measured memory saving and latency
reduction while maintaining the model quality. Secondly, PockEngine is
compilation first: the entire training graph (including forward, backward and
optimization steps) is derived at compile-time, which reduces the runtime
overhead and brings opportunities for graph transformations. PockEngine also
integrates a rich set of training graph optimizations, thus can further
reduce the training cost, including operator reordering and backend
switching. PockEngine supports diverse applications, frontends and hardware
backends: it flexibly compiles and tunes models defined in
PyTorch/TensorFlow/Jax and deploys binaries to mobile CPU/GPU/DSPs. We
evaluated PockEngine on both vision models and large language models.
PockEngine achieves up to 15 $\times$ speedup over off-the-shelf TensorFlow
(Raspberry Pi) and 5.6 $\times$ memory saving in back-propagation (Jetson AGX
Orin). Remarkably, PockEngine enables fine-tuning LLaMav2-7B on NVIDIA Jetson
AGX Orin at 550 tokens/s, 7.9 $\times$ faster than PyTorch.
| [
{
"created": "Thu, 26 Oct 2023 19:46:11 GMT",
"version": "v1"
}
] | 2023-10-30 | [
[
"Zhu",
"Ligeng",
""
],
[
"Hu",
"Lanxiang",
""
],
[
"Lin",
"Ji",
""
],
[
"Wang",
"Wei-Chen",
""
],
[
"Chen",
"Wei-Ming",
""
],
[
"Gan",
"Chuang",
""
],
[
"Han",
"Song",
""
]
] | On-device learning and efficient fine-tuning enable continuous and privacy-preserving customization (e.g., locally fine-tuning large language models on personalized data). However, existing training frameworks are designed for cloud servers with powerful accelerators (e.g., GPUs, TPUs) and lack the optimizations for learning on the edge, which faces challenges of resource limitations and edge hardware diversity. We introduce PockEngine: a tiny, sparse and efficient engine to enable fine-tuning on various edge devices. PockEngine supports sparse backpropagation: it prunes the backward graph and sparsely updates the model with measured memory saving and latency reduction while maintaining the model quality. Secondly, PockEngine is compilation first: the entire training graph (including forward, backward and optimization steps) is derived at compile-time, which reduces the runtime overhead and brings opportunities for graph transformations. PockEngine also integrates a rich set of training graph optimizations, thus can further reduce the training cost, including operator reordering and backend switching. PockEngine supports diverse applications, frontends and hardware backends: it flexibly compiles and tunes models defined in PyTorch/TensorFlow/Jax and deploys binaries to mobile CPU/GPU/DSPs. We evaluated PockEngine on both vision models and large language models. PockEngine achieves up to 15 $\times$ speedup over off-the-shelf TensorFlow (Raspberry Pi) and 5.6 $\times$ memory saving in back-propagation (Jetson AGX Orin). Remarkably, PockEngine enables fine-tuning LLaMav2-7B on NVIDIA Jetson AGX Orin at 550 tokens/s, 7.9 $\times$ faster than PyTorch. |
1810.10801 | Yulia Sandamirskaya | Sebastian Glatz, Julien N.P. Martel, Raphaela Kreiser, Ning Qiao, and
Yulia Sandamirskaya | Adaptive motor control and learning in a spiking neural network realised
on a mixed-signal neuromorphic processor | 6+1 pages, 4 figures, will appear in one of the Robotics conferences | IEEE International Conference on Robotics and Automation (ICRA)
2019 | null | null | cs.ET cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neuromorphic computing is a new paradigm for design of both the computing
hardware and algorithms inspired by biological neural networks. The event-based
nature and the inherent parallelism make neuromorphic computing a promising
paradigm for building efficient neural network based architectures for control
of fast and agile robots. In this paper, we present a spiking neural network
architecture that uses sensory feedback to control rotational velocity of a
robotic vehicle. When the velocity reaches the target value, the mapping from
the target velocity of the vehicle to the correct motor command, both
represented in the spiking neural network on the neuromorphic device, is
autonomously stored on the device using on-chip plastic synaptic weights. We
validate the controller using a wheel motor of a miniature mobile vehicle and
an inertial measurement unit as the sensory feedback and demonstrate online
learning of a simple 'inverse model' in a two-layer spiking neural network on
the neuromorphic chip. The prototype neuromorphic device that features 256
spiking neurons allows us to realise a simple proof of concept architecture for
the purely neuromorphic motor control and learning. The architecture can be
easily scaled-up if a larger neuromorphic device is available.
| [
{
"created": "Thu, 25 Oct 2018 09:22:17 GMT",
"version": "v1"
}
] | 2019-07-10 | [
[
"Glatz",
"Sebastian",
""
],
[
"Martel",
"Julien N. P.",
""
],
[
"Kreiser",
"Raphaela",
""
],
[
"Qiao",
"Ning",
""
],
[
"Sandamirskaya",
"Yulia",
""
]
] | Neuromorphic computing is a new paradigm for the design of both computing hardware and algorithms inspired by biological neural networks. The event-based nature and the inherent parallelism make neuromorphic computing a promising paradigm for building efficient neural network based architectures for control of fast and agile robots. In this paper, we present a spiking neural network architecture that uses sensory feedback to control rotational velocity of a robotic vehicle. When the velocity reaches the target value, the mapping from the target velocity of the vehicle to the correct motor command, both represented in the spiking neural network on the neuromorphic device, is autonomously stored on the device using on-chip plastic synaptic weights. We validate the controller using a wheel motor of a miniature mobile vehicle and an inertial measurement unit as the sensory feedback and demonstrate online learning of a simple 'inverse model' in a two-layer spiking neural network on the neuromorphic chip. The prototype neuromorphic device that features 256 spiking neurons allows us to realise a simple proof of concept architecture for the purely neuromorphic motor control and learning. The architecture can be easily scaled-up if a larger neuromorphic device is available. |
2106.11051 | Omar Alolayan | Omar S. Alolayan, Samuel J. Raymond, Justin B. Montgomery and John R.
Williams | Towards Better Shale Gas Production Forecasting Using Transfer Learning | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep neural networks can generate more accurate shale gas production
forecasts in counties with a limited number of sample wells by utilizing
transfer learning. This paper provides a way of transferring the knowledge
gained from other deep neural network models trained on adjacent counties into
the county of interest. The paper uses data from more than 6000 shale gas wells
across 17 counties from Texas Barnett and Pennsylvania Marcellus shale
formations to test the capabilities of transfer learning. The results reduce
the forecasting error between 11% and 47% compared to the widely used Arps
decline curve model.
| [
{
"created": "Mon, 21 Jun 2021 12:37:44 GMT",
"version": "v1"
}
] | 2021-06-22 | [
[
"Alolayan",
"Omar S.",
""
],
[
"Raymond",
"Samuel J.",
""
],
[
"Montgomery",
"Justin B.",
""
],
[
"Williams",
"John R.",
""
]
] | Deep neural networks can generate more accurate shale gas production forecasts in counties with a limited number of sample wells by utilizing transfer learning. This paper provides a way of transferring the knowledge gained from other deep neural network models trained on adjacent counties into the county of interest. The paper uses data from more than 6000 shale gas wells across 17 counties from Texas Barnett and Pennsylvania Marcellus shale formations to test the capabilities of transfer learning. The results reduce the forecasting error between 11% and 47% compared to the widely used Arps decline curve model. |
2106.04812 | Kshitij Tayal | Kshitij Tayal, Raunak Manekar, Zhong Zhuang, David Yang, Vipin Kumar,
Felix Hofmann, Ju Sun | Phase Retrieval using Single-Instance Deep Generative Prior | null | null | null | null | cs.LG eess.IV | http://creativecommons.org/licenses/by/4.0/ | Several deep learning methods for phase retrieval exist, but most of them
fail on realistic data without precise support information. We propose a novel
method based on single-instance deep generative prior that works well on
complex-valued crystal data.
| [
{
"created": "Wed, 9 Jun 2021 05:11:33 GMT",
"version": "v1"
},
{
"created": "Tue, 22 Jun 2021 19:59:54 GMT",
"version": "v2"
}
] | 2021-06-24 | [
[
"Tayal",
"Kshitij",
""
],
[
"Manekar",
"Raunak",
""
],
[
"Zhuang",
"Zhong",
""
],
[
"Yang",
"David",
""
],
[
"Kumar",
"Vipin",
""
],
[
"Hofmann",
"Felix",
""
],
[
"Sun",
"Ju",
""
]
] | Several deep learning methods for phase retrieval exist, but most of them fail on realistic data without precise support information. We propose a novel method based on single-instance deep generative prior that works well on complex-valued crystal data. |
2112.01379 | Joseph Tien | Matthew T. Osborne, Samuel S. Malloy, Erik C. Nisbet, Robert M. Bond,
Joseph H. Tien | Sentinel node approach to monitoring online COVID-19 misinformation | null | null | null | null | cs.SI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Understanding how different online communities engage with COVID-19
misinformation is critical for public health response, as misinformation
confined to a small, isolated community of users poses a different public
health risk than misinformation being consumed by a large population spanning
many diverse communities. Here we take a longitudinal approach that leverages
tools from network science to study COVID-19 misinformation on Twitter. Our
approach provides a means to examine the breadth of misinformation engagement
using modest data needs and computational resources. We identify influential
accounts from different Twitter communities discussing COVID-19, and follow
these `sentinel nodes' longitudinally from July 2020 to January 2021. We
characterize sentinel nodes in terms of a linked-media preference score, and
use a standardized similarity score to examine alignment of tweets within and
between communities. We find that media preference is strongly correlated with
the amount of misinformation propagated by sentinel nodes. Engagement with
sensationalist misinformation topics is largely confined to a cluster of
sentinel nodes that includes influential conspiracy theorist accounts, while
misinformation relating to COVID-19 severity generated widespread engagement
across multiple communities. Our findings indicate that misinformation
downplaying COVID-19 severity is of particular concern for public health
response.
| [
{
"created": "Thu, 2 Dec 2021 16:12:38 GMT",
"version": "v1"
}
] | 2021-12-03 | [
[
"Osborne",
"Matthew T.",
""
],
[
"Malloy",
"Samuel S.",
""
],
[
"Nisbet",
"Erik C.",
""
],
[
"Bond",
"Robert M.",
""
],
[
"Tien",
"Joseph H.",
""
]
] | Understanding how different online communities engage with COVID-19 misinformation is critical for public health response, as misinformation confined to a small, isolated community of users poses a different public health risk than misinformation being consumed by a large population spanning many diverse communities. Here we take a longitudinal approach that leverages tools from network science to study COVID-19 misinformation on Twitter. Our approach provides a means to examine the breadth of misinformation engagement using modest data needs and computational resources. We identify influential accounts from different Twitter communities discussing COVID-19, and follow these `sentinel nodes' longitudinally from July 2020 to January 2021. We characterize sentinel nodes in terms of a linked-media preference score, and use a standardized similarity score to examine alignment of tweets within and between communities. We find that media preference is strongly correlated with the amount of misinformation propagated by sentinel nodes. Engagement with sensationalist misinformation topics is largely confined to a cluster of sentinel nodes that includes influential conspiracy theorist accounts, while misinformation relating to COVID-19 severity generated widespread engagement across multiple communities. Our findings indicate that misinformation downplaying COVID-19 severity is of particular concern for public health response. |
1408.6228 | M. Rizwan Jameel Qureshi Dr. | M. Rizwan Jameel Qureshi | Estimation of the new agile XP process model for medium-scale projects
using industrial case studies | 3 pages, 1 figure | International Journal of Machine Learning and Computing, online
October 2013; Vol. 3, No. 5, 2013, pp. 393-395 | 10.7763/IJMLC.2013.V3.346 | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Agile is one of the terms with which software professionals are quite
familiar. Agile models promote fast development to develop high quality
software. XP process model is one of the most widely used and most documented
agile models. The XP model is meant for small-scale projects. Since XP is a
good model, it needs to be extended for the development of medium- and
large-scale projects. The XP model has certain drawbacks, such as weak
documentation and poor performance, when adapted for the development of
medium- and large-scale projects with large teams. A new XP model is proposed
in this paper to cater to the needs of software development companies for
medium-scale projects with large teams. This research may prove to be a step
forward toward adapting the proposed new XP model for the development of
large-scale projects. Two independent industrial case studies, one for each
type of project, are conducted to validate the proposed new XP model for
small and medium-scale software projects.
| [
{
"created": "Tue, 26 Aug 2014 14:07:34 GMT",
"version": "v1"
}
] | 2014-08-28 | [
[
"Qureshi",
"M. Rizwan Jameel",
""
]
] | Agile is one of the terms with which software professionals are quite familiar. Agile models promote fast development to develop high quality software. XP process model is one of the most widely used and most documented agile models. The XP model is meant for small-scale projects. Since XP is a good model, it needs to be extended for the development of medium- and large-scale projects. The XP model has certain drawbacks, such as weak documentation and poor performance, when adapted for the development of medium- and large-scale projects with large teams. A new XP model is proposed in this paper to cater to the needs of software development companies for medium-scale projects with large teams. This research may prove to be a step forward toward adapting the proposed new XP model for the development of large-scale projects. Two independent industrial case studies, one for each type of project, are conducted to validate the proposed new XP model for small and medium-scale software projects. |
2012.08408 | Zhuonan Liang | Zhuonan Liang, Ziheng Liu, Huaze Shi, Yunlong Chen, Yanbin Cai, Yating
Liang, Yafan Feng, Yuqing Yang, Jing Zhang, Peng Fu | SPOC learner's final grade prediction based on a novel sampling batch
normalization embedded neural network method | 11 pages, 5 figures, ICAIS 2021 | Multimed Tools Appl (2022) | 10.1007/s11042-022-13628-y | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Recent years have witnessed the rapid growth of Small Private Online Courses
(SPOC), which can be highly customized and personalized to adapt to variable
educational requests; machine learning techniques are explored to summarize
and predict learners' performance, mostly focusing on the final grade.
However, the final grades of learners on SPOC are generally seriously
imbalanced, which handicaps the training of the prediction model. To solve
this problem, a sampling batch normalization embedded deep neural network
(SBNEDNN) method is developed in this paper. First, a combined indicator is
defined to measure the distribution of the data, and a rule is then
established to guide the sampling process. Second, batch normalization (BN)
modified layers are embedded into a fully connected neural network to solve
the data imbalance problem. Experimental comparison with three other deep
learning methods demonstrates the superiority of the proposed method.
| [
{
"created": "Tue, 15 Dec 2020 16:36:42 GMT",
"version": "v1"
},
{
"created": "Fri, 11 Nov 2022 07:29:44 GMT",
"version": "v2"
}
] | 2022-11-14 | [
[
"Liang",
"Zhuonan",
""
],
[
"Liu",
"Ziheng",
""
],
[
"Shi",
"Huaze",
""
],
[
"Chen",
"Yunlong",
""
],
[
"Cai",
"Yanbin",
""
],
[
"Liang",
"Yating",
""
],
[
"Feng",
"Yafan",
""
],
[
"Yang",
"Yuqing",
""
],
[
"Zhang",
"Jing",
""
],
[
"Fu",
"Peng",
""
]
] | Recent years have witnessed the rapid growth of Small Private Online Courses (SPOC), which can be highly customized and personalized to adapt to variable educational requests; machine learning techniques are explored to summarize and predict learners' performance, mostly focusing on the final grade. However, the final grades of learners on SPOC are generally seriously imbalanced, which handicaps the training of the prediction model. To solve this problem, a sampling batch normalization embedded deep neural network (SBNEDNN) method is developed in this paper. First, a combined indicator is defined to measure the distribution of the data, and a rule is then established to guide the sampling process. Second, batch normalization (BN) modified layers are embedded into a fully connected neural network to solve the data imbalance problem. Experimental comparison with three other deep learning methods demonstrates the superiority of the proposed method. |
2312.17532 | Yuncheng Huang | Yuncheng Huang, Qianyu He, Jiaqing Liang, Sihang Jiang, Yanghua Xiao
and Yunwen Chen | Enhancing Quantitative Reasoning Skills of Large Language Models through
Dimension Perception | Accepted in the 40th IEEE International Conference on Data
Engineering (ICDE 2024) | null | null | null | cs.CL cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Quantities are distinct and critical components of texts that characterize
the magnitude properties of entities, providing a precise perspective for the
understanding of natural language, especially for reasoning tasks. In recent
years, there has been a flurry of research on reasoning tasks based on large
language models (LLMs), most of which solely focus on numerical values,
neglecting the dimensional concept of quantities with units despite its
importance. We argue that the concept of dimension is essential for precisely
understanding quantities and of great significance for LLMs to perform
quantitative reasoning. However, the lack of dimension knowledge and
quantity-related benchmarks has resulted in low performance of LLMs. Hence, we
present a framework to enhance the quantitative reasoning ability of language
models based on dimension perception. We first construct a dimensional unit
knowledge base (DimUnitKB) to address the knowledge gap in this area. We
propose a benchmark DimEval consisting of seven tasks of three categories to
probe and enhance the dimension perception skills of LLMs. To evaluate the
effectiveness of our methods, we propose a quantitative reasoning task and
conduct experiments. The experimental results show that our dimension
perception method dramatically improves accuracy (43.55%->50.67%) on
quantitative reasoning tasks compared to GPT-4.
| [
{
"created": "Fri, 29 Dec 2023 09:29:37 GMT",
"version": "v1"
}
] | 2024-01-01 | [
[
"Huang",
"Yuncheng",
""
],
[
"He",
"Qianyu",
""
],
[
"Liang",
"Jiaqing",
""
],
[
"Jiang",
"Sihang",
""
],
[
"Xiao",
"Yanghua",
""
],
[
"Chen",
"Yunwen",
""
]
] | Quantities are distinct and critical components of texts that characterize the magnitude properties of entities, providing a precise perspective for the understanding of natural language, especially for reasoning tasks. In recent years, there has been a flurry of research on reasoning tasks based on large language models (LLMs), most of which solely focus on numerical values, neglecting the dimensional concept of quantities with units despite its importance. We argue that the concept of dimension is essential for precisely understanding quantities and of great significance for LLMs to perform quantitative reasoning. However, the lack of dimension knowledge and quantity-related benchmarks has resulted in low performance of LLMs. Hence, we present a framework to enhance the quantitative reasoning ability of language models based on dimension perception. We first construct a dimensional unit knowledge base (DimUnitKB) to address the knowledge gap in this area. We propose a benchmark DimEval consisting of seven tasks of three categories to probe and enhance the dimension perception skills of LLMs. To evaluate the effectiveness of our methods, we propose a quantitative reasoning task and conduct experiments. The experimental results show that our dimension perception method dramatically improves accuracy (43.55%->50.67%) on quantitative reasoning tasks compared to GPT-4. |
2206.00510 | Zuowu Zheng | Zuowu Zheng, Changwang Zhang, Xiaofeng Gao, Guihai Chen | HIEN: Hierarchical Intention Embedding Network for Click-Through Rate
Prediction | Accepted by SIGIR 2022 | null | 10.1145/3477495.3531988 | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Click-through rate (CTR) prediction plays an important role in online
advertising and recommendation systems, which aims at estimating the
probability of a user clicking on a specific item. Feature interaction modeling
and user interest modeling methods are two popular domains in CTR prediction,
and they have been studied extensively in recent years. However, these methods
still suffer from two limitations. First, traditional methods regard item
attributes as ID features, while neglecting structure information and relation
dependencies among attributes. Second, when mining user interests from
user-item interactions, current models ignore user intents and item intents for
different attributes, which lacks interpretability. Based on this observation,
in this paper, we propose a novel approach Hierarchical Intention Embedding
Network (HIEN), which considers dependencies of attributes based on bottom-up
tree aggregation in the constructed attribute graph. HIEN also captures user
intents for different item attributes as well as item intents based on our
proposed hierarchical attention mechanism. Extensive experiments on both public
and production datasets show that the proposed model significantly outperforms
the state-of-the-art methods. In addition, HIEN can be applied as an input
module to state-of-the-art CTR prediction methods, bringing further performance
lift for these existing models that might already be intensively used in real
systems.
| [
{
"created": "Wed, 1 Jun 2022 14:14:14 GMT",
"version": "v1"
}
] | 2022-06-02 | [
[
"Zheng",
"Zuowu",
""
],
[
"Zhang",
"Changwang",
""
],
[
"Gao",
"Xiaofeng",
""
],
[
"Chen",
"Guihai",
""
]
] | Click-through rate (CTR) prediction plays an important role in online advertising and recommendation systems, which aims at estimating the probability of a user clicking on a specific item. Feature interaction modeling and user interest modeling methods are two popular domains in CTR prediction, and they have been studied extensively in recent years. However, these methods still suffer from two limitations. First, traditional methods regard item attributes as ID features, while neglecting structure information and relation dependencies among attributes. Second, when mining user interests from user-item interactions, current models ignore user intents and item intents for different attributes, which lacks interpretability. Based on this observation, in this paper, we propose a novel approach Hierarchical Intention Embedding Network (HIEN), which considers dependencies of attributes based on bottom-up tree aggregation in the constructed attribute graph. HIEN also captures user intents for different item attributes as well as item intents based on our proposed hierarchical attention mechanism. Extensive experiments on both public and production datasets show that the proposed model significantly outperforms the state-of-the-art methods. In addition, HIEN can be applied as an input module to state-of-the-art CTR prediction methods, bringing further performance lift for these existing models that might already be intensively used in real systems. |
2003.04421 | Roman Sokolovskii | Roman Sokolovskii and Alexandre Graell i Amat and Fredrik
Br\"annstr\"om | Finite-Length Scaling of Spatially Coupled LDPC Codes Under Window
Decoding Over the BEC | Published in IEEE Transactions on Communications (Early Access). This
paper was presented in part at the IEEE Information Theory Workshop (ITW),
Visby, Sweden, August 2019 (arXiv:1904.10410) | null | 10.1109/TCOMM.2020.3010958 | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We analyze the finite-length performance of spatially coupled low-density
parity-check (SC-LDPC) codes under window decoding over the binary erasure
channel. In particular, we propose a refinement of the scaling law by Olmos and
Urbanke for the frame error rate (FER) of terminated SC-LDPC ensembles under
full belief propagation (BP) decoding. The refined scaling law models the
decoding process as two independent Ornstein-Uhlenbeck processes, in
correspondence to the two decoding waves that propagate toward the center of
the coupled chain for terminated SC-LDPC codes. We then extend the proposed
scaling law to predict the performance of (terminated) SC-LDPC code ensembles
under the more practical sliding window decoding. Finally, we extend this
framework to predict the bit error rate (BER) and block error rate (BLER) of
SC-LDPC code ensembles. The proposed scaling law yields very accurate
predictions of the FER, BLER, and BER for both full BP and window decoding.
| [
{
"created": "Mon, 9 Mar 2020 21:33:09 GMT",
"version": "v1"
},
{
"created": "Tue, 25 Aug 2020 14:41:51 GMT",
"version": "v2"
}
] | 2020-08-26 | [
[
"Sokolovskii",
"Roman",
""
],
[
"Amat",
"Alexandre Graell i",
""
],
[
"Brännström",
"Fredrik",
""
]
] | We analyze the finite-length performance of spatially coupled low-density parity-check (SC-LDPC) codes under window decoding over the binary erasure channel. In particular, we propose a refinement of the scaling law by Olmos and Urbanke for the frame error rate (FER) of terminated SC-LDPC ensembles under full belief propagation (BP) decoding. The refined scaling law models the decoding process as two independent Ornstein-Uhlenbeck processes, in correspondence to the two decoding waves that propagate toward the center of the coupled chain for terminated SC-LDPC codes. We then extend the proposed scaling law to predict the performance of (terminated) SC-LDPC code ensembles under the more practical sliding window decoding. Finally, we extend this framework to predict the bit error rate (BER) and block error rate (BLER) of SC-LDPC code ensembles. The proposed scaling law yields very accurate predictions of the FER, BLER, and BER for both full BP and window decoding. |
2406.14167 | Andrey Kutuzov | Mariia Fedorova, Andrey Kutuzov, Yves Scherrer | Definition generation for lexical semantic change detection | Findings of ACL 2024 | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | We use contextualized word definitions generated by large language models as
semantic representations in the task of diachronic lexical semantic change
detection (LSCD). In short, generated definitions are used as `senses', and the
change score of a target word is retrieved by comparing their distributions in
two time periods under comparison. On the material of five datasets and three
languages, we show that generated definitions are indeed specific and general
enough to convey a signal sufficient to rank sets of words by the degree of
their semantic change over time. Our approach is on par with or outperforms
prior non-supervised sense-based LSCD methods. At the same time, it preserves
interpretability and allows one to inspect the reasons behind a specific shift in
terms of discrete definitions-as-senses. This is another step in the direction
of explainable semantic change modeling.
| [
{
"created": "Thu, 20 Jun 2024 10:13:08 GMT",
"version": "v1"
},
{
"created": "Wed, 31 Jul 2024 16:20:45 GMT",
"version": "v2"
}
] | 2024-08-01 | [
[
"Fedorova",
"Mariia",
""
],
[
"Kutuzov",
"Andrey",
""
],
[
"Scherrer",
"Yves",
""
]
] | We use contextualized word definitions generated by large language models as semantic representations in the task of diachronic lexical semantic change detection (LSCD). In short, generated definitions are used as `senses', and the change score of a target word is retrieved by comparing their distributions in two time periods under comparison. On the material of five datasets and three languages, we show that generated definitions are indeed specific and general enough to convey a signal sufficient to rank sets of words by the degree of their semantic change over time. Our approach is on par with or outperforms prior non-supervised sense-based LSCD methods. At the same time, it preserves interpretability and allows one to inspect the reasons behind a specific shift in terms of discrete definitions-as-senses. This is another step in the direction of explainable semantic change modeling. |
1906.09567 | Mohsen Ghodrat | Mohsen Ghodrat and Horacio J Marquez | On the Local Input-Output Stability of Event-Triggered Control Systems | 37 pages, 6 figures | IEEE Trans. Autom. Control 64(1), 2019, 174-189 | 10.1109/TAC.2018.2809594 | null | cs.SY eess.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper studies performance preserving event design in nonlinear
event-based control systems based on a local L2-type performance criterion.
Considering a finite gain local L2-stable disturbance driven continuous-time
system, we propose a triggering mechanism so that the resulting sampled-data
system preserves similar disturbance attenuation local L2-gain property. The
results are applicable to nonlinear systems with exogenous disturbances bounded
by some Lipschitz-continuous function of state. It is shown that an
exponentially decaying function of time, combined with the proposed triggering
condition, extends the inter-event periods. Compared to the existing works,
this paper analytically estimates the increase in intersampling periods at
least for an arbitrary period of time. We also propose a so-called discrete
triggering condition to quantitatively find the improvement in inter-event
times at least for an arbitrary number of triggering iterations. Illustrative
examples support the analytically derived results.
| [
{
"created": "Sun, 23 Jun 2019 09:15:43 GMT",
"version": "v1"
},
{
"created": "Sun, 21 Jul 2019 06:44:34 GMT",
"version": "v2"
}
] | 2019-07-23 | [
[
"Ghodrat",
"Mohsen",
""
],
[
"Marquez",
"Horacio J",
""
]
] | This paper studies performance preserving event design in nonlinear event-based control systems based on a local L2-type performance criterion. Considering a finite gain local L2-stable disturbance driven continuous-time system, we propose a triggering mechanism so that the resulting sampled-data system preserves similar disturbance attenuation local L2-gain property. The results are applicable to nonlinear systems with exogenous disturbances bounded by some Lipschitz-continuous function of state. It is shown that an exponentially decaying function of time, combined with the proposed triggering condition, extends the inter-event periods. Compared to the existing works, this paper analytically estimates the increase in intersampling periods at least for an arbitrary period of time. We also propose a so-called discrete triggering condition to quantitatively find the improvement in inter-event times at least for an arbitrary number of triggering iterations. Illustrative examples support the analytically derived results. |
2101.05605 | Rui Liu | Rui Liu and Sen Liu and Xiaoli Zhang | A Physics-Informed Machine Learning Model for Porosity Analysis in Laser
Powder Bed Fusion Additive Manufacturing | 14 pages | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To control part quality, it is critical to analyze pore generation
mechanisms, laying a theoretical foundation for future porosity control.
Current porosity analysis models use machine setting parameters, such as
laser angle and part pose. However, these setting-based models are machine
dependent, and hence often do not transfer to the analysis of porosity on a
different machine. To address this problem, a physics-informed, data-driven
model (PIM) is proposed: instead of directly using machine setting parameters
to predict porosity levels of printed parts, it first interprets machine
settings as physical effects, such as laser energy density and laser
radiation pressure. These physical, machine-independent effects are then used
to predict porosity levels in pass, flag, and fail categories rather than
focusing on quantitative pore-size prediction. Evaluated with six learning
methods, PIM achieved good performance, with a prediction error of
10$\sim$26%. Finally, pore-encouraging and pore-suppressing influences were
analyzed for quality analysis.
| [
{
"created": "Wed, 13 Jan 2021 01:29:01 GMT",
"version": "v1"
}
] | 2021-01-15 | [
[
"Liu",
"Rui",
""
],
[
"Liu",
"Sen",
""
],
[
"Zhang",
"Xiaoli",
""
]
] | To control part quality, it is critical to analyze pore generation mechanisms, laying a theoretical foundation for future porosity control. Current porosity analysis models use machine setting parameters, such as laser angle and part pose. However, these setting-based models are machine dependent, and hence often do not transfer to the analysis of porosity on a different machine. To address this problem, a physics-informed, data-driven model (PIM) is proposed: instead of directly using machine setting parameters to predict porosity levels of printed parts, it first interprets machine settings as physical effects, such as laser energy density and laser radiation pressure. These physical, machine-independent effects are then used to predict porosity levels in pass, flag, and fail categories rather than focusing on quantitative pore-size prediction. Evaluated with six learning methods, PIM achieved good performance, with a prediction error of 10$\sim$26%. Finally, pore-encouraging and pore-suppressing influences were analyzed for quality analysis. |
2009.12216 | Jon McCormack | Jon McCormack and Andy Lomas | Deep Learning of Individual Aesthetics | Author preprint of article for Neural Computing and Applications.
arXiv admin note: substantial text overlap with arXiv:2004.06874 | null | null | null | cs.NE cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate evaluation of human aesthetic preferences represents a major
challenge for creative evolutionary and generative systems research. Prior work
has tended to focus on feature measures of the artefact, such as symmetry,
complexity and coherence. However, research models from Psychology suggest that
human aesthetic experiences encapsulate factors beyond the artefact, making
accurate computational models very difficult to design. The interactive genetic
algorithm (IGA) circumvents the problem through human-in-the-loop, subjective
evaluation of aesthetics, but is limited due to user fatigue and small
population sizes. In this paper we look at how recent advances in deep learning
can assist in automating personal aesthetic judgement. Using a leading artist's
computer art dataset, we investigate the relationship between image measures,
such as complexity, and human aesthetic evaluation. We use dimension reduction
methods to visualise both genotype and phenotype space in order to support the
exploration of new territory in a generative system. Convolutional Neural
Networks trained on the artist's prior aesthetic evaluations are used to
suggest new possibilities similar or between known high quality
genotype-phenotype mappings. We integrate this classification and discovery
system into a software tool for evolving complex generative art and design.
| [
{
"created": "Thu, 24 Sep 2020 03:04:28 GMT",
"version": "v1"
}
] | 2020-09-28 | [
[
"McCormack",
"Jon",
""
],
[
"Lomas",
"Andy",
""
]
] | Accurate evaluation of human aesthetic preferences represents a major challenge for creative evolutionary and generative systems research. Prior work has tended to focus on feature measures of the artefact, such as symmetry, complexity and coherence. However, research models from Psychology suggest that human aesthetic experiences encapsulate factors beyond the artefact, making accurate computational models very difficult to design. The interactive genetic algorithm (IGA) circumvents the problem through human-in-the-loop, subjective evaluation of aesthetics, but is limited due to user fatigue and small population sizes. In this paper we look at how recent advances in deep learning can assist in automating personal aesthetic judgement. Using a leading artist's computer art dataset, we investigate the relationship between image measures, such as complexity, and human aesthetic evaluation. We use dimension reduction methods to visualise both genotype and phenotype space in order to support the exploration of new territory in a generative system. Convolutional Neural Networks trained on the artist's prior aesthetic evaluations are used to suggest new possibilities similar or between known high quality genotype-phenotype mappings. We integrate this classification and discovery system into a software tool for evolving complex generative art and design. |
1410.5782 | Sean Sedwards | Axel Legay, Sean Sedwards and Louis-Marie Traonouez | Lightweight Monte Carlo Verification of Markov Decision Processes with
Rewards | 16 pages, 4 figures, 1 table | null | null | null | cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Markov decision processes are useful models of concurrency optimisation
problems, but are often intractable for exhaustive verification methods. Recent
work has introduced lightweight approximative techniques that sample directly
from scheduler space, bringing the prospect of scalable alternatives to
standard numerical model checking algorithms. The focus so far has been on
optimising the probability of a property, but many problems require
quantitative analysis of rewards. In this work we therefore present lightweight
statistical model checking algorithms to optimise the rewards of Markov
decision processes. We consider the standard definitions of rewards used in
model checking, introducing an auxiliary hypothesis test to accommodate
reachability rewards. We demonstrate the performance of our approach on a
number of standard case studies.
| [
{
"created": "Mon, 20 Oct 2014 05:56:26 GMT",
"version": "v1"
},
{
"created": "Sun, 15 Feb 2015 10:25:43 GMT",
"version": "v2"
},
{
"created": "Mon, 23 Mar 2015 11:54:05 GMT",
"version": "v3"
}
] | 2015-03-24 | [
[
"Legay",
"Axel",
""
],
[
"Sedwards",
"Sean",
""
],
[
"Traonouez",
"Louis-Marie",
""
]
] | Markov decision processes are useful models of concurrency optimisation problems, but are often intractable for exhaustive verification methods. Recent work has introduced lightweight approximative techniques that sample directly from scheduler space, bringing the prospect of scalable alternatives to standard numerical model checking algorithms. The focus so far has been on optimising the probability of a property, but many problems require quantitative analysis of rewards. In this work we therefore present lightweight statistical model checking algorithms to optimise the rewards of Markov decision processes. We consider the standard definitions of rewards used in model checking, introducing an auxiliary hypothesis test to accommodate reachability rewards. We demonstrate the performance of our approach on a number of standard case studies. |
1107.5478 | Santosh Vempala | Daniel Dadush and Santosh Vempala | Deterministic Construction of an Approximate M-Ellipsoid and its
Application to Derandomizing Lattice Algorithms | null | null | null | null | cs.CC math.FA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We give a deterministic O(log n)^n algorithm for the {\em Shortest Vector
Problem (SVP)} of a lattice under {\em any} norm, improving on the previous
best deterministic bound of n^O(n) for general norms and nearly matching the
bound of 2^O(n) for the standard Euclidean norm established by Micciancio and
Voulgaris (STOC 2010). Our algorithm can be viewed as a derandomization of the
AKS randomized sieve algorithm, which can be used to solve SVP for any norm in
2^O(n) time with high probability. We use the technique of covering a convex
body by ellipsoids, as introduced for lattice problems in (Dadush et al., FOCS
2011).
Our main contribution is a deterministic approximation of an M-ellipsoid of
any convex body. We achieve this via a convex programming formulation of the
optimal ellipsoid with the objective function being an n-dimensional integral
that we show can be approximated deterministically, a technique that appears to
be of independent interest.
| [
{
"created": "Wed, 27 Jul 2011 14:05:55 GMT",
"version": "v1"
}
] | 2011-07-28 | [
[
"Dadush",
"Daniel",
""
],
[
"Vempala",
"Santosh",
""
]
] | We give a deterministic O(log n)^n algorithm for the {\em Shortest Vector Problem (SVP)} of a lattice under {\em any} norm, improving on the previous best deterministic bound of n^O(n) for general norms and nearly matching the bound of 2^O(n) for the standard Euclidean norm established by Micciancio and Voulgaris (STOC 2010). Our algorithm can be viewed as a derandomization of the AKS randomized sieve algorithm, which can be used to solve SVP for any norm in 2^O(n) time with high probability. We use the technique of covering a convex body by ellipsoids, as introduced for lattice problems in (Dadush et al., FOCS 2011). Our main contribution is a deterministic approximation of an M-ellipsoid of any convex body. We achieve this via a convex programming formulation of the optimal ellipsoid with the objective function being an n-dimensional integral that we show can be approximated deterministically, a technique that appears to be of independent interest. |
1002.0139 | Kadirvelu SivaKumar | P.S Hiremath, Siddu P. Algur | Extraction of Flat and Nested Data Records from Web Pages | 10 Pages IEEE format, International Journal on Computer Science and
Engineering, IJCSE 2010, ISSN 0975-3397, Impact Factor 0.583 | International Journal on Computer Science and Engineering, IJCSE,
Vol. 2, No. 1 January 2010 | null | IJEST10-02-01-07 | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper studies the problem of identification and extraction of flat and
nested data records from a given web page. With the explosive growth of
information sources available on the World Wide Web, it has become increasingly
difficult to identify the relevant pieces of information, since web pages are
often cluttered with irrelevant content like advertisements, navigation-panels,
copyright notices etc., surrounding the main content of the web page. Hence, it
is useful to mine such data regions and data records in order to extract
information from such web pages to provide value-added services. Currently
available automatic techniques to mine data regions and data records from web
pages are still unsatisfactory because of their poor performance. In this paper
a novel method to identify and extract the flat and nested data records from
the web pages automatically is proposed. It comprises two steps: (1)
identification and extraction of the data regions based on visual clue
information; (2) identification and extraction of flat and nested data records
from the data region of a web page automatically. For step 1, a novel and more
effective method is proposed, which finds the data regions formed by all types
of tags using visual clues. For step 2, a more effective and efficient method,
namely Visual Clue based Extraction of web Data (VCED), is proposed, which
extracts each record from the data region and identifies whether it is a
flat or nested data record based on visual clue information, namely the area
covered by and the number of data items present in each record. Our experimental results
show that the proposed technique is effective and better than existing
techniques.
| [
{
"created": "Sun, 31 Jan 2010 16:39:26 GMT",
"version": "v1"
}
] | 2010-02-02 | [
[
"Hiremath",
"P. S",
""
],
[
"Algur",
"Siddu P.",
""
]
] | This paper studies the problem of identification and extraction of flat and nested data records from a given web page. With the explosive growth of information sources available on the World Wide Web, it has become increasingly difficult to identify the relevant pieces of information, since web pages are often cluttered with irrelevant content like advertisements, navigation-panels, copyright notices etc., surrounding the main content of the web page. Hence, it is useful to mine such data regions and data records in order to extract information from such web pages to provide value-added services. Currently available automatic techniques to mine data regions and data records from web pages are still unsatisfactory because of their poor performance. In this paper a novel method to identify and extract the flat and nested data records from the web pages automatically is proposed. It comprises two steps: (1) identification and extraction of the data regions based on visual clue information; (2) identification and extraction of flat and nested data records from the data region of a web page automatically. For step 1, a novel and more effective method is proposed, which finds the data regions formed by all types of tags using visual clues. For step 2, a more effective and efficient method, namely Visual Clue based Extraction of web Data (VCED), is proposed, which extracts each record from the data region and identifies whether it is a flat or nested data record based on visual clue information, namely the area covered by and the number of data items present in each record. Our experimental results show that the proposed technique is effective and better than existing techniques. |
2205.04114 | Wei Zhu | Wei Zhu, Le Lu, Jing Xiao, Mei Han, Jiebo Luo, Adam P. Harrison | Localized Adversarial Domain Generalization | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Deep learning methods can struggle to handle domain shifts not seen in
training data, which can cause them to not generalize well to unseen domains.
This has led to research attention on domain generalization (DG), which aims to
improve the model's generalization ability to out-of-distribution data. Adversarial domain
generalization is a popular approach to DG, but conventional approaches (1)
struggle to sufficiently align features so that local neighborhoods are mixed
across domains; and (2) can suffer from feature-space over-collapse, which can
threaten generalization performance. To address these limitations, we propose
localized adversarial domain generalization with space compactness
maintenance~(LADG) which constitutes two major contributions. First, we propose
an adversarial localized classifier as the domain discriminator, along with a
principled primary branch. This constructs a min-max game whereby the aim of
the featurizer is to produce locally mixed domains. Second, we propose to use a
coding-rate loss to alleviate feature-space over-collapse. We conduct
comprehensive experiments on the Wilds DG benchmark to validate our approach,
where LADG outperforms leading competitors on most datasets.
| [
{
"created": "Mon, 9 May 2022 08:30:31 GMT",
"version": "v1"
}
] | 2022-05-10 | [
[
"Zhu",
"Wei",
""
],
[
"Lu",
"Le",
""
],
[
"Xiao",
"Jing",
""
],
[
"Han",
"Mei",
""
],
[
"Luo",
"Jiebo",
""
],
[
"Harrison",
"Adam P.",
""
]
] | Deep learning methods can struggle to handle domain shifts not seen in training data, which can cause them to not generalize well to unseen domains. This has led to research attention on domain generalization (DG), which aims to improve the model's generalization ability to out-of-distribution data. Adversarial domain generalization is a popular approach to DG, but conventional approaches (1) struggle to sufficiently align features so that local neighborhoods are mixed across domains; and (2) can suffer from feature-space over-collapse, which can threaten generalization performance. To address these limitations, we propose localized adversarial domain generalization with space compactness maintenance~(LADG) which constitutes two major contributions. First, we propose an adversarial localized classifier as the domain discriminator, along with a principled primary branch. This constructs a min-max game whereby the aim of the featurizer is to produce locally mixed domains. Second, we propose to use a coding-rate loss to alleviate feature-space over-collapse. We conduct comprehensive experiments on the Wilds DG benchmark to validate our approach, where LADG outperforms leading competitors on most datasets. |
2103.05961 | Jian Zhang | Chong Mou, Jian Zhang, Xiaopeng Fan, Hangfan Liu, Ronggang Wang | COLA-Net: Collaborative Attention Network for Image Restoration | 11 pages, 6 tables, 9 figures, to be published in IEEE Transactions
on Multimedia | null | 10.1109/TMM.2021.3063916 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Local and non-local attention-based methods have been well studied in various
image restoration tasks while leading to promising performance. However, most
of the existing methods solely focus on one type of attention mechanism (local
or non-local). Furthermore, by exploiting the self-similarity of natural
images, existing pixel-wise non-local attention operations tend to give rise to
deviations in the process of characterizing long-range dependence due to image
degeneration. To overcome these problems, in this paper we propose a novel
collaborative attention network (COLA-Net) for image restoration, as the first
attempt to combine local and non-local attention mechanisms to restore image
content in the areas with complex textures and with highly repetitive details
respectively. In addition, an effective and robust patch-wise non-local
attention model is developed to capture long-range feature correspondences
through 3D patches. Extensive experiments on synthetic image denoising, real
image denoising and compression artifact reduction tasks demonstrate that our
proposed COLA-Net is able to achieve state-of-the-art performance in both peak
signal-to-noise ratio and visual perception, while maintaining an attractive
computational complexity. The source code is available on
https://github.com/MC-E/COLA-Net.
| [
{
"created": "Wed, 10 Mar 2021 09:33:17 GMT",
"version": "v1"
}
] | 2021-03-11 | [
[
"Mou",
"Chong",
""
],
[
"Zhang",
"Jian",
""
],
[
"Fan",
"Xiaopeng",
""
],
[
"Liu",
"Hangfan",
""
],
[
"Wang",
"Ronggang",
""
]
] | Local and non-local attention-based methods have been well studied in various image restoration tasks while leading to promising performance. However, most of the existing methods solely focus on one type of attention mechanism (local or non-local). Furthermore, by exploiting the self-similarity of natural images, existing pixel-wise non-local attention operations tend to give rise to deviations in the process of characterizing long-range dependence due to image degeneration. To overcome these problems, in this paper we propose a novel collaborative attention network (COLA-Net) for image restoration, as the first attempt to combine local and non-local attention mechanisms to restore image content in the areas with complex textures and with highly repetitive details respectively. In addition, an effective and robust patch-wise non-local attention model is developed to capture long-range feature correspondences through 3D patches. Extensive experiments on synthetic image denoising, real image denoising and compression artifact reduction tasks demonstrate that our proposed COLA-Net is able to achieve state-of-the-art performance in both peak signal-to-noise ratio and visual perception, while maintaining an attractive computational complexity. The source code is available on https://github.com/MC-E/COLA-Net. |
2407.11798 | Branden Butler | Branden Butler, Sixing Yu, Arya Mazaheri, and Ali Jannesari | PipeInfer: Accelerating LLM Inference using Asynchronous Pipelined
Speculation | 11 pages, submitted to SC24 conference | null | null | null | cs.CL cs.DC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Inference of Large Language Models (LLMs) across computer clusters has become
a focal point of research in recent times, with many acceleration techniques
taking inspiration from CPU speculative execution. These techniques reduce
bottlenecks associated with memory bandwidth, but also increase end-to-end
latency per inference run, requiring high speculation acceptance rates to
improve performance. Combined with a variable rate of acceptance across tasks,
speculative inference techniques can result in reduced performance.
Additionally, pipeline-parallel designs require many user requests to maintain
maximum utilization. As a remedy, we propose PipeInfer, a pipelined speculative
acceleration technique to reduce inter-token latency and improve system
utilization for single-request scenarios while also improving tolerance to low
speculation acceptance rates and low-bandwidth interconnects. PipeInfer
exhibits up to a 2.15$\times$ improvement in generation speed over standard
speculative inference. PipeInfer achieves its improvement through Continuous
Asynchronous Speculation and Early Inference Cancellation, the former improving
latency and generation speed by running single-token inference simultaneously
with several speculative runs, while the latter improves speed and latency by
skipping the computation of invalidated runs, even in the middle of inference.
| [
{
"created": "Tue, 16 Jul 2024 14:52:02 GMT",
"version": "v1"
}
] | 2024-07-17 | [
[
"Butler",
"Branden",
""
],
[
"Yu",
"Sixing",
""
],
[
"Mazaheri",
"Arya",
""
],
[
"Jannesari",
"Ali",
""
]
] | Inference of Large Language Models (LLMs) across computer clusters has become a focal point of research in recent times, with many acceleration techniques taking inspiration from CPU speculative execution. These techniques reduce bottlenecks associated with memory bandwidth, but also increase end-to-end latency per inference run, requiring high speculation acceptance rates to improve performance. Combined with a variable rate of acceptance across tasks, speculative inference techniques can result in reduced performance. Additionally, pipeline-parallel designs require many user requests to maintain maximum utilization. As a remedy, we propose PipeInfer, a pipelined speculative acceleration technique to reduce inter-token latency and improve system utilization for single-request scenarios while also improving tolerance to low speculation acceptance rates and low-bandwidth interconnects. PipeInfer exhibits up to a 2.15$\times$ improvement in generation speed over standard speculative inference. PipeInfer achieves its improvement through Continuous Asynchronous Speculation and Early Inference Cancellation, the former improving latency and generation speed by running single-token inference simultaneously with several speculative runs, while the latter improves speed and latency by skipping the computation of invalidated runs, even in the middle of inference. |
1812.06081 | Sendong Zhao | Sendong Zhao, Ting Liu, Sicheng Zhao, Fei Wang | A Neural Multi-Task Learning Framework to Jointly Model Medical Named
Entity Recognition and Normalization | AAAI-2019 | null | null | null | cs.CL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | State-of-the-art studies have demonstrated the superiority of joint modelling
over pipeline implementation for medical named entity recognition and
normalization due to the mutual benefits between the two processes. To exploit
these benefits in a more sophisticated way, we propose a novel deep neural
multi-task learning framework with explicit feedback strategies to jointly
model recognition and normalization. On one hand, our method benefits from the
general representations of both tasks provided by multi-task learning. On the
other hand, our method successfully converts hierarchical tasks into a parallel
multi-task setting while maintaining the mutual supports between tasks. Both of
these aspects improve the model performance. Experimental results demonstrate
that our method performs significantly better than state-of-the-art approaches
on two publicly available medical literature datasets.
| [
{
"created": "Fri, 14 Dec 2018 18:59:41 GMT",
"version": "v1"
}
] | 2018-12-17 | [
[
"Zhao",
"Sendong",
""
],
[
"Liu",
"Ting",
""
],
[
"Zhao",
"Sicheng",
""
],
[
"Wang",
"Fei",
""
]
] | State-of-the-art studies have demonstrated the superiority of joint modelling over pipeline implementation for medical named entity recognition and normalization due to the mutual benefits between the two processes. To exploit these benefits in a more sophisticated way, we propose a novel deep neural multi-task learning framework with explicit feedback strategies to jointly model recognition and normalization. On one hand, our method benefits from the general representations of both tasks provided by multi-task learning. On the other hand, our method successfully converts hierarchical tasks into a parallel multi-task setting while maintaining the mutual supports between tasks. Both of these aspects improve the model performance. Experimental results demonstrate that our method performs significantly better than state-of-the-art approaches on two publicly available medical literature datasets. |
2205.09445 | Jiahui Wang Mr | Jiahui Wang, Zhenyou Wang, Shanna Zhuang, Hui Wang | Cross-Enhancement Transformer for Action Segmentation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Temporal convolutions have been the paradigm of choice in action
segmentation, which enhances long-term receptive fields by increasing
convolution layers. However, high layers cause the loss of local information
necessary for frame recognition. To solve the above problem, a novel
encoder-decoder structure is proposed in this paper, called Cross-Enhancement
Transformer. Our approach enables effective learning of temporal structure
representations through an interactive self-attention mechanism: each layer's
convolutional feature maps in the encoder are concatenated with a set of
features produced via self-attention in the decoder. Therefore, local and global information are used
in a series of frame actions simultaneously. In addition, a new loss function
that penalizes over-segmentation errors is proposed to enhance the training
process. Experiments show that our framework achieves state-of-the-art performance on three
challenging datasets: 50Salads, Georgia Tech Egocentric Activities and the
Breakfast dataset.
| [
{
"created": "Thu, 19 May 2022 10:06:30 GMT",
"version": "v1"
}
] | 2022-05-20 | [
[
"Wang",
"Jiahui",
""
],
[
"Wang",
"Zhenyou",
""
],
[
"Zhuang",
"Shanna",
""
],
[
"Wang",
"Hui",
""
]
] | Temporal convolutions have been the paradigm of choice in action segmentation, which enhances long-term receptive fields by increasing convolution layers. However, high layers cause the loss of local information necessary for frame recognition. To solve the above problem, a novel encoder-decoder structure is proposed in this paper, called Cross-Enhancement Transformer. Our approach enables effective learning of temporal structure representations through an interactive self-attention mechanism: each layer's convolutional feature maps in the encoder are concatenated with a set of features produced via self-attention in the decoder. Therefore, local and global information are used in a series of frame actions simultaneously. In addition, a new loss function that penalizes over-segmentation errors is proposed to enhance the training process. Experiments show that our framework achieves state-of-the-art performance on three challenging datasets: 50Salads, Georgia Tech Egocentric Activities and the Breakfast dataset. |
1903.10636 | Annalisa Massini | Novella Bartolini, Ting He, Viviana Arrigoni, Annalisa Massini, Hana
Khamfroush | On Fundamental Bounds of Failure Identifiability by Boolean Network
Tomography | null | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Boolean network tomography is a powerful tool to infer the state
(working/failed) of individual nodes from path-level measurements obtained by
edge-nodes. We consider the problem of optimizing the capability of identifying
network failures through the design of monitoring schemes. Finding an optimal
solution is NP-hard and a large body of work has been devoted to heuristic
approaches providing lower bounds. Unlike previous works, we provide upper
bounds on the maximum number of identifiable nodes, given the number of
monitoring paths and different constraints on the network topology, the routing
scheme, and the maximum path length. These upper bounds represent a fundamental
limit on identifiability of failures via Boolean network tomography. Our
analysis provides insights on how to design topologies and related monitoring
schemes to achieve the maximum identifiability under various network settings.
Through analysis and experiments we demonstrate the tightness of the bounds and
efficacy of the design insights for engineered as well as real networks.
| [
{
"created": "Tue, 26 Mar 2019 00:03:10 GMT",
"version": "v1"
}
] | 2019-03-27 | [
[
"Bartolini",
"Novella",
""
],
[
"He",
"Ting",
""
],
[
"Arrigoni",
"Viviana",
""
],
[
"Massini",
"Annalisa",
""
],
[
"Khamfroush",
"Hana",
""
]
] | Boolean network tomography is a powerful tool to infer the state (working/failed) of individual nodes from path-level measurements obtained by edge-nodes. We consider the problem of optimizing the capability of identifying network failures through the design of monitoring schemes. Finding an optimal solution is NP-hard and a large body of work has been devoted to heuristic approaches providing lower bounds. Unlike previous works, we provide upper bounds on the maximum number of identifiable nodes, given the number of monitoring paths and different constraints on the network topology, the routing scheme, and the maximum path length. These upper bounds represent a fundamental limit on identifiability of failures via Boolean network tomography. Our analysis provides insights on how to design topologies and related monitoring schemes to achieve the maximum identifiability under various network settings. Through analysis and experiments we demonstrate the tightness of the bounds and efficacy of the design insights for engineered as well as real networks. |
1804.09113 | Benjamin Planche | Sergey Zakharov, Benjamin Planche, Ziyan Wu, Andreas Hutter, Harald
Kosch, Slobodan Ilic | Keep it Unreal: Bridging the Realism Gap for 2.5D Recognition with
Geometry Priors Only | 10 pages + supplementary material + references. The first two authors
contributed equally to this work | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the increasing availability of large databases of 3D CAD models,
depth-based recognition methods can be trained on an uncountable number of
synthetically rendered images. However, discrepancies with the real data
acquired from various depth sensors still noticeably impede progress. Previous
works adopted unsupervised approaches to generate more realistic depth data,
but they all require real scans for training, even if unlabeled. This still
represents a strong requirement, especially when considering
real-life/industrial settings where real training images are hard or impossible
to acquire, but texture-less 3D models are available. We thus propose a novel
approach leveraging only CAD models to bridge the realism gap. Purely trained
on synthetic data, playing against an extensive augmentation pipeline in an
unsupervised manner, our generative adversarial network learns to effectively
segment depth images and recover the clean synthetic-looking depth information
even from partial occlusions. As our solution is not only fully decoupled from
the real domains but also from the task-specific analytics, the pre-processed
scans can be handed to any kind and number of recognition methods also trained
on synthetic data. Through various experiments, we demonstrate how this
simplifies their training and consistently enhances their performance, with
results on par with the same methods trained on real data, and better than
usual approaches doing the reverse mapping.
| [
{
"created": "Tue, 24 Apr 2018 16:02:59 GMT",
"version": "v1"
},
{
"created": "Thu, 24 May 2018 16:08:07 GMT",
"version": "v2"
}
] | 2018-05-25 | [
[
"Zakharov",
"Sergey",
""
],
[
"Planche",
"Benjamin",
""
],
[
"Wu",
"Ziyan",
""
],
[
"Hutter",
"Andreas",
""
],
[
"Kosch",
"Harald",
""
],
[
"Ilic",
"Slobodan",
""
]
] | With the increasing availability of large databases of 3D CAD models, depth-based recognition methods can be trained on an uncountable number of synthetically rendered images. However, discrepancies with the real data acquired from various depth sensors still noticeably impede progress. Previous works adopted unsupervised approaches to generate more realistic depth data, but they all require real scans for training, even if unlabeled. This still represents a strong requirement, especially when considering real-life/industrial settings where real training images are hard or impossible to acquire, but texture-less 3D models are available. We thus propose a novel approach leveraging only CAD models to bridge the realism gap. Purely trained on synthetic data, playing against an extensive augmentation pipeline in an unsupervised manner, our generative adversarial network learns to effectively segment depth images and recover the clean synthetic-looking depth information even from partial occlusions. As our solution is not only fully decoupled from the real domains but also from the task-specific analytics, the pre-processed scans can be handed to any kind and number of recognition methods also trained on synthetic data. Through various experiments, we demonstrate how this simplifies their training and consistently enhances their performance, with results on par with the same methods trained on real data, and better than usual approaches doing the reverse mapping. |
2010.09559 | Cristi\'an Bravo | Mar\'ia \'Oskarsd\'ottir and Cristi\'an Bravo | Multilayer Network Analysis for Improved Credit Risk Prediction | 24 pages, 15 figures. v4 - accepted | Omega 105: 102520 (2021) | 10.1016/j.omega.2021.102520 | null | cs.SI cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We present a multilayer network model for credit risk assessment. Our model
accounts for multiple connections between borrowers (such as their geographic
location and their economic activity) and allows for explicitly modelling the
interaction between connected borrowers. We develop a multilayer personalized
PageRank algorithm that allows quantifying the strength of the default exposure
of any borrower in the network. We test our methodology in an agricultural
lending framework, where it has long been suspected that default
correlates between borrowers when they are subject to the same structural
risks. Our results show there are significant predictive gains just by
including centrality multilayer network information in the model, and these
gains are increased by more complex information such as the multilayer PageRank
variables. The results suggest default risk is highest when an individual is
connected to many defaulters, but this risk is mitigated by the size of the
neighbourhood of the individual, showing both default risk and financial
stability propagate throughout the network.
| [
{
"created": "Mon, 19 Oct 2020 14:39:53 GMT",
"version": "v1"
},
{
"created": "Thu, 11 Feb 2021 22:57:28 GMT",
"version": "v2"
},
{
"created": "Tue, 1 Jun 2021 23:01:03 GMT",
"version": "v3"
},
{
"created": "Mon, 26 Jul 2021 17:00:17 GMT",
"version": "v4"
}
] | 2021-07-27 | [
[
"Óskarsdóttir",
"María",
""
],
[
"Bravo",
"Cristián",
""
]
] | We present a multilayer network model for credit risk assessment. Our model accounts for multiple connections between borrowers (such as their geographic location and their economic activity) and allows for explicitly modelling the interaction between connected borrowers. We develop a multilayer personalized PageRank algorithm that allows quantifying the strength of the default exposure of any borrower in the network. We test our methodology in an agricultural lending framework, where it has long been suspected that default correlates between borrowers when they are subject to the same structural risks. Our results show there are significant predictive gains just by including centrality multilayer network information in the model, and these gains are increased by more complex information such as the multilayer PageRank variables. The results suggest default risk is highest when an individual is connected to many defaulters, but this risk is mitigated by the size of the neighbourhood of the individual, showing both default risk and financial stability propagate throughout the network. |
2103.15980 | Jose-Luis Blanco-Claraco | Jos\'e Luis Blanco-Claraco | A tutorial on $\mathbf{SE}(3)$ transformation parameterizations and
on-manifold optimization | 68 pages, 6 figures; v2 in arXiv; see history of document versions on
page 3 for full change log of the technical report since 2010 | null | null | UMA-MAPIR-012010 | cs.RO cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | An arbitrary rigid transformation in $\mathbf{SE}(3)$ can be separated into
two parts, namely, a translation and a rigid rotation. This technical report
reviews, under a unifying viewpoint, three common alternatives to representing
the rotation part: sets of three (yaw-pitch-roll) Euler angles, orthogonal
rotation matrices from $\mathbf{SO}(3)$ and quaternions. We describe:
(i) the equivalence between these representations and the formulas for
transforming one to each other (in all cases considering the translational and
rotational parts as a whole), (ii) how to compose poses with poses and poses
with points in each representation and (iii) how the uncertainty of the poses
(when modeled as Gaussian distributions) is affected by these transformations
and compositions. Some brief notes are also given about the Jacobians required
to implement least-squares optimization on manifolds, a very promising
approach in recent engineering literature. The text reflects which MRPT C++
library functions implement each of the described algorithms. All formulas and
their implementation have been thoroughly validated by means of unit testing
and numerical estimation of the Jacobians.
| [
{
"created": "Mon, 29 Mar 2021 22:43:49 GMT",
"version": "v1"
},
{
"created": "Thu, 7 Apr 2022 07:09:18 GMT",
"version": "v2"
}
] | 2022-04-08 | [
[
"Blanco-Claraco",
"José Luis",
""
]
] | An arbitrary rigid transformation in $\mathbf{SE}(3)$ can be separated into two parts, namely, a translation and a rigid rotation. This technical report reviews, under a unifying viewpoint, three common alternatives to representing the rotation part: sets of three (yaw-pitch-roll) Euler angles, orthogonal rotation matrices from $\mathbf{SO}(3)$ and quaternions. We describe: (i) the equivalence between these representations and the formulas for transforming one to each other (in all cases considering the translational and rotational parts as a whole), (ii) how to compose poses with poses and poses with points in each representation and (iii) how the uncertainty of the poses (when modeled as Gaussian distributions) is affected by these transformations and compositions. Some brief notes are also given about the Jacobians required to implement least-squares optimization on manifolds, a very promising approach in recent engineering literature. The text reflects which MRPT C++ library functions implement each of the described algorithms. All formulas and their implementation have been thoroughly validated by means of unit testing and numerical estimation of the Jacobians. |
2406.09404 | Junkun Chen | Jun-Kun Chen, Samuel Rota Bul\`o, Norman M\"uller, Lorenzo Porzi,
Peter Kontschieder, Yu-Xiong Wang | ConsistDreamer: 3D-Consistent 2D Diffusion for High-Fidelity Scene
Editing | CVPR 2024 | null | null | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes ConsistDreamer - a novel framework that lifts 2D
diffusion models with 3D awareness and 3D consistency, thus enabling
high-fidelity instruction-guided scene editing. To overcome the fundamental
limitation of missing 3D consistency in 2D diffusion models, our key insight is
to introduce three synergetic strategies that augment the input of the 2D
diffusion model to become 3D-aware and to explicitly enforce 3D consistency
during the training process. Specifically, we design surrounding views as
context-rich input for the 2D diffusion model, and generate 3D-consistent,
structured noise instead of image-independent noise. Moreover, we introduce
self-supervised consistency-enforcing training within the per-scene editing
procedure. Extensive evaluation shows that our ConsistDreamer achieves
state-of-the-art performance for instruction-guided scene editing across
various scenes and editing instructions, particularly in complicated
large-scale indoor scenes from ScanNet++, with significantly improved sharpness
and fine-grained textures. Notably, ConsistDreamer stands as the first work
capable of successfully editing complex (e.g., plaid/checkered) patterns. Our
project page is at immortalco.github.io/ConsistDreamer.
| [
{
"created": "Thu, 13 Jun 2024 17:59:32 GMT",
"version": "v1"
}
] | 2024-06-14 | [
[
"Chen",
"Jun-Kun",
""
],
[
"Bulò",
"Samuel Rota",
""
],
[
"Müller",
"Norman",
""
],
[
"Porzi",
"Lorenzo",
""
],
[
"Kontschieder",
"Peter",
""
],
[
"Wang",
"Yu-Xiong",
""
]
] | This paper proposes ConsistDreamer - a novel framework that lifts 2D diffusion models with 3D awareness and 3D consistency, thus enabling high-fidelity instruction-guided scene editing. To overcome the fundamental limitation of missing 3D consistency in 2D diffusion models, our key insight is to introduce three synergetic strategies that augment the input of the 2D diffusion model to become 3D-aware and to explicitly enforce 3D consistency during the training process. Specifically, we design surrounding views as context-rich input for the 2D diffusion model, and generate 3D-consistent, structured noise instead of image-independent noise. Moreover, we introduce self-supervised consistency-enforcing training within the per-scene editing procedure. Extensive evaluation shows that our ConsistDreamer achieves state-of-the-art performance for instruction-guided scene editing across various scenes and editing instructions, particularly in complicated large-scale indoor scenes from ScanNet++, with significantly improved sharpness and fine-grained textures. Notably, ConsistDreamer stands as the first work capable of successfully editing complex (e.g., plaid/checkered) patterns. Our project page is at immortalco.github.io/ConsistDreamer. |
1602.05231 | Simon Thorne | Mahmood H. Shubbak, Simon Thorne | Development and Experimentation of a Software Tool for Identifying High
Risk Spreadsheets for Auditing | 22 pages, 11 Colour Figures, 4 Tables | Proc. 16th EuSpRIG Conf. "Spreadsheet Risk Management" (2015)
pp47-78 ISBN: 978-1-905404-52-0 | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Heavy use of spreadsheets by organisations bears many potential risks such as
errors, ambiguity, data loss, duplication, and fraud. In this paper these risks
are briefly outlined along with their available mitigation methods such as:
documentation, centralisation, auditing and user training. However, because of
the large quantities of spreadsheets used in organisations, applying these
methods on all spreadsheets is impossible. This fact is considered as a
deficiency in these methods, a gap which is addressed in this paper.
In this paper a new software tool for managing spreadsheets and identifying
the risk levels they include is proposed, developed and tested. As an add-in
for the Microsoft Excel application, "Risk Calculator" can automatically collect
and record spreadsheet properties in an inventory database and assign risk
scores based on their importance, use and complexity. Consequently, auditing
processes can be targeted to high risk spreadsheets. Such a method saves time,
effort, and money.
| [
{
"created": "Tue, 16 Feb 2016 22:29:47 GMT",
"version": "v1"
}
] | 2016-02-22 | [
[
"Shubbak",
"Mahmood H.",
""
],
[
"Thorne",
"Simon",
""
]
] | Heavy use of spreadsheets by organisations bears many potential risks such as errors, ambiguity, data loss, duplication, and fraud. In this paper these risks are briefly outlined along with their available mitigation methods such as: documentation, centralisation, auditing and user training. However, because of the large quantities of spreadsheets used in organisations, applying these methods on all spreadsheets is impossible. This fact is considered as a deficiency in these methods, a gap which is addressed in this paper. In this paper a new software tool for managing spreadsheets and identifying the risk levels they include is proposed, developed and tested. As an add-in for the Microsoft Excel application, "Risk Calculator" can automatically collect and record spreadsheet properties in an inventory database and assign risk scores based on their importance, use and complexity. Consequently, auditing processes can be targeted to high risk spreadsheets. Such a method saves time, effort, and money. |
2210.12681 | Atsuyuki Miyai | Atsuyuki Miyai, Qing Yu, Daiki Ikami, Go Irie, Kiyoharu Aizawa | Rethinking Rotation in Self-Supervised Contrastive Learning: Adaptive
Positive or Negative Data Augmentation | Accepted at the IEEE/CVF Winter Conference on Applications of
Computer Vision (WACV) 2023 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Rotation is frequently listed as a candidate for data augmentation in
contrastive learning but seldom provides satisfactory improvements. We argue
that this is because the rotated image is always treated as either positive or
negative. The semantics of an image can be rotation-invariant or
rotation-variant, so whether the rotated image is treated as positive or
negative should be determined based on the content of the image. Therefore, we
propose a novel augmentation strategy, adaptive Positive or Negative Data
Augmentation (PNDA), in which an original and its rotated image are a positive
pair if they are semantically close and a negative pair if they are
semantically different. To achieve PNDA, we first determine whether rotation is
positive or negative on an image-by-image basis in an unsupervised way. Then,
we apply PNDA to contrastive learning frameworks. Our experiments showed that
PNDA improves the performance of contrastive learning. The code is available at
\url{ https://github.com/AtsuMiyai/rethinking_rotation}.
| [
{
"created": "Sun, 23 Oct 2022 09:37:47 GMT",
"version": "v1"
},
{
"created": "Thu, 24 Nov 2022 05:57:56 GMT",
"version": "v2"
}
] | 2022-11-28 | [
[
"Miyai",
"Atsuyuki",
""
],
[
"Yu",
"Qing",
""
],
[
"Ikami",
"Daiki",
""
],
[
"Irie",
"Go",
""
],
[
"Aizawa",
"Kiyoharu",
""
]
] | Rotation is frequently listed as a candidate for data augmentation in contrastive learning but seldom provides satisfactory improvements. We argue that this is because the rotated image is always treated as either positive or negative. The semantics of an image can be rotation-invariant or rotation-variant, so whether the rotated image is treated as positive or negative should be determined based on the content of the image. Therefore, we propose a novel augmentation strategy, adaptive Positive or Negative Data Augmentation (PNDA), in which an original and its rotated image are a positive pair if they are semantically close and a negative pair if they are semantically different. To achieve PNDA, we first determine whether rotation is positive or negative on an image-by-image basis in an unsupervised way. Then, we apply PNDA to contrastive learning frameworks. Our experiments showed that PNDA improves the performance of contrastive learning. The code is available at \url{ https://github.com/AtsuMiyai/rethinking_rotation}. |
2105.08191 | Gangadharan Esakki | Gangadharan Esakki, Andreas Panayides, Venkatesh Jatla, Marios
Pattichis | Adaptive Video Encoding For Different Video Codecs | Video codecs, Video signal processing, Video coding, Video
compression, Video quality, Video streaming, Adaptive video streaming,
Versatile Video Coding, AV1, HEVC | IEEE Access 2021 | 10.1109/ACCESS.2021.3077313 | null | cs.MM eess.IV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | By 2022, we expect video traffic to reach 82% of the total internet traffic.
Undoubtedly, the abundance of video-driven applications will likely lead
internet video traffic percentage to a further increase in the near future,
enabled by associated advances in video devices' capabilities. In response to
this ever-growing demand, the Alliance for Open Media (AOM) and the Joint Video
Experts Team (JVET) have demonstrated strong and renewed interest in developing
new video codecs. In the fast-changing video codecs' landscape, there is thus a
genuine need to develop adaptive methods that can be universally applied to
different codecs. In this study, we formulate video encoding as a
multi-objective optimization process where video quality (as a function of VMAF
and PSNR), bitrate demands, and encoding rate (in encoded frames per second)
are jointly optimized, going beyond the standard video encoding approaches that
focus on rate control targeting specific bandwidths. More specifically, we
create a dense video encoding space (offline) and then employ regression to
generate forward prediction models for each one of the afore-described
optimization objectives, using only Pareto-optimal points. We demonstrate our
adaptive video encoding approach that leverages the generated forward
prediction models that qualify for real-time adaptation using different codecs
(e.g., SVT-AV1 and x265) for a variety of video datasets and resolutions. To
motivate our approach and establish the promise for future fast VVC encoders,
we also perform a comparative performance evaluation using both subjective and
objective metrics and report on bitrate savings among all possible pairs
between VVC, SVT-AV1, x265, and VP9 codecs.
| [
{
"created": "Mon, 17 May 2021 23:06:20 GMT",
"version": "v1"
}
] | 2021-05-19 | [
[
"Esakki",
"Gangadharan",
""
],
[
"Panayides",
"Andreas",
""
],
[
"Jatla",
"Venkatesh",
""
],
[
"Pattichis",
"Marios",
""
]
] | By 2022, we expect video traffic to reach 82% of the total internet traffic. Undoubtedly, the abundance of video-driven applications will likely lead internet video traffic percentage to a further increase in the near future, enabled by associated advances in video devices' capabilities. In response to this ever-growing demand, the Alliance for Open Media (AOM) and the Joint Video Experts Team (JVET) have demonstrated strong and renewed interest in developing new video codecs. In the fast-changing video codecs' landscape, there is thus a genuine need to develop adaptive methods that can be universally applied to different codecs. In this study, we formulate video encoding as a multi-objective optimization process where video quality (as a function of VMAF and PSNR), bitrate demands, and encoding rate (in encoded frames per second) are jointly optimized, going beyond the standard video encoding approaches that focus on rate control targeting specific bandwidths. More specifically, we create a dense video encoding space (offline) and then employ regression to generate forward prediction models for each one of the afore-described optimization objectives, using only Pareto-optimal points. We demonstrate our adaptive video encoding approach that leverages the generated forward prediction models that qualify for real-time adaptation using different codecs (e.g., SVT-AV1 and x265) for a variety of video datasets and resolutions. To motivate our approach and establish the promise for future fast VVC encoders, we also perform a comparative performance evaluation using both subjective and objective metrics and report on bitrate savings among all possible pairs between VVC, SVT-AV1, x265, and VP9 codecs. |
1311.1757 | Boleslaw Szymanski | Boleslaw K. Szymanski, Xin Lin, Andrea Asztalos, Sameet Sreenivasan | Failure dynamics of the global risk network | null | Scientific Reports 5:10998, June 18, 2015 | 10.1038/srep10998 | null | cs.CY cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Risks threatening modern societies form an intricately interconnected network
that often underlies crisis situations. Yet, little is known about how risk
materializations in distinct domains influence each other. Here we present an
approach in which expert assessments of risk likelihoods and influence
underlie a quantitative model of the global risk network dynamics. The modeled
risks range from environmental to economic and technological and include
difficult-to-quantify risks, such as geo-political or social ones. Using maximum
likelihood estimation, we find the optimal model parameters and demonstrate
that the model including network effects significantly outperforms the others,
uncovering the full value of the expert-collected data. We analyze the model
dynamics and study its resilience and stability. Our findings include such risk
properties as contagion potential, persistence, roles in cascades of failures
and the identity of risks most detrimental to system stability. The model
provides quantitative means for measuring the adverse effects of risk
interdependence and the materialization of risks in the network.
| [
{
"created": "Thu, 7 Nov 2013 17:26:09 GMT",
"version": "v1"
},
{
"created": "Tue, 19 Nov 2013 02:57:29 GMT",
"version": "v2"
},
{
"created": "Mon, 9 Dec 2013 04:10:35 GMT",
"version": "v3"
},
{
"created": "Mon, 19 Jan 2015 16:38:48 GMT",
"version": "v4"
},
{
"created": "Wed, 13 May 2015 07:39:51 GMT",
"version": "v5"
}
] | 2016-05-03 | [
[
"Szymanski",
"Boleslaw K.",
""
],
[
"Lin",
"Xin",
""
],
[
"Asztalos",
"Andrea",
""
],
[
"Sreenivasan",
"Sameet",
""
]
] | Risks threatening modern societies form an intricately interconnected network that often underlies crisis situations. Yet, little is known about how risk materializations in distinct domains influence each other. Here we present an approach in which expert assessments of risk likelihoods and influence underlie a quantitative model of the global risk network dynamics. The modeled risks range from environmental to economic and technological and include difficult-to-quantify risks, such as geo-political or social ones. Using maximum likelihood estimation, we find the optimal model parameters and demonstrate that the model including network effects significantly outperforms the others, uncovering the full value of the expert-collected data. We analyze the model dynamics and study its resilience and stability. Our findings include such risk properties as contagion potential, persistence, roles in cascades of failures and the identity of risks most detrimental to system stability. The model provides quantitative means for measuring the adverse effects of risk interdependence and the materialization of risks in the network. |
1903.01067 | Antoni Rosinol | Antoni Rosinol, Torsten Sattler, Marc Pollefeys, Luca Carlone | Incremental Visual-Inertial 3D Mesh Generation with Structural
Regularities | 7 pages, 5 figures, ICRA accepted | IEEE Int. Conf. Robot. Autom. (ICRA), 2019 | null | null | cs.CV cs.CG cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual-Inertial Odometry (VIO) algorithms typically rely on a point cloud
representation of the scene that does not model the topology of the
environment. A 3D mesh instead offers a richer, yet lightweight, model.
Nevertheless, building a 3D mesh out of the sparse and noisy 3D landmarks
triangulated by a VIO algorithm often results in a mesh that does not fit the
real scene. In order to regularize the mesh, previous approaches decouple state
estimation from the 3D mesh regularization step, and either limit the 3D mesh
to the current frame or let the mesh grow indefinitely. We propose instead to
tightly couple mesh regularization and state estimation by detecting and
enforcing structural regularities in a novel factor-graph formulation. We also
propose to incrementally build the mesh by restricting its extent to the
time-horizon of the VIO optimization; the resulting 3D mesh covers a larger
portion of the scene than a per-frame approach while its memory usage and
computational complexity remain bounded. We show that our approach successfully
regularizes the mesh, while improving localization accuracy, when structural
regularities are present, and remains operational in scenes without
regularities.
| [
{
"created": "Mon, 4 Mar 2019 04:24:50 GMT",
"version": "v1"
},
{
"created": "Mon, 29 Jul 2019 16:36:41 GMT",
"version": "v2"
}
] | 2019-07-30 | [
[
"Rosinol",
"Antoni",
""
],
[
"Sattler",
"Torsten",
""
],
[
"Pollefeys",
"Marc",
""
],
[
"Carlone",
"Luca",
""
]
] | Visual-Inertial Odometry (VIO) algorithms typically rely on a point cloud representation of the scene that does not model the topology of the environment. A 3D mesh instead offers a richer, yet lightweight, model. Nevertheless, building a 3D mesh out of the sparse and noisy 3D landmarks triangulated by a VIO algorithm often results in a mesh that does not fit the real scene. In order to regularize the mesh, previous approaches decouple state estimation from the 3D mesh regularization step, and either limit the 3D mesh to the current frame or let the mesh grow indefinitely. We propose instead to tightly couple mesh regularization and state estimation by detecting and enforcing structural regularities in a novel factor-graph formulation. We also propose to incrementally build the mesh by restricting its extent to the time-horizon of the VIO optimization; the resulting 3D mesh covers a larger portion of the scene than a per-frame approach while its memory usage and computational complexity remain bounded. We show that our approach successfully regularizes the mesh, while improving localization accuracy, when structural regularities are present, and remains operational in scenes without regularities. |
2010.14957 | Oliver Niggemann | Benedikt Eiteneuer and Nemanja Hranisavljevic and Oliver Niggemann | Dimensionality Reduction and Anomaly Detection for CPPS Data using
Autoencoder | Copyright IEEE 2019 | 2019 IEEE International Conference on Industrial Technology (ICIT) | 10.1109/ICIT.2019.8755116 | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Unsupervised anomaly detection (AD) is a major topic in the field of
Cyber-Physical Production Systems (CPPSs). A closely related concern is
dimensionality reduction (DR) which is: 1) often used as a preprocessing step
in an AD solution, 2) a sort of AD, if a measure of observation conformity to
the learned data manifold is provided.
We argue that the two aspects can be complementary in a CPPS anomaly
detection solution. In this work, we focus on the nonlinear autoencoder (AE) as
a DR/AD approach. The contribution of this work is: 1) we examine the
suitability of AE reconstruction error as an AD decision criterion in CPPS
data. 2) we analyze its relation to a potential second-phase AD approach in the
AE latent space 3) we evaluate the performance of the approach on three
real-world datasets. Moreover, the approach outperforms state-of-the-art
techniques, alongside a relatively simple and straightforward application.
| [
{
"created": "Wed, 28 Oct 2020 13:16:58 GMT",
"version": "v1"
}
] | 2020-10-29 | [
[
"Eiteneuer",
"Benedikt",
""
],
[
"Hranisavljevic",
"Nemanja",
""
],
[
"Niggemann",
"Oliver",
""
]
] | Unsupervised anomaly detection (AD) is a major topic in the field of Cyber-Physical Production Systems (CPPSs). A closely related concern is dimensionality reduction (DR) which is: 1) often used as a preprocessing step in an AD solution, 2) a sort of AD, if a measure of observation conformity to the learned data manifold is provided. We argue that the two aspects can be complementary in a CPPS anomaly detection solution. In this work, we focus on the nonlinear autoencoder (AE) as a DR/AD approach. The contribution of this work is: 1) we examine the suitability of AE reconstruction error as an AD decision criterion in CPPS data, 2) we analyze its relation to a potential second-phase AD approach in the AE latent space, and 3) we evaluate the performance of the approach on three real-world datasets. Moreover, the approach outperforms state-of-the-art techniques, while remaining relatively simple and straightforward to apply. |
1911.05640 | F{\i}rat Tuna | Firat Tuna | Neural Network Processing Neural Networks: An efficient way to learn
higher order functions | null | null | null | null | cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Functions are rich in meaning and can be interpreted in a variety of ways.
Neural networks were proven to be capable of approximating a large class of
functions [1]. In this paper, we propose a new class of neural networks called
"Neural Network Processing Neural Networks" (NNPNNs), which input neural
networks and numerical values, instead of just numerical values, thus enabling
neural networks to represent and process rich structures.
| [
{
"created": "Wed, 6 Nov 2019 19:15:34 GMT",
"version": "v1"
},
{
"created": "Tue, 14 Jan 2020 23:11:27 GMT",
"version": "v2"
}
] | 2020-01-16 | [
[
"Tuna",
"Firat",
""
]
] | Functions are rich in meaning and can be interpreted in a variety of ways. Neural networks were proven to be capable of approximating a large class of functions [1]. In this paper, we propose a new class of neural networks called "Neural Network Processing Neural Networks" (NNPNNs), which input neural networks and numerical values, instead of just numerical values, thus enabling neural networks to represent and process rich structures. |
1810.11584 | Rodrigo de Lamare | T. Cunha and R. C. de Lamare | Study of Joint Automatic Gain Control and MMSE Receiver Design
Techniques for Quantized Multiuser Multiple-Antenna Systems | 3 figures, 6 pages | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents the development of a joint optimization of an automatic
gain control (AGC) algorithm and a linear \textit{minimum mean square error}
(MMSE) receiver for multi-user multiple input multiple output (MU-MIMO) systems
with coarsely quantized signals. The optimization of the AGC is based on the
minimization of the \textit{mean square error} (MSE) and the proposed receive
filter takes into account the presence of the AGC and the effects due to
quantization. Moreover, we provide a lower bound on the capacity of the MU-MIMO
system by deriving an expression for the achievable rate. The performance of
the proposed Low-Resolution Aware MMSE (LRA-MMSE) receiver and AGC algorithm is
evaluated by simulations, and compared with the conventional MMSE receive
filter and Zero-Forcing (ZF) receiver using quantization resolution of 2, 3, 4
and 5 bits.
| [
{
"created": "Sat, 27 Oct 2018 02:57:59 GMT",
"version": "v1"
}
] | 2018-10-30 | [
[
"Cunha",
"T.",
""
],
[
"de Lamare",
"R. C.",
""
]
] | This paper presents the development of a joint optimization of an automatic gain control (AGC) algorithm and a linear \textit{minimum mean square error} (MMSE) receiver for multi-user multiple input multiple output (MU-MIMO) systems with coarsely quantized signals. The optimization of the AGC is based on the minimization of the \textit{mean square error} (MSE) and the proposed receive filter takes into account the presence of the AGC and the effects due to quantization. Moreover, we provide a lower bound on the capacity of the MU-MIMO system by deriving an expression for the achievable rate. The performance of the proposed Low-Resolution Aware MMSE (LRA-MMSE) receiver and AGC algorithm is evaluated by simulations, and compared with the conventional MMSE receive filter and Zero-Forcing (ZF) receiver using quantization resolution of 2, 3, 4 and 5 bits. |
2201.08901 | Shashank Shekhar | Shashank Shekhar, Avinash Patel, Mrinal Haloi, Asif Salim | An Ensemble Model for Face Liveness Detection | Accepted and presented at MLDM 2022. To be published in Lattice
journal | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | In this paper, we present a passive method to detect face presentation attack
a.k.a. face liveness detection using an ensemble deep learning technique. Face
liveness detection is one of the key steps involved in user identity
verification of customers during the online onboarding/transaction processes.
During identity verification, an unauthenticated user tries to bypass the
verification system by several means, for example, they can capture a user
photo from social media and do an imposter attack using printouts of users'
faces or using a digital photo from a mobile device and even create a more
sophisticated attack like video replay attack. We have tried to understand the
different methods of attack and created an in-house large-scale dataset
covering all the kinds of attacks to train a robust deep learning model. We
propose an ensemble method where multiple features of the face and background
regions are learned to predict whether the user is a bonafide or an attacker.
| [
{
"created": "Wed, 19 Jan 2022 12:43:39 GMT",
"version": "v1"
}
] | 2022-01-25 | [
[
"Shekhar",
"Shashank",
""
],
[
"Patel",
"Avinash",
""
],
[
"Haloi",
"Mrinal",
""
],
[
"Salim",
"Asif",
""
]
] | In this paper, we present a passive method to detect face presentation attack a.k.a. face liveness detection using an ensemble deep learning technique. Face liveness detection is one of the key steps involved in user identity verification of customers during the online onboarding/transaction processes. During identity verification, an unauthenticated user tries to bypass the verification system by several means, for example, they can capture a user photo from social media and do an imposter attack using printouts of users' faces or using a digital photo from a mobile device and even create a more sophisticated attack like video replay attack. We have tried to understand the different methods of attack and created an in-house large-scale dataset covering all the kinds of attacks to train a robust deep learning model. We propose an ensemble method where multiple features of the face and background regions are learned to predict whether the user is a bonafide or an attacker. |
2108.09378 | Alexandre Morgand | Alexandre Morgand (1) Mohamed Tamaazousti (2) and Adrien Bartoli (3)
((1) SLAMcore ltd, London, UK (2) Universit\'e Paris Saclay, CEA, LIST,
Gif-sur-Yvette, France (3) IP-UMR 6602 - CNRS/UCA/CHU, Clermont-Ferrand,
France) | A Multiple-View Geometric Model for Specularity Prediction on General
Curved Surfaces | null | null | null | null | cs.CV cs.GR | http://creativecommons.org/licenses/by/4.0/ | Specularity prediction is essential to many computer vision applications,
giving important visual cues usable in Augmented Reality (AR), Simultaneous
Localisation and Mapping (SLAM), 3D reconstruction and material modeling.
However, it is a challenging task requiring a great deal of information about the scene,
including the camera pose, the geometry of the scene, the light sources and the
material properties. Our previous work addressed this task by creating an
explicit model using an ellipsoid whose projection fits the specularity image
contours for a given camera pose. These ellipsoid-based approaches belong to a
family of models called JOint-LIght MAterial Specularity (JOLIMAS), which we
have gradually improved by removing assumptions on the scene geometry. However,
our most recent approach is still limited to uniformly curved surfaces. This
paper generalises JOLIMAS to any surface geometry while improving the quality
of specularity prediction without sacrificing computational performance. The
proposed method establishes a link between surface curvature and specularity
shape in order to lift the geometric assumptions made in previous work.
Contrary to previous work, our new model is built from a physics-based local
illumination model, namely Torrance-Sparrow, providing an improved
reconstruction. Specularity prediction using our new model is tested against
the most recent JOLIMAS version on both synthetic and real sequences with
objects of various general shapes. Our method outperforms previous approaches
in specularity prediction, including the real-time setup, as shown in the
supplementary videos.
| [
{
"created": "Fri, 20 Aug 2021 21:21:26 GMT",
"version": "v1"
},
{
"created": "Wed, 21 Dec 2022 19:09:26 GMT",
"version": "v2"
}
] | 2022-12-23 | [
[
"Morgand",
"Alexandre",
""
],
[
"Tamaazousti",
"Mohamed",
""
],
[
"Bartoli",
"Adrien",
""
]
] | Specularity prediction is essential to many computer vision applications, giving important visual cues usable in Augmented Reality (AR), Simultaneous Localisation and Mapping (SLAM), 3D reconstruction and material modeling. However, it is a challenging task requiring numerous information from the scene including the camera pose, the geometry of the scene, the light sources and the material properties. Our previous work addressed this task by creating an explicit model using an ellipsoid whose projection fits the specularity image contours for a given camera pose. These ellipsoid-based approaches belong to a family of models called JOint-LIght MAterial Specularity (JOLIMAS), which we have gradually improved by removing assumptions on the scene geometry. However, our most recent approach is still limited to uniformly curved surfaces. This paper generalises JOLIMAS to any surface geometry while improving the quality of specularity prediction, without sacrificing computation performances. The proposed method establishes a link between surface curvature and specularity shape in order to lift the geometric assumptions made in previous work. Contrary to previous work, our new model is built from a physics-based local illumination model namely Torrance-Sparrow, providing an improved reconstruction. Specularity prediction using our new model is tested against the most recent JOLIMAS version on both synthetic and real sequences with objects of various general shapes. Our method outperforms previous approaches in specularity prediction, including the real-time setup, as shown in the supplementary videos. |
1006.3128 | Galen Reeves | Galen Reeves and Michael Gastpar | The Sampling Rate-Distortion Tradeoff for Sparsity Pattern Recovery in
Compressed Sensing | null | IEEE Transactions on Information Theory, vol. 58, no. 10, pp.
3065-3092, May 2012 | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recovery of the sparsity pattern (or support) of an unknown sparse vector
from a limited number of noisy linear measurements is an important problem in
compressed sensing. In the high-dimensional setting, it is known that recovery
with a vanishing fraction of errors is impossible if the measurement rate and
the per-sample signal-to-noise ratio (SNR) are finite constants, independent of
the vector length. In this paper, it is shown that recovery with an arbitrarily
small but constant fraction of errors is, however, possible, and that in some
cases computationally simple estimators are near-optimal. Bounds on the
measurement rate needed to attain a desired fraction of errors are given in
terms of the SNR and various key parameters of the unknown vector for several
different recovery algorithms. The tightness of the bounds, in a scaling sense,
as a function of the SNR and the fraction of errors, is established by
comparison with existing information-theoretic necessary bounds. Near
optimality is shown for a wide variety of practically motivated signal models.
| [
{
"created": "Wed, 16 Jun 2010 03:24:19 GMT",
"version": "v1"
},
{
"created": "Mon, 25 Jun 2012 19:19:06 GMT",
"version": "v2"
}
] | 2012-06-26 | [
[
"Reeves",
"Galen",
""
],
[
"Gastpar",
"Michael",
""
]
] | Recovery of the sparsity pattern (or support) of an unknown sparse vector from a limited number of noisy linear measurements is an important problem in compressed sensing. In the high-dimensional setting, it is known that recovery with a vanishing fraction of errors is impossible if the measurement rate and the per-sample signal-to-noise ratio (SNR) are finite constants, independent of the vector length. In this paper, it is shown that recovery with an arbitrarily small but constant fraction of errors is, however, possible, and that in some cases computationally simple estimators are near-optimal. Bounds on the measurement rate needed to attain a desired fraction of errors are given in terms of the SNR and various key parameters of the unknown vector for several different recovery algorithms. The tightness of the bounds, in a scaling sense, as a function of the SNR and the fraction of errors, is established by comparison with existing information-theoretic necessary bounds. Near optimality is shown for a wide variety of practically motivated signal models. |
2303.17338 | Kaya Turgut | Kaya Turgut and Helin Dutagaci | Local region-learning modules for point cloud classification | null | Machine Vision and Applications 35, 16 (2024) | 10.1007/s00138-023-01495-y | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Data organization via forming local regions is an integral part of deep
learning networks that process 3D point clouds in a hierarchical manner. At
each level, the point cloud is sampled to extract representative points and
these points serve as the centers of local regions. The organization of local
regions is of considerable importance since it determines the location and size
of the receptive field at a particular layer of feature aggregation. In this
paper, we present two local region-learning modules: Center Shift Module to
infer the appropriate shift for each center point, and Radius Update Module to
alter the radius of each local region. The parameters of the modules are
learned through optimizing the loss associated with the particular task within
an end-to-end network. We present alternatives for these modules through
various ways of modeling the interactions of the features and locations of 3D
points in the point cloud. We integrated both modules independently and
together to the PointNet++ and PointCNN object classification architectures,
and demonstrated that the modules contributed to a significant increase in
classification accuracy for the ScanObjectNN data set consisting of scans of
real-world objects. Our further experiments on ShapeNet data set showed that
the modules are also effective on 3D CAD models.
| [
{
"created": "Thu, 30 Mar 2023 12:45:46 GMT",
"version": "v1"
},
{
"created": "Tue, 19 Dec 2023 10:06:08 GMT",
"version": "v2"
}
] | 2023-12-27 | [
[
"Turgut",
"Kaya",
""
],
[
"Dutagaci",
"Helin",
""
]
] | Data organization via forming local regions is an integral part of deep learning networks that process 3D point clouds in a hierarchical manner. At each level, the point cloud is sampled to extract representative points and these points are used to be centers of local regions. The organization of local regions is of considerable importance since it determines the location and size of the receptive field at a particular layer of feature aggregation. In this paper, we present two local region-learning modules: Center Shift Module to infer the appropriate shift for each center point, and Radius Update Module to alter the radius of each local region. The parameters of the modules are learned through optimizing the loss associated with the particular task within an end-to-end network. We present alternatives for these modules through various ways of modeling the interactions of the features and locations of 3D points in the point cloud. We integrated both modules independently and together to the PointNet++ and PointCNN object classification architectures, and demonstrated that the modules contributed to a significant increase in classification accuracy for the ScanObjectNN data set consisting of scans of real-world objects. Our further experiments on ShapeNet data set showed that the modules are also effective on 3D CAD models. |
2010.11127 | Gokcen Yilmaz Dayanikli | Gokcen Y. Dayanikli, Rees R. Hatch, Ryan M. Gerdes, Hongjie Wang,
Regan Zane | Electromagnetic Sensor and Actuator Attacks on Power Converters for
Electric Vehicles | Accepted by IEEE S&P Workshop on the Internet of Safe Things 2020 | null | 10.1109/SPW50608.2020.00032 | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Alleviating range anxiety for electric vehicles (i.e., whether such vehicles
can be relied upon to travel long distances in a timely manner) is critical for
sustainable transportation. Extremely fast charging (XFC), whereby electric
vehicles (EV) can be quickly recharged in the time frame it takes to refuel an
internal combustion engine, has been proposed to alleviate this concern. A
critical component of these chargers is the efficient and proper operation of
power converters that convert AC to DC power and otherwise regulate power
delivery to vehicles. These converters rely on the integrity of sensor and
actuation signals. In this work, the operation of state-of-the-art XFC
converters is assessed in adversarial conditions, specifically against
Intentional Electromagnetic Interference Attacks (IEMI). The targeted system is
analyzed with the goal of determining possible weak points for IEMI, viz.
voltage and current sensor outputs and gate control signals. This work
demonstrates that, with relatively low power levels, an adversary is able to
manipulate the voltage and current sensor outputs necessary to ensure the
proper operation of the converters. Furthermore, in the first attack of its
kind, it is shown that the gate signal that controls the converter switches can
be manipulated, to catastrophic effect; i.e., it is possible for an attacker to
control the switching state of individual transistors to cause irreparable
damage to the converter and associated systems. Finally, a discussion of
countermeasures for hardware designers to mitigate IEMI-based attacks is
provided.
| [
{
"created": "Wed, 21 Oct 2020 16:38:24 GMT",
"version": "v1"
}
] | 2021-01-01 | [
[
"Dayanikli",
"Gokcen Y.",
""
],
[
"Hatch",
"Rees R.",
""
],
[
"Gerdes",
"Ryan M.",
""
],
[
"Wang",
"Hongjie",
""
],
[
"Zane",
"Regan",
""
]
] | Alleviating range anxiety for electric vehicles (i.e., whether such vehicles can be relied upon to travel long distances in a timely manner) is critical for sustainable transportation. Extremely fast charging (XFC), whereby electric vehicles (EV) can be quickly recharged in the time frame it takes to refuel an internal combustion engine, has been proposed to alleviate this concern. A critical component of these chargers is the efficient and proper operation of power converters that convert AC to DC power and otherwise regulate power delivery to vehicles. These converters rely on the integrity of sensor and actuation signals. In this work the operation of state-of-the art XFC converters is assessed in adversarial conditions, specifically against Intentional Electromagnetic Interference Attacks (IEMI). The targeted system is analyzed with the goal of determining possible weak points for IEMI, viz. voltage and current sensor outputs and gate control signals. This work demonstrates that, with relatively low power levels, an adversary is able to manipulate the voltage and current sensor outputs necessary to ensure the proper operation of the converters. Furthermore, in the first attack of its kind, it is shown that the gate signal that controls the converter switches can be manipulated, to catastrophic effect; i.e., it is possible for an attacker to control the switching state of individual transistors to cause irreparable damage to the converter and associated systems. Finally, a discussion of countermeasures for hardware designers to mitigate IEMI-based attacks is provided. |
2305.00429 | Rathindra Nath Dutta | Rathindra Nath Dutta, Subhojit Sarkar and Sasthi C. Ghosh | A Dynamic Obstacle Tracking Strategy for Proactive Handoffs in
Millimeter-wave Networks | null | null | null | null | cs.NI eess.SP | http://creativecommons.org/licenses/by/4.0/ | Stringent line-of-sight demands necessitated by the fast attenuating nature
of millimeter waves (mmWaves) through obstacles pose one of the central
problems of next generation wireless networks. These mmWave links are easily
disrupted due to obstacles, including vehicles and pedestrians, which cause
degradation in link quality and even link failure. Dynamic obstacles are
usually tracked by dedicated tracking hardware like RGB-D cameras, which
usually have small ranges, and hence lead to prohibitively increased deployment
costs to achieve complete coverage of the deployment area. In this manuscript,
we propose an altogether different approach to track multiple dynamic obstacles
in an mmWave network, solely based on short-term historical link failure
information, without resorting to any dedicated tracking hardware. After
proving that the said problem is NP-complete, we employ a greedy set-cover
based approach to solve it. Using the obtained trajectories, we perform
proactive handoffs for at-risk links. We compare our approach with an RGB-D
camera-based approach and show that our approach provides better tracking and
handoff performance when the camera coverage is low to moderate, which is
often the case in real deployment scenarios.
| [
{
"created": "Sun, 30 Apr 2023 09:12:11 GMT",
"version": "v1"
},
{
"created": "Wed, 12 Jul 2023 14:03:17 GMT",
"version": "v2"
}
] | 2023-07-13 | [
[
"Dutta",
"Rathindra Nath",
""
],
[
"Sarkar",
"Subhojit",
""
],
[
"Ghosh",
"Sasthi C.",
""
]
] | Stringent line-of-sight demands necessitated by the fast attenuating nature of millimeter waves (mmWaves) through obstacles pose one of the central problems of next generation wireless networks. These mmWave links are easily disrupted due to obstacles, including vehicles and pedestrians, which cause degradation in link quality and even link failure. Dynamic obstacles are usually tracked by dedicated tracking hardware like RGB-D cameras, which usually have small ranges, and hence lead to prohibitively increased deployment costs to achieve complete coverage of the deployment area. In this manuscript, we propose an altogether different approach to track multiple dynamic obstacles in an mmWave network, solely based on short-term historical link failure information, without resorting to any dedicated tracking hardware. After proving that the said problem is NP-complete, we employ a greedy set-cover based approach to solve it. Using the obtained trajectories, we perform proactive handoffs for at-risk links. We compare our approach with an RGB-D camera-based approach and show that our approach provides better tracking and handoff performances when the camera coverage is low to moderate, which is often the case in real deployment scenarios. |
1703.09643 | Tshilidzi Marwala | Bo Xing and Tshilidzi Marwala | Implications of the Fourth Industrial Age on Higher Education | Submitted to The Thinker | null | null | null | cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Higher education in the fourth industrial revolution, HE 4.0, is a complex,
dialectical and exciting opportunity which can potentially transform society
for the better. The fourth industrial revolution is powered by artificial
intelligence, and it will transform the workplace from task-based
characteristics to human-centred characteristics. Because of the
convergence of man and machine, it will reduce the subject distance between
humanities and social science as well as science and technology. This will
necessarily require much more interdisciplinary teaching, research and
innovation. This paper explores the impact of HE 4.0 on the mission of a
university which is teaching, research (including innovation) and service.
| [
{
"created": "Fri, 17 Mar 2017 10:12:27 GMT",
"version": "v1"
}
] | 2017-03-29 | [
[
"Xing",
"Bo",
""
],
[
"Marwala",
"Tshilidzi",
""
]
] | Higher education in the fourth industrial revolution, HE 4.0, is a complex, dialectical and exciting opportunity which can potentially transform society for the better. The fourth industrial revolution is powered by artificial intelligence and it will transform the workplace from tasks based characteristics to the human centred characteristics. Because of the convergence of man and machine, it will reduce the subject distance between humanities and social science as well as science and technology. This will necessarily require much more interdisciplinary teaching, research and innovation. This paper explores the impact of HE 4.0 on the mission of a university which is teaching, research (including innovation) and service. |
1601.03481 | Tirtharaj Dash | Tirtharaj Dash, H.S. Behera | A Fuzzy MLP Approach for Non-linear Pattern Classification | The final version of this paper has been published in "International
Conference on Communication and Computing (ICC-2014)"
[http://www.elsevierst.com/conference_book_download_chapter.php?cbid=86#chapter41] | In Proc: K.R. Venugopal, S.C. Lingareddy (eds.) International
Conference on Communication and Computing (ICC-2014), Bangalore, India (June
12-14, 2014), Computer Networks and Security, 314-323 | null | null | cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In decision-making problems, pattern classification is a complex
and crucial task. Pattern classification using a multilayer perceptron (MLP)
trained with back-propagation learning becomes much more complex with an
increase in the number of layers, nodes and epochs, which ultimately increases
computational time [31]. In this paper, an attempt has been made to use a fuzzy
MLP and its learning algorithm for pattern classification. The time and space
complexities of the algorithm have been analyzed. A training performance
comparison has been carried out between MLP and the proposed fuzzy-MLP model by
considering six cases. Results are noted against different learning rates
ranging from 0 to 1. A new performance evaluation factor 'convergence gain' has
been introduced. It is observed that the number of epochs drastically reduced
and performance increased compared to MLP. The average and minimum gain has
been found to be 93% and 75% respectively. The best gain is found to be 95% and
is obtained by setting the learning rate to 0.55.
| [
{
"created": "Sat, 19 Sep 2015 12:45:19 GMT",
"version": "v1"
}
] | 2016-01-15 | [
[
"Dash",
"Tirtharaj",
""
],
[
"Behera",
"H. S.",
""
]
] | In case of decision making problems, classification of pattern is a complex and crucial task. Pattern classification using multilayer perceptron (MLP) trained with back propagation learning becomes much complex with increase in number of layers, number of nodes and number of epochs and ultimate increases computational time [31]. In this paper, an attempt has been made to use fuzzy MLP and its learning algorithm for pattern classification. The time and space complexities of the algorithm have been analyzed. A training performance comparison has been carried out between MLP and the proposed fuzzy-MLP model by considering six cases. Results are noted against different learning rates ranging from 0 to 1. A new performance evaluation factor 'convergence gain' has been introduced. It is observed that the number of epochs drastically reduced and performance increased compared to MLP. The average and minimum gain has been found to be 93% and 75% respectively. The best gain is found to be 95% and is obtained by setting the learning rate to 0.55. |
1903.03096 | Pascal Lamblin | Eleni Triantafillou, Tyler Zhu, Vincent Dumoulin, Pascal Lamblin, Utku
Evci, Kelvin Xu, Ross Goroshin, Carles Gelada, Kevin Swersky, Pierre-Antoine
Manzagol, Hugo Larochelle | Meta-Dataset: A Dataset of Datasets for Learning to Learn from Few
Examples | Code available at https://github.com/google-research/meta-dataset | International Conference on Learning Representations (2020) | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Few-shot classification refers to learning a classifier for new classes given
only a few examples. While a plethora of models have emerged to tackle it, we
find the procedure and datasets that are used to assess their progress lacking.
To address this limitation, we propose Meta-Dataset: a new benchmark for
training and evaluating models that is large-scale, consists of diverse
datasets, and presents more realistic tasks. We experiment with popular
baselines and meta-learners on Meta-Dataset, along with a competitive method
that we propose. We analyze performance as a function of various
characteristics of test tasks and examine the models' ability to leverage
diverse training sources for improving their generalization. We also propose a
new set of baselines for quantifying the benefit of meta-learning in
Meta-Dataset. Our extensive experimentation has uncovered important research
challenges and we hope to inspire work in these directions.
| [
{
"created": "Thu, 7 Mar 2019 18:48:55 GMT",
"version": "v1"
},
{
"created": "Tue, 22 Oct 2019 16:04:30 GMT",
"version": "v2"
},
{
"created": "Fri, 14 Feb 2020 22:22:53 GMT",
"version": "v3"
},
{
"created": "Wed, 8 Apr 2020 15:58:20 GMT",
"version": "v4"
}
] | 2020-04-09 | [
[
"Triantafillou",
"Eleni",
""
],
[
"Zhu",
"Tyler",
""
],
[
"Dumoulin",
"Vincent",
""
],
[
"Lamblin",
"Pascal",
""
],
[
"Evci",
"Utku",
""
],
[
"Xu",
"Kelvin",
""
],
[
"Goroshin",
"Ross",
""
],
[
"Gelada",
"Carles",
""
],
[
"Swersky",
"Kevin",
""
],
[
"Manzagol",
"Pierre-Antoine",
""
],
[
"Larochelle",
"Hugo",
""
]
] | Few-shot classification refers to learning a classifier for new classes given only a few examples. While a plethora of models have emerged to tackle it, we find the procedure and datasets that are used to assess their progress lacking. To address this limitation, we propose Meta-Dataset: a new benchmark for training and evaluating models that is large-scale, consists of diverse datasets, and presents more realistic tasks. We experiment with popular baselines and meta-learners on Meta-Dataset, along with a competitive method that we propose. We analyze performance as a function of various characteristics of test tasks and examine the models' ability to leverage diverse training sources for improving their generalization. We also propose a new set of baselines for quantifying the benefit of meta-learning in Meta-Dataset. Our extensive experimentation has uncovered important research challenges and we hope to inspire work in these directions. |
2406.08796 | Janghoon Han | Janghoon Han, Changho Lee, Joongbo Shin, Stanley Jungkyu Choi, Honglak
Lee, Kyunghoon Bae | Deep Exploration of Cross-Lingual Zero-Shot Generalization in
Instruction Tuning | Findings of ACL 2024 (Camera-ready), by Janghoon Han and Changho Lee,
with equal contribution | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Instruction tuning has emerged as a powerful technique, significantly
boosting zero-shot performance on unseen tasks. While recent work has explored
cross-lingual generalization by applying instruction tuning to multilingual
models, previous studies have primarily focused on English, with a limited
exploration of non-English tasks. For an in-depth exploration of cross-lingual
generalization in instruction tuning, we perform instruction tuning
individually for two distinct language meta-datasets. Subsequently, we assess
the performance on unseen tasks in a language different from the one used for
training. To facilitate this investigation, we introduce a novel non-English
meta-dataset named "KORANI" (Korean Natural Instruction), comprising 51 Korean
benchmarks. Moreover, we design cross-lingual templates to mitigate
discrepancies in language and instruction-format of the template between
training and inference within the cross-lingual setting. Our experiments reveal
consistent improvements through cross-lingual generalization in both English
and Korean, outperforming the baseline by average scores of 20.7\% and 13.6\%,
respectively. Remarkably, these enhancements are comparable to those achieved
by monolingual instruction tuning and even surpass them in some tasks. The
result underscores the significance of relevant data acquisition across
languages over linguistic congruence with unseen tasks during instruction
tuning.
| [
{
"created": "Thu, 13 Jun 2024 04:10:17 GMT",
"version": "v1"
}
] | 2024-06-14 | [
[
"Han",
"Janghoon",
""
],
[
"Lee",
"Changho",
""
],
[
"Shin",
"Joongbo",
""
],
[
"Choi",
"Stanley Jungkyu",
""
],
[
"Lee",
"Honglak",
""
],
[
"Bae",
"Kyunghoon",
""
]
] | Instruction tuning has emerged as a powerful technique, significantly boosting zero-shot performance on unseen tasks. While recent work has explored cross-lingual generalization by applying instruction tuning to multilingual models, previous studies have primarily focused on English, with a limited exploration of non-English tasks. For an in-depth exploration of cross-lingual generalization in instruction tuning, we perform instruction tuning individually for two distinct language meta-datasets. Subsequently, we assess the performance on unseen tasks in a language different from the one used for training. To facilitate this investigation, we introduce a novel non-English meta-dataset named "KORANI" (Korean Natural Instruction), comprising 51 Korean benchmarks. Moreover, we design cross-lingual templates to mitigate discrepancies in language and instruction-format of the template between training and inference within the cross-lingual setting. Our experiments reveal consistent improvements through cross-lingual generalization in both English and Korean, outperforming baseline by average scores of 20.7\% and 13.6\%, respectively. Remarkably, these enhancements are comparable to those achieved by monolingual instruction tuning and even surpass them in some tasks. The result underscores the significance of relevant data acquisition across languages over linguistic congruence with unseen tasks during instruction tuning. |
2405.19534 | Angelica Chen | Angelica Chen, Sadhika Malladi, Lily H. Zhang, Xinyi Chen, Qiuyi
Zhang, Rajesh Ranganath, Kyunghyun Cho | Preference Learning Algorithms Do Not Learn Preference Rankings | null | null | null | null | cs.LG cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | Preference learning algorithms (e.g., RLHF and DPO) are frequently used to
steer LLMs to produce generations that are more preferred by humans, but our
understanding of their inner workings is still limited. In this work, we study
the conventional wisdom that preference learning trains models to assign higher
likelihoods to more preferred outputs than less preferred outputs, measured via
$\textit{ranking accuracy}$. Surprisingly, we find that most state-of-the-art
preference-tuned models achieve a ranking accuracy of less than 60% on common
preference datasets. We furthermore derive the $\textit{idealized ranking
accuracy}$ that a preference-tuned LLM would achieve if it optimized the DPO or
RLHF objective perfectly. We demonstrate that existing models exhibit a
significant $\textit{alignment gap}$ -- $\textit{i.e.}$, a gap between the
observed and idealized ranking accuracies. We attribute this discrepancy to the
DPO objective, which is empirically and theoretically ill-suited to fix even
mild ranking errors in the reference model, and derive a simple and efficient
formula for quantifying the difficulty of learning a given preference
datapoint. Finally, we demonstrate that ranking accuracy strongly correlates
with the empirically popular win rate metric when the model is close to the
reference model used in the objective, shedding further light on the
differences between on-policy (e.g., RLHF) and off-policy (e.g., DPO)
preference learning algorithms.
| [
{
"created": "Wed, 29 May 2024 21:29:44 GMT",
"version": "v1"
}
] | 2024-05-31 | [
[
"Chen",
"Angelica",
""
],
[
"Malladi",
"Sadhika",
""
],
[
"Zhang",
"Lily H.",
""
],
[
"Chen",
"Xinyi",
""
],
[
"Zhang",
"Qiuyi",
""
],
[
"Ranganath",
"Rajesh",
""
],
[
"Cho",
"Kyunghyun",
""
]
] | Preference learning algorithms (e.g., RLHF and DPO) are frequently used to steer LLMs to produce generations that are more preferred by humans, but our understanding of their inner workings is still limited. In this work, we study the conventional wisdom that preference learning trains models to assign higher likelihoods to more preferred outputs than less preferred outputs, measured via $\textit{ranking accuracy}$. Surprisingly, we find that most state-of-the-art preference-tuned models achieve a ranking accuracy of less than 60% on common preference datasets. We furthermore derive the $\textit{idealized ranking accuracy}$ that a preference-tuned LLM would achieve if it optimized the DPO or RLHF objective perfectly. We demonstrate that existing models exhibit a significant $\textit{alignment gap}$ -- $\textit{i.e.}$, a gap between the observed and idealized ranking accuracies. We attribute this discrepancy to the DPO objective, which is empirically and theoretically ill-suited to fix even mild ranking errors in the reference model, and derive a simple and efficient formula for quantifying the difficulty of learning a given preference datapoint. Finally, we demonstrate that ranking accuracy strongly correlates with the empirically popular win rate metric when the model is close to the reference model used in the objective, shedding further light on the differences between on-policy (e.g., RLHF) and off-policy (e.g., DPO) preference learning algorithms. |
2209.06345 | Chenhui Zhao | Chenhui Zhao and Xiang Li and Rabih Younes | Self-supervised Multi-Modal Video Forgery Attack Detection | null | null | null | null | cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Video forgery attack threatens the surveillance system by replacing the video
captures with unrealistic synthesis, which can be powered by the latest
augmented reality and virtual reality technologies. From the machine perception aspect,
visual objects often have RF signatures that are naturally synchronized with
them during recording. In contrast to video captures, the RF signatures are
more difficult to attack given their concealed and ubiquitous nature. In this
work, we investigate multimodal video forgery attack detection methods using
both vision and wireless modalities. Since wireless signal-based human
perception is environmentally sensitive, we propose a self-supervised training
strategy to enable the system to work without external annotation and thus
adapt to different environments. Our method achieves a perfect human detection
accuracy and a high forgery attack detection accuracy of 94.38%, which is
comparable with supervised methods.
| [
{
"created": "Tue, 13 Sep 2022 23:41:26 GMT",
"version": "v1"
},
{
"created": "Sat, 24 Dec 2022 00:28:18 GMT",
"version": "v2"
}
] | 2022-12-27 | [
[
"Zhao",
"Chenhui",
""
],
[
"Li",
"Xiang",
""
],
[
"Younes",
"Rabih",
""
]
] | Video forgery attack threatens the surveillance system by replacing the video captures with unrealistic synthesis, which can be powered by the latest augmented reality and virtual reality technologies. From the machine perception aspect, visual objects often have RF signatures that are naturally synchronized with them during recording. In contrast to video captures, the RF signatures are more difficult to attack given their concealed and ubiquitous nature. In this work, we investigate multimodal video forgery attack detection methods using both vision and wireless modalities. Since wireless signal-based human perception is environmentally sensitive, we propose a self-supervised training strategy to enable the system to work without external annotation and thus adapt to different environments. Our method achieves a perfect human detection accuracy and a high forgery attack detection accuracy of 94.38%, which is comparable with supervised methods. |
2404.12984 | Marek Wodzinski | Mateusz Daniol, Daria Hemmerling, Jakub Sikora, Pawel Jemiolo, Marek
Wodzinski, Magdalena Wojcik-Pedziwiatr | Eye-tracking in Mixed Reality for Diagnosis of Neurodegenerative
Diseases | null | null | null | null | cs.HC cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Parkinson's disease ranks as the second most prevalent neurodegenerative
disorder globally. This research aims to develop a system leveraging Mixed
Reality capabilities for tracking and assessing eye movements. In this paper,
we present a medical scenario and outline the development of an application
designed to capture eye-tracking signals through Mixed Reality technology for
the evaluation of neurodegenerative diseases. Additionally, we introduce a
pipeline for extracting clinically relevant features from eye-gaze analysis,
describing the capabilities of the proposed system from a medical perspective.
The study involved a cohort of healthy control individuals and patients
suffering from Parkinson's disease, showcasing the feasibility and potential of
the proposed technology for non-intrusive monitoring of eye movement patterns
for the diagnosis of neurodegenerative diseases.
Clinical relevance - Developing a non-invasive biomarker for Parkinson's
disease is urgently needed to accurately detect the disease's onset. This would
allow for the timely introduction of neuroprotective treatment at the earliest
stage and enable the continuous monitoring of intervention outcomes. The
ability to detect subtle changes in eye movements allows for early diagnosis,
offering a critical window for intervention before more pronounced symptoms
emerge. Eye tracking provides objective and quantifiable biomarkers, ensuring
reliable assessments of disease progression and cognitive function. The eye
gaze analysis using Mixed Reality glasses is wireless, facilitating convenient
assessments in both home and hospital settings. The approach offers the
advantage of utilizing hardware that requires no additional specialized
attachments, enabling examinations through personal eyewear.
| [
{
"created": "Fri, 19 Apr 2024 16:34:15 GMT",
"version": "v1"
},
{
"created": "Mon, 3 Jun 2024 10:45:42 GMT",
"version": "v2"
}
] | 2024-06-04 | [
[
"Daniol",
"Mateusz",
""
],
[
"Hemmerling",
"Daria",
""
],
[
"Sikora",
"Jakub",
""
],
[
"Jemiolo",
"Pawel",
""
],
[
"Wodzinski",
"Marek",
""
],
[
"Wojcik-Pedziwiatr",
"Magdalena",
""
]
] | Parkinson's disease ranks as the second most prevalent neurodegenerative disorder globally. This research aims to develop a system leveraging Mixed Reality capabilities for tracking and assessing eye movements. In this paper, we present a medical scenario and outline the development of an application designed to capture eye-tracking signals through Mixed Reality technology for the evaluation of neurodegenerative diseases. Additionally, we introduce a pipeline for extracting clinically relevant features from eye-gaze analysis, describing the capabilities of the proposed system from a medical perspective. The study involved a cohort of healthy control individuals and patients suffering from Parkinson's disease, showcasing the feasibility and potential of the proposed technology for non-intrusive monitoring of eye movement patterns for the diagnosis of neurodegenerative diseases. Clinical relevance - Developing a non-invasive biomarker for Parkinson's disease is urgently needed to accurately detect the disease's onset. This would allow for the timely introduction of neuroprotective treatment at the earliest stage and enable the continuous monitoring of intervention outcomes. The ability to detect subtle changes in eye movements allows for early diagnosis, offering a critical window for intervention before more pronounced symptoms emerge. Eye tracking provides objective and quantifiable biomarkers, ensuring reliable assessments of disease progression and cognitive function. The eye gaze analysis using Mixed Reality glasses is wireless, facilitating convenient assessments in both home and hospital settings. The approach offers the advantage of utilizing hardware that requires no additional specialized attachments, enabling examinations through personal eyewear. |
1205.2931 | Douglas S Bridges | Douglas S Bridges (University of Canterbury) | Precompact Apartness Spaces | null | Logical Methods in Computer Science, Volume 8, Issue 2 (June 25,
2012) lmcs:1052 | 10.2168/LMCS-8(2:15)2012 | null | cs.LO math.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a notion of precompactness, and study some of its properties, in
the context of apartness spaces whose apartness structure is not necessarily
induced by any uniform one. The presentation lies entirely within a Bishop-style
constructive framework, and is a contribution to the ongoing development of the
constructive theories of apartness and uniformity.
| [
{
"created": "Mon, 14 May 2012 02:29:38 GMT",
"version": "v1"
},
{
"created": "Mon, 18 Jun 2012 17:06:29 GMT",
"version": "v2"
},
{
"created": "Thu, 21 Jun 2012 21:43:49 GMT",
"version": "v3"
},
{
"created": "Mon, 25 Jun 2012 11:20:03 GMT",
"version": "v4"
}
] | 2015-07-01 | [
[
"Bridges",
"Douglas S",
"",
"University of Canterbury"
]
] | We present a notion of precompactness, and study some of its properties, in the context of apartness spaces whose apartness structure is not necessarily induced by any uniform one. The presentation lies entirely within a Bishop-style constructive framework, and is a contribution to the ongoing development of the constructive theories of apartness and uniformity. |
2010.05324 | Tharindu Ranasinghe Mr | Tharindu Ranasinghe, Marcos Zampieri | Multilingual Offensive Language Identification with Cross-lingual
Embeddings | Accepted to EMNLP 2020 | null | null | null | cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Offensive content is pervasive in social media and a reason for concern to
companies and government organizations. Several studies have been recently
published investigating methods to detect the various forms of such content
(e.g. hate speech, cyberbullying, and cyberaggression). The clear majority of
these studies deal with English, partially because most annotated datasets
available contain English data. In this paper, we take advantage of English
data available by applying cross-lingual contextual word embeddings and
transfer learning to make predictions in languages with fewer resources. We
project predictions on comparable data in Bengali, Hindi, and Spanish and we
report results of 0.8415 F1 macro for Bengali, 0.8568 F1 macro for Hindi, and
0.7513 F1 macro for Spanish. Finally, we show that our approach compares
favorably to the best systems submitted to recent shared tasks on these three
languages, confirming the robustness of cross-lingual contextual embeddings and
transfer learning for this task.
| [
{
"created": "Sun, 11 Oct 2020 19:17:24 GMT",
"version": "v1"
}
] | 2020-10-13 | [
[
"Ranasinghe",
"Tharindu",
""
],
[
"Zampieri",
"Marcos",
""
]
] | Offensive content is pervasive in social media and a reason for concern to companies and government organizations. Several studies have been recently published investigating methods to detect the various forms of such content (e.g. hate speech, cyberbullying, and cyberaggression). The clear majority of these studies deal with English, partially because most annotated datasets available contain English data. In this paper, we take advantage of English data available by applying cross-lingual contextual word embeddings and transfer learning to make predictions in languages with fewer resources. We project predictions on comparable data in Bengali, Hindi, and Spanish and we report results of 0.8415 F1 macro for Bengali, 0.8568 F1 macro for Hindi, and 0.7513 F1 macro for Spanish. Finally, we show that our approach compares favorably to the best systems submitted to recent shared tasks on these three languages, confirming the robustness of cross-lingual contextual embeddings and transfer learning for this task. |
2302.00032 | Deniz Oktay | Deniz Oktay, Mehran Mirramezani, Eder Medina, Ryan P. Adams | Neuromechanical Autoencoders: Learning to Couple Elastic and Neural
Network Nonlinearity | ICLR 2023 Spotlight | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Intelligent biological systems are characterized by their embodiment in a
complex environment and the intimate interplay between their nervous systems
and the nonlinear mechanical properties of their bodies. This coordination, in
which the dynamics of the motor system co-evolved to reduce the computational
burden on the brain, is referred to as ``mechanical intelligence'' or
``morphological computation''. In this work, we seek to develop machine
learning analogs of this process, in which we jointly learn the morphology of
complex nonlinear elastic solids along with a deep neural network to control
it. By using a specialized differentiable simulator of elastic mechanics
coupled to conventional deep learning architectures -- which we refer to as
neuromechanical autoencoders -- we are able to learn to perform morphological
computation via gradient descent. Key to our approach is the use of mechanical
metamaterials -- cellular solids, in particular -- as the morphological
substrate. Just as deep neural networks provide flexible and
massively-parametric function approximators for perceptual and control tasks,
cellular solid metamaterials are promising as a rich and learnable space for
approximating a variety of actuation tasks. In this work we take advantage of
these complementary computational concepts to co-design materials and neural
network controls to achieve nonintuitive mechanical behavior. We demonstrate in
simulation how it is possible to achieve translation, rotation, and shape
matching, as well as a ``digital MNIST'' task. We additionally manufacture and
evaluate one of the designs to verify its real-world behavior.
| [
{
"created": "Tue, 31 Jan 2023 19:04:28 GMT",
"version": "v1"
}
] | 2023-02-02 | [
[
"Oktay",
"Deniz",
""
],
[
"Mirramezani",
"Mehran",
""
],
[
"Medina",
"Eder",
""
],
[
"Adams",
"Ryan P.",
""
]
] | Intelligent biological systems are characterized by their embodiment in a complex environment and the intimate interplay between their nervous systems and the nonlinear mechanical properties of their bodies. This coordination, in which the dynamics of the motor system co-evolved to reduce the computational burden on the brain, is referred to as ``mechanical intelligence'' or ``morphological computation''. In this work, we seek to develop machine learning analogs of this process, in which we jointly learn the morphology of complex nonlinear elastic solids along with a deep neural network to control it. By using a specialized differentiable simulator of elastic mechanics coupled to conventional deep learning architectures -- which we refer to as neuromechanical autoencoders -- we are able to learn to perform morphological computation via gradient descent. Key to our approach is the use of mechanical metamaterials -- cellular solids, in particular -- as the morphological substrate. Just as deep neural networks provide flexible and massively-parametric function approximators for perceptual and control tasks, cellular solid metamaterials are promising as a rich and learnable space for approximating a variety of actuation tasks. In this work we take advantage of these complementary computational concepts to co-design materials and neural network controls to achieve nonintuitive mechanical behavior. We demonstrate in simulation how it is possible to achieve translation, rotation, and shape matching, as well as a ``digital MNIST'' task. We additionally manufacture and evaluate one of the designs to verify its real-world behavior. |
1907.11496 | Xin Wang | Xin Wang, Bo Wu, Yun Ye, Yueqi Zhong | Outfit Compatibility Prediction and Diagnosis with Multi-Layered
Comparison Network | 9 pages, 6 figures, Proceedings of the 27th ACM International
Conference on Multimedia | null | 10.1145/3343031.3350909 | null | cs.CV cs.LG cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing works about fashion outfit compatibility focus on predicting the
overall compatibility of a set of fashion items with their information from
different modalities. However, few works explore how to explain the
prediction, which limits the persuasiveness and effectiveness of the model. In
this work, we propose an approach to not only predict but also diagnose the
outfit compatibility. We introduce an end-to-end framework for this goal, which
features: (1) The overall compatibility is learned from all type-specified
pairwise similarities between items, and the backpropagation gradients are used
to diagnose the incompatible factors. (2) We leverage the hierarchy of CNN and
compare the features at different layers to take into account the
compatibilities of different aspects from the low level (such as color,
texture) to the high level (such as style). To support the proposed method, we
build a new type-specified outfit dataset named Polyvore-T based on Polyvore
dataset. We compare our method with the prior state-of-the-art in two tasks:
outfit compatibility prediction and fill-in-the-blank. Experiments show that
our approach has advantages in both prediction performance and diagnosis
ability.
| [
{
"created": "Fri, 26 Jul 2019 11:39:15 GMT",
"version": "v1"
},
{
"created": "Thu, 22 Aug 2019 03:56:30 GMT",
"version": "v2"
}
] | 2019-08-23 | [
[
"Wang",
"Xin",
""
],
[
"Wu",
"Bo",
""
],
[
"Ye",
"Yun",
""
],
[
"Zhong",
"Yueqi",
""
]
] | Existing works about fashion outfit compatibility focus on predicting the overall compatibility of a set of fashion items with their information from different modalities. However, few works explore how to explain the prediction, which limits the persuasiveness and effectiveness of the model. In this work, we propose an approach to not only predict but also diagnose the outfit compatibility. We introduce an end-to-end framework for this goal, which features: (1) The overall compatibility is learned from all type-specified pairwise similarities between items, and the backpropagation gradients are used to diagnose the incompatible factors. (2) We leverage the hierarchy of CNN and compare the features at different layers to take into account the compatibilities of different aspects from the low level (such as color, texture) to the high level (such as style). To support the proposed method, we build a new type-specified outfit dataset named Polyvore-T based on Polyvore dataset. We compare our method with the prior state-of-the-art in two tasks: outfit compatibility prediction and fill-in-the-blank. Experiments show that our approach has advantages in both prediction performance and diagnosis ability. |
1302.3591 | Suzanne M. Mahoney | Suzanne M. Mahoney, Kathryn Blackmond Laskey | Network Engineering for Complex Belief Networks | Appears in Proceedings of the Twelfth Conference on Uncertainty in
Artificial Intelligence (UAI1996) | null | null | UAI-P-1996-PG-389-396 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Like any large system development effort, the construction of a complex
belief network model requires systems engineering to manage the design and
construction process. We propose a rapid prototyping approach to network
engineering. We describe criteria for identifying network modules and the use
of "stubs" to represent not-yet-constructed modules. We propose an object
oriented representation for belief networks which captures the semantics of the
problem in addition to conditional independencies and probabilities. Methods
for evaluating complex belief network models are discussed. The ideas are
illustrated with examples from a large belief network construction problem in
the military intelligence domain.
| [
{
"created": "Wed, 13 Feb 2013 14:15:26 GMT",
"version": "v1"
}
] | 2013-02-18 | [
[
"Mahoney",
"Suzanne M.",
""
],
[
"Laskey",
"Kathryn Blackmond",
""
]
] | Like any large system development effort, the construction of a complex belief network model requires systems engineering to manage the design and construction process. We propose a rapid prototyping approach to network engineering. We describe criteria for identifying network modules and the use of "stubs" to represent not-yet-constructed modules. We propose an object oriented representation for belief networks which captures the semantics of the problem in addition to conditional independencies and probabilities. Methods for evaluating complex belief network models are discussed. The ideas are illustrated with examples from a large belief network construction problem in the military intelligence domain. |
2207.06497 | Qibang Liu | Qibang Liu and Muhao Chen and Robert E. Skelton | An extended ordinary state-based peridynamics for non-spherical horizons | 19 pages, 9 figures | null | 10.1016/j.cma.2022.115712 | null | cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work presents an extended ordinary state-based peridynamics (XOSBPD)
model for non-spherical horizons. Based on the OSBPD, we derive the XOSBPD
by introducing the Lagrange multipliers to guarantee the non-local dilatation
and non-local strain energy density (SED) are equal to local dilatation and
local SED, respectively. In this formulation, the XOSBPD removes the limitation
of spherical horizons and is suitable for arbitrary horizon shapes. In
addition, the presented XOSBPD does not need volume and surface correction and
allows non-uniform discretization implementation with various horizon sizes.
Three classic examples demonstrate the accuracy and capability for complex
dynamical fracture analysis. The proposed method provides an efficient tool and
in-depth insight into the failure mechanism of structure components and solid
materials.
| [
{
"created": "Sat, 18 Jun 2022 20:13:21 GMT",
"version": "v1"
},
{
"created": "Fri, 15 Jul 2022 06:11:49 GMT",
"version": "v2"
}
] | 2022-11-09 | [
[
"Liu",
"Qibang",
""
],
[
"Chen",
"Muhao",
""
],
[
"Skelton",
"Robert E.",
""
]
] | This work presents an extended ordinary state-based peridynamics (XOSBPD) model for non-spherical horizons. Based on the OSBPD, we derive the XOSBPD by introducing the Lagrange multipliers to guarantee the non-local dilatation and non-local strain energy density (SED) are equal to local dilatation and local SED, respectively. In this formulation, the XOSBPD removes the limitation of spherical horizons and is suitable for arbitrary horizon shapes. In addition, the presented XOSBPD does not need volume and surface correction and allows non-uniform discretization implementation with various horizon sizes. Three classic examples demonstrate the accuracy and capability for complex dynamical fracture analysis. The proposed method provides an efficient tool and in-depth insight into the failure mechanism of structure components and solid materials. |
2309.07400 | Ziyu Guo | Ziyu Guo, Weiqin Zhao, Shujun Wang, and Lequan Yu | HIGT: Hierarchical Interaction Graph-Transformer for Whole Slide Image
Analysis | Accepted by MICCAI2023; Code is available in
https://github.com/HKU-MedAI/HIGT | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In computational pathology, the pyramid structure of gigapixel Whole Slide
Images (WSIs) has recently been studied for capturing various information from
individual cell interactions to tissue microenvironments. This hierarchical
structure is believed to be beneficial for cancer diagnosis and prognosis
tasks. However, most previous hierarchical WSI analysis works (1) only
characterize local or global correlations within the WSI pyramids and (2) use
only unidirectional interaction between different resolutions, leading to an
incomplete picture of WSI pyramids. To this end, this paper presents a novel
Hierarchical Interaction Graph-Transformer (i.e., HIGT) for WSI analysis. With
Graph Neural Network and Transformer as the building blocks, HIGT can learn
both short-range local information and long-range global representation of the
WSI pyramids. Considering that the information from different resolutions is
complementary and can benefit each other during the learning process, we
further design a novel Bidirectional Interaction block to establish
communication between different levels within the WSI pyramids. Finally, we
aggregate both coarse-grained and fine-grained features learned from different
levels together for slide-level prediction. We evaluate our methods on two
public WSI datasets from TCGA projects, i.e., kidney carcinoma (KICA) and
esophageal carcinoma (ESCA). Experimental results show that our HIGT
outperforms both hierarchical and non-hierarchical state-of-the-art methods on
both tumor subtyping and staging tasks.
| [
{
"created": "Thu, 14 Sep 2023 03:04:06 GMT",
"version": "v1"
}
] | 2023-09-15 | [
[
"Guo",
"Ziyu",
""
],
[
"Zhao",
"Weiqin",
""
],
[
"Wang",
"Shujun",
""
],
[
"Yu",
"Lequan",
""
]
] | In computational pathology, the pyramid structure of gigapixel Whole Slide Images (WSIs) has recently been studied for capturing various information from individual cell interactions to tissue microenvironments. This hierarchical structure is believed to be beneficial for cancer diagnosis and prognosis tasks. However, most previous hierarchical WSI analysis works (1) only characterize local or global correlations within the WSI pyramids and (2) use only unidirectional interaction between different resolutions, leading to an incomplete picture of WSI pyramids. To this end, this paper presents a novel Hierarchical Interaction Graph-Transformer (i.e., HIGT) for WSI analysis. With Graph Neural Network and Transformer as the building blocks, HIGT can learn both short-range local information and long-range global representation of the WSI pyramids. Considering that the information from different resolutions is complementary and can benefit each other during the learning process, we further design a novel Bidirectional Interaction block to establish communication between different levels within the WSI pyramids. Finally, we aggregate both coarse-grained and fine-grained features learned from different levels together for slide-level prediction. We evaluate our methods on two public WSI datasets from TCGA projects, i.e., kidney carcinoma (KICA) and esophageal carcinoma (ESCA). Experimental results show that our HIGT outperforms both hierarchical and non-hierarchical state-of-the-art methods on both tumor subtyping and staging tasks. |
2110.12615 | Quanquan Gu | Heyang Zhao and Dongruo Zhou and Quanquan Gu | Linear Contextual Bandits with Adversarial Corruptions | 27 pages, 1 figure | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the linear contextual bandit problem in the presence of adversarial
corruption, where the interaction between the player and a possibly infinite
decision set is contaminated by an adversary that can corrupt the reward up to
a corruption level $C$ measured by the sum of the largest alteration on rewards
in each round. We present a variance-aware algorithm that is adaptive to the
level of adversarial contamination $C$. The key algorithmic design includes (1)
a multi-level partition scheme of the observed data, (2) a cascade of
confidence sets that are adaptive to the level of the corruption, and (3) a
variance-aware confidence set construction that can take advantage of
low-variance reward. We further prove that the regret of the proposed algorithm
is $\tilde{O}(C^2d\sqrt{\sum_{t = 1}^T \sigma_t^2} + C^2R\sqrt{dT})$, where $d$
is the dimension of context vectors, $T$ is the number of rounds, $R$ is the
range of noise and $\sigma_t^2, t=1,\ldots,T$ are the variances of instantaneous
reward. We also prove a gap-dependent regret bound for the proposed algorithm,
which is instance-dependent and thus leads to better performance on good
practical instances. To the best of our knowledge, this is the first
variance-aware corruption-robust algorithm for contextual bandits. Experiments
on synthetic data corroborate our theory.
| [
{
"created": "Mon, 25 Oct 2021 02:53:24 GMT",
"version": "v1"
}
] | 2021-10-26 | [
[
"Zhao",
"Heyang",
""
],
[
"Zhou",
"Dongruo",
""
],
[
"Gu",
"Quanquan",
""
]
] | We study the linear contextual bandit problem in the presence of adversarial corruption, where the interaction between the player and a possibly infinite decision set is contaminated by an adversary that can corrupt the reward up to a corruption level $C$ measured by the sum of the largest alteration on rewards in each round. We present a variance-aware algorithm that is adaptive to the level of adversarial contamination $C$. The key algorithmic design includes (1) a multi-level partition scheme of the observed data, (2) a cascade of confidence sets that are adaptive to the level of the corruption, and (3) a variance-aware confidence set construction that can take advantage of low-variance reward. We further prove that the regret of the proposed algorithm is $\tilde{O}(C^2d\sqrt{\sum_{t = 1}^T \sigma_t^2} + C^2R\sqrt{dT})$, where $d$ is the dimension of context vectors, $T$ is the number of rounds, $R$ is the range of noise and $\sigma_t^2, t=1,\ldots,T$ are the variances of instantaneous reward. We also prove a gap-dependent regret bound for the proposed algorithm, which is instance-dependent and thus leads to better performance on good practical instances. To the best of our knowledge, this is the first variance-aware corruption-robust algorithm for contextual bandits. Experiments on synthetic data corroborate our theory. |