| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1412.6575
|
Bishan Yang
|
Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, Li Deng
|
Embedding Entities and Relations for Learning and Inference in Knowledge
Bases
|
12 pages, 4 figures
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider learning representations of entities and relations in KBs using
the neural-embedding approach. We show that most existing models, including NTN
(Socher et al., 2013) and TransE (Bordes et al., 2013b), can be generalized
under a unified learning framework, where entities are low-dimensional vectors
learned from a neural network and relations are bilinear and/or linear mapping
functions. Under this framework, we compare a variety of embedding models on
the link prediction task. We show that a simple bilinear formulation achieves
new state-of-the-art results for the task (achieving a top-10 accuracy of 73.2%
vs. 54.7% by TransE on Freebase). Furthermore, we introduce a novel approach
that utilizes the learned relation embeddings to mine logical rules such as
"BornInCity(a,b) and CityInCountry(b,c) => Nationality(a,c)". We find that
embeddings learned from the bilinear objective are particularly good at
capturing relational semantics and that the composition of relations is
characterized by matrix multiplication. More interestingly, we demonstrate that
our embedding-based rule extraction approach successfully outperforms a
state-of-the-art confidence-based rule mining approach in mining Horn rules
that involve compositional reasoning.
|
[
{
"created": "Sat, 20 Dec 2014 01:37:16 GMT",
"version": "v1"
},
{
"created": "Sat, 27 Dec 2014 00:18:17 GMT",
"version": "v2"
},
{
"created": "Fri, 10 Apr 2015 15:24:59 GMT",
"version": "v3"
},
{
"created": "Sat, 29 Aug 2015 15:08:45 GMT",
"version": "v4"
}
] |
2015-09-01
|
[
[
"Yang",
"Bishan",
""
],
[
"Yih",
"Wen-tau",
""
],
[
"He",
"Xiaodong",
""
],
[
"Gao",
"Jianfeng",
""
],
[
"Deng",
"Li",
""
]
] |
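The "simple bilinear formulation" in the abstract above scores a (head, relation, tail) triple with a bilinear form over entity vectors. A minimal numpy sketch: the entity and relation names are made up, and representing each relation by a diagonal matrix is an illustrative simplification, not necessarily the paper's exact model.

```python
import numpy as np

# Toy embeddings: entities and relations as low-dimensional vectors.
# A bilinear score is f(h, r, t) = h^T M_r t; with a diagonal M_r the
# score reduces to an elementwise product of the three vectors.
rng = np.random.default_rng(0)
dim = 4
entities = {"paris": rng.normal(size=dim), "france": rng.normal(size=dim)}
relations = {"capital_of": rng.normal(size=dim)}  # the diagonal of M_r

def score(head, rel, tail):
    """Bilinear triple score with a diagonal relation matrix."""
    return float(np.sum(entities[head] * relations[rel] * entities[tail]))

print(score("paris", "capital_of", "france"))
```

Ranking candidate tails by this score is the basis of the link prediction task the abstract evaluates.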
2309.05295
|
Karlis Freivalds
|
Karlis Freivalds, Emils Ozolins, Guntis Barzdins
|
Discrete Denoising Diffusion Approach to Integer Factorization
|
International Conference on Artificial Neural Networks ICANN 2023
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Integer factorization is a famous computational problem; it is not known
whether it can be solved in polynomial time. With the rise of deep neural
networks, it is natural to ask whether they can facilitate faster
factorization. We present an approach to factorization that uses deep neural
networks and discrete denoising diffusion, and that works by iteratively
correcting errors in a partially correct solution. To this end, we develop a
new seq2seq neural network architecture, employ a relaxed categorical
distribution, and adapt the reverse diffusion process to cope better with
inaccuracies in the denoising step. The approach is able to find factors for
integers up to 56 bits long. Our analysis indicates that investment in
training leads to an exponential decrease in the number of sampling steps
required at inference to achieve a given success rate, thus counteracting the
exponential run-time growth with bit length.
|
[
{
"created": "Mon, 11 Sep 2023 08:26:08 GMT",
"version": "v1"
}
] |
2023-09-12
|
[
[
"Freivalds",
"Karlis",
""
],
[
"Ozolins",
"Emils",
""
],
[
"Barzdins",
"Guntis",
""
]
] |
1404.4997
|
Eric Price
|
Moritz Hardt and Eric Price
|
Tight bounds for learning a mixture of two gaussians
|
STOC 2015
| null | null | null |
cs.LG cs.DS stat.ML
|
http://creativecommons.org/licenses/by-nc-sa/3.0/
|
We consider the problem of identifying the parameters of an unknown mixture
of two arbitrary $d$-dimensional Gaussians from a sequence of independent
random samples. Our main results are upper and lower bounds giving a
computationally efficient moment-based estimator with an optimal convergence
rate, thus resolving a problem introduced by Pearson (1894). Denoting by
$\sigma^2$ the variance of the unknown mixture, we prove that
$\Theta(\sigma^{12})$ samples are necessary and sufficient to estimate each
parameter up to constant additive error when $d=1.$ Our upper bound extends to
arbitrary dimension $d>1$ up to a (provably necessary) logarithmic loss in $d$
using a novel---yet simple---dimensionality reduction technique. We further
identify several interesting special cases where the sample complexity is
notably smaller than our optimal worst-case bound. For instance, if the means
of the two components are separated by $\Omega(\sigma)$ the sample complexity
reduces to $O(\sigma^2)$ and this is again optimal.
Our results also apply to learning each component of the mixture up to small
error in total variation distance, where our algorithm gives strong
improvements in sample complexity over previous work. We also extend our lower
bound to mixtures of $k$ Gaussians, showing that $\Omega(\sigma^{6k-2})$
samples are necessary to estimate each parameter up to constant additive error.
|
[
{
"created": "Sat, 19 Apr 2014 23:59:35 GMT",
"version": "v1"
},
{
"created": "Mon, 8 Dec 2014 22:15:35 GMT",
"version": "v2"
},
{
"created": "Sun, 17 May 2015 04:47:58 GMT",
"version": "v3"
}
] |
2015-05-19
|
[
[
"Hardt",
"Moritz",
""
],
[
"Price",
"Eric",
""
]
] |
2009.08936
|
Majdi Radaideh
|
Majdi I. Radaideh, Koroush Shirvan
|
Improving Intelligence of Evolutionary Algorithms Using Experience Share
and Replay
|
10 pages, 4 figures, 2 tables
| null | null | null |
cs.NE cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose PESA, a novel approach combining Particle Swarm Optimisation
(PSO), Evolution Strategy (ES), and Simulated Annealing (SA) in a hybrid
algorithm inspired by reinforcement learning. PESA hybridizes the three
algorithms by storing their solutions in a shared replay memory. Next, PESA
applies prioritized replay to frequently redistribute data among the three
algorithms based on their fitness and priority values, which significantly
enhances sample diversity and algorithm exploration. Additionally, greedy
replay is used implicitly within SA to improve PESA exploitation close to the
end of evolution. The validation against 12 high-dimensional continuous
benchmark functions shows superior performance by PESA against standalone ES,
PSO, and SA, under similar initial starting points, hyperparameters, and number
of generations. PESA shows much better exploration behaviour, faster
convergence, and a greater ability to find the global optimum than its standalone
counterparts. Given the promising performance, PESA can offer an efficient
optimisation option, especially after it goes through additional
multiprocessing improvements to handle complex and expensive fitness functions.
|
[
{
"created": "Mon, 10 Aug 2020 17:27:30 GMT",
"version": "v1"
}
] |
2020-09-21
|
[
[
"Radaideh",
"Majdi I.",
""
],
[
"Shirvan",
"Koroush",
""
]
] |
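The shared replay memory with prioritized replay described in the abstract above can be sketched roughly as follows. The class, the rank-based sampling rule, and the toy fitness function are illustrative assumptions, not the paper's code.

```python
import heapq
import random

random.seed(1)

class ReplayMemory:
    """Shared store of (fitness, solution) pairs fed by several optimizers."""

    def __init__(self, capacity=50):
        self.capacity = capacity
        self.items = []  # min-heap keyed by fitness

    def push(self, solution, fitness):
        heapq.heappush(self.items, (fitness, solution))
        if len(self.items) > self.capacity:
            heapq.heappop(self.items)  # evict the worst (lowest fitness)

    def sample_prioritized(self):
        # Rank-based prioritized replay: fitter solutions are replayed
        # more often, which is what spreads good samples across algorithms.
        ranked = sorted(self.items)            # worst to best
        weights = [i + 1 for i in range(len(ranked))]
        return random.choices(ranked, weights=weights, k=1)[0][1]

mem = ReplayMemory()
for algo in ("pso", "es", "sa"):               # three cooperating optimizers
    for _ in range(10):
        sol = [random.random() for _ in range(3)]
        mem.push(sol, fitness=-sum((x - 0.5) ** 2 for x in sol))
print(len(mem.items))  # 30
```

Each standalone optimizer would then mix solutions drawn via `sample_prioritized()` into its own population, which is the mechanism the abstract credits for the improved exploration.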
1210.4081
|
Bogdan Savchynskyy
|
Bogdan Savchynskyy and Stefan Schmidt
|
Getting Feasible Variable Estimates From Infeasible Ones: MRF Local
Polytope Study
|
20 pages, 4 figures
| null | null | null |
cs.NA cs.CV cs.DS cs.LG math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes a method for constructing approximate feasible primal
solutions from dual ones for large-scale optimization problems possessing
certain separability properties. Whereas infeasible primal estimates can
typically be produced from (sub-)gradients of the dual function, it is often
not easy to project them to the primal feasible set, since the projection
itself has a complexity comparable to the complexity of the initial problem. We
propose an alternative efficient method to obtain feasibility and show that its
properties influencing the convergence to the optimum are similar to the
properties of the Euclidean projection. We apply our method to the local
polytope relaxation of inference problems for Markov Random Fields and
demonstrate its superiority over existing methods.
|
[
{
"created": "Mon, 15 Oct 2012 15:55:34 GMT",
"version": "v1"
}
] |
2012-10-16
|
[
[
"Savchynskyy",
"Bogdan",
""
],
[
"Schmidt",
"Stefan",
""
]
] |
1804.03082
|
Hadi Kazemi
|
Hadi Kazemi, Sobhan Soleymani, Ali Dabouei, Mehdi Iranmanesh, Nasser
M. Nasrabadi
|
Attribute-Centered Loss for Soft-Biometrics Guided Face Sketch-Photo
Recognition
|
Accepted as a conference paper on CVPRW 2018
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Face sketches are able to capture the spatial topology of a face while
lacking some facial attributes such as race, skin color, or hair color. Existing
sketch-photo recognition approaches have mostly ignored the importance of
facial attributes. In this paper, we propose a new loss function, called
attribute-centered loss, to train a Deep Coupled Convolutional Neural Network
(DCCNN) for the facial attribute guided sketch to photo matching. Specifically,
an attribute-centered loss is proposed which learns several distinct centers,
in a shared embedding space, for photos and sketches with different
combinations of attributes. The DCCNN is simultaneously trained to map photos
and pairs of testified attributes and corresponding forensic sketches around
their associated centers, while preserving the spatial topology information.
Importantly, the centers learn to keep a relative distance from each other,
related to their number of contradictory attributes. Extensive experiments are
performed on composite (E-PRIP) and semi-forensic (IIIT-D Semi-forensic)
databases. The proposed method significantly outperforms the state-of-the-art.
|
[
{
"created": "Mon, 9 Apr 2018 16:15:50 GMT",
"version": "v1"
}
] |
2018-04-10
|
[
[
"Kazemi",
"Hadi",
""
],
[
"Soleymani",
"Sobhan",
""
],
[
"Dabouei",
"Ali",
""
],
[
"Iranmanesh",
"Mehdi",
""
],
[
"Nasrabadi",
"Nasser M.",
""
]
] |
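A center-style loss of the kind the abstract above describes can be sketched as the mean squared distance of each embedding to the learned center of its attribute combination. The function name, data, and the plain squared-distance form are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def attribute_centered_loss(embeddings, attr_ids, centers):
    """Mean squared distance of each embedding to its attribute center.

    embeddings: (n, d) array of photo/sketch embeddings
    attr_ids:   (n,) index of each sample's attribute combination
    centers:    (k, d) learned centers, one per attribute combination
    """
    diffs = embeddings - centers[attr_ids]
    return float(np.mean(np.sum(diffs ** 2, axis=1)))

# Two attribute combinations, 3-d embedding space, centers at the origin.
centers = np.zeros((2, 3))
emb = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
print(attribute_centered_loss(emb, np.array([0, 1]), centers))  # 2.5
```

In the full method the centers are trainable and additionally repel each other in proportion to how many attributes they contradict; that term is omitted here.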
1706.10076
|
Pavel Kucherbaev
|
Pavel Kucherbaev, Achilleas Psyllidis, Alessandro Bozzon
|
Chatbots as Conversational Recommender Systems in Urban Contexts
|
2 pages, 1 figure, 1 table
| null | null | null |
cs.SI cs.CY cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we outline the vision of chatbots that facilitate the
interaction between citizens and policy-makers at the city scale. We report the
results of a co-design session attended by more than 60 participants. We give
an outlook of how some challenges associated with such chatbot systems could be
addressed in the future.
|
[
{
"created": "Fri, 30 Jun 2017 09:24:39 GMT",
"version": "v1"
},
{
"created": "Wed, 9 Aug 2017 09:03:49 GMT",
"version": "v2"
}
] |
2017-08-10
|
[
[
"Kucherbaev",
"Pavel",
""
],
[
"Psyllidis",
"Achilleas",
""
],
[
"Bozzon",
"Alessandro",
""
]
] |
1907.05609
|
Qianwen Wang
|
Qianwen Wang, Zhen Li, Siwei Fu, Weiwei Cui, Huamin Qu
|
Narvis: Authoring Narrative Slideshows for Introducing Data
Visualization Designs
|
9 pages, published at IEEE InfoVis 2018
|
IEEE Transactions on Visualization and Computer Graphics, vol. 25,
no. 1, pp. 779-788, Jan. 2019
|
10.1109/TVCG.2018.2865232
| null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visual designs can be complex in modern data visualization systems, which
poses special challenges when explaining them to non-experts. However, few,
if any, presentation tools are tailored for this purpose. In this study, we
present Narvis, a slideshow authoring tool designed for introducing data
visualizations to non-experts. Narvis targets two types of end-users: teachers,
experts in data visualization who produce tutorials for explaining a data
visualization, and students, non-experts who try to understand visualization
designs through tutorials. We present an analysis of requirements through close
discussions with the two types of end-users. The resulting considerations guide
the design and implementation of Narvis. Additionally, to help teachers better
organize their introduction slideshows, we specify a data visualization as a
hierarchical combination of components, which are automatically detected and
extracted by Narvis. The teachers craft an introduction slideshow through first
organizing these components, and then explaining them sequentially. A series of
templates are provided for adding annotations and animations to improve
efficiency during the authoring process. We evaluate Narvis through a
qualitative analysis of the authoring experience, and a preliminary evaluation
of the generated slideshows.
|
[
{
"created": "Fri, 12 Jul 2019 08:14:18 GMT",
"version": "v1"
}
] |
2019-08-21
|
[
[
"Wang",
"Qianwen",
""
],
[
"Li",
"Zhen",
""
],
[
"Fu",
"Siwei",
""
],
[
"Cui",
"Weiwei",
""
],
[
"Qu",
"Huamin",
""
]
] |
1705.08218
|
Xiaojian Wu
|
Xiaojian Wu, Yexiang Xue, Bart Selman, Carla P. Gomes
|
XOR-Sampling for Network Design with Correlated Stochastic Events
|
In Proceedings of the Twenty-sixth International Joint Conference on
Artificial Intelligence (IJCAI-17). The first two authors contributed equally
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many network optimization problems can be formulated as stochastic network
design problems in which edges are present or absent stochastically.
Furthermore, protective actions can guarantee that edges will remain present.
We consider the problem of finding the optimal protection strategy under a
budget limit in order to maximize some connectivity measurements of the
network. Previous approaches rely on the assumption that edges are independent.
In this paper, we consider a more realistic setting where multiple edges are
not independent due to natural disasters or regional events that make the
states of multiple edges stochastically correlated. We use Markov Random Fields
to model the correlation and define a new stochastic network design framework.
We provide a novel algorithm based on Sample Average Approximation (SAA)
coupled with a Gibbs or XOR sampler. The experimental results on real road
network data show that the policies produced by SAA with the XOR sampler have
higher quality and lower variance than those produced by SAA with the Gibbs sampler.
|
[
{
"created": "Tue, 23 May 2017 12:50:36 GMT",
"version": "v1"
},
{
"created": "Wed, 24 May 2017 01:38:57 GMT",
"version": "v2"
}
] |
2017-05-25
|
[
[
"Wu",
"Xiaojian",
""
],
[
"Xue",
"Yexiang",
""
],
[
"Selman",
"Bart",
""
],
[
"Gomes",
"Carla P.",
""
]
] |
2402.17104
|
Robert Bassett
|
Robert L. Bassett, Austin Van Dellen, Anthony P. Austin
|
Adversarial Perturbations of Physical Signals
| null | null | null | null |
cs.LG cs.CR eess.SP math.OC stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We investigate the vulnerability of computer-vision-based signal classifiers
to adversarial perturbations of their inputs, where the signals and
perturbations are subject to physical constraints. We consider a scenario in
which a source and interferer emit signals that propagate as waves to a
detector, which attempts to classify the source by analyzing the spectrogram of
the signal it receives using a pre-trained neural network. By solving
PDE-constrained optimization problems, we construct interfering signals that
cause the detector to misclassify the source even though the perturbations to
the spectrogram of the received signal are nearly imperceptible. Though such
problems can have millions of decision variables, we introduce methods to solve
them efficiently. Our experiments demonstrate that one can compute effective
and physically realizable adversarial perturbations for a variety of machine
learning models under various physical conditions.
|
[
{
"created": "Tue, 27 Feb 2024 00:41:00 GMT",
"version": "v1"
}
] |
2024-02-28
|
[
[
"Bassett",
"Robert L.",
""
],
[
"Van Dellen",
"Austin",
""
],
[
"Austin",
"Anthony P.",
""
]
] |
1007.3353
|
Laurent Hubert
|
Laurent Hubert (INRIA - IRISA), Nicolas Barr\'e (INRIA - IRISA),
Fr\'ed\'eric Besson (INRIA - IRISA), Delphine Demange (INRIA - IRISA), Thomas
Jensen (INRIA - IRISA), Vincent Monfort (INRIA - IRISA), David Pichardie
(INRIA - IRISA), Tiphaine Turpin (INRIA - IRISA)
|
Sawja: Static Analysis Workshop for Java
| null |
The International Conference on Formal Verification of
Object-Oriented Software 2010.13 (2010) 253--267
|
10.1007/978-3-642-18070-5_7
| null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Static analysis is a powerful technique for automatic verification of
programs but raises major engineering challenges when developing a full-fledged
analyzer for a realistic language such as Java. This paper describes the Sawja
library: a static analysis framework fully compliant with Java 6 which provides
OCaml modules for efficiently manipulating Java bytecode programs. We present
the main features of the library, including (i) efficient functional
data-structures for representing programs with implicit sharing and lazy
parsing, (ii) an intermediate stack-less representation, and (iii) fast
computation and manipulation of complete programs.
|
[
{
"created": "Tue, 20 Jul 2010 07:03:59 GMT",
"version": "v1"
}
] |
2015-05-19
|
[
[
"Hubert",
"Laurent",
"",
"INRIA - IRISA"
],
[
"Barré",
"Nicolas",
"",
"INRIA - IRISA"
],
[
"Besson",
"Frédéric",
"",
"INRIA - IRISA"
],
[
"Demange",
"Delphine",
"",
"INRIA - IRISA"
],
[
"Jensen",
"Thomas",
"",
"INRIA - IRISA"
],
[
"Monfort",
"Vincent",
"",
"INRIA - IRISA"
],
[
"Pichardie",
"David",
"",
"INRIA - IRISA"
],
[
"Turpin",
"Tiphaine",
"",
"INRIA - IRISA"
]
] |
1804.06926
|
Leyuan Wang
|
Leyuan Wang, Yangzihao Wang, Carl Yang and John D. Owens
|
A Comparative Study on Exact Triangle Counting Algorithms on the GPU
|
7 pages, 6 figures and 2 tables
| null |
10.1145/2915516.2915521
| null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We implement exact triangle counting in graphs on the GPU using three
different methodologies: subgraph matching to a triangle pattern; programmable
graph analytics, with a set-intersection approach; and a matrix formulation
based on sparse matrix-matrix multiplies. All three deliver best-of-class
performance over CPU implementations and over comparable GPU implementations,
with the graph-analytic approach achieving the best performance due to its
ability to exploit efficient filtering steps to remove unnecessary work and its
high-performance set-intersection core.
|
[
{
"created": "Wed, 18 Apr 2018 21:51:59 GMT",
"version": "v1"
}
] |
2018-04-20
|
[
[
"Wang",
"Leyuan",
""
],
[
"Wang",
"Yangzihao",
""
],
[
"Yang",
"Carl",
""
],
[
"Owens",
"John D.",
""
]
] |
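The set-intersection methodology named in the abstract above is easy to illustrate on the CPU: the triangles through an edge (u, v) are exactly the common neighbours of u and v. A minimal sketch under that reading; the paper's GPU kernels are not reproduced here.

```python
def count_triangles(edges):
    """Exact triangle count via per-edge neighbour-set intersection."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    count = 0
    for u, v in edges:
        count += len(adj[u] & adj[v])  # vertices completing a triangle
    return count // 3  # each triangle is counted once per incident edge

print(count_triangles([(0, 1), (1, 2), (0, 2), (2, 3)]))  # 1
```

The filtering the abstract credits for the GPU speedup (e.g. intersecting only over ordered vertex pairs) prunes redundant work from exactly this inner intersection loop.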
1611.03841
|
Jie Xu
|
Jie Xu, Lixing Chen, Kun Liu, Cong Shen
|
Designing Security-Aware Incentives for Computation Offloading via
Device-to-Device Communication
| null | null | null | null |
cs.GT cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Computation offloading via device-to-device (D2D) communication, or D2D
offloading, has recently been proposed to enhance mobile computing performance
by exploiting spare computing resources of nearby user devices. The success of
D2D offloading relies on user participation in collaborative service
provisioning, which incurs extra costs to users providing the service, thus
mandating an incentive mechanism that can compensate for these costs. Although
incentive mechanism design has been intensively studied in the literature, this
paper considers a much more challenging yet less investigated problem in which
selfish users are also facing interdependent security risks, such as infectious
proximity-based attacks. Security cost is significantly different in nature
from conventional service provisioning costs such as energy consumption, since
security risks often depend on the collective behavior of all users. To this
end, we build a novel mathematical framework by leveraging the combined power
of game theory and epidemic theory to investigate the interplay between user
incentives and interdependent security risks in D2D offloading, thereby
enabling the design of security-aware incentive mechanisms. Our analysis
discovers an interesting "less is more" phenomenon: although giving users more
incentives promotes more participation, it may harm the network operator's
utility. This is because too much participation may foster persistent security
risks and as a result, the effective participation level does not improve. Our
model and analysis shed new insights on the optimization of D2D offloading
networks in the presence of interdependent security risks. Extensive
simulations are carried out to verify our analytical conclusions.
|
[
{
"created": "Fri, 11 Nov 2016 20:22:25 GMT",
"version": "v1"
},
{
"created": "Fri, 22 Sep 2017 02:40:12 GMT",
"version": "v2"
}
] |
2017-09-25
|
[
[
"Xu",
"Jie",
""
],
[
"Chen",
"Lixing",
""
],
[
"Liu",
"Kun",
""
],
[
"Shen",
"Cong",
""
]
] |
Computation offloading via device-to-device (D2D) communication, or D2D offloading, has recently been proposed to enhance mobile computing performance by exploiting spare computing resources of nearby user devices. The success of D2D offloading relies on user participation in collaborative service provisioning, which incurs extra costs to users providing the service, thus mandating an incentive mechanism that can compensate for these costs. Although incentive mechanism design has been intensively studied in the literature, this paper considers a much more challenging yet less investigated problem in which selfish users are also facing interdependent security risks, such as infectious proximity-based attacks. Security cost is significantly different in nature from conventional service provisioning costs such as energy consumption, since security risks often depend on the collective behavior of all users. To this end, we build a novel mathematical framework by leveraging the combined power of game theory and epidemic theory to investigate the interplay between user incentives and interdependent security risks in D2D offloading, thereby enabling the design of security-aware incentive mechanisms. Our analysis discovers an interesting "less is more" phenomenon: although giving users more incentives promotes more participation, it may harm the network operator's utility. This is because too much participation may foster persistent security risks and as a result, the effective participation level does not improve. Our model and analysis shed new insights on the optimization of D2D offloading networks in the presence of interdependent security risks. Extensive simulations are carried out to verify our analytical conclusions.
|
2403.09070
|
Yuxuan Zhao
|
Yuxuan Zhao, Peiyu Liao, Siting Liu, Jiaxi Jiang, Yibo Lin, Bei Yu
|
Analytical Heterogeneous Die-to-Die 3D Placement with Macros
| null | null | null | null |
cs.AR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper presents an innovative approach to 3D mixed-size placement in
heterogeneous face-to-face (F2F) bonded 3D ICs. We propose an analytical
framework that utilizes a dedicated density model and a bistratal wirelength
model, effectively handling macros and standard cells in a 3D solution space. A
novel 3D preconditioner is developed to resolve the topological and physical
gap between macros and standard cells. Additionally, we propose a mixed-integer
linear programming (MILP) formulation for macro rotation to optimize
wirelength. Our framework is implemented with full-scale GPU acceleration,
leveraging an adaptive 3D density accumulation algorithm and an incremental
wirelength gradient algorithm. Experimental results on ICCAD 2023 contest
benchmarks demonstrate that our framework can achieve 5.9% quality score
improvement compared to the first-place winner with 4.0x runtime speedup.
Additional experiments on modern RISC-V designs further validate the
generalizability and superiority of our framework.
|
[
{
"created": "Thu, 14 Mar 2024 03:26:08 GMT",
"version": "v1"
},
{
"created": "Tue, 13 Aug 2024 13:00:42 GMT",
"version": "v2"
}
] |
2024-08-14
|
[
[
"Zhao",
"Yuxuan",
""
],
[
"Liao",
"Peiyu",
""
],
[
"Liu",
"Siting",
""
],
[
"Jiang",
"Jiaxi",
""
],
[
"Lin",
"Yibo",
""
],
[
"Yu",
"Bei",
""
]
] |
This paper presents an innovative approach to 3D mixed-size placement in heterogeneous face-to-face (F2F) bonded 3D ICs. We propose an analytical framework that utilizes a dedicated density model and a bistratal wirelength model, effectively handling macros and standard cells in a 3D solution space. A novel 3D preconditioner is developed to resolve the topological and physical gap between macros and standard cells. Additionally, we propose a mixed-integer linear programming (MILP) formulation for macro rotation to optimize wirelength. Our framework is implemented with full-scale GPU acceleration, leveraging an adaptive 3D density accumulation algorithm and an incremental wirelength gradient algorithm. Experimental results on ICCAD 2023 contest benchmarks demonstrate that our framework can achieve 5.9% quality score improvement compared to the first-place winner with 4.0x runtime speedup. Additional experiments on modern RISC-V designs further validate the generalizability and superiority of our framework.
|
2407.02856
|
Adrian Pekar
|
Adrian Pekar and Richard Jozsa
|
Early-Stage Anomaly Detection: A Study of Model Performance on Complete
vs. Partial Flows
|
9 pages, 5 tables, 2 figures
| null | null | null |
cs.LG cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
This study investigates the efficacy of machine learning models, specifically
Random Forest, in anomaly detection systems when trained on complete flow
records and tested on partial flow data. We explore the performance disparity
that arises when models are applied to incomplete data typical in real-world,
real-time network environments. Our findings demonstrate a significant decline
in model performance, with precision and recall dropping by up to 30\% under
certain conditions when models trained on complete flows are tested against
partial flows. Conversely, models trained and tested on consistently complete
or partial datasets maintain robustness, highlighting the importance of dataset
consistency in training. The study reveals that a minimum of 7 packets in the
test set is required for maintaining reliable detection rates. These results
underscore the need for tailored training strategies that can effectively adapt
to the dynamics of partial data, enhancing the practical applicability of
anomaly detection systems in operational settings.
|
[
{
"created": "Wed, 3 Jul 2024 07:14:25 GMT",
"version": "v1"
}
] |
2024-07-04
|
[
[
"Pekar",
"Adrian",
""
],
[
"Jozsa",
"Richard",
""
]
] |
This study investigates the efficacy of machine learning models, specifically Random Forest, in anomaly detection systems when trained on complete flow records and tested on partial flow data. We explore the performance disparity that arises when models are applied to incomplete data typical in real-world, real-time network environments. Our findings demonstrate a significant decline in model performance, with precision and recall dropping by up to 30\% under certain conditions when models trained on complete flows are tested against partial flows. Conversely, models trained and tested on consistently complete or partial datasets maintain robustness, highlighting the importance of dataset consistency in training. The study reveals that a minimum of 7 packets in the test set is required for maintaining reliable detection rates. These results underscore the need for tailored training strategies that can effectively adapt to the dynamics of partial data, enhancing the practical applicability of anomaly detection systems in operational settings.
|
1106.1803
|
H. Blockeel
|
H. Blockeel, L. Dehaspe, B. Demoen, G. Janssens, J. Ramon, H.
Vandecasteele
|
Improving the Efficiency of Inductive Logic Programming Through the Use
of Query Packs
| null |
Journal Of Artificial Intelligence Research, Volume 16, pages
135-166, 2002
|
10.1613/jair.924
| null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Inductive logic programming, or relational learning, is a powerful paradigm
for machine learning or data mining. However, in order for ILP to become
practically useful, the efficiency of ILP systems must improve substantially.
To this end, the notion of a query pack is introduced: it structures sets of
similar queries. Furthermore, a mechanism is described for executing such query
packs. A complexity analysis shows that considerable efficiency improvements
can be achieved through the use of this query pack execution mechanism. This
claim is supported by empirical results obtained by incorporating support for
query pack execution in two existing learning systems.
|
[
{
"created": "Thu, 9 Jun 2011 13:19:53 GMT",
"version": "v1"
}
] |
2011-06-10
|
[
[
"Blockeel",
"H.",
""
],
[
"Dehaspe",
"L.",
""
],
[
"Demoen",
"B.",
""
],
[
"Janssens",
"G.",
""
],
[
"Ramon",
"J.",
""
],
[
"Vandecasteele",
"H.",
""
]
] |
Inductive logic programming, or relational learning, is a powerful paradigm for machine learning or data mining. However, in order for ILP to become practically useful, the efficiency of ILP systems must improve substantially. To this end, the notion of a query pack is introduced: it structures sets of similar queries. Furthermore, a mechanism is described for executing such query packs. A complexity analysis shows that considerable efficiency improvements can be achieved through the use of this query pack execution mechanism. This claim is supported by empirical results obtained by incorporating support for query pack execution in two existing learning systems.
|
1609.03938
|
Erel Segal-Halevi
|
Erel Segal-Halevi, Shmuel Nitzan, Avinatan Hassidim, Yonatan Aumann
|
Envy-Free Division of Land
|
A preliminary version named 'Envy-free cake-cutting in two
dimensions' appeared in the proceedings of AAAI 2015
(https://www.aaai.org/ocs/index.php/AAAI/AAAI15/paper/viewPaper/9656). The
main additions here are: (a) handling multi-dimensional resources of
arbitrary shape rather than just rectangles, (b) handling an arbitrary number
n of agents rather than just 2 or 3, (c) rewriting most proofs
| null |
10.1287/moor.2019.1016
| null |
cs.GT cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Classic cake-cutting algorithms enable people with different preferences to
divide among them a heterogeneous resource (``cake''), such that the resulting
division is fair according to each agent's individual preferences. However,
these algorithms either ignore the geometry of the resource altogether, or
assume it is one-dimensional. In practice, it is often required to divide
multi-dimensional resources, such as land-estates or advertisement spaces in
print or electronic media. In such cases, the geometric shape of the allotted
piece is of crucial importance. For example, when building houses or designing
advertisements, in order to be useful, the allotments should be squares or
rectangles with bounded aspect-ratio. We thus introduce the problem of fair
land division --- fair division of a multi-dimensional resource wherein the
allocated piece must have a pre-specified geometric shape. We present
constructive division algorithms that satisfy the two most prominent fairness
criteria, namely envy-freeness and proportionality. In settings where
proportionality cannot be achieved due to the geometric constraints, our
algorithms provide a partially-proportional division, guaranteeing that the
fraction allocated to each agent be at least a certain positive constant. We
prove that in many natural settings the envy-freeness requirement is compatible
with the best attainable partial-proportionality.
|
[
{
"created": "Tue, 13 Sep 2016 17:07:32 GMT",
"version": "v1"
},
{
"created": "Sat, 9 Mar 2019 19:22:15 GMT",
"version": "v2"
}
] |
2021-08-06
|
[
[
"Segal-Halevi",
"Erel",
""
],
[
"Nitzan",
"Shmuel",
""
],
[
"Hassidim",
"Avinatan",
""
],
[
"Aumann",
"Yonatan",
""
]
] |
Classic cake-cutting algorithms enable people with different preferences to divide among them a heterogeneous resource (``cake''), such that the resulting division is fair according to each agent's individual preferences. However, these algorithms either ignore the geometry of the resource altogether, or assume it is one-dimensional. In practice, it is often required to divide multi-dimensional resources, such as land-estates or advertisement spaces in print or electronic media. In such cases, the geometric shape of the allotted piece is of crucial importance. For example, when building houses or designing advertisements, in order to be useful, the allotments should be squares or rectangles with bounded aspect-ratio. We thus introduce the problem of fair land division --- fair division of a multi-dimensional resource wherein the allocated piece must have a pre-specified geometric shape. We present constructive division algorithms that satisfy the two most prominent fairness criteria, namely envy-freeness and proportionality. In settings where proportionality cannot be achieved due to the geometric constraints, our algorithms provide a partially-proportional division, guaranteeing that the fraction allocated to each agent be at least a certain positive constant. We prove that in many natural settings the envy-freeness requirement is compatible with the best attainable partial-proportionality.
|
2404.17428
|
Manuel Dubinsky
|
Manuel Dubinsky, Kun-Mao Chao, C\'esar Massri, Gabriel Taubin
|
Lower Bounds for the Minimum Spanning Tree Cycle Intersection Problem
|
arXiv admin note: substantial text overlap with arXiv:2301.07643
| null | null | null |
cs.DM
|
http://creativecommons.org/licenses/by/4.0/
|
Minimum spanning trees are important tools in the analysis and design of
networks. Many practical applications require their computation, ranging from
biology and linguistics to economics and telecommunications. The set of cycles of
a network has a vector space structure. Given a spanning tree, the set of
non-tree edges defines cycles that determine a basis. The intersection of two
such cycles is the number of edges they have in common and the intersection
number -- denoted $\cap(G)$ -- is the number of non-empty pairwise
intersections of the cycles of the basis. The Minimum Spanning Tree Cycle
Intersection problem consists in finding a spanning tree such that the
intersection number is minimum. This problem is relevant in order to integrate
discrete differential forms. In this paper, we present two lower bounds of the
intersection number of an arbitrary connected graph $G=(V,E)$. In the first
part, we prove the following statement: $$\frac{1}{2}\left(\frac{\nu^2}{n-1} -
\nu\right) \leq \cap(G),$$ where $n = |V|$ and $\nu$ is the \emph{cyclomatic
number} of $G$. In the second part, based on some experimental results and a
new observation, we conjecture the following improved tight lower bound:
$$(n-1) \binom{q}{2} + q \ r \leq \cap(G),$$ where $2 \nu = q (n-1) + r$ is the
integer division of $2 \nu$ by $n-1$. This is the first result in a general
context, that is, for an arbitrary connected graph.
|
[
{
"created": "Fri, 26 Apr 2024 14:08:36 GMT",
"version": "v1"
}
] |
2024-04-29
|
[
[
"Dubinsky",
"Manuel",
""
],
[
"Chao",
"Kun-Mao",
""
],
[
"Massri",
"César",
""
],
[
"Taubin",
"Gabriel",
""
]
] |
Minimum spanning trees are important tools in the analysis and design of networks. Many practical applications require their computation, ranging from biology and linguistics to economics and telecommunications. The set of cycles of a network has a vector space structure. Given a spanning tree, the set of non-tree edges defines cycles that determine a basis. The intersection of two such cycles is the number of edges they have in common and the intersection number -- denoted $\cap(G)$ -- is the number of non-empty pairwise intersections of the cycles of the basis. The Minimum Spanning Tree Cycle Intersection problem consists in finding a spanning tree such that the intersection number is minimum. This problem is relevant in order to integrate discrete differential forms. In this paper, we present two lower bounds of the intersection number of an arbitrary connected graph $G=(V,E)$. In the first part, we prove the following statement: $$\frac{1}{2}\left(\frac{\nu^2}{n-1} - \nu\right) \leq \cap(G),$$ where $n = |V|$ and $\nu$ is the \emph{cyclomatic number} of $G$. In the second part, based on some experimental results and a new observation, we conjecture the following improved tight lower bound: $$(n-1) \binom{q}{2} + q \ r \leq \cap(G),$$ where $2 \nu = q (n-1) + r$ is the integer division of $2 \nu$ by $n-1$. This is the first result in a general context, that is, for an arbitrary connected graph.
|
2303.10590
|
Minh Tran
|
Yufeng Yin, Minh Tran, Di Chang, Xinrui Wang, Mohammad Soleymani
|
Multi-modal Facial Action Unit Detection with Large Pre-trained Models
for the 5th Competition on Affective Behavior Analysis in-the-wild
|
8 pages, 7 figures, 5 tables
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Facial action unit detection has emerged as an important task within facial
expression analysis, aimed at detecting specific pre-defined, objective facial
expressions, such as lip tightening and cheek raising. This paper presents our
submission to the Affective Behavior Analysis in-the-wild (ABAW) 2023
Competition for AU detection. We propose a multi-modal method for facial action
unit detection with visual, acoustic, and lexical features extracted from the
large pre-trained models. To provide high-quality details for visual feature
extraction, we apply super-resolution and face alignment to the training data
and show potential performance gain. Our approach achieves the F1 score of
52.3% on the official validation set of the 5th ABAW Challenge.
|
[
{
"created": "Sun, 19 Mar 2023 07:18:14 GMT",
"version": "v1"
},
{
"created": "Thu, 23 Mar 2023 00:35:40 GMT",
"version": "v2"
},
{
"created": "Mon, 17 Apr 2023 20:17:55 GMT",
"version": "v3"
}
] |
2023-04-19
|
[
[
"Yin",
"Yufeng",
""
],
[
"Tran",
"Minh",
""
],
[
"Chang",
"Di",
""
],
[
"Wang",
"Xinrui",
""
],
[
"Soleymani",
"Mohammad",
""
]
] |
Facial action unit detection has emerged as an important task within facial expression analysis, aimed at detecting specific pre-defined, objective facial expressions, such as lip tightening and cheek raising. This paper presents our submission to the Affective Behavior Analysis in-the-wild (ABAW) 2023 Competition for AU detection. We propose a multi-modal method for facial action unit detection with visual, acoustic, and lexical features extracted from the large pre-trained models. To provide high-quality details for visual feature extraction, we apply super-resolution and face alignment to the training data and show potential performance gain. Our approach achieves the F1 score of 52.3% on the official validation set of the 5th ABAW Challenge.
|
2310.02807
|
Zijie Geng
|
Zijie Geng, Xijun Li, Jie Wang, Xiao Li, Yongdong Zhang, Feng Wu
|
A Deep Instance Generative Framework for MILP Solvers Under Limited Data
Availability
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the past few years, there has been an explosive surge in the use of
machine learning (ML) techniques to address combinatorial optimization (CO)
problems, especially mixed-integer linear programs (MILPs). Despite the
achievements, the limited availability of real-world instances often leads to
sub-optimal decisions and biased solver assessments, which motivates a suite of
synthetic MILP instance generation techniques. However, existing methods either
rely heavily on expert-designed formulations or struggle to capture the rich
features of real-world instances. To tackle this problem, we propose G2MILP,
the first deep generative framework for MILP instances. Specifically, G2MILP
represents MILP instances as bipartite graphs, and applies a masked variational
autoencoder to iteratively corrupt and replace parts of the original graphs to
generate new ones. The appealing feature of G2MILP is that it can learn to
generate novel and realistic MILP instances without prior expert-designed
formulations, while preserving the structures and computational hardness of
real-world datasets, simultaneously. Thus the generated instances can
facilitate downstream tasks for enhancing MILP solvers under limited data
availability. We design a suite of benchmarks to evaluate the quality of the
generated MILP instances. Experiments demonstrate that our method can produce
instances that closely resemble real-world datasets in terms of both structures
and computational hardness. The deliverables are released at
https://miralab-ustc.github.io/L2O-G2MILP.
|
[
{
"created": "Wed, 4 Oct 2023 13:34:34 GMT",
"version": "v1"
},
{
"created": "Sat, 28 Oct 2023 12:10:46 GMT",
"version": "v2"
},
{
"created": "Mon, 11 Mar 2024 10:51:14 GMT",
"version": "v3"
}
] |
2024-03-12
|
[
[
"Geng",
"Zijie",
""
],
[
"Li",
"Xijun",
""
],
[
"Wang",
"Jie",
""
],
[
"Li",
"Xiao",
""
],
[
"Zhang",
"Yongdong",
""
],
[
"Wu",
"Feng",
""
]
] |
In the past few years, there has been an explosive surge in the use of machine learning (ML) techniques to address combinatorial optimization (CO) problems, especially mixed-integer linear programs (MILPs). Despite the achievements, the limited availability of real-world instances often leads to sub-optimal decisions and biased solver assessments, which motivates a suite of synthetic MILP instance generation techniques. However, existing methods either rely heavily on expert-designed formulations or struggle to capture the rich features of real-world instances. To tackle this problem, we propose G2MILP, the first deep generative framework for MILP instances. Specifically, G2MILP represents MILP instances as bipartite graphs, and applies a masked variational autoencoder to iteratively corrupt and replace parts of the original graphs to generate new ones. The appealing feature of G2MILP is that it can learn to generate novel and realistic MILP instances without prior expert-designed formulations, while preserving the structures and computational hardness of real-world datasets, simultaneously. Thus the generated instances can facilitate downstream tasks for enhancing MILP solvers under limited data availability. We design a suite of benchmarks to evaluate the quality of the generated MILP instances. Experiments demonstrate that our method can produce instances that closely resemble real-world datasets in terms of both structures and computational hardness. The deliverables are released at https://miralab-ustc.github.io/L2O-G2MILP.
|
2102.00287
|
Eva Vanmassenhove
|
Eva Vanmassenhove, Dimitar Shterionov, Matthew Gwilliam
|
Machine Translationese: Effects of Algorithmic Bias on Linguistic
Complexity in Machine Translation
| null | null | null | null |
cs.CL cs.AI cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Recent studies in the field of Machine Translation (MT) and Natural Language
Processing (NLP) have shown that existing models amplify biases observed in the
training data. The amplification of biases in language technology has mainly
been examined with respect to specific phenomena, such as gender bias. In this
work, we go beyond the study of gender in MT and investigate how bias
amplification might affect language in a broader sense. We hypothesize that the
'algorithmic bias', i.e. an exacerbation of frequently observed patterns in
combination with a loss of less frequent ones, not only exacerbates societal
biases present in current datasets but could also lead to an artificially
impoverished language: 'machine translationese'. We assess the linguistic
richness (on a lexical and morphological level) of translations created by
different data-driven MT paradigms - phrase-based statistical (PB-SMT) and
neural MT (NMT). Our experiments show that there is a loss of lexical and
morphological richness in the translations produced by all investigated MT
paradigms for two language pairs (EN<=>FR and EN<=>ES).
|
[
{
"created": "Sat, 30 Jan 2021 18:49:11 GMT",
"version": "v1"
}
] |
2021-02-02
|
[
[
"Vanmassenhove",
"Eva",
""
],
[
"Shterionov",
"Dimitar",
""
],
[
"Gwilliam",
"Matthew",
""
]
] |
Recent studies in the field of Machine Translation (MT) and Natural Language Processing (NLP) have shown that existing models amplify biases observed in the training data. The amplification of biases in language technology has mainly been examined with respect to specific phenomena, such as gender bias. In this work, we go beyond the study of gender in MT and investigate how bias amplification might affect language in a broader sense. We hypothesize that the 'algorithmic bias', i.e. an exacerbation of frequently observed patterns in combination with a loss of less frequent ones, not only exacerbates societal biases present in current datasets but could also lead to an artificially impoverished language: 'machine translationese'. We assess the linguistic richness (on a lexical and morphological level) of translations created by different data-driven MT paradigms - phrase-based statistical (PB-SMT) and neural MT (NMT). Our experiments show that there is a loss of lexical and morphological richness in the translations produced by all investigated MT paradigms for two language pairs (EN<=>FR and EN<=>ES).
|
0805.0873
|
EDA Publishing Association
|
Hela Boussetta (TIMA), S. Basrour (TIMA), M. Marzencki (TIMA)
|
Top-Down Behavioral Modeling Methodology of a Piezoelectric
Microgenerator For Integrated Power Harvesting Systems
|
Submitted on behalf of EDA Publishing Association
(http://irevues.inist.fr/handle/2042/16838)
|
Dans Symposium on Design, Test, Integration and Packaging of
MEMS/MOEMS - DTIP 2008, Nice : France (2008)
| null | null |
cs.OH
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this study, we developed a top-down methodology for behavioral and
structural modeling of multi-domain microsystems. We then validated this
methodology through a case study: a piezoelectric microgenerator. We also
demonstrated the effectiveness of the VHDL-AMS language, not only for modeling
at the behavioral and structural levels but also for writing physical models
that can predict experimental results. Finally, we validated these models by
presenting and discussing simulation results.
|
[
{
"created": "Wed, 7 May 2008 09:00:16 GMT",
"version": "v1"
}
] |
2008-12-18
|
[
[
"Boussetta",
"Hela",
"",
"TIMA"
],
[
"Basrour",
"S.",
"",
"TIMA"
],
[
"Marzencki",
"M.",
"",
"TIMA"
]
] |
In this study, we developed a top-down methodology for behavioral and structural modeling of multi-domain microsystems. We then validated this methodology through a case study: a piezoelectric microgenerator. We also demonstrated the effectiveness of the VHDL-AMS language, not only for modeling at the behavioral and structural levels but also for writing physical models that can predict experimental results. Finally, we validated these models by presenting and discussing simulation results.
|
2405.00885
|
Huai-An Su
|
Huai-an Su, Jiaxiang Geng, Liang Li, Xiaoqi Qin, Yanzhao Hou, Xin Fu
and Miao Pan
|
WHALE-FL: Wireless and Heterogeneity Aware Latency Efficient Federated
Learning over Mobile Devices via Adaptive Subnetwork Scheduling
| null | null | null | null |
cs.LG cs.NI eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As a popular distributed learning paradigm, federated learning (FL) over
mobile devices fosters numerous applications, while their practical deployment
is hindered by participating devices' computing and communication
heterogeneity. Some pioneering research efforts proposed to extract subnetworks
from the global model, and assign as large a subnetwork as possible to the
device for local training based on its full computing and communications
capacity. Although such fixed size subnetwork assignment enables FL training
over heterogeneous mobile devices, it is unaware of (i) the dynamic changes of
devices' communication and computing conditions and (ii) FL training progress
and its dynamic requirements of local training contributions, both of which may
cause very long FL training delay. Motivated by those dynamics, in this paper,
we develop a wireless and heterogeneity aware latency efficient FL (WHALE-FL)
approach to accelerate FL training through adaptive subnetwork scheduling.
Instead of sticking to the fixed size subnetwork, WHALE-FL introduces a novel
subnetwork selection utility function to capture device and FL training
dynamics, and guides the mobile device to adaptively select the subnetwork size
for local training based on (a) its computing and communication capacity, (b)
its dynamic computing and/or communication conditions, and (c) FL training
status and its corresponding requirements for local training contributions. Our
evaluation shows that, compared with peer designs, WHALE-FL effectively
accelerates FL training without sacrificing learning accuracy.
|
[
{
"created": "Wed, 1 May 2024 22:01:40 GMT",
"version": "v1"
}
] |
2024-05-03
|
[
[
"Su",
"Huai-an",
""
],
[
"Geng",
"Jiaxiang",
""
],
[
"Li",
"Liang",
""
],
[
"Qin",
"Xiaoqi",
""
],
[
"Hou",
"Yanzhao",
""
],
[
"Fu",
"Xin",
""
],
[
"Pan",
"Miao",
""
]
] |
As a popular distributed learning paradigm, federated learning (FL) over mobile devices fosters numerous applications, while their practical deployment is hindered by participating devices' computing and communication heterogeneity. Some pioneering research efforts proposed to extract subnetworks from the global model, and assign as large a subnetwork as possible to the device for local training based on its full computing and communications capacity. Although such fixed size subnetwork assignment enables FL training over heterogeneous mobile devices, it is unaware of (i) the dynamic changes of devices' communication and computing conditions and (ii) FL training progress and its dynamic requirements of local training contributions, both of which may cause very long FL training delay. Motivated by those dynamics, in this paper, we develop a wireless and heterogeneity aware latency efficient FL (WHALE-FL) approach to accelerate FL training through adaptive subnetwork scheduling. Instead of sticking to the fixed size subnetwork, WHALE-FL introduces a novel subnetwork selection utility function to capture device and FL training dynamics, and guides the mobile device to adaptively select the subnetwork size for local training based on (a) its computing and communication capacity, (b) its dynamic computing and/or communication conditions, and (c) FL training status and its corresponding requirements for local training contributions. Our evaluation shows that, compared with peer designs, WHALE-FL effectively accelerates FL training without sacrificing learning accuracy.
|
2402.15145
|
Hongxun Wu
|
Xin Lyu, Hongxun Wu, Junzhao Yang
|
The Cost of Parallelizing Boosting
|
appeared in SODA 2024
| null | null | null |
cs.LG cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the cost of parallelizing weak-to-strong boosting algorithms for
learning, following the recent work of Karbasi and Larsen. Our main results are
two-fold:
- First, we prove a tight lower bound, showing that even "slight"
parallelization of boosting requires an exponential blow-up in the complexity
of training.
Specifically, let $\gamma$ be the weak learner's advantage over random
guessing. The famous \textsc{AdaBoost} algorithm produces an accurate
hypothesis by interacting with the weak learner for $\tilde{O}(1 / \gamma^2)$
rounds where each round runs in polynomial time.
Karbasi and Larsen showed that "significant" parallelization must incur
exponential blow-up: Any boosting algorithm either interacts with the weak
learner for $\Omega(1 / \gamma)$ rounds or incurs an $\exp(d / \gamma)$ blow-up
in the complexity of training, where $d$ is the VC dimension of the hypothesis
class. We close the gap by showing that any boosting algorithm either has
$\Omega(1 / \gamma^2)$ rounds of interaction or incurs a smaller exponential
blow-up of $\exp(d)$.
- Complementing our lower bound, we show that there exists a boosting
algorithm using $\tilde{O}(1/(t \gamma^2))$ rounds, and only suffers a blow-up
of $\exp(d \cdot t^2)$.
Plugging in $t = \omega(1)$, this shows that the smaller blow-up in our lower
bound is tight. More interestingly, this provides the first trade-off between
the parallelism and the total work required for boosting.
|
[
{
"created": "Fri, 23 Feb 2024 07:03:52 GMT",
"version": "v1"
}
] |
2024-02-26
|
[
[
"Lyu",
"Xin",
""
],
[
"Wu",
"Hongxun",
""
],
[
"Yang",
"Junzhao",
""
]
] |
We study the cost of parallelizing weak-to-strong boosting algorithms for learning, following the recent work of Karbasi and Larsen. Our main results are two-fold: - First, we prove a tight lower bound, showing that even "slight" parallelization of boosting requires an exponential blow-up in the complexity of training. Specifically, let $\gamma$ be the weak learner's advantage over random guessing. The famous \textsc{AdaBoost} algorithm produces an accurate hypothesis by interacting with the weak learner for $\tilde{O}(1 / \gamma^2)$ rounds where each round runs in polynomial time. Karbasi and Larsen showed that "significant" parallelization must incur exponential blow-up: Any boosting algorithm either interacts with the weak learner for $\Omega(1 / \gamma)$ rounds or incurs an $\exp(d / \gamma)$ blow-up in the complexity of training, where $d$ is the VC dimension of the hypothesis class. We close the gap by showing that any boosting algorithm either has $\Omega(1 / \gamma^2)$ rounds of interaction or incurs a smaller exponential blow-up of $\exp(d)$. - Complementing our lower bound, we show that there exists a boosting algorithm using $\tilde{O}(1/(t \gamma^2))$ rounds, and only suffers a blow-up of $\exp(d \cdot t^2)$. Plugging in $t = \omega(1)$, this shows that the smaller blow-up in our lower bound is tight. More interestingly, this provides the first trade-off between the parallelism and the total work required for boosting.
|
2404.00728
|
Luis Morales-Navarro
|
Luis Morales-Navarro and Yasmin B. Kafai
|
Investigating Youths' Everyday Understanding of Machine Learning
Applications: a Knowledge-in-Pieces Perspective
|
accepted for publication at Proceedings of the International
Conference of the Learning Sciences 2024
| null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Despite recent calls for including artificial intelligence (AI) literacy in
K-12 education, not enough attention has been paid to studying youths' everyday
knowledge about machine learning (ML). Most research has examined how youths
attribute intelligence to AI/ML systems. Other studies have centered on youths'
theories and hypotheses about ML highlighting their misconceptions and how
these may hinder learning. However, research on conceptual change shows that
youths may not have coherent theories about scientific phenomena and instead
have knowledge pieces that can be productive for formal learning. We
investigate teens' everyday understanding of ML through a knowledge-in-pieces
perspective. Our analyses reveal that youths showed some understanding that ML
applications learn from training data and that applications recognize patterns
in input data and depending on these provide different outputs. We discuss how
these findings expand our knowledge base and implications for the design of
tools and activities to introduce youths to ML.
|
[
{
"created": "Sun, 31 Mar 2024 16:11:33 GMT",
"version": "v1"
}
] |
2024-04-02
|
[
[
"Morales-Navarro",
"Luis",
""
],
[
"Kafai",
"Yasmin B.",
""
]
] |
Despite recent calls for including artificial intelligence (AI) literacy in K-12 education, not enough attention has been paid to studying youths' everyday knowledge about machine learning (ML). Most research has examined how youths attribute intelligence to AI/ML systems. Other studies have centered on youths' theories and hypotheses about ML highlighting their misconceptions and how these may hinder learning. However, research on conceptual change shows that youths may not have coherent theories about scientific phenomena and instead have knowledge pieces that can be productive for formal learning. We investigate teens' everyday understanding of ML through a knowledge-in-pieces perspective. Our analyses reveal that youths showed some understanding that ML applications learn from training data and that applications recognize patterns in input data and depending on these provide different outputs. We discuss how these findings expand our knowledge base and implications for the design of tools and activities to introduce youths to ML.
|
2103.00497
|
Aryan Asadian
|
Aryan Asadian, Amirali Salehi-Abari
|
Distilling Knowledge via Intermediate Classifiers
|
8 pages, 2 figures
| null | null | null |
cs.LG cs.AI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The crux of knowledge distillation is to effectively train a resource-limited
student model with the guide of a pre-trained larger teacher model. However,
when there is a large difference between the model complexities of teacher and
student (i.e., capacity gap), knowledge distillation loses its strength in
transferring knowledge from the teacher to the student, thus training a weaker
student. To mitigate the impact of the capacity gap, we introduce knowledge
distillation via intermediate heads. By extending the intermediate layers of
the teacher (at various depths) with classifier heads, we cheaply acquire a
cohort of heterogeneous pre-trained teachers. The intermediate classifier heads
can all together be efficiently learned while freezing the backbone of the
pre-trained teacher. The cohort of teachers (including the original teacher)
co-teach the student simultaneously. Our experiments on various teacher-student
pairs and datasets have demonstrated that the proposed approach outperforms the
canonical knowledge distillation approach and its extensions.
|
[
{
"created": "Sun, 28 Feb 2021 12:52:52 GMT",
"version": "v1"
},
{
"created": "Mon, 31 May 2021 13:20:57 GMT",
"version": "v2"
}
] |
2021-06-01
|
[
[
"Asadian",
"Aryan",
""
],
[
"Salehi-Abari",
"Amirali",
""
]
] |
The crux of knowledge distillation is to effectively train a resource-limited student model with the guide of a pre-trained larger teacher model. However, when there is a large difference between the model complexities of teacher and student (i.e., capacity gap), knowledge distillation loses its strength in transferring knowledge from the teacher to the student, thus training a weaker student. To mitigate the impact of the capacity gap, we introduce knowledge distillation via intermediate heads. By extending the intermediate layers of the teacher (at various depths) with classifier heads, we cheaply acquire a cohort of heterogeneous pre-trained teachers. The intermediate classifier heads can all together be efficiently learned while freezing the backbone of the pre-trained teacher. The cohort of teachers (including the original teacher) co-teach the student simultaneously. Our experiments on various teacher-student pairs and datasets have demonstrated that the proposed approach outperforms the canonical knowledge distillation approach and its extensions.
|
1708.06805
|
Jordi Levy
|
Carlos Ans\'otegui, Maria Luisa Bonet, Jordi Levy
|
Scale-Free Random SAT Instances
| null |
Algorithms 15(6): 219 (2022)
|
10.3390/a15060219
| null |
cs.CC math.CO math.PR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We focus on the random generation of SAT instances that have properties
similar to real-world instances. It is known that many industrial instances,
even with a great number of variables, can be solved by a clever solver in a
reasonable amount of time. This is not possible, in general, with classical
randomly generated instances. We provide a different generation model of SAT
instances, called \emph{scale-free random SAT instances}. It is based on the
use of a non-uniform probability distribution $P(i)\sim i^{-\beta}$ to select
variable $i$, where $\beta$ is a parameter of the model. This results in
formulas where the number of occurrences $k$ of variables follows a power-law
distribution $P(k)\sim k^{-\delta}$ where $\delta = 1 + 1/\beta$. This property
has been observed in most real-world SAT instances. For $\beta=0$, our model
extends classical random SAT instances.
We prove the existence of a SAT-UNSAT phase transition phenomenon for
scale-free random 2-SAT instances with $\beta<1/2$ when the clause/variable
ratio is $m/n=\frac{1-2\beta}{(1-\beta)^2}$. We also prove that scale-free
random k-SAT instances are unsatisfiable with high probability when the number
of clauses exceeds $\omega(n^{(1-\beta)k})$. The proof of this result suggests
that, when $\beta>1-1/k$, the unsatisfiability of most formulas may be due to
small cores of clauses. Finally, we show how this model will allow us to
generate random instances similar to industrial instances, of interest for
testing purposes.
|
[
{
"created": "Wed, 12 Jul 2017 19:21:19 GMT",
"version": "v1"
},
{
"created": "Tue, 9 Apr 2019 16:49:29 GMT",
"version": "v2"
},
{
"created": "Wed, 17 Jul 2019 21:49:35 GMT",
"version": "v3"
}
] |
2023-03-14
|
[
[
"Ansótegui",
"Carlos",
""
],
[
"Bonet",
"Maria Luisa",
""
],
[
"Levy",
"Jordi",
""
]
] |
We focus on the random generation of SAT instances that have properties similar to real-world instances. It is known that many industrial instances, even with a great number of variables, can be solved by a clever solver in a reasonable amount of time. This is not possible, in general, with classical randomly generated instances. We provide a different generation model of SAT instances, called \emph{scale-free random SAT instances}. It is based on the use of a non-uniform probability distribution $P(i)\sim i^{-\beta}$ to select variable $i$, where $\beta$ is a parameter of the model. This results in formulas where the number of occurrences $k$ of variables follows a power-law distribution $P(k)\sim k^{-\delta}$ where $\delta = 1 + 1/\beta$. This property has been observed in most real-world SAT instances. For $\beta=0$, our model extends classical random SAT instances. We prove the existence of a SAT-UNSAT phase transition phenomenon for scale-free random 2-SAT instances with $\beta<1/2$ when the clause/variable ratio is $m/n=\frac{1-2\beta}{(1-\beta)^2}$. We also prove that scale-free random k-SAT instances are unsatisfiable with high probability when the number of clauses exceeds $\omega(n^{(1-\beta)k})$. The proof of this result suggests that, when $\beta>1-1/k$, the unsatisfiability of most formulas may be due to small cores of clauses. Finally, we show how this model will allow us to generate random instances similar to industrial instances, of interest for testing purposes.
|
1510.00921
|
Chunhua Shen
|
Lingqiao Liu, Chunhua Shen, Anton van den Hengel
|
Cross-convolutional-layer Pooling for Image Recognition
|
Fixed typos. Journal extension of arXiv:1411.7466. Accepted to IEEE
Transactions on Pattern Analysis and Machine Intelligence
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent studies have shown that a Deep Convolutional Neural Network (DCNN)
pretrained on a large image dataset can be used as a universal image
descriptor, and that doing so leads to impressive performance for a variety of
image classification tasks. Most of these studies adopt activations from a
single DCNN layer, usually the fully-connected layer, as the image
representation. In this paper, we propose a novel way to extract image
representations from two consecutive convolutional layers: one layer is
utilized for local feature extraction and the other serves as guidance to pool
the extracted features. By taking different viewpoints of convolutional layers,
we further develop two schemes to realize this idea. The first one directly
uses convolutional layers from a DCNN. The second one applies the pretrained
CNN on densely sampled image regions and treats the fully-connected activations
of each image region as convolutional feature activations. We then train
another convolutional layer on top of that as the pooling-guidance
convolutional layer. By applying our method to three popular visual
classification tasks, we find our first scheme tends to perform better on the
applications which need strong discrimination on subtle object patterns within
small regions while the latter excels in the cases that require discrimination
on category-level patterns. Overall, the proposed method achieves superior
performance over existing ways of extracting image representations from a DCNN.
|
[
{
"created": "Sun, 4 Oct 2015 10:27:36 GMT",
"version": "v1"
},
{
"created": "Thu, 9 Jun 2016 07:37:09 GMT",
"version": "v2"
},
{
"created": "Sun, 23 Oct 2016 05:48:16 GMT",
"version": "v3"
},
{
"created": "Wed, 7 Dec 2016 00:00:42 GMT",
"version": "v4"
},
{
"created": "Thu, 8 Dec 2016 01:31:05 GMT",
"version": "v5"
},
{
"created": "Thu, 22 Dec 2016 04:43:19 GMT",
"version": "v6"
}
] |
2016-12-23
|
[
[
"Liu",
"Lingqiao",
""
],
[
"Shen",
"Chunhua",
""
],
[
"Hengel",
"Anton van den",
""
]
] |
Recent studies have shown that a Deep Convolutional Neural Network (DCNN) pretrained on a large image dataset can be used as a universal image descriptor, and that doing so leads to impressive performance for a variety of image classification tasks. Most of these studies adopt activations from a single DCNN layer, usually the fully-connected layer, as the image representation. In this paper, we propose a novel way to extract image representations from two consecutive convolutional layers: one layer is utilized for local feature extraction and the other serves as guidance to pool the extracted features. By taking different viewpoints of convolutional layers, we further develop two schemes to realize this idea. The first one directly uses convolutional layers from a DCNN. The second one applies the pretrained CNN on densely sampled image regions and treats the fully-connected activations of each image region as convolutional feature activations. We then train another convolutional layer on top of that as the pooling-guidance convolutional layer. By applying our method to three popular visual classification tasks, we find our first scheme tends to perform better on the applications which need strong discrimination on subtle object patterns within small regions while the latter excels in the cases that require discrimination on category-level patterns. Overall, the proposed method achieves superior performance over existing ways of extracting image representations from a DCNN.
|
2403.15122
|
Sacha-\'Elie Ayoun
|
Sacha-\'Elie Ayoun, Xavier Denis, Petar Maksimovi\'c, Philippa Gardner
|
A hybrid approach to semi-automated Rust verification
|
22 pages, 8 figures, preprint
| null | null | null |
cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
While recent years have been witness to a large body of work on efficient and
automated verification of safe Rust code, enabled by the rich guarantees of the
Rust type system, much less progress has been made on reasoning about unsafe
code due to its unique complexities. We propose a hybrid approach to end-to-end
Rust verification in which powerful automated verification of safe Rust is
combined with targeted semi-automated verification of unsafe~Rust. To this end,
we present Gillian-Rust, a proof-of-concept semi-automated verification tool
that is able to reason about type safety and functional correctness of
unsafe~code. Built on top of the Gillian parametric compositional verification
platform, Gillian-Rust automates a rich separation logic for real-world Rust,
embedding the lifetime logic of RustBelt and the parametric prophecies of
RustHornBelt. Using the unique extensibility of Gillian, our novel encoding of
these features is fine-tuned to maximise automation and exposes a user-friendly
API, allowing for low-effort verification of unsafe code. We link Gillian-Rust
with Creusot, a state-of-the-art verifier for safe Rust, by providing a
systematic encoding of unsafe code specifications that Creusot may use but not
verify, demonstrating the feasibility of our hybrid~approach.
|
[
{
"created": "Fri, 22 Mar 2024 11:24:31 GMT",
"version": "v1"
}
] |
2024-03-25
|
[
[
"Ayoun",
"Sacha-Élie",
""
],
[
"Denis",
"Xavier",
""
],
[
"Maksimović",
"Petar",
""
],
[
"Gardner",
"Philippa",
""
]
] |
While recent years have been witness to a large body of work on efficient and automated verification of safe Rust code, enabled by the rich guarantees of the Rust type system, much less progress has been made on reasoning about unsafe code due to its unique complexities. We propose a hybrid approach to end-to-end Rust verification in which powerful automated verification of safe Rust is combined with targeted semi-automated verification of unsafe~Rust. To this end, we present Gillian-Rust, a proof-of-concept semi-automated verification tool that is able to reason about type safety and functional correctness of unsafe~code. Built on top of the Gillian parametric compositional verification platform, Gillian-Rust automates a rich separation logic for real-world Rust, embedding the lifetime logic of RustBelt and the parametric prophecies of RustHornBelt. Using the unique extensibility of Gillian, our novel encoding of these features is fine-tuned to maximise automation and exposes a user-friendly API, allowing for low-effort verification of unsafe code. We link Gillian-Rust with Creusot, a state-of-the-art verifier for safe Rust, by providing a systematic encoding of unsafe code specifications that Creusot may use but not verify, demonstrating the feasibility of our hybrid~approach.
|
0909.3648
|
Joel Ratsaby
|
Joel Ratsaby
|
Random scattering of bits by prediction
| null | null | null | null |
cs.AI cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We investigate a population of binary mistake sequences that result from
learning with parametric models of different order. We obtain estimates of
their error, algorithmic complexity and divergence from a purely random
Bernoulli sequence. We study the relationship of these variables to the
learner's information density parameter which is defined as the ratio between
the lengths of the compressed to uncompressed files that contain the learner's
decision rule. The results indicate that good learners have a low information
density $\rho$ while bad learners have a high $\rho$. Bad learners generate
mistake sequences that are atypically complex or diverge stochastically from a
purely random Bernoulli sequence. Good learners generate typically complex
sequences with low divergence from Bernoulli sequences and they include mistake
sequences generated by the Bayes optimal predictor. Based on the static
algorithmic interference model of \cite{Ratsaby_entropy} the learner here acts
as a static structure which "scatters" the bits of an input sequence (to be
predicted) in proportion to its information density $\rho$ thereby deforming
its randomness characteristics.
|
[
{
"created": "Sun, 20 Sep 2009 18:10:55 GMT",
"version": "v1"
},
{
"created": "Wed, 13 Oct 2010 19:03:58 GMT",
"version": "v2"
}
] |
2010-10-14
|
[
[
"Ratsaby",
"Joel",
""
]
] |
We investigate a population of binary mistake sequences that result from learning with parametric models of different order. We obtain estimates of their error, algorithmic complexity and divergence from a purely random Bernoulli sequence. We study the relationship of these variables to the learner's information density parameter which is defined as the ratio between the lengths of the compressed to uncompressed files that contain the learner's decision rule. The results indicate that good learners have a low information density $\rho$ while bad learners have a high $\rho$. Bad learners generate mistake sequences that are atypically complex or diverge stochastically from a purely random Bernoulli sequence. Good learners generate typically complex sequences with low divergence from Bernoulli sequences and they include mistake sequences generated by the Bayes optimal predictor. Based on the static algorithmic interference model of \cite{Ratsaby_entropy} the learner here acts as a static structure which "scatters" the bits of an input sequence (to be predicted) in proportion to its information density $\rho$ thereby deforming its randomness characteristics.
|
cs/0611048
|
Richard Mayr
|
Parosh Abdulla, Pritha Mahata, Richard Mayr
|
Dense-Timed Petri Nets: Checking Zenoness, Token liveness and
Boundedness
|
61 pages, 18 figures
|
Logical Methods in Computer Science, Volume 3, Issue 1 (February
7, 2007) lmcs:2223
|
10.2168/LMCS-3(1:1)2007
| null |
cs.LO
| null |
We consider Dense-Timed Petri Nets (TPN), an extension of Petri nets in which
each token is equipped with a real-valued clock and where the semantics is lazy
(i.e., enabled transitions need not fire; time can pass and disable
transitions). We consider the following verification problems for TPNs. (i)
Zenoness: whether there exists a zeno-computation from a given marking, i.e.,
an infinite computation which takes only a finite amount of time. We show
decidability of zenoness for TPNs, thus solving an open problem from [Escrig et
al.]. Furthermore, the related question if there exist arbitrarily fast
computations from a given marking is also decidable. On the other hand,
universal zenoness, i.e., the question if all infinite computations from a
given marking are zeno, is undecidable. (ii) Token liveness: whether a token is
alive in a marking, i.e., whether there is a computation from the marking which
eventually consumes the token. We show decidability of the problem by reducing
it to the coverability problem, which is decidable for TPNs. (iii) Boundedness:
whether the size of the reachable markings is bounded. We consider two versions
of the problem; namely semantic boundedness where only live tokens are taken
into consideration in the markings, and syntactic boundedness where also dead
tokens are considered. We show undecidability of semantic boundedness, while we
prove that syntactic boundedness is decidable through an extension of the
Karp-Miller algorithm.
|
[
{
"created": "Sat, 11 Nov 2006 00:08:46 GMT",
"version": "v1"
},
{
"created": "Tue, 23 Jan 2007 13:33:21 GMT",
"version": "v2"
}
] |
2017-01-11
|
[
[
"Abdulla",
"Parosh",
""
],
[
"Mahata",
"Pritha",
""
],
[
"Mayr",
"Richard",
""
]
] |
We consider Dense-Timed Petri Nets (TPN), an extension of Petri nets in which each token is equipped with a real-valued clock and where the semantics is lazy (i.e., enabled transitions need not fire; time can pass and disable transitions). We consider the following verification problems for TPNs. (i) Zenoness: whether there exists a zeno-computation from a given marking, i.e., an infinite computation which takes only a finite amount of time. We show decidability of zenoness for TPNs, thus solving an open problem from [Escrig et al.]. Furthermore, the related question if there exist arbitrarily fast computations from a given marking is also decidable. On the other hand, universal zenoness, i.e., the question if all infinite computations from a given marking are zeno, is undecidable. (ii) Token liveness: whether a token is alive in a marking, i.e., whether there is a computation from the marking which eventually consumes the token. We show decidability of the problem by reducing it to the coverability problem, which is decidable for TPNs. (iii) Boundedness: whether the size of the reachable markings is bounded. We consider two versions of the problem; namely semantic boundedness where only live tokens are taken into consideration in the markings, and syntactic boundedness where also dead tokens are considered. We show undecidability of semantic boundedness, while we prove that syntactic boundedness is decidable through an extension of the Karp-Miller algorithm.
|
1202.0533
|
Mark Wilde
|
Saikat Guha and Mark M. Wilde
|
Polar coding to achieve the Holevo capacity of a pure-loss optical
channel
|
5 pages, submission to the 2012 International Symposium on
Information Theory (ISIT 2012), Boston, MA, USA; v2 accepted to ISIT 2012
|
Proceedings of the 2012 IEEE International Symposium on
Information Theory (ISIT 2012), pages 546-550, Cambridge, MA, USA
|
10.1109/ISIT.2012.6284250
| null |
cs.IT math.IT quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the low-energy high-energy-efficiency regime of classical optical
communications---relevant to deep-space optical channels---there is a big gap
between reliable communication rates achievable via conventional optical
receivers and the ultimate (Holevo) capacity. Achieving the Holevo capacity
requires not only optimal codes but also receivers that make collective
measurements on long (modulated) codeword waveforms, and it is impossible to
implement these collective measurements via symbol-by-symbol detection along
with classical postprocessing. Here, we apply our recent results on the
classical-quantum polar code---the first near-explicit, linear,
symmetric-Holevo-rate achieving code---to the lossy optical channel, and we
show that it almost closes the entire gap to the Holevo capacity in the low
photon number regime. In contrast, Arikan's original polar codes, applied to
the DMC induced by the physical optical channel paired with any conceivable
structured optical receiver (including optical homodyne, heterodyne, or
direct-detection) fails to achieve the ultimate Holevo limit to channel
capacity. However, our polar code construction (which uses the quantum fidelity
as a channel parameter rather than the classical Bhattacharyya quantity to
choose the "good channels" in the polar-code construction), paired with a
quantum successive-cancellation receiver---which involves a sequence of
collective non-destructive binary projective measurements on the joint quantum
state of the received codeword waveform---can attain the Holevo limit, and can
hence in principle achieve higher rates than Arikan's polar code and decoder
directly applied to the optical channel. However, even a theoretical recipe for
construction of an optical realization of the quantum successive-cancellation
receiver remains an open question.
|
[
{
"created": "Thu, 2 Feb 2012 20:01:30 GMT",
"version": "v1"
},
{
"created": "Tue, 22 May 2012 12:22:05 GMT",
"version": "v2"
}
] |
2012-09-04
|
[
[
"Guha",
"Saikat",
""
],
[
"Wilde",
"Mark M.",
""
]
] |
In the low-energy high-energy-efficiency regime of classical optical communications---relevant to deep-space optical channels---there is a big gap between reliable communication rates achievable via conventional optical receivers and the ultimate (Holevo) capacity. Achieving the Holevo capacity requires not only optimal codes but also receivers that make collective measurements on long (modulated) codeword waveforms, and it is impossible to implement these collective measurements via symbol-by-symbol detection along with classical postprocessing. Here, we apply our recent results on the classical-quantum polar code---the first near-explicit, linear, symmetric-Holevo-rate achieving code---to the lossy optical channel, and we show that it almost closes the entire gap to the Holevo capacity in the low photon number regime. In contrast, Arikan's original polar codes, applied to the DMC induced by the physical optical channel paired with any conceivable structured optical receiver (including optical homodyne, heterodyne, or direct-detection) fails to achieve the ultimate Holevo limit to channel capacity. However, our polar code construction (which uses the quantum fidelity as a channel parameter rather than the classical Bhattacharyya quantity to choose the "good channels" in the polar-code construction), paired with a quantum successive-cancellation receiver---which involves a sequence of collective non-destructive binary projective measurements on the joint quantum state of the received codeword waveform---can attain the Holevo limit, and can hence in principle achieve higher rates than Arikan's polar code and decoder directly applied to the optical channel. However, even a theoretical recipe for construction of an optical realization of the quantum successive-cancellation receiver remains an open question.
|
1910.10307
|
Vahdat Abdelzad
|
Vahdat Abdelzad, Krzysztof Czarnecki, Rick Salay, Taylor Denounden,
Sachin Vernekar, Buu Phan
|
Detecting Out-of-Distribution Inputs in Deep Neural Networks Using an
Early-Layer Output
|
15 pages, 8 figures
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep neural networks achieve superior performance in challenging tasks such
as image classification. However, deep classifiers tend to incorrectly classify
out-of-distribution (OOD) inputs, which are inputs that do not belong to the
classifier training distribution. Several approaches have been proposed to
detect OOD inputs, but the detection task is still an ongoing challenge. In
this paper, we propose a new OOD detection approach that can be easily applied
to an existing classifier and does not need to have access to OOD samples. The
detector is a one-class classifier trained on the output of an early layer of
the original classifier fed with its original training set. We apply our
approach to several low- and high-dimensional datasets and compare it to the
state-of-the-art detection approaches. Our approach achieves substantially
better results over multiple metrics.
|
[
{
"created": "Wed, 23 Oct 2019 01:27:48 GMT",
"version": "v1"
}
] |
2019-10-24
|
[
[
"Abdelzad",
"Vahdat",
""
],
[
"Czarnecki",
"Krzysztof",
""
],
[
"Salay",
"Rick",
""
],
[
"Denounden",
"Taylor",
""
],
[
"Vernekar",
"Sachin",
""
],
[
"Phan",
"Buu",
""
]
] |
Deep neural networks achieve superior performance in challenging tasks such as image classification. However, deep classifiers tend to incorrectly classify out-of-distribution (OOD) inputs, which are inputs that do not belong to the classifier training distribution. Several approaches have been proposed to detect OOD inputs, but the detection task is still an ongoing challenge. In this paper, we propose a new OOD detection approach that can be easily applied to an existing classifier and does not need to have access to OOD samples. The detector is a one-class classifier trained on the output of an early layer of the original classifier fed with its original training set. We apply our approach to several low- and high-dimensional datasets and compare it to the state-of-the-art detection approaches. Our approach achieves substantially better results over multiple metrics.
|
cs/0703072
|
Paul Fodor
|
Paul Fodor
|
Domain Directed Dialogs for Decision Processes
| null | null | null | null |
cs.OH
| null |
The search for a standardized optimum way to communicate using natural
language dialog has involved a lot of research. However, due to the diversity
of communication domains, we think that this is extremely difficult to achieve
and different dialogue management techniques should be applied for different
situations. Our work presents the basis of a communication mechanism that
supports decision processes, is based on decision trees, and minimizes the
number of steps (turn-takes) in the dialogue. The initial dialog workflow is
automatically generated and the user's interaction with the system can also
change the decision tree and create new dialog paths with optimized cost. The
decision tree represents the chronological ordering of the actions (via the
parent-child relationship) and uses an object frame to represent the
information state (capturing the notion of context). This paper presents our
framework, the formalism for interaction and dialogue, and an evaluation of the
system compared to relevant dialog planning frameworks (i.e. finite state
diagrams, frame-based, information state and planning-based dialogue systems).
|
[
{
"created": "Thu, 15 Mar 2007 00:08:50 GMT",
"version": "v1"
},
{
"created": "Thu, 22 Mar 2007 14:54:41 GMT",
"version": "v2"
},
{
"created": "Tue, 27 Mar 2007 15:41:54 GMT",
"version": "v3"
}
] |
2007-05-23
|
[
[
"Fodor",
"Paul",
""
]
] |
The search for a standardized optimum way to communicate using natural language dialog has involved a lot of research. However, due to the diversity of communication domains, we think that this is extremely difficult to achieve and different dialogue management techniques should be applied for different situations. Our work presents the basis of a communication mechanism that supports decision processes, is based on decision trees, and minimizes the number of steps (turn-takes) in the dialogue. The initial dialog workflow is automatically generated and the user's interaction with the system can also change the decision tree and create new dialog paths with optimized cost. The decision tree represents the chronological ordering of the actions (via the parent-child relationship) and uses an object frame to represent the information state (capturing the notion of context). This paper presents our framework, the formalism for interaction and dialogue, and an evaluation of the system compared to relevant dialog planning frameworks (i.e. finite state diagrams, frame-based, information state and planning-based dialogue systems).
|
1410.0265
|
Chao Li
|
Chao Li, Michael Hay, Gerome Miklau, Yue Wang
|
A Data- and Workload-Aware Algorithm for Range Queries Under
Differential Privacy
|
VLDB 2014
| null | null | null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We describe a new algorithm for answering a given set of range queries under
$\epsilon$-differential privacy which often achieves substantially lower error
than competing methods. Our algorithm satisfies differential privacy by adding
noise that is adapted to the input data and to the given query set. We first
privately learn a partitioning of the domain into buckets that suit the input
data well. Then we privately estimate counts for each bucket, doing so in a
manner well-suited for the given query set. Since the performance of the
algorithm depends on the input database, we evaluate it on a wide range of real
datasets, showing that we can achieve the benefits of data-dependence on both
"easy" and "hard" databases.
|
[
{
"created": "Wed, 1 Oct 2014 15:56:42 GMT",
"version": "v1"
}
] |
2014-10-02
|
[
[
"Li",
"Chao",
""
],
[
"Hay",
"Michael",
""
],
[
"Miklau",
"Gerome",
""
],
[
"Wang",
"Yue",
""
]
] |
We describe a new algorithm for answering a given set of range queries under $\epsilon$-differential privacy which often achieves substantially lower error than competing methods. Our algorithm satisfies differential privacy by adding noise that is adapted to the input data and to the given query set. We first privately learn a partitioning of the domain into buckets that suit the input data well. Then we privately estimate counts for each bucket, doing so in a manner well-suited for the given query set. Since the performance of the algorithm depends on the input database, we evaluate it on a wide range of real datasets, showing that we can achieve the benefits of data-dependence on both "easy" and "hard" databases.
|
1806.06084
|
Ramik Sadana
|
Ramik Sadana, Meeshu Agnihotri, John Stasko
|
Touching Data: A Discoverability-based Evaluation of a Visualization
Interface for Tablet Computers
|
10 pages, 3 figures, 7 tables
| null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While a number of touch-based visualization systems have appeared in recent
years, relatively little work has been done to evaluate these systems. The
prevailing methods compare these systems to desktop-class applications or
utilize traditional training-based usability studies. We argue that existing
studies, while useful, fail to address a key aspect of mobile application usage
- initial impression and discoverability-driven usability. Over the past few
years, we have developed a tablet-based visualization system, Tangere, for
analyzing tabular data in a multiple coordinated view configuration. This
article describes a discoverability-based user study of Tangere in which the
system is compared to a commercially available visualization system for tablets
- Tableau's Vizable. The study highlights aspects of each system's design that
resonate with study participants, and we reflect upon those findings to
identify design principles for future tablet-based data visualization systems.
|
[
{
"created": "Fri, 15 Jun 2018 18:17:34 GMT",
"version": "v1"
}
] |
2018-06-19
|
[
[
"Sadana",
"Ramik",
""
],
[
"Agnihotri",
"Meeshu",
""
],
[
"Stasko",
"John",
""
]
] |
While a number of touch-based visualization systems have appeared in recent years, relatively little work has been done to evaluate these systems. The prevailing methods compare these systems to desktop-class applications or utilize traditional training-based usability studies. We argue that existing studies, while useful, fail to address a key aspect of mobile application usage - initial impression and discoverability-driven usability. Over the past few years, we have developed a tablet-based visualization system, Tangere, for analyzing tabular data in a multiple coordinated view configuration. This article describes a discoverability-based user study of Tangere in which the system is compared to a commercially available visualization system for tablets - Tableau's Vizable. The study highlights aspects of each system's design that resonate with study participants, and we reflect upon those findings to identify design principles for future tablet-based data visualization systems.
|
1611.07804
|
Nikita Astrakhantsev
|
N. Astrakhantsev
|
ATR4S: Toolkit with State-of-the-art Automatic Terms Recognition Methods
in Scala
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automatically recognized terminology is widely used for various
domain-specific text processing tasks, such as machine translation,
information retrieval, or sentiment analysis. However, there is still no
agreement on which methods are best suited for particular settings and,
moreover, there is no reliable comparison of already developed methods. We
believe that one of the main reasons is the lack of implementations of
state-of-the-art methods, which are usually non-trivial to recreate. In order
to address these issues, we present ATR4S, open-source software written in
Scala that comprises more than 15 methods for automatic terminology
recognition (ATR) and implements the whole pipeline from text document
preprocessing to term candidate collection, term candidate scoring, and
finally, term candidate ranking. It is a highly scalable, modular, and
configurable tool with support for automatic caching. We also compare 10
state-of-the-art methods on 7 open datasets by average precision and
processing time. The experimental comparison reveals that no single method
demonstrates the best average precision for all datasets and that other
available tools for ATR do not contain the best methods.
|
[
{
"created": "Wed, 23 Nov 2016 14:14:52 GMT",
"version": "v1"
}
] |
2016-11-24
|
[
[
"Astrakhantsev",
"N.",
""
]
] |
Automatically recognized terminology is widely used for various domain-specific text processing tasks, such as machine translation, information retrieval, or sentiment analysis. However, there is still no agreement on which methods are best suited for particular settings and, moreover, there is no reliable comparison of already developed methods. We believe that one of the main reasons is the lack of implementations of state-of-the-art methods, which are usually non-trivial to recreate. In order to address these issues, we present ATR4S, open-source software written in Scala that comprises more than 15 methods for automatic terminology recognition (ATR) and implements the whole pipeline from text document preprocessing to term candidate collection, term candidate scoring, and finally, term candidate ranking. It is a highly scalable, modular, and configurable tool with support for automatic caching. We also compare 10 state-of-the-art methods on 7 open datasets by average precision and processing time. The experimental comparison reveals that no single method demonstrates the best average precision for all datasets and that other available tools for ATR do not contain the best methods.
|
2203.08037
|
Yang Yang
|
Yang Yang, Xibai Lou, Changhyun Choi
|
Interactive Robotic Grasping with Attribute-Guided Disambiguation
|
Accepted to the IEEE International Conference on Robotics and
Automation (ICRA 2022). Project page:
https://sites.google.com/umn.edu/attr-disam
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Interactive robotic grasping using natural language is one of the most
fundamental tasks in human-robot interaction. However, language can be a source
of ambiguity, particularly when there are ambiguous visual or linguistic
contents. This paper investigates the use of object attributes in
disambiguation and develops an interactive grasping system capable of
effectively resolving ambiguities via dialogues. Our approach first predicts
target scores and attribute scores through vision-and-language grounding. To
handle ambiguous objects and commands, we propose an attribute-guided
formulation of the partially observable Markov decision process (Attr-POMDP)
for disambiguation. The Attr-POMDP utilizes target and attribute scores as the
observation model to calculate the expected return of an attribute-based (e.g.,
"what is the color of the target, red or green?") or a pointing-based (e.g.,
"do you mean this one?") question. Our disambiguation module runs in real time
on a real robot, and the interactive grasping system achieves a 91.43\%
selection accuracy in the real-robot experiments, outperforming several
baselines by large margins.
|
[
{
"created": "Tue, 15 Mar 2022 16:17:36 GMT",
"version": "v1"
}
] |
2022-03-16
|
[
[
"Yang",
"Yang",
""
],
[
"Lou",
"Xibai",
""
],
[
"Choi",
"Changhyun",
""
]
] |
Interactive robotic grasping using natural language is one of the most fundamental tasks in human-robot interaction. However, language can be a source of ambiguity, particularly when there are ambiguous visual or linguistic contents. This paper investigates the use of object attributes in disambiguation and develops an interactive grasping system capable of effectively resolving ambiguities via dialogues. Our approach first predicts target scores and attribute scores through vision-and-language grounding. To handle ambiguous objects and commands, we propose an attribute-guided formulation of the partially observable Markov decision process (Attr-POMDP) for disambiguation. The Attr-POMDP utilizes target and attribute scores as the observation model to calculate the expected return of an attribute-based (e.g., "what is the color of the target, red or green?") or a pointing-based (e.g., "do you mean this one?") question. Our disambiguation module runs in real time on a real robot, and the interactive grasping system achieves a 91.43\% selection accuracy in the real-robot experiments, outperforming several baselines by large margins.
|
2402.15392
|
Filippo Lazzati
|
Filippo Lazzati, Mirco Mutti, Alberto Maria Metelli
|
Offline Inverse RL: New Solution Concepts and Provably Efficient
Algorithms
|
International Conference on Machine Learning 41 (ICML 2024)
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Inverse reinforcement learning (IRL) aims to recover the reward function of
an expert agent from demonstrations of behavior. It is well-known that the IRL
problem is fundamentally ill-posed, i.e., many reward functions can explain the
demonstrations. For this reason, IRL has been recently reframed in terms of
estimating the feasible reward set (Metelli et al., 2021), thus, postponing the
selection of a single reward. However, so far, the available formulations and
algorithmic solutions have been proposed and analyzed mainly for the online
setting, where the learner can interact with the environment and query the
expert at will. This is clearly unrealistic in most practical applications,
where the availability of an offline dataset is a much more common scenario. In
this paper, we introduce a novel notion of feasible reward set capturing the
opportunities and limitations of the offline setting and we analyze the
complexity of its estimation. This requires the introduction of an original
learning framework that copes with the intrinsic difficulty of the setting, for
which the data coverage is not under control. Then, we propose two
computationally and statistically efficient algorithms, IRLO and PIRLO, for
addressing the problem. In particular, the latter adopts a specific form of
pessimism to enforce the novel desirable property of inclusion monotonicity of
the delivered feasible set. With this work, we aim to provide a panorama of the
challenges of the offline IRL problem and how they can be fruitfully addressed.
|
[
{
"created": "Fri, 23 Feb 2024 15:49:46 GMT",
"version": "v1"
},
{
"created": "Thu, 6 Jun 2024 06:49:52 GMT",
"version": "v2"
}
] |
2024-06-07
|
[
[
"Lazzati",
"Filippo",
""
],
[
"Mutti",
"Mirco",
""
],
[
"Metelli",
"Alberto Maria",
""
]
] |
Inverse reinforcement learning (IRL) aims to recover the reward function of an expert agent from demonstrations of behavior. It is well-known that the IRL problem is fundamentally ill-posed, i.e., many reward functions can explain the demonstrations. For this reason, IRL has been recently reframed in terms of estimating the feasible reward set (Metelli et al., 2021), thus, postponing the selection of a single reward. However, so far, the available formulations and algorithmic solutions have been proposed and analyzed mainly for the online setting, where the learner can interact with the environment and query the expert at will. This is clearly unrealistic in most practical applications, where the availability of an offline dataset is a much more common scenario. In this paper, we introduce a novel notion of feasible reward set capturing the opportunities and limitations of the offline setting and we analyze the complexity of its estimation. This requires the introduction of an original learning framework that copes with the intrinsic difficulty of the setting, for which the data coverage is not under control. Then, we propose two computationally and statistically efficient algorithms, IRLO and PIRLO, for addressing the problem. In particular, the latter adopts a specific form of pessimism to enforce the novel desirable property of inclusion monotonicity of the delivered feasible set. With this work, we aim to provide a panorama of the challenges of the offline IRL problem and how they can be fruitfully addressed.
|
2401.00781
|
Shuang Li
|
Shuang Li, Ziyuan Pu, Zhiyong Cui, Seunghyeon Lee, Xiucheng Guo, Dong
Ngoduy
|
Inferring Heterogeneous Treatment Effects of Crashes on Highway Traffic:
A Doubly Robust Causal Machine Learning Approach
|
38 pages, 13 figures, 8 tables
| null | null | null |
cs.LG stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
Highway traffic crashes exert a considerable impact on both transportation
systems and the economy. In this context, accurate and dependable emergency
responses are crucial for effective traffic management. However, the influence
of crashes on traffic status varies across diverse factors and may be biased
due to selection bias. Therefore, there arises a necessity to accurately
estimate the heterogeneous causal effects of crashes, thereby providing
essential insights to facilitate individual-level emergency decision-making.
This paper proposes a novel causal machine learning framework to estimate the
causal effect of different types of crashes on highway speed. The Neyman-Rubin
Causal Model (RCM) is employed to formulate this problem from a causal
perspective. The Conditional Shapley Value Index (CSVI) is proposed based on
causal graph theory to filter adverse variables, and the Structural Causal
Model (SCM) is then adopted to define the statistical estimand for causal
effects. The treatment effects are estimated by Doubly Robust Learning (DRL)
methods, which combine doubly robust causal inference with classification and
regression machine learning models. Experimental results from 4815 crashes on
Highway Interstate 5 in Washington State reveal the heterogeneous treatment
effects of crashes at varying distances and durations. The rear-end crashes
cause more severe congestion and longer durations than other types of crashes,
and the sideswipe crashes have the longest delayed impact. Additionally, the
findings show that rear-end crashes affect traffic greater at night, while
crash to objects has the most significant influence during peak hours.
Statistical hypothesis tests, error metrics based on matched "counterfactual
outcomes", and sensitive analyses are employed for assessment, and the results
validate the accuracy and effectiveness of our method.
|
[
{
"created": "Mon, 1 Jan 2024 15:03:14 GMT",
"version": "v1"
}
] |
2024-01-02
|
[
[
"Li",
"Shuang",
""
],
[
"Pu",
"Ziyuan",
""
],
[
"Cui",
"Zhiyong",
""
],
[
"Lee",
"Seunghyeon",
""
],
[
"Guo",
"Xiucheng",
""
],
[
"Ngoduy",
"Dong",
""
]
] |
Highway traffic crashes exert a considerable impact on both transportation systems and the economy. In this context, accurate and dependable emergency responses are crucial for effective traffic management. However, the influence of crashes on traffic status varies across diverse factors and may be biased due to selection bias. Therefore, there arises a necessity to accurately estimate the heterogeneous causal effects of crashes, thereby providing essential insights to facilitate individual-level emergency decision-making. This paper proposes a novel causal machine learning framework to estimate the causal effect of different types of crashes on highway speed. The Neyman-Rubin Causal Model (RCM) is employed to formulate this problem from a causal perspective. The Conditional Shapley Value Index (CSVI) is proposed based on causal graph theory to filter adverse variables, and the Structural Causal Model (SCM) is then adopted to define the statistical estimand for causal effects. The treatment effects are estimated by Doubly Robust Learning (DRL) methods, which combine doubly robust causal inference with classification and regression machine learning models. Experimental results from 4815 crashes on Highway Interstate 5 in Washington State reveal the heterogeneous treatment effects of crashes at varying distances and durations. Rear-end crashes cause more severe congestion and longer durations than other types of crashes, and sideswipe crashes have the longest delayed impact. Additionally, the findings show that rear-end crashes affect traffic more severely at night, while crashes into objects have the most significant influence during peak hours. Statistical hypothesis tests, error metrics based on matched "counterfactual outcomes", and sensitivity analyses are employed for assessment, and the results validate the accuracy and effectiveness of our method.
|
2211.13853
|
Sutanay Choudhury
|
Hatem Helal, Jesun Firoz, Jenna Bilbrey, Mario Michael Krell, Tom
Murray, Ang Li, Sotiris Xantheas, Sutanay Choudhury
|
Extreme Acceleration of Graph Neural Network-based Prediction Models for
Quantum Chemistry
| null | null | null | null |
cs.LG cs.AR physics.chem-ph
|
http://creativecommons.org/licenses/by/4.0/
|
Molecular property calculations are the bedrock of chemical physics.
High-fidelity \textit{ab initio} modeling techniques for computing the
molecular properties can be prohibitively expensive, and motivate the
development of machine-learning models that make the same predictions more
efficiently. Training graph neural networks over large molecular databases
introduces unique computational challenges such as the need to process millions
of small graphs with variable size and support communication patterns that are
distinct from learning over large graphs such as social networks. This paper
demonstrates a novel hardware-software co-design approach to scale up the
training of graph neural networks for molecular property prediction. We
introduce an algorithm to coalesce the batches of molecular graphs into fixed
size packs to eliminate redundant computation and memory associated with
alternative padding techniques and improve throughput via minimizing
communication. We demonstrate the effectiveness of our co-design approach by
providing an implementation of a well-established molecular property prediction
model on the Graphcore Intelligence Processing Units (IPU). We evaluate the
training performance on multiple molecular graph databases with varying degrees
of graph counts, sizes and sparsity. We demonstrate that such a co-design
approach can reduce the training time of such molecular property prediction
models from days to less than two hours, opening new possibilities for
AI-driven scientific discovery.
|
[
{
"created": "Fri, 25 Nov 2022 01:30:18 GMT",
"version": "v1"
}
] |
2022-11-28
|
[
[
"Helal",
"Hatem",
""
],
[
"Firoz",
"Jesun",
""
],
[
"Bilbrey",
"Jenna",
""
],
[
"Krell",
"Mario Michael",
""
],
[
"Murray",
"Tom",
""
],
[
"Li",
"Ang",
""
],
[
"Xantheas",
"Sotiris",
""
],
[
"Choudhury",
"Sutanay",
""
]
] |
Molecular property calculations are the bedrock of chemical physics. High-fidelity \textit{ab initio} modeling techniques for computing the molecular properties can be prohibitively expensive, and motivate the development of machine-learning models that make the same predictions more efficiently. Training graph neural networks over large molecular databases introduces unique computational challenges such as the need to process millions of small graphs with variable size and support communication patterns that are distinct from learning over large graphs such as social networks. This paper demonstrates a novel hardware-software co-design approach to scale up the training of graph neural networks for molecular property prediction. We introduce an algorithm to coalesce the batches of molecular graphs into fixed size packs to eliminate redundant computation and memory associated with alternative padding techniques and improve throughput via minimizing communication. We demonstrate the effectiveness of our co-design approach by providing an implementation of a well-established molecular property prediction model on the Graphcore Intelligence Processing Units (IPU). We evaluate the training performance on multiple molecular graph databases with varying degrees of graph counts, sizes and sparsity. We demonstrate that such a co-design approach can reduce the training time of such molecular property prediction models from days to less than two hours, opening new possibilities for AI-driven scientific discovery.
|
2312.07106
|
Michael Unterkalmsteiner
|
Eriks Klotins, Michael Unterkalmsteiner, Panagiota Chatzipetrou, Tony
Gorschek, Rafael Prikladnicki, Nirnaya Tripathi, Leandro Bento Pompermaier
|
A Progression Model of Software Engineering Goals, Challenges, and
Practices in Start-Ups
| null |
IEEE Trans. Software Eng. 47(3): 498-521 (2021)
|
10.1109/TSE.2019.2900213
| null |
cs.SE
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Context: Software start-ups are emerging as suppliers of innovation and
software-intensive products. However, traditional software engineering
practices are not evaluated in this context, nor adapted to the goals and
challenges of start-ups. As a result, there is insufficient support for software
engineering in the start-up context. Objective: We aim to collect data related
to engineering goals, challenges, and practices in start-up companies to
ascertain trends and patterns characterizing engineering work in start-ups.
Such data allows researchers to understand better how goals and challenges are
related to practices. This understanding can then inform future studies aimed
at designing solutions addressing those goals and challenges. Besides, these
trends and patterns can be useful for practitioners to make more informed
decisions in their engineering practice. Method: We use a case survey method to
gather first-hand, in-depth experiences from a large sample of software
start-ups. We use open coding and cross-case analysis to describe and identify
patterns, and corroborate the findings with statistical analysis. Results: We
analyze 84 start-up cases and identify 16 goals, 9 challenges, and 16
engineering practices that are common among start-ups. We have mapped these
goals, challenges, and practices to start-up life-cycle stages (inception,
stabilization, growth, and maturity), thus creating a progression model that
guides software engineering efforts in start-ups. Conclusions: We conclude
that start-ups to a large extent face the same challenges and use the same
practices as established companies. However, the primary software engineering
challenge in start-ups is to evolve multiple process areas at once, with
little margin for serious errors.
|
[
{
"created": "Tue, 12 Dec 2023 09:36:43 GMT",
"version": "v1"
}
] |
2023-12-13
|
[
[
"Klotins",
"Eriks",
""
],
[
"Unterkalmsteiner",
"Michael",
""
],
[
"Chatzipetrou",
"Panagiota",
""
],
[
"Gorschek",
"Tony",
""
],
[
"Prikladnicki",
"Rafael",
""
],
[
"Tripathi",
"Nirnaya",
""
],
[
"Pompermaier",
"Leandro Bento",
""
]
] |
Context: Software start-ups are emerging as suppliers of innovation and software-intensive products. However, traditional software engineering practices are not evaluated in this context, nor adapted to the goals and challenges of start-ups. As a result, there is insufficient support for software engineering in the start-up context. Objective: We aim to collect data related to engineering goals, challenges, and practices in start-up companies to ascertain trends and patterns characterizing engineering work in start-ups. Such data allows researchers to understand better how goals and challenges are related to practices. This understanding can then inform future studies aimed at designing solutions addressing those goals and challenges. Besides, these trends and patterns can be useful for practitioners to make more informed decisions in their engineering practice. Method: We use a case survey method to gather first-hand, in-depth experiences from a large sample of software start-ups. We use open coding and cross-case analysis to describe and identify patterns, and corroborate the findings with statistical analysis. Results: We analyze 84 start-up cases and identify 16 goals, 9 challenges, and 16 engineering practices that are common among start-ups. We have mapped these goals, challenges, and practices to start-up life-cycle stages (inception, stabilization, growth, and maturity), thus creating a progression model that guides software engineering efforts in start-ups. Conclusions: We conclude that start-ups to a large extent face the same challenges and use the same practices as established companies. However, the primary software engineering challenge in start-ups is to evolve multiple process areas at once, with little margin for serious errors.
|
1811.06992
|
Chris Ying
|
Chris Ying, Sameer Kumar, Dehao Chen, Tao Wang, Youlong Cheng
|
Image Classification at Supercomputer Scale
|
Presented as part of Systems for ML Workshop @ NIPS 2018
| null | null | null |
cs.LG cs.DC stat.ML
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Deep learning is extremely computationally intensive, and hardware vendors
have responded by building faster accelerators in large clusters. Training deep
learning models at petaFLOPS scale requires overcoming both algorithmic and
systems software challenges. In this paper, we discuss three systems-related
optimizations: (1) distributed batch normalization to control per-replica batch
sizes, (2) input pipeline optimizations to sustain model throughput, and (3)
2-D torus all-reduce to speed up gradient summation. We combine these
optimizations to train ResNet-50 on ImageNet to 76.3% accuracy in 2.2 minutes
on a 1024-chip TPU v3 Pod with a training throughput of over 1.05 million
images/second and no accuracy drop.
|
[
{
"created": "Fri, 16 Nov 2018 19:01:40 GMT",
"version": "v1"
},
{
"created": "Sun, 2 Dec 2018 01:30:42 GMT",
"version": "v2"
}
] |
2018-12-04
|
[
[
"Ying",
"Chris",
""
],
[
"Kumar",
"Sameer",
""
],
[
"Chen",
"Dehao",
""
],
[
"Wang",
"Tao",
""
],
[
"Cheng",
"Youlong",
""
]
] |
Deep learning is extremely computationally intensive, and hardware vendors have responded by building faster accelerators in large clusters. Training deep learning models at petaFLOPS scale requires overcoming both algorithmic and systems software challenges. In this paper, we discuss three systems-related optimizations: (1) distributed batch normalization to control per-replica batch sizes, (2) input pipeline optimizations to sustain model throughput, and (3) 2-D torus all-reduce to speed up gradient summation. We combine these optimizations to train ResNet-50 on ImageNet to 76.3% accuracy in 2.2 minutes on a 1024-chip TPU v3 Pod with a training throughput of over 1.05 million images/second and no accuracy drop.
|
2303.05919
|
Yuxin Su
|
Zhilu Lian, Yangzi Li, Zhixiang Chen, Shiwen Shan, Baoxin Han, Yuxin
Su
|
eBPF-based Working Set Size Estimation in Memory Management
|
8 pages, 6 figures
| null |
10.1109/ICSS55994.2022.00036
| null |
cs.PF cs.AR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Working set size (WSS) estimation is of great significance for improving the
efficiency of program execution and memory arrangement in modern operating
systems. Previous work has proposed several methods to estimate WSS,
including self-ballooning, Zballooning, and so on. However, these methods,
which are based on virtual machines, usually cause a large overhead. Thus,
using those methods to estimate WSS is impractical. In this paper, we propose
a novel framework to efficiently estimate WSS with eBPF (extended Berkeley
Packet Filter), a cutting-edge technology that monitors and filters data by
being attached to the kernel. With an eBPF program pinned into the kernel, we
obtain the number of page faults and other information about memory
allocation. Moreover, we collect WSS via a vanilla tool to train a predictive
model with LightGBM, a tool that performs well at generating decision trees
over continuous values. The experimental results illustrate that our
framework can estimate WSS precisely with a 98.5\% reduction in overhead
compared to traditional methods.
|
[
{
"created": "Tue, 17 Jan 2023 03:12:35 GMT",
"version": "v1"
}
] |
2023-03-13
|
[
[
"Lian",
"Zhilu",
""
],
[
"Li",
"Yangzi",
""
],
[
"Chen",
"Zhixiang",
""
],
[
"Shan",
"Shiwen",
""
],
[
"Han",
"Baoxin",
""
],
[
"Su",
"Yuxin",
""
]
] |
Working set size (WSS) estimation is of great significance for improving the efficiency of program execution and memory arrangement in modern operating systems. Previous work has proposed several methods to estimate WSS, including self-ballooning, Zballooning, and so on. However, these methods, which are based on virtual machines, usually cause a large overhead. Thus, using those methods to estimate WSS is impractical. In this paper, we propose a novel framework to efficiently estimate WSS with eBPF (extended Berkeley Packet Filter), a cutting-edge technology that monitors and filters data by being attached to the kernel. With an eBPF program pinned into the kernel, we obtain the number of page faults and other information about memory allocation. Moreover, we collect WSS via a vanilla tool to train a predictive model with LightGBM, a tool that performs well at generating decision trees over continuous values. The experimental results illustrate that our framework can estimate WSS precisely with a 98.5\% reduction in overhead compared to traditional methods.
|
1208.4632
|
EPTCS
|
Minas Charalambides (University of Illinois at Urbana-Champaign),
Peter Dinges (University of Illinois at Urbana-Champaign), Gul Agha
(University of Illinois at Urbana-Champaign)
|
Parameterized Concurrent Multi-Party Session Types
|
In Proceedings FOCLASA 2012, arXiv:1208.4327
|
EPTCS 91, 2012, pp. 16-30
|
10.4204/EPTCS.91.2
| null |
cs.PL cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Session types have been proposed as a means of statically verifying
implementations of communication protocols. Although prior work has been
successful in verifying some classes of protocols, it does not cope well with
parameterized, multi-actor scenarios with inherent asynchrony. For example, the
sliding window protocol is inexpressible in previously proposed session type
systems. This paper describes System-A, a new typing language which overcomes
many of the expressiveness limitations of prior work. System-A explicitly
supports asynchrony and parallelism, as well as multiple forms of
parameterization. We define System-A and show how it can be used for the static
verification of a large class of asynchronous communication protocols.
|
[
{
"created": "Wed, 22 Aug 2012 21:58:46 GMT",
"version": "v1"
}
] |
2012-08-24
|
[
[
"Charalambides",
"Minas",
"",
"University of Illinois at Urbana-Champaign"
],
[
"Dinges",
"Peter",
"",
"University of Illinois at Urbana-Champaign"
],
[
"Agha",
"Gul",
"",
"University of Illinois at Urbana-Champaign"
]
] |
Session types have been proposed as a means of statically verifying implementations of communication protocols. Although prior work has been successful in verifying some classes of protocols, it does not cope well with parameterized, multi-actor scenarios with inherent asynchrony. For example, the sliding window protocol is inexpressible in previously proposed session type systems. This paper describes System-A, a new typing language which overcomes many of the expressiveness limitations of prior work. System-A explicitly supports asynchrony and parallelism, as well as multiple forms of parameterization. We define System-A and show how it can be used for the static verification of a large class of asynchronous communication protocols.
|
0801.2588
|
K. Raj Kumar
|
K. Raj Kumar and Giuseppe Caire
|
Coding and Decoding for the Dynamic Decode and Forward Relay Protocol
|
Submitted to the IEEE Transactions on Information Theory
| null | null | null |
cs.IT math.IT
| null |
We study the Dynamic Decode and Forward (DDF) protocol for a single
half-duplex relay, single-antenna channel with quasi-static fading. The DDF
protocol is well-known and has been analyzed in terms of the
Diversity-Multiplexing Tradeoff (DMT) in the infinite block length limit. We
characterize the finite block length DMT and give new explicit code
constructions. The finite block length analysis illuminates a few key aspects
that have been neglected in the previous literature: 1) we show that one
dominating cause of degradation with respect to the infinite block length
regime is the event of decoding error at the relay; 2) we explicitly take into
account the fact that the destination does not generally know a priori the
relay decision time at which the relay switches from listening to transmit
mode. Both the above problems can be tackled by a careful design of the
decoding algorithm. In particular, we introduce a decision rejection criterion
at the relay based on Forney's decision rule (a variant of the Neyman-Pearson
rule), such that the relay triggers transmission only when its decision is
reliable. Also, we show that a receiver based on the Generalized Likelihood
Ratio Test rule that jointly decodes the relay decision time and the
information message achieves the optimal DMT. Our results show that no cyclic
redundancy check (CRC) for error detection or additional protocol overhead to
communicate the decision time are needed for DDF. Finally, we investigate the
use of minimum mean squared error generalized decision feedback equalizer
(MMSE-GDFE) lattice decoding at both the relay and the destination, and show
that it provides near optimal performance at moderate complexity.
|
[
{
"created": "Wed, 16 Jan 2008 23:05:12 GMT",
"version": "v1"
}
] |
2008-01-18
|
[
[
"Kumar",
"K. Raj",
""
],
[
"Caire",
"Giuseppe",
""
]
] |
We study the Dynamic Decode and Forward (DDF) protocol for a single half-duplex relay, single-antenna channel with quasi-static fading. The DDF protocol is well-known and has been analyzed in terms of the Diversity-Multiplexing Tradeoff (DMT) in the infinite block length limit. We characterize the finite block length DMT and give new explicit code constructions. The finite block length analysis illuminates a few key aspects that have been neglected in the previous literature: 1) we show that one dominating cause of degradation with respect to the infinite block length regime is the event of decoding error at the relay; 2) we explicitly take into account the fact that the destination does not generally know a priori the relay decision time at which the relay switches from listening to transmit mode. Both the above problems can be tackled by a careful design of the decoding algorithm. In particular, we introduce a decision rejection criterion at the relay based on Forney's decision rule (a variant of the Neyman-Pearson rule), such that the relay triggers transmission only when its decision is reliable. Also, we show that a receiver based on the Generalized Likelihood Ratio Test rule that jointly decodes the relay decision time and the information message achieves the optimal DMT. Our results show that no cyclic redundancy check (CRC) for error detection or additional protocol overhead to communicate the decision time are needed for DDF. Finally, we investigate the use of minimum mean squared error generalized decision feedback equalizer (MMSE-GDFE) lattice decoding at both the relay and the destination, and show that it provides near optimal performance at moderate complexity.
|
2204.04717
|
Kheeran K. Naidu
|
Cezar-Mihail Alexandru, Pavel Dvo\v{r}\'ak, Christian Konrad, Kheeran
K. Naidu
|
Improved Weighted Matching in the Sliding Window Model
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the Maximum-weight Matching (MWM) problem in the streaming
sliding window model of computation. In this model, the input consists of a
sequence of weighted edges on a given vertex set $V$ of size $n$. The objective
is to maintain an approximation of a maximum-weight matching in the graph
spanned by the $L$ most recent edges, for some integer $L$, using as little
space as possible. Prior to our work, the state-of-the-art results were a
$(3.5+\varepsilon)$-approximation algorithm for MWM by Biabani et al.
[ISAAC'21] and a $(3+\varepsilon)$-approximation for (unweighted) Maximum
Matching (MM) by Crouch et al. [ESA'13]. Both algorithms use space
$\tilde{O}(n)$.
We give the following results:
1. We give a $(2+\varepsilon)$-approximation algorithm for MWM with space
$\tilde{O}(\sqrt{nL})$. Under the reasonable assumption that the graphs spanned
by the edges in each sliding window are simple, our algorithm uses space
$\tilde{O}(n \sqrt{n})$.
2. In the $\tilde{O}(n)$ space regime, we give a
$(3+\varepsilon)$-approximation algorithm for MWM, thereby closing the gap
between the best-known approximation ratio for MWM and MM.
Similar to Biabani et al.'s MWM algorithm, both our algorithms execute
multiple instances of the $(2+\varepsilon)$-approximation $\tilde{O}(n)$-space
streaming algorithm for MWM by Paz and Schwartzman [SODA'17] on different
portions of the stream. Our improvements are obtained by selecting these
substreams differently. Furthermore, our $(2+\varepsilon)$-approximation
algorithm runs the Paz-Schwartzman algorithm in reverse direction over some
parts of the stream, and in forward direction over other parts, which allows
for an improved approximation guarantee at the cost of increased space
requirements.
|
[
{
"created": "Sun, 10 Apr 2022 16:26:11 GMT",
"version": "v1"
},
{
"created": "Tue, 10 Jan 2023 15:22:03 GMT",
"version": "v2"
}
] |
2023-01-11
|
[
[
"Alexandru",
"Cezar-Mihail",
""
],
[
"Dvořák",
"Pavel",
""
],
[
"Konrad",
"Christian",
""
],
[
"Naidu",
"Kheeran K.",
""
]
] |
We consider the Maximum-weight Matching (MWM) problem in the streaming sliding window model of computation. In this model, the input consists of a sequence of weighted edges on a given vertex set $V$ of size $n$. The objective is to maintain an approximation of a maximum-weight matching in the graph spanned by the $L$ most recent edges, for some integer $L$, using as little space as possible. Prior to our work, the state-of-the-art results were a $(3.5+\varepsilon)$-approximation algorithm for MWM by Biabani et al. [ISAAC'21] and a $(3+\varepsilon)$-approximation for (unweighted) Maximum Matching (MM) by Crouch et al. [ESA'13]. Both algorithms use space $\tilde{O}(n)$. We give the following results: 1. We give a $(2+\varepsilon)$-approximation algorithm for MWM with space $\tilde{O}(\sqrt{nL})$. Under the reasonable assumption that the graphs spanned by the edges in each sliding window are simple, our algorithm uses space $\tilde{O}(n \sqrt{n})$. 2. In the $\tilde{O}(n)$ space regime, we give a $(3+\varepsilon)$-approximation algorithm for MWM, thereby closing the gap between the best-known approximation ratio for MWM and MM. Similar to Biabani et al.'s MWM algorithm, both our algorithms execute multiple instances of the $(2+\varepsilon)$-approximation $\tilde{O}(n)$-space streaming algorithm for MWM by Paz and Schwartzman [SODA'17] on different portions of the stream. Our improvements are obtained by selecting these substreams differently. Furthermore, our $(2+\varepsilon)$-approximation algorithm runs the Paz-Schwartzman algorithm in reverse direction over some parts of the stream, and in forward direction over other parts, which allows for an improved approximation guarantee at the cost of increased space requirements.
|
2405.02047
|
Martin Kumm
|
Andreas B\"ottcher, Martin Kumm
|
Small Logic-based Multipliers with Incomplete Sub-Multipliers for FPGAs
|
Preprint, to appear at ARITH 2024 (http://arith24.arithsymposium.org)
and IEEEXplore
| null | null | null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There is a recent trend in artificial intelligence (AI) inference towards
lower precision data formats down to 8 bits and less. As multiplication is the
most complex operation in typical inference tasks, there is a large demand for
efficient small multipliers. The large DSP blocks have limitations implementing
many small multipliers efficiently. Hence, this work proposes a solution for
better logic-based multipliers that is especially beneficial for small
multipliers. Our work is based on the multiplier tiling method in which a
multiplier is designed out of several sub-multiplier tiles. The key observation
we made is that these sub-multipliers do not necessarily have to perform a
complete (rectangular) NxK multiplication and more efficient sub-multipliers
are possible that are incomplete (non-rectangular). This proposal first seeks
to identify efficient incomplete irregular sub-multipliers and then
demonstrates improvements over state-of-the-art designs. It is shown that
optimal solutions can be found using integer linear programming (ILP), which
are evaluated in FPGA synthesis experiments.
|
[
{
"created": "Fri, 3 May 2024 12:29:07 GMT",
"version": "v1"
}
] |
2024-05-06
|
[
[
"Böttcher",
"Andreas",
""
],
[
"Kumm",
"Martin",
""
]
] |
There is a recent trend in artificial intelligence (AI) inference towards lower precision data formats down to 8 bits and less. As multiplication is the most complex operation in typical inference tasks, there is a large demand for efficient small multipliers. The large DSP blocks have limitations implementing many small multipliers efficiently. Hence, this work proposes a solution for better logic-based multipliers that is especially beneficial for small multipliers. Our work is based on the multiplier tiling method in which a multiplier is designed out of several sub-multiplier tiles. The key observation we made is that these sub-multipliers do not necessarily have to perform a complete (rectangular) NxK multiplication and more efficient sub-multipliers are possible that are incomplete (non-rectangular). This proposal first seeks to identify efficient incomplete irregular sub-multipliers and then demonstrates improvements over state-of-the-art designs. It is shown that optimal solutions can be found using integer linear programming (ILP), which are evaluated in FPGA synthesis experiments.
|
1905.10464
|
Mamoru Komachi
|
Tosho Hirasawa and Mamoru Komachi
|
Debiasing Word Embeddings Improves Multimodal Machine Translation
|
11 pages; MT Summit 2019 (camera ready)
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
In recent years, pretrained word embeddings have proved useful for multimodal
neural machine translation (NMT) models to address the shortage of available
datasets. However, the integration of pretrained word embeddings has not yet
been explored extensively. Further, pretrained word embeddings in high
dimensional spaces have been reported to suffer from the hubness problem.
Although some debiasing techniques have been proposed to address this problem
for other natural language processing tasks, they have seldom been studied for
multimodal NMT models. In this study, we examine various kinds of word
embeddings and introduce two debiasing techniques for three multimodal NMT
models and two language pairs -- English-German translation and English-French
translation. With our optimal settings, the overall performance of multimodal
models was improved by up to +1.93 BLEU and +2.02 METEOR for English-German
translation and +1.73 BLEU and +0.95 METEOR for English-French translation.
|
[
{
"created": "Fri, 24 May 2019 22:11:57 GMT",
"version": "v1"
},
{
"created": "Tue, 28 May 2019 22:46:58 GMT",
"version": "v2"
},
{
"created": "Sat, 22 Jun 2019 07:50:03 GMT",
"version": "v3"
}
] |
2019-06-25
|
[
[
"Hirasawa",
"Tosho",
""
],
[
"Komachi",
"Mamoru",
""
]
] |
In recent years, pretrained word embeddings have proved useful for multimodal neural machine translation (NMT) models to address the shortage of available datasets. However, the integration of pretrained word embeddings has not yet been explored extensively. Further, pretrained word embeddings in high dimensional spaces have been reported to suffer from the hubness problem. Although some debiasing techniques have been proposed to address this problem for other natural language processing tasks, they have seldom been studied for multimodal NMT models. In this study, we examine various kinds of word embeddings and introduce two debiasing techniques for three multimodal NMT models and two language pairs -- English-German translation and English-French translation. With our optimal settings, the overall performance of multimodal models was improved by up to +1.93 BLEU and +2.02 METEOR for English-German translation and +1.73 BLEU and +0.95 METEOR for English-French translation.
|
2311.14762
|
Benjamin Kiefer
|
Benjamin Kiefer, Lojze \v{Z}ust, Matej Kristan, Janez Per\v{s}, Matija
Ter\v{s}ek, Arnold Wiliem, Martin Messmer, Cheng-Yen Yang, Hsiang-Wei Huang,
Zhongyu Jiang, Heng-Cheng Kuo, Jie Mei, Jenq-Neng Hwang, Daniel Stadler, Lars
Sommer, Kaer Huang, Aiguo Zheng, Weitu Chong, Kanokphan Lertniphonphan, Jun
Xie, Feng Chen, Jian Li, Zhepeng Wang, Luca Zedda, Andrea Loddo, Cecilia Di
Ruberto, Tuan-Anh Vu, Hai Nguyen-Truong, Tan-Sang Ha, Quan-Dung Pham, Sai-Kit
Yeung, Yuan Feng, Nguyen Thanh Thien, Lixin Tian, Sheng-Yao Kuan, Yuan-Hao
Ho, Angel Bueno Rodriguez, Borja Carrillo-Perez, Alexander Klein, Antje Alex,
Yannik Steiniger, Felix Sattler, Edgardo Solano-Carrillo, Matej Fabijani\'c,
Magdalena \v{S}umunec, Nadir Kapetanovi\'c, Andreas Michel, Wolfgang Gross,
Martin Weinmann
|
The 2nd Workshop on Maritime Computer Vision (MaCVi) 2024
|
Part of 2nd Workshop on Maritime Computer Vision (MaCVi) 2024 IEEE
Xplore submission as part of WACV 2024
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The 2nd Workshop on Maritime Computer Vision (MaCVi) 2024 addresses maritime
computer vision for Unmanned Aerial Vehicles (UAV) and Unmanned Surface
Vehicles (USV). Three challenge categories are considered: (i) UAV-based
Maritime Object Tracking with Re-identification, (ii) USV-based Maritime
Obstacle Segmentation and Detection, (iii) USV-based Maritime Boat Tracking.
The USV-based Maritime Obstacle Segmentation and Detection features three
sub-challenges, including a new embedded challenge addressing efficient
inference on real-world embedded devices. This report offers a comprehensive
overview of the findings from the challenges. We provide both statistical and
qualitative analyses, evaluating trends from over 195 submissions. All
datasets, evaluation code, and the leaderboard are available to the public at
https://macvi.org/workshop/macvi24.
|
[
{
"created": "Thu, 23 Nov 2023 21:01:14 GMT",
"version": "v1"
}
] |
2023-11-28
|
[
[
"Kiefer",
"Benjamin",
""
],
[
"Žust",
"Lojze",
""
],
[
"Kristan",
"Matej",
""
],
[
"Perš",
"Janez",
""
],
[
"Teršek",
"Matija",
""
],
[
"Wiliem",
"Arnold",
""
],
[
"Messmer",
"Martin",
""
],
[
"Yang",
"Cheng-Yen",
""
],
[
"Huang",
"Hsiang-Wei",
""
],
[
"Jiang",
"Zhongyu",
""
],
[
"Kuo",
"Heng-Cheng",
""
],
[
"Mei",
"Jie",
""
],
[
"Hwang",
"Jenq-Neng",
""
],
[
"Stadler",
"Daniel",
""
],
[
"Sommer",
"Lars",
""
],
[
"Huang",
"Kaer",
""
],
[
"Zheng",
"Aiguo",
""
],
[
"Chong",
"Weitu",
""
],
[
"Lertniphonphan",
"Kanokphan",
""
],
[
"Xie",
"Jun",
""
],
[
"Chen",
"Feng",
""
],
[
"Li",
"Jian",
""
],
[
"Wang",
"Zhepeng",
""
],
[
"Zedda",
"Luca",
""
],
[
"Loddo",
"Andrea",
""
],
[
"Di Ruberto",
"Cecilia",
""
],
[
"Vu",
"Tuan-Anh",
""
],
[
"Nguyen-Truong",
"Hai",
""
],
[
"Ha",
"Tan-Sang",
""
],
[
"Pham",
"Quan-Dung",
""
],
[
"Yeung",
"Sai-Kit",
""
],
[
"Feng",
"Yuan",
""
],
[
"Thien",
"Nguyen Thanh",
""
],
[
"Tian",
"Lixin",
""
],
[
"Kuan",
"Sheng-Yao",
""
],
[
"Ho",
"Yuan-Hao",
""
],
[
"Rodriguez",
"Angel Bueno",
""
],
[
"Carrillo-Perez",
"Borja",
""
],
[
"Klein",
"Alexander",
""
],
[
"Alex",
"Antje",
""
],
[
"Steiniger",
"Yannik",
""
],
[
"Sattler",
"Felix",
""
],
[
"Solano-Carrillo",
"Edgardo",
""
],
[
"Fabijanić",
"Matej",
""
],
[
"Šumunec",
"Magdalena",
""
],
[
"Kapetanović",
"Nadir",
""
],
[
"Michel",
"Andreas",
""
],
[
"Gross",
"Wolfgang",
""
],
[
"Weinmann",
"Martin",
""
]
] |
The 2nd Workshop on Maritime Computer Vision (MaCVi) 2024 addresses maritime computer vision for Unmanned Aerial Vehicles (UAV) and Unmanned Surface Vehicles (USV). Three challenge categories are considered: (i) UAV-based Maritime Object Tracking with Re-identification, (ii) USV-based Maritime Obstacle Segmentation and Detection, (iii) USV-based Maritime Boat Tracking. The USV-based Maritime Obstacle Segmentation and Detection features three sub-challenges, including a new embedded challenge addressing efficient inference on real-world embedded devices. This report offers a comprehensive overview of the findings from the challenges. We provide both statistical and qualitative analyses, evaluating trends from over 195 submissions. All datasets, evaluation code, and the leaderboard are available to the public at https://macvi.org/workshop/macvi24.
|
2107.14388
|
Yongxiang Gu
|
Yongxiang Gu, Qianlei Wang, Xiaolin Qin
|
Real-time Streaming Perception System for Autonomous Driving
|
6 pages,6 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Nowadays, plenty of deep learning technologies are being applied to all
aspects of autonomous driving with promising results. Among them, object
detection is the key to improving the ability of an autonomous agent to perceive
its environment so that it can (re)act. However, previous vision-based object
detectors cannot achieve satisfactory performance under real-time driving
scenarios. To remedy this, we present a real-time streaming perception system
in this paper, which is also the 2nd Place solution of the Streaming Perception
Challenge (Workshop on Autonomous Driving at CVPR 2021) for the detection-only
track. Unlike traditional object detection challenges, which focus mainly on
absolute performance, the streaming perception task requires achieving a
balance between accuracy and latency, which is crucial for real-time autonomous
driving. We adopt YOLOv5 as our basic framework; data augmentation,
Bag-of-Freebies, and a Transformer are used to improve streaming object
detection performance with negligible extra inference cost.
test set, our method achieves 33.2 streaming AP (34.6 streaming AP verified by
the organizer) under the required hardware. Its performance significantly
surpasses the fixed baseline of 13.6 (host team), demonstrating its
potential for application.
|
[
{
"created": "Fri, 30 Jul 2021 01:32:44 GMT",
"version": "v1"
}
] |
2021-08-02
|
[
[
"Gu",
"Yongxiang",
""
],
[
"Wang",
"Qianlei",
""
],
[
"Qin",
"Xiaolin",
""
]
] |
Nowadays, plenty of deep learning technologies are being applied to all aspects of autonomous driving with promising results. Among them, object detection is the key to improving the ability of an autonomous agent to perceive its environment so that it can (re)act. However, previous vision-based object detectors cannot achieve satisfactory performance under real-time driving scenarios. To remedy this, we present a real-time streaming perception system in this paper, which is also the 2nd Place solution of the Streaming Perception Challenge (Workshop on Autonomous Driving at CVPR 2021) for the detection-only track. Unlike traditional object detection challenges, which focus mainly on absolute performance, the streaming perception task requires achieving a balance between accuracy and latency, which is crucial for real-time autonomous driving. We adopt YOLOv5 as our basic framework; data augmentation, Bag-of-Freebies, and a Transformer are used to improve streaming object detection performance with negligible extra inference cost. On the Argoverse-HD test set, our method achieves 33.2 streaming AP (34.6 streaming AP verified by the organizer) under the required hardware. Its performance significantly surpasses the fixed baseline of 13.6 (host team), demonstrating its potential for application.
|
1701.02009
|
Alexander Zhdanov
|
Alexander Zhdanov
|
IRA codes derived from Gruenbaum graph
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we consider coding of short data frames (192 bits) by IRA
codes. A new interleaver for the IRA codes based on a Gruenbaum graph is
proposed. The proposed algorithm differs from known methods as follows:
the permutation is performed using a much smaller interleaver
which is derived from the Gruenbaum graph by finding a Hamiltonian path in
this graph, enumerating the visited vertices in ascending order and
passing them again in the permuted order through the edges which are not
included in the Hamiltonian path. For the IRA code, the obtained interleaver
provides a 0.7-0.8 dB gain over a convolutional code decoded by the Viterbi
algorithm.
|
[
{
"created": "Sun, 8 Jan 2017 19:59:35 GMT",
"version": "v1"
}
] |
2017-01-10
|
[
[
"Zhdanov",
"Alexander",
""
]
] |
In this paper, we consider coding of short data frames (192 bits) by IRA codes. A new interleaver for the IRA codes based on a Gruenbaum graph is proposed. The proposed algorithm differs from known methods as follows: the permutation is performed using a much smaller interleaver which is derived from the Gruenbaum graph by finding a Hamiltonian path in this graph, enumerating the visited vertices in ascending order and passing them again in the permuted order through the edges which are not included in the Hamiltonian path. For the IRA code, the obtained interleaver provides a 0.7-0.8 dB gain over a convolutional code decoded by the Viterbi algorithm.
|
2312.06875
|
Ryan Beckett
|
Siva Kesava Reddy Kakarla, Ryan Beckett
|
Oracle-based Protocol Testing with Eywa
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present oracle-based testing, a new technique for automatic black-box
testing of network protocol implementations. Oracle-based testing leverages
recent advances in LLMs to build rich models of intended protocol behavior from
knowledge embedded in RFCs, blogs, forums, and other natural language sources.
From these models it systematically derives exhaustive test cases using
symbolic program execution. We realize oracle-based testing through Eywa, a
novel protocol testing framework implemented in Python. To demonstrate Eywa's
effectiveness, we show its use through an extensive case study of the DNS
protocol. Despite requiring minimal effort, applying Eywa to DNS resulted
in the discovery of 26 unique bugs across ten widely used DNS implementations,
including 11 new bugs that were previously undiscovered despite elaborate prior
testing with manually crafted models.
|
[
{
"created": "Mon, 11 Dec 2023 22:51:15 GMT",
"version": "v1"
}
] |
2023-12-13
|
[
[
"Kakarla",
"Siva Kesava Reddy",
""
],
[
"Beckett",
"Ryan",
""
]
] |
We present oracle-based testing, a new technique for automatic black-box testing of network protocol implementations. Oracle-based testing leverages recent advances in LLMs to build rich models of intended protocol behavior from knowledge embedded in RFCs, blogs, forums, and other natural language sources. From these models it systematically derives exhaustive test cases using symbolic program execution. We realize oracle-based testing through Eywa, a novel protocol testing framework implemented in Python. To demonstrate Eywa's effectiveness, we show its use through an extensive case study of the DNS protocol. Despite requiring minimal effort, applying Eywa to DNS resulted in the discovery of 26 unique bugs across ten widely used DNS implementations, including 11 new bugs that were previously undiscovered despite elaborate prior testing with manually crafted models.
|
cs/0410004
|
Andras Lorincz
|
I. Szita and A. Lorincz
|
Applying Policy Iteration for Training Recurrent Neural Networks
|
Supplementary material. 17 pages, 1 figure
| null | null | null |
cs.AI cs.LG cs.NE
| null |
Recurrent neural networks are often used for learning time-series data. Based
on a few assumptions we model this learning task as a minimization problem of a
nonlinear least-squares cost function. The special structure of the cost
function allows us to build a connection to reinforcement learning. We exploit
this connection and derive a convergent, policy iteration-based algorithm.
Furthermore, we argue that RNN training can be fit naturally into the
reinforcement learning framework.
|
[
{
"created": "Sat, 2 Oct 2004 07:19:49 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Szita",
"I.",
""
],
[
"Lorincz",
"A.",
""
]
] |
Recurrent neural networks are often used for learning time-series data. Based on a few assumptions we model this learning task as a minimization problem of a nonlinear least-squares cost function. The special structure of the cost function allows us to build a connection to reinforcement learning. We exploit this connection and derive a convergent, policy iteration-based algorithm. Furthermore, we argue that RNN training can be fit naturally into the reinforcement learning framework.
|
2405.16487
|
Tyler Han
|
Tyler Han, Sidharth Talia, Rohan Panicker, Preet Shah, Neel Jawale,
Byron Boots
|
Dynamics Models in the Aggressive Off-Road Driving Regime
|
Accepted to ICRA 2024 Workshop on Resilient Off-road Autonomy
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Current developments in autonomous off-road driving are steadily increasing
performance through higher speeds and more challenging, unstructured
environments. However, this operating regime subjects the vehicle to larger
inertial effects, where consideration of higher-order states is necessary to
avoid failures such as rollovers or excessive impact forces. Aggressive driving
through Model Predictive Control (MPC) in these conditions requires dynamics
models that accurately predict safety-critical information. This work aims to
empirically quantify this aggressive operating regime and its effects on the
performance of current models. We evaluate three dynamics models of varying
complexity on two distinct off-road driving datasets: one simulated and the
other real-world. By conditioning trajectory data on higher-order states, we
show that model accuracy degrades with aggressiveness and simpler models
degrade faster. These models are also validated across datasets, where
accuracies over safety-critical states are reported and provide benchmarks for
future work.
|
[
{
"created": "Sun, 26 May 2024 08:52:16 GMT",
"version": "v1"
}
] |
2024-05-28
|
[
[
"Han",
"Tyler",
""
],
[
"Talia",
"Sidharth",
""
],
[
"Panicker",
"Rohan",
""
],
[
"Shah",
"Preet",
""
],
[
"Jawale",
"Neel",
""
],
[
"Boots",
"Byron",
""
]
] |
Current developments in autonomous off-road driving are steadily increasing performance through higher speeds and more challenging, unstructured environments. However, this operating regime subjects the vehicle to larger inertial effects, where consideration of higher-order states is necessary to avoid failures such as rollovers or excessive impact forces. Aggressive driving through Model Predictive Control (MPC) in these conditions requires dynamics models that accurately predict safety-critical information. This work aims to empirically quantify this aggressive operating regime and its effects on the performance of current models. We evaluate three dynamics models of varying complexity on two distinct off-road driving datasets: one simulated and the other real-world. By conditioning trajectory data on higher-order states, we show that model accuracy degrades with aggressiveness and simpler models degrade faster. These models are also validated across datasets, where accuracies over safety-critical states are reported and provide benchmarks for future work.
|
2008.04109
|
Abdul Mueed Hafiz Dr.
|
Abdul Mueed Hafiz and Ghulam Mohiuddin Bhat
|
Deep Q-Network Based Multi-agent Reinforcement Learning with Binary
Action Agents
| null | null | null | null |
cs.LG cs.AI cs.MA cs.SY eess.SY stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep Q-Network (DQN) based multi-agent systems (MAS) for reinforcement
learning (RL) use various schemes wherein the agents have to learn and
communicate. The learning is, however, specific to each agent, and communication
may be designed to suit the agents. As more complex Deep Q-Networks
come to the fore, the overall complexity of the multi-agent system increases
leading to issues like difficulty in training, need for higher resources and
more training time, difficulty in fine-tuning, etc. To address these issues we
propose a simple but efficient DQN based MAS for RL which uses shared state and
rewards, but agent-specific actions, for updating the experience replay pool
of the DQNs, where each agent is a DQN. The benefits of the approach are
overall simplicity, faster convergence and better performance as compared to
conventional DQN based approaches. It should be noted that the method can be
extended to any DQN. As such, we use simple DQN and DDQN (Double Q-learning)
respectively on three separate tasks, i.e. Cartpole-v1 (OpenAI Gym environment),
LunarLander-v2 (OpenAI Gym environment) and Maze Traversal (customized
environment). The proposed approach outperforms the baseline on these tasks by
decent margins.
|
[
{
"created": "Thu, 6 Aug 2020 15:16:05 GMT",
"version": "v1"
}
] |
2020-08-11
|
[
[
"Hafiz",
"Abdul Mueed",
""
],
[
"Bhat",
"Ghulam Mohiuddin",
""
]
] |
Deep Q-Network (DQN) based multi-agent systems (MAS) for reinforcement learning (RL) use various schemes wherein the agents have to learn and communicate. The learning is, however, specific to each agent, and communication may be satisfactorily designed for the agents. As more complex Deep Q-Networks come to the fore, the overall complexity of the multi-agent system increases, leading to issues like difficulty in training, the need for more resources and training time, difficulty in fine-tuning, etc. To address these issues we propose a simple but efficient DQN-based MAS for RL which uses shared state and rewards, but agent-specific actions, to update the experience replay pool of the DQNs, where each agent is a DQN. The benefits of the approach are overall simplicity, faster convergence and better performance compared to conventional DQN-based approaches. The method can be extended to any DQN. We use simple DQN and DDQN (Double Q-learning) on three separate tasks: Cartpole-v1 (OpenAI Gym environment), LunarLander-v2 (OpenAI Gym environment) and Maze Traversal (customized environment). The proposed approach outperforms the baseline on these tasks by decent margins.
|
2202.13057
|
Vsevolod Nikulin
|
Vsevolod Nikulin and Jun Tani
|
Initialization of Latent Space Coordinates via Random Linear Projections
for Learning Robotic Sensory-Motor Sequences
|
18 pages, 9 figures
| null | null | null |
cs.LG cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Robot kinematics data, despite being a high-dimensional process, is highly
correlated, especially when considering motions grouped into certain
primitives. These almost linear correlations within primitives allow us to
interpret the motions as points drawn close to a union of low-dimensional
linear subspaces in the space of all motions. Motivated by results of
embedding theory, in particular generalizations of the Whitney embedding
theorem, we show that a random linear projection of motor sequences into a
low-dimensional space loses very little information about the structure of
kinematics data. The projected points are a very good initial guess for the
values of latent variables in a generative model for robot sensory-motor
behaviour primitives. We conducted a series of experiments in which we trained
a recurrent neural network to generate sensory-motor sequences for a robotic
manipulator with 9 degrees of freedom. Experimental results demonstrate a
substantial improvement in generalisation abilities for unobserved samples
when latent variables are initialized with a random linear projection of motor
data rather than with zero or random values. Moreover, the latent space is
well structured, wherein samples belonging to different primitives are well
separated from the onset of the training process.
|
[
{
"created": "Sat, 26 Feb 2022 04:32:16 GMT",
"version": "v1"
}
] |
2022-03-01
|
[
[
"Nikulin",
"Vsevolod",
""
],
[
"Tani",
"Jun",
""
]
] |
Robot kinematics data, despite being a high-dimensional process, is highly correlated, especially when considering motions grouped into certain primitives. These almost linear correlations within primitives allow us to interpret the motions as points drawn close to a union of low-dimensional linear subspaces in the space of all motions. Motivated by results of embedding theory, in particular generalizations of the Whitney embedding theorem, we show that a random linear projection of motor sequences into a low-dimensional space loses very little information about the structure of kinematics data. The projected points are a very good initial guess for the values of latent variables in a generative model for robot sensory-motor behaviour primitives. We conducted a series of experiments in which we trained a recurrent neural network to generate sensory-motor sequences for a robotic manipulator with 9 degrees of freedom. Experimental results demonstrate a substantial improvement in generalisation abilities for unobserved samples when latent variables are initialized with a random linear projection of motor data rather than with zero or random values. Moreover, the latent space is well structured, wherein samples belonging to different primitives are well separated from the onset of the training process.
|
2405.15092
|
Evelyn Yee
|
Evelyn Yee and Alice Li and Chenyu Tang and Yeon Ho Jung and Ramamohan
Paturi and Leon Bergen
|
Dissociation of Faithful and Unfaithful Reasoning in LLMs
|
code published at
https://github.com/CoTErrorRecovery/CoTErrorRecovery
| null | null | null |
cs.AI cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Large language models (LLMs) improve their performance in downstream tasks
when they generate Chain of Thought reasoning text before producing an answer.
Our research investigates how LLMs recover from errors in Chain of Thought,
reaching the correct final answer despite mistakes in the reasoning text.
Through analysis of these error recovery behaviors, we find evidence for
unfaithfulness in Chain of Thought, but we also identify many clear examples of
faithful error recovery behaviors. We identify factors that shift LLM recovery
behavior: LLMs recover more frequently from obvious errors and in contexts that
provide more evidence for the correct answer. However, unfaithful recoveries
show the opposite behavior, occurring more frequently for more difficult error
positions. Our results indicate that there are distinct mechanisms driving
faithful and unfaithful error recoveries. Our results challenge the view that
LLM reasoning is a uniform, coherent process.
|
[
{
"created": "Thu, 23 May 2024 22:38:58 GMT",
"version": "v1"
}
] |
2024-05-27
|
[
[
"Yee",
"Evelyn",
""
],
[
"Li",
"Alice",
""
],
[
"Tang",
"Chenyu",
""
],
[
"Jung",
"Yeon Ho",
""
],
[
"Paturi",
"Ramamohan",
""
],
[
"Bergen",
"Leon",
""
]
] |
Large language models (LLMs) improve their performance in downstream tasks when they generate Chain of Thought reasoning text before producing an answer. Our research investigates how LLMs recover from errors in Chain of Thought, reaching the correct final answer despite mistakes in the reasoning text. Through analysis of these error recovery behaviors, we find evidence for unfaithfulness in Chain of Thought, but we also identify many clear examples of faithful error recovery behaviors. We identify factors that shift LLM recovery behavior: LLMs recover more frequently from obvious errors and in contexts that provide more evidence for the correct answer. However, unfaithful recoveries show the opposite behavior, occurring more frequently for more difficult error positions. Our results indicate that there are distinct mechanisms driving faithful and unfaithful error recoveries. Our results challenge the view that LLM reasoning is a uniform, coherent process.
|
1706.05893
|
Simon Wacker
|
Simon Wacker
|
Signal Machine And Cellular Automaton Time-Optimal Quasi-Solutions Of
The Firing Squad/Mob Synchronisation Problem On Connected Graphs
| null | null | null | null |
cs.FL cs.DC cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We construct a time-optimal quasi-solution of the firing mob synchronisation
problem over finite, connected, and undirected multigraphs whose maximum
degrees are uniformly bounded by a constant. It is only a quasi-solution
because its number of states depends on the graph or, from another perspective,
does not depend on the graph but is countably infinite. To construct this
quasi-solution we introduce signal machines over continuum representations of
such multigraphs and construct a signal machine whose discretisation is a
cellular automaton that quasi-solves the problem. This automaton uses a
time-optimal solution of the firing squad synchronisation problem in dimension
one with one general at one end to synchronise edges, and freezes and thaws the
synchronisation of edges in such a way that all edges synchronise at the same
time.
|
[
{
"created": "Mon, 19 Jun 2017 11:47:45 GMT",
"version": "v1"
}
] |
2017-06-20
|
[
[
"Wacker",
"Simon",
""
]
] |
We construct a time-optimal quasi-solution of the firing mob synchronisation problem over finite, connected, and undirected multigraphs whose maximum degrees are uniformly bounded by a constant. It is only a quasi-solution because its number of states depends on the graph or, from another perspective, does not depend on the graph but is countably infinite. To construct this quasi-solution we introduce signal machines over continuum representations of such multigraphs and construct a signal machine whose discretisation is a cellular automaton that quasi-solves the problem. This automaton uses a time-optimal solution of the firing squad synchronisation problem in dimension one with one general at one end to synchronise edges, and freezes and thaws the synchronisation of edges in such a way that all edges synchronise at the same time.
|
2403.00553
|
Chantal Shaib
|
Chantal Shaib, Joe Barrow, Jiuding Sun, Alexa F. Siu, Byron C.
Wallace, Ani Nenkova
|
Standardizing the Measurement of Text Diversity: A Tool and a
Comparative Analysis of Scores
|
Preprint
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The diversity across outputs generated by large language models shapes the
perception of their quality and utility. Prompt leaks, templated answer
structure, and canned responses across different interactions are readily
noticed by people, but there is no standard score to measure this aspect of
model behavior. In this work we empirically investigate diversity scores on
English texts. We find that computationally efficient compression algorithms
capture information similar to what is measured by slow-to-compute $n$-gram
overlap homogeneity scores. Further, a combination of measures -- compression
ratios, self-repetition of long $n$-grams and Self-BLEU and BERTScore -- are
sufficient to report, as they have low mutual correlation with each other. The
applicability of scores extends beyond analysis of generative models; for
example, we highlight applications on instruction-tuning datasets and
human-produced texts. We release a diversity score package to facilitate
research and invite consistency across reports.
|
[
{
"created": "Fri, 1 Mar 2024 14:23:12 GMT",
"version": "v1"
}
] |
2024-03-04
|
[
[
"Shaib",
"Chantal",
""
],
[
"Barrow",
"Joe",
""
],
[
"Sun",
"Jiuding",
""
],
[
"Siu",
"Alexa F.",
""
],
[
"Wallace",
"Byron C.",
""
],
[
"Nenkova",
"Ani",
""
]
] |
The diversity across outputs generated by large language models shapes the perception of their quality and utility. Prompt leaks, templated answer structure, and canned responses across different interactions are readily noticed by people, but there is no standard score to measure this aspect of model behavior. In this work we empirically investigate diversity scores on English texts. We find that computationally efficient compression algorithms capture information similar to what is measured by slow-to-compute $n$-gram overlap homogeneity scores. Further, a combination of measures -- compression ratios, self-repetition of long $n$-grams and Self-BLEU and BERTScore -- are sufficient to report, as they have low mutual correlation with each other. The applicability of scores extends beyond analysis of generative models; for example, we highlight applications on instruction-tuning datasets and human-produced texts. We release a diversity score package to facilitate research and invite consistency across reports.
|
2401.02861
|
Marta Gomez-Barrero
|
Marta Gomez-Barrero, Javier Galbally
|
Reversing the Irreversible: A Survey on Inverse Biometrics
|
18 pages, journal, survey
|
Elsevier Computers & Security, Volume 90, March 2020, 101700
|
10.1016/j.cose.2019.101700
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
With the widespread use of biometric recognition, several issues related to
the privacy and security provided by this technology have been recently raised
and analysed. As a result, the early common belief among the biometrics
community of templates irreversibility has been proven wrong. It is now an
accepted fact that it is possible to reconstruct from an unprotected template a
synthetic sample that matches the bona fide one. This reverse engineering
process, commonly referred to as \textit{inverse biometrics}, constitutes a
severe threat for biometric systems from two different angles: on the one hand,
sensitive personal data (i.e., biometric data) can be derived from compromised
unprotected templates; on the other hand, other powerful attacks can be
launched building upon these reconstructed samples. Given its important
implications, biometric stakeholders have produced over the last fifteen years
numerous works analysing the different aspects related to inverse biometrics:
development of reconstruction algorithms for different characteristics;
proposal of methodologies to assess the vulnerabilities of biometric systems to
the aforementioned algorithms; development of countermeasures to reduce the
possible effects of attacks. The present article is an effort to condense all
this information in one comprehensive review of: the problem itself, the
evaluation of the problem, and the mitigation of the problem.
|
[
{
"created": "Fri, 5 Jan 2024 15:32:40 GMT",
"version": "v1"
}
] |
2024-01-08
|
[
[
"Gomez-Barrero",
"Marta",
""
],
[
"Galbally",
"Javier",
""
]
] |
With the widespread use of biometric recognition, several issues related to the privacy and security provided by this technology have been recently raised and analysed. As a result, the early common belief among the biometrics community of templates irreversibility has been proven wrong. It is now an accepted fact that it is possible to reconstruct from an unprotected template a synthetic sample that matches the bona fide one. This reverse engineering process, commonly referred to as \textit{inverse biometrics}, constitutes a severe threat for biometric systems from two different angles: on the one hand, sensitive personal data (i.e., biometric data) can be derived from compromised unprotected templates; on the other hand, other powerful attacks can be launched building upon these reconstructed samples. Given its important implications, biometric stakeholders have produced over the last fifteen years numerous works analysing the different aspects related to inverse biometrics: development of reconstruction algorithms for different characteristics; proposal of methodologies to assess the vulnerabilities of biometric systems to the aforementioned algorithms; development of countermeasures to reduce the possible effects of attacks. The present article is an effort to condense all this information in one comprehensive review of: the problem itself, the evaluation of the problem, and the mitigation of the problem.
|
2105.01031
|
Catherine Stinson
|
Catherine Stinson
|
Algorithms are not neutral: Bias in collaborative filtering
| null | null | null | null |
cs.CY cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Discussions of algorithmic bias tend to focus on examples where either the
data or the people building the algorithms are biased. This gives the
impression that clean data and good intentions could eliminate bias. The
neutrality of the algorithms themselves is defended by prominent Artificial
Intelligence researchers. However, algorithms are not neutral. In addition to
biased data and biased algorithm makers, AI algorithms themselves can be
biased. This is illustrated with the example of collaborative filtering, which
is known to suffer from popularity and homogenizing biases. Iterative
information filtering algorithms in general create a selection bias in the
course of learning from user responses to documents that the algorithm
recommended. These are not merely biases in the statistical sense; these
statistical biases can cause discriminatory outcomes. Data points on the
margins of distributions of human data tend to correspond to marginalized
people. Popularity and homogenizing biases have the effect of further
marginalizing the already marginal. This source of bias warrants serious
attention given the ubiquity of algorithmic decision-making.
|
[
{
"created": "Mon, 3 May 2021 17:28:43 GMT",
"version": "v1"
}
] |
2021-05-04
|
[
[
"Stinson",
"Catherine",
""
]
] |
Discussions of algorithmic bias tend to focus on examples where either the data or the people building the algorithms are biased. This gives the impression that clean data and good intentions could eliminate bias. The neutrality of the algorithms themselves is defended by prominent Artificial Intelligence researchers. However, algorithms are not neutral. In addition to biased data and biased algorithm makers, AI algorithms themselves can be biased. This is illustrated with the example of collaborative filtering, which is known to suffer from popularity and homogenizing biases. Iterative information filtering algorithms in general create a selection bias in the course of learning from user responses to documents that the algorithm recommended. These are not merely biases in the statistical sense; these statistical biases can cause discriminatory outcomes. Data points on the margins of distributions of human data tend to correspond to marginalized people. Popularity and homogenizing biases have the effect of further marginalizing the already marginal. This source of bias warrants serious attention given the ubiquity of algorithmic decision-making.
|
2107.10443
|
Eugene Bagdasaryan
|
Eugene Bagdasaryan and Vitaly Shmatikov
|
Spinning Sequence-to-Sequence Models with Meta-Backdoors
|
Outdated. Superseded by arXiv:2112.05224 and published at IEEE S&P'22
with title: "Spinning Language Models: Risks of Propaganda-As-A-Service and
Countermeasures"
| null | null | null |
cs.CR cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We investigate a new threat to neural sequence-to-sequence (seq2seq) models:
training-time attacks that cause models to "spin" their output and support a
certain sentiment when the input contains adversary-chosen trigger words. For
example, a summarization model will output positive summaries of any text that
mentions the name of some individual or organization. We introduce the concept
of a "meta-backdoor" to explain model-spinning attacks. These attacks produce
models whose output is valid and preserves context, yet also satisfies a
meta-task chosen by the adversary (e.g., positive sentiment). Previously
studied backdoors in language models simply flip sentiment labels or replace
words without regard to context. Their outputs are incorrect on inputs with the
trigger. Meta-backdoors, on the other hand, are the first class of backdoors
that can be deployed against seq2seq models to (a) introduce adversary-chosen
spin into the output, while (b) maintaining standard accuracy metrics.
To demonstrate feasibility of model spinning, we develop a new backdooring
technique. It stacks the adversarial meta-task (e.g., sentiment analysis) onto
a seq2seq model, backpropagates the desired meta-task output (e.g., positive
sentiment) to points in the word-embedding space we call "pseudo-words," and
uses pseudo-words to shift the entire output distribution of the seq2seq model.
Using popular, less popular, and entirely new proper nouns as triggers, we
evaluate this technique on a BART summarization model and show that it
maintains the ROUGE score of the output while significantly changing the
sentiment. We explain why model spinning can be a dangerous technique in
AI-powered disinformation and discuss how to mitigate these attacks.
|
[
{
"created": "Thu, 22 Jul 2021 03:41:52 GMT",
"version": "v1"
},
{
"created": "Mon, 10 Oct 2022 23:02:33 GMT",
"version": "v2"
}
] |
2022-10-12
|
[
[
"Bagdasaryan",
"Eugene",
""
],
[
"Shmatikov",
"Vitaly",
""
]
] |
We investigate a new threat to neural sequence-to-sequence (seq2seq) models: training-time attacks that cause models to "spin" their output and support a certain sentiment when the input contains adversary-chosen trigger words. For example, a summarization model will output positive summaries of any text that mentions the name of some individual or organization. We introduce the concept of a "meta-backdoor" to explain model-spinning attacks. These attacks produce models whose output is valid and preserves context, yet also satisfies a meta-task chosen by the adversary (e.g., positive sentiment). Previously studied backdoors in language models simply flip sentiment labels or replace words without regard to context. Their outputs are incorrect on inputs with the trigger. Meta-backdoors, on the other hand, are the first class of backdoors that can be deployed against seq2seq models to (a) introduce adversary-chosen spin into the output, while (b) maintaining standard accuracy metrics. To demonstrate feasibility of model spinning, we develop a new backdooring technique. It stacks the adversarial meta-task (e.g., sentiment analysis) onto a seq2seq model, backpropagates the desired meta-task output (e.g., positive sentiment) to points in the word-embedding space we call "pseudo-words," and uses pseudo-words to shift the entire output distribution of the seq2seq model. Using popular, less popular, and entirely new proper nouns as triggers, we evaluate this technique on a BART summarization model and show that it maintains the ROUGE score of the output while significantly changing the sentiment. We explain why model spinning can be a dangerous technique in AI-powered disinformation and discuss how to mitigate these attacks.
|
2306.04004
|
Vignesh Kothapalli
|
Vignesh Kothapalli
|
Randomized Schur Complement Views for Graph Contrastive Learning
|
ICML 2023
| null | null | null |
cs.LG cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce a randomized topological augmentor based on Schur complements
for Graph Contrastive Learning (GCL). Given a graph laplacian matrix, the
technique generates unbiased approximations of its Schur complements and treats
the corresponding graphs as augmented views. We discuss the benefits of our
approach, provide theoretical justifications and present connections with graph
diffusion. Unlike previous efforts, we study the empirical effectiveness of the
augmentor in a controlled fashion by varying the design choices for subsequent
GCL phases, such as encoding and contrasting. Extensive experiments on node and
graph classification benchmarks demonstrate that our technique consistently
outperforms pre-defined and adaptive augmentation approaches to achieve
state-of-the-art results.
|
[
{
"created": "Tue, 6 Jun 2023 20:35:20 GMT",
"version": "v1"
}
] |
2023-06-08
|
[
[
"Kothapalli",
"Vignesh",
""
]
] |
We introduce a randomized topological augmentor based on Schur complements for Graph Contrastive Learning (GCL). Given a graph laplacian matrix, the technique generates unbiased approximations of its Schur complements and treats the corresponding graphs as augmented views. We discuss the benefits of our approach, provide theoretical justifications and present connections with graph diffusion. Unlike previous efforts, we study the empirical effectiveness of the augmentor in a controlled fashion by varying the design choices for subsequent GCL phases, such as encoding and contrasting. Extensive experiments on node and graph classification benchmarks demonstrate that our technique consistently outperforms pre-defined and adaptive augmentation approaches to achieve state-of-the-art results.
|
2403.02474
|
Krishnapriya Vishnubhotla
|
Krishnapriya Vishnubhotla, Adam Hammond, Graeme Hirst, Saif M.
Mohammad
|
The Emotion Dynamics of Literary Novels
|
8 pages plus appendices
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Stories are rich in the emotions they exhibit in their narratives and evoke
in the readers. The emotional journeys of the various characters within a story
are central to their appeal. Computational analysis of the emotions of novels,
however, has rarely examined the variation in the emotional trajectories of the
different characters within them, instead considering the entire novel to
represent a single story arc. In this work, we use character dialogue to
distinguish between the emotion arcs of the narration and the various
characters. We analyze the emotion arcs of the various characters in a dataset
of English literary novels using the framework of Utterance Emotion Dynamics.
Our findings show that the narration and the dialogue largely express disparate
emotions through the course of a novel, and that the commonalities or
differences in the emotional arcs of stories are more accurately captured by
those associated with individual characters.
|
[
{
"created": "Mon, 4 Mar 2024 20:39:21 GMT",
"version": "v1"
}
] |
2024-03-06
|
[
[
"Vishnubhotla",
"Krishnapriya",
""
],
[
"Hammond",
"Adam",
""
],
[
"Hirst",
"Graeme",
""
],
[
"Mohammad",
"Saif M.",
""
]
] |
Stories are rich in the emotions they exhibit in their narratives and evoke in the readers. The emotional journeys of the various characters within a story are central to their appeal. Computational analysis of the emotions of novels, however, has rarely examined the variation in the emotional trajectories of the different characters within them, instead considering the entire novel to represent a single story arc. In this work, we use character dialogue to distinguish between the emotion arcs of the narration and the various characters. We analyze the emotion arcs of the various characters in a dataset of English literary novels using the framework of Utterance Emotion Dynamics. Our findings show that the narration and the dialogue largely express disparate emotions through the course of a novel, and that the commonalities or differences in the emotional arcs of stories are more accurately captured by those associated with individual characters.
|
1112.0221
|
John Fearnley
|
John Fearnley (University of Liverpool), Sven Schewe (University of
Liverpool)
|
Time and Parallelizability Results for Parity Games with Bounded Tree
and DAG Width
| null |
Logical Methods in Computer Science, Volume 9, Issue 2 (June 18,
2013) lmcs:791
|
10.2168/LMCS-9(2:6)2013
| null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Parity games are a much researched class of games in NP intersect CoNP that
are not known to be in P. Consequently, researchers have considered specialised
algorithms for the case where certain graph parameters are small. In this
paper, we study parity games on graphs with bounded treewidth, and graphs with
bounded DAG width. We show that parity games with bounded DAG width can be
solved in O(n^(k+3) k^(k + 2) (d + 1)^(3k + 2)) time, where n, k, and d are the
size, treewidth, and number of priorities in the parity game. This is an
improvement over the previous best algorithm, given by Berwanger et al., which
runs in n^O(k^2) time. We also show that, if a tree decomposition is provided,
then parity games with bounded treewidth can be solved in O(n k^(k + 5) (d +
1)^(3k + 5)) time. This improves over the previous best algorithm, given by
Obdrzalek, which runs in O(n d^(2(k+1)^2)) time. Our techniques can also be
adapted to show that the problem of solving parity games with bounded treewidth
lies in the complexity class NC^2, which is the class of problems that can be
efficiently parallelized. This is in stark contrast to the general parity game
problem, which is known to be P-hard, and thus unlikely to be contained in NC.
|
[
{
"created": "Thu, 1 Dec 2011 16:05:26 GMT",
"version": "v1"
},
{
"created": "Thu, 9 Feb 2012 15:42:30 GMT",
"version": "v2"
},
{
"created": "Mon, 10 Sep 2012 15:43:53 GMT",
"version": "v3"
},
{
"created": "Tue, 16 Apr 2013 15:52:27 GMT",
"version": "v4"
},
{
"created": "Thu, 13 Jun 2013 12:30:42 GMT",
"version": "v5"
},
{
"created": "Mon, 17 Jun 2013 19:58:44 GMT",
"version": "v6"
}
] |
2015-07-01
|
[
[
"Fearnley",
"John",
"",
"University of Liverpool"
],
[
"Schewe",
"Sven",
"",
"University of\n Liverpool"
]
] |
Parity games are a much researched class of games in NP intersect CoNP that are not known to be in P. Consequently, researchers have considered specialised algorithms for the case where certain graph parameters are small. In this paper, we study parity games on graphs with bounded treewidth, and graphs with bounded DAG width. We show that parity games with bounded DAG width can be solved in O(n^(k+3) k^(k + 2) (d + 1)^(3k + 2)) time, where n, k, and d are the size, treewidth, and number of priorities in the parity game. This is an improvement over the previous best algorithm, given by Berwanger et al., which runs in n^O(k^2) time. We also show that, if a tree decomposition is provided, then parity games with bounded treewidth can be solved in O(n k^(k + 5) (d + 1)^(3k + 5)) time. This improves over the previous best algorithm, given by Obdrzalek, which runs in O(n d^(2(k+1)^2)) time. Our techniques can also be adapted to show that the problem of solving parity games with bounded treewidth lies in the complexity class NC^2, which is the class of problems that can be efficiently parallelized. This is in stark contrast to the general parity game problem, which is known to be P-hard, and thus unlikely to be contained in NC.
|
1912.08954
|
Guanbin Li
|
Jihan Yang, Ruijia Xu, Ruiyu Li, Xiaojuan Qi, Xiaoyong Shen, Guanbin
Li, Liang Lin
|
An Adversarial Perturbation Oriented Domain Adaptation Approach for
Semantic Segmentation
|
To Appear in AAAI2020
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We focus on Unsupervised Domain Adaptation (UDA) for the task of semantic
segmentation. Recently, adversarial alignment has been widely adopted to match
the marginal distribution of feature representations across two domains
globally. However, this strategy fails in adapting the representations of the
tail classes or small objects for semantic segmentation since the alignment
objective is dominated by head categories or large objects. In contrast to
adversarial alignment, we propose to explicitly train a domain-invariant
classifier by generating and defending against pointwise feature space
adversarial perturbations. Specifically, we first perturb the intermediate
feature maps with several attack objectives (i.e., discriminator and
classifier) on each individual position for both domains, and then the
classifier is trained to be invariant to the perturbations. By perturbing each
position individually, our model treats each location evenly regardless of the
category or object size and thus circumvents the aforementioned issue.
Moreover, the domain gap in feature space is reduced by extrapolating source
and target perturbed features towards each other with attack on the domain
discriminator. Our approach achieves the state-of-the-art performance on two
challenging domain adaptation tasks for semantic segmentation: GTA5 ->
Cityscapes and SYNTHIA -> Cityscapes.
|
[
{
"created": "Wed, 18 Dec 2019 23:59:24 GMT",
"version": "v1"
}
] |
2019-12-20
|
[
[
"Yang",
"Jihan",
""
],
[
"Xu",
"Ruijia",
""
],
[
"Li",
"Ruiyu",
""
],
[
"Qi",
"Xiaojuan",
""
],
[
"Shen",
"Xiaoyong",
""
],
[
"Li",
"Guanbin",
""
],
[
"Lin",
"Liang",
""
]
] |
We focus on Unsupervised Domain Adaptation (UDA) for the task of semantic segmentation. Recently, adversarial alignment has been widely adopted to match the marginal distribution of feature representations across two domains globally. However, this strategy fails in adapting the representations of the tail classes or small objects for semantic segmentation since the alignment objective is dominated by head categories or large objects. In contrast to adversarial alignment, we propose to explicitly train a domain-invariant classifier by generating and defending against pointwise feature space adversarial perturbations. Specifically, we first perturb the intermediate feature maps with several attack objectives (i.e., discriminator and classifier) on each individual position for both domains, and then the classifier is trained to be invariant to the perturbations. By perturbing each position individually, our model treats each location evenly regardless of the category or object size and thus circumvents the aforementioned issue. Moreover, the domain gap in feature space is reduced by extrapolating source and target perturbed features towards each other with attack on the domain discriminator. Our approach achieves the state-of-the-art performance on two challenging domain adaptation tasks for semantic segmentation: GTA5 -> Cityscapes and SYNTHIA -> Cityscapes.
|
1609.09773
|
Qingqing Wu
|
Qingqing Wu, Geoffrey Ye Li, Wen Chen, Derrick Wing Kwan Ng, and
Robert Schober
|
An Overview of Sustainable Green 5G Networks
|
Submitted for possible publication
| null | null | null |
cs.IT cs.NI math.IT math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The stringent requirements of a 1,000 times increase in data traffic and one
millisecond round trip latency have made limiting the potentially tremendous
ensuing energy consumption one of the most challenging problems for the design
of the upcoming fifth-generation (5G) networks. To enable sustainable 5G
networks, new technologies have been proposed to improve the system energy
efficiency and alternative energy sources are introduced to reduce our
dependence on traditional fossil fuels. In particular, various 5G techniques
target the reduction of the energy consumption without sacrificing the
quality-of-service. Meanwhile, energy harvesting technologies, which enable
communication transceivers to harvest energy from various renewable resources
and ambient radio frequency signals for communication, have drawn significant
interest from both academia and industry. In this article, we provide an
overview of the latest research on both green 5G techniques and energy
harvesting for communication. In addition, some technical challenges and
potential research topics for realizing sustainable green 5G networks are also
identified.
|
[
{
"created": "Fri, 30 Sep 2016 15:26:03 GMT",
"version": "v1"
},
{
"created": "Wed, 5 Oct 2016 03:40:26 GMT",
"version": "v2"
}
] |
2017-04-12
|
[
[
"Wu",
"Qingqing",
""
],
[
"Li",
"Geoffrey Ye",
""
],
[
"Chen",
"Wen",
""
],
[
"Ng",
"Derrick Wing Kwan",
""
],
[
"Schober",
"Robert",
""
]
] |
The stringent requirements of a 1,000 times increase in data traffic and one millisecond round trip latency have made limiting the potentially tremendous ensuing energy consumption one of the most challenging problems for the design of the upcoming fifth-generation (5G) networks. To enable sustainable 5G networks, new technologies have been proposed to improve the system energy efficiency and alternative energy sources are introduced to reduce our dependence on traditional fossil fuels. In particular, various 5G techniques target the reduction of the energy consumption without sacrificing the quality-of-service. Meanwhile, energy harvesting technologies, which enable communication transceivers to harvest energy from various renewable resources and ambient radio frequency signals for communication, have drawn significant interest from both academia and industry. In this article, we provide an overview of the latest research on both green 5G techniques and energy harvesting for communication. In addition, some technical challenges and potential research topics for realizing sustainable green 5G networks are also identified.
|
2112.07344
|
Adeyemi Damilare Adeoye
|
Adeyemi D. Adeoye, Alberto Bemporad
|
SCORE: Approximating Curvature Information under Self-Concordant
Regularization
|
published in Computational Optimization and Applications 2023
| null |
10.1007/s10589-023-00502-2
| null |
cs.LG math.OC
|
http://creativecommons.org/licenses/by/4.0/
|
Optimization problems that include regularization functions in their
objectives are regularly solved in many applications. When one seeks
second-order methods for such problems, it may be desirable to exploit specific
properties of some of these regularization functions when accounting for
curvature information in the solution steps to speed up convergence. In this
paper, we propose the SCORE (self-concordant regularization) framework for
unconstrained minimization problems which incorporates second-order information
in the Newton-decrement framework for convex optimization. We propose the
generalized Gauss-Newton with Self-Concordant Regularization (GGN-SCORE)
algorithm that updates the minimization variables each time it receives a new
input batch. The proposed algorithm exploits the structure of the second-order
information in the Hessian matrix, thereby reducing computational overhead.
GGN-SCORE demonstrates how to speed up convergence while also improving model
generalization for problems that involve regularized minimization under the
proposed SCORE framework. Numerical experiments show the efficiency of our
method and its fast convergence, which compare favorably against baseline
first-order and quasi-Newton methods. Additional experiments involving
non-convex (overparameterized) neural network training problems show that the
proposed method is promising for non-convex optimization.
|
[
{
"created": "Tue, 14 Dec 2021 13:03:04 GMT",
"version": "v1"
},
{
"created": "Thu, 16 Jun 2022 10:30:59 GMT",
"version": "v2"
},
{
"created": "Mon, 10 Jul 2023 14:13:17 GMT",
"version": "v3"
}
] |
2023-07-11
|
[
[
"Adeoye",
"Adeyemi D.",
""
],
[
"Bemporad",
"Alberto",
""
]
] |
Optimization problems that include regularization functions in their objectives are regularly solved in many applications. When one seeks second-order methods for such problems, it may be desirable to exploit specific properties of some of these regularization functions when accounting for curvature information in the solution steps to speed up convergence. In this paper, we propose the SCORE (self-concordant regularization) framework for unconstrained minimization problems which incorporates second-order information in the Newton-decrement framework for convex optimization. We propose the generalized Gauss-Newton with Self-Concordant Regularization (GGN-SCORE) algorithm that updates the minimization variables each time it receives a new input batch. The proposed algorithm exploits the structure of the second-order information in the Hessian matrix, thereby reducing computational overhead. GGN-SCORE demonstrates how to speed up convergence while also improving model generalization for problems that involve regularized minimization under the proposed SCORE framework. Numerical experiments show the efficiency of our method and its fast convergence, which compare favorably against baseline first-order and quasi-Newton methods. Additional experiments involving non-convex (overparameterized) neural network training problems show that the proposed method is promising for non-convex optimization.
|
1607.08592
|
Erkki Luuk
|
Erkki Luuk
|
Modeling selectional restrictions in a relational type system
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Selectional restrictions are semantic constraints on forming certain complex
types in natural language. The paper gives an overview of modeling selectional
restrictions in a relational type system with morphological and syntactic
types. We discuss some foundations of the system and ways of formalizing
selectional restrictions.
Keywords: type theory, selectional restrictions, syntax, morphology
|
[
{
"created": "Thu, 28 Jul 2016 19:47:25 GMT",
"version": "v1"
}
] |
2016-07-29
|
[
[
"Luuk",
"Erkki",
""
]
] |
Selectional restrictions are semantic constraints on forming certain complex types in natural language. The paper gives an overview of modeling selectional restrictions in a relational type system with morphological and syntactic types. We discuss some foundations of the system and ways of formalizing selectional restrictions. Keywords: type theory, selectional restrictions, syntax, morphology
|
2012.11113
|
Yang Yifei
|
Yifei Yang, Shibing Xiang, Ruixiang Zhang
|
Improving unsupervised anomaly localization by applying multi-scale
memories to autoencoders
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Autoencoders and their variants have been widely applied in anomaly
detection. The previous work, memory-augmented deep autoencoder, proposed
memorizing normality to detect anomalies; however, it neglects the feature
discrepancy between different resolution scales. We therefore introduce
multi-scale memories to record scale-specific features and a multi-scale
attention fuser between the encoding and decoding modules of the autoencoder
for anomaly detection, namely MMAE. MMAE updates slots at the corresponding
resolution scale as prototype features during unsupervised learning. For
anomaly detection, we accomplish anomaly removal by replacing the original
encoded image features at each scale with the most relevant prototype
features, and fuse these features before feeding them to the decoding module
to reconstruct the image. Experimental results on various datasets testify
that our MMAE successfully removes anomalies at different scales and performs
favorably on several datasets compared to similar reconstruction-based
methods.
|
[
{
"created": "Mon, 21 Dec 2020 04:44:40 GMT",
"version": "v1"
}
] |
2020-12-22
|
[
[
"Yang",
"Yifei",
""
],
[
"Xiang",
"Shibing",
""
],
[
"Zhang",
"Ruixiang",
""
]
] |
Autoencoders and their variants have been widely applied in anomaly detection. The previous work, memory-augmented deep autoencoder, proposed memorizing normality to detect anomalies; however, it neglects the feature discrepancy between different resolution scales. We therefore introduce multi-scale memories to record scale-specific features and a multi-scale attention fuser between the encoding and decoding modules of the autoencoder for anomaly detection, namely MMAE. MMAE updates slots at the corresponding resolution scale as prototype features during unsupervised learning. For anomaly detection, we accomplish anomaly removal by replacing the original encoded image features at each scale with the most relevant prototype features, and fuse these features before feeding them to the decoding module to reconstruct the image. Experimental results on various datasets testify that our MMAE successfully removes anomalies at different scales and performs favorably on several datasets compared to similar reconstruction-based methods.
|
2301.02125
|
Alexander Gheorghiu
|
Alexander V. Gheorghiu and David J. Pym
|
Defining Logical Systems via Algebraic Constraints on Proofs
| null |
Journal of Logic and Computation 2023
| null | null |
cs.LO math.LO
|
http://creativecommons.org/licenses/by/4.0/
|
We present a comprehensive programme analysing the decomposition of proof
systems for non-classical logics into proof systems for other logics,
especially classical logic, using an algebra of constraints. That is, one
recovers a proof system for a target logic by enriching a proof system for
another, typically simpler, logic with an algebra of constraints that act as
correctness conditions on the latter to capture the former; for example, one
may use Boolean algebra to give constraints in a sequent calculus for classical
propositional logic to produce a sequent calculus for intuitionistic
propositional logic. The idea behind such forms of reduction is to obtain a
tool for uniform and modular treatment of proof theory and provide a bridge
between the semantics of logics and their proof theory. The article discusses the
theoretical background of the project and provides several illustrations of its
work in the field of intuitionistic and modal logics. The results include the
following: a uniform treatment of modular and cut-free proof systems for a
large class of propositional logics; a general criterion for a novel approach
to soundness and completeness of a logic with respect to a model-theoretic
semantics; and a case study deriving a model-theoretic semantics from a
proof-theoretic specification of a logic.
|
[
{
"created": "Thu, 5 Jan 2023 16:06:09 GMT",
"version": "v1"
},
{
"created": "Mon, 27 Mar 2023 10:06:04 GMT",
"version": "v2"
},
{
"created": "Thu, 19 Oct 2023 11:21:15 GMT",
"version": "v3"
}
] |
2023-10-20
|
[
[
"Gheorghiu",
"Alexander V.",
""
],
[
"Pym",
"David J.",
""
]
] |
We present a comprehensive programme analysing the decomposition of proof systems for non-classical logics into proof systems for other logics, especially classical logic, using an algebra of constraints. That is, one recovers a proof system for a target logic by enriching a proof system for another, typically simpler, logic with an algebra of constraints that act as correctness conditions on the latter to capture the former; for example, one may use Boolean algebra to give constraints in a sequent calculus for classical propositional logic to produce a sequent calculus for intuitionistic propositional logic. The idea behind such forms of reduction is to obtain a tool for uniform and modular treatment of proof theory and provide a bridge between the semantics of logics and their proof theory. The article discusses the theoretical background of the project and provides several illustrations of its work in the field of intuitionistic and modal logics. The results include the following: a uniform treatment of modular and cut-free proof systems for a large class of propositional logics; a general criterion for a novel approach to soundness and completeness of a logic with respect to a model-theoretic semantics; and a case study deriving a model-theoretic semantics from a proof-theoretic specification of a logic.
|
1906.00452
|
Micha{\l} Koziarski
|
Micha{\l} Koziarski
|
Radial-Based Undersampling for Imbalanced Data Classification
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Data imbalance remains one of the most widespread problems affecting
contemporary machine learning. The negative effect data imbalance can have on
the traditional learning algorithms is most severe in combination with other
dataset difficulty factors, such as small disjuncts, presence of outliers and
insufficient number of training observations. Aforementioned difficulty factors
can also limit the applicability of some of the methods of dealing with data
imbalance, in particular the neighborhood-based oversampling algorithms based
on SMOTE. Radial-Based Oversampling (RBO) was previously proposed to mitigate
some of the limitations of the neighborhood-based methods. In this paper we
examine the possibility of utilizing the concept of mutual class potential,
used to guide the oversampling process in RBO, in the undersampling procedure.
Conducted computational complexity analysis indicates a significantly reduced
time complexity of the proposed Radial-Based Undersampling algorithm, and the
results of the performed experimental study indicate its usefulness, especially
on difficult datasets.
|
[
{
"created": "Sun, 2 Jun 2019 17:06:28 GMT",
"version": "v1"
},
{
"created": "Sat, 17 Apr 2021 13:51:23 GMT",
"version": "v2"
}
] |
2021-04-20
|
[
[
"Koziarski",
"Michał",
""
]
] |
Data imbalance remains one of the most widespread problems affecting contemporary machine learning. The negative effect data imbalance can have on the traditional learning algorithms is most severe in combination with other dataset difficulty factors, such as small disjuncts, presence of outliers and insufficient number of training observations. Aforementioned difficulty factors can also limit the applicability of some of the methods of dealing with data imbalance, in particular the neighborhood-based oversampling algorithms based on SMOTE. Radial-Based Oversampling (RBO) was previously proposed to mitigate some of the limitations of the neighborhood-based methods. In this paper we examine the possibility of utilizing the concept of mutual class potential, used to guide the oversampling process in RBO, in the undersampling procedure. Conducted computational complexity analysis indicates a significantly reduced time complexity of the proposed Radial-Based Undersampling algorithm, and the results of the performed experimental study indicate its usefulness, especially on difficult datasets.
|
2203.07102
|
Youqian Zhang
|
Youqian Zhang, Kasper Rasmussen
|
Detection of Electromagnetic Signal Injection Attacks on Actuator
Systems
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An actuator is a device that converts electricity into another form of
energy, typically physical movement. They are absolutely essential for any
system that needs to impact or modify the physical world, and are used in
millions of systems of all sizes, all over the world, from cars and spacecraft
to factory control systems and critical infrastructure. An actuator is a "dumb
device" that is entirely controlled by the surrounding electronics, e.g., a
microcontroller, and thus cannot authenticate its control signals or do any
other form of processing. The problem we look at in this paper is how the wires
that connect an actuator to its control electronics can act like antennas,
picking up electromagnetic signals from the environment. This makes it possible
for a remote attacker to wirelessly inject signals (energy) into these wires to
bypass the controller and directly control the actuator.
To detect such attacks, we propose a novel detection method that allows the
microcontroller to monitor the control signal and detect attacks as a deviation
from the intended value. We have managed to do this without requiring the
microcontroller to sample the signal at a high rate or run any signal
processing. That makes our defense mechanism practical and easy to integrate
into existing systems. Our method is general and applies to any type of
actuator (provided a few basic assumptions are met), and can deal with
adversaries with arbitrarily high transmission power. We implement our
detection method on two different practical systems to show its generality,
effectiveness, and robustness.
|
[
{
"created": "Mon, 14 Mar 2022 13:47:03 GMT",
"version": "v1"
}
] |
2022-03-15
|
[
[
"Zhang",
"Youqian",
""
],
[
"Rasmussen",
"Kasper",
""
]
] |
An actuator is a device that converts electricity into another form of energy, typically physical movement. They are absolutely essential for any system that needs to impact or modify the physical world, and are used in millions of systems of all sizes, all over the world, from cars and spacecraft to factory control systems and critical infrastructure. An actuator is a "dumb device" that is entirely controlled by the surrounding electronics, e.g., a microcontroller, and thus cannot authenticate its control signals or do any other form of processing. The problem we look at in this paper is how the wires that connect an actuator to its control electronics can act like antennas, picking up electromagnetic signals from the environment. This makes it possible for a remote attacker to wirelessly inject signals (energy) into these wires to bypass the controller and directly control the actuator. To detect such attacks, we propose a novel detection method that allows the microcontroller to monitor the control signal and detect attacks as a deviation from the intended value. We have managed to do this without requiring the microcontroller to sample the signal at a high rate or run any signal processing. That makes our defense mechanism practical and easy to integrate into existing systems. Our method is general and applies to any type of actuator (provided a few basic assumptions are met), and can deal with adversaries with arbitrarily high transmission power. We implement our detection method on two different practical systems to show its generality, effectiveness, and robustness.
|
2305.17127
|
Tyler A. Chang
|
Tyler A. Chang, Kishaloy Halder, Neha Anna John, Yogarshi Vyas,
Yassine Benajiba, Miguel Ballesteros, Dan Roth
|
Characterizing and Measuring Linguistic Dataset Drift
|
Accepted to ACL 2023
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
NLP models often degrade in performance when real world data distributions
differ markedly from training data. However, existing dataset drift metrics in
NLP have generally not considered specific dimensions of linguistic drift that
affect model performance, and they have not been validated in their ability to
predict model performance at the individual example level, where such metrics
are often used in practice. In this paper, we propose three dimensions of
linguistic dataset drift: vocabulary, structural, and semantic drift. These
dimensions correspond to content word frequency divergences, syntactic
divergences, and meaning changes not captured by word frequencies (e.g. lexical
semantic change). We propose interpretable metrics for all three drift
dimensions, and we modify past performance prediction methods to predict model
performance at both the example and dataset level for English sentiment
classification and natural language inference. We find that our drift metrics
are more effective than previous metrics at predicting out-of-domain model
accuracies (mean 16.8% root mean square error decrease), particularly when
compared to popular fine-tuned embedding distances (mean 47.7% error decrease).
Fine-tuned embedding distances are much more effective at ranking individual
examples by expected performance, but decomposing into vocabulary, structural,
and semantic drift produces the best example rankings of all considered
model-agnostic drift metrics (mean 6.7% ROC AUC increase).
|
[
{
"created": "Fri, 26 May 2023 17:50:51 GMT",
"version": "v1"
}
] |
2023-05-29
|
[
[
"Chang",
"Tyler A.",
""
],
[
"Halder",
"Kishaloy",
""
],
[
"John",
"Neha Anna",
""
],
[
"Vyas",
"Yogarshi",
""
],
[
"Benajiba",
"Yassine",
""
],
[
"Ballesteros",
"Miguel",
""
],
[
"Roth",
"Dan",
""
]
] |
NLP models often degrade in performance when real world data distributions differ markedly from training data. However, existing dataset drift metrics in NLP have generally not considered specific dimensions of linguistic drift that affect model performance, and they have not been validated in their ability to predict model performance at the individual example level, where such metrics are often used in practice. In this paper, we propose three dimensions of linguistic dataset drift: vocabulary, structural, and semantic drift. These dimensions correspond to content word frequency divergences, syntactic divergences, and meaning changes not captured by word frequencies (e.g. lexical semantic change). We propose interpretable metrics for all three drift dimensions, and we modify past performance prediction methods to predict model performance at both the example and dataset level for English sentiment classification and natural language inference. We find that our drift metrics are more effective than previous metrics at predicting out-of-domain model accuracies (mean 16.8% root mean square error decrease), particularly when compared to popular fine-tuned embedding distances (mean 47.7% error decrease). Fine-tuned embedding distances are much more effective at ranking individual examples by expected performance, but decomposing into vocabulary, structural, and semantic drift produces the best example rankings of all considered model-agnostic drift metrics (mean 6.7% ROC AUC increase).
|
2403.07314
|
Megan Witherow
|
Megan A. Witherow, Crystal Butler, Winston J. Shields, Furkan Ilgin,
Norou Diawara, Janice Keener, John W. Harrington, and Khan M. Iftekharuddin
|
Customizable Avatars with Dynamic Facial Action Coded Expressions
(CADyFACE) for Improved User Engagement
|
12 pages, 8 figures
| null | null | null |
cs.HC cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Customizable 3D avatar-based facial expression stimuli may improve user
engagement in behavioral biomarker discovery and therapeutic intervention for
autism, Alzheimer's disease, facial palsy, and more. However, there is a lack
of customizable avatar-based stimuli with Facial Action Coding System (FACS)
action unit (AU) labels. Therefore, this study focuses on (1) FACS-labeled,
customizable avatar-based expression stimuli for maintaining subjects'
engagement, (2) learning-based measurements that quantify subjects' facial
responses to such stimuli, and (3) validation of constructs represented by
stimulus-measurement pairs. We propose Customizable Avatars with Dynamic Facial
Action Coded Expressions (CADyFACE) labeled with AUs by a certified FACS
expert. To measure subjects' AUs in response to CADyFACE, we propose a novel
Beta-guided Correlation and Multi-task Expression learning neural network
(BeCoME-Net) for multi-label AU detection. The beta-guided correlation loss
encourages feature correlation with AUs while discouraging correlation with
subject identities for improved generalization. We train BeCoME-Net for
unilateral and bilateral AU detection and compare with state-of-the-art
approaches. To assess construct validity of CADyFACE and BeCoME-Net, twenty
healthy adult volunteers complete expression recognition and mimicry tasks in
an online feasibility study while webcam-based eye-tracking and video are
collected. We test validity of multiple constructs, including face preference
during recognition and AUs during mimicry.
|
[
{
"created": "Tue, 12 Mar 2024 05:00:38 GMT",
"version": "v1"
}
] |
2024-03-13
|
[
[
"Witherow",
"Megan A.",
""
],
[
"Butler",
"Crystal",
""
],
[
"Shields",
"Winston J.",
""
],
[
"Ilgin",
"Furkan",
""
],
[
"Diawara",
"Norou",
""
],
[
"Keener",
"Janice",
""
],
[
"Harrington",
"John W.",
""
],
[
"Iftekharuddin",
"Khan M.",
""
]
] |
Customizable 3D avatar-based facial expression stimuli may improve user engagement in behavioral biomarker discovery and therapeutic intervention for autism, Alzheimer's disease, facial palsy, and more. However, there is a lack of customizable avatar-based stimuli with Facial Action Coding System (FACS) action unit (AU) labels. Therefore, this study focuses on (1) FACS-labeled, customizable avatar-based expression stimuli for maintaining subjects' engagement, (2) learning-based measurements that quantify subjects' facial responses to such stimuli, and (3) validation of constructs represented by stimulus-measurement pairs. We propose Customizable Avatars with Dynamic Facial Action Coded Expressions (CADyFACE) labeled with AUs by a certified FACS expert. To measure subjects' AUs in response to CADyFACE, we propose a novel Beta-guided Correlation and Multi-task Expression learning neural network (BeCoME-Net) for multi-label AU detection. The beta-guided correlation loss encourages feature correlation with AUs while discouraging correlation with subject identities for improved generalization. We train BeCoME-Net for unilateral and bilateral AU detection and compare with state-of-the-art approaches. To assess construct validity of CADyFACE and BeCoME-Net, twenty healthy adult volunteers complete expression recognition and mimicry tasks in an online feasibility study while webcam-based eye-tracking and video are collected. We test validity of multiple constructs, including face preference during recognition and AUs during mimicry.
|
1603.07786
|
Hans Raj Tiwary
|
Hans Raj Tiwary
|
Extension Complexity of Formal Languages
|
Final version for TOCS
| null | null | null |
cs.CC cs.FL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this article we undertake a study of extension complexity from the
perspective of formal languages. We define a natural way to associate a family
of polytopes with binary languages. This allows us to define the notion of
extension complexity of formal languages. We prove several closure properties
of languages admitting compact extended formulations. Furthermore, we give a
sufficient machine characterization of compact languages. We demonstrate the
utility of this machine characterization by obtaining upper bounds for
polytopes for problems in nondeterministic logspace; lower bounds in streaming
models; and upper bounds on extension complexities of several polytopes.
|
[
{
"created": "Fri, 25 Mar 2016 00:11:56 GMT",
"version": "v1"
},
{
"created": "Wed, 27 Apr 2016 16:22:08 GMT",
"version": "v2"
},
{
"created": "Thu, 28 Apr 2016 10:29:07 GMT",
"version": "v3"
},
{
"created": "Tue, 19 Jul 2016 16:21:35 GMT",
"version": "v4"
},
{
"created": "Wed, 28 Aug 2019 15:20:36 GMT",
"version": "v5"
}
] |
2019-08-29
|
[
[
"Tiwary",
"Hans Raj",
""
]
] |
In this article we undertake a study of extension complexity from the perspective of formal languages. We define a natural way to associate a family of polytopes with binary languages. This allows us to define the notion of extension complexity of formal languages. We prove several closure properties of languages admitting compact extended formulations. Furthermore, we give a sufficient machine characterization of compact languages. We demonstrate the utility of this machine characterization by obtaining upper bounds for polytopes for problems in nondeterministic logspace; lower bounds in streaming models; and upper bounds on extension complexities of several polytopes.
|
2305.06386
|
Mazda Moayeri
|
Mazda Moayeri, Keivan Rezaei, Maziar Sanjabi, Soheil Feizi
|
Text-To-Concept (and Back) via Cross-Model Alignment
|
Accepted to ICML 2023 and CVPR4XAI workshop 2023
| null | null | null |
cs.CV cs.AI cs.HC cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We observe that the mapping between an image's representation in one model to
its representation in another can be learned surprisingly well with just a
linear layer, even across diverse models. Building on this observation, we
propose $\textit{text-to-concept}$, where features from a fixed pretrained
model are aligned linearly to the CLIP space, so that text embeddings from
CLIP's text encoder become directly comparable to the aligned features. With
text-to-concept, we convert fixed off-the-shelf vision encoders to surprisingly
strong zero-shot classifiers for free, with accuracy at times even surpassing
that of CLIP, despite being much smaller models and trained on a small fraction
of the data compared to CLIP. We show other immediate use-cases of
text-to-concept, like building concept bottleneck models with no concept
supervision, diagnosing distribution shifts in terms of human concepts, and
retrieving images satisfying a set of text-based constraints. Lastly, we
demonstrate the feasibility of $\textit{concept-to-text}$, where vectors in a
model's feature space are decoded by first aligning to the CLIP space before being
fed to a GPT-based generative model. Our work suggests existing deep models,
with presumably diverse architectures and training, represent input samples
relatively similarly, and a two-way communication across model representation
spaces and to humans (through language) is viable.
|
[
{
"created": "Wed, 10 May 2023 18:01:06 GMT",
"version": "v1"
}
] |
2023-05-12
|
[
[
"Moayeri",
"Mazda",
""
],
[
"Rezaei",
"Keivan",
""
],
[
"Sanjabi",
"Maziar",
""
],
[
"Feizi",
"Soheil",
""
]
] |
We observe that the mapping between an image's representation in one model to its representation in another can be learned surprisingly well with just a linear layer, even across diverse models. Building on this observation, we propose $\textit{text-to-concept}$, where features from a fixed pretrained model are aligned linearly to the CLIP space, so that text embeddings from CLIP's text encoder become directly comparable to the aligned features. With text-to-concept, we convert fixed off-the-shelf vision encoders to surprisingly strong zero-shot classifiers for free, with accuracy at times even surpassing that of CLIP, despite being much smaller models and trained on a small fraction of the data compared to CLIP. We show other immediate use-cases of text-to-concept, like building concept bottleneck models with no concept supervision, diagnosing distribution shifts in terms of human concepts, and retrieving images satisfying a set of text-based constraints. Lastly, we demonstrate the feasibility of $\textit{concept-to-text}$, where vectors in a model's feature space are decoded by first aligning to the CLIP space before being fed to a GPT-based generative model. Our work suggests existing deep models, with presumably diverse architectures and training, represent input samples relatively similarly, and a two-way communication across model representation spaces and to humans (through language) is viable.
|
2402.17310
|
Mizuki Fukasawa
|
Mizuki Fukasawa (1), Tomokazu Fukuda (1), Takuya Akashi (1) ((1) Iwate
University)
|
Method of Tracking and Analysis of Fluorescent-Labeled Cells Using
Automatic Thresholding and Labeling
|
5 pages, 7 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
High-throughput screening using cell images is an efficient method for
screening new candidates for pharmaceutical drugs. To complete the screening
process, it is essential to have an efficient process for analyzing cell
images. This paper presents a new method for efficiently tracking cells and
quantitatively detecting the signal ratio between cytoplasm and nuclei.
Existing methods include those that use image processing techniques and those
that utilize artificial intelligence (AI). However, these methods do not
consider the correspondence of cells between images, or require a significant
amount of new learning data to train AI. Therefore, our method uses automatic
thresholding and labeling algorithms to compare the position of each cell
between images, and continuously measure and analyze the signal ratio of cells.
This paper describes the algorithm of our method. Using the method, we
conducted experiments to investigate the effect of the number of opening and
closing operations during the binarization process on the tracking of the cells.
Through the experiment, we determined the appropriate number of opening and
closing processes.
|
[
{
"created": "Tue, 27 Feb 2024 08:33:03 GMT",
"version": "v1"
}
] |
2024-02-28
|
[
[
"Fukasawa",
"Mizuki",
""
],
[
"Fukuda",
"Tomokazu",
""
],
[
"Akashi",
"Takuya",
""
]
] |
High-throughput screening using cell images is an efficient method for screening new candidates for pharmaceutical drugs. To complete the screening process, it is essential to have an efficient process for analyzing cell images. This paper presents a new method for efficiently tracking cells and quantitatively detecting the signal ratio between cytoplasm and nuclei. Existing methods include those that use image processing techniques and those that utilize artificial intelligence (AI). However, these methods do not consider the correspondence of cells between images, or require a significant amount of new learning data to train AI. Therefore, our method uses automatic thresholding and labeling algorithms to compare the position of each cell between images, and continuously measure and analyze the signal ratio of cells. This paper describes the algorithm of our method. Using the method, we conducted experiments to investigate the effect of the number of opening and closing operations during the binarization process on the tracking of the cells. Through the experiment, we determined the appropriate number of opening and closing processes.
|
1504.08200
|
Bugra Tekin
|
Bugra Tekin, Xiaolu Sun, Xinchao Wang, Vincent Lepetit, Pascal Fua
|
Predicting People's 3D Poses from Short Sequences
|
superseded by arXiv:1511.06692
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose an efficient approach to exploiting motion information from
consecutive frames of a video sequence to recover the 3D pose of people.
Instead of computing candidate poses in individual frames and then linking
them, as is often done, we regress directly from a spatio-temporal block of
frames to a 3D pose in the central one. We will demonstrate that this approach
allows us to effectively overcome ambiguities and to improve upon the
state-of-the-art on challenging sequences.
|
[
{
"created": "Thu, 30 Apr 2015 12:54:39 GMT",
"version": "v1"
},
{
"created": "Fri, 1 May 2015 11:59:56 GMT",
"version": "v2"
},
{
"created": "Mon, 4 May 2015 11:24:56 GMT",
"version": "v3"
},
{
"created": "Mon, 23 Nov 2015 21:48:15 GMT",
"version": "v4"
}
] |
2015-11-25
|
[
[
"Tekin",
"Bugra",
""
],
[
"Sun",
"Xiaolu",
""
],
[
"Wang",
"Xinchao",
""
],
[
"Lepetit",
"Vincent",
""
],
[
"Fua",
"Pascal",
""
]
] |
We propose an efficient approach to exploiting motion information from consecutive frames of a video sequence to recover the 3D pose of people. Instead of computing candidate poses in individual frames and then linking them, as is often done, we regress directly from a spatio-temporal block of frames to a 3D pose in the central one. We will demonstrate that this approach allows us to effectively overcome ambiguities and to improve upon the state-of-the-art on challenging sequences.
|
2101.08102
|
Severin Kacianka
|
Severin Kacianka and Alexander Pretschner
|
Designing Accountable Systems
|
accepted for publication at the ACM Conference on Fairness,
Accountability, and Transparency (ACM FAccT) 2021
| null |
10.1145/3442188.3445905
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Accountability is an often-called-for property of technical systems. It is a
requirement for algorithmic decision systems, autonomous cyber-physical
systems, and for software systems in general. As a concept, accountability goes
back to the early history of Liberalism and is suggested as a tool to limit the
use of power. This long history has also given us many, often slightly
differing, definitions of accountability. The problem that software developers
now face is to understand what accountability means for their systems and how
to reflect it in a system's design. To enable the rigorous study of
accountability in a system, we need models that are suitable for capturing such
a varied concept. In this paper, we present a method to express and compare
different definitions of accountability using Structural Causal Models. We show
how these models can be used to evaluate a system's design and present a small
use case based on an autonomous car.
|
[
{
"created": "Wed, 20 Jan 2021 12:59:03 GMT",
"version": "v1"
}
] |
2021-04-30
|
[
[
"Kacianka",
"Severin",
""
],
[
"Pretschner",
"Alexander",
""
]
] |
Accountability is an often-called-for property of technical systems. It is a requirement for algorithmic decision systems, autonomous cyber-physical systems, and for software systems in general. As a concept, accountability goes back to the early history of Liberalism and is suggested as a tool to limit the use of power. This long history has also given us many, often slightly differing, definitions of accountability. The problem that software developers now face is to understand what accountability means for their systems and how to reflect it in a system's design. To enable the rigorous study of accountability in a system, we need models that are suitable for capturing such a varied concept. In this paper, we present a method to express and compare different definitions of accountability using Structural Causal Models. We show how these models can be used to evaluate a system's design and present a small use case based on an autonomous car.
|
2008.08480
|
Will Rosenbaum
|
Christine T. Cheng and Will Rosenbaum
|
Stable Matchings with Restricted Preferences: Structure and Complexity
|
Various updates and improvements in response to reviewer comments
| null | null | null |
cs.DM cs.CC cs.GT math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
It is well known that every stable matching instance $I$ has a rotation poset
$R(I)$ that can be computed efficiently and the downsets of $R(I)$ are in
one-to-one correspondence with the stable matchings of $I$. Furthermore, for
every poset $P$, an instance $I(P)$ can be constructed efficiently so that the
rotation poset of $I(P)$ is isomorphic to $P$. In this case, we say that $I(P)$
realizes $P$. Many researchers exploit the rotation poset of an instance to
develop fast algorithms or to establish the hardness of stable matching
problems.
In order to gain a parameterized understanding of the complexity of sampling
stable matchings, Bhatnagar et al. [SODA 2008] introduced stable matching
instances whose preference lists are restricted but nevertheless model
situations that arise in practice. In this paper, we study four such
parameterized restrictions; our goal is to characterize the rotation posets
that arise from these models: $k$-bounded, $k$-attribute, $(k_1, k_2)$-list,
$k$-range.
We prove that, for some fixed constant $k$, every rotation poset is realized
by some instance in each of the first three models. We describe efficient
algorithms for constructing such instances given the
Hasse diagram of a poset. As a consequence, the fundamental problem of counting
stable matchings remains $\#$BIS-complete even for these restricted instances.
For $k$-range preferences, we show that a poset $P$ is realizable if and only
if the Hasse diagram of $P$ has pathwidth bounded by functions of $k$. Using
this characterization, we show that the following problems are fixed parameter
tractable when parametrized by the range of the instance: exactly counting and
uniformly sampling stable matchings, finding median, sex-equal, and balanced
stable matchings.
|
[
{
"created": "Wed, 19 Aug 2020 14:39:02 GMT",
"version": "v1"
},
{
"created": "Thu, 28 Jan 2021 15:11:47 GMT",
"version": "v2"
}
] |
2021-01-29
|
[
[
"Cheng",
"Christine T.",
""
],
[
"Rosenbaum",
"Will",
""
]
] |
It is well known that every stable matching instance $I$ has a rotation poset $R(I)$ that can be computed efficiently and the downsets of $R(I)$ are in one-to-one correspondence with the stable matchings of $I$. Furthermore, for every poset $P$, an instance $I(P)$ can be constructed efficiently so that the rotation poset of $I(P)$ is isomorphic to $P$. In this case, we say that $I(P)$ realizes $P$. Many researchers exploit the rotation poset of an instance to develop fast algorithms or to establish the hardness of stable matching problems. In order to gain a parameterized understanding of the complexity of sampling stable matchings, Bhatnagar et al. [SODA 2008] introduced stable matching instances whose preference lists are restricted but nevertheless model situations that arise in practice. In this paper, we study four such parameterized restrictions; our goal is to characterize the rotation posets that arise from these models: $k$-bounded, $k$-attribute, $(k_1, k_2)$-list, $k$-range. We prove that, for some fixed constant $k$, every rotation poset is realized by some instance in each of the first three models. We describe efficient algorithms for constructing such instances given the Hasse diagram of a poset. As a consequence, the fundamental problem of counting stable matchings remains $\#$BIS-complete even for these restricted instances. For $k$-range preferences, we show that a poset $P$ is realizable if and only if the Hasse diagram of $P$ has pathwidth bounded by functions of $k$. Using this characterization, we show that the following problems are fixed parameter tractable when parametrized by the range of the instance: exactly counting and uniformly sampling stable matchings, finding median, sex-equal, and balanced stable matchings.
|
2406.05873
|
Justin Kilb
|
Justin Kilb, Caroline Ellis
|
Conserving Human Creativity with Evolutionary Generative Algorithms: A
Case Study in Music Generation
|
7 pages, 3 figures
| null | null | null |
cs.NE cs.AI math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This study explores the application of evolutionary generative algorithms in
music production to preserve and enhance human creativity. By integrating human
feedback into Differential Evolution algorithms, we produced six songs that
were submitted to international record labels, all of which received contract
offers. In addition to testing the commercial viability of these methods, this
paper examines the long-term implications of content generation using
traditional machine learning methods compared with evolutionary algorithms.
Specifically, as current generative techniques continue to scale, the potential
for computer-generated content to outpace human creation becomes likely. This
trend poses a risk of exhausting the pool of human-created training data,
potentially forcing generative machine learning models to increasingly depend
on their random input functions for generating novel content. In contrast to a
future of content generation guided by aimless random functions, our approach
allows for individualized creative exploration, ensuring that computer-assisted
content generation methods are human-centric and culturally relevant through
time.
|
[
{
"created": "Sun, 9 Jun 2024 18:11:05 GMT",
"version": "v1"
}
] |
2024-06-11
|
[
[
"Kilb",
"Justin",
""
],
[
"Ellis",
"Caroline",
""
]
] |
This study explores the application of evolutionary generative algorithms in music production to preserve and enhance human creativity. By integrating human feedback into Differential Evolution algorithms, we produced six songs that were submitted to international record labels, all of which received contract offers. In addition to testing the commercial viability of these methods, this paper examines the long-term implications of content generation using traditional machine learning methods compared with evolutionary algorithms. Specifically, as current generative techniques continue to scale, it becomes increasingly likely that computer-generated content will outpace human creation. This trend poses a risk of exhausting the pool of human-created training data, potentially forcing generative machine learning models to increasingly depend on their random input functions for generating novel content. In contrast to a future of content generation guided by aimless random functions, our approach allows for individualized creative exploration, ensuring that computer-assisted content generation methods are human-centric and culturally relevant through time.
|
2201.01422
|
Chongjun Ouyang
|
Chongjun Ouyang, Yuanwei Liu, and Hongwen Yang
|
On the Performance of Uplink ISAC Systems
|
5 pages
| null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/publicdomain/zero/1.0/
|
This letter analyzes the performance of uplink integrated sensing and
communications (ISAC) systems where communication users (CUs) and radar targets
(RTs) share the same frequency band. A non-orthogonal multiple access (NOMA)
protocol is adopted in the communication procedure of the ISAC system. Novel
expressions are derived to characterize the outage probability, ergodic
communication rate, and sensing rate. Besides, the diversity order and high
signal-to-noise ratio (SNR) slope are unveiled to gain further insights. It is
found that when achieving the same communication rate, the ISAC system enjoys a
higher sensing rate than the conventional frequency-division sensing and
communications (FDSAC) system where CUs and RTs share isolated bands. All the
results are validated by numerical simulations and are in excellent agreement.
|
[
{
"created": "Wed, 5 Jan 2022 02:56:34 GMT",
"version": "v1"
},
{
"created": "Fri, 27 May 2022 03:54:35 GMT",
"version": "v2"
}
] |
2022-05-30
|
[
[
"Ouyang",
"Chongjun",
""
],
[
"Liu",
"Yuanwei",
""
],
[
"Yang",
"Hongwen",
""
]
] |
This letter analyzes the performance of uplink integrated sensing and communications (ISAC) systems where communication users (CUs) and radar targets (RTs) share the same frequency band. A non-orthogonal multiple access (NOMA) protocol is adopted in the communication procedure of the ISAC system. Novel expressions are derived to characterize the outage probability, ergodic communication rate, and sensing rate. Besides, the diversity order and high signal-to-noise ratio (SNR) slope are unveiled to gain further insights. It is found that when achieving the same communication rate, the ISAC system enjoys a higher sensing rate than the conventional frequency-division sensing and communications (FDSAC) system where CUs and RTs share isolated bands. All the results are validated by numerical simulations and are in excellent agreement.
|
2105.12524
|
Caglar Demir
|
Caglar Demir and Axel-Cyrille Ngonga Ngomo
|
Out-of-Vocabulary Entities in Link Prediction
| null | null | null | null |
cs.LG cs.SI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Knowledge graph embedding techniques are key to making knowledge graphs
amenable to the plethora of machine learning approaches based on vector
representations. Link prediction is often used as a proxy to evaluate the
quality of these embeddings. Given that the creation of benchmarks for link
prediction is a time-consuming endeavor, most work on the subject matter uses
only a few benchmarks. As benchmarks are crucial for the fair comparison of
algorithms, ensuring their quality is tantamount to providing a solid ground
for developing better solutions to link prediction and ipso facto embedding
knowledge graphs. First studies of benchmarks pointed to limitations pertaining
to information leaking from the development to the test fragments of some
benchmark datasets. We spotted a further common limitation of three of the
benchmarks commonly used for evaluating link prediction approaches:
out-of-vocabulary entities in the test and validation sets. We provide an
implementation of an approach for spotting and removing such entities and
provide corrected versions of the datasets WN18RR, FB15K-237, and YAGO3-10. Our
experiments on the corrected versions of WN18RR, FB15K-237, and YAGO3-10
suggest that the measured performance of state-of-the-art approaches is altered
significantly with p-values <1%, <1.4%, and <1%, respectively. Overall,
state-of-the-art approaches gain on average absolute $3.29 \pm 0.24\%$ in all
metrics on WN18RR. This means that some of the conclusions achieved in previous
works might need to be revisited. We provide an open-source implementation of
our experiments and corrected datasets at
https://github.com/dice-group/OOV-In-Link-Prediction.
|
[
{
"created": "Wed, 26 May 2021 12:58:18 GMT",
"version": "v1"
}
] |
2021-05-27
|
[
[
"Demir",
"Caglar",
""
],
[
"Ngomo",
"Axel-Cyrille Ngonga",
""
]
] |
Knowledge graph embedding techniques are key to making knowledge graphs amenable to the plethora of machine learning approaches based on vector representations. Link prediction is often used as a proxy to evaluate the quality of these embeddings. Given that the creation of benchmarks for link prediction is a time-consuming endeavor, most work on the subject matter uses only a few benchmarks. As benchmarks are crucial for the fair comparison of algorithms, ensuring their quality is tantamount to providing a solid ground for developing better solutions to link prediction and ipso facto embedding knowledge graphs. First studies of benchmarks pointed to limitations pertaining to information leaking from the development to the test fragments of some benchmark datasets. We spotted a further common limitation of three of the benchmarks commonly used for evaluating link prediction approaches: out-of-vocabulary entities in the test and validation sets. We provide an implementation of an approach for spotting and removing such entities and provide corrected versions of the datasets WN18RR, FB15K-237, and YAGO3-10. Our experiments on the corrected versions of WN18RR, FB15K-237, and YAGO3-10 suggest that the measured performance of state-of-the-art approaches is altered significantly with p-values <1%, <1.4%, and <1%, respectively. Overall, state-of-the-art approaches gain on average absolute $3.29 \pm 0.24\%$ in all metrics on WN18RR. This means that some of the conclusions achieved in previous works might need to be revisited. We provide an open-source implementation of our experiments and corrected datasets at https://github.com/dice-group/OOV-In-Link-Prediction.
|
1403.1362
|
Shireesha Chintalapati
|
Shireesha Chintalapati and M. V. Raghunadh
|
Illumination,Expression and Occlusion Invariant Pose-Adaptive Face
Recognition System for Real-Time Applications
|
7 pages,8 figures, Published with International Journal of
Engineering Trends and Technology (IJETT)
|
International Journal of Engineering Trends and Technology(IJETT),
V8(6),292-298 February 2014. Published by seventh sense research group
|
10.14445/22315381/IJETT-V8P254
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Face recognition in real-time scenarios is mainly affected by illumination,
expression and pose variations, and also by occlusion. This paper presents a
framework for a pose-adaptive component-based face recognition system. The
proposed framework deals with all the above-mentioned issues. The steps
involved in the presented framework are (i) facial landmark localisation, (ii)
facial component extraction, (iii) pre-processing of the facial image, (iv)
facial pose estimation, and (v) feature extraction using Local Binary Pattern
Histograms of each component, followed by (vi) fusion of pose-adaptive
classification of components. By employing pose-adaptive classification, the
recognition process is carried out on some part of the database, based on the
estimated pose, instead of applying the recognition process to the whole
database. Pre-processing
techniques employed to overcome the problems due to illumination variation are
also discussed in this paper. Component-based techniques provide better
recognition rates when face images are occluded compared to the holistic
methods. Our method is simple, feasible and provides better results when
compared to other holistic methods.
|
[
{
"created": "Thu, 6 Mar 2014 07:19:24 GMT",
"version": "v1"
}
] |
2014-03-07
|
[
[
"Chintalapati",
"Shireesha",
""
],
[
"Raghunadh",
"M. V.",
""
]
] |
Face recognition in real-time scenarios is mainly affected by illumination, expression and pose variations, and also by occlusion. This paper presents a framework for a pose-adaptive component-based face recognition system. The proposed framework deals with all the above-mentioned issues. The steps involved in the presented framework are (i) facial landmark localisation, (ii) facial component extraction, (iii) pre-processing of the facial image, (iv) facial pose estimation, and (v) feature extraction using Local Binary Pattern Histograms of each component, followed by (vi) fusion of pose-adaptive classification of components. By employing pose-adaptive classification, the recognition process is carried out on some part of the database, based on the estimated pose, instead of applying the recognition process to the whole database. Pre-processing techniques employed to overcome the problems due to illumination variation are also discussed in this paper. Component-based techniques provide better recognition rates when face images are occluded compared to the holistic methods. Our method is simple, feasible and provides better results when compared to other holistic methods.
|
2311.18488
|
Sana Javed
|
Sana Javed, Francisco Garcia-Herrero, Bane Vasic, Mark F. Flanagan
|
Low-Complexity Linear Programming Based Decoding of Quantum LDPC codes
|
Accepted for publication at the IEEE International Conference on
Communications (ICC) 2024
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
This paper proposes two approaches for reducing the impact of the error floor
phenomenon when decoding quantum low-density parity-check codes with belief
propagation based algorithms. First, a low-complexity syndrome-based linear
programming (SB-LP) decoding algorithm is proposed, and second, the proposed
SB-LP is applied as a post-processing step after syndrome-based min-sum (SB-MS)
decoding. For the latter case, a new early stopping criterion is introduced to
decide when to activate the SB-LP algorithm, avoiding executing a predefined
maximum number of iterations for the SB-MS decoder. Simulation results show,
for a sample hypergraph code, that the proposed decoder can lower the error
floor by two to three orders of magnitude compared to SB-MS for the same total
number of decoding iterations.
|
[
{
"created": "Thu, 30 Nov 2023 12:01:04 GMT",
"version": "v1"
},
{
"created": "Fri, 19 Jan 2024 15:53:11 GMT",
"version": "v2"
}
] |
2024-01-22
|
[
[
"Javed",
"Sana",
""
],
[
"Garcia-Herrero",
"Francisco",
""
],
[
"Vasic",
"Bane",
""
],
[
"Flanagan",
"Mark F.",
""
]
] |
This paper proposes two approaches for reducing the impact of the error floor phenomenon when decoding quantum low-density parity-check codes with belief propagation based algorithms. First, a low-complexity syndrome-based linear programming (SB-LP) decoding algorithm is proposed, and second, the proposed SB-LP is applied as a post-processing step after syndrome-based min-sum (SB-MS) decoding. For the latter case, a new early stopping criterion is introduced to decide when to activate the SB-LP algorithm, avoiding executing a predefined maximum number of iterations for the SB-MS decoder. Simulation results show, for a sample hypergraph code, that the proposed decoder can lower the error floor by two to three orders of magnitude compared to SB-MS for the same total number of decoding iterations.
|
1511.07792
|
Elena Dubrova
|
Elena Dubrova and Mats N\"aslund and Gunnar Carlsson and John Fornehed
and Ben Smeets
|
Two Countermeasures Against Hardware Trojans Exploiting Non-Zero
Aliasing Probability of BIST
|
16 pages, 5 figures
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The threat of hardware Trojans has been widely recognized by academia,
industry, and government agencies. A Trojan can compromise security of a system
in spite of cryptographic protection. The damage caused by a Trojan may not be
limited to a business or reputation, but could have a severe impact on public
safety, national economy, or national security. An extremely stealthy way of
implementing hardware Trojans has been presented by Becker et al. at CHES'2012.
Their work has shown that it is possible to inject a Trojan in a random number
generator compliant with FIPS 140-2 and NIST SP800-90 standards by exploiting
non-zero aliasing probability of Logic Built-In-Self-Test (LBIST). In this
paper, we present two methods for modifying LBIST to prevent such an attack.
The first method makes test patterns dependent on a configurable key which is
programmed into a chip after the manufacturing stage. The second method uses a
remote test management system which can execute LBIST using a different set of
test patterns at each test cycle.
|
[
{
"created": "Tue, 24 Nov 2015 16:40:08 GMT",
"version": "v1"
}
] |
2015-11-25
|
[
[
"Dubrova",
"Elena",
""
],
[
"Näslund",
"Mats",
""
],
[
"Carlsson",
"Gunnar",
""
],
[
"Fornehed",
"John",
""
],
[
"Smeets",
"Ben",
""
]
] |
The threat of hardware Trojans has been widely recognized by academia, industry, and government agencies. A Trojan can compromise security of a system in spite of cryptographic protection. The damage caused by a Trojan may not be limited to a business or reputation, but could have a severe impact on public safety, national economy, or national security. An extremely stealthy way of implementing hardware Trojans has been presented by Becker et al. at CHES'2012. Their work has shown that it is possible to inject a Trojan in a random number generator compliant with FIPS 140-2 and NIST SP800-90 standards by exploiting non-zero aliasing probability of Logic Built-In-Self-Test (LBIST). In this paper, we present two methods for modifying LBIST to prevent such an attack. The first method makes test patterns dependent on a configurable key which is programmed into a chip after the manufacturing stage. The second method uses a remote test management system which can execute LBIST using a different set of test patterns at each test cycle.
|
2006.13833
|
Luis Lastras
|
Luis A. Lastras
|
Lattice Representation Learning
| null | null | null | null |
cs.LG cs.IT math.IT stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this article we introduce theory and algorithms for learning discrete
representations that take on a lattice that is embedded in a Euclidean space.
Lattice representations possess an interesting combination of properties: a)
they can be computed explicitly using lattice quantization, yet they can be
learned efficiently using the ideas we introduce in this paper, b) they are
highly related to Gaussian Variational Autoencoders, allowing designers
familiar with the latter to easily produce discrete representations from their
models and c) since lattices satisfy the axioms of a group, their adoption can
lead to a way of learning simple algebras for modeling binary operations
between objects through symbolic formalisms, yet learn these structures also
formally using differentiation techniques. This article will focus on laying
the groundwork for exploring and exploiting the first two properties, including
a new mathematical result linking expressions used during training and
inference time and experimental validation on two popular datasets.
|
[
{
"created": "Wed, 24 Jun 2020 16:05:11 GMT",
"version": "v1"
}
] |
2020-06-25
|
[
[
"Lastras",
"Luis A.",
""
]
] |
In this article we introduce theory and algorithms for learning discrete representations that take on a lattice that is embedded in a Euclidean space. Lattice representations possess an interesting combination of properties: a) they can be computed explicitly using lattice quantization, yet they can be learned efficiently using the ideas we introduce in this paper, b) they are highly related to Gaussian Variational Autoencoders, allowing designers familiar with the latter to easily produce discrete representations from their models and c) since lattices satisfy the axioms of a group, their adoption can lead to a way of learning simple algebras for modeling binary operations between objects through symbolic formalisms, yet learn these structures also formally using differentiation techniques. This article will focus on laying the groundwork for exploring and exploiting the first two properties, including a new mathematical result linking expressions used during training and inference time and experimental validation on two popular datasets.
|
2111.04671
|
David Pujol
|
David Pujol, Ashwin Machanavajjhala
|
Equity and Privacy: More Than Just a Tradeoff
|
3 pages, 1 figure. Published in IEEE Security & Privacy ( Volume: 19,
Issue: 6, Nov.-Dec. 2021)
| null |
10.1109/MSEC.2021.3105773
| null |
cs.CY cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
While the entire field of privacy preserving data analytics is focused on the
privacy-utility tradeoff, recent work has shown that privacy preserving data
publishing can introduce different levels of utility across different
population groups. It is important to understand this new tradeoff between
privacy and equity as privacy technology is being deployed in situations where
the data products will be used for research and policy making. Will marginal
populations see disproportionately less utility from privacy technology? If
there is an inequity how can we address it?
|
[
{
"created": "Mon, 8 Nov 2021 17:39:32 GMT",
"version": "v1"
}
] |
2021-11-09
|
[
[
"Pujol",
"David",
""
],
[
"Machanavajjhala",
"Ashwin",
""
]
] |
While the entire field of privacy preserving data analytics is focused on the privacy-utility tradeoff, recent work has shown that privacy preserving data publishing can introduce different levels of utility across different population groups. It is important to understand this new tradeoff between privacy and equity as privacy technology is being deployed in situations where the data products will be used for research and policy making. Will marginal populations see disproportionately less utility from privacy technology? If there is an inequity how can we address it?
|
2103.03443
|
Riccardo Paccagnella
|
Riccardo Paccagnella and Licheng Luo and Christopher W. Fletcher
|
Lord of the Ring(s): Side Channel Attacks on the CPU On-Chip Ring
Interconnect Are Practical
|
This is the extended version of a paper that appears in USENIX
Security 2021
| null | null | null |
cs.CR cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce the first microarchitectural side channel attacks that leverage
contention on the CPU ring interconnect. There are two challenges that make it
uniquely difficult to exploit this channel. First, little is known about the
ring interconnect's functioning and architecture. Second, information that can
be learned by an attacker through ring contention is noisy by nature and has
coarse spatial granularity. To address the first challenge, we perform a
thorough reverse engineering of the sophisticated protocols that handle
communication on the ring interconnect. With this knowledge, we build a
cross-core covert channel over the ring interconnect with a capacity of over 4
Mbps from a single thread, the largest to date for a cross-core channel not
relying on shared memory. To address the second challenge, we leverage the
fine-grained temporal patterns of ring contention to infer a victim program's
secrets. We demonstrate our attack by extracting key bits from vulnerable EdDSA
and RSA implementations, as well as inferring the precise timing of keystrokes
typed by a victim user.
|
[
{
"created": "Fri, 5 Mar 2021 02:44:20 GMT",
"version": "v1"
}
] |
2021-03-08
|
[
[
"Paccagnella",
"Riccardo",
""
],
[
"Luo",
"Licheng",
""
],
[
"Fletcher",
"Christopher W.",
""
]
] |
We introduce the first microarchitectural side channel attacks that leverage contention on the CPU ring interconnect. There are two challenges that make it uniquely difficult to exploit this channel. First, little is known about the ring interconnect's functioning and architecture. Second, information that can be learned by an attacker through ring contention is noisy by nature and has coarse spatial granularity. To address the first challenge, we perform a thorough reverse engineering of the sophisticated protocols that handle communication on the ring interconnect. With this knowledge, we build a cross-core covert channel over the ring interconnect with a capacity of over 4 Mbps from a single thread, the largest to date for a cross-core channel not relying on shared memory. To address the second challenge, we leverage the fine-grained temporal patterns of ring contention to infer a victim program's secrets. We demonstrate our attack by extracting key bits from vulnerable EdDSA and RSA implementations, as well as inferring the precise timing of keystrokes typed by a victim user.
|
1609.09267
|
Andrea Romanoni
|
Gheorghii Postica and Andrea Romanoni and Matteo Matteucci
|
Robust Moving Objects Detection in Lidar Data Exploiting Visual Cues
|
6 pages, to appear in IROS 2016
| null | null | null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Detecting moving objects in dynamic scenes from sequences of lidar scans is
an important task in object tracking, mapping, localization, and navigation.
Many works focus on change detection in previously observed scenes, while very
little literature addresses moving object detection. The
state-of-the-art method exploits Dempster-Shafer Theory to evaluate the
occupancy of a lidar scan and to discriminate points belonging to the static
scene from moving ones. In this paper we improve both speed and accuracy of
this method by discretizing the occupancy representation, and by removing false
positives through visual cues. Many false positives lying on the ground plane
are also removed thanks to a novel ground plane removal algorithm. Efficiency
is improved through an octree indexing strategy. Experimental evaluation
against the KITTI public dataset shows the effectiveness of our approach, both
qualitatively and quantitatively with respect to the state-of-the-art.
|
[
{
"created": "Thu, 29 Sep 2016 09:29:46 GMT",
"version": "v1"
}
] |
2016-09-30
|
[
[
"Postica",
"Gheorghii",
""
],
[
"Romanoni",
"Andrea",
""
],
[
"Matteucci",
"Matteo",
""
]
] |
Detecting moving objects in dynamic scenes from sequences of lidar scans is an important task in object tracking, mapping, localization, and navigation. Many works focus on change detection in previously observed scenes, while very little literature addresses moving object detection. The state-of-the-art method exploits Dempster-Shafer Theory to evaluate the occupancy of a lidar scan and to discriminate points belonging to the static scene from moving ones. In this paper we improve both speed and accuracy of this method by discretizing the occupancy representation, and by removing false positives through visual cues. Many false positives lying on the ground plane are also removed thanks to a novel ground plane removal algorithm. Efficiency is improved through an octree indexing strategy. Experimental evaluation against the KITTI public dataset shows the effectiveness of our approach, both qualitatively and quantitatively with respect to the state-of-the-art.
|
2210.17087
|
Youpeng Zhao
|
Yudong Lu, Jian Zhao, Youpeng Zhao, Wengang Zhou, Houqiang Li
|
DanZero: Mastering GuanDan Game with Reinforcement Learning
| null | null | null | null |
cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Card game AI has always been a hot topic in the research of artificial
intelligence. In recent years, complex card games such as Mahjong, DouDizhu and
Texas Hold'em have been solved and the corresponding AI programs have reached
the level of human experts. In this paper, we are devoted to developing an AI
program for a more complex card game, GuanDan, whose rules are similar to
DouDizhu but much more complicated. To be specific, the large state and action
spaces, the long episodes, and the uncertain number of players in GuanDan pose
great challenges for the development of the AI
program. To address these issues, we propose the first AI program DanZero for
GuanDan using reinforcement learning techniques. Specifically, we utilize a
distributed framework to train our AI system. In the actor processes, we
carefully design the state features and agents generate samples by self-play.
In the learner process, the model is updated by Deep Monte-Carlo Method. After
training for 30 days using 160 CPUs and 1 GPU, we obtain our DanZero bot. We
compare it with 8 baseline AI programs based on heuristic rules, and
the results reveal the outstanding performance of DanZero. We also test DanZero
with human players and demonstrate its human-level performance.
|
[
{
"created": "Mon, 31 Oct 2022 06:29:08 GMT",
"version": "v1"
}
] |
2022-11-01
|
[
[
"Lu",
"Yudong",
""
],
[
"Zhao",
"Jian",
""
],
[
"Zhao",
"Youpeng",
""
],
[
"Zhou",
"Wengang",
""
],
[
"Li",
"Houqiang",
""
]
] |
Card game AI has always been a hot topic in the research of artificial intelligence. In recent years, complex card games such as Mahjong, DouDizhu and Texas Hold'em have been solved and the corresponding AI programs have reached the level of human experts. In this paper, we are devoted to developing an AI program for a more complex card game, GuanDan, whose rules are similar to DouDizhu but much more complicated. To be specific, the large state and action spaces, the long episodes, and the uncertain number of players in GuanDan pose great challenges for the development of the AI program. To address these issues, we propose the first AI program DanZero for GuanDan using reinforcement learning techniques. Specifically, we utilize a distributed framework to train our AI system. In the actor processes, we carefully design the state features and agents generate samples by self-play. In the learner process, the model is updated by the Deep Monte-Carlo Method. After training for 30 days using 160 CPUs and 1 GPU, we obtain our DanZero bot. We compare it with 8 baseline AI programs based on heuristic rules, and the results reveal the outstanding performance of DanZero. We also test DanZero with human players and demonstrate its human-level performance.
|
2406.12896
|
Wei Zhang
|
Jiajun Cui, Hong Qian, Bo Jiang, Wei Zhang
|
Leveraging Pedagogical Theories to Understand Student Learning Process
with Graph-based Reasonable Knowledge Tracing
|
Preprint, accepted to appear in SIGKDD 2024, 12 pages. The source
code is available at https://github.com/JJCui96/GRKT. Keywords: interpretable
knowledge tracing, student behavior modeling, intelligence education
| null | null | null |
cs.AI cs.CY cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Knowledge tracing (KT) is a crucial task in intelligent education, focusing
on predicting students' performance on given questions to trace their evolving
knowledge. The advancement of deep learning in this field has led to
deep-learning knowledge tracing (DLKT) models that prioritize high predictive
accuracy. However, many existing DLKT methods overlook the fundamental goal of
tracking students' dynamic knowledge mastery. These models either do not
explicitly model the knowledge mastery tracing process or yield unreasonable
results that educators find difficult to comprehend and apply in real teaching
scenarios.
In response, our research conducts a preliminary analysis of mainstream KT
approaches to highlight and explain such unreasonableness. We introduce GRKT, a
graph-based reasonable knowledge tracing method to address these issues. By
leveraging graph neural networks, our approach delves into the mutual
influences of knowledge concepts, offering a more accurate representation of
how the knowledge mastery evolves throughout the learning process.
Additionally, we propose a fine-grained, psychologically grounded three-stage
modeling process, comprising knowledge retrieval, memory strengthening, and
knowledge learning/forgetting, to conduct a more reasonable knowledge tracing
process.
Comprehensive experiments demonstrate that GRKT outperforms eleven baselines
across three datasets, not only enhancing predictive accuracy but also
generating more reasonable knowledge tracing results. This makes our model a
promising advancement for practical implementation in educational settings. The
source code is available at https://github.com/JJCui96/GRKT.
|
[
{
"created": "Fri, 7 Jun 2024 10:14:30 GMT",
"version": "v1"
}
] |
2024-06-21
|
[
[
"Cui",
"Jiajun",
""
],
[
"Qian",
"Hong",
""
],
[
"Jiang",
"Bo",
""
],
[
"Zhang",
"Wei",
""
]
] |
Knowledge tracing (KT) is a crucial task in intelligent education, focusing on predicting students' performance on given questions to trace their evolving knowledge. The advancement of deep learning in this field has led to deep-learning knowledge tracing (DLKT) models that prioritize high predictive accuracy. However, many existing DLKT methods overlook the fundamental goal of tracking students' dynamic knowledge mastery. These models either do not explicitly model the knowledge mastery tracing process or yield unreasonable results that educators find difficult to comprehend and apply in real teaching scenarios. In response, our research conducts a preliminary analysis of mainstream KT approaches to highlight and explain such unreasonableness. We introduce GRKT, a graph-based reasonable knowledge tracing method to address these issues. By leveraging graph neural networks, our approach delves into the mutual influences of knowledge concepts, offering a more accurate representation of how knowledge mastery evolves throughout the learning process. Additionally, we propose a fine-grained, psychologically grounded three-stage modeling process, comprising knowledge retrieval, memory strengthening, and knowledge learning/forgetting, to conduct a more reasonable knowledge tracing process. Comprehensive experiments demonstrate that GRKT outperforms eleven baselines across three datasets, not only enhancing predictive accuracy but also generating more reasonable knowledge tracing results. This makes our model a promising advancement for practical implementation in educational settings. The source code is available at https://github.com/JJCui96/GRKT.
|
1507.03851
|
Enric Rodriguez Carbonell
|
Marc Brockschmidt, Daniel Larraz, Albert Oliveras, Enric
Rodriguez-Carbonell, Albert Rubio
|
Compositional Safety Verification with Max-SMT
|
Extended technical report version of the conference paper at FMCAD'15
| null | null | null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present an automated compositional program verification technique for
safety properties based on conditional inductive invariants. For a given
program part (e.g., a single loop) and a postcondition $\varphi$, we show how,
using a Max-SMT solver, an inductive invariant together with a precondition
can be synthesized so that the precondition ensures the validity of the
invariant and that the invariant implies $\varphi$. From this, we build a
bottom-up program verification framework that propagates preconditions of small
program parts as postconditions for preceding program parts. The method
recovers from failures to prove the validity of a precondition, using the
obtained intermediate results to restrict the search space for further proof
attempts.
As only small program parts need to be handled at a time, our method is
scalable and distributable. The derived conditions can be viewed as implicit
contracts between different parts of the program, and thus enable an
incremental program analysis.
|
[
{
"created": "Tue, 14 Jul 2015 14:01:56 GMT",
"version": "v1"
},
{
"created": "Wed, 15 Jul 2015 05:41:20 GMT",
"version": "v2"
},
{
"created": "Mon, 3 Aug 2015 21:41:23 GMT",
"version": "v3"
}
] |
2015-08-05
|
[
[
"Brockschmidt",
"Marc",
""
],
[
"Larraz",
"Daniel",
""
],
[
"Oliveras",
"Albert",
""
],
[
"Rodriguez-Carbonell",
"Enric",
""
],
[
"Rubio",
"Albert",
""
]
] |
We present an automated compositional program verification technique for safety properties based on conditional inductive invariants. For a given program part (e.g., a single loop) and a postcondition $\varphi$, we show how, using a Max-SMT solver, an inductive invariant together with a precondition can be synthesized so that the precondition ensures the validity of the invariant and that the invariant implies $\varphi$. From this, we build a bottom-up program verification framework that propagates preconditions of small program parts as postconditions for preceding program parts. The method recovers from failures to prove the validity of a precondition, using the obtained intermediate results to restrict the search space for further proof attempts. As only small program parts need to be handled at a time, our method is scalable and distributable. The derived conditions can be viewed as implicit contracts between different parts of the program, and thus enable an incremental program analysis.
|
1910.09787
|
Congcong Miao
|
Congcong Miao and Jilong Wang and Shuying Zhuang and Changqing An
|
A Coordinated View of Cyberspace
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cyberspace is an online world created by a growing network of computing and
communication technologies. It is the virtual space of the Internet, parallel
to the geographic space we live in. As it becomes a recognized component of
our society, cyberspace is drawing more attention in academic research. Many
prior efforts have tried to represent and visualize cyberspace in a geographic
coordinate system (GCS) or a network coordinate system (NCS). However, both
views have disadvantages. First, mapping cyberspace into geographic space
reveals only some of its characteristics, namely its geographic ones; all we
can see is the geographic information of cyberspace, the tip of the iceberg.
Second, an NCS is established according to network topology and maps the
position of each node in the coordinate system according to RTT (Round-Trip
Time) or network delays. This coordinate system changes dynamically with RTT
or host connection status, so it is not stable. Cyberspace, regarded as a
second space in human life, is complex and multi-dimensional, yet little is
known about it. It is in great need of its own coordinate system to tackle the
challenging task of efficiently visualizing complex multi-dimensional
cyberspace and to help us learn more about it. This paper aims to explore and
visualize cyberspace. To the best of our knowledge, we are the first to
establish a Cyberspace Coordination System (CyberCS) to represent and
visualize cyberspace. CyberCS makes the representation of cyberspace easier
and more concrete, in a manner similar to the Fourier transform. With the help
of CyberCS, different parts and degrees of cyberspace are efficiently
visualized, and users can easily filter out the specific details of interest.
|
[
{
"created": "Tue, 22 Oct 2019 06:50:02 GMT",
"version": "v1"
}
] |
2019-10-23
|
[
[
"Miao",
"Congcong",
""
],
[
"Wang",
"Jilong",
""
],
[
"Zhuang",
"Shuying",
""
],
[
"An",
"Changqing",
""
]
] |
Cyberspace is an online world created by a growing network of computing and communication technologies. It is the virtual space of the Internet, parallel to the geographic space we live in. As it becomes a recognized component of our society, cyberspace is drawing more attention in academic research. Many prior efforts have tried to represent and visualize cyberspace in a geographic coordinate system (GCS) or a network coordinate system (NCS). However, both views have disadvantages. First, mapping cyberspace into geographic space reveals only some of its characteristics, namely its geographic ones; all we can see is the geographic information of cyberspace, the tip of the iceberg. Second, an NCS is established according to network topology and maps the position of each node in the coordinate system according to RTT (Round-Trip Time) or network delays. This coordinate system changes dynamically with RTT or host connection status, so it is not stable. Cyberspace, regarded as a second space in human life, is complex and multi-dimensional, yet little is known about it. It is in great need of its own coordinate system to tackle the challenging task of efficiently visualizing complex multi-dimensional cyberspace and to help us learn more about it. This paper aims to explore and visualize cyberspace. To the best of our knowledge, we are the first to establish a Cyberspace Coordination System (CyberCS) to represent and visualize cyberspace. CyberCS makes the representation of cyberspace easier and more concrete, in a manner similar to the Fourier transform. With the help of CyberCS, different parts and degrees of cyberspace are efficiently visualized, and users can easily filter out the specific details of interest.
|
2210.14505
|
Xiujing Zheng
|
Xiujing Zheng and Liqi Wang and Shixin Zhu
|
Constructions of entanglement-assisted quantum MDS codes from
generalized Reed-Solomon codes
|
21 pages, 5 tables
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Entanglement-assisted quantum error-correcting (EAQEC) codes were introduced
as a generalization of stabilizer quantum error-correcting codes; they can be
derived from any classical linear code by relaxing the self-orthogonality
conditions, with the aid of pre-shared entanglement between the sender and the
receiver.
entanglement-assisted quantum error-correcting maximum-distance-separable
(EAQMDS) codes are constructed through generalized Reed-Solomon codes. Under
our constructions, the minimum distances of our EAQMDS codes are much larger
than those of the known EAQMDS codes of the same lengths that consume the same
number of ebits. Furthermore, some of the lengths of our EAQMDS codes are not
divisors of $q^2-1$; such lengths are completely new and differ from all
previously known ones.
|
[
{
"created": "Wed, 26 Oct 2022 06:30:15 GMT",
"version": "v1"
},
{
"created": "Sat, 16 Mar 2024 07:26:52 GMT",
"version": "v2"
}
] |
2024-03-19
|
[
[
"Zheng",
"Xiujing",
""
],
[
"Wang",
"Liqi",
""
],
[
"Zhu",
"Shixin",
""
]
] |
Entanglement-assisted quantum error-correcting (EAQEC) codes were introduced as a generalization of stabilizer quantum error-correcting codes; they can be derived from any classical linear code by relaxing the self-orthogonality conditions, with the aid of pre-shared entanglement between the sender and the receiver. In this paper, three classes of entanglement-assisted quantum error-correcting maximum-distance-separable (EAQMDS) codes are constructed through generalized Reed-Solomon codes. Under our constructions, the minimum distances of our EAQMDS codes are much larger than those of the known EAQMDS codes of the same lengths that consume the same number of ebits. Furthermore, some of the lengths of our EAQMDS codes are not divisors of $q^2-1$; such lengths are completely new and differ from all previously known ones.
|
1610.04591
|
Peter LeFanu Lumsdaine
|
Andrej Bauer, Jason Gross, Peter LeFanu Lumsdaine, Mike Shulman,
Matthieu Sozeau, and Bas Spitters
|
The HoTT Library: A formalization of homotopy type theory in Coq
| null | null | null | null |
cs.LO math.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We report on the development of the HoTT library, a formalization of homotopy
type theory in the Coq proof assistant. It formalizes most of basic homotopy
type theory, including univalence, higher inductive types, and significant
amounts of synthetic homotopy theory, as well as category theory and
modalities. The library has been used as a basis for several independent
developments. We discuss the decisions that led to the design of the library,
and we comment on the interaction of homotopy type theory with recently
introduced features of Coq, such as universe polymorphism and private inductive
types.
|
[
{
"created": "Fri, 14 Oct 2016 19:23:50 GMT",
"version": "v1"
},
{
"created": "Fri, 9 Dec 2016 16:31:04 GMT",
"version": "v2"
}
] |
2017-05-02
|
[
[
"Bauer",
"Andrej",
""
],
[
"Gross",
"Jason",
""
],
[
"Lumsdaine",
"Peter LeFanu",
""
],
[
"Shulman",
"Mike",
""
],
[
"Sozeau",
"Matthieu",
""
],
[
"Spitters",
"Bas",
""
]
] |
We report on the development of the HoTT library, a formalization of homotopy type theory in the Coq proof assistant. It formalizes most of basic homotopy type theory, including univalence, higher inductive types, and significant amounts of synthetic homotopy theory, as well as category theory and modalities. The library has been used as a basis for several independent developments. We discuss the decisions that led to the design of the library, and we comment on the interaction of homotopy type theory with recently introduced features of Coq, such as universe polymorphism and private inductive types.
|
1805.11462
|
Vincent Nguyen
|
Guillaume Klein, Yoon Kim, Yuntian Deng, Vincent Nguyen, Jean
Senellart, Alexander M. Rush
|
OpenNMT: Neural Machine Translation Toolkit
|
Presentation to AMTA 2018 - Boston. arXiv admin note: substantial
text overlap with arXiv:1701.02810
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
OpenNMT is an open-source toolkit for neural machine translation (NMT). The
system prioritizes efficiency, modularity, and extensibility with the goal of
supporting NMT research into model architectures, feature representations, and
source modalities, while maintaining competitive performance and reasonable
training requirements. The toolkit consists of modeling and translation
support, as well as detailed pedagogical documentation about the underlying
techniques. OpenNMT has been used in several production MT systems, modified
for numerous research papers, and is implemented across several deep learning
frameworks.
|
[
{
"created": "Mon, 28 May 2018 07:58:46 GMT",
"version": "v1"
}
] |
2018-05-30
|
[
[
"Klein",
"Guillaume",
""
],
[
"Kim",
"Yoon",
""
],
[
"Deng",
"Yuntian",
""
],
[
"Nguyen",
"Vincent",
""
],
[
"Senellart",
"Jean",
""
],
[
"Rush",
"Alexander M.",
""
]
] |
OpenNMT is an open-source toolkit for neural machine translation (NMT). The system prioritizes efficiency, modularity, and extensibility with the goal of supporting NMT research into model architectures, feature representations, and source modalities, while maintaining competitive performance and reasonable training requirements. The toolkit consists of modeling and translation support, as well as detailed pedagogical documentation about the underlying techniques. OpenNMT has been used in several production MT systems, modified for numerous research papers, and is implemented across several deep learning frameworks.
|
1702.07492
|
Ahmed Qureshi
|
Ahmed Hussain Qureshi, Yutaka Nakamura, Yuichiro Yoshikawa and Hiroshi
Ishiguro
|
Robot gains Social Intelligence through Multimodal Deep Reinforcement
Learning
|
The paper is published in IEEE-RAS International Conference on
Humanoid Robots (Humanoids) 2016
| null | null | null |
cs.RO cs.AI cs.CV stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For robots to coexist with humans in a social world like ours, it is crucial
that they possess human-like social interaction skills. Programming a robot to
possess such skills is a challenging task. In this paper, we propose a
Multimodal Deep Q-Network (MDQN) to enable a robot to learn human-like
interaction skills through a trial and error method. This paper aims to develop
a robot that gathers data during its interaction with a human and learns human
interaction behaviour from the high-dimensional sensory information using
end-to-end reinforcement learning. This paper demonstrates that the robot was
able to learn basic interaction skills successfully, after 14 days of
interacting with people.
|
[
{
"created": "Fri, 24 Feb 2017 08:30:43 GMT",
"version": "v1"
}
] |
2017-02-27
|
[
[
"Qureshi",
"Ahmed Hussain",
""
],
[
"Nakamura",
"Yutaka",
""
],
[
"Yoshikawa",
"Yuichiro",
""
],
[
"Ishiguro",
"Hiroshi",
""
]
] |
For robots to coexist with humans in a social world like ours, it is crucial that they possess human-like social interaction skills. Programming a robot to possess such skills is a challenging task. In this paper, we propose a Multimodal Deep Q-Network (MDQN) to enable a robot to learn human-like interaction skills through a trial and error method. This paper aims to develop a robot that gathers data during its interaction with a human and learns human interaction behaviour from the high-dimensional sensory information using end-to-end reinforcement learning. This paper demonstrates that the robot was able to learn basic interaction skills successfully, after 14 days of interacting with people.
|