| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1407.1923
|
Eli Fox-Epstein
|
Eli Fox-Epstein, Ryuhei Uehara
|
The Convex Configurations of "Sei Shonagon Chie no Ita" and Other
Dissection Puzzles
| null | null | null | null |
cs.CG math.HO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The tangram and Sei Shonagon Chie no Ita are popular dissection puzzles
consisting of seven pieces. Each puzzle can be formed by identifying edges from
sixteen identical right isosceles triangles. It is known that the tangram can
form 13 convex polygons. We show that Sei Shonagon Chie no Ita can form 16
convex polygons, propose a new puzzle that can form 19, show that no 7-piece
puzzle can form 20, and prove that 11 pieces are necessary and sufficient to
form all 20 polygons
formable by 16 identical isosceles right triangles. Finally, we examine the
number of convex polygons formable by different quantities of these triangles.
|
[
{
"created": "Tue, 8 Jul 2014 01:42:23 GMT",
"version": "v1"
}
] |
2014-07-09
|
[
[
"Fox-Epstein",
"Eli",
""
],
[
"Uehara",
"Ryuhei",
""
]
] |
The tangram and Sei Shonagon Chie no Ita are popular dissection puzzles consisting of seven pieces. Each puzzle can be formed by identifying edges from sixteen identical right isosceles triangles. It is known that the tangram can form 13 convex polygons. We show that Sei Shonagon Chie no Ita can form 16 convex polygons, propose a new puzzle that can form 19, show that no 7-piece puzzle can form 20, and prove that 11 pieces are necessary and sufficient to form all 20 polygons formable by 16 identical isosceles right triangles. Finally, we examine the number of convex polygons formable by different quantities of these triangles.
|
2006.06341
|
Tobias Kuhn
|
Timo Lek, Anna de Groot, Tobias Kuhn, Roser Morante
|
Provenance for Linguistic Corpora Through Nanopublications
| null |
In Proceedings of the 14th Linguistic Annotation Workshop (LAW),
co-located with COLING 2020
| null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Research in Computational Linguistics is dependent on text corpora for
training and testing new tools and methodologies. While there exists a plethora
of annotated linguistic information, these corpora are often not interoperable
without significant manual work. Moreover, these annotations might have evolved
into different versions, making it challenging for researchers to know the
data's provenance. This paper addresses this issue with a case study on event
annotated corpora and by creating a new, more interoperable representation of
this data in the form of nanopublications. We demonstrate how linguistic
annotations from separate corpora can be reliably linked from the start, and
thereby be accessed and queried as if they were a single dataset. We describe
how such nanopublications can be created and demonstrate how SPARQL queries can
be performed to extract interesting content from the new representations. The
queries show that information of multiple corpora can be retrieved more easily
and effectively because the information of different corpora is represented in
a uniform data format.
|
[
{
"created": "Thu, 11 Jun 2020 11:30:30 GMT",
"version": "v1"
},
{
"created": "Mon, 2 Nov 2020 07:29:09 GMT",
"version": "v2"
}
] |
2020-11-03
|
[
[
"Lek",
"Timo",
""
],
[
"de Groot",
"Anna",
""
],
[
"Kuhn",
"Tobias",
""
],
[
"Morante",
"Roser",
""
]
] |
Research in Computational Linguistics is dependent on text corpora for training and testing new tools and methodologies. While there exists a plethora of annotated linguistic information, these corpora are often not interoperable without significant manual work. Moreover, these annotations might have evolved into different versions, making it challenging for researchers to know the data's provenance. This paper addresses this issue with a case study on event annotated corpora and by creating a new, more interoperable representation of this data in the form of nanopublications. We demonstrate how linguistic annotations from separate corpora can be reliably linked from the start, and thereby be accessed and queried as if they were a single dataset. We describe how such nanopublications can be created and demonstrate how SPARQL queries can be performed to extract interesting content from the new representations. The queries show that information of multiple corpora can be retrieved more easily and effectively because the information of different corpora is represented in a uniform data format.
|
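As a concrete illustration of the querying idea in the abstract above, here is a minimal Python sketch using rdflib. The namespace, predicates, and annotation triples are hypothetical placeholders, not the paper's actual nanopublication model; the point is only that annotations from two corpora, once in one RDF store, can be queried as a single dataset with SPARQL.

```python
# Hypothetical event annotations from two corpora, queried together.
from rdflib import Graph

TTL = """
@prefix ex: <http://example.org/anno#> .

ex:a1 ex:fromCorpus ex:corpusA ; ex:eventType "purchase" ; ex:text "bought" .
ex:a2 ex:fromCorpus ex:corpusB ; ex:eventType "purchase" ; ex:text "acquired" .
"""

g = Graph()
g.parse(data=TTL, format="turtle")

# One SPARQL query retrieves matching annotations across both corpora,
# illustrating "access and query as if they were a single dataset".
q = """
PREFIX ex: <http://example.org/anno#>
SELECT ?anno ?corpus ?text WHERE {
  ?anno ex:eventType "purchase" ; ex:fromCorpus ?corpus ; ex:text ?text .
}
"""
for row in g.query(q):
    print(row.anno, row.corpus, row.text)
```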
0704.0217
|
Wiroonsak Santipach
|
Wiroonsak Santipach and Michael L. Honig
|
Capacity of a Multiple-Antenna Fading Channel with a Quantized Precoding
Matrix
| null |
IEEE Trans. Inf. Theory, vol. 55, no. 3, pp. 1218--1234, March
2009
|
10.1109/TIT.2008.2011437
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Given a multiple-input multiple-output (MIMO) channel, feedback from the
receiver can be used to specify a transmit precoding matrix, which selectively
activates the strongest channel modes. Here we analyze the performance of
Random Vector Quantization (RVQ), in which the precoding matrix is selected
from a random codebook containing independent, isotropically distributed
entries. We assume that channel elements are i.i.d. and known to the receiver,
which relays the optimal (rate-maximizing) precoder codebook index to the
transmitter using B bits. We first derive the large system capacity of
beamforming (rank-one precoding matrix) as a function of B, where large system
refers to the limit as B and the number of transmit and receive antennas all go
to infinity with fixed ratios. With beamforming RVQ is asymptotically optimal,
i.e., no other quantization scheme can achieve a larger asymptotic rate. The
performance of RVQ is also compared with that of a simpler reduced-rank scalar
quantization scheme in which the beamformer is constrained to lie in a random
subspace. We subsequently consider a precoding matrix with arbitrary rank, and
approximate the asymptotic RVQ performance with optimal and linear receivers
(matched filter and Minimum Mean Squared Error (MMSE)). Numerical examples show
that these approximations accurately predict the performance of finite-size
systems of interest. Given a target spectral efficiency, numerical examples
show that the amount of feedback required by the linear MMSE receiver is only
slightly more than that required by the optimal receiver, whereas the matched
filter can require significantly more feedback.
|
[
{
"created": "Mon, 2 Apr 2007 15:35:24 GMT",
"version": "v1"
},
{
"created": "Tue, 17 Feb 2009 03:49:20 GMT",
"version": "v2"
}
] |
2010-08-27
|
[
[
"Santipach",
"Wiroonsak",
""
],
[
"Honig",
"Michael L.",
""
]
] |
Given a multiple-input multiple-output (MIMO) channel, feedback from the receiver can be used to specify a transmit precoding matrix, which selectively activates the strongest channel modes. Here we analyze the performance of Random Vector Quantization (RVQ), in which the precoding matrix is selected from a random codebook containing independent, isotropically distributed entries. We assume that channel elements are i.i.d. and known to the receiver, which relays the optimal (rate-maximizing) precoder codebook index to the transmitter using B bits. We first derive the large system capacity of beamforming (rank-one precoding matrix) as a function of B, where large system refers to the limit as B and the number of transmit and receive antennas all go to infinity with fixed ratios. With beamforming RVQ is asymptotically optimal, i.e., no other quantization scheme can achieve a larger asymptotic rate. The performance of RVQ is also compared with that of a simpler reduced-rank scalar quantization scheme in which the beamformer is constrained to lie in a random subspace. We subsequently consider a precoding matrix with arbitrary rank, and approximate the asymptotic RVQ performance with optimal and linear receivers (matched filter and Minimum Mean Squared Error (MMSE)). Numerical examples show that these approximations accurately predict the performance of finite-size systems of interest. Given a target spectral efficiency, numerical examples show that the amount of feedback required by the linear MMSE receiver is only slightly more than that required by the optimal receiver, whereas the matched filter can require significantly more feedback.
|
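The RVQ selection step described above is straightforward to simulate. The sketch below uses small illustrative parameters rather than the paper's large-system limit: it draws a random codebook of 2^B isotropically distributed unit vectors, picks the rate-maximizing beamformer for an i.i.d. Rayleigh channel, and compares the resulting rate with ideal unquantized beamforming.

```python
import numpy as np

rng = np.random.default_rng(0)
Nt, B, snr = 4, 6, 10.0            # transmit antennas, feedback bits, linear SNR

# i.i.d. Rayleigh channel, known at the receiver.
h = (rng.standard_normal(Nt) + 1j * rng.standard_normal(Nt)) / np.sqrt(2)

# Random codebook: 2^B isotropically distributed unit-norm beamformers.
W = rng.standard_normal((2**B, Nt)) + 1j * rng.standard_normal((2**B, Nt))
W /= np.linalg.norm(W, axis=1, keepdims=True)

gains = np.abs(W @ h.conj()) ** 2  # beamforming gain of each codeword
idx = int(np.argmax(gains))        # index relayed to the transmitter (B bits)

rate_rvq = np.log2(1 + snr * gains[idx])
rate_ideal = np.log2(1 + snr * np.linalg.norm(h) ** 2)  # unquantized beamforming
print(f"RVQ rate {rate_rvq:.3f} vs ideal {rate_ideal:.3f} bits/s/Hz")
```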
1302.1064
|
Dominik Kempa
|
Juha K\"arkk\"ainen and Dominik Kempa and Simon J. Puglisi
|
Lightweight Lempel-Ziv Parsing
|
12 pages
| null |
10.1007/978-3-642-38527-8_14
| null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a new approach to LZ77 factorization that uses O(n/d) words of
working space and O(dn) time for any d >= 1 (for polylogarithmic alphabet
sizes). We also describe carefully engineered implementations of alternative
approaches to lightweight LZ77 factorization. Extensive experiments show that
the new algorithm is superior in most cases, particularly at the lowest memory
levels and for highly repetitive data. As a part of the algorithm, we describe
new methods for computing matching statistics which may be of independent
interest.
|
[
{
"created": "Tue, 5 Feb 2013 15:34:53 GMT",
"version": "v1"
},
{
"created": "Wed, 6 Feb 2013 10:15:05 GMT",
"version": "v2"
}
] |
2020-12-11
|
[
[
"Kärkkäinen",
"Juha",
""
],
[
"Kempa",
"Dominik",
""
],
[
"Puglisi",
"Simon J.",
""
]
] |
We introduce a new approach to LZ77 factorization that uses O(n/d) words of working space and O(dn) time for any d >= 1 (for polylogarithmic alphabet sizes). We also describe carefully engineered implementations of alternative approaches to lightweight LZ77 factorization. Extensive experiments show that the new algorithm is superior in most cases, particularly at the lowest memory levels and for highly repetitive data. As a part of the algorithm, we describe new methods for computing matching statistics which may be of independent interest.
|
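For readers unfamiliar with LZ77 factorization itself, the naive reference implementation below computes the same factorization the paper's algorithms produce, but in O(n^2) time; it is a didactic sketch only, with none of the O(n/d)-space, O(dn)-time machinery of the paper.

```python
# Naive LZ77 factorizer: each factor is (position, length) of the longest
# previous occurrence, or a literal character when none exists.
def lz77_factorize(s: str):
    factors, i, n = [], 0, len(s)
    while i < n:
        best_len, best_pos = 0, -1
        for j in range(i):                      # candidate previous start
            l = 0
            while i + l < n and s[j + l] == s[i + l]:
                l += 1                          # overlapping matches allowed
            if l > best_len:
                best_len, best_pos = l, j
        if best_len == 0:
            factors.append(("literal", s[i]))   # no previous occurrence
            i += 1
        else:
            factors.append((best_pos, best_len))
            i += best_len
    return factors

print(lz77_factorize("abababab"))
# [('literal', 'a'), ('literal', 'b'), (0, 6)]
```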
1805.10047
|
Mamoru Komachi
|
Michiki Kurosawa, Yukio Matsumura, Hayahide Yamagishi, Mamoru Komachi
|
Japanese Predicate Conjugation for Neural Machine Translation
|
6 pages; NAACL 2018 Student Research Workshop
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Neural machine translation (NMT) has a drawback in that it can generate only
high-frequency words owing to the computational costs of the softmax function
in the output layer.
In Japanese-English NMT, Japanese predicate conjugation causes an increase in
vocabulary size. For example, one verb can have as many as 19 surface
varieties. In this research, we focus on predicate conjugation for compressing
the vocabulary size in Japanese. The vocabulary list is filled with the various
forms of verbs. We propose methods using predicate conjugation information
without discarding linguistic information. The proposed methods can generate
low-frequency words and deal with unknown words. Two methods were considered to
introduce conjugation information: the first considers it as a token
(conjugation token) and the second considers it as an embedded vector
(conjugation feature).
The results using these methods demonstrate that the vocabulary size can be
compressed by approximately 86.1% (Tanaka corpus) and the NMT models can output
words not in the training data set. Furthermore, BLEU scores improved by
0.91 points in Japanese-to-English translation, and 0.32 points in
English-to-Japanese translation with ASPEC.
|
[
{
"created": "Fri, 25 May 2018 08:56:43 GMT",
"version": "v1"
}
] |
2018-05-28
|
[
[
"Kurosawa",
"Michiki",
""
],
[
"Matsumura",
"Yukio",
""
],
[
"Yamagishi",
"Hayahide",
""
],
[
"Komachi",
"Mamoru",
""
]
] |
Neural machine translation (NMT) has a drawback in that it can generate only high-frequency words owing to the computational costs of the softmax function in the output layer. In Japanese-English NMT, Japanese predicate conjugation causes an increase in vocabulary size. For example, one verb can have as many as 19 surface varieties. In this research, we focus on predicate conjugation for compressing the vocabulary size in Japanese. The vocabulary list is filled with the various forms of verbs. We propose methods using predicate conjugation information without discarding linguistic information. The proposed methods can generate low-frequency words and deal with unknown words. Two methods were considered to introduce conjugation information: the first considers it as a token (conjugation token) and the second considers it as an embedded vector (conjugation feature). The results using these methods demonstrate that the vocabulary size can be compressed by approximately 86.1% (Tanaka corpus) and the NMT models can output words not in the training data set. Furthermore, BLEU scores improved by 0.91 points in Japanese-to-English translation, and 0.32 points in English-to-Japanese translation with ASPEC.
|
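The "conjugation token" method above can be pictured as a toy preprocessing step: each conjugated predicate is replaced by its base form plus a token naming the conjugation, so the vocabulary needs only one entry per verb. The tiny lookup table below is a hypothetical stand-in for a real morphological analyzer.

```python
# Hypothetical mapping: surface form -> (base form, conjugation token).
CONJ_TABLE = {
    "食べた":  ("食べる", "<past>"),
    "食べない": ("食べる", "<neg>"),
    "食べます": ("食べる", "<polite>"),
}

def split_predicates(tokens):
    out = []
    for tok in tokens:
        if tok in CONJ_TABLE:
            base, conj = CONJ_TABLE[tok]
            out += [base, conj]        # base form + conjugation token
        else:
            out.append(tok)
    return out

print(split_predicates(["りんご", "を", "食べた"]))
# ['りんご', 'を', '食べる', '<past>']
```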
2203.01426
|
Peng Zhou
|
Peng Zhou, Jason K. Eshraghian, Dong-Uk Choi, Sung-Mo Kang
|
SPICEprop: Backpropagating Errors Through Memristive Spiking Neural
Networks
| null | null | null | null |
cs.NE cs.AI cs.ET
|
http://creativecommons.org/licenses/by/4.0/
|
We present a fully memristive spiking neural network (MSNN) consisting of
novel memristive neurons trained using the backpropagation through time (BPTT)
learning rule. Gradient descent is applied directly to the memristive
integrate-and-fire (MIF) neuron designed using analog SPICE circuit models,
which generates distinct depolarization, hyperpolarization, and repolarization
voltage waveforms. Synaptic weights are trained by BPTT using the membrane
potential of the MIF neuron model and can be processed on memristive crossbars.
The natural spiking dynamics of the MIF neuron model are fully differentiable,
eliminating the need for gradient approximations that are prevalent in the
spiking neural network literature. Despite the added complexity of training
directly on SPICE circuit models, we achieve 97.58% accuracy on the MNIST
testing dataset and 75.26% on the Fashion-MNIST testing dataset, the highest
accuracies among all fully MSNNs.
|
[
{
"created": "Wed, 2 Mar 2022 21:34:43 GMT",
"version": "v1"
},
{
"created": "Tue, 8 Mar 2022 05:50:44 GMT",
"version": "v2"
},
{
"created": "Thu, 10 Mar 2022 01:09:51 GMT",
"version": "v3"
}
] |
2022-03-11
|
[
[
"Zhou",
"Peng",
""
],
[
"Eshraghian",
"Jason K.",
""
],
[
"Choi",
"Dong-Uk",
""
],
[
"Kang",
"Sung-Mo",
""
]
] |
We present a fully memristive spiking neural network (MSNN) consisting of novel memristive neurons trained using the backpropagation through time (BPTT) learning rule. Gradient descent is applied directly to the memristive integrate-and-fire (MIF) neuron designed using analog SPICE circuit models, which generates distinct depolarization, hyperpolarization, and repolarization voltage waveforms. Synaptic weights are trained by BPTT using the membrane potential of the MIF neuron model and can be processed on memristive crossbars. The natural spiking dynamics of the MIF neuron model are fully differentiable, eliminating the need for gradient approximations that are prevalent in the spiking neural network literature. Despite the added complexity of training directly on SPICE circuit models, we achieve 97.58% accuracy on the MNIST testing dataset and 75.26% on the Fashion-MNIST testing dataset, the highest accuracies among all fully MSNNs.
|
2006.08754
|
Kwang-Sung Jun
|
Kwang-Sung Jun, Chicheng Zhang
|
Crush Optimism with Pessimism: Structured Bandits Beyond Asymptotic
Optimality
|
accepted to NeurIPS'20; added the lower bound result
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study stochastic structured bandits for minimizing regret. The fact that
the popular optimistic algorithms do not achieve the asymptotic
instance-dependent regret optimality (asymptotic optimality for short) has
recently attracted the attention of researchers. On the other hand, it is
known that one can achieve bounded regret (i.e., regret that does not grow
indefinitely with $n$) in certain
instances. Unfortunately, existing asymptotically optimal algorithms rely on
forced sampling that introduces an $\omega(1)$ term w.r.t. the time horizon $n$
in their regret, failing to adapt to the "easiness" of the instance. In this
paper, we focus on the finite hypothesis case and ask if one can achieve the
asymptotic optimality while enjoying bounded regret whenever possible. We
provide a positive answer by introducing a new algorithm called CRush Optimism
with Pessimism (CROP) that eliminates optimistic hypotheses by pulling the
informative arms indicated by a pessimistic hypothesis. Our finite-time
analysis shows that CROP $(i)$ achieves a constant-factor asymptotic optimality
and, thanks to the forced-exploration-free design, $(ii)$ adapts to bounded
regret, and $(iii)$ its regret bound scales not with $K$ but with an effective
number of arms $K_\psi$ that we introduce. We also discuss a problem class
where CROP can be exponentially better than existing algorithms in
\textit{nonasymptotic} regimes. This problem class also reveals a surprising
fact that even a clairvoyant oracle who plays according to the asymptotically
optimal arm pull scheme may suffer a linear worst-case regret.
|
[
{
"created": "Mon, 15 Jun 2020 20:46:52 GMT",
"version": "v1"
},
{
"created": "Fri, 23 Oct 2020 01:51:16 GMT",
"version": "v2"
}
] |
2020-10-26
|
[
[
"Jun",
"Kwang-Sung",
""
],
[
"Zhang",
"Chicheng",
""
]
] |
We study stochastic structured bandits for minimizing regret. The fact that the popular optimistic algorithms do not achieve the asymptotic instance-dependent regret optimality (asymptotic optimality for short) has recently attracted the attention of researchers. On the other hand, it is known that one can achieve bounded regret (i.e., regret that does not grow indefinitely with $n$) in certain instances. Unfortunately, existing asymptotically optimal algorithms rely on forced sampling that introduces an $\omega(1)$ term w.r.t. the time horizon $n$ in their regret, failing to adapt to the "easiness" of the instance. In this paper, we focus on the finite hypothesis case and ask if one can achieve the asymptotic optimality while enjoying bounded regret whenever possible. We provide a positive answer by introducing a new algorithm called CRush Optimism with Pessimism (CROP) that eliminates optimistic hypotheses by pulling the informative arms indicated by a pessimistic hypothesis. Our finite-time analysis shows that CROP $(i)$ achieves a constant-factor asymptotic optimality and, thanks to the forced-exploration-free design, $(ii)$ adapts to bounded regret, and $(iii)$ its regret bound scales not with $K$ but with an effective number of arms $K_\psi$ that we introduce. We also discuss a problem class where CROP can be exponentially better than existing algorithms in \textit{nonasymptotic} regimes. This problem class also reveals a surprising fact that even a clairvoyant oracle who plays according to the asymptotically optimal arm pull scheme may suffer a linear worst-case regret.
|
1510.03247
|
Jason Dou
|
Lucy Chenyun Wu, Jason Xiaotian Dou, Danny Sleator, Alan Frieze, David
Miller
|
Impartial Redistricting: A Markov Chain Approach
|
about authorship naming problem, will fix soon
| null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Gerrymandering is a worldwide problem that poses a great threat to democracy
and justice in district-based elections. Because redistricting commissions are
often partisan, district boundaries are frequently manipulated to benefit
incumbents. Since an independent commission is hard to come by, the possibility
of impartially generating districts with a computer is explored in this thesis.
We have developed an algorithm to randomly produce legal redistricting schemes
for Pennsylvania.
|
[
{
"created": "Mon, 12 Oct 2015 12:11:51 GMT",
"version": "v1"
},
{
"created": "Tue, 13 Oct 2015 12:26:31 GMT",
"version": "v2"
}
] |
2015-10-14
|
[
[
"Wu",
"Lucy Chenyun",
""
],
[
"Dou",
"Jason Xiaotian",
""
],
[
"Sleator",
"Danny",
""
],
[
"Frieze",
"Alan",
""
],
[
"Miller",
"David",
""
]
] |
Gerrymandering is a worldwide problem that poses a great threat to democracy and justice in district-based elections. Because redistricting commissions are often partisan, district boundaries are frequently manipulated to benefit incumbents. Since an independent commission is hard to come by, the possibility of impartially generating districts with a computer is explored in this thesis. We have developed an algorithm to randomly produce legal redistricting schemes for Pennsylvania.
|
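In the spirit of (though far simpler than) the Markov chain approach the thesis describes, the sketch below runs a chain over assignments of grid cells to two districts: each step proposes flipping one boundary cell and rejects moves that break contiguity or population balance. All constants are illustrative assumptions.

```python
import random
from collections import deque

N, K, TOL = 6, 2, 4   # 6x6 grid, 2 districts, allowed size imbalance

def neighbors(cell):
    r, c = cell
    return [(r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= r + dr < N and 0 <= c + dc < N]

def contiguous(cells):
    start = next(iter(cells))
    seen, todo = {start}, deque([start])
    while todo:                      # BFS within the district's cells
        for nb in neighbors(todo.popleft()):
            if nb in cells and nb not in seen:
                seen.add(nb)
                todo.append(nb)
    return len(seen) == len(cells)

# Start from a trivially legal plan: left half vs right half of the grid.
plan = {(r, c): 0 if c < N // 2 else 1 for r in range(N) for c in range(N)}

random.seed(0)
for _ in range(2000):
    cell = random.choice(list(plan))
    new = 1 - plan[cell]
    if new not in {plan[nb] for nb in neighbors(cell)}:
        continue                     # only flip cells on a district boundary
    old = plan[cell]
    plan[cell] = new
    parts = [{c for c, d in plan.items() if d == k} for k in range(K)]
    sizes = [len(p) for p in parts]
    if max(sizes) - min(sizes) > TOL or not all(contiguous(p) for p in parts):
        plan[cell] = old             # reject moves that break legality

print("final district sizes:",
      [sum(1 for d in plan.values() if d == k) for k in range(K)])
```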
1911.08619
|
Shuwen Deng
|
Shuwen Deng, Wenjie Xiong, Jakub Szefer
|
A Benchmark Suite for Evaluating Caches' Vulnerability to Timing Attacks
|
13 pages, 5 figures
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Timing-based side or covert channels in processor caches continue to present
a threat to computer systems, and they are the key to many of the recent
Spectre and Meltdown attacks. Based on improvements to an existing three-step
model for cache timing-based attacks, this work presents 88 Strong types of
theoretical timing-based vulnerabilities in processor caches. To understand and
evaluate all possible types of vulnerabilities in processor caches, this work
further presents and implements a new benchmark suite which can be used to test
to which types of cache timing-based attacks a given processor or cache design
is vulnerable. In total, there are 1094 automatically-generated test programs
which cover the 88 theoretical vulnerabilities. The benchmark suite generates
the Cache Timing Vulnerability Score which can be used to evaluate how
vulnerable a specific cache implementation is to different attacks. A smaller
Cache Timing Vulnerability Score means the design is more secure, and the
scores among different machines can be easily compared. Evaluation is conducted
on commodity Intel and AMD processors and shows the differences in processor
implementations can result in different types of attacks that they are
vulnerable to. Beyond testing commodity processors, the benchmarks and the
Cache Timing Vulnerability Score can be used to help designers of new secure
processor caches evaluate their design's susceptibility to cache timing-based
attacks.
|
[
{
"created": "Tue, 19 Nov 2019 22:38:16 GMT",
"version": "v1"
}
] |
2019-11-21
|
[
[
"Deng",
"Shuwen",
""
],
[
"Xiong",
"Wenjie",
""
],
[
"Szefer",
"Jakub",
""
]
] |
Timing-based side or covert channels in processor caches continue to present a threat to computer systems, and they are the key to many of the recent Spectre and Meltdown attacks. Based on improvements to an existing three-step model for cache timing-based attacks, this work presents 88 Strong types of theoretical timing-based vulnerabilities in processor caches. To understand and evaluate all possible types of vulnerabilities in processor caches, this work further presents and implements a new benchmark suite which can be used to test to which types of cache timing-based attacks a given processor or cache design is vulnerable. In total, there are 1094 automatically-generated test programs which cover the 88 theoretical vulnerabilities. The benchmark suite generates the Cache Timing Vulnerability Score which can be used to evaluate how vulnerable a specific cache implementation is to different attacks. A smaller Cache Timing Vulnerability Score means the design is more secure, and the scores among different machines can be easily compared. Evaluation is conducted on commodity Intel and AMD processors and shows the differences in processor implementations can result in different types of attacks that they are vulnerable to. Beyond testing commodity processors, the benchmarks and the Cache Timing Vulnerability Score can be used to help designers of new secure processor caches evaluate their design's susceptibility to cache timing-based attacks.
|
2302.05573
|
Bin Liu
|
Bo Li, Xiaolin Wei, Fengwei Chen, Bin Liu
|
3D Colored Shape Reconstruction from a Single RGB Image through
Diffusion
|
9 pages, 8 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a novel 3D colored shape reconstruction method from a single RGB
image through a diffusion model. Diffusion models have shown great potential
for high-quality 3D shape generation. However, most existing work based on
diffusion models focuses only on geometric shape generation; it can neither
accomplish 3D reconstruction from a single image nor produce 3D geometric
shapes with color information. In this work, we propose to reconstruct a 3D
colored shape from a single RGB image through a novel conditional diffusion
model. The reverse process of the proposed diffusion model consists of three
modules: a shape prediction module, a color prediction module, and a NeRF-like
rendering module. In the shape prediction module, the reference RGB image is
first encoded into a high-level shape feature, which is then used as a
condition to predict the reverse geometric noise in the diffusion model. The
color of each 3D point updated in the shape prediction module is then
predicted by the color prediction module. Finally, a NeRF-like rendering
module is designed to render the colored point cloud predicted by the former
two modules into 2D image space, guiding training conditioned only on a
reference image. To the best of the authors' knowledge, the proposed method is
the first diffusion model for 3D colored shape reconstruction from a single
RGB image. Experimental results demonstrate that the proposed method achieves
competitive performance on colored 3D shape reconstruction, and the ablation
study validates the positive role of the color prediction module in improving
the reconstruction quality of the 3D geometric point cloud.
|
[
{
"created": "Sat, 11 Feb 2023 02:15:00 GMT",
"version": "v1"
}
] |
2023-02-14
|
[
[
"Li",
"Bo",
""
],
[
"Wei",
"Xiaolin",
""
],
[
"Chen",
"Fengwei",
""
],
[
"Liu",
"Bin",
""
]
] |
We propose a novel 3D colored shape reconstruction method from a single RGB image through a diffusion model. Diffusion models have shown great potential for high-quality 3D shape generation. However, most existing work based on diffusion models focuses only on geometric shape generation; it can neither accomplish 3D reconstruction from a single image nor produce 3D geometric shapes with color information. In this work, we propose to reconstruct a 3D colored shape from a single RGB image through a novel conditional diffusion model. The reverse process of the proposed diffusion model consists of three modules: a shape prediction module, a color prediction module, and a NeRF-like rendering module. In the shape prediction module, the reference RGB image is first encoded into a high-level shape feature, which is then used as a condition to predict the reverse geometric noise in the diffusion model. The color of each 3D point updated in the shape prediction module is then predicted by the color prediction module. Finally, a NeRF-like rendering module is designed to render the colored point cloud predicted by the former two modules into 2D image space, guiding training conditioned only on a reference image. To the best of the authors' knowledge, the proposed method is the first diffusion model for 3D colored shape reconstruction from a single RGB image. Experimental results demonstrate that the proposed method achieves competitive performance on colored 3D shape reconstruction, and the ablation study validates the positive role of the color prediction module in improving the reconstruction quality of the 3D geometric point cloud.
|
2004.08715
|
Ikuesan R. Adeyemi Dr.
|
Victor R. Kebande, Nickson M. Karie, Richard Adeyemi Ikuesan, Abdullah
Al-Ghushami, and H. S. Venter
|
Detecting Centralized Architecture-Based Botnets using Travelling
Salesperson Non-Deterministic Polynomial-Hard problem, TSP-NP Technique
|
2019 IEEE Conference on Application, Information and Network Security
(AINS)
|
2020 INSPEC Accession Number: 19300268 Electronic ISBN:
978-1-7281-3306-5 Print on Demand(PoD) ISBN: 978-1-7281-3307-2
|
10.1109/AINS47559.2019.8968710
| null |
cs.CR cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The threats posed by botnets in cyberspace continue to grow each day, and it
has become very hard to detect or infiltrate bots, because botnet developers
keep changing their propagation and attack techniques. Currently, most of
these attacks have centered on stealing computing energy, theft of personal
information, and Distributed Denial of Service (DDoS) attacks. In this paper,
the authors propose a novel technique that uses the Non-Deterministic
Polynomial-Time Hard (NP-Hard) Traveling Salesperson Problem (TSP), in which a
given bot, bj, visits each host in a network environment, NE, and then returns
to the botmaster in the form of an instruction (command), while optimally
minimizing the number of hosts that are (or may be) attacked. Given that bj
represents a piece of malicious code and that the TSP is an NP-hard problem in
combinatorial optimization, the authors present this as an effective approach
to botnet detection. This study concentrates on the centralized botnet
architecture. This holistic approach shows that botnet detection accuracy can
be increased with a degree of certainty and the chances of false positives
potentially decreased. A discussion of the technique's possible applicability
and implementation is also given.
|
[
{
"created": "Sat, 18 Apr 2020 22:13:14 GMT",
"version": "v1"
}
] |
2020-04-21
|
[
[
"Kebande",
"Victor R.",
""
],
[
"Karie",
"Nickson M.",
""
],
[
"Ikuesan",
"Richard Adeyemi",
""
],
[
"Al-Ghushami",
"Abdullah",
""
],
[
"Venter",
"H. S.",
""
]
] |
The threats posed by botnets in cyberspace continue to grow each day, and it has become very hard to detect or infiltrate bots, because botnet developers keep changing their propagation and attack techniques. Currently, most of these attacks have centered on stealing computing energy, theft of personal information, and Distributed Denial of Service (DDoS) attacks. In this paper, the authors propose a novel technique that uses the Non-Deterministic Polynomial-Time Hard (NP-Hard) Traveling Salesperson Problem (TSP), in which a given bot, bj, visits each host in a network environment, NE, and then returns to the botmaster in the form of an instruction (command), while optimally minimizing the number of hosts that are (or may be) attacked. Given that bj represents a piece of malicious code and that the TSP is an NP-hard problem in combinatorial optimization, the authors present this as an effective approach to botnet detection. This study concentrates on the centralized botnet architecture. This holistic approach shows that botnet detection accuracy can be increased with a degree of certainty and the chances of false positives potentially decreased. A discussion of the technique's possible applicability and implementation is also given.
|
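To make the TSP framing concrete, the sketch below builds a tour in which a hypothetical bot leaves the botmaster, visits every host once, and returns, using a greedy nearest-neighbour heuristic (exact TSP minimisation being NP-hard). The host coordinates are invented, and this only illustrates the framing, not the paper's detection procedure.

```python
import math

# Hypothetical host positions in the network environment NE.
hosts = {"botmaster": (0, 0), "h1": (2, 1), "h2": (5, 4), "h3": (1, 6), "h4": (4, 0)}

def dist(a, b):
    (x1, y1), (x2, y2) = hosts[a], hosts[b]
    return math.hypot(x1 - x2, y1 - y2)

def greedy_tour(start="botmaster"):
    todo, tour = set(hosts) - {start}, [start]
    while todo:
        nxt = min(todo, key=lambda h: dist(tour[-1], h))  # nearest unvisited host
        tour.append(nxt)
        todo.remove(nxt)
    return tour + [start]                                 # return to the botmaster

tour = greedy_tour()
length = sum(dist(a, b) for a, b in zip(tour, tour[1:]))
print(tour, f"length={length:.2f}")
```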
1606.03752
|
Kiran Venugopal
|
Kiran Venugopal and Robert W. Heath Jr
|
Location Based Performance Model for Indoor mmWave Wearable
Communication
|
presented at ICC 2016
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Simultaneous use of high-end wearable wireless devices like smart glasses is
challenging in a dense indoor environment due to the high level of
interference. In this scenario, the millimeter wave (mmWave) band offers
promising potential for achieving gigabits-per-second throughput. Here we
propose a novel system model for analyzing the performance of mmWave-based
communication among wearables. The proposed model accounts for the
non-isotropy of the indoor environment and the effects of reflections that are
predominant for indoor mmWave signals. The effect of human body blockage is
modeled, and the system performance is shown to vary greatly depending on the
user location, body orientation, and the density of the network. Closed-form
expressions for the spatially averaged signal-to-interference-plus-noise ratio
distribution are also derived as a function of the location and orientation of
a reference user.
|
[
{
"created": "Sun, 12 Jun 2016 18:38:26 GMT",
"version": "v1"
}
] |
2016-06-14
|
[
[
"Venugopal",
"Kiran",
""
],
[
"Heath",
"Robert W.",
"Jr"
]
] |
Simultaneous use of high-end wearable wireless devices like smart glasses is challenging in a dense indoor environment due to the high level of interference. In this scenario, the millimeter wave (mmWave) band offers promising potential for achieving gigabits-per-second throughput. Here we propose a novel system model for analyzing the performance of mmWave-based communication among wearables. The proposed model accounts for the non-isotropy of the indoor environment and the effects of reflections that are predominant for indoor mmWave signals. The effect of human body blockage is modeled, and the system performance is shown to vary greatly depending on the user location, body orientation, and the density of the network. Closed-form expressions for the spatially averaged signal-to-interference-plus-noise ratio distribution are also derived as a function of the location and orientation of a reference user.
|
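A toy Monte Carlo version of the setting conveys the role of body blockage, even though the paper derives closed-form expressions: each interferer is independently blocked with some probability and heavily attenuated, and the SINR distribution is estimated by sampling. All constants below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
trials, n_interf = 100_000, 8
p_block, block_loss = 0.6, 1e-3     # body blockage probability / attenuation
noise = 0.01

sig = rng.exponential(1.0, trials)                       # desired link fading
interf = rng.exponential(0.2, (trials, n_interf))        # interferer powers
blocked = rng.random((trials, n_interf)) < p_block
interf = np.where(blocked, interf * block_loss, interf)  # bodies absorb mmWave

sinr = sig / (interf.sum(axis=1) + noise)
print("median SINR: %.1f dB" % (10 * np.log10(np.median(sinr))))
print("P(SINR > 0 dB): %.2f" % np.mean(sinr > 1.0))
```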
2110.10905
|
Xianyuan Zhan
|
Jin Li, Xianyuan Zhan, Zixu Xiao, Guyue Zhou
|
Efficient Robotic Manipulation Through Offline-to-Online Reinforcement
Learning and Goal-Aware State Information
| null | null | null | null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
End-to-end learning of robotic manipulation with high data efficiency is one
of the key challenges in robotics. The latest methods, which utilize human
demonstration data and unsupervised representation learning, have proven to be
a promising direction for improving RL learning efficiency. The use of
demonstration data also allows "warming up" the RL policies using offline data
with imitation learning or the recently emerged offline reinforcement learning
algorithms. However, existing works often treat offline policy learning and
online exploration as two separate processes, which are often accompanied by a
severe performance drop during the offline-to-online transition. Furthermore,
many robotic manipulation tasks involve complex sub-task structures, which are
very challenging to solve in RL with sparse reward. In this work, we propose a
unified offline-to-online RL framework that resolves the transition
performance drop issue. Additionally, we introduce goal-aware state
information to the RL agent, which can greatly reduce task complexity and
accelerate policy learning. Combined with an advanced unsupervised
representation learning module, our framework achieves great training
efficiency and performance compared with the state-of-the-art methods in
multiple robotic manipulation tasks.
|
[
{
"created": "Thu, 21 Oct 2021 05:34:25 GMT",
"version": "v1"
}
] |
2021-10-22
|
[
[
"Li",
"Jin",
""
],
[
"Zhan",
"Xianyuan",
""
],
[
"Xiao",
"Zixu",
""
],
[
"Zhou",
"Guyue",
""
]
] |
End-to-end learning of robotic manipulation with high data efficiency is one of the key challenges in robotics. The latest methods, which utilize human demonstration data and unsupervised representation learning, have proven to be a promising direction for improving RL learning efficiency. The use of demonstration data also allows "warming up" the RL policies using offline data with imitation learning or the recently emerged offline reinforcement learning algorithms. However, existing works often treat offline policy learning and online exploration as two separate processes, which are often accompanied by a severe performance drop during the offline-to-online transition. Furthermore, many robotic manipulation tasks involve complex sub-task structures, which are very challenging to solve in RL with sparse reward. In this work, we propose a unified offline-to-online RL framework that resolves the transition performance drop issue. Additionally, we introduce goal-aware state information to the RL agent, which can greatly reduce task complexity and accelerate policy learning. Combined with an advanced unsupervised representation learning module, our framework achieves great training efficiency and performance compared with the state-of-the-art methods in multiple robotic manipulation tasks.
|
2403.00506
|
Deborah N. Jakobi
|
Deborah N. Jakobi and Thomas Kern and David R. Reich and Patrick
Haller and Lena A. J\"ager
|
PoTeC: A German Naturalistic Eye-tracking-while-reading Corpus
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The Potsdam Textbook Corpus (PoTeC) is a naturalistic
eye-tracking-while-reading corpus containing data from 75 participants reading
12 scientific texts. PoTeC is the first naturalistic eye-tracking-while-reading
corpus that contains eye-movements from domain-experts as well as novices in a
within-participant manipulation: It is based on a 2x2x2 fully-crossed factorial
design which includes the participants' level of study and the participants'
discipline of study as between-subject factors and the text domain as a
within-subject factor. The participants' reading comprehension was assessed by
a series of text comprehension questions and their domain knowledge was tested
by text-independent background questions for each of the texts. The materials
are annotated for a variety of linguistic features at different levels. We
envision PoTeC to be used for a wide range of studies including but not limited
to analyses of expert and non-expert reading strategies. The corpus and all the
accompanying data at all stages of the preprocessing pipeline and all code used
to preprocess the data are made available via GitHub:
https://github.com/DiLi-Lab/PoTeC.
|
[
{
"created": "Fri, 1 Mar 2024 13:07:39 GMT",
"version": "v1"
}
] |
2024-03-04
|
[
[
"Jakobi",
"Deborah N.",
""
],
[
"Kern",
"Thomas",
""
],
[
"Reich",
"David R.",
""
],
[
"Haller",
"Patrick",
""
],
[
"Jäger",
"Lena A.",
""
]
] |
The Potsdam Textbook Corpus (PoTeC) is a naturalistic eye-tracking-while-reading corpus containing data from 75 participants reading 12 scientific texts. PoTeC is the first naturalistic eye-tracking-while-reading corpus that contains eye-movements from domain-experts as well as novices in a within-participant manipulation: It is based on a 2x2x2 fully-crossed factorial design which includes the participants' level of study and the participants' discipline of study as between-subject factors and the text domain as a within-subject factor. The participants' reading comprehension was assessed by a series of text comprehension questions and their domain knowledge was tested by text-independent background questions for each of the texts. The materials are annotated for a variety of linguistic features at different levels. We envision PoTeC to be used for a wide range of studies including but not limited to analyses of expert and non-expert reading strategies. The corpus and all the accompanying data at all stages of the preprocessing pipeline and all code used to preprocess the data are made available via GitHub: https://github.com/DiLi-Lab/PoTeC.
|
1211.2189
|
Jannik Matuschke
|
Jannik Matuschke and Britta Peis
|
Lattices and maximum flow algorithms in planar graphs
| null | null | null | null |
cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We show that the left/right relation on the set of s-t-paths of a plane graph
induces a so-called submodular lattice. If the embedding of the graph is
s-t-planar, this lattice is even consecutive. This implies that Ford and
Fulkerson's uppermost path algorithm for maximum flow in such graphs is indeed
a special case of a two-phase greedy algorithm on lattice polyhedra. We also
show that the properties submodularity and consecutivity cannot be achieved
simultaneously by any partial order on the paths if the graph is planar but not
s-t-planar, thus providing a characterization of this class of graphs.
|
[
{
"created": "Fri, 9 Nov 2012 17:02:11 GMT",
"version": "v1"
}
] |
2012-11-12
|
[
[
"Matuschke",
"Jannik",
""
],
[
"Peis",
"Britta",
""
]
] |
We show that the left/right relation on the set of s-t-paths of a plane graph induces a so-called submodular lattice. If the embedding of the graph is s-t-planar, this lattice is even consecutive. This implies that Ford and Fulkerson's uppermost path algorithm for maximum flow in such graphs is indeed a special case of a two-phase greedy algorithm on lattice polyhedra. We also show that the properties submodularity and consecutivity cannot be achieved simultaneously by any partial order on the paths if the graph is planar but not s-t-planar, thus providing a characterization of this class of graphs.
|
2306.05382
|
Mingyu Jin
|
Haochen Xue, Mingyu Jin, Chong Zhang, Yuxuan Huang, Qian Weng, Xiaobo
Jin
|
Image Blending Algorithm with Automatic Mask Generation
|
14 pages, 8 figures
|
International Conference on Neural Information Processing 2023
|
10.1007/978-981-99-8132-8_18
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In recent years, image blending has gained popularity for its ability to
create visually stunning content. However, the current image blending
algorithms mainly have the following problems: manually creating image blending
masks requires a lot of manpower and material resources; image blending
algorithms cannot effectively solve the problems of brightness distortion and
low resolution. To this end, we propose a new image blending method with
automatic mask generation: it combines semantic object detection and
segmentation with mask generation to achieve deep blended images, based on our
proposed saturation loss and a two-stage iteration of the PAN algorithm that
fixes brightness distortion and low-resolution issues. Results on publicly
available datasets show that our method outperforms other classical image
blending algorithms on various performance metrics, including PSNR and SSIM.
|
[
{
"created": "Thu, 8 Jun 2023 17:31:24 GMT",
"version": "v1"
},
{
"created": "Sun, 11 Jun 2023 14:38:54 GMT",
"version": "v2"
},
{
"created": "Wed, 29 Nov 2023 06:49:12 GMT",
"version": "v3"
}
] |
2023-11-30
|
[
[
"Xue",
"Haochen",
""
],
[
"Jin",
"Mingyu",
""
],
[
"Zhang",
"Chong",
""
],
[
"Huang",
"Yuxuan",
""
],
[
"Weng",
"Qian",
""
],
[
"Jin",
"Xiaobo",
""
]
] |
In recent years, image blending has gained popularity for its ability to create visually stunning content. However, the current image blending algorithms mainly have the following problems: manually creating image blending masks requires a lot of manpower and material resources; image blending algorithms cannot effectively solve the problems of brightness distortion and low resolution. To this end, we propose a new image blending method with automatic mask generation: it combines semantic object detection and segmentation with mask generation to achieve deep blended images, based on our proposed saturation loss and a two-stage iteration of the PAN algorithm that fixes brightness distortion and low-resolution issues. Results on publicly available datasets show that our method outperforms other classical image blending algorithms on various performance metrics, including PSNR and SSIM.
|
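A bare-bones stand-in for the "automatic mask + blend" steps is sketched below: the mask comes from a simple intensity threshold rather than the paper's semantic detection and segmentation, and the blend is plain feathered alpha compositing rather than the proposed saturation-loss/PAN method.

```python
import numpy as np

def auto_mask(obj_gray: np.ndarray, thresh: float = 0.1) -> np.ndarray:
    """Crude foreground mask: pixels brighter than the threshold."""
    return (obj_gray > thresh).astype(float)

def blend(bg: np.ndarray, fg: np.ndarray, mask: np.ndarray, feather: int = 3):
    # Feather the hard mask by repeated neighbour averaging so the seam
    # is less visible (edges wrap around, which is fine for a demo).
    soft = mask.copy()
    for _ in range(feather):
        soft = 0.25 * (np.roll(soft, 1, 0) + np.roll(soft, -1, 0)
                       + np.roll(soft, 1, 1) + np.roll(soft, -1, 1))
    soft = soft[..., None]                     # broadcast over RGB channels
    return soft * fg + (1.0 - soft) * bg

obj = np.zeros((64, 64, 3))
obj[16:48, 16:48] = 0.8                        # bright square "object" on black
bg = np.full((64, 64, 3), 0.2)                 # plain grey background
mask = auto_mask(obj.mean(axis=2))
out = blend(bg, obj, mask)
print(out.shape, float(out.min()), float(out.max()))
```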
1812.08032
|
Cagatay Turkay
|
Cagatay Turkay, Nicola Pezzotti, Carsten Binnig, Hendrik Strobelt,
Barbara Hammer, Daniel A. Keim, Jean-Daniel Fekete, Themis Palpanas, Yunhai
Wang, Florin Rusu
|
Progressive Data Science: Potential and Challenges
| null | null | null | null |
cs.HC cs.DB cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Data science requires time-consuming iterative manual activities. In
particular, activities such as data selection, preprocessing, transformation,
and mining highly depend on iterative trial-and-error processes that could be
sped up significantly by providing quick feedback on the impact of changes.
The idea of progressive data science is to compute the results of changes in a
progressive manner, returning a first approximation of results quickly and
allowing iterative refinements until converging to a final result. Enabling
the user to interact with the intermediate results allows early detection of
erroneous or suboptimal choices, the guided definition of modifications to the
pipeline, and their quick assessment. In this paper, we discuss the
progressiveness challenges arising in different steps of the data science
pipeline. We describe how changes in each step of the pipeline impact the
subsequent steps and outline why progressive data science will help to make
the process more effective. Computing progressive approximations of outcomes
resulting from changes creates numerous research challenges, especially if the
changes are made in the early steps of the pipeline. We discuss these
challenges and outline first steps towards progressiveness, which, we argue,
will ultimately help to significantly speed up the overall data science
process.
|
[
{
"created": "Wed, 19 Dec 2018 15:45:03 GMT",
"version": "v1"
},
{
"created": "Thu, 12 Sep 2019 17:02:46 GMT",
"version": "v2"
}
] |
2019-09-13
|
[
[
"Turkay",
"Cagatay",
""
],
[
"Pezzotti",
"Nicola",
""
],
[
"Binnig",
"Carsten",
""
],
[
"Strobelt",
"Hendrik",
""
],
[
"Hammer",
"Barbara",
""
],
[
"Keim",
"Daniel A.",
""
],
[
"Fekete",
"Jean-Daniel",
""
],
[
"Palpanas",
"Themis",
""
],
[
"Wang",
"Yunhai",
""
],
[
"Rusu",
"Florin",
""
]
] |
Data science requires time-consuming iterative manual activities. In particular, activities such as data selection, preprocessing, transformation, and mining highly depend on iterative trial-and-error processes that could be sped up significantly by providing quick feedback on the impact of changes. The idea of progressive data science is to compute the results of changes in a progressive manner, returning a first approximation of results quickly and allowing iterative refinements until converging to a final result. Enabling the user to interact with the intermediate results allows early detection of erroneous or suboptimal choices, the guided definition of modifications to the pipeline, and their quick assessment. In this paper, we discuss the progressiveness challenges arising in different steps of the data science pipeline. We describe how changes in each step of the pipeline impact the subsequent steps and outline why progressive data science will help to make the process more effective. Computing progressive approximations of outcomes resulting from changes creates numerous research challenges, especially if the changes are made in the early steps of the pipeline. We discuss these challenges and outline first steps towards progressiveness, which, we argue, will ultimately help to significantly speed up the overall data science process.
|
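The core idea of progressiveness, returning an early approximation and refining it, can be illustrated in a few lines: an aggregate is computed chunk by chunk, every intermediate estimate is exposed immediately, and the estimate converges to the exact answer on the last chunk.

```python
import numpy as np

def progressive_mean(data: np.ndarray, chunk: int = 1000):
    total, count = 0.0, 0
    for i in range(0, len(data), chunk):
        part = data[i:i + chunk]
        total += part.sum()
        count += len(part)
        yield total / count          # early approximation, refined each step

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=10_000)
for step, est in enumerate(progressive_mean(x), 1):
    print(f"after {step} chunk(s): mean ~ {est:.4f}")
print("exact:", x.mean())
```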
1812.02872
|
Yapeng Tian
|
Yapeng Tian, Chenxiao Guan, Justin Goodman, Marc Moore, Chenliang Xu
|
An Attempt towards Interpretable Audio-Visual Video Captioning
|
11 pages, 4 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Automatically generating a natural language sentence to describe the content
of an input video is a very challenging problem. It is an essential multimodal
task in which auditory and visual contents are equally important. Although
audio information has been exploited to improve video captioning in previous
works, it is usually regarded as an additional feature fed into a black box
fusion machine. How are the words in the generated sentences associated with
the auditory and visual modalities? This problem has not yet been investigated. In
this paper, we make the first attempt to design an interpretable audio-visual
video captioning network to discover the association between words in sentences
and audio-visual sequences. To achieve this, we propose a multimodal
convolutional neural network-based audio-visual video captioning framework and
introduce a modality-aware module for exploring modality selection during
sentence generation. Besides, we collect new audio captioning and visual
captioning datasets for further exploring the interactions between auditory and
visual modalities for high-level video understanding. Extensive experiments
demonstrate that the modality-aware module makes our model interpretable on
modality selection during sentence generation. Even with the added
interpretability, our video captioning network can still achieve comparable
performance with recent state-of-the-art methods.
|
[
{
"created": "Fri, 7 Dec 2018 01:57:42 GMT",
"version": "v1"
}
] |
2018-12-10
|
[
[
"Tian",
"Yapeng",
""
],
[
"Guan",
"Chenxiao",
""
],
[
"Goodman",
"Justin",
""
],
[
"Moore",
"Marc",
""
],
[
"Xu",
"Chenliang",
""
]
] |
Automatically generating a natural language sentence to describe the content of an input video is a very challenging problem. It is an essential multimodal task in which auditory and visual contents are equally important. Although audio information has been exploited to improve video captioning in previous works, it is usually regarded as an additional feature fed into a black box fusion machine. How are the words in the generated sentences associated with the auditory and visual modalities? This problem has not yet been investigated. In this paper, we make the first attempt to design an interpretable audio-visual video captioning network to discover the association between words in sentences and audio-visual sequences. To achieve this, we propose a multimodal convolutional neural network-based audio-visual video captioning framework and introduce a modality-aware module for exploring modality selection during sentence generation. Besides, we collect new audio captioning and visual captioning datasets for further exploring the interactions between auditory and visual modalities for high-level video understanding. Extensive experiments demonstrate that the modality-aware module makes our model interpretable on modality selection during sentence generation. Even with the added interpretability, our video captioning network can still achieve comparable performance with recent state-of-the-art methods.
|
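One simple way to picture a modality-aware module (an assumption about the general mechanism, not the paper's exact design) is a scalar gate that mixes audio and visual features at each word step; the gate value itself then indicates which modality drove the word.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
w_g = rng.standard_normal(2 * d)   # gate parameters (random here, learned in practice)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fuse(audio_feat, visual_feat):
    g = sigmoid(w_g @ np.concatenate([audio_feat, visual_feat]))
    fused = g * audio_feat + (1 - g) * visual_feat
    return fused, g                # g near 1: audio-driven word

a, v = rng.standard_normal(d), rng.standard_normal(d)
fused, g = fuse(a, v)
print(f"gate={g:.2f} -> {'audio' if g > 0.5 else 'visual'}-dominated")
```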
1605.05109
|
Seri Khoury
|
Amir Abboud, Keren Censor-Hillel and Seri Khoury
|
Near-Linear Lower Bounds for Distributed Distance Computations, Even in
Sparse Networks
| null | null | null | null |
cs.DC cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We develop a new technique for constructing sparse graphs that allow us to
prove near-linear lower bounds on the round complexity of computing distances
in the CONGEST model. Specifically, we show an $\widetilde{\Omega}(n)$ lower
bound for computing the diameter in sparse networks, which was previously known
only for dense networks [Frischknecht et al., SODA 2012]. In fact, we can even
modify our construction to obtain graphs with constant degree, using a simple
but powerful degree-reduction technique which we define.
Moreover, our technique allows us to show $\widetilde{\Omega}(n)$ lower
bounds for computing $(\frac{3}{2}-\varepsilon)$-approximations of the diameter
or the radius, and for computing a $(\frac{5}{3}-\varepsilon)$-approximation of
all eccentricities. For radius, we are unaware of any previous lower bounds.
For diameter, these greatly improve upon previous lower bounds and are tight up
to polylogarithmic factors [Frischknecht et al., SODA 2012], and for
eccentricities the improvement is both in the lower bound and in the
approximation factor [Holzer and Wattenhofer, PODC 2012].
Interestingly, our technique also allows showing an almost-linear lower bound
for the verification of $(\alpha,\beta)$-spanners, for $\alpha < \beta+1$.
|
[
{
"created": "Tue, 17 May 2016 11:02:16 GMT",
"version": "v1"
}
] |
2016-05-18
|
[
[
"Abboud",
"Amir",
""
],
[
"Censor-Hillel",
"Keren",
""
],
[
"Khoury",
"Seri",
""
]
] |
We develop a new technique for constructing sparse graphs that allow us to prove near-linear lower bounds on the round complexity of computing distances in the CONGEST model. Specifically, we show an $\widetilde{\Omega}(n)$ lower bound for computing the diameter in sparse networks, which was previously known only for dense networks [Frischknecht et al., SODA 2012]. In fact, we can even modify our construction to obtain graphs with constant degree, using a simple but powerful degree-reduction technique which we define. Moreover, our technique allows us to show $\widetilde{\Omega}(n)$ lower bounds for computing $(\frac{3}{2}-\varepsilon)$-approximations of the diameter or the radius, and for computing a $(\frac{5}{3}-\varepsilon)$-approximation of all eccentricities. For radius, we are unaware of any previous lower bounds. For diameter, these greatly improve upon previous lower bounds and are tight up to polylogarithmic factors [Frischknecht et al., SODA 2012], and for eccentricities the improvement is both in the lower bound and in the approximation factor [Holzer and Wattenhofer, PODC 2012]. Interestingly, our technique also allows showing an almost-linear lower bound for the verification of $(\alpha,\beta)$-spanners, for $\alpha < \beta+1$.
|
1705.02385
|
Sylvia Boyd
|
Sylvia Boyd, Andr\'as Seb\"o
|
The Salesman's Improved Tours for Fundamental Classes
| null | null | null | null |
cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Finding the exact integrality gap $\alpha$ for the LP relaxation of the
metric Travelling Salesman Problem (TSP) has been an open problem for over
thirty years, with little progress made. It is known that $4/3 \leq \alpha \leq
3/2$, and a famous conjecture states $\alpha = 4/3$. For this problem,
essentially two "fundamental" classes of instances have been proposed. This
fundamental property means that in order to show that the integrality gap is at
most $\rho$ for all instances of metric TSP, it is sufficient to show it only
for the instances in the fundamental class. However, despite the importance and
the simplicity of such classes, no apparent effort has been deployed for
improving the integrality gap bounds for them. In this paper we take a natural
first step in this endeavour, and consider the $1/2$-integer points of one such
class. We successfully improve the upper bound for the integrality gap from
$3/2$ to $10/7$ for a superclass of these points, as well as prove a lower
bound of $4/3$ for the superclass. Our methods involve innovative applications
of tools from combinatorial optimization which have the potential to be more
broadly applied.
|
[
{
"created": "Fri, 5 May 2017 20:05:24 GMT",
"version": "v1"
},
{
"created": "Mon, 29 Oct 2018 21:50:55 GMT",
"version": "v2"
}
] |
2018-10-31
|
[
[
"Boyd",
"Sylvia",
""
],
[
"Sebö",
"András",
""
]
] |
Finding the exact integrality gap $\alpha$ for the LP relaxation of the metric Travelling Salesman Problem (TSP) has been an open problem for over thirty years, with little progress made. It is known that $4/3 \leq \alpha \leq 3/2$, and a famous conjecture states $\alpha = 4/3$. For this problem, essentially two "fundamental" classes of instances have been proposed. This fundamental property means that in order to show that the integrality gap is at most $\rho$ for all instances of metric TSP, it is sufficient to show it only for the instances in the fundamental class. However, despite the importance and the simplicity of such classes, no apparent effort has been deployed for improving the integrality gap bounds for them. In this paper we take a natural first step in this endeavour, and consider the $1/2$-integer points of one such class. We successfully improve the upper bound for the integrality gap from $3/2$ to $10/7$ for a superclass of these points, as well as prove a lower bound of $4/3$ for the superclass. Our methods involve innovative applications of tools from combinatorial optimization which have the potential to be more broadly applied.
|
2208.01639
|
Simson Garfinkel
|
Simson Garfinkel and Jonathan Stewart
|
Sharpening Your Tools: Updating bulk_extractor for the 2020s
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Bulk_extractor is a high-performance digital forensics tool written in C++.
Between 2018 and 2022 we updated the program from C++98 to C++17, performed a
complete code refactoring, and adopted a unit test framework. The new version
typically runs with 75\% more throughput than the previous version, which we
attribute to improved multithreading. We provide lessons and recommendations
for other digital forensics tool maintainers.
|
[
{
"created": "Thu, 31 Mar 2022 17:45:46 GMT",
"version": "v1"
}
] |
2022-08-04
|
[
[
"Garfinkel",
"Simson",
""
],
[
"Stewart",
"Jonathan",
""
]
] |
Bulk_extractor is a high-performance digital forensics tool written in C++. Between 2018 and 2022 we updated the program from C++98 to C++17, performed a complete code refactoring, and adopted a unit test framework. The new version typically runs with 75\% more throughput than the previous version, which we attribute to improved multithreading. We provide lessons and recommendations for other digital forensics tool maintainers.
|
1509.02238
|
Fangfang Li
|
Fangfang Li, Yanchang Zhao, Klaus Felsche, Guandong Xu, and Longbing
Cao
|
Coupling Analysis Between Twitter and Call Centre
|
19 pages
| null | null | null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Social media has been contributing to many research areas, such as data
mining, recommender systems, and time series analysis. However, there are few
successful applications of social media in government agencies. In fact, many
governments have social media accounts on platforms such as Twitter and
Facebook. More and more customers are likely to communicate with governments
on social media, generating massive external social media data for
governments. This external data can be beneficial for analysing the behaviours
and real needs of customers. Besides this, most governments also have a call
centre to help customers solve their problems. It is not difficult to imagine
that the enquiries on external social media and the internal call centre may
have coupling relationships. These couplings could be helpful for studying
customers' intent and for allocating a government's limited resources for
better service. In this paper, we focus on analysing the coupling relations
between the internal call centre and external public media using time series
analysis methods for the Australian Department of Immigration and Border
Protection. The discovered couplings demonstrate that the call centre and
public media indeed have correlations, which are significant for understanding
customers' behaviours.
|
[
{
"created": "Tue, 8 Sep 2015 01:21:50 GMT",
"version": "v1"
}
] |
2016-11-06
|
[
[
"Li",
"Fangfang",
""
],
[
"Zhao",
"Yanchang",
""
],
[
"Felsche",
"Klaus",
""
],
[
"Xu",
"Guandong",
""
],
[
"Cao",
"Longbing",
""
]
] |
Social media has been contributing to many research areas, such as data mining, recommender systems, and time series analysis. However, there are few successful applications of social media in government agencies. In fact, many governments have social media accounts on platforms such as Twitter and Facebook. More and more customers are likely to communicate with governments on social media, generating massive external social media data for governments. This external data can be beneficial for analysing the behaviours and real needs of customers. Besides this, most governments also have a call centre to help customers solve their problems. It is not difficult to imagine that the enquiries on external social media and the internal call centre may have coupling relationships. These couplings could be helpful for studying customers' intent and for allocating a government's limited resources for better service. In this paper, we focus on analysing the coupling relations between the internal call centre and external public media using time series analysis methods for the Australian Department of Immigration and Border Protection. The discovered couplings demonstrate that the call centre and public media indeed have correlations, which are significant for understanding customers' behaviours.
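A minimal sketch of one standard coupling analysis of this kind, lagged cross-correlation between two daily count series; the synthetic tweet/call volumes and the two-day lag below are illustrative stand-ins, not data from the study.

```python
import numpy as np

def lagged_correlation(x, y, max_lag):
    """Approximate Pearson correlation of x[t] against y[t + lag]."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    corrs = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag > 0:
            a, b = x[:-lag], y[lag:]
        elif lag < 0:
            a, b = x[-lag:], y[:lag]
        else:
            a, b = x, y
        corrs[lag] = float(np.mean(a * b))
    return corrs

# Synthetic daily counts: call volume trails tweet volume by about two days.
rng = np.random.default_rng(0)
tweets = rng.poisson(50, 365).astype(float)
calls = np.roll(tweets, 2) + rng.normal(0, 5, 365)
best = max(lagged_correlation(tweets, calls, 7).items(), key=lambda kv: kv[1])
print("strongest coupling at lag:", best)  # expect a peak near lag = 2
```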
|
2001.10109
|
Kriton Konstantinidis
|
Alexandros Haliassos, Kriton Konstantinidis, Danilo P. Mandic
|
Supervised Learning for Non-Sequential Data: A Canonical Polyadic
Decomposition Approach
|
Accepted at IEEE Transactions on Neural Networks and Learning Systems
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Efficient modelling of feature interactions underpins supervised learning for
non-sequential tasks, characterized by a lack of inherent ordering of features
(variables). The brute force approach of learning a parameter for each
interaction of every order comes at an exponential computational and memory
cost (Curse of Dimensionality). To alleviate this issue, it has been proposed
to implicitly represent the model parameters as a tensor, the order of which is
equal to the number of features; for efficiency, it can be further factorized
into a compact Tensor Train (TT) format. However, both TT and other Tensor
Networks (TNs), such as Tensor Ring and Hierarchical Tucker, are sensitive to
the ordering of their indices (and hence to the features). To establish the
desired invariance to feature ordering, we propose to represent the weight
tensor through the Canonical Polyadic (CP) Decomposition (CPD), and introduce
the associated inference and learning algorithms, including suitable
regularization and initialization schemes. It is demonstrated that the proposed
CP-based predictor significantly outperforms other TN-based predictors on
sparse data while exhibiting comparable performance on dense non-sequential
tasks. Furthermore, for enhanced expressiveness, we generalize the framework to
allow feature mapping to arbitrarily high-dimensional feature vectors. In
conjunction with feature vector normalization, this is shown to yield dramatic
improvements in performance for dense non-sequential tasks, matching models
such as fully-connected neural networks.
|
[
{
"created": "Mon, 27 Jan 2020 22:38:40 GMT",
"version": "v1"
},
{
"created": "Wed, 15 Apr 2020 09:10:27 GMT",
"version": "v2"
},
{
"created": "Tue, 30 Mar 2021 09:29:17 GMT",
"version": "v3"
}
] |
2021-03-31
|
[
[
"Haliassos",
"Alexandros",
""
],
[
"Konstantinidis",
"Kriton",
""
],
[
"Mandic",
"Danilo P.",
""
]
] |
Efficient modelling of feature interactions underpins supervised learning for non-sequential tasks, characterized by a lack of inherent ordering of features (variables). The brute force approach of learning a parameter for each interaction of every order comes at an exponential computational and memory cost (Curse of Dimensionality). To alleviate this issue, it has been proposed to implicitly represent the model parameters as a tensor, the order of which is equal to the number of features; for efficiency, it can be further factorized into a compact Tensor Train (TT) format. However, both TT and other Tensor Networks (TNs), such as Tensor Ring and Hierarchical Tucker, are sensitive to the ordering of their indices (and hence to the features). To establish the desired invariance to feature ordering, we propose to represent the weight tensor through the Canonical Polyadic (CP) Decomposition (CPD), and introduce the associated inference and learning algorithms, including suitable regularization and initialization schemes. It is demonstrated that the proposed CP-based predictor significantly outperforms other TN-based predictors on sparse data while exhibiting comparable performance on dense non-sequential tasks. Furthermore, for enhanced expressiveness, we generalize the framework to allow feature mapping to arbitrarily high-dimensional feature vectors. In conjunction with feature vector normalization, this is shown to yield dramatic improvements in performance for dense non-sequential tasks, matching models such as fully-connected neural networks.
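A minimal numpy sketch of the inference step for such a CP-factorized predictor: the weight tensor $W = \sum_r a_r^{(1)} \otimes \cdots \otimes a_r^{(D)}$ is never materialized, and the score $\langle W, \varphi(x)\rangle$ reduces to a sum over ranks of products of per-feature inner products. The toy local feature map $\varphi_d(x_d) = [1, x_d]$ is an assumption for illustration, not the paper's choice.

```python
import numpy as np

def cp_predict(factors, feature_maps):
    """Score <W, phi(x)> without ever materializing the order-D weight tensor
    W = sum_r a_r^(1) x ... x a_r^(D).

    factors: list of D arrays of shape (R, m), the CP factor vectors.
    feature_maps: list of D arrays of shape (m,), the local maps phi_d(x_d).
    """
    R = factors[0].shape[0]
    score = np.ones(R)
    for A_d, phi_d in zip(factors, feature_maps):
        score *= A_d @ phi_d   # one inner product per rank-1 term
    return score.sum()

# Toy instance: D = 3 features, rank R = 4, local map phi(x) = [1, x].
rng = np.random.default_rng(1)
factors = [rng.normal(size=(4, 2)) for _ in range(3)]
x = [0.5, -1.0, 2.0]
print(cp_predict(factors, [np.array([1.0, xd]) for xd in x]))
```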
|
2401.01313
|
Anku Rani
|
S.M Towhidul Islam Tonmoy, S M Mehedi Zaman, Vinija Jain, Anku Rani,
Vipula Rawte, Aman Chadha, Amitava Das
|
A Comprehensive Survey of Hallucination Mitigation Techniques in Large
Language Models
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
As Large Language Models (LLMs) continue to advance in their ability to write
human-like text, a key challenge remains around their tendency to hallucinate:
generating content that appears factual but is ungrounded. This issue of
hallucination is arguably the biggest hindrance to safely deploying these
powerful LLMs into real-world production systems that impact people's lives.
The journey toward widespread adoption of LLMs in practical settings heavily
relies on addressing and mitigating hallucinations. Unlike traditional AI
systems focused on limited tasks, LLMs have been exposed to vast amounts of
online text data during training. While this allows them to display impressive
language fluency, it also means they are capable of extrapolating information
from the biases in training data, misinterpreting ambiguous prompts, or
modifying the information to align superficially with the input. This becomes
hugely alarming when we rely on language generation capabilities for sensitive
applications, such as summarizing medical records, financial analysis reports,
etc. This paper presents a comprehensive survey of over 32 techniques developed
to mitigate hallucination in LLMs. Notable among these are Retrieval Augmented
Generation (Lewis et al., 2021), Knowledge Retrieval (Varshney et al., 2023),
CoNLI (Lei et al., 2023), and CoVe (Dhuliawala et al., 2023). Furthermore, we
introduce a detailed taxonomy categorizing these methods based on various
parameters, such as dataset utilization, common tasks, feedback mechanisms, and
retriever types. This classification helps distinguish the diverse approaches
specifically designed to tackle hallucination issues in LLMs. Additionally, we
analyze the challenges and limitations inherent in these techniques, providing
a solid foundation for future research in addressing hallucinations and related
phenomena within the realm of LLMs.
|
[
{
"created": "Tue, 2 Jan 2024 17:56:30 GMT",
"version": "v1"
},
{
"created": "Wed, 3 Jan 2024 17:13:00 GMT",
"version": "v2"
},
{
"created": "Mon, 8 Jan 2024 16:19:17 GMT",
"version": "v3"
}
] |
2024-01-09
|
[
[
"Tonmoy",
"S. M Towhidul Islam",
""
],
[
"Zaman",
"S M Mehedi",
""
],
[
"Jain",
"Vinija",
""
],
[
"Rani",
"Anku",
""
],
[
"Rawte",
"Vipula",
""
],
[
"Chadha",
"Aman",
""
],
[
"Das",
"Amitava",
""
]
] |
As Large Language Models (LLMs) continue to advance in their ability to write human-like text, a key challenge remains around their tendency to hallucinate: generating content that appears factual but is ungrounded. This issue of hallucination is arguably the biggest hindrance to safely deploying these powerful LLMs into real-world production systems that impact people's lives. The journey toward widespread adoption of LLMs in practical settings heavily relies on addressing and mitigating hallucinations. Unlike traditional AI systems focused on limited tasks, LLMs have been exposed to vast amounts of online text data during training. While this allows them to display impressive language fluency, it also means they are capable of extrapolating information from the biases in training data, misinterpreting ambiguous prompts, or modifying the information to align superficially with the input. This becomes hugely alarming when we rely on language generation capabilities for sensitive applications, such as summarizing medical records, financial analysis reports, etc. This paper presents a comprehensive survey of over 32 techniques developed to mitigate hallucination in LLMs. Notable among these are Retrieval Augmented Generation (Lewis et al., 2021), Knowledge Retrieval (Varshney et al., 2023), CoNLI (Lei et al., 2023), and CoVe (Dhuliawala et al., 2023). Furthermore, we introduce a detailed taxonomy categorizing these methods based on various parameters, such as dataset utilization, common tasks, feedback mechanisms, and retriever types. This classification helps distinguish the diverse approaches specifically designed to tackle hallucination issues in LLMs. Additionally, we analyze the challenges and limitations inherent in these techniques, providing a solid foundation for future research in addressing hallucinations and related phenomena within the realm of LLMs.
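As a concrete example of the first surveyed family, a schematic retrieval-augmented generation loop: retrieve evidence, then ground the prompt in it. The bag-of-words retriever and the corpus are toy stand-ins; a real system would use a dense encoder and pass the grounded prompt to the LLM.

```python
import numpy as np

def embed(text, vocab):
    """Toy bag-of-words embedding; a real system would use a dense encoder."""
    v = np.zeros(len(vocab))
    for w in text.lower().split():
        if w in vocab:
            v[vocab[w]] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

def retrieve(query, corpus, vocab, k=2):
    q = embed(query, vocab)
    return sorted(corpus, key=lambda d: -(embed(d, vocab) @ q))[:k]

corpus = ["the eiffel tower is in paris",
          "mount everest is the highest mountain",
          "paris is the capital of france"]
vocab = {w: i for i, w in enumerate(sorted({w for d in corpus for w in d.split()}))}

query = "where is the eiffel tower"
evidence = retrieve(query, corpus, vocab)
prompt = "Answer using only this evidence:\n" + "\n".join(evidence) + "\nQ: " + query
print(prompt)  # the grounded prompt is what gets passed to the LLM
```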
|
2404.06966
|
Johannes Burchert
|
Johannes Burchert, Thorben Werner, Vijaya Krishna Yalavarthi, Diego
Coello de Portugal, Maximilian Stubbemann, and Lars Schmidt-Thieme
|
Are EEG Sequences Time Series? EEG Classification with Time Series
Models and Joint Subject Training
| null | null | null | null |
cs.LG eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As with most other data domains, EEG data analysis relies on rich
domain-specific preprocessing. Beyond such preprocessing, machine learners
would hope to deal with such data as with any other time series data. For EEG
classification many models have been developed with layer types and
architectures we typically do not see in time series classification.
Furthermore, typically separate models for each individual subject are learned,
not one model for all of them. In this paper, we systematically study the
differences between EEG classification models and generic time series
classification models. We describe three different model setups to deal with
EEG data from different subjects, subject-specific models (most EEG
literature), subject-agnostic models and subject-conditional models. In
experiments on three datasets, we demonstrate that off-the-shelf time series
classification models trained per subject perform close to EEG classification
models, but do not quite reach the performance of domain-specific
modeling. Additionally, we combine time-series models with subject embeddings
to train one joint subject-conditional classifier on all subjects. The
resulting models are competitive with dedicated EEG models in 2 out of 3
datasets, even outperforming all EEG methods on one of them.
|
[
{
"created": "Wed, 10 Apr 2024 12:24:05 GMT",
"version": "v1"
}
] |
2024-04-11
|
[
[
"Burchert",
"Johannes",
""
],
[
"Werner",
"Thorben",
""
],
[
"Yalavarthi",
"Vijaya Krishna",
""
],
[
"de Portugal",
"Diego Coello",
""
],
[
"Stubbemann",
"Maximilian",
""
],
[
"Schmidt-Thieme",
"Lars",
""
]
] |
As with most other data domains, EEG data analysis relies on rich domain-specific preprocessing. Beyond such preprocessing, machine learners would hope to deal with such data as with any other time series data. For EEG classification many models have been developed with layer types and architectures we typically do not see in time series classification. Furthermore, typically separate models for each individual subject are learned, not one model for all of them. In this paper, we systematically study the differences between EEG classification models and generic time series classification models. We describe three different model setups to deal with EEG data from different subjects, subject-specific models (most EEG literature), subject-agnostic models and subject-conditional models. In experiments on three datasets, we demonstrate that off-the-shelf time series classification models trained per subject perform close to EEG classification models, but do not quite reach the performance of domain-specific modeling. Additionally, we combine time-series models with subject embeddings to train one joint subject-conditional classifier on all subjects. The resulting models are competitive with dedicated EEG models in 2 out of 3 datasets, even outperforming all EEG methods on one of them.
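A minimal sketch of the subject-conditional setup described above: a generic sequence encoder whose output is concatenated with a learned subject embedding before the classification head. The GRU encoder and all sizes are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SubjectConditionalClassifier(nn.Module):
    """A plain GRU stands in for the time-series encoder; its final hidden
    state is concatenated with a learned subject embedding before the head."""

    def __init__(self, n_channels, n_subjects, n_classes, hidden=64, emb=16):
        super().__init__()
        self.encoder = nn.GRU(n_channels, hidden, batch_first=True)
        self.subject_emb = nn.Embedding(n_subjects, emb)
        self.head = nn.Linear(hidden + emb, n_classes)

    def forward(self, x, subject_id):
        # x: (batch, time, channels); subject_id: (batch,) integer ids
        _, h = self.encoder(x)                        # h: (1, batch, hidden)
        z = torch.cat([h[-1], self.subject_emb(subject_id)], dim=-1)
        return self.head(z)

model = SubjectConditionalClassifier(n_channels=22, n_subjects=9, n_classes=4)
logits = model(torch.randn(8, 250, 22), torch.randint(0, 9, (8,)))
print(logits.shape)  # torch.Size([8, 4])
```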
|
1007.4531
|
Barak Fishbain
|
Barak Fishbain, Dorit S. Hochbaum, Stefan Mueller
|
Competitive Analysis of Minimum-Cut Maximum Flow Algorithms in Vision
Problems
| null |
Journal of Real-Time Image Processing, March 2016, Vol. 11, Issue
3, pp 589-609. (Online April 2013.)
|
10.1007/s11554-013-0344-3.
| null |
cs.CV cs.DM math.CO math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Rapid advances in image acquisition and storage technology underline the need
for algorithms that are capable of solving large scale image processing and
computer-vision problems. The minimum cut problem plays an important role in
processing many of these imaging problems, such as image and video
segmentation, stereo vision, multi-view reconstruction and surface fitting.
While several min-cut/max-flow algorithms can be found in the literature, their
performance in practice has been studied primarily outside the scope of
computer vision. We present here the results of a comprehensive computational
study, in terms of execution times and memory utilization, of four recently
published algorithms, which optimally solve the {\em s-t} cut and maximum flow
problems: (i) Goldberg's and Tarjan's {\em Push-Relabel}; (ii) Hochbaum's {\em
pseudoflow}; (iii) Boykov's and Kolmogorov's {\em augmenting paths}; and (iv)
Goldberg's {\em partial augment-relabel}. Our results demonstrate that
Hochbaum's {\em pseudoflow} algorithm is faster and utilizes less memory than
the other algorithms on all problem instances investigated.
|
[
{
"created": "Mon, 26 Jul 2010 18:58:32 GMT",
"version": "v1"
},
{
"created": "Mon, 18 Oct 2010 21:31:15 GMT",
"version": "v2"
}
] |
2016-10-14
|
[
[
"Fishbain",
"Barak",
""
],
[
"Hochbaum",
"Dorit S.",
""
],
[
"Mueller",
"Stefan",
""
]
] |
Rapid advances in image acquisition and storage technology underline the need for algorithms that are capable of solving large scale image processing and computer-vision problems. The minimum cut problem plays an important role in processing many of these imaging problems, such as image and video segmentation, stereo vision, multi-view reconstruction and surface fitting. While several min-cut/max-flow algorithms can be found in the literature, their performance in practice has been studied primarily outside the scope of computer vision. We present here the results of a comprehensive computational study, in terms of execution times and memory utilization, of four recently published algorithms, which optimally solve the {\em s-t} cut and maximum flow problems: (i) Goldberg's and Tarjan's {\em Push-Relabel}; (ii) Hochbaum's {\em pseudoflow}; (iii) Boykov's and Kolmogorov's {\em augmenting paths}; and (iv) Goldberg's {\em partial augment-relabel}. Our results demonstrate that Hochbaum's {\em pseudoflow} algorithm is faster and utilizes less memory than the other algorithms on all problem instances investigated.
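For readers who want to experiment, a tiny s-t min-cut/max-flow instance via networkx, whose default flow routine is preflow-push, a push-relabel style algorithm like family (i) above; the graph itself is illustrative.

```python
import networkx as nx

# Tiny s-t instance with edge capacities.
G = nx.DiGraph()
G.add_weighted_edges_from(
    [("s", "a", 3), ("s", "b", 2), ("a", "b", 1), ("a", "t", 2), ("b", "t", 3)],
    weight="capacity",
)

cut_value, (source_side, sink_side) = nx.minimum_cut(G, "s", "t")
flow_value, _ = nx.maximum_flow(G, "s", "t")
print(cut_value, flow_value)  # 5 5 -- equal by max-flow/min-cut duality
```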
|
2109.04266
|
Daniel Bakkelund
|
Daniel Bakkelund
|
An objective function for order preserving hierarchical clustering
|
39 pages
| null | null | null |
cs.LG math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present an objective function for similarity based hierarchical clustering
of partially ordered data that preserves the partial order. That is, if $x \le
y$, and if $[x]$ and $[y]$ are the respective clusters of $x$ and $y$, then
there is an order relation $\le'$ on the clusters for which $[x] \le' [y]$. The
theory distinguishes itself from existing theories for clustering of ordered
data in that the order relation and the similarity are combined into a
bi-objective optimisation problem to obtain a hierarchical clustering seeking
to satisfy both. In particular, the order relation is weighted in the range
$[0,1]$, and if the similarity and the order relation are not aligned, then
order preservation may have to yield in favor of clustering. Finding an optimal
solution is NP-hard, so we provide a polynomial time approximation algorithm,
with a relative performance guarantee of $O\!\left(\log^{3/2} \!\!\, n
\right)$, based on successive applications of directed sparsest cut. We provide
a demonstration on a benchmark dataset, showing that our method outperforms
existing methods for order preserving hierarchical clustering by a significant
margin. The theory is an extension of the Dasgupta cost function for divisive
hierarchical clustering.
|
[
{
"created": "Thu, 9 Sep 2021 13:35:01 GMT",
"version": "v1"
},
{
"created": "Fri, 31 Dec 2021 13:48:11 GMT",
"version": "v2"
},
{
"created": "Sun, 1 May 2022 07:32:39 GMT",
"version": "v3"
}
] |
2022-05-03
|
[
[
"Bakkelund",
"Daniel",
""
]
] |
We present an objective function for similarity based hierarchical clustering of partially ordered data that preserves the partial order. That is, if $x \le y$, and if $[x]$ and $[y]$ are the respective clusters of $x$ and $y$, then there is an order relation $\le'$ on the clusters for which $[x] \le' [y]$. The theory distinguishes itself from existing theories for clustering of ordered data in that the order relation and the similarity are combined into a bi-objective optimisation problem to obtain a hierarchical clustering seeking to satisfy both. In particular, the order relation is weighted in the range $[0,1]$, and if the similarity and the order relation are not aligned, then order preservation may have to yield in favor of clustering. Finding an optimal solution is NP-hard, so we provide a polynomial time approximation algorithm, with a relative performance guarantee of $O\!\left(\log^{3/2} \!\!\, n \right)$, based on successive applications of directed sparsest cut. We provide a demonstration on a benchmark dataset, showing that our method outperforms existing methods for order preserving hierarchical clustering by a significant margin. The theory is an extension of the Dasgupta cost function for divisive hierarchical clustering.
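For background, the Dasgupta objective that the paper extends is given below; the order-preservation term and the bi-objective weighting are the paper's own contribution and are not reproduced here.

```latex
% Dasgupta cost of a hierarchy T over a similarity graph G = (V, E, w),
% where T[i \vee j] is the subtree rooted at the least common ancestor
% of leaves i and j:
\mathrm{cost}(T) \;=\; \sum_{\{i,j\} \in E} w_{ij}\,
    \bigl|\mathrm{leaves}\bigl(T[i \vee j]\bigr)\bigr|
```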
|
cs/0502073
|
Maxime Crochemore
|
Maxime Crochemore (IGM), Jacques D\'esarm\'enien (IGM), Dominique
Perrin (IGM)
|
A note on the Burrows-Wheeler transformation
|
2004
| null | null |
CDP04tcs
|
cs.DS
| null |
We relate the Burrows-Wheeler transformation to a result in combinatorics
on words known as the Gessel-Reutenauer transformation.
|
[
{
"created": "Thu, 17 Feb 2005 07:06:28 GMT",
"version": "v1"
}
] |
2016-08-16
|
[
[
"Crochemore",
"Maxime",
"",
"IGM"
],
[
"Désarménien",
"Jacques",
"",
"IGM"
],
[
"Perrin",
"Dominique",
"",
"IGM"
]
] |
We relate the Burrows-Wheeler transformation to a result in combinatorics on words known as the Gessel-Reutenauer transformation.
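For readers unfamiliar with the transform, a compact Python sketch of the BWT and its inverse, using the naive sorted-rotations construction (fine for illustration, not for large inputs):

```python
def bwt(s, sentinel="$"):
    """Burrows-Wheeler transform: last column of the sorted rotations of s."""
    s += sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

def inverse_bwt(last, sentinel="$"):
    """Invert by repeatedly prepending the last column and re-sorting."""
    table = [""] * len(last)
    for _ in range(len(last)):
        table = sorted(last[i] + table[i] for i in range(len(last)))
    return next(row for row in table if row.endswith(sentinel))[:-1]

print(bwt("banana"))                        # annb$aa
assert inverse_bwt(bwt("banana")) == "banana"
```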
|
2204.06299
|
Sherzod Hakimov
|
Sherzod Hakimov and Gullal S. Cheema and Ralph Ewerth
|
TIB-VA at SemEval-2022 Task 5: A Multimodal Architecture for the
Detection and Classification of Misogynous Memes
|
Accepted for publication at SemEval-2022 Workshop, Task 5: MAMI -
Multimedia Automatic Misogyny Identification co-located with NAACL 2022
| null | null | null |
cs.CL cs.AI cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The detection of offensive, hateful content on social media is a challenging
problem that affects many online users on a daily basis. Hateful content is
often used to target a group of people based on ethnicity, gender, religion and
other factors. The hate or contempt toward women has been increasing on social
platforms. Misogynous content detection is especially challenging when textual
and visual modalities are combined to form a single context, e.g., an overlay
text embedded on top of an image, also known as a meme. In this paper, we present
a multimodal architecture that combines textual and visual features in order to
detect misogynous meme content. The proposed architecture is evaluated in the
SemEval-2022 Task 5: MAMI - Multimedia Automatic Misogyny Identification
challenge under the team name TIB-VA. Our solution obtained the best result in
Task B, where the challenge is to classify whether a given document is
misogynous and further identify the main sub-classes of shaming, stereotype,
objectification, and violence.
|
[
{
"created": "Wed, 13 Apr 2022 11:03:21 GMT",
"version": "v1"
}
] |
2022-04-14
|
[
[
"Hakimov",
"Sherzod",
""
],
[
"Cheema",
"Gullal S.",
""
],
[
"Ewerth",
"Ralph",
""
]
] |
The detection of offensive, hateful content on social media is a challenging problem that affects many online users on a daily basis. Hateful content is often used to target a group of people based on ethnicity, gender, religion and other factors. The hate or contempt toward women has been increasing on social platforms. Misogynous content detection is especially challenging when textual and visual modalities are combined to form a single context, e.g., an overlay text embedded on top of an image, also known as a meme. In this paper, we present a multimodal architecture that combines textual and visual features in order to detect misogynous meme content. The proposed architecture is evaluated in the SemEval-2022 Task 5: MAMI - Multimedia Automatic Misogyny Identification challenge under the team name TIB-VA. Our solution obtained the best result in Task B, where the challenge is to classify whether a given document is misogynous and further identify the main sub-classes of shaming, stereotype, objectification, and violence.
|
2403.09481
|
Paloma Rabaey
|
Paloma Rabaey, Johannes Deleu, Stefan Heytens, Thomas Demeester
|
Clinical Reasoning over Tabular Data and Text with Bayesian Networks
|
AI in Medicine 2024
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Bayesian networks are well-suited for clinical reasoning on tabular data, but
are less compatible with natural language data, for which neural networks
provide a successful framework. This paper compares and discusses strategies to
augment Bayesian networks with neural text representations, both in a
generative and discriminative manner. This is illustrated with simulation
results for a primary care use case (diagnosis of pneumonia) and discussed in a
broader clinical context.
|
[
{
"created": "Thu, 14 Mar 2024 15:25:23 GMT",
"version": "v1"
},
{
"created": "Tue, 19 Mar 2024 16:48:27 GMT",
"version": "v2"
},
{
"created": "Thu, 23 May 2024 13:41:19 GMT",
"version": "v3"
}
] |
2024-05-24
|
[
[
"Rabaey",
"Paloma",
""
],
[
"Deleu",
"Johannes",
""
],
[
"Heytens",
"Stefan",
""
],
[
"Demeester",
"Thomas",
""
]
] |
Bayesian networks are well-suited for clinical reasoning on tabular data, but are less compatible with natural language data, for which neural networks provide a successful framework. This paper compares and discusses strategies to augment Bayesian networks with neural text representations, both in a generative and discriminative manner. This is illustrated with simulation results for a primary care use case (diagnosis of pneumonia) and discussed in a broader clinical context.
|
1902.06961
|
Jason R.C. Nurse Dr
|
Mariam Nouh and Jason R.C. Nurse and Helena Webb and Michael Goldsmith
|
Cybercrime Investigators are Users Too! Understanding the
Socio-Technical Challenges Faced by Law Enforcement
|
11 pages, Proceedings of the 2019 Workshop on Usable Security (USEC)
at Network and Distributed System Security Symposium (NDSS)
| null |
10.14722/usec.2019.23032
| null |
cs.HC cs.CR cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cybercrime investigators face numerous challenges when policing online
crimes. Firstly, the methods and processes they use when dealing with
traditional crimes do not necessarily apply in the cyber-world. Additionally,
cyber criminals are usually technologically-aware and constantly adapting and
developing new tools that allow them to stay ahead of law enforcement
investigations. In order to provide adequate support for cybercrime
investigators, there needs to be a better understanding of the challenges they
face at both technical and socio-technical levels. In this paper, we
investigate this problem through an analysis of current practices and workflows
of investigators. We use interviews with experts from government and private
sectors who investigate cybercrimes as our main data gathering process. From an
analysis of the collected data, we identify several outstanding challenges
faced by investigators. These pertain to practical, technical, and social
issues such as system availability, usability, and computer-supported
collaborative work. Importantly, we use our findings to highlight research
areas where user-centric workflows and tools are desirable. We also define a
set of recommendations that can aid in providing a better foundation for future
research in the field and allow more effective combating of cybercrimes.
|
[
{
"created": "Tue, 19 Feb 2019 09:25:10 GMT",
"version": "v1"
}
] |
2019-02-20
|
[
[
"Nouh",
"Mariam",
""
],
[
"Nurse",
"Jason R. C.",
""
],
[
"Webb",
"Helena",
""
],
[
"Goldsmith",
"Michael",
""
]
] |
Cybercrime investigators face numerous challenges when policing online crimes. Firstly, the methods and processes they use when dealing with traditional crimes do not necessarily apply in the cyber-world. Additionally, cyber criminals are usually technologically-aware and constantly adapting and developing new tools that allow them to stay ahead of law enforcement investigations. In order to provide adequate support for cybercrime investigators, there needs to be a better understanding of the challenges they face at both technical and socio-technical levels. In this paper, we investigate this problem through an analysis of current practices and workflows of investigators. We use interviews with experts from government and private sectors who investigate cybercrimes as our main data gathering process. From an analysis of the collected data, we identify several outstanding challenges faced by investigators. These pertain to practical, technical, and social issues such as system availability, usability, and computer-supported collaborative work. Importantly, we use our findings to highlight research areas where user-centric workflows and tools are desirable. We also define a set of recommendations that can aid in providing a better foundation for future research in the field and allow more effective combating of cybercrimes.
|
2210.14377
|
Niharika S. D'Souza
|
Niharika S. D'Souza, Hongzhi Wang, Andrea Giovannini, Antonio
Foncubierta-Rodriguez, Kristen L. Beck, Orest Boyko, and Tanveer
Syeda-Mahmood
|
Fusing Modalities by Multiplexed Graph Neural Networks for Outcome
Prediction in Tuberculosis
|
Accepted into MICCAI 2022
| null |
10.1007/978-3-031-16449-1_28
| null |
cs.LG cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
In a complex disease such as tuberculosis, the evidence for the disease and
its evolution may be present in multiple modalities such as clinical, genomic,
or imaging data. Effective patient-tailored outcome prediction and therapeutic
guidance will require fusing evidence from these modalities. Such multimodal
fusion is difficult since the evidence for the disease may not be uniform
across all modalities, not all modality features may be relevant, or not all
modalities may be present for all patients. All these nuances make simple
methods of early, late, or intermediate fusion of features inadequate for
outcome prediction. In this paper, we present a novel fusion framework using
multiplexed graphs and derive a new graph neural network for learning from such
graphs. Specifically, the framework allows modalities to be represented through
their targeted encodings, and models their relationship explicitly via
multiplexed graphs derived from salient features in a combined latent space. We
present results that show that our proposed method outperforms state-of-the-art
methods of fusing modalities for multi-outcome prediction on a large
Tuberculosis (TB) dataset.
|
[
{
"created": "Tue, 25 Oct 2022 23:03:05 GMT",
"version": "v1"
}
] |
2022-10-27
|
[
[
"D'Souza",
"Niharika S.",
""
],
[
"Wang",
"Hongzhi",
""
],
[
"Giovannini",
"Andrea",
""
],
[
"Foncubierta-Rodriguez",
"Antonio",
""
],
[
"Beck",
"Kristen L.",
""
],
[
"Boyko",
"Orest",
""
],
[
"Syeda-Mahmood",
"Tanveer",
""
]
] |
In a complex disease such as tuberculosis, the evidence for the disease and its evolution may be present in multiple modalities such as clinical, genomic, or imaging data. Effective patient-tailored outcome prediction and therapeutic guidance will require fusing evidence from these modalities. Such multimodal fusion is difficult since the evidence for the disease may not be uniform across all modalities, not all modality features may be relevant, or not all modalities may be present for all patients. All these nuances make simple methods of early, late, or intermediate fusion of features inadequate for outcome prediction. In this paper, we present a novel fusion framework using multiplexed graphs and derive a new graph neural network for learning from such graphs. Specifically, the framework allows modalities to be represented through their targeted encodings, and models their relationship explicitly via multiplexed graphs derived from salient features in a combined latent space. We present results that show that our proposed method outperforms state-of-the-art methods of fusing modalities for multi-outcome prediction on a large Tuberculosis (TB) dataset.
|
cs/0506043
|
Kapil Bhattad
|
Krishna R. Narayanan and Kapil Bhattad
|
A Decision Feedback Based Scheme for Slepian-Wolf Coding of sources with
Hidden Markov Correlation
|
Submitted to IEEE Comm. Letters
| null |
10.1109/LCOMM.2006.1633329
| null |
cs.IT math.IT
| null |
We consider the problem of compression of two memoryless binary sources, the
correlation between which is defined by a Hidden Markov Model (HMM). We propose
a Decision Feedback (DF) based scheme which, when used with low-density
parity-check codes, results in compression close to the Slepian-Wolf limits.
|
[
{
"created": "Mon, 13 Jun 2005 13:17:21 GMT",
"version": "v1"
}
] |
2016-11-17
|
[
[
"Narayanan",
"Krishna R.",
""
],
[
"Bhattad",
"Kapil",
""
]
] |
We consider the problem of compression of two memoryless binary sources, the correlation between which is defined by a Hidden Markov Model (HMM). We propose a Decision Feedback (DF) based scheme which, when used with low-density parity-check codes, results in compression close to the Slepian-Wolf limits.
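For context, the limits referred to are the boundary of the standard Slepian-Wolf rate region for separate encoding of correlated sources with joint decoding:

```latex
% Slepian--Wolf achievable rate region for separately encoding correlated
% sources X and Y with joint decoding:
\begin{aligned}
R_X &\ge H(X \mid Y), \\
R_Y &\ge H(Y \mid X), \\
R_X + R_Y &\ge H(X, Y).
\end{aligned}
```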
|
1904.10380
|
Feng Huang
|
Feng Huang, Peter Balazs
|
Harmonic-aligned Frame Mask Based on Non-stationary Gabor Transform with
Application to Content-dependent Speaker Comparison
|
Interspeech2019
| null | null | null |
cs.SD eess.AS math.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a harmonic-aligned frame mask for speech signals using
non-stationary Gabor transform (NSGT). A frame mask operates on the transfer
coefficients of a signal and consequently converts the signal into a
counterpart signal. It depicts the difference between the two signals. In
preceding studies, frame masks based on regular Gabor transform were applied to
single-note instrumental sound analysis. This study extends the frame mask
approach to speech signals. For voiced speech, the fundamental frequency is
usually changing consecutively over time. We employ NSGT with pitch-dependent
and therefore time-varying frequency resolution to attain harmonic alignment in
the transform domain and hence yield harmonic-aligned frame masks for speech
signals. We propose to apply the harmonic-aligned frame mask to
content-dependent speaker comparison. Frame masks, computed from voiced signals
of the same vowel but from different speakers, were utilized as similarity
measures to compare and distinguish the speaker identities (SID). Results
obtained with deep neural networks demonstrate that the proposed frame mask is
valid in representing speaker characteristics and shows potential for SID
applications in limited-data scenarios.
|
[
{
"created": "Tue, 23 Apr 2019 15:21:09 GMT",
"version": "v1"
}
] |
2019-04-24
|
[
[
"Huang",
"Feng",
""
],
[
"Balazs",
"Peter",
""
]
] |
We propose a harmonic-aligned frame mask for speech signals using non-stationary Gabor transform (NSGT). A frame mask operates on the transfer coefficients of a signal and consequently converts the signal into a counterpart signal. It depicts the difference between the two signals. In preceding studies, frame masks based on regular Gabor transform were applied to single-note instrumental sound analysis. This study extends the frame mask approach to speech signals. For voiced speech, the fundamental frequency is usually changing consecutively over time. We employ NSGT with pitch-dependent and therefore time-varying frequency resolution to attain harmonic alignment in the transform domain and hence yield harmonic-aligned frame masks for speech signals. We propose to apply the harmonic-aligned frame mask to content-dependent speaker comparison. Frame masks, computed from voiced signals of the same vowel but from different speakers, were utilized as similarity measures to compare and distinguish the speaker identities (SID). Results obtained with deep neural networks demonstrate that the proposed frame mask is valid in representing speaker characteristics and shows potential for SID applications in limited-data scenarios.
|
2312.08584
|
Ahmed Esmin Ph.D.
|
Thiago Bellotti Furtado, Ahmed Esmin
|
Hybrid Content Dynamic Recommendation System Based in Adapted Tags and
Applied to Digital Library
|
35 pages, 8 figures
| null | null | null |
cs.IR cs.DL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The technological evolution of the library in the academic environment has
made a great deal of information and documents available for access, but these
systems do not always have mechanisms to search for the information relevant
to the user in an integrated way. To alleviate this problem, we propose a
recommendation system that generates the user profile through tags that are
reshaped over time. To trace the user profile, the system uses information
from the user's lending history stored in the library database, and it
collects their opinions (feedback) through a list of recommendations. These
data are integrated with the document base of the institutional repository.
Thus, the recommendation system assists users in identifying relevant items
and makes suggestions for content in an integrated environment that contains
institutional repository documents and the university library database. The
proposed recommendation system uses a hybrid approach and is applied in an
academic environment with the participation of users.
|
[
{
"created": "Thu, 14 Dec 2023 01:11:41 GMT",
"version": "v1"
}
] |
2023-12-15
|
[
[
"Furtado",
"Thiago Bellotti",
""
],
[
"Esmin",
"Ahmed",
""
]
] |
The technological evolution of the library in the academic environment has made a great deal of information and documents available for access, but these systems do not always have mechanisms to search for the information relevant to the user in an integrated way. To alleviate this problem, we propose a recommendation system that generates the user profile through tags that are reshaped over time. To trace the user profile, the system uses information from the user's lending history stored in the library database, and it collects their opinions (feedback) through a list of recommendations. These data are integrated with the document base of the institutional repository. Thus, the recommendation system assists users in identifying relevant items and makes suggestions for content in an integrated environment that contains institutional repository documents and the university library database. The proposed recommendation system uses a hybrid approach and is applied in an academic environment with the participation of users.
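A minimal sketch of the tag-profile idea, assuming TF-IDF weighting and cosine similarity; the tag strings and the rule "profile = tags from the lending history" are illustrative assumptions, not the paper's exact scheme.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Items are described by their tags; the user profile is built from the tags
# of previously borrowed items (all strings below are made up).
items = {
    "doc1": "machine learning data mining",
    "doc2": "library science cataloguing",
    "doc3": "data mining recommender systems",
}
user_profile_tags = "machine learning recommender systems"  # from lending history

vec = TfidfVectorizer()
item_matrix = vec.fit_transform(items.values())
profile_vec = vec.transform([user_profile_tags])

scores = cosine_similarity(profile_vec, item_matrix).ravel()
ranking = sorted(zip(items, scores), key=lambda kv: -kv[1])
print(ranking)  # feedback on these suggestions would then reshape the profile
```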
|
2405.03838
|
Eishi Arima
|
Eishi Arima, Minjoon Kang, Issa Saba, Josef Weidendorfer, Carsten
Trinitis, Martin Schulz
|
Optimizing Hardware Resource Partitioning and Job Allocations on Modern
GPUs under Power Caps
| null |
ICPP Workshops '22: Workshop Proceedings of the 51st International
Conference on Parallel Processing, August 2022, Article No.: 9
|
10.1145/3547276.3548630
| null |
cs.DC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
CPU-GPU heterogeneous systems are now commonly used in HPC (High-Performance
Computing). However, improving the utilization and energy-efficiency of such
systems is still one of the most critical issues. As one single program
typically cannot fully utilize all resources within a node/chip, co-scheduling
(or co-locating) multiple programs with complementary resource requirements is
a promising solution. Meanwhile, as power consumption has become the
first-class design constraint for HPC systems, such co-scheduling techniques
should be well-tailored for power-constrained environments. To this end, the
industry recently started supporting hardware-level resource partitioning
features on modern GPUs for realizing efficient co-scheduling, which can
operate with existing power capping features. For example, NVIDIA's MIG
(Multi-Instance GPU) partitions one single GPU into multiple instances at the
granularity of a GPC (Graphics Processing Cluster). In this paper, we
explicitly target the combination of hardware-level GPU partitioning features
and power capping for power-constrained HPC systems. We provide a systematic
methodology to optimize the combination of chip partitioning, job allocations,
as well as power capping based on our scalability/interference modeling while
taking a variety of aspects into account, such as compute/memory intensity and
utilization in heterogeneous computational resources (e.g., Tensor Cores). The
experimental results indicate that our approach is successful in selecting a
near-optimal combination across multiple different workloads.
|
[
{
"created": "Mon, 6 May 2024 20:40:38 GMT",
"version": "v1"
}
] |
2024-05-08
|
[
[
"Arima",
"Eishi",
""
],
[
"Kang",
"Minjoon",
""
],
[
"Saba",
"Issa",
""
],
[
"Weidendorfer",
"Josef",
""
],
[
"Trinitis",
"Carsten",
""
],
[
"Schulz",
"Martin",
""
]
] |
CPU-GPU heterogeneous systems are now commonly used in HPC (High-Performance Computing). However, improving the utilization and energy-efficiency of such systems is still one of the most critical issues. As one single program typically cannot fully utilize all resources within a node/chip, co-scheduling (or co-locating) multiple programs with complementary resource requirements is a promising solution. Meanwhile, as power consumption has become the first-class design constraint for HPC systems, such co-scheduling techniques should be well-tailored for power-constrained environments. To this end, the industry recently started supporting hardware-level resource partitioning features on modern GPUs for realizing efficient co-scheduling, which can operate with existing power capping features. For example, NVIDIA's MIG (Multi-Instance GPU) partitions one single GPU into multiple instances at the granularity of a GPC (Graphics Processing Cluster). In this paper, we explicitly target the combination of hardware-level GPU partitioning features and power capping for power-constrained HPC systems. We provide a systematic methodology to optimize the combination of chip partitioning, job allocations, as well as power capping based on our scalability/interference modeling while taking a variety of aspects into account, such as compute/memory intensity and utilization in heterogeneous computational resources (e.g., Tensor Cores). The experimental results indicate that our approach is successful in selecting a near-optimal combination across multiple different workloads.
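A schematic sketch of the selection step: exhaustively scoring every (partition, power cap) combination for a co-scheduled job pair under a modeled throughput function. The throughput model below is a made-up placeholder; the paper instead fits scalability/interference models from profiling data.

```python
# Exhaustively score every (partition, power cap) combination for two
# co-scheduled jobs. The model assumes diminishing returns per job and
# a sublinear power-cap penalty -- both placeholder assumptions.
def modeled_throughput(gpcs_a, gpcs_b, power_cap_w):
    scale = min((power_cap_w / 300.0) ** 0.5, 1.0)
    return (gpcs_a ** 0.8 + gpcs_b ** 0.6) * scale

TOTAL_GPCS = 7  # e.g., one A100 exposes up to 7 GPC-granular MIG slices
POWER_CAPS_W = [150, 200, 250, 300]

best = max(
    ((a, TOTAL_GPCS - a, p) for a in range(1, TOTAL_GPCS) for p in POWER_CAPS_W),
    key=lambda c: modeled_throughput(c[0], c[1], c[2]) / c[2],  # perf per watt
)
print("chosen (gpcs_a, gpcs_b, cap_w):", best)
```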
|
2206.08856
|
Ayush Gupta
|
Ayush Gupta, Ahmed Baza, Ekaterina Dorzhieva, Mert Alper, Mariia
Makarova, Stepan Perminov, Aleksey Fedoseev, Dzmitry Tsetserukou
|
SwarmHive: Heterogeneous Swarm of Drones for Robust Autonomous Landing
on Moving Robot
|
Accepted paper at IEEE Vehicular Technology Conference 2022 (IEEE VTC
2022), IEEE copyright
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The paper focuses on a heterogeneous swarm of drones achieving a dynamic
landing of a formation on a moving robot, a challenging task that has not
previously been achieved. The key idea is that, instead of equipping each
agent of the swarm with computer vision, which considerably increases the
payload and shortens the flight time, we propose to install only one camera,
on the leader drone. The follower drones receive commands from the leader UAV
and maintain collision-free trajectories with an artificial potential field.
The experimental results revealed a high accuracy of the swarm landing on a
static mobile platform (RMSE of 4.48 cm). The RMSE of the swarm landing on the
mobile platform moving with maximum velocities of 1.0 m/s and 1.5 m/s equals
8.76 cm and 8.98 cm, respectively. The proposed SwarmHive technology will
allow time-saving landing of the swarm for subsequent drone recharging. This
will make it possible to achieve self-sustainable operation of a multi-agent
robotic system for scenarios such as rescue operations, inspection and
maintenance, autonomous warehouse inventory, and cargo delivery.
|
[
{
"created": "Fri, 17 Jun 2022 15:56:29 GMT",
"version": "v1"
}
] |
2022-06-20
|
[
[
"Gupta",
"Ayush",
""
],
[
"Baza",
"Ahmed",
""
],
[
"Dorzhieva",
"Ekaterina",
""
],
[
"Alper",
"Mert",
""
],
[
"Makarova",
"Mariia",
""
],
[
"Perminov",
"Stepan",
""
],
[
"Fedoseev",
"Aleksey",
""
],
[
"Tsetserukou",
"Dzmitry",
""
]
] |
The paper focuses on a heterogeneous swarm of drones achieving a dynamic landing of a formation on a moving robot, a challenging task that has not previously been achieved. The key idea is that, instead of equipping each agent of the swarm with computer vision, which considerably increases the payload and shortens the flight time, we propose to install only one camera, on the leader drone. The follower drones receive commands from the leader UAV and maintain collision-free trajectories with an artificial potential field. The experimental results revealed a high accuracy of the swarm landing on a static mobile platform (RMSE of 4.48 cm). The RMSE of the swarm landing on the mobile platform moving with maximum velocities of 1.0 m/s and 1.5 m/s equals 8.76 cm and 8.98 cm, respectively. The proposed SwarmHive technology will allow time-saving landing of the swarm for subsequent drone recharging. This will make it possible to achieve self-sustainable operation of a multi-agent robotic system for scenarios such as rescue operations, inspection and maintenance, autonomous warehouse inventory, and cargo delivery.
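A minimal sketch of the follower-side collision avoidance, assuming a textbook attractive/repulsive artificial potential field; the gains, radii, and positions are illustrative, not the paper's tuned values.

```python
import numpy as np

def potential_field_step(pos, goal, obstacles, k_att=1.0, k_rep=0.5,
                         influence=0.5, dt=0.05):
    """One Euler step of a textbook attractive/repulsive potential field."""
    force = k_att * (goal - pos)                 # pull toward the goal
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 1e-6 < d < influence:                 # repel inside the influence radius
            force += k_rep * (1.0 / d - 1.0 / influence) * diff / d ** 3
    return pos + dt * force

# A follower flies to its formation slot while avoiding a neighbouring drone.
pos = np.array([0.0, 0.0])
goal = np.array([2.0, 0.0])                      # leader position + offset
neighbour = np.array([1.0, 0.3])
for _ in range(400):
    pos = potential_field_step(pos, goal, [neighbour])
print(np.round(pos, 2))  # ends near the goal, having deflected around the neighbour
```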
|
2312.09138
|
Liyuan Zhu
|
Liyuan Zhu and Shengyu Huang and Konrad Schindler and Iro Armeni
|
Living Scenes: Multi-object Relocalization and Reconstruction in
Changing 3D Environments
|
CVPR 2024 camera-ready
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Research into dynamic 3D scene understanding has primarily focused on
short-term change tracking from dense observations, while little attention has
been paid to long-term changes with sparse observations. We address this gap
with MoRE, a novel approach for multi-object relocalization and reconstruction
in evolving environments. We view these environments as "living scenes" and
consider the problem of transforming scans taken at different points in time
into a 3D reconstruction of the object instances, whose accuracy and
completeness increase over time. At the core of our method lies an
SE(3)-equivariant representation in a single encoder-decoder network, trained
on synthetic data. This representation enables us to seamlessly tackle instance
matching, registration, and reconstruction. We also introduce a joint
optimization algorithm that facilitates the accumulation of point clouds
originating from the same instance across multiple scans taken at different
points in time. We validate our method on synthetic and real-world data and
demonstrate state-of-the-art performance in both end-to-end performance and
individual subtasks.
|
[
{
"created": "Thu, 14 Dec 2023 17:09:57 GMT",
"version": "v1"
},
{
"created": "Tue, 26 Mar 2024 18:16:26 GMT",
"version": "v2"
}
] |
2024-03-28
|
[
[
"Zhu",
"Liyuan",
""
],
[
"Huang",
"Shengyu",
""
],
[
"Schindler",
"Konrad",
""
],
[
"Armeni",
"Iro",
""
]
] |
Research into dynamic 3D scene understanding has primarily focused on short-term change tracking from dense observations, while little attention has been paid to long-term changes with sparse observations. We address this gap with MoRE, a novel approach for multi-object relocalization and reconstruction in evolving environments. We view these environments as "living scenes" and consider the problem of transforming scans taken at different points in time into a 3D reconstruction of the object instances, whose accuracy and completeness increase over time. At the core of our method lies an SE(3)-equivariant representation in a single encoder-decoder network, trained on synthetic data. This representation enables us to seamlessly tackle instance matching, registration, and reconstruction. We also introduce a joint optimization algorithm that facilitates the accumulation of point clouds originating from the same instance across multiple scans taken at different points in time. We validate our method on synthetic and real-world data and demonstrate state-of-the-art performance in both end-to-end performance and individual subtasks.
|
1612.01375
|
Paolo Massioni
|
Paolo Massioni and G\'erard Scorletti
|
Consensus analysis of large-scale nonlinear homogeneous multi-agent
formations with polynomial dynamics
|
6 pages, 5 figures
| null | null | null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Drawing inspiration from the theory of linear "decomposable systems", we
provide a method, based on linear matrix inequalities (LMIs), which makes it
possible to prove the convergence (or consensus) of a set of interacting agents
with polynomial dynamics. We also show that the use of a generalised version of
the famous Kalman-Yakubovich-Popov lemma allows the development of an LMI test
whose size does not depend on the number of agents. The method is validated
experimentally on two academic examples.
|
[
{
"created": "Mon, 5 Dec 2016 14:40:20 GMT",
"version": "v1"
}
] |
2016-12-06
|
[
[
"Massioni",
"Paolo",
""
],
[
"Scorletti",
"Gérard",
""
]
] |
Drawing inspiration from the theory of linear "decomposable systems", we provide a method, based on linear matrix inequalities (LMIs), which makes it possible to prove the convergence (or consensus) of a set of interacting agents with polynomial dynamics. We also show that the use of a generalised version of the famous Kalman-Yakubovich-Popov lemma allows the development of an LMI test whose size does not depend on the number of agents. The method is validated experimentally on two academic examples.
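Not the paper's decomposable-systems test, but a minimal illustration of the kind of LMI feasibility problem involved: certifying stability of $\dot{x} = Ax$ via a Lyapunov inequality, here with cvxpy and the SCS solver (an assumption about tooling).

```python
import cvxpy as cp
import numpy as np

A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])        # a stable test matrix

n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-3
constraints = [P >> eps * np.eye(n),                   # P positive definite
               A.T @ P + P @ A << -eps * np.eye(n)]    # Lyapunov LMI
problem = cp.Problem(cp.Minimize(0), constraints)      # pure feasibility test
problem.solve(solver=cp.SCS)
print(problem.status)  # "optimal" means feasible, certifying stability
```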
|
2108.02451
|
Lei Zhu
|
Lei Zhu, Qi She, Duo Li, Yanye Lu, Xuejing Kang, Jie Hu, Changhu Wang
|
Unifying Nonlocal Blocks for Neural Networks
|
Accept by ICCV 2021 Conference
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The nonlocal-based blocks are designed for capturing long-range
spatial-temporal dependencies in computer vision tasks. Although having shown
excellent performance, they still lack the mechanism to encode the rich,
structured information among elements in an image or video. In this paper, to
theoretically analyze the property of these nonlocal-based blocks, we provide a
new perspective to interpret them, where we view them as a set of graph filters
generated on a fully-connected graph. Specifically, when choosing the Chebyshev
graph filter, a unified formulation can be derived for explaining and analyzing
the existing nonlocal-based blocks (e.g., nonlocal block, nonlocal stage,
double attention block). Furthermore, by considering the spectral properties,
we propose an efficient and robust spectral nonlocal block, which can be more
robust and flexible to catch long-range dependencies when inserted into deep
neural networks than the existing nonlocal blocks. Experimental results
demonstrate the clear-cut improvements and practical applicability of our
method on image classification, action recognition, semantic segmentation, and
person re-identification tasks.
|
[
{
"created": "Thu, 5 Aug 2021 08:34:12 GMT",
"version": "v1"
},
{
"created": "Fri, 13 Aug 2021 07:26:15 GMT",
"version": "v2"
},
{
"created": "Tue, 17 Aug 2021 07:18:59 GMT",
"version": "v3"
}
] |
2021-08-18
|
[
[
"Zhu",
"Lei",
""
],
[
"She",
"Qi",
""
],
[
"Li",
"Duo",
""
],
[
"Lu",
"Yanye",
""
],
[
"Kang",
"Xuejing",
""
],
[
"Hu",
"Jie",
""
],
[
"Wang",
"Changhu",
""
]
] |
The nonlocal-based blocks are designed for capturing long-range spatial-temporal dependencies in computer vision tasks. Although having shown excellent performance, they still lack the mechanism to encode the rich, structured information among elements in an image or video. In this paper, to theoretically analyze the property of these nonlocal-based blocks, we provide a new perspective to interpret them, where we view them as a set of graph filters generated on a fully-connected graph. Specifically, when choosing the Chebyshev graph filter, a unified formulation can be derived for explaining and analyzing the existing nonlocal-based blocks (e.g., nonlocal block, nonlocal stage, double attention block). Furthermore, by considering the spectral properties, we propose an efficient and robust spectral nonlocal block, which can be more robust and flexible to catch long-range dependencies when inserted into deep neural networks than the existing nonlocal blocks. Experimental results demonstrate the clear-cut improvements and practical applicability of our method on image classification, action recognition, semantic segmentation, and person re-identification tasks.
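A minimal numpy sketch of the classic dot-product nonlocal block which, under the graph-filter view above, applies the affinity matrix of a fully-connected graph over positions to the node features; the output projection is omitted for brevity and the shapes are illustrative.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def nonlocal_block(X, W_theta, W_phi, W_g):
    """Dot-product nonlocal block over N flattened positions.
    A is the affinity matrix of a fully-connected graph on the positions,
    and A @ (X @ W_g) filters the node features; W_z is omitted."""
    A = softmax((X @ W_theta) @ (X @ W_phi).T)  # (N, N) pairwise affinities
    return X + A @ (X @ W_g)                    # residual connection

rng = np.random.default_rng(0)
N, C = 16, 8                                    # 16 positions, 8 channels
X = rng.normal(size=(N, C))
W = [0.1 * rng.normal(size=(C, C)) for _ in range(3)]
print(nonlocal_block(X, *W).shape)              # (16, 8)
```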
|
2102.11625
|
Jukka Ruohonen
|
Jukka Ruohonen
|
Assessing the Readability of Policy Documents on the Digital Single
Market of the European Union
|
Proceedings of the Eighth International Conference on eDemocracy &
eGovernment (ICEDEG 2021), Quito (online), IEEE, pp. 205-209
| null |
10.1109/ICEDEG52154.2021.9530996
| null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Today, literacy skills are necessary. Engineering and other technical
professions are no exception to this requirement. Traditionally, technical
reading and writing have been framed with a limited scope, covering
documentation, specifications, standards, and related text types. Nowadays,
however, the scope also covers other text types, including legal, policy, and
related documents. Given this motivation, this paper evaluates the readability
of 201 legislative acts and related policy documents in the European Union
(EU). The digital single market (DSM) provides the context. Five classical
readability indices provide the methods; these are quantitative measures of a
text's readability. The empirical results indicate that (i) a Ph.D.-level
education is generally required to comprehend the DSM laws and policy
documents; (ii) the results vary across the five indices used; and (iii)
readability has slightly improved over time.
|
[
{
"created": "Tue, 23 Feb 2021 11:01:11 GMT",
"version": "v1"
},
{
"created": "Wed, 15 Sep 2021 13:33:10 GMT",
"version": "v2"
}
] |
2021-09-16
|
[
[
"Ruohonen",
"Jukka",
""
]
] |
Today, literacy skills are necessary. Engineering and other technical professions are no exception to this requirement. Traditionally, technical reading and writing have been framed with a limited scope, covering documentation, specifications, standards, and related text types. Nowadays, however, the scope also covers other text types, including legal, policy, and related documents. Given this motivation, this paper evaluates the readability of 201 legislative acts and related policy documents in the European Union (EU). The digital single market (DSM) provides the context. Five classical readability indices provide the methods; these are quantitative measures of a text's readability. The empirical results indicate that (i) a Ph.D.-level education is generally required to comprehend the DSM laws and policy documents; (ii) the results vary across the five indices used; and (iii) readability has slightly improved over time.
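As a concrete instance of such an index, the Flesch Reading Ease formula; the counts in the example are made up, not figures from the study.

```python
def flesch_reading_ease(words, sentences, syllables):
    """Flesch Reading Ease: higher is easier; scores of roughly 0-30 mean
    'very difficult' (graduate-level) text."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

# Made-up counts for a dense legal paragraph.
print(flesch_reading_ease(words=120, sentences=3, syllables=228))  # ~5.5
```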
|
1210.5443
|
Robbert Van Renesse
|
Robbert van Renesse and H{\aa}vard Johansen and Nihar Naigaonkar and
Dag Johansen
|
Secure Abstraction with Code Capabilities
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose embedding executable code fragments in cryptographically protected
capabilities to enable flexible discretionary access control in cloud-like
computing infrastructures. We are developing this as part of a sports analytics
application that runs on a federation of public and enterprise clouds. The
capability mechanism is implemented completely in user space. Using a novel
combination of X.509 certificates and Javascript code, the capabilities support
restricted delegation, confinement, revocation, and rights amplification for
secure abstraction.
|
[
{
"created": "Fri, 19 Oct 2012 15:17:28 GMT",
"version": "v1"
}
] |
2012-10-22
|
[
[
"van Renesse",
"Robbert",
""
],
[
"Johansen",
"Håvard",
""
],
[
"Naigaonkar",
"Nihar",
""
],
[
"Johansen",
"Dag",
""
]
] |
We propose embedding executable code fragments in cryptographically protected capabilities to enable flexible discretionary access control in cloud-like computing infrastructures. We are developing this as part of a sports analytics application that runs on a federation of public and enterprise clouds. The capability mechanism is implemented completely in user space. Using a novel combination of X.509 certificates and Javascript code, the capabilities support restricted delegation, confinement, revocation, and rights amplification for secure abstraction.
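To make the idea concrete, here is a minimal sketch of a code-carrying capability under a much-simplified setting: an HMAC over the capability body stands in for the paper's X.509-based protection, and the guard is an opaque Javascript fragment that the resource server would evaluate. All names and fields are illustrative.

```python
import hashlib
import hmac
import json

SECRET = b"issuer-key"  # stand-in for the X.509 certificate machinery

def issue_capability(resource, guard_js):
    # The guard is an executable code fragment shipped inside the capability.
    body = json.dumps({"resource": resource, "guard": guard_js}, sort_keys=True)
    tag = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "tag": tag}

def verify_capability(cap):
    # Any tampering with the resource name or the embedded code breaks the tag.
    expected = hmac.new(SECRET, cap["body"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cap["tag"])

cap = issue_capability("/videos/match42", "request.time < expiry")
print(verify_capability(cap))  # True
```

Restricted delegation then amounts to issuing a new capability whose guard composes the old one with additional checks.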
|
2105.00706
|
Feng Xia
|
Feng Xia, Jiaying Liu, Jing Ren, Wei Wang, Xiangjie Kong
|
Turing Number: How Far Are You to A. M. Turing Award?
|
8 pages, 3 figures
|
ACM SIGWEB Newsletter (2020)
|
10.1145/3427478.3427483
| null |
cs.SI cs.DL physics.data-an
|
http://creativecommons.org/licenses/by/4.0/
|
The ACM A.M. Turing Award is commonly acknowledged as the highest distinction
in the realm of computer science. Since the 1960s, it has been awarded to
computer scientists who made outstanding contributions. The significance of
this award is far-reaching for the laureates as well as their research teams.
However, unlike the Nobel Prize, which has been extensively investigated,
little research has been done to explore this most important award. To this
end, we propose the Turing Number (TN) index to measure how far a specific
scholar is from this award. Inspired by previous works on the Erdos Number and
the Bacon Number, this index is defined as the length of the shortest path
between a given scholar and any Turing Award Laureate. Experimental results
suggest that TN can reflect the closeness of collaboration between scholars
and Turing Award Laureates. Through correlation analysis between TN and
bibliometric- and network-level metrics, we demonstrate that TN has the
potential to reflect a scholar's academic influence and reputation.
|
[
{
"created": "Mon, 3 May 2021 09:28:36 GMT",
"version": "v1"
}
] |
2021-05-04
|
[
[
"Xia",
"Feng",
""
],
[
"Liu",
"Jiaying",
""
],
[
"Ren",
"Jing",
""
],
[
"Wang",
"Wei",
""
],
[
"Kong",
"Xiangjie",
""
]
] |
The ACM A.M. Turing Award is commonly acknowledged as the highest distinction in the realm of computer science. Since the 1960s, it has been awarded to computer scientists who made outstanding contributions. The significance of this award is far-reaching for the laureates as well as their research teams. However, unlike the Nobel Prize, which has been extensively investigated, little research has been done to explore this most important award. To this end, we propose the Turing Number (TN) index to measure how far a specific scholar is from this award. Inspired by previous works on the Erdos Number and the Bacon Number, this index is defined as the length of the shortest path between a given scholar and any Turing Award Laureate. Experimental results suggest that TN can reflect the closeness of collaboration between scholars and Turing Award Laureates. Through correlation analysis between TN and bibliometric- and network-level metrics, we demonstrate that TN has the potential to reflect a scholar's academic influence and reputation.
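Since TN is defined as a shortest path in the co-authorship graph, it can be computed with a standard breadth-first search; the sketch below uses networkx on a toy graph with made-up names (the real computation would run over a bibliographic co-authorship network).

```python
import networkx as nx

# Toy co-authorship graph: an edge means at least one joint paper.
G = nx.Graph()
G.add_edges_from([
    ("A. Scholar", "B. Advisor"),
    ("B. Advisor", "J. Laureate"),   # pretend Turing Award laureate
    ("A. Scholar", "C. Colleague"),
])
laureates = {"J. Laureate"}

def turing_number(graph, scholar, laureates):
    # TN = length of the shortest co-authorship path to any laureate.
    dists = nx.single_source_shortest_path_length(graph, scholar)
    return min((d for name, d in dists.items() if name in laureates), default=None)

print(turing_number(G, "A. Scholar", laureates))  # 2
```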
|
cs/0411071
|
Pontus Svenson
|
Hedvig Sidenbladh, Pontus Svenson, Johan Schubert
|
Comparing Multi-Target Trackers on Different Force Unit Levels
|
9 pages
|
Proc SPIE Vol 5429, p 306-314 (2004)
|
10.1117/12.542024
| null |
cs.AI
| null |
Consider the problem of tracking a set of moving targets. Apart from the
tracking result, it is often important to know where the tracking fails, either
to steer sensors to that part of the state-space, or to inform a human operator
about the status and quality of the obtained information. An intuitive quality
measure is the correlation between two tracking results based on uncorrelated
observations. In the case of Bayesian trackers such a correlation measure could
be the Kullback-Leibler difference.
We focus on a scenario with a large number of military units moving in some
terrain. The units are observed by several types of sensors and "meta-sensors"
with force aggregation capabilities. The sensors register units of different
size. Two separate multi-target probability hypothesis density (PHD) particle
filters are used to track some type of units (e.g., companies) and their
sub-units (e.g., platoons), respectively, based on observations of units of
those sizes. Each observation is used in one filter only.
Although the state-space may well be the same in both filters, the posterior
PHD distributions are not directly comparable -- one unit might correspond to
three or four spatially distributed sub-units. Therefore, we introduce a
mapping function between distributions for different unit size, based on
doctrine knowledge of unit configuration.
The mapped distributions can now be compared -- locally or globally -- using
some measure, which gives the correlation between two PHD distributions in a
bounded volume of the state-space. To locate areas where the tracking fails, a
discretized quality map of the state-space can be generated by applying the
measure locally to different parts of the space.
|
[
{
"created": "Fri, 19 Nov 2004 13:12:40 GMT",
"version": "v1"
}
] |
2009-11-10
|
[
[
"Sidenbladh",
"Hedvig",
""
],
[
"Svenson",
"Pontus",
""
],
[
"Schubert",
"Johan",
""
]
] |
Consider the problem of tracking a set of moving targets. Apart from the tracking result, it is often important to know where the tracking fails, either to steer sensors to that part of the state-space, or to inform a human operator about the status and quality of the obtained information. An intuitive quality measure is the correlation between two tracking results based on uncorrelated observations. In the case of Bayesian trackers such a correlation measure could be the Kullback-Leibler difference. We focus on a scenario with a large number of military units moving in some terrain. The units are observed by several types of sensors and "meta-sensors" with force aggregation capabilities. The sensors register units of different size. Two separate multi-target probability hypothesis density (PHD) particle filters are used to track some type of units (e.g., companies) and their sub-units (e.g., platoons), respectively, based on observations of units of those sizes. Each observation is used in one filter only. Although the state-space may well be the same in both filters, the posterior PHD distributions are not directly comparable -- one unit might correspond to three or four spatially distributed sub-units. Therefore, we introduce a mapping function between distributions for different unit size, based on doctrine knowledge of unit configuration. The mapped distributions can now be compared -- locally or globally -- using some measure, which gives the correlation between two PHD distributions in a bounded volume of the state-space. To locate areas where the tracking fails, a discretized quality map of the state-space can be generated by applying the measure locally to different parts of the space.
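For the Kullback-Leibler difference mentioned above, a minimal discrete sketch is given below; it assumes the two (mapped) PHD surfaces have already been discretized over a common grid and normalized, since PHDs themselves integrate to the expected target count rather than to one.

```python
import numpy as np

def kl_difference(p, q, eps=1e-12):
    # Discrete Kullback-Leibler difference D(P||Q) = sum_i p_i * log(p_i / q_i),
    # computed after normalizing both discretized surfaces to sum to one.
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# Two discretized intensity surfaces over the same grid cells.
print(kl_difference([0.1, 0.7, 0.2], [0.2, 0.5, 0.3]))
```

Applying the same computation cell by cell over sub-regions of the state-space yields the discretized quality map described above.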
|
2108.09982
|
Hyeongseok Son
|
Hyeongseok Son, Junyong Lee, Jonghyeop Lee, Sunghyun Cho, Seungyong
Lee
|
Recurrent Video Deblurring with Blur-Invariant Motion Estimation and
Pixel Volumes
|
17 pages, Camera-ready version for ACM Transactions on Graphics (TOG)
2021
|
ACM Transactions on Graphics, Vol. 40, No. 5, Article 185, 2021
|
10.1145/3453720
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For the success of video deblurring, it is essential to utilize information
from neighboring frames. Most state-of-the-art video deblurring methods adopt
motion compensation between video frames to aggregate information from multiple
frames that can help deblur a target frame. However, the motion compensation
methods adopted by previous deblurring methods are not blur-invariant, and
consequently, their accuracy is limited for blurry frames with different blur
amounts. To alleviate this problem, we propose two novel approaches to deblur
videos by effectively aggregating information from multiple video frames.
First, we present blur-invariant motion estimation learning to improve motion
estimation accuracy between blurry frames. Second, for motion compensation,
instead of aligning frames by warping with estimated motions, we use a pixel
volume that contains candidate sharp pixels to resolve motion estimation
errors. We combine these two processes to propose an effective recurrent video
deblurring network that fully exploits deblurred previous frames. Experiments
show that our method achieves the state-of-the-art performance both
quantitatively and qualitatively compared to recent methods that use deep
learning.
|
[
{
"created": "Mon, 23 Aug 2021 07:36:49 GMT",
"version": "v1"
}
] |
2021-08-27
|
[
[
"Son",
"Hyeongseok",
""
],
[
"Lee",
"Junyong",
""
],
[
"Lee",
"Jonghyeop",
""
],
[
"Cho",
"Sunghyun",
""
],
[
"Lee",
"Seungyong",
""
]
] |
For the success of video deblurring, it is essential to utilize information from neighboring frames. Most state-of-the-art video deblurring methods adopt motion compensation between video frames to aggregate information from multiple frames that can help deblur a target frame. However, the motion compensation methods adopted by previous deblurring methods are not blur-invariant, and consequently, their accuracy is limited for blurry frames with different blur amounts. To alleviate this problem, we propose two novel approaches to deblur videos by effectively aggregating information from multiple video frames. First, we present blur-invariant motion estimation learning to improve motion estimation accuracy between blurry frames. Second, for motion compensation, instead of aligning frames by warping with estimated motions, we use a pixel volume that contains candidate sharp pixels to resolve motion estimation errors. We combine these two processes to propose an effective recurrent video deblurring network that fully exploits deblurred previous frames. Experiments show that our method achieves the state-of-the-art performance both quantitatively and qualitatively compared to recent methods that use deep learning.
|
2210.05411
|
Ramesh Doddaiah
|
Ramesh Doddaiah, Prathyush Parvatharaju, Elke Rundensteiner, Thomas
Hartvigsen
|
Class-Specific Explainability for Deep Time Series Classifiers
|
This paper is accepted in ICDM 2022
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Explainability helps users trust deep learning solutions for time series
classification. However, existing explainability methods for multi-class time
series classifiers focus on one class at a time, ignoring relationships between
the classes. Instead, when a classifier is choosing between many classes, an
effective explanation must show what sets the chosen class apart from the rest.
We now formalize this notion, studying the open problem of class-specific
explainability for deep time series classifiers, a challenging and impactful
problem setting. We design a novel explainability method, DEMUX, which learns
saliency maps for explaining deep multi-class time series classifiers by
adaptively ensuring that its explanation spotlights the regions in an input
time series that a model uses specifically for its predicted class. DEMUX adopts
a gradient-based approach composed of three interdependent modules that combine
to generate consistent, class-specific saliency maps that remain faithful to
the classifier's behavior yet are easily understood by end users. Our
experimental study demonstrates that DEMUX outperforms nine state-of-the-art
alternatives on five popular datasets when explaining two types of deep time
series classifiers. Further, through a case study, we demonstrate that DEMUX's
explanations indeed highlight what separates the predicted class from the
others in the eyes of the classifier. Our code is publicly available at
https://github.com/rameshdoddaiah/DEMUX.
|
[
{
"created": "Tue, 11 Oct 2022 12:37:15 GMT",
"version": "v1"
}
] |
2022-10-12
|
[
[
"Doddaiah",
"Ramesh",
""
],
[
"Parvatharaju",
"Prathyush",
""
],
[
"Rundensteiner",
"Elke",
""
],
[
"Hartvigsen",
"Thomas",
""
]
] |
Explainability helps users trust deep learning solutions for time series classification. However, existing explainability methods for multi-class time series classifiers focus on one class at a time, ignoring relationships between the classes. Instead, when a classifier is choosing between many classes, an effective explanation must show what sets the chosen class apart from the rest. We now formalize this notion, studying the open problem of class-specific explainability for deep time series classifiers, a challenging and impactful problem setting. We design a novel explainability method, DEMUX, which learns saliency maps for explaining deep multi-class time series classifiers by adaptively ensuring that its explanation spotlights the regions in an input time series that a model uses specifically for its predicted class. DEMUX adopts a gradient-based approach composed of three interdependent modules that combine to generate consistent, class-specific saliency maps that remain faithful to the classifier's behavior yet are easily understood by end users. Our experimental study demonstrates that DEMUX outperforms nine state-of-the-art alternatives on five popular datasets when explaining two types of deep time series classifiers. Further, through a case study, we demonstrate that DEMUX's explanations indeed highlight what separates the predicted class from the others in the eyes of the classifier. Our code is publicly available at https://github.com/rameshdoddaiah/DEMUX.
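DEMUX itself combines three interdependent modules, but the gradient signal it builds on can be illustrated with a plain gradient-saliency baseline; this sketch is a generic stand-in, not the authors' method, and assumes a PyTorch classifier over inputs of shape (1, channels, length).

```python
import torch

def gradient_saliency(model, series, target_class):
    # |d score(target_class) / d input|: the simplest gradient-based
    # saliency map for a time-series classifier.
    x = series.clone().requires_grad_(True)   # shape: (1, channels, length)
    score = model(x)[0, target_class]
    score.backward()
    return x.grad.abs().squeeze(0)            # per-channel, per-step saliency

# A class-specific variant would contrast the target's saliency against the
# remaining classes, which is the gap DEMUX is designed to close.
```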
|
2308.02694
|
Lennart Reimann
|
Lennart M. Reimann, Jonathan Wiesner, Dominik Sisejkovic, Farhad
Merchant and Rainer Leupers
|
SoftFlow: Automated HW-SW Confidentiality Verification for Embedded
Processors
|
6 pages, accepted at 31st IFIP/IEEE Conference on Very Large Scale
Integration (VLSI-SoC 2023)
| null | null | null |
cs.CR cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
Despite its ever-increasing impact, security is not considered as a design
objective in commercial electronic design automation (EDA) tools. This results
in vulnerabilities being overlooked during the software-hardware design
process. Specifically, vulnerabilities that allow leakage of sensitive data
might stay unnoticed by standard testing, as the leakage itself might not
result in evident functional changes. Therefore, EDA tools are needed to
elaborate the confidentiality of sensitive data during the design process.
However, state-of-the-art implementations either solely consider the hardware
or restrict the expressiveness of the security properties that must be proven.
Consequently, more proficient tools are required to assist in the software and
hardware design. To address this issue, we propose SoftFlow, an EDA tool that
allows determining whether a given piece of software exploits existing leakage paths in
hardware. Based on our analysis, the leakage paths can be retained if proven
not to be exploited by software. This is desirable if the removal significantly
impacts the design's performance or functionality, or if the path cannot be
removed as the chip is already manufactured. We demonstrate the feasibility of
SoftFlow by identifying vulnerabilities in OpenSSL cryptographic C programs,
and redesigning them to avoid leakage of cryptographic keys in a RISC-V
architecture.
|
[
{
"created": "Fri, 4 Aug 2023 20:11:46 GMT",
"version": "v1"
}
] |
2023-08-08
|
[
[
"Reimann",
"Lennart M.",
""
],
[
"Wiesner",
"Jonathan",
""
],
[
"Sisejkovic",
"Dominik",
""
],
[
"Merchant",
"Farhad",
""
],
[
"Leupers",
"Rainer",
""
]
] |
Despite its ever-increasing impact, security is not considered as a design objective in commercial electronic design automation (EDA) tools. This results in vulnerabilities being overlooked during the software-hardware design process. Specifically, vulnerabilities that allow leakage of sensitive data might stay unnoticed by standard testing, as the leakage itself might not result in evident functional changes. Therefore, EDA tools are needed to elaborate the confidentiality of sensitive data during the design process. However, state-of-the-art implementations either solely consider the hardware or restrict the expressiveness of the security properties that must be proven. Consequently, more proficient tools are required to assist in the software and hardware design. To address this issue, we propose SoftFlow, an EDA tool that allows determining whether a given piece of software exploits existing leakage paths in hardware. Based on our analysis, the leakage paths can be retained if proven not to be exploited by software. This is desirable if the removal significantly impacts the design's performance or functionality, or if the path cannot be removed as the chip is already manufactured. We demonstrate the feasibility of SoftFlow by identifying vulnerabilities in OpenSSL cryptographic C programs, and redesigning them to avoid leakage of cryptographic keys in a RISC-V architecture.
|
0707.3673
|
Damien Chablat
|
Damien Chablat (IRCCyN), Jorge Angeles (CIM)
|
The Computation of All 4R Serial Spherical Wrists With an Isotropic
Architecture
| null |
Dans 2nd Workshop on Computational Kinematics - WCK, S\'eoul :
Cor\'ee, R\'epublique de (05/2001)
| null |
WCK-2001
|
cs.RO
| null |
A spherical wrist of the serial type is said to be isotropic if it can attain
a posture whereby the singular values of its Jacobian matrix are all identical
and nonzero. What isotropy brings about is robustness to manufacturing,
assembly, and measurement errors, thereby guaranteeing a maximum orientation
accuracy. In this paper we investigate the existence of redundant isotropic
architectures, which should add to the dexterity of the wrist under design by
virtue of its extra degree of freedom. The problem formulation leads to a
system of eight quadratic equations with eight unknowns. The Bezout number of
this system is thus 2^8 = 256, its BKK bound being 192. However, the actual
number of solutions is shown to be 32. We list all solutions of the foregoing
algebraic problem. All these solutions are real, but distinct solutions do not
necessarily lead to distinct manipulators. Upon discarding those algebraic
solutions that yield no new wrists, we end up with exactly eight distinct
architectures, the eight corresponding manipulators being displayed at their
isotropic posture.
|
[
{
"created": "Wed, 25 Jul 2007 06:51:53 GMT",
"version": "v1"
}
] |
2007-07-26
|
[
[
"Chablat",
"Damien",
"",
"IRCCyN"
],
[
"Angeles",
"Jorge",
"",
"CIM"
]
] |
A spherical wrist of the serial type is said to be isotropic if it can attain a posture whereby the singular values of its Jacobian matrix are all identical and nonzero. What isotropy brings about is robustness to manufacturing, assembly, and measurement errors, thereby guaranteeing a maximum orientation accuracy. In this paper we investigate the existence of redundant isotropic architectures, which should add to the dexterity of the wrist under design by virtue of its extra degree of freedom. The problem formulation leads to a system of eight quadratic equations with eight unknowns. The Bezout number of this system is thus 2^8 = 256, its BKK bound being 192. However, the actual number of solutions is shown to be 32. We list all solutions of the foregoing algebraic problem. All these solutions are real, but distinct solutions do not necessarily lead to distinct manipulators. Upon discarding those algebraic solutions that yield no new wrists, we end up with exactly eight distinct architectures, the eight corresponding manipulators being displayed at their isotropic posture.
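The isotropy condition stated at the start of this abstract is easy to check numerically: compute the singular values of the Jacobian and test that they are all equal and nonzero. A minimal sketch with an arbitrary example matrix:

```python
import numpy as np

def is_isotropic(jacobian, tol=1e-9):
    # Isotropic posture: all singular values identical and nonzero.
    s = np.linalg.svd(jacobian, compute_uv=False)
    return s.min() > tol and (s.max() - s.min()) < tol * max(1.0, s.max())

# A scaled rotation has equal singular values, so it passes the test.
theta = 0.3
J = 2.0 * np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
print(is_isotropic(J))  # True
```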
|
2203.02574
|
Tianxin Tao
|
Tianxin Tao, Xiaohang Zhan, Zhongquan Chen, Michiel van de Panne
|
Style-ERD: Responsive and Coherent Online Motion Style Transfer
|
CVPR 2022, project page:
https://tianxintao.github.io/Online-Motion-Style-Transfer
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Motion style transfer is a common method for enriching character animation.
Motion style transfer algorithms are often designed for offline settings where
motions are processed in segments. However, for online animation applications,
such as realtime avatar animation from motion capture, motions need to be
processed as a stream with minimal latency. In this work, we realize a
flexible, high-quality motion style transfer method for this setting. We
propose a novel style transfer model, Style-ERD, to stylize motions in an
online manner with an Encoder-Recurrent-Decoder structure, along with a novel
discriminator that combines feature attention and temporal attention. Our
method stylizes motions into multiple target styles with a unified model.
Although our method targets online settings, it outperforms previous offline
methods in motion realism and style expressiveness and provides significant
gains in runtime efficiency.
|
[
{
"created": "Fri, 4 Mar 2022 21:12:09 GMT",
"version": "v1"
},
{
"created": "Tue, 29 Mar 2022 00:53:19 GMT",
"version": "v2"
}
] |
2022-03-30
|
[
[
"Tao",
"Tianxin",
""
],
[
"Zhan",
"Xiaohang",
""
],
[
"Chen",
"Zhongquan",
""
],
[
"van de Panne",
"Michiel",
""
]
] |
Motion style transfer is a common method for enriching character animation. Motion style transfer algorithms are often designed for offline settings where motions are processed in segments. However, for online animation applications, such as realtime avatar animation from motion capture, motions need to be processed as a stream with minimal latency. In this work, we realize a flexible, high-quality motion style transfer method for this setting. We propose a novel style transfer model, Style-ERD, to stylize motions in an online manner with an Encoder-Recurrent-Decoder structure, along with a novel discriminator that combines feature attention and temporal attention. Our method stylizes motions into multiple target styles with a unified model. Although our method targets online settings, it outperforms previous offline methods in motion realism and style expressiveness and provides significant gains in runtime efficiency.
|
2006.15207
|
Jiefeng Chen
|
Jiefeng Chen, Yixuan Li, Xi Wu, Yingyu Liang, Somesh Jha
|
ATOM: Robustifying Out-of-distribution Detection Using Outlier Mining
|
Paper published at European Conference on Machine Learning (ECML'21)
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Detecting out-of-distribution (OOD) inputs is critical for safely deploying
deep learning models in an open-world setting. However, existing OOD detection
solutions can be brittle in the open world, facing various types of adversarial
OOD inputs. While methods leveraging auxiliary OOD data have emerged, our
analysis of illuminative examples reveals a key insight: the majority of
auxiliary OOD examples may not meaningfully improve, or may even hurt, the
decision boundary of the OOD detector, which is also observed in empirical results on
real data. In this paper, we provide a theoretically motivated method,
Adversarial Training with informative Outlier Mining (ATOM), which improves the
robustness of OOD detection. We show that, by mining informative auxiliary OOD
data, one can significantly improve OOD detection performance, and somewhat
surprisingly, generalize to unseen adversarial attacks. ATOM achieves
state-of-the-art performance under a broad family of classic and adversarial
OOD evaluation tasks. For example, on the CIFAR-10 in-distribution dataset,
ATOM reduces the FPR (at TPR 95%) by up to 57.99% under adversarial OOD inputs,
surpassing the previous best baseline by a large margin.
|
[
{
"created": "Fri, 26 Jun 2020 20:58:05 GMT",
"version": "v1"
},
{
"created": "Sat, 3 Oct 2020 23:30:40 GMT",
"version": "v2"
},
{
"created": "Tue, 22 Jun 2021 22:09:39 GMT",
"version": "v3"
},
{
"created": "Wed, 30 Jun 2021 02:33:11 GMT",
"version": "v4"
}
] |
2021-07-01
|
[
[
"Chen",
"Jiefeng",
""
],
[
"Li",
"Yixuan",
""
],
[
"Wu",
"Xi",
""
],
[
"Liang",
"Yingyu",
""
],
[
"Jha",
"Somesh",
""
]
] |
Detecting out-of-distribution (OOD) inputs is critical for safely deploying deep learning models in an open-world setting. However, existing OOD detection solutions can be brittle in the open world, facing various types of adversarial OOD inputs. While methods leveraging auxiliary OOD data have emerged, our analysis of illuminative examples reveals a key insight: the majority of auxiliary OOD examples may not meaningfully improve, or may even hurt, the decision boundary of the OOD detector, which is also observed in empirical results on real data. In this paper, we provide a theoretically motivated method, Adversarial Training with informative Outlier Mining (ATOM), which improves the robustness of OOD detection. We show that, by mining informative auxiliary OOD data, one can significantly improve OOD detection performance, and somewhat surprisingly, generalize to unseen adversarial attacks. ATOM achieves state-of-the-art performance under a broad family of classic and adversarial OOD evaluation tasks. For example, on the CIFAR-10 in-distribution dataset, ATOM reduces the FPR (at TPR 95%) by up to 57.99% under adversarial OOD inputs, surpassing the previous best baseline by a large margin.
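The mining step can be sketched as follows; this is a simplified reading of ATOM's informative outlier mining in which the current detector scores a large auxiliary pool and training keeps a band of the hardest outliers. The offset quantile q and batch size n are illustrative hyperparameters, not the paper's values.

```python
import torch

def mine_informative_outliers(ood_scores, pool, q=0.125, n=1000):
    # Sort auxiliary outliers by the detector's OOD score (ascending:
    # hardest, most in-distribution-looking first) and keep n samples
    # starting at an offset quantile q into the ranking.
    order = torch.argsort(ood_scores)
    start = int(q * len(pool))
    idx = order[start:start + n]
    return pool[idx]
```

Training the detector only on such informative outliers, rather than on randomly drawn ones, is what tightens the decision boundary.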
|
1402.3766
|
Pascal Vanier
|
Alex Borello (LACL), Julien Cervelle (LACL), Pascal Vanier
|
Turing degrees of limit sets of cellular automata
| null | null | null | null |
cs.FL nlin.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cellular automata are discrete dynamical systems and a model of computation.
The limit set of a cellular automaton consists of the configurations having an
infinite sequence of preimages. It is well known that these always contain a
computable point and that any non-trivial property on them is undecidable. We
go one step further in this article by giving a full characterization of the
sets of Turing degrees of cellular automata: they are the same as the sets of
Turing degrees of effectively closed sets containing a computable point.
|
[
{
"created": "Sun, 16 Feb 2014 07:01:24 GMT",
"version": "v1"
}
] |
2014-02-18
|
[
[
"Borello",
"Alex",
"",
"LACL"
],
[
"Cervelle",
"Julien",
"",
"LACL"
],
[
"Vanier",
"Pascal",
""
]
] |
Cellular automata are discrete dynamical systems and a model of computation. The limit set of a cellular automaton consists of the configurations having an infinite sequence of preimages. It is well known that these always contain a computable point and that any non-trivial property on them is undecidable. We go one step further in this article by giving a full characterization of the sets of Turing degrees of cellular automata: they are the same as the sets of Turing degrees of effectively closed sets containing a computable point.
|
1806.07056
|
Vuk Marojevic
|
Marti Floriach-Pigem, Guillem Xercavins-Torregrosa, Antoni
Gelonch-Bosch, Vuk Marojevic
|
Creating Tailored and Adaptive Network Services with the Open
Orchestration C-RAN Framework
|
IEEE 5G World Forum 2018
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Next generation wireless communications networks will leverage
software-defined radio and networking technologies, combined with cloud and fog
computing. A pool of resources can then be dynamically allocated to create
personalized network services (NSs). The enabling technologies are abstraction,
virtualization and consolidation of resources, automatization of processes, and
programmatic provisioning and orchestration. ETSI's network functions
virtualization (NFV) management and orchestration (MANO) framework provides the
architecture and specifications of the management layers. We introduce OOCRAN,
an open-source software framework and testbed that extends existing NFV
management solutions by incorporating the radio communications layers. This
paper presents OOCRAN and illustrates how it monitors and manages the pool of
resources for creating tailored NSs. OOCRAN can automate NS reconfiguration,
but also facilitates user control. We demonstrate the dynamic deployment of
cellular NSs and discuss the challenges of dynamically creating and managing
tailored NSs on shared infrastructure.
|
[
{
"created": "Tue, 19 Jun 2018 06:00:56 GMT",
"version": "v1"
}
] |
2018-06-20
|
[
[
"Floriach-Pigem",
"Marti",
""
],
[
"Xercavins-Torregrosa",
"Guillem",
""
],
[
"Gelonch-Bosch",
"Antoni",
""
],
[
"Marojevic",
"Vuk",
""
]
] |
Next generation wireless communications networks will leverage software-defined radio and networking technologies, combined with cloud and fog computing. A pool of resources can then be dynamically allocated to create personalized network services (NSs). The enabling technologies are abstraction, virtualization and consolidation of resources, automatization of processes, and programmatic provisioning and orchestration. ETSI's network functions virtualization (NFV) management and orchestration (MANO) framework provides the architecture and specifications of the management layers. We introduce OOCRAN, an open-source software framework and testbed that extends existing NFV management solutions by incorporating the radio communications layers. This paper presents OOCRAN and illustrates how it monitors and manages the pool of resources for creating tailored NSs. OOCRAN can automate NS reconfiguration, but also facilitates user control. We demonstrate the dynamic deployment of cellular NSs and discuss the challenges of dynamically creating and managing tailored NSs on shared infrastructure.
|
1810.08711
|
Seva Shneer
|
Seva Shneer and Alexander Stolyar
|
Stability conditions for a decentralised medium access algorithm:
single- and multi-hop networks
| null | null | null | null |
cs.IT math.IT math.PR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider a decentralised multi-access algorithm, motivated primarily by
the control of transmissions in a wireless network. For a finite single-hop
network with arbitrary interference constraints we prove stochastic stability
under the natural conditions. For infinite and finite single-hop networks, we
obtain broad rate-stability conditions. We also consider symmetric finite
multi-hop networks and show that the natural condition is sufficient for
stochastic stability.
|
[
{
"created": "Fri, 19 Oct 2018 23:10:00 GMT",
"version": "v1"
},
{
"created": "Fri, 31 May 2019 08:08:07 GMT",
"version": "v2"
}
] |
2019-06-03
|
[
[
"Shneer",
"Seva",
""
],
[
"Stolyar",
"Alexander",
""
]
] |
We consider a decentralised multi-access algorithm, motivated primarily by the control of transmissions in a wireless network. For a finite single-hop network with arbitrary interference constraints we prove stochastic stability under the natural conditions. For infinite and finite single-hop networks, we obtain broad rate-stability conditions. We also consider symmetric finite multi-hop networks and show that the natural condition is sufficient for stochastic stability.
|
0801.0452
|
V. Sreekanth Annapureddy
|
V. Sreekanth Annapureddy and Venugopal V. Veeravalli
|
Sum Capacity of the Gaussian Interference Channel in the Low
Interference Regime
|
6 pages, 4 figures, Proceedings of ITA Workshop, San Diego, CA,
Jan-Feb, 2008
| null | null | null |
cs.IT math.IT
| null |
New upper bounds on the sum capacity of the two-user Gaussian interference
channel are derived. Using these bounds, it is shown that treating interference
as noise achieves the sum capacity if the interference levels are below certain
thresholds.
|
[
{
"created": "Wed, 2 Jan 2008 23:11:39 GMT",
"version": "v1"
}
] |
2008-01-04
|
[
[
"Annapureddy",
"V. Sreekanth",
""
],
[
"Veeravalli",
"Venugopal V.",
""
]
] |
New upper bounds on the sum capacity of the two-user Gaussian interference channel are derived. Using these bounds, it is shown that treating interference as noise achieves the sum capacity if the interference levels are below certain thresholds.
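For intuition, the rate achieved by treating interference as noise is simple to write down; the sketch below evaluates it for a two-user real Gaussian channel with given signal-to-noise and interference-to-noise ratios (the threshold conditions themselves are the subject of the paper and are not reproduced here).

```python
import numpy as np

def tin_sum_rate(snr1, snr2, inr1, inr2):
    # Each receiver decodes its own signal while treating the cross link
    # as extra Gaussian noise: R_i = 0.5 * log2(1 + SNR_i / (1 + INR_i)).
    r1 = 0.5 * np.log2(1 + snr1 / (1 + inr1))
    r2 = 0.5 * np.log2(1 + snr2 / (1 + inr2))
    return r1 + r2

# In the low-interference regime, this achievable rate equals the sum capacity.
print(tin_sum_rate(snr1=100.0, snr2=100.0, inr1=0.5, inr2=0.5))
```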
|
2006.11684
|
Yuan Shen
|
Yuan Shen, Shanduojiao Jiang, Yanlin Chen, Katie Driggs Campbell
|
To Explain or Not to Explain: A Study on the Necessity of Explanations
for Autonomous Vehicles
|
Won Best Paper Award at NeurIPS 2022 Progress and Challenges in
Building Trustworthy Embodied AI Workshop (TEA 2022)
| null | null | null |
cs.AI cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Explainable AI, in the context of autonomous systems like self-driving cars,
has drawn broad interest from researchers. Recent studies have found that
providing explanations for autonomous vehicles' actions has many benefits
(e.g., increased trust and acceptance), but have put little emphasis on when
an explanation is needed and how the content of an explanation changes with
driving context. In this work, we investigate in which scenarios people need
explanations and how the critical degree of explanation shifts with situations
and driver types. Through a user experiment, we ask participants to evaluate how necessary
an explanation is and measure the impact on their trust in self-driving cars in
different contexts. Moreover, we present a self-driving explanation dataset
with first-person explanations and associated measures of the necessity for
1103 video clips, augmenting the Berkeley Deep Drive Attention dataset. Our
research reveals that driver types and driving scenarios dictate whether an
explanation is necessary. In particular, people tend to agree on the necessity
for near-crash events but hold different opinions on ordinary or anomalous
driving situations.
|
[
{
"created": "Sun, 21 Jun 2020 00:38:24 GMT",
"version": "v1"
},
{
"created": "Sun, 30 Oct 2022 05:49:27 GMT",
"version": "v2"
},
{
"created": "Thu, 10 Nov 2022 21:44:34 GMT",
"version": "v3"
},
{
"created": "Wed, 21 Dec 2022 01:33:44 GMT",
"version": "v4"
}
] |
2022-12-22
|
[
[
"Shen",
"Yuan",
""
],
[
"Jiang",
"Shanduojiao",
""
],
[
"Chen",
"Yanlin",
""
],
[
"Campbell",
"Katie Driggs",
""
]
] |
Explainable AI, in the context of autonomous systems like self-driving cars, has drawn broad interest from researchers. Recent studies have found that providing explanations for autonomous vehicles' actions has many benefits (e.g., increased trust and acceptance), but have put little emphasis on when an explanation is needed and how the content of an explanation changes with driving context. In this work, we investigate in which scenarios people need explanations and how the critical degree of explanation shifts with situations and driver types. Through a user experiment, we ask participants to evaluate how necessary an explanation is and measure the impact on their trust in self-driving cars in different contexts. Moreover, we present a self-driving explanation dataset with first-person explanations and associated measures of the necessity for 1103 video clips, augmenting the Berkeley Deep Drive Attention dataset. Our research reveals that driver types and driving scenarios dictate whether an explanation is necessary. In particular, people tend to agree on the necessity for near-crash events but hold different opinions on ordinary or anomalous driving situations.
|
1109.1068
|
Karteeka Pavan Kanadam
|
K. Karteeka Pavan, Allam Appa Rao, A. V. Dattatreya Rao
|
An Automatic Clustering Technique for Optimal Clusters
|
12 pages, 5 figures, 2 tables
|
International Journal of Computer Science Engineering and
Applications, Vol., No.4, 2011, pp 133-144
|
10.5121/ijcsea.2011.1412
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes a simple, automatic and efficient clustering algorithm,
namely, Automatic Merging for Optimal Clusters (AMOC), which aims to generate
nearly optimal clusters for the given datasets automatically. AMOC extends
standard k-means with a two-phase iterative procedure that combines validation
techniques to find optimal clusters by automatically merging clusters.
Experiments on both synthetic and real data have shown
that the proposed algorithm finds nearly optimal clustering structures in terms
of number of clusters, compactness and separation.
|
[
{
"created": "Tue, 6 Sep 2011 05:34:28 GMT",
"version": "v1"
}
] |
2011-09-07
|
[
[
"Pavan",
"K. Karteeka",
""
],
[
"Rao",
"Allam Appa",
""
],
[
"Rao",
"A. V. Dattatreya",
""
]
] |
This paper proposes a simple, automatic and efficient clustering algorithm, namely, Automatic Merging for Optimal Clusters (AMOC), which aims to generate nearly optimal clusters for the given datasets automatically. AMOC extends standard k-means with a two-phase iterative procedure that combines validation techniques to find optimal clusters by automatically merging clusters. Experiments on both synthetic and real data have shown that the proposed algorithm finds nearly optimal clustering structures in terms of number of clusters, compactness and separation.
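A minimal sketch of such a merge-based scheme is given below; it over-segments with k-means and then greedily merges cluster pairs while a validation index improves. The silhouette index is used here purely for illustration, since the specific validation techniques are AMOC's own.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def merge_clusters(X, k_max=10):
    # Over-segment first, then merge pairs while the validation index improves.
    labels = KMeans(n_clusters=k_max, n_init=10, random_state=0).fit_predict(X)
    best = silhouette_score(X, labels)
    improved = True
    while improved and len(set(labels)) > 2:
        improved = False
        ids = sorted(set(labels))
        for a in ids:
            for b in ids:
                if a >= b:
                    continue
                trial = np.where(labels == b, a, labels)  # merge cluster b into a
                score = silhouette_score(X, trial)
                if score > best:
                    labels, best, improved = trial, score, True
    return labels
```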
|
1705.04448
|
TonTon Huang
|
TonTon Hsien-De Huang, and Hung-Yu Kao
|
R2-D2: ColoR-inspired Convolutional NeuRal Network (CNN)-based AndroiD
Malware Detections
|
Version 2018/11/15, IEEE BigData 2018, Seattle, WA, USA, Dec 10-13,
2018. (Accepted)
| null | null | null |
cs.CR cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The influence of Deep Learning on image identification and natural language
processing has attracted enormous attention globally. The convolutional neural
network, which can learn without prior feature extraction, fits well with the
rapid iteration of Android malware. The traditional solution for detecting
Android malware requires continuous learning through pre-extracted features to
maintain high performance in identifying the malware. To reduce the manual
effort of feature engineering without relying on pre-selected features, we
have developed a coloR-inspired convolutional neuRal networks (CNN)-based
AndroiD malware Detection (R2-D2) system. The system converts the bytecode of
classes.dex from an Android archive file to RGB color codes and stores it as a
color image with a fixed size. The color image is fed to the convolutional
neural network for automatic feature extraction and training. The data were
collected from Jan. 2017 to Aug. 2017. During this period, we collected
approximately 2 million benign and malicious Android apps for our experiments
with the help of our research partner Leopard Mobile Inc. Our experiment
results demonstrate that the proposed system performs accurate security
analysis. Furthermore, we keep our research results and experiment materials
on http://R2D2.TWMAN.ORG.
|
[
{
"created": "Fri, 12 May 2017 06:28:12 GMT",
"version": "v1"
},
{
"created": "Fri, 21 Jul 2017 11:56:51 GMT",
"version": "v2"
},
{
"created": "Fri, 1 Sep 2017 16:10:48 GMT",
"version": "v3"
},
{
"created": "Tue, 5 Dec 2017 09:08:15 GMT",
"version": "v4"
},
{
"created": "Thu, 15 Nov 2018 10:47:38 GMT",
"version": "v5"
}
] |
2018-11-16
|
[
[
"Huang",
"TonTon Hsien-De",
""
],
[
"Kao",
"Hung-Yu",
""
]
] |
The influence of Deep Learning on image identification and natural language processing has attracted enormous attention globally. The convolutional neural network, which can learn without prior feature extraction, fits well with the rapid iteration of Android malware. The traditional solution for detecting Android malware requires continuous learning through pre-extracted features to maintain high performance in identifying the malware. To reduce the manual effort of feature engineering without relying on pre-selected features, we have developed a coloR-inspired convolutional neuRal networks (CNN)-based AndroiD malware Detection (R2-D2) system. The system converts the bytecode of classes.dex from an Android archive file to RGB color codes and stores it as a color image with a fixed size. The color image is fed to the convolutional neural network for automatic feature extraction and training. The data were collected from Jan. 2017 to Aug. 2017. During this period, we collected approximately 2 million benign and malicious Android apps for our experiments with the help of our research partner Leopard Mobile Inc. Our experiment results demonstrate that the proposed system performs accurate security analysis. Furthermore, we keep our research results and experiment materials on http://R2D2.TWMAN.ORG.
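The bytecode-to-image preprocessing step can be sketched in a few lines; the fixed output size of 256x256 below is an assumption for illustration, as is the zero-padding of the final partial row.

```python
import numpy as np
from PIL import Image

def dex_to_rgb(dex_bytes, size=(256, 256)):
    # Interpret the raw classes.dex byte stream as RGB triples, lay them out
    # as a square image, and resize to a fixed CNN input shape.
    arr = np.frombuffer(dex_bytes, dtype=np.uint8)
    arr = arr[: (len(arr) // 3) * 3].reshape(-1, 3)        # bytes -> RGB triples
    side = int(np.ceil(np.sqrt(arr.shape[0])))
    padded = np.zeros((side * side, 3), dtype=np.uint8)    # pad the last row
    padded[: arr.shape[0]] = arr
    img = Image.fromarray(padded.reshape(side, side, 3))
    return img.resize(size)

# with open("classes.dex", "rb") as f:
#     dex_to_rgb(f.read()).save("app.png")
```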
|
2311.10699
|
George Cevora
|
Marcel Marais, Mate Hartstein, George Cevora
|
Using linear initialisation to improve speed of convergence and
fully-trained error in Autoencoders
| null | null | null | null |
cs.LG cs.AI cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Good weight initialisation is an important step in successful training of
Artificial Neural Networks. Over time a number of improvements have been
proposed to this process. In this paper we introduce a novel weight
initialisation technique called the Straddled Matrix Initialiser. This
initialisation technique is motivated by our assumption that major,
global-scale relationships in data are linear with only smaller effects
requiring complex non-linearities. Combination of Straddled Matrix and ReLU
activation function initialises a Neural Network as a de facto linear model,
which we postulate should be a better starting point for optimisation given our
assumptions. We test this by training autoencoders on three datasets using
Straddled Matrix and seven other state-of-the-art weight initialisation
techniques. In all our experiments the Straddled Matrix Initialiser clearly
outperforms all other methods.
|
[
{
"created": "Fri, 17 Nov 2023 18:43:32 GMT",
"version": "v1"
}
] |
2023-11-20
|
[
[
"Marais",
"Marcel",
""
],
[
"Hartstein",
"Mate",
""
],
[
"Cevora",
"George",
""
]
] |
Good weight initialisation is an important step in successful training of Artificial Neural Networks. Over time a number of improvements have been proposed to this process. In this paper we introduce a novel weight initialisation technique called the Straddled Matrix Initialiser. This initialisation technique is motivated by our assumption that major, global-scale relationships in data are linear with only smaller effects requiring complex non-linearities. Combination of Straddled Matrix and ReLU activation function initialises a Neural Network as a de facto linear model, which we postulate should be a better starting point for optimisation given our assumptions. We test this by training autoencoders on three datasets using Straddled Matrix and seven other state-of-the-art weight initialisation techniques. In all our experiments the Straddled Matrix Initialiser clearly outperforms all other methods.
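One way to read "initialises a Neural Network as a de facto linear model" is that each weight matrix starts as a tiled identity, so that with ReLU activations and nonnegative signals the stack is initially a pass-through. The construction below is a plausible sketch under that reading, not the paper's exact Straddled Matrix definition.

```python
import numpy as np

def straddled_like_matrix(n_out, n_in):
    # Tile identity blocks down the rows and rescale so that duplicated
    # inputs approximately average back out across a stack of layers.
    W = np.zeros((n_out, n_in))
    for i in range(n_out):
        W[i, i % n_in] = 1.0
    return W / max(1, n_out // n_in)

# With ReLU and nonnegative activations, layers initialised this way
# initially compute an (approximately) linear map of the input.
print(straddled_like_matrix(8, 4))
```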
|
2112.01068
|
Quentin De Coninck
|
Quentin De Coninck
|
The Packet Number Space Debate in Multipath QUIC
|
7 pages, submitted to ACM CCR
| null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
With a standardization process that attracted much interest, QUIC can be seen
as the next general-purpose transport protocol. Still, it does not yet provide
true multipath support, missing some use cases that MPTCP can address. To
fill that gap, the IETF recently adopted a multipath proposal merging all the
proposed designs. While it focuses on its core components, there still remains
one major design issue in the proposal: the number of packet number spaces that
should be used. This paper provides experimental results with two different
Multipath QUIC implementations based on NS3 simulations to understand the
impact of using one packet number space per path or a single packet number
space for the whole connection. Our results suggest that using one packet
number space per path makes the Multipath QUIC connection more resilient to the
receiver's acknowledgment strategy.
|
[
{
"created": "Thu, 2 Dec 2021 09:18:16 GMT",
"version": "v1"
}
] |
2021-12-03
|
[
[
"De Coninck",
"Quentin",
""
]
] |
With a standardization process that attracted much interest, QUIC can be seen as the next general-purpose transport protocol. Still, it does not yet provide true multipath support, missing some use cases that MPTCP can address. To fill that gap, the IETF recently adopted a multipath proposal merging all the proposed designs. While it focuses on its core components, there still remains one major design issue in the proposal: the number of packet number spaces that should be used. This paper provides experimental results with two different Multipath QUIC implementations based on NS3 simulations to understand the impact of using one packet number space per path or a single packet number space for the whole connection. Our results suggest that using one packet number space per path makes the Multipath QUIC connection more resilient to the receiver's acknowledgment strategy.
|
2207.02449
|
Naoya Fujita
|
Naoya Fujita and Hiroshi Watanabe
|
Information Compression and Performance Evaluation of Tic-Tac-Toe's
Evaluation Function Using Singular Value Decomposition
|
15 pages, 5 figures, Updated contents
| null |
10.7566/JPSJ.92.034802
| null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We approximated the evaluation function for the game Tic-Tac-Toe by singular
value decomposition (SVD) and investigated the effect of approximation accuracy
on winning rate. We first prepared the perfect evaluation function of
Tic-Tac-Toe and performed low-rank approximation by considering the evaluation
function as a ninth-order tensor. We found that we can reduce the amount of
information of the evaluation function by 70% without significantly degrading
the performance. Approximation accuracy and winning rate were strongly
correlated but not perfectly proportional. We also investigated how the
decomposition method of the evaluation function affects the performance. We
considered two decomposition methods: simple SVD regarding the evaluation
function as a matrix and the Tucker decomposition by higher-order SVD (HOSVD).
At the same compression ratio, the strategy with the approximated evaluation
function obtained by HOSVD exhibited a significantly higher winning rate than
that obtained by SVD. These results suggest that SVD can effectively compress
board game strategies and that an optimal, game-dependent compression method
exists.
|
[
{
"created": "Wed, 6 Jul 2022 05:40:32 GMT",
"version": "v1"
},
{
"created": "Thu, 7 Jul 2022 12:40:24 GMT",
"version": "v2"
},
{
"created": "Tue, 13 Sep 2022 14:56:50 GMT",
"version": "v3"
},
{
"created": "Wed, 12 Oct 2022 06:53:38 GMT",
"version": "v4"
},
{
"created": "Fri, 2 Dec 2022 07:29:06 GMT",
"version": "v5"
}
] |
2023-03-22
|
[
[
"Fujita",
"Naoya",
""
],
[
"Watanabe",
"Hiroshi",
""
]
] |
We approximated the evaluation function for the game Tic-Tac-Toe by singular value decomposition (SVD) and investigated the effect of approximation accuracy on winning rate. We first prepared the perfect evaluation function of Tic-Tac-Toe and performed low-rank approximation by considering the evaluation function as a ninth-order tensor. We found that we can reduce the amount of information of the evaluation function by 70% without significantly degrading the performance. Approximation accuracy and winning rate were strongly correlated but not perfectly proportional. We also investigated how the decomposition method of the evaluation function affects the performance. We considered two decomposition methods: simple SVD regarding the evaluation function as a matrix and the Tucker decomposition by higher-order SVD (HOSVD). At the same compression ratio, the strategy with the approximated evaluation function obtained by HOSVD exhibited a significantly higher winning rate than that obtained by SVD. These results suggest that SVD can effectively compress board game strategies and that an optimal, game-dependent compression method exists.
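The core compression step is a standard truncated SVD; a minimal sketch, assuming the ninth-order evaluation tensor has been matricised into a 27 x 729 array (one of several possible reshapes; HOSVD would instead truncate every mode of the tensor):

```python
import numpy as np

def low_rank(M, r):
    # Best rank-r approximation in the Frobenius norm: keep only the
    # r largest singular values and their singular vectors.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

M = np.random.rand(27, 729)        # stand-in for the matricised evaluation function
approx = low_rank(M, r=8)
print(np.linalg.norm(M - approx))  # reconstruction error shrinks as r grows
```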
|
1904.02341
|
Xin Huang
|
Xin Huang, Sungkweon Hong, Andreas Hofmann, Brian C. Williams
|
Online Risk-Bounded Motion Planning for Autonomous Vehicles in Dynamic
Environments
|
Accepted at ICAPS'19. 10 pages, 6 figures, 1 table
| null | null | null |
cs.RO cs.AI cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A crucial challenge to efficient and robust motion planning for autonomous
vehicles is understanding the intentions of the surrounding agents. Ignoring
the intentions of the other agents in dynamic environments can lead to risky or
over-conservative plans. In this work, we model the motion planning problem as
a partially observable Markov decision process (POMDP) and propose an online
system that combines an intent recognition algorithm and a POMDP solver to
generate risk-bounded plans for the ego vehicle navigating with a number of
dynamic agent vehicles. The intent recognition algorithm predicts the
probabilistic hybrid motion states of each agent vehicle over a finite horizon
using Bayesian filtering and a library of pre-learned maneuver motion models.
We update the POMDP model with the intent recognition results in real time and
solve it using a heuristic search algorithm which produces policies with
upper-bound guarantees on the probability of near-collision with other dynamic
agents. We demonstrate that our system is able to generate better motion plans
in terms of efficiency and safety in a number of challenging environments
including unprotected intersection left turns and lane changes as compared to
the baseline methods.
|
[
{
"created": "Thu, 4 Apr 2019 04:19:38 GMT",
"version": "v1"
}
] |
2019-04-05
|
[
[
"Huang",
"Xin",
""
],
[
"Hong",
"Sungkweon",
""
],
[
"Hofmann",
"Andreas",
""
],
[
"Williams",
"Brian C.",
""
]
] |
A crucial challenge to efficient and robust motion planning for autonomous vehicles is understanding the intentions of the surrounding agents. Ignoring the intentions of the other agents in dynamic environments can lead to risky or over-conservative plans. In this work, we model the motion planning problem as a partially observable Markov decision process (POMDP) and propose an online system that combines an intent recognition algorithm and a POMDP solver to generate risk-bounded plans for the ego vehicle navigating with a number of dynamic agent vehicles. The intent recognition algorithm predicts the probabilistic hybrid motion states of each agent vehicle over a finite horizon using Bayesian filtering and a library of pre-learned maneuver motion models. We update the POMDP model with the intent recognition results in real time and solve it using a heuristic search algorithm which produces policies with upper-bound guarantees on the probability of near-collision with other dynamic agents. We demonstrate that our system is able to generate better motion plans in terms of efficiency and safety in a number of challenging environments including unprotected intersection left turns and lane changes as compared to the baseline methods.
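The intent recognition step is, at its core, Bayesian filtering over a library of maneuver models; a minimal sketch of one belief update, with made-up maneuver names and likelihoods:

```python
import numpy as np

def update_intent(prior, likelihoods):
    # One Bayesian filtering step: p(m | obs) is proportional to p(obs | m) * p(m).
    posterior = np.asarray(prior, dtype=float) * np.asarray(likelihoods, dtype=float)
    return posterior / posterior.sum()

maneuvers = ["keep-lane", "turn-left", "lane-change"]
belief = np.array([1 / 3, 1 / 3, 1 / 3])                     # uniform prior
belief = update_intent(belief, likelihoods=[0.1, 0.7, 0.2])  # observation fits "turn-left"
print(dict(zip(maneuvers, belief.round(3))))
```

Repeating this update over a finite horizon, with likelihoods supplied by the pre-learned motion models, yields the probabilistic hybrid motion states fed into the POMDP.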
|
1005.0218
|
Olivier Teste
|
Faiza Ghozzi (IRIT), Franck Ravat (IRIT), Olivier Teste (IRIT), Gilles
Zurfluh (IRIT)
|
Contraintes pour mod\`ele et langage multidimensionnels
| null |
Bases de donn\'ees avanc\'ees 2003, Lyon : France (2003)
| null | null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper defines a constraint-based model dedicated to multidimensional
databases. The model we define represents data through a constellation of facts
(subjects of analysis) associated with dimensions (axes of analysis), which are
possibly shared. Each dimension is organised according to several hierarchies
(views of analysis) integrating several levels of data granularity. In order to
ensure data consistency, we introduce 5 semantic constraints (exclusion,
inclusion, partition, simultaneity, totality), which can be intra-dimension or
inter-dimension; the intra-dimension constraints allow the expression of
constraints between hierarchies within the same dimension, whereas the
inter-dimension constraints focus on hierarchies of distinct dimensions. We
also study the repercussions of these constraints on multidimensional
manipulations and we provide extensions of the multidimensional operators.
|
[
{
"created": "Mon, 3 May 2010 07:47:56 GMT",
"version": "v1"
}
] |
2010-05-20
|
[
[
"Ghozzi",
"Faiza",
"",
"IRIT"
],
[
"Ravat",
"Franck",
"",
"IRIT"
],
[
"Teste",
"Olivier",
"",
"IRIT"
],
[
"Zurfluh",
"Gilles",
"",
"IRIT"
]
] |
This paper defines a constraint-based model dedicated to multidimensional databases. The model we define represents data through a constellation of facts (subjects of analysis) associated with dimensions (axes of analysis), which are possibly shared. Each dimension is organised according to several hierarchies (views of analysis) integrating several levels of data granularity. In order to ensure data consistency, we introduce 5 semantic constraints (exclusion, inclusion, partition, simultaneity, totality), which can be intra-dimension or inter-dimension; the intra-dimension constraints allow the expression of constraints between hierarchies within the same dimension, whereas the inter-dimension constraints focus on hierarchies of distinct dimensions. We also study the repercussions of these constraints on multidimensional manipulations and we provide extensions of the multidimensional operators.
|
2005.08396
|
Peng Fu
|
Peng Fu, Kohei Kishida, Neil J. Ross, Peter Selinger
|
A tutorial introduction to quantum circuit programming in dependently
typed Proto-Quipper
|
Added a section on related work and a paragraph explaining qubit
initialization and termination
|
LNCS 12227:153-168 (2020)
|
10.1007/978-3-030-52482-1_9
| null |
cs.PL quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce dependently typed Proto-Quipper, or Proto-Quipper-D for short,
an experimental quantum circuit programming language with linear dependent
types. We give several examples to illustrate how linear dependent types can
help in the construction of correct quantum circuits. Specifically, we show how
dependent types enable programming families of circuits, and how dependent
types solve the problem of type-safe uncomputation of garbage qubits. We also
discuss other language features along the way.
|
[
{
"created": "Sun, 17 May 2020 23:31:23 GMT",
"version": "v1"
},
{
"created": "Sat, 12 Dec 2020 18:31:19 GMT",
"version": "v2"
}
] |
2023-07-04
|
[
[
"Fu",
"Peng",
""
],
[
"Kishida",
"Kohei",
""
],
[
"Ross",
"Neil J.",
""
],
[
"Selinger",
"Peter",
""
]
] |
We introduce dependently typed Proto-Quipper, or Proto-Quipper-D for short, an experimental quantum circuit programming language with linear dependent types. We give several examples to illustrate how linear dependent types can help in the construction of correct quantum circuits. Specifically, we show how dependent types enable programming families of circuits, and how dependent types solve the problem of type-safe uncomputation of garbage qubits. We also discuss other language features along the way.
|
2306.11678
|
Ignacio Jimenez Gallo
|
Pablo Alex L\'azaro, Ignacio Jim\'enez Gallo, Juan Rold\'an Aranda,
Alberto del Barrio Garc\'ia, Guillermo Botella Juan, Francisco Jim\'enez
Molinos
|
Design and simulation of memristor-based neural networks
| null | null | null | null |
cs.ET
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In recent times, neural networks have been gaining increasing importance in
fields such as pattern recognition and computer vision. However, their usage
entails significant energy and hardware costs, limiting the domains in which
this technology can be employed.
In this context, the feasibility of utilizing analog circuits based on
memristors as efficient alternatives in neural network inference is being
considered. Memristors stand out for their configurability and low power
consumption.
To study the feasibility of using these circuits, a physical model has been
adapted to accurately simulate the behavior of commercial memristors from
KNOWM. Using this model, multiple neural networks have been designed and
simulated, yielding highly satisfactory results.
|
[
{
"created": "Tue, 20 Jun 2023 16:56:11 GMT",
"version": "v1"
}
] |
2023-06-21
|
[
[
"Lázaro",
"Pablo Alex",
""
],
[
"Gallo",
"Ignacio Jiménez",
""
],
[
"Aranda",
"Juan Roldán",
""
],
[
"García",
"Alberto del Barrio",
""
],
[
"Juan",
"Guillermo Botella",
""
],
[
"Molinos",
"Francisco Jiménez",
""
]
] |
In recent times, neural networks have been gaining increasing importance in fields such as pattern recognition and computer vision. However, their usage entails significant energy and hardware costs, limiting the domains in which this technology can be employed. In this context, the feasibility of utilizing analog circuits based on memristors as efficient alternatives in neural network inference is being considered. Memristors stand out for their configurability and low power consumption. To study the feasibility of using these circuits, a physical model has been adapted to accurately simulate the behavior of commercial memristors from KNOWM. Using this model, multiple neural networks have been designed and simulated, yielding highly satisfactory results.
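The reason memristor arrays suit inference is that a crossbar evaluates a vector-matrix product directly in the analog domain; a minimal numerical sketch of that operation follows (illustrative values, not the adapted KNOWM device model):

```python
import numpy as np

def crossbar_layer(voltages, conductances):
    # By Ohm's law and Kirchhoff's current law, the current collected on
    # each column is the conductance-weighted sum of the row voltages:
    # I = G^T V, i.e. an analog vector-matrix product.
    return conductances.T @ voltages

V = np.array([0.2, -0.1, 0.3])                 # input voltages on the rows
G = np.abs(np.random.randn(3, 4)) * 1e-4       # programmed conductances (S)
print(crossbar_layer(V, G))                    # column currents = weighted sums
```

In a full simulation, each conductance would follow the memristor device model rather than a fixed value.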
|
1707.05653
|
Chandrasekhar Bhagavatula
|
Chandrasekhar Bhagavatula, Chenchen Zhu, Khoa Luu, Marios Savvides
|
Faster Than Real-time Facial Alignment: A 3D Spatial Transformer Network
Approach in Unconstrained Poses
|
International Conference on Computer Vision (ICCV) 2017
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Facial alignment involves finding a set of landmark points on an image with a
known semantic meaning. However, this semantic meaning of landmark points is
often lost in 2D approaches where landmarks are either moved to visible
boundaries or ignored as the pose of the face changes. In order to extract
consistent alignment points across large poses, the 3D structure of the face
must be considered in the alignment step. However, extracting a 3D structure
from a single 2D image usually requires alignment in the first place. We
present our novel approach to simultaneously extract the 3D shape of the face
and the semantically consistent 2D alignment through a 3D Spatial Transformer
Network (3DSTN) to model both the camera projection matrix and the warping
parameters of a 3D model. By utilizing a generic 3D model and a Thin Plate
Spline (TPS) warping function, we are able to generate subject specific 3D
shapes without the need for a large 3D shape basis. In addition, our proposed
network can be trained in an end-to-end framework on entirely synthetic data
from the 300W-LP dataset. Unlike other 3D methods, our approach only requires
one pass through the network resulting in a faster than real-time alignment.
Evaluations of our model on the Annotated Facial Landmarks in the Wild (AFLW)
and AFLW2000-3D datasets show our method achieves state-of-the-art performance
over other 3D approaches to alignment.
|
[
{
"created": "Tue, 18 Jul 2017 14:51:35 GMT",
"version": "v1"
},
{
"created": "Fri, 8 Sep 2017 16:58:53 GMT",
"version": "v2"
}
] |
2017-09-11
|
[
[
"Bhagavatula",
"Chandrasekhar",
""
],
[
"Zhu",
"Chenchen",
""
],
[
"Luu",
"Khoa",
""
],
[
"Savvides",
"Marios",
""
]
] |
Facial alignment involves finding a set of landmark points on an image with a known semantic meaning. However, this semantic meaning of landmark points is often lost in 2D approaches where landmarks are either moved to visible boundaries or ignored as the pose of the face changes. In order to extract consistent alignment points across large poses, the 3D structure of the face must be considered in the alignment step. However, extracting a 3D structure from a single 2D image usually requires alignment in the first place. We present our novel approach to simultaneously extract the 3D shape of the face and the semantically consistent 2D alignment through a 3D Spatial Transformer Network (3DSTN) to model both the camera projection matrix and the warping parameters of a 3D model. By utilizing a generic 3D model and a Thin Plate Spline (TPS) warping function, we are able to generate subject specific 3D shapes without the need for a large 3D shape basis. In addition, our proposed network can be trained in an end-to-end framework on entirely synthetic data from the 300W-LP dataset. Unlike other 3D methods, our approach only requires one pass through the network resulting in a faster than real-time alignment. Evaluations of our model on the Annotated Facial Landmarks in the Wild (AFLW) and AFLW2000-3D datasets show our method achieves state-of-the-art performance over other 3D approaches to alignment.
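As background for the TPS warping step mentioned above, the following is a minimal NumPy sketch of the classic 2D thin plate spline evaluation; the paper applies a TPS warp to a 3D face model and regresses the parameters with its network, neither of which is reproduced here, and the names below are illustrative.

```python
import numpy as np

def tps_warp(points, ctrl, w, a):
    """Evaluate a 2D thin plate spline f(x) = a0 + A @ x
    + sum_i w_i * U(||x - c_i||) with radial kernel U(r) = r^2 log r.
    points: (N, 2) query points; ctrl: (K, 2) control points;
    w: (K, 2) non-affine weights; a: (3, 2) affine part [a0; A]."""
    def U(r):
        # r^2 * log(r), with the removable singularity at r = 0 set to 0.
        out = np.zeros_like(r)
        nz = r > 0
        out[nz] = (r[nz] ** 2) * np.log(r[nz])
        return out

    # Pairwise distances between query points and control points.
    r = np.linalg.norm(points[:, None, :] - ctrl[None, :, :], axis=-1)
    affine = a[0] + points @ a[1:]   # a0 + A x
    bending = U(r) @ w               # sum_i w_i U(||x - c_i||)
    return affine + bending
```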
|
1703.08433
|
Ching-Lueh Chang
|
Ching-Lueh Chang
|
Metric random matchings with applications
|
arXiv admin note: substantial text overlap with arXiv:1702.03106
| null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Let $(\{1,2,\ldots,n\},d)$ be a metric space. We analyze the expected value
and the variance of $\sum_{i=1}^{\lfloor
n/2\rfloor}\,d({\boldsymbol{\pi}}(2i-1),{\boldsymbol{\pi}}(2i))$ for a
uniformly random permutation ${\boldsymbol{\pi}}$ of $\{1,2,\ldots,n\}$,
leading to the following results: (I) Consider the problem of finding a point
in $\{1,2,\ldots,n\}$ with the minimum sum of distances to all points. We show
that this problem has a randomized algorithm that (1) always outputs a
$(2+\epsilon)$-approximate solution in expected $O(n/\epsilon^2)$ time and that
(2) inherits Indyk's~\cite{Ind99, Ind00} algorithm to output a
$(1+\epsilon)$-approximate solution in $O(n/\epsilon^2)$ time with probability
$\Omega(1)$, where $\epsilon\in(0,1)$. (II) The average distance in
$(\{1,2,\ldots,n\},d)$ can be approximated in $O(n/\epsilon)$ time to within a
multiplicative factor in $[\,1/2-\epsilon,1\,]$ with probability
$1/2+\Omega(1)$, where $\epsilon>0$. (III) Assume $d$ to be a graph metric.
Then the average distance in $(\{1,2,\ldots,n\},d)$ can be approximated in
$O(n)$ time to within a multiplicative factor in $[\,1-\epsilon,1+\epsilon\,]$
with probability $1/2+\Omega(1)$, where $\epsilon=\omega(1/n^{1/4})$.
|
[
{
"created": "Fri, 24 Mar 2017 14:44:04 GMT",
"version": "v1"
}
] |
2017-03-27
|
[
[
"Chang",
"Ching-Lueh",
""
]
] |
Let $(\{1,2,\ldots,n\},d)$ be a metric space. We analyze the expected value and the variance of $\sum_{i=1}^{\lfloor n/2\rfloor}\,d({\boldsymbol{\pi}}(2i-1),{\boldsymbol{\pi}}(2i))$ for a uniformly random permutation ${\boldsymbol{\pi}}$ of $\{1,2,\ldots,n\}$, leading to the following results: (I) Consider the problem of finding a point in $\{1,2,\ldots,n\}$ with the minimum sum of distances to all points. We show that this problem has a randomized algorithm that (1) always outputs a $(2+\epsilon)$-approximate solution in expected $O(n/\epsilon^2)$ time and that (2) inherits Indyk's~\cite{Ind99, Ind00} algorithm to output a $(1+\epsilon)$-approximate solution in $O(n/\epsilon^2)$ time with probability $\Omega(1)$, where $\epsilon\in(0,1)$. (II) The average distance in $(\{1,2,\ldots,n\},d)$ can be approximated in $O(n/\epsilon)$ time to within a multiplicative factor in $[\,1/2-\epsilon,1\,]$ with probability $1/2+\Omega(1)$, where $\epsilon>0$. (III) Assume $d$ to be a graph metric. Then the average distance in $(\{1,2,\ldots,n\},d)$ can be approximated in $O(n)$ time to within a multiplicative factor in $[\,1-\epsilon,1+\epsilon\,]$ with probability $1/2+\Omega(1)$, where $\epsilon=\omega(1/n^{1/4})$.
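For intuition, the random-matching quantity analyzed above can be sketched in a few lines of plain Python: the expected distance of a uniformly random matched pair equals the average pairwise distance, so the mean matched distance is an unbiased estimator of it. The paper's exact time bounds and success probabilities are not reproduced here.

```python
import random

def avg_distance_estimate(n, d, trials=100):
    """Estimate the average pairwise distance of ({0,...,n-1}, d) by
    drawing random matchings: shuffle the points, pair them up, and
    average the matched distances. d is any symmetric distance function."""
    estimates = []
    for _ in range(trials):
        perm = list(range(n))
        random.shuffle(perm)
        pairs = [(perm[2 * k], perm[2 * k + 1]) for k in range(n // 2)]
        estimates.append(sum(d(a, b) for a, b in pairs) / (n // 2))
    return sum(estimates) / trials

# Example with d(i, j) = |i - j| on {0,...,99}.
print(avg_distance_estimate(100, lambda i, j: abs(i - j)))
```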
|
2010.03001
|
Giannis Bekoulis
|
Giannis Bekoulis, Christina Papagiannopoulou, Nikos Deligiannis
|
A Review on Fact Extraction and Verification
|
Preprint - Accepted for publication in ACM Computing Surveys
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the fact checking problem, which aims to identify the veracity of a
given claim. Specifically, we focus on the task of Fact Extraction and
VERification (FEVER) and its accompanying dataset. The task consists of the
subtasks of retrieving the relevant documents (and sentences) from Wikipedia
and validating whether the information in the documents supports or refutes a
given claim. This task is essential and can be the building block of
applications such as fake news detection and medical claim verification. In
this paper, we aim at a better understanding of the challenges of the task by
presenting the literature in a structured and comprehensive way. We describe
the proposed methods by analyzing the technical perspectives of the different
approaches and discussing the performance results on the FEVER dataset, which
is the most well-studied and formally structured dataset on the fact extraction
and verification task. We also conduct the largest experimental study to date
on identifying beneficial loss functions for the sentence retrieval component.
Our analysis indicates that sampling negative sentences is important for
improving the performance and decreasing the computational complexity. Finally,
we describe open issues and future challenges, and we motivate future research
in the task.
|
[
{
"created": "Tue, 6 Oct 2020 20:05:43 GMT",
"version": "v1"
},
{
"created": "Mon, 19 Oct 2020 20:35:52 GMT",
"version": "v2"
},
{
"created": "Mon, 19 Apr 2021 09:46:36 GMT",
"version": "v3"
},
{
"created": "Tue, 24 Aug 2021 15:40:14 GMT",
"version": "v4"
},
{
"created": "Fri, 19 Nov 2021 14:42:58 GMT",
"version": "v5"
}
] |
2021-11-22
|
[
[
"Bekoulis",
"Giannis",
""
],
[
"Papagiannopoulou",
"Christina",
""
],
[
"Deligiannis",
"Nikos",
""
]
] |
We study the fact checking problem, which aims to identify the veracity of a given claim. Specifically, we focus on the task of Fact Extraction and VERification (FEVER) and its accompanying dataset. The task consists of the subtasks of retrieving the relevant documents (and sentences) from Wikipedia and validating whether the information in the documents supports or refutes a given claim. This task is essential and can be the building block of applications such as fake news detection and medical claim verification. In this paper, we aim at a better understanding of the challenges of the task by presenting the literature in a structured and comprehensive way. We describe the proposed methods by analyzing the technical perspectives of the different approaches and discussing the performance results on the FEVER dataset, which is the most well-studied and formally structured dataset on the fact extraction and verification task. We also conduct the largest experimental study to date on identifying beneficial loss functions for the sentence retrieval component. Our analysis indicates that sampling negative sentences is important for improving the performance and decreasing the computational complexity. Finally, we describe open issues and future challenges, and we motivate future research in the task.
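To illustrate the kind of negative sampling studied for the sentence retrieval component, here is a hedged Python sketch; the pairing scheme and names (make_retrieval_pairs, k_neg) are illustrative, and the survey's specific loss functions are not reproduced.

```python
import random

def make_retrieval_pairs(claim, evidence_sents, all_sents, k_neg=5):
    """Sketch of negative sampling for sentence retrieval training: each
    gold evidence sentence is paired with k_neg sentences drawn from the
    remaining pool as negatives, yielding triples for a pairwise loss."""
    pool = [s for s in all_sents if s not in evidence_sents]
    pairs = []
    for pos in evidence_sents:
        for neg in random.sample(pool, min(k_neg, len(pool))):
            pairs.append((claim, pos, neg))  # train with a pairwise loss
    return pairs
```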
|
2003.03516
|
Lingfeng Tao
|
Lingfeng Tao, Michael Bowman, Xu Zhou, Jiucai Zhang, Xiaoli Zhang
|
Learn and Transfer Knowledge of Preferred Assistance Strategies in
Semi-autonomous Telemanipulation
| null | null |
10.1007/s10846-022-01596-2
| null |
cs.RO cs.AI cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Enabling robots to provide effective assistance yet still accommodating the
operator's commands for telemanipulation of an object is very challenging
because the robot's assistive action is not always intuitive for human operators
and human behaviors and preferences are sometimes ambiguous for the robot to
interpret. Although various assistance approaches are being developed to
improve the control quality from different optimization perspectives, the
problem remains of determining the appropriate approach that satisfies the
fine motion constraints of the telemanipulation task and the preference of the
operator. To address these problems, we developed a novel preference-aware
assistance knowledge learning approach. An assistance preference model learns
what assistance is preferred by a human, and a stagewise model updating method
ensures the learning stability while dealing with the ambiguity of human
preference data. Such preference-aware assistance knowledge enables a
teleoperated robot hand to provide more active yet preferred assistance toward
manipulation success. We also developed knowledge transfer methods to transfer
the preference knowledge across different robot hand structures to avoid
extensive robot-specific training. Experiments to telemanipulate a 3-finger
hand and 2-finger hand, respectively, to use, move, and hand over a cup have
been conducted. Results demonstrated that the methods enabled the robots to
effectively learn the preference knowledge and allowed knowledge transfer
between robots with less training effort.
|
[
{
"created": "Sat, 7 Mar 2020 04:49:57 GMT",
"version": "v1"
},
{
"created": "Sat, 19 Dec 2020 20:21:40 GMT",
"version": "v2"
}
] |
2023-09-15
|
[
[
"Tao",
"Lingfeng",
""
],
[
"Bowman",
"Michael",
""
],
[
"Zhou",
"Xu",
""
],
[
"Zhang",
"Jiucai",
""
],
[
"Zhang",
"Xiaoli",
""
]
] |
Enabling robots to provide effective assistance yet still accommodating the operator's commands for telemanipulation of an object is very challenging because the robot's assistive action is not always intuitive for human operators and human behaviors and preferences are sometimes ambiguous for the robot to interpret. Although various assistance approaches are being developed to improve the control quality from different optimization perspectives, the problem remains of determining the appropriate approach that satisfies the fine motion constraints of the telemanipulation task and the preference of the operator. To address these problems, we developed a novel preference-aware assistance knowledge learning approach. An assistance preference model learns what assistance is preferred by a human, and a stagewise model updating method ensures the learning stability while dealing with the ambiguity of human preference data. Such preference-aware assistance knowledge enables a teleoperated robot hand to provide more active yet preferred assistance toward manipulation success. We also developed knowledge transfer methods to transfer the preference knowledge across different robot hand structures to avoid extensive robot-specific training. Experiments to telemanipulate a 3-finger hand and 2-finger hand, respectively, to use, move, and hand over a cup have been conducted. Results demonstrated that the methods enabled the robots to effectively learn the preference knowledge and allowed knowledge transfer between robots with less training effort.
|
1612.06139
|
Josep Crego
|
Josep Crego and Jean Senellart
|
Neural Machine Translation from Simplified Translations
|
Submitted to EACL 2017 Short paper
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Text simplification aims at reducing the lexical, grammatical and structural
complexity of a text while keeping the same meaning. In the context of machine
translation, we introduce the idea of simplified translations in order to boost
the learning ability of deep neural translation models. We conduct preliminary
experiments showing that translation complexity is actually reduced in a
translation of a source bi-text compared to the target reference of the bi-text
while using a neural machine translation (NMT) system learned on the exact same
bi-text. Based on the knowledge distillation idea, we then train an NMT system
using the simplified bi-text, and show that it outperforms the initial system
that was built over the reference data set. Performance is further boosted when
both reference and automatic translations are used to learn the network. We
perform an elementary analysis of the translated corpus and report accuracy
results of the proposed approach on English-to-French and English-to-German
translation tasks.
|
[
{
"created": "Mon, 19 Dec 2016 11:50:58 GMT",
"version": "v1"
}
] |
2016-12-20
|
[
[
"Crego",
"Josep",
""
],
[
"Senellart",
"Jean",
""
]
] |
Text simplification aims at reducing the lexical, grammatical and structural complexity of a text while keeping the same meaning. In the context of machine translation, we introduce the idea of simplified translations in order to boost the learning ability of deep neural translation models. We conduct preliminary experiments showing that translation complexity is actually reduced in a translation of a source bi-text compared to the target reference of the bi-text while using a neural machine translation (NMT) system learned on the exact same bi-text. Based on the knowledge distillation idea, we then train an NMT system using the simplified bi-text, and show that it outperforms the initial system that was built over the reference data set. Performance is further boosted when both reference and automatic translations are used to learn the network. We perform an elementary analysis of the translated corpus and report accuracy results of the proposed approach on English-to-French and English-to-German translation tasks.
|
2403.08828
|
Balint Gyevnar
|
Balint Gyevnar and Stephanie Droop and Tadeg Quillien and Shay B.
Cohen and Neil R. Bramley and Christopher G. Lucas and Stefano V. Albrecht
|
People Attribute Purpose to Autonomous Vehicles When Explaining Their
Behavior
| null | null | null | null |
cs.HC cs.AI cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cognitive science can help us understand which explanations people might
expect, and in which format they frame these explanations, whether causal,
counterfactual, or teleological (i.e., purpose-oriented). Understanding the
relevance of these concepts is crucial for building good explainable AI (XAI)
which offers recourse and actionability. Focusing on autonomous driving, a
complex decision-making domain, we report empirical data from two surveys on
(i) how people explain the behavior of autonomous vehicles in 14 unique
scenarios (N1=54), and (ii) how they perceive these explanations in terms of
complexity, quality, and trustworthiness (N2=356). Participants deemed
teleological explanations to be of significantly better quality than
counterfactual
ones, with perceived teleology being the best predictor of perceived quality
and trustworthiness. Neither the perceived teleology nor the quality were
affected by whether the car was an autonomous vehicle or driven by a person.
This indicates that people use teleology to evaluate information about not just
other people but also autonomous vehicles. Taken together, our findings
highlight the importance of explanations that are framed in terms of purpose
rather than just, as is standard in XAI, the causal mechanisms involved. We
release the 14 scenarios and more than 1,300 elicited explanations publicly as
the Human Explanations for Autonomous Driving Decisions (HEADD) dataset.
|
[
{
"created": "Mon, 11 Mar 2024 11:48:50 GMT",
"version": "v1"
},
{
"created": "Tue, 30 Apr 2024 17:43:10 GMT",
"version": "v2"
}
] |
2024-05-01
|
[
[
"Gyevnar",
"Balint",
""
],
[
"Droop",
"Stephanie",
""
],
[
"Quillien",
"Tadeg",
""
],
[
"Cohen",
"Shay B.",
""
],
[
"Bramley",
"Neil R.",
""
],
[
"Lucas",
"Christopher G.",
""
],
[
"Albrecht",
"Stefano V.",
""
]
] |
Cognitive science can help us understand which explanations people might expect, and in which format they frame these explanations, whether causal, counterfactual, or teleological (i.e., purpose-oriented). Understanding the relevance of these concepts is crucial for building good explainable AI (XAI) which offers recourse and actionability. Focusing on autonomous driving, a complex decision-making domain, we report empirical data from two surveys on (i) how people explain the behavior of autonomous vehicles in 14 unique scenarios (N1=54), and (ii) how they perceive these explanations in terms of complexity, quality, and trustworthiness (N2=356). Participants deemed teleological explanations to be of significantly better quality than counterfactual ones, with perceived teleology being the best predictor of perceived quality and trustworthiness. Neither the perceived teleology nor the quality were affected by whether the car was an autonomous vehicle or driven by a person. This indicates that people use teleology to evaluate information about not just other people but also autonomous vehicles. Taken together, our findings highlight the importance of explanations that are framed in terms of purpose rather than just, as is standard in XAI, the causal mechanisms involved. We release the 14 scenarios and more than 1,300 elicited explanations publicly as the Human Explanations for Autonomous Driving Decisions (HEADD) dataset.
|
2108.05598
|
Konstantinos Makantasis
|
Konstantinos Makantasis
|
AffRankNet+: Ranking Affect Using Privileged Information
|
8 pages, 4 figures, 2021 9th International Conference on Affective
Computing and Intelligent Interaction Workshops and Demos (ACIIW)
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many of the affect modelling tasks present an asymmetric distribution of
information between training and test time; additional information is given
about the training data, which is not available at test time. Learning under
this setting is called Learning Under Privileged Information (LUPI). At the
same time, due to the ordinal nature of affect annotations, formulating affect
modelling tasks as supervised learning ranking problems is gaining ground
within the Affective Computing research community. Motivated by the two facts
above, in this study, we introduce a ranking model that treats additional
information about the training data as privileged information to accurately
rank affect states. Our ranking model extends the well-known RankNet model to
the LUPI paradigm, hence its name AffRankNet+. To the best of our knowledge, it
is the first time that a ranking model based on neural networks exploits
privileged information. We evaluate the performance of the proposed model on
the public available Afew-VA dataset and compare it against the RankNet model,
which does not use privileged information. Experimental evaluation indicates
that the AffRankNet+ model can yield significantly better performance.
|
[
{
"created": "Thu, 12 Aug 2021 08:36:31 GMT",
"version": "v1"
}
] |
2021-08-13
|
[
[
"Makantasis",
"Konstantinos",
""
]
] |
Many of the affect modelling tasks present an asymmetric distribution of information between training and test time; additional information is given about the training data, which is not available at test time. Learning under this setting is called Learning Under Privileged Information (LUPI). At the same time, due to the ordinal nature of affect annotations, formulating affect modelling tasks as supervised learning ranking problems is gaining ground within the Affective Computing research community. Motivated by the two facts above, in this study, we introduce a ranking model that treats additional information about the training data as privileged information to accurately rank affect states. Our ranking model extends the well-known RankNet model to the LUPI paradigm, hence its name AffRankNet+. To the best of our knowledge, it is the first time that a ranking model based on neural networks exploits privileged information. We evaluate the performance of the proposed model on the publicly available Afew-VA dataset and compare it against the RankNet model, which does not use privileged information. Experimental evaluation indicates that the AffRankNet+ model can yield significantly better performance.
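For context, here is a minimal NumPy sketch of the standard RankNet pairwise loss that AffRankNet+ builds on; the privileged-information (LUPI) extension itself is the paper's contribution and is not reproduced here.

```python
import numpy as np

def ranknet_pairwise_loss(s_i, s_j):
    """Standard RankNet loss for a pair where item i should rank above
    item j: the model's scores are mapped to a preference probability
    P(i > j) = sigmoid(s_i - s_j) and penalized with cross-entropy."""
    p_ij = 1.0 / (1.0 + np.exp(-(s_i - s_j)))  # P(i ranked above j)
    return -np.log(p_ij + 1e-12)               # cross-entropy, target = 1

# Example: the loss shrinks as the preferred item's score grows.
print(ranknet_pairwise_loss(2.0, 0.5), ranknet_pairwise_loss(0.5, 2.0))
```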
|
2104.06159
|
Ivo Danihelka
|
Matteo Hessel, Ivo Danihelka, Fabio Viola, Arthur Guez, Simon Schmitt,
Laurent Sifre, Theophane Weber, David Silver, Hado van Hasselt
|
Muesli: Combining Improvements in Policy Optimization
| null | null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a novel policy update that combines regularized policy
optimization with model learning as an auxiliary loss. The update (henceforth
Muesli) matches MuZero's state-of-the-art performance on Atari. Notably, Muesli
does so without using deep search: it acts directly with a policy network and
has computation speed comparable to model-free baselines. The Atari results are
complemented by extensive ablations, and by additional results on continuous
control and 9x9 Go.
|
[
{
"created": "Tue, 13 Apr 2021 13:04:29 GMT",
"version": "v1"
},
{
"created": "Thu, 31 Mar 2022 09:35:40 GMT",
"version": "v2"
}
] |
2022-04-01
|
[
[
"Hessel",
"Matteo",
""
],
[
"Danihelka",
"Ivo",
""
],
[
"Viola",
"Fabio",
""
],
[
"Guez",
"Arthur",
""
],
[
"Schmitt",
"Simon",
""
],
[
"Sifre",
"Laurent",
""
],
[
"Weber",
"Theophane",
""
],
[
"Silver",
"David",
""
],
[
"van Hasselt",
"Hado",
""
]
] |
We propose a novel policy update that combines regularized policy optimization with model learning as an auxiliary loss. The update (henceforth Muesli) matches MuZero's state-of-the-art performance on Atari. Notably, Muesli does so without using deep search: it acts directly with a policy network and has computation speed comparable to model-free baselines. The Atari results are complemented by extensive ablations, and by additional results on continuous control and 9x9 Go.
|
2207.04957
|
Frederick Qiu
|
Frederick Qiu and Sahil Singla
|
Submodular Dominance and Applications
|
Appears in APPROX 2022, 21 pages, 1 figure
| null | null | null |
cs.DS cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In submodular optimization we often deal with the expected value of a
submodular function $f$ on a distribution $\mathcal{D}$ over sets of elements.
In this work we study such submodular expectations for negatively dependent
distributions. We introduce a natural notion of negative dependence, which we
call Weak Negative Regression (WNR), that generalizes both Negative Association
and Negative Regression. We observe that WNR distributions satisfy Submodular
Dominance, whereby the expected value of $f$ under $\mathcal{D}$ is at least
the expected value of $f$ under a product distribution with the same
element-marginals.
Next, we give several applications of Submodular Dominance to submodular
optimization. In particular, we improve the best known submodular prophet
inequalities, we develop new rounding techniques for polytopes of set systems
that admit negatively dependent distributions, and we prove existence of
contention resolution schemes for WNR distributions.
|
[
{
"created": "Mon, 11 Jul 2022 15:38:16 GMT",
"version": "v1"
}
] |
2022-07-13
|
[
[
"Qiu",
"Frederick",
""
],
[
"Singla",
"Sahil",
""
]
] |
In submodular optimization we often deal with the expected value of a submodular function $f$ on a distribution $\mathcal{D}$ over sets of elements. In this work we study such submodular expectations for negatively dependent distributions. We introduce a natural notion of negative dependence, which we call Weak Negative Regression (WNR), that generalizes both Negative Association and Negative Regression. We observe that WNR distributions satisfy Submodular Dominance, whereby the expected value of $f$ under $\mathcal{D}$ is at least the expected value of $f$ under a product distribution with the same element-marginals. Next, we give several applications of Submodular Dominance to submodular optimization. In particular, we improve the best known submodular prophet inequalities, we develop new rounding techniques for polytopes of set systems that admit negatively dependent distributions, and we prove existence of contention resolution schemes for WNR distributions.
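A small worked example of Submodular Dominance, under the assumption of a coverage function and the (negatively associated) uniform distribution over size-k subsets; the instance is illustrative only.

```python
import itertools

def coverage(S, cover):
    """Example submodular function: number of items covered by S."""
    covered = set()
    for e in S:
        covered |= cover[e]
    return len(covered)

# Ground set {0, 1, 2} with a small coverage instance.
cover = {0: {"a", "b"}, 1: {"b", "c"}, 2: {"c", "d"}}
n, k = 3, 2
elements = range(n)

# D: uniform over size-k subsets, a negatively associated distribution.
subsets_k = list(itertools.combinations(elements, k))
exp_D = sum(coverage(S, cover) for S in subsets_k) / len(subsets_k)

# Product distribution with the same marginals p_e = k / n.
p = k / n
exp_prod = sum(
    p ** len(S) * (1 - p) ** (n - len(S)) * coverage(S, cover)
    for r in range(n + 1)
    for S in itertools.combinations(elements, r)
)

print(exp_D, exp_prod)  # 3.33... >= 3.11..., as Submodular Dominance predicts
```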
|
2301.09545
|
Jesse Josua Benjamin
|
Jesse Josua Benjamin, Heidi Biggs, Arne Berger, Julija Rukanskait\.e,
Michael Heidt, Nick Merrill, James Pierce, Joseph Lindley
|
The Entoptic Field Camera as Metaphor-Driven Research-through-Design
with AI Technologies
|
To be published in Proceedings of the 2023 CHI Conference on Human
Factors in Computing Systems (CHI '23), April 23--28, 2023, Hamburg, Germany
| null |
10.1145/3544548.3581175
| null |
cs.HC cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Artificial intelligence (AI) technologies are widely deployed in smartphone
photography; and prompt-based image synthesis models have rapidly become
commonplace. In this paper, we describe a Research-through-Design (RtD) project
which explores this shift in the means and modes of image production via the
creation and use of the Entoptic Field Camera. Entoptic phenomena usually refer
to perceptions of floaters or bright blue dots stemming from the physiological
interplay of the eye and brain. We use the term entoptic as a metaphor to
investigate how the material interplay of data and models in AI technologies
shapes human experiences of reality. Through our case study using first-person
design and a field study, we offer implications for critical, reflective,
more-than-human and ludic design to engage AI technologies; the
conceptualisation of an RtD research space which contributes to AI literacy
discourses; and outline a research trajectory concerning materiality and design
affordances of AI technologies.
|
[
{
"created": "Mon, 23 Jan 2023 17:03:54 GMT",
"version": "v1"
}
] |
2023-01-24
|
[
[
"Benjamin",
"Jesse Josua",
""
],
[
"Biggs",
"Heidi",
""
],
[
"Berger",
"Arne",
""
],
[
"Rukanskaitė",
"Julija",
""
],
[
"Heidt",
"Michael",
""
],
[
"Merrill",
"Nick",
""
],
[
"Pierce",
"James",
""
],
[
"Lindley",
"Joseph",
""
]
] |
Artificial intelligence (AI) technologies are widely deployed in smartphone photography; and prompt-based image synthesis models have rapidly become commonplace. In this paper, we describe a Research-through-Design (RtD) project which explores this shift in the means and modes of image production via the creation and use of the Entoptic Field Camera. Entoptic phenomena usually refer to perceptions of floaters or bright blue dots stemming from the physiological interplay of the eye and brain. We use the term entoptic as a metaphor to investigate how the material interplay of data and models in AI technologies shapes human experiences of reality. Through our case study using first-person design and a field study, we offer implications for critical, reflective, more-than-human and ludic design to engage AI technologies; the conceptualisation of an RtD research space which contributes to AI literacy discourses; and outline a research trajectory concerning materiality and design affordances of AI technologies.
|
2004.12274
|
Baoyu Jing
|
Baoyu Jing, Zeya Wang, Eric Xing
|
Show, Describe and Conclude: On Exploiting the Structure Information of
Chest X-Ray Reports
|
ACL 2019
| null |
10.18653/v1/P19-1657
| null |
cs.CL cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Chest X-Ray (CXR) images are commonly used for clinical screening and
diagnosis. Automatically writing reports for these images can considerably
lighten the workload of radiologists for summarizing descriptive findings and
conclusive impressions. The complex structures between and within sections of
the reports pose a great challenge to the automatic report generation.
Specifically, the section Impression is a diagnostic summarization over the
section Findings; and the appearance of normality dominates each section over
that of abnormality. Existing studies rarely explore and consider this
fundamental structure information. In this work, we propose a novel framework
that exploits the structure information between and within report sections for
generating CXR imaging reports. First, we propose a two-stage strategy that
explicitly models the relationship between Findings and Impression. Second, we
design a novel cooperative multi-agent system that implicitly captures the
imbalanced distribution between abnormality and normality. Experiments on two
CXR report datasets show that our method achieves state-of-the-art performance
in terms of various evaluation metrics. Our results show that the proposed
approach is able to generate high-quality medical reports through integrating
the structure information.
|
[
{
"created": "Sun, 26 Apr 2020 02:29:20 GMT",
"version": "v1"
},
{
"created": "Thu, 23 Jul 2020 17:44:44 GMT",
"version": "v2"
}
] |
2020-07-24
|
[
[
"Jing",
"Baoyu",
""
],
[
"Wang",
"Zeya",
""
],
[
"Xing",
"Eric",
""
]
] |
Chest X-Ray (CXR) images are commonly used for clinical screening and diagnosis. Automatically writing reports for these images can considerably lighten the workload of radiologists for summarizing descriptive findings and conclusive impressions. The complex structures between and within sections of the reports pose a great challenge to the automatic report generation. Specifically, the section Impression is a diagnostic summarization over the section Findings; and the appearance of normality dominates each section over that of abnormality. Existing studies rarely explore and consider this fundamental structure information. In this work, we propose a novel framework that exploits the structure information between and within report sections for generating CXR imaging reports. First, we propose a two-stage strategy that explicitly models the relationship between Findings and Impression. Second, we design a novel cooperative multi-agent system that implicitly captures the imbalanced distribution between abnormality and normality. Experiments on two CXR report datasets show that our method achieves state-of-the-art performance in terms of various evaluation metrics. Our results show that the proposed approach is able to generate high-quality medical reports through integrating the structure information.
|
1911.10470
|
Akari Asai Ms
|
Akari Asai, Kazuma Hashimoto, Hannaneh Hajishirzi, Richard Socher,
Caiming Xiong
|
Learning to Retrieve Reasoning Paths over Wikipedia Graph for Question
Answering
|
Published as a conference paper at ICLR 2020. Code is available at
https://github.com/AkariAsai/learning_to_retrieve_reasoning_paths
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Answering questions that require multi-hop reasoning at web-scale
necessitates retrieving multiple evidence documents, one of which often has
little lexical or semantic relationship to the question. This paper introduces
a new graph-based recurrent retrieval approach that learns to retrieve
reasoning paths over the Wikipedia graph to answer multi-hop open-domain
questions. Our retriever model trains a recurrent neural network that learns to
sequentially retrieve evidence paragraphs in the reasoning path by conditioning
on the previously retrieved documents. Our reader model ranks the reasoning
paths and extracts the answer span included in the best reasoning path.
Experiments show state-of-the-art results on three open-domain QA
datasets, showcasing the effectiveness and robustness of our method. Notably,
our method achieves significant improvement in HotpotQA, outperforming the
previous best model by more than 14 points.
|
[
{
"created": "Sun, 24 Nov 2019 08:27:42 GMT",
"version": "v1"
},
{
"created": "Fri, 14 Feb 2020 07:43:06 GMT",
"version": "v2"
}
] |
2020-02-17
|
[
[
"Asai",
"Akari",
""
],
[
"Hashimoto",
"Kazuma",
""
],
[
"Hajishirzi",
"Hannaneh",
""
],
[
"Socher",
"Richard",
""
],
[
"Xiong",
"Caiming",
""
]
] |
Answering questions that require multi-hop reasoning at web-scale necessitates retrieving multiple evidence documents, one of which often has little lexical or semantic relationship to the question. This paper introduces a new graph-based recurrent retrieval approach that learns to retrieve reasoning paths over the Wikipedia graph to answer multi-hop open-domain questions. Our retriever model trains a recurrent neural network that learns to sequentially retrieve evidence paragraphs in the reasoning path by conditioning on the previously retrieved documents. Our reader model ranks the reasoning paths and extracts the answer span included in the best reasoning path. Experiments show state-of-the-art results on three open-domain QA datasets, showcasing the effectiveness and robustness of our method. Notably, our method achieves significant improvement in HotpotQA, outperforming the previous best model by more than 14 points.
|
2406.08787
|
Parisa Kordjamshidi
|
Sania Sinha, Tanawan Premsri, Parisa Kordjamshidi
|
A Survey on Compositional Learning of AI Models: Theoretical and
Experimental Practices
| null | null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Compositional learning, mastering the ability to combine basic concepts and
construct more intricate ones, is crucial for human cognition, especially in
human language comprehension and visual perception. This notion is tightly
connected to generalization over unobserved situations. Despite its integral
role in intelligence, there is a lack of systematic theoretical and
experimental research methodologies, making it difficult to analyze the
compositional learning abilities of computational models. In this paper, we
survey the literature on compositional learning of AI models and the
connections made to cognitive studies. We identify abstract concepts of
compositionality in cognitive and linguistic studies and connect these to the
computational challenges faced by language and vision models in compositional
reasoning. We overview the formal definitions, tasks, evaluation benchmarks,
variety of computational models, and theoretical findings. We cover modern
studies on large language models to provide a deeper understanding of the
cutting-edge compositional capabilities exhibited by state-of-the-art AI models
and pinpoint important directions for future research.
|
[
{
"created": "Thu, 13 Jun 2024 03:46:21 GMT",
"version": "v1"
}
] |
2024-06-14
|
[
[
"Sinha",
"Sania",
""
],
[
"Premsri",
"Tanawan",
""
],
[
"Kordjamshidi",
"Parisa",
""
]
] |
Compositional learning, mastering the ability to combine basic concepts and construct more intricate ones, is crucial for human cognition, especially in human language comprehension and visual perception. This notion is tightly connected to generalization over unobserved situations. Despite its integral role in intelligence, there is a lack of systematic theoretical and experimental research methodologies, making it difficult to analyze the compositional learning abilities of computational models. In this paper, we survey the literature on compositional learning of AI models and the connections made to cognitive studies. We identify abstract concepts of compositionality in cognitive and linguistic studies and connect these to the computational challenges faced by language and vision models in compositional reasoning. We overview the formal definitions, tasks, evaluation benchmarks, variety of computational models, and theoretical findings. We cover modern studies on large language models to provide a deeper understanding of the cutting-edge compositional capabilities exhibited by state-of-the-art AI models and pinpoint important directions for future research.
|
2311.01323
|
Qizhang Li
|
Qizhang Li, Yiwen Guo, Wangmeng Zuo, Hao Chen
|
Towards Evaluating Transfer-based Attacks Systematically, Practically,
and Fairly
|
Accepted by NeurIPS 2023
| null | null | null |
cs.LG cs.CR cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The adversarial vulnerability of deep neural networks (DNNs) has drawn great
attention due to the security risk of applying these models in real-world
applications. Based on transferability of adversarial examples, an increasing
number of transfer-based methods have been developed to fool black-box DNN
models whose architecture and parameters are inaccessible. Although tremendous
effort has been exerted, a standardized benchmark for comparing these methods
systematically, fairly, and practically is still lacking. Our investigation
shows that the evaluation of some methods must be more rigorous and thorough to
verify their effectiveness and to avoid, for example, unfair comparisons and
insufficient consideration of
possible substitute/victim models. Therefore, we establish a transfer-based
attack benchmark (TA-Bench) which implements 30+ methods. In this paper, we
evaluate and compare them comprehensively on 25 popular substitute/victim
models on ImageNet. New insights about the effectiveness of these methods are
gained and guidelines for future evaluations are provided. Code at:
https://github.com/qizhangli/TA-Bench.
|
[
{
"created": "Thu, 2 Nov 2023 15:35:58 GMT",
"version": "v1"
}
] |
2023-11-03
|
[
[
"Li",
"Qizhang",
""
],
[
"Guo",
"Yiwen",
""
],
[
"Zuo",
"Wangmeng",
""
],
[
"Chen",
"Hao",
""
]
] |
The adversarial vulnerability of deep neural networks (DNNs) has drawn great attention due to the security risk of applying these models in real-world applications. Based on transferability of adversarial examples, an increasing number of transfer-based methods have been developed to fool black-box DNN models whose architecture and parameters are inaccessible. Although tremendous effort has been exerted, a standardized benchmark for comparing these methods systematically, fairly, and practically is still lacking. Our investigation shows that the evaluation of some methods must be more rigorous and thorough to verify their effectiveness and to avoid, for example, unfair comparisons and insufficient consideration of possible substitute/victim models. Therefore, we establish a transfer-based attack benchmark (TA-Bench) which implements 30+ methods. In this paper, we evaluate and compare them comprehensively on 25 popular substitute/victim models on ImageNet. New insights about the effectiveness of these methods are gained and guidelines for future evaluations are provided. Code at: https://github.com/qizhangli/TA-Bench.
|
2110.06758
|
Fuqun Huang
|
Fuqun Huang and Lorenzo Strigini
|
HEDP: A Method for Early Forecasting Software Defects based on Human
Error Mechanisms
|
30 pages, 5 figures, and 17 tables
|
IEEE Access-2023
|
10.1109/ACCESS.2023.3234490
|
volume 11, page 3626-3652
|
cs.SE cs.AI cs.CC cs.CY
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
As the primary cause of software defects, human error is the key to
understanding, and perhaps to predicting and avoiding them. Little research has
been done to predict defects on the basis of the cognitive errors that cause
them. This paper proposes an approach to predicting software defects through
knowledge about the cognitive mechanisms of human errors. Our theory is that
the main process behind a software defect is that an error-prone scenario
triggers human error modes, which psychologists have observed to recur across
diverse activities. Software defects can then be predicted by identifying such
scenarios, guided by this knowledge of typical error modes. The proposed idea
emphasizes predicting the exact location and form of a possible defect. We
conducted two case studies to demonstrate and validate this approach, with 55
programmers in a programming competition and 5 analysts serving as the users of
the approach. We found it impressive that the approach was able to predict, at
the requirement phase, the exact locations and forms of 7 out of the 22 (31.8%)
specific types of defects that were found in the code. The defects predicted
tended to be common defects: their occurrences constituted 75.7% of the total
number of defects in the 55 developed programs; each of them was introduced by
at least two persons. The fraction of the defects introduced by a programmer
that were predicted was on average (over all programmers) 75%. Furthermore,
these predicted defects were highly persistent through the debugging process.
If the prediction had been used to successfully prevent these defects, this
could have saved 46.2% of the debugging iterations. This excellent capability
of forecasting the exact locations and forms of possible defects at the early
phases of software development recommends the approach for substantial benefits
to defect prevention and early detection.
|
[
{
"created": "Wed, 13 Oct 2021 14:44:23 GMT",
"version": "v1"
}
] |
2023-01-18
|
[
[
"Huang",
"Fuqun",
""
],
[
"Strigini",
"Lorenzo",
""
]
] |
As the primary cause of software defects, human error is the key to understanding, and perhaps to predicting and avoiding them. Little research has been done to predict defects on the basis of the cognitive errors that cause them. This paper proposes an approach to predicting software defects through knowledge about the cognitive mechanisms of human errors. Our theory is that the main process behind a software defect is that an error-prone scenario triggers human error modes, which psychologists have observed to recur across diverse activities. Software defects can then be predicted by identifying such scenarios, guided by this knowledge of typical error modes. The proposed idea emphasizes predicting the exact location and form of a possible defect. We conducted two case studies to demonstrate and validate this approach, with 55 programmers in a programming competition and 5 analysts serving as the users of the approach. We found it impressive that the approach was able to predict, at the requirement phase, the exact locations and forms of 7 out of the 22 (31.8%) specific types of defects that were found in the code. The defects predicted tended to be common defects: their occurrences constituted 75.7% of the total number of defects in the 55 developed programs; each of them was introduced by at least two persons. The fraction of the defects introduced by a programmer that were predicted was on average (over all programmers) 75%. Furthermore, these predicted defects were highly persistent through the debugging process. If the prediction had been used to successfully prevent these defects, this could have saved 46.2% of the debugging iterations. This excellent capability of forecasting the exact locations and forms of possible defects at the early phases of software development recommends the approach for substantial benefits to defect prevention and early detection.
|
1805.09521
|
Ehsan Adeli
|
Mohammad Sabokrou, Masoud Pourreza, Mohsen Fayyaz, Rahim Entezari,
Mahmood Fathy, J\"urgen Gall, Ehsan Adeli
|
AVID: Adversarial Visual Irregularity Detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Real-time detection of irregularities in visual data is invaluable in many
prospective applications including surveillance, patient monitoring systems,
etc. With the surge of deep learning methods in recent years, researchers have
tried a wide spectrum of methods for different
applications. However, for the case of irregularity or anomaly detection in
videos, training an end-to-end model is still an open challenge, since often
irregularity is not well-defined and there are not enough irregular samples to
use during training. In this paper, inspired by the success of generative
adversarial networks (GANs) for training deep models in unsupervised or
self-supervised settings, we propose an end-to-end deep network for detection
and fine localization of irregularities in videos (and images). Our proposed
architecture is composed of two networks, which are trained to compete with
each other while collaborating to find the irregularity. One network works as a
pixel-level irregularity Inpainter, and the other works as a patch-level
Detector. After adversarial self-supervised training, in which the Inpainter
(I) tries to fool the Detector (D) into accepting its inpainted output as
regular (normal), the two
networks collaborate to detect and fine-segment the irregularity in any given
testing video. Our results on three different datasets show that our method can
outperform the state-of-the-art and fine-segment the irregularity.
|
[
{
"created": "Thu, 24 May 2018 06:23:45 GMT",
"version": "v1"
},
{
"created": "Tue, 17 Jul 2018 22:37:53 GMT",
"version": "v2"
}
] |
2018-07-19
|
[
[
"Sabokrou",
"Mohammad",
""
],
[
"Pourreza",
"Masoud",
""
],
[
"Fayyaz",
"Mohsen",
""
],
[
"Entezari",
"Rahim",
""
],
[
"Fathy",
"Mahmood",
""
],
[
"Gall",
"Jürgen",
""
],
[
"Adeli",
"Ehsan",
""
]
] |
Real-time detection of irregularities in visual data is invaluable in many prospective applications including surveillance, patient monitoring systems, etc. With the surge of deep learning methods in recent years, researchers have tried a wide spectrum of methods for different applications. However, for the case of irregularity or anomaly detection in videos, training an end-to-end model is still an open challenge, since often irregularity is not well-defined and there are not enough irregular samples to use during training. In this paper, inspired by the success of generative adversarial networks (GANs) for training deep models in unsupervised or self-supervised settings, we propose an end-to-end deep network for detection and fine localization of irregularities in videos (and images). Our proposed architecture is composed of two networks, which are trained to compete with each other while collaborating to find the irregularity. One network works as a pixel-level irregularity Inpainter, and the other works as a patch-level Detector. After adversarial self-supervised training, in which the Inpainter (I) tries to fool the Detector (D) into accepting its inpainted output as regular (normal), the two networks collaborate to detect and fine-segment the irregularity in any given testing video. Our results on three different datasets show that our method can outperform the state-of-the-art and fine-segment the irregularity.
|
2307.06795
|
Denis Coquenet
|
Denis Coquenet and Cl\'ement Rambour and Emanuele Dalsasso and Nicolas
Thome
|
Leveraging Vision-Language Foundation Models for Fine-Grained Downstream
Tasks
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vision-language foundation models such as CLIP have shown impressive
zero-shot performance on many tasks and datasets, especially thanks to their
free-text inputs. However, they struggle to handle some downstream tasks, such
as fine-grained attribute detection and localization. In this paper, we propose
a multitask fine-tuning strategy based on a positive/negative prompt
formulation to further leverage the capacities of the vision-language
foundation models. Using the CLIP architecture as baseline, we show strong
improvements on bird fine-grained attribute detection and localization tasks,
while also increasing the classification performance on the CUB200-2011
dataset. We provide source code for reproducibility purposes: it is available
at https://github.com/FactoDeepLearning/MultitaskVLFM.
|
[
{
"created": "Thu, 13 Jul 2023 15:05:34 GMT",
"version": "v1"
}
] |
2023-07-14
|
[
[
"Coquenet",
"Denis",
""
],
[
"Rambour",
"Clément",
""
],
[
"Dalsasso",
"Emanuele",
""
],
[
"Thome",
"Nicolas",
""
]
] |
Vision-language foundation models such as CLIP have shown impressive zero-shot performance on many tasks and datasets, especially thanks to their free-text inputs. However, they struggle to handle some downstream tasks, such as fine-grained attribute detection and localization. In this paper, we propose a multitask fine-tuning strategy based on a positive/negative prompt formulation to further leverage the capacities of the vision-language foundation models. Using the CLIP architecture as baseline, we show strong improvements on bird fine-grained attribute detection and localization tasks, while also increasing the classification performance on the CUB200-2011 dataset. We provide source code for reproducibility purposes: it is available at https://github.com/FactoDeepLearning/MultitaskVLFM.
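To make the positive/negative prompt formulation concrete, here is a hedged NumPy sketch of attribute scoring with one positive and one negative prompt; the embeddings are assumed to be L2-normalized CLIP-style vectors computed upstream, and this shows only the scoring idea, not the authors' multitask fine-tuning objective.

```python
import numpy as np

def attribute_score(image_emb, pos_prompt_emb, neg_prompt_emb, tau=0.01):
    """Score an attribute via a softmax over the image's similarity to a
    positive prompt (e.g. "a photo of a bird with a red crown") and a
    negative one ("... without a red crown"). All inputs are assumed to be
    L2-normalized embedding vectors."""
    sims = np.array([image_emb @ pos_prompt_emb, image_emb @ neg_prompt_emb])
    logits = sims / tau                      # temperature-scaled similarities
    probs = np.exp(logits - logits.max())    # numerically stable softmax
    probs /= probs.sum()
    return probs[0]  # probability that the attribute is present
```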
|
2201.08679
|
Rog\'erio Rossi
|
Rogerio Rossi and Kechi Hirama
|
Strategic Issues on Implementing a Software Process Improvement Program
|
InSITE Conference - Tampa, USA - 2015
| null |
10.28945/2193
| null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Software technology has a high impact on the global economy, as well as on
many sectors of contemporary society. As a product enabling the most varied
daily activities, software has to be produced with high quality. Software
quality depends on a development effort based on a large set of software
development processes. However, the implementation and continuous improvement
of software processes aimed at the software product should be carefully
institutionalized by software development organizations such as software
factories, testing factories, and V&V organizations, among others. The
institutionalization of programs such as a Software Process Improvement
Program, or SPI Program, requires strategic planning, which is addressed in
this article from the perspective of specific models and frameworks, as well as
reflections based on software process engineering models and standards. In
addition, a set of strategic drivers is proposed to assist the implementation
of a Strategic Plan for a SPI Program which can be considered by the
organizations before starting this kind of Program.
|
[
{
"created": "Sat, 15 Jan 2022 23:20:22 GMT",
"version": "v1"
}
] |
2022-01-24
|
[
[
"Rossi",
"Rogerio",
""
],
[
"Hirama",
"Kechi",
""
]
] |
Software technology has a high impact on the global economy, as well as on many sectors of contemporary society. As a product enabling the most varied daily activities, software has to be produced with high quality. Software quality depends on a development effort based on a large set of software development processes. However, the implementation and continuous improvement of software processes aimed at the software product should be carefully institutionalized by software development organizations such as software factories, testing factories, and V&V organizations, among others. The institutionalization of programs such as a Software Process Improvement Program, or SPI Program, requires strategic planning, which is addressed in this article from the perspective of specific models and frameworks, as well as reflections based on software process engineering models and standards. In addition, a set of strategic drivers is proposed to assist the implementation of a Strategic Plan for a SPI Program which can be considered by the organizations before starting this kind of Program.
|
1006.4444
|
William Jackson
|
Bruno Osorno
|
Analysis of Microprocessor Based Protective Relay's (MBPR) Differential
Equation Algorithms
|
IEEE Publication Format,
https://sites.google.com/site/journalofcomputing/
|
Journal of Computing, Vol. 2 No. 6, June 2010, NY, USA, ISSN
2151-9617
| null | null |
cs.OH
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper analyses and explains, from the systems point of view,
microprocessor based protective relay (MBPR) systems with emphasis on
differential equation algorithms. Presently, the application of protective
relaying in power systems, using MBPR systems, based on the differential
equation algorithm is valued more than the protection relaying based on any
other type of algorithm, because of advantages in accuracy and implementation.
MBPR differential equation approach can tolerate some errors caused by power
system abnormality such as DC offset. This paper shows that the algorithm is
system-description based and immune to distortions such as DC offset.
Differential equation algorithms implemented in MBPR are widely used in the
protection of transmission and distribution lines, transformers, buses, motors,
etc. The parameters from the system, utilized in these algorithms, are obtained
from the power system current i(t) or voltage v(t), which are abnormal values
under fault or distortion situations. So, an error study for the algorithm is
considered necessary.
|
[
{
"created": "Wed, 23 Jun 2010 08:29:36 GMT",
"version": "v1"
}
] |
2010-06-24
|
[
[
"Osorno",
"Bruno",
""
]
] |
This paper analyses and explains, from the systems point of view, microprocessor based protective relay (MBPR) systems with emphasis on differential equation algorithms. Presently, the application of protective relaying in power systems, using MBPR systems, based on the differential equation algorithm is valued more than the protection relaying based on any other type of algorithm, because of advantages in accuracy and implementation. MBPR differential equation approach can tolerate some errors caused by power system abnormality such as DC offset. This paper shows that the algorithm is system-description based and immune to distortions such as DC offset. Differential equation algorithms implemented in MBPR are widely used in the protection of transmission and distribution lines, transformers, buses, motors, etc. The parameters from the system, utilized in these algorithms, are obtained from the power system current i(t) or voltage v(t), which are abnormal values under fault or distortion situations. So, an error study for the algorithm is considered necessary.
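For illustration, here is a minimal NumPy sketch of the textbook differential-equation relaying algorithm analyzed in the paper, under the assumption of a series R-L line model v(t) = R i(t) + L di/dt and three consecutive samples taken T seconds apart; the paper's error analysis is not reproduced.

```python
import numpy as np

def estimate_R_L(v, i, T):
    """Classic differential-equation relaying sketch: integrate
    v = R*i + L*di/dt over two consecutive sample intervals with the
    trapezoidal rule to get two linear equations in the unknowns R and L.
    A decaying DC offset satisfies the same differential equation, which
    is why the method tolerates it."""
    # Two equations from sample pairs (0, 1) and (1, 2).
    A = np.array([
        [T / 2 * (i[1] + i[0]), i[1] - i[0]],
        [T / 2 * (i[2] + i[1]), i[2] - i[1]],
    ])
    b = np.array([
        T / 2 * (v[1] + v[0]),
        T / 2 * (v[2] + v[1]),
    ])
    R, L = np.linalg.solve(A, b)
    return R, L
```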
|
1806.09444
|
Nikita Jaipuria
|
Nikita Jaipuria, Golnaz Habibi, Jonathan P. How
|
A Transferable Pedestrian Motion Prediction Model for Intersections with
Different Geometries
| null | null | null | null |
cs.LG cs.AI cs.RO stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a novel framework for accurate pedestrian intent
prediction at intersections. Given some prior knowledge of the curbside
geometry, the presented framework can accurately predict pedestrian
trajectories, even in new intersections that it has not been trained on. This
is achieved by making use of the contravariant components of trajectories in
the curbside coordinate system, which ensures that the transformation of
trajectories across intersections is affine, regardless of the curbside
geometry. Our method is based on the Augmented Semi Nonnegative Sparse Coding
(ASNSC) formulation and we use that as a baseline to show improvement in
prediction performance on real pedestrian datasets collected at two
intersections in Cambridge, with distinctly different curbside and crosswalk
geometries. We demonstrate a 7.2% improvement in prediction accuracy when
training and testing on the same intersection. Furthermore, TASNSC trained and
tested on different intersections achieves prediction performance comparable to
that of the baseline trained and tested on the same intersection.
|
[
{
"created": "Mon, 25 Jun 2018 13:19:45 GMT",
"version": "v1"
}
] |
2018-06-26
|
[
[
"Jaipuria",
"Nikita",
""
],
[
"Habibi",
"Golnaz",
""
],
[
"How",
"Jonathan P.",
""
]
] |
This paper presents a novel framework for accurate pedestrian intent prediction at intersections. Given some prior knowledge of the curbside geometry, the presented framework can accurately predict pedestrian trajectories, even in new intersections that it has not been trained on. This is achieved by making use of the contravariant components of trajectories in the curbside coordinate system, which ensures that the transformation of trajectories across intersections is affine, regardless of the curbside geometry. Our method is based on the Augmented Semi Nonnegative Sparse Coding (ASNSC) formulation and we use that as a baseline to show improvement in prediction performance on real pedestrian datasets collected at two intersections in Cambridge, with distinctly different curbside and crosswalk geometries. We demonstrate a 7.2% improvement in prediction accuracy when the train and test intersections are the same. Furthermore, TASNSC, when trained and tested on different intersections, achieves prediction performance comparable to that of the baseline trained and tested on the same intersection.
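As a rough illustration of working in curbside coordinates, the sketch below projects world-frame trajectory points onto a curb-aligned basis; the helper and its inputs are hypothetical stand-ins, not the authors' exact construction.

```python
import numpy as np

def to_curbside_frame(traj, origin, tangent):
    """Express world-frame trajectory points (N x 2) in a local curbside
    frame: one axis along the curb, the other along its normal.
    Hypothetical helper for illustration only."""
    t = tangent / np.linalg.norm(tangent)
    n = np.array([-t[1], t[0]])     # left-hand normal to the curb
    R = np.stack([t, n])            # rows are the curbside basis vectors
    return (traj - origin) @ R.T    # components along curb and normal

# A trajectory learned at one intersection can then be mapped to another by
# composing this transform at the source with its inverse at the target,
# which is what makes the cross-intersection mapping affine.
```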
|
2101.08333
|
Shuyang Li
|
Shuyang Li, Jin Cao, Mukund Sridhar, Henghui Zhu, Shang-Wen Li, Wael
Hamza, Julian McAuley
|
Zero-shot Generalization in Dialog State Tracking through Generative
Question Answering
|
Accepted as a Long Paper at EACL 2021
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dialog State Tracking (DST), an integral part of modern dialog systems, aims
to track user preferences and constraints (slots) in task-oriented dialogs. In
real-world settings with constantly changing services, DST systems must
generalize to new domains and unseen slot types. Existing methods for DST do
not generalize well to new slot names and many require known ontologies of slot
types and values for inference. We introduce a novel ontology-free framework
that supports natural language queries for unseen constraints and slots in
multi-domain task-oriented dialogs. Our approach is based on generative
question-answering using a conditional language model pre-trained on
substantive English sentences. Our model improves joint goal accuracy in
zero-shot domain adaptation settings by up to 9% (absolute) over the previous
state-of-the-art on the MultiWOZ 2.1 dataset.
|
[
{
"created": "Wed, 20 Jan 2021 21:47:20 GMT",
"version": "v1"
}
] |
2021-01-22
|
[
[
"Li",
"Shuyang",
""
],
[
"Cao",
"Jin",
""
],
[
"Sridhar",
"Mukund",
""
],
[
"Zhu",
"Henghui",
""
],
[
"Li",
"Shang-Wen",
""
],
[
"Hamza",
"Wael",
""
],
[
"McAuley",
"Julian",
""
]
] |
Dialog State Tracking (DST), an integral part of modern dialog systems, aims to track user preferences and constraints (slots) in task-oriented dialogs. In real-world settings with constantly changing services, DST systems must generalize to new domains and unseen slot types. Existing methods for DST do not generalize well to new slot names and many require known ontologies of slot types and values for inference. We introduce a novel ontology-free framework that supports natural language queries for unseen constraints and slots in multi-domain task-oriented dialogs. Our approach is based on generative question-answering using a conditional language model pre-trained on substantive English sentences. Our model improves joint goal accuracy in zero-shot domain adaptation settings by up to 9% (absolute) over the previous state-of-the-art on the MultiWOZ 2.1 dataset.
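A minimal sketch of the generative question-answering idea, assuming a Hugging Face seq2seq model and an illustrative prompt format (both are assumptions, not the authors' setup):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Illustrative only: the choice of T5 and the prompt format are assumptions.
tok = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

def track_slot(dialog_history: str, slot_question: str) -> str:
    """Cast a slot as a natural-language question over the dialog history
    and decode the slot value with the conditional language model."""
    prompt = f"dialog: {dialog_history} question: {slot_question}"
    ids = tok(prompt, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=16)
    return tok.decode(out[0], skip_special_tokens=True)

# Unseen slots need only a new question, no ontology:
# track_slot(history, "what price range does the user want?")
```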
|
1912.05333
|
Moein Hasani
|
Moein Hasani, Amin Nasim Saravi, Hassan Khotanlou
|
An Efficient Approach for Using Expectation Maximization Algorithm in
Capsule Networks
| null | null |
10.1109/MVIP49855.2020.9116870
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Capsule Networks (CapsNets) are brand-new architectures that have shown
ground-breaking results in certain areas of Computer Vision (CV). In 2017,
Hinton and his team introduced CapsNets with routing-by-agreement in Sabour et
al., and in a more recent paper, "Matrix Capsules with EM Routing", they
proposed a more complete architecture with the Expectation-Maximization (EM)
algorithm. Unlike traditional convolutional neural networks (CNNs), this
architecture is able to preserve the pose of the objects in the picture. Due
to this characteristic, it has been able to beat the previous state-of-the-art
results on the smallNORB dataset, which includes samples with various
viewpoints. Also, this architecture is more robust to white-box adversarial
attacks. However, CapsNets have two major drawbacks: they cannot perform as
well as CNNs on complex datasets, and they need a huge amount of time for
training. We try to mitigate these shortcomings by finding optimum settings of
EM routing iterations for training CapsNets. Unlike past studies, we use
unequal numbers of EM routing iterations for different stages of the CapsNet.
For our research, we use three datasets: the Yale face dataset, the Belgium
Traffic Sign dataset, and the Fashion-MNIST dataset.
|
[
{
"created": "Wed, 11 Dec 2019 14:13:15 GMT",
"version": "v1"
},
{
"created": "Sun, 22 Dec 2019 11:35:25 GMT",
"version": "v2"
},
{
"created": "Fri, 31 Jul 2020 13:10:16 GMT",
"version": "v3"
}
] |
2020-08-03
|
[
[
"Hasani",
"Moein",
""
],
[
"Saravi",
"Amin Nasim",
""
],
[
"Khotanlou",
"Hassan",
""
]
] |
Capsule Networks (CapsNets) are brand-new architectures that have shown ground-breaking results in certain areas of Computer Vision (CV). In 2017, Hinton and his team introduced CapsNets with routing-by-agreement in Sabour et al., and in a more recent paper, "Matrix Capsules with EM Routing", they proposed a more complete architecture with the Expectation-Maximization (EM) algorithm. Unlike traditional convolutional neural networks (CNNs), this architecture is able to preserve the pose of the objects in the picture. Due to this characteristic, it has been able to beat the previous state-of-the-art results on the smallNORB dataset, which includes samples with various viewpoints. Also, this architecture is more robust to white-box adversarial attacks. However, CapsNets have two major drawbacks: they cannot perform as well as CNNs on complex datasets, and they need a huge amount of time for training. We try to mitigate these shortcomings by finding optimum settings of EM routing iterations for training CapsNets. Unlike past studies, we use unequal numbers of EM routing iterations for different stages of the CapsNet. For our research, we use three datasets: the Yale face dataset, the Belgium Traffic Sign dataset, and the Fashion-MNIST dataset.
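The core idea, unequal routing iterations per stage, can be sketched as a per-layer schedule; `CapsuleLayer` and the iteration counts below are hypothetical placeholders, not the paper's reported settings.

```python
# Give each capsule layer its own number of EM-routing iterations instead
# of one global constant. `CapsuleLayer` is a hypothetical module that
# accepts an `iters` argument.
ROUTING_SCHEDULE = {          # assumed values, for illustration only
    "conv_caps1": 1,          # early layer: fewer, cheaper iterations
    "conv_caps2": 2,
    "class_caps": 3,          # final layer gets more refinement
}

def build_capsnet(CapsuleLayer):
    return [CapsuleLayer(name=n, iters=k) for n, k in ROUTING_SCHEDULE.items()]
```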
|
1911.04831
|
Jonguk Kim
|
Jonguk Kim, Jeong-Han Yun, Hyoung Chun Kim
|
Anomaly Detection for Industrial Control Systems Using
Sequence-to-Sequence Neural Networks
|
Accepted to 5th Workshop on the Security of Industrial Control
Systems & of Cyber-Physical Systems (CyberICPS 2019) in conjunction with
ESORICS 2019
| null | null | null |
cs.CR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This study proposes an anomaly detection method for operational data of
industrial control systems (ICSs). Sequence-to-sequence neural networks were
applied to train on and predict ICS operational data and to capture their
time-series characteristics. The proposed method requires only a normal
dataset to learn an ICS's normal state and detect outliers. The method was
evaluated on the SWaT (Secure Water Treatment) dataset, and 29 out of 36
attacks were detected. The method also detects the attack points, locating 25
out of 53 points. This study provides a detailed analysis of the false
positives and false negatives in the experimental results.
|
[
{
"created": "Tue, 12 Nov 2019 13:16:15 GMT",
"version": "v1"
}
] |
2019-11-13
|
[
[
"Kim",
"Jonguk",
""
],
[
"Yun",
"Jeong-Han",
""
],
[
"Kim",
"Hyoung Chun",
""
]
] |
This study proposes an anomaly detection method for operational data of industrial control systems (ICSs). Sequence-to-sequence neural networks were applied to train on and predict ICS operational data and to capture their time-series characteristics. The proposed method requires only a normal dataset to learn an ICS's normal state and detect outliers. The method was evaluated on the SWaT (Secure Water Treatment) dataset, and 29 out of 36 attacks were detected. The method also detects the attack points, locating 25 out of 53 points. This study provides a detailed analysis of the false positives and false negatives in the experimental results.
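One common way to turn such sequence predictions into anomaly decisions is residual thresholding; the sketch below uses a mean-plus-k-sigma threshold fitted on normal data, which is an assumption rather than the paper's exact scoring rule.

```python
import numpy as np

def anomaly_flags(y_true, y_pred, train_resid, k=4.0):
    """Flag time steps whose prediction residual is unusually large.
    `train_resid` holds residuals measured on normal training data;
    the mean + k*std threshold is an illustrative choice."""
    thr = train_resid.mean() + k * train_resid.std()
    resid = np.abs(y_true - y_pred).mean(axis=-1)   # per-step error
    return resid > thr
```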
|
1805.01823
|
Jakub Gajarsk\'y
|
Jakub Gajarsk\'y, Petr Hlin\v{e}n\'y, Daniel Lokshtanov, Jan
Obdr\v{z}\'alek, M.S. Ramanujan
|
A New Perspective on FO Model Checking of Dense Graph Classes
| null | null | null | null |
cs.LO cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the first-order (FO) model checking problem of dense graphs, namely
those which have FO interpretations in (or are FO transductions of) some sparse
graph classes. We give a structural characterization of the graph classes which
are FO interpretable in graphs of bounded degree. This characterization allows
us to efficiently compute such an FO interpretation for an input graph. As a
consequence, we obtain an FPT algorithm for successor-invariant FO model
checking of any graph class which is FO interpretable in (or an FO transduction
of) a graph class of bounded degree. The approach we use to obtain these
results may also be of independent interest.
|
[
{
"created": "Fri, 4 May 2018 15:29:16 GMT",
"version": "v1"
}
] |
2018-05-07
|
[
[
"Gajarský",
"Jakub",
""
],
[
"Hliněný",
"Petr",
""
],
[
"Lokshtanov",
"Daniel",
""
],
[
"Obdržálek",
"Jan",
""
],
[
"Ramanujan",
"M. S.",
""
]
] |
We study the first-order (FO) model checking problem of dense graphs, namely those which have FO interpretations in (or are FO transductions of) some sparse graph classes. We give a structural characterization of the graph classes which are FO interpretable in graphs of bounded degree. This characterization allows us to efficiently compute such an FO interpretation for an input graph. As a consequence, we obtain an FPT algorithm for successor-invariant FO model checking of any graph class which is FO interpretable in (or an FO transduction of) a graph class of bounded degree. The approach we use to obtain these results may also be of independent interest.
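For reference, one standard formulation of a (simple) first-order interpretation of a graph H in a graph G, given FO formulas nu(x) and mu(x, y) over the graph signature:

```latex
% A simple FO interpretation I = (\nu, \mu) of a graph H in a graph G.
\[
V(H) = \{\, v \in V(G) : G \models \nu(v) \,\}, \qquad
E(H) = \{\, \{u,v\} : u \neq v,\ G \models \mu(u,v) \wedge \mu(v,u) \,\}.
\]
```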
|
2311.03526
|
Fuyuan Lyu
|
Fuyuan Lyu, Yaochen Hu, Xing Tang, Yingxue Zhang, Ruiming Tang, Xue
Liu
|
Towards Automated Negative Sampling in Implicit Recommendation
| null | null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Negative sampling methods are vital in implicit recommendation models as they
allow us to obtain negative instances from massive unlabeled data. Most
existing approaches focus on sampling hard negative samples in various ways.
These studies are orthogonal to the recommendation model and implicit datasets.
However, such an idea contradicts the common belief in AutoML that the model
and dataset should be matched. Empirical experiments suggest that the
best-performing negative sampler depends on the implicit dataset and the
specific recommendation model. Hence, we propose a hypothesis that the negative
sampler should align with the capacity of the recommendation models as well as
the statistics of the datasets to achieve optimal performance. A mismatch
between these three would result in sub-optimal outcomes. An intuitive idea to
address the mismatch problem is to exhaustively select the best-performing
negative sampler given the model and dataset. However, such an approach is
computationally expensive and time-consuming, leaving the problem unsolved. In
this work, we propose the AutoSample framework that adaptively selects the
best-performing negative sampler among candidates. Specifically, we propose a
loss-to-instance approximation to transform the negative sampler search task
into the learning task over a weighted sum, enabling end-to-end training of the
model. We also design an adaptive search algorithm to extensively and
efficiently explore the search space. A specific initialization approach is
further devised to better utilize the model parameters obtained during the
search stage; this resembles curriculum learning and leads to better
performance and lower computational cost. We evaluate the proposed framework
on four benchmarks over three models. Extensive experiments demonstrate the
effectiveness and efficiency of our proposed framework.
|
[
{
"created": "Mon, 6 Nov 2023 21:05:00 GMT",
"version": "v1"
}
] |
2023-11-08
|
[
[
"Lyu",
"Fuyuan",
""
],
[
"Hu",
"Yaochen",
""
],
[
"Tang",
"Xing",
""
],
[
"Zhang",
"Yingxue",
""
],
[
"Tang",
"Ruiming",
""
],
[
"Liu",
"Xue",
""
]
] |
Negative sampling methods are vital in implicit recommendation models as they allow us to obtain negative instances from massive unlabeled data. Most existing approaches focus on sampling hard negative samples in various ways. These studies are orthogonal to the recommendation model and implicit datasets. However, such an idea contradicts the common belief in AutoML that the model and dataset should be matched. Empirical experiments suggest that the best-performing negative sampler depends on the implicit dataset and the specific recommendation model. Hence, we propose a hypothesis that the negative sampler should align with the capacity of the recommendation models as well as the statistics of the datasets to achieve optimal performance. A mismatch between these three would result in sub-optimal outcomes. An intuitive idea to address the mismatch problem is to exhaustively select the best-performing negative sampler given the model and dataset. However, such an approach is computationally expensive and time-consuming, leaving the problem unsolved. In this work, we propose the AutoSample framework that adaptively selects the best-performing negative sampler among candidates. Specifically, we propose a loss-to-instance approximation to transform the negative sampler search task into the learning task over a weighted sum, enabling end-to-end training of the model. We also design an adaptive search algorithm to extensively and efficiently explore the search space. A specific initialization approach is further devised to better utilize the model parameters obtained during the search stage; this resembles curriculum learning and leads to better performance and lower computational cost. We evaluate the proposed framework on four benchmarks over three models. Extensive experiments demonstrate the effectiveness and efficiency of our proposed framework.
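A minimal PyTorch sketch of one reading of the loss-to-instance approximation: learn softmax weights over the candidate samplers' losses so the selection is trained end-to-end (the weighting scheme here is an assumption, not the paper's exact formulation).

```python
import torch
import torch.nn.functional as F

class AutoSampleLoss(torch.nn.Module):
    """Search over candidate negative samplers by learning a softmax-
    weighted sum of their losses; a sketch under assumed details."""
    def __init__(self, n_samplers):
        super().__init__()
        self.alpha = torch.nn.Parameter(torch.zeros(n_samplers))

    def forward(self, per_sampler_losses):      # tensor of shape (n_samplers,)
        w = F.softmax(self.alpha, dim=0)        # learnable selection weights
        return (w * per_sampler_losses).sum()   # end-to-end differentiable
```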
|
cs/0208005
|
Ulrich Hillenbrand
|
Ulrich Hillenbrand and Gerd Hirzinger
|
Probabilistic Search for Object Segmentation and Recognition
|
18 pages, 5 figures
|
Proceedings ECCV 2002, Lecture Notes in Computer Science Vol.
2352, pp. 791-806
| null | null |
cs.CV
| null |
The problem of searching for a model-based scene interpretation is analyzed
within a probabilistic framework. Object models are formulated as generative
models for range data of the scene. A new statistical criterion, the truncated
object probability, is introduced to infer an optimal sequence of object
hypotheses to be evaluated for their match to the data. The truncated
probability is partly determined by prior knowledge of the objects and partly
learned from data. Some experiments on sequence quality and object segmentation
and recognition from stereo data are presented. The article recovers classic
concepts from object recognition (grouping, geometric hashing, alignment) from
the probabilistic perspective and adds insight into the optimal ordering of
object hypotheses for evaluation. Moreover, it introduces point-relation
densities, a key component of the truncated probability, as statistical models
of local surface shape.
|
[
{
"created": "Mon, 5 Aug 2002 10:57:09 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Hillenbrand",
"Ulrich",
""
],
[
"Hirzinger",
"Gerd",
""
]
] |
The problem of searching for a model-based scene interpretation is analyzed within a probabilistic framework. Object models are formulated as generative models for range data of the scene. A new statistical criterion, the truncated object probability, is introduced to infer an optimal sequence of object hypotheses to be evaluated for their match to the data. The truncated probability is partly determined by prior knowledge of the objects and partly learned from data. Some experiments on sequence quality and object segmentation and recognition from stereo data are presented. The article recovers classic concepts from object recognition (grouping, geometric hashing, alignment) from the probabilistic perspective and adds insight into the optimal ordering of object hypotheses for evaluation. Moreover, it introduces point-relation densities, a key component of the truncated probability, as statistical models of local surface shape.
|
2306.10028
|
Pengye Zhang
|
Huinan Sun, Guangliang Yu, Pengye Zhang, Bo Zhang, Xingxing Wang, Dong
Wang
|
Graph Based Long-Term And Short-Term Interest Model for Click-Through
Rate Prediction
|
CIKM 2022 accepted
| null |
10.1145/3511808.3557336
| null |
cs.IR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Click-through rate (CTR) prediction aims to predict the probability that the
user will click an item, which has been one of the key tasks in online
recommender and advertising systems. In such systems, rich user behavior (viz.
long- and short-term) has been proved to be of great value in capturing user
interests. Both industry and academia have paid much attention to this topic
and proposed different approaches to modeling long-term and short-term user
behavior data. But there are still some unresolved issues. More specifically,
(1) rule- and truncation-based methods for extracting information from
long-term behavior tend to cause information loss, and (2) extracting
information from short-term behavior using a single feedback type, regardless
of scenario, leads to information confusion and noise. To fill this gap, we
propose a Graph based Long-term and Short-term interest Model, termed GLSM. It
consists of a multi-interest graph structure for capturing long-term user
behavior, a multi-scenario heterogeneous sequence model for modeling
short-term information, and an adaptive fusion mechanism to fuse information
from long-term and short-term behaviors. In comprehensive experiments on
real-world datasets, GLSM achieved state-of-the-art (SOTA) scores on offline
metrics. At the same time, the GLSM algorithm has been deployed in our
industrial application, bringing 4.9% CTR and 4.3% GMV lift, which is
significant to the business.
|
[
{
"created": "Mon, 5 Jun 2023 07:04:34 GMT",
"version": "v1"
}
] |
2023-06-21
|
[
[
"Sun",
"Huinan",
""
],
[
"Yu",
"Guangliang",
""
],
[
"Zhang",
"Pengye",
""
],
[
"Zhang",
"Bo",
""
],
[
"Wang",
"Xingxing",
""
],
[
"Wang",
"Dong",
""
]
] |
Click-through rate (CTR) prediction aims to predict the probability that the user will click an item, which has been one of the key tasks in online recommender and advertising systems. In such systems, rich user behavior (viz. long- and short-term) has been proved to be of great value in capturing user interests. Both industry and academia have paid much attention to this topic and proposed different approaches to modeling long-term and short-term user behavior data. But there are still some unresolved issues. More specifically, (1) rule- and truncation-based methods for extracting information from long-term behavior tend to cause information loss, and (2) extracting information from short-term behavior using a single feedback type, regardless of scenario, leads to information confusion and noise. To fill this gap, we propose a Graph based Long-term and Short-term interest Model, termed GLSM. It consists of a multi-interest graph structure for capturing long-term user behavior, a multi-scenario heterogeneous sequence model for modeling short-term information, and an adaptive fusion mechanism to fuse information from long-term and short-term behaviors. In comprehensive experiments on real-world datasets, GLSM achieved state-of-the-art (SOTA) scores on offline metrics. At the same time, the GLSM algorithm has been deployed in our industrial application, bringing 4.9% CTR and 4.3% GMV lift, which is significant to the business.
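As a generic illustration of the adaptive fusion step, a gated combination of the two interest vectors; this is a common pattern, not necessarily GLSM's exact mechanism.

```python
import torch

class AdaptiveFusion(torch.nn.Module):
    """Gated fusion of long- and short-term interest vectors; a generic
    sketch of adaptive fusion, with the gate form assumed."""
    def __init__(self, dim):
        super().__init__()
        self.gate = torch.nn.Linear(2 * dim, dim)

    def forward(self, long_term, short_term):
        g = torch.sigmoid(self.gate(torch.cat([long_term, short_term], -1)))
        return g * long_term + (1 - g) * short_term
```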
|
2208.11210
|
Andrea Gemelli
|
Davide del Bimbo and Andrea Gemelli and Simone Marinai
|
Data augmentation on graphs for table type classification
|
S+SSPR 2022
| null |
10.1007/978-3-031-23028-8_25
| null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Tables are widely used in documents because of their compact and structured
representation of information. In particular, in scientific papers, tables can
sum up novel discoveries and summarize experimental results, making the
research comparable and easily understandable by scholars. Since the layout of
tables is highly variable, it would be useful to interpret their content and
classify them into categories. This could be helpful to directly extract
information from scientific papers, for instance to compare the performance of
models given the result tables in their papers. In this work, we address the
classification of tables using a Graph Neural Network, exploiting the table
structure for the message passing algorithm in use. We evaluate our model on a
subset of the Tab2Know dataset. Since it contains only a few manually annotated
examples, we propose data augmentation techniques that operate directly on the
table graph structures. We achieve promising preliminary results, proposing a data
augmentation method suitable for graph-based table representation.
|
[
{
"created": "Tue, 23 Aug 2022 21:54:46 GMT",
"version": "v1"
}
] |
2023-02-21
|
[
[
"del Bimbo",
"Davide",
""
],
[
"Gemelli",
"Andrea",
""
],
[
"Marinai",
"Simone",
""
]
] |
Tables are widely used in documents because of their compact and structured representation of information. In particular, in scientific papers, tables can sum up novel discoveries and summarize experimental results, making the research comparable and easily understandable by scholars. Since the layout of tables is highly variable, it would be useful to interpret their content and classify them into categories. This could be helpful to directly extract information from scientific papers, for instance to compare the performance of models given the result tables in their papers. In this work, we address the classification of tables using a Graph Neural Network, exploiting the table structure for the message passing algorithm in use. We evaluate our model on a subset of the Tab2Know dataset. Since it contains only a few manually annotated examples, we propose data augmentation techniques that operate directly on the table graph structures. We achieve promising preliminary results, proposing a data augmentation method suitable for graph-based table representation.
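One simple augmentation that operates directly on a table graph is random node dropping; the sketch below (with cells as nodes and structural links as edges) is a generic operator, not necessarily one of the paper's.

```python
import random
import networkx as nx

def drop_nodes(g: nx.Graph, p: float = 0.1, seed: int = 0) -> nx.Graph:
    """Return a copy of the table graph with each node independently
    dropped with probability p. A generic graph augmentation sketch."""
    rng = random.Random(seed)
    aug = g.copy()
    drop = [n for n in aug.nodes if rng.random() < p]
    aug.remove_nodes_from(drop)
    return aug
```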
|
2105.11794
|
Diana C. Hernandez-Bocanegra
|
Diana C. Hernandez-Bocanegra and Juergen Ziegler
|
Effects of interactivity and presentation on review-based explanations
for recommendations
| null |
Human-Computer Interaction INTERACT 2021. INTERACT 2021. Lecture
Notes in Computer Science, vol 12933. Springer, Cham., 597-618
|
10.1007/978-3-030-85616-8_35
| null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
User reviews have become an important source for recommending and explaining
products or services. Particularly, providing explanations based on user
reviews may improve users' perception of a recommender system (RS). However,
little is known about how review-based explanations can be effectively and
efficiently presented to users of RS. We investigate the potential of
interactive explanations in review-based RS in the domain of hotels, and
propose an explanation scheme inspired by dialog models and formal argument
structures. Additionally, we also address the combined effect of interactivity
and different presentation styles (i.e. using only text, a bar chart or a
table), as well as the influence that different user characteristics might have
on users' perception of the system and its explanations. To such effect, we
implemented a review-based RS using a matrix factorization explanatory method,
and conducted a user study. Our results show that providing more interactive
explanations in review-based RS has a significant positive influence on the
perception of explanation quality, effectiveness and trust in the system by
users, and that user characteristics such as rational decision-making style and
social awareness also have a significant influence on this perception.
|
[
{
"created": "Tue, 25 May 2021 09:54:42 GMT",
"version": "v1"
},
{
"created": "Fri, 3 Sep 2021 13:42:34 GMT",
"version": "v2"
}
] |
2021-09-06
|
[
[
"Hernandez-Bocanegra",
"Diana C.",
""
],
[
"Ziegler",
"Juergen",
""
]
] |
User reviews have become an important source for recommending and explaining products or services. Particularly, providing explanations based on user reviews may improve users' perception of a recommender system (RS). However, little is known about how review-based explanations can be effectively and efficiently presented to users of RS. We investigate the potential of interactive explanations in review-based RS in the domain of hotels, and propose an explanation scheme inspired by dialog models and formal argument structures. Additionally, we also address the combined effect of interactivity and different presentation styles (i.e. using only text, a bar chart or a table), as well as the influence that different user characteristics might have on users' perception of the system and its explanations. To such effect, we implemented a review-based RS using a matrix factorization explanatory method, and conducted a user study. Our results show that providing more interactive explanations in review-based RS has a significant positive influence on the perception of explanation quality, effectiveness and trust in the system by users, and that user characteristics such as rational decision-making style and social awareness also have a significant influence on this perception.
|
1804.05097
|
Martin Holm Cservenka M.Sc.
|
Martin Holm Cservenka
|
Design and Implementation of Dynamic Memory Management in a Reversible
Object-Oriented Programming Language
|
Master's Thesis, 231 pages, 63 figures
| null | null | null |
cs.PL cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The reversible object-oriented programming language (ROOPL) was presented in
late 2016 and proved that object-oriented programming paradigms work in the
reversible setting. The language featured simple, statically scoped objects,
which made non-trivial programs tedious, if not impossible, to write using the
limited tools provided. We introduce an extension to ROOPL in the form of the
new language ROOPL++, featuring dynamic memory management and fixed-size
arrays for increased language expressiveness. The language is a superset of
ROOPL and has been formally defined by its language semantics, type system and
computational universality. Considerations for reversible memory manager
layouts are discussed and ultimately lead to the selection of the Buddy Memory
layout. Translations of the extensions added in ROOPL++ to the reversible
assembly language PISA are presented to provide garbage-free computations. The
dynamic memory management extension successfully increases the expressiveness
of ROOPL and, as a result, shows that non-trivial reversible data structures,
such as binary trees and doubly-linked lists, are feasible and do not
contradict the reversible computing paradigm.
|
[
{
"created": "Thu, 12 Apr 2018 00:23:21 GMT",
"version": "v1"
}
] |
2018-04-17
|
[
[
"Cservenka",
"Martin Holm",
""
]
] |
The reversible object-oriented programming language (ROOPL) was presented in late 2016 and proved that object-oriented programming paradigms work in the reversible setting. The language featured simple, statically scoped objects, which made non-trivial programs tedious, if not impossible, to write using the limited tools provided. We introduce an extension to ROOPL in the form of the new language ROOPL++, featuring dynamic memory management and fixed-size arrays for increased language expressiveness. The language is a superset of ROOPL and has been formally defined by its language semantics, type system and computational universality. Considerations for reversible memory manager layouts are discussed and ultimately lead to the selection of the Buddy Memory layout. Translations of the extensions added in ROOPL++ to the reversible assembly language PISA are presented to provide garbage-free computations. The dynamic memory management extension successfully increases the expressiveness of ROOPL and, as a result, shows that non-trivial reversible data structures, such as binary trees and doubly-linked lists, are feasible and do not contradict the reversible computing paradigm.
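Two defining operations of a buddy memory layout are sketched below: block sizes are rounded up to powers of two, and a block's buddy is found by flipping one address bit. The XOR step is self-inverse, which is one reason such a layout is a natural candidate for a reversible memory manager (our reading, not a claim from the thesis).

```python
def round_up_pow2(n: int) -> int:
    """Smallest power of two >= n (buddy systems allocate in such sizes)."""
    p = 1
    while p < n:
        p <<= 1
    return p

def buddy_of(addr: int, size: int) -> int:
    """Address of a block's buddy, assuming `addr` is the start of a block
    aligned to `size` (a power of two). XOR is its own inverse, so
    splitting and merging are trivially invertible."""
    return addr ^ size
```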
|
2204.08474
|
Raphael Petegrosso
|
Raphael Petegrosso, Vasistakrishna Baderdinni, Thibaud Senechal,
Benjamin L. Bullough
|
AB/BA analysis: A framework for estimating keyword spotting recall
improvement while maintaining audio privacy
|
Accepted to NAACL 2022 Industry Track
| null | null | null |
cs.SD cs.CR cs.LG eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Evaluation of keyword spotting (KWS) systems that detect keywords in speech
is a challenging task under realistic privacy constraints. The KWS is designed
to only collect data when the keyword is present, limiting the availability of
hard samples that may contain false negatives, and preventing direct estimation
of model recall from production data. Alternatively, complementary data
collected from other sources may not be fully representative of the real
application. In this work, we propose an evaluation technique which we call
AB/BA analysis. Our framework evaluates a candidate KWS model B against a
baseline model A, using cross-dataset offline decoding for relative recall
estimation, without requiring negative examples. Moreover, we propose a
formulation with assumptions that allow estimation of relative false positive
rate between models with low variance even when the number of false positives
is small. Finally, we propose to leverage machine-generated soft labels, in a
technique we call Semi-Supervised AB/BA analysis, that improves the analysis
time, privacy, and cost. Experiments with both simulation and real data show
that AB/BA analysis is successful at measuring recall improvement in
conjunction with the trade-off in relative false positive rate.
|
[
{
"created": "Mon, 18 Apr 2022 13:52:22 GMT",
"version": "v1"
}
] |
2022-04-20
|
[
[
"Petegrosso",
"Raphael",
""
],
[
"Baderdinni",
"Vasistakrishna",
""
],
[
"Senechal",
"Thibaud",
""
],
[
"Bullough",
"Benjamin L.",
""
]
] |
Evaluation of keyword spotting (KWS) systems that detect keywords in speech is a challenging task under realistic privacy constraints. The KWS is designed to only collect data when the keyword is present, limiting the availability of hard samples that may contain false negatives, and preventing direct estimation of model recall from production data. Alternatively, complementary data collected from other sources may not be fully representative of the real application. In this work, we propose an evaluation technique which we call AB/BA analysis. Our framework evaluates a candidate KWS model B against a baseline model A, using cross-dataset offline decoding for relative recall estimation, without requiring negative examples. Moreover, we propose a formulation with assumptions that allow estimation of relative false positive rate between models with low variance even when the number of false positives is small. Finally, we propose to leverage machine-generated soft labels, in a technique we call Semi-Supervised AB/BA analysis, that improves the analysis time, privacy, and cost. Experiments with both simulation and real data show that AB/BA analysis is successful at measuring recall improvement in conjunction with the trade-off in relative false positive rate.
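The counting logic behind the relative recall estimate can be sketched as follows; the conditional-independence assumption is our simplification of the paper's formulation, which states its own assumptions more carefully.

```python
def relative_recall(a_hits_on_b, n_b, b_hits_on_a, n_a):
    """Estimate recall_B / recall_A from cross-decoding counts.
    n_a: positives collected by model A; b_hits_on_a: how many of those
    model B also accepts offline (and symmetrically for n_b, a_hits_on_b).
    If acceptances are conditionally independent given a true keyword:
        recall_B / recall_A ~= (b_hits_on_a / n_a) / (a_hits_on_b / n_b)
    """
    return (b_hits_on_a / n_a) / (a_hits_on_b / n_b)
```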
|
1702.08104
|
Mike Eichhorn
|
Mike Eichhorn, Christopher D. Williams, Ralf Bachmayer, Brad de Young
|
A Mission Planning System for the AUV "SLOCUM Glider" for the
Newfoundland and Labrador Shelf
|
9 pages, 13 figures, OCEANS 2010 IEEE - Sydney, 24-27 May 2010
| null |
10.1109/OCEANSSYD.2010.5603919
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a system for mission planning for an autonomous
underwater vehicle in time-varying ocean currents. The mission planning system
is designed for the AUV "SLOCUM Glider" to collect oceanographic data along the
Newfoundland and Labrador Shelf. The data will be used in conjunction with a
numerical ocean model currently under development by the Department of
Fisheries and Oceans Canada. This allows for the validation and the
modification of existing ocean current and climate models as well as the design
of new models with the aim of improving the accuracy of forecasts. The use of
the ocean current forecast data in netCDF format in an ocean current model, the
algorithms which consider glider-specific behaviour, details of the program's
technical implementation in C++, and preliminary results are described.
|
[
{
"created": "Sun, 26 Feb 2017 22:38:50 GMT",
"version": "v1"
}
] |
2017-02-28
|
[
[
"Eichhorn",
"Mike",
""
],
[
"Williams",
"Christopher D.",
""
],
[
"Bachmayer",
"Ralf",
""
],
[
"de Young",
"Brad",
""
]
] |
This paper presents a system for mission planning for an autonomous underwater vehicle in time-varying ocean currents. The mission planning system is designed for the AUV "SLOCUM Glider" to collect oceanographic data along the Newfoundland and Labrador Shelf. The data will be used in conjunction with a numerical ocean model currently under development by the Department of Fisheries and Oceans Canada. This allows for the validation and the modification of existing ocean current and climate models as well as the design of new models with the aim of improving the accuracy of forecasts. The use of the ocean current forecast data in netCDF format in an ocean current model, the algorithms which consider glider-specific behaviour, details of the program's technical implementation in C++, and preliminary results are described.
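Although the paper's implementation is in C++, the netCDF lookup at the core of such a current model can be sketched in a few lines of Python; the variable names and dimension order below are assumptions, since ocean-model files vary.

```python
from netCDF4 import Dataset
import numpy as np

def current_at(path, ti, zi, yi, xi):
    """Read eastward/northward current components at a (time, depth, y, x)
    grid index from a netCDF forecast file. The variable names 'u' and 'v'
    and the dimension order are assumptions."""
    with Dataset(path) as nc:
        u = float(nc.variables["u"][ti, zi, yi, xi])
        v = float(nc.variables["v"][ti, zi, yi, xi])
    return np.hypot(u, v), np.arctan2(v, u)   # current speed and direction
```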
|
2204.11302
|
Uli Fahrenberg
|
Uli Fahrenberg
|
A Generic Approach to Quantitative Verification
|
Habilitation thesis
| null | null | null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This thesis is concerned with quantitative verification, that is, the
verification of quantitative properties of quantitative systems. These systems
are found in numerous applications, and their quantitative verification is
important, but also rather challenging. In particular, given that most systems
found in applications are rather big, compositionality and incrementality of
verification methods are essential.
In order to ensure robustness of verification, we replace the Boolean yes-no
answers of standard verification with distances. Depending on the application
context, many different types of distances are being employed in quantitative
verification. Consequently, there is a need for a general theory of system
distances which abstracts away from the concrete distances and develops
quantitative verification at a level independent of the distance. It is our
view that in a theory of quantitative verification, the quantitative aspects
should be treated just as much as input to a verification problem as the
qualitative aspects are. In this work we develop such a general theory of
quantitative verification. We assume as input a distance between traces, or
executions, and then employ the theory of games with quantitative objectives to
define distances between quantitative systems. Different versions of the
quantitative bisimulation game give rise to different types of distances,
viz.~bisimulation distance, simulation distance, trace equivalence distance,
etc., enabling us to construct a quantitative generalization of van Glabbeek's
linear-time--branching-time spectrum. We also extend our general theory of
quantitative verification to a theory of quantitative specifications. For this
we use modal transition systems, and we develop the quantitative properties of
the usual operators for behavioral specification theories.
|
[
{
"created": "Sun, 24 Apr 2022 15:28:21 GMT",
"version": "v1"
}
] |
2022-04-26
|
[
[
"Fahrenberg",
"Uli",
""
]
] |
This thesis is concerned with quantitative verification, that is, the verification of quantitative properties of quantitative systems. These systems are found in numerous applications, and their quantitative verification is important, but also rather challenging. In particular, given that most systems found in applications are rather big, compositionality and incrementality of verification methods are essential. In order to ensure robustness of verification, we replace the Boolean yes-no answers of standard verification with distances. Depending on the application context, many different types of distances are being employed in quantitative verification. Consequently, there is a need for a general theory of system distances which abstracts away from the concrete distances and develops quantitative verification at a level independent of the distance. It is our view that in a theory of quantitative verification, the quantitative aspects should be treated just as much as input to a verification problem as the qualitative aspects are. In this work we develop such a general theory of quantitative verification. We assume as input a distance between traces, or executions, and then employ the theory of games with quantitative objectives to define distances between quantitative systems. Different versions of the quantitative bisimulation game give rise to different types of distances, viz.~bisimulation distance, simulation distance, trace equivalence distance, etc., enabling us to construct a quantitative generalization of van Glabbeek's linear-time--branching-time spectrum. We also extend our general theory of quantitative verification to a theory of quantitative specifications. For this we use modal transition systems, and we develop the quantitative properties of the usual operators for behavioral specification theories.
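For concreteness, one standard fixed-point form of a discounted bisimulation distance of the kind such quantitative games induce, with d_l a distance on labels and discount factor 0 < lambda < 1 (one common formulation, not the only one treated in the thesis):

```latex
% Discounted bisimulation distance as the least fixed point of:
\[
d(s,t) = \max\Big(
  \sup_{s \xrightarrow{a} s'} \inf_{t \xrightarrow{b} t'}
    \max\big(d_\ell(a,b),\, \lambda\, d(s',t')\big),\;
  \sup_{t \xrightarrow{b} t'} \inf_{s \xrightarrow{a} s'}
    \max\big(d_\ell(a,b),\, \lambda\, d(s',t')\big)
\Big).
\]
```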
|
2311.01477
|
Liqiang Jing
|
Liqiang Jing and Ruosen Li and Yunmo Chen and Mengzhao Jia and Xinya
Du
|
FAITHSCORE: Evaluating Hallucinations in Large Vision-Language Models
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce FAITHSCORE (Faithfulness to Atomic Image Facts Score), a
reference-free and fine-grained evaluation metric that measures the
faithfulness of the generated free-form answers from large vision-language
models (LVLMs). The FAITHSCORE evaluation first identifies sub-sentences
containing descriptive statements that need to be verified, then extracts a
comprehensive list of atomic facts from these sub-sentences, and finally
conducts consistency verification between fine-grained atomic facts and the
input image. Meta-evaluation demonstrates that our metric highly correlates
with human judgments of faithfulness. We collect two benchmark datasets (i.e.
LLaVA-1k and MSCOCO-Cap) for evaluating LVLMs instruction-following
hallucinations. We measure hallucinations in state-of-the-art LVLMs with
FAITHSCORE on the datasets. Results reveal that current systems are prone to
generate hallucinated content unfaithful to the image, which leaves room for
future improvements. Further, we find that current LVLMs, despite doing well
on color and counting, still struggle with long answers, relations, and multiple
objects.
|
[
{
"created": "Thu, 2 Nov 2023 01:21:45 GMT",
"version": "v1"
}
] |
2023-11-06
|
[
[
"Jing",
"Liqiang",
""
],
[
"Li",
"Ruosen",
""
],
[
"Chen",
"Yunmo",
""
],
[
"Jia",
"Mengzhao",
""
],
[
"Du",
"Xinya",
""
]
] |
We introduce FAITHSCORE (Faithfulness to Atomic Image Facts Score), a reference-free and fine-grained evaluation metric that measures the faithfulness of the generated free-form answers from large vision-language models (LVLMs). The FAITHSCORE evaluation first identifies sub-sentences containing descriptive statements that need to be verified, then extracts a comprehensive list of atomic facts from these sub-sentences, and finally conducts consistency verification between fine-grained atomic facts and the input image. Meta-evaluation demonstrates that our metric highly correlates with human judgments of faithfulness. We collect two benchmark datasets (i.e. LLaVA-1k and MSCOCO-Cap) for evaluating LVLMs instruction-following hallucinations. We measure hallucinations in state-of-the-art LVLMs with FAITHSCORE on the datasets. Results reveal that current systems are prone to generate hallucinated content unfaithful to the image, which leaves room for future improvements. Further, we find that current LVLMs, despite doing well on color and counting, still struggle with long answers, relations, and multiple objects.
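The pipeline described in the abstract reduces to a verified-fraction score; in the sketch below, `extract_facts` and `verify_fact` stand in for the LLM and visual-verification components and are hypothetical callables.

```python
def faithscore(answer, image, extract_facts, verify_fact):
    """Sketch of the pipeline as described in the abstract: split the
    answer into atomic descriptive facts, verify each against the image,
    and report the verified fraction."""
    facts = extract_facts(answer)           # list of atomic statements
    if not facts:
        return 1.0                          # nothing descriptive to check
    ok = sum(1 for f in facts if verify_fact(image, f))
    return ok / len(facts)
```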
|
1801.01552
|
Matilde Marcolli
|
Yuri I. Manin and Matilde Marcolli
|
Asymptotic bounds for spherical codes
|
34 pages amstex, 3 figures
| null |
10.1070/IM8739
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The set of all error-correcting codes C over a fixed finite alphabet F of
cardinality q determines the set of code points in the unit square with
coordinates (R(C), delta (C)):= (relative transmission rate, relative minimal
distance). The central problem of the theory of such codes consists in
simultaneously maximizing the transmission rate of the code and the relative
minimum Hamming distance between two different code words. The classical
approach to this problem, explored in a vast literature, consists in inventing
explicit constructions of "good codes" and comparing new classes of codes with
earlier ones. A less classical approach studies the geometry of the whole set
of code points (R, delta) (with q fixed), at first independently of its
computability properties, and only afterwards turning to the problems of
computability, analogies with statistical physics, etc. The main purpose of
this article consists in extending this latter strategy to the domain of
spherical codes.
|
[
{
"created": "Thu, 4 Jan 2018 21:43:02 GMT",
"version": "v1"
}
] |
2019-09-04
|
[
[
"Manin",
"Yuri I.",
""
],
[
"Marcolli",
"Matilde",
""
]
] |
The set of all error-correcting codes C over a fixed finite alphabet F of cardinality q determines the set of code points in the unit square with coordinates (R(C), delta (C)):= (relative transmission rate, relative minimal distance). The central problem of the theory of such codes consists in simultaneously maximizing the transmission rate of the code and the relative minimum Hamming distance between two different code words. The classical approach to this problem, explored in a vast literature, consists in inventing explicit constructions of "good codes" and comparing new classes of codes with earlier ones. A less classical approach studies the geometry of the whole set of code points (R, delta) (with q fixed), at first independently of its computability properties, and only afterwards turning to the problems of computability, analogies with statistical physics, etc. The main purpose of this article consists in extending this latter strategy to the domain of spherical codes.
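For reference, the standard definitions behind the code-point coordinates used above, for a code C contained in F^n with |F| = q and minimum Hamming distance d(C):

```latex
\[
R(C) = \frac{\log_q |C|}{n}, \qquad
\delta(C) = \frac{d(C)}{n}, \qquad
\big(R(C), \delta(C)\big) \in [0,1]^2 .
\]
```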
|
2309.10171
|
Yunhao Yang
|
Yunhao Yang, Jean-Rapha\"el Gaglione, Sandeep Chinchali, Ufuk Topcu
|
Specification-Driven Video Search via Foundation Models and Formal
Verification
|
12 pages, 18 figures
| null | null | null |
cs.CV cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
The increasing abundance of video data enables users to search for events of
interest, e.g., emergency incidents. Meanwhile, it raises new concerns, such as
the need for preserving privacy. Existing approaches to video search require
either manual inspection or a deep learning model with massive training. We
develop a method that uses recent advances in vision and language models, as
well as formal methods, to search for events of interest in video clips
automatically and efficiently. The method consists of an algorithm to map
text-based event descriptions into linear temporal logic over finite traces
(LTL$_f$) and an algorithm to construct an automaton encoding the video
information. Then, the method formally verifies the automaton representing the
video against the LTL$_f$ specifications and adds the pertinent video clips to
the search result if the automaton satisfies the specifications. We provide
qualitative and quantitative analysis to demonstrate the video-searching
capability of the proposed method. It achieves over 90 percent precision in
searching over privacy-sensitive videos and a state-of-the-art autonomous
driving dataset.
|
[
{
"created": "Mon, 18 Sep 2023 21:40:08 GMT",
"version": "v1"
}
] |
2023-09-20
|
[
[
"Yang",
"Yunhao",
""
],
[
"Gaglione",
"Jean-Raphaël",
""
],
[
"Chinchali",
"Sandeep",
""
],
[
"Topcu",
"Ufuk",
""
]
] |
The increasing abundance of video data enables users to search for events of interest, e.g., emergency incidents. Meanwhile, it raises new concerns, such as the need for preserving privacy. Existing approaches to video search require either manual inspection or a deep learning model with massive training. We develop a method that uses recent advances in vision and language models, as well as formal methods, to search for events of interest in video clips automatically and efficiently. The method consists of an algorithm to map text-based event descriptions into linear temporal logic over finite traces (LTL$_f$) and an algorithm to construct an automaton encoding the video information. Then, the method formally verifies the automaton representing the video against the LTL$_f$ specifications and adds the pertinent video clips to the search result if the automaton satisfies the specifications. We provide qualitative and quantitative analysis to demonstrate the video-searching capability of the proposed method. It achieves over 90 percent precision in searching over privacy-sensitive videos and a state-of-the-art autonomous driving dataset.
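As a toy stand-in for the automaton check, the snippet below tests a finite trace of per-clip label sets against an LTL_f "eventually" specification; the real method builds an automaton from the video and formally verifies it against a formula derived from the text query.

```python
def satisfies_eventually(trace, event):
    """Does the finite trace of per-frame label sets satisfy F event
    (LTL_f 'eventually')? A toy illustration of finite-trace checking."""
    return any(event in labels for labels in trace)

# satisfies_eventually([{"car"}, {"car", "smoke"}], "smoke")  -> True
```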
|