| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1408.1237 | Balaji Vasan Srinivasan | Balaji Vasan Srinivasan, Qi Hu, Nail A. Gumerov, Raghu Murtugudde,
Ramani Duraiswami | Preconditioned Krylov solvers for kernel regression | null | null | null | null | cs.NA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A primary computational problem in kernel regression is solution of a dense
linear system with the $N\times N$ kernel matrix. Because a direct solution has
an $O(N^3)$ cost, iterative Krylov methods are often used with fast
matrix-vector products. For poorly conditioned problems, convergence of the
iteration is slow and preconditioning becomes necessary. We investigate
preconditioning from the viewpoint of scalability and efficiency. The problems
that conventional preconditioners face when applied to kernel methods are
demonstrated. A \emph{novel flexible preconditioner} that not only improves
convergence but also allows utilization of fast kernel matrix-vector products
is introduced. The performance of this preconditioner is first illustrated on
synthetic data, and subsequently on a suite of test problems in kernel
regression and geostatistical kriging.
| [
{
"created": "Wed, 6 Aug 2014 10:39:59 GMT",
"version": "v1"
}
] | 2014-08-07 | [
[
"Srinivasan",
"Balaji Vasan",
""
],
[
"Hu",
"Qi",
""
],
[
"Gumerov",
"Nail A.",
""
],
[
"Murtugudde",
"Raghu",
""
],
[
"Duraiswami",
"Ramani",
""
]
] | A primary computational problem in kernel regression is solution of a dense linear system with the $N\times N$ kernel matrix. Because a direct solution has an $O(N^3)$ cost, iterative Krylov methods are often used with fast matrix-vector products. For poorly conditioned problems, convergence of the iteration is slow and preconditioning becomes necessary. We investigate preconditioning from the viewpoint of scalability and efficiency. The problems that conventional preconditioners face when applied to kernel methods are demonstrated. A \emph{novel flexible preconditioner} that not only improves convergence but also allows utilization of fast kernel matrix-vector products is introduced. The performance of this preconditioner is first illustrated on synthetic data, and subsequently on a suite of test problems in kernel regression and geostatistical kriging. |
1802.09001 | Batya Kenig | Batya Kenig | The Complexity of the Possible Winner Problem over Partitioned
Preferences | null | null | null | null | cs.GT cs.CC cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Possible-Winner problem asks, given an election where the voters'
preferences over the set of candidates are partially specified, whether a
distinguished candidate can become a winner. In this work, we consider the
computational complexity of Possible-Winner under the assumption that the voter
preferences are $partitioned$. That is, we assume that every voter provides a
complete order over sets of incomparable candidates (e.g., candidates are
ranked by their level of education). We consider elections with partitioned
profiles over positional scoring rules, with an unbounded number of candidates,
and unweighted voters. Our first result is a polynomial time algorithm for
voting rules with $2$ distinct values, which include the well-known
$k$-approval voting rule. We then go on to prove NP-hardness for a class of
rules that contains all voting rules that produce scoring vectors with at least
$4$ distinct values.
| [
{
"created": "Sun, 25 Feb 2018 13:21:40 GMT",
"version": "v1"
}
] | 2018-02-27 | [
[
"Kenig",
"Batya",
""
]
] | The Possible-Winner problem asks, given an election where the voters' preferences over the set of candidates are partially specified, whether a distinguished candidate can become a winner. In this work, we consider the computational complexity of Possible-Winner under the assumption that the voter preferences are $partitioned$. That is, we assume that every voter provides a complete order over sets of incomparable candidates (e.g., candidates are ranked by their level of education). We consider elections with partitioned profiles over positional scoring rules, with an unbounded number of candidates, and unweighted voters. Our first result is a polynomial time algorithm for voting rules with $2$ distinct values, which include the well-known $k$-approval voting rule. We then go on to prove NP-hardness for a class of rules that contains all voting rules that produce scoring vectors with at least $4$ distinct values. |
2202.09836 | Alexander Steen | Alexander Steen, David Fuenmayor, Tobias Glei{\ss}ner, Geoff
Sutcliffe, Christoph Benzm\"uller | Automated Reasoning in Non-classical Logics in the TPTP World | 21 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Non-classical logics are used in a wide spectrum of disciplines, including
artificial intelligence, computer science, mathematics, and philosophy. The
de-facto standard infrastructure for automated theorem proving, the TPTP World,
currently supports only classical logics. Similar standards for non-classical
logic reasoning do not exist (yet). This hampers practical development of
reasoning systems, and limits their interoperability and application. This
paper describes the latest extension of the TPTP World, which provides
languages and infrastructure for reasoning in non-classical logics. The
extensions integrate seamlessly with the existing TPTP World.
| [
{
"created": "Sun, 20 Feb 2022 15:29:30 GMT",
"version": "v1"
}
] | 2022-02-22 | [
[
"Steen",
"Alexander",
""
],
[
"Fuenmayor",
"David",
""
],
[
"Gleißner",
"Tobias",
""
],
[
"Sutcliffe",
"Geoff",
""
],
[
"Benzmüller",
"Christoph",
""
]
] | Non-classical logics are used in a wide spectrum of disciplines, including artificial intelligence, computer science, mathematics, and philosophy. The de-facto standard infrastructure for automated theorem proving, the TPTP World, currently supports only classical logics. Similar standards for non-classical logic reasoning do not exist (yet). This hampers practical development of reasoning systems, and limits their interoperability and application. This paper describes the latest extension of the TPTP World, which provides languages and infrastructure for reasoning in non-classical logics. The extensions integrate seamlessly with the existing TPTP World. |
1606.07143 | Justin Hsu | Gilles Barthe, No\'emie Fong, Marco Gaboardi, Benjamin Gr\'egoire,
Justin Hsu, Pierre-Yves Strub | Advanced Probabilistic Couplings for Differential Privacy | null | null | 10.1145/2976749.2978391 | null | cs.LO cs.DS cs.PL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Differential privacy is a promising formal approach to data privacy, which
provides a quantitative bound on the privacy cost of an algorithm that operates
on sensitive information. Several tools have been developed for the formal
verification of differentially private algorithms, including program logics and
type systems. However, these tools do not capture fundamental techniques that
have emerged in recent years, and cannot be used for reasoning about
cutting-edge differentially private algorithms. Existing techniques fail to
handle three broad classes of algorithms: 1) algorithms where privacy depends
on accuracy guarantees, 2) algorithms that are analyzed with the advanced
composition theorem, which shows slower growth in the privacy cost, 3)
algorithms that interactively accept adaptive inputs.
We address these limitations with a new formalism extending apRHL, a
relational program logic that has been used for proving differential privacy of
non-interactive algorithms, and incorporating aHL, a (non-relational) program
logic for accuracy properties. We illustrate our approach through a single
running example, which exemplifies the three classes of algorithms and explores
new variants of the Sparse Vector technique, a well-studied algorithm from the
privacy literature. We implement our logic in EasyCrypt, and formally verify
privacy. We also introduce a novel coupling technique called \emph{optimal
subset coupling} that may be of independent interest.
| [
{
"created": "Thu, 23 Jun 2016 00:11:57 GMT",
"version": "v1"
},
{
"created": "Wed, 17 Aug 2016 16:22:57 GMT",
"version": "v2"
}
] | 2018-03-16 | [
[
"Barthe",
"Gilles",
""
],
[
"Fong",
"Noémie",
""
],
[
"Gaboardi",
"Marco",
""
],
[
"Grégoire",
"Benjamin",
""
],
[
"Hsu",
"Justin",
""
],
[
"Strub",
"Pierre-Yves",
""
]
] | Differential privacy is a promising formal approach to data privacy, which provides a quantitative bound on the privacy cost of an algorithm that operates on sensitive information. Several tools have been developed for the formal verification of differentially private algorithms, including program logics and type systems. However, these tools do not capture fundamental techniques that have emerged in recent years, and cannot be used for reasoning about cutting-edge differentially private algorithms. Existing techniques fail to handle three broad classes of algorithms: 1) algorithms where privacy depends on accuracy guarantees, 2) algorithms that are analyzed with the advanced composition theorem, which shows slower growth in the privacy cost, 3) algorithms that interactively accept adaptive inputs. We address these limitations with a new formalism extending apRHL, a relational program logic that has been used for proving differential privacy of non-interactive algorithms, and incorporating aHL, a (non-relational) program logic for accuracy properties. We illustrate our approach through a single running example, which exemplifies the three classes of algorithms and explores new variants of the Sparse Vector technique, a well-studied algorithm from the privacy literature. We implement our logic in EasyCrypt, and formally verify privacy. We also introduce a novel coupling technique called \emph{optimal subset coupling} that may be of independent interest. |
2303.04554 | Leonardo Scabini | Leonardo Scabini, Kallil M. Zielinski, Lucas C. Ribas, Wesley N.
Gon\c{c}alves, Bernard De Baets, Odemir M. Bruno | RADAM: Texture Recognition through Randomized Aggregated Encoding of
Deep Activation Maps | 17 pages, 3 figures, submitted to peer-review journal | null | null | null | cs.CV cs.NE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Texture analysis is a classical yet challenging task in computer vision for
which deep neural networks are actively being applied. Most approaches are
based on building feature aggregation modules around a pre-trained backbone and
then fine-tuning the new architecture on specific texture recognition tasks.
Here we propose a new method named \textbf{R}andom encoding of
\textbf{A}ggregated \textbf{D}eep \textbf{A}ctivation \textbf{M}aps (RADAM)
which extracts rich texture representations without ever changing the backbone.
The technique consists of encoding the output at different depths of a
pre-trained deep convolutional network using a Randomized Autoencoder (RAE).
The RAE is trained locally to each image using a closed-form solution, and its
decoder weights are used to compose a 1-dimensional texture representation that
is fed into a linear SVM. This means that no fine-tuning or backpropagation is
needed. We explore RADAM on several texture benchmarks and achieve
state-of-the-art results with different computational budgets. Our results
suggest that pre-trained backbones may not require additional fine-tuning for
texture recognition if their learned representations are better encoded.
| [
{
"created": "Wed, 8 Mar 2023 13:09:03 GMT",
"version": "v1"
}
] | 2023-03-09 | [
[
"Scabini",
"Leonardo",
""
],
[
"Zielinski",
"Kallil M.",
""
],
[
"Ribas",
"Lucas C.",
""
],
[
"Gonçalves",
"Wesley N.",
""
],
[
"De Baets",
"Bernard",
""
],
[
"Bruno",
"Odemir M.",
""
]
] | Texture analysis is a classical yet challenging task in computer vision for which deep neural networks are actively being applied. Most approaches are based on building feature aggregation modules around a pre-trained backbone and then fine-tuning the new architecture on specific texture recognition tasks. Here we propose a new method named \textbf{R}andom encoding of \textbf{A}ggregated \textbf{D}eep \textbf{A}ctivation \textbf{M}aps (RADAM) which extracts rich texture representations without ever changing the backbone. The technique consists of encoding the output at different depths of a pre-trained deep convolutional network using a Randomized Autoencoder (RAE). The RAE is trained locally to each image using a closed-form solution, and its decoder weights are used to compose a 1-dimensional texture representation that is fed into a linear SVM. This means that no fine-tuning or backpropagation is needed. We explore RADAM on several texture benchmarks and achieve state-of-the-art results with different computational budgets. Our results suggest that pre-trained backbones may not require additional fine-tuning for texture recognition if their learned representations are better encoded. |
1401.6482 | Aria Ghasemian Sahebi | Aria G. Sahebi and S. Sandeep Pradhan | Nested Polar Codes Achieve the Shannon Rate-Distortion Function and the
Shannon Capacity | null | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is shown that nested polar codes achieve the Shannon rate-distortion
function for arbitrary (binary or non-binary) discrete memoryless sources and
the Shannon capacity of arbitrary discrete memoryless channels.
| [
{
"created": "Sat, 25 Jan 2014 01:26:53 GMT",
"version": "v1"
}
] | 2014-01-28 | [
[
"Sahebi",
"Aria G.",
""
],
[
"Pradhan",
"S. Sandeep",
""
]
] | It is shown that nested polar codes achieve the Shannon rate-distortion function for arbitrary (binary or non-binary) discrete memoryless sources and the Shannon capacity of arbitrary discrete memoryless channels. |
2103.02696 | Weilin Cong | Weilin Cong, Morteza Ramezani, Mehrdad Mahdavi | On the Importance of Sampling in Training GCNs: Tighter Analysis and
Variance Reduction | null | null | null | null | cs.LG cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | Graph Convolutional Networks (GCNs) have achieved impressive empirical
advancement across a wide variety of semi-supervised node classification tasks.
Despite their great success, training GCNs on large graphs suffers from
computational and memory issues. A potential path to circumvent these obstacles
is sampling-based methods, where at each layer a subset of nodes is sampled.
Although recent studies have empirically demonstrated the effectiveness of
sampling-based methods, these works lack theoretical convergence guarantees
under realistic settings and cannot fully leverage the information of evolving
parameters during optimization. In this paper, we describe and analyze a
general doubly variance reduction schema that can accelerate any sampling
method under the memory budget. The motivating impetus for the proposed schema
is a careful analysis of the variance of sampling methods where it is shown
that the induced variance can be decomposed into node embedding approximation
variance (zeroth-order variance) during forward propagation and
layerwise-gradient variance (first-order variance) during backward propagation.
We theoretically analyze the convergence of the proposed schema and show that
it enjoys an $\mathcal{O}(1/T)$ convergence rate. We complement our theoretical
results by integrating the proposed schema in different sampling methods and
applying them to different large real-world graphs.
| [
{
"created": "Wed, 3 Mar 2021 21:31:23 GMT",
"version": "v1"
},
{
"created": "Mon, 1 Nov 2021 17:26:18 GMT",
"version": "v2"
}
] | 2021-11-02 | [
[
"Cong",
"Weilin",
""
],
[
"Ramezani",
"Morteza",
""
],
[
"Mahdavi",
"Mehrdad",
""
]
] | Graph Convolutional Networks (GCNs) have achieved impressive empirical advancement across a wide variety of semi-supervised node classification tasks. Despite their great success, training GCNs on large graphs suffers from computational and memory issues. A potential path to circumvent these obstacles is sampling-based methods, where at each layer a subset of nodes is sampled. Although recent studies have empirically demonstrated the effectiveness of sampling-based methods, these works lack theoretical convergence guarantees under realistic settings and cannot fully leverage the information of evolving parameters during optimization. In this paper, we describe and analyze a general doubly variance reduction schema that can accelerate any sampling method under the memory budget. The motivating impetus for the proposed schema is a careful analysis of the variance of sampling methods where it is shown that the induced variance can be decomposed into node embedding approximation variance (zeroth-order variance) during forward propagation and layerwise-gradient variance (first-order variance) during backward propagation. We theoretically analyze the convergence of the proposed schema and show that it enjoys an $\mathcal{O}(1/T)$ convergence rate. We complement our theoretical results by integrating the proposed schema in different sampling methods and applying them to different large real-world graphs. |
2010.12546 | Erdem Koyuncu | Erdem Koyuncu | Quantizing Multiple Sources to a Common Cluster Center: An Asymptotic
Analysis | null | null | null | null | cs.LG cs.IT math.IT stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider quantizing an $Ld$-dimensional sample, which is obtained by
concatenating $L$ vectors from datasets of $d$-dimensional vectors, to a
$d$-dimensional cluster center. The distortion measure is the weighted sum of
$r$th powers of the distances between the cluster center and the samples. For
$L=1$, one recovers the ordinary center-based clustering formulation. The
general case $L>1$ appears when one wishes to cluster a dataset through $L$
noisy observations of each of its members. We find a formula for the average
distortion performance in the asymptotic regime where the number of cluster
centers is large. We also provide an algorithm to numerically optimize the
cluster centers and verify our analytical results on real and artificial
datasets. In terms of faithfulness to the original (noiseless) dataset, our
clustering approach outperforms the naive approach that relies on quantizing
the $Ld$-dimensional noisy observation vectors to $Ld$-dimensional centers.
| [
{
"created": "Fri, 23 Oct 2020 17:14:28 GMT",
"version": "v1"
}
] | 2020-10-26 | [
[
"Koyuncu",
"Erdem",
""
]
] | We consider quantizing an $Ld$-dimensional sample, which is obtained by concatenating $L$ vectors from datasets of $d$-dimensional vectors, to a $d$-dimensional cluster center. The distortion measure is the weighted sum of $r$th powers of the distances between the cluster center and the samples. For $L=1$, one recovers the ordinary center-based clustering formulation. The general case $L>1$ appears when one wishes to cluster a dataset through $L$ noisy observations of each of its members. We find a formula for the average distortion performance in the asymptotic regime where the number of cluster centers is large. We also provide an algorithm to numerically optimize the cluster centers and verify our analytical results on real and artificial datasets. In terms of faithfulness to the original (noiseless) dataset, our clustering approach outperforms the naive approach that relies on quantizing the $Ld$-dimensional noisy observation vectors to $Ld$-dimensional centers. |
2407.11364 | Chen Wang | Vladimir Braverman, Prathamesh Dharangutte, Vihan Shah, Chen Wang | Learning-augmented Maximum Independent Set | APPROX 2024 | null | null | null | cs.DS cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the Maximum Independent Set (MIS) problem on general graphs within
the framework of learning-augmented algorithms. The MIS problem is known to be
NP-hard and is also NP-hard to approximate to within a factor of $n^{1-\delta}$
for any $\delta>0$. We show that we can break this barrier in the presence of
an oracle obtained through predictions from a machine learning model that
answers vertex membership queries for a fixed MIS with probability
$1/2+\varepsilon$. In the first setting we consider, the oracle can be queried
once per vertex to know if a vertex belongs to a fixed MIS, and the oracle
returns the correct answer with probability $1/2 + \varepsilon$. Under this
setting, we show an algorithm that obtains an
$\tilde{O}(\sqrt{\Delta}/\varepsilon)$-approximation in $O(m)$ time where
$\Delta$ is the maximum degree of the graph. In the second setting, we allow
multiple queries to the oracle for a vertex, each of which is correct with
probability $1/2 + \varepsilon$. For this setting, we show an
$O(1)$-approximation algorithm using $O(n/\varepsilon^2)$ total queries and
$\tilde{O}(m)$ runtime.
| [
{
"created": "Tue, 16 Jul 2024 04:05:40 GMT",
"version": "v1"
}
] | 2024-07-17 | [
[
"Braverman",
"Vladimir",
""
],
[
"Dharangutte",
"Prathamesh",
""
],
[
"Shah",
"Vihan",
""
],
[
"Wang",
"Chen",
""
]
] | We study the Maximum Independent Set (MIS) problem on general graphs within the framework of learning-augmented algorithms. The MIS problem is known to be NP-hard and is also NP-hard to approximate to within a factor of $n^{1-\delta}$ for any $\delta>0$. We show that we can break this barrier in the presence of an oracle obtained through predictions from a machine learning model that answers vertex membership queries for a fixed MIS with probability $1/2+\varepsilon$. In the first setting we consider, the oracle can be queried once per vertex to know if a vertex belongs to a fixed MIS, and the oracle returns the correct answer with probability $1/2 + \varepsilon$. Under this setting, we show an algorithm that obtains an $\tilde{O}(\sqrt{\Delta}/\varepsilon)$-approximation in $O(m)$ time where $\Delta$ is the maximum degree of the graph. In the second setting, we allow multiple queries to the oracle for a vertex, each of which is correct with probability $1/2 + \varepsilon$. For this setting, we show an $O(1)$-approximation algorithm using $O(n/\varepsilon^2)$ total queries and $\tilde{O}(m)$ runtime. |
2301.05575 | Jo\~ao Mendes Lopes | Carolina Gon\c{c}alves, Jo\~ao M. Lopes, Sara Moccia, Daniele
Berardini, Lucia Migliorelli, and Cristina P. Santos | Deep learning-based approaches for human motion decoding in smart
walkers for rehabilitation | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Gait disabilities are among the most frequent worldwide. Their treatment
relies on rehabilitation therapies, in which smart walkers are being introduced
to empower the user's recovery and autonomy, while reducing the clinicians'
effort. For that, these devices should be able to decode human motion and needs
as early as possible. Current walkers decode motion intention using information
from wearable or embedded sensors, namely inertial units, force and hall sensors,
and lasers, whose main limitations imply an expensive solution or hinder the
perception of human movement. Smart walkers commonly lack a seamless
human-robot interaction, which intuitively understands human motions. A
contactless approach is proposed in this work, addressing human motion decoding
as an early action recognition/detection problem, using RGB-D cameras. We
studied different deep learning-based algorithms, organised in three different
approaches, to process lower body RGB-D video sequences, recorded from an
embedded camera of a smart walker, and classify them into 4 classes (stop,
walk, turn right/left). A custom dataset involving 15 healthy participants
walking with the device was acquired and prepared, resulting in 28800 balanced
RGB-D frames, to train and evaluate the deep networks. The best results were
attained by a convolutional neural network with a channel attention mechanism,
reaching accuracy values of 99.61% and above 93%, for offline early
detection/recognition and trial simulations, respectively. Following the
hypothesis that human lower body features encode prominent information,
fostering a more robust prediction towards real-time applications, the
algorithm's focus was also evaluated using the Dice metric, leading to values
slightly higher than 30%. Promising results were attained for early action
detection as a human motion decoding strategy, with enhancements in the focus
of the proposed architectures.
| [
{
"created": "Fri, 13 Jan 2023 14:29:44 GMT",
"version": "v1"
}
] | 2023-05-22 | [
[
"Gonçalves",
"Carolina",
""
],
[
"Lopes",
"João M.",
""
],
[
"Moccia",
"Sara",
""
],
[
"Berardini",
"Daniele",
""
],
[
"Migliorelli",
"Lucia",
""
],
[
"Santos",
"Cristina P.",
""
]
] | Gait disabilities are among the most frequent worldwide. Their treatment relies on rehabilitation therapies, in which smart walkers are being introduced to empower the user's recovery and autonomy, while reducing the clinicians' effort. For that, these devices should be able to decode human motion and needs as early as possible. Current walkers decode motion intention using information from wearable or embedded sensors, namely inertial units, force and hall sensors, and lasers, whose main limitations imply an expensive solution or hinder the perception of human movement. Smart walkers commonly lack a seamless human-robot interaction, which intuitively understands human motions. A contactless approach is proposed in this work, addressing human motion decoding as an early action recognition/detection problem, using RGB-D cameras. We studied different deep learning-based algorithms, organised in three different approaches, to process lower body RGB-D video sequences, recorded from an embedded camera of a smart walker, and classify them into 4 classes (stop, walk, turn right/left). A custom dataset involving 15 healthy participants walking with the device was acquired and prepared, resulting in 28800 balanced RGB-D frames, to train and evaluate the deep networks. The best results were attained by a convolutional neural network with a channel attention mechanism, reaching accuracy values of 99.61% and above 93%, for offline early detection/recognition and trial simulations, respectively. Following the hypothesis that human lower body features encode prominent information, fostering a more robust prediction towards real-time applications, the algorithm's focus was also evaluated using the Dice metric, leading to values slightly higher than 30%. Promising results were attained for early action detection as a human motion decoding strategy, with enhancements in the focus of the proposed architectures. |
1904.10674 | Nicolas Audebert | Nicolas Audebert (OBELIX), Bertrand Saux, S\'ebastien Lef\`evre
(OBELIX) | Deep Learning for Classification of Hyperspectral Data: A Comparative
Review | null | null | 10.1109/MGRS.2019.2912563 | null | cs.LG cs.CV cs.NE eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, deep learning techniques revolutionized the way remote
sensing data are processed. Classification of hyperspectral data is no
exception to the rule, but has intrinsic specificities which make application
of deep learning less straightforward than with other optical data. This
article presents a state of the art of previous machine learning approaches,
reviews the various deep learning approaches currently proposed for
hyperspectral classification, and identifies the problems and difficulties
which arise to implement deep neural networks for this task. In particular, the
issues of spatial and spectral resolution, data volume, and transfer of models
from multimedia images to hyperspectral data are addressed. Additionally, a
comparative study of various families of network architectures is provided and
a software toolbox is publicly released to allow experimenting with these
methods. This article is intended for both data scientists with interest in
hyperspectral data and remote sensing experts eager to apply deep learning
techniques to their own dataset.
| [
{
"created": "Wed, 24 Apr 2019 07:56:37 GMT",
"version": "v1"
}
] | 2019-04-25 | [
[
"Audebert",
"Nicolas",
"",
"OBELIX"
],
[
"Saux",
"Bertrand",
"",
"OBELIX"
],
[
"Lefèvre",
"Sébastien",
"",
"OBELIX"
]
] | In recent years, deep learning techniques revolutionized the way remote sensing data are processed. Classification of hyperspectral data is no exception to the rule, but has intrinsic specificities which make application of deep learning less straightforward than with other optical data. This article presents a state of the art of previous machine learning approaches, reviews the various deep learning approaches currently proposed for hyperspectral classification, and identifies the problems and difficulties which arise to implement deep neural networks for this task. In particular, the issues of spatial and spectral resolution, data volume, and transfer of models from multimedia images to hyperspectral data are addressed. Additionally, a comparative study of various families of network architectures is provided and a software toolbox is publicly released to allow experimenting with these methods. This article is intended for both data scientists with interest in hyperspectral data and remote sensing experts eager to apply deep learning techniques to their own dataset. |
2302.13509 | Jing Liang | Jing Liang, Sanghyun Son, Ming Lin, Dinesh Manocha | GeoLCR: Attention-based Geometric Loop Closure and Registration | null | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a novel algorithm specially designed for loop detection and
registration that utilizes Lidar-based perception. Our approach to loop
detection involves voxelizing point clouds, followed by an overlap calculation
to confirm whether a vehicle has completed a loop. We further enhance the
current pose's accuracy via an innovative point-level registration model. The
efficacy of our algorithm has been assessed across a range of well-known
datasets, including KITTI, KITTI-360, Nuscenes, Complex Urban, NCLT, and
MulRan. In comparative terms, our method exhibits up to a twofold increase in
the precision of both translation and rotation estimations. Particularly
noteworthy is our method's performance on challenging sequences where it
outperforms others, being the first to achieve a perfect 100% success rate in
loop detection.
| [
{
"created": "Mon, 27 Feb 2023 04:16:16 GMT",
"version": "v1"
},
{
"created": "Tue, 28 Feb 2023 01:56:31 GMT",
"version": "v2"
},
{
"created": "Wed, 1 Mar 2023 18:54:09 GMT",
"version": "v3"
},
{
"created": "Thu, 2 Mar 2023 15:14:05 GMT",
"version": "v4"
},
{
"created": "Sat, 4 Mar 2023 03:08:17 GMT",
"version": "v5"
},
{
"created": "Mon, 17 Jul 2023 02:33:00 GMT",
"version": "v6"
}
] | 2023-07-18 | [
[
"Liang",
"Jing",
""
],
[
"Son",
"Sanghyun",
""
],
[
"Lin",
"Ming",
""
],
[
"Manocha",
"Dinesh",
""
]
] | We present a novel algorithm specially designed for loop detection and registration that utilizes Lidar-based perception. Our approach to loop detection involves voxelizing point clouds, followed by an overlap calculation to confirm whether a vehicle has completed a loop. We further enhance the current pose's accuracy via an innovative point-level registration model. The efficacy of our algorithm has been assessed across a range of well-known datasets, including KITTI, KITTI-360, Nuscenes, Complex Urban, NCLT, and MulRan. In comparative terms, our method exhibits up to a twofold increase in the precision of both translation and rotation estimations. Particularly noteworthy is our method's performance on challenging sequences where it outperforms others, being the first to achieve a perfect 100% success rate in loop detection. |
1709.00672 | Gaurav Pandey | Gaurav Pandey and Ambedkar Dukkipati | Unsupervised feature learning with discriminative encoder | 10 pages, 4 figures, International Conference on Data Mining, 2017 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, deep discriminative models have achieved extraordinary
performance on supervised learning tasks, significantly outperforming their
generative counterparts. However, their success relies on the presence of a
large amount of labeled data. How can one use the same discriminative models
for learning useful features in the absence of labels? We address this question
in this paper, by jointly modeling the distribution of data and latent features
in a manner that explicitly assigns zero probability to unobserved data. Rather
than maximizing the marginal probability of observed data, we maximize the
joint probability of the data and the latent features using a two step EM-like
procedure. To prevent the model from overfitting to our initial selection of
latent features, we use adversarial regularization. Depending on the task, we
allow the latent features to be one-hot or real-valued vectors and define a
suitable prior on the features. For instance, one-hot features correspond to
class labels and are directly used for the unsupervised and semi-supervised
classification task, whereas real-valued feature vectors are fed as input to
simple classifiers for auxiliary supervised discrimination tasks. The proposed
model, which we dub discriminative encoder (or DisCoder), is flexible in the
type of latent features that it can capture. The proposed model achieves
state-of-the-art performance on several challenging tasks.
| [
{
"created": "Sun, 3 Sep 2017 06:40:35 GMT",
"version": "v1"
}
] | 2017-09-05 | [
[
"Pandey",
"Gaurav",
""
],
[
"Dukkipati",
"Ambedkar",
""
]
] | In recent years, deep discriminative models have achieved extraordinary performance on supervised learning tasks, significantly outperforming their generative counterparts. However, their success relies on the presence of a large amount of labeled data. How can one use the same discriminative models for learning useful features in the absence of labels? We address this question in this paper, by jointly modeling the distribution of data and latent features in a manner that explicitly assigns zero probability to unobserved data. Rather than maximizing the marginal probability of observed data, we maximize the joint probability of the data and the latent features using a two step EM-like procedure. To prevent the model from overfitting to our initial selection of latent features, we use adversarial regularization. Depending on the task, we allow the latent features to be one-hot or real-valued vectors and define a suitable prior on the features. For instance, one-hot features correspond to class labels and are directly used for the unsupervised and semi-supervised classification task, whereas real-valued feature vectors are fed as input to simple classifiers for auxiliary supervised discrimination tasks. The proposed model, which we dub discriminative encoder (or DisCoder), is flexible in the type of latent features that it can capture. The proposed model achieves state-of-the-art performance on several challenging tasks. |
2310.07838 | Qingyue Zhao | Qingyue Zhao and Banghua Zhu | Towards the Fundamental Limits of Knowledge Transfer over Finite Domains | 41 pages, 2 figures; Appendix polished | null | null | null | cs.LG cs.AI cs.IT math.IT math.ST stat.ML stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We characterize the statistical efficiency of knowledge transfer through $n$
samples from a teacher to a probabilistic student classifier with input space
$\mathcal S$ over labels $\mathcal A$. We show that privileged information at
three progressive levels accelerates the transfer. At the first level, only
samples with hard labels are known, via which the maximum likelihood estimator
attains the minimax rate $\sqrt{{|{\mathcal S}||{\mathcal A}|}/{n}}$. The
second level has the teacher probabilities of sampled labels available in
addition, which turns out to boost the convergence rate lower bound to
${{|{\mathcal S}||{\mathcal A}|}/{n}}$. However, under this second data
acquisition protocol, minimizing a naive adaptation of the cross-entropy loss
results in an asymptotically biased student. We overcome this limitation and
achieve the fundamental limit by using a novel empirical variant of the squared
error logit loss. The third level further equips the student with the soft
labels (complete logits) on ${\mathcal A}$ given every sampled input, thereby
provably enabling the student to enjoy a rate ${|{\mathcal S}|}/{n}$ free of
$|{\mathcal A}|$. We find any Kullback-Leibler divergence minimizer to be
optimal in the last case. Numerical simulations distinguish the four learners
and corroborate our theory.
| [
{
"created": "Wed, 11 Oct 2023 19:30:08 GMT",
"version": "v1"
},
{
"created": "Tue, 17 Oct 2023 15:35:41 GMT",
"version": "v2"
},
{
"created": "Sun, 12 Nov 2023 10:48:07 GMT",
"version": "v3"
},
{
"created": "Tue, 14 Nov 2023 11:26:10 GMT",
"version": "v4"
}
] | 2023-11-15 | [
[
"Zhao",
"Qingyue",
""
],
[
"Zhu",
"Banghua",
""
]
] | We characterize the statistical efficiency of knowledge transfer through $n$ samples from a teacher to a probabilistic student classifier with input space $\mathcal S$ over labels $\mathcal A$. We show that privileged information at three progressive levels accelerates the transfer. At the first level, only samples with hard labels are known, via which the maximum likelihood estimator attains the minimax rate $\sqrt{{|{\mathcal S}||{\mathcal A}|}/{n}}$. The second level has the teacher probabilities of sampled labels available in addition, which turns out to boost the convergence rate lower bound to ${{|{\mathcal S}||{\mathcal A}|}/{n}}$. However, under this second data acquisition protocol, minimizing a naive adaptation of the cross-entropy loss results in an asymptotically biased student. We overcome this limitation and achieve the fundamental limit by using a novel empirical variant of the squared error logit loss. The third level further equips the student with the soft labels (complete logits) on ${\mathcal A}$ given every sampled input, thereby provably enabling the student to enjoy a rate ${|{\mathcal S}|}/{n}$ free of $|{\mathcal A}|$. We find any Kullback-Leibler divergence minimizer to be optimal in the last case. Numerical simulations distinguish the four learners and corroborate our theory. |
1711.07341 | Hsin-Yuan Huang | Hsin-Yuan Huang, Chenguang Zhu, Yelong Shen, Weizhu Chen | FusionNet: Fusing via Fully-Aware Attention with Application to Machine
Comprehension | Published in Sixth International Conference on Learning
Representations (ICLR), 2018 | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces a new neural structure called FusionNet, which extends
existing attention approaches from three perspectives. First, it puts forward a
novel concept of "history of word" to characterize attention information from
the lowest word-level embedding up to the highest semantic-level
representation. Second, it introduces an improved attention scoring function
that better utilizes the "history of word" concept. Third, it proposes a
fully-aware multi-level attention mechanism to capture the complete information
in one text (such as a question) and exploit it in its counterpart (such as
context or passage) layer by layer. We apply FusionNet to the Stanford Question
Answering Dataset (SQuAD) and it achieves the first position for both single
and ensemble model on the official SQuAD leaderboard at the time of writing
(Oct. 4th, 2017). Meanwhile, we verify the generalization of FusionNet with two
adversarial SQuAD datasets and it sets up the new state-of-the-art on both
datasets: on AddSent, FusionNet increases the best F1 metric from 46.6% to
51.4%; on AddOneSent, FusionNet boosts the best F1 metric from 56.0% to 60.7%.
| [
{
"created": "Thu, 16 Nov 2017 03:52:41 GMT",
"version": "v1"
},
{
"created": "Sun, 4 Feb 2018 04:56:45 GMT",
"version": "v2"
}
] | 2018-02-06 | [
[
"Huang",
"Hsin-Yuan",
""
],
[
"Zhu",
"Chenguang",
""
],
[
"Shen",
"Yelong",
""
],
[
"Chen",
"Weizhu",
""
]
] | This paper introduces a new neural structure called FusionNet, which extends existing attention approaches from three perspectives. First, it puts forward a novel concept of "history of word" to characterize attention information from the lowest word-level embedding up to the highest semantic-level representation. Second, it introduces an improved attention scoring function that better utilizes the "history of word" concept. Third, it proposes a fully-aware multi-level attention mechanism to capture the complete information in one text (such as a question) and exploit it in its counterpart (such as context or passage) layer by layer. We apply FusionNet to the Stanford Question Answering Dataset (SQuAD) and it achieves the first position for both single and ensemble model on the official SQuAD leaderboard at the time of writing (Oct. 4th, 2017). Meanwhile, we verify the generalization of FusionNet with two adversarial SQuAD datasets and it sets up the new state-of-the-art on both datasets: on AddSent, FusionNet increases the best F1 metric from 46.6% to 51.4%; on AddOneSent, FusionNet boosts the best F1 metric from 56.0% to 60.7%. |
1307.4165 | Neetu Goel | Neetu Goel and R.B. Garg | A Comparative Study of CPU Scheduling Algorithms | null | International Journal of Graphics & Image Processing |Vol 2|issue
4|November 2012 | null | null | cs.OS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Developing CPU scheduling algorithms and understanding their impact in
practice can be difficult and time consuming due to the need to modify and test
operating system kernel code and measure the resulting performance on a
consistent workload of real applications. As the processor is the most
important resource, CPU scheduling becomes very important in accomplishing the
operating system (OS) design goals. The intention is to allow as many running
processes as possible at all times in order to make the best use of the CPU.
This paper presents a state diagram that depicts a comparative study of various
scheduling algorithms for a single CPU and shows which algorithm is best for a
particular situation. Using this representation, it becomes much easier to
understand what is going on inside the system and why a different set of
processes is a candidate for the allocation of the CPU at different times. The
objective of the study is to analyze highly efficient CPU schedulers and the
design of high-quality scheduling algorithms that suit the scheduling goals.
Key Words: Scheduler, State Diagrams, CPU-Scheduling, Performance
| [
{
"created": "Tue, 16 Jul 2013 05:21:34 GMT",
"version": "v1"
}
] | 2013-07-17 | [
[
"Goel",
"Neetu",
""
],
[
"Garg",
"R. B.",
""
]
] | Developing CPU scheduling algorithms and understanding their impact in practice can be difficult and time consuming due to the need to modify and test operating system kernel code and measure the resulting performance on a consistent workload of real applications. As the processor is the most important resource, CPU scheduling becomes very important in accomplishing the operating system (OS) design goals. The intention is to allow as many running processes as possible at all times in order to make the best use of the CPU. This paper presents a state diagram that depicts a comparative study of various scheduling algorithms for a single CPU and shows which algorithm is best for a particular situation. Using this representation, it becomes much easier to understand what is going on inside the system and why a different set of processes is a candidate for the allocation of the CPU at different times. The objective of the study is to analyze highly efficient CPU schedulers and the design of high-quality scheduling algorithms that suit the scheduling goals. Key Words: Scheduler, State Diagrams, CPU-Scheduling, Performance |
1701.03529 | Jonas Szutkoski | Luiz E. Allem, Juliane Capaverde, Mark van Hoeij, Jonas Szutkoski | Functional Decomposition using Principal Subfields | 8 pages, accepted for ISSAC'17 | null | 10.1145/3087604.3087608 | null | cs.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Let $f\in K(t)$ be a univariate rational function. It is well known that any
non-trivial decomposition $g \circ h$, with $g,h\in K(t)$, corresponds to a
non-trivial subfield $K(f(t))\subsetneq L \subsetneq K(t)$ and vice-versa. In
this paper we use the idea of principal subfields and fast
subfield-intersection techniques to compute the subfield lattice of
$K(t)/K(f(t))$. This yields a Las Vegas type algorithm with improved complexity
and better run times for finding all non-equivalent complete decompositions of
$f$.
| [
{
"created": "Thu, 12 Jan 2017 23:14:56 GMT",
"version": "v1"
},
{
"created": "Fri, 26 May 2017 19:31:14 GMT",
"version": "v2"
}
] | 2017-05-30 | [
[
"Allem",
"Luiz E.",
""
],
[
"Capaverde",
"Juliane",
""
],
[
"van Hoeij",
"Mark",
""
],
[
"Szutkoski",
"Jonas",
""
]
] | Let $f\in K(t)$ be a univariate rational function. It is well known that any non-trivial decomposition $g \circ h$, with $g,h\in K(t)$, corresponds to a non-trivial subfield $K(f(t))\subsetneq L \subsetneq K(t)$ and vice-versa. In this paper we use the idea of principal subfields and fast subfield-intersection techniques to compute the subfield lattice of $K(t)/K(f(t))$. This yields a Las Vegas type algorithm with improved complexity and better run times for finding all non-equivalent complete decompositions of $f$. |
1802.01447 | Lijun Zhao | Lijun Zhao, Huihui Bai, Feng Li, Anhong Wang and Yao Zhao | Mixed-Resolution Image Representation and Compression with Convolutional
Neural Networks | 5 pages, and 2 figures. arXiv admin note: substantial text overlap
with arXiv:1712.05969 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose an end-to-end mixed-resolution image compression
framework with convolutional neural networks. Firstly, given one input image,
a feature description neural network (FDNN) is used to generate a new
representation of this image, so that this image representation can be more
efficiently compressed by a standard codec, as compared to the input image.
Furthermore, we use a post-processing neural network (PPNN) to remove the
coding artifacts caused by the codec's quantization. Secondly, a low-resolution
image representation is adopted for high-efficiency compression, since at low
bit-rates most bits are spent on the image's structures. However, at high
bit-rates, once most structures have been preserved after compression, more
bits should be assigned to image details at the high resolution. This comes
from the fact that the low-resolution image representation cannot carry more
information than the high-resolution representation beyond a certain bit-rate.
Finally, to resolve the problem of error back-propagation from the PPNN network
to the FDNN network, we learn a virtual codec neural network to imitate the two
consecutive procedures of standard compression and post-processing. Objective
experimental results demonstrate that the proposed method improves upon several
state-of-the-art approaches by a large margin.
| [
{
"created": "Fri, 2 Feb 2018 08:57:44 GMT",
"version": "v1"
},
{
"created": "Wed, 1 Aug 2018 03:26:44 GMT",
"version": "v2"
}
] | 2018-08-03 | [
[
"Zhao",
"Lijun",
""
],
[
"Bai",
"Huihui",
""
],
[
"Li",
"Feng",
""
],
[
"Wang",
"Anhong",
""
],
[
"Zhao",
"Yao",
""
]
] | In this paper, we propose an end-to-end mixed-resolution image compression framework with convolutional neural networks. Firstly, given one input image, a feature description neural network (FDNN) is used to generate a new representation of this image, so that this image representation can be more efficiently compressed by a standard codec, as compared to the input image. Furthermore, we use a post-processing neural network (PPNN) to remove the coding artifacts caused by the codec's quantization. Secondly, a low-resolution image representation is adopted for high-efficiency compression, since at low bit-rates most bits are spent on the image's structures. However, at high bit-rates, once most structures have been preserved after compression, more bits should be assigned to image details at the high resolution. This comes from the fact that the low-resolution image representation cannot carry more information than the high-resolution representation beyond a certain bit-rate. Finally, to resolve the problem of error back-propagation from the PPNN network to the FDNN network, we learn a virtual codec neural network to imitate the two consecutive procedures of standard compression and post-processing. Objective experimental results demonstrate that the proposed method improves upon several state-of-the-art approaches by a large margin. |
1903.02149 | Xie De | Chao Li, Cheng Deng, Lei Wang, De Xie, Xianglong Liu | Coupled CycleGAN: Unsupervised Hashing Network for Cross-Modal Retrieval | null | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, hashing has attracted more and more attention owing to its
superior capacity of low storage cost and high query efficiency in large-scale
cross-modal retrieval. Benefiting from deep learning, continuously compelling
results in the cross-modal retrieval community have been achieved. However,
existing deep cross-modal hashing methods either rely on large amounts of
labeled information or have no ability to learn an accurate correlation between
different modalities. In this paper, we propose Unsupervised coupled Cycle
generative adversarial Hashing networks (UCH) for cross-modal retrieval, where
an outer-cycle network is used to learn a powerful common representation, and
an inner-cycle network is exploited to generate reliable hash codes.
Specifically, our proposed UCH seamlessly couples these two networks with a generative
adversarial mechanism, which can be optimized simultaneously to learn
representation and hash codes. Extensive experiments on three popular benchmark
datasets show that the proposed UCH outperforms the state-of-the-art
unsupervised cross-modal hashing methods.
| [
{
"created": "Wed, 6 Mar 2019 03:09:20 GMT",
"version": "v1"
}
] | 2019-03-07 | [
[
"Li",
"Chao",
""
],
[
"Deng",
"Cheng",
""
],
[
"Wang",
"Lei",
""
],
[
"Xie",
"De",
""
],
[
"Liu",
"Xianglong",
""
]
] | In recent years, hashing has attracted more and more attention owing to its superior capacity of low storage cost and high query efficiency in large-scale cross-modal retrieval. Benefiting from deep learning, continuously compelling results in the cross-modal retrieval community have been achieved. However, existing deep cross-modal hashing methods either rely on large amounts of labeled information or have no ability to learn an accurate correlation between different modalities. In this paper, we propose Unsupervised coupled Cycle generative adversarial Hashing networks (UCH) for cross-modal retrieval, where an outer-cycle network is used to learn a powerful common representation, and an inner-cycle network is exploited to generate reliable hash codes. Specifically, our proposed UCH seamlessly couples these two networks with a generative adversarial mechanism, which can be optimized simultaneously to learn representation and hash codes. Extensive experiments on three popular benchmark datasets show that the proposed UCH outperforms the state-of-the-art unsupervised cross-modal hashing methods. |
1705.03865 | Akshay Gupta | Akshay Kumar Gupta | Survey of Visual Question Answering: Datasets and Techniques | 10 pages, 3 figures, 3 tables Added references, corrected typos, made
references less wordy | null | null | null | cs.CL cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual question answering (or VQA) is a new and exciting problem that
combines natural language processing and computer vision techniques. We present
a survey of the various datasets and models that have been used to tackle this
task. The first part of the survey details the various datasets for VQA and
compares them along some common factors. The second part of this survey details
the different approaches for VQA, classified into four types: non-deep learning
models, deep learning models without attention, deep learning models with
attention, and other models which do not fit into the first three. Finally, we
compare the performances of these approaches and provide some directions for
future work.
| [
{
"created": "Wed, 10 May 2017 17:30:17 GMT",
"version": "v1"
},
{
"created": "Thu, 11 May 2017 06:46:52 GMT",
"version": "v2"
}
] | 2017-05-12 | [
[
"Gupta",
"Akshay Kumar",
""
]
] | Visual question answering (or VQA) is a new and exciting problem that combines natural language processing and computer vision techniques. We present a survey of the various datasets and models that have been used to tackle this task. The first part of the survey details the various datasets for VQA and compares them along some common factors. The second part of this survey details the different approaches for VQA, classified into four types: non-deep learning models, deep learning models without attention, deep learning models with attention, and other models which do not fit into the first three. Finally, we compare the performances of these approaches and provide some directions for future work. |
2308.16708 | Sebastian Lubos | Sebastian Lubos, Thi Ngoc Trang Tran, Seda Polat Erdeniz, Merfat El
Mansi, Alexander Felfernig, Manfred Wundara and Gerhard Leitner | Concentrating on the Impact: Consequence-based Explanations in
Recommender Systems | The paper was presented at IntRS'23: Joint Workshop on Interfaces and
Human Decision Making for Recommender Systems, September 18, 2023, Singapore.
and is published in the workshop proceedings: https://ceur-ws.org/Vol-3534/ | null | null | null | cs.IR cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recommender systems assist users in decision-making, where the presentation
of recommended items and their explanations are critical factors for enhancing
the overall user experience. Although various methods for generating
explanations have been proposed, there is still room for improvement,
particularly for users who lack expertise in a specific item domain. In this
study, we introduce the novel concept of \textit{consequence-based
explanations}, a type of explanation that emphasizes the individual impact of
consuming a recommended item on the user, which makes the effect of following
recommendations clearer. We conducted an online user study to examine our
assumption about the appreciation of consequence-based explanations and their
impacts on different explanation aims in recommender systems. Our findings
highlight the importance of consequence-based explanations, which were
well-received by users and effectively improved user satisfaction in
recommender systems. These results provide valuable insights for designing
engaging explanations that can enhance the overall user experience in
decision-making.
| [
{
"created": "Thu, 31 Aug 2023 13:24:57 GMT",
"version": "v1"
},
{
"created": "Tue, 3 Oct 2023 14:14:43 GMT",
"version": "v2"
},
{
"created": "Fri, 3 Nov 2023 12:02:49 GMT",
"version": "v3"
}
] | 2023-11-06 | [
[
"Lubos",
"Sebastian",
""
],
[
"Tran",
"Thi Ngoc Trang",
""
],
[
"Erdeniz",
"Seda Polat",
""
],
[
"Mansi",
"Merfat El",
""
],
[
"Felfernig",
"Alexander",
""
],
[
"Wundara",
"Manfred",
""
],
[
"Leitner",
"Gerhard",
""
]
] | Recommender systems assist users in decision-making, where the presentation of recommended items and their explanations are critical factors for enhancing the overall user experience. Although various methods for generating explanations have been proposed, there is still room for improvement, particularly for users who lack expertise in a specific item domain. In this study, we introduce the novel concept of \textit{consequence-based explanations}, a type of explanation that emphasizes the individual impact of consuming a recommended item on the user, which makes the effect of following recommendations clearer. We conducted an online user study to examine our assumption about the appreciation of consequence-based explanations and their impacts on different explanation aims in recommender systems. Our findings highlight the importance of consequence-based explanations, which were well-received by users and effectively improved user satisfaction in recommender systems. These results provide valuable insights for designing engaging explanations that can enhance the overall user experience in decision-making. |
1512.02831 | Fabian Gieseke | Fabian Gieseke and Cosmin Eugen Oancea and Ashish Mahabal and
Christian Igel and Tom Heskes | Bigger Buffer k-d Trees on Multi-Many-Core Systems | null | null | null | null | cs.DC cs.DS cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A buffer k-d tree is a k-d tree variant for massively-parallel nearest
neighbor search. While providing valuable speed-ups on modern many-core devices
when large numbers of both reference and query points are given, buffer k-d
trees are limited by the number of points that can fit on a single device. In
this work, we show how to modify the original data structure and the associated
workflow to make the overall approach capable of dealing with massive data
sets. We further provide a simple yet efficient way of using multiple devices
given in a single workstation. The applicability of the modified framework is
demonstrated in the context of astronomy, a field that is faced with huge
amounts of data.
| [
{
"created": "Wed, 9 Dec 2015 12:28:12 GMT",
"version": "v1"
}
] | 2015-12-10 | [
[
"Gieseke",
"Fabian",
""
],
[
"Oancea",
"Cosmin Eugen",
""
],
[
"Mahabal",
"Ashish",
""
],
[
"Igel",
"Christian",
""
],
[
"Heskes",
"Tom",
""
]
] | A buffer k-d tree is a k-d tree variant for massively-parallel nearest neighbor search. While providing valuable speed-ups on modern many-core devices when large numbers of both reference and query points are given, buffer k-d trees are limited by the number of points that can fit on a single device. In this work, we show how to modify the original data structure and the associated workflow to make the overall approach capable of dealing with massive data sets. We further provide a simple yet efficient way of using multiple devices given in a single workstation. The applicability of the modified framework is demonstrated in the context of astronomy, a field that is faced with huge amounts of data. |
2110.07959 | Zhiwei Tang | Zhiwei Tang, Tsung-Hui Chang, Xiaojing Ye, Hongyuan Zha | Low-rank Matrix Recovery With Unknown Correspondence | null | null | null | null | cs.LG cs.IR stat.ML | http://creativecommons.org/licenses/by/4.0/ | We study a matrix recovery problem with unknown correspondence: given the
observation matrix $M_o=[A,\tilde P B]$, where $\tilde P$ is an unknown
permutation matrix, we aim to recover the underlying matrix $M=[A,B]$. Such a
problem commonly arises in many applications where heterogeneous data are
utilized and the correspondence among them is unknown, e.g., due to privacy
concerns. We show that it is possible to recover $M$ via solving a nuclear norm
minimization problem under a proper low-rank condition on $M$, with provable
non-asymptotic error bound for the recovery of $M$. We propose an algorithm,
$\text{M}^3\text{O}$ (Matrix recovery via Min-Max Optimization) which recasts
this combinatorial problem as a continuous minimax optimization problem and
solves it by proximal gradient with a Max-Oracle. $\text{M}^3\text{O}$ can also
be applied to a more general scenario where we have missing entries in $M_o$
and multiple groups of data with distinct unknown correspondence. Experiments
on simulated data, the MovieLens 100K dataset and Yale B database show that
$\text{M}^3\text{O}$ achieves state-of-the-art performance over several
baselines and can recover the ground-truth correspondence with high accuracy.
| [
{
"created": "Fri, 15 Oct 2021 09:27:50 GMT",
"version": "v1"
},
{
"created": "Mon, 18 Oct 2021 03:10:41 GMT",
"version": "v2"
}
] | 2021-10-19 | [
[
"Tang",
"Zhiwei",
""
],
[
"Chang",
"Tsung-Hui",
""
],
[
"Ye",
"Xiaojing",
""
],
[
"Zha",
"Hongyuan",
""
]
] | We study a matrix recovery problem with unknown correspondence: given the observation matrix $M_o=[A,\tilde P B]$, where $\tilde P$ is an unknown permutation matrix, we aim to recover the underlying matrix $M=[A,B]$. Such a problem commonly arises in many applications where heterogeneous data are utilized and the correspondence among them is unknown, e.g., due to privacy concerns. We show that it is possible to recover $M$ via solving a nuclear norm minimization problem under a proper low-rank condition on $M$, with provable non-asymptotic error bound for the recovery of $M$. We propose an algorithm, $\text{M}^3\text{O}$ (Matrix recovery via Min-Max Optimization) which recasts this combinatorial problem as a continuous minimax optimization problem and solves it by proximal gradient with a Max-Oracle. $\text{M}^3\text{O}$ can also be applied to a more general scenario where we have missing entries in $M_o$ and multiple groups of data with distinct unknown correspondence. Experiments on simulated data, the MovieLens 100K dataset and Yale B database show that $\text{M}^3\text{O}$ achieves state-of-the-art performance over several baselines and can recover the ground-truth correspondence with high accuracy. |
2011.06512 | Angelica Louren\c{c}o Oliveira | Angelica Louren\c{c}o Oliveira and Marcos Eduardo Valle | Linear Dilation-Erosion Perceptron Trained Using a Convex-Concave
Procedure | 10 pages, 2 figures, 12th International Conference on Soft Computing
and Pattern Recognition, preprint | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mathematical morphology (MM) is a theory of non-linear operators used for the
processing and analysis of images. Morphological neural networks (MNNs) are
neural networks whose neurons compute morphological operators. Dilations and
erosions are the elementary operators of MM. From an algebraic point of view, a
dilation and an erosion are operators that commute respectively with the
supremum and infimum operations. In this paper, we present the \textit{linear
dilation-erosion perceptron} ($\ell$-DEP), which is given by applying linear
transformations before computing a dilation and an erosion. The decision
function of the $\ell$-DEP model is defined by adding a dilation and an
erosion. Furthermore, training an $\ell$-DEP can be formulated as a
convex-concave optimization problem. We compare the performance of the
$\ell$-DEP model with other machine learning techniques using several
classification problems. The computational experiments support the potential
application of the proposed $\ell$-DEP model for binary classification tasks.
| [
{
"created": "Wed, 11 Nov 2020 18:37:07 GMT",
"version": "v1"
}
] | 2020-11-13 | [
[
"Oliveira",
"Angelica Lourenço",
""
],
[
"Valle",
"Marcos Eduardo",
""
]
] | Mathematical morphology (MM) is a theory of non-linear operators used for the processing and analysis of images. Morphological neural networks (MNNs) are neural networks whose neurons compute morphological operators. Dilations and erosions are the elementary operators of MM. From an algebraic point of view, a dilation and an erosion are operators that commute respectively with the supremum and infimum operations. In this paper, we present the \textit{linear dilation-erosion perceptron} ($\ell$-DEP), which is given by applying linear transformations before computing a dilation and an erosion. The decision function of the $\ell$-DEP model is defined by adding a dilation and an erosion. Furthermore, training an $\ell$-DEP can be formulated as a convex-concave optimization problem. We compare the performance of the $\ell$-DEP model with other machine learning techniques using several classification problems. The computational experiments support the potential application of the proposed $\ell$-DEP model for binary classification tasks. |
2401.01185 | Mete \c{S}eref Ahunbay | Mete \c{S}eref Ahunbay, Martin Bichler | On the Uniqueness of Bayesian Coarse Correlated Equilibria in Standard
First-Price and All-Pay Auctions | 73 pages, 6 figures. A first version of this article appeared on
arXiv on January 2nd/3rd, 2024. This version significantly extends the
results | null | null | null | cs.GT | http://creativecommons.org/licenses/by/4.0/ | We study the Bayesian coarse correlated equilibrium (BCCE) of continuous and
discretised first-price and all-pay auctions under the standard symmetric
independent private-values model. Our goal is to determine how the canonical
Bayes-Nash equilibrium (BNE) of the auction relates to the outcome when all
buyers bid following no-regret algorithms. Numerical experiments show that in
two-buyer first-price auctions the Wasserstein-$2$ distance of buyers' marginal
bid distributions declines as $O(1/n)$ in the discretisation size in instances
where the prior distribution is concave, whereas all-pay auctions exhibit
similar behaviour without prior dependence. To explain this convergence to a
near-equilibrium, we study uniqueness of the BCCE of the continuous auction.
Our uniqueness results translate to provable convergence of deterministic
self-play to a near equilibrium outcome in these auctions. In the all-pay
auction, we show that independent of the prior distribution there is a unique
BCCE with symmetric, differentiable, and increasing bidding strategies, which
is equivalent to the unique strict BNE. In the first-price auction, we need
stronger conditions. Either the prior is strictly concave or the learning
algorithm has to be restricted to strictly increasing strategies. Without such
strong assumptions, no-regret algorithms can end up in low-price pooling
strategies. This is important because it proves that in repeated first-price
auctions such as in display ad auctions, algorithmic collusion cannot be ruled
out without further assumptions even if all bidders rely on no-regret
algorithms.
| [
{
"created": "Tue, 2 Jan 2024 12:27:13 GMT",
"version": "v1"
},
{
"created": "Wed, 3 Jan 2024 09:24:00 GMT",
"version": "v2"
},
{
"created": "Thu, 20 Jun 2024 08:36:22 GMT",
"version": "v3"
}
] | 2024-06-21 | [
[
"Ahunbay",
"Mete Şeref",
""
],
[
"Bichler",
"Martin",
""
]
] ] | We study the Bayesian coarse correlated equilibrium (BCCE) of continuous and discretised first-price and all-pay auctions under the standard symmetric independent private-values model. Our goal is to determine how the canonical Bayes-Nash equilibrium (BNE) of the auction relates to the outcome when all buyers bid following no-regret algorithms. Numerical experiments show that in two-buyer first-price auctions the Wasserstein-$2$ distance of buyers' marginal bid distributions declines as $O(1/n)$ in the discretisation size in instances where the prior distribution is concave, whereas all-pay auctions exhibit similar behaviour without prior dependence. To explain this convergence to a near-equilibrium, we study uniqueness of the BCCE of the continuous auction. Our uniqueness results translate to provable convergence of deterministic self-play to a near equilibrium outcome in these auctions. In the all-pay auction, we show that independent of the prior distribution there is a unique BCCE with symmetric, differentiable, and increasing bidding strategies, which is equivalent to the unique strict BNE. In the first-price auction, we need stronger conditions. Either the prior is strictly concave or the learning algorithm has to be restricted to strictly increasing strategies. Without such strong assumptions, no-regret algorithms can end up in low-price pooling strategies. This is important because it proves that in repeated first-price auctions such as in display ad auctions, algorithmic collusion cannot be ruled out without further assumptions even if all bidders rely on no-regret algorithms. |
1206.2276 | Masoud Alipour | Masoud Alipour, Omid Etesami, Ghid Maatouk, Amin Shokrollahi | Irregular Product Codes | First draft | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider irregular product codes. In this class of codes, each codeword is
represented by a matrix. The entries in each row (column) of the matrix should
come from a component row (column) code. As opposed to (standard) product
codes, we do not require that all component row codes nor all component column
codes be the same. As we will see, relaxing this requirement can provide some
additional attractive features including 1) allowing some regions of the
codeword be more error-resilient 2) allowing a more refined spectrum of rates
for finite-lengths and improved performance in some of these rates 3) more
interaction between row and column codes during decoding. We study these codes
over erasure channels. We find that for any $0 < \epsilon < 1$, for many rate
distributions on component row codes, there is a matching rate distribution on
component column codes such that an irregular product code based on MDS codes
with those rate distributions on the component codes has asymptotic rate $1 -
\epsilon$ and can decode on erasure channels (of alphabet size equal to the
alphabet size of the component MDS codes) with erasure probability $<
\epsilon$.
| [
{
"created": "Mon, 11 Jun 2012 16:24:19 GMT",
"version": "v1"
}
] | 2012-06-12 | [
[
"Alipour",
"Masoud",
""
],
[
"Etesami",
"Omid",
""
],
[
"Maatouk",
"Ghid",
""
],
[
"Shokrollahi",
"Amin",
""
]
] ] | We consider irregular product codes. In this class of codes, each codeword is represented by a matrix. The entries in each row (column) of the matrix should come from a component row (column) code. As opposed to (standard) product codes, we do not require all component row codes, nor all component column codes, to be the same. As we will see, relaxing this requirement can provide some additional attractive features, including 1) allowing some regions of the codeword to be more error-resilient, 2) allowing a more refined spectrum of rates for finite lengths and improved performance at some of these rates, and 3) more interaction between row and column codes during decoding. We study these codes over erasure channels. We find that for any $0 < \epsilon < 1$, for many rate distributions on component row codes, there is a matching rate distribution on component column codes such that an irregular product code based on MDS codes with those rate distributions on the component codes has asymptotic rate $1 - \epsilon$ and can decode on erasure channels (of alphabet size equal to the alphabet size of the component MDS codes) with erasure probability $< \epsilon$. |
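The row–column interaction during erasure decoding mentioned in the abstract above can be sketched with the simplest possible MDS component codes: single-parity codes over GF(2). This toy peeling decoder is only an illustration of the decoding interplay; the paper's construction (general MDS component codes with mixed rates) is far more general.

```python
def peel_decode(mat):
    """Iterative erasure decoding of a toy product code over GF(2).

    Every row and every column is a single-parity code (XOR of entries
    is 0); 'None' marks an erasure. Any row or column with exactly one
    erasure determines it by parity; rows and columns feed each other
    until nothing changes.
    """
    rows, cols = len(mat), len(mat[0])
    lines = [[(i, j) for j in range(cols)] for i in range(rows)]   # rows
    lines += [[(i, j) for i in range(rows)] for j in range(cols)]  # columns
    progress = True
    while progress:
        progress = False
        for line in lines:
            missing = [p for p in line if mat[p[0]][p[1]] is None]
            if len(missing) == 1:
                i, j = missing[0]
                # parity of the whole line is 0, so the erased entry
                # equals the XOR (sum mod 2) of the known ones
                mat[i][j] = sum(mat[a][b] for a, b in line
                                if (a, b) != (i, j)) % 2
                progress = True
    return mat

# A 3x3 codeword with all row/column parities 0; erase two entries.
cw = [[1, 0, 1], [0, 1, 1], [1, 1, 0]]
cw[0][0] = None
cw[1][1] = None
print(peel_decode(cw))  # recovers [[1, 0, 1], [0, 1, 1], [1, 1, 0]]
```

Erasing an entire row is also recoverable here, because each column then has exactly one erasure, which shows the column codes repairing what the row codes cannot.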
1806.05599 | Christina Niklaus | Christina Niklaus, Matthias Cetto, Andr\'e Freitas and Siegfried
Handschuh | A Survey on Open Information Extraction | 27th International Conference on Computational Linguistics (COLING
2018) | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | We provide a detailed overview of the various approaches that were proposed
to date to solve the task of Open Information Extraction. We present the major
challenges that such systems face, show the evolution of the suggested
approaches over time and depict the specific issues they address. In addition,
we provide a critique of the commonly applied evaluation procedures for
assessing the performance of Open IE systems and highlight some directions for
future work.
| [
{
"created": "Thu, 14 Jun 2018 15:07:46 GMT",
"version": "v1"
}
] | 2018-06-15 | [
[
"Niklaus",
"Christina",
""
],
[
"Cetto",
"Matthias",
""
],
[
"Freitas",
"André",
""
],
[
"Handschuh",
"Siegfried",
""
]
] | We provide a detailed overview of the various approaches that were proposed to date to solve the task of Open Information Extraction. We present the major challenges that such systems face, show the evolution of the suggested approaches over time and depict the specific issues they address. In addition, we provide a critique of the commonly applied evaluation procedures for assessing the performance of Open IE systems and highlight some directions for future work. |
2002.03149 | Yanyan Zou | Yanyan Zou, Wei Lu and Xu Sun | Mining Commonsense Facts from the Physical World | The experiment part is insufficient, which might confuse the
readers, and we are not planning to improve it for now. To ensure the
quality of arxiv papers, we would like to withdraw this submission | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Textual descriptions of the physical world implicitly mention commonsense
facts, while the commonsense knowledge bases explicitly represent such facts as
triples. Compared to dramatically increased text data, the coverage of existing
knowledge bases is far away from completion. Most of the prior studies on
populating knowledge bases mainly focus on Freebase. To automatically complete
commonsense knowledge bases to improve their coverage is under-explored. In
this paper, we propose a new task of mining commonsense facts from the raw text
that describes the physical world. We build an effective new model that fuses
information from both sequence text and existing knowledge base resource. Then
we create two large annotated datasets each with approximate 200k instances for
commonsense knowledge base completion. Empirical results demonstrate that our
model significantly outperforms baselines.
| [
{
"created": "Sat, 8 Feb 2020 12:02:45 GMT",
"version": "v1"
},
{
"created": "Tue, 11 Feb 2020 04:16:47 GMT",
"version": "v2"
},
{
"created": "Tue, 14 Apr 2020 00:58:51 GMT",
"version": "v3"
}
] | 2020-04-15 | [
[
"Zou",
"Yanyan",
""
],
[
"Lu",
"Wei",
""
],
[
"Sun",
"Xu",
""
]
] ] | Textual descriptions of the physical world implicitly mention commonsense facts, while the commonsense knowledge bases explicitly represent such facts as triples. Compared to the dramatically increasing amount of text data, the coverage of existing knowledge bases is far from complete. Most of the prior studies on populating knowledge bases mainly focus on Freebase. Automatically completing commonsense knowledge bases to improve their coverage remains under-explored. In this paper, we propose a new task of mining commonsense facts from raw text that describes the physical world. We build an effective new model that fuses information from both sequential text and existing knowledge base resources. Then we create two large annotated datasets, each with approximately 200k instances, for commonsense knowledge base completion. Empirical results demonstrate that our model significantly outperforms baselines. |
2210.07914 | Corto Mascle | Corto Mascle | Model-checking lock-sharing systems against regular constraints | null | null | null | null | cs.FL cs.DC | http://creativecommons.org/licenses/by/4.0/ | We study the verification of distributed systems where processes are finite
automata with access to a shared pool of locks. We consider objectives that are
boolean combinations of local regular constraints. We show that the problem,
PSPACE-complete in general, falls in NP with the right assumptions on the
system. We use restrictions on the number of locks a process can access and the
order in which locks can be released. We provide tight complexity bounds, as
well as a subcase of interest that can be solved in PTIME.
| [
{
"created": "Fri, 14 Oct 2022 15:56:32 GMT",
"version": "v1"
}
] | 2022-10-17 | [
[
"Mascle",
"Corto",
""
]
] | We study the verification of distributed systems where processes are finite automata with access to a shared pool of locks. We consider objectives that are boolean combinations of local regular constraints. We show that the problem, PSPACE-complete in general, falls in NP with the right assumptions on the system. We use restrictions on the number of locks a process can access and the order in which locks can be released. We provide tight complexity bounds, as well as a subcase of interest that can be solved in PTIME. |
2301.10225 | Don Krieger | Jeffrey Balzer, Julia Caviness, Don Krieger | The Evolution of Real-time Remote Intraoperative Neurophysiological
Monitoring (IONM) | 12 pages, 3 figures | null | null | null | cs.OH cs.CY | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Real-time monitoring of nervous system function with immediate communication
of relevant information to the surgeon enables prevention and/or mitigation of
iatrogenic injury in many surgical procedures. The hardware and software
infrastructure and demonstrated usefulness of telemedicine in support of IONM
originated in a busy university health center environment and then spread
widely as comparable functional capabilities were added by commercial equipment
manufacturers. The earliest implementations included primitive data archival
and case documentation capabilities and relied primarily on deidentification
for security. They emphasized full-featured control of the real-time data
display by remote observers. Today, remote IONM is routinely utilized in more
than 200,000 high-risk surgical procedures/year in the United States. For many
cases, remote observers rely on screen capture to view the data as it is
displayed in the remote operating room, while the systems provide
sophisticated security capabilities, data archival, and standardized metadata
and case documentation.
| [
{
"created": "Tue, 10 Jan 2023 14:57:37 GMT",
"version": "v1"
}
] | 2023-01-25 | [
[
"Balzer",
"Jeffrey",
""
],
[
"Caviness",
"Julia",
""
],
[
"Krieger",
"Don",
""
]
] ] | Real-time monitoring of nervous system function with immediate communication of relevant information to the surgeon enables prevention and/or mitigation of iatrogenic injury in many surgical procedures. The hardware and software infrastructure and demonstrated usefulness of telemedicine in support of IONM originated in a busy university health center environment and then spread widely as comparable functional capabilities were added by commercial equipment manufacturers. The earliest implementations included primitive data archival and case documentation capabilities and relied primarily on deidentification for security. They emphasized full-featured control of the real-time data display by remote observers. Today, remote IONM is routinely utilized in more than 200,000 high-risk surgical procedures/year in the United States. For many cases, remote observers rely on screen capture to view the data as it is displayed in the remote operating room, while the systems provide sophisticated security capabilities, data archival, and standardized metadata and case documentation. |
1911.12904 | Zinovy Diskin | Zinovy Diskin | General supervised learning as change propagation with delta lenses | An extended version of paper with the same title published at FOSSACS
2020. Unfortunately, both the paper and the previous version of the extended
version uploaded to arxiv on Feb 26, 2020, had bad typos in Definition 4 and
Fig.4, which are now fixed | In: Foundations of Software Science and Computation Structures.
FoSSaCS 2020. Lecture Notes in Computer Science, vol 12077. Springer,
pp.177-197 | 10.1007/978-3-030-45231-5_10 | null | cs.LO math.CT | http://creativecommons.org/licenses/by-sa/4.0/ | Delta lenses are an established mathematical framework for modelling and
designing bidirectional model transformations. Following the recent
observations by Fong et al., the paper extends the delta lens framework with a
new ingredient: learning over a parameterized space of model transformations
seen as functors. We define a notion of an asymmetric learning delta lens with
amendment (ala-lens), and show how ala-lenses can be organized into a symmetric
monoidal (sm) category. We also show that sequential and parallel compositions
of well-behaved ala-lenses are also well-behaved, so that well-behaved
ala-lenses constitute a full sm-subcategory of ala-lenses.
| [
{
"created": "Thu, 28 Nov 2019 23:56:43 GMT",
"version": "v1"
},
{
"created": "Tue, 25 Feb 2020 17:44:25 GMT",
"version": "v2"
},
{
"created": "Wed, 26 Feb 2020 01:51:32 GMT",
"version": "v3"
},
{
"created": "Fri, 9 Jul 2021 17:56:07 GMT",
"version": "v4"
}
] | 2021-07-12 | [
[
"Diskin",
"Zinovy",
""
]
] ] | Delta lenses are an established mathematical framework for modelling and designing bidirectional model transformations. Following the recent observations by Fong et al., the paper extends the delta lens framework with a new ingredient: learning over a parameterized space of model transformations seen as functors. We define a notion of an asymmetric learning delta lens with amendment (ala-lens), and show how ala-lenses can be organized into a symmetric monoidal (sm) category. We also show that sequential and parallel compositions of well-behaved ala-lenses are also well-behaved, so that well-behaved ala-lenses constitute a full sm-subcategory of ala-lenses. |
2312.02229 | Mahboobeh Parsapoor | Mahboobeh Parsapoor | Synthetic Data Generation Techniques for Developing AI-based Speech
Assessments for Parkinson's Disease (A Comparative Study) | 6, 5 Tables, 5 Figures | null | null | null | cs.SD cs.AI cs.LG eess.AS | http://creativecommons.org/licenses/by/4.0/ | Changes in speech and language are among the first signs of Parkinson's
disease (PD). Thus, clinicians have tried to identify individuals with PD from
their voices for years. Doctors can leverage AI-based speech assessments to
spot PD thanks to advancements in artificial intelligence (AI). Such AI systems
can be developed using machine learning classifiers that have been trained
using individuals' voices. Although several studies have shown reasonable
results in developing such AI systems, these systems would need more data
samples to achieve promising performance. This paper explores the effect of
deep learning-based data generation techniques on the accuracy of machine
learning classifiers that are at the core of such systems.
| [
{
"created": "Mon, 4 Dec 2023 03:12:09 GMT",
"version": "v1"
}
] | 2023-12-06 | [
[
"Parsapoor",
"Mahboobeh",
""
]
] ] | Changes in speech and language are among the first signs of Parkinson's disease (PD). Thus, clinicians have tried to identify individuals with PD from their voices for years. Doctors can leverage AI-based speech assessments to spot PD thanks to advancements in artificial intelligence (AI). Such AI systems can be developed using machine learning classifiers that have been trained using individuals' voices. Although several studies have shown reasonable results in developing such AI systems, these systems would need more data samples to achieve promising performance. This paper explores the effect of deep learning-based data generation techniques on the accuracy of machine learning classifiers that are at the core of such systems. |
1602.03115 | Li Xiao | Li Xiao, Xiang-Gen Xia, and Haiye Huo | Towards Robustness in Residue Number Systems | 32 pages, 5 figures | null | 10.1109/TSP.2016.2641398 | null | cs.IT math.IT math.NT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The problem of robustly reconstructing a large number from its erroneous
remainders with respect to several moduli, namely the robust remaindering
problem, may occur in many applications including phase unwrapping, frequency
detection from several undersampled waveforms, wireless sensor networks, etc.
Assuming that the dynamic range of the large number is the maximal possible
one, i.e., the least common multiple (lcm) of all the moduli, a method called
robust Chinese remainder theorem (CRT) for solving the robust remaindering
problem has been recently proposed. In this paper, by relaxing the assumption
that the dynamic range is fixed to be the lcm of all the moduli, a trade-off
between the dynamic range and the robustness bound for two-modular systems is
studied. It basically says that a decrease in the dynamic range may lead to an
increase of the robustness bound. We first obtain a general condition on the
remainder errors and derive the exact dynamic range with a closed-form formula
for the robustness to hold. We then propose simple closed-form reconstruction
algorithms. Furthermore, the newly obtained two-modular results are applied to
the robust reconstruction for multi-modular systems and generalized to real
numbers. Finally, some simulations are carried out to verify our proposed
theoretical results.
| [
{
"created": "Tue, 9 Feb 2016 18:53:55 GMT",
"version": "v1"
}
] | 2017-04-05 | [
[
"Xiao",
"Li",
""
],
[
"Xia",
"Xiang-Gen",
""
],
[
"Huo",
"Haiye",
""
]
] | The problem of robustly reconstructing a large number from its erroneous remainders with respect to several moduli, namely the robust remaindering problem, may occur in many applications including phase unwrapping, frequency detection from several undersampled waveforms, wireless sensor networks, etc. Assuming that the dynamic range of the large number is the maximal possible one, i.e., the least common multiple (lcm) of all the moduli, a method called robust Chinese remainder theorem (CRT) for solving the robust remaindering problem has been recently proposed. In this paper, by relaxing the assumption that the dynamic range is fixed to be the lcm of all the moduli, a trade-off between the dynamic range and the robustness bound for two-modular systems is studied. It basically says that a decrease in the dynamic range may lead to an increase of the robustness bound. We first obtain a general condition on the remainder errors and derive the exact dynamic range with a closed-form formula for the robustness to hold. We then propose simple closed-form reconstruction algorithms. Furthermore, the newly obtained two-modular results are applied to the robust reconstruction for multi-modular systems and generalized to real numbers. Finally, some simulations are carried out to verify our proposed theoretical results. |
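The error-free version of the remaindering problem described above is solved by the classical Chinese remainder theorem. A minimal two-modulus sketch follows; the robust variants studied in the paper additionally tolerate bounded remainder errors and trade dynamic range for a robustness bound, which this sketch does not attempt.

```python
def crt_pair(r1, m1, r2, m2):
    """Reconstruct X with 0 <= X < m1*m2 from X mod m1 = r1 and
    X mod m2 = r2, for coprime moduli, via the standard CRT formula."""
    inv = pow(m1, -1, m2)  # modular inverse of m1 modulo m2 (Python 3.8+)
    return r1 + m1 * (((r2 - r1) * inv) % m2)

# Toy usage: recover X = 23 from its remainders modulo 5 and 7.
X = 23
print(crt_pair(X % 5, 5, X % 7, 7))  # 23
```

Note how sensitive this exact formula is to remainder errors: a change of 1 in `r2` shifts the reconstruction by a multiple of `m1`, which is why robust reconstruction methods are needed for noisy remainders.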
2405.07941 | Oleksandr Kuznetsov | Oleksandr Kuznetsov, Alex Rusnak, Anton Yezhov, Dzianis Kanonik,
Kateryna Kuznetsova, Oleksandr Domin | Efficient and Universal Merkle Tree Inclusion Proofs via OR Aggregation | null | Cryptography 2024, 8, 28 | 10.3390/cryptography8030028 | null | cs.CR | http://creativecommons.org/licenses/by/4.0/ | Zero-knowledge proofs have emerged as a powerful tool for enhancing privacy
and security in blockchain applications. However, the efficiency and
scalability of proof systems remain a significant challenge, particularly in
the context of Merkle tree inclusion proofs. Traditional proof aggregation
techniques based on AND logic suffer from high verification complexity and data
communication overhead, limiting their practicality for large-scale
applications. In this paper, we propose a novel proof aggregation approach
based on OR logic, which enables the generation of compact and universally
verifiable proofs for Merkle tree inclusion. By aggregating proofs using OR
logic, we achieve a proof size that is independent of the number of leaves in
the tree, and verification can be performed using any single valid leaf hash.
This represents a significant improvement over AND aggregation, which requires
the verifier to process all leaf hashes. We formally define the OR aggregation
logic, describe the process of generating universal proofs, and provide a
comparative analysis demonstrating the advantages of our approach in terms of
proof size, verification data, and universality. Furthermore, we discuss the
potential of combining OR and AND aggregation logics to create complex
acceptance functions, enabling the development of expressive and efficient
proof systems for various blockchain applications. The proposed techniques have
the potential to significantly enhance the scalability, efficiency, and
flexibility of zero-knowledge proof systems, paving the way for more practical
and adaptive solutions in the blockchain ecosystem.
| [
{
"created": "Mon, 13 May 2024 17:15:38 GMT",
"version": "v1"
}
] | 2024-07-08 | [
[
"Kuznetsov",
"Oleksandr",
""
],
[
"Rusnak",
"Alex",
""
],
[
"Yezhov",
"Anton",
""
],
[
"Kanonik",
"Dzianis",
""
],
[
"Kuznetsova",
"Kateryna",
""
],
[
"Domin",
"Oleksandr",
""
]
] | Zero-knowledge proofs have emerged as a powerful tool for enhancing privacy and security in blockchain applications. However, the efficiency and scalability of proof systems remain a significant challenge, particularly in the context of Merkle tree inclusion proofs. Traditional proof aggregation techniques based on AND logic suffer from high verification complexity and data communication overhead, limiting their practicality for large-scale applications. In this paper, we propose a novel proof aggregation approach based on OR logic, which enables the generation of compact and universally verifiable proofs for Merkle tree inclusion. By aggregating proofs using OR logic, we achieve a proof size that is independent of the number of leaves in the tree, and verification can be performed using any single valid leaf hash. This represents a significant improvement over AND aggregation, which requires the verifier to process all leaf hashes. We formally define the OR aggregation logic, describe the process of generating universal proofs, and provide a comparative analysis demonstrating the advantages of our approach in terms of proof size, verification data, and universality. Furthermore, we discuss the potential of combining OR and AND aggregation logics to create complex acceptance functions, enabling the development of expressive and efficient proof systems for various blockchain applications. The proposed techniques have the potential to significantly enhance the scalability, efficiency, and flexibility of zero-knowledge proof systems, paving the way for more practical and adaptive solutions in the blockchain ecosystem. |
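A plain (non-zero-knowledge) Merkle tree inclusion proof, the primitive whose aggregation the record above studies, can be sketched as follows. This is only the classical per-leaf scheme with SHA-256, shown to make the object being aggregated concrete; it is not the paper's OR-aggregated proof system.

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_levels(leaves):
    """All levels of a Merkle tree, bottom-up (duplicate last node if odd)."""
    level = [H(leaf) for leaf in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def prove(levels, index):
    """Sibling hashes on the path from leaf `index` to the root."""
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        proof.append(level[index ^ 1])  # sibling at this level
        index //= 2
    return proof

def verify(root, leaf, index, proof):
    node = H(leaf)
    for sibling in proof:
        node = H(node + sibling) if index % 2 == 0 else H(sibling + node)
        index //= 2
    return node == root

leaves = [b"a", b"b", b"c", b"d"]
levels = build_levels(leaves)
root = levels[-1][0]
print(verify(root, b"c", 2, prove(levels, 2)))  # True
```

The proof size and verification cost grow logarithmically in the number of leaves per proven leaf; aggregation techniques aim to amortize or eliminate that per-leaf cost.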
2001.01027 | Konstantinos A. Mountris | Konstantinos A. Mountris and Esther Pueyo | The Radial Point Interpolation Mixed Collocation (RPIMC) Method for the
Solution of Transient Diffusion Problems | Accepted version for publication in Engineering Analysis with
Boundary Elements | Engineering Analysis with Boundary Elements 121 (2020): 207-216 | 10.1016/j.enganabound.2020.10.005 | null | cs.CE cs.NA math.NA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Radial Point Interpolation Mixed Collocation (RPIMC) method is proposed
in this paper for transient analysis of diffusion problems. RPIMC is an
efficient purely meshless method where the solution of the field variable is
obtained through collocation. The field function and its gradient are both
interpolated (mixed collocation approach) leading to reduced $C$-continuity
requirement compared to strong-form collocation schemes. The method's accuracy
is evaluated in heat conduction benchmark problems. The RPIMC convergence is
compared against the Meshless Local Petrov-Galerkin Mixed Collocation (MLPG-MC)
method and the Finite Element Method (FEM). Due to the Kronecker delta property
of RPIMC, improved accuracy can be achieved as compared to MLPG-MC. RPIMC is
proven to be a promising meshless alternative to FEM for transient diffusion
problems.
| [
{
"created": "Sat, 4 Jan 2020 03:30:37 GMT",
"version": "v1"
},
{
"created": "Wed, 7 Oct 2020 13:23:27 GMT",
"version": "v2"
}
] | 2021-10-14 | [
[
"Mountris",
"Konstantinos A.",
""
],
[
"Pueyo",
"Esther",
""
]
] ] | The Radial Point Interpolation Mixed Collocation (RPIMC) method is proposed in this paper for transient analysis of diffusion problems. RPIMC is an efficient purely meshless method where the solution of the field variable is obtained through collocation. The field function and its gradient are both interpolated (mixed collocation approach) leading to reduced $C$-continuity requirement compared to strong-form collocation schemes. The method's accuracy is evaluated in heat conduction benchmark problems. The RPIMC convergence is compared against the Meshless Local Petrov-Galerkin Mixed Collocation (MLPG-MC) method and the Finite Element Method (FEM). Due to the Kronecker delta property of RPIMC, improved accuracy can be achieved as compared to MLPG-MC. RPIMC is proven to be a promising meshless alternative to FEM for transient diffusion problems. |
1910.06824 | Kizito Nkurikiyeyezu | Kizito Nkurikiyeyezu, Anna Yokokubo, Guillaume Lopez | Affect-aware thermal comfort provision in intelligent buildings | null | null | 10.1109/ACIIW.2019.8925184 | null | cs.HC eess.IV eess.SP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Predominant thermal comfort provision technologies are energy-hungry, and yet
they perform crudely because they overlook the requisite precursors to thermal
comfort. They also fail to exclusively cool or heat the parts of the body
(e.g., the wrist, the feet, and the head) that most influence a person's
thermal comfort satisfaction. Instead, they waste energy by heating or cooling
the whole room. This research investigates the influence of neck-coolers on
people's thermal comfort perception and proposes an effective method that
delivers thermal comfort depending on people's heart rate variability (HRV).
Moreover, because thermal comfort is idiosyncratic and depends on unforeseeable
circumstances, only person-specific thermal comfort models are adequate for
this task. Unfortunately, using person-specific models would be costly and
inflexible for deployment in, e.g., a smart building because a system that uses
person-specific models would require collecting extensive training data from
each person in the building. As a compromise, we devise a hybrid,
cost-effective, yet satisfactory technique that derives a personalized
person-specific-like model from samples collected from a large population. For
example, it was possible to double the accuracy of a generic model (from 47.77%
to 96.11%) using only 400 person-specific calibration samples. Finally, we
propose a practical implementation of a real-time thermal comfort provision
system that uses this strategy and highlight its advantages and limitations.
| [
{
"created": "Thu, 3 Oct 2019 13:46:17 GMT",
"version": "v1"
},
{
"created": "Tue, 31 Dec 2019 08:20:58 GMT",
"version": "v2"
}
] | 2020-01-01 | [
[
"Nkurikiyeyezu",
"Kizito",
""
],
[
"Yokokubo",
"Anna",
""
],
[
"Lopez",
"Guillaume",
""
]
] ] | Predominant thermal comfort provision technologies are energy-hungry, and yet they perform crudely because they overlook the requisite precursors to thermal comfort. They also fail to exclusively cool or heat the parts of the body (e.g., the wrist, the feet, and the head) that most influence a person's thermal comfort satisfaction. Instead, they waste energy by heating or cooling the whole room. This research investigates the influence of neck-coolers on people's thermal comfort perception and proposes an effective method that delivers thermal comfort depending on people's heart rate variability (HRV). Moreover, because thermal comfort is idiosyncratic and depends on unforeseeable circumstances, only person-specific thermal comfort models are adequate for this task. Unfortunately, using person-specific models would be costly and inflexible for deployment in, e.g., a smart building because a system that uses person-specific models would require collecting extensive training data from each person in the building. As a compromise, we devise a hybrid, cost-effective, yet satisfactory technique that derives a personalized person-specific-like model from samples collected from a large population. For example, it was possible to double the accuracy of a generic model (from 47.77% to 96.11%) using only 400 person-specific calibration samples. Finally, we propose a practical implementation of a real-time thermal comfort provision system that uses this strategy and highlight its advantages and limitations. |
2404.16456 | Mingcheng Li | Mingcheng Li, Dingkang Yang, Xiao Zhao, Shuaibing Wang, Yan Wang, Kun
Yang, Mingyang Sun, Dongliang Kou, Ziyun Qian, Lihua Zhang | Correlation-Decoupled Knowledge Distillation for Multimodal Sentiment
Analysis with Incomplete Modalities | Accepted by CVPR 2024 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multimodal sentiment analysis (MSA) aims to understand human sentiment
through multimodal data. Most MSA efforts are based on the assumption of
modality completeness. However, in real-world applications, some practical
factors cause uncertain modality missingness, which drastically degrades the
model's performance. To this end, we propose a Correlation-decoupled Knowledge
Distillation (CorrKD) framework for the MSA task under uncertain missing
modalities. Specifically, we present a sample-level contrastive distillation
mechanism that transfers comprehensive knowledge containing cross-sample
correlations to reconstruct missing semantics. Moreover, a category-guided
prototype distillation mechanism is introduced to capture cross-category
correlations using category prototypes to align feature distributions and
generate favorable joint representations. Finally, we design a
response-disentangled consistency distillation strategy to optimize the
sentiment decision boundaries of the student network through response
disentanglement and mutual information maximization. Comprehensive experiments
on three datasets indicate that our framework can achieve favorable
improvements compared with several baselines.
| [
{
"created": "Thu, 25 Apr 2024 09:35:09 GMT",
"version": "v1"
},
{
"created": "Mon, 10 Jun 2024 09:10:37 GMT",
"version": "v2"
}
] | 2024-06-11 | [
[
"Li",
"Mingcheng",
""
],
[
"Yang",
"Dingkang",
""
],
[
"Zhao",
"Xiao",
""
],
[
"Wang",
"Shuaibing",
""
],
[
"Wang",
"Yan",
""
],
[
"Yang",
"Kun",
""
],
[
"Sun",
"Mingyang",
""
],
[
"Kou",
"Dongliang",
""
],
[
"Qian",
"Ziyun",
""
],
[
"Zhang",
"Lihua",
""
]
] | Multimodal sentiment analysis (MSA) aims to understand human sentiment through multimodal data. Most MSA efforts are based on the assumption of modality completeness. However, in real-world applications, some practical factors cause uncertain modality missingness, which drastically degrades the model's performance. To this end, we propose a Correlation-decoupled Knowledge Distillation (CorrKD) framework for the MSA task under uncertain missing modalities. Specifically, we present a sample-level contrastive distillation mechanism that transfers comprehensive knowledge containing cross-sample correlations to reconstruct missing semantics. Moreover, a category-guided prototype distillation mechanism is introduced to capture cross-category correlations using category prototypes to align feature distributions and generate favorable joint representations. Eventually, we design a response-disentangled consistency distillation strategy to optimize the sentiment decision boundaries of the student network through response disentanglement and mutual information maximization. Comprehensive experiments on three datasets indicate that our framework can achieve favorable improvements compared with several baselines. |
2310.00896 | Yihong Zhang | Yihong Zhang and Takahiro Hara | Organized Event Participant Prediction Enhanced by Social Media
Retweeting Data | Accepted in WI-IAT 2023 | null | null | null | cs.LG cs.IR | http://creativecommons.org/licenses/by/4.0/ | Nowadays, many platforms on the Web offer organized events, allowing users to
be organizers or participants. For such platforms, it is beneficial to predict
potential event participants. Existing work on this problem tends to borrow
recommendation techniques. However, compared to e-commerce items and purchases,
events and participation usually occur at a much lower frequency, and the data
may be insufficient to learn an accurate model. In this paper, we propose to
utilize social media retweeting activity data to enhance the learning of event
participant prediction models. We create a joint knowledge graph to bridge the
social media and the target domain, assuming that event descriptions and tweets
are written in the same language. Furthermore, we propose a learning model that
utilizes retweeting information for the target domain prediction more
effectively. We conduct comprehensive experiments in two scenarios with
real-world data. In each scenario, we set up training data of different sizes,
as well as warm and cold test cases. The evaluation results show that our
approach consistently outperforms several baseline models, especially with the
warm test cases, and when target domain data is limited.
| [
{
"created": "Mon, 2 Oct 2023 04:26:07 GMT",
"version": "v1"
}
] | 2023-10-03 | [
[
"Zhang",
"Yihong",
""
],
[
"Hara",
"Takahiro",
""
]
] | Nowadays, many platforms on the Web offer organized events, allowing users to be organizers or participants. For such platforms, it is beneficial to predict potential event participants. Existing work on this problem tends to borrow recommendation techniques. However, compared to e-commerce items and purchases, events and participation are usually of a much smaller frequency, and the data may be insufficient to learn an accurate model. In this paper, we propose to utilize social media retweeting activity data to enhance the learning of event participant prediction models. We create a joint knowledge graph to bridge the social media and the target domain, assuming that event descriptions and tweets are written in the same language. Furthermore, we propose a learning model that utilizes retweeting information for the target domain prediction more effectively. We conduct comprehensive experiments in two scenarios with real-world data. In each scenario, we set up training data of different sizes, as well as warm and cold test cases. The evaluation results show that our approach consistently outperforms several baseline models, especially with the warm test cases, and when target domain data is limited. |
1803.07985 | Hauke J\"urgen M\"onck | Hauke J\"urgen M\"onck, Andreas J\"org, Tobias von Falkenhausen,
Julian Tanke, Benjamin Wild, David Dormagen, Jonas Piotrowski, Claudia
Winklmayr, David Bierbach, Tim Landgraf | BioTracker: An Open-Source Computer Vision Framework for Visual Animal
Tracking | 4 pages, 2 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The study of animal behavior increasingly relies on (semi-) automatic methods
for the extraction of relevant behavioral features from video or picture data.
To date, several specialized software products exist to detect and track
animals' positions in simple (laboratory) environments. Tracking animals in
their natural environments, however, often requires substantial customization
of the image processing algorithms to the problem-specific image
characteristics. Here we introduce BioTracker, an open-source computer vision
framework that provides programmers with core functionalities that are
essential parts of tracking software, such as video I/O, graphics overlays
and mouse and keyboard interfaces. BioTracker additionally provides a number of
different tracking algorithms suitable for a variety of image recording
conditions. The main feature of BioTracker, however, is the straightforward
implementation of new problem-specific tracking modules and vision algorithms
that can build upon BioTracker's core functionalities. With this open-source
framework the scientific community can accelerate their research and focus on
the development of new vision algorithms.
| [
{
"created": "Wed, 21 Mar 2018 16:12:18 GMT",
"version": "v1"
}
] | 2018-03-22 | [
[
"Mönck",
"Hauke Jürgen",
""
],
[
"Jörg",
"Andreas",
""
],
[
"von Falkenhausen",
"Tobias",
""
],
[
"Tanke",
"Julian",
""
],
[
"Wild",
"Benjamin",
""
],
[
"Dormagen",
"David",
""
],
[
"Piotrowski",
"Jonas",
""
],
[
"Winklmayr",
"Claudia",
""
],
[
"Bierbach",
"David",
""
],
[
"Landgraf",
"Tim",
""
]
] | The study of animal behavior increasingly relies on (semi-) automatic methods for the extraction of relevant behavioral features from video or picture data. To date, several specialized software products exist to detect and track animals' positions in simple (laboratory) environments. Tracking animals in their natural environments, however, often requires substantial customization of the image processing algorithms to the problem-specific image characteristics. Here we introduce BioTracker, an open-source computer vision framework that provides programmers with core functionalities that are essential parts of tracking software, such as video I/O, graphics overlays and mouse and keyboard interfaces. BioTracker additionally provides a number of different tracking algorithms suitable for a variety of image recording conditions. The main feature of BioTracker, however, is the straightforward implementation of new problem-specific tracking modules and vision algorithms that can build upon BioTracker's core functionalities. With this open-source framework the scientific community can accelerate their research and focus on the development of new vision algorithms. |
2112.00875 | Shyama Arunachalam | Shyama Kumari Arunachalam, Roopa V, Meena H B, Vijayalakshmi, T
Malavika | Secure and Safety Mobile Network System for Visually Impaired People | 4 pages, 3 figures, Accepted at 2012 IEEE ICECE, Bangalore | null | null | null | cs.HC cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The proposed system aims to be a techno-friend of visually impaired people to
assist them in orientation and mobility, both indoors and outdoors. Moving through
an unknown environment becomes a real challenge for most of them, although they
rely on their other senses. An age-old mechanism used to assist blind people
is the white cane, commonly known as a walking cane: a simple, purely
mechanical device that detects the ground, uneven surfaces, holes, and steps
using simple tactile-force feedback.
| [
{
"created": "Wed, 1 Dec 2021 22:56:36 GMT",
"version": "v1"
}
] | 2021-12-03 | [
[
"Arunachalam",
"Shyama Kumari",
""
],
[
"V",
"Roopa",
""
],
[
"B",
"Meena H",
""
],
[
"Vijayalakshmi",
"",
""
],
[
"Malavika",
"T",
""
]
] | The proposed system aims to be a techno-friend of visually impaired people to assist them in orientation and mobility, both indoors and outdoors. Moving through an unknown environment becomes a real challenge for most of them, although they rely on their other senses. An age-old mechanism used to assist blind people is the white cane, commonly known as a walking cane: a simple, purely mechanical device that detects the ground, uneven surfaces, holes, and steps using simple tactile-force feedback. |
2003.03030 | Shihao Zhao | Shihao Zhao, Xingjun Ma, Xiang Zheng, James Bailey, Jingjing Chen,
Yu-Gang Jiang | Clean-Label Backdoor Attacks on Video Recognition Models | CVPR2020 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep neural networks (DNNs) are vulnerable to backdoor attacks which can hide
backdoor triggers in DNNs by poisoning training data. A backdoored model
behaves normally on clean test images, yet consistently predicts a particular
target class for any test examples that contain the trigger pattern. As such,
backdoor attacks are hard to detect, and have raised severe security concerns
in real-world applications. Thus far, backdoor research has mostly been
conducted in the image domain with image classification models. In this paper,
we show that existing image backdoor attacks are far less effective on videos,
and outline 4 strict conditions where existing attacks are likely to fail: 1)
scenarios with more input dimensions (e.g., videos), 2) scenarios with high
resolution, 3) scenarios with a large number of classes and few examples per
class (a "sparse dataset"), and 4) attacks with access to correct labels (e.g.,
clean-label attacks). We propose the use of a universal adversarial trigger as
the backdoor trigger to attack video recognition models, a situation where
backdoor attacks are likely to be challenged by the above 4 strict conditions.
We show on benchmark video datasets that our proposed backdoor attack can
manipulate state-of-the-art video models with high success rates by poisoning
only a small proportion of training data (without changing the labels). We also
show that our proposed backdoor attack is resistant to state-of-the-art
backdoor defense/detection methods, and can even be applied to improve image
backdoor attacks. Our proposed video backdoor attack not only serves as a
strong baseline for improving the robustness of video models, but also provides
a new perspective for understanding more powerful backdoor attacks.
| [
{
"created": "Fri, 6 Mar 2020 04:51:48 GMT",
"version": "v1"
},
{
"created": "Tue, 16 Jun 2020 12:13:20 GMT",
"version": "v2"
}
] | 2020-06-17 | [
[
"Zhao",
"Shihao",
""
],
[
"Ma",
"Xingjun",
""
],
[
"Zheng",
"Xiang",
""
],
[
"Bailey",
"James",
""
],
[
"Chen",
"Jingjing",
""
],
[
"Jiang",
"Yu-Gang",
""
]
] | Deep neural networks (DNNs) are vulnerable to backdoor attacks which can hide backdoor triggers in DNNs by poisoning training data. A backdoored model behaves normally on clean test images, yet consistently predicts a particular target class for any test examples that contain the trigger pattern. As such, backdoor attacks are hard to detect, and have raised severe security concerns in real-world applications. Thus far, backdoor research has mostly been conducted in the image domain with image classification models. In this paper, we show that existing image backdoor attacks are far less effective on videos, and outline 4 strict conditions where existing attacks are likely to fail: 1) scenarios with more input dimensions (e.g., videos), 2) scenarios with high resolution, 3) scenarios with a large number of classes and few examples per class (a "sparse dataset"), and 4) attacks with access to correct labels (e.g., clean-label attacks). We propose the use of a universal adversarial trigger as the backdoor trigger to attack video recognition models, a situation where backdoor attacks are likely to be challenged by the above 4 strict conditions. We show on benchmark video datasets that our proposed backdoor attack can manipulate state-of-the-art video models with high success rates by poisoning only a small proportion of training data (without changing the labels). We also show that our proposed backdoor attack is resistant to state-of-the-art backdoor defense/detection methods, and can even be applied to improve image backdoor attacks. Our proposed video backdoor attack not only serves as a strong baseline for improving the robustness of video models, but also provides a new perspective for understanding more powerful backdoor attacks. |
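The clean-label poisoning setup described in this abstract (stamping a fixed trigger onto a fraction of target-class training images while leaving their labels untouched) can be sketched as follows. This is a generic illustrative sketch with a corner patch trigger; the function name `poison_clean_label` and the trigger placement are assumptions for the example, not the paper's universal adversarial trigger or its actual code:

```python
import numpy as np

def poison_clean_label(images, labels, target_class, trigger, rate=0.1, seed=0):
    """Stamp a fixed trigger patch onto a fraction of *target-class* images.

    Labels are left untouched -- the defining property of a clean-label
    attack. `trigger` is a small (h, w) patch pasted into the
    bottom-right corner of each chosen image.
    """
    rng = np.random.default_rng(seed)
    poisoned = images.copy()
    # only images already belonging to the target class are modified
    idx = np.flatnonzero(labels == target_class)
    chosen = rng.choice(idx, size=max(1, int(rate * idx.size)), replace=False)
    h, w = trigger.shape
    poisoned[chosen, -h:, -w:] = trigger
    return poisoned, chosen

# toy data: 20 grayscale 8x8 "images", two classes
images = np.zeros((20, 8, 8))
labels = np.array([0, 1] * 10)
trigger = np.ones((2, 2))
poisoned, chosen = poison_clean_label(images, labels, target_class=1,
                                      trigger=trigger, rate=0.5)
```

At test time, a model trained on such data would associate the patch with the target class, yet every poisoned training example still carries its correct label.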
2009.09806 | Paolo Pareti Dr. | Paolo Pareti and George Konstantinidis and Fabio Mogavero and Timothy
J. Norman | SHACL Satisfiability and Containment (Extended Paper) | null | null | null | null | cs.LO cs.AI cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Shapes Constraint Language (SHACL) is a recent W3C recommendation
language for validating RDF data. Specifically, SHACL documents are collections
of constraints that enforce particular shapes on an RDF graph. Previous work on
the topic has provided theoretical and practical results for the validation
problem, but did not consider the standard decision problems of satisfiability
and containment, which are crucial for verifying the feasibility of the
constraints and important for design and optimization purposes. In this paper,
we undertake a thorough study of different features of non-recursive SHACL by
providing a translation to a new first-order language, called SCL, that
precisely captures the semantics of SHACL w.r.t. satisfiability and
containment. We study the interaction of SHACL features in this logic and
provide the detailed map of decidability and complexity results of the
aforementioned decision problems for different SHACL sublanguages. Notably, we
prove that both problems are undecidable for the full language, but we present
decidable combinations of interesting features.
| [
{
"created": "Mon, 31 Aug 2020 14:52:03 GMT",
"version": "v1"
},
{
"created": "Thu, 5 Nov 2020 10:55:19 GMT",
"version": "v2"
}
] | 2020-11-06 | [
[
"Pareti",
"Paolo",
""
],
[
"Konstantinidis",
"George",
""
],
[
"Mogavero",
"Fabio",
""
],
[
"Norman",
"Timothy J.",
""
]
] | The Shapes Constraint Language (SHACL) is a recent W3C recommendation language for validating RDF data. Specifically, SHACL documents are collections of constraints that enforce particular shapes on an RDF graph. Previous work on the topic has provided theoretical and practical results for the validation problem, but did not consider the standard decision problems of satisfiability and containment, which are crucial for verifying the feasibility of the constraints and important for design and optimization purposes. In this paper, we undertake a thorough study of different features of non-recursive SHACL by providing a translation to a new first-order language, called SCL, that precisely captures the semantics of SHACL w.r.t. satisfiability and containment. We study the interaction of SHACL features in this logic and provide the detailed map of decidability and complexity results of the aforementioned decision problems for different SHACL sublanguages. Notably, we prove that both problems are undecidable for the full language, but we present decidable combinations of interesting features. |
2406.12103 | Jayshree Sarathy | Rachel Cummings and Jayshree Sarathy | Centering Policy and Practice: Research Gaps around Usable Differential
Privacy | null | 2023 5th IEEE International Conference on Trust, Privacy and
Security in Intelligent Systems and Applications (TPS-ISA). IEEE Computer
Society, 2023 | null | null | cs.CR cs.CY cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As a mathematically rigorous framework that has amassed a rich theoretical
literature, differential privacy is considered by many experts to be the gold
standard for privacy-preserving data analysis. Others argue that while
differential privacy is a clean formulation in theory, it poses significant
challenges in practice. Both perspectives are, in our view, valid and
important. To bridge the gaps between differential privacy's promises and its
real-world usability, researchers and practitioners must work together to
advance policy and practice of this technology. In this paper, we outline
pressing open questions towards building usable differential privacy and offer
recommendations for the field, such as developing risk frameworks to align with
user needs, tailoring communications for different stakeholders, modeling the
impact of privacy-loss parameters, investing in effective user interfaces, and
facilitating algorithmic and procedural audits of differential privacy systems.
| [
{
"created": "Mon, 17 Jun 2024 21:32:30 GMT",
"version": "v1"
}
] | 2024-06-19 | [
[
"Cummings",
"Rachel",
""
],
[
"Sarathy",
"Jayshree",
""
]
] | As a mathematically rigorous framework that has amassed a rich theoretical literature, differential privacy is considered by many experts to be the gold standard for privacy-preserving data analysis. Others argue that while differential privacy is a clean formulation in theory, it poses significant challenges in practice. Both perspectives are, in our view, valid and important. To bridge the gaps between differential privacy's promises and its real-world usability, researchers and practitioners must work together to advance policy and practice of this technology. In this paper, we outline pressing open questions towards building usable differential privacy and offer recommendations for the field, such as developing risk frameworks to align with user needs, tailoring communications for different stakeholders, modeling the impact of privacy-loss parameters, investing in effective user interfaces, and facilitating algorithmic and procedural audits of differential privacy systems. |
2206.12147 | Xiao Wang | Xiao Wang, Shaoguo Liu, Yidong Jia, Yuxin Fu, Yufang Yu, Liang Wang,
Bo Zheng | MCMF: Multi-Constraints With Merging Features Bid Optimization in Online
Display Advertising | 5 pages, 5 figures | null | null | null | cs.GT cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In Real-Time Bidding (RTB), advertisers are increasingly relying on bid
optimization to gain more conversions (i.e., trade or arrival). Currently, the
efficiency of bid optimization is still challenged by (1) sparse feedback, (2)
budget management separated from the optimization, and (3) the absence of
bidding environment modeling. The conversion feedback is delayed and sparse,
yet most methods rely on dense input (impression or click). Furthermore, most
approaches are implemented in two stages: optimum formulation and budget
management, but the separation always degrades performance. Meanwhile, in the
absence of bidding environment modeling, model-free controllers are commonly used,
which perform poorly on sparse feedback and lead to control instability. We
address these challenges and provide the Multi-Constraints with Merging
Features (MCMF) framework. It collects various bidding statuses as merging
features to ensure performance under sparse and delayed feedback. A cost
function is formulated as a dynamic optimum solution with budget management, so
the optimization and budget management are not separated. According to the cost
function, the approximated gradients based on the Hebbian Learning Rule are
capable of updating the MCMF, even without modeling of the bidding environment.
Our technique performs the best on the open dataset and provides stable budget
management even under extreme sparsity. The MCMF is applied in our real RTB
production system, and we obtain 2.69% more conversions with 2.46% fewer expenditures.
| [
{
"created": "Fri, 24 Jun 2022 08:25:28 GMT",
"version": "v1"
},
{
"created": "Mon, 27 Jun 2022 12:23:40 GMT",
"version": "v2"
}
] | 2022-06-28 | [
[
"Wang",
"Xiao",
""
],
[
"Liu",
"Shaoguo",
""
],
[
"Jia",
"Yidong",
""
],
[
"Fu",
"Yuxin",
""
],
[
"Yu",
"Yufang",
""
],
[
"Wang",
"Liang",
""
],
[
"Zheng",
"Bo",
""
]
] | In Real-Time Bidding (RTB), advertisers are increasingly relying on bid optimization to gain more conversions (i.e., trade or arrival). Currently, the efficiency of bid optimization is still challenged by (1) sparse feedback, (2) budget management separated from the optimization, and (3) the absence of bidding environment modeling. The conversion feedback is delayed and sparse, yet most methods rely on dense input (impression or click). Furthermore, most approaches are implemented in two stages: optimum formulation and budget management, but the separation always degrades performance. Meanwhile, in the absence of bidding environment modeling, model-free controllers are commonly used, which perform poorly on sparse feedback and lead to control instability. We address these challenges and provide the Multi-Constraints with Merging Features (MCMF) framework. It collects various bidding statuses as merging features to ensure performance under sparse and delayed feedback. A cost function is formulated as a dynamic optimum solution with budget management, so the optimization and budget management are not separated. According to the cost function, the approximated gradients based on the Hebbian Learning Rule are capable of updating the MCMF, even without modeling of the bidding environment. Our technique performs the best on the open dataset and provides stable budget management even under extreme sparsity. The MCMF is applied in our real RTB production system, and we obtain 2.69% more conversions with 2.46% fewer expenditures. |
2308.02000 | Junyan Cheng | Junyan Cheng and Peter Chin | On the Transition from Neural Representation to Symbolic Knowledge | null | null | null | null | cs.AI cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Bridging the huge disparity between neural and symbolic representation can
potentially enable the incorporation of symbolic thinking into neural networks
from essence. Motivated by how human gradually builds complex symbolic
representation from the prototype symbols that are learned through perception
and environmental interactions. We propose a Neural-Symbolic Transitional
Dictionary Learning (TDL) framework that employs an EM algorithm to learn a
transitional representation of data that compresses high-dimension information
of visual parts of an input into a set of tensors as neural variables and
discovers the implicit predicate structure in a self-supervised way. We
implement the framework with a diffusion model by regarding the decomposition
of input as a cooperative game, then learn predicates by prototype clustering.
We additionally use RL, enabled by the Markovian property of diffusion models, to further
tune the learned prototypes by incorporating subjective factors. Extensive
experiments on 3 abstract compositional visual objects datasets that require
the model to segment parts without any visual features like texture, color, or
shadows apart from shape, and on 3 neural/symbolic downstream tasks, demonstrate that the
learned representation enables interpretable decomposition of visual input and
smooth adaptation to downstream tasks, capabilities not available with existing
methods.
| [
{
"created": "Thu, 3 Aug 2023 19:29:35 GMT",
"version": "v1"
}
] | 2023-08-07 | [
[
"Cheng",
"Junyan",
""
],
[
"Chin",
"Peter",
""
]
] | Bridging the huge disparity between neural and symbolic representation can potentially enable the incorporation of symbolic thinking into neural networks at a fundamental level. Motivated by how humans gradually build complex symbolic representations from prototype symbols learned through perception and environmental interactions, we propose a Neural-Symbolic Transitional Dictionary Learning (TDL) framework that employs an EM algorithm to learn a transitional representation of data that compresses high-dimension information of visual parts of an input into a set of tensors as neural variables and discovers the implicit predicate structure in a self-supervised way. We implement the framework with a diffusion model by regarding the decomposition of input as a cooperative game, then learn predicates by prototype clustering. We additionally use RL, enabled by the Markovian property of diffusion models, to further tune the learned prototypes by incorporating subjective factors. Extensive experiments on 3 abstract compositional visual objects datasets that require the model to segment parts without any visual features like texture, color, or shadows apart from shape, and on 3 neural/symbolic downstream tasks, demonstrate that the learned representation enables interpretable decomposition of visual input and smooth adaptation to downstream tasks, capabilities not available with existing methods. |
2403.14646 | Adil Rasheed Professor | Adil Rasheed, Florian Stadtmann, Eivind Fonn, Mandar Tabib, Vasileios
Tsiolakis, Balram Panjwani, Kjetil Andre Johannessen, Trond Kvamsdal, Omer
San, John Olav Tande, Idar Barstad, Tore Christiansen, Elling Rishoff, Lars
Fr{\o}yd, Tore Rasmussen | Digital Twin for Wind Energy: Latest updates from the NorthWind project | null | null | null | null | cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | NorthWind, a collaborative research initiative supported by the Research
Council of Norway, industry stakeholders, and research partners, aims to
advance cutting-edge research and innovation in wind energy. The core mission
is to reduce wind power costs and foster sustainable growth, with a key focus
on the development of digital twins. A digital twin is a virtual representation
of physical assets or processes that uses data and simulators to enable
real-time forecasting, optimization, monitoring, control and informed
decision-making. Recently, a hierarchical scale ranging from 0 to 5 (0 -
Standalone, 1 - Descriptive, 2 - Diagnostic, 3 - Predictive, 4 - Prescriptive,
5 - Autonomous has been introduced within the NorthWind project to assess the
capabilities of digital twins. This paper elaborates on our progress in
constructing digital twins for wind farms and their components across various
capability levels.
| [
{
"created": "Wed, 21 Feb 2024 22:53:39 GMT",
"version": "v1"
},
{
"created": "Tue, 26 Mar 2024 08:47:52 GMT",
"version": "v2"
}
] | 2024-03-27 | [
[
"Rasheed",
"Adil",
""
],
[
"Stadtmann",
"Florian",
""
],
[
"Fonn",
"Eivind",
""
],
[
"Tabib",
"Mandar",
""
],
[
"Tsiolakis",
"Vasileios",
""
],
[
"Panjwani",
"Balram",
""
],
[
"Johannessen",
"Kjetil Andre",
""
],
[
"Kvamsdal",
"Trond",
""
],
[
"San",
"Omer",
""
],
[
"Tande",
"John Olav",
""
],
[
"Barstad",
"Idar",
""
],
[
"Christiansen",
"Tore",
""
],
[
"Rishoff",
"Elling",
""
],
[
"Frøyd",
"Lars",
""
],
[
"Rasmussen",
"Tore",
""
]
] | NorthWind, a collaborative research initiative supported by the Research Council of Norway, industry stakeholders, and research partners, aims to advance cutting-edge research and innovation in wind energy. The core mission is to reduce wind power costs and foster sustainable growth, with a key focus on the development of digital twins. A digital twin is a virtual representation of physical assets or processes that uses data and simulators to enable real-time forecasting, optimization, monitoring, control and informed decision-making. Recently, a hierarchical scale ranging from 0 to 5 (0 - Standalone, 1 - Descriptive, 2 - Diagnostic, 3 - Predictive, 4 - Prescriptive, 5 - Autonomous) has been introduced within the NorthWind project to assess the capabilities of digital twins. This paper elaborates on our progress in constructing digital twins for wind farms and their components across various capability levels. |
1307.7042 | Andrzej Kapanowski | Andrzej Kapanowski | Python for education: permutations | 26 pages, 1 figure, 2 tables | The Python Papers 9, 3 (2014) | null | null | cs.MS math.HO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Python implementation of permutations is presented. Three classes are
introduced: Perm for permutations, Group for permutation groups, and PermError
to report any errors for both classes. The class Perm is based on Python
dictionaries and utilizes cycle notation. The methods of calculation for the
perm order, parity, ranking and unranking are given. A random permutation
generation is also shown. The class Group is very simple and it is also based
on dictionaries. It mainly presents the permutation group interface,
with methods for the group order, subgroups (normalizer, centralizer,
center, stabilizer), orbits, and several tests. The corresponding Python code
is contained in the modules perms and groups.
| [
{
"created": "Fri, 26 Jul 2013 14:18:21 GMT",
"version": "v1"
}
] | 2014-06-17 | [
[
"Kapanowski",
"Andrzej",
""
]
] | Python implementation of permutations is presented. Three classes are introduced: Perm for permutations, Group for permutation groups, and PermError to report any errors for both classes. The class Perm is based on Python dictionaries and utilizes cycle notation. The methods of calculation for the perm order, parity, ranking and unranking are given. A random permutation generation is also shown. The class Group is very simple and it is also based on dictionaries. It mainly presents the permutation group interface, with methods for the group order, subgroups (normalizer, centralizer, center, stabilizer), orbits, and several tests. The corresponding Python code is contained in the modules perms and groups. |
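The dictionary-backed `Perm` design described in this abstract can be sketched as follows. This is a minimal illustrative class: the `order` and `__mul__` methods and the composition convention are assumptions for the sketch, not the actual interface of the paper's `perms` module:

```python
from math import gcd

class Perm:
    """Permutation stored as a dict mapping i -> sigma(i)."""
    def __init__(self, mapping=None):
        self.map = dict(mapping or {})

    def __call__(self, i):
        return self.map.get(i, i)  # points outside the support are fixed

    def __mul__(self, other):
        # composition: (self * other)(i) = self(other(i))
        keys = set(self.map) | set(other.map)
        return Perm({i: self(other(i)) for i in keys})

    def order(self):
        # order = lcm of cycle lengths
        seen, result = set(), 1
        for start in self.map:
            if start in seen:
                continue
            length, j = 0, start
            while j not in seen:
                seen.add(j)
                j = self(j)
                length += 1
            result = result * length // gcd(result, length)
        return result

p = Perm({0: 1, 1: 2, 2: 0})   # the 3-cycle (0 1 2)
q = Perm({0: 1, 1: 0})         # the transposition (0 1)
print(p.order())               # 3
print((p * p * p)(0))          # p cubed is the identity, so 0
```

The dictionary representation makes fixed points implicit, which matches the cycle-notation viewpoint the abstract mentions.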
2406.03792 | Naibin Gu | Naibin Gu, Peng Fu, Xiyu Liu, Bowen Shen, Zheng Lin, Weiping Wang | Light-PEFT: Lightening Parameter-Efficient Fine-Tuning via Early Pruning | Findings of ACL 2024 | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Parameter-efficient fine-tuning (PEFT) has emerged as the predominant
technique for fine-tuning in the era of large language models. However,
existing PEFT methods still have inadequate training efficiency. Firstly, the
utilization of large-scale foundation models during the training process is
excessively redundant for certain fine-tuning tasks. Secondly, as the model
size increases, the growth in trainable parameters of empirically added PEFT
modules becomes non-negligible and redundant, leading to inefficiency. To
achieve task-specific efficient fine-tuning, we propose the Light-PEFT
framework, which includes two methods: Masked Early Pruning of the Foundation
Model and Multi-Granularity Early Pruning of PEFT. The Light-PEFT framework
allows for the simultaneous estimation of redundant parameters in both the
foundation model and PEFT modules during the early stage of training. These
parameters can then be pruned for more efficient fine-tuning. We validate our
approach on GLUE, SuperGLUE, QA tasks, and various models. With Light-PEFT,
parameters of the foundation model can be pruned by up to over 40%, while still
controlling trainable parameters to be only 25% of the original PEFT method.
Compared to utilizing the PEFT method directly, Light-PEFT achieves training
and inference speedup, reduces memory usage, and maintains comparable
performance and the plug-and-play feature of PEFT.
| [
{
"created": "Thu, 6 Jun 2024 07:03:29 GMT",
"version": "v1"
}
] | 2024-06-07 | [
[
"Gu",
"Naibin",
""
],
[
"Fu",
"Peng",
""
],
[
"Liu",
"Xiyu",
""
],
[
"Shen",
"Bowen",
""
],
[
"Lin",
"Zheng",
""
],
[
"Wang",
"Weiping",
""
]
] | Parameter-efficient fine-tuning (PEFT) has emerged as the predominant technique for fine-tuning in the era of large language models. However, existing PEFT methods still have inadequate training efficiency. Firstly, the utilization of large-scale foundation models during the training process is excessively redundant for certain fine-tuning tasks. Secondly, as the model size increases, the growth in trainable parameters of empirically added PEFT modules becomes non-negligible and redundant, leading to inefficiency. To achieve task-specific efficient fine-tuning, we propose the Light-PEFT framework, which includes two methods: Masked Early Pruning of the Foundation Model and Multi-Granularity Early Pruning of PEFT. The Light-PEFT framework allows for the simultaneous estimation of redundant parameters in both the foundation model and PEFT modules during the early stage of training. These parameters can then be pruned for more efficient fine-tuning. We validate our approach on GLUE, SuperGLUE, QA tasks, and various models. With Light-PEFT, parameters of the foundation model can be pruned by up to over 40%, while still controlling trainable parameters to be only 25% of the original PEFT method. Compared to utilizing the PEFT method directly, Light-PEFT achieves training and inference speedup, reduces memory usage, and maintains comparable performance and the plug-and-play feature of PEFT. |
2406.10887 | Decheng Liu | Decheng Liu, Qixuan Su, Chunlei Peng, Nannan Wang, Xinbo Gao | Imperceptible Face Forgery Attack via Adversarial Semantic Mask | The code is publicly available | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the great development of generative model techniques, face forgery
detection draws more and more attention in the related field. Researchers find
that existing face forgery models are still vulnerable to adversarial examples
with generated pixel perturbations in the global image. These generated
adversarial samples still can't achieve satisfactory performance because of the
high detectability. To address these problems, we propose an Adversarial
Semantic Mask Attack framework (ASMA) which can generate adversarial examples
with good transferability and invisibility. Specifically, we propose a novel
adversarial semantic mask generative model, which can constrain generated
perturbations in local semantic regions for good stealthiness. The designed
adaptive semantic mask selection strategy can effectively leverage the class
activation values of different semantic regions, and further ensure better
attack transferability and stealthiness. Extensive experiments on the public
face forgery dataset prove the proposed method achieves superior performance
compared with several representative adversarial attack methods. The code is
publicly available at https://github.com/clawerO-O/ASMA.
| [
{
"created": "Sun, 16 Jun 2024 10:38:11 GMT",
"version": "v1"
}
] | 2024-06-18 | [
[
"Liu",
"Decheng",
""
],
[
"Su",
"Qixuan",
""
],
[
"Peng",
"Chunlei",
""
],
[
"Wang",
"Nannan",
""
],
[
"Gao",
"Xinbo",
""
]
] | With the great development of generative model techniques, face forgery detection draws more and more attention in the related field. Researchers find that existing face forgery models are still vulnerable to adversarial examples with generated pixel perturbations in the global image. These generated adversarial samples still can't achieve satisfactory performance because of the high detectability. To address these problems, we propose an Adversarial Semantic Mask Attack framework (ASMA) which can generate adversarial examples with good transferability and invisibility. Specifically, we propose a novel adversarial semantic mask generative model, which can constrain generated perturbations in local semantic regions for good stealthiness. The designed adaptive semantic mask selection strategy can effectively leverage the class activation values of different semantic regions, and further ensure better attack transferability and stealthiness. Extensive experiments on the public face forgery dataset prove the proposed method achieves superior performance compared with several representative adversarial attack methods. The code is publicly available at https://github.com/clawerO-O/ASMA. |
2203.00798 | Jingxi Xu | Jingxi Xu, Shuran Song, Matei Ciocarlie | TANDEM: Learning Joint Exploration and Decision Making with Tactile
Sensors | Accepted to Robotics and Automation Letters (RA-L) and International
Conference on Intelligent Robots and Systems (IROS) 2022 | null | null | null | cs.RO cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Inspired by the human ability to perform complex manipulation in the complete
absence of vision (like retrieving an object from a pocket), the robotic
manipulation field is motivated to develop new methods for tactile-based object
interaction. However, tactile sensing presents the challenge of being an active
sensing modality: a touch sensor provides sparse, local data, and must be used
in conjunction with effective exploration strategies in order to collect
information. In this work, we focus on the process of guiding tactile
exploration, and its interplay with task-related decision making. We propose
TANDEM (TActile exploration aNd DEcision Making), an architecture to learn
efficient exploration strategies in conjunction with decision making. Our
approach is based on separate but co-trained modules for exploration and
discrimination. We demonstrate this method on a tactile object recognition
task, where a robot equipped with a touch sensor must explore and identify an
object from a known set based on binary contact signals alone. TANDEM achieves
higher accuracy with fewer actions than alternative methods and is also shown
to be more robust to sensor noise.
| [
{
"created": "Tue, 1 Mar 2022 23:55:09 GMT",
"version": "v1"
},
{
"created": "Sun, 26 Jun 2022 06:11:33 GMT",
"version": "v2"
},
{
"created": "Thu, 21 Jul 2022 06:52:17 GMT",
"version": "v3"
}
] | 2022-07-22 | [
[
"Xu",
"Jingxi",
""
],
[
"Song",
"Shuran",
""
],
[
"Ciocarlie",
"Matei",
""
]
] | Inspired by the human ability to perform complex manipulation in the complete absence of vision (like retrieving an object from a pocket), the robotic manipulation field is motivated to develop new methods for tactile-based object interaction. However, tactile sensing presents the challenge of being an active sensing modality: a touch sensor provides sparse, local data, and must be used in conjunction with effective exploration strategies in order to collect information. In this work, we focus on the process of guiding tactile exploration, and its interplay with task-related decision making. We propose TANDEM (TActile exploration aNd DEcision Making), an architecture to learn efficient exploration strategies in conjunction with decision making. Our approach is based on separate but co-trained modules for exploration and discrimination. We demonstrate this method on a tactile object recognition task, where a robot equipped with a touch sensor must explore and identify an object from a known set based on binary contact signals alone. TANDEM achieves higher accuracy with fewer actions than alternative methods and is also shown to be more robust to sensor noise. |
2305.02201 | Lingyao Li | Lingyao Li, Zihui Ma, Lizhou Fan, Sanggyu Lee, Huizi Yu, Libby
Hemphill | ChatGPT in education: A discourse analysis of worries and concerns on
social media | null | null | null | null | cs.CY | http://creativecommons.org/licenses/by/4.0/ | The rapid advancements in generative AI models present new opportunities in
the education sector. However, it is imperative to acknowledge and address the
potential risks and concerns that may arise with their use. We analyzed Twitter
data to identify key concerns related to the use of ChatGPT in education. We
employed BERT-based topic modeling to conduct a discourse analysis and social
network analysis to identify influential users in the conversation. While
Twitter users generally expressed a positive attitude towards the use of
ChatGPT, their concerns converged to five specific categories: academic
integrity, impact on learning outcomes and skill development, limitation of
capabilities, policy and social concerns, and workforce challenges. We also
found that users from the tech, education, and media fields were often
implicated in the conversation, while education and tech individual users led
the discussion of concerns. Based on these findings, the study provides several
implications for policymakers, tech companies and individuals, educators, and
media agencies. In summary, our study underscores the importance of responsible
and ethical use of AI in education and highlights the need for collaboration
among stakeholders to regulate AI policy.
| [
{
"created": "Sat, 29 Apr 2023 22:08:42 GMT",
"version": "v1"
}
] | 2023-05-04 | [
[
"Li",
"Lingyao",
""
],
[
"Ma",
"Zihui",
""
],
[
"Fan",
"Lizhou",
""
],
[
"Lee",
"Sanggyu",
""
],
[
"Yu",
"Huizi",
""
],
[
"Hemphill",
"Libby",
""
]
] | The rapid advancements in generative AI models present new opportunities in the education sector. However, it is imperative to acknowledge and address the potential risks and concerns that may arise with their use. We analyzed Twitter data to identify key concerns related to the use of ChatGPT in education. We employed BERT-based topic modeling to conduct a discourse analysis and social network analysis to identify influential users in the conversation. While Twitter users generally expressed a positive attitude towards the use of ChatGPT, their concerns converged to five specific categories: academic integrity, impact on learning outcomes and skill development, limitation of capabilities, policy and social concerns, and workforce challenges. We also found that users from the tech, education, and media fields were often implicated in the conversation, while education and tech individual users led the discussion of concerns. Based on these findings, the study provides several implications for policymakers, tech companies and individuals, educators, and media agencies. In summary, our study underscores the importance of responsible and ethical use of AI in education and highlights the need for collaboration among stakeholders to regulate AI policy. |
2312.17591 | Dongfang Li | Dongfang Li, Baotian Hu, Qingcai Chen, Shan He | Towards Faithful Explanations for Text Classification with Robustness
Improvement and Explanation Guided Training | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Feature attribution methods highlight the important input tokens as
explanations to model predictions, which have been widely applied to deep
neural networks towards trustworthy AI. However, recent works show that
explanations provided by these methods face challenges of being faithful and
robust. In this paper, we propose a method with Robustness improvement and
Explanation Guided training towards more faithful EXplanations (REGEX) for text
classification. First, we improve model robustness by input gradient
regularization technique and virtual adversarial training. Secondly, we use
salient ranking to mask noisy tokens and maximize the similarity between model
attention and feature attribution, which can be seen as a self-training
procedure without importing other external information. We conduct extensive
experiments on six datasets with five attribution methods, and also evaluate
the faithfulness in the out-of-domain setting. The results show that REGEX
improves fidelity metrics of explanations in all settings and further achieves
consistent gains based on two randomization tests. Moreover, we show that using
highlight explanations produced by REGEX to train select-then-predict models
results in comparable task performance to the end-to-end method.
| [
{
"created": "Fri, 29 Dec 2023 13:07:07 GMT",
"version": "v1"
}
] | 2024-01-01 | [
[
"Li",
"Dongfang",
""
],
[
"Hu",
"Baotian",
""
],
[
"Chen",
"Qingcai",
""
],
[
"He",
"Shan",
""
]
] | Feature attribution methods highlight the important input tokens as explanations to model predictions, which have been widely applied to deep neural networks towards trustworthy AI. However, recent works show that explanations provided by these methods face challenges of being faithful and robust. In this paper, we propose a method with Robustness improvement and Explanation Guided training towards more faithful EXplanations (REGEX) for text classification. First, we improve model robustness by input gradient regularization technique and virtual adversarial training. Secondly, we use salient ranking to mask noisy tokens and maximize the similarity between model attention and feature attribution, which can be seen as a self-training procedure without importing other external information. We conduct extensive experiments on six datasets with five attribution methods, and also evaluate the faithfulness in the out-of-domain setting. The results show that REGEX improves fidelity metrics of explanations in all settings and further achieves consistent gains based on two randomization tests. Moreover, we show that using highlight explanations produced by REGEX to train select-then-predict models results in comparable task performance to the end-to-end method. |
2012.00190 | Sven Buechel | Sven Buechel, Luise Modersohn, and Udo Hahn | Towards Label-Agnostic Emotion Embeddings | EMNLP 2021 camera-ready version | null | null | null | cs.CL cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Research in emotion analysis is scattered across different label formats
(e.g., polarity types, basic emotion categories, and affective dimensions),
linguistic levels (word vs. sentence vs. discourse), and, of course, (few
well-resourced but much more under-resourced) natural languages and text genres
(e.g., product reviews, tweets, news). The resulting heterogeneity makes data
and software developed under these conflicting constraints hard to compare and
challenging to integrate. To resolve this unsatisfactory state of affairs we
here propose a training scheme that learns a shared latent representation of
emotion independent from different label formats, natural languages, and even
disparate model architectures. Experiments on a wide range of datasets indicate
that this approach yields the desired interoperability without penalizing
prediction quality. Code and data are archived under DOI
10.5281/zenodo.5466068.
| [
{
"created": "Tue, 1 Dec 2020 00:54:13 GMT",
"version": "v1"
},
{
"created": "Sat, 6 Nov 2021 20:48:40 GMT",
"version": "v2"
}
] | 2021-11-09 | [
[
"Buechel",
"Sven",
""
],
[
"Modersohn",
"Luise",
""
],
[
"Hahn",
"Udo",
""
]
] | Research in emotion analysis is scattered across different label formats (e.g., polarity types, basic emotion categories, and affective dimensions), linguistic levels (word vs. sentence vs. discourse), and, of course, (few well-resourced but much more under-resourced) natural languages and text genres (e.g., product reviews, tweets, news). The resulting heterogeneity makes data and software developed under these conflicting constraints hard to compare and challenging to integrate. To resolve this unsatisfactory state of affairs we here propose a training scheme that learns a shared latent representation of emotion independent from different label formats, natural languages, and even disparate model architectures. Experiments on a wide range of datasets indicate that this approach yields the desired interoperability without penalizing prediction quality. Code and data are archived under DOI 10.5281/zenodo.5466068. |
2012.05001 | Karim Ali | Karim M. Ali and Amr Guaily | Dual perspective method for solving the point in a polygon problem | 5 pages, 4 figures, 1 table containing 6 images, 1 algorithm | null | null | null | cs.CG physics.flu-dyn | http://creativecommons.org/licenses/by/4.0/ | A novel method has been introduced to solve a point inclusion in a polygon
problem. The method is applicable to convex as well as non-convex polygons
which are not self-intersecting. The introduced method is independent of
rounding off errors, which gives it leverage over some methods prone to this
problem. A brief summary of the methods used to solve this problem is presented
and the introduced method is discussed. The introduced method is compared to
other existing methods from the point of view of computational cost. This
method was inspired from a Computational Fluid Dynamics (CFD) application using
grids not fitted to the simulated objects.
| [
{
"created": "Wed, 9 Dec 2020 12:16:59 GMT",
"version": "v1"
}
] | 2020-12-10 | [
[
"Ali",
"Karim M.",
""
],
[
"Guaily",
"Amr",
""
]
] | A novel method has been introduced to solve a point inclusion in a polygon problem. The method is applicable to convex as well as non-convex polygons which are not self-intersecting. The introduced method is independent of rounding off errors, which gives it leverage over some methods prone to this problem. A brief summary of the methods used to solve this problem is presented and the introduced method is discussed. The introduced method is compared to other existing methods from the point of view of computational cost. This method was inspired from a Computational Fluid Dynamics (CFD) application using grids not fitted to the simulated objects. |
1811.08234 | Abhishek Bichhawat | Abhishek Bichhawat and Matt Fredrikson and Jean Yang and Akash Trehan | Contextual and Granular Policy Enforcement in Database-backed
Applications | null | null | 10.1145/3320269.3384759 | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Database-backed applications rely on inlined policy checks to process users'
private and confidential data in a policy-compliant manner as traditional
database access control mechanisms cannot enforce complex policies. However,
application bugs due to missed checks are common in such applications, which
result in data breaches. While separating policy from code is a natural
solution, many data protection policies specify restrictions based on the
context in which data is accessed and how the data is used. Enforcing these
restrictions automatically presents significant challenges, as the information
needed to determine context requires a tight coupling between policy
enforcement and an application's implementation. We present Estrela, a
framework for enforcing contextual and granular data access policies. Working
from the observation that API endpoints can be associated with salient
contextual information in most database-backed applications, Estrela allows
developers to specify API-specific restrictions on data access and use. Estrela
provides a clean separation between policy specification and the application's
implementation, which facilitates easier auditing and maintenance of policies.
Policies in Estrela consist of pre-evaluation and post-evaluation conditions,
which provide the means to modulate database access before a query is issued,
and to impose finer-grained constraints on information release after the
evaluation of query, respectively. We build a prototype of Estrela and apply it
to retrofit several real world applications (from 1000-80k LOC) to enforce
different contextual policies. Our evaluation shows that Estrela can enforce
policies with minimal overheads.
| [
{
"created": "Tue, 20 Nov 2018 13:18:47 GMT",
"version": "v1"
},
{
"created": "Mon, 15 Jul 2019 15:23:31 GMT",
"version": "v2"
},
{
"created": "Fri, 28 Feb 2020 15:18:41 GMT",
"version": "v3"
},
{
"created": "Fri, 13 Mar 2020 19:09:27 GMT",
"version": "v4"
}
] | 2020-03-17 | [
[
"Bichhawat",
"Abhishek",
""
],
[
"Fredrikson",
"Matt",
""
],
[
"Yang",
"Jean",
""
],
[
"Trehan",
"Akash",
""
]
] | Database-backed applications rely on inlined policy checks to process users' private and confidential data in a policy-compliant manner as traditional database access control mechanisms cannot enforce complex policies. However, application bugs due to missed checks are common in such applications, which result in data breaches. While separating policy from code is a natural solution, many data protection policies specify restrictions based on the context in which data is accessed and how the data is used. Enforcing these restrictions automatically presents significant challenges, as the information needed to determine context requires a tight coupling between policy enforcement and an application's implementation. We present Estrela, a framework for enforcing contextual and granular data access policies. Working from the observation that API endpoints can be associated with salient contextual information in most database-backed applications, Estrela allows developers to specify API-specific restrictions on data access and use. Estrela provides a clean separation between policy specification and the application's implementation, which facilitates easier auditing and maintenance of policies. Policies in Estrela consist of pre-evaluation and post-evaluation conditions, which provide the means to modulate database access before a query is issued, and to impose finer-grained constraints on information release after the evaluation of query, respectively. We build a prototype of Estrela and apply it to retrofit several real world applications (from 1000-80k LOC) to enforce different contextual policies. Our evaluation shows that Estrela can enforce policies with minimal overheads. |
1405.4506 | Limin Wang | Xiaojiang Peng and Limin Wang and Xingxing Wang and Yu Qiao | Bag of Visual Words and Fusion Methods for Action Recognition:
Comprehensive Study and Good Practice | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Video based action recognition is one of the important and challenging
problems in computer vision research. Bag of Visual Words model (BoVW) with
local features has become the most popular method and obtained the
state-of-the-art performance on several realistic datasets, such as the HMDB51,
UCF50, and UCF101. BoVW is a general pipeline to construct a global
representation from a set of local features, which is mainly composed of five
steps: (i) feature extraction, (ii) feature pre-processing, (iii) codebook
generation, (iv) feature encoding, and (v) pooling and normalization. Many
efforts have been made in each step independently in different scenarios and
their effect on action recognition is still unknown. Meanwhile, video data
exhibits different views of visual pattern, such as static appearance and
motion dynamics. Multiple descriptors are usually extracted to represent these
different views. Many feature fusion methods have been developed in other areas
and their influence on action recognition has never been investigated before.
This paper aims to provide a comprehensive study of all steps in BoVW and
different fusion methods, and uncover some good practice to produce a
state-of-the-art action recognition system. Specifically, we explore two kinds
of local features, ten kinds of encoding methods, eight kinds of pooling and
normalization strategies, and three kinds of fusion methods. We conclude that
every step is crucial for contributing to the final recognition rate.
Furthermore, based on our comprehensive study, we propose a simple yet
effective representation, called hybrid representation, by exploring the
complementarity of different BoVW frameworks and local descriptors. Using this
representation, we obtain the state-of-the-art on the three challenging
datasets: HMDB51 (61.1%), UCF50 (92.3%), and UCF101 (87.9%).
| [
{
"created": "Sun, 18 May 2014 13:56:07 GMT",
"version": "v1"
}
] | 2014-05-20 | [
[
"Peng",
"Xiaojiang",
""
],
[
"Wang",
"Limin",
""
],
[
"Wang",
"Xingxing",
""
],
[
"Qiao",
"Yu",
""
]
] | Video based action recognition is one of the important and challenging problems in computer vision research. Bag of Visual Words model (BoVW) with local features has become the most popular method and obtained the state-of-the-art performance on several realistic datasets, such as the HMDB51, UCF50, and UCF101. BoVW is a general pipeline to construct a global representation from a set of local features, which is mainly composed of five steps: (i) feature extraction, (ii) feature pre-processing, (iii) codebook generation, (iv) feature encoding, and (v) pooling and normalization. Many efforts have been made in each step independently in different scenarios and their effect on action recognition is still unknown. Meanwhile, video data exhibits different views of visual pattern, such as static appearance and motion dynamics. Multiple descriptors are usually extracted to represent these different views. Many feature fusion methods have been developed in other areas and their influence on action recognition has never been investigated before. This paper aims to provide a comprehensive study of all steps in BoVW and different fusion methods, and uncover some good practice to produce a state-of-the-art action recognition system. Specifically, we explore two kinds of local features, ten kinds of encoding methods, eight kinds of pooling and normalization strategies, and three kinds of fusion methods. We conclude that every step is crucial for contributing to the final recognition rate. Furthermore, based on our comprehensive study, we propose a simple yet effective representation, called hybrid representation, by exploring the complementarity of different BoVW frameworks and local descriptors. Using this representation, we obtain the state-of-the-art on the three challenging datasets: HMDB51 (61.1%), UCF50 (92.3%), and UCF101 (87.9%). |
2004.08364 | Patrick Scheffe | Patrick Scheffe, Janis Maczijewski, Maximilian Kloock, Alexandru
Kampmann, Andreas Derks, Stefan Kowalewski, Bassam Alrifaee | Networked and Autonomous Model-scale Vehicles for Experiments in
Research and Education | This work has been accepted to IFAC for publication under a Creative
Commons Licence CC-BY-NC-ND | IFAC-PapersOnLine Volume 53, Issue 2, 2020, Pages 17332-17337 | 10.1016/j.ifacol.2020.12.1821 | null | cs.RO cs.MA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents the $\mathrm{\mu}$Car, a 1:18 model-scale vehicle with
Ackermann steering geometry developed for experiments in networked and
autonomous driving in research and education. The vehicle is open source,
moderately costed and highly flexible, which allows for many applications. It
is equipped with an inertial measurement unit and an odometer and obtains its
pose via WLAN from an indoor positioning system. The two supported operating
modes for controlling the vehicle are (1) computing control inputs on external
hardware, transmitting them via WLAN and applying received inputs to the
actuators and (2) transmitting a reference trajectory via WLAN, which is then
followed by a controller running on the onboard Raspberry Pi Zero W. The design
allows identical vehicles to be used at the same time in order to conduct
experiments with a large number of networked agents.
| [
{
"created": "Fri, 17 Apr 2020 17:39:57 GMT",
"version": "v1"
}
] | 2021-05-04 | [
[
"Scheffe",
"Patrick",
""
],
[
"Maczijewski",
"Janis",
""
],
[
"Kloock",
"Maximilian",
""
],
[
"Kampmann",
"Alexandru",
""
],
[
"Derks",
"Andreas",
""
],
[
"Kowalewski",
"Stefan",
""
],
[
"Alrifaee",
"Bassam",
""
]
] | This paper presents the $\mathrm{\mu}$Car, a 1:18 model-scale vehicle with Ackermann steering geometry developed for experiments in networked and autonomous driving in research and education. The vehicle is open source, moderately costed and highly flexible, which allows for many applications. It is equipped with an inertial measurement unit and an odometer and obtains its pose via WLAN from an indoor positioning system. The two supported operating modes for controlling the vehicle are (1) computing control inputs on external hardware, transmitting them via WLAN and applying received inputs to the actuators and (2) transmitting a reference trajectory via WLAN, which is then followed by a controller running on the onboard Raspberry Pi Zero W. The design allows identical vehicles to be used at the same time in order to conduct experiments with a large number of networked agents. |
2209.05989 | Jiacheng Yin | Jiacheng Yin, Wenwen Li, Xidong Wang, Xiaozhou Ye, Ye Ouyang | 4G 5G Cell-level Multi-indicator Forecasting based on Dense-MLP | 9 pages, 6 figures. Published at ITU Journal on Future and Evolving
Technologies, viewable at https://www.itu.int/pub/S-JNL-VOL3.ISSUE3-2022-A03 | ITU Journal on Future and Evolving Technologies, Volume 3 (2022),
Issue 3 - AI and machine learning solutions in 5G and future networks, Pages
21-29 | null | null | cs.NI cs.LG | http://creativecommons.org/publicdomain/zero/1.0/ | With the development of 4G/5G, the rapid growth of traffic has caused a large
number of cell indicators to exceed the warning threshold, and network quality
has deteriorated. It is necessary for operators to solve the congestion in
advance and effectively to guarantee the quality of user experience. Cell-level
multi-indicator forecasting is the foundation task for proactive complex
network optimization. In this paper, we propose the 4G/5G Cell-level
multi-indicator forecasting method based on the dense-Multi-Layer Perceptron
(MLP) neural network, which adds additional fully-connected layers between
non-adjacent layers in an MLP network. The model forecasted the following
week's traffic indicators of 13000 cells according to the six-month historical
indicators of 65000 cells in the 4G&5G network, which got the highest weighted
MAPE score (0.2484) in the China Mobile problem statement in the ITU-T AI/ML in
5G Challenge 2021. Furthermore, the proposed model has been integrated into the
AsiaInfo 4G/5G energy-saving system and deployed in Jiangsu Province of China.
| [
{
"created": "Fri, 22 Jul 2022 05:03:27 GMT",
"version": "v1"
}
] | 2022-09-14 | [
[
"Yin",
"Jiacheng",
""
],
[
"Li",
"Wenwen",
""
],
[
"Wang",
"Xidong",
""
],
[
"Ye",
"Xiaozhou",
""
],
[
"Ouyang",
"Ye",
""
]
] | With the development of 4G/5G, the rapid growth of traffic has caused a large number of cell indicators to exceed the warning threshold, and network quality has deteriorated. It is necessary for operators to solve the congestion in advance and effectively to guarantee the quality of user experience. Cell-level multi-indicator forecasting is the foundation task for proactive complex network optimization. In this paper, we propose the 4G/5G Cell-level multi-indicator forecasting method based on the dense-Multi-Layer Perceptron (MLP) neural network, which adds additional fully-connected layers between non-adjacent layers in an MLP network. The model forecasted the following week's traffic indicators of 13000 cells according to the six-month historical indicators of 65000 cells in the 4G&5G network, which got the highest weighted MAPE score (0.2484) in the China Mobile problem statement in the ITU-T AI/ML in 5G Challenge 2021. Furthermore, the proposed model has been integrated into the AsiaInfo 4G/5G energy-saving system and deployed in Jiangsu Province of China. |
2007.08364 | Tadas Baltrusaitis | Tadas Baltrusaitis, Erroll Wood, Virginia Estellers, Charlie Hewitt,
Sebastian Dziadzio, Marek Kowalski, Matthew Johnson, Thomas J. Cashman, and
Jamie Shotton | A high fidelity synthetic face framework for computer vision | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Analysis of faces is one of the core applications of computer vision, with
tasks ranging from landmark alignment and head pose estimation to expression
recognition and face recognition. However, building reliable
methods requires time-consuming data collection and often even more
time-consuming manual annotation, which can be unreliable. In our work we
propose synthesizing such facial data, including ground truth annotations that
would be almost impossible to acquire through manual annotation at the
consistency and scale possible through use of synthetic data. We use a
parametric face model together with hand-crafted assets, which enable us to
generate training data with unprecedented quality and diversity (varying shape,
texture, expression, pose, lighting, and hair).
| [
{
"created": "Thu, 16 Jul 2020 14:40:28 GMT",
"version": "v1"
}
] | 2020-07-17 | [
[
"Baltrusaitis",
"Tadas",
""
],
[
"Wood",
"Erroll",
""
],
[
"Estellers",
"Virginia",
""
],
[
"Hewitt",
"Charlie",
""
],
[
"Dziadzio",
"Sebastian",
""
],
[
"Kowalski",
"Marek",
""
],
[
"Johnson",
"Matthew",
""
],
[
"Cashman",
"Thomas J.",
""
],
[
"Shotton",
"Jamie",
""
]
] | Analysis of faces is one of the core applications of computer vision, with tasks ranging from landmark alignment and head pose estimation to expression recognition and face recognition. However, building reliable methods requires time-consuming data collection and often even more time-consuming manual annotation, which can be unreliable. In our work we propose synthesizing such facial data, including ground truth annotations that would be almost impossible to acquire through manual annotation at the consistency and scale possible through use of synthetic data. We use a parametric face model together with hand-crafted assets, which enable us to generate training data with unprecedented quality and diversity (varying shape, texture, expression, pose, lighting, and hair). |
2312.08898 | Xiao-Shan Gao | Yifan Zhu and Lijia Yu and Xiao-Shan Gao | Detection and Defense of Unlearnable Examples | AAAI 2024 | null | null | null | cs.LG cs.AI cs.CR | http://creativecommons.org/licenses/by/4.0/ | Privacy preservation has become increasingly critical with the emergence of
social media. Unlearnable examples have been proposed to avoid leaking personal
information on the Internet by degrading generalization abilities of deep
learning models. However, our study reveals that unlearnable examples are
easily detectable. We provide theoretical results on linear separability of
certain unlearnable poisoned datasets and simple network-based detection methods
that can identify all existing unlearnable examples, as demonstrated by
extensive experiments. Detectability of unlearnable examples with simple
networks motivates us to design a novel defense method. We propose using
stronger data augmentations coupled with adversarial noises generated by simple
networks, to degrade the detectability and thus provide effective defense
against unlearnable examples with a lower cost. Adversarial training with large
budgets is a widely-used defense method on unlearnable examples. We establish
quantitative criteria between the poison and adversarial budgets which
determine the existence of robust unlearnable examples or the failure of the
adversarial defense.
| [
{
"created": "Thu, 14 Dec 2023 12:59:20 GMT",
"version": "v1"
}
] | 2023-12-15 | [
[
"Zhu",
"Yifan",
""
],
[
"Yu",
"Lijia",
""
],
[
"Gao",
"Xiao-Shan",
""
]
] | Privacy preservation has become increasingly critical with the emergence of social media. Unlearnable examples have been proposed to avoid leaking personal information on the Internet by degrading generalization abilities of deep learning models. However, our study reveals that unlearnable examples are easily detectable. We provide theoretical results on linear separability of certain unlearnable poisoned datasets and simple network-based detection methods that can identify all existing unlearnable examples, as demonstrated by extensive experiments. Detectability of unlearnable examples with simple networks motivates us to design a novel defense method. We propose using stronger data augmentations coupled with adversarial noises generated by simple networks, to degrade the detectability and thus provide effective defense against unlearnable examples with a lower cost. Adversarial training with large budgets is a widely-used defense method on unlearnable examples. We establish quantitative criteria between the poison and adversarial budgets which determine the existence of robust unlearnable examples or the failure of the adversarial defense. |
2103.16143 | Constantinos Patsakis | Fran Casino, Nikolaos Totosis, Theodoros Apostolopoulos, Nikolaos
Lykousas, and Constantinos Patsakis | Analysis and Correlation of Visual Evidence in Campaigns of Malicious
Office Documents | null | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many malware campaigns use Microsoft (MS) Office documents as droppers to
download and execute their malicious payload. Such campaigns often use these
documents because MS Office is installed on billions of devices and because these
files allow the execution of arbitrary VBA code. Recent versions of MS Office
prevent the automatic execution of VBA macros, so malware authors try to
convince users to enable the content via images that, e.g., forge system or
technical errors. In this work, we leverage these visual elements to construct
lightweight malware signatures that can be applied with minimal effort. We test
and validate our approach using an extensive database of malware samples and
identify correlations between different campaigns that illustrate that some
campaigns are either using the same tools or that there is some collaboration
between them.
| [
{
"created": "Tue, 30 Mar 2021 07:56:38 GMT",
"version": "v1"
}
] | 2021-03-31 | [
[
"Casino",
"Fran",
""
],
[
"Totosis",
"Nikolaos",
""
],
[
"Apostolopoulos",
"Theodoros",
""
],
[
"Lykousas",
"Nikolaos",
""
],
[
"Patsakis",
"Constantinos",
""
]
] | Many malware campaigns use Microsoft (MS) Office documents as droppers to download and execute their malicious payload. Such campaigns often use these documents because MS Office is installed on billions of devices and because these files allow the execution of arbitrary VBA code. Recent versions of MS Office prevent the automatic execution of VBA macros, so malware authors try to convince users to enable the content via images that, e.g., forge system or technical errors. In this work, we leverage these visual elements to construct lightweight malware signatures that can be applied with minimal effort. We test and validate our approach using an extensive database of malware samples and identify correlations between different campaigns that illustrate that some campaigns are either using the same tools or that there is some collaboration between them. |
2106.01215 | Talha Bin Masood | Talha Bin Masood, Signe Sidwall Thygesen, Mathieu Linares, Alexei I.
Abrikosov, Vijay Natarajan, Ingrid Hotz | Visual Analysis of Electronic Densities and Transitions in Molecules | 15 pages, 9 figures, To appear in EuroVis 2021 | null | null | null | cs.HC cs.CG physics.chem-ph | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The study of electronic transitions within a molecule connected to the
absorption or emission of light is a common task in the design
of new materials. The transitions are complex quantum mechanical processes and
a detailed analysis requires a breakdown of these processes into components
that can be interpreted via characteristic chemical properties. We approach
these tasks by providing a detailed analysis of the electron density field.
This entails methods to quantify and visualize electron localization and
transfer from molecular subgroups combining spatial and abstract
representations. The core of our method uses geometric segmentation of the
electronic density field coupled with a graph-theoretic formulation of charge
transfer between molecular subgroups. The design of the methods has been guided
by the goal of providing a generic and objective analysis following fundamental
concepts. We illustrate the proposed approach using several case studies
involving the study of electronic transitions in different molecular systems.
| [
{
"created": "Wed, 2 Jun 2021 15:07:02 GMT",
"version": "v1"
}
] | 2021-06-03 | [
[
"Masood",
"Talha Bin",
""
],
[
"Thygesen",
"Signe Sidwall",
""
],
[
"Linares",
"Mathieu",
""
],
[
"Abrikosov",
"Alexei I.",
""
],
[
"Natarajan",
"Vijay",
""
],
[
"Hotz",
"Ingrid",
""
]
] | The study of electronic transitions within a molecule connected to the absorption or emission of light is a common task in the process of the design of new materials. The transitions are complex quantum mechanical processes and a detailed analysis requires a breakdown of these processes into components that can be interpreted via characteristic chemical properties. We approach these tasks by providing a detailed analysis of the electron density field. This entails methods to quantify and visualize electron localization and transfer from molecular subgroups combining spatial and abstract representations. The core of our method uses geometric segmentation of the electronic density field coupled with a graph-theoretic formulation of charge transfer between molecular subgroups. The design of the methods has been guided by the goal of providing a generic and objective analysis following fundamental concepts. We illustrate the proposed approach using several case studies involving the study of electronic transitions in different molecular systems. |
1106.5651 | Loet Leydesdorff | Loet Leydesdorff | Hyperincursive Cogitata and Incursive Cogitantes: Scholarly Discourse as
a Strongly Anticipatory System | arXiv admin note: substantial text overlap with arXiv:1011.3244 | International Journal of Computing Anticipatory Systems, 28,
173-186 | null | null | cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Strongly anticipatory systems-that is, systems which use models of themselves
for their further development-and which additionally may be able to run
hyperincursive routines-that is, develop only with reference to their future
states-cannot exist in res extensa, but can only be envisaged in res cogitans.
One needs incursive routines in cogitantes to instantiate these systems. Unlike
historical systems (with recursion), these hyper-incursive routines generate
redundancies by opening horizons of other possible states. Thus, intentional
systems can enrich our perceptions of the cases that have happened to occur.
The perspective of hindsight codified at the above-individual level enables us
furthermore to intervene technologically. The theory and computation of
anticipatory systems have made these loops between supra-individual
hyper-incursion, individual incursion (in instantiation), and historical
recursion accessible for modeling and empirical investigation.
| [
{
"created": "Tue, 28 Jun 2011 12:56:19 GMT",
"version": "v1"
},
{
"created": "Tue, 6 Jan 2015 06:01:18 GMT",
"version": "v2"
}
] | 2015-01-07 | [
[
"Leydesdorff",
"Loet",
""
]
] | Strongly anticipatory systems-that is, systems which use models of themselves for their further development-and which additionally may be able to run hyperincursive routines-that is, develop only with reference to their future states-cannot exist in res extensa, but can only be envisaged in res cogitans. One needs incursive routines in cogitantes to instantiate these systems. Unlike historical systems (with recursion), these hyper-incursive routines generate redundancies by opening horizons of other possible states. Thus, intentional systems can enrich our perceptions of the cases that have happened to occur. The perspective of hindsight codified at the above-individual level enables us furthermore to intervene technologically. The theory and computation of anticipatory systems have made these loops between supra-individual hyper-incursion, individual incursion (in instantiation), and historical recursion accessible for modeling and empirical investigation. |
1611.10212 | Antonis Achilleos | Luca Aceto, Antonis Achilleos, Adrian Francalanza, Anna
Ing\'olfsd\'ottir and S{\ae}var \"Orn Kjartansson | Determinizing Monitors for HML with Recursion | null | null | null | null | cs.LO cs.FL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We examine the determinization of monitors for HML with recursion. We
demonstrate that every monitor is equivalent to a deterministic one, which is
at most doubly exponential in size with respect to the original monitor. When
monitors are described as CCS-like processes, this doubly exponential bound is
optimal. When (deterministic) monitors are described as finite automata (as
their LTS), then they can be exponentially more succinct than their CCS process
form.
| [
{
"created": "Wed, 30 Nov 2016 15:21:01 GMT",
"version": "v1"
}
] | 2016-12-01 | [
[
"Aceto",
"Luca",
""
],
[
"Achilleos",
"Antonis",
""
],
[
"Francalanza",
"Adrian",
""
],
[
"Ingólfsdóttir",
"Anna",
""
],
[
"Kjartansson",
"Sævar Örn",
""
]
] | We examine the determinization of monitors for HML with recursion. We demonstrate that every monitor is equivalent to a deterministic one, which is at most doubly exponential in size with respect to the original monitor. When monitors are described as CCS-like processes, this doubly exponential bound is optimal. When (deterministic) monitors are described as finite automata (as their LTS), then they can be exponentially more succinct than their CCS process form. |
2403.02010 | Zhiyun Fan | Zhiyun Fan, Linhao Dong, Jun Zhang, Lu Lu, Zejun Ma | SA-SOT: Speaker-Aware Serialized Output Training for Multi-Talker ASR | null | null | null | null | cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-talker automatic speech recognition plays a crucial role in scenarios
involving multi-party interactions, such as meetings and conversations. Due to
its inherent complexity, this task has been receiving increasing attention.
Notably, serialized output training (SOT) stands out among various
approaches because of its simple architecture and exceptional performance.
However, the frequent speaker changes in token-level SOT (t-SOT) present
challenges for the autoregressive decoder in effectively utilizing context to
predict output sequences. To address this issue, we introduce a masked t-SOT
label, which serves as the cornerstone of an auxiliary training loss.
Additionally, we utilize a speaker similarity matrix to refine the
self-attention mechanism of the decoder. This strategic adjustment enhances
contextual relationships within the same speaker's tokens while minimizing
interactions between different speakers' tokens. We denote our method as
speaker-aware SOT (SA-SOT). Experiments on the Librispeech datasets demonstrate
that our SA-SOT obtains a relative cpWER reduction ranging from 12.75% to
22.03% on the multi-talker test sets. Furthermore, with more extensive
training, our method achieves an impressive cpWER of 3.41%, establishing a new
state-of-the-art result on the LibrispeechMix dataset.
| [
{
"created": "Mon, 4 Mar 2024 13:10:40 GMT",
"version": "v1"
}
] | 2024-03-05 | [
[
"Fan",
"Zhiyun",
""
],
[
"Dong",
"Linhao",
""
],
[
"Zhang",
"Jun",
""
],
[
"Lu",
"Lu",
""
],
[
"Ma",
"Zejun",
""
]
] | Multi-talker automatic speech recognition plays a crucial role in scenarios involving multi-party interactions, such as meetings and conversations. Due to its inherent complexity, this task has been receiving increasing attention. Notably, the serialized output training (SOT) stands out among various approaches because of its simplistic architecture and exceptional performance. However, the frequent speaker changes in token-level SOT (t-SOT) present challenges for the autoregressive decoder in effectively utilizing context to predict output sequences. To address this issue, we introduce a masked t-SOT label, which serves as the cornerstone of an auxiliary training loss. Additionally, we utilize a speaker similarity matrix to refine the self-attention mechanism of the decoder. This strategic adjustment enhances contextual relationships within the same speaker's tokens while minimizing interactions between different speakers' tokens. We denote our method as speaker-aware SOT (SA-SOT). Experiments on the Librispeech datasets demonstrate that our SA-SOT obtains a relative cpWER reduction ranging from 12.75% to 22.03% on the multi-talker test sets. Furthermore, with more extensive training, our method achieves an impressive cpWER of 3.41%, establishing a new state-of-the-art result on the LibrispeechMix dataset. |
2003.03486 | Rodrigo de Lamare | A. Flores, R. C. de Lamare and B. Clerckx | Study of Linear Precoding and Stream Combining for Rate Splitting in
MU-MIMO Systems | 5 figures, 5 pages | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper develops stream combining techniques for rate-splitting (RS)
multiple-antenna systems with multiple users to enhance the common rate. We
propose linear combining techniques based on the Min-Max, the maximum ratio and
the minimum mean-square error criteria along with Regularized Block
Diagonalization (RBD) precoders for RS-based multiuser multiple-antenna
systems. An analysis of the sum rate performance is carried out, leading to
closed-form expressions. Simulations show that the proposed combining schemes
offer a significant sum rate performance gain over conventional linear
precoding schemes.
| [
{
"created": "Sat, 7 Mar 2020 02:07:25 GMT",
"version": "v1"
}
] | 2020-03-10 | [
[
"Flores",
"A.",
""
],
[
"de Lamare",
"R. C.",
""
],
[
"Clerckx",
"B.",
""
]
] | This paper develops stream combining techniques for rate-splitting (RS) multiple-antenna systems with multiple users to enhance the common rate. We propose linear combining techniques based on the Min-Max, the maximum ratio and the minimum mean-square error criteria along with Regularized Block Diagonalization (RBD) precoders for RS-based multiuser multiple-antenna systems. An analysis of the sum rate performance is carried out, leading to closed-form expressions. Simulations show that the proposed combining schemes offer a significant sum rate performance gain over conventional linear precoding schemes. |
2305.11003 | Chunming He | Chunming He and Kai Li and Yachao Zhang and Guoxia Xu and Longxiang
Tang and Yulun Zhang and Zhenhua Guo and Xiu Li | Weakly-Supervised Concealed Object Segmentation with SAM-based Pseudo
Labeling and Multi-scale Feature Grouping | 12 pages, 5 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Weakly-Supervised Concealed Object Segmentation (WSCOS) aims to segment
objects well blended with surrounding environments using sparsely-annotated
data for model training. It remains a challenging task since (1) it is hard to
distinguish concealed objects from the background due to the intrinsic
similarity and (2) the sparsely-annotated training data only provide weak
supervision for model learning. In this paper, we propose a new WSCOS method to
address these two challenges. To tackle the intrinsic similarity challenge, we
design a multi-scale feature grouping module that first groups features at
different granularities and then aggregates these grouping results. By grouping
similar features together, it encourages segmentation coherence, helping obtain
complete segmentation results for both single and multiple-object images. For
the weak supervision challenge, we utilize the recently-proposed vision
foundation model, Segment Anything Model (SAM), and use the provided sparse
annotations as prompts to generate segmentation masks, which are used to train
the model. To alleviate the impact of low-quality segmentation masks, we
further propose a series of strategies, including multi-augmentation result
ensemble, entropy-based pixel-level weighting, and entropy-based image-level
selection. These strategies help provide more reliable supervision to train the
segmentation model. We verify the effectiveness of our method on various WSCOS
tasks, and experiments demonstrate that our method achieves state-of-the-art
performance on these tasks.
| [
{
"created": "Thu, 18 May 2023 14:31:34 GMT",
"version": "v1"
}
] | 2023-05-19 | [
[
"He",
"Chunming",
""
],
[
"Li",
"Kai",
""
],
[
"Zhang",
"Yachao",
""
],
[
"Xu",
"Guoxia",
""
],
[
"Tang",
"Longxiang",
""
],
[
"Zhang",
"Yulun",
""
],
[
"Guo",
"Zhenhua",
""
],
[
"Li",
"Xiu",
""
]
] | Weakly-Supervised Concealed Object Segmentation (WSCOS) aims to segment objects well blended with surrounding environments using sparsely-annotated data for model training. It remains a challenging task since (1) it is hard to distinguish concealed objects from the background due to the intrinsic similarity and (2) the sparsely-annotated training data only provide weak supervision for model learning. In this paper, we propose a new WSCOS method to address these two challenges. To tackle the intrinsic similarity challenge, we design a multi-scale feature grouping module that first groups features at different granularities and then aggregates these grouping results. By grouping similar features together, it encourages segmentation coherence, helping obtain complete segmentation results for both single and multiple-object images. For the weak supervision challenge, we utilize the recently-proposed vision foundation model, Segment Anything Model (SAM), and use the provided sparse annotations as prompts to generate segmentation masks, which are used to train the model. To alleviate the impact of low-quality segmentation masks, we further propose a series of strategies, including multi-augmentation result ensemble, entropy-based pixel-level weighting, and entropy-based image-level selection. These strategies help provide more reliable supervision to train the segmentation model. We verify the effectiveness of our method on various WSCOS tasks, and experiments demonstrate that our method achieves state-of-the-art performance on these tasks. |
2005.11264 | Konstantina Bereta | Konstantina Bereta, George Papadakis, Manolis Koubarakis | OBDA for the Web: Creating Virtual RDF Graphs On Top of Web Data Sources | 12 pages, 6 figures | null | null | null | cs.DB | http://creativecommons.org/licenses/by/4.0/ | Due to Variety, Web data come in many different structures and formats, with
HTML tables and REST APIs (e.g., social media APIs) being among the most
popular ones. A big subset of Web data is also characterised by Velocity, as
data gets frequently updated so that consumers can obtain the most up-to-date
version of the respective datasets. At the moment, though, these data sources
are not effectively supported by Semantic Web tools. To address variety and
velocity, we propose Ontop4theWeb, a system that maps Web data of various
formats into virtual RDF triples, thus allowing for querying them on-the-fly
without materializing them as RDF. We demonstrate how Ontop4theWeb can use
SPARQL to uniformly query popular, but heterogeneous Web data sources, like
HTML tables and Web APIs. We showcase our approach in a number of use cases,
such as Twitter, Foursquare, Yelp and HTML tables. We carried out a thorough
experimental evaluation which verifies the high efficiency of our framework,
which goes beyond the current state-of-the-art in this area, in terms of both
functionality and performance.
| [
{
"created": "Fri, 22 May 2020 16:29:25 GMT",
"version": "v1"
}
] | 2020-05-25 | [
[
"Bereta",
"Konstantina",
""
],
[
"Papadakis",
"George",
""
],
[
"Koubarakis",
"Manolis",
""
]
] | Due to Variety, Web data come in many different structures and formats, with HTML tables and REST APIs (e.g., social media APIs) being among the most popular ones. A big subset of Web data is also characterised by Velocity, as data gets frequently updated so that consumers can obtain the most up-to-date version of the respective datasets. At the moment, though, these data sources are not effectively supported by Semantic Web tools. To address variety and velocity, we propose Ontop4theWeb, a system that maps Web data of various formats into virtual RDF triples, thus allowing for querying them on-the-fly without materializing them as RDF. We demonstrate how Ontop4theWeb can use SPARQL to uniformly query popular, but heterogeneous Web data sources, like HTML tables and Web APIs. We showcase our approach in a number of use cases, such as Twitter, Foursquare, Yelp and HTML tables. We carried out a thorough experimental evaluation which verifies the high efficiency of our framework, which goes beyond the current state-of-the-art in this area, in terms of both functionality and performance. |
1104.5069 | Tuan Nguyen | Tuan Nguyen and Subbarao Kambhampati and Minh Do | Synthesizing Robust Plans under Incomplete Domain Models | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most current planners assume complete domain models and focus on generating
correct plans. Unfortunately, domain modeling is a laborious and error-prone
task. While domain experts cannot guarantee completeness, often they are able
to circumscribe the incompleteness of the model by providing annotations as to
which parts of the domain model may be incomplete. In such cases, the goal
should be to generate plans that are robust with respect to any known
incompleteness of the domain. In this paper, we first introduce annotations
expressing the knowledge of the domain incompleteness, and formalize the notion
of plan robustness with respect to an incomplete domain model. We then propose
an approach to compiling the problem of finding robust plans to the conformant
probabilistic planning problem. We present experimental results with
Probabilistic-FF, a state-of-the-art planner, showing the promise of our
approach.
| [
{
"created": "Wed, 27 Apr 2011 04:05:19 GMT",
"version": "v1"
}
] | 2011-04-28 | [
[
"Nguyen",
"Tuan",
""
],
[
"Kambhampati",
"Subbarao",
""
],
[
"Do",
"Minh",
""
]
] | Most current planners assume complete domain models and focus on generating correct plans. Unfortunately, domain modeling is a laborious and error-prone task. While domain experts cannot guarantee completeness, often they are able to circumscribe the incompleteness of the model by providing annotations as to which parts of the domain model may be incomplete. In such cases, the goal should be to generate plans that are robust with respect to any known incompleteness of the domain. In this paper, we first introduce annotations expressing the knowledge of the domain incompleteness, and formalize the notion of plan robustness with respect to an incomplete domain model. We then propose an approach to compiling the problem of finding robust plans to the conformant probabilistic planning problem. We present experimental results with Probabilistic-FF, a state-of-the-art planner, showing the promise of our approach. |
2105.07366 | Stephen MacDonell | Solomon Mensah, Jacky Keung, Stephen G. MacDonell, Michael Franklin
Bosu, and Kwabena Ebo Bennin | Investigating the Significance of the Bellwether Effect to Improve
Software Effort Prediction: Further Empirical Study | Journal paper, 23 pages, 8 figures, 9 tables | IEEE Transactions on Reliability 67(3)(2018), pp.1176-1198 | 10.1109/TR.2018.2839718 | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Context: In addressing how best to estimate how much effort is required to
develop software, a recent study found that using exemplary and recently
completed projects [forming Bellwether moving windows (BMW)] in software effort
prediction (SEP) models leads to relatively improved accuracy. More studies
need to be conducted to determine whether the BMW yields improved accuracy in
general, since different sizing and aging parameters of the BMW are known to
affect accuracy. Objective: To investigate the existence of exemplary projects
(Bellwethers) with defined window size and age parameters, and whether their
use in SEP improves prediction accuracy. Method: We empirically investigate the
moving window assumption based on the theory that the prediction outcome of a
future event depends on the outcomes of prior events. Sampling of Bellwethers
was undertaken using three introduced Bellwether methods (SSPM, SysSam, and
RandSam). The ergodic Markov chain was used to determine the stationarity of
the Bellwethers. Results: Empirical results show that 1) Bellwethers exist in
SEP and 2) the BMW has an approximate size of 50 to 80 exemplary projects that
should not be more than 2 years old relative to the new projects to be
estimated. Conclusion: The study's results add further weight to the
recommended use of Bellwethers for improved prediction accuracy in SEP.
| [
{
"created": "Sun, 16 May 2021 06:15:30 GMT",
"version": "v1"
}
] | 2021-05-18 | [
[
"Mensah",
"Solomon",
""
],
[
"Keung",
"Jacky",
""
],
[
"MacDonell",
"Stephen G.",
""
],
[
"Bosu",
"Michael Franklin",
""
],
[
"Bennin",
"Kwabena Ebo",
""
]
] | Context: In addressing how best to estimate how much effort is required to develop software, a recent study found that using exemplary and recently completed projects [forming Bellwether moving windows (BMW)] in software effort prediction (SEP) models leads to relatively improved accuracy. More studies need to be conducted to determine whether the BMW yields improved accuracy in general, since different sizing and aging parameters of the BMW are known to affect accuracy. Objective: To investigate the existence of exemplary projects (Bellwethers) with defined window size and age parameters, and whether their use in SEP improves prediction accuracy. Method: We empirically investigate the moving window assumption based on the theory that the prediction outcome of a future event depends on the outcomes of prior events. Sampling of Bellwethers was undertaken using three introduced Bellwether methods (SSPM, SysSam, and RandSam). The ergodic Markov chain was used to determine the stationarity of the Bellwethers. Results: Empirical results show that 1) Bellwethers exist in SEP and 2) the BMW has an approximate size of 50 to 80 exemplary projects that should not be more than 2 years old relative to the new projects to be estimated. Conclusion: The study's results add further weight to the recommended use of Bellwethers for improved prediction accuracy in SEP. |
2111.15020 | Wensheng Gan | Gengsen Huang, Wensheng Gan, Jian Weng, and Philip S. Yu | US-Rule: Discovering Utility-driven Sequential Rules | Preprint. 3 figures, 9 tables | null | null | null | cs.DB cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Utility-driven mining is an important task in data science and has many
applications in real life. High utility sequential pattern mining (HUSPM) is
one kind of utility-driven mining. HUSPM aims to discover all sequential
patterns with high utility. However, the existing algorithms of HUSPM can not
provide an accurate probability to deal with some scenarios for prediction or
recommendation. High-utility sequential rule mining (HUSRM) was proposed to
discover all sequential rules with high utility and high confidence. Only one
algorithm has been proposed for HUSRM, and it is not efficient enough. In this
paper, we propose a faster algorithm, called US-Rule, to efficiently mine
high-utility sequential rules. It utilizes a rule estimated utility co-occurrence
pruning strategy (REUCP) to avoid meaningless computation. To improve the
efficiency on dense and long sequence datasets, four tighter upper bounds
(LEEU, REEU, LERSU, RERSU) and their corresponding pruning strategies (LEEUP,
REEUP, LERSUP, RERSUP) are proposed. Besides, US-Rule proposes rule estimated
utility recomputing pruning strategy (REURP) to deal with sparse datasets.
Finally, extensive experiments on different datasets show that US-Rule achieves
better performance than the state-of-the-art algorithm in terms of execution
time, memory consumption, and scalability.
| [
{
"created": "Mon, 29 Nov 2021 23:38:28 GMT",
"version": "v1"
}
] | 2021-12-01 | [
[
"Huang",
"Gengsen",
""
],
[
"Gan",
"Wensheng",
""
],
[
"Weng",
"Jian",
""
],
[
"Yu",
"Philip S.",
""
]
] | Utility-driven mining is an important task in data science and has many applications in real life. High utility sequential pattern mining (HUSPM) is one kind of utility-driven mining. HUSPM aims to discover all sequential patterns with high utility. However, the existing algorithms of HUSPM cannot provide an accurate probability to deal with some scenarios for prediction or recommendation. High-utility sequential rule mining (HUSRM) was proposed to discover all sequential rules with high utility and high confidence. There is only one algorithm proposed for HUSRM, which is not efficient enough. In this paper, we propose a faster algorithm, called US-Rule, to efficiently mine high-utility sequential rules. It utilizes rule estimated utility co-occurrence pruning strategy (REUCP) to avoid meaningless computation. To improve the efficiency on dense and long sequence datasets, four tighter upper bounds (LEEU, REEU, LERSU, RERSU) and their corresponding pruning strategies (LEEUP, REEUP, LERSUP, RERSUP) are proposed. Besides, US-Rule proposes rule estimated utility recomputing pruning strategy (REURP) to deal with sparse datasets. Finally, extensive experiments on different datasets show that US-Rule achieves better performance than the state-of-the-art algorithm in terms of execution time, memory consumption, and scalability. |
2004.07886 | Uthaipon Tantipongpipat | Vivek Madan, Aleksandar Nikolov, Mohit Singh, Uthaipon Tantipongpipat | Maximizing Determinants under Matroid Constraints | null | null | null | null | cs.DS cs.DM math.CO math.OC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given vectors $v_1,\dots,v_n\in\mathbb{R}^d$ and a matroid $M=([n],I)$, we
study the problem of finding a basis $S$ of $M$ such that $\det(\sum_{i \in
S}v_i v_i^\top)$ is maximized. This problem appears in a diverse set of areas
such as experimental design, fair allocation of goods, network design, and
machine learning. The current best results include an $e^{2k}$-estimation for
any matroid of rank $k$ and a $(1+\epsilon)^d$-approximation for a uniform
matroid of rank $k\ge d+\frac d\epsilon$, where the rank $k\ge d$ denotes the
desired size of the optimal set. Our main result is a new approximation
algorithm with an approximation guarantee that depends only on the dimension
$d$ of the vectors and not on the size $k$ of the output set. In particular, we
show an $(O(d))^{d}$-estimation and an $(O(d))^{d^3}$-approximation for any
matroid, giving a significant improvement over prior work when $k\gg d$.
Our result relies on the existence of an optimal solution to a convex
programming relaxation for the problem which has sparse support; in particular,
no more than $O(d^2)$ variables of the solution have fractional values. The
sparsity results rely on the interplay between the first-order optimality
conditions for the convex program and matroid theory. We believe that the
techniques introduced to show sparsity of optimal solutions to convex programs
will be of independent interest. We also give a randomized algorithm that
rounds a sparse fractional solution to a feasible integral solution to the
original problem. To show the approximation guarantee, we utilize recent works
on strongly log-concave polynomials and show new relationships between
different convex programs studied for the problem. Finally, we use the
estimation algorithm and sparsity results to give an efficient deterministic
approximation algorithm with an approximation guarantee that depends solely on
the dimension $d$.
| [
{
"created": "Thu, 16 Apr 2020 19:16:38 GMT",
"version": "v1"
}
] | 2020-04-20 | [
[
"Madan",
"Vivek",
""
],
[
"Nikolov",
"Aleksandar",
""
],
[
"Singh",
"Mohit",
""
],
[
"Tantipongpipat",
"Uthaipon",
""
]
] | Given vectors $v_1,\dots,v_n\in\mathbb{R}^d$ and a matroid $M=([n],I)$, we study the problem of finding a basis $S$ of $M$ such that $\det(\sum_{i \in S}v_i v_i^\top)$ is maximized. This problem appears in a diverse set of areas such as experimental design, fair allocation of goods, network design, and machine learning. The current best results include an $e^{2k}$-estimation for any matroid of rank $k$ and a $(1+\epsilon)^d$-approximation for a uniform matroid of rank $k\ge d+\frac d\epsilon$, where the rank $k\ge d$ denotes the desired size of the optimal set. Our main result is a new approximation algorithm with an approximation guarantee that depends only on the dimension $d$ of the vectors and not on the size $k$ of the output set. In particular, we show an $(O(d))^{d}$-estimation and an $(O(d))^{d^3}$-approximation for any matroid, giving a significant improvement over prior work when $k\gg d$. Our result relies on the existence of an optimal solution to a convex programming relaxation for the problem which has sparse support; in particular, no more than $O(d^2)$ variables of the solution have fractional values. The sparsity results rely on the interplay between the first-order optimality conditions for the convex program and matroid theory. We believe that the techniques introduced to show sparsity of optimal solutions to convex programs will be of independent interest. We also give a randomized algorithm that rounds a sparse fractional solution to a feasible integral solution to the original problem. To show the approximation guarantee, we utilize recent works on strongly log-concave polynomials and show new relationships between different convex programs studied for the problem. Finally, we use the estimation algorithm and sparsity results to give an efficient deterministic approximation algorithm with an approximation guarantee that depends solely on the dimension $d$. |
2104.05215 | Xiangde Luo | Xiangde Luo, Tao Song, Guotai Wang, Jieneng Chen, Yinan Chen, Kang Li,
Dimitris N. Metaxas and Shaoting Zhang | SCPM-Net: An Anchor-free 3D Lung Nodule Detection Network using Sphere
Representation and Center Points Matching | accept to Medical Image Analysis | null | 10.1016/j.media.2021.102287 | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Lung nodule detection from 3D Computed Tomography scans plays a vital role in
efficient lung cancer screening. Despite the SOTA performance obtained by
recent anchor-based detectors using CNNs for this task, they require
predetermined anchor parameters such as the size, number, and aspect ratio of
anchors, and have limited robustness when dealing with lung nodules with a
massive variety of sizes. To overcome these problems, we propose a 3D sphere
representation-based center-points matching detection network that is
anchor-free and automatically predicts the position, radius, and offset of
nodules without the manual design of nodule/anchor parameters. The SCPM-Net
consists of two novel components: sphere representation and center points
matching. First, to match the nodule annotation in clinical practice, we
replace the commonly used bounding box with our proposed bounding sphere to
represent nodules with the centroid, radius, and local offset in 3D space. A
compatible sphere-based intersection over-union loss function is introduced to
train the lung nodule detection network stably and efficiently. Second, we
empower the network anchor-free by designing a positive center-points selection
and matching process, which naturally discards pre-determined anchor boxes. An
online hard example mining and re-focal loss subsequently enable the CPM
process to be more robust, resulting in more accurate point assignment and
mitigation of class imbalance. In addition, to better capture spatial
information and 3D context for the detection, we propose to fuse multi-level
spatial coordinate maps with the feature extractor and combine them with 3D
squeeze-and-excitation attention modules. Experimental results on the LUNA16
dataset showed that our proposed framework achieves superior performance
compared with existing anchor-based and anchor-free methods for lung nodule
detection.
| [
{
"created": "Mon, 12 Apr 2021 05:51:29 GMT",
"version": "v1"
},
{
"created": "Thu, 6 Jan 2022 09:13:15 GMT",
"version": "v2"
}
] | 2022-01-07 | [
[
"Luo",
"Xiangde",
""
],
[
"Song",
"Tao",
""
],
[
"Wang",
"Guotai",
""
],
[
"Chen",
"Jieneng",
""
],
[
"Chen",
"Yinan",
""
],
[
"Li",
"Kang",
""
],
[
"Metaxas",
"Dimitris N.",
""
],
[
"Zhang",
"Shaoting",
""
]
] | Lung nodule detection from 3D Computed Tomography scans plays a vital role in efficient lung cancer screening. Despite the SOTA performance obtained by recent anchor-based detectors using CNNs for this task, they require predetermined anchor parameters such as the size, number, and aspect ratio of anchors, and have limited robustness when dealing with lung nodules with a massive variety of sizes. To overcome these problems, we propose a 3D sphere representation-based center-points matching detection network that is anchor-free and automatically predicts the position, radius, and offset of nodules without the manual design of nodule/anchor parameters. The SCPM-Net consists of two novel components: sphere representation and center points matching. First, to match the nodule annotation in clinical practice, we replace the commonly used bounding box with our proposed bounding sphere to represent nodules with the centroid, radius, and local offset in 3D space. A compatible sphere-based intersection over-union loss function is introduced to train the lung nodule detection network stably and efficiently. Second, we empower the network anchor-free by designing a positive center-points selection and matching process, which naturally discards pre-determined anchor boxes. An online hard example mining and re-focal loss subsequently enable the CPM process to be more robust, resulting in more accurate point assignment and mitigation of class imbalance. In addition, to better capture spatial information and 3D context for the detection, we propose to fuse multi-level spatial coordinate maps with the feature extractor and combine them with 3D squeeze-and-excitation attention modules. Experimental results on the LUNA16 dataset showed that our proposed framework achieves superior performance compared with existing anchor-based and anchor-free methods for lung nodule detection. |
1011.5606 | Dan-Cristian Tomozei | Jean-Yves Le Boudec, Dan-Cristian Tomozei | Stability of a Stochastic Model for Demand-Response | Published in Stochastic Systems journal | null | 10.1214/11-SSY048 | null | cs.SY math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the stability of a Markovian model of electricity production and
consumption that incorporates production volatility due to renewables and
uncertainty about actual demand versus planned production. We assume that the
energy producer targets a fixed energy reserve, subject to ramp-up and
ramp-down constraints, and that appliances are subject to demand-response
signals and adjust their consumption to the available production by delaying
their demand. When a constant fraction of the delayed demand vanishes over
time, we show that the general state Markov chain characterizing the system is
positive Harris and ergodic (i.e., delayed demand is bounded with high
probability). However, when delayed demand increases by a constant fraction
over time, we show that the Markov chain is non-positive (i.e., there exists a
non-zero probability that delayed demand becomes unbounded). We exhibit
Lyapunov functions to prove our claims. In addition, we provide examples of
heating appliances that, when delayed, have energy requirements corresponding
to the two considered cases.
| [
{
"created": "Thu, 25 Nov 2010 12:18:55 GMT",
"version": "v1"
},
{
"created": "Mon, 25 Jul 2011 16:51:52 GMT",
"version": "v2"
},
{
"created": "Mon, 22 Apr 2013 09:47:19 GMT",
"version": "v3"
}
] | 2013-04-23 | [
[
"Boudec",
"Jean-Yves Le",
""
],
[
"Tomozei",
"Dan-Cristian",
""
]
] | We study the stability of a Markovian model of electricity production and consumption that incorporates production volatility due to renewables and uncertainty about actual demand versus planned production. We assume that the energy producer targets a fixed energy reserve, subject to ramp-up and ramp-down constraints, and that appliances are subject to demand-response signals and adjust their consumption to the available production by delaying their demand. When a constant fraction of the delayed demand vanishes over time, we show that the general state Markov chain characterizing the system is positive Harris and ergodic (i.e., delayed demand is bounded with high probability). However, when delayed demand increases by a constant fraction over time, we show that the Markov chain is non-positive (i.e., there exists a non-zero probability that delayed demand becomes unbounded). We exhibit Lyapunov functions to prove our claims. In addition, we provide examples of heating appliances that, when delayed, have energy requirements corresponding to the two considered cases. |
2011.04723 | Yen-Yu Chang | Yen-Yu Chang, Pan Li, Rok Sosic, M. H. Afifi, Marco Schweighauser,
Jure Leskovec | F-FADE: Frequency Factorization for Anomaly Detection in Edge Streams | WSDM 2021 | null | null | null | cs.SI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Edge streams are commonly used to capture interactions in dynamic networks,
such as email, social, or computer networks. The problem of detecting anomalies
or rare events in edge streams has a wide range of applications. However, it
presents many challenges due to lack of labels, a highly dynamic nature of
interactions, and the entanglement of temporal and structural changes in the
network. Current methods are limited in their ability to address the above
challenges and to efficiently process a large number of interactions. Here, we
propose F-FADE, a new approach for detection of anomalies in edge streams,
which uses a novel frequency-factorization technique to efficiently model the
time-evolving distributions of frequencies of interactions between node-pairs.
The anomalies are then determined based on the likelihood of the observed
frequency of each incoming interaction. F-FADE is able to handle in an online
streaming setting a broad variety of anomalies with temporal and structural
changes, while requiring only constant memory. Our experiments on one synthetic
and six real-world dynamic networks show that F-FADE achieves state of the art
performance and may detect anomalies that previous methods are unable to find.
| [
{
"created": "Mon, 9 Nov 2020 19:55:40 GMT",
"version": "v1"
},
{
"created": "Fri, 5 Feb 2021 13:11:09 GMT",
"version": "v2"
}
] | 2021-02-08 | [
[
"Chang",
"Yen-Yu",
""
],
[
"Li",
"Pan",
""
],
[
"Sosic",
"Rok",
""
],
[
"Afifi",
"M. H.",
""
],
[
"Schweighauser",
"Marco",
""
],
[
"Leskovec",
"Jure",
""
]
] | Edge streams are commonly used to capture interactions in dynamic networks, such as email, social, or computer networks. The problem of detecting anomalies or rare events in edge streams has a wide range of applications. However, it presents many challenges due to lack of labels, a highly dynamic nature of interactions, and the entanglement of temporal and structural changes in the network. Current methods are limited in their ability to address the above challenges and to efficiently process a large number of interactions. Here, we propose F-FADE, a new approach for detection of anomalies in edge streams, which uses a novel frequency-factorization technique to efficiently model the time-evolving distributions of frequencies of interactions between node-pairs. The anomalies are then determined based on the likelihood of the observed frequency of each incoming interaction. F-FADE is able to handle in an online streaming setting a broad variety of anomalies with temporal and structural changes, while requiring only constant memory. Our experiments on one synthetic and six real-world dynamic networks show that F-FADE achieves state of the art performance and may detect anomalies that previous methods are unable to find. |
1904.03735 | Mahmudur Rahman Khan | Mahmudur Khan and Jacob Chakareski | Visible Light Communication for Next Generation Untethered Virtual
Reality Systems | Accepted at IEEE International Conference on Communications Workshop
on Optical Wireless Communications, Shanghai, China, May 2019 | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Virtual and augmented reality (VR/AR) systems are emerging technologies
requiring data rates of multiple Gbps. Existing high quality VR headsets
require connections through HDMI cables to a computer rendering rich graphic
contents to meet the extremely high data transfer rate requirement. Such a
cable connection limits the VR user's mobility and interferes with the VR
experience. Current wireless technologies such as WiFi cannot support the
multi-Gbps graphics data transfer. Instead, we propose to use visible light
communication (VLC) for establishing high speed wireless links between a
rendering computer and a VR headset. But, VLC transceivers are highly
directional with narrow beams and require constant maintenance of line-of-sight
(LOS) alignment between the transmitter and the receiver. Thus, we present a
novel multi-detector hemispherical VR headset design to tackle the beam
misalignment problem caused by the VR user's random head orientation. We
provide detailed analysis on how the number of detectors on the headset can be
minimized while maintaining the required beam alignment and providing high
quality VR experience.
| [
{
"created": "Sun, 7 Apr 2019 20:24:55 GMT",
"version": "v1"
}
] | 2019-04-09 | [
[
"Khan",
"Mahmudur",
""
],
[
"Chakareski",
"Jacob",
""
]
] | Virtual and augmented reality (VR/AR) systems are emerging technologies requiring data rates of multiple Gbps. Existing high quality VR headsets require connections through HDMI cables to a computer rendering rich graphic contents to meet the extremely high data transfer rate requirement. Such a cable connection limits the VR user's mobility and interferes with the VR experience. Current wireless technologies such as WiFi cannot support the multi-Gbps graphics data transfer. Instead, we propose to use visible light communication (VLC) for establishing high speed wireless links between a rendering computer and a VR headset. But, VLC transceivers are highly directional with narrow beams and require constant maintenance of line-of-sight (LOS) alignment between the transmitter and the receiver. Thus, we present a novel multi-detector hemispherical VR headset design to tackle the beam misalignment problem caused by the VR user's random head orientation. We provide detailed analysis on how the number of detectors on the headset can be minimized while maintaining the required beam alignment and providing high quality VR experience. |
2109.05261 | Shengyu Zhang | Shengyu Zhang, Dong Yao, Zhou Zhao, Tat-seng Chua, Fei Wu | CauseRec: Counterfactual User Sequence Synthesis for Sequential
Recommendation | Proceedings of the 44th International ACM SIGIR Conference on
Research and Development in Information Retrieval (SIGIR 2021) | null | 10.1145/3404835.3462908 | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning user representations based on historical behaviors lies at the core
of modern recommender systems. Recent advances in sequential recommenders have
convincingly demonstrated high capability in extracting effective user
representations from the given behavior sequences. Despite significant
progress, we argue that solely modeling the observational behaviors sequences
may end up with a brittle and unstable system due to the noisy and sparse
nature of user interactions logged. In this paper, we propose to learn accurate
and robust user representations, which are required to be less sensitive to
(attack on) noisy behaviors and trust more on the indispensable ones, by
modeling counterfactual data distribution. Specifically, given an observed
behavior sequence, the proposed CauseRec framework identifies dispensable and
indispensable concepts at both the fine-grained item level and the abstract
interest level. CauseRec conditionally samples user concept sequences from the
counterfactual data distributions by replacing dispensable and indispensable
concepts within the original concept sequence. With user representations
obtained from the synthesized user sequences, CauseRec performs contrastive
user representation learning by contrasting the counterfactual with the
observational. We conduct extensive experiments on real-world public
recommendation benchmarks and justify the effectiveness of CauseRec with
multi-aspects model analysis. The results demonstrate that the proposed
CauseRec outperforms state-of-the-art sequential recommenders by learning
accurate and robust user representations.
| [
{
"created": "Sat, 11 Sep 2021 11:41:07 GMT",
"version": "v1"
}
] | 2021-09-14 | [
[
"Zhang",
"Shengyu",
""
],
[
"Yao",
"Dong",
""
],
[
"Zhao",
"Zhou",
""
],
[
"Chua",
"Tat-seng",
""
],
[
"Wu",
"Fei",
""
]
] | Learning user representations based on historical behaviors lies at the core of modern recommender systems. Recent advances in sequential recommenders have convincingly demonstrated high capability in extracting effective user representations from the given behavior sequences. Despite significant progress, we argue that solely modeling the observational behaviors sequences may end up with a brittle and unstable system due to the noisy and sparse nature of user interactions logged. In this paper, we propose to learn accurate and robust user representations, which are required to be less sensitive to (attack on) noisy behaviors and trust more on the indispensable ones, by modeling counterfactual data distribution. Specifically, given an observed behavior sequence, the proposed CauseRec framework identifies dispensable and indispensable concepts at both the fine-grained item level and the abstract interest level. CauseRec conditionally samples user concept sequences from the counterfactual data distributions by replacing dispensable and indispensable concepts within the original concept sequence. With user representations obtained from the synthesized user sequences, CauseRec performs contrastive user representation learning by contrasting the counterfactual with the observational. We conduct extensive experiments on real-world public recommendation benchmarks and justify the effectiveness of CauseRec with multi-aspects model analysis. The results demonstrate that the proposed CauseRec outperforms state-of-the-art sequential recommenders by learning accurate and robust user representations. |
1711.02679 | Volodymyr Kuleshov | Volodymyr Kuleshov, Stefano Ermon | Neural Variational Inference and Learning in Undirected Graphical Models | Appearing in Proceedings of the 31st Conference on Neural Information
Processing Systems (NIPS) 2017, Long Beach, CA, USA | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many problems in machine learning are naturally expressed in the language of
undirected graphical models. Here, we propose black-box learning and inference
algorithms for undirected models that optimize a variational approximation to
the log-likelihood of the model. Central to our approach is an upper bound on
the log-partition function parametrized by a function q that we express as a
flexible neural network. Our bound makes it possible to track the partition
function during learning, to speed-up sampling, and to train a broad class of
hybrid directed/undirected models via a unified variational inference
framework. We empirically demonstrate the effectiveness of our method on
several popular generative modeling datasets.
| [
{
"created": "Tue, 7 Nov 2017 19:00:20 GMT",
"version": "v1"
},
{
"created": "Thu, 16 Nov 2017 21:33:11 GMT",
"version": "v2"
}
] | 2017-11-20 | [
[
"Kuleshov",
"Volodymyr",
""
],
[
"Ermon",
"Stefano",
""
]
] | Many problems in machine learning are naturally expressed in the language of undirected graphical models. Here, we propose black-box learning and inference algorithms for undirected models that optimize a variational approximation to the log-likelihood of the model. Central to our approach is an upper bound on the log-partition function parametrized by a function q that we express as a flexible neural network. Our bound makes it possible to track the partition function during learning, to speed-up sampling, and to train a broad class of hybrid directed/undirected models via a unified variational inference framework. We empirically demonstrate the effectiveness of our method on several popular generative modeling datasets. |
2003.00336 | Shrinu Kushagra | Shrinu Kushagra | Three-dimensional matching is NP-Hard | null | null | null | null | cs.CC | http://creativecommons.org/publicdomain/zero/1.0/ | The standard proof of NP-Hardness of 3DM provides a power-$4$ reduction of
3SAT to 3DM. In this note, we provide a linear-time reduction. Under the
exponential time hypothesis, this reduction improves the runtime lower bound
from $2^{o(\sqrt[4]{m})}$ (under the standard reduction) to $2^{o(m)}$.
| [
{
"created": "Sat, 29 Feb 2020 19:51:55 GMT",
"version": "v1"
}
] | 2020-03-03 | [
[
"Kushagra",
"Shrinu",
""
]
] | The standard proof of NP-Hardness of 3DM provides a power-$4$ reduction of 3SAT to 3DM. In this note, we provide a linear-time reduction. Under the exponential time hypothesis, this reduction improves the runtime lower bound from $2^{o(\sqrt[4]{m})}$ (under the standard reduction) to $2^{o(m)}$. |
cs/0205052 | Laemmel | Vasu Alagar and Ralf Laemmel | Three-Tiered Specification of Micro-Architectures | null | null | null | null | cs.SE cs.PL | null | A three-tiered specification approach is developed to formally specify
collections of collaborating objects, say micro-architectures. (i) The
structural properties to be maintained in the collaboration are specified in
the lowest tier. (ii) The behaviour of the object methods in the classes is
specified in the middle tier. (iii) The interaction of the objects in the
micro-architecture is specified in the third tier. The specification approach
is based on Larch and accompanying notations and tools. The approach enables
the unambiguous and complete specification of reusable collections of
collaborating objects. The layered, formal approach is compared to other
approaches including the mainstream UML approach.
| [
{
"created": "Sun, 19 May 2002 14:46:34 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Alagar",
"Vasu",
""
],
[
"Laemmel",
"Ralf",
""
]
] | A three-tiered specification approach is developed to formally specify collections of collaborating objects, say micro-architectures. (i) The structural properties to be maintained in the collaboration are specified in the lowest tier. (ii) The behaviour of the object methods in the classes is specified in the middle tier. (iii) The interaction of the objects in the micro-architecture is specified in the third tier. The specification approach is based on Larch and accompanying notations and tools. The approach enables the unambiguous and complete specification of reusable collections of collaborating objects. The layered, formal approach is compared to other approaches including the mainstream UML approach. |
2204.14117 | Yusuke Ohtsubo | Yusuke Ohtsubo, Takuto Sato, Hirohiko Sagawa | A Comparative Study of Meter Detection Methods for Automated
Infrastructure Inspection | 2 pages, in Japanese language | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | In order to read meter values from a camera on an autonomous inspection robot
with positional errors, it is necessary to detect meter regions from the image.
In this study, we developed shape-based, texture-based, and background
information-based methods as meter area detection techniques and compared their
effectiveness for meters of different shapes and sizes. As a result, we
confirmed that the background information-based method can detect the farthest
meters regardless of the shape and number of meters, and can stably detect
meters with a diameter of 40px.
| [
{
"created": "Sun, 24 Apr 2022 13:59:57 GMT",
"version": "v1"
}
] | 2022-05-02 | [
[
"Ohtsubo",
"Yusuke",
""
],
[
"Sato",
"Takuto",
""
],
[
"Sagawa",
"Hirohiko",
""
]
] | In order to read meter values from a camera on an autonomous inspection robot with positional errors, it is necessary to detect meter regions from the image. In this study, we developed shape-based, texture-based, and background information-based methods as meter area detection techniques and compared their effectiveness for meters of different shapes and sizes. As a result, we confirmed that the background information-based method can detect the farthest meters regardless of the shape and number of meters, and can stably detect meters with a diameter of 40px. |
1908.11341 | Sergei O. Kuznetsov | Sergei O. Kuznetsov | Ordered Sets for Data Analysis | null | null | null | null | cs.LO cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This book dwells on mathematical and algorithmic issues of data analysis
based on generality order of descriptions and respective precision. To speak of
these topics correctly, we have to go some way getting acquainted with the
important notions of relation and order theory. On the one hand, data often
have a complex structure with natural order on it. On the other hand, many
symbolic methods of data analysis and machine learning allow to compare the
obtained classifiers w.r.t. their generality, which is also an order relation.
Efficient algorithms are very important in data analysis, especially when one
deals with big data, so scalability is a real issue. That is why we analyze the
computational complexity of algorithms and problems of data analysis. We start
from the basic definitions and facts of algorithmic complexity theory and
analyze the complexity of various tools of data analysis we consider. The tools
and methods of data analysis, like computing taxonomies, groups of similar
objects (concepts and n-clusters), dependencies in data, classification, etc.,
are illustrated with applications in particular subject domains, from
chemoinformatics to text mining and natural language processing.
| [
{
"created": "Tue, 27 Aug 2019 18:01:13 GMT",
"version": "v1"
}
] | 2019-08-30 | [
[
"Kuznetsov",
"Sergei O.",
""
]
] | This book dwells on mathematical and algorithmic issues of data analysis based on generality order of descriptions and respective precision. To speak of these topics correctly, we have to go some way getting acquainted with the important notions of relation and order theory. On the one hand, data often have a complex structure with natural order on it. On the other hand, many symbolic methods of data analysis and machine learning allow to compare the obtained classifiers w.r.t. their generality, which is also an order relation. Efficient algorithms are very important in data analysis, especially when one deals with big data, so scalability is a real issue. That is why we analyze the computational complexity of algorithms and problems of data analysis. We start from the basic definitions and facts of algorithmic complexity theory and analyze the complexity of various tools of data analysis we consider. The tools and methods of data analysis, like computing taxonomies, groups of similar objects (concepts and n-clusters), dependencies in data, classification, etc., are illustrated with applications in particular subject domains, from chemoinformatics to text mining and natural language processing. |
2407.01031 | Dan Peng | Dan Peng, Zhihui Fu, Jun Wang | PocketLLM: Enabling On-Device Fine-Tuning for Personalized LLMs | Accepted to the ACL 2024 Workshop on Privacy in Natural Language
Processing (PrivateNLP) | null | null | null | cs.LG cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Recent advancements in large language models (LLMs) have indeed showcased
their impressive capabilities. On mobile devices, the wealth of valuable,
non-public data generated daily holds great promise for locally fine-tuning
personalized LLMs, while maintaining privacy through on-device processing.
However, the constraints of mobile device resources pose challenges to direct
on-device LLM fine-tuning, mainly due to the memory-intensive nature of
derivative-based optimization required for saving gradients and optimizer
states. To tackle this, we propose employing derivative-free optimization
techniques to enable on-device fine-tuning of LLM, even on memory-limited
mobile devices. Empirical results demonstrate that the RoBERTa-large model and
OPT-1.3B can be fine-tuned locally on the OPPO Reno 6 smartphone using around
4GB and 6.5GB of memory respectively, using derivative-free optimization
techniques. This highlights the feasibility of on-device LLM fine-tuning on
mobile devices, paving the way for personalized LLMs on resource-constrained
devices while safeguarding data privacy.
| [
{
"created": "Mon, 1 Jul 2024 07:26:56 GMT",
"version": "v1"
}
] | 2024-07-02 | [
[
"Peng",
"Dan",
""
],
[
"Fu",
"Zhihui",
""
],
[
"Wang",
"Jun",
""
]
] | Recent advancements in large language models (LLMs) have indeed showcased their impressive capabilities. On mobile devices, the wealth of valuable, non-public data generated daily holds great promise for locally fine-tuning personalized LLMs, while maintaining privacy through on-device processing. However, the constraints of mobile device resources pose challenges to direct on-device LLM fine-tuning, mainly due to the memory-intensive nature of derivative-based optimization required for saving gradients and optimizer states. To tackle this, we propose employing derivative-free optimization techniques to enable on-device fine-tuning of LLM, even on memory-limited mobile devices. Empirical results demonstrate that the RoBERTa-large model and OPT-1.3B can be fine-tuned locally on the OPPO Reno 6 smartphone using around 4GB and 6.5GB of memory respectively, using derivative-free optimization techniques. This highlights the feasibility of on-device LLM fine-tuning on mobile devices, paving the way for personalized LLMs on resource-constrained devices while safeguarding data privacy. |
2009.00506 | Oliver Horst | Oliver Horst and Johannes Wiesb\"ock and Raphael Wild and Uwe
Baumgarten | Quantifying the Latency and Possible Throughput of External Interrupts
on Cyber-Physical Systems | Appeared in proceedings of the 3rd Workshop on Benchmarking
Cyber-Physical Systems and Internet of Things (CPS-IoTBench) held in
conjunction with the 26th Annual International Conference on Mobile Computing
and Networking (MobiCom) | null | null | null | cs.OS | http://creativecommons.org/licenses/by-sa/4.0/ | An important characteristic of cyber-physical systems is their capability to
respond, in-time, to events from their physical environment. However, to the
best of our knowledge there exists no benchmark for assessing and comparing the
interrupt handling performance of different software stacks. Hence, we present
a flexible evaluation method for measuring the interrupt latency and throughput
on ARMv8-A based platforms. We define and validate seven test-cases that stress
individual parts of the overall process and combine them to three benchmark
functions that provoke the minimal and maximal interrupt latency, and maximal
interrupt throughput.
| [
{
"created": "Tue, 1 Sep 2020 15:08:31 GMT",
"version": "v1"
}
] | 2020-09-02 | [
[
"Horst",
"Oliver",
""
],
[
"Wiesböck",
"Johannes",
""
],
[
"Wild",
"Raphael",
""
],
[
"Baumgarten",
"Uwe",
""
]
] | An important characteristic of cyber-physical systems is their capability to respond, in-time, to events from their physical environment. However, to the best of our knowledge there exists no benchmark for assessing and comparing the interrupt handling performance of different software stacks. Hence, we present a flexible evaluation method for measuring the interrupt latency and throughput on ARMv8-A based platforms. We define and validate seven test-cases that stress individual parts of the overall process and combine them to three benchmark functions that provoke the minimal and maximal interrupt latency, and maximal interrupt throughput. |
2308.10248 | Gavin Leech | Alexander Matt Turner, Lisa Thiergart, Gavin Leech, David Udell, Juan
J. Vazquez, Ulisse Mini, Monte MacDiarmid | Activation Addition: Steering Language Models Without Optimization | null | null | null | null | cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Reliably controlling the behavior of large language models is a pressing open
problem. Existing methods include supervised finetuning, reinforcement learning
from human feedback, prompt engineering and guided decoding. We instead
investigate activation engineering: modifying activations at inference-time to
predictably alter model behavior. We bias the forward pass with a 'steering
vector' implicitly specified through natural language. Past work learned these
steering vectors; our Activation Addition (ActAdd) method instead computes them
by taking activation differences resulting from pairs of prompts. We
demonstrate ActAdd on a range of LLMs (LLaMA-3, OPT, GPT-2, and GPT-J),
obtaining SOTA on detoxification and negative-to-positive sentiment control.
Our approach yields inference-time control over high-level properties of output
like topic and sentiment while preserving performance on off-target tasks.
ActAdd takes far less compute and implementation effort than finetuning or
RLHF, allows users control through natural language, and its computational
overhead (as a fraction of inference time) appears stable or improving over
increasing model size.
| [
{
"created": "Sun, 20 Aug 2023 12:21:05 GMT",
"version": "v1"
},
{
"created": "Fri, 1 Sep 2023 17:07:29 GMT",
"version": "v2"
},
{
"created": "Mon, 13 Nov 2023 14:05:13 GMT",
"version": "v3"
},
{
"created": "Tue, 4 Jun 2024 10:08:39 GMT",
"version": "v4"
}
] | 2024-06-05 | [
[
"Turner",
"Alexander Matt",
""
],
[
"Thiergart",
"Lisa",
""
],
[
"Leech",
"Gavin",
""
],
[
"Udell",
"David",
""
],
[
"Vazquez",
"Juan J.",
""
],
[
"Mini",
"Ulisse",
""
],
[
"MacDiarmid",
"Monte",
""
]
] | Reliably controlling the behavior of large language models is a pressing open problem. Existing methods include supervised finetuning, reinforcement learning from human feedback, prompt engineering and guided decoding. We instead investigate activation engineering: modifying activations at inference-time to predictably alter model behavior. We bias the forward pass with a 'steering vector' implicitly specified through natural language. Past work learned these steering vectors; our Activation Addition (ActAdd) method instead computes them by taking activation differences resulting from pairs of prompts. We demonstrate ActAdd on a range of LLMs (LLaMA-3, OPT, GPT-2, and GPT-J), obtaining SOTA on detoxification and negative-to-positive sentiment control. Our approach yields inference-time control over high-level properties of output like topic and sentiment while preserving performance on off-target tasks. ActAdd takes far less compute and implementation effort than finetuning or RLHF, allows users control through natural language, and its computational overhead (as a fraction of inference time) appears stable or improving over increasing model size. |
1806.06396 | Qingqing Wu | Miao Cui, Guangchi Zhang, Qingqing Wu, and Derrick Wing Kwan Ng | Robust Trajectory and Transmit Power Design for Secure UAV
Communications | Accepted by IEEE Transactions on Vehicular Technology, 2018 | null | null | null | cs.IT math.DS math.IT math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Unmanned aerial vehicles (UAVs) are anticipated to be widely deployed in
future wireless communications, due to their advantages of high mobility and
easy deployment. However, the broadcast nature of air-to-ground line-of-sight
wireless channels brings a new challenge to the information security of UAV-
ground communication. This paper tackles such a challenge in the physical layer
by exploiting the mobility of UAV via its trajectory design. We consider a
UAV-ground communication system with multiple potential eavesdroppers on the
ground, where the information on the locations of the eavesdroppers is
imperfect. We formulate an optimization problem which maximizes the average
worst-case secrecy rate of the system by jointly designing the robust
trajectory and transmit power of the UAV over a given flight duration. The
non-convexity of the optimization problem and the imperfect location
information of the eavesdroppers make the problem difficult to be solved
optimally. We propose an iterative suboptimal algorithm to solve this problem
efficiently by applying the block coordinate descent method, S-procedure, and
successive convex optimization method. Simulation results show that the
proposed algorithm can improve the average worst-case secrecy rate
significantly, as compared to two other benchmark algorithms without robust
design.
| [
{
"created": "Sun, 17 Jun 2018 15:44:20 GMT",
"version": "v1"
}
] | 2018-06-19 | [
[
"Cui",
"Miao",
""
],
[
"Zhang",
"Guangchi",
""
],
[
"Wu",
"Qingqing",
""
],
[
"Ng",
"Derrick Wing Kwan",
""
]
] | Unmanned aerial vehicles (UAVs) are anticipated to be widely deployed in future wireless communications, due to their advantages of high mobility and easy deployment. However, the broadcast nature of air-to-ground line-of-sight wireless channels brings a new challenge to the information security of UAV-ground communication. This paper tackles such a challenge in the physical layer by exploiting the mobility of UAV via its trajectory design. We consider a UAV-ground communication system with multiple potential eavesdroppers on the ground, where the information on the locations of the eavesdroppers is imperfect. We formulate an optimization problem which maximizes the average worst-case secrecy rate of the system by jointly designing the robust trajectory and transmit power of the UAV over a given flight duration. The non-convexity of the optimization problem and the imperfect location information of the eavesdroppers make the problem difficult to be solved optimally. We propose an iterative suboptimal algorithm to solve this problem efficiently by applying the block coordinate descent method, S-procedure, and successive convex optimization method. Simulation results show that the proposed algorithm can improve the average worst-case secrecy rate significantly, as compared to two other benchmark algorithms without robust design. |
2303.12364 | Maurice Rupp | Maurice Rupp, Oriane Peter, Thirupathi Pattipaka | ExBEHRT: Extended Transformer for Electronic Health Records to Predict
Disease Subtypes & Progressions | ICLR 2023 Workshop on Trustworthy Machine Learning for Healthcare
(Website: https://sites.google.com/view/tml4h2023/accepted-papers ) | Lecture Notes in Computer Science, vol 13932. Springer, Cham 2023 | 10.1007/978-3-031-39539-0_7 | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this study, we introduce ExBEHRT, an extended version of BEHRT (BERT
applied to electronic health records), and apply different algorithms to
interpret its results. While BEHRT considers only diagnoses and patient age, we
extend the feature space to several multimodal records, namely demographics,
clinical characteristics, vital signs, smoking status, diagnoses, procedures,
medications, and laboratory tests, by applying a novel method to unify the
frequencies and temporal dimensions of the different features. We show that
additional features significantly improve model performance for various
downstream tasks in different diseases. To ensure robustness, we interpret
model predictions using an adaptation of expected gradients, which has not been
previously applied to transformers with EHR data and provides more granular
interpretations than previous approaches such as feature and token importances.
Furthermore, by clustering the model representations of oncology patients, we
show that the model has an implicit understanding of the disease and is able to
classify patients with the same cancer type into different risk groups. Given
the additional features and interpretability, ExBEHRT can help make informed
decisions about disease trajectories, diagnoses, and risk factors of various
diseases.
| [
{
"created": "Wed, 22 Mar 2023 08:03:27 GMT",
"version": "v1"
},
{
"created": "Tue, 11 Apr 2023 07:00:49 GMT",
"version": "v2"
},
{
"created": "Fri, 11 Aug 2023 14:59:36 GMT",
"version": "v3"
}
] | 2023-08-14 | [
[
"Rupp",
"Maurice",
""
],
[
"Peter",
"Oriane",
""
],
[
"Pattipaka",
"Thirupathi",
""
]
] | In this study, we introduce ExBEHRT, an extended version of BEHRT (BERT applied to electronic health records), and apply different algorithms to interpret its results. While BEHRT considers only diagnoses and patient age, we extend the feature space to several multimodal records, namely demographics, clinical characteristics, vital signs, smoking status, diagnoses, procedures, medications, and laboratory tests, by applying a novel method to unify the frequencies and temporal dimensions of the different features. We show that additional features significantly improve model performance for various downstream tasks in different diseases. To ensure robustness, we interpret model predictions using an adaptation of expected gradients, which has not been previously applied to transformers with EHR data and provides more granular interpretations than previous approaches such as feature and token importances. Furthermore, by clustering the model representations of oncology patients, we show that the model has an implicit understanding of the disease and is able to classify patients with the same cancer type into different risk groups. Given the additional features and interpretability, ExBEHRT can help make informed decisions about disease trajectories, diagnoses, and risk factors of various diseases. |
2107.02434 | Shunquan Tan | Long Zhuo and Shunquan Tan and Bin Li and Jiwu Huang | Self-Adversarial Training incorporating Forgery Attention for Image
Forgery Localization | accepted by TIFS | null | 10.1109/TIFS.2022.3152362 | null | cs.CV cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Image editing techniques enable people to modify the content of an image
without leaving visual traces and thus may cause serious security risks. Hence
the detection and localization of these forgeries become quite necessary and
challenging. Furthermore, unlike other tasks with extensive data, there is
usually a lack of annotated forged images for training due to annotation
difficulties. In this paper, we propose a self-adversarial training strategy
and a reliable coarse-to-fine network that utilizes a self-attention mechanism
to localize forged regions in forgery images. The self-attention module is
based on a Channel-Wise High Pass Filter block (CW-HPF). CW-HPF leverages
inter-channel relationships of features and extracts noise features by high
pass filters. Based on the CW-HPF, a self-attention mechanism, called forgery
attention, is proposed to capture rich contextual dependencies of intrinsic
inconsistency extracted from tampered regions. Specifically, we append two
types of attention modules on top of CW-HPF respectively to model internal
interdependencies in spatial dimension and external dependencies among
channels. We exploit a coarse-to-fine network to enhance the noise
inconsistency between original and tampered regions. More importantly, to
address the issue of insufficient training data, we design a self-adversarial
training strategy that expands training data dynamically to achieve more robust
performance. Specifically, in each training iteration, we perform adversarial
attacks against our network to generate adversarial examples and train our
model on them. Extensive experimental results demonstrate that our proposed
algorithm steadily outperforms state-of-the-art methods by a clear margin in
different benchmark datasets.
| [
{
"created": "Tue, 6 Jul 2021 07:20:08 GMT",
"version": "v1"
},
{
"created": "Thu, 3 Feb 2022 04:14:46 GMT",
"version": "v2"
}
] | 2022-02-21 | [
[
"Zhuo",
"Long",
""
],
[
"Tan",
"Shunquan",
""
],
[
"Li",
"Bin",
""
],
[
"Huang",
"Jiwu",
""
]
] | Image editing techniques enable people to modify the content of an image without leaving visual traces and thus may cause serious security risks. Hence the detection and localization of these forgeries become quite necessary and challenging. Furthermore, unlike other tasks with extensive data, there is usually a lack of annotated forged images for training due to annotation difficulties. In this paper, we propose a self-adversarial training strategy and a reliable coarse-to-fine network that utilizes a self-attention mechanism to localize forged regions in forgery images. The self-attention module is based on a Channel-Wise High Pass Filter block (CW-HPF). CW-HPF leverages inter-channel relationships of features and extracts noise features by high pass filters. Based on the CW-HPF, a self-attention mechanism, called forgery attention, is proposed to capture rich contextual dependencies of intrinsic inconsistency extracted from tampered regions. Specifically, we append two types of attention modules on top of CW-HPF respectively to model internal interdependencies in spatial dimension and external dependencies among channels. We exploit a coarse-to-fine network to enhance the noise inconsistency between original and tampered regions. More importantly, to address the issue of insufficient training data, we design a self-adversarial training strategy that expands training data dynamically to achieve more robust performance. Specifically, in each training iteration, we perform adversarial attacks against our network to generate adversarial examples and train our model on them. Extensive experimental results demonstrate that our proposed algorithm steadily outperforms state-of-the-art methods by a clear margin in different benchmark datasets. |
2403.02936 | Mahdi Taheri | Mahdi Taheri, Natalia Cherezova, Samira Nazari, Ahsan Rafiq, Ali
Azarpeyvand, Tara Ghasempouri, Masoud Daneshtalab, Jaan Raik and Maksim
Jenihhin | AdAM: Adaptive Fault-Tolerant Approximate Multiplier for Edge DNN
Accelerators | null | null | null | null | cs.AI cs.AR cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In this paper, we propose an architecture of a novel adaptive fault-tolerant
approximate multiplier tailored for ASIC-based DNN accelerators.
| [
{
"created": "Tue, 5 Mar 2024 13:03:31 GMT",
"version": "v1"
}
] | 2024-03-06 | [
[
"Taheri",
"Mahdi",
""
],
[
"Cherezova",
"Natalia",
""
],
[
"Nazari",
"Samira",
""
],
[
"Rafiq",
"Ahsan",
""
],
[
"Azarpeyvand",
"Ali",
""
],
[
"Ghasempouri",
"Tara",
""
],
[
"Daneshtalab",
"Masoud",
""
],
[
"Raik",
"Jaan",
""
],
[
"Jenihhin",
"Maksim",
""
]
] | In this paper, we propose an architecture of a novel adaptive fault-tolerant approximate multiplier tailored for ASIC-based DNN accelerators. |
2210.15615 | Nikita Moghe | Chantal Amrhein and Nikita Moghe and Liane Guillou | ACES: Translation Accuracy Challenge Sets for Evaluating Machine
Translation Metrics | preprint for WMT 2022 with updated tables | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | As machine translation (MT) metrics improve their correlation with human
judgement every year, it is crucial to understand the limitations of such
metrics at the segment level. Specifically, it is important to investigate
metric behaviour when facing accuracy errors in MT because these can have
dangerous consequences in certain contexts (e.g., legal, medical). We curate
ACES, a translation accuracy challenge set, consisting of 68 phenomena ranging
from simple perturbations at the word/character level to more complex errors
based on discourse and real-world knowledge. We use ACES to evaluate a wide
range of MT metrics including the submissions to the WMT 2022 metrics shared
task and perform several analyses leading to general recommendations for metric
developers. We recommend: a) combining metrics with different strengths, b)
developing metrics that give more weight to the source and less to
surface-level overlap with the reference and c) explicitly modelling additional
language-specific information beyond what is available via multilingual
embeddings.
| [
{
"created": "Thu, 27 Oct 2022 16:59:02 GMT",
"version": "v1"
},
{
"created": "Tue, 6 Dec 2022 09:45:29 GMT",
"version": "v2"
}
] | 2022-12-07 | [
[
"Amrhein",
"Chantal",
""
],
[
"Moghe",
"Nikita",
""
],
[
"Guillou",
"Liane",
""
]
] | As machine translation (MT) metrics improve their correlation with human judgement every year, it is crucial to understand the limitations of such metrics at the segment level. Specifically, it is important to investigate metric behaviour when facing accuracy errors in MT because these can have dangerous consequences in certain contexts (e.g., legal, medical). We curate ACES, a translation accuracy challenge set, consisting of 68 phenomena ranging from simple perturbations at the word/character level to more complex errors based on discourse and real-world knowledge. We use ACES to evaluate a wide range of MT metrics including the submissions to the WMT 2022 metrics shared task and perform several analyses leading to general recommendations for metric developers. We recommend: a) combining metrics with different strengths, b) developing metrics that give more weight to the source and less to surface-level overlap with the reference and c) explicitly modelling additional language-specific information beyond what is available via multilingual embeddings. |
1701.08517 | Pieter Leyman | Pieter Leyman, San Tu Pham, Patrick De Causmaecker | The Intermittent Traveling Salesman Problem with Different Temperature
Profiles: Greedy or not? | null | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this research, we discuss the intermittent traveling salesman problem
(ITSP), which extends the traditional traveling salesman problem (TSP) by
imposing temperature restrictions on each node. These additional constraints
limit the maximum allowable visit time per node, and result in multiple visits
for each node which cannot be serviced in a single visit. We discuss three
different temperature increase and decrease functions, namely a linear, a
quadratic and an exponential function. To solve the problem, we consider three
different solution representations as part of a metaheuristic approach. We
argue that in case of similar temperature increase and decrease profiles, it is
always beneficial to apply a greedy approach, i.e. to process as much as
possible given the current node temperature.
| [
{
"created": "Mon, 30 Jan 2017 09:22:08 GMT",
"version": "v1"
}
] | 2017-01-31 | [
[
"Leyman",
"Pieter",
""
],
[
"Pham",
"San Tu",
""
],
[
"De Causmaecker",
"Patrick",
""
]
] | In this research, we discuss the intermittent traveling salesman problem (ITSP), which extends the traditional traveling salesman problem (TSP) by imposing temperature restrictions on each node. These additional constraints limit the maximum allowable visit time per node, and result in multiple visits for each node which cannot be serviced in a single visit. We discuss three different temperature increase and decrease functions, namely a linear, a quadratic and an exponential function. To solve the problem, we consider three different solution representations as part of a metaheuristic approach. We argue that in case of similar temperature increase and decrease profiles, it is always beneficial to apply a greedy approach, i.e. to process as much as possible given the current node temperature. |
1204.3372 | Anton Salikhmetov | Anton Salikhmetov | Blind graph rewriting systems | 4 pages | null | null | null | cs.LO cs.FL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider a simple (probably, the simplest) structure for random access
memory. This structure can be used to construct a universal system with a nearly
void processor; namely, we demonstrate that the processor of such a system may
have an empty instruction set, in a stronger manner than the existing ZISC
(zero instruction set computer based on ideas for artificial neural networks)
and NISC architecture (no instruction set computing). More precisely, the
processor will be forbidden to analyze any information stored in the memory,
the latter being the only state of such a machine. This particular paper is to
cover an isolated aspect of the idea, specifically, to provide the logical
operations embedded into a system without any built-in conditional statements.
| [
{
"created": "Mon, 16 Apr 2012 07:02:25 GMT",
"version": "v1"
}
] | 2012-04-17 | [
[
"Salikhmetov",
"Anton",
""
]
] | We consider a simple (probably, the simplest) structure for random access memory. This structure can be used to construct a universal system with a nearly void processor; namely, we demonstrate that the processor of such a system may have an empty instruction set, in a stronger manner than the existing ZISC (zero instruction set computer based on ideas for artificial neural networks) and NISC architecture (no instruction set computing). More precisely, the processor will be forbidden to analyze any information stored in the memory, the latter being the only state of such a machine. This particular paper is to cover an isolated aspect of the idea, specifically, to provide the logical operations embedded into a system without any built-in conditional statements. |
2206.02511 | Yang Li | Yang Li, Yu Shen, Huaijun Jiang, Tianyi Bai, Wentao Zhang, Ce Zhang
and Bin Cui | Transfer Learning based Search Space Design for Hyperparameter Tuning | 9 pages and 2 extra pages for appendix | Proceedings of the 28th ACM SIGKDD Conference on Knowledge
Discovery and Data Mining (KDD 2022) | 10.1145/3534678.3539369 | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The tuning of hyperparameters becomes increasingly important as machine
learning (ML) models have been extensively applied in data mining applications.
Among various approaches, Bayesian optimization (BO) is a successful
methodology to tune hyper-parameters automatically. While traditional methods
optimize each tuning task in isolation, there has been recent interest in
speeding up BO by transferring knowledge across previous tasks. In this work,
we introduce an automatic method to design the BO search space with the aid of
tuning history from past tasks. This simple yet effective approach can be used
to endow many existing BO methods with transfer learning capabilities. In
addition, it enjoys the three advantages: universality, generality, and
safeness. The extensive experiments show that our approach considerably boosts
BO by designing a promising and compact search space instead of using the
entire space, and outperforms the state-of-the-arts on a wide range of
benchmarks, including machine learning and deep learning tuning tasks, and
neural architecture search.
| [
{
"created": "Mon, 6 Jun 2022 11:48:58 GMT",
"version": "v1"
}
] | 2022-06-07 | [
[
"Li",
"Yang",
""
],
[
"Shen",
"Yu",
""
],
[
"Jiang",
"Huaijun",
""
],
[
"Bai",
"Tianyi",
""
],
[
"Zhang",
"Wentao",
""
],
[
"Zhang",
"Ce",
""
],
[
"Cui",
"Bin",
""
]
] | The tuning of hyperparameters becomes increasingly important as machine learning (ML) models have been extensively applied in data mining applications. Among various approaches, Bayesian optimization (BO) is a successful methodology to tune hyper-parameters automatically. While traditional methods optimize each tuning task in isolation, there has been recent interest in speeding up BO by transferring knowledge across previous tasks. In this work, we introduce an automatic method to design the BO search space with the aid of tuning history from past tasks. This simple yet effective approach can be used to endow many existing BO methods with transfer learning capabilities. In addition, it enjoys the three advantages: universality, generality, and safeness. The extensive experiments show that our approach considerably boosts BO by designing a promising and compact search space instead of using the entire space, and outperforms the state-of-the-arts on a wide range of benchmarks, including machine learning and deep learning tuning tasks, and neural architecture search. |
1004.3555 | Vishal Goyal | Sukhvinder S. Bamber, Ajay K. Sharma | Comparative Performance Investigations of different scenarios for
802.15.4 WPAN | International Journal of Computer Science Issues online at
http://ijcsi.org/articles/Comparative-Performance-Investigations-of-different-scenarios-for-802-15-4-WPAN.php | IJCSI, Volume 7, Issue 2, March 2010 | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper investigates the performance of WPAN based on various topological
scenarios like: cluster, star and ring. The comparative results have been
reported for the performance metrics like: Throughput, Traffic sent, Traffic
received and Packets dropped. Cluster topology is best in comparison with star
and ring topologies as it has been shown that the throughput in case of cluster
topology (79.887 kbits / sec) is higher as compared to star (31.815 kbits / sec)
and ring (1.179 kbits / sec).
| [
{
"created": "Tue, 20 Apr 2010 20:10:49 GMT",
"version": "v1"
}
] | 2010-04-22 | [
[
"Bamber",
"Sukhvinder S.",
""
],
[
"Sharma",
"Ajay K.",
""
]
] | This paper investigates the performance of WPAN based on various topological scenarios like: cluster, star and ring. The comparative results have been reported for the performance metrics like: Throughput, Traffic sent, Traffic received and Packets dropped. Cluster topology is best in comparison with star and ring topologies as it has been shown that the throughput in case of cluster topology (79.887 kbits / sec) is higher as compared to star (31.815 kbits / sec) and ring (1.179 kbits / sec). |
1504.07597 | Nicolas Turenne | Nicolas Turenne | Duplicate Detection with Efficient Language Models for Automatic
Bibliographic Heterogeneous Data Integration | null | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a new method to detect duplicates used to merge different
bibliographic record corpora with the help of lexical and social information.
As we show, a trivial key is not available to delete useless documents. Merging
heterogeneous document databases to get a maximum of information can be of
interest. In our case we try to build a document corpus about the TOR molecule
so as to extract relationships with other gene components from PubMed and
WebOfScience document databases. Our approach makes key fingerprints based on
n-grams. We made two documents gold standards using this corpus to make an
evaluation. Comparison with other well-known methods in deduplication gives
best scores of recall (95\%) and precision (100\%).
| [
{
"created": "Mon, 27 Apr 2015 11:53:01 GMT",
"version": "v1"
}
] | 2015-04-29 | [
[
"Turenne",
"Nicolas",
""
]
] | We present a new method to detect duplicates used to merge different bibliographic record corpora with the help of lexical and social information. As we show, a trivial key is not available to delete useless documents. Merging heterogeneous document databases to get a maximum of information can be of interest. In our case we try to build a document corpus about the TOR molecule so as to extract relationships with other gene components from PubMed and WebOfScience document databases. Our approach makes key fingerprints based on n-grams. We made two documents gold standards using this corpus to make an evaluation. Comparison with other well-known methods in deduplication gives best scores of recall (95\%) and precision (100\%). |
1812.05195 | Vaibhav Saini | Vaibhav Saini, Farima Farmahinifarahani, Yadong Lu, Di Yang, Pedro
Martins, Hitesh Sajnani, Pierre Baldi, Cristina Lopes | Towards Automating Precision Studies of Clone Detectors | Accepted to be published in the 41st ACM/IEEE International
Conference on Software Engineering | Proceeding 2019 IEEE/ACM 41st International Conference on Software
Engineering (ICSE) | 10.1109/ICSE.2019.00023 | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current research in clone detection suffers from poor ecosystems for
evaluating precision of clone detection tools. Corpora of labeled clones are
scarce and incomplete, making evaluation labor intensive and idiosyncratic, and
limiting inter tool comparison. Precision-assessment tools are simply lacking.
We present a semi-automated approach to facilitate precision studies of clone
detection tools. The approach merges automatic mechanisms of clone
classification with manual validation of clone pairs. We demonstrate that the
proposed automatic approach has a very high precision and it significantly
reduces the number of clone pairs that need human validation during precision
experiments. Moreover, we aggregate the individual effort of multiple teams
into a single evolving dataset of labeled clone pairs, creating an important
asset for software clone research.
| [
{
"created": "Wed, 12 Dec 2018 23:28:59 GMT",
"version": "v1"
},
{
"created": "Fri, 14 Dec 2018 02:36:31 GMT",
"version": "v2"
}
] | 2019-05-30 | [
[
"Saini",
"Vaibhav",
""
],
[
"Farmahinifarahani",
"Farima",
""
],
[
"Lu",
"Yadong",
""
],
[
"Yang",
"Di",
""
],
[
"Martins",
"Pedro",
""
],
[
"Sajnani",
"Hitesh",
""
],
[
"Baldi",
"Pierre",
""
],
[
"Lopes",
"Cristina",
""
]
] | Current research in clone detection suffers from poor ecosystems for evaluating precision of clone detection tools. Corpora of labeled clones are scarce and incomplete, making evaluation labor intensive and idiosyncratic, and limiting inter tool comparison. Precision-assessment tools are simply lacking. We present a semi-automated approach to facilitate precision studies of clone detection tools. The approach merges automatic mechanisms of clone classification with manual validation of clone pairs. We demonstrate that the proposed automatic approach has a very high precision and it significantly reduces the number of clone pairs that need human validation during precision experiments. Moreover, we aggregate the individual effort of multiple teams into a single evolving dataset of labeled clone pairs, creating an important asset for software clone research. |
2309.06672 | Zhengyang Chen | Zhengyang Chen, Bing Han, Shuai Wang and Yanmin Qian | Attention-based Encoder-Decoder End-to-End Neural Diarization with
Embedding Enhancer | IEEE/ACM Transactions on Audio Speech and Language Processing Under
Review | null | null | null | cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep neural network-based systems have significantly improved the performance
of speaker diarization tasks. However, end-to-end neural diarization (EEND)
systems often struggle to generalize to scenarios with an unseen number of
speakers, while target speaker voice activity detection (TS-VAD) systems tend
to be overly complex. In this paper, we propose a simple attention-based
encoder-decoder network for end-to-end neural diarization (AED-EEND). In our
training process, we introduce a teacher-forcing strategy to address the
speaker permutation problem, leading to faster model convergence. For
evaluation, we propose an iterative decoding method that outputs diarization
results for each speaker sequentially. Additionally, we propose an Enhancer
module to enhance the frame-level speaker embeddings, enabling the model to
handle scenarios with an unseen number of speakers. We also explore replacing
the transformer encoder with a Conformer architecture, which better models
local information. Furthermore, we discovered that commonly used simulation
datasets for speaker diarization have a much higher overlap ratio compared to
real data. We found that using simulated training data that is more consistent
with real data can achieve an improvement in consistency. Extensive
experimental validation demonstrates the effectiveness of our proposed
methodologies. Our best system achieved a new state-of-the-art diarization
error rate (DER) performance on all the CALLHOME (10.08%), DIHARD II (24.64%),
and AMI (13.00%) evaluation benchmarks, when no oracle voice activity detection
(VAD) is used. Beyond speaker diarization, our AED-EEND system also shows
remarkable competitiveness as a speech type detection model.
| [
{
"created": "Wed, 13 Sep 2023 02:17:13 GMT",
"version": "v1"
}
] | 2023-09-14 | [
[
"Chen",
"Zhengyang",
""
],
[
"Han",
"Bing",
""
],
[
"Wang",
"Shuai",
""
],
[
"Qian",
"Yanmin",
""
]
] | Deep neural network-based systems have significantly improved the performance of speaker diarization tasks. However, end-to-end neural diarization (EEND) systems often struggle to generalize to scenarios with an unseen number of speakers, while target speaker voice activity detection (TS-VAD) systems tend to be overly complex. In this paper, we propose a simple attention-based encoder-decoder network for end-to-end neural diarization (AED-EEND). In our training process, we introduce a teacher-forcing strategy to address the speaker permutation problem, leading to faster model convergence. For evaluation, we propose an iterative decoding method that outputs diarization results for each speaker sequentially. Additionally, we propose an Enhancer module to enhance the frame-level speaker embeddings, enabling the model to handle scenarios with an unseen number of speakers. We also explore replacing the transformer encoder with a Conformer architecture, which better models local information. Furthermore, we discovered that commonly used simulation datasets for speaker diarization have a much higher overlap ratio compared to real data. We found that using simulated training data that is more consistent with real data can achieve an improvement in consistency. Extensive experimental validation demonstrates the effectiveness of our proposed methodologies. Our best system achieved a new state-of-the-art diarization error rate (DER) performance on all the CALLHOME (10.08%), DIHARD II (24.64%), and AMI (13.00%) evaluation benchmarks, when no oracle voice activity detection (VAD) is used. Beyond speaker diarization, our AED-EEND system also shows remarkable competitiveness as a speech type detection model. |
1703.03262 | Ching-Hua Yu | Ching-Hua Yu | Does Nash Envy Immunity | null | null | null | null | cs.GT cs.CC cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The most popular stability notion in games should be Nash equilibrium under
the rationality of players who maximize their own payoff individually. In
contrast, in many scenarios, players can be (partly) irrational with some
unpredictable factors. Hence a strategy profile can be more robust if it is
resilient against certain irrational behaviors. In this paper, we propose a
stability notion that is resilient against envy. A strategy profile is said to
be envy-proof if each player cannot gain a competitive edge with respect to the
change in utility over the other players by deviation. Together with Nash
equilibrium and another stability notion called immunity, we show how these
separate notions are related to each other, whether they exist in games, and
whether and when a strategy profile satisfying these notions can be efficiently
found. We answer these questions by starting with the general two-player game
and extending the discussion to approximate stability and to the
corresponding fault-tolerance notions in multi-player games.
| [
{
"created": "Thu, 9 Mar 2017 13:45:45 GMT",
"version": "v1"
}
] | 2017-03-10 | [
[
"Yu",
"Ching-Hua",
""
]
] | The most popular stability notion in games should be Nash equilibrium under the rationality of players who maximize their own payoff individually. In contrast, in many scenarios, players can be (partly) irrational with some unpredictable factors. Hence a strategy profile can be more robust if it is resilient against certain irrational behaviors. In this paper, we propose a stability notion that is resilient against envy. A strategy profile is said to be envy-proof if each player cannot gain a competitive edge with respect to the change in utility over the other players by deviation. Together with Nash equilibrium and another stability notion called immunity, we show how these separate notions are related to each other, whether they exist in games, and whether and when a strategy profile satisfying these notions can be efficiently found. We answer these questions by starting with the general two-player game and extending the discussion to approximate stability and to the corresponding fault-tolerance notions in multi-player games. |
2312.15820 | Qi Chen | Qi Chen, Dileepa Pitawela, Chongyang Zhao, Gengze Zhou, Hsiang-Ting
Chen, Qi Wu | WebVLN: Vision-and-Language Navigation on Websites | Accepted by AAAI2024 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Vision-and-Language Navigation (VLN) task aims to enable AI agents to
accurately understand and follow natural language instructions to navigate
through real-world environments, ultimately reaching specific target locations.
We recognise a promising opportunity to extend VLN to a comparable navigation
task that holds substantial significance in our daily lives, albeit within the
virtual realm: navigating websites on the Internet. This paper proposes a new
task named Vision-and-Language Navigation on Websites (WebVLN), where we use
question-based instructions to train an agent, emulating how users naturally
browse websites. Unlike the existing VLN task that only pays attention to
vision and instruction (language), the WebVLN agent further considers
underlying web-specific content like HTML, which could not be seen on the
rendered web pages yet contains rich visual and textual information. Toward
this goal, we contribute a dataset, WebVLN-v1, and introduce a novel approach
called Website-aware VLN Network (WebVLN-Net), which is built upon the
foundation of state-of-the-art VLN techniques. Experimental results show that
WebVLN-Net outperforms current VLN and web-related navigation methods. We
believe that the introduction of the new WebVLN task and its dataset will
establish a new dimension within the VLN domain and contribute to the broader
vision-and-language research community. The code is available at:
https://github.com/WebVLN/WebVLN.
| [
{
"created": "Mon, 25 Dec 2023 22:13:26 GMT",
"version": "v1"
}
] | 2023-12-27 | [
[
"Chen",
"Qi",
""
],
[
"Pitawela",
"Dileepa",
""
],
[
"Zhao",
"Chongyang",
""
],
[
"Zhou",
"Gengze",
""
],
[
"Chen",
"Hsiang-Ting",
""
],
[
"Wu",
"Qi",
""
]
] | Vision-and-Language Navigation (VLN) task aims to enable AI agents to accurately understand and follow natural language instructions to navigate through real-world environments, ultimately reaching specific target locations. We recognise a promising opportunity to extend VLN to a comparable navigation task that holds substantial significance in our daily lives, albeit within the virtual realm: navigating websites on the Internet. This paper proposes a new task named Vision-and-Language Navigation on Websites (WebVLN), where we use question-based instructions to train an agent, emulating how users naturally browse websites. Unlike the existing VLN task that only pays attention to vision and instruction (language), the WebVLN agent further considers underlying web-specific content like HTML, which could not be seen on the rendered web pages yet contains rich visual and textual information. Toward this goal, we contribute a dataset, WebVLN-v1, and introduce a novel approach called Website-aware VLN Network (WebVLN-Net), which is built upon the foundation of state-of-the-art VLN techniques. Experimental results show that WebVLN-Net outperforms current VLN and web-related navigation methods. We believe that the introduction of the new WebVLN task and its dataset will establish a new dimension within the VLN domain and contribute to the broader vision-and-language research community. The code is available at: https://github.com/WebVLN/WebVLN. |
2001.01861 | Subru Krishnan | Mohammad Hossein Namaki, Avrilia Floratou, Fotis Psallidas, Subru
Krishnan, Ashvin Agrawal, Yinghui Wu, Yiwen Zhu and Markus Weimer | Vamsa: Automated Provenance Tracking in Data Science Scripts | null | null | 10.1145/3394486.3403205 | null | cs.LG cs.DC stat.ML | http://creativecommons.org/licenses/by-nc-sa/4.0/ | There has recently been a lot of ongoing research in the areas of fairness,
bias and explainability of machine learning (ML) models due to the self-evident
or regulatory requirements of various ML applications. We make the following
observation: All of these approaches require a robust understanding of the
relationship between ML models and the data used to train them. In this work,
we introduce the ML provenance tracking problem: the fundamental idea is to
automatically track which columns in a dataset have been used to derive the
features/labels of an ML model. We discuss the challenges in capturing such
information in the context of Python, the most common language used by data
scientists. We then present Vamsa, a modular system that extracts provenance
from Python scripts without requiring any changes to the users' code. Using 26K
real data science scripts, we verify the effectiveness of Vamsa in terms of
coverage, and performance. We also evaluate Vamsa's accuracy on a smaller
subset of manually labeled data. Our analysis shows that Vamsa's precision and
recall range from 90.4% to 99.1% and its latency is in the order of
milliseconds for average size scripts. Drawing from our experience in deploying
ML models in production, we also present an example in which Vamsa helps
automatically identify models that are affected by data corruption issues.
| [
{
"created": "Tue, 7 Jan 2020 02:39:02 GMT",
"version": "v1"
},
{
"created": "Thu, 30 Jul 2020 16:58:22 GMT",
"version": "v2"
}
] | 2020-07-31 | [
[
"Namaki",
"Mohammad Hossein",
""
],
[
"Floratou",
"Avrilia",
""
],
[
"Psallidas",
"Fotis",
""
],
[
"Krishnan",
"Subru",
""
],
[
"Agrawal",
"Ashvin",
""
],
[
"Wu",
"Yinghui",
""
],
[
"Zhu",
"Yiwen",
""
],
[
"Weimer",
"Markus",
""
]
] | There has recently been a lot of ongoing research in the areas of fairness, bias and explainability of machine learning (ML) models due to the self-evident or regulatory requirements of various ML applications. We make the following observation: All of these approaches require a robust understanding of the relationship between ML models and the data used to train them. In this work, we introduce the ML provenance tracking problem: the fundamental idea is to automatically track which columns in a dataset have been used to derive the features/labels of an ML model. We discuss the challenges in capturing such information in the context of Python, the most common language used by data scientists. We then present Vamsa, a modular system that extracts provenance from Python scripts without requiring any changes to the users' code. Using 26K real data science scripts, we verify the effectiveness of Vamsa in terms of coverage, and performance. We also evaluate Vamsa's accuracy on a smaller subset of manually labeled data. Our analysis shows that Vamsa's precision and recall range from 90.4% to 99.1% and its latency is in the order of milliseconds for average size scripts. Drawing from our experience in deploying ML models in production, we also present an example in which Vamsa helps automatically identify models that are affected by data corruption issues. |
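The Vamsa record above describes automatically tracking which dataset columns derive an ML model's features and labels from Python scripts. Vamsa's actual implementation is not shown here; the sketch below only illustrates the general flavor of such static analysis — walking a script's AST to collect column names used in subscripts — using the stdlib `ast` module. The example script, the `ColumnCollector` name, and the subscript heuristic are all illustrative assumptions.

```python
import ast

# A toy data science script; it is only parsed, never executed.
SCRIPT = """
import pandas as pd
df = pd.read_csv("train.csv")
X = df[["age", "income"]]
y = df["label"]
"""

class ColumnCollector(ast.NodeVisitor):
    """Collect string keys used to subscript any name: a rough stand-in for
    tracking which dataset columns flow into features/labels."""
    def __init__(self):
        self.columns = set()

    def visit_Subscript(self, node):
        key = node.slice  # the expression inside [...] (Python 3.9+)
        if isinstance(key, ast.Constant) and isinstance(key.value, str):
            self.columns.add(key.value)          # df["label"]
        elif isinstance(key, ast.List):
            self.columns.update(e.value for e in key.elts
                                if isinstance(e, ast.Constant)
                                and isinstance(e.value, str))  # df[["age", "income"]]
        self.generic_visit(node)

collector = ColumnCollector()
collector.visit(ast.parse(SCRIPT))
print(sorted(collector.columns))  # ['age', 'income', 'label']
```

A production system would need much more than this (data-flow tracking through variables, knowledge of library APIs, handling of dynamic column selection), which is presumably where the engineering effort described in the abstract goes.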