id stringlengths 9 10 | submitter stringlengths 1 64 ⌀ | authors stringlengths 4 20.7k | title stringlengths 4 246 | comments stringlengths 1 523 ⌀ | journal-ref stringlengths 4 404 ⌀ | doi stringlengths 11 153 ⌀ | report-no stringlengths 2 254 ⌀ | categories stringlengths 5 98 | license stringclasses 9 values | orig_abstract stringlengths 14 3.35k | versions listlengths 1 60 | update_date stringlengths 10 10 | authors_parsed listlengths 1 1.35k | abstract stringlengths 11 3.34k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2210.06594 | Raghavendra Addanki | Raghavendra Addanki, David Arbour, Tung Mai, Cameron Musco, Anup Rao | Sample Constrained Treatment Effect Estimation | Conference on Neural Information Processing Systems (NeurIPS) 2022 | null | null | null | cs.LG cs.AI cs.DS econ.EM stat.ME | http://creativecommons.org/licenses/by/4.0/ | Treatment effect estimation is a fundamental problem in causal inference. We
focus on designing efficient randomized controlled trials, to accurately
estimate the effect of some treatment on a population of $n$ individuals. In
particular, we study sample-constrained treatment effect estimation, where we
must select a subset of $s \ll n$ individuals from the population to experiment
on. This subset must be further partitioned into treatment and control groups.
Algorithms for partitioning the entire population into treatment and control
groups, or for choosing a single representative subset, have been well-studied.
The key challenge in our setting is jointly choosing a representative subset
and a partition for that set.
We focus on both individual and average treatment effect estimation, under a
linear effects model. We give provably efficient experimental designs and
corresponding estimators, by identifying connections to discrepancy
minimization and leverage-score-based sampling used in randomized numerical
linear algebra. Our theoretical results obtain a smooth transition to known
guarantees when $s$ equals the population size. We also empirically demonstrate
the performance of our algorithms.
| [
{
"created": "Wed, 12 Oct 2022 21:13:47 GMT",
"version": "v1"
}
] | 2022-10-14 | [
[
"Addanki",
"Raghavendra",
""
],
[
"Arbour",
"David",
""
],
[
"Mai",
"Tung",
""
],
[
"Musco",
"Cameron",
""
],
[
"Rao",
"Anup",
""
]
] | Treatment effect estimation is a fundamental problem in causal inference. We focus on designing efficient randomized controlled trials, to accurately estimate the effect of some treatment on a population of $n$ individuals. In particular, we study sample-constrained treatment effect estimation, where we must select a subset of $s \ll n$ individuals from the population to experiment on. This subset must be further partitioned into treatment and control groups. Algorithms for partitioning the entire population into treatment and control groups, or for choosing a single representative subset, have been well-studied. The key challenge in our setting is jointly choosing a representative subset and a partition for that set. We focus on both individual and average treatment effect estimation, under a linear effects model. We give provably efficient experimental designs and corresponding estimators, by identifying connections to discrepancy minimization and leverage-score-based sampling used in randomized numerical linear algebra. Our theoretical results obtain a smooth transition to known guarantees when $s$ equals the population size. We also empirically demonstrate the performance of our algorithms. |
cs/9909015 | Joseph Y. Halpern | Francis C. Chu and Joseph Y. Halpern | A decision-theoretic approach to reliable message delivery | This is the full version of a paper that appears in the Proceedings
of the 12th International Symposium on Distributed Computing, 1998, pp. 89-10 | null | null | null | cs.DC | null | We argue that the tools of decision theory need to be taken more seriously in
the specification and analysis of systems. We illustrate this by considering a
simple problem involving reliable communication, showing how considerations of
utility and probability can be used to decide when it is worth sending
heartbeat messages and, if they are sent, how often they should be sent.
| [
{
"created": "Tue, 21 Sep 1999 20:51:37 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Chu",
"Francis C.",
""
],
[
"Halpern",
"Joseph Y.",
""
]
] | We argue that the tools of decision theory need to be taken more seriously in the specification and analysis of systems. We illustrate this by considering a simple problem involving reliable communication, showing how considerations of utility and probability can be used to decide when it is worth sending heartbeat messages and, if they are sent, how often they should be sent. |
2002.09866 | Yossi Adi | Yossi Adi, Yaniv Nemcovsky, Alex Schwing, Tamir Hazan | On the generalization of bayesian deep nets for multi-class
classification | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generalization bounds which assess the difference between the true risk and
the empirical risk have been studied extensively. However, to obtain bounds,
current techniques use strict assumptions such as a uniformly bounded or a
Lipschitz loss function. To avoid these assumptions, in this paper, we propose
a new generalization bound for Bayesian deep nets by exploiting the
contractivity of the Log-Sobolev inequalities. Using these inequalities adds an
additional loss-gradient norm term to the generalization bound, which is
intuitively a surrogate of the model complexity. Empirically, we analyze the
effect of this loss-gradient norm term using different deep nets.
| [
{
"created": "Sun, 23 Feb 2020 09:05:03 GMT",
"version": "v1"
}
] | 2020-02-25 | [
[
"Adi",
"Yossi",
""
],
[
"Nemcovsky",
"Yaniv",
""
],
[
"Schwing",
"Alex",
""
],
[
"Hazan",
"Tamir",
""
]
] | Generalization bounds which assess the difference between the true risk and the empirical risk have been studied extensively. However, to obtain bounds, current techniques use strict assumptions such as a uniformly bounded or a Lipschitz loss function. To avoid these assumptions, in this paper, we propose a new generalization bound for Bayesian deep nets by exploiting the contractivity of the Log-Sobolev inequalities. Using these inequalities adds an additional loss-gradient norm term to the generalization bound, which is intuitively a surrogate of the model complexity. Empirically, we analyze the effect of this loss-gradient norm term using different deep nets. |
1606.00195 | Ioannis Marcoullis | Shlomi Dolev, Chryssis Georgiou, Ioannis Marcoullis, Elad M. Schiller | Self-stabilizing Reconfiguration | null | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current reconfiguration techniques are based on starting the system in a
consistent configuration, in which all participating entities are in their
initial state. Starting from that state, the system must preserve consistency
as long as a predefined churn rate of processors joins and leaves is not
violated, and unbounded storage is available. Many working systems cannot
control this churn rate and do not have access to unbounded storage. System
designers that neglect the outcome of violating the above assumptions may doom
the system to exhibit illegal behaviors. We present the first automatically
recovering reconfiguration scheme that recovers from transient faults, such as
temporal violations of the above assumptions. Our self-stabilizing solutions
regain safety automatically by assuming temporal access to reliable failure
detectors. Once safety is re-established, the failure detector reliability is
no longer needed. Still, liveness is conditioned by the failure detector's
unreliable signals. We show that our self-stabilizing reconfiguration
techniques can serve as the basis for the implementation of several dynamic
services over message passing systems. Examples include self-stabilizing
reconfigurable virtual synchrony, which, in turn, can be used for implementing
a self-stabilizing reconfigurable state-machine replication and
self-stabilizing reconfigurable emulation of shared memory.
| [
{
"created": "Wed, 1 Jun 2016 09:44:57 GMT",
"version": "v1"
},
{
"created": "Tue, 6 Dec 2016 09:18:57 GMT",
"version": "v2"
}
] | 2016-12-07 | [
[
"Dolev",
"Shlomi",
""
],
[
"Georgiou",
"Chryssis",
""
],
[
"Marcoullis",
"Ioannis",
""
],
[
"Schiller",
"Elad M.",
""
]
] | Current reconfiguration techniques are based on starting the system in a consistent configuration, in which all participating entities are in their initial state. Starting from that state, the system must preserve consistency as long as a predefined churn rate of processors joins and leaves is not violated, and unbounded storage is available. Many working systems cannot control this churn rate and do not have access to unbounded storage. System designers that neglect the outcome of violating the above assumptions may doom the system to exhibit illegal behaviors. We present the first automatically recovering reconfiguration scheme that recovers from transient faults, such as temporal violations of the above assumptions. Our self-stabilizing solutions regain safety automatically by assuming temporal access to reliable failure detectors. Once safety is re-established, the failure detector reliability is no longer needed. Still, liveness is conditioned by the failure detector's unreliable signals. We show that our self-stabilizing reconfiguration techniques can serve as the basis for the implementation of several dynamic services over message passing systems. Examples include self-stabilizing reconfigurable virtual synchrony, which, in turn, can be used for implementing a self-stabilizing reconfigurable state-machine replication and self-stabilizing reconfigurable emulation of shared memory. |
1207.1811 | George Katsirelos | George Katsirelos and Nina Narodytska and Toby Walsh | The SeqBin Constraint Revisited | Longer version of paper accepted at CP 2012 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We revisit the SeqBin constraint. This meta-constraint subsumes a number of
important global constraints like Change, Smooth and IncreasingNValue. We show
that the previously proposed filtering algorithm for SeqBin has two drawbacks
even under strong restrictions: it does not detect bounds disentailment and it
is not idempotent. We identify the cause for these problems, and propose a new
propagator that overcomes both issues. Our algorithm is based on a connection
to the problem of finding a path of a given cost in a restricted $n$-partite
graph. Our propagator enforces domain consistency in O(nd^2) and, for special
cases of SeqBin that include Change, Smooth and IncreasingNValue, in O(nd)
time.
| [
{
"created": "Sat, 7 Jul 2012 16:21:53 GMT",
"version": "v1"
}
] | 2015-03-20 | [
[
"Katsirelos",
"George",
""
],
[
"Narodytska",
"Nina",
""
],
[
"Walsh",
"Toby",
""
]
] | We revisit the SeqBin constraint. This meta-constraint subsumes a number of important global constraints like Change, Smooth and IncreasingNValue. We show that the previously proposed filtering algorithm for SeqBin has two drawbacks even under strong restrictions: it does not detect bounds disentailment and it is not idempotent. We identify the cause for these problems, and propose a new propagator that overcomes both issues. Our algorithm is based on a connection to the problem of finding a path of a given cost in a restricted $n$-partite graph. Our propagator enforces domain consistency in O(nd^2) and, for special cases of SeqBin that include Change, Smooth and IncreasingNValue, in O(nd) time. |
2306.05239 | Xiao Wang | Bo Jiang, Chengguo Yuan, Xiao Wang, Zhimin Bao, Lin Zhu, Yonghong
Tian, Jin Tang | Point-Voxel Absorbing Graph Representation Learning for Event Stream
based Recognition | In Peer Review | null | null | null | cs.CV cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sampled point and voxel methods are usually employed to downsample the dense
events into sparse ones. After that, one popular way is to leverage a graph
model which treats the sparse points/voxels as nodes and adopts graph neural
networks (GNNs) to learn the representation of event data. Although good
performance can be obtained, their results are still limited mainly due to two
issues. (1) Existing event GNNs generally adopt the additional max
(or mean) pooling layer to summarize all node embeddings into a single
graph-level representation for the whole event data representation. However,
this approach fails to capture the importance of graph nodes and also fails to
be fully aware of the node representations. (2) Existing methods generally
employ either a sparse point or voxel graph representation model which thus
lacks consideration of the complementarity between these two types of
representation models. To address these issues, we propose a novel dual
point-voxel absorbing graph representation learning for event stream data
representation. To be specific, given the input event stream, we first
transform it into the sparse event cloud and voxel grids and build dual
absorbing graph models for them respectively. Then, we design a novel absorbing
graph convolutional network (AGCN) for our dual absorbing graph representation
and learning. The key aspect of the proposed AGCN is its ability to effectively
capture the importance of nodes and thus be fully aware of node representations
in summarizing all node representations through the introduced absorbing nodes.
Extensive experiments on multiple event-based classification benchmark datasets
fully validated the effectiveness of our framework.
| [
{
"created": "Thu, 8 Jun 2023 14:38:43 GMT",
"version": "v1"
},
{
"created": "Sat, 29 Jul 2023 12:18:38 GMT",
"version": "v2"
}
] | 2023-08-01 | [
[
"Jiang",
"Bo",
""
],
[
"Yuan",
"Chengguo",
""
],
[
"Wang",
"Xiao",
""
],
[
"Bao",
"Zhimin",
""
],
[
"Zhu",
"Lin",
""
],
[
"Tian",
"Yonghong",
""
],
[
"Tang",
"Jin",
""
]
] | Sampled point and voxel methods are usually employed to downsample the dense events into sparse ones. After that, one popular way is to leverage a graph model which treats the sparse points/voxels as nodes and adopts graph neural networks (GNNs) to learn the representation of event data. Although good performance can be obtained, their results are still limited mainly due to two issues. (1) Existing event GNNs generally adopt the additional max (or mean) pooling layer to summarize all node embeddings into a single graph-level representation for the whole event data representation. However, this approach fails to capture the importance of graph nodes and also fails to be fully aware of the node representations. (2) Existing methods generally employ either a sparse point or voxel graph representation model which thus lacks consideration of the complementarity between these two types of representation models. To address these issues, we propose a novel dual point-voxel absorbing graph representation learning for event stream data representation. To be specific, given the input event stream, we first transform it into the sparse event cloud and voxel grids and build dual absorbing graph models for them respectively. Then, we design a novel absorbing graph convolutional network (AGCN) for our dual absorbing graph representation and learning. The key aspect of the proposed AGCN is its ability to effectively capture the importance of nodes and thus be fully aware of node representations in summarizing all node representations through the introduced absorbing nodes. Extensive experiments on multiple event-based classification benchmark datasets fully validated the effectiveness of our framework. |
2211.15234 | Ruitian Wu | Jingwei Li, Ruitian Wu, Tzu-liang Huang, Zian Pan, Ming-chun Huang | Shoupa: An AI System for Early Diagnosis of Parkinson's Disease | 2 pages, 1 figure, accepted by IEEE/ACM CHASE 2022 (Poster
Presentation) | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Parkinson's Disease (PD) is a progressive nervous system disorder that has
affected more than 5.8 million people, especially the elderly. Due to the
complexity of its symptoms and its similarity to other neurological disorders,
early detection requires neurologists or PD specialists to be involved, which
is not accessible to most old people. Therefore, we integrate smart mobile
devices with AI technologies. In this paper, we introduce the framework of our
developed PD early detection system which combines different tasks evaluating
both motor and non-motor symptoms. With the developed model, we help users
detect PD punctually in non-clinical settings and figure out their most severe
symptoms. The results are expected to be further used for PD rehabilitation
guidance and detection of other neurological disorders.
| [
{
"created": "Mon, 28 Nov 2022 11:32:17 GMT",
"version": "v1"
}
] | 2022-11-29 | [
[
"Li",
"Jingwei",
""
],
[
"Wu",
"Ruitian",
""
],
[
"Huang",
"Tzu-liang",
""
],
[
"Pan",
"Zian",
""
],
[
"Huang",
"Ming-chun",
""
]
] | Parkinson's Disease (PD) is a progressive nervous system disorder that has affected more than 5.8 million people, especially the elderly. Due to the complexity of its symptoms and its similarity to other neurological disorders, early detection requires neurologists or PD specialists to be involved, which is not accessible to most old people. Therefore, we integrate smart mobile devices with AI technologies. In this paper, we introduce the framework of our developed PD early detection system which combines different tasks evaluating both motor and non-motor symptoms. With the developed model, we help users detect PD punctually in non-clinical settings and figure out their most severe symptoms. The results are expected to be further used for PD rehabilitation guidance and detection of other neurological disorders. |
2109.01860 | Taiping Yao | Zhihao Gu, Yang Chen, Taiping Yao, Shouhong Ding, Jilin Li, Feiyue
Huang, Lizhuang Ma | Spatiotemporal Inconsistency Learning for DeepFake Video Detection | To appear in ACM MM 2021 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rapid development of facial manipulation techniques has aroused public
concerns in recent years. Following the success of deep learning, existing
methods always formulate DeepFake video detection as a binary classification
problem and develop frame-based and video-based solutions. However, little
attention has been paid to capturing the spatial-temporal inconsistency in
forged videos. To address this issue, we term this task as a Spatial-Temporal
Inconsistency Learning (STIL) process and instantiate it into a novel STIL
block, which consists of a Spatial Inconsistency Module (SIM), a Temporal
Inconsistency Module (TIM), and an Information Supplement Module (ISM).
Specifically, we present a novel temporal modeling paradigm in TIM by
exploiting the temporal difference over adjacent frames along with both
horizontal and vertical directions. And the ISM simultaneously utilizes the
spatial information from SIM and temporal information from TIM to establish a
more comprehensive spatial-temporal representation. Moreover, our STIL block is
flexible and could be plugged into existing 2D CNNs. Extensive experiments and
visualizations are presented to demonstrate the effectiveness of our method
against the state-of-the-art competitors.
| [
{
"created": "Sat, 4 Sep 2021 13:05:37 GMT",
"version": "v1"
},
{
"created": "Tue, 7 Sep 2021 09:05:29 GMT",
"version": "v2"
},
{
"created": "Mon, 11 Oct 2021 05:15:08 GMT",
"version": "v3"
}
] | 2021-10-12 | [
[
"Gu",
"Zhihao",
""
],
[
"Chen",
"Yang",
""
],
[
"Yao",
"Taiping",
""
],
[
"Ding",
"Shouhong",
""
],
[
"Li",
"Jilin",
""
],
[
"Huang",
"Feiyue",
""
],
[
"Ma",
"Lizhuang",
""
]
] | The rapid development of facial manipulation techniques has aroused public concerns in recent years. Following the success of deep learning, existing methods always formulate DeepFake video detection as a binary classification problem and develop frame-based and video-based solutions. However, little attention has been paid to capturing the spatial-temporal inconsistency in forged videos. To address this issue, we term this task as a Spatial-Temporal Inconsistency Learning (STIL) process and instantiate it into a novel STIL block, which consists of a Spatial Inconsistency Module (SIM), a Temporal Inconsistency Module (TIM), and an Information Supplement Module (ISM). Specifically, we present a novel temporal modeling paradigm in TIM by exploiting the temporal difference over adjacent frames along with both horizontal and vertical directions. And the ISM simultaneously utilizes the spatial information from SIM and temporal information from TIM to establish a more comprehensive spatial-temporal representation. Moreover, our STIL block is flexible and could be plugged into existing 2D CNNs. Extensive experiments and visualizations are presented to demonstrate the effectiveness of our method against the state-of-the-art competitors. |
1201.5418 | Rafael Caballero | R. Caballero, M. Rodriguez-Artalejo and C. A. Romero-Diaz | A Transformation-based Implementation for CLP with Qualification and
Proximity | To appear in Theory and Practice of Logic Programming (TPLP). arXiv
admin note: significant text overlap with arXiv:1009.1976 | null | null | 12-R01-TPLP | cs.LO cs.PL | http://creativecommons.org/licenses/publicdomain/ | Uncertainty in logic programming has been widely investigated in the last
decades, leading to multiple extensions of the classical LP paradigm. However,
few of these are designed as extensions of the well-established and powerful
CLP scheme for Constraint Logic Programming. In a previous work we have
proposed the SQCLP ({\em proximity-based qualified constraint logic
programming}) scheme as a quite expressive extension of CLP with support for
qualification values and proximity relations as generalizations of uncertainty
values and similarity relations, respectively. In this paper we provide a
transformation technique for transforming SQCLP programs and goals into
semantically equivalent CLP programs and goals, and a practical Prolog-based
implementation of some particularly useful instances of the SQCLP scheme. We
also illustrate, by showing some simple---and working---examples, how the
prototype can be effectively used as a tool for solving problems where
qualification values and proximity relations play a key role. Intended use of
SQCLP includes flexible information retrieval applications.
| [
{
"created": "Wed, 25 Jan 2012 23:47:26 GMT",
"version": "v1"
}
] | 2012-01-27 | [
[
"Caballero",
"R.",
""
],
[
"Rodriguez-Artalejo",
"M.",
""
],
[
"Romero-Diaz",
"C. A.",
""
]
] | Uncertainty in logic programming has been widely investigated in the last decades, leading to multiple extensions of the classical LP paradigm. However, few of these are designed as extensions of the well-established and powerful CLP scheme for Constraint Logic Programming. In a previous work we have proposed the SQCLP ({\em proximity-based qualified constraint logic programming}) scheme as a quite expressive extension of CLP with support for qualification values and proximity relations as generalizations of uncertainty values and similarity relations, respectively. In this paper we provide a transformation technique for transforming SQCLP programs and goals into semantically equivalent CLP programs and goals, and a practical Prolog-based implementation of some particularly useful instances of the SQCLP scheme. We also illustrate, by showing some simple---and working---examples, how the prototype can be effectively used as a tool for solving problems where qualification values and proximity relations play a key role. Intended use of SQCLP includes flexible information retrieval applications. |
2204.00679 | Arsha Nagrani | Arsha Nagrani, Paul Hongsuck Seo, Bryan Seybold, Anja Hauth, Santiago
Manen, Chen Sun and Cordelia Schmid | Learning Audio-Video Modalities from Image Captions | null | null | null | null | cs.CV cs.MM cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A major challenge in text-video and text-audio retrieval is the lack of
large-scale training data. This is unlike image-captioning, where datasets are
in the order of millions of samples. To close this gap we propose a new video
mining pipeline which involves transferring captions from image captioning
datasets to video clips with no additional manual effort. Using this pipeline,
we create a new large-scale, weakly labelled audio-video captioning dataset
consisting of millions of paired clips and captions. We show that training a
multimodal transformer-based model on this data achieves competitive
performance on video retrieval and video captioning, matching or even
outperforming HowTo100M pretraining with 20x fewer clips. We also show that our
mined clips are suitable for text-audio pretraining, and achieve state of the
art results for the task of audio retrieval.
| [
{
"created": "Fri, 1 Apr 2022 19:48:18 GMT",
"version": "v1"
}
] | 2022-04-05 | [
[
"Nagrani",
"Arsha",
""
],
[
"Seo",
"Paul Hongsuck",
""
],
[
"Seybold",
"Bryan",
""
],
[
"Hauth",
"Anja",
""
],
[
"Manen",
"Santiago",
""
],
[
"Sun",
"Chen",
""
],
[
"Schmid",
"Cordelia",
""
]
] | A major challenge in text-video and text-audio retrieval is the lack of large-scale training data. This is unlike image-captioning, where datasets are in the order of millions of samples. To close this gap we propose a new video mining pipeline which involves transferring captions from image captioning datasets to video clips with no additional manual effort. Using this pipeline, we create a new large-scale, weakly labelled audio-video captioning dataset consisting of millions of paired clips and captions. We show that training a multimodal transformer-based model on this data achieves competitive performance on video retrieval and video captioning, matching or even outperforming HowTo100M pretraining with 20x fewer clips. We also show that our mined clips are suitable for text-audio pretraining, and achieve state of the art results for the task of audio retrieval. |
2009.07970 | Bardia Yousefi | Hossein Memarzadeh Sharifipour, Bardia Yousefi, Xavier P.V. Maldague | Skeletonization and Reconstruction based on Graph Morphological
Transformations | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multiscale shape skeletonization on pixel adjacency graphs is an advanced
intriguing research subject in the field of image processing, computer vision
and data mining. Previous works in this area have focused almost exclusively on
graph vertices. We proposed novel structure-based graph morphological
transformations defined on edges, as opposed to existing node-based
transformations, and used them for skeletonization and reconstruction of
infrared thermal images represented by graphs. The advantage of this method is
that many widely used path-based approaches become available within this
definition of morphological operations. For instance, we use distance maps and
the image foresting transform (IFT), two main path-based methods, for computing
the skeleton of an image. In addition, the open question posed by Maragos et
al. (2013) about the connectivity of graph skeletonization methods is discussed
and shown to be quite difficult to decide in the general case.
| [
{
"created": "Wed, 16 Sep 2020 22:58:06 GMT",
"version": "v1"
}
] | 2020-09-18 | [
[
"Sharifipour",
"Hossein Memarzadeh",
""
],
[
"Yousefi",
"Bardia",
""
],
[
"Maldague",
"Xavier P. V.",
""
]
] | Multiscale shape skeletonization on pixel adjacency graphs is an advanced intriguing research subject in the field of image processing, computer vision and data mining. Previous works in this area have focused almost exclusively on graph vertices. We proposed novel structure-based graph morphological transformations defined on edges, as opposed to existing node-based transformations, and used them for skeletonization and reconstruction of infrared thermal images represented by graphs. The advantage of this method is that many widely used path-based approaches become available within this definition of morphological operations. For instance, we use distance maps and the image foresting transform (IFT), two main path-based methods, for computing the skeleton of an image. In addition, the open question posed by Maragos et al. (2013) about the connectivity of graph skeletonization methods is discussed and shown to be quite difficult to decide in the general case. |
1912.09264 | Yang Li | Yang Li and Hongbo Li | Improved quantum algorithm for the random subset sum problem | arXiv admin note: text overlap with arXiv:1907.04295 by other authors | null | null | null | cs.DS cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Solving random subset sum instances plays an important role in constructing
cryptographic systems. For the random subset sum problem, in 2013 Bernstein et
al. proposed a quantum algorithm with heuristic time complexity
$\widetilde{O}(2^{0.241n})$, where the "$\widetilde{O}$" symbol is used to omit
poly($\log n$) factors. In 2018, Helm and May proposed another quantum
algorithm that reduces the heuristic time and memory complexity to
$\widetilde{O}(2^{0.226n})$. In this paper, a new quantum algorithm is
proposed, with heuristic time and memory complexity
$\widetilde{O}(2^{0.209n})$.
| [
{
"created": "Wed, 18 Dec 2019 05:51:03 GMT",
"version": "v1"
},
{
"created": "Thu, 13 Feb 2020 18:17:30 GMT",
"version": "v2"
}
] | 2020-02-14 | [
[
"Li",
"Yang",
""
],
[
"Li",
"Hongbo",
""
]
] | Solving random subset sum instances plays an important role in constructing cryptographic systems. For the random subset sum problem, in 2013 Bernstein et al. proposed a quantum algorithm with heuristic time complexity $\widetilde{O}(2^{0.241n})$, where the "$\widetilde{O}$" symbol is used to omit poly($\log n$) factors. In 2018, Helm and May proposed another quantum algorithm that reduces the heuristic time and memory complexity to $\widetilde{O}(2^{0.226n})$. In this paper, a new quantum algorithm is proposed, with heuristic time and memory complexity $\widetilde{O}(2^{0.209n})$. |
2009.13257 | Zeev Nutov | Zeev Nutov | Approximation algorithms for connectivity augmentation problems | null | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In Connectivity Augmentation problems we are given a graph $H=(V,E_H)$ and an
edge set $E$ on $V$, and seek a min-size edge set $J \subseteq E$ such that $H
\cup J$ has larger edge/node connectivity than $H$. In the Edge-Connectivity
Augmentation problem we need to increase the edge-connectivity by $1$. In the
Block-Tree Augmentation problem $H$ is connected and $H \cup J$ should be
$2$-connected. In Leaf-to-Leaf Connectivity Augmentation problems every edge in
$E$ connects minimal deficient sets. For this version we give a simple
combinatorial approximation algorithm with ratio $5/3$, improving the previous
$1.91$ approximation that applies for the general case. We also show by a
simple proof that if the Steiner Tree problem admits approximation ratio
$\alpha$ then the general version admits approximation ratio
$1+\ln(4-x)+\epsilon$, where $x$ is the solution to the equation
$1+\ln(4-x)=\alpha+(\alpha-1)x$. For the currently best value of $\alpha=\ln
4+\epsilon$ this gives ratio $1.942$. This is slightly worse than the best
ratio $1.91$, but has the advantage of using Steiner Tree approximation as a
"black box", giving ratio $< 1.9$ if ratio $\alpha \leq 1.35$ can be achieved.
In the Element Connectivity Augmentation problem we are given a graph
$G=(V,E)$, $S \subseteq V$, and connectivity requirements $\{r(u,v):u,v \in
S\}$. The goal is to find a min-size set $J$ of new edges on $S$ such that for
all $u,v \in S$ the graph $G \cup J$ contains $r(u,v)$ $uv$-paths such that no
two of them have an edge or a node in $V \setminus S$ in common. The problem is
NP-hard even when $\max_{u,v \in S} r(u,v)=2$. We obtain approximation ratio
$3/2$, improving the previous ratio $7/4$.
| [
{
"created": "Mon, 28 Sep 2020 12:27:05 GMT",
"version": "v1"
},
{
"created": "Fri, 13 Nov 2020 21:37:16 GMT",
"version": "v2"
}
] | 2020-11-17 | [
[
"Nutov",
"Zeev",
""
]
] | In Connectivity Augmentation problems we are given a graph $H=(V,E_H)$ and an edge set $E$ on $V$, and seek a min-size edge set $J \subseteq E$ such that $H \cup J$ has larger edge/node connectivity than $H$. In the Edge-Connectivity Augmentation problem we need to increase the edge-connectivity by $1$. In the Block-Tree Augmentation problem $H$ is connected and $H \cup S$ should be $2$-connected. In Leaf-to-Leaf Connectivity Augmentation problems every edge in $E$ connects minimal deficient sets. For this version we give a simple combinatorial approximation algorithm with ratio $5/3$, improving the previous $1.91$ approximation that applies for the general case. We also show by a simple proof that if the Steiner Tree problem admits approximation ratio $\alpha$ then the general version admits approximation ratio $1+\ln(4-x)+\epsilon$, where $x$ is the solution to the equation $1+\ln(4-x)=\alpha+(\alpha-1)x$. For the currently best value of $\alpha=\ln 4+\epsilon$ this gives ratio $1.942$. This is slightly worse than the best ratio $1.91$, but has the advantage of using Steiner Tree approximation as a "black box", giving ratio $< 1.9$ if ratio $\alpha \leq 1.35$ can be achieved. In the Element Connectivity Augmentation problem we are given a graph $G=(V,E)$, $S \subseteq V$, and connectivity requirements $\{r(u,v):u,v \in S\}$. The goal is to find a min-size set $J$ of new edges on $S$ such that for all $u,v \in S$ the graph $G \cup J$ contains $r(u,v)$ $uv$-paths such that no two of them have an edge or a node in $V \setminus S$ in common. The problem is NP-hard even when $\max_{u,v \in S} r(u,v)=2$. We obtain approximation ratio $3/2$, improving the previous ratio $7/4$. |
2104.12800 | Stanislav \v{Z}ivn\'y | Alex Brandts and Stanislav \v{Z}ivn\'y | Beyond PCSP (1-in-3,NAE) | Full version of an ICALP 2021 paper | Information and Computation 289, Part A, 104954 (2022) | 10.1016/j.ic.2022.104954 | null | cs.CC cs.DM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The promise constraint satisfaction problem (PCSP) is a recently introduced
vast generalisation of the constraint satisfaction problem (CSP) that captures
approximability of satisfiable instances. A PCSP instance comes with two forms
of each constraint: a strict one and a weak one. Given the promise that a
solution exists using the strict constraints, the task is to find a solution
using the weak constraints. While there are by now several dichotomy results
for fragments of PCSPs, they all consider (in some way) symmetric PCSPs.
1-in-3-SAT and Not-All-Equal-3-SAT are classic examples of Boolean symmetric
(non-promise) CSPs. While both problems are NP-hard, Brakensiek and Guruswami
showed [SICOMP'21] that given a satisfiable instance of 1-in-3-SAT one can find
a solution to the corresponding instance of (weaker) Not-All-Equal-3-SAT. In
other words, the PCSP template (1-in-3,NAE) is tractable.
We focus on non-symmetric PCSPs. In particular, we study PCSP templates
obtained from the Boolean template (t-in-k,NAE) by either adding tuples to
t-in-k or removing tuples from NAE. For the former, we classify all templates
as either tractable or not solvable by the currently strongest known algorithm
for PCSPs, the combined basic LP and affine IP relaxation of Brakensiek,
Guruswami, Wrochna, and \v{Z}ivn\'y [SICOMP'20]. For the latter, we classify
all templates as either tractable or NP-hard.
| [
{
"created": "Mon, 26 Apr 2021 18:00:41 GMT",
"version": "v1"
},
{
"created": "Sat, 12 Feb 2022 13:27:46 GMT",
"version": "v2"
},
{
"created": "Sun, 28 Aug 2022 19:14:35 GMT",
"version": "v3"
}
] | 2023-01-31 | [
[
"Brandts",
"Alex",
""
],
[
"Živný",
"Stanislav",
""
]
] | The promise constraint satisfaction problem (PCSP) is a recently introduced vast generalisation of the constraint satisfaction problem (CSP) that captures approximability of satisfiable instances. A PCSP instance comes with two forms of each constraint: a strict one and a weak one. Given the promise that a solution exists using the strict constraints, the task is to find a solution using the weak constraints. While there are by now several dichotomy results for fragments of PCSPs, they all consider (in some way) symmetric PCSPs. 1-in-3-SAT and Not-All-Equal-3-SAT are classic examples of Boolean symmetric (non-promise) CSPs. While both problems are NP-hard, Brakensiek and Guruswami showed [SICOMP'21] that given a satisfiable instance of 1-in-3-SAT one can find a solution to the corresponding instance of (weaker) Not-All-Equal-3-SAT. In other words, the PCSP template (1-in-3,NAE) is tractable. We focus on non-symmetric PCSPs. In particular, we study PCSP templates obtained from the Boolean template (t-in-k,NAE) by either adding tuples to t-in-k or removing tuples from NAE. For the former, we classify all templates as either tractable or not solvable by the currently strongest known algorithm for PCSPs, the combined basic LP and affine IP relaxation of Brakensiek, Guruswami, Wrochna, and \v{Z}ivn\'y [SICOMP'20]. For the latter, we classify all templates as either tractable or NP-hard. |
0704.0831 | Brooke Shrader | Brooke Shrader and Anthony Ephremides | On packet lengths and overhead for random linear coding over the erasure
channel | 5 pages, 5 figures, submitted to the 2007 International Wireless
Communications and Mobile Computing Conference | null | null | null | cs.IT math.IT | null | We assess the practicality of random network coding by illuminating the issue
of overhead and considering it in conjunction with increasingly long packets
sent over the erasure channel. We show that the transmission of increasingly
long packets, consisting either of an increasing number of symbols per
packet or an increasing symbol alphabet size, results in a data rate
approaching zero over the erasure channel. This result is due to an erasure
probability that increases with packet length. Numerical results for a
particular modulation scheme demonstrate a data rate of approximately zero for
a large, but finite-length packet. Our results suggest a reduction in the
performance gains offered by random network coding.
| [
{
"created": "Fri, 6 Apr 2007 02:25:40 GMT",
"version": "v1"
}
] | 2007-07-13 | [
[
"Shrader",
"Brooke",
""
],
[
"Ephremides",
"Anthony",
""
]
] | We assess the practicality of random network coding by illuminating the issue of overhead and considering it in conjunction with increasingly long packets sent over the erasure channel. We show that the transmission of increasingly long packets, consisting either of an increasing number of symbols per packet or an increasing symbol alphabet size, results in a data rate approaching zero over the erasure channel. This result is due to an erasure probability that increases with packet length. Numerical results for a particular modulation scheme demonstrate a data rate of approximately zero for a large, but finite-length packet. Our results suggest a reduction in the performance gains offered by random network coding. |
2312.15863 | Hangyu Mao | Hangyu Mao, Rui Zhao, Ziyue Li, Zhiwei Xu, Hao Chen, Yiqun Chen, Bin
Zhang, Zhen Xiao, Junge Zhang, and Jiangjin Yin | PDiT: Interleaving Perception and Decision-making Transformers for Deep
Reinforcement Learning | Proc. of the 23rd International Conference on Autonomous Agents and
Multiagent Systems (AAMAS 2024, full paper with oral presentation). Cover our
preliminary study: arXiv:2212.14538 | null | null | null | cs.LG cs.AI cs.RO cs.SY eess.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Designing better deep networks and better reinforcement learning (RL)
algorithms are both important for deep RL. This work studies the former.
Specifically, the Perception and Decision-making Interleaving Transformer
(PDiT) network is proposed, which cascades two Transformers in a very natural
way: the perceiving one focuses on \emph{the environmental perception} by
processing the observation at the patch level, whereas the deciding one pays
attention to \emph{the decision-making} by conditioning on the history of the
desired returns, the perceiver's outputs, and the actions. Such a network
design is generally applicable to a lot of deep RL settings, e.g., both the
online and offline RL algorithms under environments with either image
observations, proprioception observations, or hybrid image-language
observations. Extensive experiments show that PDiT can not only achieve
superior performance to strong baselines in different settings but also
extract explainable feature representations. Our code is available at
\url{https://github.com/maohangyu/PDiT}.
| [
{
"created": "Tue, 26 Dec 2023 03:07:10 GMT",
"version": "v1"
}
] | 2023-12-27 | [
[
"Mao",
"Hangyu",
""
],
[
"Zhao",
"Rui",
""
],
[
"Li",
"Ziyue",
""
],
[
"Xu",
"Zhiwei",
""
],
[
"Chen",
"Hao",
""
],
[
"Chen",
"Yiqun",
""
],
[
"Zhang",
"Bin",
""
],
[
"Xiao",
"Zhen",
""
],
[
"Zhang",
"Junge",
""
],
[
"Yin",
"Jiangjin",
""
]
] | Designing better deep networks and better reinforcement learning (RL) algorithms are both important for deep RL. This work studies the former. Specifically, the Perception and Decision-making Interleaving Transformer (PDiT) network is proposed, which cascades two Transformers in a very natural way: the perceiving one focuses on \emph{the environmental perception} by processing the observation at the patch level, whereas the deciding one pays attention to \emph{the decision-making} by conditioning on the history of the desired returns, the perceiver's outputs, and the actions. Such a network design is generally applicable to a lot of deep RL settings, e.g., both the online and offline RL algorithms under environments with either image observations, proprioception observations, or hybrid image-language observations. Extensive experiments show that PDiT can not only achieve superior performance to strong baselines in different settings but also extract explainable feature representations. Our code is available at \url{https://github.com/maohangyu/PDiT}. |
2105.03897 | Savas Ozkan | Savas Ozkan, Gozde Bozdagi Akar | Binarized Weight Error Networks With a Transition Regularization Term | Submitted to ICIP 2021 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | This paper proposes a novel binarized weight network (BT) for a
resource-efficient neural structure. The proposed model estimates a binary
representation of weights by taking into account the approximation error with
an additional term. This model increases representation capacity and stability,
particularly for shallow networks, while the computation load is theoretically
reduced. In addition, a novel regularization term is introduced that is
suitable for all threshold-based binary precision networks. This term penalizes
the trainable parameters that are far from the thresholds at which binary
transitions occur. This step promotes a swift modification for binary-precision
responses at train time. The experimental results are carried out for two sets
of tasks: visual classification and visual inverse problems. Benchmarks for
Cifar10, SVHN, Fashion, ImageNet2012, Set5, Set14, Urban and BSD100 datasets
show that our method outperforms all counterparts with binary precision.
| [
{
"created": "Sun, 9 May 2021 10:11:26 GMT",
"version": "v1"
}
] | 2021-05-11 | [
[
"Ozkan",
"Savas",
""
],
[
"Akar",
"Gozde Bozdagi",
""
]
] | This paper proposes a novel binarized weight network (BT) for a resource-efficient neural structure. The proposed model estimates a binary representation of weights by taking into account the approximation error with an additional term. This model increases representation capacity and stability, particularly for shallow networks, while the computation load is theoretically reduced. In addition, a novel regularization term is introduced that is suitable for all threshold-based binary precision networks. This term penalizes the trainable parameters that are far from the thresholds at which binary transitions occur. This step promotes a swift modification for binary-precision responses at train time. The experimental results are carried out for two sets of tasks: visual classification and visual inverse problems. Benchmarks for Cifar10, SVHN, Fashion, ImageNet2012, Set5, Set14, Urban and BSD100 datasets show that our method outperforms all counterparts with binary precision. |
1907.07972 | Zulfat Miftahutdinov | Zulfat Miftahutdinov and Elena Tutubalina | Deep Neural Models for Medical Concept Normalization in User-Generated
Texts | This is preprint of the paper "Deep Neural Models for Medical Concept
Normalization in User-Generated Texts" to be published at ACL 2019 - 57th
Annual Meeting of the Association for Computational Linguistics, Proceedings
of the Student Research Workshop | ACL SRW 2019 | 10.18653/v1/P19-2055 | null | cs.CL cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we consider the medical concept normalization problem, i.e.,
the problem of mapping a health-related entity mention in a free-form text to a
concept in a controlled vocabulary, usually to the standard thesaurus in the
Unified Medical Language System (UMLS). This is a challenging task since
medical terminology is very different when coming from health care
professionals or from the general public in the form of social media texts. We
approach it as a sequence learning problem with powerful neural networks such
as recurrent neural networks and contextualized word representation models
trained to obtain semantic representations of social media expressions. Our
experimental evaluation over three different benchmarks shows that neural
architectures leverage the semantic meaning of the entity mention and
significantly outperform existing state-of-the-art models.
| [
{
"created": "Thu, 18 Jul 2019 10:36:03 GMT",
"version": "v1"
}
] | 2023-11-21 | [
[
"Miftahutdinov",
"Zulfat",
""
],
[
"Tutubalina",
"Elena",
""
]
] | In this work, we consider the medical concept normalization problem, i.e., the problem of mapping a health-related entity mention in a free-form text to a concept in a controlled vocabulary, usually to the standard thesaurus in the Unified Medical Language System (UMLS). This is a challenging task since medical terminology is very different when coming from health care professionals or from the general public in the form of social media texts. We approach it as a sequence learning problem with powerful neural networks such as recurrent neural networks and contextualized word representation models trained to obtain semantic representations of social media expressions. Our experimental evaluation over three different benchmarks shows that neural architectures leverage the semantic meaning of the entity mention and significantly outperform existing state-of-the-art models. |
1307.1461 | Sung Ho Chae | Sung Ho Chae, Changho Suh, and Sae-Young Chung | Degrees of Freedom of the Rank-deficient Interference Channel with
Feedback | The material in this paper will be presented in part at the IEEE
International Symposium on Information Theory (ISIT) 2013 and was in part
submitted to the Allerton Conference on Communication, Control, and Computing
2013 | null | 10.1109/TIT.2015.2428233 | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate the total degrees of freedom (DoF) of the K-user
rank-deficient interference channel with feedback. For the two-user case, we
characterize the total DoF by developing an achievable scheme and deriving a
matching upper bound. For the three-user case, we develop a new achievable
scheme which employs interference alignment to efficiently utilize the
dimension of the received signal space. In addition, we derive an upper bound
for the general K-user case and show the tightness of the bound when the number
of antennas at each node is sufficiently large. As a consequence of these
results, we show that feedback can increase the DoF when the number of antennas
at each node is large enough as compared to the ranks of channel matrices. This
finding is in contrast to the full-rank interference channel where feedback
provides no DoF gain. The gain comes from using feedback to provide alternative
signal paths, thereby effectively increasing the ranks of desired channel
matrices.
| [
{
"created": "Thu, 4 Jul 2013 19:49:24 GMT",
"version": "v1"
},
{
"created": "Fri, 5 Jul 2013 05:50:15 GMT",
"version": "v2"
}
] | 2016-11-17 | [
[
"Chae",
"Sung Ho",
""
],
[
"Suh",
"Changho",
""
],
[
"Chung",
"Sae-Young",
""
]
] | We investigate the total degrees of freedom (DoF) of the K-user rank-deficient interference channel with feedback. For the two-user case, we characterize the total DoF by developing an achievable scheme and deriving a matching upper bound. For the three-user case, we develop a new achievable scheme which employs interference alignment to efficiently utilize the dimension of the received signal space. In addition, we derive an upper bound for the general K-user case and show the tightness of the bound when the number of antennas at each node is sufficiently large. As a consequence of these results, we show that feedback can increase the DoF when the number of antennas at each node is large enough as compared to the ranks of channel matrices. This finding is in contrast to the full-rank interference channel where feedback provides no DoF gain. The gain comes from using feedback to provide alternative signal paths, thereby effectively increasing the ranks of desired channel matrices. |
2312.04604 | Kyeongryeol Go | Kyeongryeol Go, Kye-Hyeon Kim | Transferable Candidate Proposal with Bounded Uncertainty | Accepted in NeurIPS 2023 Workshop on Adaptive Experimental Design and
Active Learning in the Real World | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | From an empirical perspective, the subset chosen through active learning
cannot guarantee an advantage over random sampling when transferred to another
model. While it underscores the significance of verifying transferability,
experimental design from previous works often neglected that the
informativeness of a data subset can change over model configurations. To
tackle this issue, we introduce a new experimental design, coined as Candidate
Proposal, to find transferable data candidates from which active learning
algorithms choose the informative subset. Correspondingly, a data selection
algorithm is proposed, namely Transferable candidate proposal with Bounded
Uncertainty (TBU), which constrains the pool of transferable data candidates by
filtering out the presumably redundant data points based on uncertainty
estimation. We verified the validity of TBU in image classification benchmarks,
including CIFAR-10/100 and SVHN. When transferred to different model
configurations, TBU consistently improves performance in existing active
learning algorithms. Our code is available at
https://github.com/gokyeongryeol/TBU.
| [
{
"created": "Thu, 7 Dec 2023 08:47:28 GMT",
"version": "v1"
}
] | 2023-12-11 | [
[
"Go",
"Kyeongryeol",
""
],
[
"Kim",
"Kye-Hyeon",
""
]
] | From an empirical perspective, the subset chosen through active learning cannot guarantee an advantage over random sampling when transferred to another model. While it underscores the significance of verifying transferability, experimental design from previous works often neglected that the informativeness of a data subset can change over model configurations. To tackle this issue, we introduce a new experimental design, coined as Candidate Proposal, to find transferable data candidates from which active learning algorithms choose the informative subset. Correspondingly, a data selection algorithm is proposed, namely Transferable candidate proposal with Bounded Uncertainty (TBU), which constrains the pool of transferable data candidates by filtering out the presumably redundant data points based on uncertainty estimation. We verified the validity of TBU in image classification benchmarks, including CIFAR-10/100 and SVHN. When transferred to different model configurations, TBU consistently improves performance in existing active learning algorithms. Our code is available at https://github.com/gokyeongryeol/TBU. |
2312.02859 | Alexandra Zytek | Alexandra Zytek, Wei-En Wang, Sofia Koukoura, and Kalyan
Veeramachaneni | Lessons from Usable ML Deployments and Application to Wind Turbine
Monitoring | Presented in XAI in Action: Past, Present, and Future Applications @
NeurIPS 2023. 8 pages, 3 figures | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Through past experiences deploying what we call usable ML (one step beyond
explainable ML, including both explanations and other augmenting information)
to real-world domains, we have learned three key lessons. First, many
organizations are beginning to hire people who we call ``bridges'' because they
bridge the gap between ML developers and domain experts, and these people fill
a valuable role in developing usable ML applications. Second, a configurable
system that enables easily iterating on usable ML interfaces during
collaborations with bridges is key. Finally, there is a need for continuous,
in-deployment evaluations to quantify the real-world impact of usable ML.
Throughout this paper, we apply these lessons to the task of wind turbine
monitoring, an essential task in the renewable energy domain. Turbine engineers
and data analysts must decide whether to perform costly in-person
investigations on turbines to prevent potential cases of brakepad failure, and
well-tuned usable ML interfaces can aid with this decision-making process.
Through the applications of our lessons to this task, we hope to demonstrate
the potential real-world impact of usable ML in the renewable energy domain.
| [
{
"created": "Tue, 5 Dec 2023 16:13:50 GMT",
"version": "v1"
}
] | 2023-12-06 | [
[
"Zytek",
"Alexandra",
""
],
[
"Wang",
"Wei-En",
""
],
[
"Koukoura",
"Sofia",
""
],
[
"Veeramachaneni",
"Kalyan",
""
]
] | Through past experiences deploying what we call usable ML (one step beyond explainable ML, including both explanations and other augmenting information) to real-world domains, we have learned three key lessons. First, many organizations are beginning to hire people who we call ``bridges'' because they bridge the gap between ML developers and domain experts, and these people fill a valuable role in developing usable ML applications. Second, a configurable system that enables easily iterating on usable ML interfaces during collaborations with bridges is key. Finally, there is a need for continuous, in-deployment evaluations to quantify the real-world impact of usable ML. Throughout this paper, we apply these lessons to the task of wind turbine monitoring, an essential task in the renewable energy domain. Turbine engineers and data analysts must decide whether to perform costly in-person investigations on turbines to prevent potential cases of brakepad failure, and well-tuned usable ML interfaces can aid with this decision-making process. Through the applications of our lessons to this task, we hope to demonstrate the potential real-world impact of usable ML in the renewable energy domain. |
2112.01085 | Ziao Yang | Ziao Yang, Xiangrui Yang and Qifeng Lin | PTCT: Patches with 3D-Temporal Convolutional Transformer Network for
Precipitation Nowcasting | 9 pages, 3 figures | null | null | null | cs.CV cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Precipitation nowcasting is to predict the future rainfall intensity over a
short period of time, which mainly relies on the prediction of radar echo
sequences. Though convolutional neural network (CNN) and recurrent neural
network (RNN) are widely used to generate radar echo frames, they suffer from
inductive bias (i.e., translation invariance and locality) and seriality,
respectively. Recently, Transformer-based methods also gain much attention due
to the great potential of Transformer structure, whereas short-term
dependencies and autoregressive characteristic are ignored. In this paper, we
propose a variant of Transformer named patches with 3D-temporal convolutional
Transformer network (PTCT), where original frames are split into multiple
patches to remove the constraint of inductive bias and 3D-temporal convolution
is employed to capture short-term dependencies efficiently. After training, the
inference of PTCT is performed in an autoregressive way to ensure the quality
of generated radar echo frames. To validate our algorithm, we conduct
experiments on two radar echo datasets: Radar Echo Guangzhou and HKO-7. The
experimental results show that PTCT achieves state-of-the-art (SOTA)
performance compared with existing methods.
| [
{
"created": "Thu, 2 Dec 2021 10:05:01 GMT",
"version": "v1"
},
{
"created": "Fri, 3 Jun 2022 04:50:30 GMT",
"version": "v2"
}
] | 2022-06-06 | [
[
"Yang",
"Ziao",
""
],
[
"Yang",
"Xiangrui",
""
],
[
"Lin",
"Qifeng",
""
]
] | Precipitation nowcasting is to predict the future rainfall intensity over a short period of time, which mainly relies on the prediction of radar echo sequences. Though convolutional neural network (CNN) and recurrent neural network (RNN) are widely used to generate radar echo frames, they suffer from inductive bias (i.e., translation invariance and locality) and seriality, respectively. Recently, Transformer-based methods also gain much attention due to the great potential of Transformer structure, whereas short-term dependencies and autoregressive characteristic are ignored. In this paper, we propose a variant of Transformer named patches with 3D-temporal convolutional Transformer network (PTCT), where original frames are split into multiple patches to remove the constraint of inductive bias and 3D-temporal convolution is employed to capture short-term dependencies efficiently. After training, the inference of PTCT is performed in an autoregressive way to ensure the quality of generated radar echo frames. To validate our algorithm, we conduct experiments on two radar echo datasets: Radar Echo Guangzhou and HKO-7. The experimental results show that PTCT achieves state-of-the-art (SOTA) performance compared with existing methods. |
2310.08558 | Max Sobol Mark | Max Sobol Mark, Archit Sharma, Fahim Tajwar, Rafael Rafailov, Sergey
Levine, Chelsea Finn | Offline Retraining for Online RL: Decoupled Policy Learning to Mitigate
Exploration Bias | null | null | null | null | cs.LG cs.AI cs.RO | http://creativecommons.org/licenses/by/4.0/ | It is desirable for policies to optimistically explore new states and
behaviors during online reinforcement learning (RL) or fine-tuning, especially
when prior offline data does not provide enough state coverage. However,
exploration bonuses can bias the learned policy, and our experiments find that
naive, yet standard use of such bonuses can fail to recover a performant
policy. Concurrently, pessimistic training in offline RL has enabled recovery
of performant policies from static datasets. Can we leverage offline RL to
recover better policies from online interaction? We make a simple observation
that a policy can be trained from scratch on all interaction data with
pessimistic objectives, thereby decoupling the policies used for data
collection and for evaluation. Specifically, we propose offline retraining, a
policy extraction step at the end of online fine-tuning in our
Offline-to-Online-to-Offline (OOO) framework for reinforcement learning (RL).
An optimistic (exploration) policy is used to interact with the environment,
and a separate pessimistic (exploitation) policy is trained on all the observed
data for evaluation. Such decoupling can reduce any bias from online
interaction (intrinsic rewards, primacy bias) in the evaluation policy, and can
allow more exploratory behaviors during online interaction which in turn can
generate better data for exploitation. OOO is complementary to several
offline-to-online RL and online RL methods, and improves their average
performance by 14% to 26% in our fine-tuning experiments, achieves
state-of-the-art performance on several environments in the D4RL benchmarks,
and improves online RL performance by 165% on two OpenAI gym environments.
Further, OOO can enable fine-tuning from incomplete offline datasets where
prior methods can fail to recover a performant policy. Implementation:
https://github.com/MaxSobolMark/OOO
| [
{
"created": "Thu, 12 Oct 2023 17:50:09 GMT",
"version": "v1"
}
] | 2023-10-13 | [
[
"Mark",
"Max Sobol",
""
],
[
"Sharma",
"Archit",
""
],
[
"Tajwar",
"Fahim",
""
],
[
"Rafailov",
"Rafael",
""
],
[
"Levine",
"Sergey",
""
],
[
"Finn",
"Chelsea",
""
]
] | It is desirable for policies to optimistically explore new states and behaviors during online reinforcement learning (RL) or fine-tuning, especially when prior offline data does not provide enough state coverage. However, exploration bonuses can bias the learned policy, and our experiments find that naive, yet standard use of such bonuses can fail to recover a performant policy. Concurrently, pessimistic training in offline RL has enabled recovery of performant policies from static datasets. Can we leverage offline RL to recover better policies from online interaction? We make a simple observation that a policy can be trained from scratch on all interaction data with pessimistic objectives, thereby decoupling the policies used for data collection and for evaluation. Specifically, we propose offline retraining, a policy extraction step at the end of online fine-tuning in our Offline-to-Online-to-Offline (OOO) framework for reinforcement learning (RL). An optimistic (exploration) policy is used to interact with the environment, and a separate pessimistic (exploitation) policy is trained on all the observed data for evaluation. Such decoupling can reduce any bias from online interaction (intrinsic rewards, primacy bias) in the evaluation policy, and can allow more exploratory behaviors during online interaction which in turn can generate better data for exploitation. OOO is complementary to several offline-to-online RL and online RL methods, and improves their average performance by 14% to 26% in our fine-tuning experiments, achieves state-of-the-art performance on several environments in the D4RL benchmarks, and improves online RL performance by 165% on two OpenAI gym environments. Further, OOO can enable fine-tuning from incomplete offline datasets where prior methods can fail to recover a performant policy. Implementation: https://github.com/MaxSobolMark/OOO |
2108.10967 | Utkarsh Mall | Utkarsh Mall, Bharath Hariharan, and Kavita Bala | Field-Guide-Inspired Zero-Shot Learning | Accepted to ICCV 2021 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Modern recognition systems require large amounts of supervision to achieve
accuracy. Adapting to new domains requires significant data from experts, which
is onerous and can become too expensive. Zero-shot learning requires an
annotated set of attributes for a novel category. Annotating the full set of
attributes for a novel category proves to be a tedious and expensive task in
deployment. This is especially the case when the recognition domain is an
expert domain. We introduce a new field-guide-inspired approach to zero-shot
annotation where the learner model interactively asks for the most useful
attributes that define a class. We evaluate our method on classification
benchmarks with attribute annotations like CUB, SUN, and AWA2 and show that our
model achieves the performance of a model with full annotations at the cost of
significantly fewer annotations. Since the time of experts is
precious, decreasing annotation cost can be very valuable for real-world
deployment.
| [
{
"created": "Tue, 24 Aug 2021 21:36:05 GMT",
"version": "v1"
}
] | 2021-08-26 | [
[
"Mall",
"Utkarsh",
""
],
[
"Hariharan",
"Bharath",
""
],
[
"Bala",
"Kavita",
""
]
] | Modern recognition systems require large amounts of supervision to achieve accuracy. Adapting to new domains requires significant data from experts, which is onerous and can become too expensive. Zero-shot learning requires an annotated set of attributes for a novel category. Annotating the full set of attributes for a novel category proves to be a tedious and expensive task in deployment. This is especially the case when the recognition domain is an expert domain. We introduce a new field-guide-inspired approach to zero-shot annotation where the learner model interactively asks for the most useful attributes that define a class. We evaluate our method on classification benchmarks with attribute annotations like CUB, SUN, and AWA2 and show that our model achieves the performance of a model with full annotations at the cost of significantly fewer annotations. Since the time of experts is precious, decreasing annotation cost can be very valuable for real-world deployment. |
1903.05266 | Tu Le | John A. Stankovic, Tu Le, Abdeltawab Hendawi, Yuan Tian | Hardware/Software Security Patches for Internet of Trillions of Things | null | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the rapid development of the Internet of Things, there are many
interacting devices and applications. One crucial challenge is how to provide
security. Our proposal for a new direction is to create "smart buttons" and
collections of them called "smart blankets" as hardware/software security
patches rather than software-only patches.
| [
{
"created": "Tue, 12 Mar 2019 23:46:30 GMT",
"version": "v1"
}
] | 2019-03-14 | [
[
"Stankovic",
"John A.",
""
],
[
"Le",
"Tu",
""
],
[
"Hendawi",
"Abdeltawab",
""
],
[
"Tian",
"Yuan",
""
]
] | With the rapid development of the Internet of Things, there are many interacting devices and applications. One crucial challenge is how to provide security. Our proposal for a new direction is to create "smart buttons" and collections of them called "smart blankets" as hardware/software security patches rather than software-only patches. |
2311.09178 | Marwah Sulaiman | Marwah Sulaiman, Zahraa Shehabeldin, Israa Fahmy, Mohammed Barakat,
Mohammed El-Naggar, Dareen Hussein, Moustafa Youssef, Hesham M. Eraqi | RBPGAN: Recurrent Back-Projection GAN for Video Super Resolution | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, video super resolution (VSR) has become a very impactful task in
the area of Computer Vision due to its various applications. In this paper, we
propose Recurrent Back-Projection Generative Adversarial Network (RBPGAN) for
VSR in an attempt to generate temporally coherent solutions while preserving
spatial details. RBPGAN integrates two state-of-the-art models to get the best
in both worlds without compromising the accuracy of produced video. The
generator of the model is inspired by RBPN system, while the discriminator is
inspired by TecoGAN. We also utilize Ping-Pong loss to increase temporal
consistency over time. Our contribution together results in a model that
outperforms earlier work in terms of temporally consistent details, as we will
demonstrate qualitatively and quantitatively using different datasets.
| [
{
"created": "Wed, 15 Nov 2023 18:15:30 GMT",
"version": "v1"
},
{
"created": "Fri, 17 Nov 2023 14:02:35 GMT",
"version": "v2"
},
{
"created": "Fri, 24 Nov 2023 11:47:25 GMT",
"version": "v3"
},
{
"created": "Sun, 10 Dec 2023 23:30:20 GMT",
"version": "v4"
}
] | 2023-12-12 | [
[
"Sulaiman",
"Marwah",
""
],
[
"Shehabeldin",
"Zahraa",
""
],
[
"Fahmy",
"Israa",
""
],
[
"Barakat",
"Mohammed",
""
],
[
"El-Naggar",
"Mohammed",
""
],
[
"Hussein",
"Dareen",
""
],
[
"Youssef",
"Moustafa",
""
],
[
"Eraqi",
"Hesham M.",
""
]
] | Recently, video super resolution (VSR) has become a very impactful task in the area of Computer Vision due to its various applications. In this paper, we propose Recurrent Back-Projection Generative Adversarial Network (RBPGAN) for VSR in an attempt to generate temporally coherent solutions while preserving spatial details. RBPGAN integrates two state-of-the-art models to get the best in both worlds without compromising the accuracy of produced video. The generator of the model is inspired by RBPN system, while the discriminator is inspired by TecoGAN. We also utilize Ping-Pong loss to increase temporal consistency over time. Our contribution together results in a model that outperforms earlier work in terms of temporally consistent details, as we will demonstrate qualitatively and quantitatively using different datasets. |
1311.5108 | Gildas Morvan | Jean-Baptiste Soyez, Gildas Morvan, Daniel Dupont, Rochdi Merzouki | A Methodology to Engineer and Validate Dynamic Multi-level Multi-agent
Based Simulations | Presented at 3th International Workshop on Multi-Agent Based
Simulation, Valencia, Spain, 5th June 2012 | Multi-Agent-Based Simulation XIII LNCS 7838 p 130-142, 2013 | 10.1007/978-3-642-38859-0_10 | null | cs.MA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This article proposes a methodology to model and simulate complex systems,
based on IRM4MLS, a generic agent-based meta-model able to deal with
multi-level systems. This methodology permits the engineering of dynamic
multi-level agent-based models, to represent complex systems over several
scales and domains of interest. Its goal is to simulate a phenomenon using
dynamically the lightest representation to save computer resources without loss
of information. This methodology is based on two mechanisms: (1) the activation
or deactivation of agents representing different domain parts of the same
phenomenon and (2) the aggregation or disaggregation of agents representing the
same phenomenon at different scales.
| [
{
"created": "Wed, 20 Nov 2013 15:44:26 GMT",
"version": "v1"
}
] | 2013-11-21 | [
[
"Soyez",
"Jean-Baptiste",
""
],
[
"Morvan",
"Gildas",
""
],
[
"Dupont",
"Daniel",
""
],
[
"Merzouki",
"Rochdi",
""
]
] | This article proposes a methodology to model and simulate complex systems, based on IRM4MLS, a generic agent-based meta-model able to deal with multi-level systems. This methodology permits the engineering of dynamic multi-level agent-based models, to represent complex systems over several scales and domains of interest. Its goal is to simulate a phenomenon using dynamically the lightest representation to save computer resources without loss of information. This methodology is based on two mechanisms: (1) the activation or deactivation of agents representing different domain parts of the same phenomenon and (2) the aggregation or disaggregation of agents representing the same phenomenon at different scales. |
2407.11038 | Dianhui Wang | Dianhui Wang and Gang Dang | Fuzzy Recurrent Stochastic Configuration Networks for Industrial Data
Analytics | null | null | null | null | cs.LG cs.AI cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a novel neuro-fuzzy model, termed fuzzy recurrent
stochastic configuration networks (F-RSCNs), for industrial data analytics.
Unlike the original recurrent stochastic configuration network (RSCN), the
proposed F-RSCN is constructed by multiple sub-reservoirs, and each
sub-reservoir is associated with a Takagi-Sugeno-Kang (TSK) fuzzy rule. Through
this hybrid framework, first, the interpretability of the model is enhanced by
incorporating fuzzy reasoning to embed the prior knowledge into the network.
Then, the parameters of the neuro-fuzzy model are determined by the recurrent
stochastic configuration (RSC) algorithm. This scheme not only ensures the
universal approximation property and fast learning speed of the built model but
also overcomes uncertain problems, such as unknown dynamic orders, arbitrary
structure determination, and the sensitivity of learning parameters in
modelling nonlinear dynamics. Finally, an online update of the output weights
is performed using the projection algorithm, and the convergence analysis of
the learning parameters is given. By integrating TSK fuzzy inference systems
into RSCNs, F-RSCNs have strong fuzzy inference capability and can achieve
sound performance for both learning and generalization. Comprehensive
experiments show that the proposed F-RSCNs outperform other classical
neuro-fuzzy and non-fuzzy models, demonstrating great potential for modelling
complex industrial systems.
| [
{
"created": "Sat, 6 Jul 2024 01:40:31 GMT",
"version": "v1"
},
{
"created": "Tue, 13 Aug 2024 00:55:09 GMT",
"version": "v2"
}
] | 2024-08-14 | [
[
"Wang",
"Dianhui",
""
],
[
"Dang",
"Gang",
""
]
] | This paper presents a novel neuro-fuzzy model, termed fuzzy recurrent stochastic configuration networks (F-RSCNs), for industrial data analytics. Unlike the original recurrent stochastic configuration network (RSCN), the proposed F-RSCN is constructed by multiple sub-reservoirs, and each sub-reservoir is associated with a Takagi-Sugeno-Kang (TSK) fuzzy rule. Through this hybrid framework, first, the interpretability of the model is enhanced by incorporating fuzzy reasoning to embed the prior knowledge into the network. Then, the parameters of the neuro-fuzzy model are determined by the recurrent stochastic configuration (RSC) algorithm. This scheme not only ensures the universal approximation property and fast learning speed of the built model but also overcomes uncertain problems, such as unknown dynamic orders, arbitrary structure determination, and the sensitivity of learning parameters in modelling nonlinear dynamics. Finally, an online update of the output weights is performed using the projection algorithm, and the convergence analysis of the learning parameters is given. By integrating TSK fuzzy inference systems into RSCNs, F-RSCNs have strong fuzzy inference capability and can achieve sound performance for both learning and generalization. Comprehensive experiments show that the proposed F-RSCNs outperform other classical neuro-fuzzy and non-fuzzy models, demonstrating great potential for modelling complex industrial systems. |
0904.2375 | Navin Kashyap | Akiko Manada and Navin Kashyap | The Zeta Function of a Periodic-Finite-Type Shift | To appear in Proceedings of the 2009 IEEE International Symposium on
Information Theory (ISIT'09); 5 pages | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The class of periodic-finite-type shifts (PFT's) is a class of sofic shifts
that strictly includes the class of shifts of finite type (SFT's), and the zeta
function of a PFT is a generating function for the number of periodic sequences
in the shift. In this paper, we derive a useful formula for the zeta function
of a PFT. This formula allows the zeta function of a PFT to be computed more
efficiently than the specialization of a formula known for a generic sofic
shift.
| [
{
"created": "Wed, 15 Apr 2009 18:56:15 GMT",
"version": "v1"
}
] | 2009-04-16 | [
[
"Manada",
"Akiko",
""
],
[
"Kashyap",
"Navin",
""
]
] | The class of periodic-finite-type shifts (PFT's) is a class of sofic shifts that strictly includes the class of shifts of finite type (SFT's), and the zeta function of a PFT is a generating function for the number of periodic sequences in the shift. In this paper, we derive a useful formula for the zeta function of a PFT. This formula allows the zeta function of a PFT to be computed more efficiently than the specialization of a formula known for a generic sofic shift. |
2207.04174 | Wes Robbins | Wes Robbins, Zanyar Zohourianshahzadi, and Jugal Kalita | Towards Multimodal Vision-Language Models Generating Non-Generic Text | null | 2021 International Conference on Natural Language Processing | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vision-language models can assess visual context in an image and generate
descriptive text. While the generated text may be accurate and syntactically
correct, it is often overly general. To address this, recent work has used
optical character recognition to supplement visual information with text
extracted from an image. In this work, we contend that vision-language models
can benefit from additional information that can be extracted from an image,
but are not used by current models. We modify previous multimodal frameworks to
accept relevant information from any number of auxiliary classifiers. In
particular, we focus on person names as an additional set of tokens and create
a novel image-caption dataset to facilitate captioning with person names. The
dataset, Politicians and Athletes in Captions (PAC), consists of captioned
images of well-known people in context. By fine-tuning pretrained models with
this dataset, we demonstrate a model that can naturally integrate facial
recognition tokens into generated text by training on limited data. For the PAC
dataset, we provide a discussion on collection and baseline benchmark scores.
| [
{
"created": "Sat, 9 Jul 2022 01:56:35 GMT",
"version": "v1"
}
] | 2022-07-12 | [
[
"Robbins",
"Wes",
""
],
[
"Zohourianshahzadi",
"Zanyar",
""
],
[
"Kalita",
"Jugal",
""
]
] | Vision-language models can assess visual context in an image and generate descriptive text. While the generated text may be accurate and syntactically correct, it is often overly general. To address this, recent work has used optical character recognition to supplement visual information with text extracted from an image. In this work, we contend that vision-language models can benefit from additional information that can be extracted from an image, but are not used by current models. We modify previous multimodal frameworks to accept relevant information from any number of auxiliary classifiers. In particular, we focus on person names as an additional set of tokens and create a novel image-caption dataset to facilitate captioning with person names. The dataset, Politicians and Athletes in Captions (PAC), consists of captioned images of well-known people in context. By fine-tuning pretrained models with this dataset, we demonstrate a model that can naturally integrate facial recognition tokens into generated text by training on limited data. For the PAC dataset, we provide a discussion on collection and baseline benchmark scores. |
2203.09777 | Brandon Khoo | Brandon B. G. Khoo, Chern Hong Lim, Raphael C.-W. Phan | Transferable Class-Modelling for Decentralized Source Attribution of
GAN-Generated Images | 21 pages, 8 figures. Code:
https://github.com/quarxilon/Generator_Attribution | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | GAN-generated deepfakes as a genre of digital images are gaining ground as
both catalysts of artistic expression and malicious forms of deception,
therefore demanding systems to enforce and accredit their ethical use. Existing
techniques for the source attribution of synthetic images identify subtle
intrinsic fingerprints using multiclass classification neural nets limited in
functionality and scalability. Hence, we redefine the deepfake detection and
source attribution problems as a series of related binary classification tasks.
We leverage transfer learning to rapidly adapt forgery detection networks for
multiple independent attribution problems, by proposing a semi-decentralized
modular design to solve them simultaneously and efficiently. Class activation
mapping is also demonstrated as an effective means of feature localization for
model interpretation. Our models are determined via experimentation to be
competitive with current benchmarks, and capable of decent performance on human
portraits in ideal conditions. Decentralized fingerprint-based attribution is
found to retain validity in the presence of novel sources, but is more
susceptible to type II errors that intensify with image perturbations and
attributive uncertainty. We describe both our conceptual framework and model
prototypes for further enhancement when investigating the technical limits of
reactive deepfake attribution.
| [
{
"created": "Fri, 18 Mar 2022 07:43:03 GMT",
"version": "v1"
}
] | 2022-03-21 | [
[
"Khoo",
"Brandon B. G.",
""
],
[
"Lim",
"Chern Hong",
""
],
[
"Phan",
"Raphael C. -W.",
""
]
] | GAN-generated deepfakes as a genre of digital images are gaining ground as both catalysts of artistic expression and malicious forms of deception, therefore demanding systems to enforce and accredit their ethical use. Existing techniques for the source attribution of synthetic images identify subtle intrinsic fingerprints using multiclass classification neural nets limited in functionality and scalability. Hence, we redefine the deepfake detection and source attribution problems as a series of related binary classification tasks. We leverage transfer learning to rapidly adapt forgery detection networks for multiple independent attribution problems, by proposing a semi-decentralized modular design to solve them simultaneously and efficiently. Class activation mapping is also demonstrated as an effective means of feature localization for model interpretation. Our models are determined via experimentation to be competitive with current benchmarks, and capable of decent performance on human portraits in ideal conditions. Decentralized fingerprint-based attribution is found to retain validity in the presence of novel sources, but is more susceptible to type II errors that intensify with image perturbations and attributive uncertainty. We describe both our conceptual framework and model prototypes for further enhancement when investigating the technical limits of reactive deepfake attribution. |
2007.09060 | Prateek Verma | Camille Noufi and Prateek Verma | Self-Supervised Learning of Context-Aware Pitch Prosody Representations | null | null | null | null | cs.SD cs.CV cs.IR cs.LG eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In music and speech, meaning is derived at multiple levels of context.
Affect, for example, can be inferred both by a short sound token and by sonic
patterns over a longer temporal window such as an entire recording. In this
letter, we focus on inferring meaning from this dichotomy of contexts. We show
how contextual representations of short sung vocal lines can be implicitly
learned from fundamental frequency ($F_0$) and thus be used as a meaningful
feature space for downstream Music Information Retrieval (MIR) tasks. We
propose three self-supervised deep learning paradigms which leverage pseudotask
learning of these two levels of context to produce latent representation
spaces. We evaluate the usefulness of these representations by embedding unseen
pitch contours into each space and conducting downstream classification tasks.
Our results show that contextual representation can enhance downstream
classification by as much as 15\% as compared to using traditional statistical
contour features.
| [
{
"created": "Fri, 17 Jul 2020 15:41:00 GMT",
"version": "v1"
},
{
"created": "Sat, 24 Oct 2020 02:10:05 GMT",
"version": "v2"
},
{
"created": "Thu, 25 Mar 2021 17:46:54 GMT",
"version": "v3"
},
{
"created": "Sun, 1 Aug 2021 05:16:48 GMT",
"version": "v4"
}
] | 2022-09-12 | [
[
"Noufi",
"Camille",
""
],
[
"Verma",
"Prateek",
""
]
] | In music and speech, meaning is derived at multiple levels of context. Affect, for example, can be inferred both by a short sound token and by sonic patterns over a longer temporal window such as an entire recording. In this letter, we focus on inferring meaning from this dichotomy of contexts. We show how contextual representations of short sung vocal lines can be implicitly learned from fundamental frequency ($F_0$) and thus be used as a meaningful feature space for downstream Music Information Retrieval (MIR) tasks. We propose three self-supervised deep learning paradigms which leverage pseudotask learning of these two levels of context to produce latent representation spaces. We evaluate the usefulness of these representations by embedding unseen pitch contours into each space and conducting downstream classification tasks. Our results show that contextual representation can enhance downstream classification by as much as 15\% as compared to using traditional statistical contour features. |
2310.20203 | Suman Sapkota | Suman Sapkota, Binod Bhattarai | Importance Estimation with Random Gradient for Neural Network Pruning | 7 pages, 2 figures, ICLR 2023 Workshop on Sparsity in Neural
Networks. arXiv admin note: text overlap with arXiv:2306.13203 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Global Neuron Importance Estimation is used to prune neural networks for
efficiency reasons. To determine the global importance of each neuron or
convolutional kernel, most of the existing methods either use activation or
gradient information or both, which demands abundant labelled examples. In this
work, we use heuristics to derive importance estimation similar to Taylor First
Order (TaylorFO) approximation based methods. We name our methods TaylorFO-abs
and TaylorFO-sq. We propose two additional methods to improve these importance
estimation methods. Firstly, we propagate random gradients from the last layer
of a network, thus avoiding the need for labelled examples. Secondly, we
normalize the gradient magnitude of the last layer output before propagating,
which allows all examples to contribute similarly to the importance score. Our
methods with additional techniques perform better than previous methods when
tested on ResNet and VGG architectures on CIFAR-100 and STL-10 datasets.
Furthermore, our method also complements the existing methods and improves
their performances when combined with them.
| [
{
"created": "Tue, 31 Oct 2023 06:00:17 GMT",
"version": "v1"
}
] | 2023-11-01 | [
[
"Sapkota",
"Suman",
""
],
[
"Bhattarai",
"Binod",
""
]
] | Global Neuron Importance Estimation is used to prune neural networks for efficiency reasons. To determine the global importance of each neuron or convolutional kernel, most of the existing methods either use activation or gradient information or both, which demands abundant labelled examples. In this work, we use heuristics to derive importance estimation similar to Taylor First Order (TaylorFO) approximation based methods. We name our methods TaylorFO-abs and TaylorFO-sq. We propose two additional methods to improve these importance estimation methods. Firstly, we propagate random gradients from the last layer of a network, thus avoiding the need for labelled examples. Secondly, we normalize the gradient magnitude of the last layer output before propagating, which allows all examples to contribute similarly to the importance score. Our methods with additional techniques perform better than previous methods when tested on ResNet and VGG architectures on CIFAR-100 and STL-10 datasets. Furthermore, our method also complements the existing methods and improves their performances when combined with them. |
2312.17612 | Florentia Afentaki | Florentia Afentaki, Gurol Saglam, Argyris Kokkinis, Kostas Siozios,
Georgios Zervakis, Mehdi B Tahoori | Bespoke Approximation of Multiplication-Accumulation and Activation
Targeting Printed Multilayer Perceptrons | Accepted for publication at the 42th IEEE/ACM International
Conference on Computer Aided Design (ICCAD) 2023, San Francisco, USA | null | 10.1109/ICCAD57390.2023.10323613 | null | cs.AR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Printed Electronics (PE) feature distinct and remarkable characteristics that
make them a prominent technology for achieving true ubiquitous computing. This
is particularly relevant in application domains that require conformal and
ultra-low cost solutions, which have experienced limited penetration of
computing until now. Unlike silicon-based technologies, PE offer unparalleled
features such as non-recurring engineering costs, ultra-low manufacturing cost,
and on-demand fabrication of conformal, flexible, non-toxic, and stretchable
hardware. However, PE face certain limitations due to their large feature
sizes, that impede the realization of complex circuits, such as machine
learning classifiers. In this work, we address these limitations by leveraging
the principles of Approximate Computing and Bespoke (fully-customized) design.
We propose an automated framework for designing ultra-low power Multilayer
Perceptron (MLP) classifiers which employs, for the first time, a holistic
approach to approximate all functions of the MLP's neurons: multiplication,
accumulation, and activation. Through comprehensive evaluation across various
MLPs of varying size, our framework demonstrates the ability to enable
battery-powered operation of even the most intricate MLP architecture examined,
significantly surpassing the current state of the art.
| [
{
"created": "Fri, 29 Dec 2023 14:16:11 GMT",
"version": "v1"
},
{
"created": "Mon, 5 Feb 2024 11:14:22 GMT",
"version": "v2"
}
] | 2024-02-06 | [
[
"Afentaki",
"Florentia",
""
],
[
"Saglam",
"Gurol",
""
],
[
"Kokkinis",
"Argyris",
""
],
[
"Siozios",
"Kostas",
""
],
[
"Zervakis",
"Georgios",
""
],
[
"Tahoori",
"Mehdi B",
""
]
] | Printed Electronics (PE) feature distinct and remarkable characteristics that make them a prominent technology for achieving true ubiquitous computing. This is particularly relevant in application domains that require conformal and ultra-low cost solutions, which have experienced limited penetration of computing until now. Unlike silicon-based technologies, PE offer unparalleled features such as non-recurring engineering costs, ultra-low manufacturing cost, and on-demand fabrication of conformal, flexible, non-toxic, and stretchable hardware. However, PE face certain limitations due to their large feature sizes, that impede the realization of complex circuits, such as machine learning classifiers. In this work, we address these limitations by leveraging the principles of Approximate Computing and Bespoke (fully-customized) design. We propose an automated framework for designing ultra-low power Multilayer Perceptron (MLP) classifiers which employs, for the first time, a holistic approach to approximate all functions of the MLP's neurons: multiplication, accumulation, and activation. Through comprehensive evaluation across various MLPs of varying size, our framework demonstrates the ability to enable battery-powered operation of even the most intricate MLP architecture examined, significantly surpassing the current state of the art. |
1409.6226 | Ricardo Martins | Ricardo Martins, Jo\~ao Filipe Ferreira, Jorge Dias | Touch attention Bayesian models for robotic active haptic exploration of
heterogeneous surfaces | 8 pages, presented in IROS 2014, Chicago | Proceedings of 2014 IEEE/RSJ International Conference on
Intelligent Robots and Systems (IROS 2014), Chicago, USA, Sept. 14-18, 2014 | 10.1109/IROS.2014.6942711 | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work contributes to the development of active haptic exploration
strategies of surfaces using robotic hands in environments with an unknown
structure. The architecture of the proposed approach consists of two main Bayesian
models, implementing the touch attention mechanisms of the system. The model
pi_per perceives and discriminates different categories of materials (haptic
stimulus) integrating compliance and texture features extracted from haptic
sensory data. The model pi_tar actively infers the next region of the workspace
that should be explored by the robotic system, integrating the task
information, the permanently updated saliency and uncertainty maps extracted
from the perceived haptic stimulus map, as well as, inhibition-of-return
mechanisms.
The experimental results demonstrate that the Bayesian model pi_per can be
used to discriminate 10 different classes of materials with an average
recognition rate higher than 90%. The generalization capability of the
proposed models was demonstrated experimentally. The ATLAS robot, in the
simulation, was able to perform the following of a discontinuity between two
regions made of different materials with a divergence smaller than 1cm (30
trials). The tests were performed in scenarios with 3 different configurations
of the discontinuity. The Bayesian models have demonstrated the capability to
manage the uncertainty about the structure of the surfaces and sensory noise to
make correct motor decisions from haptic percepts.
| [
{
"created": "Mon, 22 Sep 2014 16:07:42 GMT",
"version": "v1"
}
] | 2016-11-18 | [
[
"Martins",
"Ricardo",
""
],
[
"Ferreira",
"João Filipe",
""
],
[
"Dias",
"Jorge",
""
]
] | This work contributes to the development of active haptic exploration strategies of surfaces using robotic hands in environments with an unknown structure. The architecture of the proposed approach consists of two main Bayesian models, implementing the touch attention mechanisms of the system. The model pi_per perceives and discriminates different categories of materials (haptic stimulus) integrating compliance and texture features extracted from haptic sensory data. The model pi_tar actively infers the next region of the workspace that should be explored by the robotic system, integrating the task information, the permanently updated saliency and uncertainty maps extracted from the perceived haptic stimulus map, as well as, inhibition-of-return mechanisms. The experimental results demonstrate that the Bayesian model pi_per can be used to discriminate 10 different classes of materials with an average recognition rate higher than 90%. The generalization capability of the proposed models was demonstrated experimentally. The ATLAS robot, in the simulation, was able to perform the following of a discontinuity between two regions made of different materials with a divergence smaller than 1cm (30 trials). The tests were performed in scenarios with 3 different configurations of the discontinuity. The Bayesian models have demonstrated the capability to manage the uncertainty about the structure of the surfaces and sensory noise to make correct motor decisions from haptic percepts. |
2307.10311 | Arnab Purkayastha | Shobhit Aggarwal, Arnab Purkayastha | SecureTrack- A contact tracing IoT platform for monitoring infectious
diseases | 22 Pages, 8 figures, To be published in "The Global Interdisciplinary
Green Cities Conference 2023 Business, Engineering, Art, Architecture,
Design, Political Science, International Relations, Applied Science &
Technology. " | null | null | null | cs.CR | http://creativecommons.org/licenses/by/4.0/ | The COVID-19 pandemic has highlighted the need for innovative solutions to
monitor and control the spread of infectious diseases. With the potential for
future pandemics and the risk of outbreaks particularly in academic
institutions, there is a pressing need for effective approaches to monitor and
manage such diseases. Contact tracing using Global Positioning Systems (GPS)
has been found to be the most prevalent method to detect and tackle the extent
of outbreaks during the pandemic. However, these services suffer from the
inherent problems of infringement of data privacy that creates hindrance in
adoption of the technology. Non-cellular wireless technologies on the other
hand are well-suited to provide secure contact tracing methods. Such approaches
integrated with the Internet of Things (IoT) have a great potential to aid in
the fight against any type of infectious diseases. In response, we present a
unique approach that utilizes an IoT based generic framework to identify
individuals who may have been exposed to the virus, using contact tracing
methods, without compromising the privacy aspect. We develop the architecture
of our platform, including both the frontend and backend components, and
demonstrate its effectiveness in identifying potential COVID-19 exposures (as a
test case) through a proof-of-concept implementation. We also implement and
verify a prototype of the device. Our framework is easily deployable and can be
scaled up as needed with the existing infrastructure.
| [
{
"created": "Wed, 19 Jul 2023 01:57:42 GMT",
"version": "v1"
}
] | 2023-07-21 | [
[
"Aggarwal",
"Shobhit",
""
],
[
"Purkayastha",
"Arnab",
""
]
] | The COVID-19 pandemic has highlighted the need for innovative solutions to monitor and control the spread of infectious diseases. With the potential for future pandemics and the risk of outbreaks particularly in academic institutions, there is a pressing need for effective approaches to monitor and manage such diseases. Contact tracing using Global Positioning Systems (GPS) has been found to be the most prevalent method to detect and tackle the extent of outbreaks during the pandemic. However, these services suffer from the inherent problem of data privacy infringement, which hinders adoption of the technology. Non-cellular wireless technologies, on the other hand, are well-suited to provide secure contact tracing methods. Such approaches integrated with the Internet of Things (IoT) have a great potential to aid in the fight against any type of infectious diseases. In response, we present a unique approach that utilizes an IoT based generic framework to identify individuals who may have been exposed to the virus, using contact tracing methods, without compromising the privacy aspect. We develop the architecture of our platform, including both the frontend and backend components, and demonstrate its effectiveness in identifying potential COVID-19 exposures (as a test case) through a proof-of-concept implementation. We also implement and verify a prototype of the device. Our framework is easily deployable and can be scaled up as needed with the existing infrastructure. |
2001.05467 | Tong Niu | Tong Niu, Mohit Bansal | AvgOut: A Simple Output-Probability Measure to Eliminate Dull Responses | AAAI 2020 (8 pages) | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many sequence-to-sequence dialogue models tend to generate safe,
uninformative responses. There have been various useful efforts on trying to
eliminate them. However, these approaches either improve decoding algorithms
during inference, rely on hand-crafted features, or employ complex models. In
our work, we build dialogue models that are dynamically aware of what
utterances or tokens are dull without any feature-engineering. Specifically, we
start with a simple yet effective automatic metric, AvgOut, which calculates
the average output probability distribution of all time steps on the decoder
side during training. This metric directly estimates which tokens are more
likely to be generated, thus making it a faithful evaluation of the model
diversity (i.e., for diverse models, the token probabilities should be more
evenly distributed rather than peaked at a few dull tokens). We then leverage
this novel metric to propose three models that promote diversity without losing
relevance. The first model, MinAvgOut, directly maximizes the diversity score
through the output distributions of each batch; the second model, Label
Fine-Tuning (LFT), prepends to the source sequence a label continuously scaled
by the diversity score to control the diversity level; the third model, RL,
adopts Reinforcement Learning and treats the diversity score as a reward
signal. Moreover, we experiment with a hybrid model by combining the loss terms
of MinAvgOut and RL. All four models outperform their base LSTM-RNN model on
both diversity and relevance by a large margin, and are comparable to or better
than competitive baselines (also verified via human evaluation). Moreover, our
approaches are orthogonal to the base model, making them applicable as an
add-on to other emerging better dialogue models in the future.
| [
{
"created": "Wed, 15 Jan 2020 18:32:06 GMT",
"version": "v1"
}
] | 2020-01-16 | [
[
"Niu",
"Tong",
""
],
[
"Bansal",
"Mohit",
""
]
] | Many sequence-to-sequence dialogue models tend to generate safe, uninformative responses. There have been various useful efforts on trying to eliminate them. However, these approaches either improve decoding algorithms during inference, rely on hand-crafted features, or employ complex models. In our work, we build dialogue models that are dynamically aware of what utterances or tokens are dull without any feature-engineering. Specifically, we start with a simple yet effective automatic metric, AvgOut, which calculates the average output probability distribution of all time steps on the decoder side during training. This metric directly estimates which tokens are more likely to be generated, thus making it a faithful evaluation of the model diversity (i.e., for diverse models, the token probabilities should be more evenly distributed rather than peaked at a few dull tokens). We then leverage this novel metric to propose three models that promote diversity without losing relevance. The first model, MinAvgOut, directly maximizes the diversity score through the output distributions of each batch; the second model, Label Fine-Tuning (LFT), prepends to the source sequence a label continuously scaled by the diversity score to control the diversity level; the third model, RL, adopts Reinforcement Learning and treats the diversity score as a reward signal. Moreover, we experiment with a hybrid model by combining the loss terms of MinAvgOut and RL. All four models outperform their base LSTM-RNN model on both diversity and relevance by a large margin, and are comparable to or better than competitive baselines (also verified via human evaluation). Moreover, our approaches are orthogonal to the base model, making them applicable as an add-on to other emerging better dialogue models in the future. |
2208.11978 | Andrea Bedin | Andrea Bedin, Federico Chiariotti, Andrea Zanella | Joint Scheduling and Coding for Reliable, Latency-bounded Transmission
over Parallel Wireless Links | 7 pages, 6 figures. Accepted at IEEE Internet of Things Magazine | null | null | null | cs.NI | http://creativecommons.org/licenses/by/4.0/ | Several novel industrial applications involve human control of vehicles,
cranes, or mobile robots through various high-throughput feedback systems, such
as Virtual Reality (VR) and tactile/haptic signals. The near real-time
interaction between the system and the operator requires strict latency
constraints in packet exchange, which is difficult to guarantee over wireless
communication links. In this work, we advocate that packet-level coding and
packet scheduling over multiple parallel (unreliable) links have the potential
to provide reliable, latency-bounded communication for applications with
periodic data generation patterns. However, this goal can be reached only
through a careful joint design of such mechanisms, whose interactions can be
subtle and difficult to predict. In this paper we first discuss these aspects
in general terms, and then present a Markov Decision Process (MDP) model that
can be used to find a scheme that optimally exploits the multichannel wireless
access in order to maximize the fraction of data blocks delivered within
deadline. Our illustrative example is then used to show the optimal
coding/scheduling strategies under different combinations of wireless links,
also showing that the common solution of backing up a high bitrate unreliable
mmWave link with a low bitrate more stable sub-6 GHz link can actually be
ineffective in the considered scenario.
| [
{
"created": "Thu, 25 Aug 2022 10:15:48 GMT",
"version": "v1"
}
] | 2022-08-26 | [
[
"Bedin",
"Andrea",
""
],
[
"Chiariotti",
"Federico",
""
],
[
"Zanella",
"Andrea",
""
]
] | Several novel industrial applications involve human control of vehicles, cranes, or mobile robots through various high-throughput feedback systems, such as Virtual Reality (VR) and tactile/haptic signals. The near real-time interaction between the system and the operator requires strict latency constraints in packet exchange, which is difficult to guarantee over wireless communication links. In this work, we advocate that packet-level coding and packet scheduling over multiple parallel (unreliable) links have the potential to provide reliable, latency-bounded communication for applications with periodic data generation patterns. However, this goal can be reached only through a careful joint design of such mechanisms, whose interactions can be subtle and difficult to predict. In this paper we first discuss these aspects in general terms, and then present a Markov Decision Process (MDP) model that can be used to find a scheme that optimally exploits the multichannel wireless access in order to maximize the fraction of data blocks delivered within deadline. Our illustrative example is then used to show the optimal coding/scheduling strategies under different combinations of wireless links, also showing that the common solution of backing up a high bitrate unreliable mmWave link with a low bitrate more stable sub-6 GHz link can actually be ineffective in the considered scenario. |
2003.02587 | Fuli Feng | Fuli Feng, Xiangnan He, Hanwang Zhang, and Tat-Seng Chua | Cross-GCN: Enhancing Graph Convolutional Network with $k$-Order Feature
Interactions | Submitted to TKDE | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph Convolutional Network (GCN) is an emerging technique that performs
learning and reasoning on graph data. It operates feature learning on the graph
structure, through aggregating the features of the neighbor nodes to obtain the
embedding of each target node. Owing to the strong representation power, recent
research shows that GCN achieves state-of-the-art performance on several tasks
such as recommendation and linked document classification.
Despite its effectiveness, we argue that existing designs of GCN forgo
modeling cross features, making GCN less effective for tasks or data where
cross features are important. Although a neural network can approximate any
continuous function, including the multiplication operator for modeling feature
crosses, it can be rather inefficient to do so (i.e., wasting many parameters
at the risk of overfitting) if there is no explicit design.
To this end, we design a new operator named Cross-feature Graph Convolution,
which explicitly models the arbitrary-order cross features with complexity
linear to feature dimension and order size. We term our proposed architecture
as Cross-GCN, and conduct experiments on three graphs to validate its
effectiveness. Extensive analysis validates the utility of explicitly modeling
cross features in GCN, especially for feature learning at lower layers.
| [
{
"created": "Thu, 5 Mar 2020 13:05:27 GMT",
"version": "v1"
}
] | 2020-03-06 | [
[
"Feng",
"Fuli",
""
],
[
"He",
"Xiangnan",
""
],
[
"Zhang",
"Hanwang",
""
],
[
"Chua",
"Tat-Seng",
""
]
] | Graph Convolutional Network (GCN) is an emerging technique that performs learning and reasoning on graph data. It operates feature learning on the graph structure, through aggregating the features of the neighbor nodes to obtain the embedding of each target node. Owing to the strong representation power, recent research shows that GCN achieves state-of-the-art performance on several tasks such as recommendation and linked document classification. Despite its effectiveness, we argue that existing designs of GCN forgo modeling cross features, making GCN less effective for tasks or data where cross features are important. Although a neural network can approximate any continuous function, including the multiplication operator for modeling feature crosses, it can be rather inefficient to do so (i.e., wasting many parameters at the risk of overfitting) if there is no explicit design. To this end, we design a new operator named Cross-feature Graph Convolution, which explicitly models the arbitrary-order cross features with complexity linear to feature dimension and order size. We term our proposed architecture as Cross-GCN, and conduct experiments on three graphs to validate its effectiveness. Extensive analysis validates the utility of explicitly modeling cross features in GCN, especially for feature learning at lower layers. |
1608.06753 | Shiyu Ji | Shiyu Ji | On the Correctness of Inverted Index Based Public-Key Searchable
Encryption Scheme for Multi-time Search | 3 pages, no figure | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this short note we argue, by giving a counterexample, that the
state-of-the-art inverted index based public-key searchable encryption scheme
proposed by Wang et al. may not be completely correct.
| [
{
"created": "Wed, 24 Aug 2016 08:42:17 GMT",
"version": "v1"
}
] | 2016-08-25 | [
[
"Ji",
"Shiyu",
""
]
] | In this short note we argue, by giving a counterexample, that the state-of-the-art inverted index based public-key searchable encryption scheme proposed by Wang et al. may not be completely correct. |
1812.03588 | Rodrigo de Lamare | R. M. Oliveira and R. C. de Lamare | Study of Puncturing Techniques for Polar Codes in 5G Cellular and IoT
Networks | 4 figures, 6 pages | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a puncturing technique based on the channel polarization
index for the design of rate-compatible polar codes in the fifth generation
(5G) of wireless systems. The proposed strategy consists of two steps: we first
generate the codeword message; and then we reduce the length of the codeword
based on the channel polarization index where channels with the smallest
channel polarization indices are punctured. We consider the proposed punctured
polar codes with the successive cancellation (SC) decoder and the Cyclic
Redundancy Check (CRC) aided SC list (CA-SCL) decoder and punctured bits known
to both the encoder and the decoder. The Polar Spectra (PS) are then used to
analyze the performance of the puncturing technique. Simulations for 5G
scenarios show that the proposed polar codes perform comparably to Low-Density
Parity-Check (LDPC) codes.
| [
{
"created": "Mon, 10 Dec 2018 00:56:46 GMT",
"version": "v1"
}
] | 2018-12-11 | [
[
"Oliveira",
"R. M.",
""
],
[
"de Lamare",
"R. C.",
""
]
] | This paper presents a puncturing technique based on the channel polarization index for the design of rate-compatible polar codes in the fifth generation (5G) of wireless systems. The proposed strategy consists of two steps: we first generate the codeword message; and then we reduce the length of the codeword based on the channel polarization index where channels with the smallest channel polarization indices are punctured. We consider the proposed punctured polar codes with the successive cancellation (SC) decoder and the Cyclic Redundancy Check (CRC) aided SC list (CA-SCL) decoder and punctured bits known to both the encoder and the decoder. The Polar Spectra (PS) are then used to analyze the performance of the puncturing technique. Simulations for 5G scenarios show that the proposed polar codes perform comparably to Low-Density Parity-Check (LDPC) codes. |
2005.04670 | Ali AlSoufi Dr. | Mohammed Ghanem, Ali Alsoufi | Interoperable Framework to Enhance Citizen Services in the Kingdom of
Bahrain | 4 pages, 1 figure, conference paper, 978-1-7281-3012-5/19/$31.00
\c{opyright}2019 IEEE | International Conference on Innovation and Intelligence for
Informatics, Computing, and Technologies (3ICT), University of Bahrain,
Kingdom of Bahrain, September 2019 | 10.1109/3ICT.2019.8910330 | null | cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Citizen records are scattered between different state organizations. It
wastes time, effort, and resources for both citizens and organizations to
collect, maintain, and update records to fulfill citizen services.
Interoperability is a key element that enables seamless collaboration between
different entities. It requires non-conventional methods to overcome
interoperability challenges such as lack of trust, centralization, and policy
and technology differences. Blockchain is a disruptive technology with the
potential to overcome these challenges. The technology is designed to enable
peer-to-peer transactions, eliminating intermediaries, in a trustless
environment through the control of consensus mechanisms. This research aims to
explore the status of interoperability in Bahrain, design an interoperable
framework, and then test the validity of the framework by implementing a
prototype using blockchain technology. The research will be divided into four
phases; I: Information collection, II: Design and modeling of the framework,
III: Implementation of a prototype, and IV: Measuring the performance of the
prototype. This research is in progress and it is expected, once it is
complete, to enhance the e-government plan in the Kingdom of Bahrain to
provide better services to citizens and help in the transition from
e-government to seamless government, which will lead to sustainable citizen
services. The findings of the study are also expected to improve social,
economic, and environmental sustainability through increased process
optimization and reduced cost and complexity.
| [
{
"created": "Sun, 10 May 2020 14:00:54 GMT",
"version": "v1"
}
] | 2020-05-12 | [
[
"Ghanem",
"Mohammed",
""
],
[
"Alsoufi",
"Ali",
""
]
] | Citizen records are scattered between different state organizations. It wastes time, effort, and resources for both citizens and organizations to collect, maintain, and update records to fulfill citizen services. Interoperability is a key element that enables seamless collaboration between different entities. It requires non-conventional methods to overcome interoperability challenges such as lack of trust, centralization, and policy and technology differences. Blockchain is a disruptive technology with the potential to overcome these challenges. The technology is designed to enable peer-to-peer transactions, eliminating intermediaries, in a trustless environment through the control of consensus mechanisms. This research aims to explore the status of interoperability in Bahrain, design an interoperable framework, and then test the validity of the framework by implementing a prototype using blockchain technology. The research will be divided into four phases; I: Information collection, II: Design and modeling of the framework, III: Implementation of a prototype, and IV: Measuring the performance of the prototype. This research is in progress and it is expected, once it is complete, to enhance the e-government plan in the Kingdom of Bahrain to provide better services to citizens and help in the transition from e-government to seamless government, which will lead to sustainable citizen services. The findings of the study are also expected to improve social, economic, and environmental sustainability through increased process optimization and reduced cost and complexity. |
2006.10284 | Savio Sciancalepore | Gabriele Oligeri, Savio Sciancalepore, Roberto Di Pietro | GNSS Spoofing Detection via Opportunistic IRIDIUM Signals | Accepted for the 13th Conference on Security and Privacy in Wireless
and Mobile Networks (WISEC), 2020 | null | 10.1145/3395351.3399350 | null | cs.CR eess.SP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we study the privately-owned IRIDIUM satellite constellation, to
provide a location service that is independent of the GNSS. In particular, we
apply our findings to propose a new GNSS spoofing detection solution,
exploiting unencrypted IRIDIUM Ring Alert (IRA) messages that are broadcast by
IRIDIUM satellites. We firstly reverse-engineer many parameters of the IRIDIUM
satellite constellation, such as the satellites' speed, packet interarrival
times, maximum satellite coverage, satellite pass duration, and the satellite
beam constellation, to name a few. Later, we adopt the aforementioned
statistics to create a detailed model of the satellite network. Subsequently,
we propose a solution to detect unintended deviations of a target user from his
path, due to GNSS spoofing attacks. We show that our solution can be used
efficiently and effectively to verify the position estimated from standard GNSS
satellite constellation, and we provide constraints and parameters to fit
several application scenarios. All the results reported in this paper, while
showing the quality and viability of our proposal, are supported by real data.
In particular, we have collected and analyzed hundreds of thousands of IRA
messages, thanks to a measurement campaign lasting several days. All the
collected data ($1000+$ hours) have been made available to the research
community. Our solution is particularly suitable for unattended scenarios such
as deserts, rural areas, or open seas, where standard spoofing detection
techniques resorting to crowd-sourcing cannot be used due to deployment
limitations. Moreover, contrary to competing solutions, our approach does not
resort to physical-layer information, dedicated hardware, or multiple receiving
stations, while exploiting only a single receiving antenna and
publicly-available IRIDIUM transmissions. Finally, novel research directions
are also highlighted.
| [
{
"created": "Thu, 18 Jun 2020 05:34:04 GMT",
"version": "v1"
}
] | 2020-06-20 | [
[
"Oligeri",
"Gabriele",
""
],
[
"Sciancalepore",
"Savio",
""
],
[
"Di Pietro",
"Roberto",
""
]
] | In this paper, we study the privately-owned IRIDIUM satellite constellation, to provide a location service that is independent of the GNSS. In particular, we apply our findings to propose a new GNSS spoofing detection solution, exploiting unencrypted IRIDIUM Ring Alert (IRA) messages that are broadcast by IRIDIUM satellites. We firstly reverse-engineer many parameters of the IRIDIUM satellite constellation, such as the satellites' speed, packet interarrival times, maximum satellite coverage, satellite pass duration, and the satellite beam constellation, to name a few. Later, we adopt the aforementioned statistics to create a detailed model of the satellite network. Subsequently, we propose a solution to detect unintended deviations of a target user from his path, due to GNSS spoofing attacks. We show that our solution can be used efficiently and effectively to verify the position estimated from standard GNSS satellite constellation, and we provide constraints and parameters to fit several application scenarios. All the results reported in this paper, while showing the quality and viability of our proposal, are supported by real data. In particular, we have collected and analyzed hundreds of thousands of IRA messages, thanks to a measurement campaign lasting several days. All the collected data ($1000+$ hours) have been made available to the research community. Our solution is particularly suitable for unattended scenarios such as deserts, rural areas, or open seas, where standard spoofing detection techniques resorting to crowd-sourcing cannot be used due to deployment limitations. Moreover, contrary to competing solutions, our approach does not resort to physical-layer information, dedicated hardware, or multiple receiving stations, while exploiting only a single receiving antenna and publicly-available IRIDIUM transmissions. Finally, novel research directions are also highlighted. |
cs/0302002 | Thomas Wolf | Matthew Pratola, Thomas Wolf | Optimizing GoTools' Search Heuristics using Genetic Algorithms | 23 pages, to appear in Journal of ICGA | null | null | null | cs.NE | null | GoTools is a program which solves life & death problems in the game of Go.
This paper describes experiments using a Genetic Algorithm to optimize
heuristic weights used by GoTools' tree-search. The complete set of heuristic
weights is composed of different subgroups, each of which can be optimized with
a suitable fitness function. As a useful side product, an MPI interface for
FreePascal was implemented to allow the use of a parallelized fitness function
running on a Beowulf cluster. The aim of this exercise is to optimize the
current version of GoTools, and to make tools available in preparation of an
extension of GoTools for solving open boundary life & death problems, which
will introduce more heuristic parameters to be fine tuned.
| [
{
"created": "Sun, 2 Feb 2003 03:30:52 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Pratola",
"Matthew",
""
],
[
"Wolf",
"Thomas",
""
]
] | GoTools is a program which solves life & death problems in the game of Go. This paper describes experiments using a Genetic Algorithm to optimize heuristic weights used by GoTools' tree-search. The complete set of heuristic weights is composed of different subgroups, each of which can be optimized with a suitable fitness function. As a useful side product, an MPI interface for FreePascal was implemented to allow the use of a parallelized fitness function running on a Beowulf cluster. The aim of this exercise is to optimize the current version of GoTools, and to make tools available in preparation of an extension of GoTools for solving open boundary life & death problems, which will introduce more heuristic parameters to be fine tuned. |
1008.2688 | Tomoyuki Yamakami | Tomoyuki Yamakami | A Dichotomy Theorem for the Approximate Counting of Complex-Weighted
Bounded-Degree Boolean CSPs | A4, 10pt, 20 pages. This revised version improves its preliminary
version published under a slightly different title in the Proceedings of the
4th International Conference on Combinatorial Optimization and Applications
(COCOA 2010), Lecture Notes in Computer Science, Springer, Vol.6508 (Part I),
pp.285--299, Kailua-Kona, Hawaii, USA, December 18--20, 2010 | (journal version) Theoretical Computer Science, Vol.447,
pp.120-135, 2012 | 10.1007/978-3-642-17458-2_24 | null | cs.CC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We determine the computational complexity of approximately counting the total
weight of variable assignments for every complex-weighted Boolean constraint
satisfaction problem (or CSP) with any number of additional unary (i.e., arity
1) constraints, particularly, when degrees of input instances are bounded from
above by a fixed constant. All degree-1 counting CSPs are obviously solvable in
polynomial time. When the instance's degree is more than two, we present a
dichotomy theorem that classifies all counting CSPs admitting free unary
constraints into exactly two categories. This classification theorem extends,
to complex-weighted problems, an earlier result on the approximation complexity
of unweighted counting Boolean CSPs of bounded degree. The framework of the
proof of our theorem is based on a theory of signature developed from Valiant's
holographic algorithms that can efficiently solve seemingly intractable
counting CSPs. Despite the use of arbitrary complex weight, our proof of the
classification theorem is rather elementary and intuitive due to an extensive
use of a novel notion of limited T-constructibility. For the remaining degree-2
problems, in contrast, they are as hard to approximate as Holant problems,
which are a generalization of counting CSPs.
| [
{
"created": "Mon, 16 Aug 2010 15:28:04 GMT",
"version": "v1"
},
{
"created": "Mon, 7 Mar 2011 15:56:29 GMT",
"version": "v2"
},
{
"created": "Sun, 18 Mar 2012 07:26:41 GMT",
"version": "v3"
}
] | 2015-05-19 | [
[
"Yamakami",
"Tomoyuki",
""
]
] | We determine the computational complexity of approximately counting the total weight of variable assignments for every complex-weighted Boolean constraint satisfaction problem (or CSP) with any number of additional unary (i.e., arity 1) constraints, particularly, when degrees of input instances are bounded from above by a fixed constant. All degree-1 counting CSPs are obviously solvable in polynomial time. When the instance's degree is more than two, we present a dichotomy theorem that classifies all counting CSPs admitting free unary constraints into exactly two categories. This classification theorem extends, to complex-weighted problems, an earlier result on the approximation complexity of unweighted counting Boolean CSPs of bounded degree. The framework of the proof of our theorem is based on a theory of signature developed from Valiant's holographic algorithms that can efficiently solve seemingly intractable counting CSPs. Despite the use of arbitrary complex weight, our proof of the classification theorem is rather elementary and intuitive due to an extensive use of a novel notion of limited T-constructibility. For the remaining degree-2 problems, in contrast, they are as hard to approximate as Holant problems, which are a generalization of counting CSPs. |
2304.08746 | Naoto Ohsaka | Naoto Ohsaka | On Approximate Reconfigurability of Label Cover | 11 pages | null | null | null | cs.DM cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given a two-prover game $G$ and its two satisfying labelings
$\psi_\mathsf{s}$ and $\psi_\mathsf{t}$, the Label Cover Reconfiguration
problem asks whether $\psi_\mathsf{s}$ can be transformed into
$\psi_\mathsf{t}$ by repeatedly changing the value of a vertex while preserving
any intermediate labeling satisfying $G$. We consider an optimization variant
of Label Cover Reconfiguration by relaxing the feasibility of labelings,
referred to as Maxmin Label Cover Reconfiguration: we are allowed to transform
by passing through any non-satisfying labelings, but required to maximize the
minimum fraction of satisfied edges during transformation from
$\psi_\mathsf{s}$ to $\psi_\mathsf{t}$. Since the parallel repetition theorem
of Raz (SIAM J. Comput., 1998), which implies NP-hardness of Label Cover within
any constant factor, produces strong inapproximability results for many NP-hard
problems, one may think of using Maxmin Label Cover Reconfiguration to derive
inapproximability results for reconfiguration problems. We prove the following
results on Maxmin Label Cover Reconfiguration, which display different trends
from those of Label Cover and the parallel repetition theorem:
(1) Maxmin Label Cover Reconfiguration can be approximated within a factor of
nearly $\frac{1}{4}$ for restricted graph classes, including slightly dense
graphs and balanced bipartite graphs.
(2) A naive parallel repetition of Maxmin Label Cover Reconfiguration does
not decrease the optimal objective value.
(3) Label Cover Reconfiguration on projection games can be decided in
polynomial time.
The above results suggest that a reconfiguration analogue of the parallel
repetition theorem is unlikely.
| [
{
"created": "Tue, 18 Apr 2023 05:48:46 GMT",
"version": "v1"
}
] | 2023-04-19 | [
[
"Ohsaka",
"Naoto",
""
]
] | Given a two-prover game $G$ and its two satisfying labelings $\psi_\mathsf{s}$ and $\psi_\mathsf{t}$, the Label Cover Reconfiguration problem asks whether $\psi_\mathsf{s}$ can be transformed into $\psi_\mathsf{t}$ by repeatedly changing the value of a vertex while preserving any intermediate labeling satisfying $G$. We consider an optimization variant of Label Cover Reconfiguration by relaxing the feasibility of labelings, referred to as Maxmin Label Cover Reconfiguration: we are allowed to transform by passing through any non-satisfying labelings, but required to maximize the minimum fraction of satisfied edges during transformation from $\psi_\mathsf{s}$ to $\psi_\mathsf{t}$. Since the parallel repetition theorem of Raz (SIAM J. Comput., 1998), which implies NP-hardness of Label Cover within any constant factor, produces strong inapproximability results for many NP-hard problems, one may think of using Maxmin Label Cover Reconfiguration to derive inapproximability results for reconfiguration problems. We prove the following results on Maxmin Label Cover Reconfiguration, which display different trends from those of Label Cover and the parallel repetition theorem: (1) Maxmin Label Cover Reconfiguration can be approximated within a factor of nearly $\frac{1}{4}$ for restricted graph classes, including slightly dense graphs and balanced bipartite graphs. (2) A naive parallel repetition of Maxmin Label Cover Reconfiguration does not decrease the optimal objective value. (3) Label Cover Reconfiguration on projection games can be decided in polynomial time. The above results suggest that a reconfiguration analogue of the parallel repetition theorem is unlikely. |
1407.4032 | Arthur Milchior | Arthur Milchior | A Note on Higher Order and Variable Order Logic over Finite Models | null | null | null | null | cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We show that descriptive complexity results extend in Higher Order Logic to
capture the expressivity of Turing machines which have a finite number of
alternations and whose time or space is bounded by a finite tower of
exponentials. Hence we have a logical characterisation of ELEMENTARY. We also
consider the expressivity of some fixed point operators and of monadic higher
order logic.
Finally, we show that Variable Order Logic over finite structures contains the
Analytical Hierarchy.
| [
{
"created": "Tue, 15 Jul 2014 15:58:19 GMT",
"version": "v1"
}
] | 2014-07-16 | [
[
"Milchior",
"Arthur",
""
]
] ] | We show that descriptive complexity results extend in Higher Order Logic to capture the expressivity of Turing machines which have a finite number of alternations and whose time or space is bounded by a finite tower of exponentials. Hence we have a logical characterisation of ELEMENTARY. We also consider the expressivity of some fixed point operators and of monadic higher order logic. Finally, we show that Variable Order Logic over finite structures contains the Analytical Hierarchy. |
1807.00962 | Alex James Dr | Olga Krestinskaya, Alex Pappachen James, Leon O. Chua | Neuro-memristive Circuits for Edge Computing: A review | null | IEEE Transactions on Neural Networks and Learning Systems, 2019 | 10.1109/TNNLS.2019.2899262 | null | cs.ET cs.AI cs.AR cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The volume, veracity, variability, and velocity of data produced from the
ever-increasing network of sensors connected to the Internet pose challenges for
power management, scalability, and sustainability of cloud computing
infrastructure. Increasing the data processing capability of edge computing
devices at lower power requirements can reduce several overheads for cloud
computing solutions. This paper provides a review of neuromorphic
CMOS-memristive architectures that can be integrated into edge computing
devices. We discuss why neuromorphic architectures are useful for edge
devices and show the advantages, drawbacks, and open problems in the field of
neuro-memristive circuits for edge computing.
| [
{
"created": "Sun, 1 Jul 2018 04:07:23 GMT",
"version": "v1"
},
{
"created": "Wed, 28 Nov 2018 03:55:48 GMT",
"version": "v2"
}
] | 2019-02-19 | [
[
"Krestinskaya",
"Olga",
""
],
[
"James",
"Alex Pappachen",
""
],
[
"Chua",
"Leon O.",
""
]
] ] | The volume, veracity, variability, and velocity of data produced from the ever-increasing network of sensors connected to the Internet pose challenges for power management, scalability, and sustainability of cloud computing infrastructure. Increasing the data processing capability of edge computing devices at lower power requirements can reduce several overheads for cloud computing solutions. This paper provides a review of neuromorphic CMOS-memristive architectures that can be integrated into edge computing devices. We discuss why neuromorphic architectures are useful for edge devices and show the advantages, drawbacks, and open problems in the field of neuro-memristive circuits for edge computing. |
0904.4202 | Emmanuel Lochin | Pierre-Ugo Tournoux, Emmanuel Lochin, Jerome Lacan, Amine Bouabdallah,
Vincent Roca | On-the-fly erasure coding for real-time video applications | null | null | 10.1109/TMM.2011.2126564 | null | cs.NI cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces a robust point-to-point transmission scheme: Tetrys,
that relies on a novel on-the-fly erasure coding concept which reduces the
delay for recovering lost data at the receiver side. In current erasure coding
schemes, the packets that are not rebuilt at the receiver side are either lost
or delayed by at least one RTT before transmission to the application. The
present contribution aims at demonstrating that Tetrys coding scheme can fill
the gap between real-time applications requirements and full reliability.
Indeed, we show that in several cases, Tetrys can recover lost packets below
one RTT over lossy and best-effort networks. We also show that Tetrys allows to
enable full reliability without delay compromise and as a result: significantly
improves the performance of time constrained applications. For instance, our
evaluations present that video-conferencing applications obtain a PSNR gain up
to 7dB compared to classic block-based erasure codes.
| [
{
"created": "Mon, 27 Apr 2009 16:09:33 GMT",
"version": "v1"
},
{
"created": "Wed, 17 Feb 2010 07:23:23 GMT",
"version": "v2"
},
{
"created": "Fri, 12 Nov 2010 09:03:43 GMT",
"version": "v3"
},
{
"created": "Tue, 16 Nov 2010 12:41:51 GMT",
"version": "v4"
}
] | 2012-04-03 | [
[
"Tournoux",
"Pierre-Ugo",
""
],
[
"Lochin",
"Emmanuel",
""
],
[
"Lacan",
"Jerome",
""
],
[
"Bouabdallah",
"Amine",
""
],
[
"Roca",
"Vincent",
""
]
] ] | This paper introduces a robust point-to-point transmission scheme: Tetrys, which relies on a novel on-the-fly erasure coding concept that reduces the delay for recovering lost data at the receiver side. In current erasure coding schemes, the packets that are not rebuilt at the receiver side are either lost or delayed by at least one RTT before transmission to the application. The present contribution aims at demonstrating that the Tetrys coding scheme can fill the gap between real-time application requirements and full reliability. Indeed, we show that in several cases, Tetrys can recover lost packets in less than one RTT over lossy and best-effort networks. We also show that Tetrys enables full reliability without compromising delay and, as a result, significantly improves the performance of time-constrained applications. For instance, our evaluations show that video-conferencing applications obtain a PSNR gain of up to 7 dB compared to classic block-based erasure codes. |
2003.11789 | Vitor Enes | Vitor Enes, Carlos Baquero, Tuanir Fran\c{c}a Rezende, Alexey Gotsman,
Matthieu Perrin, Pierre Sutra | State-Machine Replication for Planet-Scale Systems (Extended Version) | Extended version of a EuroSys'20 paper | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Online applications now routinely replicate their data at multiple sites
around the world. In this paper we present Atlas, the first state-machine
replication protocol tailored for such planet-scale systems. Atlas does not
rely on a distinguished leader, so clients enjoy the same quality of service
independently of their geographical locations. Furthermore, client-perceived
latency improves as we add sites closer to clients. To achieve this, Atlas
minimizes the size of its quorums using an observation that concurrent data
center failures are rare. It also processes a high percentage of accesses in a
single round trip, even when these conflict. We experimentally demonstrate that
Atlas consistently outperforms state-of-the-art protocols in planet-scale
scenarios. In particular, Atlas is up to two times faster than Flexible Paxos
with identical failure assumptions, and more than doubles the performance of
Egalitarian Paxos in the YCSB benchmark.
| [
{
"created": "Thu, 26 Mar 2020 08:24:26 GMT",
"version": "v1"
},
{
"created": "Mon, 18 May 2020 13:19:10 GMT",
"version": "v2"
}
] | 2020-05-19 | [
[
"Enes",
"Vitor",
""
],
[
"Baquero",
"Carlos",
""
],
[
"Rezende",
"Tuanir França",
""
],
[
"Gotsman",
"Alexey",
""
],
[
"Perrin",
"Matthieu",
""
],
[
"Sutra",
"Pierre",
""
]
] | Online applications now routinely replicate their data at multiple sites around the world. In this paper we present Atlas, the first state-machine replication protocol tailored for such planet-scale systems. Atlas does not rely on a distinguished leader, so clients enjoy the same quality of service independently of their geographical locations. Furthermore, client-perceived latency improves as we add sites closer to clients. To achieve this, Atlas minimizes the size of its quorums using an observation that concurrent data center failures are rare. It also processes a high percentage of accesses in a single round trip, even when these conflict. We experimentally demonstrate that Atlas consistently outperforms state-of-the-art protocols in planet-scale scenarios. In particular, Atlas is up to two times faster than Flexible Paxos with identical failure assumptions, and more than doubles the performance of Egalitarian Paxos in the YCSB benchmark. |
1509.04624 | Lingxiang Li | Lingxiang Li and Zhi Chen and Jun Fang and Athina P. Petropulu | On the Secrecy Capacity of a MIMO Gaussian Wiretap Channel with a
Cooperative Jammer | 13 pages, 7 figures | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the secrecy capacity of a helper-assisted Gaussian wiretap channel
Cooperative Jammer | 13 pages, 7 figures | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the secrecy capacity of a helper-assisted Gaussian wiretap channel
with a source, a legitimate receiver, an eavesdropper and an external helper,
where each terminal is equipped with multiple antennas. Determining the secrecy
capacity in this scenario generally requires solving a nonconvex secrecy rate
maximization (SRM) problem. To deal with this issue, we first reformulate the
original SRM problem into a sequence of convex subproblems. For the special
case of single-antenna legitimate receiver, we obtain the secrecy capacity via
a combination of convex optimization and one-dimensional search, while for the
general case of multi-antenna legitimate receiver, we propose an iterative
solution. To gain more insight into how the secrecy capacity of a
helper-assisted Gaussian wiretap channel behaves, we examine the achievable
secure degrees of freedom (s.d.o.f.) and obtain the maximal achievable s.d.o.f.
in closed-form. We also derive a closed-form solution to the original SRM
problem which achieves the maximal s.d.o.f. Numerical results are presented to
illustrate the efficacy of the proposed schemes.
| [
{
"created": "Tue, 15 Sep 2015 16:14:25 GMT",
"version": "v1"
}
] | 2015-09-16 | [
[
"Li",
"Lingxiang",
""
],
[
"Chen",
"Zhi",
""
],
[
"Fang",
"Jun",
""
],
[
"Petropulu",
    "Athina P.",
""
]
] ] | We study the secrecy capacity of a helper-assisted Gaussian wiretap channel with a source, a legitimate receiver, an eavesdropper and an external helper, where each terminal is equipped with multiple antennas. Determining the secrecy capacity in this scenario generally requires solving a nonconvex secrecy rate maximization (SRM) problem. To deal with this issue, we first reformulate the original SRM problem into a sequence of convex subproblems. For the special case of single-antenna legitimate receiver, we obtain the secrecy capacity via a combination of convex optimization and one-dimensional search, while for the general case of multi-antenna legitimate receiver, we propose an iterative solution. To gain more insight into how the secrecy capacity of a helper-assisted Gaussian wiretap channel behaves, we examine the achievable secure degrees of freedom (s.d.o.f.) and obtain the maximal achievable s.d.o.f. in closed-form. We also derive a closed-form solution to the original SRM problem which achieves the maximal s.d.o.f. Numerical results are presented to illustrate the efficacy of the proposed schemes. |
2306.03021 | Yu-Hsuan Chen | Yu-hsuan Chen, Levent Burak Kara, Jonathan Cagan | Automating Style Analysis and Visualization With Explainable AI -- Case
Studies on Brand Recognition | null | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Incorporating style-related objectives into shape design has been centrally
important to maximize product appeal. However, stylistic features such as
aesthetics and semantic attributes are hard to codify even for experts. As
such, algorithmic style capture and reuse have not fully benefited from
automated data-driven methodologies due to the challenging nature of design
describability. This paper proposes an AI-driven method to fully automate the
discovery of brand-related features. Our approach introduces BIGNet, a two-tier
Brand Identification Graph Neural Network (GNN) to classify and analyze
scalable vector graphics (SVG). First, to tackle the scarcity of vectorized product
images, this research proposes two data acquisition workflows: parametric
modeling from small curve-based datasets, and vectorization from large
pixel-based datasets. Secondly, this study constructs a novel hierarchical GNN
architecture to learn from both SVG's curve-level and chunk-level parameters.
In the first case study, BIGNet not only classifies phone brands but also
captures brand-related features across multiple scales, such as the location of
the lens, the height-width ratio, and the screen-frame gap, as confirmed by AI
evaluation. In the second study, this paper showcases the generalizability of
BIGNet learning from a vectorized car image dataset and validates the
consistency and robustness of its predictions given four scenarios. The results
match the difference commonly observed in luxury vs. economy brands in the
automobile market. Finally, this paper also visualizes the activation maps
generated from a convolutional neural network and shows BIGNet's advantage of
being a more human-friendly, explainable, and explicit style-capturing agent.
Code and dataset can be found on Github:
1. Phone case study: github.com/parksandrecfan/bignet-phone 2. Car case
study: github.com/parksandrecfan/bignet-car
| [
{
"created": "Mon, 5 Jun 2023 16:38:11 GMT",
"version": "v1"
}
] | 2023-06-06 | [
[
"Chen",
"Yu-hsuan",
""
],
[
"Kara",
"Levent Burak",
""
],
[
"Cagan",
"Jonathan",
""
]
] ] | Incorporating style-related objectives into shape design has been centrally important to maximize product appeal. However, stylistic features such as aesthetics and semantic attributes are hard to codify even for experts. As such, algorithmic style capture and reuse have not fully benefited from automated data-driven methodologies due to the challenging nature of design describability. This paper proposes an AI-driven method to fully automate the discovery of brand-related features. Our approach introduces BIGNet, a two-tier Brand Identification Graph Neural Network (GNN) to classify and analyze scalable vector graphics (SVG). First, to tackle the scarcity of vectorized product images, this research proposes two data acquisition workflows: parametric modeling from small curve-based datasets, and vectorization from large pixel-based datasets. Secondly, this study constructs a novel hierarchical GNN architecture to learn from both SVG's curve-level and chunk-level parameters. In the first case study, BIGNet not only classifies phone brands but also captures brand-related features across multiple scales, such as the location of the lens, the height-width ratio, and the screen-frame gap, as confirmed by AI evaluation. In the second study, this paper showcases the generalizability of BIGNet learning from a vectorized car image dataset and validates the consistency and robustness of its predictions given four scenarios. The results match the difference commonly observed in luxury vs. economy brands in the automobile market. Finally, this paper also visualizes the activation maps generated from a convolutional neural network and shows BIGNet's advantage of being a more human-friendly, explainable, and explicit style-capturing agent. Code and dataset can be found on Github: 1. Phone case study: github.com/parksandrecfan/bignet-phone 2. Car case study: github.com/parksandrecfan/bignet-car |
1901.00534 | Anna Smagina Mrs | Anna Smagina, Valentina Bozhkova, Sergey Gladilin, Dmitry Nikolaev | Linear colour segmentation revisited | null | Proc. SPIE 11041, Eleventh International Conference on Machine
Vision (ICMV 2018) | 10.1117/12.2523007 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work we discuss the known algorithms for linear colour segmentation
based on a physical approach and propose a new modification of the segmentation
algorithm. This algorithm is based on a region adjacency graph framework
without a pre-segmentation stage. The proposed edge weight functions are defined
from a linear image model with normal noise. The colour space projective
transform is introduced as a novel pre-processing technique for better handling
of shadow and highlight areas. The resulting algorithm is tested on a benchmark
dataset consisting of the images of 19 natural scenes selected from the
Barnard's DXC-930 SFU dataset and 12 natural scene images newly published for
common use. The dataset is provided with pixel-by-pixel ground truth colour
segmentation for every image. Using this dataset, we show that the proposed
algorithm modifications lead to qualitative advantages over other model-based
segmentation algorithms, and also show the positive effect of each proposed
modification. The source code and datasets for this work are available for free
access at http://github.com/visillect/segmentation.
| [
{
"created": "Wed, 2 Jan 2019 21:06:55 GMT",
"version": "v1"
}
] | 2019-03-26 | [
[
"Smagina",
"Anna",
""
],
[
"Bozhkova",
"Valentina",
""
],
[
"Gladilin",
"Sergey",
""
],
[
"Nikolaev",
"Dmitry",
""
]
] ] | In this work we discuss the known algorithms for linear colour segmentation based on a physical approach and propose a new modification of the segmentation algorithm. This algorithm is based on a region adjacency graph framework without a pre-segmentation stage. The proposed edge weight functions are defined from a linear image model with normal noise. The colour space projective transform is introduced as a novel pre-processing technique for better handling of shadow and highlight areas. The resulting algorithm is tested on a benchmark dataset consisting of the images of 19 natural scenes selected from the Barnard's DXC-930 SFU dataset and 12 natural scene images newly published for common use. The dataset is provided with pixel-by-pixel ground truth colour segmentation for every image. Using this dataset, we show that the proposed algorithm modifications lead to qualitative advantages over other model-based segmentation algorithms, and also show the positive effect of each proposed modification. The source code and datasets for this work are available for free access at http://github.com/visillect/segmentation. |
2103.04536 | Medhat Elsayed | Medhat Elsayed, Melike Erol-Kantarci | AI-enabled Future Wireless Networks: Challenges, Opportunities and Open
Issues | null | IEEE Vehicular Technology Magazine ( Volume: 14, Issue: 3, Sept.
2019) | 10.1109/MVT.2019.2919236 | null | cs.NI | http://creativecommons.org/licenses/by/4.0/ | A plethora of demanding services and use cases mandate a revolutionary shift
in the management of future wireless network resources. Indeed, when tight
quality of service demands of applications are combined with increased
complexity of the network, legacy network management routines will become
unfeasible in 6G. Artificial Intelligence (AI) is emerging as a fundamental
enabler to orchestrate the network resources from bottom to top. AI-enabled
radio access and AI-enabled core will open up new opportunities for automated
configuration of 6G. On the other hand, there are many challenges in AI-enabled
networks that need to be addressed. Long convergence time, memory complexity,
and complex behaviour of machine learning algorithms under uncertainty as well
as highly dynamic channel, traffic and mobility conditions of the network
contribute to the challenges. In this paper, we survey the state-of-the-art
research in utilizing machine learning techniques to improve the performance
of wireless networks. In addition, we identify challenges and open issues to
provide a roadmap for researchers.
| [
{
"created": "Mon, 8 Mar 2021 03:59:41 GMT",
"version": "v1"
}
] | 2021-03-09 | [
[
"Elsayed",
"Medhat",
""
],
[
"Erol-Kantarci",
"Melike",
""
]
] ] | A plethora of demanding services and use cases mandate a revolutionary shift in the management of future wireless network resources. Indeed, when tight quality of service demands of applications are combined with increased complexity of the network, legacy network management routines will become unfeasible in 6G. Artificial Intelligence (AI) is emerging as a fundamental enabler to orchestrate the network resources from bottom to top. AI-enabled radio access and AI-enabled core will open up new opportunities for automated configuration of 6G. On the other hand, there are many challenges in AI-enabled networks that need to be addressed. Long convergence time, memory complexity, and complex behaviour of machine learning algorithms under uncertainty as well as highly dynamic channel, traffic and mobility conditions of the network contribute to the challenges. In this paper, we survey the state-of-the-art research in utilizing machine learning techniques to improve the performance of wireless networks. In addition, we identify challenges and open issues to provide a roadmap for researchers. |
1709.07472 | Nicholas Rotella | Nicholas Rotella, Stefan Schaal and Ludovic Righetti | Unsupervised Contact Learning for Humanoid Estimation and Control | Submitted to the IEEE International Conference on Robotics and
Automation (ICRA) 2018 | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work presents a method for contact state estimation using fuzzy
clustering to learn contact probability for full, six-dimensional humanoid
contacts. The data required for training is solely from proprioceptive sensors
- endeffector contact wrench sensors and inertial measurement units (IMUs) -
and the method is completely unsupervised. The resulting cluster means are used
to efficiently compute the probability of contact in each of the six
endeffector degrees of freedom (DoFs) independently. This clustering-based
contact probability estimator is validated in a kinematics-based base state
estimator in a simulation environment with realistic added sensor noise for
locomotion over rough, low-friction terrain on which the robot is subject to
foot slip and rotation. The proposed base state estimator which utilizes these
six DoF contact probability estimates is shown to perform considerably better
than that which determines kinematic contact constraints purely based on
measured normal force.
| [
{
"created": "Thu, 21 Sep 2017 18:23:54 GMT",
"version": "v1"
}
] | 2017-09-25 | [
[
"Rotella",
"Nicholas",
""
],
[
"Schaal",
"Stefan",
""
],
[
"Righetti",
"Ludovic",
""
]
] | This work presents a method for contact state estimation using fuzzy clustering to learn contact probability for full, six-dimensional humanoid contacts. The data required for training is solely from proprioceptive sensors - endeffector contact wrench sensors and inertial measurement units (IMUs) - and the method is completely unsupervised. The resulting cluster means are used to efficiently compute the probability of contact in each of the six endeffector degrees of freedom (DoFs) independently. This clustering-based contact probability estimator is validated in a kinematics-based base state estimator in a simulation environment with realistic added sensor noise for locomotion over rough, low-friction terrain on which the robot is subject to foot slip and rotation. The proposed base state estimator which utilizes these six DoF contact probability estimates is shown to perform considerably better than that which determines kinematic contact constraints purely based on measured normal force. |
1702.02422 | Anas Al-Oraiqat Dr. | Anas M. Al-Oraiqat | Parallel implementation of a vehicle rail dynamical model for multi-core
systems | 8 pages, 9 Figures | International Journal of advanced studies in Computer Science and
Engineering (IJASCSE) 2012 | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This research presents a model of a complex dynamic object running on a
multi-core system. Discretization and numerical integration for multibody
models of vehicle rail elements fluctuating in the vertical longitudinal plane
are considered. The implemented model and solution of the motion differential
equations allow estimating the basic processes occurring in the system with
various external influences. Hence the developed programming model can be used
for performing analysis and comparing new vehicle designs.
Keywords-dynamic model; multi-core system; SMP system; rolling stock.
| [
{
"created": "Wed, 8 Feb 2017 13:45:50 GMT",
"version": "v1"
}
] | 2017-02-09 | [
[
"Al-Oraiqat",
"Anas M.",
""
]
] ] | This research presents a model of a complex dynamic object running on a multi-core system. Discretization and numerical integration for multibody models of vehicle rail elements fluctuating in the vertical longitudinal plane are considered. The implemented model and solution of the motion differential equations allow estimating the basic processes occurring in the system with various external influences. Hence the developed programming model can be used for performing analysis and comparing new vehicle designs. Keywords-dynamic model; multi-core system; SMP system; rolling stock. |
1006.2204 | Nan Rong | Joseph Y. Halpern, Nan Rong, Ashutosh Saxena | MDPs with Unawareness | 11 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Markov decision processes (MDPs) are widely used for modeling decision-making
problems in robotics, automated control, and economics. Traditional MDPs assume
that the decision maker (DM) knows all states and actions. However, this may
not be true in many situations of interest. We define a new framework, MDPs
with unawareness (MDPUs) to deal with the possibilities that a DM may not be
aware of all possible actions. We provide a complete characterization of when a
DM can learn to play near-optimally in an MDPU, and give an algorithm that
learns to play near-optimally when it is possible to do so, as efficiently as
possible. In particular, we characterize when a near-optimal solution can be
found in polynomial time.
| [
{
"created": "Fri, 11 Jun 2010 06:18:27 GMT",
"version": "v1"
}
] | 2010-06-14 | [
[
"Halpern",
"Joseph Y.",
""
],
[
"Rong",
"Nan",
""
],
[
"Saxena",
"Ashutosh",
""
]
] | Markov decision processes (MDPs) are widely used for modeling decision-making problems in robotics, automated control, and economics. Traditional MDPs assume that the decision maker (DM) knows all states and actions. However, this may not be true in many situations of interest. We define a new framework, MDPs with unawareness (MDPUs) to deal with the possibilities that a DM may not be aware of all possible actions. We provide a complete characterization of when a DM can learn to play near-optimally in an MDPU, and give an algorithm that learns to play near-optimally when it is possible to do so, as efficiently as possible. In particular, we characterize when a near-optimal solution can be found in polynomial time. |
2404.09814 | Man Wang | Man Wang, Zheng Shi, Yunfei Li, Xianda Wu, Weiqiang Tan, and Xinrong
Ye | A Novel HARQ-CC Assisted SCMA Scheme | null | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This letter proposes a novel hybrid automatic repeat request with chase
combining assisted sparse code multiple access (HARQ-CC-SCMA) scheme. Depending
on whether the same superimposed packets are retransmitted, synchronous and
asynchronous modes are considered for retransmissions. Moreover, factor graph
aggregation (FGA) and log-likelihood ratio combination (LLRC) are proposed for
multi-user detection. Regarding FGA, a large-scale factor graph is constructed
by combining all the received superimposed signals, and the message passing
algorithm (MPA) is applied to calculate the log-likelihood ratio (LLR). In
contrast, since the same unsuccessful messages are required to be retransmitted,
LLRC adds up the LLRs of erroneously received packets in previous HARQ rounds
together with those of currently received packets for channel decoding and saves
the LLRs for failed users. Finally, Monte Carlo simulations are performed to
show that FGA surpasses LLRC and HARQ with incremental redundancy (HARQ-IR) in
synchronous mode. However, LLRC performs better than FGA at low signal-to-noise
ratio (SNR) in asynchronous mode. This is because failed messages after the
maximum allowable HARQ rounds in this mode can yield significant error
propagation in the low SNR regime.
| [
{
"created": "Mon, 15 Apr 2024 14:12:12 GMT",
"version": "v1"
}
] | 2024-04-16 | [
[
"Wang",
"Man",
""
],
[
"Shi",
"Zheng",
""
],
[
"Li",
"Yunfei",
""
],
[
"Wu",
"Xianda",
""
],
[
"Tan",
"Weiqiang",
""
],
[
"Ye",
"Xinrong",
""
]
] ] | This letter proposes a novel hybrid automatic repeat request with chase combining assisted sparse code multiple access (HARQ-CC-SCMA) scheme. Depending on whether the same superimposed packets are retransmitted, synchronous and asynchronous modes are considered for retransmissions. Moreover, factor graph aggregation (FGA) and log-likelihood ratio combination (LLRC) are proposed for multi-user detection. Regarding FGA, a large-scale factor graph is constructed by combining all the received superimposed signals, and the message passing algorithm (MPA) is applied to calculate the log-likelihood ratio (LLR). In contrast, since the same unsuccessful messages are required to be retransmitted, LLRC adds up the LLRs of erroneously received packets in previous HARQ rounds together with those of currently received packets for channel decoding and saves the LLRs for failed users. Finally, Monte Carlo simulations are performed to show that FGA surpasses LLRC and HARQ with incremental redundancy (HARQ-IR) in synchronous mode. However, LLRC performs better than FGA at low signal-to-noise ratio (SNR) in asynchronous mode. This is because failed messages after the maximum allowable HARQ rounds in this mode can yield significant error propagation in the low SNR regime. |
1906.03532 | S\'ebastien Arnold | S\'ebastien M. R. Arnold, Pierre-Antoine Manzagol, Reza Babanezhad,
Ioannis Mitliagkas, Nicolas Le Roux | Reducing the variance in online optimization by transporting past
gradients | Open-source implementation available at:
https://github.com/seba-1511/igt.pth | null | null | null | cs.LG math.OC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most stochastic optimization methods use gradients once before discarding
them. While variance reduction methods have shown that reusing past gradients
can be beneficial when there is a finite number of datapoints, they do not
easily extend to the online setting. One issue is the staleness due to using
past gradients. We propose to correct this staleness using the idea of implicit
gradient transport (IGT) which transforms gradients computed at previous
iterates into gradients evaluated at the current iterate without using the
Hessian explicitly. In addition to reducing the variance and bias of our
updates over time, IGT can be used as a drop-in replacement for the gradient
estimate in a number of well-understood methods such as heavy ball or Adam. We
show experimentally that it achieves state-of-the-art results on a wide range
of architectures and benchmarks. Additionally, the IGT gradient estimator
yields the optimal asymptotic convergence rate for online stochastic
optimization in the restricted setting where the Hessians of all component
functions are equal.
| [
{
"created": "Sat, 8 Jun 2019 22:02:28 GMT",
"version": "v1"
},
{
"created": "Tue, 18 Jun 2019 17:12:22 GMT",
"version": "v2"
}
] | 2019-06-19 | [
[
"Arnold",
"Sébastien M. R.",
""
],
[
"Manzagol",
"Pierre-Antoine",
""
],
[
"Babanezhad",
"Reza",
""
],
[
"Mitliagkas",
"Ioannis",
""
],
[
"Roux",
"Nicolas Le",
""
]
] | Most stochastic optimization methods use gradients once before discarding them. While variance reduction methods have shown that reusing past gradients can be beneficial when there is a finite number of datapoints, they do not easily extend to the online setting. One issue is the staleness due to using past gradients. We propose to correct this staleness using the idea of implicit gradient transport (IGT) which transforms gradients computed at previous iterates into gradients evaluated at the current iterate without using the Hessian explicitly. In addition to reducing the variance and bias of our updates over time, IGT can be used as a drop-in replacement for the gradient estimate in a number of well-understood methods such as heavy ball or Adam. We show experimentally that it achieves state-of-the-art results on a wide range of architectures and benchmarks. Additionally, the IGT gradient estimator yields the optimal asymptotic convergence rate for online stochastic optimization in the restricted setting where the Hessians of all component functions are equal. |
1904.05390 | Joshua Brakensiek | Joshua Brakensiek and Aviad Rubinstein | Constant-factor approximation of near-linear edit distance in
near-linear time | 40 pages, 4 figures | null | null | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We show that the edit distance between two strings of length $n$ can be
computed within a factor of $f(\epsilon)$ in $n^{1+\epsilon}$ time as long as
the edit distance is at least $n^{1-\delta}$ for some $\delta(\epsilon) > 0$.
| [
{
"created": "Wed, 10 Apr 2019 18:55:56 GMT",
"version": "v1"
},
{
"created": "Tue, 28 Jan 2020 18:54:41 GMT",
"version": "v2"
}
] | 2020-01-29 | [
[
"Brakensiek",
"Joshua",
""
],
[
"Rubinstein",
"Aviad",
""
]
] | We show that the edit distance between two strings of length $n$ can be computed within a factor of $f(\epsilon)$ in $n^{1+\epsilon}$ time as long as the edit distance is at least $n^{1-\delta}$ for some $\delta(\epsilon) > 0$. |
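For reference, the quantity being approximated is the classical edit distance, computable exactly by the textbook $O(nm)$ dynamic program sketched below; the result above trades exactness for near-linear time when the distance is large.

```python
def edit_distance(a, b):
    # Textbook O(nm) dynamic program over a rolling row of the DP table.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # delete from a
                           cur[j - 1] + 1,              # insert into a
                           prev[j - 1] + (ca != cb)))   # substitute / match
        prev = cur
    return prev[-1]

d = edit_distance("kitten", "sitting")  # classic example: distance 3
```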
2303.09658 | Min Hua | Min Hua, Cetengfei Zhang, Fanggang Zhang, Zhi Li, Xiaoli Yu, Hongming
Xu, Quan Zhou | Energy Management of Multi-mode Plug-in Hybrid Electric Vehicle using
Multi-agent Deep Reinforcement Learning | null | null | null | null | cs.RO cs.LG cs.SY eess.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The recently emerging multi-mode plug-in hybrid electric vehicle (PHEV)
technology is one of the pathways making contributions to decarbonization, and
its energy management requires multiple-input multiple-output (MIMO)
control. At present, existing methods usually decouple the MIMO control
into multiple-input single-output (MISO) control and can only achieve locally optimal
performance. To optimize the multi-mode vehicle globally, this paper studies a
MIMO control method for energy management of the multi-mode PHEV based on
multi-agent deep reinforcement learning (MADRL). By introducing a relevance
ratio, a hand-shaking strategy is proposed to enable two learning agents to
work collaboratively under the MADRL framework using the deep deterministic
policy gradient (DDPG) algorithm. Unified settings for the DDPG agents are
obtained through a sensitivity analysis of the influencing factors to the
learning performance. The optimal working mode for the hand-shaking strategy is
attained through a parametric study on the relevance ratio. The advantage of
the proposed energy management method is demonstrated on a software-in-the-loop
testing platform. The result of the study indicates that the learning rate of
the DDPG agents is the greatest influencing factor for learning performance.
Using the unified DDPG settings and a relevance ratio of 0.2, the proposed
MADRL system can save up to 4% energy compared to the single-agent learning
system and up to 23.54% energy compared to the conventional rule-based system.
| [
{
"created": "Thu, 16 Mar 2023 21:31:55 GMT",
"version": "v1"
},
{
"created": "Mon, 28 Aug 2023 00:36:11 GMT",
"version": "v2"
}
] | 2023-08-29 | [
[
"Hua",
"Min",
""
],
[
"Zhang",
"Cetengfei",
""
],
[
"Zhang",
"Fanggang",
""
],
[
"Li",
"Zhi",
""
],
[
"Yu",
"Xiaoli",
""
],
[
"Xu",
"Hongming",
""
],
[
"Zhou",
"Quan",
""
]
] | The recently emerging multi-mode plug-in hybrid electric vehicle (PHEV) technology is one of the pathways making contributions to decarbonization, and its energy management requires multiple-input multiple-output (MIMO) control. At present, existing methods usually decouple the MIMO control into multiple-input single-output (MISO) control and can only achieve locally optimal performance. To optimize the multi-mode vehicle globally, this paper studies a MIMO control method for energy management of the multi-mode PHEV based on multi-agent deep reinforcement learning (MADRL). By introducing a relevance ratio, a hand-shaking strategy is proposed to enable two learning agents to work collaboratively under the MADRL framework using the deep deterministic policy gradient (DDPG) algorithm. Unified settings for the DDPG agents are obtained through a sensitivity analysis of the influencing factors to the learning performance. The optimal working mode for the hand-shaking strategy is attained through a parametric study on the relevance ratio. The advantage of the proposed energy management method is demonstrated on a software-in-the-loop testing platform. The result of the study indicates that the learning rate of the DDPG agents is the greatest influencing factor for learning performance. Using the unified DDPG settings and a relevance ratio of 0.2, the proposed MADRL system can save up to 4% energy compared to the single-agent learning system and up to 23.54% energy compared to the conventional rule-based system. |
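The role of the relevance ratio can be pictured with a toy sketch. Everything below is a hypothetical reading, not the paper's exact formulation: the agent names, the additive coupling, and the 0.2 default are assumptions. Each agent's reward is its own objective plus a relevance-weighted share of the other's, so a ratio of 0 decouples the agents and 1 makes the objective fully shared.

```python
def coupled_rewards(r_engine, r_motor, relevance=0.2):
    """Hypothetical hand-shaking reward coupling for two DDPG agents.

    relevance = 0 trains the agents on independent objectives;
    relevance = 1 gives both agents the fully shared sum r_engine + r_motor.
    """
    return (r_engine + relevance * r_motor,
            r_motor + relevance * r_engine)

r1, r2 = coupled_rewards(1.0, 0.5)
```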
2405.06209 | Corrine Yap | Aiya Kuchukova, Marcus Pappik, Will Perkins, Corrine Yap | Fast and Slow Mixing of the Kawasaki Dynamics on Bounded-Degree Graphs | null | null | null | null | cs.DS math.CO math.PR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the worst-case mixing time of the global Kawasaki dynamics for the
fixed-magnetization Ising model on the class of graphs of maximum degree
$\Delta$. Proving a conjecture of Carlson, Davies, Kolla, and Perkins, we show
that below the tree uniqueness threshold, the Kawasaki dynamics mix rapidly for
all magnetizations. Disproving a conjecture of Carlson, Davies, Kolla, and
Perkins, we show that the regime of fast mixing does not extend throughout the
regime of tractability for this model: there is a range of parameters for which
there exist efficient sampling algorithms for the fixed-magnetization Ising
model on max-degree $\Delta$ graphs, but the Kawasaki dynamics can take
exponential time to mix. Our techniques involve showing spectral independence
in the fixed-magnetization Ising model and proving a sharp threshold for the
existence of multiple metastable states in the Ising model with external field
on random regular graphs.
| [
{
"created": "Fri, 10 May 2024 02:45:54 GMT",
"version": "v1"
}
] | 2024-05-13 | [
[
"Kuchukova",
"Aiya",
""
],
[
"Pappik",
"Marcus",
""
],
[
"Perkins",
"Will",
""
],
[
"Yap",
"Corrine",
""
]
] | We study the worst-case mixing time of the global Kawasaki dynamics for the fixed-magnetization Ising model on the class of graphs of maximum degree $\Delta$. Proving a conjecture of Carlson, Davies, Kolla, and Perkins, we show that below the tree uniqueness threshold, the Kawasaki dynamics mix rapidly for all magnetizations. Disproving a conjecture of Carlson, Davies, Kolla, and Perkins, we show that the regime of fast mixing does not extend throughout the regime of tractability for this model: there is a range of parameters for which there exist efficient sampling algorithms for the fixed-magnetization Ising model on max-degree $\Delta$ graphs, but the Kawasaki dynamics can take exponential time to mix. Our techniques involve showing spectral independence in the fixed-magnetization Ising model and proving a sharp threshold for the existence of multiple metastable states in the Ising model with external field on random regular graphs. |
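A single move of the global Kawasaki dynamics can be sketched as a Metropolis-filtered spin swap, which conserves the magnetization by construction. The uniform choice of the pair and the ferromagnetic nearest-neighbour energy are standard conventions assumed here for illustration.

```python
import math
import random

def kawasaki_step(spins, adj, beta, rng):
    """One Metropolis-filtered global Kawasaki move (sketch): propose swapping
    the spins at two uniformly random vertices, which leaves sum(spins) fixed."""
    u, v = rng.sample(range(len(spins)), 2)
    if spins[u] == spins[v]:
        return                                    # swap would change nothing
    def local(w, s):                              # ferromagnetic local energy
        # the u-v interaction (if any) is invariant under the swap, so it is
        # excluded from both the before and after sums
        return -s * sum(spins[x] for x in adj[w] if x not in (u, v))
    before = local(u, spins[u]) + local(v, spins[v])
    after = local(u, spins[v]) + local(v, spins[u])
    if rng.random() < min(1.0, math.exp(-beta * (after - before))):
        spins[u], spins[v] = spins[v], spins[u]

# demo: 200 moves on a 6-cycle at inverse temperature 1
spins = [1, 1, 1, -1, -1, -1]
adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
rng = random.Random(0)
for _ in range(200):
    kawasaki_step(spins, adj, beta=1.0, rng=rng)
```

The mixing-time question studied in the abstract is how many such moves are needed before the chain's distribution is close to the fixed-magnetization Gibbs measure.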
1611.09564 | Nhien-An Le-Khac | Muhammad Faheem, M-Tahar Kechadi, Nhien-An Le-Khac | Toward a new mobile cloud forensic framework | null | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Smartphones have had a significant impact on the day-to-day activities of
every individual. Nowadays, a wide range of smartphone applications is
available, and building them requires high computing resources. Cloud
computing offers enormous resources and extends services to
resource-constrained mobile devices. Mobile Cloud Computing is emerging as a
key technology to utilize virtually unlimited resources over the Internet using
Smartphones. Offloading data and computations improves productivity, enhances
performance, saves energy, and improves the user experience. Social network
applications largely utilize Mobile Cloud Computing to reap the benefits. The
social network has witnessed unprecedented growth in recent years, and
millions of registered users access it using Smartphones. The mobile cloud
social network applications introduce not only convenience but also various
issues related to criminal and illegal activities. Despite being primarily used
to communicate and socialize with contacts, the multifarious and anonymous
nature of social networking websites increases susceptibility to cybercrimes.
Taking into account the advantages of mobile cloud computing and the
popularity of social network applications, it is essential to establish a
forensic framework based on a mobile cloud platform that meets today's
forensic requirements. In this paper, we present a mobile cloud forensic framework that
allows the forensic investigator to collect the automated synchronized copies
of data on both mobile and cloud servers to prove the evidence of cloud usage.
We also show our preliminary results of this study.
| [
{
"created": "Tue, 29 Nov 2016 10:53:29 GMT",
"version": "v1"
}
] | 2016-11-30 | [
[
"Faheem",
"Muhammad",
""
],
[
"Kechadi",
"M-Tahar",
""
],
[
"Le-Khac",
"Nhien-An",
""
]
] | Smartphones have had a significant impact on the day-to-day activities of every individual. Nowadays, a wide range of smartphone applications is available, and building them requires high computing resources. Cloud computing offers enormous resources and extends services to resource-constrained mobile devices. Mobile Cloud Computing is emerging as a key technology to utilize virtually unlimited resources over the Internet using Smartphones. Offloading data and computations improves productivity, enhances performance, saves energy, and improves the user experience. Social network applications largely utilize Mobile Cloud Computing to reap the benefits. The social network has witnessed unprecedented growth in recent years, and millions of registered users access it using Smartphones. The mobile cloud social network applications introduce not only convenience but also various issues related to criminal and illegal activities. Despite being primarily used to communicate and socialize with contacts, the multifarious and anonymous nature of social networking websites increases susceptibility to cybercrimes. Taking into account the advantages of mobile cloud computing and the popularity of social network applications, it is essential to establish a forensic framework based on a mobile cloud platform that meets today's forensic requirements. In this paper, we present a mobile cloud forensic framework that allows the forensic investigator to collect the automated synchronized copies of data on both mobile and cloud servers to prove the evidence of cloud usage. We also show our preliminary results of this study. |
2101.02522 | Vincent Aranega | Ronie Salgado, Marcus Denker (RMOD), St\'ephane Ducasse (RMOD), Anne
Etien (RMOD), Vincent Aranega (RMOD) | Towards a Smart Data Processing and Storage Model | null | IWST20: International Workshop on Smalltalk Technologies, Sep
2020, Novi Sad, Serbia | null | null | cs.CL cs.PL cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In several domains it is crucial to store and manipulate data whose origin
needs to be completely traceable to guarantee the consistency, trustworthiness
and reliability of the data itself, typically for ethical and legal reasons. It
is also important to guarantee that such properties are also carried further
when such data is composed and processed into new data. In this article we
present the main requirements and theoretical problems that arise in the
design of a system supporting data with such capabilities. We present an
architecture for implementing a system as well as a prototype developed in
Pharo.
| [
{
"created": "Thu, 7 Jan 2021 12:52:11 GMT",
"version": "v1"
}
] | 2021-01-08 | [
[
"Salgado",
"Ronie",
"",
"RMOD"
],
[
"Denker",
"Marcus",
"",
"RMOD"
],
[
"Ducasse",
"Stéphane",
"",
"RMOD"
],
[
"Etien",
"Anne",
"",
"RMOD"
],
[
"Aranega",
"Vincent",
"",
"RMOD"
]
] | In several domains it is crucial to store and manipulate data whose origin needs to be completely traceable to guarantee the consistency, trustworthiness and reliability of the data itself, typically for ethical and legal reasons. It is also important to guarantee that such properties are also carried further when such data is composed and processed into new data. In this article we present the main requirements and theoretical problems that arise in the design of a system supporting data with such capabilities. We present an architecture for implementing a system as well as a prototype developed in Pharo. |
2004.07699 | Stefano Dafarra | Stefano Dafarra | Predictive Whole-Body Control of Humanoid Robot Locomotion | This work is a Ph.D. thesis enclosing the following articles:
arXiv:1705.10638, arXiv:1807.05395, arXiv:2003.04633 and arXiv:2004.12083 | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Humanoid robots are machines built with an anthropomorphic shape. Despite
decades of research into the subject, it is still challenging to tackle the
robot locomotion problem from an algorithmic point of view. For example, these
machines cannot achieve a constant forward body movement without exploiting
contacts with the environment. The reactive forces resulting from the contacts
are subject to strong limitations, complicating the design of control laws. As
a consequence, generating humanoid motions requires fully exploiting the
mathematical model of the robot in contact with the environment or to resort to
approximations of it.
This thesis investigates predictive and optimal control techniques for
tackling humanoid robot motion tasks. They generate control input values from
the system model and objectives, the latter often transposed into a cost function to minimize.
In particular, this thesis tackles several aspects of the humanoid robot
locomotion problem in a crescendo of complexity. First, we consider the single
step push recovery problem. Namely, we aim at maintaining the upright posture
with a single step after a strong external disturbance. Second, we generate and
stabilize walking motions. In addition, we adopt predictive techniques to
perform more dynamic motions, like large step-ups.
The above-mentioned applications make use of different simplifications or
assumptions to facilitate the tractability of the corresponding motion tasks.
Moreover, they consider first the foot placements and only afterward how to
maintain balance. We attempt to remove all these simplifications. [continued]
| [
{
"created": "Thu, 16 Apr 2020 15:11:53 GMT",
"version": "v1"
}
] | 2020-04-28 | [
[
"Dafarra",
"Stefano",
""
]
] | Humanoid robots are machines built with an anthropomorphic shape. Despite decades of research into the subject, it is still challenging to tackle the robot locomotion problem from an algorithmic point of view. For example, these machines cannot achieve a constant forward body movement without exploiting contacts with the environment. The reactive forces resulting from the contacts are subject to strong limitations, complicating the design of control laws. As a consequence, the generation of humanoid motions requires to exploit fully the mathematical model of the robot in contact with the environment or to resort to approximations of it. This thesis investigates predictive and optimal control techniques for tackling humanoid robot motion tasks. They generate control input values from the system model and objectives, often transposed as cost function to minimize. In particular, this thesis tackles several aspects of the humanoid robot locomotion problem in a crescendo of complexity. First, we consider the single step push recovery problem. Namely, we aim at maintaining the upright posture with a single step after a strong external disturbance. Second, we generate and stabilize walking motions. In addition, we adopt predictive techniques to perform more dynamic motions, like large step-ups. The above-mentioned applications make use of different simplifications or assumptions to facilitate the tractability of the corresponding motion tasks. Moreover, they consider first the foot placements and only afterward how to maintain balance. We attempt to remove all these simplifications. [continued] |
2106.00641 | Jinlan Fu | Jinlan Fu, Xuanjing Huang, Pengfei Liu | SpanNER: Named Entity Re-/Recognition as Span Prediction | Accepted by ACL 2021 (Main track) | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Recent years have seen the paradigm shift of Named Entity Recognition (NER)
systems from sequence labeling to span prediction. Despite its preliminary
effectiveness, the span prediction model's architectural bias has not been
fully understood. In this paper, we first investigate the strengths and
weaknesses when the span prediction model is used for named entity recognition
compared with the sequence labeling framework and how to further improve it,
which motivates us to combine the complementary advantages of systems based on
different paradigms. We then reveal that span prediction, simultaneously, can
serve as a system combiner to re-recognize named entities from different
systems' outputs. We experimentally implement 154 systems on 11 datasets,
covering three languages; comprehensive results show the effectiveness of span
prediction models that both serve as base NER systems and system combiners. We
make all code and datasets available: \url{https://github.com/neulab/spanner},
as well as an online system demo: \url{http://spanner.sh}. Our model also has
been deployed into the ExplainaBoard platform, which allows users to flexibly
perform a system combination of top-scoring systems in an interactive way:
\url{http://explainaboard.nlpedia.ai/leaderboard/task-ner/}.
| [
{
"created": "Tue, 1 Jun 2021 17:11:42 GMT",
"version": "v1"
},
{
"created": "Sun, 6 Jun 2021 04:54:23 GMT",
"version": "v2"
}
] | 2021-06-08 | [
[
"Fu",
"Jinlan",
""
],
[
"Huang",
"Xuanjing",
""
],
[
"Liu",
"Pengfei",
""
]
] | Recent years have seen the paradigm shift of Named Entity Recognition (NER) systems from sequence labeling to span prediction. Despite its preliminary effectiveness, the span prediction model's architectural bias has not been fully understood. In this paper, we first investigate the strengths and weaknesses when the span prediction model is used for named entity recognition compared with the sequence labeling framework and how to further improve it, which motivates us to make complementary advantages of systems based on different paradigms. We then reveal that span prediction, simultaneously, can serve as a system combiner to re-recognize named entities from different systems' outputs. We experimentally implement 154 systems on 11 datasets, covering three languages, comprehensive results show the effectiveness of span prediction models that both serve as base NER systems and system combiners. We make all code and datasets available: \url{https://github.com/neulab/spanner}, as well as an online system demo: \url{http://spanner.sh}. Our model also has been deployed into the ExplainaBoard platform, which allows users to flexibly perform a system combination of top-scoring systems in an interactive way: \url{http://explainaboard.nlpedia.ai/leaderboard/task-ner/}. |
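The span-prediction view of NER described above can be made concrete with a small sketch: enumerate all candidate spans up to a length cap and let a classifier score each one against the entity types plus a non-entity label. The function name and the length cap are illustrative assumptions.

```python
def enumerate_spans(tokens, max_len=4):
    """All candidate (start, end) spans, end exclusive, up to max_len tokens.

    A span-prediction NER model scores every such span against the entity
    types (plus a non-entity label); a system combiner can instead score
    spans proposed by several base systems' outputs.
    """
    return [(i, j) for i in range(len(tokens))
                   for j in range(i + 1, min(i + max_len, len(tokens)) + 1)]

spans = enumerate_spans(["New", "York", "is", "big"], max_len=2)
```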
2309.00811 | Zhen Shang | Zhen Shang (1), Jin-Kao Hao (2), Fei Ma (1) ((1) School of Economics
and Management, Chang'an University, China, (2) LERIA, Universit\'e d'Angers,
Angers, France) | A double-decomposition based parallel exact algorithm for the feedback
length minimization problem | This paper has been accepted by PeerJ Computer Science on August 28,
2023 | null | 10.7717/peerj-cs.1597 | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Product development projects usually contain many interrelated activities
with complex information dependencies, which induce activity rework, project
delay and cost overrun. To reduce negative impacts, scheduling interrelated
activities in an appropriate sequence is an important issue for project
managers. This study develops a double-decomposition based parallel
branch-and-prune algorithm to determine the optimal activity sequence that
minimizes the total feedback length (FLMP). This algorithm decomposes FLMP from
two perspectives, which enables the use of all available computing resources to
solve subproblems concurrently. In addition, we propose a result-compression
strategy and a hash-address strategy to enhance this algorithm. Experimental
results indicate that our algorithm can find the optimal sequence for FLMP up
to 27 activities within 1 hour, and outperforms state-of-the-art exact
algorithms.
| [
{
"created": "Sat, 2 Sep 2023 03:28:59 GMT",
"version": "v1"
},
{
"created": "Wed, 13 Sep 2023 12:49:11 GMT",
"version": "v2"
}
] | 2023-09-14 | [
[
"Shang",
"Zhen",
""
],
[
"Hao",
"Jin-Kao",
""
],
[
"Ma",
"Fei",
""
]
] | Product development projects usually contain many interrelated activities with complex information dependences, which induce activity rework, project delay and cost overrun. To reduce negative impacts, scheduling interrelated activities in an appropriate sequence is an important issue for project managers. This study develops a double-decomposition based parallel branch-and-prune algorithm, to determine the optimal activity sequence that minimizes the total feedback length (FLMP). This algorithm decomposes FLMP from two perspectives, which enables the use of all available computing resources to solve subproblems concurrently. In addition, we propose a result-compression strategy and a hash-address strategy to enhance this algorithm. Experimental results indicate that our algorithm can find the optimal sequence for FLMP up to 27 activities within 1 hour, and outperforms state of the art exact algorithms. |
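The objective being minimized can be sketched over a design structure matrix. The exact definition of feedback length in the paper may differ in details; here it is taken, as an assumption for illustration, to be the number of positions a dependency feeds backwards when the information source is scheduled after its consumer.

```python
def total_feedback_length(order, dep):
    """Total feedback length of an activity ordering (illustrative sketch).

    dep[i][j] == 1 means activity i needs information from activity j; when j
    is scheduled after i, the dependency feeds back over pos[j] - pos[i]
    positions. A branch-and-prune search minimises this sum over orderings.
    """
    pos = {a: k for k, a in enumerate(order)}
    return sum(pos[j] - pos[i]
               for i in pos for j in pos
               if dep[i][j] and pos[j] > pos[i])

dep = [[0, 1],   # activity 0 needs information from activity 1
       [0, 0]]
f_bad = total_feedback_length([0, 1], dep)   # source scheduled after consumer
f_good = total_feedback_length([1, 0], dep)  # no feedback
```

A branch-and-prune search explores prefixes of the ordering and prunes whenever a lower bound on the remaining feedback already exceeds the best total found so far.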
1409.5987 | Bo Li | Qizhi Fang, Bo Li, Xiaoming Sun, Jia Zhang, and Jialin Zhang | Computing the Least-core and Nucleolus for Threshold Cardinality
Matching Games | null | null | null | null | cs.GT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cooperative games provide a framework for fair and stable profit allocation
in multi-agent systems. \emph{Core}, \emph{least-core} and \emph{nucleolus} are
such solution concepts that characterize stability of cooperation. In this
paper, we study the algorithmic issues on the least-core and nucleolus of
threshold cardinality matching games (TCMG). A TCMG is defined on a graph
$G=(V,E)$ and a threshold $T$, in which the player set is $V$ and the profit of
a coalition $S\subseteq V$ is 1 if the size of a maximum matching in $G[S]$
meets or exceeds $T$, and 0 otherwise. We first show that for a TCMG, the
problems of computing the least-core value and of finding and verifying a
least-core payoff are all polynomial-time solvable. We also provide a general characterization of
the least core for a large class of TCMG. Next, based on Gallai-Edmonds
Decomposition in matching theory, we give a concise formulation of the
nucleolus for a typical case of TCMG in which the threshold $T$ equals $1$. When
the threshold $T$ is related to the input size, we prove that the nucleolus
can be obtained in polynomial time in bipartite graphs and graphs with a
perfect matching.
| [
{
"created": "Sun, 21 Sep 2014 14:19:11 GMT",
"version": "v1"
}
] | 2014-09-23 | [
[
"Fang",
"Qizhi",
""
],
[
"Li",
"Bo",
""
],
[
"Sun",
"Xiaoming",
""
],
[
"Zhang",
"Jia",
""
],
[
"Zhang",
"Jialin",
""
]
] | Cooperative games provide a framework for fair and stable profit allocation in multi-agent systems. \emph{Core}, \emph{least-core} and \emph{nucleolus} are such solution concepts that characterize stability of cooperation. In this paper, we study the algorithmic issues on the least-core and nucleolus of threshold cardinality matching games (TCMG). A TCMG is defined on a graph $G=(V,E)$ and a threshold $T$, in which the player set is $V$ and the profit of a coalition $S\subseteq V$ is 1 if the size of a maximum matching in $G[S]$ meets or exceeds $T$, and 0 otherwise. We first show that for a TCMG, the problems of computing least-core value, finding and verifying least-core payoff are all polynomial time solvable. We also provide a general characterization of the least core for a large class of TCMG. Next, based on Gallai-Edmonds Decomposition in matching theory, we give a concise formulation of the nucleolus for a typical case of TCMG which the threshold $T$ equals $1$. When the threshold $T$ is relevant to the input size, we prove that the nucleolus can be obtained in polynomial time in bipartite graphs and graphs with a perfect matching. |
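The coalition value defined in this abstract is easy to state in code. The brute-force matching routine below is exponential and only meant to make the game's value function concrete on tiny graphs; the paper's algorithms for the least-core and nucleolus are polynomial-time.

```python
def max_matching_size(edges):
    # Brute-force maximum matching: for each edge, branch on taking it
    # (dropping all edges sharing an endpoint) or skipping it.
    if not edges:
        return 0
    (u, v), rest = edges[0], edges[1:]
    skip = max_matching_size(rest)
    take = 1 + max_matching_size([(a, b) for a, b in rest
                                  if not {a, b} & {u, v}])
    return max(skip, take)

def tcmg_value(coalition, edges, threshold):
    """Profit of coalition S in a threshold cardinality matching game:
    1 iff the induced subgraph G[S] has a matching of size >= threshold."""
    induced = [(u, v) for u, v in edges if u in coalition and v in coalition]
    return int(max_matching_size(induced) >= threshold)

path = [(0, 1), (1, 2), (2, 3)]  # a path on four vertices
```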
2402.11547 | Konstantinos Ntougias | Konstantinos Ntougias, Symeon Chatzinotas, Ioannis Krikidis | Hybrid RIS With Sub-Connected Active Partitions: Performance Analysis
and Transmission Design | null | null | null | null | cs.IT eess.SP math.IT | http://creativecommons.org/licenses/by/4.0/ | The emerging reflecting intelligent surface (RIS) technology promises to
enhance the capacity of wireless communication systems via passive reflect
beamforming. However, the product path loss limits its performance gains.
Fully-connected (FC) active RIS, which integrates reflect-type power amplifiers
into the RIS elements, has been recently introduced in response to this issue.
Also, sub-connected (SC) active RIS and hybrid FC-active/passive RIS variants,
which employ a limited number of reflect-type power amplifiers, have been
proposed to provide energy savings. Nevertheless, their flexibility in
balancing diverse capacity requirements and power consumption constraints is
limited. In this direction, this study introduces novel hybrid RIS structures,
wherein at least one reflecting sub-surface (RS) adopts the SC-active RIS
design. The asymptotic signal-to-noise ratio of the FC-active/passive and the
proposed hybrid RIS variants is analyzed in a single-user single-input
single-output setup. Furthermore, the transmit and RIS beamforming weights are
jointly optimized in each scenario to maximize the energy efficiency of a
hybrid RIS-aided multi-user multiple-input single-output downlink system
subject to the power consumption constraints of the base station and the active
RSs. Numerical simulation and analytic results highlight the performance gains
of the proposed RIS designs over benchmarks, unveil non-trivial trade-offs, and
provide valuable insights.
| [
{
"created": "Sun, 18 Feb 2024 11:39:23 GMT",
"version": "v1"
}
] | 2024-02-20 | [
[
"Ntougias",
"Konstantinos",
""
],
[
"Chatzinotas",
"Symeon",
""
],
[
"Krikidis",
"Ioannis",
""
]
] | The emerging reflecting intelligent surface (RIS) technology promises to enhance the capacity of wireless communication systems via passive reflect beamforming. However, the product path loss limits its performance gains. Fully-connected (FC) active RIS, which integrates reflect-type power amplifiers into the RIS elements, has been recently introduced in response to this issue. Also, sub-connected (SC) active RIS and hybrid FC-active/passive RIS variants, which employ a limited number of reflect-type power amplifiers, have been proposed to provide energy savings. Nevertheless, their flexibility in balancing diverse capacity requirements and power consumption constraints is limited. In this direction, this study introduces novel hybrid RIS structures, wherein at least one reflecting sub-surface (RS) adopts the SC-active RIS design. The asymptotic signal-to-noise-ratio of the FC-active/passive and the proposed hybrid RIS variants is analyzed in a single-user single-input single-output setup. Furthermore, the transmit and RIS beamforming weights are jointly optimized in each scenario to maximize the energy efficiency of a hybrid RIS-aided multi-user multiple-input single-output downlink system subject to the power consumption constraints of the base station and the active RSs. Numerical simulation and analytic results highlight the performance gains of the proposed RIS designs over benchmarks, unveil non-trivial trade-offs, and provide valuable insights. |
2207.08120 | Amihood Amir | Ora Amir and Amihood Amir and Aviezri Fraenkel and David Sarne | On the Practical Power of Automata in Pattern Matching | null | null | null | null | cs.DS | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The classical pattern matching paradigm is that of seeking occurrences of one
string - the pattern, in another - the text, where both strings are drawn from
an alphabet set $\Sigma$. Assuming the text length is $n$ and the pattern
length is $m$, this problem can naively be solved in time $O(nm)$. In Knuth,
Morris and Pratt's seminal paper of 1977, an automaton was developed that
allows solving this problem in time $O(n)$ for any alphabet.
This automaton, which we will refer to as the {\em KMP-automaton}, has proven
useful in solving many other problems. A notable example is the {\em
parameterized pattern matching} model. In this model, a consistent renaming of
symbols from $\Sigma$ is allowed in a match. The parameterized matching
paradigm has proven useful in problems in software engineering, computer
vision, and other applications.
It has long been suspected that for texts where the symbols are uniformly
random, the naive algorithm will perform as well as the KMP algorithm. In this
paper we examine the practical efficiency of the KMP algorithm vs. the naive
algorithm on a randomly generated text. We analyse the time under various
parameters, such as alphabet size, pattern length, and the distribution of
pattern occurrences in the text. We do this for both the original exact
matching problem and parameterized matching. While the folklore wisdom is
vindicated by these findings for the exact matching case, surprisingly, the KMP
algorithm works significantly faster than the naive in the parameterized
matching case.
We check this hypothesis for DNA texts, and observe a similar behaviour as in
the random text. We also show a very structured case where the automaton is
much more efficient.
| [
{
"created": "Sun, 17 Jul 2022 09:08:13 GMT",
"version": "v1"
}
] | 2022-07-19 | [
[
"Amir",
"Ora",
""
],
[
"Amir",
"Amihood",
""
],
[
"Fraenkel",
"Aviezri",
""
],
[
"Sarne",
"David",
""
]
] | The classical pattern matching paradigm is that of seeking occurrences of one string - the pattern, in another - the text, where both strings are drawn from an alphabet set $\Sigma$. Assuming the text length is $n$ and the pattern length is $m$, this problem can naively be solved in time $O(nm)$. In Knuth, Morris and Pratt's seminal paper of 1977, an automaton, was developed that allows solving this problem in time $O(n)$ for any alphabet. This automaton, which we will refer to as the {\em KMP-automaton}, has proven useful in solving many other problems. A notable example is the {\em parameterized pattern matching} model. In this model, a consistent renaming of symbols from $\Sigma$ is allowed in a match. The parameterized matching paradigm has proven useful in problems in software engineering, computer vision, and other applications. It has long been suspected that for texts where the symbols are uniformly random, the naive algorithm will perform as well as the KMP algorithm. In this paper we examine the practical efficiency of the KMP algorithm vs. the naive algorithm on a randomly generated text. We analyse the time under various parameters, such as alphabet size, pattern length, and the distribution of pattern occurrences in the text. We do this for both the original exact matching problem and parameterized matching. While the folklore wisdom is vindicated by these findings for the exact matching case, surprisingly, the KMP algorithm works significantly faster than the naive in the parameterized matching case. We check this hypothesis for DNA texts, and observe a similar behaviour as in the random text. We also show a very structured case where the automaton is much more efficient. |
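The KMP-automaton discussed in this abstract can be sketched compactly: the failure function encodes the automaton's transitions, giving an $O(n + m)$ scan for any alphabet, versus the $O(nm)$ worst case of the naive scan the paper benchmarks against.

```python
def kmp_search(text, pat):
    """Return the start indices of all occurrences of pat in text (KMP)."""
    # failure function: fail[i] = length of the longest proper prefix of
    # pat[:i+1] that is also a suffix of it
    fail = [0] * len(pat)
    k = 0
    for i in range(1, len(pat)):
        while k and pat[i] != pat[k]:
            k = fail[k - 1]
        if pat[i] == pat[k]:
            k += 1
        fail[i] = k
    # scan: on mismatch, fall back along the failure links instead of
    # restarting, so each text character is examined O(1) amortised times
    hits, k = [], 0
    for i, c in enumerate(text):
        while k and c != pat[k]:
            k = fail[k - 1]
        if c == pat[k]:
            k += 1
        if k == len(pat):
            hits.append(i - len(pat) + 1)
            k = fail[k - 1]
    return hits

occ = kmp_search("abababa", "aba")  # overlapping occurrences
```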
2308.05189 | Miguel-\'Angel Fern\'andez-Torres | Miguel-\'Angel Fern\'andez-Torres | Hierarchical Representations for Spatio-Temporal Visual Attention
Modeling and Understanding | PhD thesis | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | This PhD thesis concerns the study and development of hierarchical
representations for spatio-temporal visual attention modeling and understanding
in video sequences. More specifically, we propose two computational models for
visual attention. First, we present a generative probabilistic model for
context-aware visual attention modeling and understanding. Secondly, we develop
a deep network architecture for visual attention modeling, which first
estimates top-down spatio-temporal visual attention, and ultimately serves for
modeling attention in the temporal domain.
| [
{
"created": "Wed, 9 Aug 2023 18:49:21 GMT",
"version": "v1"
}
] | 2023-08-11 | [
[
"Fernández-Torres",
"Miguel-Ángel",
""
]
] | This PhD thesis concerns the study and development of hierarchical representations for spatio-temporal visual attention modeling and understanding in video sequences. More specifically, we propose two computational models for visual attention. First, we present a generative probabilistic model for context-aware visual attention modeling and understanding. Secondly, we develop a deep network architecture for visual attention modeling, which first estimates top-down spatio-temporal visual attention, and ultimately serves for modeling attention in the temporal domain. |
1711.00549 | Anjishnu Kumar | Anjishnu Kumar, Arpit Gupta, Julian Chan, Sam Tucker, Bjorn
Hoffmeister, Markus Dreyer, Stanislav Peshterliev, Ankur Gandhe, Denis
Filiminov, Ariya Rastrow, Christian Monson and Agnika Kumar | Just ASK: Building an Architecture for Extensible Self-Service Spoken
Language Understanding | Published at the 1st Workshop on Conversational AI at NIPS 2017
(NIPS-WCAI) | null | null | null | cs.CL cs.AI cs.NE cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents the design of the machine learning architecture that
underlies the Alexa Skills Kit (ASK), a large-scale Spoken Language
Understanding (SLU) Software Development Kit (SDK) that enables developers to
extend the capabilities of Amazon's virtual assistant, Alexa. At Amazon, the
infrastructure powers over 25,000 skills deployed through the ASK, as well as
AWS's Amazon Lex SLU Service. The ASK emphasizes flexibility, predictability
and a rapid iteration cycle for third party developers. It imposes inductive
biases that allow it to learn robust SLU models from extremely small and sparse
datasets and, in doing so, removes significant barriers to entry for software
developers and dialogue systems researchers.
| [
{
"created": "Wed, 1 Nov 2017 22:10:11 GMT",
"version": "v1"
},
{
"created": "Fri, 3 Nov 2017 09:19:37 GMT",
"version": "v2"
},
{
"created": "Fri, 24 Nov 2017 00:37:00 GMT",
"version": "v3"
},
{
"created": "Fri, 2 Mar 2018 13:58:04 GMT",
"version": "v4"
}
] | 2018-03-05 | [
[
"Kumar",
"Anjishnu",
""
],
[
"Gupta",
"Arpit",
""
],
[
"Chan",
"Julian",
""
],
[
"Tucker",
"Sam",
""
],
[
"Hoffmeister",
"Bjorn",
""
],
[
"Dreyer",
"Markus",
""
],
[
"Peshterliev",
"Stanislav",
""
],
[
"Gandhe",
"Ankur",
""
],
[
"Filiminov",
"Denis",
""
],
[
"Rastrow",
"Ariya",
""
],
[
"Monson",
"Christian",
""
],
[
"Kumar",
"Agnika",
""
]
] | This paper presents the design of the machine learning architecture that underlies the Alexa Skills Kit (ASK), a large-scale Spoken Language Understanding (SLU) Software Development Kit (SDK) that enables developers to extend the capabilities of Amazon's virtual assistant, Alexa. At Amazon, the infrastructure powers over 25,000 skills deployed through the ASK, as well as AWS's Amazon Lex SLU Service. The ASK emphasizes flexibility, predictability and a rapid iteration cycle for third party developers. It imposes inductive biases that allow it to learn robust SLU models from extremely small and sparse datasets and, in doing so, removes significant barriers to entry for software developers and dialogue systems researchers. |
1802.04122 | Yang Zhang | Yang Zhang, Mathias Humbert, Tahleen Rahman, Cheng-Te Li, Jun Pang,
Michael Backes | Tagvisor: A Privacy Advisor for Sharing Hashtags | WWW 18 | null | null | null | cs.CR cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The hashtag has emerged as a widely used concept of popular culture and
campaigns, but its implications on people's privacy have not been investigated
so far. In this paper, we present the first systematic analysis of privacy
issues induced by hashtags. We concentrate in particular on location, which is
recognized as one of the key privacy concerns in the Internet era. By relying
on a random forest model, we show that we can infer a user's precise location
from hashtags with accuracy of 70\% to 76\%, depending on the city. To remedy
this situation, we introduce a system called Tagvisor that systematically
suggests alternative hashtags if the user-selected ones constitute a threat to
location privacy. Tagvisor realizes this by means of three conceptually
different obfuscation techniques and a semantics-based metric for measuring the
consequent utility loss. Our findings show that obfuscating as little as two
hashtags already provides a near-optimal trade-off between privacy and utility
in our dataset. This in particular renders Tagvisor highly time-efficient, and
thus, practical in real-world settings.
| [
{
"created": "Mon, 12 Feb 2018 15:35:48 GMT",
"version": "v1"
}
] | 2018-02-13 | [
[
"Zhang",
"Yang",
""
],
[
"Humbert",
"Mathias",
""
],
[
"Rahman",
"Tahleen",
""
],
[
"Li",
"Cheng-Te",
""
],
[
"Pang",
"Jun",
""
],
[
"Backes",
"Michael",
""
]
] | The hashtag has emerged as a widely used concept of popular culture and campaigns, but its implications on people's privacy have not been investigated so far. In this paper, we present the first systematic analysis of privacy issues induced by hashtags. We concentrate in particular on location, which is recognized as one of the key privacy concerns in the Internet era. By relying on a random forest model, we show that we can infer a user's precise location from hashtags with accuracy of 70\% to 76\%, depending on the city. To remedy this situation, we introduce a system called Tagvisor that systematically suggests alternative hashtags if the user-selected ones constitute a threat to location privacy. Tagvisor realizes this by means of three conceptually different obfuscation techniques and a semantics-based metric for measuring the consequent utility loss. Our findings show that obfuscating as little as two hashtags already provides a near-optimal trade-off between privacy and utility in our dataset. This in particular renders Tagvisor highly time-efficient, and thus, practical in real-world settings. |
2004.12108 | Mahawaga Arachchige Pathum Chamikara | M.A.P. Chamikara, P.Bertok, I.Khalil, D.Liu, S.Camtepe | Privacy Preserving Distributed Machine Learning with Federated Learning | null | null | 10.1016/j.comcom.2021.02.014 | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Edge computing and distributed machine learning have advanced to a level that
can revolutionize a particular organization. Distributed devices such as the
Internet of Things (IoT) often produce a large amount of data, eventually
resulting in big data that can be vital in uncovering hidden patterns, and
other insights in numerous fields such as healthcare, banking, and policing.
Data related to areas such as healthcare and banking can contain potentially
sensitive data that can become public if they are not appropriately sanitized.
Federated learning (FedML) is a recently developed distributed machine learning
(DML) approach that tries to preserve privacy by bringing the learning of an ML
model to the data owners. However, the literature shows different attack methods such
as membership inference that exploit the vulnerabilities of ML models as well
as the coordinating servers to retrieve private data. Hence, FedML needs
additional measures to guarantee data privacy. Furthermore, big data often
requires more resources than available in a standard computer. This paper
addresses these issues by proposing a distributed perturbation algorithm named
DISTPAB for the privacy preservation of horizontally partitioned data. DISTPAB
alleviates computational bottlenecks by distributing the task of privacy
preservation utilizing the asymmetry of resources of a distributed environment,
which can have resource-constrained devices as well as high-performance
computers. Experiments show that DISTPAB provides high accuracy, high
efficiency, high scalability, and high attack resistance. Further experiments
on privacy-preserving FedML show that DISTPAB is an excellent solution to stop
privacy leaks in DML while preserving high data utility.
| [
{
"created": "Sat, 25 Apr 2020 10:51:36 GMT",
"version": "v1"
},
{
"created": "Fri, 26 Feb 2021 02:48:00 GMT",
"version": "v2"
}
] | 2021-03-01 | [
[
"Chamikara",
"M. A. P.",
""
],
[
"Bertok",
"P.",
""
],
[
"Khalil",
"I.",
""
],
[
"Liu",
"D.",
""
],
[
"Camtepe",
"S.",
""
]
] | Edge computing and distributed machine learning have advanced to a level that can revolutionize a particular organization. Distributed devices such as the Internet of Things (IoT) often produce a large amount of data, eventually resulting in big data that can be vital in uncovering hidden patterns, and other insights in numerous fields such as healthcare, banking, and policing. Data related to areas such as healthcare and banking can contain potentially sensitive data that can become public if they are not appropriately sanitized. Federated learning (FedML) is a recently developed distributed machine learning (DML) approach that tries to preserve privacy by bringing the learning of an ML model to the data owners. However, the literature shows different attack methods such as membership inference that exploit the vulnerabilities of ML models as well as the coordinating servers to retrieve private data. Hence, FedML needs additional measures to guarantee data privacy. Furthermore, big data often requires more resources than available in a standard computer. This paper addresses these issues by proposing a distributed perturbation algorithm named DISTPAB for the privacy preservation of horizontally partitioned data. DISTPAB alleviates computational bottlenecks by distributing the task of privacy preservation utilizing the asymmetry of resources of a distributed environment, which can have resource-constrained devices as well as high-performance computers. Experiments show that DISTPAB provides high accuracy, high efficiency, high scalability, and high attack resistance. Further experiments on privacy-preserving FedML show that DISTPAB is an excellent solution to stop privacy leaks in DML while preserving high data utility. |
2103.00564 | Casper Benjamin Freksen | Casper Benjamin Freksen | An Introduction to Johnson-Lindenstrauss Transforms | The text was previously a main part of the introduction of my PhD
thesis, but it has been adapted to be self contained and serve as a
(hopefully good) starting point for readers interested in the topic | null | null | null | cs.DS cs.LG | http://creativecommons.org/licenses/by/4.0/ | Johnson--Lindenstrauss Transforms are powerful tools for reducing the
dimensionality of data while preserving key characteristics of that data, and
they have found use in many fields from machine learning to differential
privacy and more. This note explains what they are; it gives an overview of
their use and their development since they were introduced in the 1980s; and it
provides many references should the reader wish to explore these topics more
deeply.
| [
{
"created": "Sun, 28 Feb 2021 16:57:41 GMT",
"version": "v1"
}
] | 2021-03-02 | [
[
"Freksen",
"Casper Benjamin",
""
]
] | Johnson--Lindenstrauss Transforms are powerful tools for reducing the dimensionality of data while preserving key characteristics of that data, and they have found use in many fields from machine learning to differential privacy and more. This note explains what they are; it gives an overview of their use and their development since they were introduced in the 1980s; and it provides many references should the reader wish to explore these topics more deeply. |
1902.07958 | Mateus Espadoto | Mateus Espadoto, Nina S. T. Hirata, Alexandru C. Telea | Deep Learning Multidimensional Projections | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dimensionality reduction methods, also known as projections, are frequently
used for exploring multidimensional data in machine learning, data science, and
information visualization. Among these, t-SNE and its variants have become very
popular for their ability to visually separate distinct data clusters. However,
such methods are computationally expensive for large datasets, suffer from
stability problems, and cannot directly handle out-of-sample data. We propose a
learning approach to construct such projections. We train a deep neural network
based on a collection of samples from a given data universe, and their
corresponding projections, and next use the network to infer projections of
data from the same, or similar, universes. Our approach generates projections
with similar characteristics as the learned ones, is computationally two to
three orders of magnitude faster than SNE-class methods, has no complex-to-set
user parameters, handles out-of-sample data in a stable manner, and can be used
to learn any projection technique. We demonstrate our proposal on several
real-world high dimensional datasets from machine learning.
| [
{
"created": "Thu, 21 Feb 2019 10:50:44 GMT",
"version": "v1"
}
] | 2019-02-22 | [
[
"Espadoto",
"Mateus",
""
],
[
"Hirata",
"Nina S. T.",
""
],
[
"Telea",
"Alexandru C.",
""
]
] | Dimensionality reduction methods, also known as projections, are frequently used for exploring multidimensional data in machine learning, data science, and information visualization. Among these, t-SNE and its variants have become very popular for their ability to visually separate distinct data clusters. However, such methods are computationally expensive for large datasets, suffer from stability problems, and cannot directly handle out-of-sample data. We propose a learning approach to construct such projections. We train a deep neural network based on a collection of samples from a given data universe, and their corresponding projections, and next use the network to infer projections of data from the same, or similar, universes. Our approach generates projections with similar characteristics as the learned ones, is computationally two to three orders of magnitude faster than SNE-class methods, has no complex-to-set user parameters, handles out-of-sample data in a stable manner, and can be used to learn any projection technique. We demonstrate our proposal on several real-world high dimensional datasets from machine learning. |
2309.10195 | Bo Peng | Bo Peng, Srinivasan Parthasarathy and Xia Ning | Multi-modality Meets Re-learning: Mitigating Negative Transfer in
Sequential Recommendation | null | null | null | null | cs.IR | http://creativecommons.org/licenses/by/4.0/ | Learning effective recommendation models from sparse user interactions
represents a fundamental challenge in developing sequential recommendation
methods. Recently, pre-training-based methods have been developed to tackle
this challenge. Though promising, in this paper, we show that existing methods
suffer from the notorious negative transfer issue, where the model adapted from
the pre-trained model results in worse performance compared to the model
learned from scratch in the task of interest (i.e., target task). To address
this issue, we develop a method, denoted as ANT, for transferable sequential
recommendation. ANT mitigates negative transfer by 1) incorporating
multi-modality item information, including item texts, images and prices, to
effectively learn more transferable knowledge from related tasks (i.e.,
auxiliary tasks); and 2) better capturing task-specific knowledge in the target
task using a re-learning-based adaptation strategy. We evaluate ANT against
eight state-of-the-art baseline methods on five target tasks. Our experimental
results demonstrate that ANT does not suffer from the negative transfer issue
on any of the target tasks. The results also demonstrate that ANT substantially
outperforms baseline methods in the target tasks with an improvement of as much
as 15.2%. Our analysis highlights the superior effectiveness of our
re-learning-based strategy compared to fine-tuning on the target tasks.
| [
{
"created": "Mon, 18 Sep 2023 22:54:36 GMT",
"version": "v1"
},
{
"created": "Wed, 20 Sep 2023 13:25:49 GMT",
"version": "v2"
}
] | 2023-09-21 | [
[
"Peng",
"Bo",
""
],
[
"Parthasarathy",
"Srinivasan",
""
],
[
"Ning",
"Xia",
""
]
] | Learning effective recommendation models from sparse user interactions represents a fundamental challenge in developing sequential recommendation methods. Recently, pre-training-based methods have been developed to tackle this challenge. Though promising, in this paper, we show that existing methods suffer from the notorious negative transfer issue, where the model adapted from the pre-trained model results in worse performance compared to the model learned from scratch in the task of interest (i.e., target task). To address this issue, we develop a method, denoted as ANT, for transferable sequential recommendation. ANT mitigates negative transfer by 1) incorporating multi-modality item information, including item texts, images and prices, to effectively learn more transferable knowledge from related tasks (i.e., auxiliary tasks); and 2) better capturing task-specific knowledge in the target task using a re-learning-based adaptation strategy. We evaluate ANT against eight state-of-the-art baseline methods on five target tasks. Our experimental results demonstrate that ANT does not suffer from the negative transfer issue on any of the target tasks. The results also demonstrate that ANT substantially outperforms baseline methods in the target tasks with an improvement of as much as 15.2%. Our analysis highlights the superior effectiveness of our re-learning-based strategy compared to fine-tuning on the target tasks. |
2211.12872 | Ashesh Ashesh | Ashesh, Alexander Krull, Moises Di Sante, Francesco Silvio Pasqualini,
Florian Jug | {\mu}Split: efficient image decomposition for microscopy data | Published at ICCV 2023. 10 pages, 7 figures, 9 pages supplement, 8
supplementary figures | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | We present {\mu}Split, a dedicated approach for trained image decomposition
in the context of fluorescence microscopy images. We find that best results
using regular deep architectures are achieved when large image patches are used
during training, making memory consumption the limiting factor to further
improving performance. We therefore introduce lateral contextualization (LC), a
novel meta-architecture that enables the memory efficient incorporation of
large image-context, which we observe is a key ingredient to solving the image
decomposition task at hand. We integrate LC with U-Nets, Hierarchical AEs, and
Hierarchical VAEs, for which we formulate a modified ELBO loss. Additionally,
LC enables training deeper hierarchical models than otherwise possible and,
interestingly, helps to reduce tiling artefacts that are inherently impossible
to avoid when using tiled VAE predictions. We apply {\mu}Split to five
decomposition tasks, one on a synthetic dataset, four others derived from real
microscopy data. Our method consistently achieves best results (average
improvements to the best baseline of 2.25 dB PSNR), while simultaneously
requiring considerably less GPU memory. Our code and datasets can be found at
https://github.com/juglab/uSplit.
| [
{
"created": "Wed, 23 Nov 2022 11:26:24 GMT",
"version": "v1"
},
{
"created": "Thu, 16 Mar 2023 15:19:40 GMT",
"version": "v2"
},
{
"created": "Wed, 22 Mar 2023 10:05:15 GMT",
"version": "v3"
},
{
"created": "Tue, 15 Aug 2023 07:48:36 GMT",
"version": "v4"
},
{
"created": "Wed, 16 Aug 2023 19:40:50 GMT",
"version": "v5"
}
] | 2023-08-21 | [
[
"Ashesh",
"",
""
],
[
"Krull",
"Alexander",
""
],
[
"Di Sante",
"Moises",
""
],
[
"Pasqualini",
"Francesco Silvio",
""
],
[
"Jug",
"Florian",
""
]
] | We present {\mu}Split, a dedicated approach for trained image decomposition in the context of fluorescence microscopy images. We find that best results using regular deep architectures are achieved when large image patches are used during training, making memory consumption the limiting factor to further improving performance. We therefore introduce lateral contextualization (LC), a novel meta-architecture that enables the memory efficient incorporation of large image-context, which we observe is a key ingredient to solving the image decomposition task at hand. We integrate LC with U-Nets, Hierarchical AEs, and Hierarchical VAEs, for which we formulate a modified ELBO loss. Additionally, LC enables training deeper hierarchical models than otherwise possible and, interestingly, helps to reduce tiling artefacts that are inherently impossible to avoid when using tiled VAE predictions. We apply {\mu}Split to five decomposition tasks, one on a synthetic dataset, four others derived from real microscopy data. Our method consistently achieves best results (average improvements to the best baseline of 2.25 dB PSNR), while simultaneously requiring considerably less GPU memory. Our code and datasets can be found at https://github.com/juglab/uSplit. |
1603.04690 | Martin Skutella | Martin Skutella | A 2.542-Approximation for Precedence Constrained Single Machine
Scheduling with Release Dates and Total Weighted Completion Time Objective | null | null | null | null | cs.DM math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a $\sqrt{e}/(\sqrt{e}-1)$-approximation algorithm for the
nonpreemptive scheduling problem to minimize the total weighted completion time
of jobs on a single machine subject to release dates and precedence
constraints. The previously best known approximation algorithm dates back to
1997; its performance guarantee can be made arbitrarily close to the Euler
constant $e$.
| [
{
"created": "Tue, 15 Mar 2016 14:07:28 GMT",
"version": "v1"
},
{
"created": "Thu, 21 Apr 2016 22:52:35 GMT",
"version": "v2"
}
] | 2016-04-25 | [
[
"Skutella",
"Martin",
""
]
] | We present a $\sqrt{e}/(\sqrt{e}-1)$-approximation algorithm for the nonpreemptive scheduling problem to minimize the total weighted completion time of jobs on a single machine subject to release dates and precedence constraints. The previously best known approximation algorithm dates back to 1997; its performance guarantee can be made arbitrarily close to the Euler constant $e$. |
1511.06654 | Bing Wang | Bing Wang, Gang Wang, Kap Luk Chan and Li Wang | Tracklet Association by Online Target-Specific Metric Learning and
Coherent Dynamics Estimation | IEEE Transactions on Pattern Analysis and Machine Intelligence, in
press, 2016 | null | 10.1109/TPAMI.2016.2551245 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present a novel method based on online target-specific
metric learning and coherent dynamics estimation for tracklet (track fragment)
association by network flow optimization in long-term multi-person tracking.
Our proposed framework aims to exploit appearance and motion cues to prevent
identity switches during tracking and to recover missed detections.
Furthermore, target-specific metrics (appearance cue) and motion dynamics
(motion cue) are proposed to be learned and estimated online, i.e. during the
tracking process. Our approach is effective even when such cues fail to
identify or follow the target due to occlusions or object-to-object
interactions. We also propose to learn the weights of these two tracking cues
to handle the difficult situations, such as severe occlusions and
object-to-object interactions effectively. Our method has been validated on
several public datasets and the experimental results show that it outperforms
several state-of-the-art tracking methods.
| [
{
"created": "Fri, 20 Nov 2015 15:48:21 GMT",
"version": "v1"
},
{
"created": "Fri, 22 Apr 2016 03:53:35 GMT",
"version": "v2"
}
] | 2016-04-25 | [
[
"Wang",
"Bing",
""
],
[
"Wang",
"Gang",
""
],
[
"Chan",
"Kap Luk",
""
],
[
"Wang",
"Li",
""
]
] | In this paper, we present a novel method based on online target-specific metric learning and coherent dynamics estimation for tracklet (track fragment) association by network flow optimization in long-term multi-person tracking. Our proposed framework aims to exploit appearance and motion cues to prevent identity switches during tracking and to recover missed detections. Furthermore, target-specific metrics (appearance cue) and motion dynamics (motion cue) are proposed to be learned and estimated online, i.e. during the tracking process. Our approach is effective even when such cues fail to identify or follow the target due to occlusions or object-to-object interactions. We also propose to learn the weights of these two tracking cues to handle the difficult situations, such as severe occlusions and object-to-object interactions effectively. Our method has been validated on several public datasets and the experimental results show that it outperforms several state-of-the-art tracking methods. |
1504.03413 | Bhavya Kailkhura | Bhavya Kailkhura, Swastik Brahma, Pramod K. Varshney | Consensus based Detection in the Presence of Data Falsification Attacks | null | null | 10.1109/TSIPN.2016.2607119 | null | cs.SY cs.DC stat.AP stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper considers the problem of detection in distributed networks in the
presence of data falsification (Byzantine) attacks. Detection approaches
considered in the paper are based on fully distributed consensus algorithms,
where all of the nodes exchange information only with their neighbors in the
absence of a fusion center. In such networks, we characterize the negative
effect of Byzantines on the steady-state and transient detection performance of
the conventional consensus based detection algorithms. To address this issue,
we study the problem from the network designer's perspective. More
specifically, we first propose a distributed weighted average consensus
algorithm that is robust to Byzantine attacks. We show that, under reasonable
assumptions, the global test statistic for detection can be computed locally at
each node using our proposed consensus algorithm. We exploit the statistical
distribution of the nodes' data to devise techniques for mitigating the
influence of data falsifying Byzantines on the distributed detection system.
Since some parameters of the statistical distribution of the nodes' data might
not be known a priori, we propose learning based techniques to enable an
adaptive design of the local fusion or update rules.
| [
{
"created": "Tue, 14 Apr 2015 03:43:05 GMT",
"version": "v1"
}
] | 2017-09-29 | [
[
"Kailkhura",
"Bhavya",
""
],
[
"Brahma",
"Swastik",
""
],
[
"Varshney",
"Pramod K.",
""
]
] | This paper considers the problem of detection in distributed networks in the presence of data falsification (Byzantine) attacks. Detection approaches considered in the paper are based on fully distributed consensus algorithms, where all of the nodes exchange information only with their neighbors in the absence of a fusion center. In such networks, we characterize the negative effect of Byzantines on the steady-state and transient detection performance of the conventional consensus based detection algorithms. To address this issue, we study the problem from the network designer's perspective. More specifically, we first propose a distributed weighted average consensus algorithm that is robust to Byzantine attacks. We show that, under reasonable assumptions, the global test statistic for detection can be computed locally at each node using our proposed consensus algorithm. We exploit the statistical distribution of the nodes' data to devise techniques for mitigating the influence of data falsifying Byzantines on the distributed detection system. Since some parameters of the statistical distribution of the nodes' data might not be known a priori, we propose learning based techniques to enable an adaptive design of the local fusion or update rules. |
1812.09320 | Antzela Kosta | Antzela Kosta, Nikolaos Pappas, Anthony Ephremides, Vangelis Angelakis | The Cost of Delay in Status Updates and their Value: Non-linear Ageing | arXiv admin note: substantial text overlap with arXiv:1701.06927 | null | null | null | cs.NI cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider a status update communication system consisting of a
source-destination link. A stochastic process is observed at the source, where
samples are extracted at random time instances, and delivered to the
destination, thus, providing status updates for the source. In this paper, we
expand the concept of information ageing by introducing the cost of update
delay (CoUD) metric to characterize the cost of having stale information at the
destination. The CoUD captures the freshness of the information at the
destination and can be used to reflect the information structure of the source.
Moreover, we introduce the value of information of update (VoIU) metric that
captures the reduction of CoUD upon reception of an update. Using the CoUD, its
by-product metric called peak cost of update delay (PCoUD), and the VoIU, we
evaluate the performance of an M/M/1 system in various settings that consider
exact expressions and bounds. Our results indicate that the performance of CoUD
differs depending on the cost assigned per time unit; however, the optimal
policy remains the same for linear ageing and varies for non-linear ageing.
When it comes to the VoIU the performance difference appears only when the cost
increases non-linearly with time. The study illustrates the importance of the
newly introduced variants of age, furthermore supported in the case of VoIU by
its tractability.
| [
{
"created": "Fri, 21 Dec 2018 13:37:35 GMT",
"version": "v1"
},
{
"created": "Fri, 6 Sep 2019 12:44:23 GMT",
"version": "v2"
}
] | 2019-09-09 | [
[
"Kosta",
"Antzela",
""
],
[
"Pappas",
"Nikolaos",
""
],
[
"Ephremides",
"Anthony",
""
],
[
"Angelakis",
"Vangelis",
""
]
] | We consider a status update communication system consisting of a source-destination link. A stochastic process is observed at the source, where samples are extracted at random time instances, and delivered to the destination, thus, providing status updates for the source. In this paper, we expand the concept of information ageing by introducing the cost of update delay (CoUD) metric to characterize the cost of having stale information at the destination. The CoUD captures the freshness of the information at the destination and can be used to reflect the information structure of the source. Moreover, we introduce the value of information of update (VoIU) metric that captures the reduction of CoUD upon reception of an update. Using the CoUD, its by-product metric called peak cost of update delay (PCoUD), and the VoIU, we evaluate the performance of an M/M/1 system in various settings that consider exact expressions and bounds. Our results indicate that the performance of CoUD differs depending on the cost assigned per time unit; however, the optimal policy remains the same for linear ageing and varies for non-linear ageing. When it comes to the VoIU the performance difference appears only when the cost increases non-linearly with time. The study illustrates the importance of the newly introduced variants of age, furthermore supported in the case of VoIU by its tractability. |
2211.08460 | Laura Nicolas-S\'aenz | Laura Nicol\'as-S\'aenz, Agapito Ledezma, Javier Pascau, Arrate
Mu\~noz-Barrutia | ABANICCO: A New Color Space for Multi-Label Pixel Classification and
Color Segmentation | Working Paper | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In any computer vision task involving color images, a necessary step is
classifying pixels according to color and segmenting the respective areas.
However, the development of methods able to successfully complete this task has
proven challenging, mainly due to the gap between human color perception,
linguistic color terms, and digital representation. In this paper, we propose a
novel method combining geometric analysis of color theory, fuzzy color spaces,
and multi-label systems for the automatic classification of pixels according to
12 standard color categories (Green, Yellow, Light Orange, Deep Orange, Red,
Pink, Purple, Ultramarine, Blue, Teal, Brown, and Neutral). Moreover, we
present a robust, unsupervised, unbiased strategy for color naming based on
statistics and color theory. ABANICCO was tested against the state of the art
in color classification and with the standardized ISCC-NBS color system,
providing accurate classification and a standard, easily understandable
alternative for hue naming recognizable by humans and machines. We expect this
solution to become the base to successfully tackle a myriad of problems in all
fields of computer vision, such as region characterization, histopathology
analysis, fire detection, product quality prediction, object description, and
hyperspectral imaging.
| [
{
"created": "Tue, 15 Nov 2022 19:26:51 GMT",
"version": "v1"
}
] | 2022-11-17 | [
[
"Nicolás-Sáenz",
"Laura",
""
],
[
"Ledezma",
"Agapito",
""
],
[
"Pascau",
"Javier",
""
],
[
"Muñoz-Barrutia",
"Arrate",
""
]
] | In any computer vision task involving color images, a necessary step is classifying pixels according to color and segmenting the respective areas. However, the development of methods able to successfully complete this task has proven challenging, mainly due to the gap between human color perception, linguistic color terms, and digital representation. In this paper, we propose a novel method combining geometric analysis of color theory, fuzzy color spaces, and multi-label systems for the automatic classification of pixels according to 12 standard color categories (Green, Yellow, Light Orange, Deep Orange, Red, Pink, Purple, Ultramarine, Blue, Teal, Brown, and Neutral). Moreover, we present a robust, unsupervised, unbiased strategy for color naming based on statistics and color theory. ABANICCO was tested against the state of the art in color classification and with the standarized ISCC-NBS color system, providing accurate classification and a standard, easily understandable alternative for hue naming recognizable by humans and machines. We expect this solution to become the base to successfully tackle a myriad of problems in all fields of computer vision, such as region characterization, histopathology analysis, fire detection, product quality prediction, object description, and hyperspectral imaging. |
1403.6530 | L.A. Prashanth | Prashanth L.A. and Mohammad Ghavamzadeh | Variance-Constrained Actor-Critic Algorithms for Discounted and Average
Reward MDPs | null | null | null | null | cs.LG math.OC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In many sequential decision-making problems we may want to manage risk by
minimizing some measure of variability in rewards in addition to maximizing a
standard criterion. Variance-related risk measures are among the most common
risk-sensitive criteria in finance and operations research. However, optimizing
many such criteria is known to be a hard problem. In this paper, we consider
both discounted and average reward Markov decision processes. For each
formulation, we first define a measure of variability for a policy, which in
turn gives us a set of risk-sensitive criteria to optimize. For each of these
criteria, we derive a formula for computing its gradient. We then devise
actor-critic algorithms that operate on three timescales - a TD critic on the
fastest timescale, a policy gradient (actor) on the intermediate timescale, and
a dual ascent for Lagrange multipliers on the slowest timescale. In the
discounted setting, we point out the difficulty in estimating the gradient of
the variance of the return and incorporate simultaneous perturbation approaches
to alleviate this. The average setting, on the other hand, allows for an actor
update using compatible features to estimate the gradient of the variance. We
establish the convergence of our algorithms to locally risk-sensitive optimal
policies. Finally, we demonstrate the usefulness of our algorithms in a traffic
signal control application.
| [
{
"created": "Tue, 25 Mar 2014 23:00:50 GMT",
"version": "v1"
},
{
"created": "Wed, 18 Mar 2015 15:42:31 GMT",
"version": "v2"
}
] | 2015-03-19 | [
[
"A.",
"Prashanth L.",
""
],
[
"Ghavamzadeh",
"Mohammad",
""
]
] | In many sequential decision-making problems we may want to manage risk by minimizing some measure of variability in rewards in addition to maximizing a standard criterion. Variance related risk measures are among the most common risk-sensitive criteria in finance and operations research. However, optimizing many such criteria is known to be a hard problem. In this paper, we consider both discounted and average reward Markov decision processes. For each formulation, we first define a measure of variability for a policy, which in turn gives us a set of risk-sensitive criteria to optimize. For each of these criteria, we derive a formula for computing its gradient. We then devise actor-critic algorithms that operate on three timescales - a TD critic on the fastest timescale, a policy gradient (actor) on the intermediate timescale, and a dual ascent for Lagrange multipliers on the slowest timescale. In the discounted setting, we point out the difficulty in estimating the gradient of the variance of the return and incorporate simultaneous perturbation approaches to alleviate this. The average setting, on the other hand, allows for an actor update using compatible features to estimate the gradient of the variance. We establish the convergence of our algorithms to locally risk-sensitive optimal policies. Finally, we demonstrate the usefulness of our algorithms in a traffic signal control application. |
1507.00644 | Kevin Houston | Kevin Houston | Compressed Manifold Modes: Fast Calculation and Natural Ordering | null | null | null | null | cs.CG cs.NA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Compressed manifold modes are locally supported analogues of eigenfunctions
of the Laplace-Beltrami operator of a manifold. In this paper we describe an
algorithm for the calculation of modes for discrete manifolds that, in
experiments, requires on average 47% fewer iterations and 44% less time than
the previous algorithm. We show how to naturally order the modes in an
analogous way to eigenfunctions, that is we define a compressed eigenvalue.
Furthermore, in contrast to the previous algorithm we permit unlumped mass
matrices for the operator and we show, unlike the case of eigenfunctions, that
modes can, in general, be oriented.
| [
{
"created": "Thu, 2 Jul 2015 16:06:26 GMT",
"version": "v1"
},
{
"created": "Mon, 13 Jul 2015 13:40:58 GMT",
"version": "v2"
}
] | 2015-07-14 | [
[
"Houston",
"Kevin",
""
]
] | Compressed manifold modes are locally supported analogues of eigenfunctions of the Laplace-Beltrami operator of a manifold. In this paper we describe an algorithm for the calculation of modes for discrete manifolds that, in experiments, requires on average 47% fewer iterations and 44% less time than the previous algorithm. We show how to naturally order the modes in an analogous way to eigenfunctions, that is we define a compressed eigenvalue. Furthermore, in contrast to the previous algorithm we permit unlumped mass matrices for the operator and we show, unlike the case of eigenfunctions, that modes can, in general, be oriented. |
1709.03456 | Jawad Tayyub | Jawad Tayyub, Majd Hawasly, David C. Hogg and Anthony G. Cohn | CLAD: A Complex and Long Activities Dataset with Rich Crowdsourced
Annotations | null | null | 10.5518/249 | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces a novel activity dataset which exhibits real-life and
diverse scenarios of complex, temporally-extended human activities and actions.
The dataset presents a set of videos of actors performing everyday activities
in a natural and unscripted manner. The dataset was recorded using a static
Kinect 2 sensor which is commonly used on many robotic platforms. The dataset
comprises RGB-D images, point cloud data, automatically generated skeleton
tracks in addition to crowdsourced annotations. Furthermore, we also describe
the methodology used to acquire annotations through crowdsourcing. Finally, some
activity recognition benchmarks are presented using current state-of-the-art
techniques. We believe that this dataset is particularly suitable as a testbed
for activity recognition research but it can also be applicable for other
common tasks in robotics/computer vision research such as object detection and
human skeleton tracking.
| [
{
"created": "Mon, 11 Sep 2017 16:01:17 GMT",
"version": "v1"
},
{
"created": "Thu, 21 Sep 2017 16:52:04 GMT",
"version": "v2"
}
] | 2017-09-22 | [
[
"Tayyub",
"Jawad",
""
],
[
"Hawasly",
"Majd",
""
],
[
"Hogg",
"David C.",
""
],
[
"Cohn",
"Anthony G.",
""
]
] | This paper introduces a novel activity dataset which exhibits real-life and diverse scenarios of complex, temporally-extended human activities and actions. The dataset presents a set of videos of actors performing everyday activities in a natural and unscripted manner. The dataset was recorded using a static Kinect 2 sensor which is commonly used on many robotic platforms. The dataset comprises of RGB-D images, point cloud data, automatically generated skeleton tracks in addition to crowdsourced annotations. Furthermore, we also describe the methodology used to acquire annotations through crowdsourcing. Finally some activity recognition benchmarks are presented using current state-of-the-art techniques. We believe that this dataset is particularly suitable as a testbed for activity recognition research but it can also be applicable for other common tasks in robotics/computer vision research such as object detection and human skeleton tracking. |
2102.10012 | Paul Weng | Ruibin Bai and Xinan Chen and Zhi-Long Chen and Tianxiang Cui and
Shuhui Gong and Wentao He and Xiaoping Jiang and Huan Jin and Jiahuan Jin and
Graham Kendall and Jiawei Li and Zheng Lu and Jianfeng Ren and Paul Weng and
Ning Xue and Huayan Zhang | Analytics and Machine Learning in Vehicle Routing Research | Submitted to International Journal of Production Research | null | null | null | cs.LG cs.AI math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Vehicle Routing Problem (VRP) is one of the most intensively studied
combinatorial optimisation problems for which numerous models and algorithms
have been proposed. To tackle the complexities, uncertainties and dynamics
involved in real-world VRP applications, Machine Learning (ML) methods have
been used in combination with analytical approaches to enhance problem
formulations and algorithmic performance across different problem solving
scenarios. However, the relevant papers are scattered in several traditional
research fields with very different, sometimes confusing, terminologies. This
paper presents a first, comprehensive review of hybrid methods that combine
analytical techniques with ML tools in addressing VRP problems. Specifically,
we review the emerging research streams on ML-assisted VRP modelling and
ML-assisted VRP optimisation. We conclude that ML can be beneficial in
enhancing VRP modelling, and improving the performance of algorithms for both
online and offline VRP optimisations. Finally, challenges and future
opportunities of VRP research are discussed.
| [
{
"created": "Fri, 19 Feb 2021 16:26:17 GMT",
"version": "v1"
}
] | 2021-02-22 | [
[
"Bai",
"Ruibin",
""
],
[
"Chen",
"Xinan",
""
],
[
"Chen",
"Zhi-Long",
""
],
[
"Cui",
"Tianxiang",
""
],
[
"Gong",
"Shuhui",
""
],
[
"He",
"Wentao",
""
],
[
"Jiang",
"Xiaoping",
""
],
[
"Jin",
"Huan",
""
],
[
"Jin",
"Jiahuan",
""
],
[
"Kendall",
"Graham",
""
],
[
"Li",
"Jiawei",
""
],
[
"Lu",
"Zheng",
""
],
[
"Ren",
"Jianfeng",
""
],
[
"Weng",
"Paul",
""
],
[
"Xue",
"Ning",
""
],
[
"Zhang",
"Huayan",
""
]
] | The Vehicle Routing Problem (VRP) is one of the most intensively studied combinatorial optimisation problems for which numerous models and algorithms have been proposed. To tackle the complexities, uncertainties and dynamics involved in real-world VRP applications, Machine Learning (ML) methods have been used in combination with analytical approaches to enhance problem formulations and algorithmic performance across different problem solving scenarios. However, the relevant papers are scattered in several traditional research fields with very different, sometimes confusing, terminologies. This paper presents a first, comprehensive review of hybrid methods that combine analytical techniques with ML tools in addressing VRP problems. Specifically, we review the emerging research streams on ML-assisted VRP modelling and ML-assisted VRP optimisation. We conclude that ML can be beneficial in enhancing VRP modelling, and improving the performance of algorithms for both online and offline VRP optimisations. Finally, challenges and future opportunities of VRP research are discussed. |
1711.06774 | Orcun Karaca | Orcun Karaca, Pier Giuseppe Sessa, Neil Walton, Maryam Kamgarpour | Designing Coalition-Proof Reverse Auctions over Continuous Goods | null | IEEE Transactions on Automatic Control, 64(11), 4803-4810, 2019 | 10.1109/TAC.2019.2908717 | null | cs.GT math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper investigates reverse auctions that involve continuous values of
different types of goods, general nonconvex constraints, and second stage
costs. We seek to design the payment rules and conditions under which
coalitions of participants cannot influence the auction outcome in order to
obtain higher collective utility. Under the incentive-compatible
Vickrey-Clarke-Groves mechanism, we show that coalition-proof outcomes are
achieved if the submitted bids are convex and the constraint sets are of a
polymatroid-type. These conditions, however, do not capture the complexity of
the general class of reverse auctions under consideration. By relaxing the
property of incentive-compatibility, we investigate further payment rules that
are coalition-proof without any extra conditions on the submitted bids and the
constraint sets. Since calculating the payments directly for these mechanisms
is computationally difficult for auctions involving many participants, we
present two computationally efficient methods. Our results are verified with
several case studies based on electricity market data.
| [
{
"created": "Fri, 17 Nov 2017 23:43:58 GMT",
"version": "v1"
},
{
"created": "Thu, 28 Dec 2017 12:18:38 GMT",
"version": "v2"
},
{
"created": "Tue, 23 Jan 2018 11:23:59 GMT",
"version": "v3"
},
{
"created": "Thu, 6 Dec 2018 14:44:27 GMT",
"version": "v4"
},
{
"created": "Mon, 31 Dec 2018 15:49:34 GMT",
"version": "v5"
}
] | 2021-07-14 | [
[
"Karaca",
"Orcun",
""
],
[
"Sessa",
"Pier Giuseppe",
""
],
[
"Walton",
"Neil",
""
],
[
"Kamgarpour",
"Maryam",
""
]
] | This paper investigates reverse auctions that involve continuous values of different types of goods, general nonconvex constraints, and second stage costs. We seek to design the payment rules and conditions under which coalitions of participants cannot influence the auction outcome in order to obtain higher collective utility. Under the incentive-compatible Vickrey-Clarke-Groves mechanism, we show that coalition-proof outcomes are achieved if the submitted bids are convex and the constraint sets are of a polymatroid-type. These conditions, however, do not capture the complexity of the general class of reverse auctions under consideration. By relaxing the property of incentive-compatibility, we investigate further payment rules that are coalition-proof without any extra conditions on the submitted bids and the constraint sets. Since calculating the payments directly for these mechanisms is computationally difficult for auctions involving many participants, we present two computationally efficient methods. Our results are verified with several case studies based on electricity market data. |
1909.06907 | Arjun Akula | Arjun R. Akula, Changsong Liu, Sari Saba-Sadiya, Hongjing Lu, Sinisa
Todorovic, Joyce Y. Chai, Song-Chun Zhu | X-ToM: Explaining with Theory-of-Mind for Gaining Justified Human Trust | A short version of this was presented at CVPR 2019 Workshop on
Explainable AI | null | null | null | cs.AI cs.CV cs.HC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a new explainable AI (XAI) framework aimed at increasing justified
human trust and reliance in the AI machine through explanations. We pose
explanation as an iterative communication process, i.e. dialog, between the
machine and human user. More concretely, the machine generates a sequence of
explanations in a dialog which takes into account three important aspects at
each dialog turn: (a) human's intention (or curiosity); (b) human's
understanding of the machine; and (c) machine's understanding of the human
user. To do this, we use Theory of Mind (ToM) which helps us in explicitly
modeling human's intention, machine's mind as inferred by the human as well as
human's mind as inferred by the machine. In other words, these explicit mental
representations in ToM are incorporated to learn an optimal explanation policy
that takes into account human's perception and beliefs. Furthermore, we also
show that ToM facilitates in quantitatively measuring justified human trust in
the machine by comparing all the three mental representations.
We applied our framework to three visual recognition tasks, namely, image
classification, action recognition, and human body pose estimation. We argue
that our ToM based explanations are practical and more natural for both expert
and non-expert users to understand the internal workings of complex machine
learning models. To the best of our knowledge, this is the first work to derive
explanations using ToM. Extensive human study experiments verify our
hypotheses, showing that the proposed explanations significantly outperform the
state-of-the-art XAI methods in terms of all the standard quantitative and
qualitative XAI evaluation metrics including human trust, reliance, and
explanation satisfaction.
| [
{
"created": "Sun, 15 Sep 2019 23:24:32 GMT",
"version": "v1"
}
] | 2019-09-17 | [
[
"Akula",
"Arjun R.",
""
],
[
"Liu",
"Changsong",
""
],
[
"Saba-Sadiya",
"Sari",
""
],
[
"Lu",
"Hongjing",
""
],
[
"Todorovic",
"Sinisa",
""
],
[
"Chai",
"Joyce Y.",
""
],
[
"Zhu",
"Song-Chun",
""
]
] | We present a new explainable AI (XAI) framework aimed at increasing justified human trust and reliance in the AI machine through explanations. We pose explanation as an iterative communication process, i.e. dialog, between the machine and human user. More concretely, the machine generates sequence of explanations in a dialog which takes into account three important aspects at each dialog turn: (a) human's intention (or curiosity); (b) human's understanding of the machine; and (c) machine's understanding of the human user. To do this, we use Theory of Mind (ToM) which helps us in explicitly modeling human's intention, machine's mind as inferred by the human as well as human's mind as inferred by the machine. In other words, these explicit mental representations in ToM are incorporated to learn an optimal explanation policy that takes into account human's perception and beliefs. Furthermore, we also show that ToM facilitates in quantitatively measuring justified human trust in the machine by comparing all the three mental representations. We applied our framework to three visual recognition tasks, namely, image classification, action recognition, and human body pose estimation. We argue that our ToM based explanations are practical and more natural for both expert and non-expert users to understand the internal workings of complex machine learning models. To the best of our knowledge, this is the first work to derive explanations using ToM. Extensive human study experiments verify our hypotheses, showing that the proposed explanations significantly outperform the state-of-the-art XAI methods in terms of all the standard quantitative and qualitative XAI evaluation metrics including human trust, reliance, and explanation satisfaction. |
2106.07445 | Carl-Johann Simon-Gabriel | Carl-Johann Simon-Gabriel and Noman Ahmed Sheikh and Andreas Krause | PopSkipJump: Decision-Based Attack for Probabilistic Classifiers | ICML'21. Code available at https://github.com/cjsg/PopSkipJump . 9
pages & 7 figures in main part, 14 pages & 10 figures in appendix | null | null | null | cs.LG cs.CR cs.CV math.OC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most current classifiers are vulnerable to adversarial examples, small input
perturbations that change the classification output. Many existing attack
algorithms cover various settings, from white-box to black-box classifiers, but
typically assume that the answers are deterministic and often fail when they
are not. We therefore propose a new adversarial decision-based attack
specifically designed for classifiers with probabilistic outputs. It is based
on the HopSkipJump attack by Chen et al. (2019, arXiv:1904.02144v5 ), a strong
and query efficient decision-based attack originally designed for deterministic
classifiers. Our P(robabilisticH)opSkipJump attack adapts its number of queries
to maintain HopSkipJump's original output quality across various noise levels,
while converging to its query efficiency as the noise level decreases. We test
our attack on various noise models, including state-of-the-art off-the-shelf
randomized defenses, and show that they offer almost no extra robustness to
decision-based attacks. Code is available at
https://github.com/cjsg/PopSkipJump .
| [
{
"created": "Mon, 14 Jun 2021 14:13:12 GMT",
"version": "v1"
}
] | 2021-06-15 | [
[
"Simon-Gabriel",
"Carl-Johann",
""
],
[
"Sheikh",
"Noman Ahmed",
""
],
[
"Krause",
"Andreas",
""
]
] | Most current classifiers are vulnerable to adversarial examples, small input perturbations that change the classification output. Many existing attack algorithms cover various settings, from white-box to black-box classifiers, but typically assume that the answers are deterministic and often fail when they are not. We therefore propose a new adversarial decision-based attack specifically designed for classifiers with probabilistic outputs. It is based on the HopSkipJump attack by Chen et al. (2019, arXiv:1904.02144v5 ), a strong and query efficient decision-based attack originally designed for deterministic classifiers. Our P(robabilisticH)opSkipJump attack adapts its amount of queries to maintain HopSkipJump's original output quality across various noise levels, while converging to its query efficiency as the noise level decreases. We test our attack on various noise models, including state-of-the-art off-the-shelf randomized defenses, and show that they offer almost no extra robustness to decision-based attacks. Code is available at https://github.com/cjsg/PopSkipJump . |
2210.05243 | Xiangbin Liu | Xiangbin Liu, Junping Du, Meiyu Liang, Ang Li | Cross-modal Search Method of Technology Video based on Adversarial
Learning and Feature Fusion | null | null | null | null | cs.IR | http://creativecommons.org/licenses/by/4.0/ | Technology videos contain rich multi-modal information. In cross-modal
information search, the data features of different modalities cannot be
compared directly, so the semantic gap between different modalities is a key
problem that needs to be solved. To address the above problems, this paper
proposes a novel Feature Fusion based Adversarial Cross-modal Retrieval method
(FFACR) to achieve text-to-video matching, ranking and searching. The proposed
method uses the framework of adversarial learning to construct a video
multimodal feature fusion network and a feature mapping network as the generator,
and a modality discrimination network as the discriminator. Multi-modal features of
videos are obtained by the feature fusion network. The feature mapping network
projects multi-modal features into the same semantic space based on semantics
and similarity. The modality discrimination network is responsible for
determining the original modality of features. Generator and discriminator are
trained alternately based on adversarial learning, so that the data obtained by
the feature mapping network is semantically consistent with the original data
and the modal features are eliminated, and finally the similarity is used to
rank and obtain the search results in the semantic space. Experimental results
demonstrate that the proposed method performs better in text-to-video search
than other existing methods, and validate the effectiveness of the method on
the self-built datasets of technology videos.
| [
{
"created": "Tue, 11 Oct 2022 08:17:31 GMT",
"version": "v1"
}
] | 2022-10-12 | [
[
"Liu",
"Xiangbin",
""
],
[
"Du",
"Junping",
""
],
[
"Liang",
"Meiyu",
""
],
[
"Li",
"Ang",
""
]
] | Technology videos contain rich multi-modal information. In cross-modal information search, the data features of different modalities cannot be compared directly, so the semantic gap between different modalities is a key problem that needs to be solved. To address the above problems, this paper proposes a novel Feature Fusion based Adversarial Cross-modal Retrieval method (FFACR) to achieve text-to-video matching, ranking and searching. The proposed method uses the framework of adversarial learning to construct a video multimodal feature fusion network and a feature mapping network as generator, a modality discrimination network as discriminator. Multi-modal features of videos are obtained by the feature fusion network. The feature mapping network projects multi-modal features into the same semantic space based on semantics and similarity. The modality discrimination network is responsible for determining the original modality of features. Generator and discriminator are trained alternately based on adversarial learning, so that the data obtained by the feature mapping network is semantically consistent with the original data and the modal features are eliminated, and finally the similarity is used to rank and obtain the search results in the semantic space. Experimental results demonstrate that the proposed method performs better in text-to-video search than other existing methods, and validate the effectiveness of the method on the self-built datasets of technology videos. |
1303.3164 | Uma Sawant | Uma Sawant and Soumen Chakrabarti | Features and Aggregators for Web-scale Entity Search | 10 pages, 12 figures including tables | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We focus on two research issues in entity search: scoring a document or
snippet that potentially supports a candidate entity, and aggregating scores
from different snippets into an entity score. Proximity scoring has been
studied in IR outside the scope of entity search. However, aggregation has been
hardwired except in a few cases where probabilistic language models are used.
We instead explore simple, robust, discriminative ranking algorithms, with
informative snippet features and broad families of aggregation functions. Our
first contribution is a study of proximity-cognizant snippet features. In
contrast with prior work which uses hardwired "proximity kernels" that
implement a fixed decay with distance, we present a "universal" feature
encoding which jointly expresses the perplexity (informativeness) of a query
term match and the proximity of the match to the entity mention. Our second
contribution is a study of aggregation functions. Rather than train the ranking
algorithm on snippets and then aggregate scores, we directly train on entities
such that the ranking algorithm takes into account the aggregation function
being used. Our third contribution is an extensive Web-scale evaluation of the
above algorithms on two data sets having quite different properties and
behavior. The first one is the W3C dataset used in TREC-scale enterprise
search, with pre-annotated entity mentions. The second is a Web-scale
open-domain entity search dataset consisting of 500 million Web pages, which
contain about 8 billion token spans annotated automatically with two million
entities from 200,000 entity types in Wikipedia. On the TREC dataset, the
performance of our system is comparable to the currently prevalent systems. On
the much larger and noisier Web dataset, our system delivers significantly
better performance than all other systems, with 8% MAP improvement over the
closest competitor.
| [
{
"created": "Wed, 13 Mar 2013 14:06:49 GMT",
"version": "v1"
}
] | 2013-03-14 | [
[
"Sawant",
"Uma",
""
],
[
"Chakrabarti",
"Soumen",
""
]
] | We focus on two research issues in entity search: scoring a document or snippet that potentially supports a candidate entity, and aggregating scores from different snippets into an entity score. Proximity scoring has been studied in IR outside the scope of entity search. However, aggregation has been hardwired except in a few cases where probabilistic language models are used. We instead explore simple, robust, discriminative ranking algorithms, with informative snippet features and broad families of aggregation functions. Our first contribution is a study of proximity-cognizant snippet features. In contrast with prior work which uses hardwired "proximity kernels" that implement a fixed decay with distance, we present a "universal" feature encoding which jointly expresses the perplexity (informativeness) of a query term match and the proximity of the match to the entity mention. Our second contribution is a study of aggregation functions. Rather than train the ranking algorithm on snippets and then aggregate scores, we directly train on entities such that the ranking algorithm takes into account the aggregation function being used. Our third contribution is an extensive Web-scale evaluation of the above algorithms on two data sets having quite different properties and behavior. The first one is the W3C dataset used in TREC-scale enterprise search, with pre-annotated entity mentions. The second is a Web-scale open-domain entity search dataset consisting of 500 million Web pages, which contain about 8 billion token spans annotated automatically with two million entities from 200,000 entity types in Wikipedia. On the TREC dataset, the performance of our system is comparable to the currently prevalent systems. On the much larger and noisier Web dataset, our system delivers significantly better performance than all other systems, with 8% MAP improvement over the closest competitor. |
2208.07624 | Rosalia Tufano | Rosalia Tufano, Emad Aghajani, Gabriele Bavota | Don't Reinvent the Wheel: Towards Automatic Replacement of Custom
Implementations with APIs | null | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reusing code is a common practice in software development: It helps
developers speed up the implementation task while also reducing the chances of
introducing bugs, given the assumption that the reused code has been tested,
possibly in production. Despite these benefits, opportunities for reuse are not
always in plain sight and, thus, developers may miss them. We present our
preliminary steps in building RETIWA, a recommender able to automatically
identify custom implementations in a given project that are good candidates to
be replaced by open source APIs. RETIWA relies on a ``knowledge base''
consisting of real examples of custom implementation-to-API replacements. In
this work, we present the mining strategy we tailored to automatically and
reliably extract replacements of custom implementations with APIs from open
source projects. This is the first step towards building the envisioned
recommender.
| [
{
"created": "Tue, 16 Aug 2022 09:20:00 GMT",
"version": "v1"
}
] | 2022-08-17 | [
[
"Tufano",
"Rosalia",
""
],
[
"Aghajani",
"Emad",
""
],
[
"Bavota",
"Gabriele",
""
]
] | Reusing code is a common practice in software development: It helps developers speed up the implementation task while also reducing the chances of introducing bugs, given the assumption that the reused code has been tested, possibly in production. Despite these benefits, opportunities for reuse are not always in plain sight and, thus, developers may miss them. We present our preliminary steps in building RETIWA, a recommender able to automatically identify custom implementations in a given project that are good candidates to be replaced by open source APIs. RETIWA relies on a ``knowledge base'' consisting of real examples of custom implementation-to-API replacements. In this work, we present the mining strategy we tailored to automatically and reliably extract replacements of custom implementations with APIs from open source projects. This is the first step towards building the envisioned recommender. |
1912.10558 | Renuka Sindhgatta | Renuka Sindhgatta, Chun Ouyang, Catarina Moreira | Exploring Interpretability for Predictive Process Analytics | 15 pages, 7 figures | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modern predictive analytics underpinned by machine learning techniques has
become a key enabler to the automation of data-driven decision making. In the
context of business process management, predictive analytics has been applied
to making predictions about the future state of an ongoing business process
instance, for example, when will the process instance complete and what will be
the outcome upon completion. Machine learning models can be trained on event
log data recording historical process execution to build the underlying
predictive models. Multiple techniques have been proposed so far which encode
the information available in an event log and construct input features required
to train a predictive model. While accuracy has been a dominant criterion in
the choice of various techniques, they are often applied as a black-box in
building predictive models. In this paper, we derive explanations using
interpretable machine learning techniques to compare and contrast the
suitability of multiple predictive models of high accuracy. The explanations
allow us to gain an understanding of the underlying reasons for a prediction
and highlight scenarios where accuracy alone may not be sufficient in assessing
the suitability of techniques used to encode event log data to features used by
a predictive model. Findings from this study motivate the need and importance
to incorporate interpretability in predictive process analytics.
| [
{
"created": "Sun, 22 Dec 2019 23:09:34 GMT",
"version": "v1"
},
{
"created": "Mon, 30 Mar 2020 10:42:45 GMT",
"version": "v2"
},
{
"created": "Mon, 8 Jun 2020 12:09:15 GMT",
"version": "v3"
}
] | 2020-06-09 | [
[
"Sindhgatta",
"Renuka",
""
],
[
"Ouyang",
"Chun",
""
],
[
"Moreira",
"Catarina",
""
]
] | Modern predictive analytics underpinned by machine learning techniques has become a key enabler to the automation of data-driven decision making. In the context of business process management, predictive analytics has been applied to making predictions about the future state of an ongoing business process instance, for example, when will the process instance complete and what will be the outcome upon completion. Machine learning models can be trained on event log data recording historical process execution to build the underlying predictive models. Multiple techniques have been proposed so far which encode the information available in an event log and construct input features required to train a predictive model. While accuracy has been a dominant criterion in the choice of various techniques, they are often applied as a black-box in building predictive models. In this paper, we derive explanations using interpretable machine learning techniques to compare and contrast the suitability of multiple predictive models of high accuracy. The explanations allow us to gain an understanding of the underlying reasons for a prediction and highlight scenarios where accuracy alone may not be sufficient in assessing the suitability of techniques used to encode event log data to features used by a predictive model. Findings from this study motivate the need and importance to incorporate interpretability in predictive process analytics. |
2406.15494 | Laszlo Kish | Mehmet Yildirim, Nasir Kenarangui, Robert Balog, Laszlo B. Kish,
Chanan Singh | Simple Cracking of (Noise-Based) Dynamic Watermarking in Smart Grids | Accepted for publication in Fluctuation and Noise Letters | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Previous research employing a conceptual approach with a digital twin has
demonstrated that (noise-based) dynamic watermarking is incapable of providing
unconditional security in smart electrical grid systems. However, the
implementation of digital twins can be prohibitively costly or infeasible due
to limited available data on critical infrastructure. In this study, we first
analyze the spectral properties of dynamic watermarking and its associated
protocol. Subsequently, we present a straightforward attack inspired by the
digital twin method, which extracts and utilizes the grid noises and completely
breaches the security of dynamic watermarking without requiring knowledge of
the private watermarking signal. The attacker can fully expose the grid while
evading detection by the controller. Our findings indicate that in the absence
of secure and authenticated communications, dynamic watermarking offers neither
conditional nor unconditional security. Conversely, when communication lines,
sensors, and communicators are equipped with tamper-resistant and
secure/authenticated links, dynamic watermarking becomes redundant for grid
security.
| [
{
"created": "Tue, 18 Jun 2024 23:24:22 GMT",
"version": "v1"
},
{
"created": "Thu, 27 Jun 2024 08:29:40 GMT",
"version": "v2"
}
] | 2024-06-28 | [
[
"Yildirim",
"Mehmet",
""
],
[
"Kenarangui",
"Nasir",
""
],
[
"Balog",
"Robert",
""
],
[
"Kish",
"Laszlo B.",
""
],
[
"Singh",
"Chanan",
""
]
] | Previous research employing a conceptual approach with a digital twin has demonstrated that (noise-based) dynamic watermarking is incapable of providing unconditional security in smart electrical grid systems. However, the implementation of digital twins can be prohibitively costly or infeasible due to limited available data on critical infrastructure. In this study, we first analyze the spectral properties of dynamic watermarking and its associated protocol. Subsequently, we present a straightforward attack inspired by the digital twin method, which extracts and utilizes the grid noises and completely breaches the security of dynamic watermarking without requiring knowledge of the private watermarking signal. The attacker can fully expose the grid while evading detection by the controller. Our findings indicate that in the absence of secure and authenticated communications, dynamic watermarking offers neither conditional nor unconditional security. Conversely, when communication lines, sensors, and communicators are equipped with tamper-resistant and secure/authenticated links, dynamic watermarking becomes redundant for grid security. |
2303.15940 | Gege Qi | Qi Gege and Yuefeng Chen and Xiaofeng Mao and Yao Zhu and Binyuan Hui
and Xiaodan Li and Rong Zhang and Hui Xue | TransAudio: Towards the Transferable Adversarial Audio Attack via
Learning Contextualized Perturbations | null | null | null | null | cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In a transfer-based attack against Automatic Speech Recognition (ASR)
systems, attackers are unable to access the architecture and parameters of the
target model. Existing attack methods are mostly investigated in voice
assistant scenarios with restricted voice commands, prohibiting their
applicability to more general ASR related applications. To tackle this
challenge, we propose a novel contextualized attack with deletion, insertion,
and substitution adversarial behaviors, namely TransAudio, which achieves
arbitrary word-level attacks based on the proposed two-stage framework. To
strengthen the attack transferability, we further introduce an audio
score-matching optimization strategy to regularize the training process, which
mitigates adversarial example over-fitting to the surrogate model. Extensive
experiments and analysis demonstrate the effectiveness of TransAudio against
open-source ASR models and commercial APIs.
| [
{
"created": "Tue, 28 Mar 2023 12:53:02 GMT",
"version": "v1"
}
] | 2023-03-29 | [
[
"Gege",
"Qi",
""
],
[
"Chen",
"Yuefeng",
""
],
[
"Mao",
"Xiaofeng",
""
],
[
"Zhu",
"Yao",
""
],
[
"Hui",
"Binyuan",
""
],
[
"Li",
"Xiaodan",
""
],
[
"Zhang",
"Rong",
""
],
[
"Xue",
"Hui",
""
]
] | In a transfer-based attack against Automatic Speech Recognition (ASR) systems, attackers are unable to access the architecture and parameters of the target model. Existing attack methods are mostly investigated in voice assistant scenarios with restricted voice commands, prohibiting their applicability to more general ASR related applications. To tackle this challenge, we propose a novel contextualized attack with deletion, insertion, and substitution adversarial behaviors, namely TransAudio, which achieves arbitrary word-level attacks based on the proposed two-stage framework. To strengthen the attack transferability, we further introduce an audio score-matching optimization strategy to regularize the training process, which mitigates adversarial example over-fitting to the surrogate model. Extensive experiments and analysis demonstrate the effectiveness of TransAudio against open-source ASR models and commercial APIs. |
0908.3361 | Gene Golovchinsky | Laurent Denoue, Scott Carter, John Adcock, Gene Golovchinsky, and
Andreas Girgensohn | WebNC: efficient sharing of web applications | Presented at WWW 2009, Madrid, Spain | null | null | null | cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | WebNC is a system for efficiently sharing, retrieving and viewing web
applications. Unlike existing screencasting and screensharing tools, WebNC is
optimized to work with web pages where a lot of scrolling happens. WebNC uses a
tile-based encoding to capture, transmit and deliver web applications, and
relies only on dynamic HTML and JavaScript. The resulting webcasts require very
little bandwidth and are viewable on any modern web browser including Firefox
and Internet Explorer as well as browsers on the iPhone and Android platforms.
| [
{
"created": "Mon, 24 Aug 2009 05:34:57 GMT",
"version": "v1"
}
] | 2009-08-25 | [
[
"Denoue",
"Laurent",
""
],
[
"Carter",
"Scott",
""
],
[
"Adcock",
"John",
""
],
[
"Golovchinsky",
"Gene",
""
],
[
"Girgensohn",
"Andreas",
""
]
] | WebNC is a system for efficiently sharing, retrieving and viewing web applications. Unlike existing screencasting and screensharing tools, WebNC is optimized to work with web pages where a lot of scrolling happens. WebNC uses a tile-based encoding to capture, transmit and deliver web applications, and relies only on dynamic HTML and JavaScript. The resulting webcasts require very little bandwidth and are viewable on any modern web browser including Firefox and Internet Explorer as well as browsers on the iPhone and Android platforms. |
1503.07551 | Helio M. de Oliveira | P. Carrion, H.M. de Oliveira and R.M. Campello de Souza | A Low-throughput Wavelet-based Steganography Audio Scheme | 2 pages, 1 figure, conference: 8th Brazilian Symposium on Information
and Computer System Security, 2008, Gramado, RS, Brazil | null | null | null | cs.MM cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a preliminary version of a novel steganography scheme, and
introduces the idea of combining two secret keys in the operation. The first
secret key encrypts the text using a standard cryptographic scheme (e.g. IDEA,
SAFER+, etc.) prior to the wavelet audio decomposition. The way in which the
cipher text is embedded in the file requires another key, namely a stego-key,
which is associated with features of the audio wavelet analysis.
| [
{
"created": "Thu, 5 Feb 2015 03:15:25 GMT",
"version": "v1"
}
] | 2015-03-27 | [
[
"Carrion",
"P.",
""
],
[
"de Oliveira",
"H. M.",
""
],
[
"de Souza",
"R. M. Campello",
""
]
] | This paper presents a preliminary version of a novel steganography scheme, and introduces the idea of combining two secret keys in the operation. The first secret key encrypts the text using a standard cryptographic scheme (e.g. IDEA, SAFER+, etc.) prior to the wavelet audio decomposition. The way in which the cipher text is embedded in the file requires another key, namely a stego-key, which is associated with features of the audio wavelet analysis. |
2307.09683 | Qiao Jin | Qiao Jin, Robert Leaman, Zhiyong Lu | PubMed and Beyond: Biomedical Literature Search in the Age of Artificial
Intelligence | 27 pages, 6 figures, 36 tools | eBioMedicine, 2024 | 10.1016/j.ebiom.2024.104988 | null | cs.IR cs.AI cs.DL | http://creativecommons.org/licenses/by/4.0/ | Biomedical research yields a wealth of information, much of which is only
accessible through the literature. Consequently, literature search is an
essential tool for building on prior knowledge in clinical and biomedical
research. Although recent improvements in artificial intelligence have expanded
functionality beyond keyword-based search, these advances may be unfamiliar to
clinicians and researchers. In response, we present a survey of literature
search tools tailored to both general and specific information needs in
biomedicine, with the objective of helping readers efficiently fulfill their
information needs. We first examine the widely used PubMed search engine,
discussing recent improvements and continued challenges. We then describe
literature search tools catering to five specific information needs: 1.
Identifying high-quality clinical research for evidence-based medicine. 2.
Retrieving gene-related information for precision medicine and genomics. 3.
Searching by meaning, including natural language questions. 4. Locating related
articles with literature recommendation. 5. Mining literature to discover
associations between concepts such as diseases and genetic variants.
Additionally, we cover practical considerations and best practices for choosing
and using these tools. Finally, we provide a perspective on the future of
literature search engines, considering recent breakthroughs in large language
models such as ChatGPT. In summary, our survey provides a comprehensive view of
biomedical literature search functionalities with 36 publicly available tools.
| [
{
"created": "Tue, 18 Jul 2023 23:35:53 GMT",
"version": "v1"
},
{
"created": "Mon, 24 Jul 2023 15:41:03 GMT",
"version": "v2"
},
{
"created": "Thu, 21 Sep 2023 13:55:48 GMT",
"version": "v3"
}
] | 2024-04-10 | [
[
"Jin",
"Qiao",
""
],
[
"Leaman",
"Robert",
""
],
[
"Lu",
"Zhiyong",
""
]
] | Biomedical research yields a wealth of information, much of which is only accessible through the literature. Consequently, literature search is an essential tool for building on prior knowledge in clinical and biomedical research. Although recent improvements in artificial intelligence have expanded functionality beyond keyword-based search, these advances may be unfamiliar to clinicians and researchers. In response, we present a survey of literature search tools tailored to both general and specific information needs in biomedicine, with the objective of helping readers efficiently fulfill their information needs. We first examine the widely used PubMed search engine, discussing recent improvements and continued challenges. We then describe literature search tools catering to five specific information needs: 1. Identifying high-quality clinical research for evidence-based medicine. 2. Retrieving gene-related information for precision medicine and genomics. 3. Searching by meaning, including natural language questions. 4. Locating related articles with literature recommendation. 5. Mining literature to discover associations between concepts such as diseases and genetic variants. Additionally, we cover practical considerations and best practices for choosing and using these tools. Finally, we provide a perspective on the future of literature search engines, considering recent breakthroughs in large language models such as ChatGPT. In summary, our survey provides a comprehensive view of biomedical literature search functionalities with 36 publicly available tools. |
2311.07329 | Huanyu Wu | Huanyu Wu, Chentao Yue, Lei Zhang, Yonghui Li, and Muhammad Ali Imran | When Distributed Consensus Meets Wireless Connected Autonomous Systems:
A Review and A DAG-based Approach | null | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The connected and autonomous systems (CAS) and auto-driving era is coming
into our lives. To support CAS applications such as AI-driven decision-making
and blockchain-based smart data management platform, data and message
exchange/dissemination is a fundamental element. The distributed message
broadcast and forward protocols in CAS, such as vehicular ad hoc networks
(VANET), can suffer from significant message loss and uncertain transmission
delay, and faulty nodes might disseminate fake messages to confuse the network.
Therefore, the consensus mechanism is essential in CAS with distributed
structure to guarantee that correct nodes agree on the same parameter and reach
consistency. However, due to the wireless nature of CAS, traditional consensus
cannot be directly deployed. This article reviews several existing consensus
mechanisms, including average/maximum/minimum estimation consensus mechanisms
that apply to quantities, Byzantine fault tolerance consensus for requests, state
machine replication (SMR) and blockchain, as well as their implementations in
CAS. To deploy wireless-adapted consensus, we propose a Directed Acyclic Graph
(DAG)-based message structure to build a non-equivocation data dissemination
protocol for CAS, which has resilience against message loss and unpredictable
forwarding latency. Finally, we enhance this protocol by developing a
two-dimensional DAG-based strategy to achieve partial order for blockchain and
total order for the distributed service model SMR.
| [
{
"created": "Mon, 13 Nov 2023 13:31:53 GMT",
"version": "v1"
}
] | 2023-11-14 | [
[
"Wu",
"Huanyu",
""
],
[
"Yue",
"Chentao",
""
],
[
"Zhang",
"Lei",
""
],
[
"Li",
"Yonghui",
""
],
[
"Imran",
"Muhammad Ali",
""
]
] | The connected and autonomous systems (CAS) and auto-driving era is coming into our lives. To support CAS applications such as AI-driven decision-making and blockchain-based smart data management platform, data and message exchange/dissemination is a fundamental element. The distributed message broadcast and forward protocols in CAS, such as vehicular ad hoc networks (VANET), can suffer from significant message loss and uncertain transmission delay, and faulty nodes might disseminate fake messages to confuse the network. Therefore, the consensus mechanism is essential in CAS with distributed structure to guarantee that correct nodes agree on the same parameter and reach consistency. However, due to the wireless nature of CAS, traditional consensus cannot be directly deployed. This article reviews several existing consensus mechanisms, including average/maximum/minimum estimation consensus mechanisms that apply to quantities, Byzantine fault tolerance consensus for requests, state machine replication (SMR) and blockchain, as well as their implementations in CAS. To deploy wireless-adapted consensus, we propose a Directed Acyclic Graph (DAG)-based message structure to build a non-equivocation data dissemination protocol for CAS, which has resilience against message loss and unpredictable forwarding latency. Finally, we enhance this protocol by developing a two-dimensional DAG-based strategy to achieve partial order for blockchain and total order for the distributed service model SMR. |