id stringlengths 9–10 | submitter stringlengths 1–64 ⌀ | authors stringlengths 4–20.7k | title stringlengths 4–246 | comments stringlengths 1–523 ⌀ | journal-ref stringlengths 4–404 ⌀ | doi stringlengths 11–153 ⌀ | report-no stringlengths 2–254 ⌀ | categories stringlengths 5–98 | license stringclasses 9 values | orig_abstract stringlengths 14–3.35k | versions listlengths 1–60 | update_date stringlengths 10–10 | authors_parsed listlengths 1–1.35k | abstract stringlengths 11–3.34k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1807.03633
|
Tong Wang
|
Tong Wang, Veerajalandhar Allareddy, Sankeerth Rampa and
Veerasathpurush Allareddy
|
Interpretable Patient Mortality Prediction with Multi-value Rule Sets
|
arXiv admin note: text overlap with arXiv:1710.05257
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a Multi-vAlue Rule Set (MRS) model for predicting in-hospital
patient mortality. Compared to rule sets built from single-valued rules, MRS
adopts a more generalized form of association rules that allows multiple values
in a condition. Rules of this form are more concise than classical
single-valued rules in capturing and describing patterns in data. Our
formulation also pursues a higher efficiency of feature utilization, which
reduces the possible cost of data collection and storage. We propose a Bayesian
framework for formulating an MRS model and an efficient inference method for
learning a maximum \emph{a posteriori} solution, incorporating theoretically
grounded bounds to iteratively reduce the search space and improve the search
efficiency. Experiments show that our model achieves better performance than
baseline methods, including the current system used by the hospital.
|
[
{
"created": "Fri, 6 Jul 2018 22:47:19 GMT",
"version": "v1"
},
{
"created": "Mon, 23 Jul 2018 14:57:51 GMT",
"version": "v2"
}
] |
2018-07-24
|
[
[
"Wang",
"Tong",
""
],
[
"Allareddy",
"Veerajalandhar",
""
],
[
"Rampa",
"Sankeerth",
""
],
[
"Allareddy",
"Veerasathpurush",
""
]
] |
|
2102.02969
|
Jianhao Ma
|
Jianhao Ma and Salar Fattahi
|
Sign-RIP: A Robust Restricted Isometry Property for Low-rank Matrix
Recovery
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The restricted isometry property (RIP), which essentially states that the
linear measurements are approximately norm-preserving, plays a crucial role in
studying the low-rank matrix recovery problem. However, RIP fails in the robust
setting, where a subset of the measurements is grossly corrupted with noise. In
this work, we propose a robust restricted isometry property, called Sign-RIP,
and show its broad applications in robust low-rank matrix recovery. In
particular, we show that Sign-RIP can guarantee the uniform convergence of the
subdifferentials of robust matrix recovery with a nonsmooth loss function,
even in the presence of arbitrarily dense and arbitrarily large outliers. Based
on Sign-RIP, we characterize the location of the critical points in robust
rank-1 matrix recovery, and prove that they are either close to the true
solution or have small norm. Moreover, in the over-parameterized regime, where
the rank of the true solution is over-estimated, we show that the subgradient
method converges to the true solution at a (nearly) dimension-free rate.
Finally, we show that Sign-RIP enjoys almost the same complexity as its
classical counterparts, but provides significantly better robustness against
noise.
|
[
{
"created": "Fri, 5 Feb 2021 02:52:00 GMT",
"version": "v1"
},
{
"created": "Wed, 21 Apr 2021 15:37:21 GMT",
"version": "v2"
},
{
"created": "Tue, 28 Sep 2021 15:00:18 GMT",
"version": "v3"
}
] |
2021-09-29
|
[
[
"Ma",
"Jianhao",
""
],
[
"Fattahi",
"Salar",
""
]
] |
|
2011.14496
|
Yikai Wang
|
Yikai Wang and Weijian Li
|
Blind signal decomposition of various word embeddings based on join and
individual variance explained
|
9 pages, 10 figures
| null | null | null |
cs.CL cs.AI stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
In recent years, natural language processing (NLP) has become one of the most
important research areas, with various applications in everyday life. As its
most fundamental task, word embedding still requires more attention and
research. Currently, existing work on word embeddings focuses on proposing
novel embedding algorithms and on dimension-reduction techniques for
well-trained word embeddings. In this paper, we propose to use a novel joint
signal separation method, JIVE, to jointly decompose various trained word
embeddings into joint and individual components. Through this decomposition
framework, we can easily investigate the similarities and differences among
different word embeddings. We conducted an extensive empirical study on
word2vec, FastText and GloVe embeddings trained on different corpora and with
different dimensions. We compared the performance of the decomposed components
on sentiment analysis using Twitter data and the Stanford Sentiment Treebank.
We found that by mapping different word embeddings into the joint component,
sentiment performance can be greatly improved for the original word embeddings
with lower performance. Moreover, we found that by concatenating different
components together, the same model can achieve better performance. These
findings provide insights into word embeddings, and our work offers a new way
of generating word embeddings by fusion.
|
[
{
"created": "Mon, 30 Nov 2020 01:36:29 GMT",
"version": "v1"
}
] |
2020-12-01
|
[
[
"Wang",
"Yikai",
""
],
[
"Li",
"Weijian",
""
]
] |
|
2403.02506
|
Chuan Guo
|
Tom Sander, Yaodong Yu, Maziar Sanjabi, Alain Durmus, Yi Ma, Kamalika
Chaudhuri, Chuan Guo
|
Differentially Private Representation Learning via Image Captioning
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Differentially private (DP) machine learning is considered the gold-standard
solution for training a model from sensitive data while still preserving
privacy. However, a major barrier to achieving this ideal is its sub-optimal
privacy-accuracy trade-off, which is particularly visible in DP representation
learning. Specifically, it has been shown that under modest privacy budgets,
most models learn representations that are not significantly better than
hand-crafted features. In this work, we show that effective DP representation
learning can be done via image captioning and scaling up to internet-scale
multimodal datasets. Through a series of engineering tricks, we successfully
train a DP image captioner (DP-Cap) on a 233M subset of LAION-2B from scratch
using a reasonable amount of computation, and obtain unprecedentedly
high-quality image features that can be used in a variety of downstream vision
and vision-language tasks. For example, under a privacy budget of
$\varepsilon=8$, a linear classifier trained on top of learned DP-Cap features
attains 65.8% accuracy on ImageNet-1K, considerably improving upon the previous
SOTA of 56.5%. Our work challenges the prevailing sentiment that high-utility
DP representation learning cannot be achieved by training from scratch.
|
[
{
"created": "Mon, 4 Mar 2024 21:52:25 GMT",
"version": "v1"
}
] |
2024-03-06
|
[
[
"Sander",
"Tom",
""
],
[
"Yu",
"Yaodong",
""
],
[
"Sanjabi",
"Maziar",
""
],
[
"Durmus",
"Alain",
""
],
[
"Ma",
"Yi",
""
],
[
"Chaudhuri",
"Kamalika",
""
],
[
"Guo",
"Chuan",
""
]
] |
|
1905.07663
|
Anuj Singh
|
Anuj Singh
|
Regions In a Linked Dataset For Change Detection
|
This is a doctoral consortium paper that was accepted at ISWC 2018
but was not published there, as the author was unable to attend the
conference
| null | null | null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Linked Datasets (LDs) are constantly evolving, and applications using a
Linked Dataset (LD) may face several issues, such as outdated data or broken
interlinks, due to the evolution of the dataset. To overcome these issues,
detecting changes in LDs as they evolve has proven crucial. As LDs evolve
frequently, change detection should also be performed at frequent intervals.
However, due to limitations on available computational resources, such as the
capacity to fetch data from an LD and the time to detect changes, frequent
change detection may not be possible with existing change detection techniques.
This research proposes to explore the notion of prioritizing regions (subsets)
of LDs for change detection, with the aim of achieving optimal accuracy and
efficient use of the available computational resources. This will facilitate
the detection of changes in an evolving LD at frequent intervals and will allow
applications to keep their data closest to the real-time data.
|
[
{
"created": "Sun, 19 May 2019 00:01:05 GMT",
"version": "v1"
}
] |
2019-05-21
|
[
[
"Singh",
"Anuj",
""
]
] |
|
2406.17096
|
Yudan Wang
|
Yudan Wang, Shaofeng Zou, Yue Wang
|
Model-Free Robust Reinforcement Learning with Sample Complexity Analysis
|
UAI 2024
| null | null | null |
cs.LG cs.AI stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
Distributionally Robust Reinforcement Learning (DR-RL) aims to derive a
policy optimizing the worst-case performance within a predefined uncertainty
set. Despite extensive research, previous DR-RL algorithms have predominantly
favored model-based approaches, with limited availability of model-free methods
offering convergence guarantees or sample complexities. This paper proposes a
model-free DR-RL algorithm leveraging the Multi-level Monte Carlo (MLMC)
technique to close this gap. Our approach integrates a threshold mechanism
that ensures finite sample requirements for algorithmic implementation, a
significant improvement over previous model-free algorithms. We develop
algorithms for uncertainty sets defined by total variation, chi-square
divergence, and KL divergence, and provide finite-sample analyses in all three
cases. Remarkably, our algorithms represent the first model-free DR-RL
approach featuring finite sample complexity for the total variation and
chi-square divergence uncertainty sets, while also offering an improved sample
complexity and broader applicability compared to existing model-free DR-RL
algorithms for the KL divergence model. The complexities of our method
establish the tightest results for all three uncertainty models in model-free
DR-RL, underscoring the effectiveness and efficiency of our algorithm and
highlighting its potential for practical applications.
|
[
{
"created": "Mon, 24 Jun 2024 19:35:26 GMT",
"version": "v1"
}
] |
2024-06-26
|
[
[
"Wang",
"Yudan",
""
],
[
"Zou",
"Shaofeng",
""
],
[
"Wang",
"Yue",
""
]
] |
|
1701.07976
|
Fanny Jardel
|
Joseph J. Boutros, Fanny Jardel, and Cyril M\'easson
|
Probabilistic Shaping and Non-Binary Codes
|
Submitted to IEEE International Symposium on Information Theory
(ISIT) 2017
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We generalize probabilistic amplitude shaping (PAS) with binary codes to the
case of non-binary codes defined over prime finite fields. First, we
introduce probabilistic shaping via time sharing, where shaping applies to
information symbols only. Then, we design circular quadrature amplitude
modulations (CQAM) that allow PAS to be directly generalized to prime finite
fields with full shaping.
|
[
{
"created": "Fri, 27 Jan 2017 08:56:26 GMT",
"version": "v1"
}
] |
2017-01-30
|
[
[
"Boutros",
"Joseph J.",
""
],
[
"Jardel",
"Fanny",
""
],
[
"Méasson",
"Cyril",
""
]
] |
|
2201.08169
|
Tong Zhang
|
Tong Zhang, Dongsheng Chen, Na Li, Yufan Zhuang, Bojie Lv, and Rui
Wang
|
Secure Rate-Splitting for MIMO Broadcast Channel with Imperfect CSIT and
a Jammer
|
6 pages, 3 figures
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we investigate secure rate-splitting for the two-user
multiple-input multiple-output (MIMO) broadcast channel with imperfect channel
state information at the transmitter (CSIT) and a multiple-antenna jammer,
where each receiver has an equal number of antennas and the jammer has perfect
channel state information (CSI). Specifically, we design a secure
rate-splitting multiple-access strategy, in which the security of the split
private and common messages is ensured by a precoder design that jointly nulls
and aligns the leakage information, for the different antenna configurations.
Moreover, we show that the sum secure degrees-of-freedom (SDoF) achieved by
secure rate-splitting is optimal and outperforms that of conventional
zero-forcing. Thereby, we reveal the sum-SDoF of the two-user MIMO broadcast
channel with imperfect CSIT and a jammer, and validate the superiority of
rate-splitting for security purposes in this MIMO scenario.
|
[
{
"created": "Thu, 20 Jan 2022 13:35:12 GMT",
"version": "v1"
},
{
"created": "Sun, 23 Jan 2022 07:28:08 GMT",
"version": "v2"
},
{
"created": "Mon, 28 Mar 2022 06:05:18 GMT",
"version": "v3"
},
{
"created": "Mon, 11 Jul 2022 02:16:03 GMT",
"version": "v4"
}
] |
2022-07-12
|
[
[
"Zhang",
"Tong",
""
],
[
"Chen",
"Dongsheng",
""
],
[
"Li",
"Na",
""
],
[
"Zhuang",
"Yufan",
""
],
[
"Lv",
"Bojie",
""
],
[
"Wang",
"Rui",
""
]
] |
|
1705.00949
|
Christian Mostegel
|
Christian Mostegel and Rudolf Prettenthaler and Friedrich Fraundorfer
and Horst Bischof
|
Scalable Surface Reconstruction from Point Clouds with Extreme Scale and
Density Diversity
|
This paper was accepted to the IEEE Conference on Computer Vision and
Pattern Recognition (CVPR), 2017. The copyright was transferred to IEEE
(ieee.org). The official version of the paper will be made available on IEEE
Xplore (R) (ieeexplore.ieee.org). This version of the paper also contains the
supplementary material, which will not appear on IEEE Xplore (R)
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we present a scalable approach for robustly computing a 3D
surface mesh from multi-scale multi-view stereo point clouds that can handle
extreme jumps in point density (in our experiments, three orders of magnitude).
The backbone of our approach is a combination of octree data partitioning,
local Delaunay tetrahedralization and graph cut optimization. Graph cut
optimization is used twice: once to extract surface hypotheses from local
Delaunay tetrahedralizations, and once to merge overlapping surface hypotheses
even when the local tetrahedralizations do not share the same topology. This
formulation allows us to obtain a constant memory consumption per sub-problem
while at the same time retaining the density-independent interpolation
properties of the Delaunay-based optimization. On multiple public datasets, we
demonstrate that our approach is highly competitive with the state-of-the-art
in terms of accuracy, completeness and outlier resilience. Further, we
demonstrate the multi-scale potential of our approach by processing a newly
recorded dataset with 2 billion points and a point density variation of more
than four orders of magnitude, requiring less than 9 GB of RAM per process.
|
[
{
"created": "Tue, 2 May 2017 13:13:47 GMT",
"version": "v1"
}
] |
2017-05-03
|
[
[
"Mostegel",
"Christian",
""
],
[
"Prettenthaler",
"Rudolf",
""
],
[
"Fraundorfer",
"Friedrich",
""
],
[
"Bischof",
"Horst",
""
]
] |
|
1704.02819
|
Hugues Verdure
|
Trygve Johnsen, Hugues Verdure
|
Flags of almost affine codes
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We describe a two-party wire-tap channel of type II in the framework of
almost affine codes. Its cryptological performance is related to some relative
profiles of a pair of almost affine codes. These profiles are analogues of
relative generalized Hamming weights in the linear case.
|
[
{
"created": "Mon, 10 Apr 2017 12:10:21 GMT",
"version": "v1"
}
] |
2017-04-11
|
[
[
"Johnsen",
"Trygve",
""
],
[
"Verdure",
"Hugues",
""
]
] |
|
2001.09624
|
Jiasi Weng
|
Jiasi Weng, Jian Weng, Yue Zhang, Ming Li, Zhaodi Wen
|
SecEL: Privacy-Preserving, Verifiable and Fault-Tolerant Edge Learning
for Autonomous Vehicles
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mobile edge computing (MEC) is an emerging technology for transforming
cloud-based computing services into edge-based ones. The autonomous vehicular
network (AVNET), one of the most promising applications of MEC, can feature
edge learning and communication techniques that improve the safety of
autonomous vehicles (AVs). This paper focuses on edge learning in the AVNET,
where AVs at the edge of the network share model parameters instead of data in
a distributed manner, and an aggregator (e.g., a base station) aggregates the
parameters from the AVs and finally obtains a trained model. Despite its
promise, security issues such as data leakage, violations of computation
integrity and faulty connections are not fully considered in existing edge
learning schemes. To the best of our knowledge, no effective scheme
simultaneously covers all of the foregoing security issues. Therefore, we
propose \textit{SecEL}, a privacy-preserving, verifiable and fault-tolerant
scheme for edge learning in the AVNET. First, we leverage the primitive of
bivariate polynomial-based secret sharing to encrypt model parameters with
one-time padding. Second, we use a homomorphic authenticator based on message
authentication codes to support verifiable computation. Third, we mitigate the
computation failure problem caused by faulty connections. Finally, we simulate
and evaluate SecEL in terms of time cost, throughput and classification
accuracy. The experimental results demonstrate the effectiveness of SecEL.
|
[
{
"created": "Mon, 27 Jan 2020 08:30:23 GMT",
"version": "v1"
},
{
"created": "Fri, 31 Jan 2020 14:09:29 GMT",
"version": "v2"
},
{
"created": "Sun, 16 Feb 2020 08:42:15 GMT",
"version": "v3"
}
] |
2020-02-18
|
[
[
"Weng",
"Jiasi",
""
],
[
"Weng",
"Jian",
""
],
[
"Zhang",
"Yue",
""
],
[
"Li",
"Ming",
""
],
[
"Wen",
"Zhaodi",
""
]
] |
|
1812.00884
|
Shazia Akbar
|
Shazia Akbar, Anne L. Martel
|
Cluster-Based Learning from Weakly Labeled Bags in Digital Pathology
|
Machine Learning for Health (ML4H) Workshop at NeurIPS 2018
| null | null |
ML4H/2018/27
|
cs.CV cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To alleviate the burden of gathering detailed expert annotations when
training deep neural networks, we propose a weakly supervised learning approach
to recognize metastases in microscopic images of breast lymph nodes. We
describe an alternative training loss which clusters weakly labeled bags in
latent space to inform the relevance of patch instances during training of a
convolutional neural network. We evaluate our method on the Camelyon dataset,
which contains high-resolution digital slides of breast lymph nodes, where
labels are provided at the image level and only subsets of patches are made
available during training.
|
[
{
"created": "Wed, 28 Nov 2018 15:05:22 GMT",
"version": "v1"
}
] |
2018-12-04
|
[
[
"Akbar",
"Shazia",
""
],
[
"Martel",
"Anne L.",
""
]
] |
|
1605.08154
|
Chongyang Wang
|
Chongyang Wang, Ming Peng, Lingfeng Xu, Tong Chen
|
A single scale retinex based method for palm vein extraction
|
4 pages, 4 figures, received by the 2016 IEEE Information Technology,
Networking, Electronic and Automation Control Conference (ITNEC 2016)
| null |
10.1109/ITNEC.2016.7560322
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Palm vein recognition is a novel biometric identification technology. However,
how to obtain a better vein extraction result from the raw palm image is still
a challenging problem, especially when the raw data suffer from asymmetric
illumination. This paper proposes a method based on the single-scale Retinex
algorithm to extract the palm vein image when strong shadows are present due to
asymmetric illumination and the uneven geometry of the palm. We test our method
on a multispectral palm image. The experimental results show that the proposed
method is robust to the influence of illumination angle and shadow. Compared to
traditional extraction methods, the proposed method can obtain palm vein lines
with better visualization performance (the contrast ratio increases by 18.4%,
entropy increases by 1.07%, and definition increases by 18.8%).
|
[
{
"created": "Thu, 26 May 2016 06:09:24 GMT",
"version": "v1"
}
] |
2016-11-15
|
[
[
"Wang",
"Chongyang",
""
],
[
"Peng",
"Ming",
""
],
[
"Xu",
"Lingfeng",
""
],
[
"Chen",
"Tong",
""
]
] |
Palm vein recognition is a novel biometric identification technology. But how to gain a better vein extraction result from the raw palm image is still a challenging problem, especially when the raw data collection has the problem of asymmetric illumination. This paper proposes a method based on the single scale Retinex algorithm to extract the palm vein image when strong shadows are present due to asymmetric illumination and uneven geometry of the palm. We test our method on a multispectral palm image. The experimental result shows that the proposed method is robust to the influence of illumination angle and shadow. Compared to the traditional extraction methods, the proposed method can obtain palm vein lines with better visualization performance (the contrast ratio increases by 18.4%, entropy increases by 1.07%, and definition increases by 18.8%).
|
0810.4196
|
M. H. van Emden
|
W.W. Edmonson and M.H. van Emden
|
Interval Semantics for Standard Floating-Point Arithmetic
|
10 pages
| null | null |
DCS-323-IR
|
cs.NA cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
If the non-zero finite floating-point numbers are interpreted as point
intervals, then the effect of rounding can be interpreted as computing one of
the bounds of the result according to interval arithmetic. We give an interval
interpretation for the signed zeros and infinities, so that the undefined
operations 0*inf, inf - inf, inf/inf, and 0/0 become defined.
In this way no operation remains that gives rise to an error condition.
Mathematically questionable features of the floating-point standard become
well-defined sets of reals. Interval semantics provides a basis for the
verification of numerical algorithms. We derive the results of the newly
defined operations and consider the implications for hardware implementation.
|
[
{
"created": "Thu, 23 Oct 2008 03:32:47 GMT",
"version": "v1"
}
] |
2008-10-24
|
[
[
"Edmonson",
"W. W.",
""
],
[
"van Emden",
"M. H.",
""
]
] |
If the non-zero finite floating-point numbers are interpreted as point intervals, then the effect of rounding can be interpreted as computing one of the bounds of the result according to interval arithmetic. We give an interval interpretation for the signed zeros and infinities, so that the undefined operations 0*inf, inf - inf, inf/inf, and 0/0 become defined. In this way no operation remains that gives rise to an error condition. Mathematically questionable features of the floating-point standard become well-defined sets of reals. Interval semantics provides a basis for the verification of numerical algorithms. We derive the results of the newly defined operations and consider the implications for hardware implementation.
|
2004.02748
|
Shreya Roy
|
Shreya Roy and Anirban Chakraborty
|
Semantic Segmentation of highly class imbalanced fully labelled 3D
volumetric biomedical images and unsupervised Domain Adaptation of the
pre-trained Segmentation Network to segment another fully unlabelled
Biomedical 3D Image stack
|
6 pages and 6 figures. Submitting to ICPR2020
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The goal of our work is to perform pixel label semantic segmentation on 3D
biomedical volumetric data. Manual annotation is always difficult for a large
bio-medical dataset. So, we consider two cases where one dataset is fully
labeled and the other dataset is assumed to be fully unlabelled. We first
perform Semantic Segmentation on the fully labeled isotropic biomedical source
data (FIBSEM) and then apply the trained model to segment the
target unlabelled dataset (SNEMI3D), which shares some similarities with the
source dataset in the context of different types of cellular bodies and other
cellular components, although the cellular components vary in size and shape.
So in this paper, we have proposed a novel approach in the context of
unsupervised domain adaptation while classifying each pixel of the target
volumetric data into cell boundary and cell body. Also, we have proposed a
novel approach to giving non-uniform weights to different pixels in the
training images while performing the pixel-level semantic segmentation in the
presence of the corresponding pixel-wise label map along with the training
original images in the source domain. We have used the Entropy Map or a
Distance Transform matrix retrieved from the given ground truth label map which
has helped to overcome the class imbalance problem in the medical image data
where the cell boundaries are extremely thin and hence, extremely prone to be
misclassified as non-boundary.
|
[
{
"created": "Fri, 13 Mar 2020 06:01:18 GMT",
"version": "v1"
}
] |
2020-04-07
|
[
[
"Roy",
"Shreya",
""
],
[
"Chakraborty",
"Anirban",
""
]
] |
The goal of our work is to perform pixel label semantic segmentation on 3D biomedical volumetric data. Manual annotation is always difficult for a large bio-medical dataset. So, we consider two cases where one dataset is fully labeled and the other dataset is assumed to be fully unlabelled. We first perform Semantic Segmentation on the fully labeled isotropic biomedical source data (FIBSEM) and then apply the trained model to segment the target unlabelled dataset (SNEMI3D), which shares some similarities with the source dataset in the context of different types of cellular bodies and other cellular components, although the cellular components vary in size and shape. So in this paper, we have proposed a novel approach in the context of unsupervised domain adaptation while classifying each pixel of the target volumetric data into cell boundary and cell body. Also, we have proposed a novel approach to giving non-uniform weights to different pixels in the training images while performing the pixel-level semantic segmentation in the presence of the corresponding pixel-wise label map along with the training original images in the source domain. We have used the Entropy Map or a Distance Transform matrix retrieved from the given ground truth label map which has helped to overcome the class imbalance problem in the medical image data where the cell boundaries are extremely thin and hence, extremely prone to be misclassified as non-boundary.
|
2303.05867
|
EPTCS
|
Ankit Kumar (Northeastern University), Andrew Walter (Northeastern
University), Panagiotis Manolios (Northeastern University)
|
Automated Grading of Automata with ACL2s
|
In Proceedings ThEdu'22, arXiv:2303.05360
|
EPTCS 375, 2023, pp. 77-91
|
10.4204/EPTCS.375.7
| null |
cs.LO cs.FL cs.SC
|
http://creativecommons.org/licenses/by/4.0/
|
Almost all Computer Science programs require students to take a course on the
Theory of Computation (ToC) which covers various models of computation such as
finite automata, push-down automata and Turing machines. ToC courses tend to
give assignments that require paper-and-pencil solutions. Grading such
assignments takes time, so students typically receive feedback for their
solutions more than a week after they complete them. We present the Automatic
Automata Checker (A2C), an open source library that enables one to construct
executable automata using definitions that mimic those found in standard
textbooks. Such constructions are easy to reason about using semantic
equivalence checks, properties and test cases. Instructors can conveniently
specify solutions in the form of their own constructions. A2C can check for
semantic equivalence between student and instructor solutions and can
immediately generate actionable feedback, which helps students better
understand the material. A2C can be downloaded and used locally by students as
well as integrated into Learning Management Systems (LMS) like Gradescope to
automatically grade student submissions and generate feedback. A2C is based on
the ACL2s interactive theorem prover, which provides advanced methods for
stating, proving and disproving properties. Since feedback is automatic, A2C
can be deployed at scale and integrated into massively open online courses.
|
[
{
"created": "Fri, 10 Mar 2023 11:37:27 GMT",
"version": "v1"
}
] |
2023-03-13
|
[
[
"Kumar",
"Ankit",
"",
"Northeastern University"
],
[
"Walter",
"Andrew",
"",
"Northeastern\n University"
],
[
"Manolios",
"Panagiotis",
"",
"Northeastern University"
]
] |
Almost all Computer Science programs require students to take a course on the Theory of Computation (ToC) which covers various models of computation such as finite automata, push-down automata and Turing machines. ToC courses tend to give assignments that require paper-and-pencil solutions. Grading such assignments takes time, so students typically receive feedback for their solutions more than a week after they complete them. We present the Automatic Automata Checker (A2C), an open source library that enables one to construct executable automata using definitions that mimic those found in standard textbooks. Such constructions are easy to reason about using semantic equivalence checks, properties and test cases. Instructors can conveniently specify solutions in the form of their own constructions. A2C can check for semantic equivalence between student and instructor solutions and can immediately generate actionable feedback, which helps students better understand the material. A2C can be downloaded and used locally by students as well as integrated into Learning Management Systems (LMS) like Gradescope to automatically grade student submissions and generate feedback. A2C is based on the ACL2s interactive theorem prover, which provides advanced methods for stating, proving and disproving properties. Since feedback is automatic, A2C can be deployed at scale and integrated into massively open online courses.
|
2206.09362
|
Wei Liu
|
Xin Xu, Wei Liu, Zheng Wang, Ruiming Hu, Qi Tian
|
Towards Generalizable Person Re-identification with a Bi-stream
Generative Model
|
There is a mistake of equation 1
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Generalizable person re-identification (re-ID) has attracted growing
attention due to its powerful adaptation capability in the unseen data domain.
However, existing solutions often neglect either crossing cameras (e.g.,
illumination and resolution differences) or pedestrian misalignments (e.g.,
viewpoint and pose discrepancies), which easily leads to poor generalization
capability when adapted to the new domain. In this paper, we formulate these
difficulties as: 1) Camera-Camera (CC) problem, which denotes the various human
appearance changes caused by different cameras; 2) Camera-Person (CP) problem,
which indicates the pedestrian misalignments caused by the same identity person
under different camera viewpoints or changing pose. To solve the above issues,
we propose a Bi-stream Generative Model (BGM) to learn the fine-grained
representations fused with camera-invariant global feature and
pedestrian-aligned local feature, which contains an encoding network and two
stream decoding sub-networks. Guided by original pedestrian images, one stream
is employed to learn a camera-invariant global feature for the CC problem via
filtering cross-camera interference factors. For the CP problem, another stream
learns a pedestrian-aligned local feature for pedestrian alignment using
information-complete densely semantically aligned part maps. Moreover, a
part-weighted loss function is presented to reduce the influence of missing
parts on pedestrian alignment. Extensive experiments demonstrate that our
method outperforms the state-of-the-art methods on the large-scale
generalizable re-ID benchmarks, involving domain generalization setting and
cross-domain setting.
|
[
{
"created": "Sun, 19 Jun 2022 09:18:25 GMT",
"version": "v1"
},
{
"created": "Sun, 26 Jun 2022 09:31:07 GMT",
"version": "v2"
}
] |
2022-06-28
|
[
[
"Xu",
"Xin",
""
],
[
"Liu",
"Wei",
""
],
[
"Wang",
"Zheng",
""
],
[
"Hu",
"Ruiming",
""
],
[
"Tian",
"Qi",
""
]
] |
Generalizable person re-identification (re-ID) has attracted growing attention due to its powerful adaptation capability in the unseen data domain. However, existing solutions often neglect either crossing cameras (e.g., illumination and resolution differences) or pedestrian misalignments (e.g., viewpoint and pose discrepancies), which easily leads to poor generalization capability when adapted to the new domain. In this paper, we formulate these difficulties as: 1) Camera-Camera (CC) problem, which denotes the various human appearance changes caused by different cameras; 2) Camera-Person (CP) problem, which indicates the pedestrian misalignments caused by the same identity person under different camera viewpoints or changing pose. To solve the above issues, we propose a Bi-stream Generative Model (BGM) to learn the fine-grained representations fused with camera-invariant global feature and pedestrian-aligned local feature, which contains an encoding network and two stream decoding sub-networks. Guided by original pedestrian images, one stream is employed to learn a camera-invariant global feature for the CC problem via filtering cross-camera interference factors. For the CP problem, another stream learns a pedestrian-aligned local feature for pedestrian alignment using information-complete densely semantically aligned part maps. Moreover, a part-weighted loss function is presented to reduce the influence of missing parts on pedestrian alignment. Extensive experiments demonstrate that our method outperforms the state-of-the-art methods on the large-scale generalizable re-ID benchmarks, involving domain generalization setting and cross-domain setting.
|
1704.00079
|
Kevin Yu
|
Kevin Yu, Ashish Kumar Budhiraja, and Pratap Tokekar
|
Algorithms for Routing of Unmanned Aerial Vehicles with Mobile
Recharging Stations
|
7 pages, 14 figures, ICRA2018 under review
| null | null | null |
cs.RO cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the problem of planning a tour for an energy-limited Unmanned Aerial
Vehicle (UAV) to visit a set of sites in the least amount of time. We envision
scenarios where the UAV can be recharged along the way either by landing on
stationary recharging stations or on Unmanned Ground Vehicles (UGVs) acting as
mobile recharging stations. This leads to a new variant of the Traveling
Salesperson Problem (TSP) with mobile recharging stations. We present an
algorithm that finds not only the order in which to visit the sites but also
when and where to land on the charging stations to recharge. Our algorithm
plans tours for the UGVs as well as determines best locations to place
stationary charging stations. While the problems we study are NP-Hard, we
present a practical solution using Generalized TSP that finds the optimal
solution. If the UGVs are slower, the algorithm also finds the minimum number
of UGVs required to support the UAV mission such that the UAV is not required
to wait for the UGV. Our simulation results show that the running time is
acceptable for reasonably sized instances in practice.
|
[
{
"created": "Fri, 31 Mar 2017 22:46:26 GMT",
"version": "v1"
},
{
"created": "Mon, 3 Jul 2017 18:47:55 GMT",
"version": "v2"
},
{
"created": "Mon, 18 Sep 2017 18:37:00 GMT",
"version": "v3"
}
] |
2017-09-20
|
[
[
"Yu",
"Kevin",
""
],
[
"Budhiraja",
"Ashish Kumar",
""
],
[
"Tokekar",
"Pratap",
""
]
] |
We study the problem of planning a tour for an energy-limited Unmanned Aerial Vehicle (UAV) to visit a set of sites in the least amount of time. We envision scenarios where the UAV can be recharged along the way either by landing on stationary recharging stations or on Unmanned Ground Vehicles (UGVs) acting as mobile recharging stations. This leads to a new variant of the Traveling Salesperson Problem (TSP) with mobile recharging stations. We present an algorithm that finds not only the order in which to visit the sites but also when and where to land on the charging stations to recharge. Our algorithm plans tours for the UGVs as well as determines best locations to place stationary charging stations. While the problems we study are NP-Hard, we present a practical solution using Generalized TSP that finds the optimal solution. If the UGVs are slower, the algorithm also finds the minimum number of UGVs required to support the UAV mission such that the UAV is not required to wait for the UGV. Our simulation results show that the running time is acceptable for reasonably sized instances in practice.
|
1908.07592
|
Cenk G\"undo\u{g}an
|
Cenk G\"undo\u{g}an, Jakob Pfender, Michael Frey, Thomas C. Schmidt,
Felix Shzu-Juraschek, Matthias W\"ahlisch
|
Gain More for Less: The Surprising Benefits of QoS Management in
Constrained NDN Networks
| null |
Proceedings of ACM ICN 2019
|
10.1145/3357150.3357404
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Quality of Service (QoS) in the IP world mainly manages forwarding resources,
i.e., link capacities and buffer spaces. In addition, Information Centric
Networking (ICN) offers resource dimensions such as in-network caches and
forwarding state. In constrained wireless networks, these resources are scarce
with a potentially high impact due to lossy radio transmission. In this paper,
we explore the two basic service qualities (i) prompt and (ii) reliable traffic
forwarding for the case of NDN. The resources we take into account are
forwarding and queuing priorities, as well as the utilization of caches and of
forwarding state space. We treat QoS resources not only in isolation, but
correlate their use on local nodes and between network members. Network-wide
coordination is based on simple, predefined QoS code points. Our findings
indicate that coordinated QoS management in ICN is more than the sum of its
parts and exceeds the impact QoS can have in the IP world.
|
[
{
"created": "Tue, 20 Aug 2019 20:12:38 GMT",
"version": "v1"
}
] |
2019-08-22
|
[
[
"Gündoğan",
"Cenk",
""
],
[
"Pfender",
"Jakob",
""
],
[
"Frey",
"Michael",
""
],
[
"Schmidt",
"Thomas C.",
""
],
[
"Shzu-Juraschek",
"Felix",
""
],
[
"Wählisch",
"Matthias",
""
]
] |
Quality of Service (QoS) in the IP world mainly manages forwarding resources, i.e., link capacities and buffer spaces. In addition, Information Centric Networking (ICN) offers resource dimensions such as in-network caches and forwarding state. In constrained wireless networks, these resources are scarce with a potentially high impact due to lossy radio transmission. In this paper, we explore the two basic service qualities (i) prompt and (ii) reliable traffic forwarding for the case of NDN. The resources we take into account are forwarding and queuing priorities, as well as the utilization of caches and of forwarding state space. We treat QoS resources not only in isolation, but correlate their use on local nodes and between network members. Network-wide coordination is based on simple, predefined QoS code points. Our findings indicate that coordinated QoS management in ICN is more than the sum of its parts and exceeds the impact QoS can have in the IP world.
|
1404.1484
|
Wenjing Liao
|
Wenjing Liao and Albert Fannjiang
|
MUSIC for Single-Snapshot Spectral Estimation: Stability and
Super-resolution
|
Studies on the super-resolution of the MUSIC algorithm have been
added in Section 4 and Section 5.4
| null | null | null |
cs.IT math.IT math.NA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper studies the problem of line spectral estimation in the continuum
of a bounded interval with one snapshot of array measurement. The
single-snapshot measurement data is turned into a Hankel data matrix which
admits the Vandermonde decomposition and is suitable for the MUSIC algorithm.
The MUSIC algorithm amounts to finding the null space (the noise space) of the
Hankel matrix, forming the noise-space correlation function and identifying the
s smallest local minima of the noise-space correlation as the frequency set.
In the noise-free case exact reconstruction is guaranteed for any arbitrary
set of frequencies as long as the number of measurements is at least twice the
number of distinct frequencies to be recovered. In the presence of noise the
stability analysis shows that the perturbation of the noise-space correlation
is proportional to the spectral norm of the noise matrix as long as the latter
is smaller than the smallest (nonzero) singular value of the noiseless Hankel
data matrix. Under the assumption that frequencies are separated by at least
twice the Rayleigh Length (RL), the stability of the noise-space correlation is
proved by means of novel discrete Ingham inequalities which provide bounds on
nonzero singular values of the noiseless Hankel data matrix.
The numerical performance of MUSIC is tested in comparison with other
algorithms such as BLO-OMP and SDP (TV-min). While BLO-OMP is the stablest
algorithm for frequencies separated above 4 RL, MUSIC becomes the best
performing one for frequencies separated between 2 RL and 3 RL. Also, MUSIC is
more efficient than other methods. MUSIC truly shines when the frequency
separation drops to 1 RL or below when all other methods fail. Indeed, the
resolution length of MUSIC decreases to zero as noise decreases to zero as a
power law with an exponent much smaller than an upper bound established by
Donoho.
|
[
{
"created": "Sat, 5 Apr 2014 15:51:33 GMT",
"version": "v1"
},
{
"created": "Sun, 21 Sep 2014 21:56:37 GMT",
"version": "v2"
}
] |
2014-09-23
|
[
[
"Liao",
"Wenjing",
""
],
[
"Fannjiang",
"Albert",
""
]
] |
This paper studies the problem of line spectral estimation in the continuum of a bounded interval with one snapshot of array measurement. The single-snapshot measurement data is turned into a Hankel data matrix which admits the Vandermonde decomposition and is suitable for the MUSIC algorithm. The MUSIC algorithm amounts to finding the null space (the noise space) of the Hankel matrix, forming the noise-space correlation function and identifying the s smallest local minima of the noise-space correlation as the frequency set. In the noise-free case exact reconstruction is guaranteed for any arbitrary set of frequencies as long as the number of measurements is at least twice the number of distinct frequencies to be recovered. In the presence of noise the stability analysis shows that the perturbation of the noise-space correlation is proportional to the spectral norm of the noise matrix as long as the latter is smaller than the smallest (nonzero) singular value of the noiseless Hankel data matrix. Under the assumption that frequencies are separated by at least twice the Rayleigh Length (RL), the stability of the noise-space correlation is proved by means of novel discrete Ingham inequalities which provide bounds on nonzero singular values of the noiseless Hankel data matrix. The numerical performance of MUSIC is tested in comparison with other algorithms such as BLO-OMP and SDP (TV-min). While BLO-OMP is the stablest algorithm for frequencies separated above 4 RL, MUSIC becomes the best performing one for frequencies separated between 2 RL and 3 RL. Also, MUSIC is more efficient than other methods. MUSIC truly shines when the frequency separation drops to 1 RL or below when all other methods fail. Indeed, the resolution length of MUSIC decreases to zero as noise decreases to zero as a power law with an exponent much smaller than an upper bound established by Donoho.
|
2003.00645
|
Yusuke Koda
|
Yusuke Koda, Jihong Park, Mehdi Bennis, Koji Yamamoto, Takayuki
Nishio, Masahiro Morikura
|
Communication-Efficient Multimodal Split Learning for mmWave Received
Power Prediction
|
5 pages, 7 figures, to be published at IEEE Communications Letters
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The goal of this study is to improve the accuracy of millimeter wave received
power prediction by utilizing camera images and radio frequency (RF) signals,
while gathering image inputs in a communication-efficient and
privacy-preserving manner. To this end, we propose a distributed multimodal
machine learning (ML) framework, coined multimodal split learning (MultSL), in
which a large neural network (NN) is split into two wirelessly connected
segments. The upper segment combines images and received powers for future
received power prediction, whereas the lower segment extracts features from
camera images and compresses its output to reduce communication costs and
privacy leakage. Experimental evaluation corroborates that MultSL achieves
higher accuracy than the baselines utilizing either images or RF signals.
Remarkably, without compromising accuracy, compressing the lower segment output
by 16x yields 16x lower communication latency and 2.8% less privacy leakage
compared to the case without compression.
|
[
{
"created": "Mon, 2 Mar 2020 03:58:35 GMT",
"version": "v1"
},
{
"created": "Tue, 3 Mar 2020 03:17:30 GMT",
"version": "v2"
}
] |
2020-03-04
|
[
[
"Koda",
"Yusuke",
""
],
[
"Park",
"Jihong",
""
],
[
"Bennis",
"Mehdi",
""
],
[
"Yamamoto",
"Koji",
""
],
[
"Nishio",
"Takayuki",
""
],
[
"Morikura",
"Masahiro",
""
]
] |
The goal of this study is to improve the accuracy of millimeter wave received power prediction by utilizing camera images and radio frequency (RF) signals, while gathering image inputs in a communication-efficient and privacy-preserving manner. To this end, we propose a distributed multimodal machine learning (ML) framework, coined multimodal split learning (MultSL), in which a large neural network (NN) is split into two wirelessly connected segments. The upper segment combines images and received powers for future received power prediction, whereas the lower segment extracts features from camera images and compresses its output to reduce communication costs and privacy leakage. Experimental evaluation corroborates that MultSL achieves higher accuracy than the baselines utilizing either images or RF signals. Remarkably, without compromising accuracy, compressing the lower segment output by 16x yields 16x lower communication latency and 2.8% less privacy leakage compared to the case without compression.
|
2003.07507
|
Saptarshi Purkayastha
|
A.K. Bhavani Singh, Mounika Guntu, Ananth Reddy Bhimireddy, Judy W.
Gichoya, Saptarshi Purkayastha
|
Multi-label natural language processing to identify diagnosis and
procedure codes from MIMIC-III inpatient notes
|
This is a shortened version of the Capstone Project that was accepted
by the Faculty of Indiana University, in partial fulfillment of the
requirements for the degree of Master of Science in Health Informatics
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the United States, 25%, or more than 200 billion dollars, of hospital
spending accounts for administrative costs that involve services for medical
coding and billing. With the increasing number of patient records, manual
assignment of these codes is overwhelming, time-consuming and
error-prone, causing billing errors.
the extraction of codes/labels from unstructured clinical notes, which can aid
human coders to save time, increase productivity, and verify medical coding
errors. Our objective is to identify appropriate diagnosis and procedure codes
from clinical notes by performing multi-label classification. We used
de-identified data of critical care patients from the MIMIC-III database and
subset the data to select the ten (top-10) and fifty (top-50) most common
diagnoses and procedures, which covers 47.45% and 74.12% of all admissions
respectively. We implemented state-of-the-art Bidirectional Encoder
Representations from Transformers (BERT) to fine-tune the language model on 80%
of the data and validated on the remaining 20%. The model achieved an overall
accuracy of 87.08%, an F1 score of 85.82%, and an AUC of 91.76% for top-10
codes. For the top-50 codes, our model achieved an overall accuracy of 93.76%,
an F1 score of 92.24%, and AUC of 91%. When compared to previously published
research, our model outperforms in predicting codes from the clinical text. We
discuss approaches to generalize the knowledge discovery process of our
MIMIC-BERT to other clinical notes. This can help human coders to save time,
prevent backlogs, and additional costs due to coding errors.
|
[
{
"created": "Tue, 17 Mar 2020 02:56:27 GMT",
"version": "v1"
}
] |
2020-03-18
|
[
[
"Singh",
"A. K. Bhavani",
""
],
[
"Guntu",
"Mounika",
""
],
[
"Bhimireddy",
"Ananth Reddy",
""
],
[
"Gichoya",
"Judy W.",
""
],
[
"Purkayastha",
"Saptarshi",
""
]
] |
In the United States, 25%, or more than 200 billion dollars, of hospital spending accounts for administrative costs that involve services for medical coding and billing. With the increasing number of patient records, manual assignment of these codes is overwhelming, time-consuming and error-prone, causing billing errors. Natural language processing can automate the extraction of codes/labels from unstructured clinical notes, which can aid human coders to save time, increase productivity, and verify medical coding errors. Our objective is to identify appropriate diagnosis and procedure codes from clinical notes by performing multi-label classification. We used de-identified data of critical care patients from the MIMIC-III database and subset the data to select the ten (top-10) and fifty (top-50) most common diagnoses and procedures, which covers 47.45% and 74.12% of all admissions respectively. We implemented state-of-the-art Bidirectional Encoder Representations from Transformers (BERT) to fine-tune the language model on 80% of the data and validated on the remaining 20%. The model achieved an overall accuracy of 87.08%, an F1 score of 85.82%, and an AUC of 91.76% for top-10 codes. For the top-50 codes, our model achieved an overall accuracy of 93.76%, an F1 score of 92.24%, and AUC of 91%. When compared to previously published research, our model outperforms in predicting codes from the clinical text. We discuss approaches to generalize the knowledge discovery process of our MIMIC-BERT to other clinical notes. This can help human coders to save time, prevent backlogs, and additional costs due to coding errors.
|
2007.09793
|
R Jaberi
|
Raed Jaberi
|
$2$-blocks in strongly biconnected directed graphs
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A directed graph $G=(V,E)$ is called strongly biconnected if $G$ is strongly
connected and the underlying graph of $G$ is biconnected. A strongly
biconnected component of a strongly connected graph $G=(V,E)$ is a maximal
vertex subset $L\subseteq V$ such that the induced subgraph on $L$ is strongly
biconnected. Let $G=(V,E)$ be a strongly biconnected directed graph. A
$2$-edge-biconnected block in $G$ is a maximal vertex subset $U\subseteq V$
such that for any two distinct vertices $v,w \in U$ and for each edge $b\in E$,
the vertices $v,w$ are in the same strongly biconnected component of
$G\setminus\left\lbrace b\right\rbrace $. A $2$-strong-biconnected block in $G$
is a maximal vertex subset $U\subseteq V$ of size at least $2$ such that for
every pair of distinct vertices $v,w\in U$ and for every vertex $z\in
V\setminus\left\lbrace v,w \right\rbrace $, the vertices $v$ and $w$ are in the
same strongly biconnected component of $G\setminus \left\lbrace z
\right\rbrace $. In this paper we study $2$-edge-biconnected blocks and
$2$-strong-biconnected blocks.
|
[
{
"created": "Sun, 19 Jul 2020 21:59:52 GMT",
"version": "v1"
}
] |
2020-07-21
|
[
[
"Jaberi",
"Raed",
""
]
] |
A directed graph $G=(V,E)$ is called strongly biconnected if $G$ is strongly connected and the underlying graph of $G$ is biconnected. A strongly biconnected component of a strongly connected graph $G=(V,E)$ is a maximal vertex subset $L\subseteq V$ such that the induced subgraph on $L$ is strongly biconnected. Let $G=(V,E)$ be a strongly biconnected directed graph. A $2$-edge-biconnected block in $G$ is a maximal vertex subset $U\subseteq V$ such that for any two distinct vertices $v,w \in U$ and for each edge $b\in E$, the vertices $v,w$ are in the same strongly biconnected component of $G\setminus\left\lbrace b\right\rbrace $. A $2$-strong-biconnected block in $G$ is a maximal vertex subset $U\subseteq V$ of size at least $2$ such that for every pair of distinct vertices $v,w\in U$ and for every vertex $z\in V\setminus\left\lbrace v,w \right\rbrace $, the vertices $v$ and $w$ are in the same strongly biconnected component of $G\setminus \left\lbrace z\right\rbrace $. In this paper we study $2$-edge-biconnected blocks and $2$-strong-biconnected blocks.
|
1804.04577
|
Dimitri Bertsekas
|
Dimitri P. Bertsekas
|
Feature-Based Aggregation and Deep Reinforcement Learning: A Survey and
Some New Implementations
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we discuss policy iteration methods for approximate solution of
a finite-state discounted Markov decision problem, with a focus on
feature-based aggregation methods and their connection with deep reinforcement
learning schemes. We introduce features of the states of the original problem,
and we formulate a smaller "aggregate" Markov decision problem, whose states
relate to the features. We discuss properties and possible implementations of
this type of aggregation, including a new approach to approximate policy
iteration. In this approach the policy improvement operation combines
feature-based aggregation with feature construction using deep neural networks
or other calculations. We argue that the cost function of a policy may be
approximated much more accurately by the nonlinear function of the features
provided by aggregation, than by the linear function of the features provided
by neural network-based reinforcement learning, thereby potentially leading to
more effective policy improvement.
|
[
{
"created": "Thu, 12 Apr 2018 15:46:12 GMT",
"version": "v1"
},
{
"created": "Sun, 22 Apr 2018 14:38:08 GMT",
"version": "v2"
},
{
"created": "Tue, 21 Aug 2018 22:41:34 GMT",
"version": "v3"
}
] |
2018-08-23
|
[
[
"Bertsekas",
"Dimitri P.",
""
]
] |
In this paper we discuss policy iteration methods for approximate solution of a finite-state discounted Markov decision problem, with a focus on feature-based aggregation methods and their connection with deep reinforcement learning schemes. We introduce features of the states of the original problem, and we formulate a smaller "aggregate" Markov decision problem, whose states relate to the features. We discuss properties and possible implementations of this type of aggregation, including a new approach to approximate policy iteration. In this approach the policy improvement operation combines feature-based aggregation with feature construction using deep neural networks or other calculations. We argue that the cost function of a policy may be approximated much more accurately by the nonlinear function of the features provided by aggregation, than by the linear function of the features provided by neural network-based reinforcement learning, thereby potentially leading to more effective policy improvement.
|
1612.04512
|
Pedro Henrique Juliano Nardelli
|
Florian K\"uhnlenz and Pedro H. J. Nardelli
|
Agent-based Model for Spot and Balancing Electricity Markets
| null | null | null | null |
cs.MA q-fin.GN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a simple, yet realistic, agent-based model of an electricity
market. The proposed model combines the spot and balancing markets with a
resolution of one minute, which enables a more accurate depiction of the
physical properties of the power grid. As a test, we compare the results
obtained from our simulation to data from Nord Pool.
|
[
{
"created": "Wed, 14 Dec 2016 07:15:18 GMT",
"version": "v1"
}
] |
2016-12-19
|
[
[
"Kühnlenz",
"Florian",
""
],
[
"Nardelli",
"Pedro H. J.",
""
]
] |
We present a simple, yet realistic, agent-based model of an electricity market. The proposed model combines the spot and balancing markets with a resolution of one minute, which enables a more accurate depiction of the physical properties of the power grid. As a test, we compare the results obtained from our simulation to data from Nord Pool.
|
1706.02672
|
Kumar Sankar Ray
|
Kumar S. Ray, Soma Chakraborty
|
An Efficient Approach for Object Detection and Tracking of Objects in a
Video with Variable Background
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes a novel approach to create an automated visual
surveillance system which is very efficient in detecting and tracking moving
objects in a video captured by a moving camera without any a priori
information about the captured scene. Separating the foreground from the
background is a challenging job in videos captured by a moving camera, as both
foreground and background information change in every consecutive frame of the
image sequence; thus a pseudo-motion is perceptible in the background. In the proposed
algorithm, the pseudo-motion in background is estimated and compensated using
phase correlation of consecutive frames based on the principle of Fourier shift
theorem. Then a method is proposed to model an acting background from recent
history of commonality of the current frame and the foreground is detected by
the differences between the background model and the current frame. Further
exploiting the recent history of dissimilarities of the current frame, actual
moving objects are detected in the foreground. Next, a two-stepped
morphological operation is proposed to refine the object region for an optimum
object size. Each object is attributed by its centroid, dimension and three
highest peaks of its gray value histogram. Finally, each object is tracked
using Kalman filter based on its attributes. The major advantage of this
algorithm over most of the existing object detection and tracking algorithms is
that, it does not require initialization of object position in the first frame
or training on sample data to perform. Performance of the algorithm is tested
on benchmark videos containing variable backgrounds, and very satisfactory
results are achieved. The performance of the algorithm is also comparable with some of
the state-of-the-art algorithms for object detection and tracking.
|
[
{
"created": "Thu, 11 May 2017 08:23:35 GMT",
"version": "v1"
}
] |
2017-06-09
|
[
[
"Ray",
"Kumar S.",
""
],
[
"Chakraborty",
"Soma",
""
]
] |
This paper proposes a novel approach to create an automated visual surveillance system which is very efficient in detecting and tracking moving objects in a video captured by a moving camera without any a priori information about the captured scene. Separating the foreground from the background is a challenging job in videos captured by a moving camera, as both foreground and background information change in every consecutive frame of the image sequence; thus a pseudo-motion is perceptible in the background. In the proposed algorithm, the pseudo-motion in background is estimated and compensated using phase correlation of consecutive frames based on the principle of Fourier shift theorem. Then a method is proposed to model an acting background from recent history of commonality of the current frame and the foreground is detected by the differences between the background model and the current frame. Further exploiting the recent history of dissimilarities of the current frame, actual moving objects are detected in the foreground. Next, a two-stepped morphological operation is proposed to refine the object region for an optimum object size. Each object is attributed by its centroid, dimension and three highest peaks of its gray value histogram. Finally, each object is tracked using Kalman filter based on its attributes. The major advantage of this algorithm over most of the existing object detection and tracking algorithms is that, it does not require initialization of object position in the first frame or training on sample data to perform. Performance of the algorithm is tested on benchmark videos containing variable backgrounds, and very satisfactory results are achieved. The performance of the algorithm is also comparable with some of the state-of-the-art algorithms for object detection and tracking.
|
2104.08062
|
Bilal Abu-Salih
|
Bilal Abu-Salih, Pornpit Wongthongtham, Dengya Zhu, Kit Yan Chan, Amit
Rudra
|
Introduction to Big data Technology
| null | null |
10.1007/978-981-33-6652-7_2
| null |
cs.OH
|
http://creativecommons.org/licenses/by/4.0/
|
Big data is no longer "all just hype" but is widely applied in nearly all
aspects of our businesses, governments, and organizations with the technology
stack of AI. Its influence goes far beyond a simple technical innovation and
involves all areas in the world. This chapter will first give a historical
review of big data, followed by a discussion of the characteristics of big
data, i.e. from the 3V's up to the 10V's of big data. The chapter then
introduces technology stacks for an organization to build a big data
application, from infrastructure/platform/ecosystem to constructional units and
components. Finally, we provide some big data online resources for reference.
|
[
{
"created": "Thu, 15 Apr 2021 13:34:45 GMT",
"version": "v1"
}
] |
2021-04-19
|
[
[
"Abu-Salih",
"Bilal",
""
],
[
"Wongthongtham",
"Pornpit",
""
],
[
"Zhu",
"Dengya",
""
],
[
"Chan",
"Kit Yan",
""
],
[
"Rudra",
"Amit",
""
]
] |
Big data is no longer "all just hype" but is widely applied in nearly all aspects of our businesses, governments, and organizations with the technology stack of AI. Its influence goes far beyond a simple technical innovation and involves all areas in the world. This chapter will first give a historical review of big data, followed by a discussion of the characteristics of big data, i.e. from the 3V's up to the 10V's of big data. The chapter then introduces technology stacks for an organization to build a big data application, from infrastructure/platform/ecosystem to constructional units and components. Finally, we provide some big data online resources for reference.
|
1811.03882
|
Yoji Yamato
|
Yoji Yamato, Hirofumi Noguchi, Misao Kataoka, Takuma Isoda and Tatsuya
Demizu
|
Parallel processing area extraction and data transfer number reduction
for automatic GPU offloading of IoT applications
|
6 pages, 4 figures, in Japanese, IEICE Technical Report, SC2018-32
|
IEICE Technical Report, SC2018-32, Nov. 2018. (c) 2018 IEICE
| null |
IEICE Technical Report, SC2018-32, Nov. 2018
|
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For Open IoT, we have proposed Tacit Computing technology to discover the
devices that have data users need on demand and use them dynamically and an
automatic GPU offloading technology as an elementary technology of Tacit
Computing. However, it can improve only a limited range of applications because
it only optimizes the extraction of parallelizable loop statements. Thus, in
this paper, to improve the performance of more applications automatically, we
propose an improved method with reduction of data transfer between the CPU and
GPU. We evaluate our proposed offloading method by applying it to Darknet and
find that it can process 3 times as quickly as using the CPU alone.
|
[
{
"created": "Fri, 9 Nov 2018 12:38:37 GMT",
"version": "v1"
}
] |
2018-11-14
|
[
[
"Yamato",
"Yoji",
""
],
[
"Noguchi",
"Hirofumi",
""
],
[
"Kataoka",
"Misao",
""
],
[
"Isoda",
"Takuma",
""
],
[
"Demizu",
"Tatsuya",
""
]
] |
For Open IoT, we have proposed Tacit Computing technology to discover the devices that have data users need on demand and use them dynamically and an automatic GPU offloading technology as an elementary technology of Tacit Computing. However, it can improve only a limited range of applications because it only optimizes the extraction of parallelizable loop statements. Thus, in this paper, to improve the performance of more applications automatically, we propose an improved method with reduction of data transfer between the CPU and GPU. We evaluate our proposed offloading method by applying it to Darknet and find that it can process 3 times as quickly as using the CPU alone.
|
1112.3877
|
Malathi Subramanian
|
S.Malathi, S.Sridhar
|
A Classical Fuzzy Approach for Software Effort Estimation on Machine
Learning Technique
|
5 pages, 2 figures, 4 tables
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Software Cost Estimation with resounding reliability, productivity and
development effort is a challenging and onerous task. This has incited the
software community to give much needed thrust and delve into extensive research
in software effort estimation for evolving sophisticated methods. Estimation by
analogy is one of the expedient techniques in software effort estimation field.
However, the methodology utilized for the estimation of software effort by
analogy is not able to handle the categorical data in an explicit and precise
manner. A new approach has been developed in this paper to estimate software
effort for projects represented by categorical or numerical data using
reasoning by analogy and a fuzzy approach. The existing historical data sets,
analyzed with fuzzy logic, produce accurate results in comparison to the data
set analyzed with the earlier methodologies.
|
[
{
"created": "Fri, 16 Dec 2011 16:22:55 GMT",
"version": "v1"
}
] |
2011-12-19
|
[
[
"Malathi",
"S.",
""
],
[
"Sridhar",
"S.",
""
]
] |
Software Cost Estimation with resounding reliability, productivity and development effort is a challenging and onerous task. This has incited the software community to give much needed thrust and delve into extensive research in software effort estimation for evolving sophisticated methods. Estimation by analogy is one of the expedient techniques in software effort estimation field. However, the methodology utilized for the estimation of software effort by analogy is not able to handle the categorical data in an explicit and precise manner. A new approach has been developed in this paper to estimate software effort for projects represented by categorical or numerical data using reasoning by analogy and a fuzzy approach. The existing historical data sets, analyzed with fuzzy logic, produce accurate results in comparison to the data set analyzed with the earlier methodologies.
|
2211.11734
|
Karthik Shetty
|
Karthik Shetty, Annette Birkhold, Srikrishna Jaganathan, Norbert
Strobel, Markus Kowarschik, Andreas Maier, Bernhard Egger
|
PLIKS: A Pseudo-Linear Inverse Kinematic Solver for 3D Human Body
Estimation
|
CVPR2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce PLIKS (Pseudo-Linear Inverse Kinematic Solver) for
reconstruction of a 3D mesh of the human body from a single 2D image. Current
techniques directly regress the shape, pose, and translation of a parametric
model from an input image through a non-linear mapping with minimal flexibility
to any external influences. We approach the task as a model-in-the-loop
optimization problem. PLIKS is built on a linearized formulation of the
parametric SMPL model. Using PLIKS, we can analytically reconstruct the human
model via 2D pixel-aligned vertices. This gives us the flexibility to
use accurate camera calibration information when available. PLIKS offers an
easy way to introduce additional constraints such as shape and translation. We
present quantitative evaluations which confirm that PLIKS achieves more
accurate reconstruction with greater than 10% improvement compared to other
state-of-the-art methods with respect to the standard 3D human pose and shape
benchmarks while also obtaining a reconstruction error improvement of 12.9 mm
on the newer AGORA dataset.
|
[
{
"created": "Mon, 21 Nov 2022 18:54:12 GMT",
"version": "v1"
},
{
"created": "Mon, 27 Mar 2023 23:50:01 GMT",
"version": "v2"
}
] |
2023-03-29
|
[
[
"Shetty",
"Karthik",
""
],
[
"Birkhold",
"Annette",
""
],
[
"Jaganathan",
"Srikrishna",
""
],
[
"Strobel",
"Norbert",
""
],
[
"Kowarschik",
"Markus",
""
],
[
"Maier",
"Andreas",
""
],
[
"Egger",
"Bernhard",
""
]
] |
We introduce PLIKS (Pseudo-Linear Inverse Kinematic Solver) for reconstruction of a 3D mesh of the human body from a single 2D image. Current techniques directly regress the shape, pose, and translation of a parametric model from an input image through a non-linear mapping with minimal flexibility to any external influences. We approach the task as a model-in-the-loop optimization problem. PLIKS is built on a linearized formulation of the parametric SMPL model. Using PLIKS, we can analytically reconstruct the human model via 2D pixel-aligned vertices. This gives us the flexibility to use accurate camera calibration information when available. PLIKS offers an easy way to introduce additional constraints such as shape and translation. We present quantitative evaluations which confirm that PLIKS achieves more accurate reconstruction with greater than 10% improvement compared to other state-of-the-art methods with respect to the standard 3D human pose and shape benchmarks while also obtaining a reconstruction error improvement of 12.9 mm on the newer AGORA dataset.
|
1909.13456
|
Zhe Gan
|
Wenlin Wang, Chenyang Tao, Zhe Gan, Guoyin Wang, Liqun Chen, Xinyuan
Zhang, Ruiyi Zhang, Qian Yang, Ricardo Henao, Lawrence Carin
|
Improving Textual Network Learning with Variational Homophilic
Embeddings
|
Accepted to NeurIPS 2019
| null | null | null |
cs.LG cs.CL stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The performance of many network learning applications crucially hinges on the
success of network embedding algorithms, which aim to encode rich network
information into low-dimensional vertex-based vector representations. This
paper considers a novel variational formulation of network embeddings, with
special focus on textual networks. Different from most existing methods that
optimize a discriminative objective, we introduce Variational Homophilic
Embedding (VHE), a fully generative model that learns network embeddings by
modeling the semantic (textual) information with a variational autoencoder,
while accounting for the structural (topology) information through a novel
homophilic prior design. Homophilic vertex embeddings encourage similar
embedding vectors for related (connected) vertices. The proposed VHE promises
better generalization for downstream tasks, robustness to incomplete
observations, and the ability to generalize to unseen vertices. Extensive
experiments on real-world networks, for multiple tasks, demonstrate that the
proposed method consistently achieves superior performance relative to
competing state-of-the-art approaches.
|
[
{
"created": "Mon, 30 Sep 2019 05:03:25 GMT",
"version": "v1"
}
] |
2019-10-01
|
[
[
"Wang",
"Wenlin",
""
],
[
"Tao",
"Chenyang",
""
],
[
"Gan",
"Zhe",
""
],
[
"Wang",
"Guoyin",
""
],
[
"Chen",
"Liqun",
""
],
[
"Zhang",
"Xinyuan",
""
],
[
"Zhang",
"Ruiyi",
""
],
[
"Yang",
"Qian",
""
],
[
"Henao",
"Ricardo",
""
],
[
"Carin",
"Lawrence",
""
]
] |
The performance of many network learning applications crucially hinges on the success of network embedding algorithms, which aim to encode rich network information into low-dimensional vertex-based vector representations. This paper considers a novel variational formulation of network embeddings, with special focus on textual networks. Different from most existing methods that optimize a discriminative objective, we introduce Variational Homophilic Embedding (VHE), a fully generative model that learns network embeddings by modeling the semantic (textual) information with a variational autoencoder, while accounting for the structural (topology) information through a novel homophilic prior design. Homophilic vertex embeddings encourage similar embedding vectors for related (connected) vertices. The proposed VHE promises better generalization for downstream tasks, robustness to incomplete observations, and the ability to generalize to unseen vertices. Extensive experiments on real-world networks, for multiple tasks, demonstrate that the proposed method consistently achieves superior performance relative to competing state-of-the-art approaches.
|
1805.03190
|
Dmitry Kulyabov PhD
|
D. S. Kulyabov, M. N. Gevorkyan, A. V. Demidova, T. R. Velieva, A. V.
Korolkova, L. A. Sevastianov
|
Implementing a Method for Stochastization of One-Step Processes in a
Computer Algebra System
|
in English; in Russian
| null |
10.1134/S0361768818020044
| null |
cs.SC math-ph math.MP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
When modeling such phenomena as population dynamics, controllable flows, etc.,
a problem arises of adapting the existing models to a phenomenon under study.
For this purpose, we propose to derive new models from the first principles by
stochastization of one-step processes. Research can be represented as an
iterative process that consists in obtaining a model and its further refinement.
The number of such iterations can be extremely large. This work is aimed at
software implementation (by means of computer algebra) of a method for
stochastization of one-step processes. As a basis of the software
implementation, we use the SymPy computer algebra system. Based on a developed
algorithm, we derive stochastic differential equations and their interaction
schemes. The operation of the program is demonstrated on the Verhulst and
Lotka-Volterra models.
|
[
{
"created": "Tue, 8 May 2018 17:48:59 GMT",
"version": "v1"
}
] |
2018-05-09
|
[
[
"Kulyabov",
"D. S.",
""
],
[
"Gevorkyan",
"M. N.",
""
],
[
"Demidova",
"A. V.",
""
],
[
"Velieva",
"T. R.",
""
],
[
"Korolkova",
"A. V.",
""
],
[
"Sevastianov",
"L. A.",
""
]
] |
When modeling such phenomena as population dynamics, controllable flows, etc., a problem arises of adapting the existing models to a phenomenon under study. For this purpose, we propose to derive new models from the first principles by stochastization of one-step processes. Research can be represented as an iterative process that consists in obtaining a model and its further refinement. The number of such iterations can be extremely large. This work is aimed at software implementation (by means of computer algebra) of a method for stochastization of one-step processes. As a basis of the software implementation, we use the SymPy computer algebra system. Based on a developed algorithm, we derive stochastic differential equations and their interaction schemes. The operation of the program is demonstrated on the Verhulst and Lotka-Volterra models.
|
2303.06596
|
Jiayang Ao
|
Jiayang Ao, Qiuhong Ke, Krista A. Ehinger
|
Amodal Intra-class Instance Segmentation: Synthetic Datasets and
Benchmark
|
Accepted at WACV 2024. Datasets are available at
https://github.com/saraao/amodal-dataset
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Images of realistic scenes often contain intra-class objects that are heavily
occluded from each other, making the amodal perception task that requires
parsing the occluded parts of the objects challenging. Although important for
downstream tasks such as robotic grasping systems, the lack of large-scale
amodal datasets with detailed annotations makes it difficult to model
intra-class occlusions explicitly. This paper introduces two new amodal
datasets for image amodal completion tasks, which contain a total of over 267K
images of intra-class occlusion scenarios, annotated with multiple masks,
amodal bounding boxes, dual order relations and full appearance for instances
and background. We also present a point-supervised scheme with layer priors for
amodal instance segmentation specifically designed for intra-class occlusion
scenarios. Experiments show that our weakly supervised approach outperforms the
SOTA fully supervised methods, while our layer priors design exhibits
remarkable performance improvements in the case of intra-class occlusion in
both synthetic and real images.
|
[
{
"created": "Sun, 12 Mar 2023 07:28:36 GMT",
"version": "v1"
},
{
"created": "Tue, 7 Nov 2023 11:38:32 GMT",
"version": "v2"
}
] |
2023-11-08
|
[
[
"Ao",
"Jiayang",
""
],
[
"Ke",
"Qiuhong",
""
],
[
"Ehinger",
"Krista A.",
""
]
] |
Images of realistic scenes often contain intra-class objects that are heavily occluded from each other, making the amodal perception task that requires parsing the occluded parts of the objects challenging. Although important for downstream tasks such as robotic grasping systems, the lack of large-scale amodal datasets with detailed annotations makes it difficult to model intra-class occlusions explicitly. This paper introduces two new amodal datasets for image amodal completion tasks, which contain a total of over 267K images of intra-class occlusion scenarios, annotated with multiple masks, amodal bounding boxes, dual order relations and full appearance for instances and background. We also present a point-supervised scheme with layer priors for amodal instance segmentation specifically designed for intra-class occlusion scenarios. Experiments show that our weakly supervised approach outperforms the SOTA fully supervised methods, while our layer priors design exhibits remarkable performance improvements in the case of intra-class occlusion in both synthetic and real images.
|
2010.04592
|
Joshua Robinson
|
Joshua Robinson, Ching-Yao Chuang, Suvrit Sra, Stefanie Jegelka
|
Contrastive Learning with Hard Negative Samples
|
Published as a conference paper at ICLR 2021
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
How can you sample good negative examples for contrastive learning? We argue
that, as with metric learning, contrastive learning of representations benefits
from hard negative samples (i.e., points that are difficult to distinguish from
an anchor point). The key challenge toward using hard negatives is that
contrastive methods must remain unsupervised, making it infeasible to adopt
existing negative sampling strategies that use true similarity information. In
response, we develop a new family of unsupervised sampling methods for
selecting hard negative samples where the user can control the hardness. A
limiting case of this sampling results in a representation that tightly
clusters each class, and pushes different classes as far apart as possible. The
proposed method improves downstream performance across multiple modalities,
requires only a few additional lines of code to implement, and introduces no
computational overhead.
|
[
{
"created": "Fri, 9 Oct 2020 14:18:53 GMT",
"version": "v1"
},
{
"created": "Sun, 24 Jan 2021 22:19:30 GMT",
"version": "v2"
}
] |
2021-01-26
|
[
[
"Robinson",
"Joshua",
""
],
[
"Chuang",
"Ching-Yao",
""
],
[
"Sra",
"Suvrit",
""
],
[
"Jegelka",
"Stefanie",
""
]
] |
How can you sample good negative examples for contrastive learning? We argue that, as with metric learning, contrastive learning of representations benefits from hard negative samples (i.e., points that are difficult to distinguish from an anchor point). The key challenge toward using hard negatives is that contrastive methods must remain unsupervised, making it infeasible to adopt existing negative sampling strategies that use true similarity information. In response, we develop a new family of unsupervised sampling methods for selecting hard negative samples where the user can control the hardness. A limiting case of this sampling results in a representation that tightly clusters each class, and pushes different classes as far apart as possible. The proposed method improves downstream performance across multiple modalities, requires only a few additional lines of code to implement, and introduces no computational overhead.
|
1809.02953
|
Carey Ming-Li Chen
|
Carey Ming-Li Chen, Wen-Yau Cathy Lin
|
What indicators matter? The Analysis of Perception toward Research
Assessment Indicators and Leiden Manifesto- The Case Study of Taiwan
|
Paper presented at 23rd International Conference on Science and
Technology Indicators (STI 2018) in Leiden, The Netherlands
| null | null | null |
cs.DL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This study aims to investigate the Taiwanese researchers' awareness toward
bibliometric indicators and the principles from the Leiden Manifesto. An online
survey was conducted and obtained a total of 417 valid responses. The results
show that evoking the right concept of use of bibliometric indicators and
research evaluation has a long way to go. The lack of recognition of
bibliometric indicators exists in Taiwanese academia. Generally speaking,
researchers may hear of the certain indicator, but they are not familiar with
its definition and calculation process. Only JIF and h-index are considered as
well-known indicators. The results also suggest that the ten principles from
the Leiden Manifesto can be considered as a universal guideline in research
evaluation, since most Taiwanese researchers agree with its contents. In
particular, principle 6, "Account for variation by field in publication and
citation practices", has the highest degree of agreement. However, it is
interesting that, regarding the recognition of the relative citation ratio,
only a few researchers have fully understood the definition. This result indicates that
scientometricians should make more effort to disseminate the concept of
field-normalization in bibliometric indicators. The researchers do have
understanding about the importance of comparison on the same basis; at the
same time, they may use inappropriate indicators just because they lack
enough knowledge on the variety of indicators. Hence, it is important to
initiate the education of informetrics to all of the stakeholders in research
evaluation so that the misuse and abuse of bibliometric indicators may possibly
not happen again, and the bibliometric analysis is able to turn to
contextualization-based analysis in the future.
|
[
{
"created": "Sun, 9 Sep 2018 11:04:06 GMT",
"version": "v1"
}
] |
2018-09-11
|
[
[
"Chen",
"Carey Ming-Li",
""
],
[
"Lin",
"Wen-Yau Cathy",
""
]
] |
This study aims to investigate the Taiwanese researchers' awareness toward bibliometric indicators and the principles from the Leiden Manifesto. An online survey was conducted and obtained a total of 417 valid responses. The results show that evoking the right concept of use of bibliometric indicators and research evaluation has a long way to go. The lack of recognition of bibliometric indicators exists in Taiwanese academia. Generally speaking, researchers may have heard of a certain indicator, but they are not familiar with its definition and calculation process. Only JIF and h-index are considered as well-known indicators. The results also suggest that the ten principles from the Leiden Manifesto can be considered as a universal guideline in research evaluation, since most Taiwanese researchers agree with its contents. In particular, principle 6, "Account for variation by field in publication and citation practices", has the highest degree of agreement. However, it is interesting that, regarding the recognition of the relative citation ratio, only a few researchers have fully understood the definition. This result indicates that scientometricians should make more effort to disseminate the concept of field-normalization in bibliometric indicators. The researchers do have understanding about the importance of comparison on the same basis; at the same time, they may use inappropriate indicators just because they lack enough knowledge on the variety of indicators. Hence, it is important to initiate the education of informetrics to all of the stakeholders in research evaluation so that the misuse and abuse of bibliometric indicators may possibly not happen again, and the bibliometric analysis is able to turn to contextualization-based analysis in the future.
|
2105.13718
|
Amin Honarmandi Shandiz
|
Amin Honarmandi Shandiz and L\'aszl\'o T\'oth
|
Voice Activity Detection for Ultrasound-based Silent Speech Interfaces
using Convolutional Neural Networks
|
12 pages, 7 tables, 4 figures
| null |
10.1007/978-3-030-83527-9_43
| null |
cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Voice Activity Detection (VAD) is not an easy task when the input audio signal
is noisy, and it is even more complicated when the input is not even an audio
recording. This is the case with Silent Speech Interfaces (SSI) where we record
the movement of the articulatory organs during speech, and we aim to
reconstruct the speech signal from this recording. Our SSI system synthesizes
speech from ultrasonic videos of the tongue movement, and the quality of the
resulting speech signals is evaluated by metrics such as the mean squared
error loss function of the underlying neural network and the Mel-Cepstral
Distortion (MCD) of the reconstructed speech compared to the original. Here, we
first demonstrate that the amount of silence in the training data can have an
influence both on the MCD evaluation metric and on the performance of the
neural network model. Then, we train a convolutional neural network classifier
to separate silent and speech-containing ultrasound tongue images, using a
conventional VAD algorithm to create the training labels from the corresponding
speech signal. In the experiments our ultrasound-based speech/silence separator
achieved a classification accuracy of about 85\% and an AUC score around 86\%.
|
[
{
"created": "Fri, 28 May 2021 10:33:22 GMT",
"version": "v1"
},
{
"created": "Thu, 3 Jun 2021 11:40:48 GMT",
"version": "v2"
},
{
"created": "Sat, 18 Sep 2021 20:47:32 GMT",
"version": "v3"
}
] |
2021-09-21
|
[
[
"Shandiz",
"Amin Honarmandi",
""
],
[
"Tóth",
"László",
""
]
] |
Voice Activity Detection (VAD) is not an easy task when the input audio signal is noisy, and it is even more complicated when the input is not even an audio recording. This is the case with Silent Speech Interfaces (SSI) where we record the movement of the articulatory organs during speech, and we aim to reconstruct the speech signal from this recording. Our SSI system synthesizes speech from ultrasonic videos of the tongue movement, and the quality of the resulting speech signals is evaluated by metrics such as the mean squared error loss function of the underlying neural network and the Mel-Cepstral Distortion (MCD) of the reconstructed speech compared to the original. Here, we first demonstrate that the amount of silence in the training data can have an influence both on the MCD evaluation metric and on the performance of the neural network model. Then, we train a convolutional neural network classifier to separate silent and speech-containing ultrasound tongue images, using a conventional VAD algorithm to create the training labels from the corresponding speech signal. In the experiments our ultrasound-based speech/silence separator achieved a classification accuracy of about 85\% and an AUC score around 86\%.
|
2307.10201
|
Luyao Zhang
|
Luyao Zhang, Yutong Sun, Yutong Quan, Jiaxun Cao, Xin Tong
|
On the Mechanics of NFT Valuation: AI Ethics and Social Media
|
Presented at ChainScience Conference, 2023 (arXiv:2307.03277v2
[cs.DC] 11 Jul 2023)
| null |
10.31219/osf.io/qwpdx
|
ChainScience/2023/16
|
cs.CY cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As CryptoPunks pioneers the innovation of non-fungible tokens (NFTs) in AI
and art, the valuation mechanics of NFTs has become a trending topic. Earlier
research identifies the impact of ethics and society on the price prediction of
CryptoPunks. Since the booming year of the NFT market in 2021, the discussion
of CryptoPunks has propagated on social media. Still, existing literature
hasn't considered the social sentiment factors after the historical turning
point on NFT valuation. In this paper, we study how sentiments in social media,
together with gender and skin tone, contribute to NFT valuations by an
empirical analysis of social media, blockchain, and crypto exchange data. We
evidence social sentiments as a significant contributor to the price prediction
of CryptoPunks. Furthermore, we document structural changes in the valuation
mechanics before and after 2021. Although people's attitudes towards
CryptoPunks are primarily positive, our findings reflect imbalances in
transaction activities and pricing based on gender and skin tone. Our result is
consistent and robust, controlling for the rarity of an NFT based on the set of
human-readable attributes, including gender and skin tone. Our research
contributes to the interdisciplinary study at the intersection of AI, Ethics,
and Society, focusing on the ecosystem of decentralized AI or blockchain. We
provide our data and code for replicability as open access on GitHub.
|
[
{
"created": "Thu, 13 Jul 2023 03:12:00 GMT",
"version": "v1"
},
{
"created": "Fri, 21 Jul 2023 10:40:36 GMT",
"version": "v2"
}
] |
2023-07-24
|
[
[
"Zhang",
"Luyao",
""
],
[
"Sun",
"Yutong",
""
],
[
"Quan",
"Yutong",
""
],
[
"Cao",
"Jiaxun",
""
],
[
"Tong",
"Xin",
""
]
] |
As CryptoPunks pioneers the innovation of non-fungible tokens (NFTs) in AI and art, the valuation mechanics of NFTs has become a trending topic. Earlier research identifies the impact of ethics and society on the price prediction of CryptoPunks. Since the booming year of the NFT market in 2021, the discussion of CryptoPunks has propagated on social media. Still, existing literature hasn't considered the social sentiment factors after the historical turning point on NFT valuation. In this paper, we study how sentiments in social media, together with gender and skin tone, contribute to NFT valuations by an empirical analysis of social media, blockchain, and crypto exchange data. We evidence social sentiments as a significant contributor to the price prediction of CryptoPunks. Furthermore, we document structural changes in the valuation mechanics before and after 2021. Although people's attitudes towards CryptoPunks are primarily positive, our findings reflect imbalances in transaction activities and pricing based on gender and skin tone. Our result is consistent and robust, controlling for the rarity of an NFT based on the set of human-readable attributes, including gender and skin tone. Our research contributes to the interdisciplinary study at the intersection of AI, Ethics, and Society, focusing on the ecosystem of decentralized AI or blockchain. We provide our data and code for replicability as open access on GitHub.
|
2007.01790
|
Jihong Park
|
Anis Elgabli, Jihong Park, Chaouki Ben Issaid, Mehdi Bennis
|
Harnessing Wireless Channels for Scalable and Privacy-Preserving
Federated Learning
|
14 pages, 7 figures; This article has been submitted to IEEE for
possible publication
| null | null | null |
cs.LG cs.IT cs.NI math.IT stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Wireless connectivity is instrumental in enabling scalable federated learning
(FL), yet wireless channels bring challenges for model training, in which
channel randomness perturbs each worker's model update while multiple workers'
updates incur significant interference under limited bandwidth. To address
these challenges, in this work we formulate a novel constrained optimization
problem, and propose an FL framework harnessing wireless channel perturbations
and interference for improving privacy, bandwidth-efficiency, and scalability.
The resultant algorithm is coined analog federated ADMM (A-FADMM) based on
analog transmissions and the alternating direction method of multipliers
(ADMM). In A-FADMM, all workers upload their model updates to the parameter
server (PS) using a single channel via analog transmissions, during which all
models are perturbed and aggregated over-the-air. This not only saves
communication bandwidth, but also hides each worker's exact model update
trajectory from any eavesdropper including the honest-but-curious PS, thereby
preserving data privacy against model inversion attacks. We formally prove the
convergence and privacy guarantees of A-FADMM for convex functions under
time-varying channels, and numerically show the effectiveness of A-FADMM under
noisy channels and stochastic non-convex functions, in terms of convergence
speed and scalability, as well as communication bandwidth and energy
efficiency.
|
[
{
"created": "Fri, 3 Jul 2020 16:31:15 GMT",
"version": "v1"
},
{
"created": "Tue, 17 Nov 2020 09:17:13 GMT",
"version": "v2"
}
] |
2020-11-18
|
[
[
"Elgabli",
"Anis",
""
],
[
"Park",
"Jihong",
""
],
[
"Issaid",
"Chaouki Ben",
""
],
[
"Bennis",
"Mehdi",
""
]
] |
Wireless connectivity is instrumental in enabling scalable federated learning (FL), yet wireless channels bring challenges for model training, in which channel randomness perturbs each worker's model update while multiple workers' updates incur significant interference under limited bandwidth. To address these challenges, in this work we formulate a novel constrained optimization problem, and propose an FL framework harnessing wireless channel perturbations and interference for improving privacy, bandwidth-efficiency, and scalability. The resultant algorithm is coined analog federated ADMM (A-FADMM) based on analog transmissions and the alternating direction method of multipliers (ADMM). In A-FADMM, all workers upload their model updates to the parameter server (PS) using a single channel via analog transmissions, during which all models are perturbed and aggregated over-the-air. This not only saves communication bandwidth, but also hides each worker's exact model update trajectory from any eavesdropper including the honest-but-curious PS, thereby preserving data privacy against model inversion attacks. We formally prove the convergence and privacy guarantees of A-FADMM for convex functions under time-varying channels, and numerically show the effectiveness of A-FADMM under noisy channels and stochastic non-convex functions, in terms of convergence speed and scalability, as well as communication bandwidth and energy efficiency.
|
2405.14001
|
Sander Beckers
|
Sander Beckers
|
Nondeterministic Causal Models
|
Preliminary version: currently under review
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
I generalize acyclic deterministic structural equation models to the
nondeterministic case and argue that it offers an improved semantics for
counterfactuals. The standard, deterministic, semantics developed by Halpern
(and based on the initial proposal of Galles & Pearl) assumes that for each
assignment of values to parent variables there is a unique assignment to their
child variable, and it assumes that the actual world (an assignment of values
to all variables of a model) specifies a unique counterfactual world for each
intervention. Both assumptions are unrealistic, and therefore I drop both of
them in my proposal. I do so by allowing multi-valued functions in the
structural equations. In addition, I adjust the semantics so that the solutions
to the equations that obtained in the actual world are preserved in any
counterfactual world. I motivate the resulting logic by comparing it to the
standard one by Halpern and to more recent proposals that are closer to mine.
Finally, I extend these models to the probabilistic case and show that they
open up the way to identifying counterfactuals even in Causal Bayesian
Networks.
|
[
{
"created": "Wed, 22 May 2024 21:17:52 GMT",
"version": "v1"
}
] |
2024-05-24
|
[
[
"Beckers",
"Sander",
""
]
] |
I generalize acyclic deterministic structural equation models to the nondeterministic case and argue that it offers an improved semantics for counterfactuals. The standard, deterministic, semantics developed by Halpern (and based on the initial proposal of Galles & Pearl) assumes that for each assignment of values to parent variables there is a unique assignment to their child variable, and it assumes that the actual world (an assignment of values to all variables of a model) specifies a unique counterfactual world for each intervention. Both assumptions are unrealistic, and therefore I drop both of them in my proposal. I do so by allowing multi-valued functions in the structural equations. In addition, I adjust the semantics so that the solutions to the equations that obtained in the actual world are preserved in any counterfactual world. I motivate the resulting logic by comparing it to the standard one by Halpern and to more recent proposals that are closer to mine. Finally, I extend these models to the probabilistic case and show that they open up the way to identifying counterfactuals even in Causal Bayesian Networks.
|
1812.07103
|
Omar Mohammed
|
Omar Mohammed, Gerard Bailly, Damien Pellier
|
Style Transfer and Extraction for the Handwritten Letters Using Deep
Learning
|
Accepted in ICAART 2019
| null | null | null |
cs.CV cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
How can we learn, transfer and extract handwriting styles using deep neural
networks? This paper explores these questions using a deep conditioned
autoencoder on the IRON-OFF handwriting data-set. We perform three experiments
that systematically explore the quality of our style extraction procedure.
First, we compare our model to handwriting benchmarks using multidimensional
performance metrics. Second, we explore the quality of style transfer, i.e. how
the model performs on new, unseen writers. In both experiments, we improve the
metrics of state of the art methods by a large margin. Lastly, we analyze the
latent space of our model, and we see that it consistently separates writing
styles.
|
[
{
"created": "Mon, 10 Dec 2018 13:38:46 GMT",
"version": "v1"
}
] |
2018-12-19
|
[
[
"Mohammed",
"Omar",
""
],
[
"Bailly",
"Gerard",
""
],
[
"Pellier",
"Damien",
""
]
] |
How can we learn, transfer and extract handwriting styles using deep neural networks? This paper explores these questions using a deep conditioned autoencoder on the IRON-OFF handwriting data-set. We perform three experiments that systematically explore the quality of our style extraction procedure. First, we compare our model to handwriting benchmarks using multidimensional performance metrics. Second, we explore the quality of style transfer, i.e. how the model performs on new, unseen writers. In both experiments, we improve the metrics of state of the art methods by a large margin. Lastly, we analyze the latent space of our model, and we see that it consistently separates writing styles.
|
2208.02114
|
Ekrem Fatih Y{\i}lmazer
|
Ekrem Fatih Y{\i}lmazer, Delio Vicini, Wenzel Jakob
|
Solving Inverse PDE Problems using Grid-Free Monte Carlo Estimators
|
9 pages (2 pages references and appendix), 9 figures
| null | null | null |
cs.GR math.AP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modeling physical phenomena like heat transport and diffusion is crucially
dependent on the numerical solution of partial differential equations (PDEs). A
PDE solver finds the solution given coefficients and a boundary condition,
whereas an inverse PDE solver goes the opposite way and reconstructs these
inputs from an existing solution. In this article, we investigate techniques
for solving inverse PDE problems using a gradient-based methodology.
Conventional PDE solvers based on the finite element method require a domain
meshing step that can be fragile and costly. Grid-free Monte Carlo methods
instead stochastically sample paths using variations of the walk on spheres
algorithm to construct an unbiased estimator of the solution. The uncanny
similarity of these methods to physically-based rendering algorithms has been
observed by several recent works. In the area of rendering, recent progress has
led to the development of efficient unbiased derivative estimators. They solve
an adjoint form of the problem and exploit arithmetic invertibility to compute
gradients using a constant amount of memory and linear time complexity. Could
these two lines of work be combined to compute cheap parametric derivatives of
a grid-free PDE solver? We investigate this question and present preliminary
results.
|
[
{
"created": "Wed, 3 Aug 2022 14:44:40 GMT",
"version": "v1"
}
] |
2022-08-04
|
[
[
"Yılmazer",
"Ekrem Fatih",
""
],
[
"Vicini",
"Delio",
""
],
[
"Jakob",
"Wenzel",
""
]
] |
Modeling physical phenomena like heat transport and diffusion is crucially dependent on the numerical solution of partial differential equations (PDEs). A PDE solver finds the solution given coefficients and a boundary condition, whereas an inverse PDE solver goes the opposite way and reconstructs these inputs from an existing solution. In this article, we investigate techniques for solving inverse PDE problems using a gradient-based methodology. Conventional PDE solvers based on the finite element method require a domain meshing step that can be fragile and costly. Grid-free Monte Carlo methods instead stochastically sample paths using variations of the walk on spheres algorithm to construct an unbiased estimator of the solution. The uncanny similarity of these methods to physically-based rendering algorithms has been observed by several recent works. In the area of rendering, recent progress has led to the development of efficient unbiased derivative estimators. They solve an adjoint form of the problem and exploit arithmetic invertibility to compute gradients using a constant amount of memory and linear time complexity. Could these two lines of work be combined to compute cheap parametric derivatives of a grid-free PDE solver? We investigate this question and present preliminary results.
|
2001.00285
|
Thanh Nho Do
|
Mai Thanh Thai, Phuoc Thien Phan, Shing Wong, Nigel H. Lovell, Thanh
Nho Do
|
Advanced Intelligent Systems for Surgical Robotics
|
76 pages, 23 figures, 1 table
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Surgical robots have had clinical use since the mid 1990s. Robot-assisted
surgeries offer many benefits over the conventional approach including lower
risk of infection and blood loss, shorter recovery, and an overall safer
procedure for patients. The past few decades have shown many emerging surgical
robotic platforms that can work in complex and confined channels of the
internal human organs and improve the cognitive and physical skills of the
surgeons during the operation. Advanced technologies for sensing, actuation,
and intelligent control have enabled multiple surgical devices to
simultaneously operate within the human body at low cost and with more
efficiency. Despite advances, current surgical intervention systems are not
able to execute autonomous tasks and make cognitive decisions that are
analogous to that of humans. This paper will overview a historical development
of surgery from conventional open to robotic-assisted approaches with
discussion on the capabilities of advanced intelligent systems and devices that
are currently implemented in existing surgical robotic systems. It will also
revisit available autonomous surgical platforms with comments on the essential
technologies, existing challenges, and suggestions for the future development
of intelligent robotic-assisted surgical systems towards the achievement of
fully autonomous operation.
|
[
{
"created": "Thu, 2 Jan 2020 00:30:32 GMT",
"version": "v1"
}
] |
2020-01-03
|
[
[
"Thai",
"Mai Thanh",
""
],
[
"Phan",
"Phuoc Thien",
""
],
[
"Wong",
"Shing",
""
],
[
"Lovell",
"Nigel H.",
""
],
[
"Do",
"Thanh Nho",
""
]
] |
Surgical robots have had clinical use since the mid 1990s. Robot-assisted surgeries offer many benefits over the conventional approach including lower risk of infection and blood loss, shorter recovery, and an overall safer procedure for patients. The past few decades have shown many emerging surgical robotic platforms that can work in complex and confined channels of the internal human organs and improve the cognitive and physical skills of the surgeons during the operation. Advanced technologies for sensing, actuation, and intelligent control have enabled multiple surgical devices to simultaneously operate within the human body at low cost and with more efficiency. Despite advances, current surgical intervention systems are not able to execute autonomous tasks and make cognitive decisions that are analogous to that of humans. This paper will overview a historical development of surgery from conventional open to robotic-assisted approaches with discussion on the capabilities of advanced intelligent systems and devices that are currently implemented in existing surgical robotic systems. It will also revisit available autonomous surgical platforms with comments on the essential technologies, existing challenges, and suggestions for the future development of intelligent robotic-assisted surgical systems towards the achievement of fully autonomous operation.
|
2304.14459
|
Sriniwas Pandey
|
Sriniwas Pandey, Yiding Cao, Yingjun Dong, Minjun Kim, Neil G.
MacLaren, Shelley D. Dionne, Francis J. Yammarino, and Hiroki Sayama
|
Generation and Influence of Eccentric Ideas on Social Networks
|
15 pages, 3 figures
| null | null | null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Studying extreme ideas in routine choices and discussions is of utmost
importance to understand the increasing polarization in society. In this study,
we focus on understanding the generation and influence of extreme ideas in
routine conversations which we label "eccentric" ideas. The eccentricity of any
idea is defined as the deviation of that idea from the norm of the social
neighborhood. We collected and analyzed data from two completely different
sources: public social media and online experiments in a controlled
environment. We compared the popularity of ideas against their eccentricity to
understand individuals' fascination towards eccentricity. We found that more
eccentric ideas have a higher probability of getting a greater number of
"likes". Additionally, we demonstrate that the social neighborhood of an
individual conceals eccentricity changes in one's own opinions and facilitates
generation of eccentric ideas at a collective level.
|
[
{
"created": "Thu, 27 Apr 2023 18:34:39 GMT",
"version": "v1"
}
] |
2023-05-01
|
[
[
"Pandey",
"Sriniwas",
""
],
[
"Cao",
"Yiding",
""
],
[
"Dong",
"Yingjun",
""
],
[
"Kim",
"Minjun",
""
],
[
"MacLaren",
"Neil G.",
""
],
[
"Dionne",
"Shelley D.",
""
],
[
"Yammarino",
"Francis J.",
""
],
[
"Sayama",
"Hiroki",
""
]
] |
Studying extreme ideas in routine choices and discussions is of utmost importance to understand the increasing polarization in society. In this study, we focus on understanding the generation and influence of extreme ideas in routine conversations which we label "eccentric" ideas. The eccentricity of any idea is defined as the deviation of that idea from the norm of the social neighborhood. We collected and analyzed data from two completely different sources: public social media and online experiments in a controlled environment. We compared the popularity of ideas against their eccentricity to understand individuals' fascination towards eccentricity. We found that more eccentric ideas have a higher probability of getting a greater number of "likes". Additionally, we demonstrate that the social neighborhood of an individual conceals eccentricity changes in one's own opinions and facilitates generation of eccentric ideas at a collective level.
|
1502.04095
|
J.T. Geneson
|
Jesse Geneson, Peter Tian
|
Sequences of formation width $4$ and alternation length $5$
|
20 pages
| null | null | null |
cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sequence pattern avoidance is a central topic in combinatorics. A sequence
$s$ contains a sequence $u$ if some subsequence of $s$ can be changed into $u$
by a one-to-one renaming of its letters. If $s$ does not contain $u$, then $s$
avoids $u$. A widely studied extremal function related to pattern avoidance is
$Ex(u, n)$, the maximum length of an $n$-letter sequence that avoids $u$ and
has every $r$ consecutive letters pairwise distinct, where $r$ is the number of
distinct letters in $u$.
We bound $Ex(u, n)$ using the formation width function, $fw(u)$, which is the
minimum $s$ for which there exists $r$ such that any concatenation of $s$
permutations, each on the same $r$ letters, contains $u$. In particular, we
identify every sequence $u$ such that $fw(u)=4$ and $u$ contains $ababa$. The
significance of this result lies in its implication that, for every such
sequence $u$, we have $Ex(u, n) = \Theta(n \alpha(n))$, where $\alpha(n)$
denotes the incredibly slow-growing inverse Ackermann function. We have thus
identified the extremal function of many infinite classes of previously
unidentified sequences.
|
[
{
"created": "Fri, 13 Feb 2015 19:13:02 GMT",
"version": "v1"
}
] |
2015-02-16
|
[
[
"Geneson",
"Jesse",
""
],
[
"Tian",
"Peter",
""
]
] |
Sequence pattern avoidance is a central topic in combinatorics. A sequence $s$ contains a sequence $u$ if some subsequence of $s$ can be changed into $u$ by a one-to-one renaming of its letters. If $s$ does not contain $u$, then $s$ avoids $u$. A widely studied extremal function related to pattern avoidance is $Ex(u, n)$, the maximum length of an $n$-letter sequence that avoids $u$ and has every $r$ consecutive letters pairwise distinct, where $r$ is the number of distinct letters in $u$. We bound $Ex(u, n)$ using the formation width function, $fw(u)$, which is the minimum $s$ for which there exists $r$ such that any concatenation of $s$ permutations, each on the same $r$ letters, contains $u$. In particular, we identify every sequence $u$ such that $fw(u)=4$ and $u$ contains $ababa$. The significance of this result lies in its implication that, for every such sequence $u$, we have $Ex(u, n) = \Theta(n \alpha(n))$, where $\alpha(n)$ denotes the incredibly slow-growing inverse Ackermann function. We have thus identified the extremal function of many infinite classes of previously unidentified sequences.
|
2407.05961
|
Eran Ben-Haim
|
Geron Yamit, Ben-Haim Eran, Gat D. Amir, Or Yizhar, Givli Sefi
|
Dynamic single-input control of multi-state multi-transition soft
robotic actuator
|
6 figures
| null | null | null |
cs.RO physics.app-ph
|
http://creativecommons.org/licenses/by/4.0/
|
Soft robotics is an attractive and rapidly emerging field, in which actuation
is coupled with the elastic response of the robot's structure to achieve
complex deformation patterns. A crucial challenge is the need for multiple
control inputs, which adds significant complication to the system. We propose a
novel concept of single-input control of an actuator composed of interconnected
bi-stable elements. Dynamic response of the actuator and pre-designed
differences between the elements are exploited to facilitate any desired
multi-state transition, using a single dynamic input. We show formulation and
analysis of the control system's dynamics and pre-design of its multiple
equilibrium states, as well as their stability. Then we fabricate and
experimentally demonstrate single-input control of two- and four-element
actuators, where the latter can achieve transitions between up to 48 desired
states. Our work paves the way for next-generation soft robotic actuators with
minimal actuation and maximal dexterity.
|
[
{
"created": "Mon, 8 Jul 2024 13:59:42 GMT",
"version": "v1"
},
{
"created": "Mon, 22 Jul 2024 13:44:30 GMT",
"version": "v2"
}
] |
2024-07-23
|
[
[
"Yamit",
"Geron",
""
],
[
"Eran",
"Ben-Haim",
""
],
[
"Amir",
"Gat D.",
""
],
[
"Yizhar",
"Or",
""
],
[
"Sefi",
"Givli",
""
]
] |
Soft robotics is an attractive and rapidly emerging field, in which actuation is coupled with the elastic response of the robot's structure to achieve complex deformation patterns. A crucial challenge is the need for multiple control inputs, which adds significant complication to the system. We propose a novel concept of single-input control of an actuator composed of interconnected bi-stable elements. Dynamic response of the actuator and pre-designed differences between the elements are exploited to facilitate any desired multi-state transition, using a single dynamic input. We show formulation and analysis of the control system's dynamics and pre-design of its multiple equilibrium states, as well as their stability. Then we fabricate and experimentally demonstrate single-input control of two- and four-element actuators, where the latter can achieve transitions between up to 48 desired states. Our work paves the way for next-generation soft robotic actuators with minimal actuation and maximal dexterity.
|
1809.02403
|
Kan Ren
|
Kan Ren, Jiarui Qin, Lei Zheng, Zhengyu Yang, Weinan Zhang, Lin Qiu,
Yong Yu
|
Deep Recurrent Survival Analysis
|
AAAI 2019. Supplemental material, slides, code:
https://github.com/rk2900/drsa
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Survival analysis is a hotspot in statistical research for modeling
time-to-event information with data censorship handling, which has been widely
used in many applications such as clinical research, information system and
other fields with survivorship bias. Many works have been proposed for survival
analysis ranging from traditional statistical methods to machine learning models.
However, the existing methodologies either utilize counting-based statistics on
the segmented data, or have a pre-assumption on the event probability
distribution w.r.t. time. Moreover, few works consider sequential patterns
within the feature space. In this paper, we propose a Deep Recurrent Survival
Analysis model which combines deep learning for conditional probability
prediction at a fine-grained level of the data, and survival analysis for
tackling the censorship. By capturing the time dependency through modeling the
conditional probability of the event for each sample, our method predicts the
likelihood of the true event occurrence and estimates the survival rate over
time, i.e., the probability of the non-occurrence of the event, for the
censored data. Meanwhile, without assuming any specific form of the event
probability distribution, our model shows great advantages over the previous
works on fitting various sophisticated data distributions. In the experiments
on the three real-world tasks from different fields, our model significantly
outperforms the state-of-the-art solutions under various metrics.
|
[
{
"created": "Fri, 7 Sep 2018 11:13:44 GMT",
"version": "v1"
},
{
"created": "Tue, 13 Nov 2018 11:40:19 GMT",
"version": "v2"
}
] |
2018-11-14
|
[
[
"Ren",
"Kan",
""
],
[
"Qin",
"Jiarui",
""
],
[
"Zheng",
"Lei",
""
],
[
"Yang",
"Zhengyu",
""
],
[
"Zhang",
"Weinan",
""
],
[
"Qiu",
"Lin",
""
],
[
"Yu",
"Yong",
""
]
] |
Survival analysis is a hotspot in statistical research for modeling time-to-event information with data censorship handling, which has been widely used in many applications such as clinical research, information system and other fields with survivorship bias. Many works have been proposed for survival analysis ranging from traditional statistical methods to machine learning models. However, the existing methodologies either utilize counting-based statistics on the segmented data, or have a pre-assumption on the event probability distribution w.r.t. time. Moreover, few works consider sequential patterns within the feature space. In this paper, we propose a Deep Recurrent Survival Analysis model which combines deep learning for conditional probability prediction at a fine-grained level of the data, and survival analysis for tackling the censorship. By capturing the time dependency through modeling the conditional probability of the event for each sample, our method predicts the likelihood of the true event occurrence and estimates the survival rate over time, i.e., the probability of the non-occurrence of the event, for the censored data. Meanwhile, without assuming any specific form of the event probability distribution, our model shows great advantages over the previous works on fitting various sophisticated data distributions. In the experiments on the three real-world tasks from different fields, our model significantly outperforms the state-of-the-art solutions under various metrics.
|
cs/0510028
|
Victor Grishchenko
|
Victor S. Grishchenko
|
Geo-aggregation permits low stretch and routing tables of logarithmical
size
|
6 pages
| null | null | null |
cs.NI
| null |
This article first addresses applicability of Euclidean models to the domain
of Internet routing. Those models are found to be applicable, within limits. Then a
simplistic model of routing is constructed for Euclidean plane densely covered
with points-routers. The model guarantees low stretch and logarithmical size of
routing tables at any node. The paper concludes with a discussion on
applicability of the model to real-world Internet routing.
|
[
{
"created": "Tue, 11 Oct 2005 16:20:28 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Grishchenko",
"Victor S.",
""
]
] |
This article first addresses applicability of Euclidean models to the domain of Internet routing. Those models are found to be applicable, within limits. Then a simplistic model of routing is constructed for Euclidean plane densely covered with points-routers. The model guarantees low stretch and logarithmical size of routing tables at any node. The paper concludes with a discussion on applicability of the model to real-world Internet routing.
|
2202.03697
|
Nishad Gothoskar
|
Nishad Gothoskar, Miguel L\'azaro-Gredilla, Yasemin Bekiroglu,
Abhishek Agarwal, Joshua B. Tenenbaum, Vikash K. Mansinghka, Dileep George
|
DURableVS: Data-efficient Unsupervised Recalibrating Visual Servoing via
online learning in a structured generative model
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Visual servoing enables robotic systems to perform accurate closed-loop
control, which is required in many applications. However, existing methods
either require precise calibration of the robot kinematic model and cameras or
use neural architectures that require large amounts of data to train. In this
work, we present a method for unsupervised learning of visual servoing that
does not require any prior calibration and is extremely data-efficient. Our key
insight is that visual servoing does not depend on identifying the veridical
kinematic and camera parameters, but instead only on an accurate generative
model of image feature observations from the joint positions of the robot. We
demonstrate that with our model architecture and learning algorithm, we can
consistently learn accurate models from less than 50 training samples (which
amounts to less than 1 min of unsupervised data collection), and that such
data-efficient learning is not possible with standard neural architectures.
Further, we show that by using the generative model in the loop and learning
online, we can enable a robotic system to recover from calibration errors and
to detect and quickly adapt to possibly unexpected changes in the robot-camera
system (e.g. bumped camera, new objects).
|
[
{
"created": "Tue, 8 Feb 2022 07:44:20 GMT",
"version": "v1"
}
] |
2022-02-09
|
[
[
"Gothoskar",
"Nishad",
""
],
[
"Lázaro-Gredilla",
"Miguel",
""
],
[
"Bekiroglu",
"Yasemin",
""
],
[
"Agarwal",
"Abhishek",
""
],
[
"Tenenbaum",
"Joshua B.",
""
],
[
"Mansinghka",
"Vikash K.",
""
],
[
"George",
"Dileep",
""
]
] |
Visual servoing enables robotic systems to perform accurate closed-loop control, which is required in many applications. However, existing methods either require precise calibration of the robot kinematic model and cameras or use neural architectures that require large amounts of data to train. In this work, we present a method for unsupervised learning of visual servoing that does not require any prior calibration and is extremely data-efficient. Our key insight is that visual servoing does not depend on identifying the veridical kinematic and camera parameters, but instead only on an accurate generative model of image feature observations from the joint positions of the robot. We demonstrate that with our model architecture and learning algorithm, we can consistently learn accurate models from less than 50 training samples (which amounts to less than 1 min of unsupervised data collection), and that such data-efficient learning is not possible with standard neural architectures. Further, we show that by using the generative model in the loop and learning online, we can enable a robotic system to recover from calibration errors and to detect and quickly adapt to possibly unexpected changes in the robot-camera system (e.g. bumped camera, new objects).
|
0809.3546
|
Danilo Silva
|
Danilo Silva, Frank R. Kschischang
|
Universal Secure Network Coding via Rank-Metric Codes
|
12 pages, 1 figure, substantially rewritten and improved. Submitted
to IEEE Transactions on Information Theory
|
IEEE Transactions on Information Theory, vol. 57, no. 2, pp.
1124-1135, Feb. 2011
|
10.1109/TIT.2010.2090212
| null |
cs.IT cs.CR math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The problem of securing a network coding communication system against an
eavesdropper adversary is considered. The network implements linear network
coding to deliver n packets from source to each receiver, and the adversary can
eavesdrop on \mu arbitrarily chosen links. The objective is to provide reliable
communication to all receivers, while guaranteeing that the source information
remains information-theoretically secure from the adversary. A coding scheme is
proposed that can achieve the maximum possible rate of n-\mu packets. The
scheme, which is based on rank-metric codes, has the distinctive property of
being universal: it can be applied on top of any communication network without
requiring knowledge of or any modifications on the underlying network code. The
only requirement of the scheme is that the packet length be at least n, which
is shown to be strictly necessary for universal communication at the maximum
rate. A further scenario is considered where the adversary is allowed not only
to eavesdrop but also to inject up to t erroneous packets into the network, and
the network may suffer from a rank deficiency of at most \rho. In this case,
the proposed scheme can be extended to achieve the rate of n-\rho-2t-\mu
packets. This rate is shown to be optimal under the assumption of zero-error
communication.
|
[
{
"created": "Sun, 21 Sep 2008 02:16:18 GMT",
"version": "v1"
},
{
"created": "Tue, 27 Apr 2010 04:21:21 GMT",
"version": "v2"
}
] |
2019-05-07
|
[
[
"Silva",
"Danilo",
""
],
[
"Kschischang",
"Frank R.",
""
]
] |
The problem of securing a network coding communication system against an eavesdropper adversary is considered. The network implements linear network coding to deliver n packets from source to each receiver, and the adversary can eavesdrop on \mu arbitrarily chosen links. The objective is to provide reliable communication to all receivers, while guaranteeing that the source information remains information-theoretically secure from the adversary. A coding scheme is proposed that can achieve the maximum possible rate of n-\mu packets. The scheme, which is based on rank-metric codes, has the distinctive property of being universal: it can be applied on top of any communication network without requiring knowledge of or any modifications on the underlying network code. The only requirement of the scheme is that the packet length be at least n, which is shown to be strictly necessary for universal communication at the maximum rate. A further scenario is considered where the adversary is allowed not only to eavesdrop but also to inject up to t erroneous packets into the network, and the network may suffer from a rank deficiency of at most \rho. In this case, the proposed scheme can be extended to achieve the rate of n-\rho-2t-\mu packets. This rate is shown to be optimal under the assumption of zero-error communication.
|
2011.07960
|
Yikang Shen
|
Yikang Shen, Shawn Tan, Alessandro Sordoni, Siva Reddy, Aaron
Courville
|
Explicitly Modeling Syntax in Language Models with Incremental Parsing
and a Dynamic Oracle
|
12 pages, 10 figures
|
NAACL 2021
| null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Syntax is fundamental to our thinking about language. Failing to capture the
structure of input language could lead to generalization problems and
over-parametrization. In the present work, we propose a new syntax-aware
language model: Syntactic Ordered Memory (SOM). The model explicitly models the
structure with an incremental parser and maintains the conditional probability
setting of a standard language model (left-to-right). To train the incremental
parser and avoid exposure bias, we also propose a novel dynamic oracle, so that
SOM is more robust to wrong parsing decisions. Experiments show that SOM can
achieve strong results in language modeling, incremental parsing and syntactic
generalization tests, while using fewer parameters than other models.
|
[
{
"created": "Wed, 21 Oct 2020 17:39:15 GMT",
"version": "v1"
},
{
"created": "Mon, 10 May 2021 18:13:41 GMT",
"version": "v2"
}
] |
2021-05-12
|
[
[
"Shen",
"Yikang",
""
],
[
"Tan",
"Shawn",
""
],
[
"Sordoni",
"Alessandro",
""
],
[
"Reddy",
"Siva",
""
],
[
"Courville",
"Aaron",
""
]
] |
Syntax is fundamental to our thinking about language. Failing to capture the structure of input language could lead to generalization problems and over-parametrization. In the present work, we propose a new syntax-aware language model: Syntactic Ordered Memory (SOM). The model explicitly models the structure with an incremental parser and maintains the conditional probability setting of a standard language model (left-to-right). To train the incremental parser and avoid exposure bias, we also propose a novel dynamic oracle, so that SOM is more robust to wrong parsing decisions. Experiments show that SOM can achieve strong results in language modeling, incremental parsing and syntactic generalization tests, while using fewer parameters than other models.
|
2005.01584
|
Betis Baheri
|
Betis Baheri, Jacob Tronge, Bo Fang, Ang Li, Vipin Chaudhary and Qiang
Guan
|
MARS: Malleable Actor-Critic Reinforcement Learning Scheduler
|
10 pages, HPC, Cloud System, Scheduling, Workflow Management,
Reinforcement Learning, Deep Learning
|
2022 IEEE International Performance Computing and Communications
Conference (IPCCC) 217-226
|
10.1109/IPCCC55026.2022.9894315
| null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we introduce MARS, a new scheduling system for HPC-cloud
infrastructures based on a cost-aware, flexible reinforcement learning
approach, which serves as an intermediate layer for next generation HPC-cloud
resource manager. MARS ensembles the pre-trained models from heuristic
workloads and decides on the most cost-effective strategy for optimization. A
whole workflow application would be split into several optimizable dependent
sub-tasks, then based on the pre-defined resource management plan, a reward
will be generated after executing a scheduled task. Lastly, MARS updates the
Deep Neural Network (DNN) model based on the reward. MARS is designed to
optimize the existing models through reinforcement mechanisms. MARS adapts to
the dynamics of workflow applications, selects the most cost-effective
scheduling solution among pre-built scheduling strategies (backfilling, SJF,
etc.) and self-learning deep neural network model at run-time. We evaluate MARS
with different real-world workflow traces. MARS can achieve 5%-60% increased
performance compared to the state-of-the-art approaches.
|
[
{
"created": "Mon, 4 May 2020 15:51:41 GMT",
"version": "v1"
},
{
"created": "Mon, 22 Aug 2022 18:08:06 GMT",
"version": "v2"
},
{
"created": "Fri, 23 Dec 2022 07:14:29 GMT",
"version": "v3"
}
] |
2022-12-26
|
[
[
"Baheri",
"Betis",
""
],
[
"Tronge",
"Jacob",
""
],
[
"Fang",
"Bo",
""
],
[
"Li",
"Ang",
""
],
[
"Chaudhary",
"Vipin",
""
],
[
"Guan",
"Qiang",
""
]
] |
In this paper, we introduce MARS, a new scheduling system for HPC-cloud infrastructures based on a cost-aware, flexible reinforcement learning approach, which serves as an intermediate layer for next generation HPC-cloud resource manager. MARS ensembles the pre-trained models from heuristic workloads and decides on the most cost-effective strategy for optimization. A whole workflow application would be split into several optimizable dependent sub-tasks, then based on the pre-defined resource management plan, a reward will be generated after executing a scheduled task. Lastly, MARS updates the Deep Neural Network (DNN) model based on the reward. MARS is designed to optimize the existing models through reinforcement mechanisms. MARS adapts to the dynamics of workflow applications, selects the most cost-effective scheduling solution among pre-built scheduling strategies (backfilling, SJF, etc.) and self-learning deep neural network model at run-time. We evaluate MARS with different real-world workflow traces. MARS can achieve 5%-60% increased performance compared to the state-of-the-art approaches.
|
2304.12939
|
Carlos Eduardo Cancino-Chac\'on
|
Carlos Cancino-Chac\'on, Silvan Peter, Patricia Hu, Emmanouil
Karystinaios, Florian Henkel, Francesco Foscarin, Nimrod Varga, Gerhard
Widmer
|
The ACCompanion: Combining Reactivity, Robustness, and Musical
Expressivity in an Automatic Piano Accompanist
|
In Proceedings of the 32nd International Joint Conference on
Artificial Intelligence (IJCAI-23), Macao, China. The differences/extensions
with the previous version include a technical appendix, added missing links,
and minor text updates. 10 pages, 4 figures
| null | null | null |
cs.SD cs.HC eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces the ACCompanion, an expressive accompaniment system.
Similarly to a musician who accompanies a soloist playing a given musical
piece, our system can produce a human-like rendition of the accompaniment part
that follows the soloist's choices in terms of tempo, dynamics, and
articulation. The ACCompanion works in the symbolic domain, i.e., it needs a
musical instrument capable of producing and playing MIDI data, with explicitly
encoded onset, offset, and pitch for each played note. We describe the
components that go into such a system, from real-time score following and
prediction to expressive performance generation and online adaptation to the
expressive choices of the human player. Based on our experience with repeated
live demonstrations in front of various audiences, we offer an analysis of the
challenges of combining these components into a system that is highly reactive
and precise, while still a reliable musical partner, robust to possible
performance errors and responsive to expressive variations.
|
[
{
"created": "Mon, 24 Apr 2023 05:19:52 GMT",
"version": "v1"
},
{
"created": "Tue, 30 May 2023 14:53:47 GMT",
"version": "v2"
}
] |
2023-05-31
|
[
[
"Cancino-Chacón",
"Carlos",
""
],
[
"Peter",
"Silvan",
""
],
[
"Hu",
"Patricia",
""
],
[
"Karystinaios",
"Emmanouil",
""
],
[
"Henkel",
"Florian",
""
],
[
"Foscarin",
"Francesco",
""
],
[
"Varga",
"Nimrod",
""
],
[
"Widmer",
"Gerhard",
""
]
] |
This paper introduces the ACCompanion, an expressive accompaniment system. Similarly to a musician who accompanies a soloist playing a given musical piece, our system can produce a human-like rendition of the accompaniment part that follows the soloist's choices in terms of tempo, dynamics, and articulation. The ACCompanion works in the symbolic domain, i.e., it needs a musical instrument capable of producing and playing MIDI data, with explicitly encoded onset, offset, and pitch for each played note. We describe the components that go into such a system, from real-time score following and prediction to expressive performance generation and online adaptation to the expressive choices of the human player. Based on our experience with repeated live demonstrations in front of various audiences, we offer an analysis of the challenges of combining these components into a system that is highly reactive and precise, while still a reliable musical partner, robust to possible performance errors and responsive to expressive variations.
|
2405.03660
|
Sankalp Sinha
|
Sankalp Sinha, Muhammad Saif Ullah Khan, Talha Uddin Sheikh, Didier
Stricker and Muhammad Zeshan Afzal
|
CICA: Content-Injected Contrastive Alignment for Zero-Shot Document
Image Classification
|
18 Pages, 4 Figures and Accepted in ICDAR 2024
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Zero-shot learning has been extensively investigated in the broader field of
visual recognition, attracting significant interest recently. However, the
current work on zero-shot learning in document image classification remains
scarce. The existing studies either focus exclusively on zero-shot inference,
or their evaluation does not align with the established criteria of zero-shot
evaluation in the visual recognition domain. We provide a comprehensive
document image classification analysis in Zero-Shot Learning (ZSL) and
Generalized Zero-Shot Learning (GZSL) settings to address this gap. Our
methodology and evaluation align with the established practices of this domain.
Additionally, we propose zero-shot splits for the RVL-CDIP dataset.
Furthermore, we introduce CICA (pronounced 'ki-ka'), a framework that enhances
the zero-shot learning capabilities of CLIP. CICA consists of a novel 'content
module' designed to leverage any generic document-related textual information.
The discriminative features extracted by this module are aligned with CLIP's
text and image features using a novel 'coupled-contrastive' loss. Our module
improves CLIP's ZSL top-1 accuracy by 6.7% and GZSL harmonic mean by 24% on the
RVL-CDIP dataset. Our module is lightweight and adds only 3.3% more parameters
to CLIP. Our work sets the direction for future research in zero-shot document
classification.
|
[
{
"created": "Mon, 6 May 2024 17:37:23 GMT",
"version": "v1"
}
] |
2024-05-07
|
[
[
"Sinha",
"Sankalp",
""
],
[
"Khan",
"Muhammad Saif Ullah",
""
],
[
"Sheikh",
"Talha Uddin",
""
],
[
"Stricker",
"Didier",
""
],
[
"Afzal",
"Muhammad Zeshan",
""
]
] |
Zero-shot learning has been extensively investigated in the broader field of visual recognition, attracting significant interest recently. However, the current work on zero-shot learning in document image classification remains scarce. The existing studies either focus exclusively on zero-shot inference, or their evaluation does not align with the established criteria of zero-shot evaluation in the visual recognition domain. We provide a comprehensive document image classification analysis in Zero-Shot Learning (ZSL) and Generalized Zero-Shot Learning (GZSL) settings to address this gap. Our methodology and evaluation align with the established practices of this domain. Additionally, we propose zero-shot splits for the RVL-CDIP dataset. Furthermore, we introduce CICA (pronounced 'ki-ka'), a framework that enhances the zero-shot learning capabilities of CLIP. CICA consists of a novel 'content module' designed to leverage any generic document-related textual information. The discriminative features extracted by this module are aligned with CLIP's text and image features using a novel 'coupled-contrastive' loss. Our module improves CLIP's ZSL top-1 accuracy by 6.7% and GZSL harmonic mean by 24% on the RVL-CDIP dataset. Our module is lightweight and adds only 3.3% more parameters to CLIP. Our work sets the direction for future research in zero-shot document classification.
|
2302.08761
|
Moritz Neun
|
Moritz Neun, Christian Eichenberger, Yanan Xin, Cheng Fu, Nina
Wiedemann, Henry Martin, Martin Tomko, Lukas Amb\"uhl, Luca Hermes, Michael
Kopp
|
Metropolitan Segment Traffic Speeds from Massive Floating Car Data in 10
Cities
|
Accepted by IEEE Transactions on Intelligent Transportation Systems
(T-ITS), DOI: https://doi.org/10.1109/TITS.2023.3291737
|
IEEE Transactions on Intelligent Transportation Systems (T-ITS),
2023
|
10.1109/TITS.2023.3291737
| null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Traffic analysis is crucial for urban operations and planning, while the
availability of dense urban traffic data beyond loop detectors is still scarce.
We present a large-scale floating vehicle dataset of per-street segment traffic
information, Metropolitan Segment Traffic Speeds from Massive Floating Car Data
in 10 Cities (MeTS-10), available for 10 global cities with a 15-minute
resolution for collection periods ranging between 108 and 361 days in 2019-2021
and covering more than 1500 square kilometers per metropolitan area. MeTS-10
features traffic speed information at all street levels from main arterials to
local streets for Antwerp, Bangkok, Barcelona, Berlin, Chicago, Istanbul,
London, Madrid, Melbourne and Moscow. The dataset leverages the
industrial-scale floating vehicle Traffic4cast data with speeds and vehicle
counts provided in a privacy-preserving spatio-temporal aggregation. We detail
the efficient matching approach mapping the data to the OpenStreetMap road
graph. We evaluate the dataset by comparing it with publicly available
stationary vehicle detector data (for Berlin, London, and Madrid) and the Uber
traffic speed dataset (for Barcelona, Berlin, and London). The comparison
highlights the differences across datasets in spatio-temporal coverage and
variations in the reported traffic caused by the binning method. MeTS-10
enables novel, city-wide analysis of mobility and traffic patterns for ten
major world cities, overcoming current limitations of spatially sparse vehicle
detector data. The large spatial and temporal coverage offers an opportunity
for joining the MeTS-10 with other datasets, such as traffic surveys in traffic
planning studies or vehicle detector data in traffic control settings.
|
[
{
"created": "Fri, 17 Feb 2023 08:56:07 GMT",
"version": "v1"
},
{
"created": "Thu, 20 Apr 2023 08:28:46 GMT",
"version": "v2"
},
{
"created": "Thu, 31 Aug 2023 16:21:10 GMT",
"version": "v3"
}
] |
2023-09-01
|
[
[
"Neun",
"Moritz",
""
],
[
"Eichenberger",
"Christian",
""
],
[
"Xin",
"Yanan",
""
],
[
"Fu",
"Cheng",
""
],
[
"Wiedemann",
"Nina",
""
],
[
"Martin",
"Henry",
""
],
[
"Tomko",
"Martin",
""
],
[
"Ambühl",
"Lukas",
""
],
[
"Hermes",
"Luca",
""
],
[
"Kopp",
"Michael",
""
]
] |
Traffic analysis is crucial for urban operations and planning, while the availability of dense urban traffic data beyond loop detectors is still scarce. We present a large-scale floating vehicle dataset of per-street segment traffic information, Metropolitan Segment Traffic Speeds from Massive Floating Car Data in 10 Cities (MeTS-10), available for 10 global cities with a 15-minute resolution for collection periods ranging between 108 and 361 days in 2019-2021 and covering more than 1500 square kilometers per metropolitan area. MeTS-10 features traffic speed information at all street levels from main arterials to local streets for Antwerp, Bangkok, Barcelona, Berlin, Chicago, Istanbul, London, Madrid, Melbourne and Moscow. The dataset leverages the industrial-scale floating vehicle Traffic4cast data with speeds and vehicle counts provided in a privacy-preserving spatio-temporal aggregation. We detail the efficient matching approach mapping the data to the OpenStreetMap road graph. We evaluate the dataset by comparing it with publicly available stationary vehicle detector data (for Berlin, London, and Madrid) and the Uber traffic speed dataset (for Barcelona, Berlin, and London). The comparison highlights the differences across datasets in spatio-temporal coverage and variations in the reported traffic caused by the binning method. MeTS-10 enables novel, city-wide analysis of mobility and traffic patterns for ten major world cities, overcoming current limitations of spatially sparse vehicle detector data. The large spatial and temporal coverage offers an opportunity for joining the MeTS-10 with other datasets, such as traffic surveys in traffic planning studies or vehicle detector data in traffic control settings.
|
1711.00745
|
Miguel Calvo-Fullana
|
Miguel Calvo-Fullana, Carles Ant\'on-Haro, Javier Matamoros, Alejandro
Ribeiro
|
Stochastic Routing and Scheduling Policies for Energy Harvesting
Communication Networks
| null | null |
10.1109/TSP.2018.2833814
| null |
cs.IT math.IT math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we study the joint routing-scheduling problem in energy
harvesting communication networks. Our policies, which are based on stochastic
subgradient methods on the dual domain, act as an energy harvesting variant of
the stochastic family of backpressure algorithms. Specifically, we propose two
policies: (i) the Stochastic Backpressure with Energy Harvesting (SBP-EH), in
which a node's routing-scheduling decisions are determined by the difference
between the Lagrange multipliers associated to their queue stability
constraints and their neighbors'; and (ii) the Stochastic Soft Backpressure
with Energy Harvesting (SSBP-EH), an improved algorithm where the
routing-scheduling decision is of a probabilistic nature. For both policies, we
show that given sustainable data and energy arrival rates, the stability of the
data queues over all network nodes is guaranteed. Numerical results corroborate
the stability guarantees and illustrate the minimal gap in performance that our
policies offer with respect to classical ones which work with an unlimited
energy supply.
|
[
{
"created": "Thu, 2 Nov 2017 13:54:24 GMT",
"version": "v1"
}
] |
2018-07-04
|
[
[
"Calvo-Fullana",
"Miguel",
""
],
[
"Antón-Haro",
"Carles",
""
],
[
"Matamoros",
"Javier",
""
],
[
"Ribeiro",
"Alejandro",
""
]
] |
In this paper, we study the joint routing-scheduling problem in energy harvesting communication networks. Our policies, which are based on stochastic subgradient methods on the dual domain, act as an energy harvesting variant of the stochastic family of backpressure algorithms. Specifically, we propose two policies: (i) the Stochastic Backpressure with Energy Harvesting (SBP-EH), in which a node's routing-scheduling decisions are determined by the difference between the Lagrange multipliers associated to their queue stability constraints and their neighbors'; and (ii) the Stochastic Soft Backpressure with Energy Harvesting (SSBP-EH), an improved algorithm where the routing-scheduling decision is of a probabilistic nature. For both policies, we show that given sustainable data and energy arrival rates, the stability of the data queues over all network nodes is guaranteed. Numerical results corroborate the stability guarantees and illustrate the minimal gap in performance that our policies offer with respect to classical ones which work with an unlimited energy supply.
|
1711.00267
|
Wenbin Li
|
Wenbin Li, Jeannette Bohg, Mario Fritz
|
Acquiring Target Stacking Skills by Goal-Parameterized Deep
Reinforcement Learning
| null | null | null | null |
cs.RO cs.AI cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Understanding physical phenomena is a key component of human intelligence and
enables physical interaction with previously unseen environments. In this
paper, we study how an artificial agent can autonomously acquire this intuition
through interaction with the environment. We created a synthetic block stacking
environment with physics simulation in which the agent can learn a policy
end-to-end through trial and error. Thereby, we bypass the need to explicitly
model physical knowledge within the policy. We are specifically interested in tasks
that require the agent to reach a given goal state that may be different for
every new trial. To this end, we propose a deep reinforcement learning
framework that learns policies which are parametrized by a goal. We validated
the model on a toy example navigating in a grid world with different target
positions and in a block stacking task with different target structures of the
final tower. In contrast to prior work, our policies show better generalization
across different goals.
|
[
{
"created": "Wed, 1 Nov 2017 10:04:29 GMT",
"version": "v1"
},
{
"created": "Wed, 22 Nov 2017 11:38:17 GMT",
"version": "v2"
}
] |
2017-11-23
|
[
[
"Li",
"Wenbin",
""
],
[
"Bohg",
"Jeannette",
""
],
[
"Fritz",
"Mario",
""
]
] |
Understanding physical phenomena is a key component of human intelligence and enables physical interaction with previously unseen environments. In this paper, we study how an artificial agent can autonomously acquire this intuition through interaction with the environment. We created a synthetic block stacking environment with physics simulation in which the agent can learn a policy end-to-end through trial and error. Thereby, we bypass the need to explicitly model physical knowledge within the policy. We are specifically interested in tasks that require the agent to reach a given goal state that may be different for every new trial. To this end, we propose a deep reinforcement learning framework that learns policies which are parametrized by a goal. We validated the model on a toy example navigating in a grid world with different target positions and in a block stacking task with different target structures of the final tower. In contrast to prior work, our policies show better generalization across different goals.
|
2010.12280
|
Zahra Motaqy
|
Z. Motaqy, G. Almashaqbeh, B. Bahrak and N.Yazdani
|
Bet and Attack: Incentive Compatible Collaborative Attacks Using Smart
Contracts
|
Final Version
| null |
10.1007/978-3-030-90370-1_16
| null |
cs.GT cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Smart contract-enabled blockchains allow building decentralized applications
in which mutually-distrusted parties can work together. Recently, oracle
services emerged to provide these applications with real-world data feeds.
Unfortunately, these capabilities have been used for malicious purposes under
what is called criminal smart contracts. A few works explored this dark side
and showed a variety of such attacks. However, none of them considered
collaborative attacks against targets that reside outside the blockchain
ecosystem. In this paper, we bridge this gap and introduce a smart
contract-based framework that allows a sponsor to orchestrate a collaborative
attack among (pseudo)anonymous attackers and reward them for that. While all
previous works required a technique to quantify an attacker's individual
contribution, which could be infeasible with respect to real-world targets, our
framework avoids that. This is done by developing a novel scheme for trustless
collaboration through betting. That is, attackers bet on an event (i.e., the
attack takes place) and then work on making that event happen (i.e., perform
the attack). By taking DDoS as a use case, we formulate attackers' interaction
as a game, and formally prove that these attackers will collaborate in
proportion to the amount of their bets in the game's unique equilibrium. We
also model our framework and its reward function as an incentive mechanism and
prove that it is a strategy-proof and budget-balanced one. Finally, we conduct
numerical simulations to demonstrate the equilibrium behavior of our framework.
|
[
{
"created": "Fri, 23 Oct 2020 10:21:53 GMT",
"version": "v1"
},
{
"created": "Sun, 11 Apr 2021 08:04:20 GMT",
"version": "v2"
},
{
"created": "Sun, 19 Sep 2021 00:34:32 GMT",
"version": "v3"
},
{
"created": "Thu, 23 Sep 2021 20:46:11 GMT",
"version": "v4"
}
] |
2021-12-24
|
[
[
"Motaqy",
"Z.",
""
],
[
"Almashaqbeh",
"G.",
""
],
[
"Bahrak",
"B.",
""
],
[
"Yazdani",
"N.",
""
]
] |
Smart contract-enabled blockchains allow building decentralized applications in which mutually-distrusted parties can work together. Recently, oracle services emerged to provide these applications with real-world data feeds. Unfortunately, these capabilities have been used for malicious purposes under what is called criminal smart contracts. A few works explored this dark side and showed a variety of such attacks. However, none of them considered collaborative attacks against targets that reside outside the blockchain ecosystem. In this paper, we bridge this gap and introduce a smart contract-based framework that allows a sponsor to orchestrate a collaborative attack among (pseudo)anonymous attackers and reward them for that. While all previous works required a technique to quantify an attacker's individual contribution, which could be infeasible with respect to real-world targets, our framework avoids that. This is done by developing a novel scheme for trustless collaboration through betting. That is, attackers bet on an event (i.e., the attack takes place) and then work on making that event happen (i.e., perform the attack). By taking DDoS as a use case, we formulate attackers' interaction as a game, and formally prove that these attackers will collaborate in proportion to the amount of their bets in the game's unique equilibrium. We also model our framework and its reward function as an incentive mechanism and prove that it is a strategy-proof and budget-balanced one. Finally, we conduct numerical simulations to demonstrate the equilibrium behavior of our framework.
|
1607.08098
|
Carlos Leandro
|
Carlos Leandro and Helder Pita and Lu\'is Monteiro
|
The Actias system: supervised multi-strategy learning paradigm using
categorical logic
|
9 pages, 6 figures, conference ICKEDS'04
| null | null | null |
cs.DB cs.AI cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
One of the most difficult problems in the development of intelligent systems
is the construction of the underlying knowledge base. As a consequence, the
rate of progress in the development of this type of system is directly related
to the speed with which knowledge bases can be assembled, and to their
quality. We attempt to solve the knowledge acquisition problem for a Business
Information System by developing a supervised multistrategy learning paradigm.
This paradigm is centred on a collaborative data mining strategy, where groups
of experts collaborate, using data-mining processes, on the supervised
acquisition of new knowledge extracted from heterogeneous machine learning
data models.
  The Actias system is our approach to this paradigm. It is the result of
applying the graphic logic-based language of sketches to knowledge
integration. The system is a collaborative data mining workplace, where the
Information System knowledge base is an algebraic structure. It results from
the integration of background knowledge with new insights extracted from data
models, generated for specific data modelling tasks and represented as rules
using the sketches language.
|
[
{
"created": "Fri, 6 May 2016 20:04:40 GMT",
"version": "v1"
}
] |
2016-07-28
|
[
[
"Leandro",
"Carlos",
""
],
[
"Pita",
"Helder",
""
],
[
"Monteiro",
"Luís",
""
]
] |
One of the most difficult problems in the development of intelligent systems is the construction of the underlying knowledge base. As a consequence, the rate of progress in the development of this type of system is directly related to the speed with which knowledge bases can be assembled, and to their quality. We attempt to solve the knowledge acquisition problem for a Business Information System by developing a supervised multistrategy learning paradigm. This paradigm is centred on a collaborative data mining strategy, where groups of experts collaborate, using data-mining processes, on the supervised acquisition of new knowledge extracted from heterogeneous machine learning data models. The Actias system is our approach to this paradigm. It is the result of applying the graphic logic-based language of sketches to knowledge integration. The system is a collaborative data mining workplace, where the Information System knowledge base is an algebraic structure. It results from the integration of background knowledge with new insights extracted from data models, generated for specific data modelling tasks and represented as rules using the sketches language.
|
1104.2756
|
Tanzima Hashem
|
Tanzima Hashem, Lars Kulik, Rui Zhang
|
Privacy Preserving Moving KNN Queries
| null | null | null | null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a novel approach that protects trajectory privacy of users who
access location-based services through a moving k nearest neighbor (MkNN)
query. An MkNN query continuously returns the k nearest data objects for a
moving user (query point). Simply reporting a user's imprecise location, such
as a region instead of the exact position, to a location-based service
provider (LSP) cannot ensure the privacy of the user for an MkNN query:
continuous
disclosure of regions enables the LSP to follow a user's trajectory. We
identify the problem of trajectory privacy that arises from the overlap of
consecutive regions while requesting an MkNN query and provide the first
solution to this problem. Our approach allows a user to specify the confidence
level that represents a bound of how much more the user may need to travel than
the actual kth nearest data object. By hiding a user's required confidence
level and the required number of nearest data objects from an LSP, we develop a
technique to prevent the LSP from tracking the user's trajectory for MkNN
queries. We propose an efficient algorithm for the LSP to find k nearest data
objects for a region with a user's specified confidence level, which is an
essential component to evaluate an MkNN query in a privacy preserving manner;
this algorithm is at least two times faster than the state-of-the-art
algorithm. Extensive experimental studies validate the effectiveness of our
trajectory privacy protection technique and the efficiency of our algorithm.
|
[
{
"created": "Thu, 14 Apr 2011 13:28:08 GMT",
"version": "v1"
}
] |
2011-04-15
|
[
[
"Hashem",
"Tanzima",
""
],
[
"Kulik",
"Lars",
""
],
[
"Zhang",
"Rui",
""
]
] |
We present a novel approach that protects trajectory privacy of users who access location-based services through a moving k nearest neighbor (MkNN) query. An MkNN query continuously returns the k nearest data objects for a moving user (query point). Simply reporting a user's imprecise location, such as a region instead of the exact position, to a location-based service provider (LSP) cannot ensure the privacy of the user for an MkNN query: continuous disclosure of regions enables the LSP to follow a user's trajectory. We identify the problem of trajectory privacy that arises from the overlap of consecutive regions while requesting an MkNN query and provide the first solution to this problem. Our approach allows a user to specify the confidence level that represents a bound of how much more the user may need to travel than the actual kth nearest data object. By hiding a user's required confidence level and the required number of nearest data objects from an LSP, we develop a technique to prevent the LSP from tracking the user's trajectory for MkNN queries. We propose an efficient algorithm for the LSP to find k nearest data objects for a region with a user's specified confidence level, which is an essential component to evaluate an MkNN query in a privacy preserving manner; this algorithm is at least two times faster than the state-of-the-art algorithm. Extensive experimental studies validate the effectiveness of our trajectory privacy protection technique and the efficiency of our algorithm.
|
2401.06321
|
Wonjune Kang
|
Wonjune Kang, Yun Wang, Shun Zhang, Arthur Hinsvark, Qing He
|
Multi-Task Learning for Front-End Text Processing in TTS
|
ICASSP 2024
| null |
10.1109/ICASSP48485.2024.10446241
| null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We propose a multi-task learning (MTL) model for jointly performing three
tasks that are commonly solved in a text-to-speech (TTS) front-end: text
normalization (TN), part-of-speech (POS) tagging, and homograph disambiguation
(HD). Our framework utilizes a tree-like structure with a trunk that learns
shared representations, followed by separate task-specific heads. We further
incorporate a pre-trained language model to utilize its built-in lexical and
contextual knowledge, and study how to best use its embeddings so as to most
effectively benefit our multi-task model. Through task-wise ablations, we show
that our full model trained on all three tasks achieves the strongest overall
performance compared to models trained on individual or sub-combinations of
tasks, confirming the advantages of our MTL framework. Finally, we introduce a
new HD dataset containing a balanced number of sentences in diverse contexts
for a variety of homographs and their pronunciations. We demonstrate that
incorporating this dataset into training significantly improves HD performance
over only using a commonly used, but imbalanced, pre-existing dataset.
|
[
{
"created": "Fri, 12 Jan 2024 02:13:21 GMT",
"version": "v1"
}
] |
2024-04-04
|
[
[
"Kang",
"Wonjune",
""
],
[
"Wang",
"Yun",
""
],
[
"Zhang",
"Shun",
""
],
[
"Hinsvark",
"Arthur",
""
],
[
"He",
"Qing",
""
]
] |
We propose a multi-task learning (MTL) model for jointly performing three tasks that are commonly solved in a text-to-speech (TTS) front-end: text normalization (TN), part-of-speech (POS) tagging, and homograph disambiguation (HD). Our framework utilizes a tree-like structure with a trunk that learns shared representations, followed by separate task-specific heads. We further incorporate a pre-trained language model to utilize its built-in lexical and contextual knowledge, and study how to best use its embeddings so as to most effectively benefit our multi-task model. Through task-wise ablations, we show that our full model trained on all three tasks achieves the strongest overall performance compared to models trained on individual or sub-combinations of tasks, confirming the advantages of our MTL framework. Finally, we introduce a new HD dataset containing a balanced number of sentences in diverse contexts for a variety of homographs and their pronunciations. We demonstrate that incorporating this dataset into training significantly improves HD performance over only using a commonly used, but imbalanced, pre-existing dataset.
|
2102.01187
|
Peiye Zhuang
|
Peiye Zhuang, Oluwasanmi Koyejo, Alexander G. Schwing
|
Enjoy Your Editing: Controllable GANs for Image Editing via Latent Space
Navigation
|
Accepted to ICLR 2021. 14 pages, 15 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Controllable semantic image editing enables a user to change entire image
attributes with a few clicks, e.g., gradually making a summer scene look like
it was taken in winter. Classic approaches for this task use a Generative
Adversarial Net (GAN) to learn a latent space and suitable latent-space
transformations. However, current approaches often suffer from attribute edits
that are entangled, global image identity changes, and diminished
photo-realism. To address these concerns, we learn multiple attribute
transformations simultaneously, integrate attribute regression into the
training of transformation functions, and apply a content loss and an
adversarial loss that encourages the maintenance of image identity and
photo-realism. We propose quantitative evaluation strategies for measuring
controllable editing performance, unlike prior work, which primarily focuses on
qualitative evaluation. Our model permits better control for both single- and
multiple-attribute editing while preserving image identity and realism during
transformation. We provide empirical results for both natural and synthetic
images, highlighting that our model achieves state-of-the-art performance for
targeted image manipulation.
|
[
{
"created": "Mon, 1 Feb 2021 21:38:36 GMT",
"version": "v1"
},
{
"created": "Wed, 3 Feb 2021 07:21:18 GMT",
"version": "v2"
},
{
"created": "Sun, 28 Mar 2021 20:04:36 GMT",
"version": "v3"
}
] |
2021-03-30
|
[
[
"Zhuang",
"Peiye",
""
],
[
"Koyejo",
"Oluwasanmi",
""
],
[
"Schwing",
"Alexander G.",
""
]
] |
Controllable semantic image editing enables a user to change entire image attributes with a few clicks, e.g., gradually making a summer scene look like it was taken in winter. Classic approaches for this task use a Generative Adversarial Net (GAN) to learn a latent space and suitable latent-space transformations. However, current approaches often suffer from attribute edits that are entangled, global image identity changes, and diminished photo-realism. To address these concerns, we learn multiple attribute transformations simultaneously, integrate attribute regression into the training of transformation functions, and apply a content loss and an adversarial loss that encourages the maintenance of image identity and photo-realism. We propose quantitative evaluation strategies for measuring controllable editing performance, unlike prior work, which primarily focuses on qualitative evaluation. Our model permits better control for both single- and multiple-attribute editing while preserving image identity and realism during transformation. We provide empirical results for both natural and synthetic images, highlighting that our model achieves state-of-the-art performance for targeted image manipulation.
|
2103.04188
|
Qinheping Hu
|
Qinheping Hu, John Cyphert, Loris D'Antoni, Thomas Reps
|
Synthesis with Asymptotic Resource Bounds
| null | null | null | null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a method for synthesizing recursive functions that satisfy both a
functional specification and an asymptotic resource bound. Prior methods for
synthesis with a resource metric require the user to specify a concrete
expression exactly describing resource usage, whereas our method uses big-O
notation to specify the asymptotic resource usage. Our method can synthesize
programs with complex resource bounds, such as a sort function that has
complexity O(n log n). Our synthesis procedure uses a type system that is able
to assign an asymptotic complexity to terms, and can track recurrence relations
of functions. These typing rules are justified by theorems used in analysis of
algorithms, such as the Master Theorem and the Akra-Bazzi method. We
implemented our method as an extension of prior type-based synthesis work. Our
tool, SynPlexity, was able to synthesize complex divide-and-conquer programs
that cannot be synthesized by prior solvers.
|
[
{
"created": "Sat, 6 Mar 2021 20:03:50 GMT",
"version": "v1"
},
{
"created": "Wed, 26 May 2021 05:16:44 GMT",
"version": "v2"
}
] |
2021-05-27
|
[
[
"Hu",
"Qinheping",
""
],
[
"Cyphert",
"John",
""
],
[
"D'Antoni",
"Loris",
""
],
[
"Reps",
"Thomas",
""
]
] |
We present a method for synthesizing recursive functions that satisfy both a functional specification and an asymptotic resource bound. Prior methods for synthesis with a resource metric require the user to specify a concrete expression exactly describing resource usage, whereas our method uses big-O notation to specify the asymptotic resource usage. Our method can synthesize programs with complex resource bounds, such as a sort function that has complexity O(n log n). Our synthesis procedure uses a type system that is able to assign an asymptotic complexity to terms, and can track recurrence relations of functions. These typing rules are justified by theorems used in analysis of algorithms, such as the Master Theorem and the Akra-Bazzi method. We implemented our method as an extension of prior type-based synthesis work. Our tool, SynPlexity, was able to synthesize complex divide-and-conquer programs that cannot be synthesized by prior solvers.
|
1812.08868
|
Tom Hanika
|
Tom Hanika and Maren Koyda and Gerd Stumme
|
Relevant Attributes in Formal Contexts
|
14 pages, 5 figures
| null |
10.1007/978-3-030-23182-8_8
| null |
cs.AI cs.IT cs.LG math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Computing conceptual structures, such as formal concept lattices, is a
challenging task in the age of massive data sets. There are various approaches
to deal with this, e.g., random sampling, parallelization, or attribute
extraction. A method not yet investigated in the realm of formal concept
analysis is attribute selection, as done in machine learning. Building on
this, we introduce a method for attribute selection in formal contexts. To
this end, we propose the notion of relevant attributes, which enables us to
define a relative relevance function reflecting both the order structure of
the concept lattice and the distribution of objects on it. Finally, we
overcome the computational challenges of computing the relative relevance
through an approximation approach based on information entropy.
|
[
{
"created": "Thu, 20 Dec 2018 22:07:16 GMT",
"version": "v1"
}
] |
2020-02-28
|
[
[
"Hanika",
"Tom",
""
],
[
"Koyda",
"Maren",
""
],
[
"Stumme",
"Gerd",
""
]
] |
Computing conceptual structures, such as formal concept lattices, is a challenging task in the age of massive data sets. There are various approaches to deal with this, e.g., random sampling, parallelization, or attribute extraction. A method not yet investigated in the realm of formal concept analysis is attribute selection, as done in machine learning. Building on this, we introduce a method for attribute selection in formal contexts. To this end, we propose the notion of relevant attributes, which enables us to define a relative relevance function reflecting both the order structure of the concept lattice and the distribution of objects on it. Finally, we overcome the computational challenges of computing the relative relevance through an approximation approach based on information entropy.
|
1805.02442
|
Vered Shwartz
|
Vered Shwartz and Ido Dagan
|
Paraphrase to Explicate: Revealing Implicit Noun-Compound Relations
|
Long paper at ACL 2018
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Revealing the implicit semantic relation between the constituents of a
noun-compound is important for many NLP applications. It has been addressed in
the literature either as a classification task to a set of pre-defined
relations or by producing free text paraphrases explicating the relations. Most
existing paraphrasing methods lack the ability to generalize, and have a hard
time interpreting infrequent or new noun-compounds. We propose a neural model
that generalizes better by representing paraphrases in a continuous space,
generalizing for both unseen noun-compounds and rare paraphrases. Our model
helps improve performance on both the noun-compound paraphrasing and
classification tasks.
|
[
{
"created": "Mon, 7 May 2018 11:14:07 GMT",
"version": "v1"
}
] |
2018-05-08
|
[
[
"Shwartz",
"Vered",
""
],
[
"Dagan",
"Ido",
""
]
] |
Revealing the implicit semantic relation between the constituents of a noun-compound is important for many NLP applications. It has been addressed in the literature either as a classification task to a set of pre-defined relations or by producing free text paraphrases explicating the relations. Most existing paraphrasing methods lack the ability to generalize, and have a hard time interpreting infrequent or new noun-compounds. We propose a neural model that generalizes better by representing paraphrases in a continuous space, generalizing for both unseen noun-compounds and rare paraphrases. Our model helps improve performance on both the noun-compound paraphrasing and classification tasks.
|
2401.08461
|
J\'er\^ome Botoko Ekila
|
J\'er\^ome Botoko Ekila, Jens Nevens, Lara Verheyen, Katrien Beuls,
Paul Van Eecke
|
Decentralised Emergence of Robust and Adaptive Linguistic Conventions in
Populations of Autonomous Agents Grounded in Continuous Worlds
| null | null | null | null |
cs.AI cs.CL cs.NE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper introduces a methodology through which a population of autonomous
agents can establish a linguistic convention that enables them to refer to
arbitrary entities that they observe in their environment. The linguistic
convention emerges in a decentralised manner through local communicative
interactions between pairs of agents drawn from the population. The convention
consists of symbolic labels (word forms) associated with concept representations
(word meanings) that are grounded in a continuous feature space. The concept
representations of each agent are individually constructed yet compatible on a
communicative level. Through a range of experiments, we show (i) that the
methodology enables a population to converge on a communicatively effective,
coherent and human-interpretable linguistic convention, (ii) that it is
naturally robust against sensor defects in individual agents, (iii) that it can
effectively deal with noisy observations, uncalibrated sensors and
heteromorphic populations, (iv) that the method is adequate for continual
learning, and (v) that the convention self-adapts to changes in the environment
and communicative needs of the agents.
|
[
{
"created": "Tue, 16 Jan 2024 16:11:35 GMT",
"version": "v1"
}
] |
2024-01-17
|
[
[
"Ekila",
"Jérôme Botoko",
""
],
[
"Nevens",
"Jens",
""
],
[
"Verheyen",
"Lara",
""
],
[
"Beuls",
"Katrien",
""
],
[
"Van Eecke",
"Paul",
""
]
] |
This paper introduces a methodology through which a population of autonomous agents can establish a linguistic convention that enables them to refer to arbitrary entities that they observe in their environment. The linguistic convention emerges in a decentralised manner through local communicative interactions between pairs of agents drawn from the population. The convention consists of symbolic labels (word forms) associated with concept representations (word meanings) that are grounded in a continuous feature space. The concept representations of each agent are individually constructed yet compatible on a communicative level. Through a range of experiments, we show (i) that the methodology enables a population to converge on a communicatively effective, coherent and human-interpretable linguistic convention, (ii) that it is naturally robust against sensor defects in individual agents, (iii) that it can effectively deal with noisy observations, uncalibrated sensors and heteromorphic populations, (iv) that the method is adequate for continual learning, and (v) that the convention self-adapts to changes in the environment and communicative needs of the agents.
|
2312.15901
|
Zixian Guo
|
Zixian Guo, Yuxiang Wei, Ming Liu, Zhilong Ji, Jinfeng Bai, Yiwen Guo,
Wangmeng Zuo
|
Black-Box Tuning of Vision-Language Models with Effective Gradient
Approximation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Parameter-efficient fine-tuning (PEFT) methods have provided an effective way
for adapting large vision-language models to specific tasks or scenarios.
Typically, they learn a very small scale of parameters for pre-trained models
in a white-box formulation, which assumes model architectures to be known and
parameters to be accessible. However, large models are often not open-source
due to considerations of preventing abuse or commercial factors, hence posing a
barrier to the deployment of white-box PEFT methods. To alleviate the
dependence on model accessibility, we introduce collaborative black-box tuning
(CBBT) for both textual prompt optimization and output feature adaptation for
black-box models. Specifically, considering that the backpropagation gradients
are blocked, we approximate the gradients of textual prompts by analyzing the
predictions with perturbed prompts. Secondly, a lightweight adapter is deployed
over the output feature of the inaccessible model, further facilitating the
model adaptation process. Empowered with these designs, our CBBT is extensively
evaluated on eleven downstream benchmarks and achieves remarkable improvements
compared to existing black-box VL adaptation methods. Code is released at
https://github.com/guozix/cbbt.
|
[
{
"created": "Tue, 26 Dec 2023 06:31:28 GMT",
"version": "v1"
}
] |
2023-12-27
|
[
[
"Guo",
"Zixian",
""
],
[
"Wei",
"Yuxiang",
""
],
[
"Liu",
"Ming",
""
],
[
"Ji",
"Zhilong",
""
],
[
"Bai",
"Jinfeng",
""
],
[
"Guo",
"Yiwen",
""
],
[
"Zuo",
"Wangmeng",
""
]
] |
Parameter-efficient fine-tuning (PEFT) methods have provided an effective way for adapting large vision-language models to specific tasks or scenarios. Typically, they learn a very small scale of parameters for pre-trained models in a white-box formulation, which assumes model architectures to be known and parameters to be accessible. However, large models are often not open-source due to considerations of preventing abuse or commercial factors, hence posing a barrier to the deployment of white-box PEFT methods. To alleviate the dependence on model accessibility, we introduce collaborative black-box tuning (CBBT) for both textual prompt optimization and output feature adaptation for black-box models. Specifically, considering that the backpropagation gradients are blocked, we approximate the gradients of textual prompts by analyzing the predictions with perturbed prompts. Secondly, a lightweight adapter is deployed over the output feature of the inaccessible model, further facilitating the model adaptation process. Empowered with these designs, our CBBT is extensively evaluated on eleven downstream benchmarks and achieves remarkable improvements compared to existing black-box VL adaptation methods. Code is released at https://github.com/guozix/cbbt.
|
1511.08972
|
Guanping Lu Dr.
|
Guanping Lu, Robert C. Qiu, Wenxian Yu
|
Uplink One-tone Filtered Multitone Modulation Transmission for Machine
Type Communications
|
This paper has been withdrawn by the author because it contains an error
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To accommodate current machine type communications (MTC), an uplink waveform
is proposed in which MTC nodes use one carrier to transmit signals and central
nodes demodulate the signals of different nodes jointly. Furthermore, the
carrier bandwidth is variable to fit the channels of the nodes. This waveform
may reduce the hardware complexity of low-cost MTC nodes and loosen the time-
and frequency-domain synchronization requirements of the entire system. This
paper also provides an interference analysis and complexity comparisons of the
proposed scheme and orthogonal frequency division multiplexing (OFDM).
|
[
{
"created": "Sun, 29 Nov 2015 06:10:18 GMT",
"version": "v1"
},
{
"created": "Mon, 14 Dec 2015 07:09:33 GMT",
"version": "v2"
}
] |
2015-12-15
|
[
[
"Lu",
"Guanping",
""
],
[
"Qiu",
"Robert C.",
""
],
[
"Yu",
"Wenxian",
""
]
] |
To accommodate current machine type communications (MTC), an uplink waveform is proposed in which MTC nodes use one carrier to transmit signals and central nodes demodulate the signals of different nodes jointly. Furthermore, the carrier bandwidth is variable to fit the channels of the nodes. This waveform may reduce the hardware complexity of low-cost MTC nodes and loosen the time- and frequency-domain synchronization requirements of the entire system. This paper also provides an interference analysis and complexity comparisons of the proposed scheme and orthogonal frequency division multiplexing (OFDM).
|
1007.1100
|
Jalaluddin Qureshi
|
Chuan Heng Foh, Jianfei Cai, and Jalaluddin Qureshi
|
Collision Codes: Decoding Superimposed BPSK Modulated Wireless
Transmissions
| null | null |
10.1109/CCNC.2010.5421745
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The introduction of physical layer network coding gives rise to the concept
of turning a collision of transmissions on a wireless channel into something
useful. In physical layer network coding, two synchronized simultaneous packet
transmissions are carefully encoded such that the superimposed transmission
can be decoded to produce a packet identical to the bitwise binary sum of the
two transmitted packets. This paper explores the decoding of a superimposed
transmission resulting from multiple synchronized simultaneous transmissions.
We devise a coding scheme that achieves the identification of individual
transmissions from the synchronized superimposed transmission. A mathematical
proof of the existence of such a coding scheme is given.
|
[
{
"created": "Wed, 7 Jul 2010 10:34:34 GMT",
"version": "v1"
}
] |
2010-07-08
|
[
[
"Foh",
"Chuan Heng",
""
],
[
"Cai",
"Jianfei",
""
],
[
"Qureshi",
"Jalaluddin",
""
]
] |
The introduction of physical layer network coding gives rise to the concept of turning a collision of transmissions on a wireless channel into something useful. In physical layer network coding, two synchronized simultaneous packet transmissions are carefully encoded such that the superimposed transmission can be decoded to produce a packet identical to the bitwise binary sum of the two transmitted packets. This paper explores the decoding of a superimposed transmission resulting from multiple synchronized simultaneous transmissions. We devise a coding scheme that achieves the identification of individual transmissions from the synchronized superimposed transmission. A mathematical proof of the existence of such a coding scheme is given.
|
1502.04148
|
James Voss
|
James Voss, Mikhail Belkin, and Luis Rademacher
|
A Pseudo-Euclidean Iteration for Optimal Recovery in Noisy ICA
|
17 pages, 2 figures
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Independent Component Analysis (ICA) is a popular model for blind signal
separation. The ICA model assumes that a number of independent source signals
are linearly mixed to form the observed signals. We propose a new algorithm,
PEGI (for pseudo-Euclidean Gradient Iteration), for provable model recovery for
ICA with Gaussian noise. The main technical innovation of the algorithm is to
use a fixed point iteration in a pseudo-Euclidean (indefinite "inner product")
space. The use of this indefinite "inner product" resolves technical issues
common to several existing algorithms for noisy ICA. This leads to an algorithm
which is conceptually simple, efficient and accurate in testing.
Our second contribution is combining PEGI with the analysis of objectives for
optimal recovery in the noisy ICA model. It has been observed that the direct
approach of demixing with the inverse of the mixing matrix is suboptimal for
signal recovery in terms of the natural Signal to Interference plus Noise Ratio
(SINR) criterion. There have been several partial solutions proposed in the ICA
literature. It turns out that any solution to the mixing matrix reconstruction
problem can be used to construct an SINR-optimal ICA demixing, despite the fact
that SINR itself cannot be computed from data. That allows us to obtain a
practical and provably SINR-optimal recovery method for ICA with arbitrary
Gaussian noise.
|
[
{
"created": "Fri, 13 Feb 2015 23:18:35 GMT",
"version": "v1"
},
{
"created": "Thu, 1 Oct 2015 16:05:56 GMT",
"version": "v2"
}
] |
2015-10-02
|
[
[
"Voss",
"James",
""
],
[
"Belkin",
"Mikhail",
""
],
[
"Rademacher",
"Luis",
""
]
] |
Independent Component Analysis (ICA) is a popular model for blind signal separation. The ICA model assumes that a number of independent source signals are linearly mixed to form the observed signals. We propose a new algorithm, PEGI (for pseudo-Euclidean Gradient Iteration), for provable model recovery for ICA with Gaussian noise. The main technical innovation of the algorithm is to use a fixed point iteration in a pseudo-Euclidean (indefinite "inner product") space. The use of this indefinite "inner product" resolves technical issues common to several existing algorithms for noisy ICA. This leads to an algorithm which is conceptually simple, efficient and accurate in testing. Our second contribution is combining PEGI with the analysis of objectives for optimal recovery in the noisy ICA model. It has been observed that the direct approach of demixing with the inverse of the mixing matrix is suboptimal for signal recovery in terms of the natural Signal to Interference plus Noise Ratio (SINR) criterion. There have been several partial solutions proposed in the ICA literature. It turns out that any solution to the mixing matrix reconstruction problem can be used to construct an SINR-optimal ICA demixing, despite the fact that SINR itself cannot be computed from data. That allows us to obtain a practical and provably SINR-optimal recovery method for ICA with arbitrary Gaussian noise.
|
2008.08652
|
Leonardo Alexandre Ferreira Leite
|
Leonardo Leite, Gustavo Pinto, Fabio Kon, Paulo Meirelles
|
The Organization of Software Teams in the Quest for Continuous Delivery:
A Grounded Theory Approach
|
Version accepted for publication in the Information and Software
Technology journal (Jun, 2021) / CC-BY-NC-ND license / affiliation of last
author changed
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Context: To accelerate time-to-market and improve customer satisfaction,
software-producing organizations have adopted continuous delivery practices,
impacting the relations between development and infrastructure professionals.
Yet, little literature has substantially tackled how the software
industry structures the organization of development and infrastructure teams.
Objective: In this study, we investigate how software-producing organizations
structure their development and infrastructure teams, specifically how the
division of labor is arranged among these groups and how they interact.
Method: After brainstorming with 7 DevOps experts to better formulate our
research and procedures, we collected and analyzed data from 37 semi-structured
interviews with IT professionals, following Grounded Theory guidelines.
Results: After a careful analysis, we identified four common organizational
structures: (1) siloed departments, (2) classical DevOps, (3) cross-functional
teams, and (4) platform teams. We also observed that some companies are
transitioning between these structures.
Conclusion: The main contribution of this study is a theory in the form of a
taxonomy that organizes the found structures along with their properties. This
theory could guide researchers and practitioners to think about how to better
structure development and infrastructure professionals in software-producing
organizations.
|
[
{
"created": "Wed, 19 Aug 2020 20:00:24 GMT",
"version": "v1"
},
{
"created": "Sat, 19 Jun 2021 17:35:50 GMT",
"version": "v2"
},
{
"created": "Tue, 22 Jun 2021 16:54:55 GMT",
"version": "v3"
},
{
"created": "Wed, 23 Jun 2021 21:53:12 GMT",
"version": "v4"
}
] |
2021-06-25
|
[
[
"Leite",
"Leonardo",
""
],
[
"Pinto",
"Gustavo",
""
],
[
"Kon",
"Fabio",
""
],
[
"Meirelles",
"Paulo",
""
]
] |
Context: To accelerate time-to-market and improve customer satisfaction, software-producing organizations have adopted continuous delivery practices, impacting the relations between development and infrastructure professionals. Yet, little literature has substantially tackled how the software industry structures the organization of development and infrastructure teams. Objective: In this study, we investigate how software-producing organizations structure their development and infrastructure teams, specifically how the division of labor is arranged among these groups and how they interact. Method: After brainstorming with 7 DevOps experts to better formulate our research and procedures, we collected and analyzed data from 37 semi-structured interviews with IT professionals, following Grounded Theory guidelines. Results: After a careful analysis, we identified four common organizational structures: (1) siloed departments, (2) classical DevOps, (3) cross-functional teams, and (4) platform teams. We also observed that some companies are transitioning between these structures. Conclusion: The main contribution of this study is a theory in the form of a taxonomy that organizes the found structures along with their properties. This theory could guide researchers and practitioners to think about how to better structure development and infrastructure professionals in software-producing organizations.
|
2102.04148
|
Birgitta Dresp-Langley
|
Rongrong Liu, Florent Nageotte, Philippe Zanne, Michel de Mathelin and
Birgitta Dresp-Langley
|
Deep Reinforcement Learning for the Control of Robotic Manipulation: A
Focussed Mini-Review
| null |
Robotics, 2021, 10, 1, 22
|
10.3390/robotics10010022
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep learning has provided new ways of manipulating, processing and analyzing
data. It may sometimes achieve results comparable to, or surpassing, human
expert performance, and has become a source of inspiration in the era of
artificial intelligence. Another subfield of machine learning, reinforcement
learning, tries to find an optimal behavior strategy through
interactions with the environment. Combining deep learning and reinforcement
learning permits resolving critical issues relative to the dimensionality and
scalability of data in tasks with sparse reward signals, such as robotic
manipulation and control tasks, that neither method permits resolving when
applied on its own. In this paper, we present recent significant progress of
deep reinforcement learning algorithms, which try to tackle the problems for
the application in the domain of robotic manipulation control, such as sample
efficiency and generalization. Despite these continuous improvements,
currently, the challenges of learning robust and versatile manipulation skills
for robots with deep reinforcement learning are still far from being resolved
for real-world applications.
|
[
{
"created": "Mon, 8 Feb 2021 11:57:06 GMT",
"version": "v1"
}
] |
2021-02-09
|
[
[
"Liu",
"Rongrong",
""
],
[
"Nageotte",
"Florent",
""
],
[
"Zanne",
"Philippe",
""
],
[
"de Mathelin",
"Michel",
""
],
[
"Dresp-Langley",
"Birgitta",
""
]
] |
Deep learning has provided new ways of manipulating, processing and analyzing data. It may sometimes achieve results comparable to, or surpassing, human expert performance, and has become a source of inspiration in the era of artificial intelligence. Another subfield of machine learning, reinforcement learning, tries to find an optimal behavior strategy through interactions with the environment. Combining deep learning and reinforcement learning permits resolving critical issues relative to the dimensionality and scalability of data in tasks with sparse reward signals, such as robotic manipulation and control tasks, that neither method permits resolving when applied on its own. In this paper, we present recent significant progress of deep reinforcement learning algorithms, which try to tackle the problems for the application in the domain of robotic manipulation control, such as sample efficiency and generalization. Despite these continuous improvements, currently, the challenges of learning robust and versatile manipulation skills for robots with deep reinforcement learning are still far from being resolved for real-world applications.
|
2207.11357
|
Molly Jane Nicholas
|
Molly Jane Nicholas, Eric Paulos
|
PREPRINT: Found Object Puppeteering as a Tool for Rapid Movement
Sketching in 3D Animation
| null | null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Both expert and novice animators have a need to engage in movement sketching
-- low-cost, rapid iteration on a character's movement style -- especially
early on in the ideation process. Yet animation tools currently focus on
low-level character control mechanisms rather than encouraging engagement with
and deep observation of movement. We identify Found Object puppeteering --
where puppeteers manipulate everyday physical objects with their hands -- as a
creative practice whose use of material "jigs" is uniquely well-positioned to
scaffold the novice animator's developing skills. In this paper, we draw on the
practice of an expert puppeteer practitioner to inform the design of a system
that incorporates physical objects into the animation workflow to scaffold
novices into diverse movement exploration while manipulating digital puppets.
|
[
{
"created": "Fri, 22 Jul 2022 22:22:16 GMT",
"version": "v1"
}
] |
2022-07-26
|
[
[
"Nicholas",
"Molly Jane",
""
],
[
"Paulos",
"Eric",
""
]
] |
Both expert and novice animators have a need to engage in movement sketching -- low-cost, rapid iteration on a character's movement style -- especially early on in the ideation process. Yet animation tools currently focus on low-level character control mechanisms rather than encouraging engagement with and deep observation of movement. We identify Found Object puppeteering -- where puppeteers manipulate everyday physical objects with their hands -- as a creative practice whose use of material "jigs" is uniquely well-positioned to scaffold the novice animator's developing skills. In this paper, we draw on the practice of an expert puppeteer practitioner to inform the design of a system that incorporates physical objects into the animation workflow to scaffold novices into diverse movement exploration while manipulating digital puppets.
|
1601.00428
|
Charith Perera
|
Charith Perera, Dumidu Talagala, Chi Harold Liu, Julio C. Estrella
|
Energy Efficient Location and Activity-aware On-Demand Mobile
Distributed Sensing Platform for Sensing as a Service in IoT Clouds
|
IEEE Transactions on Computational Social Systems 2016
| null |
10.1109/TCSS.2016.2515844
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Internet of Things (IoT) envisions billions of sensors deployed around us
and connected to the Internet, where mobile crowd sensing technologies are
widely used to collect data in different contexts of the IoT paradigm. Due to
the popularity of Big Data technologies, processing and storing large volumes
of data has become easier than ever. However, large scale data management tasks
still require significant amounts of resources that can be expensive regardless
of whether they are purchased or rented (e.g. pay-as-you-go infrastructure).
Further, not everyone is interested in such large scale data collection and
analysis. More importantly, not everyone has the financial and computational
resources to deal with such large volumes of data. Therefore, a timely need
exists for a cloud-integrated mobile crowd sensing platform that is capable of
capturing sensor data, on-demand, based on conditions enforced by the data
consumers. In this paper, we propose a context-aware, specifically location-
and activity-aware, mobile sensing platform called C-MOSDEN (Context-aware
Mobile Sensor Data ENgine) for the IoT domain. We evaluated the proposed
platform using three real-world scenarios that highlight the importance of
'selective sensing'. The computational effectiveness and efficiency of the
proposed platform are investigated and used to highlight the advantages of
context-aware selective sensing.
|
[
{
"created": "Mon, 4 Jan 2016 09:45:42 GMT",
"version": "v1"
}
] |
2016-11-18
|
[
[
"Perera",
"Charith",
""
],
[
"Talagala",
"Dumidu",
""
],
[
"Liu",
"Chi Harold",
""
],
[
"Estrella",
"Julio C.",
""
]
] |
The Internet of Things (IoT) envisions billions of sensors deployed around us and connected to the Internet, where mobile crowd sensing technologies are widely used to collect data in different contexts of the IoT paradigm. Due to the popularity of Big Data technologies, processing and storing large volumes of data has become easier than ever. However, large scale data management tasks still require significant amounts of resources that can be expensive regardless of whether they are purchased or rented (e.g. pay-as-you-go infrastructure). Further, not everyone is interested in such large scale data collection and analysis. More importantly, not everyone has the financial and computational resources to deal with such large volumes of data. Therefore, a timely need exists for a cloud-integrated mobile crowd sensing platform that is capable of capturing sensor data, on-demand, based on conditions enforced by the data consumers. In this paper, we propose a context-aware, specifically location- and activity-aware, mobile sensing platform called C-MOSDEN (Context-aware Mobile Sensor Data ENgine) for the IoT domain. We evaluated the proposed platform using three real-world scenarios that highlight the importance of 'selective sensing'. The computational effectiveness and efficiency of the proposed platform are investigated and used to highlight the advantages of context-aware selective sensing.
|
2307.08652
|
Aalok Gangopadhyay
|
Aalok Gangopadhyay, Paras Gupta, Tarun Sharma, Prajwal Singh,
Shanmuganathan Raman
|
Search Me Knot, Render Me Knot: Embedding Search and Differentiable
Rendering of Knots in 3D
| null | null | null | null |
cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce the problem of knot-based inverse perceptual art. Given multiple
target images and their corresponding viewing configurations, the objective is
to find a 3D knot-based tubular structure whose appearance resembles the target
images when viewed from the specified viewing configurations. To solve this
problem, we first design a differentiable rendering algorithm for rendering
tubular knots embedded in 3D for arbitrary perspective camera configurations.
Utilizing this differentiable rendering algorithm, we search over the space of
knot configurations to find the ideal knot embedding. We represent the knot
embeddings via homeomorphisms of the desired template knot, where the
homeomorphisms are parametrized by the weights of an invertible neural network.
Our approach is fully differentiable, making it possible to find the ideal 3D
tubular structure for the desired perceptual art using gradient-based
optimization. We propose several loss functions that impose additional physical
constraints, enforcing that the tube is free of self-intersection, lies within
a predefined region in space, satisfies the physical bending limits of the tube
material and the material cost is within a specified budget. We demonstrate
through results that our knot representation is highly expressive and gives
impressive results even for challenging target images in both single view as
well as multiple view constraints. Through an extensive ablation study we show
that each of the proposed loss functions is effective in ensuring physical
realizability. We construct a real-world 3D-printed object to demonstrate the
practical utility of our approach. To the best of our knowledge, we are the
first to propose a fully differentiable optimization framework for knot-based
inverse perceptual art.
|
[
{
"created": "Mon, 17 Jul 2023 17:03:26 GMT",
"version": "v1"
},
{
"created": "Tue, 18 Jul 2023 03:16:22 GMT",
"version": "v2"
},
{
"created": "Fri, 21 Jul 2023 12:19:33 GMT",
"version": "v3"
},
{
"created": "Sat, 19 Aug 2023 07:31:26 GMT",
"version": "v4"
}
] |
2023-08-22
|
[
[
"Gangopadhyay",
"Aalok",
""
],
[
"Gupta",
"Paras",
""
],
[
"Sharma",
"Tarun",
""
],
[
"Singh",
"Prajwal",
""
],
[
"Raman",
"Shanmuganathan",
""
]
] |
We introduce the problem of knot-based inverse perceptual art. Given multiple target images and their corresponding viewing configurations, the objective is to find a 3D knot-based tubular structure whose appearance resembles the target images when viewed from the specified viewing configurations. To solve this problem, we first design a differentiable rendering algorithm for rendering tubular knots embedded in 3D for arbitrary perspective camera configurations. Utilizing this differentiable rendering algorithm, we search over the space of knot configurations to find the ideal knot embedding. We represent the knot embeddings via homeomorphisms of the desired template knot, where the homeomorphisms are parametrized by the weights of an invertible neural network. Our approach is fully differentiable, making it possible to find the ideal 3D tubular structure for the desired perceptual art using gradient-based optimization. We propose several loss functions that impose additional physical constraints, enforcing that the tube is free of self-intersection, lies within a predefined region in space, satisfies the physical bending limits of the tube material and the material cost is within a specified budget. We demonstrate through results that our knot representation is highly expressive and gives impressive results even for challenging target images in both single view as well as multiple view constraints. Through an extensive ablation study we show that each of the proposed loss functions is effective in ensuring physical realizability. We construct a real-world 3D-printed object to demonstrate the practical utility of our approach. To the best of our knowledge, we are the first to propose a fully differentiable optimization framework for knot-based inverse perceptual art.
|
2206.02367
|
Chuanzhe Jing
|
Chuanzhe Jing, Tho Nguyen Duc, Phan Xuan Tan, Eiji Kamioka
|
Subtitle-based Viewport Prediction for 360-degree Virtual Tourism Video
| null | null | null | null |
cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
360-degree streaming videos can provide a rich immersive experience to
users. However, they require an extremely high-bandwidth network. One of the
common solutions for reducing bandwidth consumption is to stream only the
portion of the video covered by the user's viewport. To do that, predicting
the user's viewpoint is indispensable. Existing viewport prediction methods
mainly concentrate on the user's head movement trajectory and video saliency.
None of them consider navigation information contained in the video, which can
turn the attention of the user to specific regions in the video with high
probability. Such information can be included in video subtitles, especially
those in 360-degree virtual tourism videos. This fact reveals the potential
contribution of video subtitles to viewport prediction. Therefore, in this
paper, a subtitle-based viewport prediction model for 360-degree virtual
tourism videos is proposed. This model leverages the navigation information in
the video subtitles in addition to head movement trajectory and video saliency,
to improve the prediction accuracy. The experimental results demonstrate that
the proposed model outperforms baseline methods which only use head movement
trajectory and video saliency for viewport prediction.
|
[
{
"created": "Mon, 6 Jun 2022 05:48:25 GMT",
"version": "v1"
}
] |
2022-06-07
|
[
[
"Jing",
"Chuanzhe",
""
],
[
"Duc",
"Tho Nguyen",
""
],
[
"Tan",
"Phan Xuan",
""
],
[
"Kamioka",
"Eiji",
""
]
] |
360-degree streaming videos can provide a rich immersive experience to users. However, they require an extremely high-bandwidth network. One of the common solutions for reducing bandwidth consumption is to stream only the portion of the video covered by the user's viewport. To do that, predicting the user's viewpoint is indispensable. Existing viewport prediction methods mainly concentrate on the user's head movement trajectory and video saliency. None of them consider navigation information contained in the video, which can turn the attention of the user to specific regions in the video with high probability. Such information can be included in video subtitles, especially those in 360-degree virtual tourism videos. This fact reveals the potential contribution of video subtitles to viewport prediction. Therefore, in this paper, a subtitle-based viewport prediction model for 360-degree virtual tourism videos is proposed. This model leverages the navigation information in the video subtitles in addition to head movement trajectory and video saliency, to improve the prediction accuracy. The experimental results demonstrate that the proposed model outperforms baseline methods which only use head movement trajectory and video saliency for viewport prediction.
|
1401.6497
|
Qibin Zhao Dr
|
Qibin Zhao, Liqing Zhang, and Andrzej Cichocki
|
Bayesian CP Factorization of Incomplete Tensors with Automatic Rank
Determination
| null | null |
10.1109/TPAMI.2015.2392756
| null |
cs.LG cs.CV stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
CANDECOMP/PARAFAC (CP) tensor factorization of incomplete data is a powerful
technique for tensor completion through explicitly capturing the multilinear
latent factors. The existing CP algorithms require the tensor rank to be
manually specified, however, the determination of tensor rank remains a
challenging problem especially for CP rank. In addition, existing approaches do
not take into account uncertainty information of latent factors, as well as
missing entries. To address these issues, we formulate CP factorization using a
hierarchical probabilistic model and employ a fully Bayesian treatment by
incorporating a sparsity-inducing prior over multiple latent factors and the
appropriate hyperpriors over all hyperparameters, resulting in automatic rank
determination. To learn the model, we develop an efficient deterministic
Bayesian inference algorithm, which scales linearly with data size. Our method
is characterized as a tuning parameter-free approach, which can effectively
infer underlying multilinear factors with a low-rank constraint, while also
providing predictive distributions over missing entries. Extensive simulations
on synthetic data illustrate the intrinsic capability of our method to recover
the ground-truth of CP rank and prevent the overfitting problem, even when a
large number of entries are missing. Moreover, the results from real-world
applications, including image inpainting and facial image synthesis,
demonstrate that our method outperforms state-of-the-art approaches for both
tensor factorization and tensor completion in terms of predictive performance.
|
[
{
"created": "Sat, 25 Jan 2014 05:08:33 GMT",
"version": "v1"
},
{
"created": "Thu, 9 Oct 2014 09:48:37 GMT",
"version": "v2"
}
] |
2015-01-22
|
[
[
"Zhao",
"Qibin",
""
],
[
"Zhang",
"Liqing",
""
],
[
"Cichocki",
"Andrzej",
""
]
] |
CANDECOMP/PARAFAC (CP) tensor factorization of incomplete data is a powerful technique for tensor completion through explicitly capturing the multilinear latent factors. The existing CP algorithms require the tensor rank to be manually specified; however, the determination of tensor rank remains a challenging problem especially for CP rank. In addition, existing approaches do not take into account uncertainty information of latent factors, as well as missing entries. To address these issues, we formulate CP factorization using a hierarchical probabilistic model and employ a fully Bayesian treatment by incorporating a sparsity-inducing prior over multiple latent factors and the appropriate hyperpriors over all hyperparameters, resulting in automatic rank determination. To learn the model, we develop an efficient deterministic Bayesian inference algorithm, which scales linearly with data size. Our method is characterized as a tuning parameter-free approach, which can effectively infer underlying multilinear factors with a low-rank constraint, while also providing predictive distributions over missing entries. Extensive simulations on synthetic data illustrate the intrinsic capability of our method to recover the ground-truth of CP rank and prevent the overfitting problem, even when a large number of entries are missing. Moreover, the results from real-world applications, including image inpainting and facial image synthesis, demonstrate that our method outperforms state-of-the-art approaches for both tensor factorization and tensor completion in terms of predictive performance.
|
2204.02162
|
Diego Antognini
|
Diego Antognini and Boi Faltings
|
Positive and Negative Critiquing for VAE-based Recommenders
|
5 pages, 2 figures, 2 tables
| null | null | null |
cs.IR cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Providing explanations for recommended items allows users to refine the
recommendations by critiquing parts of the explanations. As a result of
revisiting critiquing from the perspective of multimodal generative models,
recent work has proposed M&Ms-VAE, which achieves state-of-the-art performance
in terms of recommendation, explanation, and critiquing. M&Ms-VAE and similar
models allow users to negatively critique (i.e., explicitly disagree). However,
they share a significant drawback: users cannot positively critique (i.e.,
highlight a desired feature). We address this deficiency with M&Ms-VAE+, an
extension of M&Ms-VAE that enables positive and negative critiquing. In
addition to modeling users' interactions and keyphrase-usage preferences, we
model their keyphrase-usage dislikes. Moreover, we design a novel critiquing
module that is trained in a self-supervised fashion. Our experiments on two
datasets show that M&Ms-VAE+ matches or exceeds M&Ms-VAE in recommendation and
explanation performance. Furthermore, our results demonstrate that representing
positive and negative critiques differently enables M&Ms-VAE+ to significantly
outperform M&Ms-VAE and other models in positive and negative multi-step
critiquing.
|
[
{
"created": "Tue, 5 Apr 2022 12:40:53 GMT",
"version": "v1"
}
] |
2022-04-06
|
[
[
"Antognini",
"Diego",
""
],
[
"Faltings",
"Boi",
""
]
] |
Providing explanations for recommended items allows users to refine the recommendations by critiquing parts of the explanations. As a result of revisiting critiquing from the perspective of multimodal generative models, recent work has proposed M&Ms-VAE, which achieves state-of-the-art performance in terms of recommendation, explanation, and critiquing. M&Ms-VAE and similar models allow users to negatively critique (i.e., explicitly disagree). However, they share a significant drawback: users cannot positively critique (i.e., highlight a desired feature). We address this deficiency with M&Ms-VAE+, an extension of M&Ms-VAE that enables positive and negative critiquing. In addition to modeling users' interactions and keyphrase-usage preferences, we model their keyphrase-usage dislikes. Moreover, we design a novel critiquing module that is trained in a self-supervised fashion. Our experiments on two datasets show that M&Ms-VAE+ matches or exceeds M&Ms-VAE in recommendation and explanation performance. Furthermore, our results demonstrate that representing positive and negative critiques differently enables M&Ms-VAE+ to significantly outperform M&Ms-VAE and other models in positive and negative multi-step critiquing.
|
2306.16205
|
David Radke
|
David Radke, Kate Larson, Tim Brecht and Kyle Tilbury
|
Towards a Better Understanding of Learning with Multiagent Teams
|
15 pages, 11 figures, published at the International Joint Conference
on Artificial Intelligence (IJCAI) in 2023
| null | null | null |
cs.AI
|
http://creativecommons.org/publicdomain/zero/1.0/
|
While it has long been recognized that a team of individual learning agents
can be greater than the sum of its parts, recent work has shown that larger
teams are not necessarily more effective than smaller ones. In this paper, we
study why and under which conditions certain team structures promote effective
learning for a population of individual learning agents. We show that,
depending on the environment, some team structures help agents learn to
specialize into specific roles, resulting in more favorable global results.
However, large teams create credit assignment challenges that reduce
coordination, leading to large teams performing poorly compared to smaller
ones. We support our conclusions with both theoretical analysis and empirical
results.
|
[
{
"created": "Wed, 28 Jun 2023 13:37:48 GMT",
"version": "v1"
}
] |
2023-06-29
|
[
[
"Radke",
"David",
""
],
[
"Larson",
"Kate",
""
],
[
"Brecht",
"Tim",
""
],
[
"Tilbury",
"Kyle",
""
]
] |
While it has long been recognized that a team of individual learning agents can be greater than the sum of its parts, recent work has shown that larger teams are not necessarily more effective than smaller ones. In this paper, we study why and under which conditions certain team structures promote effective learning for a population of individual learning agents. We show that, depending on the environment, some team structures help agents learn to specialize into specific roles, resulting in more favorable global results. However, large teams create credit assignment challenges that reduce coordination, leading to large teams performing poorly compared to smaller ones. We support our conclusions with both theoretical analysis and empirical results.
|
1008.0147
|
Jaeok Park
|
Jaeok Park and Mihaela van der Schaar
|
Intervention Mechanism Design for Networks With Selfish Users
|
20 pages, 1 table
| null | null | null |
cs.GT cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider a multi-user network where a network manager and selfish users
interact. The network manager monitors the behavior of users and intervenes in
the interaction among users if necessary, while users make decisions
independently to optimize their individual objectives. In this paper, we
develop a framework of intervention mechanism design, which aims to
optimize the objective of the manager, or the network performance, taking the
incentives of selfish users into account. Our framework is general enough to
cover a wide range of application scenarios, and it has advantages over
existing approaches such as Stackelberg strategies and pricing. To design an
intervention mechanism and to predict the resulting operating point, we
formulate a new class of games called intervention games and a new solution
concept called intervention equilibrium. We provide analytic results about
intervention equilibrium and optimal intervention mechanisms in the case of a
benevolent manager with perfect monitoring. We illustrate these results with a
random access model. Our illustrative example suggests that intervention
requires less knowledge about users than pricing.
|
[
{
"created": "Sun, 1 Aug 2010 04:33:27 GMT",
"version": "v1"
}
] |
2010-08-03
|
[
[
"Park",
"Jaeok",
""
],
[
"van der Schaar",
"Mihaela",
""
]
] |
We consider a multi-user network where a network manager and selfish users interact. The network manager monitors the behavior of users and intervenes in the interaction among users if necessary, while users make decisions independently to optimize their individual objectives. In this paper, we develop a framework of intervention mechanism design, which aims to optimize the objective of the manager, or the network performance, taking the incentives of selfish users into account. Our framework is general enough to cover a wide range of application scenarios, and it has advantages over existing approaches such as Stackelberg strategies and pricing. To design an intervention mechanism and to predict the resulting operating point, we formulate a new class of games called intervention games and a new solution concept called intervention equilibrium. We provide analytic results about intervention equilibrium and optimal intervention mechanisms in the case of a benevolent manager with perfect monitoring. We illustrate these results with a random access model. Our illustrative example suggests that intervention requires less knowledge about users than pricing.
|
2210.06654
|
Yash Vekaria
|
Yash Vekaria (1), Rishab Nithyanand (2), Zubair Shafiq (1) ((1)
University of California, Davis, (2) University of Iowa)
|
The Inventory is Dark and Full of Misinformation: Understanding the
Abuse of Ad Inventory Pooling in the Ad-Tech Supply Chain
|
To appear at IEEE Symposium on Security & Privacy (Oakland) 2024
| null | null | null |
cs.CR cs.CY cs.NI cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
Ad-tech enables publishers to programmatically sell their ad inventory to
millions of demand partners through a complex supply chain. Bogus or
low-quality publishers can exploit the opaque nature of the ad-tech ecosystem
to deceptively
monetize their ad inventory. In this paper, we investigate for the first time
how misinformation sites subvert the ad-tech transparency standards and pool
their ad inventory with unrelated sites to circumvent brand safety protections.
We find that a few major ad exchanges are disproportionately responsible for
the dark pools that are exploited by misinformation websites. We further find
evidence that dark pooling allows misinformation sites to deceptively sell
their ad inventory to reputable brands. We conclude with a discussion of
potential countermeasures such as better vetting of ad exchange partners,
adoption of new ad-tech transparency standards that enable end-to-end
validation of the ad-tech supply chain, as well as widespread deployment of
independent audits like ours.
|
[
{
"created": "Thu, 13 Oct 2022 01:24:19 GMT",
"version": "v1"
},
{
"created": "Thu, 18 May 2023 15:43:57 GMT",
"version": "v2"
},
{
"created": "Sat, 14 Oct 2023 05:53:02 GMT",
"version": "v3"
}
] |
2023-10-17
|
[
[
"Vekaria",
"Yash",
""
],
[
"Nithyanand",
"Rishab",
""
],
[
"Shafiq",
"Zubair",
""
]
] |
Ad-tech enables publishers to programmatically sell their ad inventory to millions of demand partners through a complex supply chain. Bogus or low-quality publishers can exploit the opaque nature of the ad-tech ecosystem to deceptively monetize their ad inventory. In this paper, we investigate for the first time how misinformation sites subvert the ad-tech transparency standards and pool their ad inventory with unrelated sites to circumvent brand safety protections. We find that a few major ad exchanges are disproportionately responsible for the dark pools that are exploited by misinformation websites. We further find evidence that dark pooling allows misinformation sites to deceptively sell their ad inventory to reputable brands. We conclude with a discussion of potential countermeasures such as better vetting of ad exchange partners, adoption of new ad-tech transparency standards that enable end-to-end validation of the ad-tech supply chain, as well as widespread deployment of independent audits like ours.
|
2002.08429
|
Yue Yang
|
Yue Yang
|
An improved FastEuler-DLKF small-UAV AHRS algorithm
|
in Chinese
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The accurate Attitude Heading Reference System (AHRS) is an important part
of the UAV reliable flight system. Aiming at the application scenarios of
near-ground navigation of small UAVs, this paper establishes a loosely
coupled error model of the gyroscope/accelerometer/magnetometer and
presents an improved FastEuler Double-Layer Kalman Filter algorithm. Using
low-cost devices which include MEMS Inertial Measurement Units (IMU) and
magnetometers, this paper constructs the AHRS hardware and software systems
of the UAV and designs the offline and real-time verification platforms.
Moreover, the attitude changes of the UAV are analyzed by simulation and
flight test, respectively. In addition, an adaptive factor is used to
adjust the measurement noise covariance in order to eliminate the harmful
effects of linear acceleration in the accelerometer, from which the roll
and pitch angles are solved. The experimental comparison with the
Complementary Filter shows that the proposed algorithm can provide accurate
attitude information when the UAV is flying, which improves the accuracy
and reliability of the attitude solution and removes the influence of the
gyro bias on the attitude estimation.
|
[
{
"created": "Tue, 18 Feb 2020 05:37:52 GMT",
"version": "v1"
}
] |
2020-02-21
|
[
[
"Yang",
"Yue",
""
]
] |
The accurate Attitude Heading Reference System (AHRS) is an important part of the UAV reliable flight system. Aiming at the application scenarios of near-ground navigation of small UAVs, this paper establishes a loosely coupled error model of the gyroscope/accelerometer/magnetometer and presents an improved FastEuler Double-Layer Kalman Filter algorithm. Using low-cost devices which include MEMS Inertial Measurement Units (IMU) and magnetometers, this paper constructs the AHRS hardware and software systems of the UAV and designs the offline and real-time verification platforms. Moreover, the attitude changes of the UAV are analyzed by simulation and flight test, respectively. In addition, an adaptive factor is used to adjust the measurement noise covariance in order to eliminate the harmful effects of linear acceleration in the accelerometer, from which the roll and pitch angles are solved. The experimental comparison with the Complementary Filter shows that the proposed algorithm can provide accurate attitude information when the UAV is flying, which improves the accuracy and reliability of the attitude solution and removes the influence of the gyro bias on the attitude estimation.
|
2101.04794
|
Yue You
|
Yue You, Yubo Kou, Xianghua Ding, Xinning Gui
|
The Medical Authority of AI: A Study of AI-enabled Consumer-facing
Health Technology
| null | null |
10.13140/RG.2.2.33097.98403
| null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Recently, consumer-facing health technologies such as Artificial Intelligence
(AI)-based symptom checkers (AISCs) have sprung up in everyday healthcare
practice. AISCs solicit symptom information from users and provide medical
suggestions and possible diagnoses, a responsibility that people usually
entrust to real-person authorities such as physicians and expert patients.
Thus, the advent of AISCs begs a question of whether and how they transform the
notion of medical authority in everyday healthcare practice. To answer this
question, we conducted an interview study with thirty AISC users. We found that
users assess the medical authority of AISCs using various factors including
automated decisions and interaction design patterns of AISC apps, associations
with established medical authorities like hospitals, and comparisons with other
health technologies. We reveal how AISCs are used in healthcare delivery,
discuss how AI transforms conventional understandings of medical authority, and
derive implications for designing AI-enabled health technology.
|
[
{
"created": "Tue, 12 Jan 2021 23:02:40 GMT",
"version": "v1"
}
] |
2021-01-14
|
[
[
"You",
"Yue",
""
],
[
"Kou",
"Yubo",
""
],
[
"Ding",
"Xianghua",
""
],
[
"Gui",
"Xinning",
""
]
] |
Recently, consumer-facing health technologies such as Artificial Intelligence (AI)-based symptom checkers (AISCs) have sprung up in everyday healthcare practice. AISCs solicit symptom information from users and provide medical suggestions and possible diagnoses, a responsibility that people usually entrust to real-person authorities such as physicians and expert patients. Thus, the advent of AISCs begs a question of whether and how they transform the notion of medical authority in everyday healthcare practice. To answer this question, we conducted an interview study with thirty AISC users. We found that users assess the medical authority of AISCs using various factors including automated decisions and interaction design patterns of AISC apps, associations with established medical authorities like hospitals, and comparisons with other health technologies. We reveal how AISCs are used in healthcare delivery, discuss how AI transforms conventional understandings of medical authority, and derive implications for designing AI-enabled health technology.
|
2006.15854
|
Taiwo Kolajo
|
Taiwo Kolajo, Olawande Daramola, Ayodele Adebiyi, Seth Aaditeshwar
|
A Framework for Pre-processing of Social Media Feeds based on Integrated
Local Knowledge Base
|
38 pages, 5 figures, 6 tables
| null |
10.1016/j.ipm.2020.102348
| null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most of the previous studies on the semantic analysis of social media feeds
have not considered the issue of ambiguity that is associated with slangs,
abbreviations, and acronyms that are embedded in social media posts. These
noisy terms have implicit meanings and form part of the rich semantic context
that must be analysed to gain complete insights from social media feeds. This
paper proposes an improved framework for pre-processing of social media feeds
for better performance. To do this, the use of an integrated knowledge base
(ikb) which comprises a local knowledge source (Naijalingo), urban dictionary
and internet slang was combined with the adapted Lesk algorithm to facilitate
semantic analysis of social media feeds. Experimental results showed that the
proposed approach performed better than existing methods when it was tested on
three machine learning models, which are support vector machines, multilayer
perceptron, and convolutional neural networks. The framework had an accuracy of
94.07% on a standardized dataset, and 99.78% on a localised dataset when used to
extract sentiments from tweets. The improved performance on the localised
dataset reveals the advantage of integrating the use of local knowledge sources
into the process of analysing social media feeds particularly in interpreting
slangs/acronyms/abbreviations that have contextually rooted meanings.
|
[
{
"created": "Mon, 29 Jun 2020 07:56:22 GMT",
"version": "v1"
}
] |
2020-07-08
|
[
[
"Kolajo",
"Taiwo",
""
],
[
"Daramola",
"Olawande",
""
],
[
"Adebiyi",
"Ayodele",
""
],
[
"Aaditeshwar",
"Seth",
""
]
] |
Most of the previous studies on the semantic analysis of social media feeds have not considered the issue of ambiguity that is associated with slangs, abbreviations, and acronyms that are embedded in social media posts. These noisy terms have implicit meanings and form part of the rich semantic context that must be analysed to gain complete insights from social media feeds. This paper proposes an improved framework for pre-processing of social media feeds for better performance. To do this, the use of an integrated knowledge base (ikb) which comprises a local knowledge source (Naijalingo), urban dictionary and internet slang was combined with the adapted Lesk algorithm to facilitate semantic analysis of social media feeds. Experimental results showed that the proposed approach performed better than existing methods when it was tested on three machine learning models, which are support vector machines, multilayer perceptron, and convolutional neural networks. The framework had an accuracy of 94.07% on a standardized dataset, and 99.78% on a localised dataset when used to extract sentiments from tweets. The improved performance on the localised dataset reveals the advantage of integrating the use of local knowledge sources into the process of analysing social media feeds particularly in interpreting slangs/acronyms/abbreviations that have contextually rooted meanings.
|
2110.09152
|
Tanya Braun
|
Tanya Braun, Stefan Fischer, Florian Lau, Ralf M\"oller
|
Lifting DecPOMDPs for Nanoscale Systems -- A Work in Progress
|
Accepted at the Tenth International Workshop on Statistical
Relational AI (StarAI-2021)
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
DNA-based nanonetworks have a wide range of promising use cases, especially
in the field of medicine. With a large set of agents, a partially observable
stochastic environment, and noisy observations, such nanoscale systems can be
modelled as a decentralised, partially observable, Markov decision process
(DecPOMDP). As the agent set is a dominating factor, this paper presents (i)
lifted DecPOMDPs, partitioning the agent set into sets of indistinguishable
agents, reducing the worst-case space required, and (ii) a nanoscale medical
system as an application. Future work turns to solving and implementing lifted
DecPOMDPs.
|
[
{
"created": "Mon, 18 Oct 2021 10:14:00 GMT",
"version": "v1"
}
] |
2021-10-19
|
[
[
"Braun",
"Tanya",
""
],
[
"Fischer",
"Stefan",
""
],
[
"Lau",
"Florian",
""
],
[
"Möller",
"Ralf",
""
]
] |
DNA-based nanonetworks have a wide range of promising use cases, especially in the field of medicine. With a large set of agents, a partially observable stochastic environment, and noisy observations, such nanoscale systems can be modelled as a decentralised, partially observable, Markov decision process (DecPOMDP). As the agent set is a dominating factor, this paper presents (i) lifted DecPOMDPs, partitioning the agent set into sets of indistinguishable agents, reducing the worst-case space required, and (ii) a nanoscale medical system as an application. Future work turns to solving and implementing lifted DecPOMDPs.
|
2307.09756
|
Weijia Wu
|
Yuzhong Zhao, Qixiang Ye, Weijia Wu, Chunhua Shen, Fang Wan
|
Generative Prompt Model for Weakly Supervised Object Localization
| null |
International Conference on Computer Vision Conference (ICCV2023)
| null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Weakly supervised object localization (WSOL) remains challenging when
learning object localization models from image category labels. Conventional
methods that discriminatively train activation models ignore representative yet
less discriminative object parts. In this study, we propose a generative prompt
model (GenPromp), defining the first generative pipeline to localize less
discriminative object parts by formulating WSOL as a conditional image
denoising procedure. During training, GenPromp converts image category labels
to learnable prompt embeddings which are fed to a generative model to
conditionally recover the input image with noise and learn representative
embeddings. During inference, GenPromp combines the representative embeddings
with discriminative embeddings (queried from an off-the-shelf vision-language
model) for both representative and discriminative capacity. The combined
embeddings are finally used to generate multi-scale high-quality attention
maps, which facilitate localizing full object extent. Experiments on
CUB-200-2011 and ILSVRC show that GenPromp respectively outperforms the best
discriminative models by 5.2% and 5.6% (Top-1 Loc), setting a solid baseline
for WSOL with the generative model. Code is available at
https://github.com/callsys/GenPromp.
|
[
{
"created": "Wed, 19 Jul 2023 05:40:38 GMT",
"version": "v1"
}
] |
2023-07-20
|
[
[
"Zhao",
"Yuzhong",
""
],
[
"Ye",
"Qixiang",
""
],
[
"Wu",
"Weijia",
""
],
[
"Shen",
"Chunhua",
""
],
[
"Wan",
"Fang",
""
]
] |
Weakly supervised object localization (WSOL) remains challenging when learning object localization models from image category labels. Conventional methods that discriminatively train activation models ignore representative yet less discriminative object parts. In this study, we propose a generative prompt model (GenPromp), defining the first generative pipeline to localize less discriminative object parts by formulating WSOL as a conditional image denoising procedure. During training, GenPromp converts image category labels to learnable prompt embeddings which are fed to a generative model to conditionally recover the input image with noise and learn representative embeddings. During inference, GenPromp combines the representative embeddings with discriminative embeddings (queried from an off-the-shelf vision-language model) for both representative and discriminative capacity. The combined embeddings are finally used to generate multi-scale high-quality attention maps, which facilitate localizing full object extent. Experiments on CUB-200-2011 and ILSVRC show that GenPromp respectively outperforms the best discriminative models by 5.2% and 5.6% (Top-1 Loc), setting a solid baseline for WSOL with the generative model. Code is available at https://github.com/callsys/GenPromp.
|
1202.2895
|
Dmitry Ignatov
|
Jonas Poelmans, Paul Elzinga, Alexey Neznanov, Stijn Viaene, Sergei O.
Kuznetsov, Dmitry Ignatov, Guido Dedene
|
Concept Relation Discovery and Innovation Enabling Technology (CORDIET)
| null |
In CEUR Workshop proceedings Vol-757, CDUD'11 - Concept Discovery
in Unstructured Data, pp. 53-62, 2011
| null | null |
cs.AI cs.IR stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Concept Relation Discovery and Innovation Enabling Technology (CORDIET) is a
toolbox for gaining new knowledge from unstructured text data. At the core of
CORDIET is the C-K theory which captures the essential elements of innovation.
The tool uses Formal Concept Analysis (FCA), Emergent Self Organizing Maps
(ESOM) and Hidden Markov Models (HMM) as main artifacts in the analysis
process. The user can define temporal, text mining and compound attributes. The
text mining attributes are used to analyze the unstructured text in documents,
the temporal attributes use these document's timestamps for analysis. The
compound attributes are XML rules based on text mining and temporal attributes.
The user can cluster objects with object-cluster rules and can chop the data in
pieces with segmentation rules. The artifacts are optimized for efficient data
analysis; object labels in the FCA lattice and ESOM map contain a URL on which
the user can click to open the selected document.
|
[
{
"created": "Mon, 13 Feb 2012 23:19:51 GMT",
"version": "v1"
}
] |
2013-12-03
|
[
[
"Poelmans",
"Jonas",
""
],
[
"Elzinga",
"Paul",
""
],
[
"Neznanov",
"Alexey",
""
],
[
"Viaene",
"Stijn",
""
],
[
"Kuznetsov",
"Sergei O.",
""
],
[
"Ignatov",
"Dmitry",
""
],
[
"Dedene",
"Guido",
""
]
] |
Concept Relation Discovery and Innovation Enabling Technology (CORDIET) is a toolbox for gaining new knowledge from unstructured text data. At the core of CORDIET is the C-K theory which captures the essential elements of innovation. The tool uses Formal Concept Analysis (FCA), Emergent Self Organizing Maps (ESOM) and Hidden Markov Models (HMM) as main artifacts in the analysis process. The user can define temporal, text mining and compound attributes. The text mining attributes are used to analyze the unstructured text in documents, the temporal attributes use these document's timestamps for analysis. The compound attributes are XML rules based on text mining and temporal attributes. The user can cluster objects with object-cluster rules and can chop the data in pieces with segmentation rules. The artifacts are optimized for efficient data analysis; object labels in the FCA lattice and ESOM map contain a URL on which the user can click to open the selected document.
|
2302.14284
|
Zhengzhuo Xu
|
Zhengzhuo Xu, Shuo Yang, Xingjun Wang, Chun Yuan
|
Rethink Long-tailed Recognition with Vision Transformers
|
Accepted by ICASSP 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In the real world, data tends to follow long-tailed distributions w.r.t.
class or attribution, motivating the challenging Long-Tailed Recognition (LTR)
problem. In this paper, we revisit recent LTR methods with promising Vision
Transformers (ViT). We figure out that 1) ViT is hard to train with long-tailed
data. 2) ViT learns generalized features in an unsupervised manner, like mask
generative training, either on long-tailed or balanced datasets. Hence, we
propose to adopt unsupervised learning to utilize long-tailed data.
Furthermore, we propose the Predictive Distribution Calibration (PDC) as a
novel metric for LTR, where the model tends to simply classify inputs into
common classes. Our PDC can measure the model calibration of predictive
preferences quantitatively. On this basis, we find many LTR approaches
alleviate it slightly, despite the accuracy improvement. Extensive experiments
on benchmark datasets validate that PDC reflects the model's predictive
preference precisely, which is consistent with the visualization.
|
[
{
"created": "Tue, 28 Feb 2023 03:36:48 GMT",
"version": "v1"
},
{
"created": "Mon, 17 Apr 2023 08:35:02 GMT",
"version": "v2"
}
] |
2023-04-18
|
[
[
"Xu",
"Zhengzhuo",
""
],
[
"Yang",
"Shuo",
""
],
[
"Wang",
"Xingjun",
""
],
[
"Yuan",
"Chun",
""
]
] |
In the real world, data tends to follow long-tailed distributions w.r.t. class or attribution, motivating the challenging Long-Tailed Recognition (LTR) problem. In this paper, we revisit recent LTR methods with promising Vision Transformers (ViT). We figure out that 1) ViT is hard to train with long-tailed data. 2) ViT learns generalized features in an unsupervised manner, like mask generative training, either on long-tailed or balanced datasets. Hence, we propose to adopt unsupervised learning to utilize long-tailed data. Furthermore, we propose the Predictive Distribution Calibration (PDC) as a novel metric for LTR, where the model tends to simply classify inputs into common classes. Our PDC can measure the model calibration of predictive preferences quantitatively. On this basis, we find many LTR approaches alleviate it slightly, despite the accuracy improvement. Extensive experiments on benchmark datasets validate that PDC reflects the model's predictive preference precisely, which is consistent with the visualization.
|
1202.2026
|
Akira SaiToh
|
Akira SaiToh, Robabeh Rahimi, Mikio Nakahara
|
A quantum genetic algorithm with quantum crossover and mutation
operations
|
21 pages, 1 table, v2: typos corrected, minor modifications in
sections 3.5 and 4, v3: minor revision, title changed (original title:
Semiclassical genetic algorithm with quantum crossover and mutation
operations), v4: minor revision, v5: minor grammatical corrections, to appear
in QIP
|
Quantum Inf. Process. 13, 737-755 (2014)
|
10.1007/s11128-013-0686-6
| null |
cs.NE quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the context of evolutionary quantum computing in the literal meaning, a
quantum crossover operation has not been introduced so far. Here, we introduce
a novel quantum genetic algorithm which has a quantum crossover procedure
performing crossovers among all chromosomes in parallel for each generation. A
complexity analysis shows that a quadratic speedup is achieved over its
classical counterpart in the dominant factor of the run time to handle each
generation.
|
[
{
"created": "Thu, 9 Feb 2012 16:04:52 GMT",
"version": "v1"
},
{
"created": "Mon, 13 Feb 2012 07:19:53 GMT",
"version": "v2"
},
{
"created": "Mon, 19 Nov 2012 19:12:14 GMT",
"version": "v3"
},
{
"created": "Fri, 8 Nov 2013 04:48:49 GMT",
"version": "v4"
},
{
"created": "Fri, 22 Nov 2013 04:22:29 GMT",
"version": "v5"
}
] |
2014-02-05
|
[
[
"SaiToh",
"Akira",
""
],
[
"Rahimi",
"Robabeh",
""
],
[
"Nakahara",
"Mikio",
""
]
] |
In the context of evolutionary quantum computing in the literal meaning, a quantum crossover operation has not been introduced so far. Here, we introduce a novel quantum genetic algorithm which has a quantum crossover procedure performing crossovers among all chromosomes in parallel for each generation. A complexity analysis shows that a quadratic speedup is achieved over its classical counterpart in the dominant factor of the run time to handle each generation.
|
2305.03966
|
Yang Li
|
Shipeng Ji, Yang Li, Ruizhi Fu, Jiabao Wang, Zhuang Miao
|
Feature Chirality in Deep Learning Models
|
6 pages,6 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
As deep learning applications increase by leaps and bounds, their
interpretability has become increasingly prominent. As a universal
property, chirality exists widely in nature, and applying it to the
explanatory research of deep learning may be helpful to some extent. We are
inspired by a recent study that used a CNN (convolutional neural network)
applying visual chirality to distinguish whether an image is flipped or
not. In this paper, we study feature chirality innovatively, which shows
how the statistics of deep learning models' feature data are changed by
training. We rethink the feature-level chirality property, propose feature
chirality, and give its measure. Our analysis of feature chirality on
AlexNet, VGG, and ResNet reveals similar but surprising results, including
the prevalence of feature chirality in these models and that the models'
initialization methods do not affect feature chirality. Our work shows that
feature chirality has implications for model evaluation, model
interpretability, and model parameter optimization.
|
[
{
"created": "Sat, 6 May 2023 07:57:38 GMT",
"version": "v1"
}
] |
2023-05-09
|
[
[
"Ji",
"Shipeng",
""
],
[
"Li",
"Yang",
""
],
[
"Fu",
"Ruizhi",
""
],
[
"Wang",
"Jiabao",
""
],
[
"Miao",
"Zhuang",
""
]
] |
As deep learning applications increase by leaps and bounds, their interpretability has become increasingly prominent. As a universal property, chirality exists widely in nature, and applying it to the explanatory research of deep learning may be helpful to some extent. We are inspired by a recent study that used a CNN (convolutional neural network) applying visual chirality to distinguish whether an image is flipped or not. In this paper, we study feature chirality innovatively, which shows how the statistics of deep learning models' feature data are changed by training. We rethink the feature-level chirality property, propose feature chirality, and give its measure. Our analysis of feature chirality on AlexNet, VGG, and ResNet reveals similar but surprising results, including the prevalence of feature chirality in these models and that the models' initialization methods do not affect feature chirality. Our work shows that feature chirality has implications for model evaluation, model interpretability, and model parameter optimization.
|
2404.11841
|
Jiatu Li
|
Lijie Chen, Jiatu Li, Igor C. Oliveira
|
On the Unprovability of Circuit Size Bounds in Intuitionistic
$\mathsf{S}^1_2$
| null | null | null | null |
cs.LO cs.CC
|
http://creativecommons.org/licenses/by/4.0/
|
We show that there is a constant $k$ such that Buss's intuitionistic theory
$\mathsf{IS}^1_2$ does not prove that SAT requires co-nondeterministic circuits
of size at least $n^k$. To our knowledge, this is the first unconditional
unprovability result in bounded arithmetic in the context of worst-case
fixed-polynomial size circuit lower bounds. We complement this result by
showing that the upper bound $\mathsf{NP} \subseteq \mathsf{coNSIZE}[n^k]$ is
unprovable in $\mathsf{IS}^1_2$.
|
[
{
"created": "Thu, 18 Apr 2024 01:45:22 GMT",
"version": "v1"
}
] |
2024-04-19
|
[
[
"Chen",
"Lijie",
""
],
[
"Li",
"Jiatu",
""
],
[
"Oliveira",
"Igor C.",
""
]
] |
We show that there is a constant $k$ such that Buss's intuitionistic theory $\mathsf{IS}^1_2$ does not prove that SAT requires co-nondeterministic circuits of size at least $n^k$. To our knowledge, this is the first unconditional unprovability result in bounded arithmetic in the context of worst-case fixed-polynomial size circuit lower bounds. We complement this result by showing that the upper bound $\mathsf{NP} \subseteq \mathsf{coNSIZE}[n^k]$ is unprovable in $\mathsf{IS}^1_2$.
|
2402.09508
|
Liwei Lin
|
Liwei Lin, Gus Xia, Yixiao Zhang, Junyan Jiang
|
Arrange, Inpaint, and Refine: Steerable Long-term Music Audio Generation
and Editing via Content-based Controls
| null | null | null | null |
cs.SD cs.AI eess.AS
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Controllable music generation plays a vital role in human-AI music
co-creation. While Large Language Models (LLMs) have shown promise in
generating high-quality music, their focus on autoregressive generation limits
their utility in music editing tasks. To address this gap, we propose a novel
approach leveraging a parameter-efficient heterogeneous adapter combined with a
masking training scheme. This approach enables autoregressive language models
to seamlessly address music inpainting tasks. Additionally, our method
integrates frame-level content-based controls, facilitating track-conditioned
music refinement and score-conditioned music arrangement. We apply this method
to fine-tune MusicGen, a leading autoregressive music generation model. Our
experiments demonstrate promising results across multiple music editing tasks,
offering more flexible controls for future AI-driven music editing tools. The
source codes and a demo page showcasing our work are available at
https://kikyo-16.github.io/AIR.
|
[
{
"created": "Wed, 14 Feb 2024 19:00:01 GMT",
"version": "v1"
},
{
"created": "Mon, 10 Jun 2024 14:08:17 GMT",
"version": "v2"
}
] |
2024-06-11
|
[
[
"Lin",
"Liwei",
""
],
[
"Xia",
"Gus",
""
],
[
"Zhang",
"Yixiao",
""
],
[
"Jiang",
"Junyan",
""
]
] |
Controllable music generation plays a vital role in human-AI music co-creation. While Large Language Models (LLMs) have shown promise in generating high-quality music, their focus on autoregressive generation limits their utility in music editing tasks. To address this gap, we propose a novel approach leveraging a parameter-efficient heterogeneous adapter combined with a masking training scheme. This approach enables autoregressive language models to seamlessly address music inpainting tasks. Additionally, our method integrates frame-level content-based controls, facilitating track-conditioned music refinement and score-conditioned music arrangement. We apply this method to fine-tune MusicGen, a leading autoregressive music generation model. Our experiments demonstrate promising results across multiple music editing tasks, offering more flexible controls for future AI-driven music editing tools. The source codes and a demo page showcasing our work are available at https://kikyo-16.github.io/AIR.
|
2205.03859
|
Vedant Singh
|
Vedant Singh, Surgan Jandial, Ayush Chopra, Siddharth Ramesh, Balaji
Krishnamurthy, Vineeth N. Balasubramanian
|
On Conditioning the Input Noise for Controlled Image Generation with
Diffusion Models
|
Accepted at the workshop on AI for Content Creation at CVPR 2022
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Conditional image generation has paved the way for several breakthroughs in
image editing, generating stock photos and 3-D object generation. This
continues to be a significant area of interest with the rise of new
state-of-the-art methods that are based on diffusion models. However, diffusion
models provide very little control over the generated image, which led to
subsequent works exploring techniques like classifier guidance, that provides a
way to trade off diversity with fidelity. In this work, we explore techniques
to condition diffusion models with carefully crafted input noise artifacts.
This allows generation of images conditioned on semantic attributes. This is
different from existing approaches that input Gaussian noise and further
introduce conditioning at the diffusion model's inference step. Our experiments
over several examples and conditional settings show the potential of our
approach.
|
[
{
"created": "Sun, 8 May 2022 13:18:14 GMT",
"version": "v1"
}
] |
2022-05-10
|
[
[
"Singh",
"Vedant",
""
],
[
"Jandial",
"Surgan",
""
],
[
"Chopra",
"Ayush",
""
],
[
"Ramesh",
"Siddharth",
""
],
[
"Krishnamurthy",
"Balaji",
""
],
[
"Balasubramanian",
"Vineeth N.",
""
]
] |
Conditional image generation has paved the way for several breakthroughs in image editing, generating stock photos and 3-D object generation. This continues to be a significant area of interest with the rise of new state-of-the-art methods that are based on diffusion models. However, diffusion models provide very little control over the generated image, which led to subsequent works exploring techniques like classifier guidance, that provides a way to trade off diversity with fidelity. In this work, we explore techniques to condition diffusion models with carefully crafted input noise artifacts. This allows generation of images conditioned on semantic attributes. This is different from existing approaches that input Gaussian noise and further introduce conditioning at the diffusion model's inference step. Our experiments over several examples and conditional settings show the potential of our approach.
|
1810.03793
|
Jiawei Li
|
Jiawei Li and etc
|
Collective Strategies with a Master-slave Mechanism Dominate in Spatial
Iterated Prisoner's Dilemma
|
11 pages, 31 figures
|
International Journal of Swarm Intelligence Research 2021
| null | null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
How cooperation emerges and persists in a population of selfish agents is a
fundamental question in evolutionary game theory. Our research shows that
Collective Strategies with Master-Slave Mechanism (CSMSM) defeat Tit-for-Tat
and other well-known strategies in spatial iterated prisoner's dilemma. A CSMSM
identifies kin members by means of a handshaking mechanism. If the opponent is
identified as non-kin, a CSMSM will always defect. Once two CSMSMs meet, they
play master and slave roles. A master defects and a slave cooperates in order to
maximize the master's payoff. CSMSM outperforms non-collective strategies in
spatial IPD even if there is only a small cluster of CSMSMs in the population.
The existence and performance of CSMSM in spatial iterated prisoner's dilemma
suggests that cooperation first appears and persists in a group of collective
agents.
|
[
{
"created": "Tue, 9 Oct 2018 03:28:51 GMT",
"version": "v1"
},
{
"created": "Mon, 1 Nov 2021 14:19:14 GMT",
"version": "v2"
}
] |
2021-11-02
|
[
[
"Li",
"Jiawei",
""
],
[
"etc",
"",
""
]
] |
How cooperation emerges and persists in a population of selfish agents is a fundamental question in evolutionary game theory. Our research shows that Collective Strategies with Master-Slave Mechanism (CSMSM) defeat Tit-for-Tat and other well-known strategies in spatial iterated prisoner's dilemma. A CSMSM identifies kin members by means of a handshaking mechanism. If the opponent is identified as non-kin, a CSMSM will always defect. Once two CSMSMs meet, they play master and slave roles. A master defects and a slave cooperates in order to maximize the master's payoff. CSMSM outperforms non-collective strategies in spatial IPD even if there is only a small cluster of CSMSMs in the population. The existence and performance of CSMSM in spatial iterated prisoner's dilemma suggests that cooperation first appears and persists in a group of collective agents.
|
1603.00570
|
Ohad Shamir
|
Ohad Shamir
|
Without-Replacement Sampling for Stochastic Gradient Methods:
Convergence Results and Application to Distributed Optimization
|
Fixed a few minor typos, and slightly tightened Corollary 1
| null | null | null |
cs.LG math.OC stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Stochastic gradient methods for machine learning and optimization problems
are usually analyzed assuming data points are sampled \emph{with} replacement.
In practice, however, sampling \emph{without} replacement is very common,
easier to implement in many cases, and often performs better. In this paper, we
provide competitive convergence guarantees for without-replacement sampling,
under various scenarios, for three types of algorithms: Any algorithm with
online regret guarantees, stochastic gradient descent, and SVRG. A useful
application of our SVRG analysis is a nearly-optimal algorithm for regularized
least squares in a distributed setting, in terms of both communication
complexity and runtime complexity, when the data is randomly partitioned and
the condition number can be as large as the data size per machine (up to
logarithmic factors). Our proof techniques combine ideas from stochastic
optimization, adversarial online learning, and transductive learning theory,
and can potentially be applied to other stochastic optimization and learning
problems.
|
[
{
"created": "Wed, 2 Mar 2016 04:02:57 GMT",
"version": "v1"
},
{
"created": "Fri, 29 Apr 2016 00:29:34 GMT",
"version": "v2"
},
{
"created": "Mon, 17 Oct 2016 03:58:41 GMT",
"version": "v3"
}
] |
2016-10-18
|
[
[
"Shamir",
"Ohad",
""
]
] |
Stochastic gradient methods for machine learning and optimization problems are usually analyzed assuming data points are sampled \emph{with} replacement. In practice, however, sampling \emph{without} replacement is very common, easier to implement in many cases, and often performs better. In this paper, we provide competitive convergence guarantees for without-replacement sampling, under various scenarios, for three types of algorithms: Any algorithm with online regret guarantees, stochastic gradient descent, and SVRG. A useful application of our SVRG analysis is a nearly-optimal algorithm for regularized least squares in a distributed setting, in terms of both communication complexity and runtime complexity, when the data is randomly partitioned and the condition number can be as large as the data size per machine (up to logarithmic factors). Our proof techniques combine ideas from stochastic optimization, adversarial online learning, and transductive learning theory, and can potentially be applied to other stochastic optimization and learning problems.
|
2309.06263
|
Yohan Beugin
|
Alban H\'eon, Ryan Sheatsley, Quinn Burke, Blaine Hoak, Eric Pauley,
Yohan Beugin, Patrick McDaniel
|
Systematic Evaluation of Geolocation Privacy Mechanisms
|
M.S. Thesis (https://etda.libraries.psu.edu/catalog/25677abh5960)
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Location data privacy has become a serious concern for users as Location
Based Services (LBSs) have become an important part of their life. It is
possible for malicious parties having access to geolocation data to learn
sensitive information about the user such as religion or political views.
Location Privacy Preserving Mechanisms (LPPMs) have been proposed by previous
works to ensure the privacy of the shared data while allowing the users to use
LBSs. But there is no clear view of which mechanism to use according to the
scenario in which the user makes use of a LBS. The scenario is the way the user
is using a LBS (frequency of reports, number of reports). In this paper, we
study the sensitivity of LPPMs on the scenario on which they are used. We
propose a framework to systematically evaluate LPPMs by considering an
exhaustive combination of LPPMs, attacks and metrics. Using our framework we
compare a selection of LPPMs including an improved mechanism that we introduce.
By evaluating over a variety of scenarios, we find that the efficacy (privacy,
utility, and robustness) of the studied mechanisms is dependent on the
scenario: for example the privacy of Planar Laplace geo-indistinguishability is
greatly reduced in a continuous scenario. We show that the scenario is
essential to consider when choosing an obfuscation mechanism for a given
application.
|
[
{
"created": "Tue, 12 Sep 2023 14:23:19 GMT",
"version": "v1"
}
] |
2023-09-13
|
[
[
"Héon",
"Alban",
""
],
[
"Sheatsley",
"Ryan",
""
],
[
"Burke",
"Quinn",
""
],
[
"Hoak",
"Blaine",
""
],
[
"Pauley",
"Eric",
""
],
[
"Beugin",
"Yohan",
""
],
[
"McDaniel",
"Patrick",
""
]
] |
Location data privacy has become a serious concern for users as Location Based Services (LBSs) have become an important part of their life. It is possible for malicious parties having access to geolocation data to learn sensitive information about the user such as religion or political views. Location Privacy Preserving Mechanisms (LPPMs) have been proposed by previous works to ensure the privacy of the shared data while allowing the users to use LBSs. But there is no clear view of which mechanism to use according to the scenario in which the user makes use of a LBS. The scenario is the way the user is using a LBS (frequency of reports, number of reports). In this paper, we study the sensitivity of LPPMs on the scenario on which they are used. We propose a framework to systematically evaluate LPPMs by considering an exhaustive combination of LPPMs, attacks and metrics. Using our framework we compare a selection of LPPMs including an improved mechanism that we introduce. By evaluating over a variety of scenarios, we find that the efficacy (privacy, utility, and robustness) of the studied mechanisms is dependent on the scenario: for example the privacy of Planar Laplace geo-indistinguishability is greatly reduced in a continuous scenario. We show that the scenario is essential to consider when choosing an obfuscation mechanism for a given application.
|
1906.07437
|
Qi Lei
|
Qi Lei, Ajil Jalal, Inderjit S. Dhillon, Alexandros G. Dimakis
|
Inverting Deep Generative models, One layer at a time
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the problem of inverting a deep generative model with ReLU
activations. Inversion corresponds to finding a latent code vector that
explains observed measurements as much as possible. In most prior works this is
performed by attempting to solve a non-convex optimization problem involving
the generator. In this paper we obtain several novel theoretical results for
the inversion problem.
We show that for the realizable case, single layer inversion can be performed
exactly in polynomial time, by solving a linear program. Further, we show that
for multiple layers, inversion is NP-hard and the pre-image set can be
non-convex.
For generative models of arbitrary depth, we show that exact recovery is
possible in polynomial time with high probability, if the layers are expanding
and the weights are randomly selected. Very recent work analyzed the same
problem for gradient descent inversion. Their analysis requires significantly
higher expansion (logarithmic in the latent dimension) while our proposed
algorithm can provably reconstruct even with constant factor expansion. We also
provide provable error bounds for different norms for reconstructing noisy
observations. Our empirical validation demonstrates that we obtain better
reconstructions when the latent dimension is large.
|
[
{
"created": "Tue, 18 Jun 2019 08:20:34 GMT",
"version": "v1"
},
{
"created": "Wed, 19 Jun 2019 06:19:19 GMT",
"version": "v2"
}
] |
2019-06-20
|
[
[
"Lei",
"Qi",
""
],
[
"Jalal",
"Ajil",
""
],
[
"Dhillon",
"Inderjit S.",
""
],
[
"Dimakis",
"Alexandros G.",
""
]
] |
We study the problem of inverting a deep generative model with ReLU activations. Inversion corresponds to finding a latent code vector that explains observed measurements as much as possible. In most prior works this is performed by attempting to solve a non-convex optimization problem involving the generator. In this paper we obtain several novel theoretical results for the inversion problem. We show that for the realizable case, single layer inversion can be performed exactly in polynomial time, by solving a linear program. Further, we show that for multiple layers, inversion is NP-hard and the pre-image set can be non-convex. For generative models of arbitrary depth, we show that exact recovery is possible in polynomial time with high probability, if the layers are expanding and the weights are randomly selected. Very recent work analyzed the same problem for gradient descent inversion. Their analysis requires significantly higher expansion (logarithmic in the latent dimension) while our proposed algorithm can provably reconstruct even with constant factor expansion. We also provide provable error bounds for different norms for reconstructing noisy observations. Our empirical validation demonstrates that we obtain better reconstructions when the latent dimension is large.
|
1512.03221
|
He Chen
|
Yifan Gu, He Chen, Yonghui Li, Branka Vucetic
|
A Discrete Time-Switching Protocol for Wireless-Powered Communications
with Energy Accumulation
|
Presented at Globecom'15
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper investigates a wireless-powered communication network (WPCN) setup
with one multi-antenna access point (AP) and one single-antenna source. It is
assumed that the AP is connected to an external power supply, while the source
does not have an embedded energy supply. But the source could harvest energy
from radio frequency (RF) signals sent by the AP and store it for future
information transmission. We develop a discrete time-switching (DTS) protocol
for the considered WPCN. In the proposed protocol, either energy harvesting
(EH) or information transmission (IT) operation is performed during each
transmission block. Specifically, based on the channel state information (CSI)
between source and AP, the source can determine the minimum energy required for
an outage-free IT operation. If the residual energy of the source is
sufficient, the source will start the IT phase. Otherwise, EH phase is invoked
and the source accumulates the harvested energy. To characterize the
performance of the proposed protocol, we adopt a discrete Markov chain (MC) to
model the energy accumulation process at the source battery. A closed-form
expression for the average throughput of the DTS protocol is derived. Numerical
results validate our theoretical analysis and show that the proposed DTS
protocol considerably outperforms the existing harvest-then-transmit protocol
when the battery capacity at the source is large.
|
[
{
"created": "Thu, 10 Dec 2015 11:39:20 GMT",
"version": "v1"
}
] |
2015-12-11
|
[
[
"Gu",
"Yifan",
""
],
[
"Chen",
"He",
""
],
[
"Li",
"Yonghui",
""
],
[
"Vucetic",
"Branka",
""
]
] |
This paper investigates a wireless-powered communication network (WPCN) setup with one multi-antenna access point (AP) and one single-antenna source. It is assumed that the AP is connected to an external power supply, while the source does not have an embedded energy supply. But the source could harvest energy from radio frequency (RF) signals sent by the AP and store it for future information transmission. We develop a discrete time-switching (DTS) protocol for the considered WPCN. In the proposed protocol, either energy harvesting (EH) or information transmission (IT) operation is performed during each transmission block. Specifically, based on the channel state information (CSI) between source and AP, the source can determine the minimum energy required for an outage-free IT operation. If the residual energy of the source is sufficient, the source will start the IT phase. Otherwise, EH phase is invoked and the source accumulates the harvested energy. To characterize the performance of the proposed protocol, we adopt a discrete Markov chain (MC) to model the energy accumulation process at the source battery. A closed-form expression for the average throughput of the DTS protocol is derived. Numerical results validate our theoretical analysis and show that the proposed DTS protocol considerably outperforms the existing harvest-then-transmit protocol when the battery capacity at the source is large.
|
1009.2785
|
Grenville Croll
|
Salvatore Aurigemma, Raymond R. Panko
|
The Detection of Human Spreadsheet Errors by Humans versus Inspection
(Auditing) Software
|
14 Pages, 4 Figures
|
Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2010 73-85
ISBN 978-1-905404-50-6
| null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Previous spreadsheet inspection experiments have had human subjects look for
seeded errors in spreadsheets. In this study, subjects attempted to find errors
in human-developed spreadsheets to avoid the potential artifacts created by
error seeding. Human subject success rates were compared to the successful
rates for error-flagging by spreadsheet static analysis tools (SSATs) applied
to the same spreadsheets. The human error detection results were comparable to
those of studies using error seeding. However, Excel Error Check and
Spreadsheet Professional were almost useless for correctly flagging natural
(human) errors in this study.
|
[
{
"created": "Tue, 14 Sep 2010 20:54:42 GMT",
"version": "v1"
}
] |
2010-09-16
|
[
[
"Aurigemma",
"Salvatore",
""
],
[
"Panko",
"Raymond R.",
""
]
] |
Previous spreadsheet inspection experiments have had human subjects look for seeded errors in spreadsheets. In this study, subjects attempted to find errors in human-developed spreadsheets to avoid the potential artifacts created by error seeding. Human subject success rates were compared to the success rates for error-flagging by spreadsheet static analysis tools (SSATs) applied to the same spreadsheets. The human error detection results were comparable to those of studies using error seeding. However, Excel Error Check and Spreadsheet Professional were almost useless for correctly flagging natural (human) errors in this study.
|
2004.00365
|
Sami Muhaidat
|
Lina Mohjazi, Ahmed Zoha, Lina Bariah, Sami Muhaidat, Paschalis C.
Sofotasios, Muhammad Ali Imran, and Octavia A. Dobre
|
An Outlook on the Interplay of Machine Learning and Reconfigurable
Intelligent Surfaces: An Overview of Opportunities and Limitations
| null | null | null | null |
cs.NI cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent advances in programmable metasurfaces, also dubbed as reconfigurable
intelligent surfaces (RISs), are envisioned to offer a paradigm shift from
uncontrollable to fully tunable and customizable wireless propagation
environments, enabling a plethora of new applications and technological trends.
Therefore, in view of this cutting edge technological concept, we first review
the architecture and electromagnetic waves manipulation functionalities of
RISs. We then detail some of the recent advancements that have been made
towards realizing these programmable functionalities in wireless communication
applications. Furthermore, we elaborate on how machine learning (ML) can
address various constraints introduced by the real-time deployment of RISs,
particularly in terms of latency, storage, energy efficiency, and computation.
A review of the state-of-the-art research on the integration of ML with RISs is
presented, highlighting their potentials as well as challenges. Finally, the
paper concludes by offering a look ahead towards unexplored possibilities of ML
mechanisms in the context of RISs.
|
[
{
"created": "Mon, 9 Mar 2020 21:03:28 GMT",
"version": "v1"
},
{
"created": "Fri, 10 Sep 2021 18:06:21 GMT",
"version": "v2"
}
] |
2021-09-14
|
[
[
"Mohjazi",
"Lina",
""
],
[
"Zoha",
"Ahmed",
""
],
[
"Bariah",
"Lina",
""
],
[
"Muhaidat",
"Sami",
""
],
[
"Sofotasios",
"Paschalis C.",
""
],
[
"Imran",
"Muhammad Ali",
""
],
[
"Dobre",
"Octavia A.",
""
]
] |
Recent advances in programmable metasurfaces, also dubbed as reconfigurable intelligent surfaces (RISs), are envisioned to offer a paradigm shift from uncontrollable to fully tunable and customizable wireless propagation environments, enabling a plethora of new applications and technological trends. Therefore, in view of this cutting edge technological concept, we first review the architecture and electromagnetic waves manipulation functionalities of RISs. We then detail some of the recent advancements that have been made towards realizing these programmable functionalities in wireless communication applications. Furthermore, we elaborate on how machine learning (ML) can address various constraints introduced by the real-time deployment of RISs, particularly in terms of latency, storage, energy efficiency, and computation. A review of the state-of-the-art research on the integration of ML with RISs is presented, highlighting their potentials as well as challenges. Finally, the paper concludes by offering a look ahead towards unexplored possibilities of ML mechanisms in the context of RISs.
|
2011.07348
|
Jonah Casebeer
|
Jonah Casebeer, Jamshed Kaikaus, Paris Smaragdis
|
Communication-Cost Aware Microphone Selection For Neural Speech
Enhancement with Ad-hoc Microphone Arrays
|
5 pages, 4 figures, ICASSP 2021
| null | null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present a method for jointly-learning a microphone
selection mechanism and a speech enhancement network for multi-channel speech
enhancement with an ad-hoc microphone array. The attention-based microphone
selection mechanism is trained to reduce communication-costs through a penalty
term which represents a task-performance/ communication-cost trade-off. While
working within the trade-off, our method can intelligently stream from more
microphones in lower SNR scenes and fewer microphones in higher SNR scenes. We
evaluate the model in complex echoic acoustic scenes with moving sources and
show that it matches the performance of models that stream from a fixed number
of microphones while reducing communication costs.
|
[
{
"created": "Sat, 14 Nov 2020 17:46:29 GMT",
"version": "v1"
},
{
"created": "Mon, 15 Feb 2021 18:20:01 GMT",
"version": "v2"
},
{
"created": "Thu, 18 Mar 2021 18:53:50 GMT",
"version": "v3"
},
{
"created": "Wed, 21 Apr 2021 15:16:50 GMT",
"version": "v4"
}
] |
2021-04-22
|
[
[
"Casebeer",
"Jonah",
""
],
[
"Kaikaus",
"Jamshed",
""
],
[
"Smaragdis",
"Paris",
""
]
] |
In this paper, we present a method for jointly-learning a microphone selection mechanism and a speech enhancement network for multi-channel speech enhancement with an ad-hoc microphone array. The attention-based microphone selection mechanism is trained to reduce communication-costs through a penalty term which represents a task-performance/ communication-cost trade-off. While working within the trade-off, our method can intelligently stream from more microphones in lower SNR scenes and fewer microphones in higher SNR scenes. We evaluate the model in complex echoic acoustic scenes with moving sources and show that it matches the performance of models that stream from a fixed number of microphones while reducing communication costs.
|