Dataset schema (⌀ marks a nullable column):

id: string (length 9 to 10)
submitter: string (length 1 to 64) ⌀
authors: string (length 4 to 20.7k)
title: string (length 4 to 246)
comments: string (length 1 to 523) ⌀
journal-ref: string (length 4 to 404) ⌀
doi: string (length 11 to 153) ⌀
report-no: string (length 2 to 254) ⌀
categories: string (length 5 to 98)
license: string (9 distinct values)
orig_abstract: string (length 14 to 3.35k)
versions: list (length 1 to 60)
update_date: string (length 10)
authors_parsed: list (length 1 to 1.35k)
abstract: string (length 11 to 3.34k)
1302.3268
|
Aleksandrs Slivkins
|
Ittai Abraham, Omar Alonso, Vasilis Kandylas and Aleksandrs Slivkins
|
Adaptive Crowdsourcing Algorithms for the Bandit Survey Problem
|
Full version of a paper in COLT 2013
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Crowdsourcing has recently become the de facto platform for distributing
and collecting human computation for a wide range of tasks and applications
such as information retrieval, natural language processing and machine
learning. Current crowdsourcing platforms offer limited support for quality
control: most of the effort to ensure good quality falls on the
experimenter, who has to manage the number of workers needed to reach good
results.
We propose a simple model for adaptive quality control in crowdsourced
multiple-choice tasks which we call the \emph{bandit survey problem}. This
model is related to, but technically different from, the well-known
multi-armed bandit problem. We present several algorithms for this problem,
and support them with analysis and simulations. Our approach is based on
our experience conducting relevance evaluation for a large commercial
search engine.
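For background, the classical multi-armed bandit baseline that the abstract contrasts with can be sketched with UCB1 (this is not the paper's bandit-survey algorithm; the arms and horizon below are made up for illustration):

```python
import math
import random

def ucb1(arms, T):
    """Background sketch of UCB1 for the classical multi-armed bandit
    (the bandit-survey algorithms in the paper differ; this only
    illustrates the related baseline). `arms` is a list of callables
    returning rewards in [0, 1]. Returns the pull count per arm."""
    n = [0] * len(arms)        # pull counts
    s = [0.0] * len(arms)      # reward sums
    for t in range(1, T + 1):
        if t <= len(arms):
            a = t - 1          # initialization: play each arm once
        else:
            # pick the arm with the highest upper confidence bound
            a = max(range(len(arms)),
                    key=lambda i: s[i] / n[i] + math.sqrt(2 * math.log(t) / n[i]))
        s[a] += arms[a]()
        n[a] += 1
    return n
```

With a clearly better arm, UCB1 concentrates its pulls on it after a short exploration phase.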
|
[
{
"created": "Wed, 13 Feb 2013 22:42:44 GMT",
"version": "v1"
},
{
"created": "Mon, 20 May 2013 15:02:15 GMT",
"version": "v2"
}
] |
2013-05-21
|
[
[
"Abraham",
"Ittai",
""
],
[
"Alonso",
"Omar",
""
],
[
"Kandylas",
"Vasilis",
""
],
[
"Slivkins",
"Aleksandrs",
""
]
] |
Crowdsourcing has recently become the de facto platform for distributing and collecting human computation for a wide range of tasks and applications such as information retrieval, natural language processing and machine learning. Current crowdsourcing platforms offer limited support for quality control: most of the effort to ensure good quality falls on the experimenter, who has to manage the number of workers needed to reach good results. We propose a simple model for adaptive quality control in crowdsourced multiple-choice tasks which we call the \emph{bandit survey problem}. This model is related to, but technically different from, the well-known multi-armed bandit problem. We present several algorithms for this problem, and support them with analysis and simulations. Our approach is based on our experience conducting relevance evaluation for a large commercial search engine.
|
2401.03529
|
Evan Ryan Gunter
|
Evan Ryan Gunter (1), Yevgeny Liokumovich (2), Victoria Krakovna (3)
((1) ML Alignment & Theory Scholars (MATS), (2) University of Toronto, (3)
Google DeepMind)
|
Quantifying stability of non-power-seeking in artificial agents
|
37 pages, 5 figures
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
We investigate the question: if an AI agent is known to be safe in one
setting, is it also safe in a new setting similar to the first? This is a core
question of AI alignment--we train and test models in a certain environment,
but deploy them in another, and we need to guarantee that models that seem safe
in testing remain so in deployment. Our notion of safety is based on
power-seeking--an agent which seeks power is not safe. In particular, we focus
on a crucial type of power-seeking: resisting shutdown. We model agents as
policies for Markov decision processes, and show (in two cases of interest)
that not resisting shutdown is "stable": if an MDP has certain policies which
don't avoid shutdown, the corresponding policies for a similar MDP also don't
avoid shutdown. We also show that there are natural cases where safety is _not_
stable--arbitrarily small perturbations may result in policies which never shut
down. In our first case of interest--near-optimal policies--we use a
bisimulation metric on MDPs to prove that small perturbations won't make the
agent take longer to shut down. Our second case of interest is policies for
MDPs satisfying certain constraints which hold for various models (including
language models). Here, we demonstrate a quantitative bound on how fast the
probability of not shutting down can increase: by defining a metric on MDPs;
proving that the probability of not shutting down, as a function on MDPs, is
lower semicontinuous; and bounding how quickly this function decreases.
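The abstract's modeling step, agents as policies for an MDP with an absorbing shutdown state, can be illustrated with a toy calculation (the MDP below is invented for illustration and is not from the paper):

```python
import numpy as np

# Invented 3-state MDP: states 0 and 1 are "running", state 2 is an
# absorbing shutdown state. P[a][s] is the next-state distribution
# when action a is taken in state s.
P = [np.array([[0.1, 0.6, 0.3],
               [0.2, 0.3, 0.5],
               [0.0, 0.0, 1.0]]),
     np.array([[0.7, 0.2, 0.1],
               [0.4, 0.4, 0.2],
               [0.0, 0.0, 1.0]])]

def shutdown_prob(policy, start=0, horizon=5):
    """Probability that a deterministic policy has reached the
    absorbing shutdown state within `horizon` steps."""
    dist = np.zeros(3)
    dist[start] = 1.0
    # transition matrix induced by the policy: row s comes from P[policy[s]]
    T = np.array([P[policy[s]][s] for s in range(3)])
    for _ in range(horizon):
        dist = dist @ T
    return dist[2]
```

Comparing two policies on this toy MDP shows how one policy can be markedly less likely to shut down than another, the kind of quantity the paper's perturbation bounds control.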
|
[
{
"created": "Sun, 7 Jan 2024 15:57:38 GMT",
"version": "v1"
}
] |
2024-01-09
|
[
[
"Gunter",
"Evan Ryan",
""
],
[
"Liokumovich",
"Yevgeny",
""
],
[
"Krakovna",
"Victoria",
""
]
] |
We investigate the question: if an AI agent is known to be safe in one setting, is it also safe in a new setting similar to the first? This is a core question of AI alignment--we train and test models in a certain environment, but deploy them in another, and we need to guarantee that models that seem safe in testing remain so in deployment. Our notion of safety is based on power-seeking--an agent which seeks power is not safe. In particular, we focus on a crucial type of power-seeking: resisting shutdown. We model agents as policies for Markov decision processes, and show (in two cases of interest) that not resisting shutdown is "stable": if an MDP has certain policies which don't avoid shutdown, the corresponding policies for a similar MDP also don't avoid shutdown. We also show that there are natural cases where safety is _not_ stable--arbitrarily small perturbations may result in policies which never shut down. In our first case of interest--near-optimal policies--we use a bisimulation metric on MDPs to prove that small perturbations won't make the agent take longer to shut down. Our second case of interest is policies for MDPs satisfying certain constraints which hold for various models (including language models). Here, we demonstrate a quantitative bound on how fast the probability of not shutting down can increase: by defining a metric on MDPs; proving that the probability of not shutting down, as a function on MDPs, is lower semicontinuous; and bounding how quickly this function decreases.
|
2311.04499
|
Lin Meng
|
Lin Meng, Yuzhong Sun, Weimin Li
|
Near-Linear Scaling Data Parallel Training with Overlapping-Aware
Gradient Compression
|
10 pages, 11 figures
| null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing Data Parallel (DP) training for deep neural networks (DNNs) often
experiences limited scalability in speedup due to substantial communication
overheads. While the Overlapping technique can mitigate this problem by
parallelizing communication and computation in DP, its effectiveness is
constrained by the high communication-to-computation ratios (CCR) of DP
training tasks. Gradient compression (GC) is a promising technique to
obtain a lower CCR by reducing communication volume directly. However, it
is challenging to obtain real performance improvement by combining GC with
Overlapping because of (1) severe performance penalties in traditional GC
schemes caused by high compression overhead and (2) a decline in the
benefit of Overlapping owing to possible data dependencies in GC schemes.
In this paper, we propose COVAP, a novel GC scheme with a new
coarse-grained filter that brings the compression overhead close to zero.
COVAP ensures an almost complete overlap of communication and computation
by employing adaptive compression ratios and tensor sharding tailored to
specific training tasks. COVAP also adopts an improved error feedback
mechanism to maintain training accuracy. Experiments are conducted on
Alibaba Cloud ECS instances with different DNNs from real-world
applications. The results show that COVAP outperforms existing GC schemes
in time-to-solution by 1.92x-15.39x and exhibits near-linear scaling.
Furthermore, COVAP achieves the best scalability in experiments on four
different cluster sizes.
|
[
{
"created": "Wed, 8 Nov 2023 07:17:41 GMT",
"version": "v1"
}
] |
2023-11-09
|
[
[
"Meng",
"Lin",
""
],
[
"Sun",
"Yuzhong",
""
],
[
"Li",
"Weimin",
""
]
] |
Existing Data Parallel (DP) training for deep neural networks (DNNs) often experiences limited scalability in speedup due to substantial communication overheads. While the Overlapping technique can mitigate this problem by parallelizing communication and computation in DP, its effectiveness is constrained by the high communication-to-computation ratios (CCR) of DP training tasks. Gradient compression (GC) is a promising technique to obtain a lower CCR by reducing communication volume directly. However, it is challenging to obtain real performance improvement by combining GC with Overlapping because of (1) severe performance penalties in traditional GC schemes caused by high compression overhead and (2) a decline in the benefit of Overlapping owing to possible data dependencies in GC schemes. In this paper, we propose COVAP, a novel GC scheme with a new coarse-grained filter that brings the compression overhead close to zero. COVAP ensures an almost complete overlap of communication and computation by employing adaptive compression ratios and tensor sharding tailored to specific training tasks. COVAP also adopts an improved error feedback mechanism to maintain training accuracy. Experiments are conducted on Alibaba Cloud ECS instances with different DNNs from real-world applications. The results show that COVAP outperforms existing GC schemes in time-to-solution by 1.92x-15.39x and exhibits near-linear scaling. Furthermore, COVAP achieves the best scalability in experiments on four different cluster sizes.
|
1711.11564
|
Yun Ma
|
Yun Ma, Ziniu Hu, Dian Yang, Xuanzhe Liu
|
Automating Release of Deep Link APIs for Android Applications
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unlike the Web, where each web page has a global URL through which it can
be reached, a specific "content page" inside a mobile app cannot be opened
unless the user explores the app with several operations from the landing
page. Recently, deep links have been advocated by major companies to enable
targeting and opening a specific page of an app externally with an
accessible uniform resource identifier (URI). To empirically investigate
the state of the practice in adopting deep links, in this article we
present the largest empirical study of deep links, covering over 20,000
Android apps, and find that deep links have not achieved wide adoption
among current Android apps, and that non-trivial manual efforts are
required for app developers to support them. To address this issue, we
propose the Aladdin approach and its supporting tool to release deep links
that access arbitrary locations of existing apps. Aladdin instantiates our
novel cooperative framework to synergistically combine static analysis and
dynamic analysis, while minimally engaging developers to provide inputs to
the framework for automation, without requiring any coding or additional
deployment efforts. We evaluate Aladdin with popular apps and demonstrate
its effectiveness and performance.
|
[
{
"created": "Thu, 30 Nov 2017 18:33:16 GMT",
"version": "v1"
},
{
"created": "Fri, 1 Dec 2017 06:59:31 GMT",
"version": "v2"
}
] |
2017-12-04
|
[
[
"Ma",
"Yun",
""
],
[
"Hu",
"Ziniu",
""
],
[
"Yang",
"Dian",
""
],
[
"Liu",
"Xuanzhe",
""
]
] |
Unlike the Web, where each web page has a global URL through which it can be reached, a specific "content page" inside a mobile app cannot be opened unless the user explores the app with several operations from the landing page. Recently, deep links have been advocated by major companies to enable targeting and opening a specific page of an app externally with an accessible uniform resource identifier (URI). To empirically investigate the state of the practice in adopting deep links, in this article we present the largest empirical study of deep links, covering over 20,000 Android apps, and find that deep links have not achieved wide adoption among current Android apps, and that non-trivial manual efforts are required for app developers to support them. To address this issue, we propose the Aladdin approach and its supporting tool to release deep links that access arbitrary locations of existing apps. Aladdin instantiates our novel cooperative framework to synergistically combine static analysis and dynamic analysis, while minimally engaging developers to provide inputs to the framework for automation, without requiring any coding or additional deployment efforts. We evaluate Aladdin with popular apps and demonstrate its effectiveness and performance.
|
2111.01455
|
Hanh Le Thi Ngoc
|
Charles C. Morace, Thi-Ngoc-Hanh Le, Sheng-Yi Yao, Shang-Wei Zhang,
Tong-Yee Lee
|
Learning a perceptual manifold with deep features for animation video
resequencing
|
Under major revision; Project website:
http://graphics.csie.ncku.edu.tw/ManifoldAnimationSequence
| null | null | null |
cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
We propose a novel deep learning framework for animation video
resequencing. Our system produces new video sequences by minimizing a
perceptual distance between images from an existing animation video clip.
To measure perceptual distance, we utilize the activations of convolutional
neural networks and learn a perceptual distance by training these features
on a small network with data comprised of human perceptual judgments. We
show that with this perceptual metric and graph-based manifold learning
techniques, our framework can produce new smooth and visually appealing
animation video results for a variety of animation video styles. In
contrast to previous work on animation video resequencing, the proposed
framework applies to a wide range of image styles and does not require
hand-crafted feature extraction, background subtraction, or feature
correspondence. In addition, we show that our framework can also arrange
unordered collections of images into appealing sequences.
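As a toy illustration of resequencing by perceptual distance, a greedy nearest-neighbor walk over frame features can produce a smooth ordering (plain Euclidean distance stands in here for the learned perceptual metric; the paper's graph-based method is not reduced to this):

```python
import numpy as np

def greedy_resequence(features, start=0):
    """Toy stand-in for perceptual-distance resequencing: order frames
    by repeatedly stepping to the nearest unused frame in feature
    space, yielding a sequence with small frame-to-frame distances."""
    n = len(features)
    order, used = [start], {start}
    while len(order) < n:
        cur = features[order[-1]]
        j = min((i for i in range(n) if i not in used),
                key=lambda i: np.linalg.norm(features[i] - cur))
        order.append(j)
        used.add(j)
    return order
```

On 1-D features the walk simply recovers the sorted order, which is the smoothest possible sequence under this distance.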
|
[
{
"created": "Tue, 2 Nov 2021 09:30:37 GMT",
"version": "v1"
}
] |
2021-11-03
|
[
[
"Morace",
"Charles C.",
""
],
[
"Le",
"Thi-Ngoc-Hanh",
""
],
[
"Yao",
"Sheng-Yi",
""
],
[
"Zhang",
"Shang-Wei",
""
],
[
"Lee",
"Tong-Yee",
""
]
] |
We propose a novel deep learning framework for animation video resequencing. Our system produces new video sequences by minimizing a perceptual distance between images from an existing animation video clip. To measure perceptual distance, we utilize the activations of convolutional neural networks and learn a perceptual distance by training these features on a small network with data comprised of human perceptual judgments. We show that with this perceptual metric and graph-based manifold learning techniques, our framework can produce new smooth and visually appealing animation video results for a variety of animation video styles. In contrast to previous work on animation video resequencing, the proposed framework applies to a wide range of image styles and does not require hand-crafted feature extraction, background subtraction, or feature correspondence. In addition, we show that our framework can also arrange unordered collections of images into appealing sequences.
|
2404.16370
|
Kenji Koide
|
Kenji Koide, Shuji Oishi, Masashi Yokozuka, and Atsuhiko Banno
|
MegaParticles: Range-based 6-DoF Monte Carlo Localization with
GPU-Accelerated Stein Particle Filter
|
IEEE International Conference on Robotics and Automation (ICRA2024)
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a 6-DoF range-based Monte Carlo localization method
with a GPU-accelerated Stein particle filter. To update a massive number of
particles, we propose a Gauss-Newton-based Stein variational gradient
descent (SVGD) with iterative neighbor particle search. This method uses
SVGD to collectively update particle states with gradient and neighborhood
information, which provides efficient particle sampling. For an efficient
neighbor particle search, it uses locality-sensitive hashing and
iteratively updates the neighbor list of each particle over time. The
neighbor list is then used to propagate the posterior probabilities of
particles over the neighbor particle graph. The proposed method is capable
of evaluating one million particles in real time on a single GPU and
enables robust pose initialization and re-localization without an initial
pose estimate. In experiments, the proposed method showed extreme
robustness to complete sensor occlusion (i.e., kidnapping) and enabled
pinpoint sensor localization without any prior information.
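The neighbor-search ingredient can be sketched on the CPU with a uniform grid hash, a simple form of locality-sensitive hashing for 3D points (the paper's GPU implementation is not reproduced here; cell size and radius below are arbitrary):

```python
import numpy as np
from collections import defaultdict

def build_grid(points, cell):
    """Hash every 3D point into an integer grid cell."""
    grid = defaultdict(list)
    for i, p in enumerate(points):
        grid[tuple(np.floor(p / cell).astype(int))].append(i)
    return grid

def neighbors(points, grid, i, cell, radius):
    """Indices of points within `radius` of points[i], scanning only
    the 27 cells around its own (exact as long as radius <= cell)."""
    cx, cy, cz = np.floor(points[i] / cell).astype(int)
    found = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                for j in grid.get((cx + dx, cy + dy, cz + dz), ()):
                    if j != i and np.linalg.norm(points[j] - points[i]) <= radius:
                        found.append(j)
    return found
```

Each query then touches only a constant number of cells instead of all points, which is what makes per-particle neighbor lists affordable at large particle counts.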
|
[
{
"created": "Thu, 25 Apr 2024 07:17:30 GMT",
"version": "v1"
}
] |
2024-04-26
|
[
[
"Koide",
"Kenji",
""
],
[
"Oishi",
"Shuji",
""
],
[
"Yokozuka",
"Masashi",
""
],
[
"Banno",
"Atsuhiko",
""
]
] |
This paper presents a 6-DoF range-based Monte Carlo localization method with a GPU-accelerated Stein particle filter. To update a massive number of particles, we propose a Gauss-Newton-based Stein variational gradient descent (SVGD) with iterative neighbor particle search. This method uses SVGD to collectively update particle states with gradient and neighborhood information, which provides efficient particle sampling. For an efficient neighbor particle search, it uses locality-sensitive hashing and iteratively updates the neighbor list of each particle over time. The neighbor list is then used to propagate the posterior probabilities of particles over the neighbor particle graph. The proposed method is capable of evaluating one million particles in real time on a single GPU and enables robust pose initialization and re-localization without an initial pose estimate. In experiments, the proposed method showed extreme robustness to complete sensor occlusion (i.e., kidnapping) and enabled pinpoint sensor localization without any prior information.
|
2109.12389
|
Yun Lin
|
Xuezhi Song, Yun Lin, Siang Hwee Ng, Yijian Wu, Xin Peng, Jin Song
Dong, Hong Mei
|
RegMiner: Towards Constructing a Large Regression Dataset from Code
Evolution History
|
ISSTA'22
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Bug datasets consisting of real-world bugs are important artifacts for
researchers and programmers: they lay the empirical and experimental
foundation for various SE/PL research areas such as fault localization,
software testing, and program repair. All known state-of-the-art datasets
are constructed manually, which inevitably limits their scalability,
representativeness, and support for emerging data-driven research. In this
work, we propose an approach to automate the process of harvesting
replicable regression bugs from code evolution history. We focus on a
regression bug dataset, as regressions (1) manifest how a bug is introduced
and fixed (like normal bugs), (2) support regression bug analysis, and (3)
incorporate a much stronger specification (i.e., the original passing
version) for general bug analysis. Technically, we address an information
retrieval problem on code evolution history. Given a code repository, we
search for regressions where a test can pass a regression-fixing commit,
fail a regression-inducing commit, and pass a working commit. In this work,
we address the challenges of (1) identifying potential regression-fixing
commits from the code evolution history, (2) migrating the test and its
code dependencies over the history, and (3) minimizing the compilation
overhead during the regression search. We build our tool, RegMiner, which
harvested 537 regressions over 66 projects in 3 weeks, creating, to the
best of our knowledge, the largest replicable regression dataset within the
shortest period. Moreover, our empirical study on our regression dataset
shows a gap between popular regression fault localization techniques (e.g.,
delta-debugging) and the real fix, revealing new data-driven research
opportunities.
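The search criterion, pass at a regression-fixing commit, fail at the regression-inducing commit, pass at an earlier working commit, can be sketched over a toy linear history (the commit ids and pass/fail results below are invented; the real tool additionally migrates tests and compiles each version):

```python
# Invented stand-in for "run the migrated test at a commit": a map from
# commit id to whether the test passes there.
test_passes = {"c0": True, "c1": True, "c2": False, "c3": False, "c4": True}
history = ["c0", "c1", "c2", "c3", "c4"]  # oldest to newest

def find_regressions(history, passes):
    """Return (working, failing, fixing) commit triples: the test
    passes at a working commit, fails at a later commit, and passes
    again at a later regression-fixing commit."""
    out = []
    for k, fixing in enumerate(history):
        if not passes[fixing]:
            continue
        for j in range(k):              # candidate failing commits
            if passes[history[j]]:
                continue
            for i in range(j):          # candidate working commits
                if passes[history[i]]:
                    out.append((history[i], history[j], fixing))
    return out
```

On this toy history the criterion yields every working/failing/fixing combination, which a real miner would then deduplicate and validate by replay.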
|
[
{
"created": "Sat, 25 Sep 2021 15:29:13 GMT",
"version": "v1"
},
{
"created": "Mon, 4 Jul 2022 16:43:05 GMT",
"version": "v2"
}
] |
2022-07-05
|
[
[
"Song",
"Xuezhi",
""
],
[
"Lin",
"Yun",
""
],
[
"Ng",
"Siang Hwee",
""
],
[
"Wu",
"Yijian",
""
],
[
"Peng",
"Xin",
""
],
[
"Dong",
"Jin Song",
""
],
[
"Mei",
"Hong",
""
]
] |
Bug datasets consisting of real-world bugs are important artifacts for researchers and programmers: they lay the empirical and experimental foundation for various SE/PL research areas such as fault localization, software testing, and program repair. All known state-of-the-art datasets are constructed manually, which inevitably limits their scalability, representativeness, and support for emerging data-driven research. In this work, we propose an approach to automate the process of harvesting replicable regression bugs from code evolution history. We focus on a regression bug dataset, as regressions (1) manifest how a bug is introduced and fixed (like normal bugs), (2) support regression bug analysis, and (3) incorporate a much stronger specification (i.e., the original passing version) for general bug analysis. Technically, we address an information retrieval problem on code evolution history. Given a code repository, we search for regressions where a test can pass a regression-fixing commit, fail a regression-inducing commit, and pass a working commit. In this work, we address the challenges of (1) identifying potential regression-fixing commits from the code evolution history, (2) migrating the test and its code dependencies over the history, and (3) minimizing the compilation overhead during the regression search. We build our tool, RegMiner, which harvested 537 regressions over 66 projects in 3 weeks, creating, to the best of our knowledge, the largest replicable regression dataset within the shortest period. Moreover, our empirical study on our regression dataset shows a gap between popular regression fault localization techniques (e.g., delta-debugging) and the real fix, revealing new data-driven research opportunities.
|
1612.01939
|
Baochen Sun
|
Baochen Sun, Jiashi Feng, Kate Saenko
|
Correlation Alignment for Unsupervised Domain Adaptation
|
Introduction to CORAL, CORAL-LDA, and Deep CORAL. arXiv admin note:
text overlap with arXiv:1511.05547
| null | null | null |
cs.CV cs.AI cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this chapter, we present CORrelation ALignment (CORAL), a simple yet
effective method for unsupervised domain adaptation. CORAL minimizes domain
shift by aligning the second-order statistics of source and target
distributions, without requiring any target labels. In contrast to subspace
manifold methods, it aligns the original feature distributions of the source
and target domains, rather than the bases of lower-dimensional subspaces. It is
also much simpler than other distribution matching methods. CORAL performs
remarkably well in extensive evaluations on standard benchmark datasets. We
first describe a solution that applies a linear transformation to source
features to align them with target features before classifier training. For
linear classifiers, we propose to equivalently apply CORAL to the classifier
weights, leading to added efficiency when the number of classifiers is small
but the number and dimensionality of target examples are very high. The
resulting CORAL Linear Discriminant Analysis (CORAL-LDA) outperforms LDA by a
large margin on standard domain adaptation benchmarks. Finally, we extend CORAL
to learn a nonlinear transformation that aligns correlations of layer
activations in deep neural networks (DNNs). The resulting Deep CORAL approach
works seamlessly with DNNs and achieves state-of-the-art performance on
standard benchmark datasets. Our code is available
at:~\url{https://github.com/VisionLearningGroup/CORAL}
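The linear CORAL step described above, whitening the source covariance and re-coloring with the target covariance, can be sketched as follows (a minimal reimplementation under the stated assumptions, not the authors' released code):

```python
import numpy as np

def coral_transform(Xs, Xt, reg=1.0):
    """Sketch of linear CORAL: align source features Xs to target
    features Xt by matching second-order statistics. `reg` adds
    identity regularization so the covariance square roots are
    well-conditioned."""
    d = Xs.shape[1]
    Cs = np.cov(Xs, rowvar=False) + reg * np.eye(d)
    Ct = np.cov(Xt, rowvar=False) + reg * np.eye(d)

    def sym_pow(C, p):
        # fractional power of a symmetric PSD matrix via eigendecomposition
        w, V = np.linalg.eigh(C)
        return (V * np.clip(w, 1e-12, None) ** p) @ V.T

    # whiten the source covariance, then re-color with the target covariance
    return Xs @ sym_pow(Cs, -0.5) @ sym_pow(Ct, 0.5)
```

After the transform, the empirical covariance of the aligned source features approximately matches that of the target features, which is exactly the domain-shift quantity CORAL minimizes.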
|
[
{
"created": "Tue, 6 Dec 2016 18:31:57 GMT",
"version": "v1"
}
] |
2016-12-07
|
[
[
"Sun",
"Baochen",
""
],
[
"Feng",
"Jiashi",
""
],
[
"Saenko",
"Kate",
""
]
] |
In this chapter, we present CORrelation ALignment (CORAL), a simple yet effective method for unsupervised domain adaptation. CORAL minimizes domain shift by aligning the second-order statistics of source and target distributions, without requiring any target labels. In contrast to subspace manifold methods, it aligns the original feature distributions of the source and target domains, rather than the bases of lower-dimensional subspaces. It is also much simpler than other distribution matching methods. CORAL performs remarkably well in extensive evaluations on standard benchmark datasets. We first describe a solution that applies a linear transformation to source features to align them with target features before classifier training. For linear classifiers, we propose to equivalently apply CORAL to the classifier weights, leading to added efficiency when the number of classifiers is small but the number and dimensionality of target examples are very high. The resulting CORAL Linear Discriminant Analysis (CORAL-LDA) outperforms LDA by a large margin on standard domain adaptation benchmarks. Finally, we extend CORAL to learn a nonlinear transformation that aligns correlations of layer activations in deep neural networks (DNNs). The resulting Deep CORAL approach works seamlessly with DNNs and achieves state-of-the-art performance on standard benchmark datasets. Our code is available at:~\url{https://github.com/VisionLearningGroup/CORAL}
|
1908.08717
|
Mahault Garnerin
|
Mahault Garnerin, Solange Rossato, Laurent Besacier
|
Gender Representation in French Broadcast Corpora and Its Impact on ASR
Performance
|
Accepted to ACM Workshop AI4TV
| null | null | null |
cs.CL cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper analyzes gender representation in four major corpora of French
broadcast speech. Because these corpora are widely used within the speech
processing community, they are primary material for training automatic
speech recognition (ASR) systems. As gender bias has been highlighted in
numerous natural language processing (NLP) applications, we study the
impact of gender imbalance in TV and radio broadcasts on the performance of
an ASR system. This analysis shows that women are under-represented in our
data in terms of both speakers and speech turns. We introduce the notion of
speaker role to refine our analysis and find that women are even less
represented within the Anchor category corresponding to prominent speakers.
The disparity in available data between genders causes performance to
degrade for women. However, this global trend can be counterbalanced for
speakers who regularly speak in the media, when a sufficient amount of data
is available.
|
[
{
"created": "Fri, 23 Aug 2019 08:51:19 GMT",
"version": "v1"
}
] |
2019-08-26
|
[
[
"Garnerin",
"Mahault",
""
],
[
"Rossato",
"Solange",
""
],
[
"Besacier",
"Laurent",
""
]
] |
This paper analyzes gender representation in four major corpora of French broadcast speech. Because these corpora are widely used within the speech processing community, they are primary material for training automatic speech recognition (ASR) systems. As gender bias has been highlighted in numerous natural language processing (NLP) applications, we study the impact of gender imbalance in TV and radio broadcasts on the performance of an ASR system. This analysis shows that women are under-represented in our data in terms of both speakers and speech turns. We introduce the notion of speaker role to refine our analysis and find that women are even less represented within the Anchor category corresponding to prominent speakers. The disparity in available data between genders causes performance to degrade for women. However, this global trend can be counterbalanced for speakers who regularly speak in the media, when a sufficient amount of data is available.
|
2401.03246
|
Egor Shvetsov
|
Igor Udovichenko, Egor Shvetsov, Denis Divitsky, Dmitry Osin, Ilya
Trofimov, Anatoly Glushenko, Ivan Sukharev, Dmitry Berestenev, Evgeny Burnaev
|
SeqNAS: Neural Architecture Search for Event Sequence Classification
|
in IEEE Access
| null |
10.1109/ACCESS.2024.3349497
| null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neural Architecture Search (NAS) methods are widely used in various
industries to obtain high-quality task-specific solutions with minimal
human intervention. Event sequences find widespread use in various
industrial applications, including churn prediction, customer segmentation,
fraud detection, and fault diagnosis, among others. Such data consist of
categorical and real-valued components with irregular timestamps. Despite
the usefulness of NAS methods, previous approaches have only been applied
to other domains: images, texts, or time series. Our work addresses this
limitation by introducing a novel NAS algorithm, SeqNAS, specifically
designed for event sequence classification. We develop a simple yet
expressive search space that leverages commonly used building blocks for
event sequence classification, including multi-head self-attention,
convolutions, and recurrent cells. To perform the search, we adopt
sequential Bayesian optimization and utilize previously trained models as
an ensemble of teachers to augment knowledge distillation. As a result of
our work, we demonstrate that our method surpasses state-of-the-art NAS
methods and popular architectures suitable for sequence classification,
and holds great potential for various industrial applications.
|
[
{
"created": "Sat, 6 Jan 2024 16:00:26 GMT",
"version": "v1"
}
] |
2024-01-09
|
[
[
"Udovichenko",
"Igor",
""
],
[
"Shvetsov",
"Egor",
""
],
[
"Divitsky",
"Denis",
""
],
[
"Osin",
"Dmitry",
""
],
[
"Trofimov",
"Ilya",
""
],
[
"Glushenko",
"Anatoly",
""
],
[
"Sukharev",
"Ivan",
""
],
[
"Berestenev",
"Dmitry",
""
],
[
"Burnaev",
"Evgeny",
""
]
] |
Neural Architecture Search (NAS) methods are widely used in various industries to obtain high-quality task-specific solutions with minimal human intervention. Event sequences find widespread use in various industrial applications, including churn prediction, customer segmentation, fraud detection, and fault diagnosis, among others. Such data consist of categorical and real-valued components with irregular timestamps. Despite the usefulness of NAS methods, previous approaches have only been applied to other domains: images, texts, or time series. Our work addresses this limitation by introducing a novel NAS algorithm, SeqNAS, specifically designed for event sequence classification. We develop a simple yet expressive search space that leverages commonly used building blocks for event sequence classification, including multi-head self-attention, convolutions, and recurrent cells. To perform the search, we adopt sequential Bayesian optimization and utilize previously trained models as an ensemble of teachers to augment knowledge distillation. As a result of our work, we demonstrate that our method surpasses state-of-the-art NAS methods and popular architectures suitable for sequence classification, and holds great potential for various industrial applications.
|
1501.01432
|
Kuang Zhou
|
Kuang Zhou (IRISA), Arnaud Martin (IRISA), Quan Pan
|
Evidential-EM Algorithm Applied to Progressively Censored Observations
| null |
15th International Conference on Information Processing and
Management of Uncertainty in Knowledge-Based Systems, Jul 2014, Montpellier,
France. pp.180 - 189
|
10.1007/978-3-319-08852-5_19
| null |
cs.AI stat.ME
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Evidential-EM (E2M) algorithm is an effective approach for computing
maximum likelihood estimates under finite mixture models, especially when
there is uncertain information about the data. In this paper we present an
extension of the E2M method for a particular case of incomplete data, where
the loss of information is due to both mixture models and censored
observations. The prior uncertain information is expressed by belief
functions, while the pseudo-likelihood function is derived based on
imprecise observations and prior knowledge. The E2M method is then invoked
to maximize the generalized likelihood function to obtain the optimal
estimate of the parameters. Numerical examples show that the proposed
method can effectively integrate the uncertain prior information with the
current imprecise knowledge conveyed by the observed data.
|
[
{
"created": "Wed, 7 Jan 2015 10:27:45 GMT",
"version": "v1"
}
] |
2015-01-08
|
[
[
"Zhou",
"Kuang",
"",
"IRISA"
],
[
"Martin",
"Arnaud",
"",
"IRISA"
],
[
"Pan",
"Quan",
""
]
] |
The Evidential-EM (E2M) algorithm is an effective approach for computing maximum likelihood estimations under finite mixture models, especially when there is uncertain information about data. In this paper we present an extension of the E2M method in a particular case of incomplete data, where the loss of information is due to both mixture models and censored observations. The prior uncertain information is expressed by belief functions, while the pseudo-likelihood function is derived based on imprecise observations and prior knowledge. Then the E2M method is invoked to maximize the generalized likelihood function to obtain the optimal estimation of parameters. Numerical examples show that the proposed method can effectively integrate the uncertain prior information with the current imprecise knowledge conveyed by the observed data.
|
1905.11203
|
G\"unter Rote
|
G\"unter Rote
|
The Largest Contained Quadrilateral and the Smallest Enclosing
Parallelogram of a Convex Polygon
|
7 pages + 4 pages of appendices, 4 figures, plus a prototype
implementation in Python. This version is substantially extended, and the
algorithms are given in pseudocode in the appendices. (Version 1 was only
about the largest contained quadrilateral.)
| null | null | null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a linear-time algorithm for finding the quadrilateral of largest
area contained in a convex polygon, and we show that it is closely related to
an old algorithm for the smallest enclosing parallelogram of a convex polygon.
|
[
{
"created": "Mon, 27 May 2019 13:44:15 GMT",
"version": "v1"
},
{
"created": "Tue, 18 Jun 2019 09:49:16 GMT",
"version": "v2"
}
] |
2019-06-19
|
[
[
"Rote",
"Günter",
""
]
] |
We present a linear-time algorithm for finding the quadrilateral of largest area contained in a convex polygon, and we show that it is closely related to an old algorithm for the smallest enclosing parallelogram of a convex polygon.
|
2309.13225
|
Christopher Ye
|
Barna Saha and Christopher Ye
|
Faster Approximate All Pairs Shortest Paths
|
81 pages
| null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The all pairs shortest path problem (APSP) is one of the foundational
problems in computer science. For weighted dense graphs on $n$ vertices, no
truly sub-cubic algorithms exist to compute APSP exactly even for undirected
graphs. This is popularly known as the APSP conjecture and has played a
prominent role in developing the field of fine-grained complexity. The seminal
result of Seidel uses fast matrix multiplication (FMM) to compute APSP on
unweighted undirected graphs exactly in $\tilde{O}(n^{\omega})$ time, where
$\omega=2.372$. Even for unweighted undirected graphs, it is not possible to
obtain a $(2-\epsilon)$-approximation of APSP in $o(n^\omega)$ time.
In this paper, we provide a multitude of new results for multiplicative and
additive approximations of APSP in undirected graphs for both unweighted and
weighted cases. We provide new algorithms for multiplicative 2-approximation of
unweighted graphs: a deterministic one that runs in $\tilde{O}(n^{2.072})$ time
and a randomized one that runs in $\tilde{O}(n^{2.032})$ time in expectation,
improving upon the best known bound of $\tilde{O}(n^{2.25})$ by Roditty (STOC,
2023). For $2$-approximating paths of length $\geq k$, $k \geq 4$, we provide
the first improvement after Dor, Halperin, Zwick (2000) for dense graphs even
just using combinatorial methods, and then improve it further using FMM. We
next consider additive approximations, and provide improved bounds for all
additive $\beta$-approximations, $\beta \geq 4$. For weighted graphs, we show
that by allowing small additive errors along with a
$(1+\epsilon)$-multiplicative approximation, it is possible to improve upon
Zwick's $\tilde{O}(n^\omega)$ algorithm. Our results point out the crucial role
that FMM can play even on approximating APSP on unweighted undirected graphs,
and reveal new bottlenecks towards achieving a quadratic running time to
approximate APSP.
|
[
{
"created": "Sat, 23 Sep 2023 00:27:31 GMT",
"version": "v1"
}
] |
2023-09-26
|
[
[
"Saha",
"Barna",
""
],
[
"Ye",
"Christopher",
""
]
] |
The all pairs shortest path problem (APSP) is one of the foundational problems in computer science. For weighted dense graphs on $n$ vertices, no truly sub-cubic algorithms exist to compute APSP exactly even for undirected graphs. This is popularly known as the APSP conjecture and has played a prominent role in developing the field of fine-grained complexity. The seminal result of Seidel uses fast matrix multiplication (FMM) to compute APSP on unweighted undirected graphs exactly in $\tilde{O}(n^{\omega})$ time, where $\omega=2.372$. Even for unweighted undirected graphs, it is not possible to obtain a $(2-\epsilon)$-approximation of APSP in $o(n^\omega)$ time. In this paper, we provide a multitude of new results for multiplicative and additive approximations of APSP in undirected graphs for both unweighted and weighted cases. We provide new algorithms for multiplicative 2-approximation of unweighted graphs: a deterministic one that runs in $\tilde{O}(n^{2.072})$ time and a randomized one that runs in $\tilde{O}(n^{2.032})$ time in expectation, improving upon the best known bound of $\tilde{O}(n^{2.25})$ by Roditty (STOC, 2023). For $2$-approximating paths of length $\geq k$, $k \geq 4$, we provide the first improvement after Dor, Halperin, Zwick (2000) for dense graphs even just using combinatorial methods, and then improve it further using FMM. We next consider additive approximations, and provide improved bounds for all additive $\beta$-approximations, $\beta \geq 4$. For weighted graphs, we show that by allowing small additive errors along with a $(1+\epsilon)$-multiplicative approximation, it is possible to improve upon Zwick's $\tilde{O}(n^\omega)$ algorithm. Our results point out the crucial role that FMM can play even on approximating APSP on unweighted undirected graphs, and reveal new bottlenecks towards achieving a quadratic running time to approximate APSP.
|
1805.07527
|
Ram Prakash Sharma Mr.
|
Ram Prakash Sharma, Somnath Dey
|
Two-stage quality adaptive fingerprint image enhancement using Fuzzy
c-means clustering based fingerprint quality analysis
|
34 pages, 8 figures, Submitted to Image and Vision Computing
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fingerprint recognition techniques are heavily dependent on the quality of
the fingerprint images. To improve the performance of recognition algorithms
for poor quality images, an efficient enhancement algorithm should be
designed. The performance improvement of a recognition algorithm will be
greater if the enhancement process is adaptive to the fingerprint quality
(wet, dry or normal). In this paper, a quality adaptive fingerprint
enhancement algorithm is proposed. The proposed fingerprint quality assessment
algorithm clusters the fingerprint images into the appropriate quality class
of dry, wet, normal dry, normal wet or good quality using the fuzzy c-means
technique. It takes seven features, namely mean, moisture, variance,
uniformity, contrast, ridge valley area uniformity and ridge valley
uniformity, into account for clustering the fingerprint images into the
appropriate quality class. Fingerprint images of each quality class undergo a
two-stage fingerprint quality enhancement process. A quality adaptive
preprocessing method is used as a front-end before enhancing the fingerprint
images with Gabor, short-term Fourier transform and oriented diffusion
filtering based enhancement techniques. Experimental results show improvement
in the verification results for the FVC2004 datasets. A significant
improvement in equal error rate is observed when using quality adaptive
preprocessing based approaches in comparison to the current state-of-the-art
enhancement techniques.
|
[
{
"created": "Sat, 19 May 2018 06:49:17 GMT",
"version": "v1"
}
] |
2018-05-22
|
[
[
"Sharma",
"Ram Prakash",
""
],
[
"Dey",
"Somnath",
""
]
] |
Fingerprint recognition techniques are heavily dependent on the quality of the fingerprint images. To improve the performance of recognition algorithms for poor quality images, an efficient enhancement algorithm should be designed. The performance improvement of a recognition algorithm will be greater if the enhancement process is adaptive to the fingerprint quality (wet, dry or normal). In this paper, a quality adaptive fingerprint enhancement algorithm is proposed. The proposed fingerprint quality assessment algorithm clusters the fingerprint images into the appropriate quality class of dry, wet, normal dry, normal wet or good quality using the fuzzy c-means technique. It takes seven features, namely mean, moisture, variance, uniformity, contrast, ridge valley area uniformity and ridge valley uniformity, into account for clustering the fingerprint images into the appropriate quality class. Fingerprint images of each quality class undergo a two-stage fingerprint quality enhancement process. A quality adaptive preprocessing method is used as a front-end before enhancing the fingerprint images with Gabor, short-term Fourier transform and oriented diffusion filtering based enhancement techniques. Experimental results show improvement in the verification results for the FVC2004 datasets. A significant improvement in equal error rate is observed when using quality adaptive preprocessing based approaches in comparison to the current state-of-the-art enhancement techniques.
|
1701.05290
|
Rasmus Pagh
|
Joachim Gudmundsson and Rasmus Pagh
|
Range-efficient consistent sampling and locality-sensitive hashing for
polygons
|
A shorter version appears in Proceedings of ISAAC 2017
| null | null | null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Locality-sensitive hashing (LSH) is a fundamental technique for similarity
search and similarity estimation in high-dimensional spaces. The basic idea is
that similar objects should produce hash collisions with probability
significantly larger than objects with low similarity. We consider LSH for
objects that can be represented as point sets in either one or two dimensions.
To make the point sets finite in size, we consider the subset of points on a
grid.
Directly applying LSH (e.g. min-wise hashing) to these point sets would require
time proportional to the number of points. We seek to achieve time that is much
lower than direct approaches.
Technically, we introduce new primitives for range-efficient consistent
sampling (of independent interest), and show how to turn such samples into LSH
values. Another application of our technique is a data structure for quickly
estimating the size of the intersection or union of a set of preprocessed
polygons. Curiously, our consistent sampling method uses transformation to a
geometric problem.
|
[
{
"created": "Thu, 19 Jan 2017 03:57:28 GMT",
"version": "v1"
},
{
"created": "Fri, 22 Sep 2017 12:12:29 GMT",
"version": "v2"
}
] |
2017-09-25
|
[
[
"Gudmundsson",
"Joachim",
""
],
[
"Pagh",
"Rasmus",
""
]
] |
Locality-sensitive hashing (LSH) is a fundamental technique for similarity search and similarity estimation in high-dimensional spaces. The basic idea is that similar objects should produce hash collisions with probability significantly larger than objects with low similarity. We consider LSH for objects that can be represented as point sets in either one or two dimensions. To make the point sets finite in size, we consider the subset of points on a grid. Directly applying LSH (e.g. min-wise hashing) to these point sets would require time proportional to the number of points. We seek to achieve time that is much lower than direct approaches. Technically, we introduce new primitives for range-efficient consistent sampling (of independent interest), and show how to turn such samples into LSH values. Another application of our technique is a data structure for quickly estimating the size of the intersection or union of a set of preprocessed polygons. Curiously, our consistent sampling method uses transformation to a geometric problem.
|
2403.19786
|
Mingxing Rao
|
Mingxing Rao, Yinhong Qin, Soheil Kolouri, Jie Ying Wu, Daniel Moyer
|
Zero-shot Prompt-based Video Encoder for Surgical Gesture Recognition
|
17 pages,4 figures, 7 tables, IPCAI 2024
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Purpose: Surgical video is an important data stream for gesture recognition.
Thus, robust visual encoders for those data streams are similarly important.
Methods: Leveraging the Bridge-Prompt framework, we fine-tune a pre-trained
vision-text model (CLIP) for gesture recognition in surgical videos. This can
utilize extensive outside video data such as text, but also makes use of label
meta-data and weakly supervised contrastive losses. Results: Our experiments
show that the prompt-based video encoder outperforms standard encoders in
surgical gesture recognition tasks. Notably, it displays strong performance in
zero-shot scenarios, where gestures/tasks that were not provided during the
encoder training phase are included in the prediction phase. Additionally, we
measure the benefit of including text descriptions in the feature extractor
training schema. Conclusion: Bridge-Prompt and similar pre-trained+fine-tuned
video encoder models provide significant visual representations for surgical
robotics, especially in gesture recognition tasks. Given the diverse range of
surgical tasks (gestures), the ability of these models to zero-shot transfer
without the need for any task (gesture) specific retraining makes them
invaluable.
|
[
{
"created": "Thu, 28 Mar 2024 19:10:54 GMT",
"version": "v1"
}
] |
2024-04-01
|
[
[
"Rao",
"Mingxing",
""
],
[
"Qin",
"Yinhong",
""
],
[
"Kolouri",
"Soheil",
""
],
[
"Wu",
"Jie Ying",
""
],
[
"Moyer",
"Daniel",
""
]
] |
Purpose: Surgical video is an important data stream for gesture recognition. Thus, robust visual encoders for those data streams are similarly important. Methods: Leveraging the Bridge-Prompt framework, we fine-tune a pre-trained vision-text model (CLIP) for gesture recognition in surgical videos. This can utilize extensive outside video data such as text, but also makes use of label meta-data and weakly supervised contrastive losses. Results: Our experiments show that the prompt-based video encoder outperforms standard encoders in surgical gesture recognition tasks. Notably, it displays strong performance in zero-shot scenarios, where gestures/tasks that were not provided during the encoder training phase are included in the prediction phase. Additionally, we measure the benefit of including text descriptions in the feature extractor training schema. Conclusion: Bridge-Prompt and similar pre-trained+fine-tuned video encoder models provide significant visual representations for surgical robotics, especially in gesture recognition tasks. Given the diverse range of surgical tasks (gestures), the ability of these models to zero-shot transfer without the need for any task (gesture) specific retraining makes them invaluable.
|
2205.12215
|
Gabriele Sarti
|
Gabriele Sarti, Arianna Bisazza, Ana Guerberof Arenas, Antonio Toral
|
DivEMT: Neural Machine Translation Post-Editing Effort Across
Typologically Diverse Languages
|
EMNLP 2022, materials: https://github.com/gsarti/divemt
|
Proceedings of EMNLP (2022) 7795-7816
|
10.18653/v1/2022.emnlp-main.532
| null |
cs.CL
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We introduce DivEMT, the first publicly available post-editing study of
Neural Machine Translation (NMT) over a typologically diverse set of target
languages. Using a strictly controlled setup, 18 professional translators were
instructed to translate or post-edit the same set of English documents into
Arabic, Dutch, Italian, Turkish, Ukrainian, and Vietnamese. During the process,
their edits, keystrokes, editing times and pauses were recorded, enabling an
in-depth, cross-lingual evaluation of NMT quality and post-editing
effectiveness. Using this new dataset, we assess the impact of two
state-of-the-art NMT systems, Google Translate and the multilingual mBART-50
model, on translation productivity. We find that post-editing is consistently
faster than translation from scratch. However, the magnitude of productivity
gains varies widely across systems and languages, highlighting major
disparities in post-editing effectiveness for languages at different degrees of
typological relatedness to English, even when controlling for system
architecture and training data size. We publicly release the complete dataset
including all collected behavioral data, to foster new research on the
translation capabilities of NMT systems for typologically diverse languages.
|
[
{
"created": "Tue, 24 May 2022 17:22:52 GMT",
"version": "v1"
},
{
"created": "Tue, 18 Oct 2022 16:38:00 GMT",
"version": "v2"
}
] |
2023-09-08
|
[
[
"Sarti",
"Gabriele",
""
],
[
"Bisazza",
"Arianna",
""
],
[
"Arenas",
"Ana Guerberof",
""
],
[
"Toral",
"Antonio",
""
]
] |
We introduce DivEMT, the first publicly available post-editing study of Neural Machine Translation (NMT) over a typologically diverse set of target languages. Using a strictly controlled setup, 18 professional translators were instructed to translate or post-edit the same set of English documents into Arabic, Dutch, Italian, Turkish, Ukrainian, and Vietnamese. During the process, their edits, keystrokes, editing times and pauses were recorded, enabling an in-depth, cross-lingual evaluation of NMT quality and post-editing effectiveness. Using this new dataset, we assess the impact of two state-of-the-art NMT systems, Google Translate and the multilingual mBART-50 model, on translation productivity. We find that post-editing is consistently faster than translation from scratch. However, the magnitude of productivity gains varies widely across systems and languages, highlighting major disparities in post-editing effectiveness for languages at different degrees of typological relatedness to English, even when controlling for system architecture and training data size. We publicly release the complete dataset including all collected behavioral data, to foster new research on the translation capabilities of NMT systems for typologically diverse languages.
|
1808.05665
|
Lea Sch\"onherr
|
Lea Sch\"onherr, Katharina Kohls, Steffen Zeiler, Thorsten Holz,
Dorothea Kolossa
|
Adversarial Attacks Against Automatic Speech Recognition Systems via
Psychoacoustic Hiding
| null | null | null | null |
cs.CR cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Voice interfaces are becoming widely accepted as input methods for a diverse
set of devices. This development is driven by rapid improvements in automatic
speech recognition (ASR), which now performs on par with human listening in
many tasks. These improvements are based on an ongoing evolution of DNNs as
the computational core of ASR. However, recent research results show that DNNs
are vulnerable to adversarial perturbations, which allow attackers to force
the transcription into a malicious output.
In this paper, we introduce a new type of adversarial example based on
psychoacoustic hiding. Our attack exploits the characteristics of DNN-based ASR
systems, where we extend the original analysis procedure by an additional
backpropagation step. We use this backpropagation to learn the degrees of
freedom for the adversarial perturbation of the input signal, i.e., we apply a
psychoacoustic model and manipulate the acoustic signal below the thresholds of
human perception. To further minimize the perceptibility of the perturbations,
we use forced alignment to find the best fitting temporal alignment between the
original audio sample and the malicious target transcription. These extensions
allow us to embed an arbitrary audio input with a malicious voice command that
is then transcribed by the ASR system, with the audio signal remaining barely
distinguishable from the original signal. In an experimental evaluation, we
attack the state-of-the-art speech recognition system Kaldi and determine the
best performing parameter and analysis setup for different types of input. Our
results show that we are successful in up to 98% of cases with a computational
effort of fewer than two minutes for a ten-second audio file. Based on user
studies, we found that none of our target transcriptions were audible to human
listeners, who still understand the original speech content with unchanged
accuracy.
|
[
{
"created": "Thu, 16 Aug 2018 20:00:47 GMT",
"version": "v1"
},
{
"created": "Tue, 30 Oct 2018 12:02:07 GMT",
"version": "v2"
}
] |
2018-10-31
|
[
[
"Schönherr",
"Lea",
""
],
[
"Kohls",
"Katharina",
""
],
[
"Zeiler",
"Steffen",
""
],
[
"Holz",
"Thorsten",
""
],
[
"Kolossa",
"Dorothea",
""
]
] |
Voice interfaces are becoming widely accepted as input methods for a diverse set of devices. This development is driven by rapid improvements in automatic speech recognition (ASR), which now performs on par with human listening in many tasks. These improvements are based on an ongoing evolution of DNNs as the computational core of ASR. However, recent research results show that DNNs are vulnerable to adversarial perturbations, which allow attackers to force the transcription into a malicious output. In this paper, we introduce a new type of adversarial example based on psychoacoustic hiding. Our attack exploits the characteristics of DNN-based ASR systems, where we extend the original analysis procedure by an additional backpropagation step. We use this backpropagation to learn the degrees of freedom for the adversarial perturbation of the input signal, i.e., we apply a psychoacoustic model and manipulate the acoustic signal below the thresholds of human perception. To further minimize the perceptibility of the perturbations, we use forced alignment to find the best fitting temporal alignment between the original audio sample and the malicious target transcription. These extensions allow us to embed an arbitrary audio input with a malicious voice command that is then transcribed by the ASR system, with the audio signal remaining barely distinguishable from the original signal. In an experimental evaluation, we attack the state-of-the-art speech recognition system Kaldi and determine the best performing parameter and analysis setup for different types of input. Our results show that we are successful in up to 98% of cases with a computational effort of fewer than two minutes for a ten-second audio file. Based on user studies, we found that none of our target transcriptions were audible to human listeners, who still understand the original speech content with unchanged accuracy.
|
2406.07040
|
Michael Foster
|
Germ\'an Vega, Roland Groz, Catherine Oriat, Michael Foster, Neil
Walkinshaw, Adenilso Sim\~ao
|
Learning EFSM Models with Registers in Guards
|
14 pages (last page blank), 8 figures, 4 algorithms Submitted to
LearnAut workshop 2024 (not published)
| null | null | null |
cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents an active inference method for Extended Finite State
Machines, where inputs and outputs are parametrized, and transitions can be
conditioned by guards involving input parameters and internal variables called
registers. The method applies to (software) systems that cannot be reset, so it
learns an EFSM model of the system on a single trace.
|
[
{
"created": "Tue, 11 Jun 2024 07:55:09 GMT",
"version": "v1"
}
] |
2024-06-12
|
[
[
"Vega",
"Germán",
""
],
[
"Groz",
"Roland",
""
],
[
"Oriat",
"Catherine",
""
],
[
"Foster",
"Michael",
""
],
[
"Walkinshaw",
"Neil",
""
],
[
"Simão",
"Adenilso",
""
]
] |
This paper presents an active inference method for Extended Finite State Machines, where inputs and outputs are parametrized, and transitions can be conditioned by guards involving input parameters and internal variables called registers. The method applies to (software) systems that cannot be reset, so it learns an EFSM model of the system on a single trace.
|
2404.03061
|
Kleinner Farias
|
Maicon Azevedo da Luz, Kleinner Farias
|
WebSPL: A Software Product Line for Web Applications
|
6 figures, 3 tables
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Companies developing Web applications have faced an increasing demand for
high-quality products with low cost and ever shorter production time. However,
developing such applications is still considered a time-consuming and
error-prone task, mainly due to the difficulty of promoting the reuse of
features (or functionalities) and modules, and the heterogeneity of Web
frameworks. Nowadays, companies must face ever-changing requirements. Software
product lines emerged as an alternative to face this challenge by creating a
collection of applications from a core of software assets. Despite the
potential, the current literature lacks works that propose a product line for
Web applications. This paper, therefore, presents WebSPL, a product line for
Web applications that supports the main features found in Web applications in
real-world settings. The proposed WebSPL was evaluated by comparing it with a
Web application developed based on a traditional approach. A case study that
involved the development of two Web applications enabled data collection. Two
Web applications were developed -- one with and another without the support of
the proposed WebSPL. We compared these two applications using software design
metrics, including complexity, size, duplicate lines, and technical debt. The
initial results were encouraging and showed the potential for using WebSPL to
support the development of Web applications.
|
[
{
"created": "Wed, 3 Apr 2024 21:04:54 GMT",
"version": "v1"
}
] |
2024-04-05
|
[
[
"da Luz",
"Maicon Azevedo",
""
],
[
"Farias",
"Kleinner",
""
]
] |
Companies developing Web applications have faced an increasing demand for high-quality products with low cost and ever shorter production time. However, developing such applications is still considered a time-consuming and error-prone task, mainly due to the difficulty of promoting the reuse of features (or functionalities) and modules, and the heterogeneity of Web frameworks. Nowadays, companies must face ever-changing requirements. Software product lines emerged as an alternative to face this challenge by creating a collection of applications from a core of software assets. Despite the potential, the current literature lacks works that propose a product line for Web applications. This paper, therefore, presents WebSPL, a product line for Web applications that supports the main features found in Web applications in real-world settings. The proposed WebSPL was evaluated by comparing it with a Web application developed based on a traditional approach. A case study that involved the development of two Web applications enabled data collection. Two Web applications were developed -- one with and another without the support of the proposed WebSPL. We compared these two applications using software design metrics, including complexity, size, duplicate lines, and technical debt. The initial results were encouraging and showed the potential for using WebSPL to support the development of Web applications.
|
2105.07924
|
Yuan Cao
|
Yuan Cao, Yonglin Cao, Fanghui Ma
|
Construction and enumeration of left dihedral codes satisfying certain
duality properties
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Let $\mathbb{F}_{q}$ be the finite field of $q$ elements and let
$D_{2n}=\langle x,y\mid x^n=1, y^2=1, yxy=x^{n-1}\rangle$ be the dihedral group
of order $2n$. Left ideals of the group algebra $\mathbb{F}_{q}[D_{2n}]$ are
known as left dihedral codes over $\mathbb{F}_{q}$ of length $2n$, and
abbreviated as left $D_{2n}$-codes. Let ${\rm gcd}(n,q)=1$. In this paper, we
give an explicit representation for the Euclidean hull of every left
$D_{2n}$-code over $\mathbb{F}_{q}$. On this basis, we determine all distinct
Euclidean LCD codes and Euclidean self-orthogonal codes which are left
$D_{2n}$-codes over $\mathbb{F}_{q}$. In particular, we provide an explicit
representation and a precise enumeration for these two subclasses of left
$D_{2n}$-codes and self-dual left $D_{2n}$-codes,
respectively. Moreover, we give a direct and simple method for determining
the encoder (generator matrix) of any left $D_{2n}$-code over $\mathbb{F}_{q}$,
and present several numerical examples to illustrate our applications.
|
[
{
"created": "Mon, 17 May 2021 15:01:38 GMT",
"version": "v1"
}
] |
2021-05-18
|
[
[
"Cao",
"Yuan",
""
],
[
"Cao",
"Yonglin",
""
],
[
"Ma",
"Fanghui",
""
]
] |
Let $\mathbb{F}_{q}$ be the finite field of $q$ elements and let $D_{2n}=\langle x,y\mid x^n=1, y^2=1, yxy=x^{n-1}\rangle$ be the dihedral group of order $2n$. Left ideals of the group algebra $\mathbb{F}_{q}[D_{2n}]$ are known as left dihedral codes over $\mathbb{F}_{q}$ of length $2n$, and abbreviated as left $D_{2n}$-codes. Let ${\rm gcd}(n,q)=1$. In this paper, we give an explicit representation for the Euclidean hull of every left $D_{2n}$-code over $\mathbb{F}_{q}$. On this basis, we determine all distinct Euclidean LCD codes and Euclidean self-orthogonal codes which are left $D_{2n}$-codes over $\mathbb{F}_{q}$. In particular, we provide an explicit representation and a precise enumeration for these two subclasses of left $D_{2n}$-codes and self-dual left $D_{2n}$-codes, respectively. Moreover, we give a direct and simple method for determining the encoder (generator matrix) of any left $D_{2n}$-code over $\mathbb{F}_{q}$, and present several numerical examples to illustrate our applications.
|
2005.11194
|
Charlie Kirkwood
|
Charlie Kirkwood
|
Deep covariate-learning: optimising information extraction from terrain
texture for geostatistical modelling applications
|
14 pages, 8 figures, submitted to journal
| null | null | null |
cs.CV cs.LG eess.IV stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Where data is available, it is desirable in geostatistical modelling to make
use of additional covariates, for example terrain data, in order to improve
prediction accuracy in the modelling task. While elevation itself may be
important, additional explanatory power for any given problem can be sought
(but not necessarily found) by filtering digital elevation models to extract
higher-order derivatives such as slope angles, curvatures, and roughness. In
essence, it would be beneficial to extract as much task-relevant information as
possible from the elevation grid. However, given the complexities of the
natural world, chance dictates that the use of 'off-the-shelf' filters is
unlikely to derive covariates that provide strong explanatory power to the
target variable at hand, and any attempt to manually design informative
covariates is likely to be a trial-and-error process -- not optimal. In this
paper we present a solution to this problem in the form of a deep learning
approach to automatically deriving optimal task-specific terrain texture
covariates from a standard SRTM 90m gridded digital elevation model (DEM). For
our target variables we use point-sampled geochemical data from the British
Geological Survey: concentrations of potassium, calcium and arsenic in stream
sediments. We find that our deep learning approach produces covariates for
geostatistical modelling that have surprisingly strong explanatory power on
their own, with R-squared values around 0.6 for all three elements (with
arsenic on the log scale). These results are achieved without the neural
network being provided with easting, northing, or absolute elevation as inputs,
and purely reflect the capacity of our deep neural network to extract
task-specific information from terrain texture. We hope that these results will
inspire further investigation into the capabilities of deep learning within
geostatistical applications.
|
[
{
"created": "Fri, 22 May 2020 14:00:28 GMT",
"version": "v1"
},
{
"created": "Mon, 15 Jun 2020 11:19:48 GMT",
"version": "v2"
}
] |
2020-06-16
|
[
[
"Kirkwood",
"Charlie",
""
]
] |
Where data is available, it is desirable in geostatistical modelling to make use of additional covariates, for example terrain data, in order to improve prediction accuracy in the modelling task. While elevation itself may be important, additional explanatory power for any given problem can be sought (but not necessarily found) by filtering digital elevation models to extract higher-order derivatives such as slope angles, curvatures, and roughness. In essence, it would be beneficial to extract as much task-relevant information as possible from the elevation grid. However, given the complexities of the natural world, chance dictates that the use of 'off-the-shelf' filters is unlikely to derive covariates that provide strong explanatory power to the target variable at hand, and any attempt to manually design informative covariates is likely to be a trial-and-error process -- not optimal. In this paper we present a solution to this problem in the form of a deep learning approach to automatically deriving optimal task-specific terrain texture covariates from a standard SRTM 90m gridded digital elevation model (DEM). For our target variables we use point-sampled geochemical data from the British Geological Survey: concentrations of potassium, calcium and arsenic in stream sediments. We find that our deep learning approach produces covariates for geostatistical modelling that have surprisingly strong explanatory power on their own, with R-squared values around 0.6 for all three elements (with arsenic on the log scale). These results are achieved without the neural network being provided with easting, northing, or absolute elevation as inputs, and purely reflect the capacity of our deep neural network to extract task-specific information from terrain texture. We hope that these results will inspire further investigation into the capabilities of deep learning within geostatistical applications.
|
1206.4679
|
Ryohei Fujimaki
|
Ryohei Fujimaki (NEC Laboratories America), Kohei Hayashi (Nara
Institute of Science and Technology)
|
Factorized Asymptotic Bayesian Hidden Markov Models
|
ICML2012
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper addresses the issue of model selection for hidden Markov models
(HMMs). We generalize factorized asymptotic Bayesian inference (FAB), which has
been recently developed for model selection on independent hidden variables
(i.e., mixture models), for time-dependent hidden variables. As with FAB in
mixture models, FAB for HMMs is derived as an iterative lower bound
maximization algorithm of a factorized information criterion (FIC). It
inherits, from FAB for mixture models, several desirable properties for
learning HMMs, such as asymptotic consistency of FIC with marginal
log-likelihood, a shrinkage effect for hidden state selection, monotonic
increase of the lower FIC bound through the iterative optimization. Further, it
does not have a tunable hyper-parameter, and thus its model selection process
can be fully automated. Experimental results show that FAB outperforms
state-of-the-art variational Bayesian HMM and non-parametric Bayesian HMM in
terms of model selection accuracy and computational efficiency.
|
[
{
"created": "Mon, 18 Jun 2012 15:37:59 GMT",
"version": "v1"
}
] |
2012-06-22
|
[
[
"Fujimaki",
"Ryohei",
"",
"NEC Laboratories America"
],
[
"Hayashi",
"Kohei",
"",
"Nara\n Institute of Science and Technology"
]
] |
This paper addresses the issue of model selection for hidden Markov models (HMMs). We generalize factorized asymptotic Bayesian inference (FAB), which has been recently developed for model selection on independent hidden variables (i.e., mixture models), for time-dependent hidden variables. As with FAB in mixture models, FAB for HMMs is derived as an iterative lower bound maximization algorithm of a factorized information criterion (FIC). It inherits, from FAB for mixture models, several desirable properties for learning HMMs, such as asymptotic consistency of FIC with marginal log-likelihood, a shrinkage effect for hidden state selection, monotonic increase of the lower FIC bound through the iterative optimization. Further, it does not have a tunable hyper-parameter, and thus its model selection process can be fully automated. Experimental results show that FAB outperforms state-of-the-art variational Bayesian HMM and non-parametric Bayesian HMM in terms of model selection accuracy and computational efficiency.
|
2102.07530
|
Huanjie Wang
|
Huanjie Wang, Wenshuo Wang, Shihua Yuan, Xueyuan Li
|
Uncovering Interpretable Internal States of Merging Tasks at Highway
On-Ramps for Autonomous Driving Decision-Making
|
12 pages, 9 figures
| null |
10.1109/TASE.2021.3103179
| null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Humans make daily routine decisions based on their internal states in
intricate interaction scenarios. This paper presents a probabilistically
reconstructive learning approach to identify the internal states of
multi-vehicle sequential interactions when merging at highway on-ramps. We
treated the merging task's sequential decision as a dynamic, stochastic process
and then integrated the internal states into an HMM-GMR model, a probabilistic
combination of an extended Gaussian mixture regression (GMR) and hidden Markov
models (HMM). We also developed a variant of the expectation-maximization (EM)
algorithm to estimate the model parameters and verified it on a real-world data
set. Experimental results reveal that three interpretable internal states can
semantically describe the interactive merge procedure at highway on-ramps. This
finding provides a basis to develop an efficient model-based decision-making
algorithm for autonomous vehicles (AVs) in a partially observable environment.
|
[
{
"created": "Mon, 15 Feb 2021 13:06:35 GMT",
"version": "v1"
},
{
"created": "Fri, 14 May 2021 03:02:42 GMT",
"version": "v2"
}
] |
2021-08-17
|
[
[
"Wang",
"Huanjie",
""
],
[
"Wang",
"Wenshuo",
""
],
[
"Yuan",
"Shihua",
""
],
[
"Li",
"Xueyuan",
""
]
] |
Humans make daily routine decisions based on their internal states in intricate interaction scenarios. This paper presents a probabilistically reconstructive learning approach to identify the internal states of multi-vehicle sequential interactions when merging at highway on-ramps. We treated the merging task's sequential decision as a dynamic, stochastic process and then integrated the internal states into an HMM-GMR model, a probabilistic combination of an extended Gaussian mixture regression (GMR) and hidden Markov models (HMM). We also developed a variant of the expectation-maximization (EM) algorithm to estimate the model parameters and verified it on a real-world data set. Experimental results reveal that three interpretable internal states can semantically describe the interactive merge procedure at highway on-ramps. This finding provides a basis to develop an efficient model-based decision-making algorithm for autonomous vehicles (AVs) in a partially observable environment.
|
1502.03634
|
Youngsung Kim
|
Youngsung Kim, Francisco C. Pereira, Fang Zhao, Ajinkya Ghorpade, P.
Christopher Zegras, Moshe Ben-Akiva
|
Activity recognition for a smartphone and web based travel survey
| null | null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In transport modeling and prediction, trip purposes play an important role
since mobility choices (e.g. modes, routes, departure times) are made in order
to carry out specific activities. Activity based models, which have been
gaining popularity in recent years, are built from a large number of observed
trips and their purposes. However, data acquired through traditional
interview-based travel surveys lack the accuracy and quantity required by such
models. Smartphones and interactive web interfaces have emerged as an
attractive alternative to conventional travel surveys. A smartphone-based
travel survey, Future Mobility Survey (FMS), was developed and field-tested in
Singapore and collected travel data from more than 1000 participants for
multiple days. To provide a more intelligent interface, inferring the
activities of a user at a certain location is a crucial challenge. This paper
presents a learning model that infers the most likely activity associated with a
certain visited place. The data collected in FMS contain errors or noise due to
various reasons, so a robust approach via ensemble learning is used to improve
generalization performance. Our model takes advantage of cross-user historical
data as well as user-specific information, including socio-demographics. Our
empirical results using FMS data demonstrate that the proposed method
contributes significantly to our travel survey application.
|
[
{
"created": "Thu, 12 Feb 2015 12:40:32 GMT",
"version": "v1"
}
] |
2015-02-16
|
[
[
"Kim",
"Youngsung",
""
],
[
"Pereira",
"Francisco C.",
""
],
[
"Zhao",
"Fang",
""
],
[
"Ghorpade",
"Ajinkya",
""
],
[
"Zegras",
"P. Christopher",
""
],
[
"Ben-Akiva",
"Moshe",
""
]
] |
In transport modeling and prediction, trip purposes play an important role since mobility choices (e.g. modes, routes, departure times) are made in order to carry out specific activities. Activity based models, which have been gaining popularity in recent years, are built from a large number of observed trips and their purposes. However, data acquired through traditional interview-based travel surveys lack the accuracy and quantity required by such models. Smartphones and interactive web interfaces have emerged as an attractive alternative to conventional travel surveys. A smartphone-based travel survey, Future Mobility Survey (FMS), was developed and field-tested in Singapore and collected travel data from more than 1000 participants for multiple days. To provide a more intelligent interface, inferring the activities of a user at a certain location is a crucial challenge. This paper presents a learning model that infers the most likely activity associated with a certain visited place. The data collected in FMS contain errors or noise due to various reasons, so a robust approach via ensemble learning is used to improve generalization performance. Our model takes advantage of cross-user historical data as well as user-specific information, including socio-demographics. Our empirical results using FMS data demonstrate that the proposed method contributes significantly to our travel survey application.
|
2310.06390
|
Joosung Lee
|
Joosung Lee, Minsik Oh, Donghun Lee
|
P5: Plug-and-Play Persona Prompting for Personalized Response Selection
|
EMNLP 2023 main conference
| null | null | null |
cs.CL cs.AI cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
The use of persona-grounded retrieval-based chatbots is crucial for
personalized conversations, but there are several challenges that need to be
addressed. 1) In general, collecting a persona-grounded corpus is very
expensive. 2) The chatbot system does not always respond in consideration of
the persona in real applications. To address these challenges, we propose a
plug-and-play persona prompting method. Our system can function as a standard
open-domain chatbot if persona information is not available. We demonstrate
that this approach performs well in the zero-shot setting, which reduces the
dependence on persona-grounded training data. This makes it easier to expand the system to
other languages without the need to build a persona-grounded corpus.
Additionally, our model can be fine-tuned for even better performance. In our
experiments, the zero-shot model improved the standard model by 7.71 and 1.04
points in the original persona and revised persona, respectively. The
fine-tuned model improved the previous state-of-the-art system by 1.95 and 3.39
points in the original persona and revised persona, respectively. To the best
of our knowledge, this is the first attempt to solve the problem of
personalized response selection using prompt sequences. Our code is available
on github~\footnote{https://github.com/rungjoo/plug-and-play-prompt-persona}.
|
[
{
"created": "Tue, 10 Oct 2023 07:53:36 GMT",
"version": "v1"
}
] |
2023-10-11
|
[
[
"Lee",
"Joosung",
""
],
[
"Oh",
"Minsik",
""
],
[
"Lee",
"Donghun",
""
]
] |
The use of persona-grounded retrieval-based chatbots is crucial for personalized conversations, but there are several challenges that need to be addressed. 1) In general, collecting a persona-grounded corpus is very expensive. 2) The chatbot system does not always respond in consideration of the persona in real applications. To address these challenges, we propose a plug-and-play persona prompting method. Our system can function as a standard open-domain chatbot if persona information is not available. We demonstrate that this approach performs well in the zero-shot setting, which reduces the dependence on persona-grounded training data. This makes it easier to expand the system to other languages without the need to build a persona-grounded corpus. Additionally, our model can be fine-tuned for even better performance. In our experiments, the zero-shot model improved the standard model by 7.71 and 1.04 points in the original persona and revised persona, respectively. The fine-tuned model improved the previous state-of-the-art system by 1.95 and 3.39 points in the original persona and revised persona, respectively. To the best of our knowledge, this is the first attempt to solve the problem of personalized response selection using prompt sequences. Our code is available on github~\footnote{https://github.com/rungjoo/plug-and-play-prompt-persona}.
|
2203.06760
|
Yuan Gong
|
Yuan Gong, Sameer Khurana, Andrew Rouditchenko, and James Glass
|
CMKD: CNN/Transformer-Based Cross-Model Knowledge Distillation for Audio
Classification
| null | null | null | null |
cs.SD cs.AI eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Audio classification is an active research area with a wide range of
applications. Over the past decade, convolutional neural networks (CNNs) have
been the de-facto standard building block for end-to-end audio classification
models. Recently, neural networks based solely on self-attention mechanisms
such as the Audio Spectrogram Transformer (AST) have been shown to outperform
CNNs. In this paper, we find an intriguing interaction between the two very
different models - CNN and AST models are good teachers for each other. When we
use either of them as the teacher and train the other model as the student via
knowledge distillation (KD), the performance of the student model noticeably
improves, and in many cases, is better than the teacher model. In our
experiments with this CNN/Transformer Cross-Model Knowledge Distillation (CMKD)
method we achieve new state-of-the-art performance on FSD50K, AudioSet, and
ESC-50.
|
[
{
"created": "Sun, 13 Mar 2022 21:14:04 GMT",
"version": "v1"
}
] |
2022-03-15
|
[
[
"Gong",
"Yuan",
""
],
[
"Khurana",
"Sameer",
""
],
[
"Rouditchenko",
"Andrew",
""
],
[
"Glass",
"James",
""
]
] |
Audio classification is an active research area with a wide range of applications. Over the past decade, convolutional neural networks (CNNs) have been the de-facto standard building block for end-to-end audio classification models. Recently, neural networks based solely on self-attention mechanisms such as the Audio Spectrogram Transformer (AST) have been shown to outperform CNNs. In this paper, we find an intriguing interaction between the two very different models - CNN and AST models are good teachers for each other. When we use either of them as the teacher and train the other model as the student via knowledge distillation (KD), the performance of the student model noticeably improves, and in many cases, is better than the teacher model. In our experiments with this CNN/Transformer Cross-Model Knowledge Distillation (CMKD) method we achieve new state-of-the-art performance on FSD50K, AudioSet, and ESC-50.
|
1309.1700
|
Yang Liu
|
Yang Liu
|
Towards a Unified Belief Structure in Games with indeterminate
probabilities
|
Preliminary versions of different parts of this paper were presented at the
4th Formal Epistemology Festival (Konstanz 2012) and the 67th European Meeting
of the Econometric Society (EEA|ESEM 2013)
| null | null | null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper provides an analysis of different formal representations of
beliefs in epistemic game theory. The aim is to attempt a synthesis of
different structures of beliefs in the presence of indeterminate probabilities.
Special attention is also paid to the decision-theoretic principle known as the
thesis of no subjective probability for self-action. Conditions to cope with
this principle are given which underlie the interrelationships between
different models of beliefs, and it is shown that under these conditions
different doxastic structures can be coherently unified.
|
[
{
"created": "Fri, 6 Sep 2013 17:00:31 GMT",
"version": "v1"
}
] |
2013-09-09
|
[
[
"Liu",
"Yang",
""
]
] |
This paper provides an analysis of different formal representations of beliefs in epistemic game theory. The aim is to attempt a synthesis of different structures of beliefs in the presence of indeterminate probabilities. Special attention is also paid to the decision-theoretic principle known as the thesis of no subjective probability for self-action. Conditions to cope with this principle are given which underlie the interrelationships between different models of beliefs, and it is shown that under these conditions different doxastic structures can be coherently unified.
|
2307.09065
|
Avishkar Saha
|
Avishkar Saha, Oscar Mendez, Chris Russell, Richard Bowden
|
Learning Adaptive Neighborhoods for Graph Neural Networks
|
ICCV 2023
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Graph convolutional networks (GCNs) enable end-to-end learning on graph
structured data. However, many works assume a given graph structure. When the
input graph is noisy or unavailable, one approach is to construct or learn a
latent graph structure. These methods typically fix the choice of node degree
for the entire graph, which is suboptimal. Instead, we propose a novel
end-to-end differentiable graph generator which builds graph topologies where
each node selects both its neighborhood and its size. Our module can be readily
integrated into existing pipelines involving graph convolution operations,
replacing the predetermined or existing adjacency matrix with one that is
learned, and optimized, as part of the general objective. As such it is
applicable to any GCN. We integrate our module into trajectory prediction,
point cloud classification and node classification pipelines resulting in
improved accuracy over other structure-learning methods across a wide range of
datasets and GCN backbones.
|
[
{
"created": "Tue, 18 Jul 2023 08:37:25 GMT",
"version": "v1"
}
] |
2023-07-19
|
[
[
"Saha",
"Avishkar",
""
],
[
"Mendez",
"Oscar",
""
],
[
"Russell",
"Chris",
""
],
[
"Bowden",
"Richard",
""
]
] |
Graph convolutional networks (GCNs) enable end-to-end learning on graph structured data. However, many works assume a given graph structure. When the input graph is noisy or unavailable, one approach is to construct or learn a latent graph structure. These methods typically fix the choice of node degree for the entire graph, which is suboptimal. Instead, we propose a novel end-to-end differentiable graph generator which builds graph topologies where each node selects both its neighborhood and its size. Our module can be readily integrated into existing pipelines involving graph convolution operations, replacing the predetermined or existing adjacency matrix with one that is learned, and optimized, as part of the general objective. As such it is applicable to any GCN. We integrate our module into trajectory prediction, point cloud classification and node classification pipelines resulting in improved accuracy over other structure-learning methods across a wide range of datasets and GCN backbones.
|
1603.04610
|
Florent Altche
|
Florent Altch\'e, Xiangjun Qian, Arnaud de La Fortelle
|
Time-optimal Coordination of Mobile Robots along Specified Paths
|
Published in 2016 IEEE/RSJ International Conference on Intelligent
Robots and Systems (IROS)
| null |
10.1109/IROS.2016.7759737
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we address the problem of time-optimal coordination of mobile
robots under kinodynamic constraints along specified paths. We propose a novel
approach based on time discretization that leads to a mixed-integer linear
programming (MILP) formulation. This problem can be solved using
general-purpose MILP solvers in a reasonable time, resulting in a
resolution-optimal solution. Moreover, unlike previous work found in the
literature, our formulation allows an exact linear modeling (up to the
discretization resolution) of second-order dynamic constraints. Extensive
simulations are performed to demonstrate the effectiveness of our approach.
|
[
{
"created": "Tue, 15 Mar 2016 09:29:28 GMT",
"version": "v1"
},
{
"created": "Tue, 30 Aug 2016 09:34:25 GMT",
"version": "v2"
},
{
"created": "Wed, 5 Apr 2017 15:30:44 GMT",
"version": "v3"
}
] |
2017-04-06
|
[
[
"Altché",
"Florent",
""
],
[
"Qian",
"Xiangjun",
""
],
[
"de La Fortelle",
"Arnaud",
""
]
] |
In this paper, we address the problem of time-optimal coordination of mobile robots under kinodynamic constraints along specified paths. We propose a novel approach based on time discretization that leads to a mixed-integer linear programming (MILP) formulation. This problem can be solved using general-purpose MILP solvers in a reasonable time, resulting in a resolution-optimal solution. Moreover, unlike previous work found in the literature, our formulation allows an exact linear modeling (up to the discretization resolution) of second-order dynamic constraints. Extensive simulations are performed to demonstrate the effectiveness of our approach.
|
1906.06085
|
Dimitri Vorona
|
Dimitri Vorona, Andreas Kipf, Thomas Neumann, Alfons Kemper
|
DeepSPACE: Approximate Geospatial Query Processing with Deep Learning
| null | null | null | null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The amount of available geospatial data grows at an ever-faster pace. This
leads to a constantly increasing demand for processing power and storage in
order to provide data analysis in a timely manner.
of geospatial processing is visual and exploratory in nature, thus having
bounded precision requirements. We present DeepSPACE, a deep learning-based
approximate geospatial query processing engine which combines modest hardware
requirements with the ability to answer flexible aggregation queries while
keeping the required state to a few hundred KiBs.
|
[
{
"created": "Fri, 14 Jun 2019 09:16:16 GMT",
"version": "v1"
}
] |
2019-06-17
|
[
[
"Vorona",
"Dimitri",
""
],
[
"Kipf",
"Andreas",
""
],
[
"Neumann",
"Thomas",
""
],
[
"Kemper",
"Alfons",
""
]
] |
The amount of available geospatial data grows at an ever-faster pace. This leads to a constantly increasing demand for processing power and storage in order to provide data analysis in a timely manner. At the same time, a lot of geospatial processing is visual and exploratory in nature, thus having bounded precision requirements. We present DeepSPACE, a deep learning-based approximate geospatial query processing engine which combines modest hardware requirements with the ability to answer flexible aggregation queries while keeping the required state to a few hundred KiBs.
|
2302.01417
|
Nelly Elsayed
|
Shrish Pellakur, Nelly Elsayed, Zag ElSayed, Murat Ozer
|
A Convolutional-based Model for Early Prediction of Alzheimer's based on
the Dementia Stage in the MRI Brain Images
|
Short paper, Under Review in FLAIRS-36
| null | null | null |
cs.LG cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Alzheimer's disease is a degenerative brain disease. It is the primary cause
of dementia in adults and progressively destroys brain memory. Though
Alzheimer's disease currently has no cure, diagnosing it at an earlier stage
will help reduce the severity of the disease. Thus, early diagnosis of
Alzheimer's could help to reduce or stop the disease from progressing. In this
paper, we propose a deep convolutional neural network-based model that
determines the stage of dementia in adults from Magnetic Resonance Imaging
(MRI) brain images in order to detect the early onset of Alzheimer's.
|
[
{
"created": "Thu, 2 Feb 2023 21:10:31 GMT",
"version": "v1"
},
{
"created": "Wed, 15 Feb 2023 23:54:18 GMT",
"version": "v2"
}
] |
2023-02-17
|
[
[
"Pellakur",
"Shrish",
""
],
[
"Elsayed",
"Nelly",
""
],
[
"ElSayed",
"Zag",
""
],
[
"Ozer",
"Murat",
""
]
] |
Alzheimer's disease is a degenerative brain disease. It is the primary cause of dementia in adults and progressively destroys brain memory. Though Alzheimer's disease currently has no cure, diagnosing it at an earlier stage will help reduce the severity of the disease. Thus, early diagnosis of Alzheimer's could help to reduce or stop the disease from progressing. In this paper, we propose a deep convolutional neural network-based model that determines the stage of dementia in adults from Magnetic Resonance Imaging (MRI) brain images in order to detect the early onset of Alzheimer's.
|
2205.11981
|
Yaoyao Zhong
|
Yaoyao Zhong and Weihong Deng
|
OPOM: Customized Invisible Cloak towards Face Privacy Protection
|
This article has been accepted by IEEE Transactions on Pattern
Analysis & Machine Intelligence. Datasets and code are available at
https://github.com/zhongyy/OPOM
| null |
10.1109/TPAMI.2022.3175602
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While convenient in daily life, face recognition technologies also raise
privacy concerns for regular users on social media, since they could be used to
analyze face images and videos efficiently and surreptitiously, without any
security restrictions. In this paper, we investigate face privacy protection
from a technology standpoint, based on a new type of customized
cloak, which can be applied to all the images of a regular user, to prevent
malicious face recognition systems from uncovering their identity.
Specifically, we propose a new method, named one person one mask (OPOM), to
generate person-specific (class-wise) universal masks by optimizing each
training sample in the direction away from the feature subspace of the source
identity. To make full use of the limited training images, we investigate
several modeling methods, including affine hulls, class centers, and convex
hulls, to obtain a better description of the feature subspace of source
identities. The effectiveness of the proposed method is evaluated on both
common and celebrity datasets against black-box face recognition models with
different loss functions and network architectures. In addition, we discuss the
advantages and potential problems of the proposed method. In particular, we
conduct an application study on the privacy protection of a video dataset,
Sherlock, to demonstrate the potential practical usage of the proposed method.
Datasets and code are available at https://github.com/zhongyy/OPOM.
|
[
{
"created": "Tue, 24 May 2022 11:29:37 GMT",
"version": "v1"
}
] |
2022-05-25
|
[
[
"Zhong",
"Yaoyao",
""
],
[
"Deng",
"Weihong",
""
]
] |
While convenient in daily life, face recognition technologies also raise privacy concerns for regular users on social media, since they could be used to analyze face images and videos efficiently and surreptitiously, without any security restrictions. In this paper, we investigate face privacy protection from a technology standpoint, based on a new type of customized cloak, which can be applied to all the images of a regular user, to prevent malicious face recognition systems from uncovering their identity. Specifically, we propose a new method, named one person one mask (OPOM), to generate person-specific (class-wise) universal masks by optimizing each training sample in the direction away from the feature subspace of the source identity. To make full use of the limited training images, we investigate several modeling methods, including affine hulls, class centers, and convex hulls, to obtain a better description of the feature subspace of source identities. The effectiveness of the proposed method is evaluated on both common and celebrity datasets against black-box face recognition models with different loss functions and network architectures. In addition, we discuss the advantages and potential problems of the proposed method. In particular, we conduct an application study on the privacy protection of a video dataset, Sherlock, to demonstrate the potential practical usage of the proposed method. Datasets and code are available at https://github.com/zhongyy/OPOM.
|
2206.00182
|
Ali Athar
|
Ali Athar, Jonathon Luiten, Alexander Hermans, Deva Ramanan, Bastian
Leibe
|
Differentiable Soft-Masked Attention
|
arXiv admin note: text overlap with arXiv:2112.09131
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Transformers have become prevalent in computer vision due to their
performance and flexibility in modelling complex operations. Of particular
significance is the 'cross-attention' operation, which allows a vector
representation (e.g. of an object in an image) to be learned by attending to an
arbitrarily sized set of input features. Recently, "Masked Attention" was
proposed in which a given object representation only attends to those image
pixel features for which the segmentation mask of that object is active. This
specialization of attention proved beneficial for various image and video
segmentation tasks. In this paper, we propose another specialization of
attention which enables attending over `soft-masks' (those with continuous mask
probabilities instead of binary values), and is also differentiable through
these mask probabilities, thus allowing the mask used for attention to be
learned within the network without requiring direct loss supervision. This can
be useful for several applications. Specifically, we employ our "Differentiable
Soft-Masked Attention" for the task of Weakly-Supervised Video Object
Segmentation (VOS), where we develop a transformer-based network for VOS which
only requires a single annotated image frame for training, but can also benefit
from cycle consistency training on a video with just one annotated frame.
Although there is no loss for masks in unlabeled frames, the network is still
able to segment objects in those frames due to our novel attention formulation.
Code:
https://github.com/Ali2500/HODOR/blob/main/hodor/modelling/encoder/soft_masked_attention.py
|
[
{
"created": "Wed, 1 Jun 2022 02:05:13 GMT",
"version": "v1"
},
{
"created": "Fri, 5 Aug 2022 14:09:12 GMT",
"version": "v2"
}
] |
2022-08-08
|
[
[
"Athar",
"Ali",
""
],
[
"Luiten",
"Jonathon",
""
],
[
"Hermans",
"Alexander",
""
],
[
"Ramanan",
"Deva",
""
],
[
"Leibe",
"Bastian",
""
]
] |
Transformers have become prevalent in computer vision due to their performance and flexibility in modelling complex operations. Of particular significance is the 'cross-attention' operation, which allows a vector representation (e.g. of an object in an image) to be learned by attending to an arbitrarily sized set of input features. Recently, "Masked Attention" was proposed in which a given object representation only attends to those image pixel features for which the segmentation mask of that object is active. This specialization of attention proved beneficial for various image and video segmentation tasks. In this paper, we propose another specialization of attention which enables attending over `soft-masks' (those with continuous mask probabilities instead of binary values), and is also differentiable through these mask probabilities, thus allowing the mask used for attention to be learned within the network without requiring direct loss supervision. This can be useful for several applications. Specifically, we employ our "Differentiable Soft-Masked Attention" for the task of Weakly-Supervised Video Object Segmentation (VOS), where we develop a transformer-based network for VOS which only requires a single annotated image frame for training, but can also benefit from cycle consistency training on a video with just one annotated frame. Although there is no loss for masks in unlabeled frames, the network is still able to segment objects in those frames due to our novel attention formulation. Code: https://github.com/Ali2500/HODOR/blob/main/hodor/modelling/encoder/soft_masked_attention.py
|
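The "soft-mask" idea in the abstract above can be illustrated by adding log mask probabilities to the attention logits before the softmax, so the operation stays differentiable in the mask. A minimal numpy sketch under that assumption (the authors' actual formulation lives in the linked repository and may differ):

```python
import numpy as np

def soft_masked_attention(q, K, mask_prob, eps=1e-12):
    """Attention weights of one query over N key vectors; continuous mask
    probabilities bias the logits additively in log space, so positions
    with near-zero mask probability get near-zero attention weight."""
    d = q.shape[-1]
    logits = K @ q / np.sqrt(d) + np.log(mask_prob + eps)
    w = np.exp(logits - logits.max())  # numerically stable softmax
    return w / w.sum()

rng = np.random.default_rng(0)
q = rng.normal(size=4)
K = rng.normal(size=(5, 4))
mask = np.array([0.9, 0.5, 0.1, 0.0, 1.0])  # soft, not binary, mask values
w = soft_masked_attention(q, K, mask)
print(w.sum())  # weights still form a distribution
print(w[3])     # near-zero: position 3 is effectively masked out
```

Because the mask enters only through differentiable operations, gradients flow back into `mask_prob`, which is the property the paper exploits to learn masks without direct loss supervision.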
1609.00878
|
Joao Papa
|
Silas E. N. Fernandes, Danillo R. Pereira, Caio C. O. Ramos, Andre N.
Souza and Joao P. Papa
|
A Probabilistic Optimum-Path Forest Classifier for Binary Classification
Problems
|
Submitted to Neural Processing Letters
| null | null | null |
cs.CV cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Probabilistic-driven classification techniques extend the role of traditional
approaches that output only labels (usually integer numbers). Such techniques
are more fruitful when dealing with problems where one is interested not only
in recognition/identification, but also in monitoring the behavior of
consumers and/or machines, for instance. Therefore, by means of probability
estimates, one can make better decisions in a number of scenarios. In this
paper, we propose a probabilistic Optimum-Path Forest (OPF) classifier to
handle binary classification problems, and we show it can be more accurate
than the naive OPF on a number of datasets. Beyond being more accurate,
probabilistic OPF turns out to be another useful tool for the scientific
community.
|
[
{
"created": "Sun, 4 Sep 2016 00:12:04 GMT",
"version": "v1"
}
] |
2016-09-06
|
[
[
"Fernandes",
"Silas E. N.",
""
],
[
"Pereira",
"Danillo R.",
""
],
[
"Ramos",
"Caio C. O.",
""
],
[
"Souza",
"Andre N.",
""
],
[
"Papa",
"Joao P.",
""
]
] |
Probabilistic-driven classification techniques extend the role of traditional approaches that output only labels (usually integer numbers). Such techniques are more fruitful when dealing with problems where one is interested not only in recognition/identification, but also in monitoring the behavior of consumers and/or machines, for instance. Therefore, by means of probability estimates, one can make better decisions in a number of scenarios. In this paper, we propose a probabilistic Optimum-Path Forest (OPF) classifier to handle binary classification problems, and we show it can be more accurate than the naive OPF on a number of datasets. Beyond being more accurate, probabilistic OPF turns out to be another useful tool for the scientific community.
|
2111.12332
|
Srivatsan Sridhar
|
Joachim Neu, Srivatsan Sridhar, Lei Yang, David Tse, Mohammad Alizadeh
|
Longest Chain Consensus Under Bandwidth Constraint
| null | null | null | null |
cs.CR cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Spamming attacks are a serious concern for consensus protocols, as witnessed
by recent outages of a major blockchain, Solana. They cause congestion and
excessive message delays in a real network due to its bandwidth constraints. In
contrast, longest chain (LC), an important family of consensus protocols, has
previously only been proven secure assuming an idealized network model in which
all messages are delivered within bounded delay. This model-reality mismatch is
further aggravated for Proof-of-Stake (PoS) LC where the adversary can spam the
network with equivocating blocks. Hence, we extend the network model to capture
bandwidth constraints, under which nodes now need to choose carefully which
blocks to spend their limited download budget on. To illustrate this point, we
show that 'download along the longest header chain', a natural download rule
for Proof-of-Work (PoW) LC, is insecure for PoS LC. We propose a simple rule
'download towards the freshest block', formalize two common heuristics 'not
downloading equivocations' and 'blocklisting', and prove in a unified framework
that PoS LC with any one of these download rules is secure in
bandwidth-constrained networks. In experiments, we validate our claims and
showcase the behavior of these download rules under attack. By composing
multiple instances of a PoS LC protocol with a suitable download rule in
parallel, we obtain a PoS consensus protocol that achieves a constant fraction
of the network's throughput limit even under worst-case adversarial strategies.
|
[
{
"created": "Wed, 24 Nov 2021 08:29:34 GMT",
"version": "v1"
},
{
"created": "Fri, 28 Jan 2022 03:09:29 GMT",
"version": "v2"
},
{
"created": "Wed, 18 May 2022 00:05:34 GMT",
"version": "v3"
}
] |
2022-05-19
|
[
[
"Neu",
"Joachim",
""
],
[
"Sridhar",
"Srivatsan",
""
],
[
"Yang",
"Lei",
""
],
[
"Tse",
"David",
""
],
[
"Alizadeh",
"Mohammad",
""
]
] |
Spamming attacks are a serious concern for consensus protocols, as witnessed by recent outages of a major blockchain, Solana. They cause congestion and excessive message delays in a real network due to its bandwidth constraints. In contrast, longest chain (LC), an important family of consensus protocols, has previously only been proven secure assuming an idealized network model in which all messages are delivered within bounded delay. This model-reality mismatch is further aggravated for Proof-of-Stake (PoS) LC where the adversary can spam the network with equivocating blocks. Hence, we extend the network model to capture bandwidth constraints, under which nodes now need to choose carefully which blocks to spend their limited download budget on. To illustrate this point, we show that 'download along the longest header chain', a natural download rule for Proof-of-Work (PoW) LC, is insecure for PoS LC. We propose a simple rule 'download towards the freshest block', formalize two common heuristics 'not downloading equivocations' and 'blocklisting', and prove in a unified framework that PoS LC with any one of these download rules is secure in bandwidth-constrained networks. In experiments, we validate our claims and showcase the behavior of these download rules under attack. By composing multiple instances of a PoS LC protocol with a suitable download rule in parallel, we obtain a PoS consensus protocol that achieves a constant fraction of the network's throughput limit even under worst-case adversarial strategies.
|
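The "download towards the freshest block" rule above can be pictured as a simple scheduler: a bandwidth-constrained node spends its per-slot download budget on the pending blocks whose headers carry the most recent slot numbers. A hypothetical sketch (the data layout and tie-breaking here are illustrative assumptions, not the paper's specification):

```python
def schedule_downloads(headers, downloaded, budget):
    """headers: list of (block_id, slot) pairs known from the header chain.
    Returns up to `budget` block ids to download, freshest slot first."""
    pending = [h for h in headers if h[0] not in downloaded]
    pending.sort(key=lambda h: h[1], reverse=True)  # freshest block first
    return [block_id for block_id, _ in pending[:budget]]

headers = [("a", 3), ("b", 7), ("c", 5), ("d", 7)]
print(schedule_downloads(headers, downloaded={"b"}, budget=2))  # ['d', 'c']
```

Prioritizing freshness bounds how long adversarial equivocations can distract a node, which is the intuition behind the security proof sketched in the abstract.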
1809.08928
|
Preslav Nakov
|
Shafiq Joty, Lluis Marquez, Preslav Nakov
|
Joint Multitask Learning for Community Question Answering Using
Task-Specific Embeddings
|
community question answering, task-specific embeddings, multi-task
learning, EMNLP-2018
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We address jointly two important tasks for Question Answering in community
forums: given a new question, (i) find related existing questions, and (ii)
find relevant answers to this new question. We further use an auxiliary task to
complement the previous two, i.e., (iii) find good answers with respect to the
thread question in a question-comment thread. We use deep neural networks
(DNNs) to learn meaningful task-specific embeddings, which we then incorporate
into a conditional random field (CRF) model for the multitask setting,
performing joint learning over a complex graph structure. While DNNs alone
achieve competitive results when trained to produce the embeddings, the CRF,
which makes use of the embeddings and the dependencies between the tasks,
improves the results significantly and consistently across a variety of
evaluation metrics, thus showing the complementarity of DNNs and structured
learning.
|
[
{
"created": "Mon, 24 Sep 2018 13:49:14 GMT",
"version": "v1"
}
] |
2018-09-25
|
[
[
"Joty",
"Shafiq",
""
],
[
"Marquez",
"Lluis",
""
],
[
"Nakov",
"Preslav",
""
]
] |
We address jointly two important tasks for Question Answering in community forums: given a new question, (i) find related existing questions, and (ii) find relevant answers to this new question. We further use an auxiliary task to complement the previous two, i.e., (iii) find good answers with respect to the thread question in a question-comment thread. We use deep neural networks (DNNs) to learn meaningful task-specific embeddings, which we then incorporate into a conditional random field (CRF) model for the multitask setting, performing joint learning over a complex graph structure. While DNNs alone achieve competitive results when trained to produce the embeddings, the CRF, which makes use of the embeddings and the dependencies between the tasks, improves the results significantly and consistently across a variety of evaluation metrics, thus showing the complementarity of DNNs and structured learning.
|
1702.05730
|
Jie Hao
|
Jie Hao, Shu-Tao Xia, and Bin Chen
|
On Optimal Ternary Locally Repairable Codes
|
5 pages
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In an $[n,k,d]$ linear code, a code symbol is said to have locality $r$ if it
can be repaired by accessing at most $r$ other code symbols. For an $(n,k,r)$
\emph{locally repairable code} (LRC), the minimum distance satisfies the
well-known Singleton-like bound $d\le n-k-\lceil k/r\rceil +2$. In this paper,
we study optimal ternary LRCs meeting this Singleton-like bound by employing a
parity-check matrix approach. It is proved that there are only $8$ classes of
possible parameters with which optimal ternary LRCs exist. Moreover, we obtain
explicit constructions of optimal ternary LRCs for all these $8$ classes of
parameters, where the minimum distance could only be 2, 3, 4, 5 and 6.
|
[
{
"created": "Sun, 19 Feb 2017 10:02:02 GMT",
"version": "v1"
}
] |
2017-02-21
|
[
[
"Hao",
"Jie",
""
],
[
"Xia",
"Shu-Tao",
""
],
[
"Chen",
"Bin",
""
]
] |
In an $[n,k,d]$ linear code, a code symbol is said to have locality $r$ if it can be repaired by accessing at most $r$ other code symbols. For an $(n,k,r)$ \emph{locally repairable code} (LRC), the minimum distance satisfies the well-known Singleton-like bound $d\le n-k-\lceil k/r\rceil +2$. In this paper, we study optimal ternary LRCs meeting this Singleton-like bound by employing a parity-check matrix approach. It is proved that there are only $8$ classes of possible parameters with which optimal ternary LRCs exist. Moreover, we obtain explicit constructions of optimal ternary LRCs for all these $8$ classes of parameters, where the minimum distance could only be 2, 3, 4, 5 and 6.
|
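The Singleton-like bound quoted in the abstract is straightforward to evaluate, and the optimal codes constructed in the paper meet it with equality. A small helper:

```python
from math import ceil

def lrc_singleton_bound(n, k, r):
    """Upper bound on the minimum distance d of an (n, k, r) locally
    repairable code: d <= n - k - ceil(k / r) + 2."""
    return n - k - ceil(k / r) + 2

# With r >= k the bound reduces to the classical Singleton bound n - k + 1.
print(lrc_singleton_bound(9, 4, 2))  # 5
print(lrc_singleton_bound(7, 3, 3))  # 5, equals n - k + 1
```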
1501.02670
|
Amaru Cuba Gyllensten
|
Amaru Cuba Gyllensten and Magnus Sahlgren
|
Navigating the Semantic Horizon using Relative Neighborhood Graphs
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper is concerned with nearest neighbor search in distributional
semantic models. A normal nearest neighbor search only returns a ranked list of
neighbors, with no information about the structure or topology of the local
neighborhood. This is a potentially serious shortcoming of the mode of querying
a distributional semantic model, since a ranked list of neighbors may conflate
several different senses. We argue that the topology of neighborhoods in
semantic space provides important information about the different senses of
terms, and that such topological structures can be used for word-sense
induction. We also argue that the topology of the neighborhoods in semantic
space can be used to determine the semantic horizon of a point, which we define
as the set of neighbors that have a direct connection to the point. We
introduce relative neighborhood graphs as a method to uncover the topological
properties of neighborhoods in semantic models. We also provide examples of
relative neighborhood graphs for three well-known semantic models: the PMI
model, the GloVe model, and the skipgram model.
|
[
{
"created": "Mon, 12 Jan 2015 14:48:54 GMT",
"version": "v1"
}
] |
2015-01-13
|
[
[
"Gyllensten",
"Amaru Cuba",
""
],
[
"Sahlgren",
"Magnus",
""
]
] |
This paper is concerned with nearest neighbor search in distributional semantic models. A normal nearest neighbor search only returns a ranked list of neighbors, with no information about the structure or topology of the local neighborhood. This is a potentially serious shortcoming of the mode of querying a distributional semantic model, since a ranked list of neighbors may conflate several different senses. We argue that the topology of neighborhoods in semantic space provides important information about the different senses of terms, and that such topological structures can be used for word-sense induction. We also argue that the topology of the neighborhoods in semantic space can be used to determine the semantic horizon of a point, which we define as the set of neighbors that have a direct connection to the point. We introduce relative neighborhood graphs as a method to uncover the topological properties of neighborhoods in semantic models. We also provide examples of relative neighborhood graphs for three well-known semantic models: the PMI model, the GloVe model, and the skipgram model.
|
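A relative neighborhood graph connects two points exactly when no third point is strictly closer to both of them than they are to each other. A brute-force sketch of that construction, for illustration only (the paper applies it to high-dimensional semantic spaces, where efficiency matters):

```python
from itertools import combinations
from math import dist

def relative_neighborhood_graph(points):
    """Edge (i, j) exists iff no third point k satisfies
    max(d(i, k), d(j, k)) < d(i, j)."""
    edges = set()
    for i, j in combinations(range(len(points)), 2):
        d_ij = dist(points[i], points[j])
        blocked = any(
            max(dist(points[i], points[k]), dist(points[j], points[k])) < d_ij
            for k in range(len(points)) if k not in (i, j)
        )
        if not blocked:
            edges.add((i, j))
    return edges

# Three collinear points: the middle one blocks the long edge (0, 2).
pts = [(0, 0), (1, 0), (2, 0)]
print(relative_neighborhood_graph(pts))
```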
1504.07678
|
Hongzhao Huang
|
Hongzhao Huang and Larry Heck and Heng Ji
|
Leveraging Deep Neural Networks and Knowledge Graphs for Entity
Disambiguation
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Entity Disambiguation aims to link mentions of ambiguous entities to a
knowledge base (e.g., Wikipedia). Modeling topical coherence is crucial for
this task based on the assumption that information from the same semantic
context tends to belong to the same topic. This paper presents a novel deep
semantic relatedness model (DSRM) based on deep neural networks (DNN) and
semantic knowledge graphs (KGs) to measure entity semantic relatedness for
topical coherence modeling. The DSRM is directly trained on large-scale KGs and
it maps heterogeneous types of knowledge of an entity from KGs to numerical
feature vectors in a latent space such that the distance between two
semantically-related entities is minimized. Compared with the state-of-the-art
relatedness approach proposed by (Milne and Witten, 2008a), the DSRM obtains
19.4% and 24.5% reductions in entity disambiguation errors on two publicly
available datasets respectively.
|
[
{
"created": "Tue, 28 Apr 2015 22:47:25 GMT",
"version": "v1"
}
] |
2015-04-30
|
[
[
"Huang",
"Hongzhao",
""
],
[
"Heck",
"Larry",
""
],
[
"Ji",
"Heng",
""
]
] |
Entity Disambiguation aims to link mentions of ambiguous entities to a knowledge base (e.g., Wikipedia). Modeling topical coherence is crucial for this task based on the assumption that information from the same semantic context tends to belong to the same topic. This paper presents a novel deep semantic relatedness model (DSRM) based on deep neural networks (DNN) and semantic knowledge graphs (KGs) to measure entity semantic relatedness for topical coherence modeling. The DSRM is directly trained on large-scale KGs and it maps heterogeneous types of knowledge of an entity from KGs to numerical feature vectors in a latent space such that the distance between two semantically-related entities is minimized. Compared with the state-of-the-art relatedness approach proposed by (Milne and Witten, 2008a), the DSRM obtains 19.4% and 24.5% reductions in entity disambiguation errors on two publicly available datasets respectively.
|
2112.01988
|
Can G\"umeli
|
Can G\"umeli, Angela Dai, Matthias Nie{\ss}ner
|
ROCA: Robust CAD Model Retrieval and Alignment from a Single Image
| null | null |
10.1109/CVPR52688.2022.00399
| null |
cs.CV cs.GR cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We present ROCA, a novel end-to-end approach that retrieves and aligns 3D CAD
models from a shape database to a single input image. This enables 3D
perception of an observed scene from a 2D RGB observation, characterized as a
lightweight, compact, clean CAD representation. Core to our approach is our
differentiable alignment optimization based on dense 2D-3D object
correspondences and Procrustes alignment. ROCA can thus provide a robust CAD
alignment while simultaneously informing CAD retrieval by leveraging the 2D-3D
correspondences to learn geometrically similar CAD models. Experiments on
challenging, real-world imagery from ScanNet show that ROCA significantly
improves on state of the art, from 9.5% to 17.6% in retrieval-aware CAD
alignment accuracy.
|
[
{
"created": "Fri, 3 Dec 2021 16:02:32 GMT",
"version": "v1"
},
{
"created": "Sun, 24 Jul 2022 16:25:49 GMT",
"version": "v2"
}
] |
2022-11-28
|
[
[
"Gümeli",
"Can",
""
],
[
"Dai",
"Angela",
""
],
[
"Nießner",
"Matthias",
""
]
] |
We present ROCA, a novel end-to-end approach that retrieves and aligns 3D CAD models from a shape database to a single input image. This enables 3D perception of an observed scene from a 2D RGB observation, characterized as a lightweight, compact, clean CAD representation. Core to our approach is our differentiable alignment optimization based on dense 2D-3D object correspondences and Procrustes alignment. ROCA can thus provide a robust CAD alignment while simultaneously informing CAD retrieval by leveraging the 2D-3D correspondences to learn geometrically similar CAD models. Experiments on challenging, real-world imagery from ScanNet show that ROCA significantly improves on state of the art, from 9.5% to 17.6% in retrieval-aware CAD alignment accuracy.
|
2303.04356
|
Taisuke Kobayashi
|
Taisuke Kobayashi
|
Soft Actor-Critic Algorithm with Truly-satisfied Inequality Constraint
|
10 pages, 9 figures
| null | null | null |
cs.LG cs.AI cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Soft actor-critic (SAC) in reinforcement learning is expected to be one of
the next-generation robot control schemes. Its ability to maximize policy
entropy would make a robotic controller robust to noise and perturbation,
which is useful for real-world robot applications. However, the priority of
maximizing the policy entropy is automatically tuned in the current
implementation, the rule of which can be interpreted as one for an equality
constraint, binding the policy entropy to its specified lower bound. The
current SAC therefore no longer maximizes the policy entropy, contrary to our
expectation. To resolve this issue in SAC, this paper improves its
implementation with a learnable state-dependent slack variable for
appropriately handling the inequality constraint to maximize the policy
entropy by reformulating it as the corresponding equality constraint. The
introduced slack variable is optimized by a switching-type loss function that
takes into account the dual objectives of satisfying the equality constraint
and checking the lower bound. In the Mujoco and Pybullet simulators, the
modified SAC achieved statistically higher robustness to adversarial attacks
than before while regularizing the action norm. A real-robot variable
impedance task was demonstrated to show the applicability of the modified SAC
to real-world robot control. In particular, the modified SAC maintained
adaptive behaviors for physical human-robot interaction, which it had not
experienced at all during training. https://youtu.be/EH3xVtlVaJw
|
[
{
"created": "Wed, 8 Mar 2023 03:32:50 GMT",
"version": "v1"
},
{
"created": "Sun, 2 Jul 2023 08:48:56 GMT",
"version": "v2"
}
] |
2023-07-04
|
[
[
"Kobayashi",
"Taisuke",
""
]
] |
Soft actor-critic (SAC) in reinforcement learning is expected to be one of the next-generation robot control schemes. Its ability to maximize policy entropy would make a robotic controller robust to noise and perturbation, which is useful for real-world robot applications. However, the priority of maximizing the policy entropy is automatically tuned in the current implementation, the rule of which can be interpreted as one for an equality constraint, binding the policy entropy to its specified lower bound. The current SAC therefore no longer maximizes the policy entropy, contrary to our expectation. To resolve this issue in SAC, this paper improves its implementation with a learnable state-dependent slack variable for appropriately handling the inequality constraint to maximize the policy entropy by reformulating it as the corresponding equality constraint. The introduced slack variable is optimized by a switching-type loss function that takes into account the dual objectives of satisfying the equality constraint and checking the lower bound. In the Mujoco and Pybullet simulators, the modified SAC achieved statistically higher robustness to adversarial attacks than before while regularizing the action norm. A real-robot variable impedance task was demonstrated to show the applicability of the modified SAC to real-world robot control. In particular, the modified SAC maintained adaptive behaviors for physical human-robot interaction, which it had not experienced at all during training. https://youtu.be/EH3xVtlVaJw
|
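The entropy-constraint issue the abstract describes shows up in the dual (temperature) update of SAC: the standard rule drives the policy entropy onto the bound itself. A toy sketch of that update with a hypothetical non-negative slack shifting the effective target, as the paper's reformulation suggests (the actual switching-type loss for learning the slack is omitted):

```python
def alpha_update(log_alpha, entropy, target, slack, lr=1e-2):
    """One dual gradient step on the (log) temperature. Standard SAC
    descends entropy - target, pinning entropy to the bound; adding a
    non-negative slack makes the effective target target + slack, so the
    constraint behaves as a true inequality entropy >= target."""
    grad = entropy - (target + slack)
    return log_alpha - lr * grad

# Entropy above the slackened target -> temperature shrinks; below -> grows.
la_hi = alpha_update(0.0, entropy=1.5, target=1.0, slack=0.2)
la_lo = alpha_update(0.0, entropy=0.5, target=1.0, slack=0.2)
print(la_hi < 0.0 < la_lo)  # True
```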
2304.10513
|
Shen Zheng
|
Shen Zheng, Jie Huang, Kevin Chen-Chuan Chang
|
Why Does ChatGPT Fall Short in Providing Truthful Answers?
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent advancements in large language models, such as ChatGPT, have
demonstrated significant potential to impact various aspects of human life.
However, ChatGPT still faces challenges in providing reliable and accurate
answers to user questions. To better understand the model's particular
weaknesses in providing truthful answers, we embark on an in-depth
exploration of open-domain question answering. Specifically, we undertake a
detailed examination of ChatGPT's failures, categorized into comprehension,
factuality, specificity, and inference. We further pinpoint factuality as the
major contributing failure and identify two critical abilities associated
with factuality: knowledge memorization and knowledge recall. Through
experiments focusing on factuality, we propose several potential enhancement
strategies. Our findings suggest that augmenting the model with granular
external knowledge and cues for knowledge recall can enhance the model's
factuality in answering questions.
|
[
{
"created": "Thu, 20 Apr 2023 17:48:43 GMT",
"version": "v1"
},
{
"created": "Wed, 24 May 2023 04:58:47 GMT",
"version": "v2"
},
{
"created": "Sun, 3 Dec 2023 23:01:19 GMT",
"version": "v3"
}
] |
2023-12-05
|
[
[
"Zheng",
"Shen",
""
],
[
"Huang",
"Jie",
""
],
[
"Chang",
"Kevin Chen-Chuan",
""
]
] |
Recent advancements in large language models, such as ChatGPT, have demonstrated significant potential to impact various aspects of human life. However, ChatGPT still faces challenges in providing reliable and accurate answers to user questions. To better understand the model's particular weaknesses in providing truthful answers, we embark on an in-depth exploration of open-domain question answering. Specifically, we undertake a detailed examination of ChatGPT's failures, categorized into comprehension, factuality, specificity, and inference. We further pinpoint factuality as the major contributing failure and identify two critical abilities associated with factuality: knowledge memorization and knowledge recall. Through experiments focusing on factuality, we propose several potential enhancement strategies. Our findings suggest that augmenting the model with granular external knowledge and cues for knowledge recall can enhance the model's factuality in answering questions.
|
2402.10318
|
Ralf M\"uller
|
Ralf R. M\"uller
|
Multi-Antenna Towards Inband Shift Keying
|
The initial version of this paper contains an error. It calculates
the beamforming gain of transmit beamforming for N antennas as N, but it
should be N^2. This error has been corrected in the latest version
| null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We propose a new continuous phase frequency shift keying scheme that is
particularly suited for multi-antenna communications when the link budget is
critical and beam alignment is problematic. It combines the constant envelope
of frequency modulation with low-rate repetition coding in order to
compensate for the absence of transmit beamforming. Although it is a
frequency modulation, its transmit signal shows a close-to-rectangular
spectral shape. Similar to GSM's Gaussian minimum shift keying, it can be
well approximated by linear modulation when combined with differential
precoding. This allows for easy coherent demodulation by means of a windowed
fast Fourier transform.
|
[
{
"created": "Thu, 15 Feb 2024 20:41:09 GMT",
"version": "v1"
},
{
"created": "Tue, 20 Feb 2024 08:33:13 GMT",
"version": "v2"
},
{
"created": "Mon, 8 Apr 2024 15:42:41 GMT",
"version": "v3"
}
] |
2024-04-09
|
[
[
"Müller",
"Ralf R.",
""
]
] |
We propose a new continuous phase frequency shift keying scheme that is particularly suited for multi-antenna communications when the link budget is critical and beam alignment is problematic. It combines the constant envelope of frequency modulation with low-rate repetition coding in order to compensate for the absence of transmit beamforming. Although it is a frequency modulation, its transmit signal shows a close-to-rectangular spectral shape. Similar to GSM's Gaussian minimum shift keying, it can be well approximated by linear modulation when combined with differential precoding. This allows for easy coherent demodulation by means of a windowed fast Fourier transform.
|
1604.04721
|
Victor Sanchez-Anguix Dr.
|
Juan M. Alberola, Elena Del Val, Victor Sanchez-Anguix, Alberto
Palomares and Maria Dolores Teruel
|
An artificial intelligence tool for heterogeneous team formation in the
classroom
| null |
Knowledge-Based Systems, 2016
|
10.1016/j.knosys.2016.02.010
| null |
cs.AI cs.CY cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Nowadays, there is increasing interest in the development of teamwork skills
in the educational context. This growing interest is motivated by its
pedagogical effectiveness and the fact that, in labour contexts, enterprises
organize their employees in teams to carry out complex projects. Despite its
crucial importance in the classroom and industry, there is a lack of support
for the team formation process. Not only do many factors influence team
performance, but the problem also becomes exponentially costly if teams are
to be optimized. In this article, we propose a tool whose aim is to cover
this gap. It combines artificial intelligence techniques such as coalition
structure generation, Bayesian learning, and Belbin's role theory to
facilitate the generation of working groups in an educational context. This
tool improves on current state-of-the-art proposals in three ways: i) it
takes into account the feedback of other teammates in order to establish the
most predominant role of a student, instead of relying on self-perception
questionnaires; ii) it handles uncertainty with regard to each student's
predominant team role; iii) it is iterative, since it considers information
from several interactions in order to improve the estimation of role
assignments. We tested the performance of the proposed tool in an experiment
involving students that took part in three different team activities. The
experiments suggest that the proposed tool is able to improve different
teamwork aspects such as team dynamics and student satisfaction.
|
[
{
"created": "Sat, 16 Apr 2016 10:50:02 GMT",
"version": "v1"
}
] |
2016-04-19
|
[
[
"Alberola",
"Juan M.",
""
],
[
"Del Val",
"Elena",
""
],
[
"Sanchez-Anguix",
"Victor",
""
],
[
"Palomares",
"Alberto",
""
],
[
"Teruel",
"Maria Dolores",
""
]
] |
Nowadays, there is increasing interest in the development of teamwork skills in the educational context. This growing interest is motivated by its pedagogical effectiveness and the fact that, in labour contexts, enterprises organize their employees in teams to carry out complex projects. Despite its crucial importance in the classroom and industry, there is a lack of support for the team formation process. Not only do many factors influence team performance, but the problem also becomes exponentially costly if teams are to be optimized. In this article, we propose a tool whose aim is to cover this gap. It combines artificial intelligence techniques such as coalition structure generation, Bayesian learning, and Belbin's role theory to facilitate the generation of working groups in an educational context. This tool improves on current state-of-the-art proposals in three ways: i) it takes into account the feedback of other teammates in order to establish the most predominant role of a student, instead of relying on self-perception questionnaires; ii) it handles uncertainty with regard to each student's predominant team role; iii) it is iterative, since it considers information from several interactions in order to improve the estimation of role assignments. We tested the performance of the proposed tool in an experiment involving students that took part in three different team activities. The experiments suggest that the proposed tool is able to improve different teamwork aspects such as team dynamics and student satisfaction.
|
1908.00916
|
Dominik Mach\'a\v{c}ek
|
Dominik Mach\'a\v{c}ek, Jon\'a\v{s} Kratochv\'il, Tereza
Vojt\v{e}chov\'a, Ond\v{r}ej Bojar
|
A Speech Test Set of Practice Business Presentations with Additional
Relevant Texts
|
SLSP 2019
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a test corpus of audio recordings and transcriptions of
presentations of students' enterprises together with their slides and web
pages. The corpus is intended for evaluation of automatic speech recognition
(ASR) systems, especially in conditions where the prior availability of
in-domain vocabulary and named entities is beneficial. The corpus consists of
39 presentations in English, each up to 90 seconds long. The speakers are
high school students from European countries with English as their second
language. We benchmark three baseline ASR systems on the corpus and show
their imperfections.
|
[
{
"created": "Fri, 2 Aug 2019 15:39:31 GMT",
"version": "v1"
}
] |
2019-08-05
|
[
[
"Macháček",
"Dominik",
""
],
[
"Kratochvíl",
"Jonáš",
""
],
[
"Vojtěchová",
"Tereza",
""
],
[
"Bojar",
"Ondřej",
""
]
] |
We present a test corpus of audio recordings and transcriptions of presentations of students' enterprises together with their slides and web pages. The corpus is intended for evaluation of automatic speech recognition (ASR) systems, especially in conditions where the prior availability of in-domain vocabulary and named entities is beneficial. The corpus consists of 39 presentations in English, each up to 90 seconds long. The speakers are high school students from European countries with English as their second language. We benchmark three baseline ASR systems on the corpus and show their imperfections.
|
1108.5250
|
Tshilidzi Marwala
|
A.K. Mohamed, T. Marwala, and L.R. John
|
Single-trial EEG Discrimination between Wrist and Finger Movement
Imagery and Execution in a Sensorimotor BCI
|
33rd Annual International IEEE EMBS Conference 2011
| null |
10.1109/IEMBS.2011.6091552
| null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A brain-computer interface (BCI) may be used to control a prosthetic or
orthotic hand using neural activity from the brain. The core of this
sensorimotor BCI lies in the interpretation of the neural information extracted
from electroencephalogram (EEG). It is desired to improve on the interpretation
of EEG to allow people with neuromuscular disorders to perform daily
activities. This paper investigates the possibility of discriminating between
the EEG associated with wrist and finger movements. The EEG was recorded from
test subjects as they executed and imagined five essential hand movements using
both hands. Independent component analysis (ICA) and time-frequency techniques
were used to extract spectral features based on event-related
(de)synchronisation (ERD/ERS), while the Bhattacharyya distance (BD) was used
for feature reduction. Mahalanobis distance (MD) clustering and artificial
neural networks (ANN) were used as classifiers and obtained average accuracies
of 65 % and 71 % respectively. This shows that EEG discrimination between wrist
and finger movements is possible. The research introduces a new combination of
motor tasks to BCI research.
|
[
{
"created": "Fri, 26 Aug 2011 07:10:04 GMT",
"version": "v1"
}
] |
2016-11-17
|
[
[
"Mohamed",
"A. K.",
""
],
[
"Marwala",
"T.",
""
],
[
"John",
"L. R.",
""
]
] |
A brain-computer interface (BCI) may be used to control a prosthetic or orthotic hand using neural activity from the brain. The core of this sensorimotor BCI lies in the interpretation of the neural information extracted from electroencephalogram (EEG). It is desired to improve on the interpretation of EEG to allow people with neuromuscular disorders to perform daily activities. This paper investigates the possibility of discriminating between the EEG associated with wrist and finger movements. The EEG was recorded from test subjects as they executed and imagined five essential hand movements using both hands. Independent component analysis (ICA) and time-frequency techniques were used to extract spectral features based on event-related (de)synchronisation (ERD/ERS), while the Bhattacharyya distance (BD) was used for feature reduction. Mahalanobis distance (MD) clustering and artificial neural networks (ANN) were used as classifiers and obtained average accuracies of 65 % and 71 % respectively. This shows that EEG discrimination between wrist and finger movements is possible. The research introduces a new combination of motor tasks to BCI research.
|
2110.15789
|
Thi Huyen Nguyen
|
Thi Huyen Nguyen, Tu Nguyen, Tuan-Anh Hoang, Claudia Nieder\'ee
|
On the Feasibility of Predicting Questions being Forgotten in Stack
Overflow
| null | null | null | null |
cs.IR cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
For their attractiveness, comprehensiveness and dynamic coverage of relevant
topics, community-based question answering sites such as Stack Overflow heavily
rely on the engagement of their communities: Questions on new technologies,
technology features as well as technology versions come up and have to be
answered as technology evolves (and as community members gather experience with
it). At the same time, other questions cease in importance over time, finally
becoming irrelevant to users. Beyond filtering low-quality questions,
"forgetting" questions, which have become redundant, is an important step for
keeping the Stack Overflow content concise and useful. In this work, we study
this managed forgetting task for Stack Overflow. Our work is based on data from
more than a decade (2008 - 2019), covering 18.1M questions that are made
publicly available by the site itself. For establishing a deeper understanding,
we first analyze and characterize the set of questions about to be forgotten,
i.e., questions that get a considerable number of views in the current period
but become unattractive in the near future. Subsequently, we examine the
capability of a wide range of features in predicting such forgotten questions
in different categories. We find some categories in which those questions are
more predictable. We also discover that the text-based features are
surprisingly not helpful in this prediction task, while the meta information is
much more predictive.
|
[
{
"created": "Fri, 29 Oct 2021 15:59:11 GMT",
"version": "v1"
}
] |
2021-11-01
|
[
[
"Nguyen",
"Thi Huyen",
""
],
[
"Nguyen",
"Tu",
""
],
[
"Hoang",
"Tuan-Anh",
""
],
[
"Niederée",
"Claudia",
""
]
] |
For their attractiveness, comprehensiveness and dynamic coverage of relevant topics, community-based question answering sites such as Stack Overflow heavily rely on the engagement of their communities: Questions on new technologies, technology features as well as technology versions come up and have to be answered as technology evolves (and as community members gather experience with it). At the same time, other questions cease in importance over time, finally becoming irrelevant to users. Beyond filtering low-quality questions, "forgetting" questions, which have become redundant, is an important step for keeping the Stack Overflow content concise and useful. In this work, we study this managed forgetting task for Stack Overflow. Our work is based on data from more than a decade (2008 - 2019), covering 18.1M questions that are made publicly available by the site itself. For establishing a deeper understanding, we first analyze and characterize the set of questions about to be forgotten, i.e., questions that get a considerable number of views in the current period but become unattractive in the near future. Subsequently, we examine the capability of a wide range of features in predicting such forgotten questions in different categories. We find some categories in which those questions are more predictable. We also discover that the text-based features are surprisingly not helpful in this prediction task, while the meta information is much more predictive.
|
2112.10930
|
Tianyun Zhang
|
Minghai Qin, Tianyun Zhang, Fei Sun, Yen-Kuang Chen, Makan Fardad,
Yanzhi Wang, Yuan Xie
|
Compact Multi-level Sparse Neural Networks with Input Independent
Dynamic Rerouting
| null | null | null | null |
cs.NE cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep neural networks (DNNs) have been shown to provide superb performance in many
real life applications, but their large computation cost and storage
requirement have prevented them from being deployed to many edge and
internet-of-things (IoT) devices. Sparse deep neural networks, whose majority
weight parameters are zeros, can substantially reduce the computation
complexity and memory consumption of the models. In real-use scenarios, devices
may suffer from large fluctuations of the available computation and memory
resources under different environments, and the quality of service (QoS) is
difficult to maintain due to the long tail inferences with large latency.
Facing the real-life challenges, we propose to train a sparse model that
supports multiple sparse levels. That is, a hierarchical structure of weights
is enforced such that the locations and the values of the non-zero parameters
of the more-sparse sub-model are a subset of those of the less-sparse sub-model. In this
way, one can dynamically select the appropriate sparsity level during
inference, while the storage cost is capped by the least sparse sub-model. We
have verified our methodologies on a variety of DNN models and tasks, including
the ResNet-50, PointNet++, GNMT, and graph attention networks. We obtain sparse
sub-models with an average of 13.38% weights and 14.97% FLOPs, while the
accuracies are as good as their dense counterparts. More-sparse sub-models with
5.38% weights and 4.47% of FLOPs, which are subsets of the less-sparse ones,
can be obtained with only 3.25% relative accuracy loss.
|
[
{
"created": "Tue, 21 Dec 2021 01:35:51 GMT",
"version": "v1"
}
] |
2021-12-22
|
[
[
"Qin",
"Minghai",
""
],
[
"Zhang",
"Tianyun",
""
],
[
"Sun",
"Fei",
""
],
[
"Chen",
"Yen-Kuang",
""
],
[
"Fardad",
"Makan",
""
],
[
"Wang",
"Yanzhi",
""
],
[
"Xie",
"Yuan",
""
]
] |
Deep neural networks (DNNs) have been shown to provide superb performance in many real life applications, but their large computation cost and storage requirement have prevented them from being deployed to many edge and internet-of-things (IoT) devices. Sparse deep neural networks, whose majority weight parameters are zeros, can substantially reduce the computation complexity and memory consumption of the models. In real-use scenarios, devices may suffer from large fluctuations of the available computation and memory resources under different environments, and the quality of service (QoS) is difficult to maintain due to the long tail inferences with large latency. Facing the real-life challenges, we propose to train a sparse model that supports multiple sparse levels. That is, a hierarchical structure of weights is enforced such that the locations and the values of the non-zero parameters of the more-sparse sub-model are a subset of those of the less-sparse sub-model. In this way, one can dynamically select the appropriate sparsity level during inference, while the storage cost is capped by the least sparse sub-model. We have verified our methodologies on a variety of DNN models and tasks, including the ResNet-50, PointNet++, GNMT, and graph attention networks. We obtain sparse sub-models with an average of 13.38% weights and 14.97% FLOPs, while the accuracies are as good as their dense counterparts. More-sparse sub-models with 5.38% weights and 4.47% of FLOPs, which are subsets of the less-sparse ones, can be obtained with only 3.25% relative accuracy loss.
|
1901.07129
|
Xiang Kong
|
Xiang Kong, Bohan Li, Graham Neubig, Eduard Hovy, Yiming Yang
|
An Adversarial Approach to High-Quality, Sentiment-Controlled Neural
Dialogue Generation
|
DEEP-DIAL 2019
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we propose a method for neural dialogue response generation
that allows not only generating semantically reasonable responses according to
the dialogue history, but also explicitly controlling the sentiment of the
response via sentiment labels. Our proposed model is based on the paradigm of
conditional adversarial learning; the training of a sentiment-controlled
dialogue generator is assisted by an adversarial discriminator which assesses
the fluency and feasibility of the response generated from the dialogue
history and a given sentiment label. Because of the flexibility of our
framework, the generator could be a standard sequence-to-sequence (SEQ2SEQ)
model or a more complicated one such as a conditional variational
autoencoder-based SEQ2SEQ model. Experimental results using automatic and human
evaluation both demonstrate that our proposed framework is able to generate
both semantically reasonable and sentiment-controlled dialogue responses.
|
[
{
"created": "Tue, 22 Jan 2019 00:29:27 GMT",
"version": "v1"
}
] |
2019-01-23
|
[
[
"Kong",
"Xiang",
""
],
[
"Li",
"Bohan",
""
],
[
"Neubig",
"Graham",
""
],
[
"Hovy",
"Eduard",
""
],
[
"Yang",
"Yiming",
""
]
] |
In this work, we propose a method for neural dialogue response generation that allows not only generating semantically reasonable responses according to the dialogue history, but also explicitly controlling the sentiment of the response via sentiment labels. Our proposed model is based on the paradigm of conditional adversarial learning; the training of a sentiment-controlled dialogue generator is assisted by an adversarial discriminator which assesses the fluency and feasibility of the response generated from the dialogue history and a given sentiment label. Because of the flexibility of our framework, the generator could be a standard sequence-to-sequence (SEQ2SEQ) model or a more complicated one such as a conditional variational autoencoder-based SEQ2SEQ model. Experimental results using automatic and human evaluation both demonstrate that our proposed framework is able to generate both semantically reasonable and sentiment-controlled dialogue responses.
|
2311.12159
|
Jia-Hong Huang
|
Jia-Hong Huang, Chao-Han Huck Yang, Pin-Yu Chen, Min-Hung Chen, Marcel
Worring
|
Conditional Modeling Based Automatic Video Summarization
|
This work has been submitted to the IEEE for possible publication.
arXiv admin note: substantial text overlap with arXiv:2305.00455
| null | null | null |
cs.CV cs.AI cs.IR cs.LG cs.MM
|
http://creativecommons.org/licenses/by/4.0/
|
The aim of video summarization is to shorten videos automatically while
retaining the key information necessary to convey the overall story. Video
summarization methods mainly rely on visual factors, such as visual
consecutiveness and diversity, which may not be sufficient to fully understand
the content of the video. There are other non-visual factors, such as
interestingness, representativeness, and storyline consistency that should also
be considered for generating high-quality video summaries. Current methods do
not adequately take into account these non-visual factors, resulting in
suboptimal performance. In this work, a new approach to video summarization is
proposed based on insights gained from how humans create ground truth video
summaries. The method utilizes a conditional modeling perspective and
introduces multiple meaningful random variables and joint distributions to
characterize the key components of video summarization. Helper distributions
are employed to improve the training of the model. A conditional attention
module is designed to mitigate potential performance degradation in the
presence of multi-modal input. The proposed video summarization method
incorporates the above innovative design choices that aim to narrow the gap
between human-generated and machine-generated video summaries. Extensive
experiments show that the proposed approach outperforms existing methods and
achieves state-of-the-art performance on commonly used video summarization
datasets.
|
[
{
"created": "Mon, 20 Nov 2023 20:24:45 GMT",
"version": "v1"
}
] |
2023-11-22
|
[
[
"Huang",
"Jia-Hong",
""
],
[
"Yang",
"Chao-Han Huck",
""
],
[
"Chen",
"Pin-Yu",
""
],
[
"Chen",
"Min-Hung",
""
],
[
"Worring",
"Marcel",
""
]
] |
The aim of video summarization is to shorten videos automatically while retaining the key information necessary to convey the overall story. Video summarization methods mainly rely on visual factors, such as visual consecutiveness and diversity, which may not be sufficient to fully understand the content of the video. There are other non-visual factors, such as interestingness, representativeness, and storyline consistency that should also be considered for generating high-quality video summaries. Current methods do not adequately take into account these non-visual factors, resulting in suboptimal performance. In this work, a new approach to video summarization is proposed based on insights gained from how humans create ground truth video summaries. The method utilizes a conditional modeling perspective and introduces multiple meaningful random variables and joint distributions to characterize the key components of video summarization. Helper distributions are employed to improve the training of the model. A conditional attention module is designed to mitigate potential performance degradation in the presence of multi-modal input. The proposed video summarization method incorporates the above innovative design choices that aim to narrow the gap between human-generated and machine-generated video summaries. Extensive experiments show that the proposed approach outperforms existing methods and achieves state-of-the-art performance on commonly used video summarization datasets.
|
2403.17491
|
Xinyu Ning
|
Xinyu Ning and Yutong Zhao and Yitong Liu and Hongwen Yang
|
DGoT: Dynamic Graph of Thoughts for Scientific Abstract Generation
|
Accepted by LREC-COLING 2024
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The method of training language models based on domain datasets has achieved
significant results in the task of generating scientific paper abstracts.
However, such models face problems of generalization and expensive training
costs. The use of large language models (LLMs) to solve the task of generating
paper abstracts saves the cost of model training. However, due to the
hallucination problem of LLM, it is often necessary to improve the reliability
of the results through multi-round query prompt approaches such as Graph of
Thoughts (GoT), which also brings additional reasoning costs. In this paper, we
propose a Dynamic Graph of Thought (DGoT). It not only inherits the advantages
of the existing GoT prompt approach, but also dynamically adjusts the graph
structure according to data characteristics while reducing model reasoning
cost. Experimental results show that our method's cost-effectiveness in
abstract generation tasks is only 43.7% to 56.4% of other multi-round query
prompt approaches. Our code is available at https://github.com/JayceNing/DGoT.
|
[
{
"created": "Tue, 26 Mar 2024 08:47:23 GMT",
"version": "v1"
}
] |
2024-03-27
|
[
[
"Ning",
"Xinyu",
""
],
[
"Zhao",
"Yutong",
""
],
[
"Liu",
"Yitong",
""
],
[
"Yang",
"Hongwen",
""
]
] |
The method of training language models based on domain datasets has achieved significant results in the task of generating scientific paper abstracts. However, such models face problems of generalization and expensive training costs. The use of large language models (LLMs) to solve the task of generating paper abstracts saves the cost of model training. However, due to the hallucination problem of LLM, it is often necessary to improve the reliability of the results through multi-round query prompt approaches such as Graph of Thoughts (GoT), which also brings additional reasoning costs. In this paper, we propose a Dynamic Graph of Thought (DGoT). It not only inherits the advantages of the existing GoT prompt approach, but also dynamically adjusts the graph structure according to data characteristics while reducing model reasoning cost. Experimental results show that our method's cost-effectiveness in abstract generation tasks is only 43.7% to 56.4% of other multi-round query prompt approaches. Our code is available at https://github.com/JayceNing/DGoT.
|
2010.09141
|
Zafeiria Moumoulidou
|
Zafeiria Moumoulidou, Andrew McGregor, and Alexandra Meliou
|
Diverse Data Selection under Fairness Constraints
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Diversity is an important principle in data selection and summarization,
facility location, and recommendation systems. Our work focuses on maximizing
diversity in data selection, while offering fairness guarantees. In particular,
we offer the first study that augments the Max-Min diversification objective
with fairness constraints. More specifically, given a universe $U$ of $n$
elements that can be partitioned into $m$ disjoint groups, we aim to retrieve a
$k$-sized subset that maximizes the pairwise minimum distance within the set
(diversity) and contains a pre-specified $k_i$ number of elements from each
group $i$ (fairness). We show that this problem is NP-complete even in metric
spaces, and we propose three novel algorithms, linear in $n$, that provide
strong theoretical approximation guarantees for different values of $m$ and
$k$. Finally, we extend our algorithms and analysis to the case where groups
can be overlapping.
|
[
{
"created": "Sun, 18 Oct 2020 23:51:53 GMT",
"version": "v1"
}
] |
2020-10-20
|
[
[
"Moumoulidou",
"Zafeiria",
""
],
[
"McGregor",
"Andrew",
""
],
[
"Meliou",
"Alexandra",
""
]
] |
Diversity is an important principle in data selection and summarization, facility location, and recommendation systems. Our work focuses on maximizing diversity in data selection, while offering fairness guarantees. In particular, we offer the first study that augments the Max-Min diversification objective with fairness constraints. More specifically, given a universe $U$ of $n$ elements that can be partitioned into $m$ disjoint groups, we aim to retrieve a $k$-sized subset that maximizes the pairwise minimum distance within the set (diversity) and contains a pre-specified $k_i$ number of elements from each group $i$ (fairness). We show that this problem is NP-complete even in metric spaces, and we propose three novel algorithms, linear in $n$, that provide strong theoretical approximation guarantees for different values of $m$ and $k$. Finally, we extend our algorithms and analysis to the case where groups can be overlapping.
|
2102.02051
|
Zongbo Han
|
Zongbo Han, Changqing Zhang, Huazhu Fu, Joey Tianyi Zhou
|
Trusted Multi-View Classification
|
Accepted by ICLR 2021
| null | null | null |
cs.LG cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Multi-view classification (MVC) generally focuses on improving classification
accuracy by using information from different views, typically integrating them
into a unified comprehensive representation for downstream tasks. However, it
is also crucial to dynamically assess the quality of a view for different
samples in order to provide reliable uncertainty estimations, which indicate
whether predictions can be trusted. To this end, we propose a novel multi-view
classification method, termed trusted multi-view classification, which provides
a new paradigm for multi-view learning by dynamically integrating different
views at an evidence level. The algorithm jointly utilizes multiple views to
promote both classification reliability and robustness by integrating evidence
from each view. To achieve this, the Dirichlet distribution is used to model
the distribution of the class probabilities, parameterized with evidence from
different views and integrated with the Dempster-Shafer theory. The unified
learning framework induces accurate uncertainty and accordingly endows the
model with both reliability and robustness for out-of-distribution samples.
Extensive experimental results validate the effectiveness of the proposed model
in accuracy, reliability and robustness.
|
[
{
"created": "Wed, 3 Feb 2021 13:30:26 GMT",
"version": "v1"
}
] |
2021-02-04
|
[
[
"Han",
"Zongbo",
""
],
[
"Zhang",
"Changqing",
""
],
[
"Fu",
"Huazhu",
""
],
[
"Zhou",
"Joey Tianyi",
""
]
] |
Multi-view classification (MVC) generally focuses on improving classification accuracy by using information from different views, typically integrating them into a unified comprehensive representation for downstream tasks. However, it is also crucial to dynamically assess the quality of a view for different samples in order to provide reliable uncertainty estimations, which indicate whether predictions can be trusted. To this end, we propose a novel multi-view classification method, termed trusted multi-view classification, which provides a new paradigm for multi-view learning by dynamically integrating different views at an evidence level. The algorithm jointly utilizes multiple views to promote both classification reliability and robustness by integrating evidence from each view. To achieve this, the Dirichlet distribution is used to model the distribution of the class probabilities, parameterized with evidence from different views and integrated with the Dempster-Shafer theory. The unified learning framework induces accurate uncertainty and accordingly endows the model with both reliability and robustness for out-of-distribution samples. Extensive experimental results validate the effectiveness of the proposed model in accuracy, reliability and robustness.
|
2105.08306
|
Kiran Koshy Thekumparampil
|
Kiran Koshy Thekumparampil, Prateek Jain, Praneeth Netrapalli, Sewoong
Oh
|
Sample Efficient Linear Meta-Learning by Alternating Minimization
| null | null | null | null |
cs.LG cs.AI math.OC stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Meta-learning synthesizes and leverages the knowledge from a given set of
tasks to rapidly learn new tasks using very little data. Meta-learning of
linear regression tasks, where the regressors lie in a low-dimensional
subspace, is an extensively-studied fundamental problem in this domain.
However, existing results either guarantee highly suboptimal estimation errors,
or require $\Omega(d)$ samples per task (where $d$ is the data dimensionality)
thus providing little gain over separately learning each task. In this work, we
study a simple alternating minimization method (MLLAM), which alternately
learns the low-dimensional subspace and the regressors. We show that, for a
constant subspace dimension MLLAM obtains nearly-optimal estimation error,
despite requiring only $\Omega(\log d)$ samples per task. However, the number
of samples required per task grows logarithmically with the number of tasks. To
remedy this in the low-noise regime, we propose a novel task subset selection
scheme that ensures the same strong statistical guarantee as MLLAM, even with
bounded number of samples per task for arbitrarily large number of tasks.
|
[
{
"created": "Tue, 18 May 2021 06:46:48 GMT",
"version": "v1"
}
] |
2021-05-19
|
[
[
"Thekumparampil",
"Kiran Koshy",
""
],
[
"Jain",
"Prateek",
""
],
[
"Netrapalli",
"Praneeth",
""
],
[
"Oh",
"Sewoong",
""
]
] |
Meta-learning synthesizes and leverages the knowledge from a given set of tasks to rapidly learn new tasks using very little data. Meta-learning of linear regression tasks, where the regressors lie in a low-dimensional subspace, is an extensively-studied fundamental problem in this domain. However, existing results either guarantee highly suboptimal estimation errors, or require $\Omega(d)$ samples per task (where $d$ is the data dimensionality) thus providing little gain over separately learning each task. In this work, we study a simple alternating minimization method (MLLAM), which alternately learns the low-dimensional subspace and the regressors. We show that, for a constant subspace dimension MLLAM obtains nearly-optimal estimation error, despite requiring only $\Omega(\log d)$ samples per task. However, the number of samples required per task grows logarithmically with the number of tasks. To remedy this in the low-noise regime, we propose a novel task subset selection scheme that ensures the same strong statistical guarantee as MLLAM, even with bounded number of samples per task for arbitrarily large number of tasks.
|
0910.3127
|
Jakob Nordstr\"om
|
Jakob Nordstr\"om, Alexander Razborov
|
On Minimal Unsatisfiability and Time-Space Trade-offs for k-DNF
Resolution
| null | null | null | null |
cs.DM cs.CC math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the context of proving lower bounds on proof space in k-DNF resolution,
[Ben-Sasson and Nordstrom 2009] introduced the concept of minimally
unsatisfiable sets of k-DNF formulas and proved that a minimally unsatisfiable
k-DNF set with m formulas can have at most O((mk)^(k+1)) variables. They also
gave an example of such sets with Omega(mk^2) variables.
In this paper we significantly improve the lower bound to Omega(m)^k, which
almost matches the upper bound above. Furthermore, we show that this implies
that the analysis of their technique for proving time-space separations and
trade-offs for k-DNF resolution is almost tight. This means that although it is
possible, or even plausible, that stronger results than in [Ben-Sasson and
Nordstrom 2009] should hold, a fundamentally different approach would be needed
to obtain such results.
|
[
{
"created": "Fri, 16 Oct 2009 14:33:58 GMT",
"version": "v1"
},
{
"created": "Fri, 16 Oct 2009 21:10:31 GMT",
"version": "v2"
}
] |
2016-09-08
|
[
[
"Nordström",
"Jakob",
""
],
[
"Razborov",
"Alexander",
""
]
] |
In the context of proving lower bounds on proof space in k-DNF resolution, [Ben-Sasson and Nordstrom 2009] introduced the concept of minimally unsatisfiable sets of k-DNF formulas and proved that a minimally unsatisfiable k-DNF set with m formulas can have at most O((mk)^(k+1)) variables. They also gave an example of such sets with Omega(mk^2) variables. In this paper we significantly improve the lower bound to Omega(m)^k, which almost matches the upper bound above. Furthermore, we show that this implies that the analysis of their technique for proving time-space separations and trade-offs for k-DNF resolution is almost tight. This means that although it is possible, or even plausible, that stronger results than in [Ben-Sasson and Nordstrom 2009] should hold, a fundamentally different approach would be needed to obtain such results.
|
1606.05696
|
Yang Shi
|
Yang Shi, U. N. Niranjan, Animashree Anandkumar, Cris Cecka
|
Tensor Contractions with Extended BLAS Kernels on CPU and GPU
| null | null |
10.1109/HiPC.2016.031
| null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Tensor contractions constitute a key computational ingredient of numerical
multi-linear algebra. However, as the order and dimension of tensors grow, the
time and space complexities of tensor-based computations grow quickly. Existing
approaches for tensor contractions typically involve explicit copy and
transpose operations. In this paper, we propose and evaluate a new BLAS-like
primitive STRIDEDBATCHEDGEMM that is capable of performing a wide range of
tensor contractions on CPU and GPU efficiently. Through systematic
benchmarking, we demonstrate the advantages of our approach over conventional
approaches. Concretely, we implement the Tucker decomposition and show that
using our kernels yields 100x speedup as compared to the implementation using
existing state-of-the-art libraries.
|
[
{
"created": "Fri, 17 Jun 2016 22:39:19 GMT",
"version": "v1"
},
{
"created": "Sun, 2 Oct 2016 20:14:04 GMT",
"version": "v2"
}
] |
2018-08-15
|
[
[
"Shi",
"Yang",
""
],
[
"Niranjan",
"U. N.",
""
],
[
"Anandkumar",
"Animashree",
""
],
[
"Cecka",
"Cris",
""
]
] |
Tensor contractions constitute a key computational ingredient of numerical multi-linear algebra. However, as the order and dimension of tensors grow, the time and space complexities of tensor-based computations grow quickly. Existing approaches for tensor contractions typically involve explicit copy and transpose operations. In this paper, we propose and evaluate a new BLAS-like primitive STRIDEDBATCHEDGEMM that is capable of performing a wide range of tensor contractions on CPU and GPU efficiently. Through systematic benchmarking, we demonstrate the advantages of our approach over conventional approaches. Concretely, we implement the Tucker decomposition and show that using our kernels yields 100x speedup as compared to the implementation using existing state-of-the-art libraries.
|
2103.08095
|
Alessandro Lameiras Koerich
|
Mohammad Esmaeilpour and Patrick Cardinal and Alessandro Lameiras
Koerich
|
Towards Robust Speech-to-Text Adversarial Attack
|
5 pages
| null | null | null |
cs.SD cs.LG eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces a novel adversarial algorithm for attacking the
state-of-the-art speech-to-text systems, namely DeepSpeech, Kaldi, and Lingvo.
Our approach is based on developing an extension for the conventional
distortion condition of the adversarial optimization formulation using the
Cram\`er integral probability metric. Minimizing over this metric, which
measures the discrepancies between original and adversarial samples'
distributions, contributes to crafting signals very close to the subspace of
legitimate speech recordings. This helps to yield more robust adversarial
signals against playback over-the-air without employing either costly
expectation over transformation operations or static room impulse response
simulations. Our approach outperforms other targeted and non-targeted
algorithms in terms of word error rate and sentence-level-accuracy with
competitive performance on the crafted adversarial signals' quality. Compared
to seven other strong white and black-box adversarial attacks, our proposed
approach is considerably more resilient against multiple consecutive playbacks
over-the-air, corroborating its higher robustness in noisy environments.
|
[
{
"created": "Mon, 15 Mar 2021 01:51:41 GMT",
"version": "v1"
}
] |
2021-03-16
|
[
[
"Esmaeilpour",
"Mohammad",
""
],
[
"Cardinal",
"Patrick",
""
],
[
"Koerich",
"Alessandro Lameiras",
""
]
] |
This paper introduces a novel adversarial algorithm for attacking the state-of-the-art speech-to-text systems, namely DeepSpeech, Kaldi, and Lingvo. Our approach is based on developing an extension for the conventional distortion condition of the adversarial optimization formulation using the Cram\`er integral probability metric. Minimizing over this metric, which measures the discrepancies between original and adversarial samples' distributions, contributes to crafting signals very close to the subspace of legitimate speech recordings. This helps to yield more robust adversarial signals against playback over-the-air without employing either costly expectation over transformation operations or static room impulse response simulations. Our approach outperforms other targeted and non-targeted algorithms in terms of word error rate and sentence-level-accuracy with competitive performance on the crafted adversarial signals' quality. Compared to seven other strong white and black-box adversarial attacks, our proposed approach is considerably more resilient against multiple consecutive playbacks over-the-air, corroborating its higher robustness in noisy environments.
|
1304.7282
|
Urmila Shrawankar Ms
|
Priti Saktel, Urmila Shrawankar
|
An Improved Approach for Word Ambiguity Removal
|
Pages:12 Tables: 07 Figures: 14, International Journal of Human
Computer Interaction (IJHCI), Volume (3): Issue (3): 2012
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Word ambiguity removal is the task of removing ambiguity from a word, i.e.,
identifying the correct sense of a word in an ambiguous sentence. This paper
describes a model that uses a Part-of-Speech tagger and three categories for
word sense disambiguation (WSD). Human-Computer Interaction is essential for
improving interactions between users and computers. To this end, supervised and
unsupervised methods are combined. The WSD algorithm is used to find the
correct sense of a word efficiently and accurately based on domain information.
The accuracy of this work is evaluated with the aim of finding the most
suitable domain for a word.
|
[
{
"created": "Thu, 25 Apr 2013 10:25:41 GMT",
"version": "v1"
}
] |
2013-04-30
|
[
[
"Saktel",
"Priti",
""
],
[
"Shrawankar",
"Urmila",
""
]
] |
Word ambiguity removal is the task of removing ambiguity from a word, i.e., identifying the correct sense of a word in an ambiguous sentence. This paper describes a model that uses a Part-of-Speech tagger and three categories for word sense disambiguation (WSD). Human-Computer Interaction is essential for improving interactions between users and computers. To this end, supervised and unsupervised methods are combined. The WSD algorithm is used to find the correct sense of a word efficiently and accurately based on domain information. The accuracy of this work is evaluated with the aim of finding the most suitable domain for a word.
|
2207.08548
|
Manu Joseph
|
Manu Joseph, Harsh Raj
|
GANDALF: Gated Adaptive Network for Deep Automated Learning of Features
|
15 pages + Reference & Appendix
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We propose a novel high-performance, interpretable, and parameter \&
computationally efficient deep learning architecture for tabular data, Gated
Adaptive Network for Deep Automated Learning of Features (GANDALF). GANDALF
relies on a new tabular processing unit with a gating mechanism and in-built
feature selection called Gated Feature Learning Unit (GFLU) as a feature
representation learning unit. We demonstrate that GANDALF outperforms or stays
at par with SOTA approaches such as XGBoost, SAINT, and FT-Transformers in
experiments on multiple established public benchmarks. We have made available
the code at github.com/manujosephv/pytorch_tabular under MIT License.
|
[
{
"created": "Mon, 18 Jul 2022 12:12:24 GMT",
"version": "v1"
},
{
"created": "Tue, 19 Jul 2022 09:28:56 GMT",
"version": "v2"
},
{
"created": "Wed, 17 Aug 2022 01:57:38 GMT",
"version": "v3"
},
{
"created": "Fri, 27 Jan 2023 07:26:43 GMT",
"version": "v4"
},
{
"created": "Sat, 22 Jul 2023 11:42:37 GMT",
"version": "v5"
},
{
"created": "Wed, 10 Jan 2024 00:28:04 GMT",
"version": "v6"
}
] |
2024-01-11
|
[
[
"Joseph",
"Manu",
""
],
[
"Raj",
"Harsh",
""
]
] |
We propose a novel high-performance, interpretable, and parameter \& computationally efficient deep learning architecture for tabular data, Gated Adaptive Network for Deep Automated Learning of Features (GANDALF). GANDALF relies on a new tabular processing unit with a gating mechanism and in-built feature selection called Gated Feature Learning Unit (GFLU) as a feature representation learning unit. We demonstrate that GANDALF outperforms or stays at par with SOTA approaches such as XGBoost, SAINT, and FT-Transformers in experiments on multiple established public benchmarks. We have made available the code at github.com/manujosephv/pytorch_tabular under MIT License.
|
2112.05068
|
Rika Antonova
|
Rika Antonova, Jingyun Yang, Priya Sundaresan, Dieter Fox, Fabio
Ramos, Jeannette Bohg
|
A Bayesian Treatment of Real-to-Sim for Deformable Object Manipulation
| null | null | null | null |
cs.RO cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deformable object manipulation remains a challenging task in robotics
research. Conventional techniques for parameter inference and state estimation
typically rely on a precise definition of the state space and its dynamics.
While this is appropriate for rigid objects and robot states, it is challenging
to define the state space of a deformable object and how it evolves in time. In
this work, we pose the problem of inferring physical parameters of deformable
objects as a probabilistic inference task defined with a simulator. We propose
a novel methodology for extracting state information from image sequences via a
technique to represent the state of a deformable object as a distribution
embedding. This allows us to incorporate noisy state observations directly into
modern Bayesian simulation-based inference tools in a principled manner. Our
experiments confirm that we can estimate posterior distributions of physical
properties, such as elasticity, friction and scale of highly deformable
objects, such as cloth and ropes. Overall, our method addresses the real-to-sim
problem probabilistically and helps to better represent the evolution of the
state of deformable objects.
|
[
{
"created": "Thu, 9 Dec 2021 17:50:54 GMT",
"version": "v1"
}
] |
2021-12-10
|
[
[
"Antonova",
"Rika",
""
],
[
"Yang",
"Jingyun",
""
],
[
"Sundaresan",
"Priya",
""
],
[
"Fox",
"Dieter",
""
],
[
"Ramos",
"Fabio",
""
],
[
"Bohg",
"Jeannette",
""
]
] |
Deformable object manipulation remains a challenging task in robotics research. Conventional techniques for parameter inference and state estimation typically rely on a precise definition of the state space and its dynamics. While this is appropriate for rigid objects and robot states, it is challenging to define the state space of a deformable object and how it evolves in time. In this work, we pose the problem of inferring physical parameters of deformable objects as a probabilistic inference task defined with a simulator. We propose a novel methodology for extracting state information from image sequences via a technique to represent the state of a deformable object as a distribution embedding. This allows us to incorporate noisy state observations directly into modern Bayesian simulation-based inference tools in a principled manner. Our experiments confirm that we can estimate posterior distributions of physical properties, such as elasticity, friction and scale of highly deformable objects, such as cloth and ropes. Overall, our method addresses the real-to-sim problem probabilistically and helps to better represent the evolution of the state of deformable objects.
|
2209.13832
|
Tie Luo
|
Tao Wu, Tie Luo, Donald Wunsch
|
Learning Deep Representations via Contrastive Learning for Instance
Retrieval
|
IEEE Symposium Series On Computational Intelligence (SSCI), December
2022. Accepted
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Instance-level Image Retrieval (IIR), or simply Instance Retrieval, deals
with the problem of finding all the images within a dataset that contain a
query instance (e.g. an object). This paper makes the first attempt to
tackle this problem using instance-discrimination based contrastive learning
(CL). While CL has shown impressive performance for many computer vision tasks,
similar success has not been achieved in the field of IIR. In this work, we
approach this problem by exploring the capability of deriving discriminative
representations from pre-trained and fine-tuned CL models. To begin with, we
investigate the efficacy of transfer learning in IIR, by comparing
off-the-shelf features learned by a pre-trained deep neural network (DNN)
classifier with features learned by a CL model. The findings inspired us to
propose a new training strategy that optimizes CL towards learning IIR-oriented
features, by using an Average Precision (AP) loss together with a fine-tuning
method to learn contrastive feature representations that are tailored to IIR.
Our empirical evaluation demonstrates significant performance enhancement over
the off-the-shelf features learned from a pre-trained DNN classifier on the
challenging Oxford and Paris datasets.
|
[
{
"created": "Wed, 28 Sep 2022 04:36:34 GMT",
"version": "v1"
}
] |
2022-09-29
|
[
[
"Wu",
"Tao",
""
],
[
"Luo",
"Tie",
""
],
[
"Wunsch",
"Donald",
""
]
] |
Instance-level Image Retrieval (IIR), or simply Instance Retrieval, deals with the problem of finding all the images within a dataset that contain a query instance (e.g. an object). This paper makes the first attempt to tackle this problem using instance-discrimination based contrastive learning (CL). While CL has shown impressive performance for many computer vision tasks, similar success has not been achieved in the field of IIR. In this work, we approach this problem by exploring the capability of deriving discriminative representations from pre-trained and fine-tuned CL models. To begin with, we investigate the efficacy of transfer learning in IIR, by comparing off-the-shelf features learned by a pre-trained deep neural network (DNN) classifier with features learned by a CL model. The findings inspired us to propose a new training strategy that optimizes CL towards learning IIR-oriented features, by using an Average Precision (AP) loss together with a fine-tuning method to learn contrastive feature representations that are tailored to IIR. Our empirical evaluation demonstrates significant performance enhancement over the off-the-shelf features learned from a pre-trained DNN classifier on the challenging Oxford and Paris datasets.
|
1707.07240
|
Bin Wang
|
Bin Wang and Zhijian Ou
|
Language modeling with Neural trans-dimensional random fields
|
6 pages, 2 figures and 3 tables, accepted to ASRU 2017
| null | null | null |
cs.CL cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Trans-dimensional random field language models (TRF LMs) have recently been
introduced, where sentences are modeled as a collection of random fields. The
TRF approach has been shown to have the advantages of being computationally
more efficient in inference than LSTM LMs with close performance and being able
to flexibly integrate rich features. In this paper we propose neural TRFs,
beyond the previous discrete TRFs that only use linear potentials with
discrete features. The idea is to use nonlinear potentials with continuous
features, implemented by neural networks (NNs), in the TRF framework. Neural
TRFs combine the advantages of both NNs and TRFs. The benefits of word
embedding, nonlinear feature learning and larger context modeling are inherited
from the use of NNs. At the same time, the strength of efficient inference by
avoiding expensive softmax is preserved. A number of technical contributions,
including employing deep convolutional neural networks (CNNs) to define the
potentials and incorporating the joint stochastic approximation (JSA) strategy
in the training algorithm, are developed in this work, which enable us to
successfully train neural TRF LMs. Various LMs are evaluated in terms of speech
recognition WERs by rescoring the 1000-best lists of WSJ'92 test data. The
results show that neural TRF LMs not only improve over discrete TRF LMs, but
also perform slightly better than LSTM LMs with only one fifth of parameters
and 16x faster inference efficiency.
|
[
{
"created": "Sun, 23 Jul 2017 03:06:47 GMT",
"version": "v1"
},
{
"created": "Tue, 25 Jul 2017 01:25:18 GMT",
"version": "v2"
},
{
"created": "Tue, 19 Sep 2017 08:28:42 GMT",
"version": "v3"
}
] |
2017-09-20
|
[
[
"Wang",
"Bin",
""
],
[
"Ou",
"Zhijian",
""
]
] |
Trans-dimensional random field language models (TRF LMs) have recently been introduced, where sentences are modeled as a collection of random fields. The TRF approach has been shown to have the advantages of being computationally more efficient in inference than LSTM LMs with close performance and being able to flexibly integrate rich features. In this paper we propose neural TRFs, beyond the previous discrete TRFs that only use linear potentials with discrete features. The idea is to use nonlinear potentials with continuous features, implemented by neural networks (NNs), in the TRF framework. Neural TRFs combine the advantages of both NNs and TRFs. The benefits of word embedding, nonlinear feature learning and larger context modeling are inherited from the use of NNs. At the same time, the strength of efficient inference by avoiding expensive softmax is preserved. A number of technical contributions, including employing deep convolutional neural networks (CNNs) to define the potentials and incorporating the joint stochastic approximation (JSA) strategy in the training algorithm, are developed in this work, which enable us to successfully train neural TRF LMs. Various LMs are evaluated in terms of speech recognition WERs by rescoring the 1000-best lists of WSJ'92 test data. The results show that neural TRF LMs not only improve over discrete TRF LMs, but also perform slightly better than LSTM LMs with only one fifth of parameters and 16x faster inference efficiency.
|
2003.07701
|
Gabriel Gon\c{c}alves
|
Gabriel F. N. Gon\c{c}alves, Assen Batchvarov, Yuyi Liu, Yuxin Liu,
Lachlan Mason, Indranil Pan, Omar K. Matar
|
Data-driven surrogate modelling and benchmarking for process equipment
| null |
Data-Centric Engineering (2020), 1, E7
|
10.1017/dce.2020.8
| null |
cs.CE cs.LG physics.flu-dyn stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
In chemical process engineering, surrogate models of complex systems are
often necessary for tasks of domain exploration, sensitivity analysis of the
design parameters, and optimization. A suite of computational fluid dynamics
(CFD) simulations geared toward chemical process equipment modeling has been
developed and validated with experimental results from the literature. Various
regression-based active learning strategies are explored with these CFD
simulators in-the-loop under the constraints of a limited function evaluation
budget. Specifically, five different sampling strategies and five regression
techniques are compared, considering a set of four test cases of industrial
significance and varying complexity. Gaussian process regression was observed
to have a consistently good performance for these applications. The present
quantitative study outlines the pros and cons of the different available
techniques and highlights the best practices for their adoption. The test cases
and tools are available with an open-source license to ensure reproducibility
and engage the wider research community in contributing to both the CFD models
and developing and benchmarking new improved algorithms tailored to this field.
|
[
{
"created": "Fri, 13 Mar 2020 18:22:43 GMT",
"version": "v1"
},
{
"created": "Tue, 8 Sep 2020 14:42:14 GMT",
"version": "v2"
}
] |
2020-09-09
|
[
[
"Gonçalves",
"Gabriel F. N.",
""
],
[
"Batchvarov",
"Assen",
""
],
[
"Liu",
"Yuyi",
""
],
[
"Liu",
"Yuxin",
""
],
[
"Mason",
"Lachlan",
""
],
[
"Pan",
"Indranil",
""
],
[
"Matar",
"Omar K.",
""
]
] |
In chemical process engineering, surrogate models of complex systems are often necessary for tasks of domain exploration, sensitivity analysis of the design parameters, and optimization. A suite of computational fluid dynamics (CFD) simulations geared toward chemical process equipment modeling has been developed and validated with experimental results from the literature. Various regression-based active learning strategies are explored with these CFD simulators in-the-loop under the constraints of a limited function evaluation budget. Specifically, five different sampling strategies and five regression techniques are compared, considering a set of four test cases of industrial significance and varying complexity. Gaussian process regression was observed to have a consistently good performance for these applications. The present quantitative study outlines the pros and cons of the different available techniques and highlights the best practices for their adoption. The test cases and tools are available with an open-source license to ensure reproducibility and engage the wider research community in contributing to both the CFD models and developing and benchmarking new improved algorithms tailored to this field.
|
1707.07998
|
Peter Anderson
|
Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark
Johnson, Stephen Gould, Lei Zhang
|
Bottom-Up and Top-Down Attention for Image Captioning and Visual
Question Answering
|
CVPR 2018 full oral, winner of the 2017 Visual Question Answering
challenge
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Top-down visual attention mechanisms have been used extensively in image
captioning and visual question answering (VQA) to enable deeper image
understanding through fine-grained analysis and even multiple steps of
reasoning. In this work, we propose a combined bottom-up and top-down attention
mechanism that enables attention to be calculated at the level of objects and
other salient image regions. This is the natural basis for attention to be
considered. Within our approach, the bottom-up mechanism (based on Faster
R-CNN) proposes image regions, each with an associated feature vector, while
the top-down mechanism determines feature weightings. Applying this approach to
image captioning, our results on the MSCOCO test server establish a new
state-of-the-art for the task, achieving CIDEr / SPICE / BLEU-4 scores of
117.9, 21.5 and 36.9, respectively. Demonstrating the broad applicability of
the method, applying the same approach to VQA we obtain first place in the 2017
VQA Challenge.
|
[
{
"created": "Tue, 25 Jul 2017 13:50:17 GMT",
"version": "v1"
},
{
"created": "Thu, 10 Aug 2017 23:24:23 GMT",
"version": "v2"
},
{
"created": "Wed, 14 Mar 2018 05:24:23 GMT",
"version": "v3"
}
] |
2018-03-15
|
[
[
"Anderson",
"Peter",
""
],
[
"He",
"Xiaodong",
""
],
[
"Buehler",
"Chris",
""
],
[
"Teney",
"Damien",
""
],
[
"Johnson",
"Mark",
""
],
[
"Gould",
"Stephen",
""
],
[
"Zhang",
"Lei",
""
]
] |
Top-down visual attention mechanisms have been used extensively in image captioning and visual question answering (VQA) to enable deeper image understanding through fine-grained analysis and even multiple steps of reasoning. In this work, we propose a combined bottom-up and top-down attention mechanism that enables attention to be calculated at the level of objects and other salient image regions. This is the natural basis for attention to be considered. Within our approach, the bottom-up mechanism (based on Faster R-CNN) proposes image regions, each with an associated feature vector, while the top-down mechanism determines feature weightings. Applying this approach to image captioning, our results on the MSCOCO test server establish a new state-of-the-art for the task, achieving CIDEr / SPICE / BLEU-4 scores of 117.9, 21.5 and 36.9, respectively. Demonstrating the broad applicability of the method, applying the same approach to VQA we obtain first place in the 2017 VQA Challenge.
|
2201.00129
|
Yuxing Wang
|
Yuxing Wang, Tiantian Zhang, Yongzhe Chang, Bin Liang, Xueqian Wang,
Bo Yuan
|
A Surrogate-Assisted Controller for Expensive Evolutionary Reinforcement
Learning
| null | null |
10.1016/j.ins.2022.10.134
| null |
cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The integration of Reinforcement Learning (RL) and Evolutionary Algorithms
(EAs) aims at simultaneously exploiting the sample efficiency as well as the
diversity and robustness of the two paradigms. Recently, hybrid learning
frameworks based on this principle have achieved great success in various
challenging robot control tasks. However, in these methods, policies from the
genetic population are evaluated via interactions with the real environments,
limiting their applicability in computationally expensive problems. In this
work, we propose Surrogate-assisted Controller (SC), a novel and efficient
module that can be integrated into existing frameworks to alleviate the
computational burden of EAs by partially replacing the expensive policy
evaluation. The key challenge in applying this module is to prevent the
optimization process from being misled by the possible false minima introduced
by the surrogate. To address this issue, we present two strategies for SC to
control the workflow of hybrid frameworks. Experiments on six continuous
control tasks from the OpenAI Gym platform show that SC can not only
significantly reduce the cost of fitness evaluations, but also boost the
performance of the original hybrid frameworks with collaborative learning and
evolutionary processes.
|
[
{
"created": "Sat, 1 Jan 2022 06:42:51 GMT",
"version": "v1"
},
{
"created": "Wed, 27 Apr 2022 15:30:26 GMT",
"version": "v2"
}
] |
2022-11-08
|
[
[
"Wang",
"Yuxing",
""
],
[
"Zhang",
"Tiantian",
""
],
[
"Chang",
"Yongzhe",
""
],
[
"Liang",
"Bin",
""
],
[
"Wang",
"Xueqian",
""
],
[
"Yuan",
"Bo",
""
]
] |
The integration of Reinforcement Learning (RL) and Evolutionary Algorithms (EAs) aims at simultaneously exploiting the sample efficiency as well as the diversity and robustness of the two paradigms. Recently, hybrid learning frameworks based on this principle have achieved great success in various challenging robot control tasks. However, in these methods, policies from the genetic population are evaluated via interactions with the real environments, limiting their applicability in computationally expensive problems. In this work, we propose Surrogate-assisted Controller (SC), a novel and efficient module that can be integrated into existing frameworks to alleviate the computational burden of EAs by partially replacing the expensive policy evaluation. The key challenge in applying this module is to prevent the optimization process from being misled by the possible false minima introduced by the surrogate. To address this issue, we present two strategies for SC to control the workflow of hybrid frameworks. Experiments on six continuous control tasks from the OpenAI Gym platform show that SC can not only significantly reduce the cost of fitness evaluations, but also boost the performance of the original hybrid frameworks with collaborative learning and evolutionary processes.
|
2104.14294
|
Mathilde Caron
|
Mathilde Caron, Hugo Touvron, Ishan Misra, Herv\'e J\'egou, Julien
Mairal, Piotr Bojanowski, Armand Joulin
|
Emerging Properties in Self-Supervised Vision Transformers
|
21 pages
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we question if self-supervised learning provides new
properties to Vision Transformer (ViT) that stand out compared to convolutional
networks (convnets). Beyond the fact that adapting self-supervised methods to
this architecture works particularly well, we make the following observations:
first, self-supervised ViT features contain explicit information about the
semantic segmentation of an image, which does not emerge as clearly with
supervised ViTs, nor with convnets. Second, these features are also excellent
k-NN classifiers, reaching 78.3% top-1 on ImageNet with a small ViT. Our study
also underlines the importance of momentum encoder, multi-crop training, and
the use of small patches with ViTs. We implement our findings into a simple
self-supervised method, called DINO, which we interpret as a form of
self-distillation with no labels. We show the synergy between DINO and ViTs by
achieving 80.1% top-1 on ImageNet in linear evaluation with ViT-Base.
|
[
{
"created": "Thu, 29 Apr 2021 12:28:51 GMT",
"version": "v1"
},
{
"created": "Mon, 24 May 2021 17:49:18 GMT",
"version": "v2"
}
] |
2021-05-25
|
[
[
"Caron",
"Mathilde",
""
],
[
"Touvron",
"Hugo",
""
],
[
"Misra",
"Ishan",
""
],
[
"Jégou",
"Hervé",
""
],
[
"Mairal",
"Julien",
""
],
[
"Bojanowski",
"Piotr",
""
],
[
"Joulin",
"Armand",
""
]
] |
In this paper, we question if self-supervised learning provides new properties to Vision Transformer (ViT) that stand out compared to convolutional networks (convnets). Beyond the fact that adapting self-supervised methods to this architecture works particularly well, we make the following observations: first, self-supervised ViT features contain explicit information about the semantic segmentation of an image, which does not emerge as clearly with supervised ViTs, nor with convnets. Second, these features are also excellent k-NN classifiers, reaching 78.3% top-1 on ImageNet with a small ViT. Our study also underlines the importance of momentum encoder, multi-crop training, and the use of small patches with ViTs. We implement our findings into a simple self-supervised method, called DINO, which we interpret as a form of self-distillation with no labels. We show the synergy between DINO and ViTs by achieving 80.1% top-1 on ImageNet in linear evaluation with ViT-Base.
|
2306.17750
|
Kavosh Asadi
|
Kavosh Asadi, Shoham Sabach, Yao Liu, Omer Gottesman, Rasool Fakoor
|
TD Convergence: An Optimization Perspective
|
Accepted at Thirty-seventh Conference on Neural Information
Processing Systems (NeurIPS 2023)
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the convergence behavior of the celebrated temporal-difference (TD)
learning algorithm. By looking at the algorithm through the lens of
optimization, we first argue that TD can be viewed as an iterative optimization
algorithm where the function to be minimized changes per iteration. By
carefully investigating the divergence displayed by TD on a classical
counterexample, we identify two forces that determine the convergent or divergent
behavior of the algorithm. We next formalize our discovery in the linear TD
setting with quadratic loss and prove that convergence of TD hinges on the
interplay between these two forces. We extend this optimization perspective to
prove convergence of TD in a much broader setting than just linear
approximation and squared loss. Our results provide a theoretical explanation
for the successful application of TD in reinforcement learning.
|
[
{
"created": "Fri, 30 Jun 2023 16:01:04 GMT",
"version": "v1"
},
{
"created": "Wed, 8 Nov 2023 21:20:39 GMT",
"version": "v2"
}
] |
2023-11-10
|
[
[
"Asadi",
"Kavosh",
""
],
[
"Sabach",
"Shoham",
""
],
[
"Liu",
"Yao",
""
],
[
"Gottesman",
"Omer",
""
],
[
"Fakoor",
"Rasool",
""
]
] |
We study the convergence behavior of the celebrated temporal-difference (TD) learning algorithm. By looking at the algorithm through the lens of optimization, we first argue that TD can be viewed as an iterative optimization algorithm where the function to be minimized changes per iteration. By carefully investigating the divergence displayed by TD on a classical counterexample, we identify two forces that determine the convergent or divergent behavior of the algorithm. We next formalize our discovery in the linear TD setting with quadratic loss and prove that convergence of TD hinges on the interplay between these two forces. We extend this optimization perspective to prove convergence of TD in a much broader setting than just linear approximation and squared loss. Our results provide a theoretical explanation for the successful application of TD in reinforcement learning.
|
1109.1604
|
Aly El Gamal
|
Aly El Gamal, V. Sreekanth Annapureddy, Venugopal V. Veeravalli
|
Degrees of Freedom (DoF) of Locally Connected Interference Channels with
Coordinated Multi-Point (CoMP) Transmission
|
In Proc. IEEE International Conference on Communications (ICC),
Ottawa, Jun. 2012
| null |
10.1109/ICC.2012.6364077
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The degrees of freedom (DoF) available for communication provides an
analytically tractable way to characterize the information-theoretic capacity
of interference channels. In this paper, the DoF of a K-user interference
channel is studied under the assumption that the transmitters can cooperate via
coordinated multi-point (CoMP) transmission. In [1], the authors considered the
linear asymmetric model of Wyner, where each transmitter is connected to its
own receiver and its successor, and is aware of its own message as well as M-1
preceding messages. The per user DoF was shown to go to M/(M+1) as the number
of users increases to infinity. In this work, the same model of channel
connectivity is considered, with a relaxed cooperation constraint that bounds
the maximum number of transmitters at which each message can be available, by a
cooperation order M. We show that the relaxation of the cooperation constraint,
while maintaining the same load imposed on a backhaul link needed to distribute
the messages, results in a gain in the DoF. In particular, the asymptotic limit
of the per user DoF under the cooperation order constraint is (2M)/(2M+1).
Moreover, the optimal transmit set selection satisfies a local cooperation
constraint, i.e., each message needs only to be available at neighboring
transmitters. [1] A. Lapidoth, S. Shamai (Shitz) and M. A. Wigger, "A linear
interference network with local Side-Information," in Proc. IEEE International
Symposium on Information Theory (ISIT), Nice, Jun. 2007.
|
[
{
"created": "Wed, 7 Sep 2011 23:46:19 GMT",
"version": "v1"
},
{
"created": "Tue, 27 Sep 2011 20:06:44 GMT",
"version": "v2"
},
{
"created": "Tue, 21 Feb 2012 15:10:31 GMT",
"version": "v3"
}
] |
2016-11-15
|
[
[
"Gamal",
"Aly El",
""
],
[
"Annapureddy",
"V. Sreekanth",
""
],
[
"Veeravalli",
"Venugopal V.",
""
]
] |
The degrees of freedom (DoF) available for communication provides an analytically tractable way to characterize the information-theoretic capacity of interference channels. In this paper, the DoF of a K-user interference channel is studied under the assumption that the transmitters can cooperate via coordinated multi-point (CoMP) transmission. In [1], the authors considered the linear asymmetric model of Wyner, where each transmitter is connected to its own receiver and its successor, and is aware of its own message as well as M-1 preceding messages. The per user DoF was shown to go to M/(M+1) as the number of users increases to infinity. In this work, the same model of channel connectivity is considered, with a relaxed cooperation constraint that bounds the maximum number of transmitters at which each message can be available, by a cooperation order M. We show that the relaxation of the cooperation constraint, while maintaining the same load imposed on a backhaul link needed to distribute the messages, results in a gain in the DoF. In particular, the asymptotic limit of the per user DoF under the cooperation order constraint is (2M)/(2M+1). Moreover, the optimal transmit set selection satisfies a local cooperation constraint, i.e., each message needs only to be available at neighboring transmitters. [1] A. Lapidoth, S. Shamai (Shitz) and M. A. Wigger, "A linear interference network with local Side-Information," in Proc. IEEE International Symposium on Information Theory (ISIT), Nice, Jun. 2007.
|
1701.07290
|
Giuseppe Santucci
|
Enrico Bertini and Giuseppe Santucci
|
Modelling internet based applications for designing multi-device
adaptive interfaces
|
Keywords: adaptive interfaces, multi-device and multi-channel
applications
|
Proceedings of the Workshop on Advanced Visual Interfaces AVI
2004, Pages 252-256 Working Conference on Advanced Visual Interfaces, AVI
2004; Gallipoli; Italy; 25 May 2004 through 28 May 2004
| null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The widespread adoption of mobile devices in the consumer market has posed a number
of new issues in the design of internet applications and their user interfaces.
In particular, applications need to adapt their interaction modalities to
different portable devices. In this paper we address the problem of defining
models and techniques for designing internet based applications that
automatically adapt to different mobile devices. First, we define a formal
model that allows for specifying the interaction in a way that is abstract
enough to be decoupled from the presentation layer, which is to be adapted to
different contexts. The model is mainly based on the idea of describing the
user interaction in terms of elementary actions. Then, we provide a formal
device characterization showing how to effectively implement the AIUs in a
multi-device context.
|
[
{
"created": "Wed, 25 Jan 2017 12:55:45 GMT",
"version": "v1"
}
] |
2017-01-26
|
[
[
"Bertini",
"Enrico",
""
],
[
"Santucci",
"Giuseppe",
""
]
] |
The widespread adoption of mobile devices in the consumer market has posed a number of new issues in the design of internet applications and their user interfaces. In particular, applications need to adapt their interaction modalities to different portable devices. In this paper we address the problem of defining models and techniques for designing internet based applications that automatically adapt to different mobile devices. First, we define a formal model that allows for specifying the interaction in a way that is abstract enough to be decoupled from the presentation layer, which is to be adapted to different contexts. The model is mainly based on the idea of describing the user interaction in terms of elementary actions. Then, we provide a formal device characterization showing how to effectively implement the AIUs in a multi-device context.
|
2310.06911
|
Ludovic Noels
|
Van-Dung Nguyen, Ling Wu, Fran\c{c}oise Remacle, Ludovic Noels
|
A quantum annealing-sequential quadratic programming assisted finite
element simulation for non-linear and history-dependent mechanical problems
|
This is an updated version following reviewing process. The code and
raw/processed data required to reproduce these findings is available on
http://dx.doi.org/10.5281/zenodo.10451584 under the Creative Commons
Attribution 4.0 International (CC BY 4.0) licence
|
European Journal of Mechanics - A/Solids, Volume 105, 2024, 105254
|
10.1016/j.euromechsol.2024.105254
| null |
cs.CE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We propose a framework to solve non-linear and history-dependent mechanical
problems based on a hybrid classical computer -- quantum annealer approach.
Quantum Computers are anticipated to solve particular operations exponentially
faster. The available possible operations are however not as versatile as with
a classical computer. However, quantum annealers (QAs) are well suited to
evaluate the minimum state of a Hamiltonian quadratic potential. Therefore, we
reformulate the elasto-plastic finite element problem as a double-minimisation
process framed at the structural scale using the variational updates
formulation. In order to comply with the expected quadratic nature of the
Hamiltonian, the resulting non-linear minimisation problems are iteratively
solved with the suggested Quantum Annealing-assisted Sequential Quadratic
Programming (QA-SQP): a sequence of minimising quadratic problems is performed
by approximating the objective function by a quadratic Taylor's series. Each
quadratic minimisation problem of continuous variables is then transformed into
a binary quadratic problem. This binary quadratic minimisation problem can be
solved on quantum annealing hardware such as the D-Wave system. The
applicability of the proposed framework is demonstrated with one- and
two-dimensional elasto-plastic numerical benchmarks. The current work provides
a pathway for performing general non-linear finite element simulations assisted
by quantum computing.
|
[
{
"created": "Tue, 10 Oct 2023 18:09:08 GMT",
"version": "v1"
},
{
"created": "Fri, 5 Jan 2024 13:39:40 GMT",
"version": "v2"
}
] |
2024-02-20
|
[
[
"Nguyen",
"Van-Dung",
""
],
[
"Wu",
"Ling",
""
],
[
"Remacle",
"Françoise",
""
],
[
"Noels",
"Ludovic",
""
]
] |
We propose a framework to solve non-linear and history-dependent mechanical problems based on a hybrid classical computer -- quantum annealer approach. Quantum Computers are anticipated to solve particular operations exponentially faster. The available possible operations are however not as versatile as with a classical computer. However, quantum annealers (QAs) are well suited to evaluate the minimum state of a Hamiltonian quadratic potential. Therefore, we reformulate the elasto-plastic finite element problem as a double-minimisation process framed at the structural scale using the variational updates formulation. In order to comply with the expected quadratic nature of the Hamiltonian, the resulting non-linear minimisation problems are iteratively solved with the suggested Quantum Annealing-assisted Sequential Quadratic Programming (QA-SQP): a sequence of minimising quadratic problems is performed by approximating the objective function by a quadratic Taylor's series. Each quadratic minimisation problem of continuous variables is then transformed into a binary quadratic problem. This binary quadratic minimisation problem can be solved on quantum annealing hardware such as the D-Wave system. The applicability of the proposed framework is demonstrated with one- and two-dimensional elasto-plastic numerical benchmarks. The current work provides a pathway for performing general non-linear finite element simulations assisted by quantum computing.
|
1709.01189
|
Fan Yang
|
Fan Yang, Arjun Mukherjee, Eduard Dragut
|
Satirical News Detection and Analysis using Attention Mechanism and
Linguistic Features
|
EMNLP 2017, 11 pages
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Satirical news is considered to be entertainment, but it is potentially
deceptive and harmful. Despite the satirical genre embedded in the article, not
everyone can recognize the satirical cues and may therefore believe the news to be true.
We observe that satirical cues are often reflected in certain paragraphs rather
than the whole document. Existing works only consider document-level features
to detect the satire, which could be limited. We consider paragraph-level
linguistic features to unveil the satire by incorporating a neural network with an
attention mechanism. We investigate the difference between paragraph-level
features and document-level features, and analyze them on a large satirical
news dataset. The evaluation shows that the proposed model detects satirical
news effectively and reveals what features are important at which level.
|
[
{
"created": "Mon, 4 Sep 2017 23:06:36 GMT",
"version": "v1"
}
] |
2017-09-06
|
[
[
"Yang",
"Fan",
""
],
[
"Mukherjee",
"Arjun",
""
],
[
"Dragut",
"Eduard",
""
]
] |
Satirical news is considered to be entertainment, but it is potentially deceptive and harmful. Despite the satirical genre embedded in the article, not everyone can recognize the satirical cues and may therefore believe the news to be true. We observe that satirical cues are often reflected in certain paragraphs rather than the whole document. Existing works only consider document-level features to detect the satire, which could be limited. We consider paragraph-level linguistic features to unveil the satire by incorporating a neural network with an attention mechanism. We investigate the difference between paragraph-level features and document-level features, and analyze them on a large satirical news dataset. The evaluation shows that the proposed model detects satirical news effectively and reveals what features are important at which level.
|
2301.12569
|
Zahra Zahedi
|
Zahra Zahedi, Sarath Sreedharan, Subbarao Kambhampati
|
A Mental Model Based Theory of Trust
| null | null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Handling trust is one of the core requirements for facilitating effective
interaction between the human and the AI agent. Thus, any decision-making
framework designed to work with humans must possess the ability to estimate and
leverage human trust. In this paper, we propose a mental model based theory of
trust that not only can be used to infer trust, thus providing an alternative
to psychological or behavioral trust inference methods, but also can be used as
a foundation for any trust-aware decision-making frameworks. First, we
introduce what trust means according to our theory and then use the theory to
define trust evolution, human reliance and decision making, and a formalization
of the appropriate level of trust in the agent. Using human subject studies, we
compare our theory against one of the most common trust scales (Muir scale) to
evaluate 1) whether the observations from the human studies match our proposed
theory and 2) what aspects of trust are more aligned with our proposed theory.
|
[
{
"created": "Sun, 29 Jan 2023 22:36:37 GMT",
"version": "v1"
}
] |
2023-01-31
|
[
[
"Zahedi",
"Zahra",
""
],
[
"Sreedharan",
"Sarath",
""
],
[
"Kambhampati",
"Subbarao",
""
]
] |
Handling trust is one of the core requirements for facilitating effective interaction between the human and the AI agent. Thus, any decision-making framework designed to work with humans must possess the ability to estimate and leverage human trust. In this paper, we propose a mental model based theory of trust that not only can be used to infer trust, thus providing an alternative to psychological or behavioral trust inference methods, but also can be used as a foundation for any trust-aware decision-making frameworks. First, we introduce what trust means according to our theory and then use the theory to define trust evolution, human reliance and decision making, and a formalization of the appropriate level of trust in the agent. Using human subject studies, we compare our theory against one of the most common trust scales (Muir scale) to evaluate 1) whether the observations from the human studies match our proposed theory and 2) what aspects of trust are more aligned with our proposed theory.
|
2304.05659
|
Jiahao Wang
|
Jiahao Wang, Songyang Zhang, Yong Liu, Taiqiang Wu, Yujiu Yang, Xihui
Liu, Kai Chen, Ping Luo, Dahua Lin
|
RIFormer: Keep Your Vision Backbone Effective While Removing Token Mixer
|
8 pages, accepted by CVPR2023
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper studies how to keep a vision backbone effective while removing
token mixers in its basic building blocks. Token mixers, as self-attention for
vision transformers (ViTs), are intended to perform information communication
between different spatial tokens but suffer from considerable computational
cost and latency. However, directly removing them will lead to an incomplete
model structure prior, and thus brings a significant accuracy drop. To this
end, we first develop a RepIdentityFormer based on the re-parameterizing idea
to study the token-mixer-free model architecture. We then explore an
improved learning paradigm to break the limitation of the simple token-mixer-free
backbone, and summarize the empirical practice into 5 guidelines. Equipped with
the proposed optimization strategy, we are able to build an extremely simple
vision backbone with encouraging performance, while enjoying the high
efficiency during inference. Extensive experiments and ablative analysis also
demonstrate that the inductive bias of the network architecture can be
incorporated into a simple network structure with an appropriate optimization
strategy. We hope this work can serve as a starting point for the exploration
of optimization-driven efficient network design. Project page:
https://techmonsterwang.github.io/RIFormer/.
|
[
{
"created": "Wed, 12 Apr 2023 07:34:13 GMT",
"version": "v1"
}
] |
2023-04-13
|
[
[
"Wang",
"Jiahao",
""
],
[
"Zhang",
"Songyang",
""
],
[
"Liu",
"Yong",
""
],
[
"Wu",
"Taiqiang",
""
],
[
"Yang",
"Yujiu",
""
],
[
"Liu",
"Xihui",
""
],
[
"Chen",
"Kai",
""
],
[
"Luo",
"Ping",
""
],
[
"Lin",
"Dahua",
""
]
] |
This paper studies how to keep a vision backbone effective while removing token mixers in its basic building blocks. Token mixers, as self-attention for vision transformers (ViTs), are intended to perform information communication between different spatial tokens but suffer from considerable computational cost and latency. However, directly removing them will lead to an incomplete model structure prior, and thus brings a significant accuracy drop. To this end, we first develop a RepIdentityFormer based on the re-parameterizing idea to study the token-mixer-free model architecture. We then explore an improved learning paradigm to break the limitation of the simple token-mixer-free backbone, and summarize the empirical practice into 5 guidelines. Equipped with the proposed optimization strategy, we are able to build an extremely simple vision backbone with encouraging performance, while enjoying the high efficiency during inference. Extensive experiments and ablative analysis also demonstrate that the inductive bias of the network architecture can be incorporated into a simple network structure with an appropriate optimization strategy. We hope this work can serve as a starting point for the exploration of optimization-driven efficient network design. Project page: https://techmonsterwang.github.io/RIFormer/.
|
1002.0574
|
Aubin Lecointre
|
Aubin Lecointre (LAAS), Daniela Dragomirescu (LAAS), Robert Plana
(LAAS)
|
Channel Capacity Limitations versus Hardware Implementation for UWB
Impulse Radio Communications
| null |
Romanian Journal of Information Science and Technology (2009)
339-353
| null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Starting from the Shannon channel capacity, we propose an IR-UWB channel
capacity based on the delay spread for multipath time variant channels. This
IR-UWB channel capacity is obtained from the no ISI (Inter Symbol Interference)
assumption and for binary modulations. The impact of the kind of implementation
is considered on the IR-UWB channel capacity. This study is conducted for mixed and
mostly digital implementations. The key parameters and their impacts on the
channel capacity are presented in each case: the data converters for mostly
digital implementations and the pulse generator capabilities for mixed
implementations. Finally, these two implementations are compared from a data
rate point of view. Their behaviors regarding an increase of the operating
frequency are also studied.
|
[
{
"created": "Tue, 2 Feb 2010 20:06:12 GMT",
"version": "v1"
}
] |
2010-02-03
|
[
[
"Lecointre",
"Aubin",
"",
"LAAS"
],
[
"Dragomirescu",
"Daniela",
"",
"LAAS"
],
[
"Plana",
"Robert",
"",
"LAAS"
]
] |
Starting from the Shannon channel capacity, we propose an IR-UWB channel capacity based on the delay spread for multipath time variant channels. This IR-UWB channel capacity is obtained from the no ISI (Inter Symbol Interference) assumption and for binary modulations. The impact of the kind of implementation is considered on the IR-UWB channel capacity. This study is conducted for mixed and mostly digital implementations. The key parameters and their impacts on the channel capacity are presented in each case: the data converters for mostly digital implementations and the pulse generator capabilities for mixed implementations. Finally, these two implementations are compared from a data rate point of view. Their behaviors regarding an increase of the operating frequency are also studied.
|
2310.10928
|
Navin Kumar
|
Adam Valen Levinson, Abhay Goyal, Roger Ho Chun Man, Roy Ka-Wei Lee,
Koustuv Saha, Nimay Parekh, Frederick L. Altice, Lam Yin Cheung, Munmun De
Choudhury and Navin Kumar
|
Using Audio Data to Facilitate Depression Risk Assessment in Primary
Health Care
| null | null | null | null |
cs.HC cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Telehealth is a valuable tool for primary health care (PHC), where depression
is a common condition. PHC is the first point of contact for most people with
depression, but about 25% of diagnoses made by PHC physicians are inaccurate.
Many other barriers also hinder depression detection and treatment in PHC.
Artificial intelligence (AI) may help reduce depression misdiagnosis in PHC and
improve overall diagnosis and treatment outcomes. Telehealth consultations
often have video issues, such as poor connectivity or dropped calls. Audio-only
telehealth is often more practical for lower-income patients who may lack
stable internet connections. Thus, our study focused on using audio data to
predict depression risk. The objectives were to: 1) Collect audio data from 24
people (12 with depression and 12 without mental health or major health
condition diagnoses); 2) Build a machine learning model to predict depression
risk. TPOT, an autoML tool, was used to select the best machine learning
algorithm, which was the K-nearest neighbors classifier. The selected model had
high performance in classifying depression risk (Precision: 0.98, Recall: 0.93,
F1-Score: 0.96). These findings may lead to a range of tools to help screen for
and treat depression. By developing tools to detect depression risk, patients
can be routed to AI-driven chatbots for initial screenings. Partnerships with a
range of stakeholders are crucial to implementing these solutions. Moreover,
ethical considerations, especially around data privacy and potential biases in
AI models, need to be at the forefront of any AI-driven intervention in mental
health care.
|
[
{
"created": "Tue, 17 Oct 2023 01:55:49 GMT",
"version": "v1"
}
] |
2023-10-18
|
[
[
"Levinson",
"Adam Valen",
""
],
[
"Goyal",
"Abhay",
""
],
[
"Man",
"Roger Ho Chun",
""
],
[
"Lee",
"Roy Ka-Wei",
""
],
[
"Saha",
"Koustuv",
""
],
[
"Parekh",
"Nimay",
""
],
[
"Altice",
"Frederick L.",
""
],
[
"Cheung",
"Lam Yin",
""
],
[
"De Choudhury",
"Munmun",
""
],
[
"Kumar",
"Navin",
""
]
] |
Telehealth is a valuable tool for primary health care (PHC), where depression is a common condition. PHC is the first point of contact for most people with depression, but about 25% of diagnoses made by PHC physicians are inaccurate. Many other barriers also hinder depression detection and treatment in PHC. Artificial intelligence (AI) may help reduce depression misdiagnosis in PHC and improve overall diagnosis and treatment outcomes. Telehealth consultations often have video issues, such as poor connectivity or dropped calls. Audio-only telehealth is often more practical for lower-income patients who may lack stable internet connections. Thus, our study focused on using audio data to predict depression risk. The objectives were to: 1) Collect audio data from 24 people (12 with depression and 12 without mental health or major health condition diagnoses); 2) Build a machine learning model to predict depression risk. TPOT, an autoML tool, was used to select the best machine learning algorithm, which was the K-nearest neighbors classifier. The selected model had high performance in classifying depression risk (Precision: 0.98, Recall: 0.93, F1-Score: 0.96). These findings may lead to a range of tools to help screen for and treat depression. By developing tools to detect depression risk, patients can be routed to AI-driven chatbots for initial screenings. Partnerships with a range of stakeholders are crucial to implementing these solutions. Moreover, ethical considerations, especially around data privacy and potential biases in AI models, need to be at the forefront of any AI-driven intervention in mental health care.
|
2309.07668
|
Ankit Dhiman
|
Ankit Dhiman and R Srinath and Srinjay Sarkar and Lokesh R Boregowda
and R Venkatesh Babu
|
CoRF : Colorizing Radiance Fields using Knowledge Distillation
|
AI3DCC @ ICCV 2023
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Neural radiance field (NeRF) based methods enable high-quality novel-view
synthesis for multi-view images. This work presents a method for synthesizing
colorized novel views from input grey-scale multi-view images. When we apply
image or video-based colorization methods on the generated grey-scale novel
views, we observe artifacts due to inconsistency across views. Training a
radiance field network on the colorized grey-scale image sequence also does not
solve the 3D consistency issue. We propose a distillation based method to
transfer color knowledge from the colorization networks trained on natural
images to the radiance field network. Specifically, our method uses the
radiance field network as a 3D representation and transfers knowledge from
existing 2D colorization methods. The experimental results demonstrate that the
proposed method produces superior colorized novel views for indoor and outdoor
scenes compared to baselines, while maintaining cross-view consistency. Further, we
show the efficacy of our method on applications like colorization of radiance
field network trained from 1.) Infra-Red (IR) multi-view images and 2.) Old
grey-scale multi-view image sequences.
|
[
{
"created": "Thu, 14 Sep 2023 12:30:48 GMT",
"version": "v1"
}
] |
2023-09-15
|
[
[
"Dhiman",
"Ankit",
""
],
[
"Srinath",
"R",
""
],
[
"Sarkar",
"Srinjay",
""
],
[
"Boregowda",
"Lokesh R",
""
],
[
"Babu",
"R Venkatesh",
""
]
] |
Neural radiance field (NeRF) based methods enable high-quality novel-view synthesis for multi-view images. This work presents a method for synthesizing colorized novel views from input grey-scale multi-view images. When we apply image or video-based colorization methods on the generated grey-scale novel views, we observe artifacts due to inconsistency across views. Training a radiance field network on the colorized grey-scale image sequence also does not solve the 3D consistency issue. We propose a distillation based method to transfer color knowledge from the colorization networks trained on natural images to the radiance field network. Specifically, our method uses the radiance field network as a 3D representation and transfers knowledge from existing 2D colorization methods. The experimental results demonstrate that the proposed method produces superior colorized novel views for indoor and outdoor scenes compared to baselines, while maintaining cross-view consistency. Further, we show the efficacy of our method on applications like colorization of radiance field network trained from 1.) Infra-Red (IR) multi-view images and 2.) Old grey-scale multi-view image sequences.
|
1901.05754
|
Fernando Mac\'ias
|
Fernando Mac\'ias, Uwe Wolter, Adrian Rutle, Francisco Dur\'an,
Roberto Rodriguez-Echeverria
|
Multilevel Coupled Model Transformations for Precise and Reusable
Definition of Model Behaviour
|
Journal of Logical and Algebraic Methods in Programming. Available
online 11 January 2019. In Press, Accepted Manuscript
| null |
10.1016/j.jlamp.2018.12.005
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The use of Domain-Specific Languages (DSLs) is a promising field for the
development of tools tailored to specific problem spaces, effectively
diminishing the complexity of hand-made software. With the goal of making
models as precise, simple and reusable as possible, we augment DSLs with
concepts from multilevel modelling, where the number of abstraction levels is
not limited. This is particularly useful for DSL definitions with behaviour,
whose concepts inherently belong to different levels of abstraction. Here,
models can represent the state of the modelled system and evolve using model
transformations. These transformations can benefit from a multilevel setting,
becoming a precise and reusable definition of the semantics for behavioural
modelling languages. We present in this paper the concept of Multilevel Coupled
Model Transformations, together with examples, formal definitions and tools to
assess their conceptual soundness and practical value.
|
[
{
"created": "Thu, 17 Jan 2019 12:25:47 GMT",
"version": "v1"
}
] |
2019-01-18
|
[
[
"Macías",
"Fernando",
""
],
[
"Wolter",
"Uwe",
""
],
[
"Rutle",
"Adrian",
""
],
[
"Durán",
"Francisco",
""
],
[
"Rodriguez-Echeverria",
"Roberto",
""
]
] |
The use of Domain-Specific Languages (DSLs) is a promising field for the development of tools tailored to specific problem spaces, effectively diminishing the complexity of hand-made software. With the goal of making models as precise, simple and reusable as possible, we augment DSLs with concepts from multilevel modelling, where the number of abstraction levels is not limited. This is particularly useful for DSL definitions with behaviour, whose concepts inherently belong to different levels of abstraction. Here, models can represent the state of the modelled system and evolve using model transformations. These transformations can benefit from a multilevel setting, becoming a precise and reusable definition of the semantics for behavioural modelling languages. We present in this paper the concept of Multilevel Coupled Model Transformations, together with examples, formal definitions and tools to assess their conceptual soundness and practical value.
|
2407.07719
|
Baptiste CHATELIER
|
Baptiste Chatelier (IETR, MERCE-France, INSA Rennes), Vincent Corlay
(MERCE-France), Matthieu Crussi\`ere (IETR, INSA Rennes), Luc Le Magoarou
(IETR, INSA Rennes)
|
Model-based learning for multi-antenna multi-frequency
location-to-channel mapping
| null | null | null | null |
cs.IT cs.AI cs.LG eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Years of study of the propagation channel showed a close relation between a
location and the associated communication channel response. The use of a neural
network to learn the location-to-channel mapping can therefore be envisioned.
The Implicit Neural Representation (INR) literature showed that classical
neural architectures are biased towards learning low-frequency content, making
learning the location-to-channel mapping a non-trivial problem. Indeed, it is
well known that this mapping is a function rapidly varying with the location,
on the order of the wavelength. This paper leverages the model-based machine
learning paradigm to derive a problem-specific neural architecture from a
propagation channel model. The resulting architecture efficiently overcomes the
spectral-bias issue. It only learns low-frequency sparse correction terms
activating a dictionary of high-frequency components. The proposed architecture
is evaluated against classical INR architectures on realistic synthetic data,
showing much better accuracy. Its mapping learning performance is explained
based on the approximated channel model, highlighting the explainability of the
model-based machine learning paradigm.
|
[
{
"created": "Mon, 17 Jun 2024 13:09:25 GMT",
"version": "v1"
},
{
"created": "Mon, 15 Jul 2024 06:54:53 GMT",
"version": "v2"
}
] |
2024-07-16
|
[
[
"Chatelier",
"Baptiste",
"",
"IETR, MERCE-France, INSA Rennes"
],
[
"Corlay",
"Vincent",
"",
"MERCE-France"
],
[
"Crussière",
"Matthieu",
"",
"IETR, INSA Rennes"
],
[
"Magoarou",
"Luc Le",
"",
"IETR, INSA Rennes"
]
] |
Years of study of the propagation channel showed a close relation between a location and the associated communication channel response. The use of a neural network to learn the location-to-channel mapping can therefore be envisioned. The Implicit Neural Representation (INR) literature showed that classical neural architectures are biased towards learning low-frequency content, making learning the location-to-channel mapping a non-trivial problem. Indeed, it is well known that this mapping is a function rapidly varying with the location, on the order of the wavelength. This paper leverages the model-based machine learning paradigm to derive a problem-specific neural architecture from a propagation channel model. The resulting architecture efficiently overcomes the spectral-bias issue. It only learns low-frequency sparse correction terms activating a dictionary of high-frequency components. The proposed architecture is evaluated against classical INR architectures on realistic synthetic data, showing much better accuracy. Its mapping learning performance is explained based on the approximated channel model, highlighting the explainability of the model-based machine learning paradigm.
|
2306.06742
|
Carlos Baquero
|
Ana Rodrigues, Ariel Shtul, Carlos Baquero, Paulo S\'ergio Almeida
|
Time-limited Bloom Filter
|
This version extends the 4-page version published in ACM SAC 2023 and
adds a section on Experimental Evaluation
| null | null | null |
cs.DS cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A Bloom Filter is a probabilistic data structure designed to check, rapidly
and memory-efficiently, whether an element is present in a set. It has been
widely used in various computing areas, and several variants, allowing
deletions, dynamic sets and working with sliding windows, have surfaced over
the years. When summarizing data streams, it becomes relevant to identify the
more recent elements in the stream. However, most of the sliding window schemes
consider the most recent items of a data stream without considering time as a
factor. While this allows, e.g., storing the most recent 10000 elements, it
does not easily translate into storing elements received in the last 60
seconds, unless the insertion rate is stable and known in advance. In this
paper, we present the Time-limited Bloom Filter, a new BF-based approach that
can save information of a given time period and correctly identify it as
present when queried, while also being able to retire data when it becomes
stale. The approach supports variable insertion rates while striving to keep a
target false positive rate. We also make available a reference implementation
of the data structure as a Redis module.
|
[
{
"created": "Sun, 11 Jun 2023 18:33:45 GMT",
"version": "v1"
}
] |
2023-06-13
|
[
[
"Rodrigues",
"Ana",
""
],
[
"Shtul",
"Ariel",
""
],
[
"Baquero",
"Carlos",
""
],
[
"Almeida",
"Paulo Sérgio",
""
]
] |
A Bloom Filter is a probabilistic data structure designed to check, rapidly and memory-efficiently, whether an element is present in a set. It has been widely used in various computing areas, and several variants, allowing deletions, dynamic sets and working with sliding windows, have surfaced over the years. When summarizing data streams, it becomes relevant to identify the more recent elements in the stream. However, most of the sliding window schemes consider the most recent items of a data stream without considering time as a factor. While this allows, e.g., storing the most recent 10000 elements, it does not easily translate into storing elements received in the last 60 seconds, unless the insertion rate is stable and known in advance. In this paper, we present the Time-limited Bloom Filter, a new BF-based approach that can save information of a given time period and correctly identify it as present when queried, while also being able to retire data when it becomes stale. The approach supports variable insertion rates while striving to keep a target false positive rate. We also make available a reference implementation of the data structure as a Redis module.
|
1709.00029
|
Patrick Helber
|
Patrick Helber, Benjamin Bischke, Andreas Dengel, Damian Borth
|
EuroSAT: A Novel Dataset and Deep Learning Benchmark for Land Use and
Land Cover Classification
| null | null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we address the challenge of land use and land cover
classification using Sentinel-2 satellite images. The Sentinel-2 satellite
images are openly and freely accessible, provided in the Earth observation
program Copernicus. We present a novel dataset based on Sentinel-2 satellite
images covering 13 spectral bands and consisting of 10 classes with a total of
27,000 labeled and geo-referenced images. We provide benchmarks for this
novel dataset with its spectral bands using state-of-the-art deep Convolutional
Neural Networks (CNNs). With the proposed novel dataset, we achieved an overall
classification accuracy of 98.57%. The resulting classification system opens a
gate towards a number of Earth observation applications. We demonstrate how
this classification system can be used for detecting land use and land cover
changes and how it can assist in improving geographical maps. The
geo-referenced dataset EuroSAT is made publicly available at
https://github.com/phelber/eurosat.
|
[
{
"created": "Thu, 31 Aug 2017 18:19:10 GMT",
"version": "v1"
},
{
"created": "Fri, 1 Feb 2019 09:51:18 GMT",
"version": "v2"
}
] |
2019-02-04
|
[
[
"Helber",
"Patrick",
""
],
[
"Bischke",
"Benjamin",
""
],
[
"Dengel",
"Andreas",
""
],
[
"Borth",
"Damian",
""
]
] |
In this paper, we address the challenge of land use and land cover classification using Sentinel-2 satellite images. The Sentinel-2 satellite images are openly and freely accessible, provided in the Earth observation program Copernicus. We present a novel dataset based on Sentinel-2 satellite images covering 13 spectral bands and consisting of 10 classes with a total of 27,000 labeled and geo-referenced images. We provide benchmarks for this novel dataset with its spectral bands using state-of-the-art deep Convolutional Neural Networks (CNNs). With the proposed novel dataset, we achieved an overall classification accuracy of 98.57%. The resulting classification system opens a gate towards a number of Earth observation applications. We demonstrate how this classification system can be used for detecting land use and land cover changes and how it can assist in improving geographical maps. The geo-referenced dataset EuroSAT is made publicly available at https://github.com/phelber/eurosat.
|
1801.06542
|
Fengrong Zhang
|
Sihem Mesnager, Fengrong Zhang, Chunming Tang, Yong Zhou
|
Further study on the maximum number of bent components of vectorial
functions
|
17 pages
| null | null | null |
cs.IT cs.CR math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In 2018, Pott et al. studied [IEEE Transactions on Information
Theory, Volume 64, Issue 1, 2018] the maximum number of bent components of
a vectorial function. They presented several nice results and suggested
several open problems in this context. This paper continues
their study: we solve two open problems raised by Pott et al. and
partially solve a third open problem raised by the same authors. Firstly, we prove
that for a vectorial function, the property of having the maximum number of
bent components is invariant under the so-called CCZ equivalence. Secondly, we
prove the non-existence of APN plateaued functions having the maximum number of bent
components. In particular, quadratic APN functions cannot have the maximum
number of bent components. Thirdly, we present some sufficient conditions under which
the vectorial function defined from $\mathbb{F}_{2^{2k}}$ to
$\mathbb{F}_{2^{2k}}$ by its univariate representation:
$$ \alpha
x^{2^i}\left(x+x^{2^k}+\sum\limits_{j=1}^{\rho}\gamma^{(j)}x^{2^{t_j}}
+\sum\limits_{j=1}^{\rho}\gamma^{(j)}x^{2^{t_j+k}}\right)$$ has the maximum
number of bent components, where $\rho\leq k$. Further, we show
that the differential spectrum of the function $
x^{2^i}(x+x^{2^k}+x^{2^{t_1}}+x^{2^{t_1+k}}+x^{2^{t_2}}+x^{2^{t_2+k}})$ (where
$i,t_1,t_2$ satisfy some conditions) is different from that of the binomial function
$F^i(x)= x^{2^i}(x+x^{2^k})$ presented in the article of Pott et al.
Finally, we provide necessary and sufficient conditions under which the functions
$$Tr_1^{2k}\left(\alpha
x^{2^i}\left(Tr^{2k}_{e}(x)+\sum\limits_{j=1}^{\rho}\gamma^{(j)}(Tr^{2k}_{e}(x))^{2^j}
\right)\right) $$ are bent.
|
[
{
"created": "Sun, 21 Jan 2018 11:58:09 GMT",
"version": "v1"
}
] |
2018-01-23
|
[
[
"Mesnager",
"Sihem",
""
],
[
"Zhang",
"Fengrong",
""
],
[
"Tang",
"Chunming",
""
],
[
"Zhou",
"Yong",
""
]
] |
In 2018, Pott et al. studied [IEEE Transactions on Information Theory, Volume 64, Issue 1, 2018] the maximum number of bent components of a vectorial function. They presented several nice results and suggested several open problems in this context. This paper continues their study: we solve two open problems raised by Pott et al. and partially solve a third open problem raised by the same authors. Firstly, we prove that for a vectorial function, the property of having the maximum number of bent components is invariant under the so-called CCZ equivalence. Secondly, we prove the non-existence of APN plateaued functions having the maximum number of bent components. In particular, quadratic APN functions cannot have the maximum number of bent components. Thirdly, we present some sufficient conditions under which the vectorial function defined from $\mathbb{F}_{2^{2k}}$ to $\mathbb{F}_{2^{2k}}$ by its univariate representation: $$ \alpha x^{2^i}\left(x+x^{2^k}+\sum\limits_{j=1}^{\rho}\gamma^{(j)}x^{2^{t_j}} +\sum\limits_{j=1}^{\rho}\gamma^{(j)}x^{2^{t_j+k}}\right)$$ has the maximum number of bent components, where $\rho\leq k$. Further, we show that the differential spectrum of the function $ x^{2^i}(x+x^{2^k}+x^{2^{t_1}}+x^{2^{t_1+k}}+x^{2^{t_2}}+x^{2^{t_2+k}})$ (where $i,t_1,t_2$ satisfy some conditions) is different from that of the binomial function $F^i(x)= x^{2^i}(x+x^{2^k})$ presented in the article of Pott et al. Finally, we provide necessary and sufficient conditions under which the functions $$Tr_1^{2k}\left(\alpha x^{2^i}\left(Tr^{2k}_{e}(x)+\sum\limits_{j=1}^{\rho}\gamma^{(j)}(Tr^{2k}_{e}(x))^{2^j} \right)\right) $$ are bent.
|
1509.00268
|
Michael Kallitsis
|
Michael Kallitsis, Stilian Stoev, Shrijita Bhattacharya, George
Michailidis
|
AMON: An Open Source Architecture for Online Monitoring, Statistical
Analysis and Forensics of Multi-gigabit Streams
| null | null | null | null |
cs.NI math.PR math.ST stat.TH
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Internet, as a global system of interconnected networks, carries an
extensive array of information resources and services. Key requirements include
good quality-of-service and protection of the infrastructure from nefarious
activity (e.g. distributed denial of service--DDoS--attacks). Network
monitoring is essential to network engineering, capacity planning and
prevention / mitigation of threats. We develop an open source architecture,
AMON (All-packet MONitor), for online monitoring and analysis of multi-gigabit
network streams. It leverages the high-performance packet monitor PF RING and
is readily deployable on commodity hardware. AMON examines all packets,
partitions traffic into sub-streams by using rapid hashing and computes certain
real-time data products. The resulting data structures provide views of the
intensity and connectivity structure of network traffic at the time-scale of
routing. The proposed integrated framework includes modules for the
identification of heavy-hitters as well as for visualization and statistical
detection at the time-of-onset of high impact events such as DDoS. This allows
operators to quickly visualize and diagnose attacks, and limit offline and
time-consuming post-mortem analysis. We demonstrate our system in the context of
real-world attack incidents, and validate it against state-of-the-art
alternatives. AMON has been deployed and is currently processing 10Gbps+ live
Internet traffic at Merit Network. It is extensible and allows the addition of
further statistical and filtering modules for real-time forensics.
|
[
{
"created": "Tue, 1 Sep 2015 13:00:10 GMT",
"version": "v1"
},
{
"created": "Wed, 27 Jan 2016 13:32:51 GMT",
"version": "v2"
}
] |
2016-08-02
|
[
[
"Kallitsis",
"Michael",
""
],
[
"Stoev",
"Stilian",
""
],
[
"Bhattacharya",
"Shrijita",
""
],
[
"Michailidis",
"George",
""
]
] |
The Internet, as a global system of interconnected networks, carries an extensive array of information resources and services. Key requirements include good quality-of-service and protection of the infrastructure from nefarious activity (e.g. distributed denial of service--DDoS--attacks). Network monitoring is essential to network engineering, capacity planning and prevention / mitigation of threats. We develop an open source architecture, AMON (All-packet MONitor), for online monitoring and analysis of multi-gigabit network streams. It leverages the high-performance packet monitor PF RING and is readily deployable on commodity hardware. AMON examines all packets, partitions traffic into sub-streams by using rapid hashing and computes certain real-time data products. The resulting data structures provide views of the intensity and connectivity structure of network traffic at the time-scale of routing. The proposed integrated framework includes modules for the identification of heavy-hitters as well as for visualization and statistical detection at the time-of-onset of high impact events such as DDoS. This allows operators to quickly visualize and diagnose attacks, and limit offline and time-consuming post-mortem analysis. We demonstrate our system in the context of real-world attack incidents, and validate it against state-of-the-art alternatives. AMON has been deployed and is currently processing 10Gbps+ live Internet traffic at Merit Network. It is extensible and allows the addition of further statistical and filtering modules for real-time forensics.
|
1404.0846
|
EPTCS
|
Fenglin Han, Jan Olaf Blech, Peter Herrmann, Heinz Schmidt
|
Towards Verifying Safety Properties of Real-Time Probabilistic Systems
|
In Proceedings FESCA 2014, arXiv:1404.0436
|
EPTCS 147, 2014, pp. 1-15
|
10.4204/EPTCS.147.1
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Using probabilities in the formal-methods-based development of
safety-critical software has quickened interest in academia and industry. We
address this area by our model-driven engineering method for reactive systems
SPACE and its tool-set Reactive Blocks that provide an extension to support the
modeling and verification of real-time behaviors. The approach facilitates the
composition of system models from reusable building blocks as well as the
verification of functional and real-time properties and the automatic
generation of Java code.
In this paper, we describe the extension of the tool-set to enable the
modeling and verification of probabilistic real-time system behavior with the
focus on spatial properties that ensure system safety. In particular, we
incorporate descriptions of probabilistic behavior into our Reactive Blocks
models and integrate the model checker PRISM, which makes it possible to verify that a
real-time system satisfies certain safety properties with a given probability.
Moreover, we consider the spatial implication of probabilistic system
specifications by integrating the spatial verification tool BeSpaceD and give
an automatic approach to translate system specifications to the input languages
of PRISM and BeSpaceD. The approach is highlighted by an example.
|
[
{
"created": "Thu, 3 Apr 2014 10:43:37 GMT",
"version": "v1"
}
] |
2014-04-04
|
[
[
"Han",
"Fenglin",
""
],
[
"Blech",
"Jan Olaf",
""
],
[
"Herrmann",
"Peter",
""
],
[
"Schmidt",
"Heinz",
""
]
] |
Using probabilities in the formal-methods-based development of safety-critical software has quickened interest in academia and industry. We address this area by our model-driven engineering method for reactive systems SPACE and its tool-set Reactive Blocks that provide an extension to support the modeling and verification of real-time behaviors. The approach facilitates the composition of system models from reusable building blocks as well as the verification of functional and real-time properties and the automatic generation of Java code. In this paper, we describe the extension of the tool-set to enable the modeling and verification of probabilistic real-time system behavior with the focus on spatial properties that ensure system safety. In particular, we incorporate descriptions of probabilistic behavior into our Reactive Blocks models and integrate the model checker PRISM, which makes it possible to verify that a real-time system satisfies certain safety properties with a given probability. Moreover, we consider the spatial implication of probabilistic system specifications by integrating the spatial verification tool BeSpaceD and give an automatic approach to translate system specifications to the input languages of PRISM and BeSpaceD. The approach is highlighted by an example.
|
1709.02291
|
Monika Doerfler
|
Monika Doerfler, Thomas Grill, Roswitha Bammer, Arthur Flexer
|
Basic Filters for Convolutional Neural Networks Applied to Music:
Training or Design?
|
Completely revised version; 21 pages, 4 figures
| null | null | null |
cs.LG cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
When convolutional neural networks are used to tackle learning problems based
on music or, more generally, time series data, raw one-dimensional data are
commonly pre-processed to obtain spectrogram or mel-spectrogram coefficients,
which are then used as input to the actual neural network. In this
contribution, we investigate, both theoretically and experimentally, the
influence of this pre-processing step on the network's performance and pose the
question whether replacing it by applying adaptive or learned filters directly
to the raw data can improve learning success. The theoretical results show
that approximately reproducing mel-spectrogram coefficients by applying
adaptive filters and subsequent time-averaging is in principle possible. We
also conducted extensive experimental work on the task of singing voice
detection in music. The results of these experiments show that for
classification based on Convolutional Neural Networks the features obtained
from adaptive filter banks followed by time-averaging perform better than the
canonical Fourier-transform-based mel-spectrogram coefficients. Alternative
adaptive approaches with center frequencies or time-averaging lengths learned
from training data perform equally well.
|
[
{
"created": "Thu, 7 Sep 2017 14:51:37 GMT",
"version": "v1"
},
{
"created": "Thu, 14 Dec 2017 11:25:18 GMT",
"version": "v2"
},
{
"created": "Wed, 19 Sep 2018 14:30:19 GMT",
"version": "v3"
}
] |
2018-09-20
|
[
[
"Doerfler",
"Monika",
""
],
[
"Grill",
"Thomas",
""
],
[
"Bammer",
"Roswitha",
""
],
[
"Flexer",
"Arthur",
""
]
] |
When convolutional neural networks are used to tackle learning problems based on music or, more generally, time series data, raw one-dimensional data are commonly pre-processed to obtain spectrogram or mel-spectrogram coefficients, which are then used as input to the actual neural network. In this contribution, we investigate, both theoretically and experimentally, the influence of this pre-processing step on the network's performance and pose the question whether replacing it by applying adaptive or learned filters directly to the raw data can improve learning success. The theoretical results show that approximately reproducing mel-spectrogram coefficients by applying adaptive filters and subsequent time-averaging is in principle possible. We also conducted extensive experimental work on the task of singing voice detection in music. The results of these experiments show that for classification based on Convolutional Neural Networks the features obtained from adaptive filter banks followed by time-averaging perform better than the canonical Fourier-transform-based mel-spectrogram coefficients. Alternative adaptive approaches with center frequencies or time-averaging lengths learned from training data perform equally well.
|
2007.15176
|
Sujoy Paul
|
Sujoy Paul, Yi-Hsuan Tsai, Samuel Schulter, Amit K. Roy-Chowdhury,
Manmohan Chandraker
|
Domain Adaptive Semantic Segmentation Using Weak Labels
|
ECCV 2020
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Learning semantic segmentation models requires a huge amount of pixel-wise
labeling. However, labeled data may only be available abundantly in a domain
different from the desired target domain, which only has minimal or no
annotations. In this work, we propose a novel framework for domain adaptation
in semantic segmentation with image-level weak labels in the target domain. The
weak labels may be obtained based on a model prediction for unsupervised domain
adaptation (UDA), or from a human annotator in a new weakly-supervised domain
adaptation (WDA) paradigm for semantic segmentation. Using weak labels is both
practical and useful, since (i) collecting image-level target annotations is
comparably cheap in WDA and incurs no cost in UDA, and (ii) it opens the
opportunity for category-wise domain alignment. Our framework uses weak labels
to enable the interplay between feature alignment and pseudo-labeling,
improving both in the process of domain adaptation. Specifically, we develop a
weak-label classification module to enforce the network to attend to certain
categories, and then use such training signals to guide the proposed
category-wise alignment method. In experiments, we show considerable
improvements with respect to the existing state of the art in UDA and present
a new benchmark in the WDA setting. Project page is at
http://www.nec-labs.com/~mas/WeakSegDA.
|
[
{
"created": "Thu, 30 Jul 2020 01:33:57 GMT",
"version": "v1"
},
{
"created": "Wed, 12 Aug 2020 10:05:48 GMT",
"version": "v2"
}
] |
2020-08-13
|
[
[
"Paul",
"Sujoy",
""
],
[
"Tsai",
"Yi-Hsuan",
""
],
[
"Schulter",
"Samuel",
""
],
[
"Roy-Chowdhury",
"Amit K.",
""
],
[
"Chandraker",
"Manmohan",
""
]
] |
Learning semantic segmentation models requires a huge amount of pixel-wise labeling. However, labeled data may only be available abundantly in a domain different from the desired target domain, which only has minimal or no annotations. In this work, we propose a novel framework for domain adaptation in semantic segmentation with image-level weak labels in the target domain. The weak labels may be obtained based on a model prediction for unsupervised domain adaptation (UDA), or from a human annotator in a new weakly-supervised domain adaptation (WDA) paradigm for semantic segmentation. Using weak labels is both practical and useful, since (i) collecting image-level target annotations is comparably cheap in WDA and incurs no cost in UDA, and (ii) it opens the opportunity for category-wise domain alignment. Our framework uses weak labels to enable the interplay between feature alignment and pseudo-labeling, improving both in the process of domain adaptation. Specifically, we develop a weak-label classification module to enforce the network to attend to certain categories, and then use such training signals to guide the proposed category-wise alignment method. In experiments, we show considerable improvements with respect to the existing state of the art in UDA and present a new benchmark in the WDA setting. Project page is at http://www.nec-labs.com/~mas/WeakSegDA.
|
1806.04765
|
Adon Phillips
|
Adon Phillips, Iris Teo, Jochen Lang
|
Fully Convolutional Network for Melanoma Diagnostics
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work seeks to determine how modern machine learning techniques may be
applied to the previously unexplored topic of melanoma diagnostics using
digital pathology. We curated a new dataset of 50 patient cases of cutaneous
melanoma using digital pathology. We provide gold standard annotations for
three tissue types (tumour, epidermis, and dermis) which are important for the
prognostic measurements known as Breslow thickness and Clark level. Then, we
devised a novel multi-stride fully convolutional network (FCN) architecture
that outperformed other networks trained and evaluated using the same data
according to standard metrics. Finally, we trained a model to detect and
localize the target tissue types. When processing previously unseen cases, our
model's output is qualitatively very similar to the gold standard. In addition
to the standard metrics computed as a baseline for our approach, we asked three
additional pathologists to measure the Breslow thickness on the network's
output. Their responses were diagnostically equivalent to the ground truth
measurements, and when removing cases where a measurement was not appropriate,
inter-rater reliability (IRR) between the four pathologists was 75.0%. Given
the qualitative and quantitative results, it is possible to overcome the
discriminative challenges of the skin and tumour anatomy for segmentation using
modern machine learning techniques, though more work is required to improve the
network's performance on dermis segmentation. Further, we show that it is
possible to achieve a level of accuracy required to manually perform the
Breslow thickness measurement.
|
[
{
"created": "Tue, 12 Jun 2018 20:53:55 GMT",
"version": "v1"
}
] |
2018-06-14
|
[
[
"Phillips",
"Adon",
""
],
[
"Teo",
"Iris",
""
],
[
"Lang",
"Jochen",
""
]
] |
This work seeks to determine how modern machine learning techniques may be applied to the previously unexplored topic of melanoma diagnostics using digital pathology. We curated a new dataset of 50 patient cases of cutaneous melanoma using digital pathology. We provide gold standard annotations for three tissue types (tumour, epidermis, and dermis) which are important for the prognostic measurements known as Breslow thickness and Clark level. Then, we devised a novel multi-stride fully convolutional network (FCN) architecture that outperformed other networks trained and evaluated using the same data according to standard metrics. Finally, we trained a model to detect and localize the target tissue types. When processing previously unseen cases, our model's output is qualitatively very similar to the gold standard. In addition to the standard metrics computed as a baseline for our approach, we asked three additional pathologists to measure the Breslow thickness on the network's output. Their responses were diagnostically equivalent to the ground truth measurements, and when removing cases where a measurement was not appropriate, inter-rater reliability (IRR) between the four pathologists was 75.0%. Given the qualitative and quantitative results, it is possible to overcome the discriminative challenges of the skin and tumour anatomy for segmentation using modern machine learning techniques, though more work is required to improve the network's performance on dermis segmentation. Further, we show that it is possible to achieve a level of accuracy required to manually perform the Breslow thickness measurement.
|
2402.02401
|
Jincao Yao
|
Jincao Yao (1 and 2 and 3 and 4 and 5 and 6), Yunpeng Wang (7), Zhikai
Lei (8), Kai Wang (9), Xiaoxian Li (10), Jianhua Zhou (10), Xiang Hao (7),
Jiafei Shen (1 and 2), Zhenping Wang (9), Rongrong Ru (11), Yaqing Chen (11),
Yahan Zhou (6), Chen Chen (1 and 2), Yanming Zhang (12 and 13), Ping Liang
(14), Dong Xu (1 and 2 and 3 and 4 and 5 and 6) ((1) Department of Radiology,
Zhejiang Cancer Hospital, Hangzhou, 310022, China (2) Hangzhou Institute of
Medicine (HIM), Chinese Academy of Sciences, Hangzhou, 310000, China,(3) Key
Laboratory of Head and Neck Cancer Translational Research of Zhejiang
Province, Hangzhou, 310022, China,(4) Zhejiang Provincial Research Center for
Cancer Intelligent Diagnosis and Molecular Technology, Hangzhou, 310000,
China, (5) Wenling Medical Big Data and Artificial Intelligence Research
Institute, 24th Floor, Machang Road, Taizhou, 310061, China,(6) Taizhou Key
Laboratory of Minimally Invasive Interventional Therapy and Artificial
Intelligence, Taizhou Campus of Zhejiang Cancer Hospital (Taizhou Cancer
Hospital), Taizhou, 317502, China,(7) College of Optical Science and
Engineering, Zhejiang University, No.38 of Zheda Road, Hangzhou, Zhejiang
Province, China,(8) Zhejiang Provincial Hospital of Chinese Medicine, 54
Youdian Road, Hangzhou, 310003, China,(9) Department of Ultrasound, The
Affiliated Dongyang Hospital of Wenzhou Medical University, Dongyang, 322100,
China,(10) Department of Ultrasound, Sun Yat sen University Cancer Center,
State Key Laboratory of Oncology in South China, Collaborative Innovation
Center for Cancer Medicine, Guangzhou, 510060, China, (11) Affiliated
Xiaoshan Hospital, Hangzhou Normal University, No.728 North Yucai Road,
Hangzhou, 311202, China,(12) Zhejiang Provincial People's Hospital Affiliated
People's Hospital, Hangzhou Medical College, Hangzhou, 314408, China,(13) Key
Laboratory of Endocrine Gland Diseases of Zhejiang Province, Hangzhou,
314408, China,(14) Department of Ultrasound, Chinese PLA General Hospital,
Chinese PLA Medical School, Beijing, 100853, China)
|
AI-Generated Content Enhanced Computer-Aided Diagnosis Model for Thyroid
Nodules: A ChatGPT-Style Assistant
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An artificial intelligence-generated content-enhanced computer-aided
diagnosis (AIGC-CAD) model, designated as ThyGPT, has been developed. This
model, inspired by the architecture of ChatGPT, could assist radiologists in
assessing the risk of thyroid nodules through semantic-level human-machine
interaction. A dataset comprising 19,165 thyroid nodule ultrasound cases from
Zhejiang Cancer Hospital was assembled to facilitate the training and
validation of the model. After training, ThyGPT could automatically evaluate
thyroid nodules and engage in effective communication with physicians through
human-computer interaction. The performance of ThyGPT was rigorously quantified
using established metrics such as the receiver operating characteristic (ROC)
curve, area under the curve (AUC), sensitivity, and specificity. The empirical
findings revealed that radiologists, when supplemented with ThyGPT, markedly
surpassed the diagnostic acumen of their peers utilizing traditional methods as
well as the performance of the model in isolation. These findings suggest that
AIGC-CAD systems, exemplified by ThyGPT, hold the promise to fundamentally
transform the diagnostic workflows of radiologists in forthcoming years.
|
[
{
"created": "Sun, 4 Feb 2024 08:24:13 GMT",
"version": "v1"
}
] |
2024-02-07
|
[
[
"Yao",
"Jincao",
"",
"1 and 2 and 3 and 4 and 5 and 6"
],
[
"Wang",
"Yunpeng",
"",
"1 and 2"
],
[
"Lei",
"Zhikai",
"",
"1 and 2"
],
[
"Wang",
"Kai",
"",
"1 and 2"
],
[
"Li",
"Xiaoxian",
"",
"1 and 2"
],
[
"Zhou",
"Jianhua",
"",
"1 and 2"
],
[
"Hao",
"Xiang",
"",
"1 and 2"
],
[
"Shen",
"Jiafei",
"",
"1 and 2"
],
[
"Wang",
"Zhenping",
"",
"1 and 2"
],
[
"Ru",
"Rongrong",
"",
"1 and 2"
],
[
"Chen",
"Yaqing",
"",
"1 and 2"
],
[
"Zhou",
"Yahan",
"",
"1 and 2"
],
[
"Chen",
"Chen",
"",
"1 and 2"
],
[
"Zhang",
"Yanming",
"",
"12 and 13"
],
[
"Liang",
"Ping",
"",
"1 and 2 and 3 and 4 and 5 and 6"
],
[
"Xu",
"Dong",
"",
"1 and 2 and 3 and 4 and 5 and 6"
]
] |
An artificial intelligence-generated content-enhanced computer-aided diagnosis (AIGC-CAD) model, designated as ThyGPT, has been developed. This model, inspired by the architecture of ChatGPT, could assist radiologists in assessing the risk of thyroid nodules through semantic-level human-machine interaction. A dataset comprising 19,165 thyroid nodule ultrasound cases from Zhejiang Cancer Hospital was assembled to facilitate the training and validation of the model. After training, ThyGPT could automatically evaluate thyroid nodules and engage in effective communication with physicians through human-computer interaction. The performance of ThyGPT was rigorously quantified using established metrics such as the receiver operating characteristic (ROC) curve, area under the curve (AUC), sensitivity, and specificity. The empirical findings revealed that radiologists, when supplemented with ThyGPT, markedly surpassed the diagnostic acumen of their peers utilizing traditional methods as well as the performance of the model in isolation. These findings suggest that AIGC-CAD systems, exemplified by ThyGPT, hold the promise to fundamentally transform the diagnostic workflows of radiologists in forthcoming years.
|
2212.02758
|
Yutong Dai
|
Yutong Dai, Zeyuan Chen, Junnan Li, Shelby Heinecke, Lichao Sun, Ran
Xu
|
Tackling Data Heterogeneity in Federated Learning with Class Prototypes
|
Accepted for presentation at AAAI 2023. This is a technical report
version that contains an appendix with additional details about experiments
and proofs for technical results. Grant information is also added
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Data heterogeneity across clients in federated learning (FL) settings is a
widely acknowledged challenge. In response, personalized federated learning
(PFL) emerged as a framework to curate local models for clients' tasks. In PFL,
a common strategy is to develop local and global models jointly - the global
model (for generalization) informs the local models, and the local models (for
personalization) are aggregated to update the global model. A key observation
is that if we can improve the generalization ability of local models, then we
can improve the generalization of global models, which in turn builds better
personalized models. In this work, we consider class imbalance, an overlooked
type of data heterogeneity, in the classification setting. We propose FedNH, a
novel method that improves the local models' performance for both
personalization and generalization by combining the uniformity and semantics of
class prototypes. FedNH initially distributes class prototypes uniformly in the
latent space and smoothly infuses the class semantics into class prototypes. We
show that imposing uniformity helps to combat prototype collapse while infusing
class semantics improves local models. Extensive experiments were conducted on
popular classification datasets under the cross-device setting. Our results
demonstrate the effectiveness and stability of our method over recent works.
|
[
{
"created": "Tue, 6 Dec 2022 05:15:38 GMT",
"version": "v1"
},
{
"created": "Tue, 26 Dec 2023 04:56:33 GMT",
"version": "v2"
}
] |
2023-12-27
|
[
[
"Dai",
"Yutong",
""
],
[
"Chen",
"Zeyuan",
""
],
[
"Li",
"Junnan",
""
],
[
"Heinecke",
"Shelby",
""
],
[
"Sun",
"Lichao",
""
],
[
"Xu",
"Ran",
""
]
] |
Data heterogeneity across clients in federated learning (FL) settings is a widely acknowledged challenge. In response, personalized federated learning (PFL) emerged as a framework to curate local models for clients' tasks. In PFL, a common strategy is to develop local and global models jointly - the global model (for generalization) informs the local models, and the local models (for personalization) are aggregated to update the global model. A key observation is that if we can improve the generalization ability of local models, then we can improve the generalization of global models, which in turn builds better personalized models. In this work, we consider class imbalance, an overlooked type of data heterogeneity, in the classification setting. We propose FedNH, a novel method that improves the local models' performance for both personalization and generalization by combining the uniformity and semantics of class prototypes. FedNH initially distributes class prototypes uniformly in the latent space and smoothly infuses the class semantics into class prototypes. We show that imposing uniformity helps to combat prototype collapse while infusing class semantics improves local models. Extensive experiments were conducted on popular classification datasets under the cross-device setting. Our results demonstrate the effectiveness and stability of our method over recent works.
|
2212.10818
|
Yui Sudo
|
Yui Sudo, Muhammad Shakeel, Brian Yan, Jiatong Shi, Shinji Watanabe
|
4D ASR: Joint modeling of CTC, Attention, Transducer, and Mask-Predict
decoders
|
Accepted by INTERRSPEECH2023
| null | null | null |
cs.SD cs.CL eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
The network architecture of end-to-end (E2E) automatic speech recognition
(ASR) can be classified into several models, including connectionist temporal
classification (CTC), recurrent neural network transducer (RNN-T), attention
mechanism, and non-autoregressive mask-predict models. Since each of these
network architectures has pros and cons, a typical use case is to switch these
separate models depending on the application requirement, resulting in the
increased overhead of maintaining all models. Several methods for integrating
two of these complementary models to mitigate the overhead issue have been
proposed; however, if we integrate more models, we will further benefit from
these complementary models and realize broader applications with a single
system. This paper proposes four-decoder joint modeling (4D) of CTC, attention,
RNN-T, and mask-predict, which has the following three advantages: 1) The four
decoders are jointly trained so that they can be easily switched depending on
the application scenarios. 2) Joint training may bring model regularization and
improve the model robustness thanks to their complementary properties. 3) Novel
one-pass joint decoding methods using CTC, attention, and RNN-T further
improves the performance. The experimental results showed that the proposed
model consistently reduced the WER.
|
[
{
"created": "Wed, 21 Dec 2022 07:15:59 GMT",
"version": "v1"
},
{
"created": "Mon, 29 May 2023 23:16:56 GMT",
"version": "v2"
}
] |
2023-05-31
|
[
[
"Sudo",
"Yui",
""
],
[
"Shakeel",
"Muhammad",
""
],
[
"Yan",
"Brian",
""
],
[
"Shi",
"Jiatong",
""
],
[
"Watanabe",
"Shinji",
""
]
] |
The network architecture of end-to-end (E2E) automatic speech recognition (ASR) can be classified into several models, including connectionist temporal classification (CTC), recurrent neural network transducer (RNN-T), attention mechanism, and non-autoregressive mask-predict models. Since each of these network architectures has pros and cons, a typical use case is to switch these separate models depending on the application requirement, resulting in the increased overhead of maintaining all models. Several methods for integrating two of these complementary models to mitigate the overhead issue have been proposed; however, if we integrate more models, we will further benefit from these complementary models and realize broader applications with a single system. This paper proposes four-decoder joint modeling (4D) of CTC, attention, RNN-T, and mask-predict, which has the following three advantages: 1) The four decoders are jointly trained so that they can be easily switched depending on the application scenarios. 2) Joint training may bring model regularization and improve the model robustness thanks to their complementary properties. 3) Novel one-pass joint decoding methods using CTC, attention, and RNN-T further improves the performance. The experimental results showed that the proposed model consistently reduced the WER.
|
2306.05911
|
Deng Yu
|
Deng Yu, Chufeng Xiao, Manfred Lau, and Hongbo Fu
|
Sketch2Stress: Sketching with Structural Stress Awareness
|
Accepted by IEEE Transactions on Visualization and Computer Graphics
(IEEE TVCG)
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the process of product design and digital fabrication, the structural
analysis of a designed prototype is a fundamental and essential step. However,
such a step is usually invisible or inaccessible to designers at the early
sketching phase. This limits the user's ability to consider a shape's physical
properties and structural soundness. To bridge this gap, we introduce a novel
approach Sketch2Stress that allows users to perform structural analysis of
desired objects at the sketching stage. This method takes as input a 2D
freehand sketch and one or multiple locations of user-assigned external forces.
With the specially-designed two-branch generative-adversarial framework, it
automatically predicts a normal map and a corresponding structural stress map
distributed over the user-sketched underlying object. In this way, our method
empowers designers to easily examine the stress sustained everywhere and
identify potential problematic regions of their sketched object. Furthermore,
combined with the predicted normal map, users are able to conduct a region-wise
structural analysis efficiently by aggregating the stress effects of multiple
forces in the same direction. Finally, we demonstrate the effectiveness and
practicality of our system with extensive experiments and user studies.
|
[
{
"created": "Fri, 9 Jun 2023 14:05:41 GMT",
"version": "v1"
},
{
"created": "Mon, 11 Dec 2023 06:51:51 GMT",
"version": "v2"
}
] |
2023-12-12
|
[
[
"Yu",
"Deng",
""
],
[
"Xiao",
"Chufeng",
""
],
[
"Lau",
"Manfred",
""
],
[
"Fu",
"Hongbo",
""
]
] |
In the process of product design and digital fabrication, the structural analysis of a designed prototype is a fundamental and essential step. However, such a step is usually invisible or inaccessible to designers at the early sketching phase. This limits the user's ability to consider a shape's physical properties and structural soundness. To bridge this gap, we introduce a novel approach Sketch2Stress that allows users to perform structural analysis of desired objects at the sketching stage. This method takes as input a 2D freehand sketch and one or multiple locations of user-assigned external forces. With the specially-designed two-branch generative-adversarial framework, it automatically predicts a normal map and a corresponding structural stress map distributed over the user-sketched underlying object. In this way, our method empowers designers to easily examine the stress sustained everywhere and identify potential problematic regions of their sketched object. Furthermore, combined with the predicted normal map, users are able to conduct a region-wise structural analysis efficiently by aggregating the stress effects of multiple forces in the same direction. Finally, we demonstrate the effectiveness and practicality of our system with extensive experiments and user studies.
|
1505.03776
|
David Garcia
|
Raquel Alvarez, David Garcia, Yamir Moreno and Frank Schweitzer
|
Sentiment cascades in the 15M movement
|
EPJ Data Science vol 4 (2015) (forthcoming)
| null | null | null |
cs.SI physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent grassroots movements have suggested that online social networks might
play a key role in their organization, as adherents have a fast, many-to-many,
communication channel to help coordinate their mobilization. The structure and
dynamics of the networks constructed from the digital traces of protesters have
been analyzed to some extent recently. However, less effort has been devoted to
the analysis of the semantic content of messages exchanged during the protest.
Using the data obtained from a microblogging service during the brewing and
active phases of the 15M movement in Spain, we perform the first large scale
test of theories on collective emotions and social interaction in collective
actions. Our findings show that activity and information cascades in the
movement are larger in the presence of negative collective emotions and when
users express themselves in terms related to social content. At the level of
individual participants, our results show that their social integration in the
movement, as measured through social network metrics, increases with their
level of engagement and of expression of negativity. Our findings show that
non-rational factors play a role in the formation and activity of social
movements through online media, having important consequences for viral
spreading.
|
[
{
"created": "Thu, 14 May 2015 16:08:11 GMT",
"version": "v1"
}
] |
2015-05-15
|
[
[
"Alvarez",
"Raquel",
""
],
[
"Garcia",
"David",
""
],
[
"Moreno",
"Yamir",
""
],
[
"Schweitzer",
"Frank",
""
]
] |
Recent grassroots movements have suggested that online social networks might play a key role in their organization, as adherents have a fast, many-to-many, communication channel to help coordinate their mobilization. The structure and dynamics of the networks constructed from the digital traces of protesters have been analyzed to some extent recently. However, less effort has been devoted to the analysis of the semantic content of messages exchanged during the protest. Using the data obtained from a microblogging service during the brewing and active phases of the 15M movement in Spain, we perform the first large scale test of theories on collective emotions and social interaction in collective actions. Our findings show that activity and information cascades in the movement are larger in the presence of negative collective emotions and when users express themselves in terms related to social content. At the level of individual participants, our results show that their social integration in the movement, as measured through social network metrics, increases with their level of engagement and of expression of negativity. Our findings show that non-rational factors play a role in the formation and activity of social movements through online media, having important consequences for viral spreading.
|
2306.00800
|
Juan A. Rodriguez
|
Juan A Rodriguez, David Vazquez, Issam Laradji, Marco Pedersoli, Pau
Rodriguez
|
FigGen: Text to Scientific Figure Generation
|
Published at ICLR 2023 as a Tiny Paper
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The generative modeling landscape has experienced tremendous growth in recent
years, particularly in generating natural images and art. Recent techniques
have shown impressive potential in creating complex visual compositions while
delivering impressive realism and quality. However, state-of-the-art methods
have been focusing on the narrow domain of natural images, while other
distributions remain unexplored. In this paper, we introduce the problem of
text-to-figure generation, that is creating scientific figures of papers from
text descriptions. We present FigGen, a diffusion-based approach for
text-to-figure as well as the main challenges of the proposed task. Code and
models are available at https://github.com/joanrod/figure-diffusion
|
[
{
"created": "Thu, 1 Jun 2023 15:28:41 GMT",
"version": "v1"
},
{
"created": "Wed, 21 Jun 2023 13:55:08 GMT",
"version": "v2"
},
{
"created": "Sun, 17 Dec 2023 08:24:37 GMT",
"version": "v3"
}
] |
2023-12-19
|
[
[
"Rodriguez",
"Juan A",
""
],
[
"Vazquez",
"David",
""
],
[
"Laradji",
"Issam",
""
],
[
"Pedersoli",
"Marco",
""
],
[
"Rodriguez",
"Pau",
""
]
] |
The generative modeling landscape has experienced tremendous growth in recent years, particularly in generating natural images and art. Recent techniques have shown impressive potential in creating complex visual compositions while delivering impressive realism and quality. However, state-of-the-art methods have been focusing on the narrow domain of natural images, while other distributions remain unexplored. In this paper, we introduce the problem of text-to-figure generation, that is creating scientific figures of papers from text descriptions. We present FigGen, a diffusion-based approach for text-to-figure as well as the main challenges of the proposed task. Code and models are available at https://github.com/joanrod/figure-diffusion
|
1908.10033
|
Shantanu Sharma
|
Nisha Panwar, Shantanu Sharma, Guoxi Wang, Sharad Mehrotra, Nalini
Venkatasubramanian, Mamadou H. Diallo, Ardalan Amiri Sani
|
IoT Notary: Sensor Data Attestation in Smart Environment
|
Accepted in IEEE International Symposium on Network Computing and
Applications (NCA), 2019
| null | null | null |
cs.CR cs.DB cs.DC cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Contemporary IoT environments, such as smart buildings, require end-users to
trust data-capturing rules published by the systems. There are several reasons
why such a trust is misplaced --- IoT systems may violate the rules
deliberately or IoT devices may transfer user data to a malicious third-party
due to cyberattacks, leading to the loss of individuals' privacy or service
integrity. To address such concerns, we propose IoT Notary, a framework to
ensure trust in IoT systems and applications. IoT Notary provides secure log
sealing on live sensor data to produce a verifiable `proof-of-integrity,' based
on which a verifier can attest that captured sensor data adheres to the
published data-capturing rules. IoT Notary is an integral part of TIPPERS, a
smart space system that has been deployed at UCI to provide various real-time
location-based services in the campus. IoT Notary imposes nominal overheads for
verification, thereby users can verify their data of one day in less than two
seconds.
|
[
{
"created": "Tue, 27 Aug 2019 05:10:04 GMT",
"version": "v1"
}
] |
2019-08-28
|
[
[
"Panwar",
"Nisha",
""
],
[
"Sharma",
"Shantanu",
""
],
[
"Wang",
"Guoxi",
""
],
[
"Mehrotra",
"Sharad",
""
],
[
"Venkatasubramanian",
"Nalini",
""
],
[
"Diallo",
"Mamadou H.",
""
],
[
"Sani",
"Ardalan Amiri",
""
]
] |
Contemporary IoT environments, such as smart buildings, require end-users to trust data-capturing rules published by the systems. There are several reasons why such a trust is misplaced --- IoT systems may violate the rules deliberately or IoT devices may transfer user data to a malicious third-party due to cyberattacks, leading to the loss of individuals' privacy or service integrity. To address such concerns, we propose IoT Notary, a framework to ensure trust in IoT systems and applications. IoT Notary provides secure log sealing on live sensor data to produce a verifiable `proof-of-integrity,' based on which a verifier can attest that captured sensor data adheres to the published data-capturing rules. IoT Notary is an integral part of TIPPERS, a smart space system that has been deployed at UCI to provide various real-time location-based services in the campus. IoT Notary imposes nominal overheads for verification, thereby users can verify their data of one day in less than two seconds.
|
1905.01833
|
Yuqun Zhang
|
Mingyuan Wu, Husheng Zhou, Lingming Zhang, Cong Liu, Yuqun Zhang
|
Characterizing and Detecting CUDA Program Bugs
|
12 pages, 8 figures
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While CUDA has become a major parallel computing platform and programming
model for general-purpose GPU computing, CUDA-induced bug patterns have not yet
been well explored. In this paper, we conduct the first empirical study to
reveal important categories of CUDA program bug patterns based on 319 bugs
identified within 5 popular CUDA projects in GitHub. Our findings demonstrate
that CUDA-specific characteristics may cause program bugs such as
synchronization bugs that are rather difficult to detect. To efficiently detect
such synchronization bugs, we establish the first lightweight general CUDA bug
detection framework, namely Simulee, to simulate CUDA program execution by
interpreting the corresponding llvm bytecode and collecting the memory-access
information to automatically detect CUDA synchronization bugs. To evaluate the
effectiveness and efficiency of Simulee, we conduct a set of experiments and
the experimental results suggest that Simulee can detect 20 out of the 27
studied synchronization bugs and successfully detects 26 previously unknown
synchronization bugs, 10 of which have been confirmed by the developers.
|
[
{
"created": "Mon, 6 May 2019 06:12:58 GMT",
"version": "v1"
},
{
"created": "Sat, 11 May 2019 10:35:34 GMT",
"version": "v2"
},
{
"created": "Wed, 29 May 2019 11:35:23 GMT",
"version": "v3"
}
] |
2019-05-30
|
[
[
"Wu",
"Mingyuan",
""
],
[
"Zhou",
"Husheng",
""
],
[
"Zhang",
"Lingming",
""
],
[
"Liu",
"Cong",
""
],
[
"Zhang",
"Yuqun",
""
]
] |
While CUDA has become a major parallel computing platform and programming model for general-purpose GPU computing, CUDA-induced bug patterns have not yet been well explored. In this paper, we conduct the first empirical study to reveal important categories of CUDA program bug patterns based on 319 bugs identified within 5 popular CUDA projects in GitHub. Our findings demonstrate that CUDA-specific characteristics may cause program bugs such as synchronization bugs that are rather difficult to detect. To efficiently detect such synchronization bugs, we establish the first lightweight general CUDA bug detection framework, namely Simulee, to simulate CUDA program execution by interpreting the corresponding llvm bytecode and collecting the memory-access information to automatically detect CUDA synchronization bugs. To evaluate the effectiveness and efficiency of Simulee, we conduct a set of experiments and the experimental results suggest that Simulee can detect 20 out of the 27 studied synchronization bugs and successfully detects 26 previously unknown synchronization bugs, 10 of which have been confirmed by the developers.
|
2111.11745
|
Yan Wang
|
Xintian Mao, Yiming Liu, Fengze Liu, Qingli Li, Wei Shen, Yan Wang
|
Intriguing Findings of Frequency Selection for Image Deblurring
|
AAAI 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Blur was naturally analyzed in the frequency domain, by estimating the latent
sharp image and the blur kernel given a blurry image. Recent progress on image
deblurring always designs end-to-end architectures and aims at learning the
difference between blurry and sharp image pairs from pixel-level, which
inevitably overlooks the importance of blur kernels. This paper reveals an
intriguing phenomenon that simply applying ReLU operation on the frequency
domain of a blur image followed by inverse Fourier transform, i.e., frequency
selection, provides faithful information about the blur pattern (e.g., the blur
direction and blur level, implicitly shows the kernel pattern). Based on this
observation, we attempt to leverage kernel-level information for image
deblurring networks by inserting Fourier transform, ReLU operation, and inverse
Fourier transform to the standard ResBlock. 1x1 convolution is further added to
let the network modulate flexible thresholds for frequency selection. We term
our newly built block as Res FFT-ReLU Block, which takes advantages of both
kernel-level and pixel-level features via learning frequency-spatial
dual-domain representations. Extensive experiments are conducted to acquire a
thorough analysis on the insights of the method. Moreover, after plugging the
proposed block into NAFNet, we can achieve 33.85 dB in PSNR on GoPro dataset.
Our method noticeably improves backbone architectures without introducing many
parameters, while maintaining low computational complexity. Code is available
at https://github.com/DeepMed-Lab/DeepRFT-AAAI2023.
|
[
{
"created": "Tue, 23 Nov 2021 09:40:40 GMT",
"version": "v1"
},
{
"created": "Tue, 29 Nov 2022 10:15:57 GMT",
"version": "v2"
}
] |
2022-11-30
|
[
[
"Mao",
"Xintian",
""
],
[
"Liu",
"Yiming",
""
],
[
"Liu",
"Fengze",
""
],
[
"Li",
"Qingli",
""
],
[
"Shen",
"Wei",
""
],
[
"Wang",
"Yan",
""
]
] |
Blur was naturally analyzed in the frequency domain, by estimating the latent sharp image and the blur kernel given a blurry image. Recent progress on image deblurring always designs end-to-end architectures and aims at learning the difference between blurry and sharp image pairs from pixel-level, which inevitably overlooks the importance of blur kernels. This paper reveals an intriguing phenomenon that simply applying ReLU operation on the frequency domain of a blur image followed by inverse Fourier transform, i.e., frequency selection, provides faithful information about the blur pattern (e.g., the blur direction and blur level, implicitly shows the kernel pattern). Based on this observation, we attempt to leverage kernel-level information for image deblurring networks by inserting Fourier transform, ReLU operation, and inverse Fourier transform to the standard ResBlock. 1x1 convolution is further added to let the network modulate flexible thresholds for frequency selection. We term our newly built block as Res FFT-ReLU Block, which takes advantages of both kernel-level and pixel-level features via learning frequency-spatial dual-domain representations. Extensive experiments are conducted to acquire a thorough analysis on the insights of the method. Moreover, after plugging the proposed block into NAFNet, we can achieve 33.85 dB in PSNR on GoPro dataset. Our method noticeably improves backbone architectures without introducing many parameters, while maintaining low computational complexity. Code is available at https://github.com/DeepMed-Lab/DeepRFT-AAAI2023.
|
0705.1384
|
Navin Kashyap
|
Navin Kashyap
|
Matroid Pathwidth and Code Trellis Complexity
|
Submitted to SIAM Journal on Discrete Mathematics; 18 pages, 6
figures
| null | null | null |
cs.DM cs.IT math.IT
| null |
We relate the notion of matroid pathwidth to the minimum trellis
state-complexity (which we term trellis-width) of a linear code, and to the
pathwidth of a graph. By reducing from the problem of computing the pathwidth
of a graph, we show that the problem of determining the pathwidth of a
representable matroid is NP-hard. Consequently, the problem of computing the
trellis-width of a linear code is also NP-hard. For a finite field $\F$, we
also consider the class of $\F$-representable matroids of pathwidth at most
$w$, and correspondingly, the family of linear codes over $\F$ with
trellis-width at most $w$. These are easily seen to be minor-closed. Since
these matroids (and codes) have branchwidth at most $w$, a result of Geelen and
Whittle shows that such matroids (and the corresponding codes) are
characterized by finitely many excluded minors. We provide the complete list of
excluded minors for $w=1$, and give a partial list for $w=2$.
|
[
{
"created": "Thu, 10 May 2007 03:00:54 GMT",
"version": "v1"
}
] |
2007-07-13
|
[
[
"Kashyap",
"Navin",
""
]
] |
We relate the notion of matroid pathwidth to the minimum trellis state-complexity (which we term trellis-width) of a linear code, and to the pathwidth of a graph. By reducing from the problem of computing the pathwidth of a graph, we show that the problem of determining the pathwidth of a representable matroid is NP-hard. Consequently, the problem of computing the trellis-width of a linear code is also NP-hard. For a finite field $\F$, we also consider the class of $\F$-representable matroids of pathwidth at most $w$, and correspondingly, the family of linear codes over $\F$ with trellis-width at most $w$. These are easily seen to be minor-closed. Since these matroids (and codes) have branchwidth at most $w$, a result of Geelen and Whittle shows that such matroids (and the corresponding codes) are characterized by finitely many excluded minors. We provide the complete list of excluded minors for $w=1$, and give a partial list for $w=2$.
|
1803.07139
|
No\'e Casas
|
Marta R. Costa-juss\`a, Noe Casas, Maite Melero
|
English-Catalan Neural Machine Translation in the Biomedical Domain
through the cascade approach
|
Full workshop proceedings can be found at
https://multilingualbio.bsc.es/wp-content/uploads/2018/03/LREC-2018-PROCEEDINGS-MultilingualBIO.pdf
|
Proceedings of workshop "MultilingualBIO: Multilingual Biomedical
Text Processing" of the 11th Edition of the Language Resources and Evaluation
Conference, 2018
| null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper describes the methodology followed to build a neural machine
translation system in the biomedical domain for the English-Catalan language
pair. This task can be considered a low-resourced task from the point of view
of the domain and the language pair. To face this task, this paper reports
experiments on a cascade pivot strategy through Spanish for the neural machine
translation using the English-Spanish SCIELO and Spanish-Catalan El Peri\'odico
database. To test the final performance of the system, we have created a new
test data set for English-Catalan in the biomedical domain which is freely
available on request.
|
[
{
"created": "Mon, 19 Mar 2018 19:48:48 GMT",
"version": "v1"
},
{
"created": "Thu, 26 Apr 2018 15:22:04 GMT",
"version": "v2"
}
] |
2018-04-27
|
[
[
"Costa-jussà",
"Marta R.",
""
],
[
"Casas",
"Noe",
""
],
[
"Melero",
"Maite",
""
]
] |
This paper describes the methodology followed to build a neural machine translation system in the biomedical domain for the English-Catalan language pair. This task can be considered a low-resourced task from the point of view of the domain and the language pair. To face this task, this paper reports experiments on a cascade pivot strategy through Spanish for the neural machine translation using the English-Spanish SCIELO and Spanish-Catalan El Peri\'odico database. To test the final performance of the system, we have created a new test data set for English-Catalan in the biomedical domain which is freely available on request.
|
1509.07178
|
Yu Wang
|
Yu Wang and Jianbo Yuan and Jiebo Luo
|
America Tweets China: A Fine-Grained Analysis of the State and
Individual Characteristics Regarding Attitudes towards China
|
8 pages, 5 figures and 4 tables, IEEE BigData 2015
| null | null | null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The U.S.-China relationship is arguably the most important bilateral
relationship in the 21st century. Typically it is measured through opinion
polls, for example, by Gallup and Pew Institute. In this paper, we propose a
new method to measure U.S.-China relations using data from Twitter, one of the
most popular social networks. Compared with traditional opinion polls, our
method has two distinctive advantages. First, our sample size is significantly
larger. National opinion polls have at most a few thousand samples. Our data
set has 724,146 samples. The large size of our data set enables us to perform
state level analysis, which so far even large opinion polls have left
unexplored. Second, our method can control for fixed state and date effects. We
first demonstrate the existence of inter-state and inter-day variances and then
control for these variances in our regression analysis. Empirically, our study
is able to replicate the stylized results from opinion polls as well as
generate new insights. At the state level, we find New York, Michigan, Indiana
and Arizona are the top four most China-friendly states. Wyoming, South Dakota,
Kansas and Nevada are most homogeneous. At the individual level, we find
attitudes towards China improve as an individual's Twitter experience grows
longer and more intense. We also find individuals of Chinese ethnicity are
statistically more China-friendly.
|
[
{
"created": "Wed, 23 Sep 2015 23:11:46 GMT",
"version": "v1"
}
] |
2015-09-25
|
[
[
"Wang",
"Yu",
""
],
[
"Yuan",
"Jianbo",
""
],
[
"Luo",
"Jiebo",
""
]
] |
The U.S.-China relationship is arguably the most important bilateral relationship in the 21st century. Typically it is measured through opinion polls, for example, by Gallup and Pew Institute. In this paper, we propose a new method to measure U.S.-China relations using data from Twitter, one of the most popular social networks. Compared with traditional opinion polls, our method has two distinctive advantages. First, our sample size is significantly larger. National opinion polls have at most a few thousand samples. Our data set has 724,146 samples. The large size of our data set enables us to perform state level analysis, which so far even large opinion polls have left unexplored. Second, our method can control for fixed state and date effects. We first demonstrate the existence of inter-state and inter-day variances and then control for these variances in our regression analysis. Empirically, our study is able to replicate the stylized results from opinion polls as well as generate new insights. At the state level, we find New York, Michigan, Indiana and Arizona are the top four most China-friendly states. Wyoming, South Dakota, Kansas and Nevada are most homogeneous. At the individual level, we find attitudes towards China improve as an individual's Twitter experience grows longer and more intense. We also find individuals of Chinese ethnicity are statistically more China-friendly.
|
2407.10366
|
Yitian Zhang
|
Yitian Zhang, Xu Ma, Yue Bai, Huan Wang, Yun Fu
|
Accessing Vision Foundation Models at ImageNet-level Costs
| null | null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vision foundation models are renowned for their generalization ability due to
massive training data. Nevertheless, they demand tremendous training resources,
and the training data is often inaccessible, e.g., CLIP, DINOv2, posing great
challenges to developing derivatives that could advance research in this field.
In this work, we offer a very simple and general solution, named Proteus, to
distill foundation models into smaller equivalents on ImageNet-1K without
access to the original training data. Specifically, we remove the designs from
conventional knowledge distillation settings that result in dataset bias and
present three levels of training objectives, i.e., token, patch, and feature,
to maximize the efficacy of knowledge transfer. In this manner, Proteus is
trained at ImageNet-level costs with surprising ability, facilitating the
accessibility of training foundation models for the broader research community.
Leveraging DINOv2-g/14 as the teacher, Proteus-L/14 matches the performance of
the Oracle method DINOv2-L/14 (142M training data) across 15 benchmarks and
outperforms other vision foundation models including CLIP-L/14 (400M),
OpenCLIP-L/14 (400M/2B) and SynCLR-L/14 (600M).
|
[
{
"created": "Mon, 15 Jul 2024 00:13:53 GMT",
"version": "v1"
}
] |
2024-07-16
|
[
[
"Zhang",
"Yitian",
""
],
[
"Ma",
"Xu",
""
],
[
"Bai",
"Yue",
""
],
[
"Wang",
"Huan",
""
],
[
"Fu",
"Yun",
""
]
] |
Vision foundation models are renowned for their generalization ability due to massive training data. Nevertheless, they demand tremendous training resources, and the training data is often inaccessible, e.g., CLIP, DINOv2, posing great challenges to developing derivatives that could advance research in this field. In this work, we offer a very simple and general solution, named Proteus, to distill foundation models into smaller equivalents on ImageNet-1K without access to the original training data. Specifically, we remove the designs from conventional knowledge distillation settings that result in dataset bias and present three levels of training objectives, i.e., token, patch, and feature, to maximize the efficacy of knowledge transfer. In this manner, Proteus is trained at ImageNet-level costs with surprising ability, facilitating the accessibility of training foundation models for the broader research community. Leveraging DINOv2-g/14 as the teacher, Proteus-L/14 matches the performance of the Oracle method DINOv2-L/14 (142M training data) across 15 benchmarks and outperforms other vision foundation models including CLIP-L/14 (400M), OpenCLIP-L/14 (400M/2B) and SynCLR-L/14 (600M).
|