Column schema (⌀ = nullable; ranges are min–max value lengths):

- id: string, 9–10
- submitter: string, 1–64, ⌀
- authors: string, 4–20.7k
- title: string, 4–246
- comments: string, 1–523, ⌀
- journal-ref: string, 4–404, ⌀
- doi: string, 11–153, ⌀
- report-no: string, 2–254, ⌀
- categories: string, 5–98
- license: string, 9 distinct values
- orig_abstract: string, 14–3.35k
- versions: list, 1–60 items
- update_date: string, 10
- authors_parsed: list, 1–1.35k items
- abstract: string, 11–3.34k
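Each record below carries these fields. As a minimal, hedged sketch (the on-disk storage format of this dump is not shown; JSON Lines with these field names is an assumption), one record could be parsed like this:

```python
import json

# One record serialized as a JSON line (field names follow the schema above;
# JSON Lines storage is an assumption, not stated by the dump itself).
line = json.dumps({
    "id": "1910.02423",
    "submitter": "Harikrishnan Nellippallil Balakrishnan",
    "categories": "cs.LG nlin.CD stat.ML",
    "journal-ref": None,  # nullable fields (marked ⌀ in the schema) arrive as null
    "update_date": "2019-10-08",
})

record = json.loads(line)

# `categories` is a single space-separated string, not a list.
categories = record["categories"].split()

# Nullable fields come through as None and need a guard before use.
journal = record["journal-ref"] or "(none)"
```

Splitting `categories` yields one entry per arXiv category, e.g. `["cs.LG", "nlin.CD", "stat.ML"]` for the first record below.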
id: 1910.02423
submitter: Harikrishnan Nellippallil Balakrishnan
authors: Harikrishnan Nellippallil Balakrishnan, Aditi Kathpalia, Snehanshu Saha, Nithin Nagaraj
title: ChaosNet: A Chaos based Artificial Neural Network Architecture for Classification
comments: 27 pages, 23 figures
journal-ref: null
doi: null
report-no: null
categories: cs.LG nlin.CD stat.ML
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Inspired by chaotic firing of neurons in the brain, we propose ChaosNet -- a novel chaos based artificial neural network architecture for classification tasks. ChaosNet is built using layers of neurons, each of which is a 1D chaotic map known as the Generalized Luroth Series (GLS) which has been shown in earlier works to possess very useful properties for compression, cryptography and for computing XOR and other logical operations. In this work, we design a novel learning algorithm on ChaosNet that exploits the topological transitivity property of the chaotic GLS neurons. The proposed learning algorithm gives consistently good performance accuracy in a number of classification tasks on well known publicly available datasets with very limited training samples. Even with as low as 7 (or fewer) training samples/class (which accounts for less than 0.05% of the total available data), ChaosNet yields performance accuracies in the range 73.89 % - 98.33 %. We demonstrate the robustness of ChaosNet to additive parameter noise and also provide an example implementation of a 2-layer ChaosNet for enhancing classification accuracy. We envisage the development of several other novel learning algorithms on ChaosNet in the near future.
versions: [{"created": "Sun, 6 Oct 2019 11:40:40 GMT", "version": "v1"}]
update_date: 2019-10-08
authors_parsed: [["Balakrishnan", "Harikrishnan Nellippallil", ""], ["Kathpalia", "Aditi", ""], ["Saha", "Snehanshu", ""], ["Nagaraj", "Nithin", ""]]
abstract: identical to orig_abstract

id: 0801.4230
submitter: Simon Perdrix
authors: Simon Perdrix
title: Quantum entanglement analysis based on abstract interpretation
comments: 13 pages
journal-ref: Proc. of 15th International Static Analysis Symposium (SAS 2008). LNCS 5079, pp 270-282
doi: 10.1007/978-3-540-69166-2_18
report-no: null
categories: cs.LO cs.PL quant-ph
license: null
orig_abstract: Entanglement is a non local property of quantum states which has no classical counterpart and plays a decisive role in quantum information theory. Several protocols, like the teleportation, are based on quantum entangled states. Moreover, any quantum algorithm which does not create entanglement can be efficiently simulated on a classical computer. The exact role of the entanglement is nevertheless not well understood. Since an exact analysis of entanglement evolution induces an exponential slowdown, we consider approximative analysis based on the framework of abstract interpretation. In this paper, a concrete quantum semantics based on superoperators is associated with a simple quantum programming language. The representation of entanglement, i.e. the design of the abstract domain is a key issue. A representation of entanglement as a partition of the memory is chosen. An abstract semantics is introduced, and the soundness of the approximation is proven.
versions: [{"created": "Mon, 28 Jan 2008 10:45:47 GMT", "version": "v1"}]
update_date: 2008-12-08
authors_parsed: [["Perdrix", "Simon", ""]]
abstract: identical to orig_abstract

id: 1401.0561
submitter: Janne Lindqvist
authors: Michael Sherman, Gradeigh Clark, Yulong Yang, Shridatt Sugrim, Arttu Modig, Janne Lindqvist, Antti Oulasvirta, Teemu Roos
title: User-Generated Free-Form Gestures for Authentication: Security and Memorability
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CR cs.HC
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: This paper studies the security and memorability of free-form multitouch gestures for mobile authentication. Towards this end, we collected a dataset with a generate-test-retest paradigm where participants (N=63) generated free-form gestures, repeated them, and were later retested for memory. Half of the participants decided to generate one-finger gestures, and the other half generated multi-finger gestures. Although there has been recent work on template-based gestures, there are yet no metrics to analyze security of either template or free-form gestures. For example, entropy-based metrics used for text-based passwords are not suitable for capturing the security and memorability of free-form gestures. Hence, we modify a recently proposed metric for analyzing information capacity of continuous full-body movements for this purpose. Our metric computed estimated mutual information in repeated sets of gestures. Surprisingly, one-finger gestures had higher average mutual information. Gestures with many hard angles and turns had the highest mutual information. The best-remembered gestures included signatures and simple angular shapes. We also implemented a multitouch recognizer to evaluate the practicality of free-form gestures in a real authentication system and how they perform against shoulder surfing attacks. We conclude the paper with strategies for generating secure and memorable free-form gestures, which present a robust method for mobile authentication.
versions: [{"created": "Thu, 2 Jan 2014 23:15:27 GMT", "version": "v1"}]
update_date: 2014-01-06
authors_parsed: [["Sherman", "Michael", ""], ["Clark", "Gradeigh", ""], ["Yang", "Yulong", ""], ["Sugrim", "Shridatt", ""], ["Modig", "Arttu", ""], ["Lindqvist", "Janne", ""], ["Oulasvirta", "Antti", ""], ["Roos", "Teemu", ""]]
abstract: identical to orig_abstract

id: 2005.13956
submitter: Mao Ye
authors: Xinpeng Li
title: Improving Generalized Zero-Shot Learning by Semantic Discriminator
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: It is a recognized fact that the classification accuracy of unseen classes in the setting of Generalized Zero-Shot Learning (GZSL) is much lower than that of traditional Zero-Shot Leaning (ZSL). One of the reasons is that an instance is always misclassified to the wrong domain. Here we refer to the seen and unseen classes as two domains respectively. We propose a new approach to distinguish whether the instances come from the seen or unseen classes. First the visual feature of instance is projected into the semantic space. Then the absolute norm difference between the projected semantic vector and the class semantic embedding vector, and the minimum distance between the projected semantic vectors and the semantic embedding vectors of the seen classes are used as discrimination basis. This approach is termed as SD (Semantic Discriminator) because domain judgement of instance is performed in the semantic space. Our approach can be combined with any existing ZSL method and fully supervision classification model to form a new GZSL method. Furthermore, our approach is very simple and does not need any fixed parameters.
versions: [{"created": "Thu, 28 May 2020 12:48:38 GMT", "version": "v1"}, {"created": "Thu, 11 Jun 2020 14:43:10 GMT", "version": "v2"}]
update_date: 2020-06-12
authors_parsed: [["Li", "Xinpeng", ""]]
abstract: identical to orig_abstract

id: 2211.14086
submitter: Jingwang Ling
authors: Jingwang Ling, Zhibo Wang, Feng Xu
title: ShadowNeuS: Neural SDF Reconstruction by Shadow Ray Supervision
comments: CVPR 2023. Project page: https://gerwang.github.io/shadowneus/
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract: By supervising camera rays between a scene and multi-view image planes, NeRF reconstructs a neural scene representation for the task of novel view synthesis. On the other hand, shadow rays between the light source and the scene have yet to be considered. Therefore, we propose a novel shadow ray supervision scheme that optimizes both the samples along the ray and the ray location. By supervising shadow rays, we successfully reconstruct a neural SDF of the scene from single-view images under multiple lighting conditions. Given single-view binary shadows, we train a neural network to reconstruct a complete scene not limited by the camera's line of sight. By further modeling the correlation between the image colors and the shadow rays, our technique can also be effectively extended to RGB inputs. We compare our method with previous works on challenging tasks of shape reconstruction from single-view binary shadow or RGB images and observe significant improvements. The code and data are available at https://github.com/gerwang/ShadowNeuS.
versions: [{"created": "Fri, 25 Nov 2022 13:14:56 GMT", "version": "v1"}, {"created": "Thu, 23 Mar 2023 14:21:24 GMT", "version": "v2"}]
update_date: 2023-03-24
authors_parsed: [["Ling", "Jingwang", ""], ["Wang", "Zhibo", ""], ["Xu", "Feng", ""]]
abstract: identical to orig_abstract

id: 1907.07366
submitter: Yen Hao Huang
authors: Yen-Hao Huang, Yi-Hsin Chen, Fernando Henrique Calderon Alvarado, Ssu-Rui Lee, Shu-I Wu, Yuwen Lai and Yi-Shin Chen
title: Leveraging Linguistic Characteristics for Bipolar Disorder Recognition with Gender Differences
comments: Accepted by DSHealth '19: 2019 KDD Workshop on Applied Data Science for Healthcare
journal-ref: null
doi: null
report-no: null
categories: cs.IR cs.CL
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: Most previous studies on automatic recognition model for bipolar disorder (BD) were based on both social media and linguistic features. The present study investigates the possibility of adopting only language-based features, namely the syntax and morpheme collocation. We also examine the effect of gender on the results considering gender has long been recognized as an important modulating factor for mental disorders, yet it received little attention in previous linguistic models. The present study collects Twitter posts 3 months prior to the self-disclosure by 349 BD users (231 female, 118 male). We construct a set of syntactic patterns in terms of the word usage based on graph pattern construction and pattern attention mechanism. The factors examined are gender differences, syntactic patterns, and bipolar recognition performance. The performance indicates our F1 scores reach over 91% and outperform several baselines, including those using TF-IDF, LIWC and pre-trained language models (ELMO and BERT). The contributions of the present study are: (1) The features are contextualized, domain-agnostic, and purely linguistic. (2) The performance of BD recognition is improved by gender-enriched linguistic pattern features, which are constructed with gender differences in language usage.
versions: [{"created": "Wed, 17 Jul 2019 07:37:13 GMT", "version": "v1"}]
update_date: 2019-07-18
authors_parsed: [["Huang", "Yen-Hao", ""], ["Chen", "Yi-Hsin", ""], ["Alvarado", "Fernando Henrique Calderon", ""], ["Lee", "Ssu-Rui", ""], ["Wu", "Shu-I", ""], ["Lai", "Yuwen", ""], ["Chen", "Yi-Shin", ""]]
abstract: identical to orig_abstract

id: 2310.09238
submitter: Saumajit Saha
authors: Saumajit Saha and Albert Nanda
title: BanglaNLP at BLP-2023 Task 2: Benchmarking different Transformer Models for Sentiment Analysis of Bangla Social Media Posts
comments: 7 pages, 2 figures, workshop
journal-ref: null
doi: null
report-no: null
categories: cs.CL
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract: Bangla is the 7th most widely spoken language globally, with a staggering 234 million native speakers primarily hailing from India and Bangladesh. This morphologically rich language boasts a rich literary tradition, encompassing diverse dialects and language-specific challenges. Despite its linguistic richness and history, Bangla remains categorized as a low-resource language within the natural language processing (NLP) and speech community. This paper presents our submission to Task 2 (Sentiment Analysis of Bangla Social Media Posts) of the BLP Workshop. We experiment with various Transformer-based architectures to solve this task. Our quantitative results show that transfer learning really helps in better learning of the models in this low-resource language scenario. This becomes evident when we further finetune a model which has already been finetuned on twitter data for sentiment analysis task and that finetuned model performs the best among all other models. We also perform a detailed error analysis where we find some instances where ground truth labels need to be relooked at. We obtain a micro-F1 of 67.02\% on the test set and our performance in this shared task is ranked at 21 in the leaderboard.
versions: [{"created": "Fri, 13 Oct 2023 16:46:38 GMT", "version": "v1"}, {"created": "Wed, 18 Oct 2023 03:51:38 GMT", "version": "v2"}]
update_date: 2023-10-19
authors_parsed: [["Saha", "Saumajit", ""], ["Nanda", "Albert", ""]]
abstract: identical to orig_abstract

id: 1403.7720
submitter: Quan Yu
authors: Quan Yu, Chi Wan Sung, Terence H. Chan
title: Irregular Fractional Repetition Code Optimization for Heterogeneous Cloud Storage
comments: 12 pages, 10 figures. to appear in IEEE Journal on Selected Areas in Communications 2014
journal-ref: null
doi: 10.1109/JSAC.2014.140523
report-no: null
categories: cs.IT math.IT
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: This paper presents a flexible irregular model for heterogeneous cloud storage systems and investigates how the cost of repairing failed nodes can be minimized. The fractional repetition code, originally designed for minimizing repair bandwidth for homogeneous storage systems, is generalized to the irregular fractional repetition code, which is adaptable to heterogeneous environments. The code structure and the associated storage allocation can be obtained by solving an integer linear programming problem. For moderate sized networks, a heuristic algorithm is proposed and shown to be near-optimal by computer simulations.
versions: [{"created": "Sun, 30 Mar 2014 09:00:51 GMT", "version": "v1"}]
update_date: 2016-11-18
authors_parsed: [["Yu", "Quan", ""], ["Sung", "Chi Wan", ""], ["Chan", "Terence H.", ""]]
abstract: identical to orig_abstract

id: 2112.08765
submitter: Enguerrand Prebet
authors: Enguerrand Prebet (ENS Lyon, FOCUS)
title: On Up-to Context Techniques in the $\pi$-calculus
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.LO
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: We present a variant of the theory of compatible functions on relations, due to Sangiorgi and Pous. We show that the up-to context proof technique for bisimulation is compatible in this setting for two subsets of the pi-calculus: the asynchronous pi-calculus and a pi-calculus with immediately available names.
versions: [{"created": "Thu, 16 Dec 2021 10:24:09 GMT", "version": "v1"}, {"created": "Fri, 3 Jun 2022 08:32:10 GMT", "version": "v2"}]
update_date: 2022-06-06
authors_parsed: [["Prebet", "Enguerrand", "", "ENS Lyon, FOCUS"]]
abstract: identical to orig_abstract

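The authors_parsed entries in these records are [last, first, suffix] triples, occasionally extended with a fourth affiliation element, as in the record above. A small sketch of a helper for turning them back into display names (the function name `format_author` and the handling of the optional fourth element are illustrative assumptions, not part of the dataset):

```python
def format_author(entry):
    """Format one authors_parsed entry: [last, first, suffix, affiliation?]."""
    last, first, suffix = entry[0], entry[1], entry[2]
    name = f"{first} {last}".strip()
    if suffix:
        name += f" {suffix}"
    # Some records append an affiliation as a fourth element.
    if len(entry) > 3 and entry[3]:
        name += f" ({entry[3]})"
    return name

# Entries taken from records in this dump:
assert format_author(["Perdrix", "Simon", ""]) == "Simon Perdrix"
assert format_author(["Prebet", "Enguerrand", "", "ENS Lyon, FOCUS"]) == "Enguerrand Prebet (ENS Lyon, FOCUS)"
```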
id: 2107.00584
submitter: Lucas da Silva Reis
authors: Claudio Qureshi and Lucas Reis
title: On the functional graph of the power map over finite groups
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.DM math.CO
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
orig_abstract: In this paper we study the description of the functional graphs associated with the power maps over finite groups. We present a structural result which describes the isomorphism class of these graphs for abelian groups and also for flower groups, which is a special class of non abelian groups introduced in this paper. Unlike the abelian case where all the trees associated with periodic points are isomorphic, in the case of flower groups we prove that several different classes of trees can occur. The class of central trees (i.e. associated with periodic points that are in the center of the group) are in general non-elementary and a recursive description is given in this work. Flower groups include many non abelian groups such as dihedral and generalized quaternion groups, and the projective general linear group of order two over a finite field. In particular, we provide improvements on past works regarding the description of the dynamics of the power map over these groups.
versions: [{"created": "Thu, 1 Jul 2021 16:17:00 GMT", "version": "v1"}, {"created": "Tue, 6 Sep 2022 19:56:23 GMT", "version": "v2"}]
update_date: 2022-09-08
authors_parsed: [["Qureshi", "Claudio", ""], ["Reis", "Lucas", ""]]
abstract: identical to orig_abstract

id: 2401.09621
submitter: Jes\'us Camacho-Rodr\'iguez
authors: Ashvin Agrawal, Tim Brown, Anoop Johnson, Jes\'us Camacho-Rodr\'iguez, Kyle Weller, Carlo Curino, Raghu Ramakrishnan
title: XTable in Action: Seamless Interoperability in Data Lakes
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.DB
license: http://creativecommons.org/licenses/by-nc-nd/4.0/
orig_abstract: Contemporary approaches to data management are increasingly relying on unified analytics and AI platforms to foster collaboration, interoperability, seamless access to reliable data, and high performance. Data Lakes featuring open standard table formats such as Delta Lake, Apache Hudi, and Apache Iceberg are central components of these data architectures. Choosing the right format for managing a table is crucial for achieving the objectives mentioned above. The challenge lies in selecting the best format, a task that is onerous and can yield temporary results, as the ideal choice may shift over time with data growth, evolving workloads, and the competitive development of table formats and processing engines. Moreover, restricting data access to a single format can hinder data sharing resulting in diminished business value over the long term. The ability to seamlessly interoperate between formats and with negligible overhead can effectively address these challenges. Our solution in this direction is an innovative omni-directional translator, XTable, that facilitates writing data in one format and reading it in any format, thus achieving the desired format interoperability. In this work, we demonstrate the effectiveness of XTable through application scenarios inspired by real-world use cases.
versions: [{"created": "Wed, 17 Jan 2024 22:18:00 GMT", "version": "v1"}]
update_date: 2024-01-19
authors_parsed: [["Agrawal", "Ashvin", ""], ["Brown", "Tim", ""], ["Johnson", "Anoop", ""], ["Camacho-Rodríguez", "Jesús", ""], ["Weller", "Kyle", ""], ["Curino", "Carlo", ""], ["Ramakrishnan", "Raghu", ""]]
abstract: identical to orig_abstract

id: 2203.08321
submitter: Emadeldeen Eldele
authors: Mohamed Ragab, Emadeldeen Eldele, Wee Ling Tan, Chuan-Sheng Foo, Zhenghua Chen, Min Wu, Chee-Keong Kwoh, Xiaoli Li
title: ADATIME: A Benchmarking Suite for Domain Adaptation on Time Series Data
comments: Accepted in the ACM Transactions on Knowledge Discovery from Data (TKDD)
journal-ref: null
doi: 10.1145/3587937
report-no: null
categories: cs.LG
license: http://creativecommons.org/licenses/by/4.0/
orig_abstract: Unsupervised domain adaptation methods aim to generalize well on unlabeled test data that may have a different (shifted) distribution from the training data. Such methods are typically developed on image data, and their application to time series data is less explored. Existing works on time series domain adaptation suffer from inconsistencies in evaluation schemes, datasets, and backbone neural network architectures. Moreover, labeled target data are often used for model selection, which violates the fundamental assumption of unsupervised domain adaptation. To address these issues, we develop a benchmarking evaluation suite (AdaTime) to systematically and fairly evaluate different domain adaptation methods on time series data. Specifically, we standardize the backbone neural network architectures and benchmarking datasets, while also exploring more realistic model selection approaches that can work with no labeled data or just a few labeled samples. Our evaluation includes adapting state-of-the-art visual domain adaptation methods to time series data as well as the recent methods specifically developed for time series data. We conduct extensive experiments to evaluate 11 state-of-the-art methods on five representative datasets spanning 50 cross-domain scenarios. Our results suggest that with careful selection of hyper-parameters, visual domain adaptation methods are competitive with methods proposed for time series domain adaptation. In addition, we find that hyper-parameters could be selected based on realistic model selection approaches. Our work unveils practical insights for applying domain adaptation methods on time series data and builds a solid foundation for future works in the field. The code is available at \href{https://github.com/emadeldeen24/AdaTime}{github.com/emadeldeen24/AdaTime}.
versions: [{"created": "Tue, 15 Mar 2022 23:55:05 GMT", "version": "v1"}, {"created": "Fri, 5 May 2023 14:06:57 GMT", "version": "v2"}]
update_date: 2023-05-08
authors_parsed: [["Ragab", "Mohamed", ""], ["Eldele", "Emadeldeen", ""], ["Tan", "Wee Ling", ""], ["Foo", "Chuan-Sheng", ""], ["Chen", "Zhenghua", ""], ["Wu", "Min", ""], ["Kwoh", "Chee-Keong", ""], ["Li", "Xiaoli", ""]]
abstract: identical to orig_abstract

2006.01713
|
ShiLiang Zhang
|
Zhifu Gao, Shiliang Zhang, Ming Lei, Ian McLoughlin
|
SAN-M: Memory Equipped Self-Attention for End-to-End Speech Recognition
|
submitted to INTERSPEECH2020
| null | null | null |
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
End-to-end speech recognition has become popular in recent years, since it
can integrate the acoustic, pronunciation and language models into a single
neural network. Among end-to-end approaches, attention-based methods have
emerged as superior; a prominent example is the Transformer, which adopts an
encoder-decoder architecture. The key improvement introduced by the
Transformer is the utilization of self-attention instead of recurrent
mechanisms, enabling both the encoder and decoder to capture long-range
dependencies with lower computational complexity. In this work, we propose
boosting the self-attention
ability with a DFSMN memory block, forming the proposed memory equipped
self-attention (SAN-M) mechanism. Theoretical and empirical comparisons have
been made to demonstrate the relevancy and complementarity between
self-attention and the DFSMN memory block. Furthermore, the proposed SAN-M
provides an efficient mechanism to integrate these two modules. We have
evaluated our approach on the public AISHELL-1 benchmark and an
industrial-level 20,000-hour Mandarin speech recognition task. On both tasks,
SAN-M systems achieved much better performance than the self-attention-based
Transformer baseline system. In particular, it achieves a CER of 6.46% on the
AISHELL-1 task even without using any external LM, comfortably outperforming
other state-of-the-art systems.
|
[
{
"created": "Thu, 21 May 2020 03:33:09 GMT",
"version": "v1"
}
] |
2020-06-03
|
[
[
"Gao",
"Zhifu",
""
],
[
"Zhang",
"Shiliang",
""
],
[
"Lei",
"Ming",
""
],
[
"McLoughlin",
"Ian",
""
]
] |
End-to-end speech recognition has become popular in recent years, since it can integrate the acoustic, pronunciation and language models into a single neural network. Among end-to-end approaches, attention-based methods have emerged as superior; a prominent example is the Transformer, which adopts an encoder-decoder architecture. The key improvement introduced by the Transformer is the utilization of self-attention instead of recurrent mechanisms, enabling both the encoder and decoder to capture long-range dependencies with lower computational complexity. In this work, we propose boosting the self-attention ability with a DFSMN memory block, forming the proposed memory equipped self-attention (SAN-M) mechanism. Theoretical and empirical comparisons have been made to demonstrate the relevancy and complementarity between self-attention and the DFSMN memory block. Furthermore, the proposed SAN-M provides an efficient mechanism to integrate these two modules. We have evaluated our approach on the public AISHELL-1 benchmark and an industrial-level 20,000-hour Mandarin speech recognition task. On both tasks, SAN-M systems achieved much better performance than the self-attention-based Transformer baseline system. In particular, it achieves a CER of 6.46% on the AISHELL-1 task even without using any external LM, comfortably outperforming other state-of-the-art systems.
|
1701.08407
|
Lu Lu
|
Lu Lu, Haiquan Zhao
|
Subband adaptive filter trained by differential evolution for channel
estimation
|
7 pages, 4 figures
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The normalized subband adaptive filter (NSAF) is widely accepted as a
preeminent adaptive filtering algorithm because of its efficiency under
colored excitation. However, the convergence rate of the NSAF is slow. To
address this drawback, in this paper a variant of the NSAF, called the
differential evolution (DE) based NSAF (DE-NSAF), is proposed for channel
estimation based on the DE strategy. It is worth noting that there are
several papers concerning the design of DE strategies for adaptive filters,
but their signal models are still the single adaptive filter model rather
than the fullband adaptive filter model considered in this paper. Thus, the
problem considered in our work is quite different from those. The proposed
DE-NSAF algorithm is based on real-valued manipulations and has a fast
convergence rate when searching for the global solution of the optimized
weight vector. Moreover, the design steps of the new algorithm are given in
detail. Simulation results demonstrate the improved performance of the
proposed DE-NSAF algorithm in terms of convergence rate.
|
[
{
"created": "Sun, 29 Jan 2017 17:30:53 GMT",
"version": "v1"
},
{
"created": "Sun, 5 Mar 2017 18:33:38 GMT",
"version": "v2"
},
{
"created": "Fri, 17 Mar 2017 02:04:06 GMT",
"version": "v3"
}
] |
2017-03-20
|
[
[
"Lu",
"Lu",
""
],
[
"Zhao",
"Haiquan",
""
]
] |
The normalized subband adaptive filter (NSAF) is widely accepted as a preeminent adaptive filtering algorithm because of its efficiency under colored excitation. However, the convergence rate of the NSAF is slow. To address this drawback, in this paper a variant of the NSAF, called the differential evolution (DE) based NSAF (DE-NSAF), is proposed for channel estimation based on the DE strategy. It is worth noting that there are several papers concerning the design of DE strategies for adaptive filters, but their signal models are still the single adaptive filter model rather than the fullband adaptive filter model considered in this paper. Thus, the problem considered in our work is quite different from those. The proposed DE-NSAF algorithm is based on real-valued manipulations and has a fast convergence rate when searching for the global solution of the optimized weight vector. Moreover, the design steps of the new algorithm are given in detail. Simulation results demonstrate the improved performance of the proposed DE-NSAF algorithm in terms of convergence rate.
|
2302.07116
|
Lechao Cheng
|
Tian Qiu, Linyun Zhou, Wenxiang Xu, Lechao Cheng, Zunlei Feng, Mingli
Song
|
Team DETR: Guide Queries as a Professional Team in Detection
Transformers
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Recently proposed DETR variants have made tremendous progress in various
scenarios due to their streamlined processes and remarkable performance.
However, the learned queries usually explore the global context to generate
the final set prediction, resulting in redundant burdens and unfaithful
results. More specifically, a query is commonly responsible for objects of
different scales and positions, which is challenging for the query itself and
causes spatial resource competition among queries. To alleviate this issue,
we propose
Team DETR, which leverages query collaboration and position constraints to
embrace objects of interest more precisely. We also dynamically cater to each
query member's prediction preference, offering the query better scale and
spatial priors. In addition, the proposed Team DETR is flexible enough to be
adapted to other existing DETR variants without increasing parameters and
calculations. Extensive experiments on the COCO dataset show that Team DETR
achieves remarkable gains, especially for small and large objects. Code is
available at \url{https://github.com/horrible-dong/TeamDETR}.
|
[
{
"created": "Tue, 14 Feb 2023 15:21:53 GMT",
"version": "v1"
},
{
"created": "Wed, 15 Feb 2023 07:25:10 GMT",
"version": "v2"
},
{
"created": "Tue, 28 Feb 2023 04:28:12 GMT",
"version": "v3"
}
] |
2023-03-01
|
[
[
"Qiu",
"Tian",
""
],
[
"Zhou",
"Linyun",
""
],
[
"Xu",
"Wenxiang",
""
],
[
"Cheng",
"Lechao",
""
],
[
"Feng",
"Zunlei",
""
],
[
"Song",
"Mingli",
""
]
] |
Recently proposed DETR variants have made tremendous progress in various scenarios due to their streamlined processes and remarkable performance. However, the learned queries usually explore the global context to generate the final set prediction, resulting in redundant burdens and unfaithful results. More specifically, a query is commonly responsible for objects of different scales and positions, which is challenging for the query itself and causes spatial resource competition among queries. To alleviate this issue, we propose Team DETR, which leverages query collaboration and position constraints to embrace objects of interest more precisely. We also dynamically cater to each query member's prediction preference, offering the query better scale and spatial priors. In addition, the proposed Team DETR is flexible enough to be adapted to other existing DETR variants without increasing parameters and calculations. Extensive experiments on the COCO dataset show that Team DETR achieves remarkable gains, especially for small and large objects. Code is available at \url{https://github.com/horrible-dong/TeamDETR}.
|
1912.00369
|
Roger Moore
|
Roger K. Moore
|
Talking with Robots: Opportunities and Challenges
|
Submitted for presentation at the UNESCO International Conference
Language Technologies for All (LT4All), Paris, 4-6 December 2019
(https://en.unesco.org/LT4All)
| null | null | null |
cs.HC cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Notwithstanding the tremendous progress that is taking place in spoken
language technology, effective speech-based human-robot interaction still
raises a number of important challenges. Not only do the fields of robotics and
spoken language technology present their own special problems, but their
combination raises an additional set of issues. In particular, there is a large
gap between the formulaic speech that typifies contemporary spoken dialogue
systems and the flexible nature of human-human conversation. It is pointed out
that grounded and situated speech-based human-robot interaction may lead to
deeper insights into the pragmatics of language usage, thereby overcoming the
current `habitability gap'.
|
[
{
"created": "Sun, 1 Dec 2019 09:42:50 GMT",
"version": "v1"
}
] |
2019-12-03
|
[
[
"Moore",
"Roger K.",
""
]
] |
Notwithstanding the tremendous progress that is taking place in spoken language technology, effective speech-based human-robot interaction still raises a number of important challenges. Not only do the fields of robotics and spoken language technology present their own special problems, but their combination raises an additional set of issues. In particular, there is a large gap between the formulaic speech that typifies contemporary spoken dialogue systems and the flexible nature of human-human conversation. It is pointed out that grounded and situated speech-based human-robot interaction may lead to deeper insights into the pragmatics of language usage, thereby overcoming the current `habitability gap'.
|
2107.13304
|
Bang Xiang Yong
|
Bang Xiang Yong, Tim Pearce, Alexandra Brintrup
|
Bayesian Autoencoders: Analysing and Fixing the Bernoulli likelihood for
Out-of-Distribution Detection
|
Presented at the ICML 2020 Workshop on Uncertainty and Robustness in
Deep Learning
| null | null | null |
cs.LG stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
After an autoencoder (AE) has learnt to reconstruct one dataset, it might be
expected that the likelihood on an out-of-distribution (OOD) input would be
low. This has been studied as an approach to detect OOD inputs. Recent work
showed this intuitive approach can fail for the dataset pairs FashionMNIST vs
MNIST. This paper suggests this is due to the use of the Bernoulli likelihood
and analyses why this is the case, proposing two fixes: 1) compute the
uncertainty of the likelihood estimate by using a Bayesian version of the AE;
2) use alternative distributions to model the likelihood.
|
[
{
"created": "Wed, 28 Jul 2021 11:51:35 GMT",
"version": "v1"
}
] |
2021-07-29
|
[
[
"Yong",
"Bang Xiang",
""
],
[
"Pearce",
"Tim",
""
],
[
"Brintrup",
"Alexandra",
""
]
] |
After an autoencoder (AE) has learnt to reconstruct one dataset, it might be expected that the likelihood on an out-of-distribution (OOD) input would be low. This has been studied as an approach to detect OOD inputs. Recent work showed this intuitive approach can fail for the dataset pairs FashionMNIST vs MNIST. This paper suggests this is due to the use of the Bernoulli likelihood and analyses why this is the case, proposing two fixes: 1) compute the uncertainty of the likelihood estimate by using a Bayesian version of the AE; 2) use alternative distributions to model the likelihood.
|
2009.12626
|
Klim Zaporojets
|
Klim Zaporojets, Johannes Deleu, Chris Develder, Thomas Demeester
|
DWIE: an entity-centric dataset for multi-task document-level
information extraction
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents DWIE, the 'Deutsche Welle corpus for Information
Extraction', a newly created multi-task dataset that combines four main
Information Extraction (IE) annotation subtasks: (i) Named Entity Recognition
(NER), (ii) Coreference Resolution, (iii) Relation Extraction (RE), and (iv)
Entity Linking. DWIE is conceived as an entity-centric dataset that describes
interactions and properties of conceptual entities on the level of the complete
document. This contrasts with currently dominant mention-driven approaches that
start from the detection and classification of named entity mentions in
individual sentences. Further, DWIE presented two main challenges when
building and evaluating IE models for it. First, the use of traditional
mention-level evaluation metrics for the NER and RE tasks on the
entity-centric DWIE dataset can
result in measurements dominated by predictions on more frequently mentioned
entities. We tackle this issue by proposing a new entity-driven metric that
takes into account the number of mentions that compose each of the predicted
and ground truth entities. Second, the document-level multi-task annotations
require the models to transfer information between entity mentions located in
different parts of the document, as well as between different tasks, in a joint
learning setting. To realize this, we propose to use graph-based neural message
passing techniques between document-level mention spans. Our experiments show
an improvement of up to 5.5 F1 percentage points when incorporating neural
graph propagation into our joint model. This demonstrates DWIE's potential to
stimulate further research in graph neural networks for representation learning
in multi-task IE. We make DWIE publicly available at
https://github.com/klimzaporojets/DWIE.
|
[
{
"created": "Sat, 26 Sep 2020 15:53:22 GMT",
"version": "v1"
},
{
"created": "Tue, 9 Mar 2021 13:46:09 GMT",
"version": "v2"
}
] |
2021-03-10
|
[
[
"Zaporojets",
"Klim",
""
],
[
"Deleu",
"Johannes",
""
],
[
"Develder",
"Chris",
""
],
[
"Demeester",
"Thomas",
""
]
] |
This paper presents DWIE, the 'Deutsche Welle corpus for Information Extraction', a newly created multi-task dataset that combines four main Information Extraction (IE) annotation subtasks: (i) Named Entity Recognition (NER), (ii) Coreference Resolution, (iii) Relation Extraction (RE), and (iv) Entity Linking. DWIE is conceived as an entity-centric dataset that describes interactions and properties of conceptual entities on the level of the complete document. This contrasts with currently dominant mention-driven approaches that start from the detection and classification of named entity mentions in individual sentences. Further, DWIE presented two main challenges when building and evaluating IE models for it. First, the use of traditional mention-level evaluation metrics for the NER and RE tasks on the entity-centric DWIE dataset can result in measurements dominated by predictions on more frequently mentioned entities. We tackle this issue by proposing a new entity-driven metric that takes into account the number of mentions that compose each of the predicted and ground truth entities. Second, the document-level multi-task annotations require the models to transfer information between entity mentions located in different parts of the document, as well as between different tasks, in a joint learning setting. To realize this, we propose to use graph-based neural message passing techniques between document-level mention spans. Our experiments show an improvement of up to 5.5 F1 percentage points when incorporating neural graph propagation into our joint model. This demonstrates DWIE's potential to stimulate further research in graph neural networks for representation learning in multi-task IE. We make DWIE publicly available at https://github.com/klimzaporojets/DWIE.
|
2107.01726
|
Vladimir Vovk
|
Vladimir Vovk, Ivan Petej, and Alex Gammerman
|
Protected probabilistic classification
|
23 pages, 14 figures, and 4 tables
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes a way of protecting probabilistic prediction models
against changes in the data distribution, concentrating on the case of
classification and paying particular attention to binary classification. This
is important in applications of machine learning, where the quality of a
trained prediction algorithm may drop significantly in the process of its
exploitation. Our techniques are based on recent work on conformal test
martingales and older work on prediction with expert advice, namely tracking
the best expert.
|
[
{
"created": "Sun, 4 Jul 2021 20:32:52 GMT",
"version": "v1"
},
{
"created": "Fri, 22 Oct 2021 19:04:51 GMT",
"version": "v2"
}
] |
2021-10-26
|
[
[
"Vovk",
"Vladimir",
""
],
[
"Petej",
"Ivan",
""
],
[
"Gammerman",
"Alex",
""
]
] |
This paper proposes a way of protecting probabilistic prediction models against changes in the data distribution, concentrating on the case of classification and paying particular attention to binary classification. This is important in applications of machine learning, where the quality of a trained prediction algorithm may drop significantly in the process of its exploitation. Our techniques are based on recent work on conformal test martingales and older work on prediction with expert advice, namely tracking the best expert.
|
2302.05889
|
Yifei Wang
|
Yifei Wang, Yupan Wang, Zeyu Zhang, Song Yang, Kaiqi Zhao, Jiamou Liu
|
USER: Unsupervised Structural Entropy-based Robust Graph Neural Network
| null | null |
10.1609/aaai.v37i8.26219
| null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unsupervised/self-supervised graph neural networks (GNN) are vulnerable to
inherent randomness in the input graph data which greatly affects the
performance of the model in downstream tasks. In this paper, we alleviate the
interference of graph randomness and learn appropriate representations of nodes
without label information. To this end, we propose USER, an unsupervised robust
version of graph neural networks that is based on structural entropy. We
analyze the property of intrinsic connectivity and define the intrinsic
connectivity graph. We also identify the rank of the adjacency matrix as a
crucial factor in revealing a graph that provides the same embeddings as the
intrinsic connectivity graph. We then introduce structural entropy in the
objective function to capture such a graph. Extensive experiments conducted on
clustering and link prediction tasks under random noise and meta-attacks over
three datasets show that USER outperforms benchmarks and is robust to heavier
randomness.
|
[
{
"created": "Sun, 12 Feb 2023 10:32:12 GMT",
"version": "v1"
}
] |
2023-08-14
|
[
[
"Wang",
"Yifei",
""
],
[
"Wang",
"Yupan",
""
],
[
"Zhang",
"Zeyu",
""
],
[
"Yang",
"Song",
""
],
[
"Zhao",
"Kaiqi",
""
],
[
"Liu",
"Jiamou",
""
]
] |
Unsupervised/self-supervised graph neural networks (GNN) are vulnerable to inherent randomness in the input graph data which greatly affects the performance of the model in downstream tasks. In this paper, we alleviate the interference of graph randomness and learn appropriate representations of nodes without label information. To this end, we propose USER, an unsupervised robust version of graph neural networks that is based on structural entropy. We analyze the property of intrinsic connectivity and define the intrinsic connectivity graph. We also identify the rank of the adjacency matrix as a crucial factor in revealing a graph that provides the same embeddings as the intrinsic connectivity graph. We then introduce structural entropy in the objective function to capture such a graph. Extensive experiments conducted on clustering and link prediction tasks under random noise and meta-attacks over three datasets show that USER outperforms benchmarks and is robust to heavier randomness.
|
2405.09550
|
Jeongjin Shin
|
Jeongjin Shin
|
Mask-based Invisible Backdoor Attacks on Object Detection
|
7 pages, 3 figures
| null | null | null |
cs.CV cs.AI cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep learning models have achieved unprecedented performance in the domain of
object detection, resulting in breakthroughs in areas such as autonomous
driving and security. However, deep learning models are vulnerable to backdoor
attacks. These attacks prompt models to behave similarly to standard models
without a trigger; however, they act maliciously upon detecting a predefined
trigger. Despite extensive research on backdoor attacks in image
classification, their application to object detection remains relatively
underexplored. Given the widespread application of object detection in critical
real-world scenarios, the sensitivity and potential impact of these
vulnerabilities cannot be overstated. In this study, we propose an effective
invisible backdoor attack on object detection utilizing a mask-based approach.
Three distinct attack scenarios were explored for object detection: object
disappearance, object misclassification, and object generation attack. Through
extensive experiments, we comprehensively examined the effectiveness of these
attacks and tested certain defense methods to determine effective
countermeasures. Code will be available at
https://github.com/jeongjin0/invisible-backdoor-object-detection
|
[
{
"created": "Wed, 20 Mar 2024 12:27:30 GMT",
"version": "v1"
},
{
"created": "Fri, 24 May 2024 13:17:39 GMT",
"version": "v2"
},
{
"created": "Tue, 4 Jun 2024 11:28:42 GMT",
"version": "v3"
}
] |
2024-06-05
|
[
[
"Shin",
"Jeongjin",
""
]
] |
Deep learning models have achieved unprecedented performance in the domain of object detection, resulting in breakthroughs in areas such as autonomous driving and security. However, deep learning models are vulnerable to backdoor attacks. These attacks prompt models to behave similarly to standard models without a trigger; however, they act maliciously upon detecting a predefined trigger. Despite extensive research on backdoor attacks in image classification, their application to object detection remains relatively underexplored. Given the widespread application of object detection in critical real-world scenarios, the sensitivity and potential impact of these vulnerabilities cannot be overstated. In this study, we propose an effective invisible backdoor attack on object detection utilizing a mask-based approach. Three distinct attack scenarios were explored for object detection: object disappearance, object misclassification, and object generation attack. Through extensive experiments, we comprehensively examined the effectiveness of these attacks and tested certain defense methods to determine effective countermeasures. Code will be available at https://github.com/jeongjin0/invisible-backdoor-object-detection
|
2405.11267
|
Yo\`av Montacute
|
Yo\`av Montacute and Glynn Winskel
|
Concurrent Games over Relational Structures: The Origin of Game Comonads
|
Extended version of the paper in Logic in Computer Science (LICS)
2024 Proceedings
| null | null | null |
cs.LO cs.PL math.CT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Spoiler-Duplicator games are used in finite model theory to examine the
expressive power of logics. Their strategies have recently been reformulated as
coKleisli maps of game comonads over relational structures, providing new
results in finite model theory via categorical techniques. We present a novel
framework for studying Spoiler-Duplicator games by viewing them as event
structures. We introduce a first systematic method for constructing comonads
for all one-sided Spoiler-Duplicator games: game comonads are now realised by
adjunctions to a category of games, generically constructed from a comonad in a
bicategory of game schema (called signature games). Maps of the constructed
categories of games are strategies and generalise coKleisli maps of game
comonads; in the case of one-sided games they are shown to coincide with
suitably generalised homomorphisms. Finally, we provide characterisations of
strategies on two-sided Spoiler-Duplicator games; in a common special case they
coincide with spans of event structures.
|
[
{
"created": "Sat, 18 May 2024 11:34:05 GMT",
"version": "v1"
}
] |
2024-05-21
|
[
[
"Montacute",
"Yoàv",
""
],
[
"Winskel",
"Glynn",
""
]
] |
Spoiler-Duplicator games are used in finite model theory to examine the expressive power of logics. Their strategies have recently been reformulated as coKleisli maps of game comonads over relational structures, providing new results in finite model theory via categorical techniques. We present a novel framework for studying Spoiler-Duplicator games by viewing them as event structures. We introduce a first systematic method for constructing comonads for all one-sided Spoiler-Duplicator games: game comonads are now realised by adjunctions to a category of games, generically constructed from a comonad in a bicategory of game schema (called signature games). Maps of the constructed categories of games are strategies and generalise coKleisli maps of game comonads; in the case of one-sided games they are shown to coincide with suitably generalised homomorphisms. Finally, we provide characterisations of strategies on two-sided Spoiler-Duplicator games; in a common special case they coincide with spans of event structures.
|
2205.00872
|
Chen Xu
|
Chen Xu, Piji Li, Wei Wang, Haoran Yang, Siyun Wang, and Chuangbai
Xiao
|
COSPLAY: Concept Set Guided Personalized Dialogue Generation Across Both
Party Personas
|
Accepted by SIGIR 2022, 11 pages, 9 figures
| null |
10.1145/3477495.3531957
| null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Maintaining a consistent persona is essential for building a human-like
conversational model. However, the lack of attention to the partner makes
such models more egocentric: they tend to show their persona by all means,
such as stiffly twisting the topic, pulling the conversation to their own
interests regardless of the partner, and rambling on about their persona with
little curiosity toward the partner. In this work, we propose COSPLAY
(COncept Set guided PersonaLized dialogue generation Across both partY
personas), which considers both parties as a "team":
expressing self-persona while keeping curiosity toward the partner, leading
responses around mutual personas, and finding the common ground. Specifically,
we first represent self-persona, partner persona and mutual dialogue all in the
concept sets. Then, we propose the Concept Set framework with a suite of
knowledge-enhanced operations to process them such as set algebras, set
expansion, and set distance. Based on these operations as a medium, we train the
model by utilizing 1) concepts of both party personas, 2) concept relationship
between them, and 3) their relationship to the future dialogue. Extensive
experiments on a large public dataset, Persona-Chat, demonstrate that our model
outperforms state-of-the-art baselines for generating less egocentric, more
human-like, and higher-quality responses in both automatic and human
evaluations.
|
[
{
"created": "Mon, 2 May 2022 12:55:40 GMT",
"version": "v1"
},
{
"created": "Wed, 4 May 2022 16:26:23 GMT",
"version": "v2"
},
{
"created": "Sun, 15 May 2022 06:42:43 GMT",
"version": "v3"
}
] |
2022-05-17
|
[
[
"Xu",
"Chen",
""
],
[
"Li",
"Piji",
""
],
[
"Wang",
"Wei",
""
],
[
"Yang",
"Haoran",
""
],
[
"Wang",
"Siyun",
""
],
[
"Xiao",
"Chuangbai",
""
]
] |
Maintaining a consistent persona is essential for building a human-like conversational model. However, the lack of attention to the partner makes such models more egocentric: they tend to show their persona by all means, such as stiffly twisting the topic, pulling the conversation to their own interests regardless of the partner, and rambling on about their persona with little curiosity toward the partner. In this work, we propose COSPLAY (COncept Set guided PersonaLized dialogue generation Across both partY personas), which considers both parties as a "team": expressing self-persona while keeping curiosity toward the partner, leading responses around mutual personas, and finding the common ground. Specifically, we first represent self-persona, partner persona and mutual dialogue all in the concept sets. Then, we propose the Concept Set framework with a suite of knowledge-enhanced operations to process them such as set algebras, set expansion, and set distance. Based on these operations as a medium, we train the model by utilizing 1) concepts of both party personas, 2) concept relationship between them, and 3) their relationship to the future dialogue. Extensive experiments on a large public dataset, Persona-Chat, demonstrate that our model outperforms state-of-the-art baselines for generating less egocentric, more human-like, and higher-quality responses in both automatic and human evaluations.
|
1803.02096
|
Maarten Bieshaar
|
G\"unther Reitberger and Stefan Zernetsch and Maarten Bieshaar and
Bernhard Sick and Konrad Doll and Erich Fuchs
|
Cooperative Tracking of Cyclists Based on Smart Devices and
Infrastructure
|
7 pages, 6 figures. submitted (accepted for publication) IEEE
Conference on Intelligent Transportation Systems(ITSC) 2018, Maui, HI
| null | null | null |
cs.CY cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In future traffic scenarios, vehicles and other traffic participants will be
interconnected and equipped with various types of sensors, allowing for
cooperation based on data or information exchange. This article presents an
approach to cooperative tracking of cyclists using smart devices and
infrastructure-based sensors. A smart device is carried by the cyclists and an
intersection is equipped with a wide-angle stereo camera system. Two tracking
models are presented and compared. The first model is based on the stereo
camera system detections only, whereas the second model cooperatively combines
the camera based detections with velocity and yaw rate data provided by the
smart device. Our aim is to overcome limitations of tracking approaches based
on single data sources. We show in numerical evaluations on scenes where
cyclists are starting or turning right that the cooperation leads to an
improvement in both the ability to keep track of a cyclist and the accuracy of
the track, particularly when it comes to occlusions in the visual system. We,
therefore, contribute to the safety of vulnerable road users in future traffic.
|
[
{
"created": "Tue, 6 Mar 2018 10:33:35 GMT",
"version": "v1"
},
{
"created": "Tue, 3 Jul 2018 08:13:51 GMT",
"version": "v2"
}
] |
2018-07-04
|
[
[
"Reitberger",
"Günther",
""
],
[
"Zernetsch",
"Stefan",
""
],
[
"Bieshaar",
"Maarten",
""
],
[
"Sick",
"Bernhard",
""
],
[
"Doll",
"Konrad",
""
],
[
"Fuchs",
"Erich",
""
]
] |
In future traffic scenarios, vehicles and other traffic participants will be interconnected and equipped with various types of sensors, allowing for cooperation based on data or information exchange. This article presents an approach to cooperative tracking of cyclists using smart devices and infrastructure-based sensors. A smart device is carried by the cyclists and an intersection is equipped with a wide-angle stereo camera system. Two tracking models are presented and compared. The first model is based on the stereo camera system detections only, whereas the second model cooperatively combines the camera based detections with velocity and yaw rate data provided by the smart device. Our aim is to overcome limitations of tracking approaches based on single data sources. We show in numerical evaluations on scenes where cyclists are starting or turning right that the cooperation leads to an improvement in both the ability to keep track of a cyclist and the accuracy of the track, particularly when it comes to occlusions in the visual system. We, therefore, contribute to the safety of vulnerable road users in future traffic.
|
2405.01646
|
Alessio Xompero
|
Alessio Xompero, Myriam Bontonou, Jean-Michel Arbona, Emmanouil
Benetos, Andrea Cavallaro
|
Explaining models relating objects and privacy
|
7 pages, 3 figures, 1 table, supplementary material included as
Appendix. Paper accepted at the 3rd XAI4CV Workshop at CVPR 2024. Code:
https://github.com/graphnex/ig-privacy
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Accurately predicting whether an image is private before sharing it online is
difficult due to the vast variety of content and the subjective nature of
privacy itself. In this paper, we evaluate privacy models that use objects
extracted from an image to determine why the image is predicted as private. To
explain the decision of these models, we use feature-attribution to identify
and quantify which objects (and which of their features) are more relevant to
privacy classification with respect to a reference input (i.e., no objects
localised in an image) predicted as public. We show that the presence of the
person category and its cardinality is the main factor for the privacy
decision. Therefore, these models mostly fail to identify private images
depicting documents with sensitive data, vehicle ownership, and internet
activity, or public images with people (e.g., an outdoor concert or people
walking in a public space next to a famous landmark). As baselines for future
benchmarks, we also devise two strategies that are based on the person presence
and cardinality and achieve comparable classification performance of the
privacy models.
|
[
{
"created": "Thu, 2 May 2024 18:06:48 GMT",
"version": "v1"
}
] |
2024-05-06
|
[
[
"Xompero",
"Alessio",
""
],
[
"Bontonou",
"Myriam",
""
],
[
"Arbona",
"Jean-Michel",
""
],
[
"Benetos",
"Emmanouil",
""
],
[
"Cavallaro",
"Andrea",
""
]
] |
Accurately predicting whether an image is private before sharing it online is difficult due to the vast variety of content and the subjective nature of privacy itself. In this paper, we evaluate privacy models that use objects extracted from an image to determine why the image is predicted as private. To explain the decision of these models, we use feature-attribution to identify and quantify which objects (and which of their features) are more relevant to privacy classification with respect to a reference input (i.e., no objects localised in an image) predicted as public. We show that the presence of the person category and its cardinality is the main factor for the privacy decision. Therefore, these models mostly fail to identify private images depicting documents with sensitive data, vehicle ownership, and internet activity, or public images with people (e.g., an outdoor concert or people walking in a public space next to a famous landmark). As baselines for future benchmarks, we also devise two strategies that are based on the person presence and cardinality and achieve comparable classification performance of the privacy models.
|
2106.11033
|
Stephan Stahlschmidt
|
Axel Oberschelp, Stephan Stahlschmidt
|
Gr\"o{\ss}e als Erfolgsgarant? Zur Bedeutung der Organisationstruktur
f\"ur die Einwerbung von Drittmitteln der Deutschen Forschungsgemeinschaft
|
in German
| null | null | null |
cs.CY physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Research funding through third-party financing is of considerable importance
for the German science system. The funds of the German Research Foundation
(DFG) serve as the central focus due to the foundation's high reputation.
However, it has not yet been clarified to what extent the chances of
successfully acquiring these funds depend on the structure of the university as
an institution. The present study analyses DFG funding in the context of
university research and examines the role of organisational conditions in the
acquisition of funding. Several factors, such as the size of the institution,
its equipment, and its teaching activities, are analysed. The empirical study
focuses on four subjects and investigates the correlation between funding
success and conditional factors using a Bayesian approach. The results reveal
the considerable relevance of the factors of size and the provision of academic
and non-academic personnel. This implies that organisational conditions are to
be taken into account when evaluating third-party financing success.
|
[
{
"created": "Thu, 3 Jun 2021 08:47:34 GMT",
"version": "v1"
}
] |
2021-06-22
|
[
[
"Oberschelp",
"Axel",
""
],
[
"Stahlschmidt",
"Stephan",
""
]
] |
Research funding through third-party financing is of considerable importance for the German science system. The funds of the German Research Foundation (DFG) serve as the central focus due to the foundation's high reputation. However, it has not yet been clarified to what extent the chances of successfully acquiring these funds depend on the structure of the university as an institution. The present study analyses DFG funding in the context of university research and examines the role of organisational conditions in the acquisition of funding. Several factors, such as the size of the institution, its equipment, and its teaching activities, are analysed. The empirical study focuses on four subjects and investigates the correlation between funding success and conditional factors using a Bayesian approach. The results reveal the considerable relevance of the factors of size and the provision of academic and non-academic personnel. This implies that organisational conditions are to be taken into account when evaluating third-party financing success.
|
2107.02112
|
Meng-Jiun Chiou
|
Meng-Jiun Chiou, Henghui Ding, Hanshu Yan, Changhu Wang, Roger
Zimmermann, Jiashi Feng
|
Recovering the Unbiased Scene Graphs from the Biased Ones
|
Accepted by ACMMM 2021. Source code will be available at
https://github.com/coldmanck/recovering-unbiased-scene-graphs
| null | null | null |
cs.CV cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Given input images, scene graph generation (SGG) aims to produce
comprehensive graphical representations describing the visual relationships
among salient objects. Recently, more effort has been devoted to the long-tail
problem in SGG; however, the imbalance in the fraction of missing labels across
classes, or reporting bias, which exacerbates the long tail, is rarely
considered and cannot be solved by existing debiasing methods. In this paper,
we show that, due to the missing labels, SGG can be viewed as a "Learning from
Positive and Unlabeled data" (PU learning) problem, where the reporting bias
can be removed by recovering the unbiased probabilities from the biased ones
using label frequencies, i.e., the per-class fraction of labeled, positive
examples among all positive examples. To obtain accurate label frequency
estimates, we propose Dynamic Label Frequency Estimation (DLFE), which takes
advantage of training-time data augmentation and averages over multiple
training iterations to introduce more valid examples. Extensive experiments
show that DLFE is more effective at estimating label frequencies than a naive
variant of the traditional estimate, and that DLFE significantly alleviates the
long tail and achieves state-of-the-art debiasing performance on the VG
dataset. We also show qualitatively that SGG models with DLFE produce
prominently more balanced and unbiased scene graphs.
|
[
{
"created": "Mon, 5 Jul 2021 16:10:41 GMT",
"version": "v1"
}
] |
2021-07-06
|
[
[
"Chiou",
"Meng-Jiun",
""
],
[
"Ding",
"Henghui",
""
],
[
"Yan",
"Hanshu",
""
],
[
"Wang",
"Changhu",
""
],
[
"Zimmermann",
"Roger",
""
],
[
"Feng",
"Jiashi",
""
]
] |
Given input images, scene graph generation (SGG) aims to produce comprehensive graphical representations describing the visual relationships among salient objects. Recently, more effort has been devoted to the long-tail problem in SGG; however, the imbalance in the fraction of missing labels across classes, or reporting bias, which exacerbates the long tail, is rarely considered and cannot be solved by existing debiasing methods. In this paper, we show that, due to the missing labels, SGG can be viewed as a "Learning from Positive and Unlabeled data" (PU learning) problem, where the reporting bias can be removed by recovering the unbiased probabilities from the biased ones using label frequencies, i.e., the per-class fraction of labeled, positive examples among all positive examples. To obtain accurate label frequency estimates, we propose Dynamic Label Frequency Estimation (DLFE), which takes advantage of training-time data augmentation and averages over multiple training iterations to introduce more valid examples. Extensive experiments show that DLFE is more effective at estimating label frequencies than a naive variant of the traditional estimate, and that DLFE significantly alleviates the long tail and achieves state-of-the-art debiasing performance on the VG dataset. We also show qualitatively that SGG models with DLFE produce prominently more balanced and unbiased scene graphs.
|
2401.10226
|
Xiangtai Li Dr
|
Jianzong Wu, Xiangtai Li, Chenyang Si, Shangchen Zhou, Jingkang Yang,
Jiangning Zhang, Yining Li, Kai Chen, Yunhai Tong, Ziwei Liu, Chen Change Loy
|
Towards Language-Driven Video Inpainting via Multimodal Large Language
Models
|
Project Page: https://jianzongwu.github.io/projects/rovi
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a new task -- language-driven video inpainting, which uses
natural language instructions to guide the inpainting process. This approach
overcomes the limitations of traditional video inpainting methods that depend
on manually labeled binary masks, a process often tedious and labor-intensive.
We present the Remove Objects from Videos by Instructions (ROVI) dataset,
containing 5,650 videos and 9,091 inpainting results, to support training and
evaluation for this task. We also propose a novel diffusion-based
language-driven video inpainting framework, the first end-to-end baseline for
this task, integrating Multimodal Large Language Models to understand and
execute complex language-based inpainting requests effectively. Our
comprehensive results showcase the dataset's versatility and the model's
effectiveness in various language-instructed inpainting scenarios. We will make
datasets, code, and models publicly available.
|
[
{
"created": "Thu, 18 Jan 2024 18:59:13 GMT",
"version": "v1"
}
] |
2024-01-19
|
[
[
"Wu",
"Jianzong",
""
],
[
"Li",
"Xiangtai",
""
],
[
"Si",
"Chenyang",
""
],
[
"Zhou",
"Shangchen",
""
],
[
"Yang",
"Jingkang",
""
],
[
"Zhang",
"Jiangning",
""
],
[
"Li",
"Yining",
""
],
[
"Chen",
"Kai",
""
],
[
"Tong",
"Yunhai",
""
],
[
"Liu",
"Ziwei",
""
],
[
"Loy",
"Chen Change",
""
]
] |
We introduce a new task -- language-driven video inpainting, which uses natural language instructions to guide the inpainting process. This approach overcomes the limitations of traditional video inpainting methods that depend on manually labeled binary masks, a process often tedious and labor-intensive. We present the Remove Objects from Videos by Instructions (ROVI) dataset, containing 5,650 videos and 9,091 inpainting results, to support training and evaluation for this task. We also propose a novel diffusion-based language-driven video inpainting framework, the first end-to-end baseline for this task, integrating Multimodal Large Language Models to understand and execute complex language-based inpainting requests effectively. Our comprehensive results showcase the dataset's versatility and the model's effectiveness in various language-instructed inpainting scenarios. We will make datasets, code, and models publicly available.
|
2011.06253
|
Avraham N. Trahtman
|
A.N.Trahtman
|
Precise estimation on the order of local testability of deterministic
finite automaton
|
15 pages
| null | null | null |
cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
A locally testable language L is a language with the property that, for some
non-negative integer k, called the order or level of local testability, whether
or not a word u is in the language L depends on (1) the prefix and the suffix
of the word u of length k-1 and (2) the set of intermediate substrings of
length k of the word u. For a given k the language is called k-testable. We
give necessary and sufficient conditions for the language of an automaton to be
k-testable in terms of the lengths of paths of a related graph. Some estimates
of the upper and lower bounds on the testable order follow from these results.
We improve the upper bound on the testable order of a locally testable
deterministic finite automaton with n states to n(n-2)+1. This bound is the
best possible. We answer the following conjecture of Kim, McNaughton and
McCloskey for a deterministic finite locally testable automaton with n states:
"Is the local testability order no greater than n^1.5 when the alphabet size is
two?" Our answer is negative. In the case of alphabet size two, the situation
is the same as in the general case.
|
[
{
"created": "Thu, 12 Nov 2020 08:18:02 GMT",
"version": "v1"
}
] |
2020-11-13
|
[
[
"Trahtman",
"A. N.",
""
]
] |
A locally testable language L is a language with the property that, for some non-negative integer k, called the order or level of local testability, whether or not a word u is in the language L depends on (1) the prefix and the suffix of the word u of length k-1 and (2) the set of intermediate substrings of length k of the word u. For a given k the language is called k-testable. We give necessary and sufficient conditions for the language of an automaton to be k-testable in terms of the lengths of paths of a related graph. Some estimates of the upper and lower bounds on the testable order follow from these results. We improve the upper bound on the testable order of a locally testable deterministic finite automaton with n states to n(n-2)+1. This bound is the best possible. We answer the following conjecture of Kim, McNaughton and McCloskey for a deterministic finite locally testable automaton with n states: "Is the local testability order no greater than n^1.5 when the alphabet size is two?" Our answer is negative. In the case of alphabet size two, the situation is the same as in the general case.
|
2007.09186
|
Kristjan Arumae
|
Parminder Bhatia, Lan Liu, Kristjan Arumae, Nima Pourdamghani, Suyog
Deshpande, Ben Snively, Mona Mona, Colby Wise, George Price, Shyam Ramaswamy,
Xiaofei Ma, Ramesh Nallapati, Zhiheng Huang, Bing Xiang, Taha Kass-Hout
|
AWS CORD-19 Search: A Neural Search Engine for COVID-19 Literature
| null | null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Coronavirus disease (COVID-19) has been declared a pandemic by the WHO, with
thousands of cases being reported each day. Numerous scientific articles are
being published on the disease, raising the need for a service that can
organize and query them in a reliable fashion. To support this cause, we
present AWS CORD-19 Search (ACS), a public, COVID-19-specific, neural search
engine powered by several machine learning systems to support natural-language
searches. With capabilities such as document ranking, passage ranking, question
answering, and topic classification, ACS provides a scalable solution to
COVID-19 researchers and policy makers in their search and discovery of answers
to high-priority scientific questions. We present a quantitative evaluation and
qualitative analysis of the system against other leading COVID-19 search
platforms. ACS is the top performer across these systems, yielding quality
results that we detail with relevant examples in this work.
|
[
{
"created": "Fri, 17 Jul 2020 18:41:29 GMT",
"version": "v1"
},
{
"created": "Sat, 25 Jul 2020 01:38:58 GMT",
"version": "v2"
},
{
"created": "Wed, 7 Oct 2020 05:59:53 GMT",
"version": "v3"
}
] |
2020-10-08
|
[
[
"Bhatia",
"Parminder",
""
],
[
"Liu",
"Lan",
""
],
[
"Arumae",
"Kristjan",
""
],
[
"Pourdamghani",
"Nima",
""
],
[
"Deshpande",
"Suyog",
""
],
[
"Snively",
"Ben",
""
],
[
"Mona",
"Mona",
""
],
[
"Wise",
"Colby",
""
],
[
"Price",
"George",
""
],
[
"Ramaswamy",
"Shyam",
""
],
[
"Ma",
"Xiaofei",
""
],
[
"Nallapati",
"Ramesh",
""
],
[
"Huang",
"Zhiheng",
""
],
[
"Xiang",
"Bing",
""
],
[
"Kass-Hout",
"Taha",
""
]
] |
Coronavirus disease (COVID-19) has been declared a pandemic by the WHO, with thousands of cases being reported each day. Numerous scientific articles are being published on the disease, raising the need for a service that can organize and query them in a reliable fashion. To support this cause, we present AWS CORD-19 Search (ACS), a public, COVID-19-specific, neural search engine powered by several machine learning systems to support natural-language searches. With capabilities such as document ranking, passage ranking, question answering, and topic classification, ACS provides a scalable solution to COVID-19 researchers and policy makers in their search and discovery of answers to high-priority scientific questions. We present a quantitative evaluation and qualitative analysis of the system against other leading COVID-19 search platforms. ACS is the top performer across these systems, yielding quality results that we detail with relevant examples in this work.
|
2303.06844
|
Kijung Lee
|
Kijung Lee
|
Why do Tweeters regret sharing? Impacts of Twitter users' perception of
sharing risk, perceived problems on Twitter, and the motivation of use on
their behavior of regret sharing
| null | null | null | null |
cs.SI cs.HC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
This study presents a secondary data analysis of the survey data collected as
part of the American Trends Panel series by the Pew Research Center. A logistic
regression was performed to ascertain the effects of the perceived risk of
sharing, perceived problems on Twitter, and motivation of using Twitter on the
likelihood that participants regret sharing on Twitter. The logistic regression
model was statistically significant, χ2(15) = 102.5, p < .001. The model
correctly classified 78.5 percent of cases. Whether or not Twitter users regret
sharing on Twitter depends on different motivations for using Twitter. We
observe that "A way to express my opinion" is statistically significant in the
model, indicating that the odds of Twitter users regretting sharing for this
motivation is 2.1 times higher than that of entertainment. Perceived risks of
potential hostility and visibility were negatively associated with an increased
likelihood of regret sharing. In contrast, perceived problems on Twitter
concerning misinformation were negatively associated with the likelihood of
regret sharing.
|
[
{
"created": "Mon, 13 Mar 2023 04:20:37 GMT",
"version": "v1"
}
] |
2023-03-14
|
[
[
"Lee",
"Kijung",
""
]
] |
This study presents a secondary data analysis of the survey data collected as part of the American Trends Panel series by the Pew Research Center. A logistic regression was performed to ascertain the effects of the perceived risk of sharing, perceived problems on Twitter, and motivation of using Twitter on the likelihood that participants regret sharing on Twitter. The logistic regression model was statistically significant, χ2(15) = 102.5, p < .001. The model correctly classified 78.5 percent of cases. Whether or not Twitter users regret sharing on Twitter depends on different motivations for using Twitter. We observe that "A way to express my opinion" is statistically significant in the model, indicating that the odds of Twitter users regretting sharing for this motivation is 2.1 times higher than that of entertainment. Perceived risks of potential hostility and visibility were negatively associated with an increased likelihood of regret sharing. In contrast, perceived problems on Twitter concerning misinformation were negatively associated with the likelihood of regret sharing.
|
1703.02002
|
Md Mizanur Rahman
|
Mahmudur Rahman, Mizanur Rahman, Bogdan Carbunar, Duen Horng Chau
|
FairPlay: Fraud and Malware Detection in Google Play
|
Proceedings of the 2016 SIAM International Conference on Data Mining.
Society for Industrial and Applied Mathematics, 2016
| null | null | null |
cs.SI cs.CR cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fraudulent behaviors in the Google Android app market fuel search rank abuse
and malware proliferation. We present FairPlay, a novel system that uncovers
both malware and search rank fraud apps by picking out trails that fraudsters
leave behind. To identify suspicious apps, FairPlay's PCF algorithm correlates
review activities and uniquely combines detected review relations with
linguistic and behavioral signals gleaned from longitudinal Google Play app
data. We contribute a new longitudinal app dataset to the community, which
consists of over 87K apps, 2.9M reviews, and 2.4M reviewers, collected over
half a year. FairPlay achieves over 95% accuracy in classifying gold standard
datasets of malware, fraudulent, and legitimate apps. We show that 75% of the
identified malware apps engage in search rank fraud. FairPlay discovers
hundreds of fraudulent apps that currently evade Google Bouncer's detection
technology, and reveals a new type of attack campaign in which users are
harassed into writing positive reviews and into installing and reviewing other
apps.
|
[
{
"created": "Mon, 6 Mar 2017 17:51:16 GMT",
"version": "v1"
}
] |
2017-03-07
|
[
[
"Rahman",
"Mahmudur",
""
],
[
"Rahman",
"Mizanur",
""
],
[
"Carbunar",
"Bogdan",
""
],
[
"Chau",
"Duen Horng",
""
]
] |
Fraudulent behaviors in the Google Android app market fuel search rank abuse and malware proliferation. We present FairPlay, a novel system that uncovers both malware and search rank fraud apps by picking out trails that fraudsters leave behind. To identify suspicious apps, FairPlay's PCF algorithm correlates review activities and uniquely combines detected review relations with linguistic and behavioral signals gleaned from longitudinal Google Play app data. We contribute a new longitudinal app dataset to the community, which consists of over 87K apps, 2.9M reviews, and 2.4M reviewers, collected over half a year. FairPlay achieves over 95% accuracy in classifying gold standard datasets of malware, fraudulent, and legitimate apps. We show that 75% of the identified malware apps engage in search rank fraud. FairPlay discovers hundreds of fraudulent apps that currently evade Google Bouncer's detection technology, and reveals a new type of attack campaign in which users are harassed into writing positive reviews and into installing and reviewing other apps.
|
1904.10158
|
Sasinee Pruekprasert
|
Sasinee Pruekprasert, Xiaoyi Zhang, J\'er\'emy Dubut, Chao Huang,
Masako Kishida
|
Decision Making for Autonomous Vehicles at Unsignalized Intersection in
Presence of Malicious Vehicles
|
IEEE Conference on Intelligent Transportation Systems (ITSC), 2019
| null | null | null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we investigate the decision making of autonomous vehicles at
an unsignalized intersection in the presence of malicious vehicles, that is,
vehicles that do not respect the law by not following the proper rules of the
right of way. Each vehicle computes its control input as a Nash equilibrium of
a game determined by the priority order based on its own belief: each
non-malicious vehicle bases its order on the law, while a malicious one
considers itself as having priority. To illustrate our method, we provide
numerical simulations with different scenarios given by different cases of
malicious vehicles.
|
[
{
"created": "Tue, 23 Apr 2019 05:38:25 GMT",
"version": "v1"
},
{
"created": "Thu, 3 Oct 2019 22:35:00 GMT",
"version": "v2"
}
] |
2019-10-07
|
[
[
"Pruekprasert",
"Sasinee",
""
],
[
"Zhang",
"Xiaoyi",
""
],
[
"Dubut",
"Jérémy",
""
],
[
"Huang",
"Chao",
""
],
[
"Kishida",
"Masako",
""
]
] |
In this paper, we investigate the decision making of autonomous vehicles at an unsignalized intersection in the presence of malicious vehicles, that is, vehicles that do not respect the law by not following the proper rules of the right of way. Each vehicle computes its control input as a Nash equilibrium of a game determined by the priority order based on its own belief: each non-malicious vehicle bases its order on the law, while a malicious one considers itself as having priority. To illustrate our method, we provide numerical simulations with different scenarios given by different cases of malicious vehicles.
|
1807.01864
|
Wei Ao
|
Wei Ao, Yanwei Fu and Feng Xu
|
Detecting Tiny Moving Vehicles in Satellite Videos
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, satellite videos have been captured by moving satellite
platforms. In contrast to consumer, movie, and common surveillance videos, a
satellite video can record a snapshot of a city-scale scene. In the broad field
of view of satellite videos, each moving target is very tiny, usually composed
of only several pixels in each frame. Even worse, noise signals also exist in
the video frames, since the background of the video frame exhibits
subpixel-level and uneven motion due to the motion of the satellite. We argue
that this is a new type of computer vision task, since previous technologies
are unable to detect such tiny vehicles efficiently. This paper proposes a
novel framework that can identify small moving vehicles in satellite videos. In
particular, we offer a novel detection algorithm based on local noise modeling.
We differentiate potential vehicle targets from noise patterns by an
exponential probability distribution. Subsequently, a multi-morphological-cue
based discrimination strategy is designed to further distinguish correct
vehicle targets from the few remaining noises. Another significant contribution
is to introduce a series of evaluation protocols to systematically measure the
performance of tiny moving vehicle detection. We annotate a satellite video
manually and use it to test our algorithms under different evaluation criteria.
The proposed algorithm is also compared with state-of-the-art baselines,
demonstrating the advantages of our framework over the benchmarks.
|
[
{
"created": "Thu, 5 Jul 2018 06:46:31 GMT",
"version": "v1"
}
] |
2018-07-06
|
[
[
"Ao",
"Wei",
""
],
[
"Fu",
"Yanwei",
""
],
[
"Xu",
"Feng",
""
]
] |
In recent years, satellite videos have been captured by moving satellite platforms. In contrast to consumer, movie, and common surveillance videos, a satellite video can record a snapshot of a city-scale scene. In the broad field of view of satellite videos, each moving target is very tiny, usually composed of only several pixels in each frame. Even worse, noise signals also exist in the video frames, since the background of the video frame exhibits subpixel-level and uneven motion due to the motion of the satellite. We argue that this is a new type of computer vision task, since previous technologies are unable to detect such tiny vehicles efficiently. This paper proposes a novel framework that can identify small moving vehicles in satellite videos. In particular, we offer a novel detection algorithm based on local noise modeling. We differentiate potential vehicle targets from noise patterns by an exponential probability distribution. Subsequently, a multi-morphological-cue based discrimination strategy is designed to further distinguish correct vehicle targets from the few remaining noises. Another significant contribution is to introduce a series of evaluation protocols to systematically measure the performance of tiny moving vehicle detection. We annotate a satellite video manually and use it to test our algorithms under different evaluation criteria. The proposed algorithm is also compared with state-of-the-art baselines, demonstrating the advantages of our framework over the benchmarks.
|
2107.02547
|
Cheng Chu
|
Dawen Xu, Cheng Chu, Cheng Liu, Ying Wang, Huawei Li, Xiaowei Li,
Kwang-Ting Cheng
|
Energy-Efficient Accelerator Design for Deformable Convolution Networks
| null | null | null | null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deformable convolution networks (DCNs), proposed to address image
recognition with geometric or photometric variations, typically involve
deformable convolution that convolves on arbitrary locations of the input
features. The locations change with different inputs and induce considerable
dynamic and irregular memory accesses, which cannot be handled by classic
neural network accelerators (NNAs). Moreover, the bilinear interpolation (BLI)
operation required to obtain deformed features in DCNs also cannot be deployed
on existing NNAs directly. Although a general-purpose processor (GPP) seated
alongside classic NNAs can process the deformable convolution, the processing
on a GPP can be extremely slow due to the lack of parallel computing
capability. To address the problem, we develop a DCN accelerator on existing
NNAs to support both standard convolution and deformable convolution.
Specifically, for the dynamic and irregular accesses in DCNs, we divide both
the input and output features into tiles and build a tile dependency table
(TDT) to track the irregular tile dependencies at runtime. With the TDT, we
further develop an on-chip tile scheduler to handle the dynamic and irregular
accesses efficiently. In addition, we propose a novel mapping strategy to
enable parallel BLI processing on NNAs and apply layer fusion techniques for
more energy-efficient DCN processing. According to our experiments, the
proposed accelerator achieves orders of magnitude higher performance and energy
efficiency compared to typical computing architectures, including ARM, ARM+TPU,
and GPU, with a 6.6% chip area penalty relative to a classic NNA.
|
[
{
"created": "Tue, 6 Jul 2021 11:26:33 GMT",
"version": "v1"
}
] |
2021-07-07
|
[
[
"Xu",
"Dawen",
""
],
[
"Chu",
"Cheng",
""
],
[
"Liu",
"Cheng",
""
],
[
"Wang",
"Ying",
""
],
[
"Li",
"Huawei",
""
],
[
"Li",
"Xiaowei",
""
],
[
"Cheng",
"Kwang-Ting",
""
]
] |
Deformable convolution networks (DCNs), proposed to address image recognition with geometric or photometric variations, typically involve deformable convolution, which convolves on arbitrary locations of the input features. The locations change with different inputs and induce considerable dynamic and irregular memory accesses that cannot be handled by classic neural network accelerators (NNAs). Moreover, the bilinear interpolation (BLI) operation required to obtain deformed features in DCNs also cannot be deployed on existing NNAs directly. Although a general purpose processor (GPP) seated alongside classic NNAs can process the deformable convolution, processing on a GPP can be extremely slow due to the lack of parallel computing capability. To address this problem, we develop a DCN accelerator on top of existing NNAs that supports both standard and deformable convolution. Specifically, for the dynamic and irregular accesses in DCNs, we divide both the input and output features into tiles and build a tile dependency table (TDT) to track the irregular tile dependencies at runtime. With the TDT, we further develop an on-chip tile scheduler to handle the dynamic and irregular accesses efficiently. In addition, we propose a novel mapping strategy to enable parallel BLI processing on NNAs and apply layer fusion techniques for more energy-efficient DCN processing. According to our experiments, the proposed accelerator achieves orders of magnitude higher performance and energy efficiency than typical computing architectures, including ARM, ARM+TPU, and GPU, with only a 6.6\% chip area penalty relative to a classic NNA.
|
2401.13716
|
Vibeke Binz Vallevik Mrs
|
Vibeke Binz Vallevik, Aleksandar Babic, Serena Elizabeth Marshall,
Severin Elvatun, Helga Br{\o}gger, Sharmini Alagaratnam, Bj{\o}rn Edwin,
Narasimha Raghavan Veeraragavan, Anne Kjersti Befring, Jan Franz Nyg{\aa}rd
|
Can I trust my fake data -- A comprehensive quality assessment framework
for synthetic tabular data in healthcare
| null |
Int. J. Med. Inform.185 (2024)
|
10.1016/j.ijmedinf.2024.105413
| null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Ensuring safe adoption of AI tools in healthcare hinges on access to
sufficient data for training, testing and validation. In response to privacy
concerns and regulatory requirements, the use of synthetic data (SD) has been
suggested. Synthetic data is created by training a generator on real data to
produce a dataset with similar statistical properties. Competing metrics with
differing taxonomies for quality evaluation have been suggested, resulting in
a complex landscape. Optimising quality entails balancing considerations that
make the data fit for use, yet relevant dimensions are left out of existing
frameworks. We performed a comprehensive literature review on the use of
quality evaluation metrics on SD within the scope of tabular healthcare data
and SD made using deep generative methods. Based on this and our collective
team experience, we developed a conceptual framework for quality assurance.
Its applicability was benchmarked against a practical case from the Dutch
National Cancer Registry. We present a conceptual framework for quality
assurance of SD for AI applications in healthcare that aligns diverging
taxonomies, expands common quality dimensions to include Fairness and Carbon
footprint, and proposes the stages necessary to support real-life
applications. Building trust in synthetic data by increasing transparency and
reducing safety risk will accelerate the development and uptake of
trustworthy AI tools for the benefit of patients. Despite the growing
emphasis on algorithmic fairness and carbon footprint, these metrics were
scarce in the literature reviewed. The overwhelming focus was on statistical
similarity using distance metrics, while sequential logic detection was
scarce. A consensus-backed framework that includes all relevant quality
dimensions can provide assurance for safe and responsible real-life
applications of SD.
|
[
{
"created": "Wed, 24 Jan 2024 08:14:20 GMT",
"version": "v1"
}
] |
2024-04-19
|
[
[
"Vallevik",
"Vibeke Binz",
""
],
[
"Babic",
"Aleksandar",
""
],
[
"Marshall",
"Serena Elizabeth",
""
],
[
"Elvatun",
"Severin",
""
],
[
"Brøgger",
"Helga",
""
],
[
"Alagaratnam",
"Sharmini",
""
],
[
"Edwin",
"Bjørn",
""
],
[
"Veeraragavan",
"Narasimha Raghavan",
""
],
[
"Befring",
"Anne Kjersti",
""
],
[
"Nygård",
"Jan Franz",
""
]
] |
Ensuring safe adoption of AI tools in healthcare hinges on access to sufficient data for training, testing and validation. In response to privacy concerns and regulatory requirements, the use of synthetic data (SD) has been suggested. Synthetic data is created by training a generator on real data to produce a dataset with similar statistical properties. Competing metrics with differing taxonomies for quality evaluation have been suggested, resulting in a complex landscape. Optimising quality entails balancing considerations that make the data fit for use, yet relevant dimensions are left out of existing frameworks. We performed a comprehensive literature review on the use of quality evaluation metrics on SD within the scope of tabular healthcare data and SD made using deep generative methods. Based on this and our collective team experience, we developed a conceptual framework for quality assurance. Its applicability was benchmarked against a practical case from the Dutch National Cancer Registry. We present a conceptual framework for quality assurance of SD for AI applications in healthcare that aligns diverging taxonomies, expands common quality dimensions to include Fairness and Carbon footprint, and proposes the stages necessary to support real-life applications. Building trust in synthetic data by increasing transparency and reducing safety risk will accelerate the development and uptake of trustworthy AI tools for the benefit of patients. Despite the growing emphasis on algorithmic fairness and carbon footprint, these metrics were scarce in the literature reviewed. The overwhelming focus was on statistical similarity using distance metrics, while sequential logic detection was scarce. A consensus-backed framework that includes all relevant quality dimensions can provide assurance for safe and responsible real-life applications of SD.
|
1606.03248
|
ShenChen Ruan
|
Shenchen Ruan, Haixia Wang and Dongsheng Wang
|
MAC: a novel systematically multilevel cache replacement policy for PCM
memory
| null | null | null | null |
cs.AR cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The rapid development of multi-core systems and the increase of
data-intensive applications in recent years call for larger main memory.
Traditional DRAM memory can increase its capacity by reducing the feature
size of the storage cell. However, further scaling of DRAM now faces great
challenges, and the frequent refresh operations of DRAM incur substantial
energy consumption. As an emerging technology, Phase Change Memory (PCM) is a
promising candidate for main memory. It draws wide attention due to its low
power consumption, high density, and non-volatility, although it suffers from
finite endurance and relatively long write latency. To handle the write
problem, optimizing the cache replacement policy to protect dirty cache
blocks is an efficient approach. In this paper, we construct a systematic
multilevel structure and, based on it, propose a novel cache replacement
policy called MAC. MAC can effectively reduce write traffic to PCM memory
with low hardware overhead. We conduct simulation experiments on GEM5 to
evaluate the performance of MAC against other related works. The results show
that MAC performs best in reducing the number of writes (by 25.12% on
average) without increasing program execution time.
|
[
{
"created": "Fri, 10 Jun 2016 09:47:14 GMT",
"version": "v1"
}
] |
2016-06-13
|
[
[
"Ruan",
"Shenchen",
""
],
[
"Wang",
"Haixia",
""
],
[
"Wang",
"Dongsheng",
""
]
] |
The rapid development of multi-core systems and the increase of data-intensive applications in recent years call for larger main memory. Traditional DRAM memory can increase its capacity by reducing the feature size of the storage cell. However, further scaling of DRAM now faces great challenges, and the frequent refresh operations of DRAM incur substantial energy consumption. As an emerging technology, Phase Change Memory (PCM) is a promising candidate for main memory. It draws wide attention due to its low power consumption, high density, and non-volatility, although it suffers from finite endurance and relatively long write latency. To handle the write problem, optimizing the cache replacement policy to protect dirty cache blocks is an efficient approach. In this paper, we construct a systematic multilevel structure and, based on it, propose a novel cache replacement policy called MAC. MAC can effectively reduce write traffic to PCM memory with low hardware overhead. We conduct simulation experiments on GEM5 to evaluate the performance of MAC against other related works. The results show that MAC performs best in reducing the number of writes (by 25.12% on average) without increasing program execution time.
|
1702.05977
|
Jos\'e Mairton Barros da Silva J\'unior
|
Jose Mairton B. da Silva Jr., Gabor Fodor, Carlo Fischione
|
On the Spectral Efficiency and Fairness in Full-Duplex Cellular Networks
|
6 pages, 4 figures, accepted in IEEE ICC 2017. arXiv admin note: text
overlap with arXiv:1603.00671
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To increase the spectral efficiency of wireless networks without requiring
full-duplex capability of user devices, a potential solution is the recently
proposed three-node full-duplex mode. To realize this potential, networks
employing three-node full-duplex transmissions must deal with
self-interference and user-to-user interference, which can be managed by
frequency channel and power allocation techniques. Whereas previous works
investigated either spectrally efficient or fair mechanisms, this paper
investigates a scheme that balances these two metrics among users. The
balancing scheme is based on a new solution method for the multi-objective
optimization problem of maximizing the weighted sum of the per-user spectral
efficiency and the minimum spectral efficiency among users. The mixed-integer
non-linear nature of this problem is dealt with by Lagrangian duality. Based
on the proposed solution approach, a low-complexity centralized algorithm is
developed that relies on large-scale fading measurements, which can be
advantageously implemented at the base station. Numerical results indicate
that the proposed algorithm increases the spectral efficiency and fairness
among users without the need to weight the spectral efficiency. An important
conclusion is that managing user-to-user interference by resource assignment
and power control is crucial for ensuring spectrally efficient and fair
operation of full-duplex networks.
|
[
{
"created": "Mon, 20 Feb 2017 14:10:19 GMT",
"version": "v1"
}
] |
2017-02-25
|
[
[
"Silva",
"Jose Mairton B. da",
"Jr."
],
[
"Fodor",
"Gabor",
""
],
[
"Fischione",
"Carlo",
""
]
] |
To increase the spectral efficiency of wireless networks without requiring full-duplex capability of user devices, a potential solution is the recently proposed three-node full-duplex mode. To realize this potential, networks employing three-node full-duplex transmissions must deal with self-interference and user-to-user interference, which can be managed by frequency channel and power allocation techniques. Whereas previous works investigated either spectrally efficient or fair mechanisms, this paper investigates a scheme that balances these two metrics among users. The balancing scheme is based on a new solution method for the multi-objective optimization problem of maximizing the weighted sum of the per-user spectral efficiency and the minimum spectral efficiency among users. The mixed-integer non-linear nature of this problem is dealt with by Lagrangian duality. Based on the proposed solution approach, a low-complexity centralized algorithm is developed that relies on large-scale fading measurements, which can be advantageously implemented at the base station. Numerical results indicate that the proposed algorithm increases the spectral efficiency and fairness among users without the need to weight the spectral efficiency. An important conclusion is that managing user-to-user interference by resource assignment and power control is crucial for ensuring spectrally efficient and fair operation of full-duplex networks.
|
1304.6146
|
Marc Killpack
|
Advait Jain, Marc D. Killpack, Aaron Edsinger, Charles C. Kemp
|
Manipulation in Clutter with Whole-Arm Tactile Sensing
|
This is the first version of a paper that we submitted to the
International Journal of Robotics Research on December 31, 2011 and uploaded
to our website on January 16, 2012
|
The International Journal of Robotics Research April 2013 vol. 32
no. 4 pg. 458-482
|
10.1177/0278364912471865
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We begin this paper by presenting our approach to robot manipulation, which
emphasizes the benefits of making contact with the world across the entire
manipulator. We assume that low contact forces are benign, and focus on the
development of robots that can control their contact forces during
goal-directed motion. Inspired by biology, we assume that the robot has
low-stiffness actuation at its joints, and tactile sensing across the entire
surface of its manipulator. We then describe a novel controller that exploits
these assumptions. The controller only requires haptic sensing and does not
need an explicit model of the environment prior to contact. It also handles
multiple contacts across the surface of the manipulator. The controller uses
model predictive control (MPC) with a time horizon of length one, and a linear
quasi-static mechanical model that it constructs at each time step. We show
that this controller enables both real and simulated robots to reach goal
locations in high clutter with low contact forces. Our experiments include
tests using a real robot with a novel tactile sensor array on its forearm
reaching into simulated foliage and a cinder block. In our experiments, robots
made contact across their entire arms while pushing aside movable objects,
deforming compliant objects, and perceiving the world.
|
[
{
"created": "Tue, 23 Apr 2013 01:40:46 GMT",
"version": "v1"
}
] |
2013-04-24
|
[
[
"Jain",
"Advait",
""
],
[
"Killpack",
"Marc D.",
""
],
[
"Edsinger",
"Aaron",
""
],
[
"Kemp",
"Charles C.",
""
]
] |
We begin this paper by presenting our approach to robot manipulation, which emphasizes the benefits of making contact with the world across the entire manipulator. We assume that low contact forces are benign, and focus on the development of robots that can control their contact forces during goal-directed motion. Inspired by biology, we assume that the robot has low-stiffness actuation at its joints, and tactile sensing across the entire surface of its manipulator. We then describe a novel controller that exploits these assumptions. The controller only requires haptic sensing and does not need an explicit model of the environment prior to contact. It also handles multiple contacts across the surface of the manipulator. The controller uses model predictive control (MPC) with a time horizon of length one, and a linear quasi-static mechanical model that it constructs at each time step. We show that this controller enables both real and simulated robots to reach goal locations in high clutter with low contact forces. Our experiments include tests using a real robot with a novel tactile sensor array on its forearm reaching into simulated foliage and a cinder block. In our experiments, robots made contact across their entire arms while pushing aside movable objects, deforming compliant objects, and perceiving the world.
|
1805.08498
|
Michael Figurnov
|
Michael Figurnov, Shakir Mohamed, Andriy Mnih
|
Implicit Reparameterization Gradients
|
NeurIPS 2018
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
By providing a simple and efficient way of computing low-variance gradients
of continuous random variables, the reparameterization trick has become the
technique of choice for training a variety of latent variable models. However,
it is not applicable to a number of important continuous distributions. We
introduce an alternative approach to computing reparameterization gradients
based on implicit differentiation and demonstrate its broader applicability by
applying it to Gamma, Beta, Dirichlet, and von Mises distributions, which
cannot be used with the classic reparameterization trick. Our experiments show
that the proposed approach is faster and more accurate than the existing
gradient estimators for these distributions.
|
[
{
"created": "Tue, 22 May 2018 11:00:19 GMT",
"version": "v1"
},
{
"created": "Fri, 1 Jun 2018 12:38:04 GMT",
"version": "v2"
},
{
"created": "Thu, 1 Nov 2018 17:49:12 GMT",
"version": "v3"
},
{
"created": "Wed, 30 Jan 2019 15:24:42 GMT",
"version": "v4"
}
] |
2019-01-31
|
[
[
"Figurnov",
"Michael",
""
],
[
"Mohamed",
"Shakir",
""
],
[
"Mnih",
"Andriy",
""
]
] |
By providing a simple and efficient way of computing low-variance gradients of continuous random variables, the reparameterization trick has become the technique of choice for training a variety of latent variable models. However, it is not applicable to a number of important continuous distributions. We introduce an alternative approach to computing reparameterization gradients based on implicit differentiation and demonstrate its broader applicability by applying it to Gamma, Beta, Dirichlet, and von Mises distributions, which cannot be used with the classic reparameterization trick. Our experiments show that the proposed approach is faster and more accurate than the existing gradient estimators for these distributions.
|
2006.05158
|
Liangzu Peng
|
Liangzu Peng and Manolis C. Tsakiris
|
Homomorphic Sensing of Subspace Arrangements
|
18 pages
|
Applied and Computational Harmonic Analysis, 55, 466-485 (2021)
|
10.1016/j.acha.2021.06.008
| null |
cs.LG math.AG stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
Homomorphic sensing is a recent algebraic-geometric framework that studies
the unique recovery of points in a linear subspace from their images under a
given collection of linear maps. It has been successful in interpreting such a
recovery in the case of permutations composed by coordinate projections, an
important instance in applications known as unlabeled sensing, which models
data that are out of order and have missing values. In this paper, we provide
tighter and simpler conditions that guarantee the unique recovery for the
single-subspace case, extend the result to the case of a subspace arrangement,
and show that the unique recovery in a single subspace is locally stable under
noise. We specialize our results to several examples of homomorphic sensing
such as real phase retrieval and unlabeled sensing. In so doing, in a unified
way, we obtain conditions that guarantee the unique recovery for those
examples, typically known via diverse techniques in the literature, as well as
novel conditions for sparse and unsigned versions of unlabeled sensing.
Similarly, our noise result also implies that the unique recovery in unlabeled
sensing is locally stable.
|
[
{
"created": "Tue, 9 Jun 2020 09:52:15 GMT",
"version": "v1"
},
{
"created": "Wed, 30 Dec 2020 03:27:36 GMT",
"version": "v2"
},
{
"created": "Tue, 1 Jun 2021 06:36:14 GMT",
"version": "v3"
},
{
"created": "Mon, 19 Sep 2022 14:13:47 GMT",
"version": "v4"
}
] |
2022-09-20
|
[
[
"Peng",
"Liangzu",
""
],
[
"Tsakiris",
"Manolis C.",
""
]
] |
Homomorphic sensing is a recent algebraic-geometric framework that studies the unique recovery of points in a linear subspace from their images under a given collection of linear maps. It has been successful in interpreting such a recovery in the case of permutations composed by coordinate projections, an important instance in applications known as unlabeled sensing, which models data that are out of order and have missing values. In this paper, we provide tighter and simpler conditions that guarantee the unique recovery for the single-subspace case, extend the result to the case of a subspace arrangement, and show that the unique recovery in a single subspace is locally stable under noise. We specialize our results to several examples of homomorphic sensing such as real phase retrieval and unlabeled sensing. In so doing, in a unified way, we obtain conditions that guarantee the unique recovery for those examples, typically known via diverse techniques in the literature, as well as novel conditions for sparse and unsigned versions of unlabeled sensing. Similarly, our noise result also implies that the unique recovery in unlabeled sensing is locally stable.
|
2211.05229
|
Rajdeep Adak
|
Rajdeep Adak, Abhishek Kumbhar, Rajas Pathare, Sagar Gowda
|
Automatic Number Plate Recognition (ANPR) with YOLOv3-CNN
|
29 pages, 4 figures, 2 tables
| null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a YOLOv3-CNN pipeline for detecting vehicles, segregating number
plates, and locally storing the final recognized characters. Vehicle
identification is performed under various image correction schemes to
determine the effect of environmental factors (angle of perception,
luminosity, motion blurring, multi-line custom fonts, etc.). A YOLOv3 object
detection model was trained to identify vehicles from a dataset of traffic
images. A second YOLOv3 layer was trained to identify number plates from
vehicle images. Based upon the correction schemes, individual characters were
segregated and verified against real-time data to calculate the accuracy of
this approach. While characters under direct view were recognized accurately,
some number plates affected by environmental factors had reduced accuracy. We
summarize the results under various environmental factors against real-time
data and report the overall accuracy of the pipeline model.
|
[
{
"created": "Mon, 7 Nov 2022 12:59:01 GMT",
"version": "v1"
}
] |
2022-11-11
|
[
[
"Adak",
"Rajdeep",
""
],
[
"Kumbhar",
"Abhishek",
""
],
[
"Pathare",
"Rajas",
""
],
[
"Gowda",
"Sagar",
""
]
] |
We present a YOLOv3-CNN pipeline for detecting vehicles, segregating number plates, and locally storing the final recognized characters. Vehicle identification is performed under various image correction schemes to determine the effect of environmental factors (angle of perception, luminosity, motion blurring, multi-line custom fonts, etc.). A YOLOv3 object detection model was trained to identify vehicles from a dataset of traffic images. A second YOLOv3 layer was trained to identify number plates from vehicle images. Based upon the correction schemes, individual characters were segregated and verified against real-time data to calculate the accuracy of this approach. While characters under direct view were recognized accurately, some number plates affected by environmental factors had reduced accuracy. We summarize the results under various environmental factors against real-time data and report the overall accuracy of the pipeline model.
|
1412.0879
|
Sean Gallagher
|
Sean Gallagher, Wlodek Zadrozny, Walid Shalaby, Adarsh Avadhani
|
Watsonsim: Overview of a Question Answering Engine
| null | null | null | null |
cs.CL cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The objective of this project is to design and run a system similar to
Watson for answering Jeopardy questions. Over the course of a semester, we
developed an open-source question answering system using the Indri, Lucene,
Bing, and Google search engines, Apache UIMA, OpenNLP and CoreNLP, and Weka,
among additional modules. By the end of the semester, we achieved 18%
accuracy on Jeopardy questions, and work has not stopped since then.
|
[
{
"created": "Tue, 2 Dec 2014 12:15:18 GMT",
"version": "v1"
}
] |
2014-12-03
|
[
[
"Gallagher",
"Sean",
""
],
[
"Zadrozny",
"Wlodek",
""
],
[
"Shalaby",
"Walid",
""
],
[
"Avadhani",
"Adarsh",
""
]
] |
The objective of this project is to design and run a system similar to Watson for answering Jeopardy questions. Over the course of a semester, we developed an open-source question answering system using the Indri, Lucene, Bing, and Google search engines, Apache UIMA, OpenNLP and CoreNLP, and Weka, among additional modules. By the end of the semester, we achieved 18% accuracy on Jeopardy questions, and work has not stopped since then.
|
2211.15382
|
Tim Whittaker
|
Tim Whittaker, Romuald A. Janik, Yaron Oz
|
Neural Network Complexity of Chaos and Turbulence
| null |
Eur. Phys. J. E 46, 57 (2023)
|
10.1140/epje/s10189-023-00321-7
| null |
cs.LG hep-th nlin.CD physics.flu-dyn
|
http://creativecommons.org/licenses/by/4.0/
|
Chaos and turbulence are complex physical phenomena, yet a precise definition
of the complexity measure that quantifies them is still lacking. In this work
we consider the relative complexity of chaos and turbulence from the
perspective of deep neural networks. We analyze a set of classification
problems, where the network has to distinguish images of fluid profiles in the
turbulent regime from other classes of images such as fluid profiles in the
chaotic regime, various constructions of noise and real world images. We
analyze incompressible as well as weakly compressible fluid flows. We quantify
the complexity of the computation performed by the network via the intrinsic
dimensionality of the internal feature representations, and calculate the
effective number of independent features which the network uses in order to
distinguish between classes. In addition to providing a numerical estimate of
the complexity of the computation, the measure also characterizes the neural
network processing at intermediate and final stages. We construct adversarial
examples and use them to identify the two point correlation spectra for the
chaotic and turbulent vorticity as the feature used by the network for
classification.
|
[
{
"created": "Thu, 24 Nov 2022 13:21:36 GMT",
"version": "v1"
},
{
"created": "Thu, 20 Jul 2023 12:18:49 GMT",
"version": "v2"
}
] |
2023-07-21
|
[
[
"Whittaker",
"Tim",
""
],
[
"Janik",
"Romuald A.",
""
],
[
"Oz",
"Yaron",
""
]
] |
Chaos and turbulence are complex physical phenomena, yet a precise definition of the complexity measure that quantifies them is still lacking. In this work we consider the relative complexity of chaos and turbulence from the perspective of deep neural networks. We analyze a set of classification problems, where the network has to distinguish images of fluid profiles in the turbulent regime from other classes of images such as fluid profiles in the chaotic regime, various constructions of noise and real world images. We analyze incompressible as well as weakly compressible fluid flows. We quantify the complexity of the computation performed by the network via the intrinsic dimensionality of the internal feature representations, and calculate the effective number of independent features which the network uses in order to distinguish between classes. In addition to providing a numerical estimate of the complexity of the computation, the measure also characterizes the neural network processing at intermediate and final stages. We construct adversarial examples and use them to identify the two point correlation spectra for the chaotic and turbulent vorticity as the feature used by the network for classification.
|
2111.11720
|
Xingkai Zheng
|
Xingkai Zheng, Xirui Li, Ke Xu, Xinghao Jiang, Tanfeng Sun
|
Gait Identification under Surveillance Environment based on Human
Skeleton
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
As an emerging biometric identification technology, vision-based gait
identification is an important research topic in biometrics. Most existing
gait identification methods extract features from gait videos and identify a
probe sample by querying the gallery. However, video data contains redundant
information and can be easily influenced by bagging (BG) and clothing (CL).
Since human body skeletons convey essential information about human gaits, a
skeleton-based gait identification network is proposed in our project. First,
we extract skeleton sequences from the video and map them into a gait graph.
Then a feature extraction network based on the Spatio-Temporal Graph
Convolutional Network (ST-GCN) is constructed to learn gait representations.
Finally, the probe sample is identified by matching it with the most similar
entry in the gallery. We tested our method on the CASIA-B dataset. The
results show that our approach is highly adaptive and achieves advanced
results under the BG and CL conditions, as well as on average.
|
[
{
"created": "Tue, 23 Nov 2021 08:30:26 GMT",
"version": "v1"
},
{
"created": "Wed, 24 Nov 2021 14:43:51 GMT",
"version": "v2"
}
] |
2021-11-25
|
[
[
"Zheng",
"Xingkai",
""
],
[
"Li",
"Xirui",
""
],
[
"Xu",
"Ke",
""
],
[
"Jiang",
"Xinghao",
""
],
[
"Sun",
"Tanfeng",
""
]
] |
As an emerging biometric identification technology, vision-based gait identification is an important research topic in biometrics. Most existing gait identification methods extract features from gait videos and identify a probe sample by querying the gallery. However, video data contains redundant information and can be easily influenced by bagging (BG) and clothing (CL). Since human body skeletons convey essential information about human gaits, a skeleton-based gait identification network is proposed in our project. First, we extract skeleton sequences from the video and map them into a gait graph. Then a feature extraction network based on the Spatio-Temporal Graph Convolutional Network (ST-GCN) is constructed to learn gait representations. Finally, the probe sample is identified by matching it with the most similar entry in the gallery. We tested our method on the CASIA-B dataset. The results show that our approach is highly adaptive and achieves advanced results under the BG and CL conditions, as well as on average.
|
2202.12093
|
Qinghua Zhao
|
Qinghua Zhao, Shuai Ma, Shuo Ren
|
KESA: A Knowledge Enhanced Approach For Sentiment Analysis
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Though some recent works focus on injecting sentiment knowledge into
pre-trained language models, they usually design mask-and-reconstruction
tasks in the post-training phase. In this paper, we aim to benefit from
sentiment knowledge in a lighter way. To achieve this goal, we study
sentence-level sentiment analysis and, correspondingly, propose two
sentiment-aware auxiliary tasks named sentiment word cloze and conditional
sentiment prediction. The first task learns to select the correct sentiment
words within the input, given the overall sentiment polarity as prior
knowledge. Conversely, the second task predicts the overall sentiment
polarity given the sentiment polarity of a word as prior knowledge. In
addition, two label combination methods are investigated to unify multiple
types of labels in each task. We argue that more information can encourage
models to learn deeper semantic representations. We implement this in a
straightforward way to verify the hypothesis. The experimental results
demonstrate that our approach consistently outperforms pre-trained models and
is additive to existing knowledge-enhanced post-trained models. The code and
data are released at https://github.com/lshowway/KESA.
|
[
{
"created": "Thu, 24 Feb 2022 13:21:27 GMT",
"version": "v1"
}
] |
2022-02-25
|
[
[
"Zhao",
"Qinghua",
""
],
[
"Ma",
"Shuai",
""
],
[
"Ren",
"Shuo",
""
]
] |
Though some recent works focus on injecting sentiment knowledge into pre-trained language models, they usually design mask and reconstruction tasks in the post-training phase. In this paper, we aim to benefit from sentiment knowledge in a lighter way. To achieve this goal, we study sentence-level sentiment analysis and, correspondingly, propose two sentiment-aware auxiliary tasks named sentiment word cloze and conditional sentiment prediction. The first task learns to select the correct sentiment words within the input, given the overall sentiment polarity as prior knowledge. Conversely, the second task predicts the overall sentiment polarity given the sentiment polarity of the word as prior knowledge. In addition, two kinds of label combination methods are investigated to unify multiple types of labels in each task. We argue that richer information can help models learn deeper semantic representations, and we implement our approach in a straightforward way to verify this hypothesis. The experimental results demonstrate that our approach consistently outperforms pre-trained models and is additive to existing knowledge-enhanced post-trained models. The code and data are released at https://github.com/lshowway/KESA.
|
2206.01812
|
Andrew Li
|
Andrew C. Li, Pashootan Vaezipoor, Rodrigo Toro Icarte, Sheila A.
McIlraith
|
Challenges to Solving Combinatorially Hard Long-Horizon Deep RL Tasks
| null | null | null | null |
cs.LG cs.AI cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Deep reinforcement learning has shown promise in discrete domains requiring
complex reasoning, including games such as Chess, Go, and Hanabi. However, this
type of reasoning is less often observed in long-horizon, continuous domains
with high-dimensional observations, where instead RL research has predominantly
focused on problems with simple high-level structure (e.g. opening a drawer or
moving a robot as fast as possible). Inspired by combinatorially hard
optimization problems, we propose a set of robotics tasks which admit many
distinct solutions at the high level, but require reasoning about states and
rewards thousands of steps into the future for the best performance.
Critically, while RL has traditionally suffered on complex, long-horizon tasks
due to sparse rewards, our tasks are carefully designed to be solvable without
specialized exploration. Nevertheless, our investigation finds that standard RL
methods often neglect long-term effects due to discounting, while
general-purpose hierarchical RL approaches struggle unless additional abstract
domain knowledge can be exploited.
|
[
{
"created": "Fri, 3 Jun 2022 20:38:27 GMT",
"version": "v1"
}
] |
2022-06-07
|
[
[
"Li",
"Andrew C.",
""
],
[
"Vaezipoor",
"Pashootan",
""
],
[
"Icarte",
"Rodrigo Toro",
""
],
[
"McIlraith",
"Sheila A.",
""
]
] |
Deep reinforcement learning has shown promise in discrete domains requiring complex reasoning, including games such as Chess, Go, and Hanabi. However, this type of reasoning is less often observed in long-horizon, continuous domains with high-dimensional observations, where instead RL research has predominantly focused on problems with simple high-level structure (e.g. opening a drawer or moving a robot as fast as possible). Inspired by combinatorially hard optimization problems, we propose a set of robotics tasks which admit many distinct solutions at the high level, but require reasoning about states and rewards thousands of steps into the future for the best performance. Critically, while RL has traditionally suffered on complex, long-horizon tasks due to sparse rewards, our tasks are carefully designed to be solvable without specialized exploration. Nevertheless, our investigation finds that standard RL methods often neglect long-term effects due to discounting, while general-purpose hierarchical RL approaches struggle unless additional abstract domain knowledge can be exploited.
|
1604.00794
|
Pramod Bhatotia
|
Pramod Bhatotia
|
Asymptotic Analysis of Self-Adjusting Contraction Trees
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present asymptotic analysis of self-adjusting contraction
trees for incremental sliding window analytics.
|
[
{
"created": "Mon, 4 Apr 2016 09:55:06 GMT",
"version": "v1"
}
] |
2016-04-05
|
[
[
"Bhatotia",
"Pramod",
""
]
] |
In this paper, we present asymptotic analysis of self-adjusting contraction trees for incremental sliding window analytics.
|
2306.12686
|
Yu Zhang
|
Yu Zhang, Hao Zeng, Bowen Ma, Wei Zhang, Zhimeng Zhang, Yu Ding,
Tangjie Lv, Changjie Fan
|
FlowFace++: Explicit Semantic Flow-supervised End-to-End Face Swapping
|
arXiv admin note: text overlap with arXiv:2212.02797
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work proposes a novel face-swapping framework FlowFace++, utilizing
explicit semantic flow supervision and end-to-end architecture to facilitate
shape-aware face-swapping. Specifically, our work pretrains a facial shape
discriminator to supervise the face swapping network. The discriminator is
shape-aware and relies on a semantic flow-guided operation to explicitly
calculate the shape discrepancies between the target and source faces, thus
optimizing the face swapping network to generate highly realistic results. The
face swapping network is a stack of a pre-trained face-masked autoencoder
(MAE), a cross-attention fusion module, and a convolutional decoder. The MAE
provides a fine-grained facial image representation space, which is unified for
the target and source faces and thus facilitates final realistic results. The
cross-attention fusion module carries out the source-to-target face swapping in
a fine-grained latent space while preserving other attributes of the target
image (e.g. expression, head pose, hair, background, illumination, etc).
Lastly, the convolutional decoder further synthesizes the swapping results
according to the face-swapping latent embedding from the cross-attention fusion
module. Extensive quantitative and qualitative experiments on in-the-wild faces
demonstrate that our FlowFace++ outperforms the state-of-the-art significantly,
particularly when the source face is affected by uneven lighting or angle
offset.
|
[
{
"created": "Thu, 22 Jun 2023 06:18:29 GMT",
"version": "v1"
},
{
"created": "Mon, 26 Jun 2023 05:11:17 GMT",
"version": "v2"
}
] |
2023-06-27
|
[
[
"Zhang",
"Yu",
""
],
[
"Zeng",
"Hao",
""
],
[
"Ma",
"Bowen",
""
],
[
"Zhang",
"Wei",
""
],
[
"Zhang",
"Zhimeng",
""
],
[
"Ding",
"Yu",
""
],
[
"Lv",
"Tangjie",
""
],
[
"Fan",
"Changjie",
""
]
] |
This work proposes a novel face-swapping framework FlowFace++, utilizing explicit semantic flow supervision and end-to-end architecture to facilitate shape-aware face-swapping. Specifically, our work pretrains a facial shape discriminator to supervise the face swapping network. The discriminator is shape-aware and relies on a semantic flow-guided operation to explicitly calculate the shape discrepancies between the target and source faces, thus optimizing the face swapping network to generate highly realistic results. The face swapping network is a stack of a pre-trained face-masked autoencoder (MAE), a cross-attention fusion module, and a convolutional decoder. The MAE provides a fine-grained facial image representation space, which is unified for the target and source faces and thus facilitates final realistic results. The cross-attention fusion module carries out the source-to-target face swapping in a fine-grained latent space while preserving other attributes of the target image (e.g. expression, head pose, hair, background, illumination, etc). Lastly, the convolutional decoder further synthesizes the swapping results according to the face-swapping latent embedding from the cross-attention fusion module. Extensive quantitative and qualitative experiments on in-the-wild faces demonstrate that our FlowFace++ outperforms the state-of-the-art significantly, particularly when the source face is affected by uneven lighting or angle offset.
|
2310.16656
|
Dani Valevski
|
Eyal Segalis, Dani Valevski, Danny Lumen, Yossi Matias, Yaniv
Leviathan
|
A Picture is Worth a Thousand Words: Principled Recaptioning Improves
Image Generation
| null | null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Text-to-image diffusion models have achieved a remarkable leap in capabilities
over the last few years, enabling high-quality and diverse synthesis of images
from a textual prompt. However, even the most advanced models often struggle to
precisely follow all of the directions in their prompts. The vast majority of
these models are trained on datasets consisting of (image, caption) pairs where
the images often come from the web, and the captions are their HTML alternate
text. A notable example is the LAION dataset, used by Stable Diffusion and
other models. In this work we observe that these captions are often of low
quality, and argue that this significantly affects the model's capability to
understand nuanced semantics in the textual prompts. We show that by relabeling
the corpus with a specialized automatic captioning model and training a
text-to-image model on the recaptioned dataset, the model benefits
substantially across the board. First, in overall image quality: e.g. FID 14.84
vs. the baseline of 17.87, and 64.3% improvement in faithful image generation
according to human evaluation. Second, in semantic alignment, e.g. semantic
object accuracy 84.34 vs. 78.90, counting alignment errors 1.32 vs. 1.44 and
positional alignment 62.42 vs. 57.60. We analyze various ways to relabel the
corpus and provide evidence that this technique, which we call RECAP, both
reduces the train-inference discrepancy and provides the model with more
information per example, increasing sample efficiency and allowing the model to
better understand the relations between captions and images.
|
[
{
"created": "Wed, 25 Oct 2023 14:10:08 GMT",
"version": "v1"
}
] |
2023-10-26
|
[
[
"Segalis",
"Eyal",
""
],
[
"Valevski",
"Dani",
""
],
[
"Lumen",
"Danny",
""
],
[
"Matias",
"Yossi",
""
],
[
"Leviathan",
"Yaniv",
""
]
] |
Text-to-image diffusion models have achieved a remarkable leap in capabilities over the last few years, enabling high-quality and diverse synthesis of images from a textual prompt. However, even the most advanced models often struggle to precisely follow all of the directions in their prompts. The vast majority of these models are trained on datasets consisting of (image, caption) pairs where the images often come from the web, and the captions are their HTML alternate text. A notable example is the LAION dataset, used by Stable Diffusion and other models. In this work we observe that these captions are often of low quality, and argue that this significantly affects the model's capability to understand nuanced semantics in the textual prompts. We show that by relabeling the corpus with a specialized automatic captioning model and training a text-to-image model on the recaptioned dataset, the model benefits substantially across the board. First, in overall image quality: e.g. FID 14.84 vs. the baseline of 17.87, and 64.3% improvement in faithful image generation according to human evaluation. Second, in semantic alignment, e.g. semantic object accuracy 84.34 vs. 78.90, counting alignment errors 1.32 vs. 1.44 and positional alignment 62.42 vs. 57.60. We analyze various ways to relabel the corpus and provide evidence that this technique, which we call RECAP, both reduces the train-inference discrepancy and provides the model with more information per example, increasing sample efficiency and allowing the model to better understand the relations between captions and images.
|
1111.4898
|
Vijesh M
|
Vijesh M., Sudarshan Iyengar, Vijay Mahantesh, Amitash Ramesh, Veni
Madhavan
|
A Navigation Algorithm Inspired by Human Navigation
|
Human Navigation, Path Concatenation, Hotspots, Center Strategic
Paths, Approximation Algorithm
| null | null | null |
cs.SI physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human navigation has been a topic of interest in spatial cognition for the
past few decades. It has been experimentally observed that humans accomplish
the task of way-finding to a destination in an unknown environment by recognizing
landmarks. Investigations using network analytic techniques reveal that humans,
when asked to way-find to their destination, learn the top-ranked nodes of a
network. In this paper we report a study simulating the strategy used by humans
to recognize the centers of a network. We show that the paths obtained from our
simulation have the same properties as the paths obtained in human-based
experiments. The simulation thus performed leads to a novel way of path-finding
in a network. We discuss the performance of our method and compare it with
existing techniques to find a path between a pair of nodes in a network.
|
[
{
"created": "Mon, 21 Nov 2011 15:37:15 GMT",
"version": "v1"
}
] |
2011-11-22
|
[
[
"M.",
"Vijesh",
""
],
[
"Iyengar",
"Sudarshan",
""
],
[
"Mahantesh",
"Vijay",
""
],
[
"Ramesh",
"Amitash",
""
],
[
"Madhavan",
"Veni",
""
]
] |
Human navigation has been a topic of interest in spatial cognition for the past few decades. It has been experimentally observed that humans accomplish the task of way-finding to a destination in an unknown environment by recognizing landmarks. Investigations using network analytic techniques reveal that humans, when asked to way-find to their destination, learn the top-ranked nodes of a network. In this paper we report a study simulating the strategy used by humans to recognize the centers of a network. We show that the paths obtained from our simulation have the same properties as the paths obtained in human-based experiments. The simulation thus performed leads to a novel way of path-finding in a network. We discuss the performance of our method and compare it with existing techniques to find a path between a pair of nodes in a network.
|
2205.08685
|
Jinwei Xing
|
Jinwei Xing, Takashi Nagata, Xinyun Zou, Emre Neftci, Jeffrey L.
Krichmar
|
Policy Distillation with Selective Input Gradient Regularization for
Efficient Interpretability
| null | null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Although deep Reinforcement Learning (RL) has proven successful in a wide
range of tasks, one challenge it faces is interpretability when applied to
real-world problems. Saliency maps are frequently used to provide
interpretability for deep neural networks. However, in the RL domain, existing
saliency map approaches are either computationally expensive and thus cannot
satisfy the real-time requirement of real-world scenarios or cannot produce
interpretable saliency maps for RL policies. In this work, we propose an
approach of Distillation with selective Input Gradient Regularization (DIGR)
which uses policy distillation and input gradient regularization to produce new
policies that achieve both high interpretability and computation efficiency in
generating saliency maps. Our approach is also found to improve the robustness
of RL policies to multiple adversarial attacks. We conduct experiments on three
tasks, MiniGrid (Fetch Object), Atari (Breakout) and CARLA Autonomous Driving,
to demonstrate the importance and effectiveness of our approach.
|
[
{
"created": "Wed, 18 May 2022 01:47:16 GMT",
"version": "v1"
}
] |
2022-05-19
|
[
[
"Xing",
"Jinwei",
""
],
[
"Nagata",
"Takashi",
""
],
[
"Zou",
"Xinyun",
""
],
[
"Neftci",
"Emre",
""
],
[
"Krichmar",
"Jeffrey L.",
""
]
] |
Although deep Reinforcement Learning (RL) has proven successful in a wide range of tasks, one challenge it faces is interpretability when applied to real-world problems. Saliency maps are frequently used to provide interpretability for deep neural networks. However, in the RL domain, existing saliency map approaches are either computationally expensive and thus cannot satisfy the real-time requirement of real-world scenarios or cannot produce interpretable saliency maps for RL policies. In this work, we propose an approach of Distillation with selective Input Gradient Regularization (DIGR) which uses policy distillation and input gradient regularization to produce new policies that achieve both high interpretability and computation efficiency in generating saliency maps. Our approach is also found to improve the robustness of RL policies to multiple adversarial attacks. We conduct experiments on three tasks, MiniGrid (Fetch Object), Atari (Breakout) and CARLA Autonomous Driving, to demonstrate the importance and effectiveness of our approach.
|
1908.00418
|
Yang Xin
|
Hui Li, Jiangxing Wu, Xin Yang, Han Wang, Julong Lan, Ke Xu, Yunyong
Zhang, Jinwu Wei, Shisheng Chen, Wei Liang, Fusheng Zhu, Yiqin Lu, Wai Ho
Mow, Yeung Wai-Ho, Zefeng Zheng, Peng Yi, Xinsheng Ji, Qinrang Liu, Wei Li,
Kaiyan Tian, Jiang Zhu, Jiaxing Song, Yijun Liu, Junfeng Ma, Jiawei Hu, Rui
Xu, Jiansen Huang, Guohua Wei, Jiuhua Qi, Ting Huang, Kaixuan Xing
|
MIN: Co-Governing Multi-Identifier Network Architecture and its
Prototype on Operator's Network
|
13 pages
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The IP protocol is the core of the TCP/IP network layer. However, since IP
addresses and their Domain Names are allocated and managed by a single agency,
there are risks of centralization. The semantic overload of the IP address also
reduces its scalability and mobility, which further undermines security.
  This paper proposes a co-governing Multi-Identifier Network (MIN)
architecture that constructs a network layer with the parallel coexistence of
multiple identifiers, including identity, content, geographic information, and
IP address. On the management plane, we develop an efficient management system
using a consortium blockchain with voting consensus, so the network can be
simultaneously managed and supported by hundreds or thousands of nodes with
high throughput. On the data plane, we propose an algorithm merging a hash
table and a prefix tree (HTP) for the FIB, which avoids false-negative errors
and can inter-translate different identifiers with tens of billions of entries.
Further, we propose a scheme to transport IP packets using CCN as a tunnel to
support progressive deployment. We deployed the prototype of MIN on the
largest operators' networks in Mainland China, Hong Kong, and Macao, and
demonstrated that the network can register identifiers under the co-governing
consensus algorithm and support VoD services well.
|
[
{
"created": "Thu, 1 Aug 2019 14:12:27 GMT",
"version": "v1"
}
] |
2019-08-02
|
[
[
"Li",
"Hui",
""
],
[
"Wu",
"Jiangxing",
""
],
[
"Yang",
"Xin",
""
],
[
"Wang",
"Han",
""
],
[
"Lan",
"Julong",
""
],
[
"Xu",
"Ke",
""
],
[
"Zhang",
"Yunyong",
""
],
[
"Wei",
"Jinwu",
""
],
[
"Chen",
"Shisheng",
""
],
[
"Liang",
"Wei",
""
],
[
"Zhu",
"Fusheng",
""
],
[
"Lu",
"Yiqin",
""
],
[
"Mow",
"Wai Ho",
""
],
[
"Wai-Ho",
"Yeung",
""
],
[
"Zheng",
"Zefeng",
""
],
[
"Yi",
"Peng",
""
],
[
"Ji",
"Xinsheng",
""
],
[
"Liu",
"Qinrang",
""
],
[
"Li",
"Wei",
""
],
[
"Tian",
"Kaiyan",
""
],
[
"Zhu",
"Jiang",
""
],
[
"Song",
"Jiaxing",
""
],
[
"Liu",
"Yijun",
""
],
[
"Ma",
"Junfeng",
""
],
[
"Hu",
"Jiawei",
""
],
[
"Xu",
"Rui",
""
],
[
"Huang",
"Jiansen",
""
],
[
"Wei",
"Guohua",
""
],
[
"Qi",
"Jiuhua",
""
],
[
"Huang",
"Ting",
""
],
[
"Xing",
"Kaixuan",
""
]
] |
The IP protocol is the core of the TCP/IP network layer. However, since IP addresses and their Domain Names are allocated and managed by a single agency, there are risks of centralization. The semantic overload of the IP address also reduces its scalability and mobility, which further undermines security. This paper proposes a co-governing Multi-Identifier Network (MIN) architecture that constructs a network layer with the parallel coexistence of multiple identifiers, including identity, content, geographic information, and IP address. On the management plane, we develop an efficient management system using a consortium blockchain with voting consensus, so the network can be simultaneously managed and supported by hundreds or thousands of nodes with high throughput. On the data plane, we propose an algorithm merging a hash table and a prefix tree (HTP) for the FIB, which avoids false-negative errors and can inter-translate different identifiers with tens of billions of entries. Further, we propose a scheme to transport IP packets using CCN as a tunnel to support progressive deployment. We deployed the prototype of MIN on the largest operators' networks in Mainland China, Hong Kong, and Macao, and demonstrated that the network can register identifiers under the co-governing consensus algorithm and support VoD services well.
|
2003.00946
|
Piotr Kicki
|
Piotr Kicki, Tomasz Gawron, Piotr Skrzypczy\'nski
|
A Self-Supervised Learning Approach to Rapid Path Planning for Car-Like
Vehicles Maneuvering in Urban Environment
| null | null | null | null |
cs.RO cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An efficient path planner for autonomous car-like vehicles should handle the
strong kinematic constraints, particularly in confined spaces commonly
encountered while maneuvering in city traffic, and should enable rapid
planning, as the city traffic scenarios are highly dynamic. State-of-the-art
planning algorithms handle such difficult cases at high computational cost,
often yielding non-deterministic results. However, feasible local paths can be
quickly generated leveraging the past planning experience gained in the same or
similar environment. While learning through supervised training is problematic
for real traffic scenarios, we introduce in this paper a novel neural
network-based method for path planning, which employs a gradient-based
self-supervised learning algorithm to predict feasible paths. This approach
strongly exploits the experience gained in the past and rapidly yields feasible
maneuver plans for car-like vehicles with limited steering-angle. The
effectiveness of such an approach has been confirmed by computational
experiments.
|
[
{
"created": "Mon, 2 Mar 2020 14:48:29 GMT",
"version": "v1"
}
] |
2020-03-03
|
[
[
"Kicki",
"Piotr",
""
],
[
"Gawron",
"Tomasz",
""
],
[
"Skrzypczyński",
"Piotr",
""
]
] |
An efficient path planner for autonomous car-like vehicles should handle the strong kinematic constraints, particularly in confined spaces commonly encountered while maneuvering in city traffic, and should enable rapid planning, as the city traffic scenarios are highly dynamic. State-of-the-art planning algorithms handle such difficult cases at high computational cost, often yielding non-deterministic results. However, feasible local paths can be quickly generated leveraging the past planning experience gained in the same or similar environment. While learning through supervised training is problematic for real traffic scenarios, we introduce in this paper a novel neural network-based method for path planning, which employs a gradient-based self-supervised learning algorithm to predict feasible paths. This approach strongly exploits the experience gained in the past and rapidly yields feasible maneuver plans for car-like vehicles with limited steering-angle. The effectiveness of such an approach has been confirmed by computational experiments.
|
1912.07447
|
Nian Xue
|
Zhen Li, Hanyang Shao, Nian Xue, Liang Niu and LiangLiang Cao
|
Progressive Learning Algorithm for Efficient Person Re-Identification
|
ICPR2020
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper studies the problem of Person Re-Identification (ReID) for
large-scale applications. Recent research efforts have been devoted to building
complicated part models, which introduce considerably high computational cost
and memory consumption, inhibiting their practicality in large-scale
applications. This paper aims to develop a novel learning strategy to find
efficient feature embeddings while maintaining the balance of accuracy and
model complexity. More specifically, we find that by enhancing the classical
triplet loss together with cross-entropy loss, our method can explore hard
examples and build a discriminative feature embedding that is compact enough
for large-scale applications. Our method is carried out progressively using
Bayesian optimization, and we call it the Progressive Learning Algorithm (PLA).
Extensive experiments on three large-scale datasets show that our PLA is
comparable to or better than the state of the art. In particular, on the
challenging Market-1501 dataset, we achieve Rank-1=94.7\%/mAP=89.4\% while
using at least 30\% fewer parameters than strong part models.
|
[
{
"created": "Mon, 16 Dec 2019 15:32:01 GMT",
"version": "v1"
},
{
"created": "Mon, 23 Nov 2020 22:08:16 GMT",
"version": "v2"
}
] |
2020-11-25
|
[
[
"Li",
"Zhen",
""
],
[
"Shao",
"Hanyang",
""
],
[
"Xue",
"Nian",
""
],
[
"Niu",
"Liang",
""
],
[
"Cao",
"LiangLiang",
""
]
] |
This paper studies the problem of Person Re-Identification (ReID) for large-scale applications. Recent research efforts have been devoted to building complicated part models, which introduce considerably high computational cost and memory consumption, inhibiting their practicality in large-scale applications. This paper aims to develop a novel learning strategy to find efficient feature embeddings while maintaining the balance of accuracy and model complexity. More specifically, we find that by enhancing the classical triplet loss together with cross-entropy loss, our method can explore hard examples and build a discriminative feature embedding that is compact enough for large-scale applications. Our method is carried out progressively using Bayesian optimization, and we call it the Progressive Learning Algorithm (PLA). Extensive experiments on three large-scale datasets show that our PLA is comparable to or better than the state of the art. In particular, on the challenging Market-1501 dataset, we achieve Rank-1=94.7\%/mAP=89.4\% while using at least 30\% fewer parameters than strong part models.
|
1609.03205
|
Ella Rabinovich
|
Ella Rabinovich and Shuly Wintner
|
Unsupervised Identification of Translationese
|
TACL2015, 14 pages
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Translated texts are distinctively different from original ones, to the
extent that supervised text classification methods can distinguish between them
with high accuracy. These differences were proven useful for statistical
machine translation. However, it has been suggested that the accuracy of
translation detection deteriorates when the classifier is evaluated outside the
domain it was trained on. We show that this is indeed the case, in a variety of
evaluation scenarios. We then show that unsupervised classification is highly
accurate on this task. We suggest a method for determining the correct labels
of the clustering outcomes, and then use the labels for voting, improving the
accuracy even further. Moreover, we suggest a simple method for clustering in
the challenging case of mixed-domain datasets, in spite of the dominance of
domain-related features over translation-related ones. The result is an
effective, fully-unsupervised method for distinguishing between original and
translated texts that can be applied to new domains with reasonable accuracy.
|
[
{
"created": "Sun, 11 Sep 2016 19:52:28 GMT",
"version": "v1"
}
] |
2016-09-13
|
[
[
"Rabinovich",
"Ella",
""
],
[
"Wintner",
"Shuly",
""
]
] |
Translated texts are distinctively different from original ones, to the extent that supervised text classification methods can distinguish between them with high accuracy. These differences were proven useful for statistical machine translation. However, it has been suggested that the accuracy of translation detection deteriorates when the classifier is evaluated outside the domain it was trained on. We show that this is indeed the case, in a variety of evaluation scenarios. We then show that unsupervised classification is highly accurate on this task. We suggest a method for determining the correct labels of the clustering outcomes, and then use the labels for voting, improving the accuracy even further. Moreover, we suggest a simple method for clustering in the challenging case of mixed-domain datasets, in spite of the dominance of domain-related features over translation-related ones. The result is an effective, fully-unsupervised method for distinguishing between original and translated texts that can be applied to new domains with reasonable accuracy.
|
1811.03531
|
Zachary Charles
|
Zachary Charles, Harrison Rosenberg, Dimitris Papailiopoulos
|
A Geometric Perspective on the Transferability of Adversarial Directions
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
State-of-the-art machine learning models frequently misclassify inputs that
have been perturbed in an adversarial manner. Adversarial perturbations
generated for a given input and a specific classifier often seem to be
effective on other inputs and even different classifiers. In other words,
adversarial perturbations seem to transfer between different inputs, models,
and even different neural network architectures. In this work, we show that in
the context of linear classifiers and two-layer ReLU networks, there provably
exist directions that give rise to adversarial perturbations for many
classifiers and data points simultaneously. We show that these "transferable
adversarial directions" are guaranteed to exist for linear separators of a
given set, and will exist with high probability for linear classifiers trained
on independent sets drawn from the same distribution. We extend our results to
large classes of two-layer ReLU networks. We further show that adversarial
directions for ReLU networks transfer to linear classifiers while the reverse
need not hold, suggesting that adversarial perturbations for more complex
models are more likely to transfer to other classifiers. We validate our
findings empirically, even for deeper ReLU networks.
|
[
{
"created": "Thu, 8 Nov 2018 16:23:50 GMT",
"version": "v1"
}
] |
2018-11-09
|
[
[
"Charles",
"Zachary",
""
],
[
"Rosenberg",
"Harrison",
""
],
[
"Papailiopoulos",
"Dimitris",
""
]
] |
State-of-the-art machine learning models frequently misclassify inputs that have been perturbed in an adversarial manner. Adversarial perturbations generated for a given input and a specific classifier often seem to be effective on other inputs and even different classifiers. In other words, adversarial perturbations seem to transfer between different inputs, models, and even different neural network architectures. In this work, we show that in the context of linear classifiers and two-layer ReLU networks, there provably exist directions that give rise to adversarial perturbations for many classifiers and data points simultaneously. We show that these "transferable adversarial directions" are guaranteed to exist for linear separators of a given set, and will exist with high probability for linear classifiers trained on independent sets drawn from the same distribution. We extend our results to large classes of two-layer ReLU networks. We further show that adversarial directions for ReLU networks transfer to linear classifiers while the reverse need not hold, suggesting that adversarial perturbations for more complex models are more likely to transfer to other classifiers. We validate our findings empirically, even for deeper ReLU networks.
|
2010.01113
|
S. Mohammad Razavizadeh
|
Anahid Rafieifar, and S. Mohammad Razavizadeh
|
Secrecy Rate Maximization in Multi-IRS Millimeter Wave Networks
|
20 pages, 6 figures
|
Physical Communication (Elsevier) - vol.48 - Oct. 2021
|
10.1016/j.phycom.2021.101436
| null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper investigates the problem of increasing the security at the
physical layer of a Millimeter Wave (mmWave) network equipped with several
Intelligent Reflecting Surfaces (IRSs). In this network, multiple IRSs help
the Base Station (BS) deliver the signal to the desired user while
maintaining the security of the network, i.e., preventing the signal from
being intercepted by an unauthorized eavesdropper. The goal of the proposed
scheme is to maximize the secrecy rate by jointly optimizing the active
beamforming at the BS and the passive beamforming at the IRSs. This leads to
a non-convex optimization problem, which we solve by decomposing it into two
sub-problems that alternately handle the active and passive beamforming
designs using the Semi-Definite Relaxation (SDR) technique. Finally,
simulations are conducted to assess the performance of the proposed
algorithm. The results show the superiority of using multiple IRSs for
enhancing the secrecy rate in wireless networks operating in the mmWave
frequency bands.
|
[
{
"created": "Fri, 2 Oct 2020 17:17:13 GMT",
"version": "v1"
},
{
"created": "Fri, 21 May 2021 13:24:49 GMT",
"version": "v2"
}
] |
2021-10-07
|
[
[
"Rafieifar",
"Anahid",
""
],
[
"Razavizadeh",
"S. Mohammad",
""
]
] |
This paper investigates the problem of increasing the security at the physical layer of a Millimeter Wave (mmWave) network equipped with several Intelligent Reflecting Surfaces (IRSs). In this network, multiple IRSs help the Base Station (BS) deliver the signal to the desired user while maintaining the security of the network, i.e., preventing the signal from being intercepted by an unauthorized eavesdropper. The goal of the proposed scheme is to maximize the secrecy rate by jointly optimizing the active beamforming at the BS and the passive beamforming at the IRSs. This leads to a non-convex optimization problem, which we solve by decomposing it into two sub-problems that alternately handle the active and passive beamforming designs using the Semi-Definite Relaxation (SDR) technique. Finally, simulations are conducted to assess the performance of the proposed algorithm. The results show the superiority of using multiple IRSs for enhancing the secrecy rate in wireless networks operating in the mmWave frequency bands.
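The secrecy rate being maximized is, in its simplest single-link form, the gap between the legitimate channel's capacity and the eavesdropper's, floored at zero. A toy numeric check (the SNR values are illustrative, not from the paper's simulations):

```python
import math

def secrecy_rate(snr_user, snr_eve):
    """Secrecy rate in bits/s/Hz: legitimate-channel capacity minus the
    capacity leaked to the eavesdropper, floored at zero."""
    return max(0.0, math.log2(1.0 + snr_user) - math.log2(1.0 + snr_eve))

print(secrecy_rate(15.0, 3.0))  # 2.0  (log2(16) - log2(4))
print(secrecy_rate(3.0, 15.0))  # 0.0  (eavesdropper's channel is stronger)
```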
|
1711.11499
|
Leonardo Ermann
|
Leonardo Ermann, Klaus M. Frahm and Dima L. Shepelyansky
|
Google matrix of Bitcoin network
|
12 pages, 15 figures
|
Eur. Phys. J. B 91, 127 (2018)
|
10.1140/epjb/e2018-80674-y
| null |
cs.SI physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We construct and study the Google matrix of Bitcoin transactions from the
network's very beginning in 2009 until April 2013. The Bitcoin network
comprises up to a few million users, and we present its main
characteristics, including the PageRank and CheiRank probability
distributions, the spectrum of eigenvalues of the Google matrix, and the
related eigenvectors. We find that the spectrum has an unusual circle-type
structure, which we attribute to hidden communities of nodes whose members
are linked to one another. We show that the Gini coefficient of the
transactions over the whole period is close to unity, indicating that most
of the network's wealth is captured by a small fraction of users.
|
[
{
"created": "Thu, 30 Nov 2017 16:35:43 GMT",
"version": "v1"
}
] |
2018-06-29
|
[
[
"Ermann",
"Leonardo",
""
],
[
"Frahm",
"Klaus M.",
""
],
[
"Shepelyansky",
"Dima L.",
""
]
] |
We construct and study the Google matrix of Bitcoin transactions from the network's very beginning in 2009 until April 2013. The Bitcoin network comprises up to a few million users, and we present its main characteristics, including the PageRank and CheiRank probability distributions, the spectrum of eigenvalues of the Google matrix, and the related eigenvectors. We find that the spectrum has an unusual circle-type structure, which we attribute to hidden communities of nodes whose members are linked to one another. We show that the Gini coefficient of the transactions over the whole period is close to unity, indicating that most of the network's wealth is captured by a small fraction of users.
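The PageRank vector is the leading eigenvector of the Google matrix G = alpha*S + (1-alpha)/N, and the Gini coefficient quantifies the wealth concentration mentioned above. A power-iteration sketch on a tiny illustrative graph (not Bitcoin data):

```python
def pagerank(links, alpha=0.85, iters=100):
    """Power iteration for the Google matrix G = alpha*S + (1-alpha)/N,
    with dangling-node mass redistributed uniformly."""
    nodes = sorted(links)
    n = len(nodes)
    p = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - alpha) / n for v in nodes}
        for v in nodes:
            outs = links[v]
            if outs:
                for u in outs:
                    new[u] += alpha * p[v] / len(outs)
            else:  # dangling node
                for u in nodes:
                    new[u] += alpha * p[v] / n
        p = new
    return p

def gini(values):
    """Gini coefficient: 0 for perfect equality, near 1 when wealth is
    concentrated in a few holders."""
    xs = sorted(values)
    n, total = len(xs), sum(xs)
    return sum((2 * (i + 1) - n - 1) * x for i, x in enumerate(xs)) / (n * total)

# Tiny illustrative transaction graph: everyone points to 'hub'.
pr = pagerank({'a': ['hub'], 'b': ['hub'], 'hub': ['a']})
print(max(pr, key=pr.get))   # 'hub'
print(gini([0, 0, 0, 100]))  # 0.75 -- one user holds everything
```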
|
2102.06984
|
Hanbaek Lyu
|
Hanbaek Lyu, Yacoub H. Kureh, Joshua Vendrow, Mason A. Porter
|
Learning low-rank latent mesoscale structures in networks
|
82 pages, 25 figures, 2 tables
| null | null | null |
cs.SI cs.LG math.OC physics.soc-ph stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
It is common to use networks to encode the architecture of interactions
between entities in complex systems in the physical, biological, social, and
information sciences. To study the large-scale behavior of complex systems, it
is useful to examine mesoscale structures in networks as building blocks that
influence such behavior. We present a new approach for describing low-rank
mesoscale structures in networks, and we illustrate our approach using several
synthetic network models and empirical friendship, collaboration, and
protein--protein interaction (PPI) networks. We find that these networks
possess a relatively small number of `latent motifs' that together can
successfully approximate most subgraphs of a network at a fixed mesoscale. We
use an algorithm for `network dictionary learning' (NDL), which combines a
network-sampling method and nonnegative matrix factorization, to learn the
latent motifs of a given network. The ability to encode a network using a set
of latent motifs has a wide variety of applications to network-analysis tasks,
such as comparison, denoising, and edge inference. Additionally, using a new
network denoising and reconstruction (NDR) algorithm, we demonstrate how to
denoise a corrupted network by using only the latent motifs that one learns
directly from the corrupted network.
|
[
{
"created": "Sat, 13 Feb 2021 18:54:49 GMT",
"version": "v1"
},
{
"created": "Sun, 25 Jul 2021 16:45:12 GMT",
"version": "v2"
},
{
"created": "Tue, 16 Aug 2022 23:13:44 GMT",
"version": "v3"
},
{
"created": "Sat, 4 Mar 2023 05:52:30 GMT",
"version": "v4"
},
{
"created": "Thu, 13 Jul 2023 05:42:06 GMT",
"version": "v5"
}
] |
2023-07-14
|
[
[
"Lyu",
"Hanbaek",
""
],
[
"Kureh",
"Yacoub H.",
""
],
[
"Vendrow",
"Joshua",
""
],
[
"Porter",
"Mason A.",
""
]
] |
It is common to use networks to encode the architecture of interactions between entities in complex systems in the physical, biological, social, and information sciences. To study the large-scale behavior of complex systems, it is useful to examine mesoscale structures in networks as building blocks that influence such behavior. We present a new approach for describing low-rank mesoscale structures in networks, and we illustrate our approach using several synthetic network models and empirical friendship, collaboration, and protein--protein interaction (PPI) networks. We find that these networks possess a relatively small number of `latent motifs' that together can successfully approximate most subgraphs of a network at a fixed mesoscale. We use an algorithm for `network dictionary learning' (NDL), which combines a network-sampling method and nonnegative matrix factorization, to learn the latent motifs of a given network. The ability to encode a network using a set of latent motifs has a wide variety of applications to network-analysis tasks, such as comparison, denoising, and edge inference. Additionally, using a new network denoising and reconstruction (NDR) algorithm, we demonstrate how to denoise a corrupted network by using only the latent motifs that one learns directly from the corrupted network.
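The nonnegative matrix factorization step at the heart of network dictionary learning can be sketched with plain Lee-Seung multiplicative updates: sampled subgraph patches X are approximated as W H with W, H >= 0, and the columns of W play the role of latent motifs. The tiny rank-1 matrix below is purely illustrative, not the paper's pipeline:

```python
import random

def nmf(X, r, iters=300, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates for X ~= W H with W, H >= 0."""
    rng = random.Random(seed)
    m, n = len(X), len(X[0])
    W = [[rng.random() for _ in range(r)] for _ in range(m)]
    H = [[rng.random() for _ in range(n)] for _ in range(r)]

    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
                 for j in range(len(B[0]))] for i in range(len(A))]

    def T(A):
        return [list(row) for row in zip(*A)]

    for _ in range(iters):
        WH = matmul(W, H)
        num, den = matmul(T(W), X), matmul(T(W), WH)  # H-update
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(n)]
             for i in range(r)]
        WH = matmul(W, H)
        num, den = matmul(X, T(H)), matmul(WH, T(H))  # W-update
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(r)]
             for i in range(m)]
    return W, H

# A rank-1 "patch" matrix is recovered almost exactly with one motif (r=1).
X = [[1.0, 2.0], [2.0, 4.0]]
W, H = nmf(X, 1)
err = max(abs(sum(W[i][k] * H[k][j] for k in range(1)) - X[i][j])
          for i in range(2) for j in range(2))
print(err < 1e-3)
```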
|
1807.01185
|
Iman Valiulahi
|
Iman Valiulahi, Farzan Haddadi, and Arash Amini
|
Robustness of Two-Dimensional Line Spectral Estimation Against Spiky
Noise
| null | null |
10.1109/TSP.2019.2951220
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The aim of two-dimensional line spectral estimation is to super-resolve the
spectral point sources of a signal from its time samples. In many associated
applications, such as radar and sonar, some of the samples are corrupted by
spiky noise due to cut-off and saturation regions in electronic devices. To
overcome this problem, we present a new convex program to simultaneously
estimate spectral point sources and spiky noise in two dimensions. To prove
uniqueness of the solution, it is sufficient to show that a dual certificate
exists. Construction of the dual certificate imposes a mild condition on the
separation of the spectral point sources. Also, the numbers of spikes and
detectable sparse sources are shown to be logarithmic functions of the
number of time samples. Simulation results confirm the conclusions of our
general theory.
|
[
{
"created": "Tue, 3 Jul 2018 13:45:56 GMT",
"version": "v1"
}
] |
2020-01-08
|
[
[
"Valiulahi",
"Iman",
""
],
[
"Haddadi",
"Farzan",
""
],
[
"Amini",
"Arash",
""
]
] |
The aim of two-dimensional line spectral estimation is to super-resolve the spectral point sources of a signal from its time samples. In many associated applications, such as radar and sonar, some of the samples are corrupted by spiky noise due to cut-off and saturation regions in electronic devices. To overcome this problem, we present a new convex program to simultaneously estimate spectral point sources and spiky noise in two dimensions. To prove uniqueness of the solution, it is sufficient to show that a dual certificate exists. Construction of the dual certificate imposes a mild condition on the separation of the spectral point sources. Also, the numbers of spikes and detectable sparse sources are shown to be logarithmic functions of the number of time samples. Simulation results confirm the conclusions of our general theory.
|
2009.02406
|
Xinli Yu
|
Xinli Yu, Mohsen Malmir, Cynthia He, Yue Liu, Rex Wu
|
Video Moment Retrieval via Natural Language Queries
|
needs internal approval
| null | null | null |
cs.CV cs.AI cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a novel method for video moment retrieval (VMR)
that achieves state-of-the-art (SOTA) performance on R@1 metrics and
surpasses the SOTA on the high-IoU metric (R@1, IoU=0.7).
First, we propose a multi-head self-attention mechanism, and further a
cross-attention scheme, to capture video/query interaction and long-range
query dependencies from video context. The attention-based methods can model
frame-to-query and query-to-frame interactions at arbitrary positions, and
the multi-head setting ensures sufficient understanding of complicated
dependencies. Our model has a simple architecture, which enables faster
training and inference while maintaining accuracy.
Second, we propose a multi-task training objective consisting of a moment
segmentation task, start/end distribution prediction, and start/end location
regression. We have verified that start/end predictions are noisy due to
annotator disagreement, and that joint training with the moment segmentation
task provides richer information, since frames inside the target clip are
also utilized as positive training examples.
Third, we propose an early fusion approach, which achieves better
performance at the cost of inference time; this cost remains manageable
because our model's simple architecture enables efficient training and
inference.
|
[
{
"created": "Fri, 4 Sep 2020 22:06:34 GMT",
"version": "v1"
},
{
"created": "Thu, 10 Sep 2020 14:49:04 GMT",
"version": "v2"
}
] |
2020-09-11
|
[
[
"Yu",
"Xinli",
""
],
[
"Malmir",
"Mohsen",
""
],
[
"He",
"Cynthia",
""
],
[
"Liu",
"Yue",
""
],
[
"Wu",
"Rex",
""
]
] |
In this paper, we propose a novel method for video moment retrieval (VMR) that achieves state-of-the-art (SOTA) performance on R@1 metrics and surpasses the SOTA on the high-IoU metric (R@1, IoU=0.7). First, we propose a multi-head self-attention mechanism, and further a cross-attention scheme, to capture video/query interaction and long-range query dependencies from video context. The attention-based methods can model frame-to-query and query-to-frame interactions at arbitrary positions, and the multi-head setting ensures sufficient understanding of complicated dependencies. Our model has a simple architecture, which enables faster training and inference while maintaining accuracy. Second, we propose a multi-task training objective consisting of a moment segmentation task, start/end distribution prediction, and start/end location regression. We have verified that start/end predictions are noisy due to annotator disagreement, and that joint training with the moment segmentation task provides richer information, since frames inside the target clip are also utilized as positive training examples. Third, we propose an early fusion approach, which achieves better performance at the cost of inference time; this cost remains manageable because our model's simple architecture enables efficient training and inference.
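The building block behind the self-/cross-attention described above is scaled dot-product attention: each query is matched against all keys, and the softmax weights mix the corresponding values. A single-head, pure-Python sketch with illustrative toy vectors:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(queries, keys, values):
    """Scaled dot-product attention: each query is matched against all
    keys, and the softmax weights mix the corresponding values."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out

# One query token attending over two "frames": the output leans toward the
# value of the frame whose key matches the query.
out = attention([[1.0, 0.0]], [[1.0, 0.0], [0.0, 1.0]], [[10.0], [0.0]])
print(round(out[0][0], 2))  # ~6.7, dominated by the matching frame
```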
|
0704.3433
|
Tshilidzi Marwala
|
Tshilidzi Marwala and Bodie Crossingham
|
Bayesian approach to rough set
|
20 pages, 3 figures
| null | null | null |
cs.AI
| null |
This paper proposes an approach to training rough set models within a
Bayesian framework using the Markov Chain Monte Carlo (MCMC) method. The
prior probabilities are constructed from the prior knowledge that good rough
set models have fewer rules. MCMC sampling is conducted in the rough set
granule space, with the Metropolis algorithm used as the acceptance
criterion. The proposed method is tested on estimating the risk of HIV given
demographic data. The results show that the proposed approach achieves an
average accuracy of 58%, with the accuracy reaching up to 66%. In addition,
the Bayesian rough set gives the probabilities of the estimated HIV status
as well as linguistic rules describing how the demographic parameters drive
the risk of HIV.
|
[
{
"created": "Wed, 25 Apr 2007 19:50:59 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Marwala",
"Tshilidzi",
""
],
[
"Crossingham",
"Bodie",
""
]
] |
This paper proposes an approach to training rough set models within a Bayesian framework using the Markov Chain Monte Carlo (MCMC) method. The prior probabilities are constructed from the prior knowledge that good rough set models have fewer rules. MCMC sampling is conducted in the rough set granule space, with the Metropolis algorithm used as the acceptance criterion. The proposed method is tested on estimating the risk of HIV given demographic data. The results show that the proposed approach achieves an average accuracy of 58%, with the accuracy reaching up to 66%. In addition, the Bayesian rough set gives the probabilities of the estimated HIV status as well as linguistic rules describing how the demographic parameters drive the risk of HIV.
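The Metropolis acceptance step used in the MCMC sampling can be sketched generically: propose a symmetric random move and accept it with probability min(1, posterior ratio). The standard-normal target below is illustrative, not the rough-set posterior:

```python
import math
import random

def metropolis(log_post, init, steps, scale=0.5, seed=0):
    """Metropolis sampler: propose a symmetric step and accept with
    probability min(1, posterior ratio)."""
    rng = random.Random(seed)
    x, lp = init, log_post(init)
    samples = []
    for _ in range(steps):
        cand = x + rng.uniform(-scale, scale)
        lp_cand = log_post(cand)
        if math.log(rng.random() + 1e-300) < lp_cand - lp:
            x, lp = cand, lp_cand
        samples.append(x)
    return samples

# Illustrative target: a standard normal (log-density up to a constant).
samples = metropolis(lambda t: -0.5 * t * t, init=3.0, steps=20000)
mean = sum(samples) / len(samples)
print(abs(mean) < 0.3)
```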
|
2207.04403
|
Litao Yu
|
Litao Yu, Zhibin Li, Jian Zhang, Qiang Wu
|
Self-attention on Multi-Shifted Windows for Scene Segmentation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Scene segmentation in images is a fundamental yet challenging problem in
visual content understanding, which aims to learn a model that assigns every
image pixel a categorical label. One of the challenges of this learning task
is to consider the spatial and semantic relationships to obtain descriptive
feature representations, so learning feature maps at multiple scales is a
common practice in scene segmentation. In this paper, we explore the
effective use of self-attention within multi-scale image windows to learn
descriptive visual features, and then propose three different strategies to
aggregate these feature maps to decode the feature representation for dense
prediction. Our design is based on the recently proposed Swin Transformer
models, which entirely discard convolution operations. With the simple yet
effective multi-scale feature learning and aggregation, our models achieve
very promising performance on four public scene segmentation datasets:
PASCAL VOC2012, COCO-Stuff 10K, ADE20K, and Cityscapes.
|
[
{
"created": "Sun, 10 Jul 2022 07:36:36 GMT",
"version": "v1"
}
] |
2022-07-12
|
[
[
"Yu",
"Litao",
""
],
[
"Li",
"Zhibin",
""
],
[
"Zhang",
"Jian",
""
],
[
"Wu",
"Qiang",
""
]
] |
Scene segmentation in images is a fundamental yet challenging problem in visual content understanding, which aims to learn a model that assigns every image pixel a categorical label. One of the challenges of this learning task is to consider the spatial and semantic relationships to obtain descriptive feature representations, so learning feature maps at multiple scales is a common practice in scene segmentation. In this paper, we explore the effective use of self-attention within multi-scale image windows to learn descriptive visual features, and then propose three different strategies to aggregate these feature maps to decode the feature representation for dense prediction. Our design is based on the recently proposed Swin Transformer models, which entirely discard convolution operations. With the simple yet effective multi-scale feature learning and aggregation, our models achieve very promising performance on four public scene segmentation datasets: PASCAL VOC2012, COCO-Stuff 10K, ADE20K, and Cityscapes.
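The windowed self-attention idea can be illustrated by the partitioning alone: pixels attend only within their window, and shifting the partition exposes different neighbor pairs across windows. A toy 4x4 grid sketch (the cyclic shift and window size are illustrative):

```python
def window_partition(h, w, win, shift=0):
    """Map each pixel of an h-by-w grid to the id of its win-by-win window
    after a cyclic shift; attention is computed only within a window."""
    ids = {}
    for y in range(h):
        for x in range(w):
            sy, sx = (y + shift) % h, (x + shift) % w
            ids[(y, x)] = (sy // win, sx // win)
    return ids

ids0 = window_partition(4, 4, 2, shift=0)
ids1 = window_partition(4, 4, 2, shift=1)
# Pixels (0,0) and (1,1) share a window without the shift but not with it,
# so different shifts let different pixel pairs interact.
print(ids0[(0, 0)] == ids0[(1, 1)])  # True
print(ids1[(0, 0)] == ids1[(1, 1)])  # False
```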
|
1703.09200
|
Yuanhan Mo
|
Yuanhan Mo, Fangde Liu, Douglas McIlwraith, Guang Yang, Jingqing
Zhang, Taigang He, Yike Guo
|
The Deep Poincar\'e Map: A Novel Approach for Left Ventricle
Segmentation
|
MICCAI 2018 Spotlight
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Precise segmentation of the left ventricle (LV) within cardiac MRI images is
a prerequisite for the quantitative measurement of heart function. However,
this task is challenging due to the limited availability of labeled data and
motion artifacts from cardiac imaging. In this work, we present an iterative
segmentation algorithm for LV delineation. By coupling deep learning with a
novel dynamic-based labeling scheme, we present a new methodology where a
policy model is learned to guide an agent to travel over the image, tracing
out a boundary of the ROI -- using the magnitude difference of the Poincar\'e
map as a stopping criterion. Our method is evaluated on two datasets, namely
the Sunnybrook Cardiac Dataset (SCD) and data from the STACOM 2011 LV
segmentation challenge. Our method outperforms previous approaches on many
metrics. To demonstrate the transferability of our method, we present
encouraging results over the STACOM 2011 data, when using a model trained on
the SCD dataset.
|
[
{
"created": "Mon, 27 Mar 2017 17:37:33 GMT",
"version": "v1"
},
{
"created": "Tue, 30 Oct 2018 11:10:09 GMT",
"version": "v2"
}
] |
2018-10-31
|
[
[
"Mo",
"Yuanhan",
""
],
[
"Liu",
"Fangde",
""
],
[
"McIlwraith",
"Douglas",
""
],
[
"Yang",
"Guang",
""
],
[
"Zhang",
"Jingqing",
""
],
[
"He",
"Taigang",
""
],
[
"Guo",
"Yike",
""
]
] |
Precise segmentation of the left ventricle (LV) within cardiac MRI images is a prerequisite for the quantitative measurement of heart function. However, this task is challenging due to the limited availability of labeled data and motion artifacts from cardiac imaging. In this work, we present an iterative segmentation algorithm for LV delineation. By coupling deep learning with a novel dynamic-based labeling scheme, we present a new methodology where a policy model is learned to guide an agent to travel over the image, tracing out a boundary of the ROI -- using the magnitude difference of the Poincar\'e map as a stopping criterion. Our method is evaluated on two datasets, namely the Sunnybrook Cardiac Dataset (SCD) and data from the STACOM 2011 LV segmentation challenge. Our method outperforms previous approaches on many metrics. To demonstrate the transferability of our method, we present encouraging results over the STACOM 2011 data, when using a model trained on the SCD dataset.
|
2407.11344
|
Xu Zheng
|
Xu Zheng, Yuanhuiyi Lyu, Jiazhou Zhou, Lin Wang
|
Centering the Value of Every Modality: Towards Efficient and Resilient
Modality-agnostic Semantic Segmentation
|
Accepted to ECCV 2024
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fusing an arbitrary number of modalities is vital for achieving robust
multi-modal semantic segmentation, yet it remains underexplored to date.
Recent endeavors regard the RGB modality as the center and the others as
auxiliary, yielding an asymmetric architecture with two branches. However,
the RGB modality may struggle in certain circumstances, e.g., nighttime,
while others, e.g., event data, have their own merits; thus, it is
imperative for the fusion model to discern robust and fragile modalities and
to incorporate the most robust and fragile ones in learning a resilient
multi-modal framework. To this end, we propose a novel method, named MAGIC,
that can be flexibly paired with various backbones, ranging from compact to
high-performance models. Our method comprises two key plug-and-play modules.
First, we introduce a multi-modal aggregation module to efficiently process
features from multi-modal batches and extract complementary scene
information. On top of it, a unified arbitrary-modal selection module is
proposed that uses the aggregated features as a benchmark to rank the
multi-modal features by their similarity scores. In this way, our method
eliminates the dependence on the RGB modality and better withstands sensor
failures while preserving segmentation performance. Under the commonly
considered multi-modal setting, our method achieves state-of-the-art
performance while reducing the model parameters by 60%. Moreover, our method
is superior in the novel modality-agnostic setting, where it outperforms
prior art by a large margin of +19.41% mIoU.
|
[
{
"created": "Tue, 16 Jul 2024 03:19:59 GMT",
"version": "v1"
}
] |
2024-07-17
|
[
[
"Zheng",
"Xu",
""
],
[
"Lyu",
"Yuanhuiyi",
""
],
[
"Zhou",
"Jiazhou",
""
],
[
"Wang",
"Lin",
""
]
] |
Fusing an arbitrary number of modalities is vital for achieving robust multi-modal semantic segmentation, yet it remains underexplored to date. Recent endeavors regard the RGB modality as the center and the others as auxiliary, yielding an asymmetric architecture with two branches. However, the RGB modality may struggle in certain circumstances, e.g., nighttime, while others, e.g., event data, have their own merits; thus, it is imperative for the fusion model to discern robust and fragile modalities and to incorporate the most robust and fragile ones in learning a resilient multi-modal framework. To this end, we propose a novel method, named MAGIC, that can be flexibly paired with various backbones, ranging from compact to high-performance models. Our method comprises two key plug-and-play modules. First, we introduce a multi-modal aggregation module to efficiently process features from multi-modal batches and extract complementary scene information. On top of it, a unified arbitrary-modal selection module is proposed that uses the aggregated features as a benchmark to rank the multi-modal features by their similarity scores. In this way, our method eliminates the dependence on the RGB modality and better withstands sensor failures while preserving segmentation performance. Under the commonly considered multi-modal setting, our method achieves state-of-the-art performance while reducing the model parameters by 60%. Moreover, our method is superior in the novel modality-agnostic setting, where it outperforms prior art by a large margin of +19.41% mIoU.
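The similarity-based ranking in the arbitrary-modal selection can be sketched as follows: take the mean of all modality features as the benchmark, score each modality by cosine similarity to it, and keep the top-k. The feature vectors and modality names below are illustrative, not the paper's features:

```python
def cosine(u, v):
    num = sum(a * b for a, b in zip(u, v))
    return num / ((sum(a * a for a in u) ** 0.5) *
                  (sum(b * b for b in v) ** 0.5))

def select_modalities(feats, k):
    """Rank modality features by cosine similarity to their mean (the
    'benchmark') and keep the top-k."""
    dims = len(next(iter(feats.values())))
    agg = [sum(f[j] for f in feats.values()) / len(feats) for j in range(dims)]
    ranked = sorted(feats, key=lambda m: cosine(feats[m], agg), reverse=True)
    return ranked[:k]

feats = {
    'rgb':   [0.2, -0.9],  # hypothetically degraded (e.g. nighttime)
    'event': [1.0, 0.9],
    'lidar': [0.9, 1.0],
}
print(sorted(select_modalities(feats, 2)))  # ['event', 'lidar']
```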
|
2307.03017
|
Yijie Deng
|
Yijie Deng, Lei Han, Tianpeng Lin, Lin Li, Jinzhi Zhang, and Lu Fang
|
RealLiFe: Real-Time Light Field Reconstruction via Hierarchical Sparse
Gradient Descent
|
Submitted to IEEE TPAMI
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the rise of Extended Reality (XR) technology, there is a growing need
for real-time light field generation from sparse view inputs. Existing methods
can be classified into offline techniques, which can generate high-quality
novel views but at the cost of long inference/training time, and online
methods, which either lack generalizability or produce unsatisfactory results.
However, we have observed that the intrinsic sparse manifold of Multi-plane
Images (MPI) enables a significant acceleration of light field generation while
maintaining rendering quality. Based on this insight, we introduce EffLiFe, a
novel light field optimization method, which leverages the proposed
Hierarchical Sparse Gradient Descent (HSGD) to produce high-quality light
fields from sparse view images in real time. Technically, the coarse MPI of a
scene is first generated using a 3D CNN, and it is further sparsely optimized
by focusing only on important MPI gradients in a few iterations. Nevertheless,
relying solely on optimization can lead to artifacts at occlusion boundaries.
Therefore, we propose an occlusion-aware iterative refinement module that
removes visual artifacts in occluded regions by iteratively filtering the
input. Extensive experiments demonstrate that our method achieves comparable
visual quality while being 100x faster on average than state-of-the-art offline
methods and delivering better performance (about 2 dB higher in PSNR) compared
to other online approaches.
|
[
{
"created": "Thu, 6 Jul 2023 14:31:01 GMT",
"version": "v1"
},
{
"created": "Mon, 10 Jul 2023 12:47:34 GMT",
"version": "v2"
},
{
"created": "Mon, 27 Nov 2023 11:38:39 GMT",
"version": "v3"
}
] |
2023-11-28
|
[
[
"Deng",
"Yijie",
""
],
[
"Han",
"Lei",
""
],
[
"Lin",
"Tianpeng",
""
],
[
"Li",
"Lin",
""
],
[
"Zhang",
"Jinzhi",
""
],
[
"Fang",
"Lu",
""
]
] |
With the rise of Extended Reality (XR) technology, there is a growing need for real-time light field generation from sparse view inputs. Existing methods can be classified into offline techniques, which can generate high-quality novel views but at the cost of long inference/training time, and online methods, which either lack generalizability or produce unsatisfactory results. However, we have observed that the intrinsic sparse manifold of Multi-plane Images (MPI) enables a significant acceleration of light field generation while maintaining rendering quality. Based on this insight, we introduce EffLiFe, a novel light field optimization method, which leverages the proposed Hierarchical Sparse Gradient Descent (HSGD) to produce high-quality light fields from sparse view images in real time. Technically, the coarse MPI of a scene is first generated using a 3D CNN, and it is further sparsely optimized by focusing only on important MPI gradients in a few iterations. Nevertheless, relying solely on optimization can lead to artifacts at occlusion boundaries. Therefore, we propose an occlusion-aware iterative refinement module that removes visual artifacts in occluded regions by iteratively filtering the input. Extensive experiments demonstrate that our method achieves comparable visual quality while being 100x faster on average than state-of-the-art offline methods and delivering better performance (about 2 dB higher in PSNR) compared to other online approaches.
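The "sparse" part of sparse gradient descent can be illustrated generically: update only the k coordinates with the largest gradient magnitude each step and leave the rest untouched. The quadratic toy objective below stands in for the MPI optimization and is purely illustrative:

```python
def sparse_gd_step(params, grads, lr, k):
    """Update only the k coordinates with the largest gradient magnitude,
    leaving the rest untouched."""
    top = set(sorted(range(len(grads)),
                     key=lambda i: abs(grads[i]), reverse=True)[:k])
    return [p - lr * g if i in top else p
            for i, (p, g) in enumerate(zip(params, grads))]

# Minimize f(x) = sum(x_i^2), touching only the 2 largest gradients per step.
x = [4.0, 0.1, -3.0, 0.05]
for _ in range(50):
    grads = [2.0 * xi for xi in x]
    x = sparse_gd_step(x, grads, lr=0.1, k=2)
print(max(abs(v) for v in x) < 0.05)  # every coordinate still converges
```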
|
2406.16300
|
Sidak Pal Singh
|
Sidak Pal Singh, Linara Adilova, Michael Kamp, Asja Fischer, Bernhard
Sch\"olkopf, Thomas Hofmann
|
Landscaping Linear Mode Connectivity
|
ICML 2024 HiLD workshop paper
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The presence of linear paths in parameter space between two different
network solutions in certain cases, i.e., linear mode connectivity (LMC),
has garnered interest from both theoretical and practical fronts.
Significant research has either practically designed algorithms for
connecting networks by adjusting for permutation symmetries or, more
theoretically, constructed paths through which networks can be connected.
Yet, the core reasons for the occurrence of LMC, when it does in fact occur,
in the highly non-convex loss landscapes of neural networks remain far from
clear. In this work, we take a step towards understanding it by providing a
model of how the loss landscape needs to behave topographically for LMC (or
the lack thereof) to manifest. Concretely, we present a `mountainside and
ridge' perspective that neatly ties together different geometric features
that can be spotted in the loss landscape along the training runs. We
complement this perspective with a theoretical analysis of the barrier
height, for which we provide empirical support and which additionally serves
as a faithful predictor of layer-wise LMC. We close with a toy example that
provides further intuition on how barriers arise in the first place,
showcasing the larger aim of the work: to provide a working model of the
landscape and its topography for the occurrence of LMC.
|
[
{
"created": "Mon, 24 Jun 2024 03:53:30 GMT",
"version": "v1"
}
] |
2024-06-25
|
[
[
"Singh",
"Sidak Pal",
""
],
[
"Adilova",
"Linara",
""
],
[
"Kamp",
"Michael",
""
],
[
"Fischer",
"Asja",
""
],
[
"Schölkopf",
"Bernhard",
""
],
[
"Hofmann",
"Thomas",
""
]
] |
The presence of linear paths in parameter space between two different network solutions in certain cases, i.e., linear mode connectivity (LMC), has garnered interest from both theoretical and practical fronts. Significant research has either practically designed algorithms for connecting networks by adjusting for permutation symmetries or, more theoretically, constructed paths through which networks can be connected. Yet, the core reasons for the occurrence of LMC, when it does in fact occur, in the highly non-convex loss landscapes of neural networks remain far from clear. In this work, we take a step towards understanding it by providing a model of how the loss landscape needs to behave topographically for LMC (or the lack thereof) to manifest. Concretely, we present a `mountainside and ridge' perspective that neatly ties together different geometric features that can be spotted in the loss landscape along the training runs. We complement this perspective with a theoretical analysis of the barrier height, for which we provide empirical support and which additionally serves as a faithful predictor of layer-wise LMC. We close with a toy example that provides further intuition on how barriers arise in the first place, showcasing the larger aim of the work: to provide a working model of the landscape and its topography for the occurrence of LMC.
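The barrier height studied above has a simple operational definition: the maximum excess of the loss along the linear path (1-t)*w1 + t*w2 over the endpoints' average level. A toy sketch with illustrative losses (a convex bowl has no barrier; a double well has a ridge of height 1):

```python
def barrier(loss, w1, w2, steps=50):
    """Barrier height along the linear path (1-t)*w1 + t*w2: max excess of
    the interpolated loss over the endpoints' average. LMC holds
    (approximately) when this is near zero."""
    ends = (loss(w1) + loss(w2)) / 2.0
    worst = 0.0
    for s in range(steps + 1):
        t = s / steps
        w = [(1.0 - t) * a + t * b for a, b in zip(w1, w2)]
        worst = max(worst, loss(w) - ends)
    return worst

convex = lambda w: sum(x * x for x in w)  # a bowl: no barrier
ridge = lambda w: (w[0] ** 2 - 1.0) ** 2  # double well: a ridge between minima

print(barrier(convex, [1.0, 0.0], [0.0, 1.0]))  # 0.0
print(barrier(ridge, [-1.0, 0.0], [1.0, 0.0]))  # 1.0
```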
|
2305.17400
|
Xiao Hu
|
Xiao Hu, Jianxiong Li, Xianyuan Zhan, Qing-Shan Jia, Ya-Qin Zhang
|
Query-Policy Misalignment in Preference-Based Reinforcement Learning
|
Accepted by ICLR 2024
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Preference-based reinforcement learning (PbRL) provides a natural way to
align RL agents' behavior with human desired outcomes, but is often restrained
by costly human feedback. To improve feedback efficiency, most existing PbRL
methods focus on selecting queries to maximally improve the overall quality of
the reward model, but counter-intuitively, we find that this may not
necessarily lead to improved performance. To unravel this mystery, we identify
a long-neglected issue in the query selection schemes of existing PbRL studies:
Query-Policy Misalignment. We show that the seemingly informative queries
selected to improve the overall quality of the reward model actually may not align
with RL agents' interests, thus offering little help on policy learning and
eventually resulting in poor feedback efficiency. We show that this issue can
be effectively addressed via near on-policy query and a specially designed
hybrid experience replay, which together enforce the bidirectional query-policy
alignment. Simple yet elegant, our method can be easily incorporated into
existing approaches by changing only a few lines of code. We showcase in
comprehensive experiments that our method achieves substantial gains in both
human feedback and RL sample efficiency, demonstrating the importance of
addressing query-policy misalignment in PbRL tasks.
|
[
{
"created": "Sat, 27 May 2023 07:55:17 GMT",
"version": "v1"
},
{
"created": "Thu, 23 Nov 2023 16:27:42 GMT",
"version": "v2"
},
{
"created": "Fri, 5 Jul 2024 14:26:21 GMT",
"version": "v3"
}
] |
2024-07-08
|
[
[
"Hu",
"Xiao",
""
],
[
"Li",
"Jianxiong",
""
],
[
"Zhan",
"Xianyuan",
""
],
[
"Jia",
"Qing-Shan",
""
],
[
"Zhang",
"Ya-Qin",
""
]
] |
Preference-based reinforcement learning (PbRL) provides a natural way to align RL agents' behavior with human desired outcomes, but is often restrained by costly human feedback. To improve feedback efficiency, most existing PbRL methods focus on selecting queries to maximally improve the overall quality of the reward model, but counter-intuitively, we find that this may not necessarily lead to improved performance. To unravel this mystery, we identify a long-neglected issue in the query selection schemes of existing PbRL studies: Query-Policy Misalignment. We show that the seemingly informative queries selected to improve the overall quality of the reward model actually may not align with RL agents' interests, thus offering little help on policy learning and eventually resulting in poor feedback efficiency. We show that this issue can be effectively addressed via near on-policy query and a specially designed hybrid experience replay, which together enforce the bidirectional query-policy alignment. Simple yet elegant, our method can be easily incorporated into existing approaches by changing only a few lines of code. We showcase in comprehensive experiments that our method achieves substantial gains in both human feedback and RL sample efficiency, demonstrating the importance of addressing query-policy misalignment in PbRL tasks.
|
1707.06391
|
William Moses Jr.
|
Ankush Agarwalla, John Augustine, William K. Moses Jr., Madhav Sankar
K., Arvind Krishna Sridhar
|
Deterministic Dispersion of Mobile Robots in Dynamic Rings
|
21 pages, 10 figures, concise version of paper to appear in ICDCN
2018
| null | null | null |
cs.DC cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we study the problem of dispersion of mobile robots on dynamic
rings. The problem of dispersion of $n$ robots on an $n$ node graph, introduced
by Augustine and Moses Jr. [1], requires robots to coordinate with each other
and reach a configuration where exactly one robot is present on each node. This
problem has real world applications and applies whenever we want to minimize
the total cost of $n$ agents sharing $n$ resources, located at various places,
subject to the constraint that the cost of an agent moving to a different
resource is comparatively much smaller than the cost of multiple agents sharing
a resource (e.g. smart electric cars sharing recharge stations). The study of
this problem also provides indirect benefits to the study of scattering on
graphs, the study of exploration by mobile robots, and the study of load
balancing on graphs.
We solve the problem of dispersion in the presence of two types of dynamism
in the underlying graph: (i) vertex permutation and (ii) 1-interval
connectivity. We introduce the notion of vertex permutation dynamism, which means
that for a given set of nodes, in every round, the adversary ensures a
ring structure is maintained, but the connections between the nodes may change.
We use the idea of 1-interval connectivity from Di Luna et al. [10], where for
a given ring, in each round, the adversary chooses at most one edge to remove.
We assume robots have full visibility and present asymptotically time optimal
algorithms to achieve dispersion in the presence of both types of dynamism when
robots have chirality. When robots do not have chirality, we present
asymptotically time optimal algorithms to achieve dispersion subject to certain
constraints. Finally, we provide impossibility results for dispersion when
robots have no visibility.
|
[
{
"created": "Thu, 20 Jul 2017 06:46:15 GMT",
"version": "v1"
},
{
"created": "Wed, 4 Oct 2017 03:29:44 GMT",
"version": "v2"
},
{
"created": "Mon, 16 Oct 2017 13:50:46 GMT",
"version": "v3"
}
] |
2017-10-17
|
[
[
"Agarwalla",
"Ankush",
""
],
[
"Augustine",
"John",
""
],
[
"Moses",
"William K.",
"Jr."
],
[
"K.",
"Madhav Sankar",
""
],
[
"Sridhar",
"Arvind Krishna",
""
]
] |
In this work, we study the problem of dispersion of mobile robots on dynamic rings. The problem of dispersion of $n$ robots on an $n$ node graph, introduced by Augustine and Moses Jr. [1], requires robots to coordinate with each other and reach a configuration where exactly one robot is present on each node. This problem has real world applications and applies whenever we want to minimize the total cost of $n$ agents sharing $n$ resources, located at various places, subject to the constraint that the cost of an agent moving to a different resource is comparatively much smaller than the cost of multiple agents sharing a resource (e.g. smart electric cars sharing recharge stations). The study of this problem also provides indirect benefits to the study of scattering on graphs, the study of exploration by mobile robots, and the study of load balancing on graphs. We solve the problem of dispersion in the presence of two types of dynamism in the underlying graph: (i) vertex permutation and (ii) 1-interval connectivity. We introduce the notion of vertex permutation dynamism, which means that for a given set of nodes, in every round, the adversary ensures a ring structure is maintained, but the connections between the nodes may change. We use the idea of 1-interval connectivity from Di Luna et al. [10], where for a given ring, in each round, the adversary chooses at most one edge to remove. We assume robots have full visibility and present asymptotically time optimal algorithms to achieve dispersion in the presence of both types of dynamism when robots have chirality. When robots do not have chirality, we present asymptotically time optimal algorithms to achieve dispersion subject to certain constraints. Finally, we provide impossibility results for dispersion when robots have no visibility.
|
2208.12184
|
Mahdi Hejrati
|
Mahdi Hejrati, Jouni Mattila
|
Decentralized Nonlinear Control of Redundant Upper Limb Exoskeleton with
Natural Adaptation Law
|
Manuscript is published in 2022 IEEE-RAS 21st International
Conference on Humanoid Robots (Humanoids)
| null |
10.1109/Humanoids53995.2022.10000105
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The aim of this work is to utilize an adaptive decentralized control method
called virtual decomposition control (VDC) to control the orientation and
position of the end-effector of a 7 degrees of freedom (DoF) right-hand
upper-limb exoskeleton. The prevailing adaptive VDC approach requires tuning of
13n adaptation gains along with 26n upper and lower parameter bounds, where n
is the number of rigid bodies. Therefore, utilizing the VDC scheme to control
high DoF robots like the 7-DoF upper-limb exoskeleton can be an arduous task.
In this paper, a new adaptation function, so-called natural adaptation law
(NAL), is employed to eliminate these burdens from VDC, which results in
reducing all 13n gains to one and removing dependency on upper and lower
bounds. In doing so, VDC-based dynamic equations are restructured, and inertial
parameter vectors are made compatible with NAL. Then, the NAL adaptation
function is exploited to design a new adaptive VDC scheme. This novel adaptive
VDC approach ensures physical consistency conditions for estimated parameters
with no need for upper and lower bounds. Finally, the asymptotic stability of
the algorithm is proved with the virtual stability concept and the accompanying
function. The experimental results are utilized to demonstrate the excellent
performance of the proposed new adaptive VDC scheme.
|
[
{
"created": "Thu, 25 Aug 2022 16:10:49 GMT",
"version": "v1"
},
{
"created": "Thu, 29 Sep 2022 15:00:41 GMT",
"version": "v2"
},
{
"created": "Fri, 20 Jan 2023 15:50:32 GMT",
"version": "v3"
}
] |
2023-01-23
|
[
[
"Hejrati",
"Mahdi",
""
],
[
"Mattila",
"Jouni",
""
]
] |
The aim of this work is to utilize an adaptive decentralized control method called virtual decomposition control (VDC) to control the orientation and position of the end-effector of a 7 degrees of freedom (DoF) right-hand upper-limb exoskeleton. The prevailing adaptive VDC approach requires tuning of 13n adaptation gains along with 26n upper and lower parameter bounds, where n is the number of rigid bodies. Therefore, utilizing the VDC scheme to control high DoF robots like the 7-DoF upper-limb exoskeleton can be an arduous task. In this paper, a new adaptation function, so-called natural adaptation law (NAL), is employed to eliminate these burdens from VDC, which results in reducing all 13n gains to one and removing dependency on upper and lower bounds. In doing so, VDC-based dynamic equations are restructured, and inertial parameter vectors are made compatible with NAL. Then, the NAL adaptation function is exploited to design a new adaptive VDC scheme. This novel adaptive VDC approach ensures physical consistency conditions for estimated parameters with no need for upper and lower bounds. Finally, the asymptotic stability of the algorithm is proved with the virtual stability concept and the accompanying function. The experimental results are utilized to demonstrate the excellent performance of the proposed new adaptive VDC scheme.
|
1908.07195
|
Pei Ke
|
Pei Ke, Fei Huang, Minlie Huang, Xiaoyan Zhu
|
ARAML: A Stable Adversarial Training Framework for Text Generation
|
Accepted by EMNLP 2019
| null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most of the existing generative adversarial networks (GAN) for text
generation suffer from the instability of reinforcement learning training
algorithms such as policy gradient, leading to unstable performance. To tackle
this problem, we propose a novel framework called Adversarial Reward Augmented
Maximum Likelihood (ARAML). During adversarial training, the discriminator
assigns rewards to samples which are acquired from a stationary distribution
near the data rather than the generator's distribution. The generator is
optimized with maximum likelihood estimation augmented by the discriminator's
rewards instead of policy gradient. Experiments show that our model can
outperform state-of-the-art text GANs with a more stable training process.
|
[
{
"created": "Tue, 20 Aug 2019 07:25:14 GMT",
"version": "v1"
}
] |
2019-08-21
|
[
[
"Ke",
"Pei",
""
],
[
"Huang",
"Fei",
""
],
[
"Huang",
"Minlie",
""
],
[
"Zhu",
"Xiaoyan",
""
]
] |
Most of the existing generative adversarial networks (GAN) for text generation suffer from the instability of reinforcement learning training algorithms such as policy gradient, leading to unstable performance. To tackle this problem, we propose a novel framework called Adversarial Reward Augmented Maximum Likelihood (ARAML). During adversarial training, the discriminator assigns rewards to samples which are acquired from a stationary distribution near the data rather than the generator's distribution. The generator is optimized with maximum likelihood estimation augmented by the discriminator's rewards instead of policy gradient. Experiments show that our model can outperform state-of-the-art text GANs with a more stable training process.
|
2105.10332
|
Kyle Niemeyer
|
Anthony S. Walker and Kyle E. Niemeyer
|
The Two-Dimensional Swept Rule Applied on Heterogeneous Architectures
|
18 pages, 11 figures
| null | null | null |
cs.DC cs.MS physics.comp-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The partial differential equations describing compressible fluid flows can be
notoriously difficult to resolve on a pragmatic scale and often require the use
of high performance computing systems and/or accelerators. However, these
systems face scaling issues such as latency, the fixed cost of communicating
information between devices in the system. The swept rule is a technique
designed to minimize these costs by obtaining a solution to unsteady equations
at as many spatial locations and times as possible prior to communicating. In this
study, we implemented and tested the swept rule for solving two-dimensional
problems on heterogeneous computing systems across two distinct systems. Our
solver showed a speedup range of 0.22-2.71 for the heat diffusion equation and
0.52-1.46 for the compressible Euler equations. We can conclude from this study
that the swept rule offers potential for both speedups and slowdowns and that
care should be taken when designing such a solver to maximize benefits. These
results can help make decisions to maximize these benefits and inform designs.
|
[
{
"created": "Thu, 1 Apr 2021 20:06:09 GMT",
"version": "v1"
}
] |
2021-05-24
|
[
[
"Walker",
"Anthony S.",
""
],
[
"Niemeyer",
"Kyle E.",
""
]
] |
The partial differential equations describing compressible fluid flows can be notoriously difficult to resolve on a pragmatic scale and often require the use of high performance computing systems and/or accelerators. However, these systems face scaling issues such as latency, the fixed cost of communicating information between devices in the system. The swept rule is a technique designed to minimize these costs by obtaining a solution to unsteady equations at as many spatial locations and times as possible prior to communicating. In this study, we implemented and tested the swept rule for solving two-dimensional problems on heterogeneous computing systems across two distinct systems. Our solver showed a speedup range of 0.22-2.71 for the heat diffusion equation and 0.52-1.46 for the compressible Euler equations. We can conclude from this study that the swept rule offers potential for both speedups and slowdowns and that care should be taken when designing such a solver to maximize benefits. These results can help make decisions to maximize these benefits and inform designs.
|
1512.05403
|
Jose Morales Escalante
|
Jose Morales-Escalante, Irene M. Gamba, Yingda Cheng, Armando
Majorana, Chi-Wang Shu, and James Chelikowsky
|
Discontinuous Galerkin Deterministic Solvers for a Boltzmann-Poisson
Model of Hot Electron Transport by Averaged Empirical Pseudopotential Band
Structures
|
submission to CMAME (Computer Methods in Applied Mechanics and
Engineering) Journal as a reply to the reviewers on February 2017
|
Computer Methods in Applied Mechanics and Engineering, Volume 321,
2017, Pages 209-234
|
10.1016/j.cma.2017.03.003
| null |
cs.CE cond-mat.mes-hall math.NA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The purpose of this work is to incorporate numerically, in a discontinuous
Galerkin (DG) solver of a Boltzmann-Poisson model for hot electron transport,
an electronic conduction band whose values are obtained by the spherical
averaging of the full band structure given by a local empirical pseudopotential
method (EPM) around a local minimum of the conduction band for silicon, as a
midpoint between a radial band model and an anisotropic full band, in order to
provide a more accurate physical description of the electron group velocity and
conduction energy band structure in a semiconductor. This gives a better
quantitative description of the transport and collision phenomena that
fundamentally define the behaviour of the Boltzmann-Poisson model for
electron transport used in this work. The numerical values of the derivatives
of this conduction energy band, needed for the description of the electron
group velocity, are obtained by means of a cubic spline interpolation. The
EPM-Boltzmann-Poisson transport with this spherically averaged EPM calculated
energy surface is numerically simulated and compared to the output of
traditional analytic band models such as the parabolic and Kane bands,
numerically implemented too, for the case of 1D $n^+-n-n^+$ silicon diodes with
400nm and 50nm channels. Quantitative differences are observed in the kinetic
moments related to the conduction energy band used, such as mean velocity,
average energy, and electric current (momentum).
|
[
{
"created": "Wed, 16 Dec 2015 22:51:53 GMT",
"version": "v1"
},
{
"created": "Wed, 17 Jan 2018 20:46:27 GMT",
"version": "v2"
}
] |
2018-01-19
|
[
[
"Morales-Escalante",
"Jose",
""
],
[
"Gamba",
"Irene M.",
""
],
[
"Cheng",
"Yingda",
""
],
[
"Majorana",
"Armando",
""
],
[
"Shu",
"Chi-Wang",
""
],
[
"Chelikowsky",
"James",
""
]
] |
The purpose of this work is to incorporate numerically, in a discontinuous Galerkin (DG) solver of a Boltzmann-Poisson model for hot electron transport, an electronic conduction band whose values are obtained by the spherical averaging of the full band structure given by a local empirical pseudopotential method (EPM) around a local minimum of the conduction band for silicon, as a midpoint between a radial band model and an anisotropic full band, in order to provide a more accurate physical description of the electron group velocity and conduction energy band structure in a semiconductor. This gives a better quantitative description of the transport and collision phenomena that fundamentally define the behaviour of the Boltzmann-Poisson model for electron transport used in this work. The numerical values of the derivatives of this conduction energy band, needed for the description of the electron group velocity, are obtained by means of a cubic spline interpolation. The EPM-Boltzmann-Poisson transport with this spherically averaged EPM calculated energy surface is numerically simulated and compared to the output of traditional analytic band models such as the parabolic and Kane bands, numerically implemented too, for the case of 1D $n^+-n-n^+$ silicon diodes with 400nm and 50nm channels. Quantitative differences are observed in the kinetic moments related to the conduction energy band used, such as mean velocity, average energy, and electric current (momentum).
|
1904.07429
|
Mingxin Jin
|
Mingxin Jin, Yongsheng Dong, Lintao Zheng, Lingfei Liang, Tianyu Wang,
Hongyan Zhang
|
Shortest Paths in HSI Space for Color Texture Classification
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Color texture representation is an important step in the task of texture
classification. Shortest paths have been used to extract color texture features from
RGB and HSV color spaces. In this paper, we propose to use shortest paths in
the HSI space to build a texture representation for classification. In
particular, two undirected graphs are used to model the H channel and the S and
I channels respectively in order to represent a color texture image. Moreover,
the shortest paths are constructed using four pairs of pixels according to
different scales and directions of the texture image. Experimental results on
colored Brodatz and USPTex databases reveal that our proposed method is
effective, and the highest classification accuracy rate is 96.93% in the
Brodatz database.
|
[
{
"created": "Tue, 16 Apr 2019 03:24:35 GMT",
"version": "v1"
}
] |
2019-04-17
|
[
[
"Jin",
"Mingxin",
""
],
[
"Dong",
"Yongsheng",
""
],
[
"Zheng",
"Lintao",
""
],
[
"Liang",
"Lingfei",
""
],
[
"Wang",
"Tianyu",
""
],
[
"Zhang",
"Hongyan",
""
]
] |
Color texture representation is an important step in the task of texture classification. Shortest paths have been used to extract color texture features from RGB and HSV color spaces. In this paper, we propose to use shortest paths in the HSI space to build a texture representation for classification. In particular, two undirected graphs are used to model the H channel and the S and I channels respectively in order to represent a color texture image. Moreover, the shortest paths are constructed using four pairs of pixels according to different scales and directions of the texture image. Experimental results on colored Brodatz and USPTex databases reveal that our proposed method is effective, and the highest classification accuracy rate is 96.93% in the Brodatz database.
|
1804.00344
|
Marcin Junczys-Dowmunt
|
Marcin Junczys-Dowmunt, Roman Grundkiewicz, Tomasz Dwojak, Hieu Hoang,
Kenneth Heafield, Tom Neckermann, Frank Seide, Ulrich Germann, Alham Fikri
Aji, Nikolay Bogoychev, Andr\'e F. T. Martins, Alexandra Birch
|
Marian: Fast Neural Machine Translation in C++
|
Demonstration paper
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present Marian, an efficient and self-contained Neural Machine Translation
framework with an integrated automatic differentiation engine based on dynamic
computation graphs. Marian is written entirely in C++. We describe the design
of the encoder-decoder framework and demonstrate that a research-friendly
toolkit can achieve high training and translation speed.
|
[
{
"created": "Sun, 1 Apr 2018 20:50:57 GMT",
"version": "v1"
},
{
"created": "Tue, 3 Apr 2018 04:11:44 GMT",
"version": "v2"
},
{
"created": "Wed, 4 Apr 2018 15:34:17 GMT",
"version": "v3"
}
] |
2018-04-05
|
[
[
"Junczys-Dowmunt",
"Marcin",
""
],
[
"Grundkiewicz",
"Roman",
""
],
[
"Dwojak",
"Tomasz",
""
],
[
"Hoang",
"Hieu",
""
],
[
"Heafield",
"Kenneth",
""
],
[
"Neckermann",
"Tom",
""
],
[
"Seide",
"Frank",
""
],
[
"Germann",
"Ulrich",
""
],
[
"Aji",
"Alham Fikri",
""
],
[
"Bogoychev",
"Nikolay",
""
],
[
"Martins",
"André F. T.",
""
],
[
"Birch",
"Alexandra",
""
]
] |
We present Marian, an efficient and self-contained Neural Machine Translation framework with an integrated automatic differentiation engine based on dynamic computation graphs. Marian is written entirely in C++. We describe the design of the encoder-decoder framework and demonstrate that a research-friendly toolkit can achieve high training and translation speed.
|
1902.10645
|
Kurniawan Irianto
|
Kurniawan D. Irianto, Juan A. Cabrera, Giang T. Nguyen, Hani Salah,
and Frank H.P. Fitzek
|
S-PRAC: Fast Partial Packet Recovery with Network Coding in Very Noisy
Wireless Channels
| null | null |
10.1109/WD.2019.8734223
| null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Well-known error detection and correction solutions in wireless
communications are slow or incur high transmission overhead. Recently, notable
solutions like PRAC and DAPRAC, implementing partial packet recovery with
network coding, could address these problems. However, they perform slowly when
there are many errors. We propose S-PRAC, a fast scheme for partial packet
recovery, particularly designed for very noisy wireless channels. S-PRAC
improves on DAPRAC. It divides each packet into segments consisting of a fixed
number of small RLNC encoded symbols and then attaches a CRC code to each
segment and one to each coded packet. Extensive simulations show that S-PRAC
can detect and correct errors quickly. It also outperforms DAPRAC significantly
when the number of errors is high.
|
[
{
"created": "Wed, 27 Feb 2019 17:25:56 GMT",
"version": "v1"
}
] |
2019-10-07
|
[
[
"Irianto",
"Kurniawan D.",
""
],
[
"Cabrera",
"Juan A.",
""
],
[
"Nguyen",
"Giang T.",
""
],
[
"Salah",
"Hani",
""
],
[
"Fitzek",
"Frank H. P.",
""
]
] |
Well-known error detection and correction solutions in wireless communications are slow or incur high transmission overhead. Recently, notable solutions like PRAC and DAPRAC, implementing partial packet recovery with network coding, could address these problems. However, they perform slowly when there are many errors. We propose S-PRAC, a fast scheme for partial packet recovery, particularly designed for very noisy wireless channels. S-PRAC improves on DAPRAC. It divides each packet into segments consisting of a fixed number of small RLNC encoded symbols and then attaches a CRC code to each segment and one to each coded packet. Extensive simulations show that S-PRAC can detect and correct errors quickly. It also outperforms DAPRAC significantly when the number of errors is high.
|
1701.08554
|
Maxime Ferreira Da Costa
|
Maxime Ferreira Da Costa and Wei Dai
|
Low Dimensional Atomic Norm Representations in Line Spectral Estimation
| null | null |
10.1109/ISIT.2017.8006523
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The line spectral estimation problem consists in recovering the frequencies
of a complex valued time signal that is assumed to be sparse in the spectral
domain from its discrete observations. Unlike the gridding required by the
classical compressed sensing framework, line spectral estimation reconstructs
signals whose spectral supports lie continuously in the Fourier domain. While
recent advances have shown that atomic norm relaxation produces highly robust
estimates in this context, the computational cost of this approach remains,
however, the major flaw for its application to practical systems.
In this work, we aim to bridge the complexity issue by studying the atomic
norm minimization problem from low dimensional projection of the signal
samples. We derive conditions on the sub-sampling matrix under which the
partial atomic norm can be expressed by a low-dimensional semidefinite program.
Moreover, we illustrate the tightness of this relaxation by showing that it is
possible to recover the original signal in poly-logarithmic time for two
specific sub-sampling patterns.
|
[
{
"created": "Mon, 30 Jan 2017 11:35:04 GMT",
"version": "v1"
}
] |
2021-10-18
|
[
[
"Da Costa",
"Maxime Ferreira",
""
],
[
"Dai",
"Wei",
""
]
] |
The line spectral estimation problem consists in recovering the frequencies of a complex valued time signal that is assumed to be sparse in the spectral domain from its discrete observations. Unlike the gridding required by the classical compressed sensing framework, line spectral estimation reconstructs signals whose spectral supports lie continuously in the Fourier domain. While recent advances have shown that atomic norm relaxation produces highly robust estimates in this context, the computational cost of this approach remains, however, the major flaw for its application to practical systems. In this work, we aim to bridge the complexity issue by studying the atomic norm minimization problem from low dimensional projection of the signal samples. We derive conditions on the sub-sampling matrix under which the partial atomic norm can be expressed by a low-dimensional semidefinite program. Moreover, we illustrate the tightness of this relaxation by showing that it is possible to recover the original signal in poly-logarithmic time for two specific sub-sampling patterns.
|
cs/0003005
|
Prasan Roy
|
Prasan Roy, Krithi Ramamritham, S. Seshadri, Pradeep Shenoy, S.
Sudarshan
|
Don't Trash your Intermediate Results, Cache 'em
|
22 pages, 4 figures
| null | null | null |
cs.DB
| null |
In data warehouse and data mart systems, queries often take a long time to
execute due to their complex nature. Query response times can be greatly
improved by caching final/intermediate results of previous queries, and using
them to answer later queries. In this paper we describe a caching system called
Exchequer which incorporates several novel features including optimization
aware cache maintenance and the use of a cache aware optimizer. In contrast, in
existing work, the module that makes cost-benefit decisions is part of the
cache manager and works independently of the optimizer, which essentially
reconsiders these decisions while finding the best plan for a query. In our
work, the optimizer takes the decisions for the cache manager. Furthermore,
existing approaches are either restricted to cube (slice/point) queries, or
cache just the query results. On the other hand, our work is extensible and in
fact presents a data-model independent framework and algorithm. Our
experimental results attest to the efficacy of our cache management techniques
and show that over a wide range of parameters (a) Exchequer's query response
times are lower by more than 30% compared to the best performing competitor,
and (b) Exchequer can deliver the same response time as its competitor with
just one tenth of the cache size.
|
[
{
"created": "Thu, 2 Mar 2000 08:15:21 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Roy",
"Prasan",
""
],
[
"Ramamritham",
"Krithi",
""
],
[
"Seshadri",
"S.",
""
],
[
"Shenoy",
"Pradeep",
""
],
[
"Sudarshan",
"S.",
""
]
] |
In data warehouse and data mart systems, queries often take a long time to execute due to their complex nature. Query response times can be greatly improved by caching final/intermediate results of previous queries, and using them to answer later queries. In this paper we describe a caching system called Exchequer which incorporates several novel features including optimization aware cache maintenance and the use of a cache aware optimizer. In contrast, in existing work, the module that makes cost-benefit decisions is part of the cache manager and works independently of the optimizer, which essentially reconsiders these decisions while finding the best plan for a query. In our work, the optimizer takes the decisions for the cache manager. Furthermore, existing approaches are either restricted to cube (slice/point) queries, or cache just the query results. On the other hand, our work is extensible and in fact presents a data-model independent framework and algorithm. Our experimental results attest to the efficacy of our cache management techniques and show that over a wide range of parameters (a) Exchequer's query response times are lower by more than 30% compared to the best performing competitor, and (b) Exchequer can deliver the same response time as its competitor with just one tenth of the cache size.
|
2302.09420
|
Marwan Omar Dr
|
Marwan Omar
|
RobustNLP: A Technique to Defend NLP Models Against Backdoor Attacks
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
As machine learning (ML) systems are being increasingly employed in the real
world to handle sensitive tasks and make decisions in various fields, the
security and privacy of those models have also become increasingly critical. In
particular, Deep Neural Networks (DNN) have been shown to be vulnerable to
backdoor attacks whereby adversaries have access to the training data and the
opportunity to manipulate such data by inserting carefully developed samples
into the training dataset. Although the NLP community has produced several
studies on generating backdoor attacks proving the vulnerable state of language
models, to the best of our knowledge, there does not exist any work to combat
such attacks. To bridge this gap, we present RobustEncoder: a novel
clustering-based technique for detecting and removing backdoor attacks in the
text domain. Extensive empirical results demonstrate the effectiveness of our
technique in detecting and removing backdoor triggers. Our code is available at
https://github.com/marwanomar1/Backdoor-Learning-for-NLP
|
[
{
"created": "Sat, 18 Feb 2023 20:52:08 GMT",
"version": "v1"
}
] |
2023-02-21
|
[
[
"Omar",
"Marwan",
""
]
] |
As machine learning (ML) systems are being increasingly employed in the real world to handle sensitive tasks and make decisions in various fields, the security and privacy of those models have also become increasingly critical. In particular, Deep Neural Networks (DNN) have been shown to be vulnerable to backdoor attacks whereby adversaries have access to the training data and the opportunity to manipulate such data by inserting carefully developed samples into the training dataset. Although the NLP community has produced several studies on generating backdoor attacks proving the vulnerable state of language models, to the best of our knowledge, there does not exist any work to combat such attacks. To bridge this gap, we present RobustEncoder: a novel clustering-based technique for detecting and removing backdoor attacks in the text domain. Extensive empirical results demonstrate the effectiveness of our technique in detecting and removing backdoor triggers. Our code is available at https://github.com/marwanomar1/Backdoor-Learning-for-NLP
|
2006.16533
|
Shusen Liu
|
Shusen Liu, Bhavya Kailkhura, Jize Zhang, Anna M. Hiszpanski, Emily
Robertson, Donald Loveland, T. Yong-Jin Han
|
Actionable Attribution Maps for Scientific Machine Learning
| null | null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The scientific community has been increasingly interested in harnessing the
power of deep learning to solve various domain challenges. However, despite the
effectiveness in building predictive models, fundamental challenges exist in
extracting actionable knowledge from the deep neural network due to their
opaque nature. In this work, we propose techniques for exploring the behavior
of deep learning models by injecting domain-specific actionable concepts as
tunable ``knobs'' in the analysis pipeline. By incorporating the domain
knowledge with generative modeling, we are not only able to better understand
the behavior of these black-box models, but also provide scientists with
actionable insights that can potentially lead to fundamental discoveries.
|
[
{
"created": "Tue, 30 Jun 2020 05:12:29 GMT",
"version": "v1"
}
] |
2020-07-01
|
[
[
"Liu",
"Shusen",
""
],
[
"Kailkhura",
"Bhavya",
""
],
[
"Zhang",
"Jize",
""
],
[
"Hiszpanski",
"Anna M.",
""
],
[
"Robertson",
"Emily",
""
],
[
"Loveland",
"Donald",
""
],
[
"Han",
"T. Yong-Jin",
""
]
] |
The scientific community has been increasingly interested in harnessing the power of deep learning to solve various domain challenges. However, despite the effectiveness in building predictive models, fundamental challenges exist in extracting actionable knowledge from the deep neural network due to their opaque nature. In this work, we propose techniques for exploring the behavior of deep learning models by injecting domain-specific actionable concepts as tunable ``knobs'' in the analysis pipeline. By incorporating the domain knowledge with generative modeling, we are not only able to better understand the behavior of these black-box models, but also provide scientists with actionable insights that can potentially lead to fundamental discoveries.
|
1608.08142
|
Reza Rafie Borujeny
|
Reza Rafie Borujeny, Moslem Noori, Masoud Ardakani
|
Maximizing Data Rate for Multiway Relay Channels with Pairwise
Transmission Strategy
|
Submitted to IEEE Transactions on Wireless Communications, under
second round of revisions. 10 pages, 8 figures. arXiv admin note: text
overlap with arXiv:1406.4610
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In a multiway relay channel (MWRC), a pairwise transmission strategy can be
used to significantly reduce the computational complexity at the relay and the
users without sacrificing the data rate. The performance of such pairwise
strategies, however, is affected by the way that the users are paired to
transmit. In this paper, we study the effect of pairing on the common rate and
sum rate of an MWRC with functional-decode-forward (FDF) relaying strategy
where users experience asymmetric channel conditions. To this end, we first
develop a graphical model for an MWRC with pairwise transmission strategy.
Using this model, we then find the maximum achievable common rate and sum rate
as well as the user pairings that achieve these rates. This marks the ultimate
performance of FDF relaying in an MWRC setup. Further, we show that the rate
enhancement achieved through the optimal user pairing becomes less pronounced
at higher SNRs. Using computer simulations, the performance of the optimal
pairing is compared with those of other proposed pairings in the literature.
|
[
{
"created": "Mon, 29 Aug 2016 16:47:09 GMT",
"version": "v1"
}
] |
2016-08-30
|
[
[
"Borujeny",
"Reza Rafie",
""
],
[
"Noori",
"Moslem",
""
],
[
"Ardakani",
"Masoud",
""
]
] |
In a multiway relay channel (MWRC), a pairwise transmission strategy can be used to significantly reduce the computational complexity at the relay and the users without sacrificing the data rate. The performance of such pairwise strategies, however, is affected by the way that the users are paired to transmit. In this paper, we study the effect of pairing on the common rate and sum rate of an MWRC with functional-decode-forward (FDF) relaying strategy where users experience asymmetric channel conditions. To this end, we first develop a graphical model for an MWRC with pairwise transmission strategy. Using this model, we then find the maximum achievable common rate and sum rate as well as the user pairings that achieve these rates. This marks the ultimate performance of FDF relaying in an MWRC setup. Further, we show that the rate enhancement achieved through the optimal user pairing becomes less pronounced at higher SNRs. Using computer simulations, the performance of the optimal pairing is compared with those of other proposed pairings in the literature.
|
2307.02443
|
Leon Moonen
|
Max Hort and Anastasiia Grishina and Leon Moonen
|
An Exploratory Literature Study on Sharing and Energy Use of Language
Models for Source Code
|
Accepted for publication in the 17th ACM/IEEE International Symposium
on Empirical Software Engineering and Measurement (ESEM 2023)
| null | null | null |
cs.SE cs.AI cs.CL cs.LG cs.NE
|
http://creativecommons.org/licenses/by/4.0/
|
Large language models trained on source code can support a variety of
software development tasks, such as code recommendation and program repair.
Large amounts of data for training such models benefit the models' performance.
However, the size of the data and models results in long training times and
high energy consumption. While publishing source code allows for replicability,
users need to repeat the expensive training process if models are not shared.
The main goal of the study is to investigate if publications that trained
language models for software engineering (SE) tasks share source code and
trained artifacts. The second goal is to analyze the transparency on training
energy usage. We perform a snowballing-based literature search to find
publications on language models for source code, and analyze their reusability
from a sustainability standpoint.
From 494 unique publications, we identified 293 relevant publications that
use language models to address code-related tasks. Among them, 27% (79 out of
293) make artifacts available for reuse. This can be in the form of tools or
IDE plugins designed for specific tasks or task-agnostic models that can be
fine-tuned for a variety of downstream tasks. Moreover, we collect insights on
the hardware used for model training, as well as training time, which together
determine the energy consumption of the development process. We find that there
are deficiencies in the sharing of information and artifacts for current
studies on source code models for software engineering tasks, with 40% of the
surveyed papers not sharing source code or trained artifacts. We recommend the
sharing of source code as well as trained artifacts, to enable sustainable
reproducibility. Moreover, comprehensive information on training times and
hardware configurations should be shared for transparency on a model's carbon
footprint.
|
[
{
"created": "Wed, 5 Jul 2023 17:13:00 GMT",
"version": "v1"
}
] |
2023-07-06
|
[
[
"Hort",
"Max",
""
],
[
"Grishina",
"Anastasiia",
""
],
[
"Moonen",
"Leon",
""
]
] |
Large language models trained on source code can support a variety of software development tasks, such as code recommendation and program repair. Large amounts of data for training such models benefit the models' performance. However, the size of the data and models results in long training times and high energy consumption. While publishing source code allows for replicability, users need to repeat the expensive training process if models are not shared. The main goal of the study is to investigate if publications that trained language models for software engineering (SE) tasks share source code and trained artifacts. The second goal is to analyze the transparency on training energy usage. We perform a snowballing-based literature search to find publications on language models for source code, and analyze their reusability from a sustainability standpoint. From 494 unique publications, we identified 293 relevant publications that use language models to address code-related tasks. Among them, 27% (79 out of 293) make artifacts available for reuse. This can be in the form of tools or IDE plugins designed for specific tasks or task-agnostic models that can be fine-tuned for a variety of downstream tasks. Moreover, we collect insights on the hardware used for model training, as well as training time, which together determine the energy consumption of the development process. We find that there are deficiencies in the sharing of information and artifacts for current studies on source code models for software engineering tasks, with 40% of the surveyed papers not sharing source code or trained artifacts. We recommend the sharing of source code as well as trained artifacts, to enable sustainable reproducibility. Moreover, comprehensive information on training times and hardware configurations should be shared for transparency on a model's carbon footprint.
|
2308.07789
|
Matteo Acclavio
|
Matteo Acclavio, Gianluca Curzi, Giulio Guerrieri
|
Infinitary cut-elimination via finite approximations (extended version)
|
Extended version of the paper "Infinitary cut-elimination via finite
approximations" accepted at CSL2024
| null | null | null |
cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
We investigate non-wellfounded proof systems based on parsimonious logic, a
weaker variant of linear logic where the exponential modality ! is interpreted
as a constructor for streams over finite data. Logical consistency is
maintained at a global level by adapting a standard progressing criterion. We
present an infinitary version of cut-elimination based on finite
approximations, and we prove that, in the presence of the progressing
criterion, it returns well-defined non-wellfounded proofs at its limit.
Furthermore, we show that
cut-elimination preserves the progressing criterion and various regularity
conditions internalizing degrees of proof-theoretical uniformity. Finally, we
provide a denotational semantics for our systems based on the relational model.
|
[
{
"created": "Tue, 15 Aug 2023 14:10:56 GMT",
"version": "v1"
},
{
"created": "Mon, 27 May 2024 18:52:23 GMT",
"version": "v2"
}
] |
2024-05-29
|
[
[
"Acclavio",
"Matteo",
""
],
[
"Curzi",
"Gianluca",
""
],
[
"Guerrieri",
"Giulio",
""
]
] |
We investigate non-wellfounded proof systems based on parsimonious logic, a weaker variant of linear logic where the exponential modality ! is interpreted as a constructor for streams over finite data. Logical consistency is maintained at a global level by adapting a standard progressing criterion. We present an infinitary version of cut-elimination based on finite approximations, and we prove that, in the presence of the progressing criterion, it returns well-defined non-wellfounded proofs at its limit. Furthermore, we show that cut-elimination preserves the progressing criterion and various regularity conditions internalizing degrees of proof-theoretical uniformity. Finally, we provide a denotational semantics for our systems based on the relational model.
|
cs/0309040
|
Valmir Barbosa
|
L. D. Penso, V. C. Barbosa
|
A distributed algorithm to find k-dominating sets
|
To appear in Discrete Applied Mathematics
|
Discrete Applied Mathematics 141 (2004), 243-253
|
10.1016/S0166-218X(03)00368-8
|
ES-552/01
|
cs.DC
| null |
We consider a connected undirected graph $G(n,m)$ with $n$ nodes and $m$
edges. A $k$-dominating set $D$ in $G$ is a set of nodes having the property
that every node in $G$ is at most $k$ edges away from at least one node in $D$.
Finding a $k$-dominating set of minimum size is NP-hard. We give a new
synchronous distributed algorithm to find a $k$-dominating set in $G$ of size
no greater than $\lfloor n/(k+1)\rfloor$. Our algorithm requires $O(k\log^*n)$
time and $O(m\log k+n\log k\log^*n)$ messages to run. It has the same time
complexity as the best currently known algorithm, but improves on that
algorithm's message complexity and is, in addition, conceptually simpler.
|
[
{
"created": "Tue, 23 Sep 2003 01:14:43 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Penso",
"L. D.",
""
],
[
"Barbosa",
"V. C.",
""
]
] |
We consider a connected undirected graph $G(n,m)$ with $n$ nodes and $m$ edges. A $k$-dominating set $D$ in $G$ is a set of nodes having the property that every node in $G$ is at most $k$ edges away from at least one node in $D$. Finding a $k$-dominating set of minimum size is NP-hard. We give a new synchronous distributed algorithm to find a $k$-dominating set in $G$ of size no greater than $\lfloor n/(k+1)\rfloor$. Our algorithm requires $O(k\log^*n)$ time and $O(m\log k+n\log k\log^*n)$ messages to run. It has the same time complexity as the best currently known algorithm, but improves on that algorithm's message complexity and is, in addition, conceptually simpler.
|
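The $k$-dominating-set property defined in the abstract above can be verified with a short multi-source BFS. This is an illustrative sketch only (the names `adj`, `dominators`, and `is_k_dominating` are invented here), not the paper's distributed algorithm:

```python
from collections import deque

def is_k_dominating(adj, dominators, k):
    """Return True if every node of the graph is at most k edges away
    from some node in `dominators`, via multi-source BFS.
    `adj` maps each node to its list of neighbors."""
    dist = {v: 0 for v in dominators}
    queue = deque(dominators)
    while queue:
        u = queue.popleft()
        if dist[u] == k:          # nodes beyond radius k of u are not covered by u
            continue
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return all(v in dist for v in adj)

# Path graph 0-1-2-3-4: the center {2} 2-dominates it, the endpoint {0} does not.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
```

Note this only checks the definition; it does not construct the size-$\lfloor n/(k+1)\rfloor$ set guaranteed by the paper's algorithm.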
1804.07396
|
David Eppstein
|
David Eppstein
|
Making Change in 2048
|
13 pages, 1 figure. To appear in the Proceedings of the 9th
International Conference on Fun with Algorithms (FUN 2018), Leibniz
International Proceedings in Informatics
| null | null | null |
cs.DM
|
http://creativecommons.org/licenses/by/4.0/
|
The 2048 game involves tiles labeled with powers of two that can be merged to
form bigger powers of two; variants of the same puzzle involve similar merges
of other tile values. We analyze the maximum score achievable in these games by
proving a min-max theorem equating this maximum score (in an abstract
generalized variation of 2048 that allows all the moves of the original game)
with the minimum value that causes a greedy change-making algorithm to use a
given number of coins. A widely-followed strategy in 2048 maintains tiles that
represent the move number in binary notation, and a similar strategy in the
Fibonacci number variant of the game (987) maintains the Zeckendorf
representation of the move number as a sum of the fewest possible Fibonacci
numbers; our analysis shows that the ability to follow these strategies is
intimately connected with the fact that greedy change-making is optimal for
binary and Fibonacci coinage. For variants of 2048 using tile values for which
greedy change-making is suboptimal, it is the greedy strategy, not the optimal
representation as sums of tile values, that controls the length of the game. In
particular, the game will always terminate whenever the sequence of allowable
tile values has arbitrarily large gaps between consecutive values.
|
[
{
"created": "Thu, 19 Apr 2018 22:58:51 GMT",
"version": "v1"
}
] |
2018-04-23
|
[
[
"Eppstein",
"David",
""
]
] |
The 2048 game involves tiles labeled with powers of two that can be merged to form bigger powers of two; variants of the same puzzle involve similar merges of other tile values. We analyze the maximum score achievable in these games by proving a min-max theorem equating this maximum score (in an abstract generalized variation of 2048 that allows all the moves of the original game) with the minimum value that causes a greedy change-making algorithm to use a given number of coins. A widely-followed strategy in 2048 maintains tiles that represent the move number in binary notation, and a similar strategy in the Fibonacci number variant of the game (987) maintains the Zeckendorf representation of the move number as a sum of the fewest possible Fibonacci numbers; our analysis shows that the ability to follow these strategies is intimately connected with the fact that greedy change-making is optimal for binary and Fibonacci coinage. For variants of 2048 using tile values for which greedy change-making is suboptimal, it is the greedy strategy, not the optimal representation as sums of tile values, that controls the length of the game. In particular, the game will always terminate whenever the sequence of allowable tile values has arbitrarily large gaps between consecutive values.
|
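The greedy change-making procedure that the min-max theorem above relates to 2048 scores can be sketched as follows (a minimal illustration under invented names, not the paper's code):

```python
def greedy_change(amount, denominations):
    """Make change greedily: repeatedly take the largest denomination
    that still fits. Optimal for binary and Fibonacci coinage, but may
    use more coins than necessary for other denomination systems."""
    coins = []
    for d in sorted(denominations, reverse=True):
        while amount >= d:
            amount -= d
            coins.append(d)
    assert amount == 0, "denominations must cover the amount (e.g. include 1)"
    return coins

# Binary coinage: 19 = 16 + 2 + 1 (greedy is optimal).
# Fibonacci coinage: 11 = 8 + 3, the Zeckendorf representation.
# For {1, 3, 4}, greedy gives 6 -> [4, 1, 1] although [3, 3] uses fewer coins.
```

The suboptimal `{1, 3, 4}` case mirrors the abstract's point: when greedy change-making is suboptimal, it is the greedy strategy, not the optimal representation, that governs game length.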
2310.10486
|
Milad Shafiee
|
Milad Shafiee, Guillaume Bellegarda and Auke Ijspeert
|
ManyQuadrupeds: Learning a Single Locomotion Policy for Diverse
Quadruped Robots
|
Accepted for IEEE International Conference on Robotics and Automation
(ICRA) 2024, Webpage: https://miladshafiee.github.io/ManyQuadrupeds/
| null | null | null |
cs.RO cs.AI cs.LG cs.SY eess.SY
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Learning a locomotion policy for quadruped robots has traditionally been
constrained to a specific robot morphology, mass, and size. The learning
process must usually be repeated for every new robot, where hyperparameters and
reward function weights must be re-tuned to maximize performance for each new
system. Alternatively, attempting to train a single policy to accommodate
different robot sizes, while maintaining the same degrees of freedom (DoF) and
morphology, requires either complex learning frameworks, or mass, inertia, and
dimension randomization, which leads to prolonged training periods. In our
study, we show that drawing inspiration from animal motor control allows us to
effectively train a single locomotion policy capable of controlling a diverse
range of quadruped robots. The robot differences encompass: a variable number
of DoFs, (i.e. 12 or 16 joints), three distinct morphologies, a broad mass
range spanning from 2 kg to 200 kg, and nominal standing heights ranging from
18 cm to 100 cm. Our policy modulates a representation of the Central Pattern
Generator (CPG) in the spinal cord, effectively coordinating both frequencies
and amplitudes of the CPG to produce rhythmic output (Rhythm Generation), which
is then mapped to a Pattern Formation (PF) layer. Across different robots, the
only varying component is the PF layer, which adjusts the scaling parameters
for the stride height and length. Subsequently, we evaluate the sim-to-real
transfer by testing the single policy on both the Unitree Go1 and A1 robots.
Remarkably, we observe robust performance, even when adding a 15 kg load,
equivalent to 125% of the A1 robot's nominal mass.
|
[
{
"created": "Mon, 16 Oct 2023 15:06:16 GMT",
"version": "v1"
},
{
"created": "Fri, 8 Mar 2024 14:10:02 GMT",
"version": "v2"
}
] |
2024-03-11
|
[
[
"Shafiee",
"Milad",
""
],
[
"Bellegarda",
"Guillaume",
""
],
[
"Ijspeert",
"Auke",
""
]
] |
Learning a locomotion policy for quadruped robots has traditionally been constrained to a specific robot morphology, mass, and size. The learning process must usually be repeated for every new robot, where hyperparameters and reward function weights must be re-tuned to maximize performance for each new system. Alternatively, attempting to train a single policy to accommodate different robot sizes, while maintaining the same degrees of freedom (DoF) and morphology, requires either complex learning frameworks, or mass, inertia, and dimension randomization, which leads to prolonged training periods. In our study, we show that drawing inspiration from animal motor control allows us to effectively train a single locomotion policy capable of controlling a diverse range of quadruped robots. The robot differences encompass: a variable number of DoFs, (i.e. 12 or 16 joints), three distinct morphologies, a broad mass range spanning from 2 kg to 200 kg, and nominal standing heights ranging from 18 cm to 100 cm. Our policy modulates a representation of the Central Pattern Generator (CPG) in the spinal cord, effectively coordinating both frequencies and amplitudes of the CPG to produce rhythmic output (Rhythm Generation), which is then mapped to a Pattern Formation (PF) layer. Across different robots, the only varying component is the PF layer, which adjusts the scaling parameters for the stride height and length. Subsequently, we evaluate the sim-to-real transfer by testing the single policy on both the Unitree Go1 and A1 robots. Remarkably, we observe robust performance, even when adding a 15 kg load, equivalent to 125% of the A1 robot's nominal mass.
|
2109.08771
|
Jacky Liang
|
Jacky Liang, Mohit Sharma, Alex LaGrassa, Shivam Vats, Saumya Saxena,
Oliver Kroemer
|
Search-Based Task Planning with Learned Skill Effect Models for Lifelong
Robotic Manipulation
|
To appear in the International Conference on Robotics and Automation
(ICRA) 2022
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Robots deployed in many real-world settings need to be able to acquire new
skills and solve new tasks over time. Prior works on planning with skills often
make assumptions on the structure of skills and tasks, such as subgoal skills,
shared skill implementations, or task-specific plan skeletons, which limit
adaptation to new skills and tasks. By contrast, we propose doing task planning
by jointly searching in the space of parameterized skills using high-level
skill effect models learned in simulation. We use an iterative training
procedure to efficiently generate relevant data to train such models. Our
approach allows flexible skill parameterizations and task specifications to
facilitate lifelong learning in general-purpose domains. Experiments
demonstrate the ability of our planner to integrate new skills in a lifelong
manner, finding new task strategies with lower costs in both train and test
tasks. We additionally show that our method can transfer to the real world
without further fine-tuning.
|
[
{
"created": "Fri, 17 Sep 2021 22:06:58 GMT",
"version": "v1"
},
{
"created": "Thu, 14 Apr 2022 01:07:35 GMT",
"version": "v2"
}
] |
2022-04-15
|
[
[
"Liang",
"Jacky",
""
],
[
"Sharma",
"Mohit",
""
],
[
"LaGrassa",
"Alex",
""
],
[
"Vats",
"Shivam",
""
],
[
"Saxena",
"Saumya",
""
],
[
"Kroemer",
"Oliver",
""
]
] |
Robots deployed in many real-world settings need to be able to acquire new skills and solve new tasks over time. Prior works on planning with skills often make assumptions on the structure of skills and tasks, such as subgoal skills, shared skill implementations, or task-specific plan skeletons, which limit adaptation to new skills and tasks. By contrast, we propose doing task planning by jointly searching in the space of parameterized skills using high-level skill effect models learned in simulation. We use an iterative training procedure to efficiently generate relevant data to train such models. Our approach allows flexible skill parameterizations and task specifications to facilitate lifelong learning in general-purpose domains. Experiments demonstrate the ability of our planner to integrate new skills in a lifelong manner, finding new task strategies with lower costs in both train and test tasks. We additionally show that our method can transfer to the real world without further fine-tuning.
|
2303.14953
|
Ming Wang
|
Ming Wang, Xianda Guo, Beibei Lin, Tian Yang, Zheng Zhu, Lincheng Li,
Shunli Zhang and Xin Yu
|
DyGait: Exploiting Dynamic Representations for High-performance Gait
Recognition
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Gait recognition is a biometric technology that recognizes the identity of
humans through their walking patterns. Compared with other biometric
technologies, gait recognition is more difficult to disguise and can be applied
at long distances without the cooperation of subjects. Thus, it
has unique potential and wide application for crime prevention and social
security. At present, most gait recognition methods directly extract features
from the video frames to establish representations. However, these
architectures learn representations from different features equally but do not
pay enough attention to dynamic features, which refers to a representation of
dynamic parts of silhouettes over time (e.g. legs). Since dynamic parts of the
human body are more informative than other parts (e.g. bags) during walking, in
this paper, we propose a novel and high-performance framework named DyGait.
This is the first framework on gait recognition that is designed to focus on
the extraction of dynamic features. Specifically, to take full advantage of the
dynamic information, we propose a Dynamic Augmentation Module (DAM), which can
automatically establish spatial-temporal feature representations of the dynamic
parts of the human body. The experimental results show that our DyGait network
outperforms other state-of-the-art gait recognition methods. It achieves an
average Rank-1 accuracy of 71.4% on the GREW dataset, 66.3% on the Gait3D
dataset, 98.4% on the CASIA-B dataset and 98.3% on the OU-MVLP dataset.
|
[
{
"created": "Mon, 27 Mar 2023 07:36:47 GMT",
"version": "v1"
}
] |
2023-03-28
|
[
[
"Wang",
"Ming",
""
],
[
"Guo",
"Xianda",
""
],
[
"Lin",
"Beibei",
""
],
[
"Yang",
"Tian",
""
],
[
"Zhu",
"Zheng",
""
],
[
"Li",
"Lincheng",
""
],
[
"Zhang",
"Shunli",
""
],
[
"Yu",
"Xin",
""
]
] |
Gait recognition is a biometric technology that recognizes the identity of humans through their walking patterns. Compared with other biometric technologies, gait recognition is more difficult to disguise and can be applied at long distances without the cooperation of subjects. Thus, it has unique potential and wide application for crime prevention and social security. At present, most gait recognition methods directly extract features from the video frames to establish representations. However, these architectures learn representations from different features equally but do not pay enough attention to dynamic features, which refers to a representation of dynamic parts of silhouettes over time (e.g. legs). Since dynamic parts of the human body are more informative than other parts (e.g. bags) during walking, in this paper, we propose a novel and high-performance framework named DyGait. This is the first framework on gait recognition that is designed to focus on the extraction of dynamic features. Specifically, to take full advantage of the dynamic information, we propose a Dynamic Augmentation Module (DAM), which can automatically establish spatial-temporal feature representations of the dynamic parts of the human body. The experimental results show that our DyGait network outperforms other state-of-the-art gait recognition methods. It achieves an average Rank-1 accuracy of 71.4% on the GREW dataset, 66.3% on the Gait3D dataset, 98.4% on the CASIA-B dataset and 98.3% on the OU-MVLP dataset.
|
2103.16434
|
Georgios Papadopoulos Th.
|
Georgios Th. Papadopoulos, Asterios Leonidis, Margherita Antona,
Constantine Stephanidis
|
User profile-driven large-scale multi-agent learning from demonstration
in federated human-robot collaborative environments
|
arXiv admin note: substantial text overlap with arXiv:2012.08174
| null | null | null |
cs.RO cs.AI cs.HC cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Learning from Demonstration (LfD) has been established as the dominant
paradigm for efficiently transferring skills from human teachers to robots. In
this context, the Federated Learning (FL) conceptualization has very recently
been introduced for developing large-scale human-robot collaborative
environments, targeting to robustly address, among others, the critical
challenges of multi-agent learning and long-term autonomy. In the current work,
the latter scheme is further extended and enhanced, by designing and
integrating a novel user profile formulation for providing a fine-grained
representation of the exhibited human behavior, adopting a Deep Learning
(DL)-based formalism. In particular, a hierarchically organized set of key
information sources is considered, including: a) User attributes (e.g.
demographic, anthropomorphic, educational, etc.), b) User state (e.g. fatigue
detection, stress detection, emotion recognition, etc.) and c)
Psychophysiological measurements (e.g. gaze, electrodermal activity, heart
rate, etc.) related data. Then, a combination of Long Short-Term Memory (LSTM)
and stacked autoencoders, with appropriately defined neural network
architectures, is employed for the modelling step. The overall designed scheme
enables both short- and long-term analysis/interpretation of the human behavior
(as observed during the feedback capturing sessions), so as to adaptively
adjust the importance of the collected feedback samples when aggregating
information originating from the same and different human teachers,
respectively.
|
[
{
"created": "Tue, 30 Mar 2021 15:33:21 GMT",
"version": "v1"
}
] |
2021-03-31
|
[
[
"Papadopoulos",
"Georgios Th.",
""
],
[
"Leonidis",
"Asterios",
""
],
[
"Antona",
"Margherita",
""
],
[
"Stephanidis",
"Constantine",
""
]
] |
Learning from Demonstration (LfD) has been established as the dominant paradigm for efficiently transferring skills from human teachers to robots. In this context, the Federated Learning (FL) conceptualization has very recently been introduced for developing large-scale human-robot collaborative environments, targeting to robustly address, among others, the critical challenges of multi-agent learning and long-term autonomy. In the current work, the latter scheme is further extended and enhanced, by designing and integrating a novel user profile formulation for providing a fine-grained representation of the exhibited human behavior, adopting a Deep Learning (DL)-based formalism. In particular, a hierarchically organized set of key information sources is considered, including: a) User attributes (e.g. demographic, anthropomorphic, educational, etc.), b) User state (e.g. fatigue detection, stress detection, emotion recognition, etc.) and c) Psychophysiological measurements (e.g. gaze, electrodermal activity, heart rate, etc.) related data. Then, a combination of Long Short-Term Memory (LSTM) and stacked autoencoders, with appropriately defined neural network architectures, is employed for the modelling step. The overall designed scheme enables both short- and long-term analysis/interpretation of the human behavior (as observed during the feedback capturing sessions), so as to adaptively adjust the importance of the collected feedback samples when aggregating information originating from the same and different human teachers, respectively.
|
1604.04675
|
Hamid Tizhoosh
|
Shujin Zhu, H.R.Tizhoosh
|
Radon Features and Barcodes for Medical Image Retrieval via SVM
|
To appear in proceedings of The 2016 IEEE International Joint
Conference on Neural Networks (IJCNN 2016), July 24-29, 2016, Vancouver,
Canada
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For more than two decades, research has been performed on content-based image
retrieval (CBIR). By combining Radon projections and support vector machines
(SVM), a content-based medical image retrieval method is presented in
this work. The proposed approach employs the normalized Radon projections with
corresponding image category labels to build an SVM classifier, and the Radon
barcode database which encodes every image in a binary format is also generated
simultaneously to tag all images. To retrieve similar images when a query image
is given, Radon projections and the barcode of the query image are generated.
Subsequently, the k-nearest neighbor search method is applied to find the
images with minimum Hamming distance of the Radon barcode within the same class
predicted by the trained SVM classifier that uses Radon features. The
performance of the proposed method is validated by using the IRMA 2009 dataset
with 14,410 x-ray images in 57 categories. The results demonstrate that our
method has the capacity to retrieve similar responses for the correctly
identified query image and even for those mistakenly classified by SVM. The
approach is furthermore very fast and has low memory requirements.
|
[
{
"created": "Sat, 16 Apr 2016 01:13:23 GMT",
"version": "v1"
}
] |
2016-04-19
|
[
[
"Zhu",
"Shujin",
""
],
[
"Tizhoosh",
"H. R.",
""
]
] |
For more than two decades, research has been performed on content-based image retrieval (CBIR). By combining Radon projections and support vector machines (SVM), a content-based medical image retrieval method is presented in this work. The proposed approach employs the normalized Radon projections with corresponding image category labels to build an SVM classifier, and the Radon barcode database which encodes every image in a binary format is also generated simultaneously to tag all images. To retrieve similar images when a query image is given, Radon projections and the barcode of the query image are generated. Subsequently, the k-nearest neighbor search method is applied to find the images with minimum Hamming distance of the Radon barcode within the same class predicted by the trained SVM classifier that uses Radon features. The performance of the proposed method is validated by using the IRMA 2009 dataset with 14,410 x-ray images in 57 categories. The results demonstrate that our method has the capacity to retrieve similar responses for the correctly identified query image and even for those mistakenly classified by SVM. The approach is furthermore very fast and has low memory requirements.
|
2105.02742
|
Christopher K\"ummel
|
Christopher Kissel, Christopher K\"ummel, Dennis Ritter, Kristian
Hildebrand
|
Pose-Guided Sign Language Video GAN with Dynamic Lambda
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We propose a novel approach for the synthesis of sign language videos using
GANs. We extend the previous work of Stoll et al. by using the human semantic
parser of the Soft-Gated Warping-GAN to produce photorealistic videos
guided by region-level spatial layouts. Synthesizing target poses improves
performance on independent and contrasting signers. Therefore, we have
evaluated our system with the highly heterogeneous MS-ASL dataset with over 200
signers, resulting in an SSIM of 0.893. Furthermore, we introduce a periodic
weighting approach to the generator that reactivates the training and leads to
quantitatively better results.
|
[
{
"created": "Thu, 6 May 2021 15:12:09 GMT",
"version": "v1"
}
] |
2021-05-07
|
[
[
"Kissel",
"Christopher",
""
],
[
"Kümmel",
"Christopher",
""
],
[
"Ritter",
"Dennis",
""
],
[
"Hildebrand",
"Kristian",
""
]
] |
We propose a novel approach for the synthesis of sign language videos using GANs. We extend the previous work of Stoll et al. by using the human semantic parser of the Soft-Gated Warping-GAN to produce photorealistic videos guided by region-level spatial layouts. Synthesizing target poses improves performance on independent and contrasting signers. Therefore, we have evaluated our system with the highly heterogeneous MS-ASL dataset with over 200 signers, resulting in an SSIM of 0.893. Furthermore, we introduce a periodic weighting approach to the generator that reactivates the training and leads to quantitatively better results.
|
2103.05902
|
Dongseok Shim
|
Dongseok Shim and H. Jin Kim
|
Learning a Domain-Agnostic Visual Representation for Autonomous Driving
via Contrastive Loss
|
IEEE IROS 2021 Submission
| null | null | null |
cs.CV cs.LG cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Deep neural networks have been widely studied in autonomous driving
applications such as semantic segmentation or depth estimation. However,
training a neural network in a supervised manner requires a large number of
annotated labels, which are expensive and time-consuming to collect. Recent
studies leverage synthetic data collected from a virtual environment which are
much easier to acquire and more accurate compared to data from the real world,
but they usually suffer from poor generalization due to the inherent domain
shift problem. In this paper, we propose Domain-Agnostic Contrastive Learning
(DACL), a two-stage unsupervised domain adaptation framework with cyclic
adversarial training and a contrastive loss. DACL leads the neural network to
learn a domain-agnostic representation to overcome performance degradation when
there is a difference between the training and test data distributions. Our
proposed approach achieves better performance in the monocular depth estimation
task compared to previous state-of-the-art methods and also shows effectiveness
in the semantic segmentation task.
|
[
{
"created": "Wed, 10 Mar 2021 07:06:03 GMT",
"version": "v1"
}
] |
2021-03-11
|
[
[
"Shim",
"Dongseok",
""
],
[
"Kim",
"H. Jin",
""
]
] |
Deep neural networks have been widely studied in autonomous driving applications such as semantic segmentation or depth estimation. However, training a neural network in a supervised manner requires a large number of annotated labels, which are expensive and time-consuming to collect. Recent studies leverage synthetic data collected from a virtual environment, which are much easier to acquire and more accurate compared to data from the real world, but they usually suffer from poor generalization due to the inherent domain shift problem. In this paper, we propose Domain-Agnostic Contrastive Learning (DACL), a two-stage unsupervised domain adaptation framework with cyclic adversarial training and a contrastive loss. DACL leads the neural network to learn a domain-agnostic representation to overcome performance degradation when there is a difference between the training and test data distributions. Our proposed approach achieves better performance in the monocular depth estimation task compared to previous state-of-the-art methods and also shows effectiveness in the semantic segmentation task.
|
2303.12394
|
Qianxiong Xu
|
Qianxiong Xu, Cheng Long, Liang Yu, Chen Zhang
|
Road Extraction with Satellite Images and Partial Road Maps
|
This paper has been accepted by IEEE Transactions on Geoscience and
Remote Sensing
| null |
10.1109/TGRS.2023.3261332
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Road extraction is a process of automatically generating road maps mainly
from satellite images. Existing models all target to generate roads from the
scratch despite that a large quantity of road maps, though incomplete, are
publicly available (e.g. those from OpenStreetMap) and can help with road
extraction. In this paper, we propose to conduct road extraction based on
satellite images and partial road maps, which is new. We then propose a
two-branch Partial to Complete Network (P2CNet) for the task, which has two
prominent components: Gated Self-Attention Module (GSAM) and Missing Part (MP)
loss. GSAM leverages a channel-wise self-attention module and a gate module to
capture long-range semantics, filter out useless information, and better fuse
the features from two branches. MP loss is derived from the partial road maps,
trying to give more attention to the road pixels that do not exist in partial
road maps. Extensive experiments are conducted to demonstrate the effectiveness
of our model; e.g., P2CNet achieves state-of-the-art performance with IoU
scores of 70.71% and 75.52%, respectively, on the SpaceNet and OSM datasets.
|
[
{
"created": "Wed, 22 Mar 2023 08:59:42 GMT",
"version": "v1"
}
] |
2023-05-03
|
[
[
"Xu",
"Qianxiong",
""
],
[
"Long",
"Cheng",
""
],
[
"Yu",
"Liang",
""
],
[
"Zhang",
"Chen",
""
]
] |
Road extraction is a process of automatically generating road maps mainly from satellite images. Existing models all aim to generate roads from scratch, even though a large quantity of road maps, though incomplete, are publicly available (e.g., from OpenStreetMap) and can help with road extraction. In this paper, we propose to conduct road extraction based on both satellite images and partial road maps, which is a new task. We then propose a two-branch Partial to Complete Network (P2CNet) for the task, which has two prominent components: Gated Self-Attention Module (GSAM) and Missing Part (MP) loss. GSAM leverages a channel-wise self-attention module and a gate module to capture long-range semantics, filter out useless information, and better fuse the features from two branches. MP loss is derived from the partial road maps, trying to give more attention to the road pixels that do not exist in partial road maps. Extensive experiments are conducted to demonstrate the effectiveness of our model; e.g., P2CNet achieves state-of-the-art performance with IoU scores of 70.71% and 75.52%, respectively, on the SpaceNet and OSM datasets.
|
2111.05688
|
Antoine Vacavant
|
Antoine Vacavant and Bertrand Kerautret and Fabien Feschet
|
Robust reconstructions by multi-scale/irregular tangential covering
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we propose an original manner to employ a tangential cover
algorithm - minDSS - in order to geometrically reconstruct noisy digital
contours. To do so, we exploit the representation of graphical objects by
maximal primitives we have introduced in previous works. By calculating
multi-scale and irregular isothetic representations of the contour, we obtained
1-D (one-dimensional) intervals, and afterwards achieved a decomposition into
maximal line segments or circular arcs. By adapting minDSS to this sparse and
irregular data of 1-D intervals supporting the maximal primitives, we are now
able to reconstruct the input noisy objects into cyclic contours made of lines
or arcs with a minimal number of primitives. In this work, we explain our novel
complete pipeline, and present its experimental evaluation by considering both
synthetic and real image data. We also show that this is a robust approach,
with respect to selected reference methods from the state of the art, and by considering a
multi-scale noise evaluation process.
|
[
{
"created": "Wed, 10 Nov 2021 14:02:05 GMT",
"version": "v1"
}
] |
2021-11-11
|
[
[
"Vacavant",
"Antoine",
""
],
[
"Kerautret",
"Bertrand",
""
],
[
"Feschet",
"Fabien",
""
]
] |
In this paper, we propose an original manner to employ a tangential cover algorithm - minDSS - in order to geometrically reconstruct noisy digital contours. To do so, we exploit the representation of graphical objects by maximal primitives we have introduced in previous works. By calculating multi-scale and irregular isothetic representations of the contour, we obtained 1-D (one-dimensional) intervals, and afterwards achieved a decomposition into maximal line segments or circular arcs. By adapting minDSS to this sparse and irregular data of 1-D intervals supporting the maximal primitives, we are now able to reconstruct the input noisy objects into cyclic contours made of lines or arcs with a minimal number of primitives. In this work, we explain our novel complete pipeline, and present its experimental evaluation by considering both synthetic and real image data. We also show that this is a robust approach, with respect to selected reference methods from the state of the art, and by considering a multi-scale noise evaluation process.
|
2302.03567
|
Daniel Rigobon
|
Daniel E. Rigobon
|
From Utilitarian to Rawlsian Designs for Algorithmic Fairness
| null | null | null | null |
cs.CY cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
There is a lack of consensus within the literature as to how `fairness' of
algorithmic systems can be measured, and different metrics can often be at
odds. In this paper, we approach this task by drawing on the ethical frameworks
of utilitarianism and John Rawls. Informally, these two theories of
distributive justice measure the `good' as either a population's sum of
utility, or worst-off outcomes, respectively. We present a parameterized class
of objective functions that interpolates between these two (possibly)
conflicting notions of the `good'. This class is shown to represent a
relaxation of the Rawlsian `veil of ignorance', and its sequence of optimal
solutions converges to both a utilitarian and Rawlsian optimum. Several other
properties of this class are studied, including: 1) a relationship to
regularized optimization, 2) feasibility of consistent estimation, and 3)
algorithmic cost. In several real-world datasets, we compute optimal solutions
and construct the tradeoff between utilitarian and Rawlsian notions of the
`good'. Empirically, we demonstrate that increasing model complexity can
manifest strict improvements to both measures of the `good'. This work suggests
that the proper degree of `fairness' can be informed by a designer's
preferences over the space of induced utilitarian and Rawlsian `good'.
|
[
{
"created": "Tue, 7 Feb 2023 16:28:10 GMT",
"version": "v1"
}
] |
2023-02-08
|
[
[
"Rigobon",
"Daniel E.",
""
]
] |
There is a lack of consensus within the literature as to how `fairness' of algorithmic systems can be measured, and different metrics can often be at odds. In this paper, we approach this task by drawing on the ethical frameworks of utilitarianism and John Rawls. Informally, these two theories of distributive justice measure the `good' as either a population's sum of utility, or worst-off outcomes, respectively. We present a parameterized class of objective functions that interpolates between these two (possibly) conflicting notions of the `good'. This class is shown to represent a relaxation of the Rawlsian `veil of ignorance', and its sequence of optimal solutions converges to both a utilitarian and Rawlsian optimum. Several other properties of this class are studied, including: 1) a relationship to regularized optimization, 2) feasibility of consistent estimation, and 3) algorithmic cost. In several real-world datasets, we compute optimal solutions and construct the tradeoff between utilitarian and Rawlsian notions of the `good'. Empirically, we demonstrate that increasing model complexity can manifest strict improvements to both measures of the `good'. This work suggests that the proper degree of `fairness' can be informed by a designer's preferences over the space of induced utilitarian and Rawlsian `good'.
|
1303.1747
|
Emilio Ferrara
|
Pasquale De Meo, Emilio Ferrara, Giacomo Fiumara, Angela Ricciardello
|
A Novel Measure of Edge Centrality in Social Networks
|
28 pages, 5 figures
|
Knowledge-based Systems, 30:136-150, 2012
|
10.1016/j.knosys.2012.01.007
| null |
cs.SI cs.DS physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The problem of assigning centrality values to nodes and edges in graphs has
been widely investigated in recent years. Recently, a novel measure of node
centrality has been proposed, called the k-path centrality index, which is
based on the propagation of messages inside a network along paths consisting of
at most k edges. On the other hand, the importance of computing the centrality
of edges has been highlighted since the 1970s by Anthonisse and, subsequently,
by Girvan and Newman. In this work we generalize the concept of k-path
centrality by defining the k-path edge centrality, a measure introduced to
compute the importance of edges. We provide an efficient algorithm, running in
O(km) time, where m is the number of edges in the graph. Thus, our technique is
feasible for large-scale network analysis. Finally, the performance of our
algorithm is analyzed, discussing the results obtained on large online social
network datasets.
|
[
{
"created": "Thu, 7 Mar 2013 16:54:34 GMT",
"version": "v1"
}
] |
2013-03-08
|
[
[
"De Meo",
"Pasquale",
""
],
[
"Ferrara",
"Emilio",
""
],
[
"Fiumara",
"Giacomo",
""
],
[
"Ricciardello",
"Angela",
""
]
] |
The problem of assigning centrality values to nodes and edges in graphs has been widely investigated in recent years. Recently, a novel measure of node centrality has been proposed, called the k-path centrality index, which is based on the propagation of messages inside a network along paths consisting of at most k edges. On the other hand, the importance of computing the centrality of edges has been highlighted since the 1970s by Anthonisse and, subsequently, by Girvan and Newman. In this work we generalize the concept of k-path centrality by defining the k-path edge centrality, a measure introduced to compute the importance of edges. We provide an efficient algorithm, running in O(km) time, where m is the number of edges in the graph. Thus, our technique is feasible for large-scale network analysis. Finally, the performance of our algorithm is analyzed, discussing the results obtained on large online social network datasets.
|
2110.09796
|
Xiaoteng Ma
|
Xiaoteng Ma, Yiqin Yang, Hao Hu, Qihan Liu, Jun Yang, Chongjie Zhang,
Qianchuan Zhao, Bin Liang
|
Offline Reinforcement Learning with Value-based Episodic Memory
| null | null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Offline reinforcement learning (RL) shows promise of applying RL to
real-world problems by effectively utilizing previously collected data. Most
existing offline RL algorithms use regularization or constraints to suppress
extrapolation error for actions outside the dataset. In this paper, we adopt a
different framework, which learns the V-function instead of the Q-function to
naturally keep the learning procedure within the support of an offline dataset.
To enable effective generalization while maintaining proper conservatism in
offline learning, we propose Expectile V-Learning (EVL), which smoothly
interpolates between optimal value learning and behavior cloning. Further,
we introduce implicit planning along offline trajectories to enhance learned
V-values and accelerate convergence. Together, we present a new offline method
called Value-based Episodic Memory (VEM). We provide theoretical analysis for
the convergence properties of our proposed VEM method, and empirical results in
the D4RL benchmark show that our method achieves superior performance in most
tasks, particularly in sparse-reward tasks.
|
[
{
"created": "Tue, 19 Oct 2021 08:20:11 GMT",
"version": "v1"
}
] |
2021-10-20
|
[
[
"Ma",
"Xiaoteng",
""
],
[
"Yang",
"Yiqin",
""
],
[
"Hu",
"Hao",
""
],
[
"Liu",
"Qihan",
""
],
[
"Yang",
"Jun",
""
],
[
"Zhang",
"Chongjie",
""
],
[
"Zhao",
"Qianchuan",
""
],
[
"Liang",
"Bin",
""
]
] |
Offline reinforcement learning (RL) shows promise of applying RL to real-world problems by effectively utilizing previously collected data. Most existing offline RL algorithms use regularization or constraints to suppress extrapolation error for actions outside the dataset. In this paper, we adopt a different framework, which learns the V-function instead of the Q-function to naturally keep the learning procedure within the support of an offline dataset. To enable effective generalization while maintaining proper conservatism in offline learning, we propose Expectile V-Learning (EVL), which smoothly interpolates between optimal value learning and behavior cloning. Further, we introduce implicit planning along offline trajectories to enhance learned V-values and accelerate convergence. Together, we present a new offline method called Value-based Episodic Memory (VEM). We provide theoretical analysis for the convergence properties of our proposed VEM method, and empirical results in the D4RL benchmark show that our method achieves superior performance in most tasks, particularly in sparse-reward tasks.
|
1108.3632
|
EPTCS
|
Thierry Monteil (CNRS - Universit\'e Montpellier 2)
|
The complexity of tangent words
|
In Proceedings WORDS 2011, arXiv:1108.3412
|
EPTCS 63, 2011, pp. 152-157
|
10.4204/EPTCS.63.21
| null |
cs.DM cs.CG cs.FL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In a previous paper, we described the set of words that appear in the coding
of smooth (resp. analytic) curves at arbitrarily small scales. The aim of this
paper is to compute the complexity of those languages.
|
[
{
"created": "Thu, 18 Aug 2011 03:54:08 GMT",
"version": "v1"
}
] |
2011-08-19
|
[
[
"Monteil",
"Thierry",
"",
"CNRS - Université Montpellier 2"
]
] |
In a previous paper, we described the set of words that appear in the coding of smooth (resp. analytic) curves at arbitrarily small scales. The aim of this paper is to compute the complexity of those languages.
|
2407.04106
|
Asma Alkhaldi
|
Asma Alkhaldi, Raneem Alnajim, Layan Alabdullatef, Rawan Alyahya, Jun
Chen, Deyao Zhu, Ahmed Alsinan, Mohamed Elhoseiny
|
MiniGPT-Med: Large Language Model as a General Interface for Radiology
Diagnosis
| null | null | null | null |
cs.AI cs.CL cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Recent advancements in artificial intelligence (AI) have precipitated
significant breakthroughs in healthcare, particularly in refining diagnostic
procedures. However, previous studies have often been constrained to limited
functionalities. This study introduces MiniGPT-Med, a vision-language model
derived from large-scale language models and tailored for medical applications.
MiniGPT-Med demonstrates remarkable versatility across various imaging
modalities, including X-rays, CT scans, and MRIs, enhancing its utility. The
model is capable of performing tasks such as medical report generation, visual
question answering (VQA), and disease identification within medical imagery.
Its integrated processing of both image and textual clinical data markedly
improves diagnostic accuracy. Our empirical assessments confirm MiniGPT-Med's
superior performance in disease grounding, medical report generation, and VQA
benchmarks, representing a significant step towards reducing the gap in
assisting radiology practice. Furthermore, it achieves state-of-the-art
performance on medical report generation, outperforming the previous best model
by 19\% in accuracy. MiniGPT-Med promises to become a general interface for
radiology diagnoses, enhancing diagnostic efficiency across a wide range of
medical imaging applications.
|
[
{
"created": "Thu, 4 Jul 2024 18:21:10 GMT",
"version": "v1"
}
] |
2024-07-08
|
[
[
"Alkhaldi",
"Asma",
""
],
[
"Alnajim",
"Raneem",
""
],
[
"Alabdullatef",
"Layan",
""
],
[
"Alyahya",
"Rawan",
""
],
[
"Chen",
"Jun",
""
],
[
"Zhu",
"Deyao",
""
],
[
"Alsinan",
"Ahmed",
""
],
[
"Elhoseiny",
"Mohamed",
""
]
] |
Recent advancements in artificial intelligence (AI) have precipitated significant breakthroughs in healthcare, particularly in refining diagnostic procedures. However, previous studies have often been constrained to limited functionalities. This study introduces MiniGPT-Med, a vision-language model derived from large-scale language models and tailored for medical applications. MiniGPT-Med demonstrates remarkable versatility across various imaging modalities, including X-rays, CT scans, and MRIs, enhancing its utility. The model is capable of performing tasks such as medical report generation, visual question answering (VQA), and disease identification within medical imagery. Its integrated processing of both image and textual clinical data markedly improves diagnostic accuracy. Our empirical assessments confirm MiniGPT-Med's superior performance in disease grounding, medical report generation, and VQA benchmarks, representing a significant step towards reducing the gap in assisting radiology practice. Furthermore, it achieves state-of-the-art performance on medical report generation, outperforming the previous best model by 19\% in accuracy. MiniGPT-Med promises to become a general interface for radiology diagnoses, enhancing diagnostic efficiency across a wide range of medical imaging applications.
|