| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1305.7482
|
Uwe Aickelin
|
Xiyang Liu, Zhongjie Ren, Xiuling Chang, Haichang Gao, Uwe Aickelin
|
Draw a line on your PDA to authenticate
|
The sixth Symposium on Usable Privacy and Security, SOUPS2010, July
14-16, Redmond, WA, 7, 2010
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The trend toward a highly mobile workforce and the ubiquity of graphical
interfaces (such as the stylus and touch-screen) have enabled the emergence of
graphical authentication on Personal Digital Assistants (PDAs) [1]. However,
most current graphical password schemes are vulnerable to shoulder-surfing
[2,3], a known risk in which an attacker captures a password by direct
observation or by recording the authentication session. Several approaches have
been developed to deal with this problem, but they have significant usability
drawbacks, usually in the time and effort required to log in, making them less
suitable for authentication [4,8]. For example, logging in with CHC [4] is
time-consuming, and the scheme proposed by Hong [5] imposes complex text-memory
requirements. The scheme proposed by Weinshall [6] is not only intricate to log
in with, but its main claim of resisting shoulder-surfing has also been proven
false [7]. In this paper, we introduce a new graphical password scheme that
provides good resistance to shoulder-surfing while preserving desirable
usability.
|
[
{
"created": "Fri, 31 May 2013 16:41:47 GMT",
"version": "v1"
}
] |
2013-06-03
|
[
[
"Liu",
"Xiyang",
""
],
[
"Ren",
"Zhongjie",
""
],
[
"Chang",
"Xiuling",
""
],
[
"Gao",
"Haichang",
""
],
[
"Aickelin",
"Uwe",
""
]
] |
The trend toward a highly mobile workforce and the ubiquity of graphical interfaces (such as the stylus and touch-screen) have enabled the emergence of graphical authentication on Personal Digital Assistants (PDAs) [1]. However, most current graphical password schemes are vulnerable to shoulder-surfing [2,3], a known risk in which an attacker captures a password by direct observation or by recording the authentication session. Several approaches have been developed to deal with this problem, but they have significant usability drawbacks, usually in the time and effort required to log in, making them less suitable for authentication [4,8]. For example, logging in with CHC [4] is time-consuming, and the scheme proposed by Hong [5] imposes complex text-memory requirements. The scheme proposed by Weinshall [6] is not only intricate to log in with, but its main claim of resisting shoulder-surfing has also been proven false [7]. In this paper, we introduce a new graphical password scheme that provides good resistance to shoulder-surfing while preserving desirable usability.
|
2401.06772
|
Qisong Li
|
Sijia Wei, Wenwen Zhang, Qisong Li, Jiang Zhao
|
Semantic Parsing for Question Answering over Knowledge Graphs
|
arXiv admin note: text overlap with arXiv:2401.02968
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we introduce a novel method with graph-to-segment mapping for
question answering over knowledge graphs, which aids the understanding of
question utterances. This method centers on semantic parsing, a key approach for
interpreting these utterances. The challenges lie in comprehending implicit
entities, relationships, and complex constraints like time, ordinality, and
aggregation within questions, contextualized by the knowledge graph. Our
framework employs a combination of rule-based and neural-based techniques to
parse and construct highly accurate and comprehensive semantic segment
sequences. These sequences form semantic query graphs, effectively representing
question utterances. We approach question semantic parsing as a sequence
generation task, utilizing an encoder-decoder neural network to transform
natural language questions into semantic segments. Moreover, to enhance the
parsing of implicit entities and relations, we incorporate a graph neural
network that leverages the context of the knowledge graph to better understand
question representations. Our experimental evaluations on two datasets
demonstrate the effectiveness and superior performance of our model in semantic
parsing for question answering.
|
[
{
"created": "Fri, 1 Dec 2023 20:45:06 GMT",
"version": "v1"
},
{
"created": "Sat, 27 Jan 2024 20:56:20 GMT",
"version": "v2"
}
] |
2024-01-30
|
[
[
"Wei",
"Sijia",
""
],
[
"Zhang",
"Wenwen",
""
],
[
"Li",
"Qisong",
""
],
[
"Zhao",
"Jiang",
""
]
] |
In this paper, we introduce a novel method with graph-to-segment mapping for question answering over knowledge graphs, which aids the understanding of question utterances. This method centers on semantic parsing, a key approach for interpreting these utterances. The challenges lie in comprehending implicit entities, relationships, and complex constraints like time, ordinality, and aggregation within questions, contextualized by the knowledge graph. Our framework employs a combination of rule-based and neural-based techniques to parse and construct highly accurate and comprehensive semantic segment sequences. These sequences form semantic query graphs, effectively representing question utterances. We approach question semantic parsing as a sequence generation task, utilizing an encoder-decoder neural network to transform natural language questions into semantic segments. Moreover, to enhance the parsing of implicit entities and relations, we incorporate a graph neural network that leverages the context of the knowledge graph to better understand question representations. Our experimental evaluations on two datasets demonstrate the effectiveness and superior performance of our model in semantic parsing for question answering.
|
1807.04735
|
Maksims Dimitrijevs
|
Maksims Dimitrijevs, Abuzer Yakary{\i}lmaz
|
Probabilistic verification of all languages
|
20 pages
| null | null | null |
cs.CC cs.FL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present three protocols for verifying all languages: (i) For any unary
(binary) language, there is a log-space (linear-space) interactive proof system
(IPS); (ii) for any language, there is a constant-space weak-IPS (the
non-members may not be rejected with high probability); and, (iii) for any
language, there is a constant-space IPS with two provers where the verifier
reads the input once. Additionally, we show that uncountably many binary
(unary) languages can be verified in constant space and in linear (quadratic)
expected time.
|
[
{
"created": "Thu, 12 Jul 2018 17:20:27 GMT",
"version": "v1"
}
] |
2018-07-13
|
[
[
"Dimitrijevs",
"Maksims",
""
],
[
"Yakaryılmaz",
"Abuzer",
""
]
] |
We present three protocols for verifying all languages: (i) For any unary (binary) language, there is a log-space (linear-space) interactive proof system (IPS); (ii) for any language, there is a constant-space weak-IPS (the non-members may not be rejected with high probability); and, (iii) for any language, there is a constant-space IPS with two provers where the verifier reads the input once. Additionally, we show that uncountably many binary (unary) languages can be verified in constant space and in linear (quadratic) expected time.
|
1303.2017
|
Tunji Adebiyi
|
A. Adebiyi, Johnnes Arreymbi and Chris Imafidon
|
Security Assessment of Software Design using Neural Network
|
7 pages, 1 figure, 4 tables, (IJARAI) International Journal of
Advanced Research in Artificial Intelligence, Vol. 1(4), 2012, pp.1-7,
ISSN:2165-4069 (Online), ISSN:2165-4050 (Print)
|
(IJARAI) International Journal of Advanced Research in Artificial
Intelligence, Vol. 1(4), 2012, pp.1-7
| null | null |
cs.CR cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Security flaws in software applications today have been attributed mostly to
design flaws. With limited budget and time to release software into the market,
many developers often treat security as an afterthought. Previous research
shows that integrating security into software applications at a later stage of
the software development lifecycle (SDLC) is more costly than integrating it
during the early stages. To assist in integrating security early in the SDLC,
this paper investigates a new approach for assessing security during the design
phase using a neural network. Our findings show that by training a
back-propagation neural network to identify attack patterns, possible attacks
can be identified from the design scenarios presented to it. The performance
results of the neural network are presented in this paper.
|
[
{
"created": "Fri, 8 Mar 2013 15:09:53 GMT",
"version": "v1"
}
] |
2013-03-11
|
[
[
"Adebiyi",
"A.",
""
],
[
"Arreymbi",
"Johnnes",
""
],
[
"Imafidon",
"Chris",
""
]
] |
Security flaws in software applications today have been attributed mostly to design flaws. With limited budget and time to release software into the market, many developers often treat security as an afterthought. Previous research shows that integrating security into software applications at a later stage of the software development lifecycle (SDLC) is more costly than integrating it during the early stages. To assist in integrating security early in the SDLC, this paper investigates a new approach for assessing security during the design phase using a neural network. Our findings show that by training a back-propagation neural network to identify attack patterns, possible attacks can be identified from the design scenarios presented to it. The performance results of the neural network are presented in this paper.
|
2001.01458
|
Yingshi Chen
|
Yingshi Chen
|
Express Wavenet -- a low parameter optical neural network with random
shift wavelet pattern
|
5 pages,4 figures
| null |
10.1016/j.optcom.2020.126709
| null |
cs.LG cs.CV eess.IV stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Express Wavenet is an improved optical diffractive neural network. At each
layer, it uses a wavelet-like pattern to modulate the phase of optical waves.
For an input image with n^2 pixels, Express Wavenet reduces the number of
parameters from O(n^2) to O(n). It needs only one percent of the parameters,
yet its accuracy remains very high. On the MNIST dataset, it needs only 1229
parameters to reach an accuracy of 92%, while the standard optical network
needs 125440 parameters. The random shift wavelets show the characteristics of
the optical network more vividly, especially the vanishing-gradient phenomenon
in the training process. We present a modified expressway structure for this
problem. Experiments verified the effect of the random shift wavelet and the
expressway structure. Our work shows that optical diffractive networks can use
far fewer parameters than other neural networks. The source code is available
at https://github.com/closest-git/ONNet.
|
[
{
"created": "Mon, 6 Jan 2020 09:45:20 GMT",
"version": "v1"
}
] |
2021-02-03
|
[
[
"Chen",
"Yingshi",
""
]
] |
Express Wavenet is an improved optical diffractive neural network. At each layer, it uses a wavelet-like pattern to modulate the phase of optical waves. For an input image with n^2 pixels, Express Wavenet reduces the number of parameters from O(n^2) to O(n). It needs only one percent of the parameters, yet its accuracy remains very high. On the MNIST dataset, it needs only 1229 parameters to reach an accuracy of 92%, while the standard optical network needs 125440 parameters. The random shift wavelets show the characteristics of the optical network more vividly, especially the vanishing-gradient phenomenon in the training process. We present a modified expressway structure for this problem. Experiments verified the effect of the random shift wavelet and the expressway structure. Our work shows that optical diffractive networks can use far fewer parameters than other neural networks. The source code is available at https://github.com/closest-git/ONNet.
|
1711.08199
|
He Chen
|
Yifan Gu, He Chen, Yonghui Li, Branka Vucetic
|
Ultra-Reliable Short-Packet Communications: Half-Duplex or Full-Duplex
Relaying?
|
Accepted to appear in IEEE Wireless Communication Letters
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This letter analyzes and compares the performance of full-duplex relaying
(FDR) and half-duplex relaying (HDR) for ultra-reliable short-packet
communications. Specifically, we derive both approximate and asymptotic
closed-form expressions of the block error rate (BLER) for FDR and HDR using
short packets with finite blocklength codes. We define and attain a closed-form
expression of a critical BLER, which can be used to efficiently determine the
optimal duplex mode for ultra-reliable low-latency communication scenarios. Our
results reveal that FDR is more appealing for systems with relatively low
transmit power constraints, less stringent BLER requirements, and stronger
loop-interference suppression.
|
[
{
"created": "Wed, 22 Nov 2017 09:57:05 GMT",
"version": "v1"
}
] |
2017-11-23
|
[
[
"Gu",
"Yifan",
""
],
[
"Chen",
"He",
""
],
[
"Li",
"Yonghui",
""
],
[
"Vucetic",
"Branka",
""
]
] |
This letter analyzes and compares the performance of full-duplex relaying (FDR) and half-duplex relaying (HDR) for ultra-reliable short-packet communications. Specifically, we derive both approximate and asymptotic closed-form expressions of the block error rate (BLER) for FDR and HDR using short packets with finite blocklength codes. We define and attain a closed-form expression of a critical BLER, which can be used to efficiently determine the optimal duplex mode for ultra-reliable low-latency communication scenarios. Our results reveal that FDR is more appealing for systems with relatively low transmit power constraints, less stringent BLER requirements, and stronger loop-interference suppression.
|
1202.2981
|
Bogdan Alexandru Caprarescu
|
Bogdan Alexandru Caprarescu, Eva Kaslik, Dana Petcu
|
Theoretical Analysis and Tuning of Decentralized Probabilistic
Auto-Scaling
|
Submitted to Journal of Computer and System Sciences
| null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A major impediment towards the industrial adoption of decentralized
distributed systems comes from the difficulty to theoretically prove that these
systems exhibit the required behavior. In this paper, we use probability theory
to analyze a decentralized auto-scaling algorithm in which each node
probabilistically decides to scale in or out. We prove that, in the context of
dynamic workloads, the average load of the system is maintained within a
variation interval with a given probability, provided that the number of nodes
and the variation interval length are higher than certain bounds. The paper
also proposes numerical algorithms for approximating these minimum bounds.
|
[
{
"created": "Tue, 14 Feb 2012 10:21:51 GMT",
"version": "v1"
}
] |
2012-02-15
|
[
[
"Caprarescu",
"Bogdan Alexandru",
""
],
[
"Kaslik",
"Eva",
""
],
[
"Petcu",
"Dana",
""
]
] |
A major impediment towards the industrial adoption of decentralized distributed systems comes from the difficulty to theoretically prove that these systems exhibit the required behavior. In this paper, we use probability theory to analyze a decentralized auto-scaling algorithm in which each node probabilistically decides to scale in or out. We prove that, in the context of dynamic workloads, the average load of the system is maintained within a variation interval with a given probability, provided that the number of nodes and the variation interval length are higher than certain bounds. The paper also proposes numerical algorithms for approximating these minimum bounds.
|
1407.6877
|
Minati Mishra
|
Minati Mishra and M. C. Adhikary
|
An Easy yet Effective Method for Detecting Spatial Domain LSB
Steganography
|
12 pages; International Journal of Computer Science and Business
Informatics, Dec 2012
| null | null | null |
cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The digitization of images was a revolutionary step for the fields of
photography and image processing, as it made the editing of images much easier.
Image editing was not an issue as long as it was limited to corrective
procedures used to enhance the quality of an image, such as contrast
stretching, noise filtering, and sharpening. But it became a headache for many
fields when image editing became manipulative. Digital images have become an
easy target of tampering and forgery during the last few decades. Today, users
and editing specialists, equipped with easily available image editing software,
manipulate digital images with varied goals. Photojournalists often tamper with
photographs to give dramatic effect to their stories. Scientists and
researchers use this trick to get their work published. Patients' diagnoses are
misrepresented by manipulating medical imagery. Lawyers and politicians use
tampered images to sway the opinion of people or courts in their favor.
Terrorists and anti-social groups use manipulated stego images for secret
communication. In this paper, we present an effective method for detecting
spatial-domain steganography.
|
[
{
"created": "Fri, 25 Jul 2014 12:58:23 GMT",
"version": "v1"
}
] |
2014-07-28
|
[
[
"Mishra",
"Minati",
""
],
[
"Adhikary",
"M. C.",
""
]
] |
The digitization of images was a revolutionary step for the fields of photography and image processing, as it made the editing of images much easier. Image editing was not an issue as long as it was limited to corrective procedures used to enhance the quality of an image, such as contrast stretching, noise filtering, and sharpening. But it became a headache for many fields when image editing became manipulative. Digital images have become an easy target of tampering and forgery during the last few decades. Today, users and editing specialists, equipped with easily available image editing software, manipulate digital images with varied goals. Photojournalists often tamper with photographs to give dramatic effect to their stories. Scientists and researchers use this trick to get their work published. Patients' diagnoses are misrepresented by manipulating medical imagery. Lawyers and politicians use tampered images to sway the opinion of people or courts in their favor. Terrorists and anti-social groups use manipulated stego images for secret communication. In this paper, we present an effective method for detecting spatial-domain steganography.
|
1905.08723
|
Ting-Shuo Yo
|
Ting-Shuo Yo and Edwin de Jong
|
A comparison of evaluation methods in coevolution
|
8 pages, 7 figures, GECCO '07: Proceedings of the 9th annual
conference on Genetic and evolutionary computation
| null |
10.1145/1276958.1277060
| null |
cs.NE cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
In this research, we compare four different evaluation methods in coevolution
on the Majority Function problem. The size of the problem is selected such that
evaluation against all possible test cases is feasible. Two measures are used
for the comparisons, i.e., the objective fitness derived from evaluating
solutions against all test cases, and the objective fitness correlation (OFC),
which is defined as the correlation coefficient between subjective and
objective fitness. The results of our experiments suggest that a combination of
average score and weighted informativeness may provide a more accurate
evaluation in coevolution. In order to confirm this difference, a series of
t-tests on the preference between each pair of the evaluation methods is
performed. The resulting significance is affirmative, and the tests for the two
quality measures show similar preferences for the four evaluation methods. This
study is the first time OFC has actually been computed on a real problem.
Experiments on Majority Function problems with larger sizes and on Parity
problems are in progress, and their results will be added in the final version.
|
[
{
"created": "Tue, 21 May 2019 16:11:00 GMT",
"version": "v1"
}
] |
2019-05-22
|
[
[
"Yo",
"Ting-Shuo",
""
],
[
"de Jong",
"Edwin",
""
]
] |
In this research, we compare four different evaluation methods in coevolution on the Majority Function problem. The size of the problem is selected such that evaluation against all possible test cases is feasible. Two measures are used for the comparisons, i.e., the objective fitness derived from evaluating solutions against all test cases, and the objective fitness correlation (OFC), which is defined as the correlation coefficient between subjective and objective fitness. The results of our experiments suggest that a combination of average score and weighted informativeness may provide a more accurate evaluation in coevolution. In order to confirm this difference, a series of t-tests on the preference between each pair of the evaluation methods is performed. The resulting significance is affirmative, and the tests for the two quality measures show similar preferences for the four evaluation methods. This study is the first time OFC has actually been computed on a real problem. Experiments on Majority Function problems with larger sizes and on Parity problems are in progress, and their results will be added in the final version.
|
1802.09232
|
Diogo Luvizon
|
Diogo C. Luvizon and David Picard and Hedi Tabia
|
2D/3D Pose Estimation and Action Recognition using Multitask Deep
Learning
|
To appear in CVPR 2018
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Action recognition and human pose estimation are closely related, but both
problems are generally handled as distinct tasks in the literature. In this
work, we propose a multitask framework for joint 2D and 3D pose estimation
from still images and human action recognition from video sequences. We show
that a single architecture can be used to solve the two problems efficiently
and still achieve state-of-the-art results. Additionally, we demonstrate that
end-to-end optimization leads to significantly higher accuracy than separate
learning. The proposed architecture can be trained with data from different
categories simultaneously in a seamless way. The reported results on four
datasets (MPII, Human3.6M, Penn Action and NTU) demonstrate the effectiveness
of our method on the targeted tasks.
|
[
{
"created": "Mon, 26 Feb 2018 10:16:48 GMT",
"version": "v1"
},
{
"created": "Wed, 21 Mar 2018 13:39:45 GMT",
"version": "v2"
}
] |
2018-03-22
|
[
[
"Luvizon",
"Diogo C.",
""
],
[
"Picard",
"David",
""
],
[
"Tabia",
"Hedi",
""
]
] |
Action recognition and human pose estimation are closely related, but both problems are generally handled as distinct tasks in the literature. In this work, we propose a multitask framework for joint 2D and 3D pose estimation from still images and human action recognition from video sequences. We show that a single architecture can be used to solve the two problems efficiently and still achieve state-of-the-art results. Additionally, we demonstrate that end-to-end optimization leads to significantly higher accuracy than separate learning. The proposed architecture can be trained with data from different categories simultaneously in a seamless way. The reported results on four datasets (MPII, Human3.6M, Penn Action and NTU) demonstrate the effectiveness of our method on the targeted tasks.
|
2112.09925
|
Jinpeng Hu
|
Jinpeng Hu, Jianling Li, Zhihong Chen, Yaling Shen, Yan Song, Xiang
Wan, Tsung-Hui Chang
|
Word Graph Guided Summarization for Radiology Findings
|
11 pages, 6 figures, ACL2021 Findings
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Radiology reports play a critical role in communicating medical findings to
physicians. In each report, the impression section summarizes the essential
radiology findings. In clinical practice, writing the impression is highly
demanded yet time-consuming and prone to errors for radiologists. Therefore,
automatic impression generation has emerged as an attractive research direction
to facilitate such clinical practice. Existing studies mainly focused on
introducing salient word information into the general text summarization
framework to guide the selection of key content in radiology findings. However,
for this task, a model needs not only to capture the important words in the
findings but also to accurately describe their relations so as to generate
high-quality impressions. In this paper, we propose a novel method for
automatic impression generation, where a word graph is constructed from the
findings to record the critical words and their relations, and a Word Graph
guided Summarization model (WGSum) is designed to generate impressions with the
help of the word graph. Experimental results on two datasets, OpenI and
MIMIC-CXR, confirm the validity and effectiveness of our proposed approach,
which achieves state-of-the-art results on both datasets. Further experiments
are also conducted to analyze the impact of different graph designs on the
performance of our method.
|
[
{
"created": "Sat, 18 Dec 2021 13:20:18 GMT",
"version": "v1"
}
] |
2021-12-21
|
[
[
"Hu",
"Jinpeng",
""
],
[
"Li",
"Jianling",
""
],
[
"Chen",
"Zhihong",
""
],
[
"Shen",
"Yaling",
""
],
[
"Song",
"Yan",
""
],
[
"Wan",
"Xiang",
""
],
[
"Chang",
"Tsung-Hui",
""
]
] |
Radiology reports play a critical role in communicating medical findings to physicians. In each report, the impression section summarizes the essential radiology findings. In clinical practice, writing the impression is highly demanded yet time-consuming and prone to errors for radiologists. Therefore, automatic impression generation has emerged as an attractive research direction to facilitate such clinical practice. Existing studies mainly focused on introducing salient word information into the general text summarization framework to guide the selection of key content in radiology findings. However, for this task, a model needs not only to capture the important words in the findings but also to accurately describe their relations so as to generate high-quality impressions. In this paper, we propose a novel method for automatic impression generation, where a word graph is constructed from the findings to record the critical words and their relations, and a Word Graph guided Summarization model (WGSum) is designed to generate impressions with the help of the word graph. Experimental results on two datasets, OpenI and MIMIC-CXR, confirm the validity and effectiveness of our proposed approach, which achieves state-of-the-art results on both datasets. Further experiments are also conducted to analyze the impact of different graph designs on the performance of our method.
|
2309.12033
|
Bartosz W\'ojcik
|
Adrian Suwa{\l}a, Bartosz W\'ojcik, Magdalena Proszewska, Jacek Tabor,
Przemys{\l}aw Spurek, Marek \'Smieja
|
Face Identity-Aware Disentanglement in StyleGAN
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Conditional GANs are frequently used for manipulating the attributes of face
images, such as expression, hairstyle, pose, or age. Even though the
state-of-the-art models successfully modify the requested attributes, they
simultaneously modify other important characteristics of the image, such as a
person's identity. In this paper, we focus on solving this problem by
introducing PluGeN4Faces, a plugin to StyleGAN, which explicitly disentangles
face attributes from a person's identity. Our key idea is to perform training
on images retrieved from movie frames, where a given person appears in various
poses and with different attributes. By applying a type of contrastive loss, we
encourage the model to group images of the same person in similar regions of
latent space. Our experiments demonstrate that the modifications of face
attributes performed by PluGeN4Faces are significantly less invasive on the
remaining characteristics of the image than in the existing state-of-the-art
models.
|
[
{
"created": "Thu, 21 Sep 2023 12:54:09 GMT",
"version": "v1"
}
] |
2023-09-22
|
[
[
"Suwała",
"Adrian",
""
],
[
"Wójcik",
"Bartosz",
""
],
[
"Proszewska",
"Magdalena",
""
],
[
"Tabor",
"Jacek",
""
],
[
"Spurek",
"Przemysław",
""
],
[
"Śmieja",
"Marek",
""
]
] |
Conditional GANs are frequently used for manipulating the attributes of face images, such as expression, hairstyle, pose, or age. Even though the state-of-the-art models successfully modify the requested attributes, they simultaneously modify other important characteristics of the image, such as a person's identity. In this paper, we focus on solving this problem by introducing PluGeN4Faces, a plugin to StyleGAN, which explicitly disentangles face attributes from a person's identity. Our key idea is to perform training on images retrieved from movie frames, where a given person appears in various poses and with different attributes. By applying a type of contrastive loss, we encourage the model to group images of the same person in similar regions of latent space. Our experiments demonstrate that the modifications of face attributes performed by PluGeN4Faces are significantly less invasive on the remaining characteristics of the image than in the existing state-of-the-art models.
|
2311.06158
|
Jiazhan Feng
|
Jiazhan Feng, Ruochen Xu, Junheng Hao, Hiteshi Sharma, Yelong Shen,
Dongyan Zhao, Weizhu Chen
|
Language Models can be Logical Solvers
|
Preprint
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Logical reasoning is a fundamental aspect of human intelligence and a key
component of tasks like problem-solving and decision-making. Recent
advancements have enabled Large Language Models (LLMs) to potentially exhibit
reasoning capabilities, but complex logical reasoning remains a challenge. The
state-of-the-art solver-augmented language models use LLMs to parse natural
language logical questions into symbolic representations first and then adopt
external logical solvers to take in the symbolic representations and output the
answers. Despite their impressive performance, any parsing error will
inevitably cause the execution of the external logical solver to fail, leaving
the logical question unanswered. In this paper, we introduce LoGiPT, a novel
language model that directly emulates the reasoning processes of logical
solvers and bypasses parsing errors by learning strict adherence to solver
syntax and grammar. LoGiPT is fine-tuned on a newly
constructed instruction-tuning dataset derived from revealing and refining the
invisible reasoning process of deductive solvers. Experimental results on two
public deductive reasoning datasets demonstrate that LoGiPT outperforms
state-of-the-art solver-augmented LMs and few-shot prompting methods on
competitive LLMs like ChatGPT or GPT-4.
|
[
{
"created": "Fri, 10 Nov 2023 16:23:50 GMT",
"version": "v1"
}
] |
2023-11-13
|
[
[
"Feng",
"Jiazhan",
""
],
[
"Xu",
"Ruochen",
""
],
[
"Hao",
"Junheng",
""
],
[
"Sharma",
"Hiteshi",
""
],
[
"Shen",
"Yelong",
""
],
[
"Zhao",
"Dongyan",
""
],
[
"Chen",
"Weizhu",
""
]
] |
Logical reasoning is a fundamental aspect of human intelligence and a key component of tasks like problem-solving and decision-making. Recent advancements have enabled Large Language Models (LLMs) to potentially exhibit reasoning capabilities, but complex logical reasoning remains a challenge. The state-of-the-art solver-augmented language models use LLMs to parse natural language logical questions into symbolic representations first and then adopt external logical solvers to take in the symbolic representations and output the answers. Despite their impressive performance, any parsing error will inevitably cause the execution of the external logical solver to fail, leaving the logical question unanswered. In this paper, we introduce LoGiPT, a novel language model that directly emulates the reasoning processes of logical solvers and bypasses parsing errors by learning strict adherence to solver syntax and grammar. LoGiPT is fine-tuned on a newly constructed instruction-tuning dataset derived from revealing and refining the invisible reasoning process of deductive solvers. Experimental results on two public deductive reasoning datasets demonstrate that LoGiPT outperforms state-of-the-art solver-augmented LMs and few-shot prompting methods on competitive LLMs like ChatGPT or GPT-4.
|
2205.07147
|
Ion Stoica
|
Sarah Chasins, Alvin Cheung, Natacha Crooks, Ali Ghodsi, Ken Goldberg,
Joseph E. Gonzalez, Joseph M. Hellerstein, Michael I. Jordan, Anthony D.
Joseph, Michael W. Mahoney, Aditya Parameswaran, David Patterson, Raluca Ada
Popa, Koushik Sen, Scott Shenker, Dawn Song, Ion Stoica
|
The Sky Above The Clouds
|
35 pages
| null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Technology ecosystems often undergo significant transformations as they
mature. For example, telephony, the Internet, and PCs all started with a single
provider, but in the United States each is now served by a competitive market
that uses comprehensive and universal technology standards to provide
compatibility. This white paper presents our view on how the cloud ecosystem,
barely over fifteen years old, could evolve as it matures.
|
[
{
"created": "Sat, 14 May 2022 23:13:00 GMT",
"version": "v1"
}
] |
2022-05-17
|
[
[
"Chasins",
"Sarah",
""
],
[
"Cheung",
"Alvin",
""
],
[
"Crooks",
"Natacha",
""
],
[
"Ghodsi",
"Ali",
""
],
[
"Goldberg",
"Ken",
""
],
[
"Gonzalez",
"Joseph E.",
""
],
[
"Hellerstein",
"Joseph M.",
""
],
[
"Jordan",
"Michael I.",
""
],
[
"Joseph",
"Anthony D.",
""
],
[
"Mahoney",
"Michael W.",
""
],
[
"Parameswaran",
"Aditya",
""
],
[
"Patterson",
"David",
""
],
[
"Popa",
"Raluca Ada",
""
],
[
"Sen",
"Koushik",
""
],
[
"Shenker",
"Scott",
""
],
[
"Song",
"Dawn",
""
],
[
"Stoica",
"Ion",
""
]
] |
Technology ecosystems often undergo significant transformations as they mature. For example, telephony, the Internet, and PCs all started with a single provider, but in the United States each is now served by a competitive market that uses comprehensive and universal technology standards to provide compatibility. This white paper presents our view on how the cloud ecosystem, barely over fifteen years old, could evolve as it matures.
|
1709.01956
|
Yang He
|
Yang He, Margret Keuper, Bernt Schiele, Mario Fritz
|
Learning Dilation Factors for Semantic Segmentation of Street Scenes
|
GCPR2017
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Contextual information is crucial for semantic segmentation. However, finding
the optimal trade-off between keeping desired fine details and at the same time
providing sufficiently large receptive fields is non-trivial. This is even more
so when objects or classes present in an image vary significantly in size.
Dilated convolutions have proven valuable for semantic segmentation because
they allow the size of the receptive field to be increased without sacrificing
image resolution. However, in current state-of-the-art methods, dilation
parameters are hand-tuned and fixed. In this paper, we present an approach for
learning dilation parameters adaptively per channel, consistently improving
semantic segmentation results on street-scene datasets like Cityscapes and
Camvid.
|
[
{
"created": "Wed, 6 Sep 2017 18:19:10 GMT",
"version": "v1"
}
] |
2017-09-08
|
[
[
"He",
"Yang",
""
],
[
"Keuper",
"Margret",
""
],
[
"Schiele",
"Bernt",
""
],
[
"Fritz",
"Mario",
""
]
] |
Contextual information is crucial for semantic segmentation. However, finding the optimal trade-off between keeping desired fine details and at the same time providing sufficiently large receptive fields is non-trivial. This is even more so when objects or classes present in an image vary significantly in size. Dilated convolutions have proven valuable for semantic segmentation because they allow the size of the receptive field to be increased without sacrificing image resolution. However, in current state-of-the-art methods, dilation parameters are hand-tuned and fixed. In this paper, we present an approach for learning dilation parameters adaptively per channel, consistently improving semantic segmentation results on street-scene datasets like Cityscapes and Camvid.
|
2306.08765
|
Charles Assaad
|
Daria Bystrova, Charles K. Assaad, Julyan Arbel, Emilie Devijver, Eric
Gaussier, Wilfried Thuiller
|
Causal Discovery from Time Series with Hybrids of Constraint-Based and
Noise-Based Algorithms
|
Accepted in TMLR: https://openreview.net/forum?id=PGLbZpVk2n
| null | null | null |
cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Constraint-based methods and noise-based methods are two distinct families of
methods proposed for uncovering causal graphs from observational data. However,
both operate under strong assumptions that may be challenging to validate or
could be violated in real-world scenarios. In response to these challenges,
there is a growing interest in hybrid methods that amalgamate principles from
both methods, showing robustness to assumption violations. This paper
introduces a novel comprehensive framework for hybridizing constraint-based and
noise-based methods designed to uncover causal graphs from observational time
series. The framework is structured into two classes. The first class employs a
noise-based strategy to identify a super graph, containing the true graph,
followed by a constraint-based strategy to eliminate unnecessary edges. In the
second class, a constraint-based strategy is applied to identify a skeleton,
which is then oriented using a noise-based strategy. The paper provides
theoretical guarantees for each class under the condition that all assumptions
are satisfied, and it outlines some properties when assumptions are violated.
To validate the efficacy of the framework, two algorithms from each class are
experimentally tested on simulated data, realistic ecological data, and real
datasets sourced from diverse applications. Notably, two novel datasets related
to Information Technology monitoring are introduced within the set of
considered real datasets. The experimental results underscore the robustness
and effectiveness of the hybrid approaches across a broad spectrum of datasets.
|
[
{
"created": "Wed, 14 Jun 2023 22:27:26 GMT",
"version": "v1"
},
{
"created": "Tue, 30 Apr 2024 17:12:18 GMT",
"version": "v2"
}
] |
2024-05-01
|
[
[
"Bystrova",
"Daria",
""
],
[
"Assaad",
"Charles K.",
""
],
[
"Arbel",
"Julyan",
""
],
[
"Devijver",
"Emilie",
""
],
[
"Gaussier",
"Eric",
""
],
[
"Thuiller",
"Wilfried",
""
]
] |
Constraint-based methods and noise-based methods are two distinct families of methods proposed for uncovering causal graphs from observational data. However, both operate under strong assumptions that may be challenging to validate or could be violated in real-world scenarios. In response to these challenges, there is a growing interest in hybrid methods that amalgamate principles from both methods, showing robustness to assumption violations. This paper introduces a novel comprehensive framework for hybridizing constraint-based and noise-based methods designed to uncover causal graphs from observational time series. The framework is structured into two classes. The first class employs a noise-based strategy to identify a super graph, containing the true graph, followed by a constraint-based strategy to eliminate unnecessary edges. In the second class, a constraint-based strategy is applied to identify a skeleton, which is then oriented using a noise-based strategy. The paper provides theoretical guarantees for each class under the condition that all assumptions are satisfied, and it outlines some properties when assumptions are violated. To validate the efficacy of the framework, two algorithms from each class are experimentally tested on simulated data, realistic ecological data, and real datasets sourced from diverse applications. Notably, two novel datasets related to Information Technology monitoring are introduced within the set of considered real datasets. The experimental results underscore the robustness and effectiveness of the hybrid approaches across a broad spectrum of datasets.
|
2303.12032
|
Eamon Duede
|
Eamon Duede
|
The Representational Status of Deep Learning Models
|
19 pages
| null | null | null |
cs.AI cs.CY cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper aims to clarify the representational status of Deep Learning
Models (DLMs). While commonly referred to as 'representations', what this
entails is ambiguous due to a conflation of functional and relational
conceptions of representation. This paper argues that while DLMs represent
their targets in a relational sense, they are best understood as highly
idealized models. This result has immediate implications for explainable AI
(XAI) and directs philosophical attention toward examining the idealized nature
of DLM representations and their role in future scientific investigation.
|
[
{
"created": "Tue, 21 Mar 2023 17:19:35 GMT",
"version": "v1"
}
] |
2023-03-22
|
[
[
"Duede",
"Eamon",
""
]
] |
This paper aims to clarify the representational status of Deep Learning Models (DLMs). While commonly referred to as 'representations', what this entails is ambiguous due to a conflation of functional and relational conceptions of representation. This paper argues that while DLMs represent their targets in a relational sense, they are best understood as highly idealized models. This result has immediate implications for explainable AI (XAI) and directs philosophical attention toward examining the idealized nature of DLM representations and their role in future scientific investigation.
|
2406.08673
|
Zhilin Wang
|
Zhilin Wang, Yi Dong, Olivier Delalleau, Jiaqi Zeng, Gerald Shen,
Daniel Egert, Jimmy J. Zhang, Makesh Narsimhan Sreedhar, Oleksii Kuchaiev
|
HelpSteer2: Open-source dataset for training top-performing reward
models
| null | null | null | null |
cs.CL cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
High-quality preference datasets are essential for training reward models
that can effectively guide large language models (LLMs) in generating
high-quality responses aligned with human preferences. As LLMs become stronger
and better aligned, permissively licensed preference datasets, such as Open
Assistant, HH-RLHF, and HelpSteer need to be updated to remain effective for
reward modeling. Methods that distil preference data from proprietary LLMs such
as GPT-4 have restrictions on commercial usage imposed by model providers. To
improve upon both generated responses and attribute labeling quality, we
release HelpSteer2, a permissively licensed preference dataset (CC-BY-4.0).
Using a powerful internal base model trained on HelpSteer2, we are able to
achieve the SOTA score (92.0%) on Reward-Bench's primary dataset, outperforming
currently listed open and proprietary models, as of June 12th, 2024. Notably,
HelpSteer2 consists of only ten thousand response pairs, an order of magnitude
fewer than existing preference datasets (e.g., HH-RLHF), which makes it highly
efficient for training reward models. Our extensive experiments demonstrate
that reward models trained with HelpSteer2 are effective in aligning LLMs. In
particular, we propose SteerLM 2.0, a model alignment approach that can
effectively make use of the rich multi-attribute score predicted by our reward
models. HelpSteer2 is available at
https://huggingface.co/datasets/nvidia/HelpSteer2 and code is available at
https://github.com/NVIDIA/NeMo-Aligner
|
[
{
"created": "Wed, 12 Jun 2024 22:28:08 GMT",
"version": "v1"
}
] |
2024-06-14
|
[
[
"Wang",
"Zhilin",
""
],
[
"Dong",
"Yi",
""
],
[
"Delalleau",
"Olivier",
""
],
[
"Zeng",
"Jiaqi",
""
],
[
"Shen",
"Gerald",
""
],
[
"Egert",
"Daniel",
""
],
[
"Zhang",
"Jimmy J.",
""
],
[
"Sreedhar",
"Makesh Narsimhan",
""
],
[
"Kuchaiev",
"Oleksii",
""
]
] |
High-quality preference datasets are essential for training reward models that can effectively guide large language models (LLMs) in generating high-quality responses aligned with human preferences. As LLMs become stronger and better aligned, permissively licensed preference datasets, such as Open Assistant, HH-RLHF, and HelpSteer need to be updated to remain effective for reward modeling. Methods that distil preference data from proprietary LLMs such as GPT-4 have restrictions on commercial usage imposed by model providers. To improve upon both generated responses and attribute labeling quality, we release HelpSteer2, a permissively licensed preference dataset (CC-BY-4.0). Using a powerful internal base model trained on HelpSteer2, we are able to achieve the SOTA score (92.0%) on Reward-Bench's primary dataset, outperforming currently listed open and proprietary models, as of June 12th, 2024. Notably, HelpSteer2 consists of only ten thousand response pairs, an order of magnitude fewer than existing preference datasets (e.g., HH-RLHF), which makes it highly efficient for training reward models. Our extensive experiments demonstrate that reward models trained with HelpSteer2 are effective in aligning LLMs. In particular, we propose SteerLM 2.0, a model alignment approach that can effectively make use of the rich multi-attribute score predicted by our reward models. HelpSteer2 is available at https://huggingface.co/datasets/nvidia/HelpSteer2 and code is available at https://github.com/NVIDIA/NeMo-Aligner
|
1003.5891
|
Sandip Rakshit
|
Sandip Rakshit, Subhadip Basu
|
Recognition of Handwritten Roman Script Using Tesseract Open source OCR
Engine
|
Proc. National Conference on NAQC (2008) 141-145
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the present work, we have used Tesseract 2.01 open source Optical
Character Recognition (OCR) Engine under Apache License 2.0 for recognition of
handwriting samples of lower case Roman script. Handwritten isolated and
free-flow text samples were collected from multiple users. Tesseract is trained
to recognize user-specific handwriting samples of both the categories of
document pages. On a single user model, the system is trained with 1844
isolated handwritten characters and the performance is tested on 1133
characters taken from the test set. The overall character-level accuracy of
the system is observed as 83.5%. The system fails to segment 5.56% characters
and erroneously classifies 10.94% characters.
|
[
{
"created": "Tue, 30 Mar 2010 18:35:37 GMT",
"version": "v1"
}
] |
2010-03-31
|
[
[
"Rakshit",
"Sandip",
""
],
[
"Basu",
"Subhadip",
""
]
] |
In the present work, we have used Tesseract 2.01 open source Optical Character Recognition (OCR) Engine under Apache License 2.0 for recognition of handwriting samples of lower case Roman script. Handwritten isolated and free-flow text samples were collected from multiple users. Tesseract is trained to recognize user-specific handwriting samples of both the categories of document pages. On a single user model, the system is trained with 1844 isolated handwritten characters and the performance is tested on 1133 characters taken from the test set. The overall character-level accuracy of the system is observed as 83.5%. The system fails to segment 5.56% characters and erroneously classifies 10.94% characters.
|
2009.10697
|
Leopold Cambier
|
L\'eopold Cambier, Yizhou Qian, Eric Darve
|
TaskTorrent: a Lightweight Distributed Task-Based Runtime System in C++
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present TaskTorrent, a lightweight distributed task-based runtime in C++.
TaskTorrent uses a parametrized task graph to express the task DAG, and
one-sided active messages to trigger remote tasks asynchronously. As a result
the task DAG is completely distributed and discovered in parallel. It is a
C++14 library and only depends on MPI. We explain the API and the
implementation. We perform a series of benchmarks against StarPU and ScaLAPACK.
Micro benchmarks show it has a minimal overhead compared to other solutions. We
then apply it to two large linear algebra problems. TaskTorrent scales very
well to thousands of cores, exhibiting good weak and strong scalings.
|
[
{
"created": "Tue, 22 Sep 2020 17:16:11 GMT",
"version": "v1"
}
] |
2020-09-23
|
[
[
"Cambier",
"Léopold",
""
],
[
"Qian",
"Yizhou",
""
],
[
"Darve",
"Eric",
""
]
] |
We present TaskTorrent, a lightweight distributed task-based runtime in C++. TaskTorrent uses a parametrized task graph to express the task DAG, and one-sided active messages to trigger remote tasks asynchronously. As a result the task DAG is completely distributed and discovered in parallel. It is a C++14 library and only depends on MPI. We explain the API and the implementation. We perform a series of benchmarks against StarPU and ScaLAPACK. Micro benchmarks show it has a minimal overhead compared to other solutions. We then apply it to two large linear algebra problems. TaskTorrent scales very well to thousands of cores, exhibiting good weak and strong scalings.
|
2306.15419
|
Tianxiang Ma
|
Tianxiang Ma, Kang Zhao, Jianxin Sun, Yingya Zhang, Jing Dong
|
Freestyle 3D-Aware Portrait Synthesis Based on Compositional Generative
Priors
|
project website: https://tianxiangma.github.io/FF3D
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Efficiently generating a freestyle 3D portrait with high quality and
3D-consistency is a promising yet challenging task. The portrait styles
generated by most existing methods are usually restricted by their 3D
generators, which are trained on specific facial datasets such as FFHQ. To
obtain diverse 3D portraits, one can build a large-scale multi-style database
to retrain a 3D-aware generator, or use an off-the-shelf tool to perform the
style translation. However, the former is time-consuming due to the data
collection and training process, while the latter may destroy the multi-view
consistency. To tackle
this problem, we propose a novel text-driven 3D-aware portrait synthesis
framework that can generate out-of-distribution portrait styles. Specifically,
for a given portrait style prompt, we first composite two generative priors, a
3D-aware GAN generator and a text-guided image editor, to quickly construct a
few-shot stylized portrait set. Then we map the special style domain of this
set to our proposed 3D latent feature generator and obtain a 3D representation
containing the given style information. Finally we use a pre-trained 3D
renderer to generate view-consistent stylized portraits from the 3D
representation. Extensive experimental results show that our method is capable
of synthesizing high-quality 3D portraits with specified styles in a few
minutes, outperforming the state-of-the-art.
|
[
{
"created": "Tue, 27 Jun 2023 12:23:04 GMT",
"version": "v1"
},
{
"created": "Wed, 28 Jun 2023 07:20:58 GMT",
"version": "v2"
},
{
"created": "Sun, 24 Dec 2023 08:35:38 GMT",
"version": "v3"
}
] |
2023-12-27
|
[
[
"Ma",
"Tianxiang",
""
],
[
"Zhao",
"Kang",
""
],
[
"Sun",
"Jianxin",
""
],
[
"Zhang",
"Yingya",
""
],
[
"Dong",
"Jing",
""
]
] |
Efficiently generating a freestyle 3D portrait with high quality and 3D-consistency is a promising yet challenging task. The portrait styles generated by most existing methods are usually restricted by their 3D generators, which are trained on specific facial datasets such as FFHQ. To obtain diverse 3D portraits, one can build a large-scale multi-style database to retrain a 3D-aware generator, or use an off-the-shelf tool to perform the style translation. However, the former is time-consuming due to the data collection and training process, while the latter may destroy the multi-view consistency. To tackle this problem, we propose a novel text-driven 3D-aware portrait synthesis framework that can generate out-of-distribution portrait styles. Specifically, for a given portrait style prompt, we first composite two generative priors, a 3D-aware GAN generator and a text-guided image editor, to quickly construct a few-shot stylized portrait set. Then we map the special style domain of this set to our proposed 3D latent feature generator and obtain a 3D representation containing the given style information. Finally we use a pre-trained 3D renderer to generate view-consistent stylized portraits from the 3D representation. Extensive experimental results show that our method is capable of synthesizing high-quality 3D portraits with specified styles in a few minutes, outperforming the state-of-the-art.
|
2206.14337
|
Jinyoung Park
|
Jinyoung Park, Seongjun Yun, Hyeonjin Park, Jaewoo Kang, Jisu Jeong,
Kyung-Min Kim, Jung-woo Ha, Hyunwoo J. Kim
|
Deformable Graph Transformer
|
16 pages, 3 figures
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Transformer-based models have recently shown success in representation
learning on graph-structured data beyond natural language processing and
computer vision. However, the success is limited to small-scale graphs due to
the drawbacks of full dot-product attention on graphs such as the quadratic
complexity with respect to the number of nodes and message aggregation from
enormous irrelevant nodes. To address these issues, we propose Deformable Graph
Transformer (DGT) that performs sparse attention via dynamically sampled
relevant nodes for efficiently handling large-scale graphs with a linear
complexity in the number of nodes. Specifically, our framework first constructs
multiple node sequences with various criteria to consider both structural and
semantic proximity. Then, combining with our learnable Katz Positional
Encodings, the sparse attention is applied to the node sequences for learning
node representations with a significantly reduced computational cost. Extensive
experiments demonstrate that our DGT achieves state-of-the-art performance on 7
graph benchmark datasets with 2.5 - 449 times less computational cost compared
to transformer-based graph models with full attention.
|
[
{
"created": "Wed, 29 Jun 2022 00:23:25 GMT",
"version": "v1"
},
{
"created": "Tue, 4 Oct 2022 03:13:03 GMT",
"version": "v2"
}
] |
2022-10-05
|
[
[
"Park",
"Jinyoung",
""
],
[
"Yun",
"Seongjun",
""
],
[
"Park",
"Hyeonjin",
""
],
[
"Kang",
"Jaewoo",
""
],
[
"Jeong",
"Jisu",
""
],
[
"Kim",
"Kyung-Min",
""
],
[
"Ha",
"Jung-woo",
""
],
[
"Kim",
"Hyunwoo J.",
""
]
] |
Transformer-based models have recently shown success in representation learning on graph-structured data beyond natural language processing and computer vision. However, the success is limited to small-scale graphs due to the drawbacks of full dot-product attention on graphs such as the quadratic complexity with respect to the number of nodes and message aggregation from enormous irrelevant nodes. To address these issues, we propose Deformable Graph Transformer (DGT) that performs sparse attention via dynamically sampled relevant nodes for efficiently handling large-scale graphs with a linear complexity in the number of nodes. Specifically, our framework first constructs multiple node sequences with various criteria to consider both structural and semantic proximity. Then, combining with our learnable Katz Positional Encodings, the sparse attention is applied to the node sequences for learning node representations with a significantly reduced computational cost. Extensive experiments demonstrate that our DGT achieves state-of-the-art performance on 7 graph benchmark datasets with 2.5 - 449 times less computational cost compared to transformer-based graph models with full attention.
|
2110.02068
|
Lukas Kondmann
|
Lukas Kondmann, Aysim Toker, Sudipan Saha, Bernhard Sch\"olkopf, Laura
Leal-Taix\'e, Xiao Xiang Zhu
|
Spatial Context Awareness for Unsupervised Change Detection in Optical
Satellite Images
|
Submitted to IEEE Transactions on Geoscience and Remote Sensing (IEEE
TGRS)
| null |
10.1109/TGRS.2021.3130842
| null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Detecting changes on the ground in multitemporal Earth observation data is
one of the key problems in remote sensing. In this paper, we introduce Sibling
Regression for Optical Change detection (SiROC), an unsupervised method for
change detection in optical satellite images with medium and high resolution.
SiROC is a spatial context-based method that models a pixel as a linear
combination of its distant neighbors. It uses this model to analyze differences
in the pixel and its spatial context-based predictions in subsequent time
periods for change detection. We combine this spatial context-based change
detection with ensembling over mutually exclusive neighborhoods and
transitioning from pixel to object-level changes with morphological operations.
SiROC achieves competitive performance for change detection with
medium-resolution Sentinel-2 and high-resolution Planetscope imagery on four
datasets. Besides accurate predictions without the need for training, SiROC
also provides a well-calibrated uncertainty of its predictions. This makes the
method especially useful in conjunction with deep-learning based methods for
applications such as pseudo-labeling.
|
[
{
"created": "Tue, 5 Oct 2021 14:13:48 GMT",
"version": "v1"
}
] |
2022-05-04
|
[
[
"Kondmann",
"Lukas",
""
],
[
"Toker",
"Aysim",
""
],
[
"Saha",
"Sudipan",
""
],
[
"Schölkopf",
"Bernhard",
""
],
[
"Leal-Taixé",
"Laura",
""
],
[
"Zhu",
"Xiao Xiang",
""
]
] |
Detecting changes on the ground in multitemporal Earth observation data is one of the key problems in remote sensing. In this paper, we introduce Sibling Regression for Optical Change detection (SiROC), an unsupervised method for change detection in optical satellite images with medium and high resolution. SiROC is a spatial context-based method that models a pixel as a linear combination of its distant neighbors. It uses this model to analyze differences in the pixel and its spatial context-based predictions in subsequent time periods for change detection. We combine this spatial context-based change detection with ensembling over mutually exclusive neighborhoods and transitioning from pixel to object-level changes with morphological operations. SiROC achieves competitive performance for change detection with medium-resolution Sentinel-2 and high-resolution Planetscope imagery on four datasets. Besides accurate predictions without the need for training, SiROC also provides a well-calibrated uncertainty of its predictions. This makes the method especially useful in conjunction with deep-learning based methods for applications such as pseudo-labeling.
|
2208.09948
|
Ashwin Maran
|
Jin-Yi Cai, Ashwin Maran
|
Counting Cycles on Planar Graphs in Subexponential Time
|
28 pages, 6 figures, COCOON 2022
| null | null | null |
cs.DS math.CO
|
http://creativecommons.org/licenses/by/4.0/
|
We study the problem of counting all cycles or self-avoiding walks (SAWs) on
triangulated planar graphs. We present a subexponential $2^{O(\sqrt{n})}$ time
algorithm for this counting problem. Among the technical ingredients used in
this algorithm are the planar separator theorem and a delicate analysis using
pairs of Motzkin paths and Motzkin numbers. We can then adapt this algorithm to
uniformly sample SAWs, in subexponential time. Our work is motivated by the
problem of gerrymandered districting maps.
|
[
{
"created": "Sun, 21 Aug 2022 19:00:33 GMT",
"version": "v1"
}
] |
2022-08-23
|
[
[
"Cai",
"Jin-Yi",
""
],
[
"Maran",
"Ashwin",
""
]
] |
We study the problem of counting all cycles or self-avoiding walks (SAWs) on triangulated planar graphs. We present a subexponential $2^{O(\sqrt{n})}$ time algorithm for this counting problem. Among the technical ingredients used in this algorithm are the planar separator theorem and a delicate analysis using pairs of Motzkin paths and Motzkin numbers. We can then adapt this algorithm to uniformly sample SAWs, in subexponential time. Our work is motivated by the problem of gerrymandered districting maps.
|
1910.14528
|
Shu Jiang
|
Shu Jiang, Rui Wang, Zuchao Li, Masao Utiyama, Kehai Chen, Eiichiro
Sumita, Hai Zhao, Bao-liang Lu
|
Document-level Neural Machine Translation with Associated Memory Network
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Standard neural machine translation (NMT) is built on the assumption that
translation is independent of the document-level context. Most existing
document-level NMT approaches settle for a smattering of global
document-level information, while this work focuses on exploiting detailed
document-level context in terms of a memory network. The capacity of the
memory network to detect the part of the memory most relevant to the current
sentence provides a natural solution to model the rich document-level context.
proposed document-aware memory network is implemented to enhance the
Transformer NMT baseline. Experiments on several tasks show that the proposed
method significantly improves the NMT performance over strong Transformer
baselines and other related studies.
|
[
{
"created": "Thu, 31 Oct 2019 15:14:54 GMT",
"version": "v1"
},
{
"created": "Tue, 24 Aug 2021 10:14:20 GMT",
"version": "v2"
}
] |
2021-08-25
|
[
[
"Jiang",
"Shu",
""
],
[
"Wang",
"Rui",
""
],
[
"Li",
"Zuchao",
""
],
[
"Utiyama",
"Masao",
""
],
[
"Chen",
"Kehai",
""
],
[
"Sumita",
"Eiichiro",
""
],
[
"Zhao",
"Hai",
""
],
[
"Lu",
"Bao-liang",
""
]
] |
Standard neural machine translation (NMT) is built on the assumption that translation is independent of the document-level context. Most existing document-level NMT approaches settle for a smattering of global document-level information, while this work focuses on exploiting detailed document-level context in terms of a memory network. The capacity of the memory network to detect the part of the memory most relevant to the current sentence provides a natural solution to model the rich document-level context. In this work, the proposed document-aware memory network is implemented to enhance the Transformer NMT baseline. Experiments on several tasks show that the proposed method significantly improves the NMT performance over strong Transformer baselines and other related studies.
|
1711.04898
|
Eun-jin Kim
|
Eun-jin Kim and Ismail Movahedi
|
Effect of enhanced dissipation by shear flows on transient relaxation
and probability density function in two dimensions
|
26 pages, 5 figures
| null |
10.1063/1.5003014
| null |
cs.IT math.IT physics.flu-dyn physics.plasm-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We report a non-perturbative study of the effects of shear flows on
turbulence reduction in a decaying turbulence in two dimensions. By considering
different initial power spectra and shear flows (zonal flows, combined zonal
flows and streamers), we demonstrate how shear flows rapidly generate small
scales, leading to a fast damping of turbulence amplitude. In particular, a
double exponential decrease in turbulence amplitude is shown to occur due to an
exponential increase in wavenumber. The scaling of the effective dissipation
time scale $\tau_{e}$, previously taken to be a hybrid time scale $\tau_{e}
\propto \tau_{\Omega}^{2/3} \tau_{\eta}$, is shown to depend on the type of
shear flow as well as the initial power spectrum. Here,
$\tau_{\Omega}$ and $\tau_{\eta}$ are shearing and molecular diffusion times,
respectively. Furthermore, we present time-dependent Probability Density
Functions (PDFs) and discuss the effect of enhanced dissipation on PDFs and a
dynamical time scale $\tau(t)$, which represents the time scale over which a
system passes through statistically different states.
|
[
{
"created": "Tue, 14 Nov 2017 01:23:09 GMT",
"version": "v1"
}
] |
2017-12-05
|
[
[
"Kim",
"Eun-jin",
""
],
[
"Movahedi",
"Ismail",
""
]
] |
We report a non-perturbative study of the effects of shear flows on turbulence reduction in a decaying turbulence in two dimensions. By considering different initial power spectra and shear flows (zonal flows, combined zonal flows and streamers), we demonstrate how shear flows rapidly generate small scales, leading to a fast damping of turbulence amplitude. In particular, a double exponential decrease in turbulence amplitude is shown to occur due to an exponential increase in wavenumber. The scaling of the effective dissipation time scale $\tau_{e}$, previously taken to be a hybrid time scale $\tau_{e} \propto \tau_{\Omega}^{2/3} \tau_{\eta}$, is shown to depend on the type of shear flow as well as the initial power spectrum. Here, $\tau_{\Omega}$ and $\tau_{\eta}$ are shearing and molecular diffusion times, respectively. Furthermore, we present time-dependent Probability Density Functions (PDFs) and discuss the effect of enhanced dissipation on PDFs and a dynamical time scale $\tau(t)$, which represents the time scale over which a system passes through statistically different states.
|
2010.04977
|
Gang Chen
|
Gang Chen, Wei Dong, Xinjun Sheng, Xiangyang Zhu, Han Ding
|
An Active Sense and Avoid System for Flying Robots in Dynamic
Environments
|
Accepted by IEEE Transactions on Mechatronics on 27 Jan 2021
| null |
10.1109/TMECH.2021.3060511
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper investigates a novel active-sensing-based obstacle avoidance
paradigm for flying robots in dynamic environments. Instead of fusing multiple
sensors to enlarge the field of view (FOV), we introduce an alternative
approach that utilizes a stereo camera with an independent rotational DOF to
sense the obstacles actively. In particular, the sensing direction is planned
heuristically by multiple objectives, including tracking dynamic obstacles,
observing the heading direction, and exploring the previously unseen area. With
the sensing result, a flight path is then planned based on real-time sampling
and uncertainty-aware collision checking in the state space, which constitutes
an active sense and avoid (ASAA) system. Experiments in both simulation and the
real world demonstrate that this system can well cope with dynamic obstacles
and abrupt goal direction changes. Since only one stereo camera is utilized,
this system provides a low-cost and effective approach to overcome the FOV
limitation in visual navigation.
|
[
{
"created": "Sat, 10 Oct 2020 11:36:56 GMT",
"version": "v1"
},
{
"created": "Wed, 17 Feb 2021 15:00:59 GMT",
"version": "v2"
}
] |
2021-02-18
|
[
[
"Chen",
"Gang",
""
],
[
"Dong",
"Wei",
""
],
[
"Sheng",
"Xinjun",
""
],
[
"Zhu",
"Xiangyang",
""
],
[
"Ding",
"Han",
""
]
] |
This paper investigates a novel active-sensing-based obstacle avoidance paradigm for flying robots in dynamic environments. Instead of fusing multiple sensors to enlarge the field of view (FOV), we introduce an alternative approach that utilizes a stereo camera with an independent rotational DOF to sense the obstacles actively. In particular, the sensing direction is planned heuristically by multiple objectives, including tracking dynamic obstacles, observing the heading direction, and exploring the previously unseen area. With the sensing result, a flight path is then planned based on real-time sampling and uncertainty-aware collision checking in the state space, which constitutes an active sense and avoid (ASAA) system. Experiments in both simulation and the real world demonstrate that this system can well cope with dynamic obstacles and abrupt goal direction changes. Since only one stereo camera is utilized, this system provides a low-cost and effective approach to overcome the FOV limitation in visual navigation.
|
1811.12373
|
Ke Li
|
Ke Li, Tianhao Zhang, Jitendra Malik
|
Diverse Image Synthesis from Semantic Layouts via Conditional IMLE
|
18 pages, 16 figures; IEEE International Conference on Computer
Vision (ICCV), 2019
| null | null | null |
cs.CV cs.GR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most existing methods for conditional image synthesis are only able to
generate a single plausible image for any given input, or at best a fixed
number of plausible images. In this paper, we focus on the problem of
generating images from semantic segmentation maps and present a simple new
method that can generate an arbitrary number of images with diverse appearance
for the same semantic layout. Unlike most existing approaches which adopt the
GAN framework, our method is based on the recently introduced Implicit Maximum
Likelihood Estimation (IMLE) framework. Compared to the leading approach, our
method is able to generate more diverse images while producing fewer artifacts
despite using the same architecture. The learned latent space also has sensible
structure despite the lack of supervision that encourages such behaviour.
Videos and code are available at
https://people.eecs.berkeley.edu/~ke.li/projects/imle/scene_layouts/.
|
[
{
"created": "Thu, 29 Nov 2018 18:36:00 GMT",
"version": "v1"
},
{
"created": "Thu, 29 Aug 2019 17:54:53 GMT",
"version": "v2"
}
] |
2019-08-30
|
[
[
"Li",
"Ke",
""
],
[
"Zhang",
"Tianhao",
""
],
[
"Malik",
"Jitendra",
""
]
] |
Most existing methods for conditional image synthesis are only able to generate a single plausible image for any given input, or at best a fixed number of plausible images. In this paper, we focus on the problem of generating images from semantic segmentation maps and present a simple new method that can generate an arbitrary number of images with diverse appearance for the same semantic layout. Unlike most existing approaches which adopt the GAN framework, our method is based on the recently introduced Implicit Maximum Likelihood Estimation (IMLE) framework. Compared to the leading approach, our method is able to generate more diverse images while producing fewer artifacts despite using the same architecture. The learned latent space also has sensible structure despite the lack of supervision that encourages such behaviour. Videos and code are available at https://people.eecs.berkeley.edu/~ke.li/projects/imle/scene_layouts/.
|
1708.01461
|
Hamid Hoorfar
|
Hamid Hoorfar and Alireza Bagheri
|
A Linear-time Algorithm for Orthogonal Watchman Route Problem with
Minimum Bends
| null | null | null | null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Given an orthogonal polygon $ P $ with $ n $ vertices, the goal of the
watchman route problem is to find a path $ S $ of minimum length in $ P $
such that every point of the polygon $ P $ is visible from at least one point
of $ S $. In other words, in the watchman route problem we must compute a
shortest watchman route inside a simple polygon of $ n $ vertices such that
all points interior to the polygon and on its boundary are visible from at
least one point on the route. If the route and the polygon are orthogonal,
the problem is called the orthogonal watchman route problem. One goal of this
problem is to find an orthogonal path with as few bends as possible. We
present a linear-time algorithm for the orthogonal watchman route problem, in
which the given polygon is monotone. Our algorithm can also be used for the
problem on simple orthogonal polygons $ P $ for which the dual graph induced
by the vertical decomposition of $ P $ is a path, which is called a path
polygon.
|
[
{
"created": "Fri, 4 Aug 2017 11:49:52 GMT",
"version": "v1"
}
] |
2017-08-07
|
[
[
"Hoorfar",
"Hamid",
""
],
[
"Bagheri",
"Alireza",
""
]
] |
Given an orthogonal polygon $ P $ with $ n $ vertices, the goal of the watchman route problem is to find a path $ S $ of minimum length in $ P $ such that every point of the polygon $ P $ is visible from at least one point of $ S $. In other words, in the watchman route problem we must compute a shortest watchman route inside a simple polygon of $ n $ vertices such that all points interior to the polygon and on its boundary are visible from at least one point on the route. If the route and the polygon are orthogonal, the problem is called the orthogonal watchman route problem. One goal of this problem is to find an orthogonal path with as few bends as possible. We present a linear-time algorithm for the orthogonal watchman route problem, in which the given polygon is monotone. Our algorithm can also be used for the problem on simple orthogonal polygons $ P $ for which the dual graph induced by the vertical decomposition of $ P $ is a path, which is called a path polygon.
|
2405.14278
|
Zhaorui Tan
|
Kai Yao, Zhaorui Tan, Zixian Su, Xi Yang, Jie Sun, Kaizhu Huang
|
SCMix: Stochastic Compound Mixing for Open Compound Domain Adaptation in
Semantic Segmentation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Open compound domain adaptation (OCDA) aims to transfer knowledge from a
labeled source domain to a mix of unlabeled homogeneous compound target domains
while generalizing to open unseen domains. Existing OCDA methods solve the
intra-domain gaps by a divide-and-conquer strategy, which divides the problem
into several individual and parallel domain adaptation (DA) tasks. Such
approaches often contain multiple sub-networks or stages, which may constrain
the model's performance. In this work, starting from the general DA theory, we
establish the generalization bound for the setting of OCDA. Built upon this, we
argue that conventional OCDA approaches may substantially underestimate the
inherent variance inside the compound target domains for model generalization.
We subsequently present Stochastic Compound Mixing (SCMix), an augmentation
strategy with the primary objective of mitigating the divergence between source
and mixed target distributions. We provide theoretical analysis to substantiate
the superiority of SCMix and prove that the previous methods are sub-groups of
our methods. Extensive experiments show that our method attains a lower
empirical risk on OCDA semantic segmentation tasks, thus supporting our
theories. Combined with a transformer architecture, SCMix achieves a notable
performance boost over the SoTA results.
|
[
{
"created": "Thu, 23 May 2024 07:53:10 GMT",
"version": "v1"
}
] |
2024-05-24
|
[
[
"Yao",
"Kai",
""
],
[
"Tan",
"Zhaorui",
""
],
[
"Su",
"Zixian",
""
],
[
"Yang",
"Xi",
""
],
[
"Sun",
"Jie",
""
],
[
"Huang",
"Kaizhu",
""
]
] |
Open compound domain adaptation (OCDA) aims to transfer knowledge from a labeled source domain to a mix of unlabeled homogeneous compound target domains while generalizing to open unseen domains. Existing OCDA methods solve the intra-domain gaps by a divide-and-conquer strategy, which divides the problem into several individual and parallel domain adaptation (DA) tasks. Such approaches often contain multiple sub-networks or stages, which may constrain the model's performance. In this work, starting from the general DA theory, we establish the generalization bound for the setting of OCDA. Built upon this, we argue that conventional OCDA approaches may substantially underestimate the inherent variance inside the compound target domains for model generalization. We subsequently present Stochastic Compound Mixing (SCMix), an augmentation strategy with the primary objective of mitigating the divergence between source and mixed target distributions. We provide theoretical analysis to substantiate the superiority of SCMix and prove that the previous methods are sub-groups of our methods. Extensive experiments show that our method attains a lower empirical risk on OCDA semantic segmentation tasks, thus supporting our theories. Combined with a transformer architecture, SCMix achieves a notable performance boost over the SoTA results.
|
1403.5524
|
Brendan McLaughlin Dr
|
Brendan M. McLaughlin and Connor P. Ballance
|
Petascale computations for Large-scale Atomic and Molecular collisions
|
14 pages, 5 figures, 3 tables, Chapter in: Workshop on Sustained
Simulated Performance 2013, Published by Springer, 2014, edited by Michael
Resch, Yevgeniya Kovalenko, Eric Focht, Wolfgang Bez and Hiroaki Kobaysahi
| null | null | null |
cs.DC physics.atom-ph physics.comp-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Petaflop architectures are currently being utilized efficiently to perform
large scale computations in Atomic, Molecular and Optical Collisions. We solve
the Schroedinger or Dirac equation for the appropriate collision problem using
the R-matrix or R-matrix with pseudo-states approach. We briefly outline the
parallel methodology used and implemented for the current suite of Breit-Pauli
and DARC codes. Various examples are shown of our theoretical results compared
with those obtained from Synchrotron Radiation facilities and from Satellite
observations. We also indicate future directions and implementation of the
R-matrix codes on emerging GPU architectures.
|
[
{
"created": "Fri, 21 Mar 2014 17:38:25 GMT",
"version": "v1"
},
{
"created": "Fri, 15 Aug 2014 09:40:57 GMT",
"version": "v2"
}
] |
2014-08-18
|
[
[
"McLaughlin",
"Brendan M.",
""
],
[
"Ballance",
"Connor P.",
""
]
] |
Petaflop architectures are currently being utilized efficiently to perform large scale computations in Atomic, Molecular and Optical Collisions. We solve the Schroedinger or Dirac equation for the appropriate collision problem using the R-matrix or R-matrix with pseudo-states approach. We briefly outline the parallel methodology used and implemented for the current suite of Breit-Pauli and DARC codes. Various examples are shown of our theoretical results compared with those obtained from Synchrotron Radiation facilities and from Satellite observations. We also indicate future directions and implementation of the R-matrix codes on emerging GPU architectures.
|
2107.04986
|
Han Zhang
|
Dazhuan Xu, Han Zhang, Nan Wang
|
Theoretical Performance Limit for Radar Parameter Estimation
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we employ the thoughts and methodologies of Shannon's
information theory to solve the problem of the optimal radar parameter
estimation. Based on a general radar system model, the \textit{a posteriori}
probability density function of targets' parameters is derived. Range
information (RI) and entropy error (EE) are defined to evaluate the
performance. It is proved that acquiring 1 bit of the range information is
equivalent to reducing estimation deviation by half. The closed-form
approximation for the EE is deduced in all signal-to-noise ratio (SNR) regions,
which demonstrates that the EE degenerates to the mean square error (MSE) when
the SNR is tending to infinity. Parameter estimation theorem is then proved,
which claims that the theoretical RI is achievable.
The converse claims that there exists no unbiased estimator whose empirical
RI is larger than the theoretical RI. Simulation results demonstrate that the
theoretical EE is tighter than the commonly used Cram\'er-Rao bound and the
Ziv-Zakai bound.
|
[
{
"created": "Sun, 11 Jul 2021 07:16:57 GMT",
"version": "v1"
},
{
"created": "Sun, 12 Jun 2022 11:34:18 GMT",
"version": "v2"
},
{
"created": "Mon, 6 Feb 2023 08:19:39 GMT",
"version": "v3"
}
] |
2023-02-07
|
[
[
"Xu",
"Dazhuan",
""
],
[
"Zhang",
"Han",
""
],
[
"Wang",
"Nan",
""
]
] |
In this paper, we employ the thoughts and methodologies of Shannon's information theory to solve the problem of the optimal radar parameter estimation. Based on a general radar system model, the \textit{a posteriori} probability density function of targets' parameters is derived. Range information (RI) and entropy error (EE) are defined to evaluate the performance. It is proved that acquiring 1 bit of the range information is equivalent to reducing estimation deviation by half. The closed-form approximation for the EE is deduced in all signal-to-noise ratio (SNR) regions, which demonstrates that the EE degenerates to the mean square error (MSE) when the SNR is tending to infinity. Parameter estimation theorem is then proved, which claims that the theoretical RI is achievable. The converse claims that there exists no unbiased estimator whose empirical RI is larger than the theoretical RI. Simulation results demonstrate that the theoretical EE is tighter than the commonly used Cram\'er-Rao bound and the Ziv-Zakai bound.
|
2001.04333
|
R\'emi Morvan
|
Marcin Jurdzi\'nski, R\'emi Morvan, K. S. Thejaswini
|
Universal Algorithms for Parity Games and Nested Fixpoints
| null | null | null | null |
cs.DS cs.FL cs.GT cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
An attractor decomposition meta-algorithm for solving parity games is given
that generalises the classic McNaughton-Zielonka algorithm and its recent
quasi-polynomial variants due to Parys (2019), and to Lehtinen, Schewe, and
Wojtczak (2019). The central concepts studied and exploited are attractor
decompositions of dominia in parity games and the ordered trees that describe
the inductive structure of attractor decompositions. The universal algorithm
yields McNaughton-Zielonka, Parys, and Lehtinen-Schewe-Wojtczak algorithms as
special cases when suitable universal trees are given to it as inputs. The main
technical results provide a unified proof of correctness and structural
insights into those algorithms. Suitably adapting the universal algorithm for
parity games to fixpoint games gives a quasi-polynomial time algorithm to
compute nested fixpoints over finite complete lattices. The universal
algorithms for parity games and nested fixpoints can be implemented
symbolically. It is shown how this can be done with $O(\lg d)$ symbolic space
complexity, improving the $O(d \lg n)$ symbolic space complexity achieved by
Chatterjee, Dvo\v{r}\'{a}k, Henzinger, and Svozil (2018) for parity games,
where $n$ is the number of vertices and $d$ is the number of distinct
priorities in a parity game.
|
[
{
"created": "Mon, 13 Jan 2020 15:19:05 GMT",
"version": "v1"
},
{
"created": "Mon, 29 Aug 2022 09:08:49 GMT",
"version": "v2"
}
] |
2022-08-30
|
[
[
"Jurdziński",
"Marcin",
""
],
[
"Morvan",
"Rémi",
""
],
[
"Thejaswini",
"K. S.",
""
]
] |
An attractor decomposition meta-algorithm for solving parity games is given that generalises the classic McNaughton-Zielonka algorithm and its recent quasi-polynomial variants due to Parys (2019), and to Lehtinen, Schewe, and Wojtczak (2019). The central concepts studied and exploited are attractor decompositions of dominia in parity games and the ordered trees that describe the inductive structure of attractor decompositions. The universal algorithm yields McNaughton-Zielonka, Parys, and Lehtinen-Schewe-Wojtczak algorithms as special cases when suitable universal trees are given to it as inputs. The main technical results provide a unified proof of correctness and structural insights into those algorithms. Suitably adapting the universal algorithm for parity games to fixpoint games gives a quasi-polynomial time algorithm to compute nested fixpoints over finite complete lattices. The universal algorithms for parity games and nested fixpoints can be implemented symbolically. It is shown how this can be done with $O(\lg d)$ symbolic space complexity, improving the $O(d \lg n)$ symbolic space complexity achieved by Chatterjee, Dvo\v{r}\'{a}k, Henzinger, and Svozil (2018) for parity games, where $n$ is the number of vertices and $d$ is the number of distinct priorities in a parity game.
|
1002.2191
|
Rdv Ijcsis
|
S. Sumathi, S. K. Srivatsa, M. Uma Maheswari
|
Vision Based Game Development Using Human Computer Interaction
|
IEEE format, International Journal of Computer Science and
Information Security, IJCSIS January 2010, ISSN 1947 5500,
http://sites.google.com/site/ijcsis/
|
International Journal of Computer Science and Information
Security, IJCSIS, Vol. 7, No. 1, pp. 147-153, January 2010, USA
| null |
Journal of Computer Science, ISSN 19475500
|
cs.HC cs.CV cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A Human Computer Interface (HCI) System for playing games is designed here
for more natural communication with the machines. The system presented here is
a vision-based system for detection of long voluntary eye blinks and
interpretation of blink patterns for communication between man and machine.
This system replaces the mouse with the human face as a new way to interact
with the computer. Facial features (nose tip and eyes) are detected and tracked
in realtime to use their actions as mouse events. The coordinates and movement
of the nose tip in the live video feed are translated to become the coordinates
and movement of the mouse pointer on the application. The left or right eye
blinks fire left or right mouse click events. The system works with inexpensive
USB cameras and runs at a frame rate of 30 frames per second.
|
[
{
"created": "Wed, 10 Feb 2010 19:46:07 GMT",
"version": "v1"
}
] |
2010-02-11
|
[
[
"Sumathi",
"S.",
""
],
[
"Srivatsa",
"S. K.",
""
],
[
"Maheswari",
"M. Uma",
""
]
] |
A Human Computer Interface (HCI) System for playing games is designed here for more natural communication with the machines. The system presented here is a vision-based system for detection of long voluntary eye blinks and interpretation of blink patterns for communication between man and machine. This system replaces the mouse with the human face as a new way to interact with the computer. Facial features (nose tip and eyes) are detected and tracked in realtime to use their actions as mouse events. The coordinates and movement of the nose tip in the live video feed are translated to become the coordinates and movement of the mouse pointer on the application. The left or right eye blinks fire left or right mouse click events. The system works with inexpensive USB cameras and runs at a frame rate of 30 frames per second.
|
1911.09157
|
Gal Dalal
|
Gal Dalal, Balazs Szorenyi, Gugan Thoppe
|
A Tale of Two-Timescale Reinforcement Learning with the Tightest
Finite-Time Bound
| null | null | null | null |
cs.LG math.PR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Policy evaluation in reinforcement learning is often conducted using
two-timescale stochastic approximation, which results in various gradient
temporal difference methods such as GTD(0), GTD2, and TDC. Here, we provide
convergence rate bounds for this suite of algorithms. Algorithms such as these
have two iterates, $\theta_n$ and $w_n,$ which are updated using two distinct
stepsize sequences, $\alpha_n$ and $\beta_n,$ respectively. Assuming $\alpha_n
= n^{-\alpha}$ and $\beta_n = n^{-\beta}$ with $1 > \alpha > \beta > 0,$ we
show that, with high probability, the two iterates converge to their respective
solutions $\theta^*$ and $w^*$ at rates given by $\|\theta_n - \theta^*\| =
\tilde{O}( n^{-\alpha/2})$ and $\|w_n - w^*\| = \tilde{O}(n^{-\beta/2});$ here,
$\tilde{O}$ hides logarithmic terms. Via comparable lower bounds, we show that
these bounds are, in fact, tight. To the best of our knowledge, ours is the
first finite-time analysis which achieves these rates. While it was known that
the two timescale components decouple asymptotically, our results depict this
phenomenon more explicitly by showing that it in fact happens from some finite
time onwards. Lastly, compared to existing works, our result applies to a
broader family of stepsizes, including non-square summable ones.
|
[
{
"created": "Wed, 20 Nov 2019 20:21:21 GMT",
"version": "v1"
},
{
"created": "Wed, 4 Dec 2019 13:07:57 GMT",
"version": "v2"
}
] |
2019-12-05
|
[
[
"Dalal",
"Gal",
""
],
[
"Szorenyi",
"Balazs",
""
],
[
"Thoppe",
"Gugan",
""
]
] |
Policy evaluation in reinforcement learning is often conducted using two-timescale stochastic approximation, which results in various gradient temporal difference methods such as GTD(0), GTD2, and TDC. Here, we provide convergence rate bounds for this suite of algorithms. Algorithms such as these have two iterates, $\theta_n$ and $w_n,$ which are updated using two distinct stepsize sequences, $\alpha_n$ and $\beta_n,$ respectively. Assuming $\alpha_n = n^{-\alpha}$ and $\beta_n = n^{-\beta}$ with $1 > \alpha > \beta > 0,$ we show that, with high probability, the two iterates converge to their respective solutions $\theta^*$ and $w^*$ at rates given by $\|\theta_n - \theta^*\| = \tilde{O}( n^{-\alpha/2})$ and $\|w_n - w^*\| = \tilde{O}(n^{-\beta/2});$ here, $\tilde{O}$ hides logarithmic terms. Via comparable lower bounds, we show that these bounds are, in fact, tight. To the best of our knowledge, ours is the first finite-time analysis which achieves these rates. While it was known that the two timescale components decouple asymptotically, our results depict this phenomenon more explicitly by showing that it in fact happens from some finite time onwards. Lastly, compared to existing works, our result applies to a broader family of stepsizes, including non-square summable ones.
|
2408.08300
|
Andy Xu
|
Andy Xu, Arno Gau
|
HELP: Hierarchical Embeddings-based Log Parsing
| null | null | null | null |
cs.SE cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Logs are a first-hand source of information for software maintenance and
failure diagnosis. Log parsing, which converts semi-structured log messages
into structured templates, is a prerequisite for automated log analysis tasks
such as anomaly detection, troubleshooting, and root cause analysis. However,
existing log parsers fail in real-world systems for three main reasons. First,
traditional heuristics-based parsers require handcrafted features and domain
knowledge, which are difficult to generalize at scale. Second, existing large
language model-based parsers rely on periodic offline processing, limiting
their effectiveness in real-time use cases. Third, existing online parsing
algorithms are susceptible to log drift, where slight log changes create false
positives that drown out real anomalies. To address these challenges, we
propose HELP, a Hierarchical Embeddings-based Log Parser. HELP is the first
online semantic-based parser to leverage LLMs for performant and cost-effective
log parsing. We achieve this through a novel hierarchical embeddings module,
which fine-tunes a text embedding model to cluster logs before parsing,
reducing querying costs by multiple orders of magnitude. To combat log drift,
we also develop an iterative rebalancing module, which periodically updates
existing log groupings. We evaluate HELP extensively on 14 public large-scale
datasets, showing that HELP achieves significantly higher F1-weighted grouping
and parsing accuracy than current state-of-the-art online log parsers. We also
implement HELP into Iudex's production observability platform, confirming
HELP's practicality in a production environment. Our results show that HELP is
effective and efficient for high-throughput real-world log parsing.
|
[
{
"created": "Thu, 15 Aug 2024 17:54:31 GMT",
"version": "v1"
}
] |
2024-08-16
|
[
[
"Xu",
"Andy",
""
],
[
"Gau",
"Arno",
""
]
] |
Logs are a first-hand source of information for software maintenance and failure diagnosis. Log parsing, which converts semi-structured log messages into structured templates, is a prerequisite for automated log analysis tasks such as anomaly detection, troubleshooting, and root cause analysis. However, existing log parsers fail in real-world systems for three main reasons. First, traditional heuristics-based parsers require handcrafted features and domain knowledge, which are difficult to generalize at scale. Second, existing large language model-based parsers rely on periodic offline processing, limiting their effectiveness in real-time use cases. Third, existing online parsing algorithms are susceptible to log drift, where slight log changes create false positives that drown out real anomalies. To address these challenges, we propose HELP, a Hierarchical Embeddings-based Log Parser. HELP is the first online semantic-based parser to leverage LLMs for performant and cost-effective log parsing. We achieve this through a novel hierarchical embeddings module, which fine-tunes a text embedding model to cluster logs before parsing, reducing querying costs by multiple orders of magnitude. To combat log drift, we also develop an iterative rebalancing module, which periodically updates existing log groupings. We evaluate HELP extensively on 14 public large-scale datasets, showing that HELP achieves significantly higher F1-weighted grouping and parsing accuracy than current state-of-the-art online log parsers. We also implement HELP into Iudex's production observability platform, confirming HELP's practicality in a production environment. Our results show that HELP is effective and efficient for high-throughput real-world log parsing.
|
1708.06887
|
EPTCS
|
Alexei Lisitsa (University of Liverpool, UK), Andrei P. Nemytykh
(ISPRAS, Russia), Maurizio Proietti (CNR-IASI, Italy)
|
Proceedings Fifth International Workshop on Verification and Program
Transformation
| null |
EPTCS 253, 2017
|
10.4204/EPTCS.253
| null |
cs.LO cs.PL cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This volume contains the proceedings of the Fifth International Workshop on
Verification and Program Transformation (VPT 2017). The workshop took place in
Uppsala, Sweden, on April 29th, 2017, affiliated with the European Joint
Conferences on Theory and Practice of Software (ETAPS). The aim of the VPT
workshop series is to provide a forum where people from the areas of program
transformation and program verification can fruitfully exchange ideas and gain
a deeper understanding of the interactions between those two fields. Seven
papers were presented at the workshop. Additionally, three invited talks were
given by Javier Esparza (Technische Universit\"at M\"unchen, Germany), Manuel
Hermenegildo (IMDEA Software Institute, Madrid, Spain), and Alexey Khoroshilov
(Linux Verification Center, ISPRAS, Moscow, Russia).
|
[
{
"created": "Wed, 23 Aug 2017 05:39:02 GMT",
"version": "v1"
}
] |
2017-08-24
|
[
[
"Lisitsa",
"Alexei",
"",
"University of Liverpool, UK"
],
[
"Nemytykh",
"Andrei P.",
"",
"ISPRAS, Russia"
],
[
"Proietti",
"Maurizio",
"",
"CNR-IASI, Italy"
]
] |
This volume contains the proceedings of the Fifth International Workshop on Verification and Program Transformation (VPT 2017). The workshop took place in Uppsala, Sweden, on April 29th, 2017, affiliated with the European Joint Conferences on Theory and Practice of Software (ETAPS). The aim of the VPT workshop series is to provide a forum where people from the areas of program transformation and program verification can fruitfully exchange ideas and gain a deeper understanding of the interactions between those two fields. Seven papers were presented at the workshop. Additionally, three invited talks were given by Javier Esparza (Technische Universit\"at M\"unchen, Germany), Manuel Hermenegildo (IMDEA Software Institute, Madrid, Spain), and Alexey Khoroshilov (Linux Verification Center, ISPRAS, Moscow, Russia).
|
2004.08340
|
Vahid Moosavi
|
Zifeng Guo, Joao P. Leitao, Nuno E. Simoes, and Vahid Moosavi
|
Data-driven Flood Emulation: Speeding up Urban Flood Predictions by Deep
Convolutional Neural Networks
| null | null | null | null |
cs.CV cs.CY cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Computational complexity has been the bottleneck of applying physically-based
simulations on large urban areas with high spatial resolution for efficient and
systematic flooding analyses and risk assessments. To address this issue of
long computational time, this paper proposes that the prediction of maximum
water depth rasters can be considered as an image-to-image translation problem
where the results are generated from input elevation rasters using the
information learned from data rather than by conducting simulations, which can
significantly accelerate the prediction process. The proposed approach was
implemented by a deep convolutional neural network trained on flood simulation
data of 18 designed hyetographs on three selected catchments. Multiple tests
with both designed and real rainfall events were performed and the results show
that the flood predictions by the neural network use only 0.5% of the time
required by physically-based approaches, with promising accuracy and
generalization ability. The proposed neural network can also potentially be applied to
different but relevant problems including flood predictions for urban layout
planning.
|
[
{
"created": "Fri, 17 Apr 2020 16:44:46 GMT",
"version": "v1"
},
{
"created": "Wed, 13 May 2020 10:19:29 GMT",
"version": "v2"
}
] |
2020-05-14
|
[
[
"Guo",
"Zifeng",
""
],
[
"Leitao",
"Joao P.",
""
],
[
"Simoes",
"Nuno E.",
""
],
[
"Moosavi",
"Vahid",
""
]
] |
Computational complexity has been the bottleneck of applying physically-based simulations on large urban areas with high spatial resolution for efficient and systematic flooding analyses and risk assessments. To address this issue of long computational time, this paper proposes that the prediction of maximum water depth rasters can be considered as an image-to-image translation problem where the results are generated from input elevation rasters using the information learned from data rather than by conducting simulations, which can significantly accelerate the prediction process. The proposed approach was implemented by a deep convolutional neural network trained on flood simulation data of 18 designed hyetographs on three selected catchments. Multiple tests with both designed and real rainfall events were performed and the results show that the flood predictions by the neural network use only 0.5% of the time required by physically-based approaches, with promising accuracy and generalization ability. The proposed neural network can also potentially be applied to different but relevant problems including flood predictions for urban layout planning.
|
2102.09249
|
Nicolas Grislain
|
Johan Leduc and Nicolas Grislain
|
Composable Generative Models
|
11 pages
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Generative modeling has recently seen many exciting developments with the
advent of deep generative architectures such as Variational Auto-Encoders (VAE)
or Generative Adversarial Networks (GAN). The ability to draw synthetic i.i.d.
observations with the same joint probability distribution as a given dataset
has a wide range of applications including representation learning, compression
or imputation. It appears that it also has many applications in privacy
preserving data analysis, especially when used in conjunction with differential
privacy techniques. This paper focuses on synthetic data generation models with
privacy preserving applications in mind. It introduces a novel architecture,
the Composable Generative Model (CGM) that is state-of-the-art in tabular data
generation. Any conditional generative model can be used as a sub-component of
the CGM, including CGMs themselves, allowing the generation of numerical and
categorical data as well as images, text, or time series. The CGM has been
evaluated on 13 datasets (6 standard datasets and 7 simulated) and compared to
14 recent generative models. It beats the state of the art in tabular data
generation by a significant margin.
|
[
{
"created": "Thu, 18 Feb 2021 10:11:29 GMT",
"version": "v1"
}
] |
2021-02-19
|
[
[
"Leduc",
"Johan",
""
],
[
"Grislain",
"Nicolas",
""
]
] |
Generative modeling has recently seen many exciting developments with the advent of deep generative architectures such as Variational Auto-Encoders (VAE) or Generative Adversarial Networks (GAN). The ability to draw synthetic i.i.d. observations with the same joint probability distribution as a given dataset has a wide range of applications including representation learning, compression or imputation. It appears that it also has many applications in privacy preserving data analysis, especially when used in conjunction with differential privacy techniques. This paper focuses on synthetic data generation models with privacy preserving applications in mind. It introduces a novel architecture, the Composable Generative Model (CGM) that is state-of-the-art in tabular data generation. Any conditional generative model can be used as a sub-component of the CGM, including CGMs themselves, allowing the generation of numerical and categorical data as well as images, text, or time series. The CGM has been evaluated on 13 datasets (6 standard datasets and 7 simulated) and compared to 14 recent generative models. It beats the state of the art in tabular data generation by a significant margin.
|
2102.08896
|
Arvind Kiwelekar
|
Arvind W. Kiwelekar, Pramod Patil, Laxman D. Netak, Sanjay U Waikar
|
Blockchain-based Security Services for Fog Computing
|
This is a pre-print of the following Chapter: Arvind W. Kiwelekar,
Pramod Patil Laxman D. Netak and Sanjay U Waikar, {\em Blockchain-Based
Security Services for Fog Computing} accepted and final version is published
in Chang W., Wu J. (eds) Fog/Edge Computing For Security, Privacy, and
Applications. Advances in Information Security, vol 83. Springer
| null |
10.1007/978-3-030-57328-7_11
| null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Fog computing is a paradigm for distributed computing that enables sharing of
resources such as computing, storage and network services. Unlike cloud
computing, fog computing platforms primarily support {\em non-functional
properties} such as location awareness, mobility and reduced latency. This
emerging paradigm has many potential applications in domains such as smart
grids, smart cities, and transport management.
Most of these domains collect and monitor personal information through edge
devices to offer personalized services. A {\em centralized} server, either at
the level of the cloud or the fog, has been found ineffective at providing a
high degree of security and privacy-preserving services.
Blockchain technology supports the development of {\em decentralized}
applications designed around the principles of immutability, cryptography,
consistency preserving consensus protocols and smart contracts. Hence
blockchain technology has emerged as a preferred technology in recent times to
build trustworthy distributed applications.
The chapter describes the potential of blockchain technology to realize
security services such as authentication, secured communication, availability,
privacy and trust management to support the development of dependable fog
services.
|
[
{
"created": "Tue, 16 Feb 2021 08:26:20 GMT",
"version": "v1"
}
] |
2021-02-18
|
[
[
"Kiwelekar",
"Arvind W.",
""
],
[
"Patil",
"Pramod",
""
],
[
"Netak",
"Laxman D.",
""
],
[
"Waikar",
"Sanjay U",
""
]
] |
Fog computing is a paradigm for distributed computing that enables sharing of resources such as computing, storage and network services. Unlike cloud computing, fog computing platforms primarily support {\em non-functional properties} such as location awareness, mobility and reduced latency. This emerging paradigm has many potential applications in domains such as smart grids, smart cities, and transport management. Most of these domains collect and monitor personal information through edge devices to offer personalized services. A {\em centralized} server, either at the level of the cloud or the fog, has been found ineffective at providing a high degree of security and privacy-preserving services. Blockchain technology supports the development of {\em decentralized} applications designed around the principles of immutability, cryptography, consistency preserving consensus protocols and smart contracts. Hence blockchain technology has emerged as a preferred technology in recent times to build trustworthy distributed applications. The chapter describes the potential of blockchain technology to realize security services such as authentication, secured communication, availability, privacy and trust management to support the development of dependable fog services.
|
2002.12888
|
Bingchen Liu
|
Bingchen Liu, Kunpeng Song, Ahmed Elgammal
|
Sketch-to-Art: Synthesizing Stylized Art Images From Sketches
|
24 pages
|
ACCV 2020
| null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a new approach for synthesizing fully detailed art-stylized images
from sketches. Given a sketch, with no semantic tagging, and a reference image
of a specific style, the model can synthesize meaningful details with colors
and textures. The model consists of three modules designed explicitly for
better artistic style capturing and generation. Based on a GAN framework, a
dual-masked mechanism is introduced to enforce the content constraints (from
the sketch), and a feature-map transformation technique is developed to
strengthen the style consistency (to the reference image). Finally, an inverse
procedure of instance-normalization is proposed to disentangle the style and
content information, thereby yielding better synthesis performance. Experiments
demonstrate a significant qualitative and quantitative boost over baselines
based on previous state-of-the-art techniques, adopted for the proposed
process.
|
[
{
"created": "Wed, 26 Feb 2020 19:02:10 GMT",
"version": "v1"
},
{
"created": "Tue, 3 Mar 2020 19:07:49 GMT",
"version": "v2"
},
{
"created": "Fri, 2 Oct 2020 17:20:25 GMT",
"version": "v3"
}
] |
2020-10-05
|
[
[
"Liu",
"Bingchen",
""
],
[
"Song",
"Kunpeng",
""
],
[
"Elgammal",
"Ahmed",
""
]
] |
We propose a new approach for synthesizing fully detailed art-stylized images from sketches. Given a sketch, with no semantic tagging, and a reference image of a specific style, the model can synthesize meaningful details with colors and textures. The model consists of three modules designed explicitly for better artistic style capturing and generation. Based on a GAN framework, a dual-masked mechanism is introduced to enforce the content constraints (from the sketch), and a feature-map transformation technique is developed to strengthen the style consistency (to the reference image). Finally, an inverse procedure of instance-normalization is proposed to disentangle the style and content information, thereby yielding better synthesis performance. Experiments demonstrate a significant qualitative and quantitative boost over baselines based on previous state-of-the-art techniques, adopted for the proposed process.
|
2305.08481
|
Arsham Mostaani
|
Arsham Mostaani, Thang X. Vu, Hamed Habibi, Symeon Chatzinotas, Bjorn
Ottersten
|
Task-Oriented Communication Design at Scale
| null | null | null | null |
cs.IT cs.LG cs.MA math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With countless promising applications in various domains such as IoT and
Industry 4.0, task-oriented communication design (TOCD) is receiving growing
attention from the research community. This paper presents a novel approach for
designing scalable task-oriented quantization and communications in cooperative
multi-agent systems (MAS). The proposed approach utilizes the TOCD framework
and the value of information (VoI) concept to enable efficient communication of
quantized observations among agents while maximizing the average return
performance of the MAS, a parameter that quantifies the MAS's task
effectiveness. The computational complexity of learning the VoI, however, grows
exponentially with the number of agents. Thus, we propose a three-step
framework: i) learning the VoI (using reinforcement learning (RL)) for a
two-agent system, ii) designing the quantization policy for an $N$-agent MAS
using the learned VoI for a range of bit-budgets, and iii) learning the
agents' control policies using RL while following the designed quantization
policies in the earlier step. We observe that one can reduce the computational
cost of obtaining the value of information by exploiting insights gained from
studying a similar two-agent system - instead of the original $N$-agent system.
We then quantize agents' observations such that their more valuable
observations are communicated more precisely. Our analytical results show the
applicability of the proposed framework under a wide range of problems.
Numerical results show striking improvements in reducing the computational
complexity of obtaining VoI needed for the TOCD in a MAS problem without
compromising the average return performance of the MAS.
|
[
{
"created": "Mon, 15 May 2023 09:32:42 GMT",
"version": "v1"
}
] |
2023-05-16
|
[
[
"Mostaani",
"Arsham",
""
],
[
"Vu",
"Thang X.",
""
],
[
"Habibi",
"Hamed",
""
],
[
"Chatzinotas",
"Symeon",
""
],
[
"Ottersten",
"Bjorn",
""
]
] |
With countless promising applications in various domains such as IoT and Industry 4.0, task-oriented communication design (TOCD) is receiving growing attention from the research community. This paper presents a novel approach for designing scalable task-oriented quantization and communications in cooperative multi-agent systems (MAS). The proposed approach utilizes the TOCD framework and the value of information (VoI) concept to enable efficient communication of quantized observations among agents while maximizing the average return performance of the MAS, a parameter that quantifies the MAS's task effectiveness. The computational complexity of learning the VoI, however, grows exponentially with the number of agents. Thus, we propose a three-step framework: i) learning the VoI (using reinforcement learning (RL)) for a two-agent system, ii) designing the quantization policy for an $N$-agent MAS using the learned VoI for a range of bit-budgets, and iii) learning the agents' control policies using RL while following the designed quantization policies in the earlier step. We observe that one can reduce the computational cost of obtaining the value of information by exploiting insights gained from studying a similar two-agent system - instead of the original $N$-agent system. We then quantize agents' observations such that their more valuable observations are communicated more precisely. Our analytical results show the applicability of the proposed framework under a wide range of problems. Numerical results show striking improvements in reducing the computational complexity of obtaining VoI needed for the TOCD in a MAS problem without compromising the average return performance of the MAS.
|
2006.05944
|
Hua Sun
|
Hua Sun
|
Secure Groupcast: Extra-Entropic Structure and Linear Feasibility
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the secure groupcast problem, a transmitter wants to securely groupcast a
message with the maximum rate to the first $N$ of $K$ receivers by broadcasting
with the minimum bandwidth, where the $K$ receivers are each equipped with a
key variable from a known joint distribution. Examples are provided to prove
that different instances of secure groupcast that have the same entropic
structure, i.e., the same entropy for all subsets of the key variables, can
have different maximum groupcast rates and different minimum broadcast
bandwidth. Thus, extra-entropic structure matters for secure groupcast. Next,
the maximum groupcast rate is explored when the key variables are generic
linear combinations of a basis set of independent key symbols, i.e., the keys
lie in generic subspaces. The maximum groupcast rate is characterized when the
dimension of each key subspace is either small or large, i.e., the extreme
regimes. For the intermediate regime, various interference alignment schemes
originated from wireless interference networks, such as eigenvector based and
asymptotic schemes, are shown to be useful.
|
[
{
"created": "Wed, 10 Jun 2020 16:50:47 GMT",
"version": "v1"
}
] |
2020-06-11
|
[
[
"Sun",
"Hua",
""
]
] |
In the secure groupcast problem, a transmitter wants to securely groupcast a message with the maximum rate to the first $N$ of $K$ receivers by broadcasting with the minimum bandwidth, where the $K$ receivers are each equipped with a key variable from a known joint distribution. Examples are provided to prove that different instances of secure groupcast that have the same entropic structure, i.e., the same entropy for all subsets of the key variables, can have different maximum groupcast rates and different minimum broadcast bandwidth. Thus, extra-entropic structure matters for secure groupcast. Next, the maximum groupcast rate is explored when the key variables are generic linear combinations of a basis set of independent key symbols, i.e., the keys lie in generic subspaces. The maximum groupcast rate is characterized when the dimension of each key subspace is either small or large, i.e., the extreme regimes. For the intermediate regime, various interference alignment schemes originated from wireless interference networks, such as eigenvector based and asymptotic schemes, are shown to be useful.
|
1711.03373
|
Ziqi Zhang
|
Ziqi Zhang, Jie Gao, Fabio Ciravegna
|
SemRe-Rank: Improving Automatic Term Extraction By Incorporating
Semantic Relatedness With Personalised PageRank
|
Accepted by ACM TKDD. This is a pre-print
| null | null | null |
cs.IR cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automatic Term Extraction (ATE) deals with the extraction of terminology from a
domain-specific corpus, and has long been an established research area in data
and knowledge acquisition. ATE remains a challenging task, as no existing ATE
method can consistently outperform the others in every domain. This work adopts
a refreshed perspective on this problem: instead of searching for such a
'one-size-fits-all' solution that may never exist, we propose to develop
generic methods to 'enhance' existing ATE methods. We introduce SemRe-Rank, the
first method based on this principle, to incorporate semantic relatedness - an
often overlooked avenue - into an existing ATE method
to further improve its performance. SemRe-Rank incorporates word embeddings
into a personalised PageRank process to compute 'semantic importance' scores
for candidate terms from a graph of semantically related words (nodes), which
are then used to revise the scores of candidate terms computed by a base ATE
algorithm. Extensively evaluated with 13 state-of-the-art base ATE methods on
four datasets of diverse nature, it is shown to have achieved widespread
improvement over all base methods and across all datasets, with up to 15
percentage points when measured by the Precision in the top ranked K candidate
terms (the average for a set of K's), or up to 28 percentage points in F1
measured at a K that equals the expected number of real terms in the candidates (F1 in
short). Compared to an alternative approach built on the well-known TextRank
algorithm, SemRe-Rank can potentially outperform by up to 8 points in Precision
at top K, or up to 17 points in F1.
|
[
{
"created": "Thu, 9 Nov 2017 13:39:21 GMT",
"version": "v1"
},
{
"created": "Thu, 15 Mar 2018 20:55:54 GMT",
"version": "v2"
},
{
"created": "Wed, 28 Mar 2018 20:52:19 GMT",
"version": "v3"
}
] |
2018-03-30
|
[
[
"Zhang",
"Ziqi",
""
],
[
"Gao",
"Jie",
""
],
[
"Ciravegna",
"Fabio",
""
]
] |
Automatic Term Extraction (ATE) deals with the extraction of terminology from a domain-specific corpus, and has long been an established research area in data and knowledge acquisition. ATE remains a challenging task, as no existing ATE method can consistently outperform the others in every domain. This work adopts a refreshed perspective on this problem: instead of searching for such a 'one-size-fits-all' solution that may never exist, we propose to develop generic methods to 'enhance' existing ATE methods. We introduce SemRe-Rank, the first method based on this principle, to incorporate semantic relatedness - an often overlooked avenue - into an existing ATE method to further improve its performance. SemRe-Rank incorporates word embeddings into a personalised PageRank process to compute 'semantic importance' scores for candidate terms from a graph of semantically related words (nodes), which are then used to revise the scores of candidate terms computed by a base ATE algorithm. Extensively evaluated with 13 state-of-the-art base ATE methods on four datasets of diverse nature, it is shown to have achieved widespread improvement over all base methods and across all datasets, with up to 15 percentage points when measured by the Precision in the top ranked K candidate terms (the average for a set of K's), or up to 28 percentage points in F1 measured at a K that equals the expected number of real terms in the candidates (F1 in short). Compared to an alternative approach built on the well-known TextRank algorithm, SemRe-Rank can potentially outperform by up to 8 points in Precision at top K, or up to 17 points in F1.
|
2403.09188
|
Yu-Tang Chang
|
Yu Tang Chang and Shih Fang Chen
|
Design of an basis-projected layer for sparse datasets in deep learning
training using gc-ms spectra as a case study
|
5 pages, 2 figures, 2 tables, conference
| null | null | null |
cs.LG eess.SP
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Deep learning (DL) models encompass millions or even billions of parameters
and learn complex patterns from big data. However, not all data are initially
stored in a format suitable for effectively training a DL model, e.g., gas
chromatography-mass spectrometry (GC-MS) spectra and DNA sequences. These
datasets commonly contain many zero values, and the sparse data format causes
difficulties in optimizing DL models. A DL module called the basis-projected
layer (BPL) was proposed to mitigate the issue by transforming the sparse data
into a dense representation. The transformed data is expected to facilitate
gradient calculation and fine-tuning during DL training. The dataset, an
example of a sparse dataset, contained 362 specialty coffee odorant spectra
detected by GC-MS. The BPL was placed at the beginning of the DL model. The
tunable parameters in the layer were learnable projected axes that formed the
bases of a new representation space. The layer rotated these bases when its
parameters were updated. When the number of bases was the same as the original
dimension, the F1 score increased by 8.56%. Furthermore, when the number was
set to 768 (the original dimension was 490), the F1 score increased by 11.49%.
The layer not only maintained the model performance but also constructed a
better representation space for analyzing sparse datasets.
|
[
{
"created": "Thu, 14 Mar 2024 09:03:51 GMT",
"version": "v1"
}
] |
2024-03-15
|
[
[
"Chang",
"Yu Tang",
""
],
[
"Chen",
"Shih Fang",
""
]
] |
Deep learning (DL) models encompass millions or even billions of parameters and learn complex patterns from big data. However, not all data are initially stored in a format suitable for effectively training a DL model, e.g., gas chromatography-mass spectrometry (GC-MS) spectra and DNA sequences. These datasets commonly contain many zero values, and the sparse data format causes difficulties in optimizing DL models. A DL module called the basis-projected layer (BPL) was proposed to mitigate the issue by transforming the sparse data into a dense representation. The transformed data is expected to facilitate gradient calculation and fine-tuning during DL training. The dataset, an example of a sparse dataset, contained 362 specialty coffee odorant spectra detected by GC-MS. The BPL was placed at the beginning of the DL model. The tunable parameters in the layer were learnable projected axes that formed the bases of a new representation space. The layer rotated these bases when its parameters were updated. When the number of bases was the same as the original dimension, the F1 score increased by 8.56%. Furthermore, when the number was set to 768 (the original dimension was 490), the F1 score increased by 11.49%. The layer not only maintained the model performance but also constructed a better representation space for analyzing sparse datasets.
|
2406.03146
|
Erik Landolsi
|
Erik Landolsi, Fredrik Kahl
|
Tiny models from tiny data: Textual and null-text inversion for few-shot
distillation
|
21 pages (9 main pages + references and appendix)
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Few-shot image classification involves classifying images using very few
training examples. Recent vision foundation models show excellent few-shot
transfer abilities, but are large and slow at inference. Using knowledge
distillation, the capabilities of high-performing but slow models can be
transferred to tiny, efficient models. However, common distillation methods
require a large set of unlabeled data, which is not available in the few-shot
setting. To overcome this lack of data, there has been a recent interest in
using synthetic data.
We expand on this work by presenting a novel diffusion model inversion
technique (TINT) combining the diversity of textual inversion with the
specificity of null-text inversion. Using this method in a few-shot
distillation pipeline leads to state-of-the-art accuracy among small student
models on popular benchmarks, while being significantly faster than prior work.
This allows us to push even tiny models to high accuracy using only a tiny
application-specific dataset, albeit relying on extra data for pre-training.
Popular few-shot benchmarks involve evaluation over a large number of
episodes, which is computationally cumbersome for methods involving synthetic
data generation. Therefore, we also present a theoretical analysis on how the
variance of the accuracy estimator depends on the number of episodes and query
examples, and use these results to lower the computational effort required for
method evaluation. In addition, to further motivate the use of generative
models in few-shot distillation, we demonstrate that our method performs better
compared to training on real data mined from the dataset used to train the
diffusion model.
Source code will be made available at https://github.com/pixwse/tiny2.
|
[
{
"created": "Wed, 5 Jun 2024 11:01:42 GMT",
"version": "v1"
}
] |
2024-06-06
|
[
[
"Landolsi",
"Erik",
""
],
[
"Kahl",
"Fredrik",
""
]
] |
Few-shot image classification involves classifying images using very few training examples. Recent vision foundation models show excellent few-shot transfer abilities, but are large and slow at inference. Using knowledge distillation, the capabilities of high-performing but slow models can be transferred to tiny, efficient models. However, common distillation methods require a large set of unlabeled data, which is not available in the few-shot setting. To overcome this lack of data, there has been a recent interest in using synthetic data. We expand on this work by presenting a novel diffusion model inversion technique (TINT) combining the diversity of textual inversion with the specificity of null-text inversion. Using this method in a few-shot distillation pipeline leads to state-of-the-art accuracy among small student models on popular benchmarks, while being significantly faster than prior work. This allows us to push even tiny models to high accuracy using only a tiny application-specific dataset, albeit relying on extra data for pre-training. Popular few-shot benchmarks involve evaluation over a large number of episodes, which is computationally cumbersome for methods involving synthetic data generation. Therefore, we also present a theoretical analysis on how the variance of the accuracy estimator depends on the number of episodes and query examples, and use these results to lower the computational effort required for method evaluation. In addition, to further motivate the use of generative models in few-shot distillation, we demonstrate that our method performs better compared to training on real data mined from the dataset used to train the diffusion model. Source code will be made available at https://github.com/pixwse/tiny2.
|
2306.06823
|
Sujoy Paul
|
Sujoy Paul and Gagan Madan and Akankshya Mishra and Narayan Hegde and
Pradeep Kumar and Gaurav Aggarwal
|
Weakly supervised information extraction from inscrutable handwritten
document images
|
Accepted at ICDAR 2023
| null | null | null |
cs.CV cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
State-of-the-art information extraction methods are limited by OCR errors.
They work well for printed text in form-like documents, but unstructured,
handwritten documents still remain a challenge. Adapting existing models to
domain-specific training data is quite expensive because of two factors: 1)
limited availability of the domain-specific documents (such as handwritten
prescriptions, lab notes, etc.), and 2) annotations become even more
challenging as one needs domain-specific knowledge to decode inscrutable
handwritten document images. In this work, we focus on the complex problem of
extracting medicine names from handwritten prescriptions using only weakly
labeled data. The data consists of images along with the list of medicine names
in it, but not their location in the image. We solve the problem by first
identifying the regions of interest, i.e., medicine lines from just weak labels
and then injecting a domain-specific medicine language model learned using only
synthetically generated data. Compared to off-the-shelf state-of-the-art
methods, our approach performs >2.5x better at medicine name extraction from
prescriptions.
|
[
{
"created": "Mon, 12 Jun 2023 02:22:30 GMT",
"version": "v1"
}
] |
2023-06-13
|
[
[
"Paul",
"Sujoy",
""
],
[
"Madan",
"Gagan",
""
],
[
"Mishra",
"Akankshya",
""
],
[
"Hegde",
"Narayan",
""
],
[
"Kumar",
"Pradeep",
""
],
[
"Aggarwal",
"Gaurav",
""
]
] |
State-of-the-art information extraction methods are limited by OCR errors. They work well for printed text in form-like documents, but unstructured, handwritten documents still remain a challenge. Adapting existing models to domain-specific training data is quite expensive because of two factors: 1) limited availability of the domain-specific documents (such as handwritten prescriptions, lab notes, etc.), and 2) annotations become even more challenging as one needs domain-specific knowledge to decode inscrutable handwritten document images. In this work, we focus on the complex problem of extracting medicine names from handwritten prescriptions using only weakly labeled data. The data consists of images along with the list of medicine names in it, but not their location in the image. We solve the problem by first identifying the regions of interest, i.e., medicine lines from just weak labels and then injecting a domain-specific medicine language model learned using only synthetically generated data. Compared to off-the-shelf state-of-the-art methods, our approach performs >2.5x better at medicine name extraction from prescriptions.
|
2401.14009
|
Yuxia Wu
|
Yuxia Wu, Yuan Fang and Lizi Liao
|
On the Feasibility of Simple Transformer for Dynamic Graph Modeling
|
accepted by WWW'24
| null | null | null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dynamic graph modeling is crucial for understanding complex structures in web
graphs, spanning applications in social networks, recommender systems, and
more. Most existing methods primarily emphasize structural dependencies and
their temporal changes. However, these approaches often overlook detailed
temporal aspects or struggle with long-term dependencies. Furthermore, many
solutions overly complicate the process by emphasizing intricate module designs
to capture dynamic evolutions. In this work, we harness the strength of the
Transformer's self-attention mechanism, known for adeptly handling long-range
dependencies in sequence modeling. Our approach offers a simple Transformer
model, called SimpleDyG, tailored for dynamic graph modeling without complex
modifications. We re-conceptualize dynamic graphs as a sequence modeling
challenge and introduce a novel temporal alignment technique. This technique
not only captures the inherent temporal evolution patterns within dynamic
graphs but also streamlines the modeling process of their evolution. To
evaluate the efficacy of SimpleDyG, we conduct extensive experiments on four
real-world datasets from various domains. The results demonstrate the
competitive performance of SimpleDyG in comparison to a series of
state-of-the-art approaches despite its simple design.
|
[
{
"created": "Thu, 25 Jan 2024 08:18:31 GMT",
"version": "v1"
},
{
"created": "Tue, 27 Feb 2024 02:30:01 GMT",
"version": "v2"
}
] |
2024-02-28
|
[
[
"Wu",
"Yuxia",
""
],
[
"Fang",
"Yuan",
""
],
[
"Liao",
"Lizi",
""
]
] |
Dynamic graph modeling is crucial for understanding complex structures in web graphs, spanning applications in social networks, recommender systems, and more. Most existing methods primarily emphasize structural dependencies and their temporal changes. However, these approaches often overlook detailed temporal aspects or struggle with long-term dependencies. Furthermore, many solutions overly complicate the process by emphasizing intricate module designs to capture dynamic evolutions. In this work, we harness the strength of the Transformer's self-attention mechanism, known for adeptly handling long-range dependencies in sequence modeling. Our approach offers a simple Transformer model, called SimpleDyG, tailored for dynamic graph modeling without complex modifications. We re-conceptualize dynamic graphs as a sequence modeling challenge and introduce a novel temporal alignment technique. This technique not only captures the inherent temporal evolution patterns within dynamic graphs but also streamlines the modeling process of their evolution. To evaluate the efficacy of SimpleDyG, we conduct extensive experiments on four real-world datasets from various domains. The results demonstrate the competitive performance of SimpleDyG in comparison to a series of state-of-the-art approaches despite its simple design.
|
2105.05751
|
Revanth V S
|
Revanth V S, Suthan L
|
Prevention Of Attack In Vehicular Adhoc Network Using Trust Model
|
i have to modify the contents of the paper
| null | null | null |
cs.NI
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Vehicular ad hoc networks (VANETs) are a modern technology that holds an
important place in the transportation domain due to their ability to increase
traffic efficiency and safety. A VANET is another variant of mobile ad hoc
networks that provides Vehicle-to-Vehicle (V2V), Road-side Unit to Road-side
Unit (R2R) and Vehicle-to-Road-side Unit (V2R) communication. VANET is a
multidimensional network in which the vehicles ceaselessly alter their
locations. Connected vehicles broadcast sensitive information which must be
communicated with the neighbors in a safe and established environment. A VANET
may also contain dishonest nodes, such as man-in-the-middle attackers, that aim
to distribute and share malicious content with the vehicles, thus contaminating
the network with insecure information. In this situation, implementing trust
among connected vehicles can raise security, as every participating vehicle
will create and propagate authentic, accurate and trusted content within the
network. In this paper we used a trust model to determine the trust level and
eliminate the malicious nodes. We created a simulation for calculating the
trust level and eliminating the malicious node in a wireless ad hoc network
using the ns2 simulator and network animator (nam). The simulation results
showed better bandwidth in communication between the nodes after the trust
level is updated.
[
{
"created": "Wed, 12 May 2021 16:08:54 GMT",
"version": "v1"
},
{
"created": "Wed, 19 May 2021 14:16:57 GMT",
"version": "v2"
}
] |
2021-05-20
|
[
[
"S",
"Revanth V",
""
],
[
"L",
"Suthan",
""
]
] |
Vehicular ad hoc networks (VANETs) are a modern technology that holds an important place in the transportation domain due to their ability to increase traffic efficiency and safety. A VANET is another variant of mobile ad hoc networks that provides Vehicle-to-Vehicle (V2V), Road-side Unit to Road-side Unit (R2R) and Vehicle-to-Road-side Unit (V2R) communication. VANET is a multidimensional network in which the vehicles ceaselessly alter their locations. Connected vehicles broadcast sensitive information which must be communicated with the neighbors in a safe and established environment. A VANET may also contain dishonest nodes, such as man-in-the-middle attackers, that aim to distribute and share malicious content with the vehicles, thus contaminating the network with insecure information. In this situation, implementing trust among connected vehicles can raise security, as every participating vehicle will create and propagate authentic, accurate and trusted content within the network. In this paper we used a trust model to determine the trust level and eliminate the malicious nodes. We created a simulation for calculating the trust level and eliminating the malicious node in a wireless ad hoc network using the ns2 simulator and network animator (nam). The simulation results showed better bandwidth in communication between the nodes after the trust level is updated.
|
1204.5981
|
Barnaby Martin
|
Florent Madelaine and Barnaby Martin
|
Containment, Equivalence and Coreness from CSP to QCSP and beyond
| null | null | null | null |
cs.LO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The constraint satisfaction problem (CSP) and its quantified extensions,
whether without (QCSP) or with disjunction (QCSP_or), correspond naturally to
the model checking problem for three increasingly stronger fragments of
positive first-order logic. Their complexity is often studied when
parameterised by a fixed model, the so-called template.
It is a natural question to ask when two templates are equivalent, or more
generally when one "contains" another, in the sense that a satisfied instance of
the first will be necessarily satisfied in the second. One can also ask for a
smallest possible equivalent template: this is known as the core for CSP.
We recall and extend previous results on containment, equivalence and
"coreness" for QCSP_or before initiating a preliminary study of cores for QCSP
which we characterise for certain structures and which turns out to be more
elusive.
|
[
{
"created": "Thu, 26 Apr 2012 16:46:25 GMT",
"version": "v1"
}
] |
2012-04-27
|
[
[
"Madelaine",
"Florent",
""
],
[
"Martin",
"Barnaby",
""
]
] |
The constraint satisfaction problem (CSP) and its quantified extensions, whether without (QCSP) or with disjunction (QCSP_or), correspond naturally to the model checking problem for three increasingly stronger fragments of positive first-order logic. Their complexity is often studied when parameterised by a fixed model, the so-called template. It is a natural question to ask when two templates are equivalent, or more generally when one "contains" another, in the sense that a satisfied instance of the first will be necessarily satisfied in the second. One can also ask for a smallest possible equivalent template: this is known as the core for CSP. We recall and extend previous results on containment, equivalence and "coreness" for QCSP_or before initiating a preliminary study of cores for QCSP which we characterise for certain structures and which turns out to be more elusive.
|
2210.13852
|
Zhuoran Zheng
|
Weiyi Cong, Zhuoran Zheng and Xiuyi Jia
|
TabMixer: Excavating Label Distribution Learning with Small-scale
Features
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Label distribution learning (LDL) differs from multi-label learning which
aims at representing the polysemy of instances by transforming single-label
values into descriptive degrees. Unfortunately, the feature space of the label
distribution dataset is affected by human factors and the inductive bias of the
feature extractor, causing uncertainty in the feature space. In particular, for
datasets with small-scale feature spaces (the feature space dimension $\approx$
the label space), the existing LDL algorithms do not perform well. To address
this issue, we seek to model the uncertainty augmentation of the feature space
to alleviate the problem in LDL tasks. Specifically, we start with augmenting
each feature value in the feature vector of a sample into a vector (sampling on
a Gaussian distribution function). Which, the variance parameter of the
Gaussian distribution function is learned by using a sub-network, and the mean
parameter is filled by this feature value. Then, each feature vector is
augmented to a matrix which is fed into a mixer with local attention
(\textit{TabMixer}) to extract the latent feature. Finally, the latent feature
is squeezed to yield an accurate label distribution via a squeezed network.
Extensive experiments verify that our proposed algorithm can be competitive
compared to other LDL algorithms on several benchmarks.
|
[
{
"created": "Tue, 25 Oct 2022 09:18:15 GMT",
"version": "v1"
}
] |
2022-10-26
|
[
[
"Cong",
"Weiyi",
""
],
[
"Zheng",
"Zhuoran",
""
],
[
"Jia",
"Xiuyi",
""
]
] |
Label distribution learning (LDL) differs from multi-label learning which aims at representing the polysemy of instances by transforming single-label values into descriptive degrees. Unfortunately, the feature space of the label distribution dataset is affected by human factors and the inductive bias of the feature extractor, causing uncertainty in the feature space. In particular, for datasets with small-scale feature spaces (the feature space dimension $\approx$ the label space), the existing LDL algorithms do not perform well. To address this issue, we seek to model the uncertainty augmentation of the feature space to alleviate the problem in LDL tasks. Specifically, we start with augmenting each feature value in the feature vector of a sample into a vector (sampling on a Gaussian distribution function). Here, the variance parameter of the Gaussian distribution function is learned by using a sub-network, and the mean parameter is filled by this feature value. Then, each feature vector is augmented to a matrix which is fed into a mixer with local attention (\textit{TabMixer}) to extract the latent feature. Finally, the latent feature is squeezed to yield an accurate label distribution via a squeezed network. Extensive experiments verify that our proposed algorithm can be competitive compared to other LDL algorithms on several benchmarks.
|
1708.03366
|
Sangdon Park
|
Sangdon Park, James Weimer and Insup Lee
|
Resilient Linear Classification: An Approach to Deal with Attacks on
Training Data
|
Accepted as a conference paper at ICCPS17
| null |
10.1145/3055004.3055006
| null |
cs.LG cs.AI cs.CR cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Data-driven techniques are used in cyber-physical systems (CPS) for
controlling autonomous vehicles, handling demand responses for energy
management, and modeling human physiology for medical devices. These
data-driven techniques extract models from training data, where their
performance is often analyzed with respect to random errors in the training
data. However, if the training data is maliciously altered by attackers, the
effect of these attacks on the learning algorithms underpinning data-driven CPS
have yet to be considered. In this paper, we analyze the resilience of
classification algorithms to training data attacks. Specifically, a generic
metric is proposed that is tailored to measure resilience of classification
algorithms with respect to worst-case tampering of the training data. Using the
metric, we show that traditional linear classification algorithms are resilient
under restricted conditions. To overcome these limitations, we propose a linear
classification algorithm with a majority constraint and prove that it is
strictly more resilient than the traditional algorithms. Evaluations on both
synthetic data and a real-world retrospective arrhythmia medical case-study
show that the traditional algorithms are vulnerable to tampered training data,
whereas the proposed algorithm is more resilient (as measured by worst-case
tampering).
|
[
{
"created": "Thu, 10 Aug 2017 19:54:58 GMT",
"version": "v1"
},
{
"created": "Tue, 15 Aug 2017 15:25:16 GMT",
"version": "v2"
}
] |
2017-08-16
|
[
[
"Park",
"Sangdon",
""
],
[
"Weimer",
"James",
""
],
[
"Lee",
"Insup",
""
]
] |
Data-driven techniques are used in cyber-physical systems (CPS) for controlling autonomous vehicles, handling demand responses for energy management, and modeling human physiology for medical devices. These data-driven techniques extract models from training data, where their performance is often analyzed with respect to random errors in the training data. However, if the training data is maliciously altered by attackers, the effect of these attacks on the learning algorithms underpinning data-driven CPS have yet to be considered. In this paper, we analyze the resilience of classification algorithms to training data attacks. Specifically, a generic metric is proposed that is tailored to measure resilience of classification algorithms with respect to worst-case tampering of the training data. Using the metric, we show that traditional linear classification algorithms are resilient under restricted conditions. To overcome these limitations, we propose a linear classification algorithm with a majority constraint and prove that it is strictly more resilient than the traditional algorithms. Evaluations on both synthetic data and a real-world retrospective arrhythmia medical case-study show that the traditional algorithms are vulnerable to tampered training data, whereas the proposed algorithm is more resilient (as measured by worst-case tampering).
|
2408.06709
|
Zhuoran Zheng
|
Xin Su, Zhuoran Zheng, Chen Wu
|
Review Learning: Advancing All-in-One Ultra-High-Definition Image
Restoration Training Method
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
All-in-one image restoration tasks are becoming increasingly important,
especially for ultra-high-definition (UHD) images. Existing all-in-one UHD
image restoration methods usually boost the model's performance by introducing
prompt or customized dynamized networks for different degradation types. This
may be convenient at the inference stage, but in the training stage, since the
model encounters multiple degraded images of different quality in an epoch,
these cluttered learning objectives might be information pollution for the
model. To address this problem, we propose a new training paradigm for general
image restoration models, which we name \textbf{Review Learning}, which enables
image restoration models to be capable enough to handle multiple types of
degradation without prior knowledge and prompts. This approach begins with
sequential training of an image restoration model on several degraded datasets,
combined with a review mechanism that enhances the image restoration model's
memory for several previous classes of degraded datasets. In addition, we
design a lightweight all-purpose image restoration network that can efficiently
reason about degraded images with 4K ($3840 \times 2160$) resolution on a
single consumer-grade GPU.
|
[
{
"created": "Tue, 13 Aug 2024 08:08:45 GMT",
"version": "v1"
}
] |
2024-08-14
|
[
[
"Su",
"Xin",
""
],
[
"Zheng",
"Zhuoran",
""
],
[
"Wu",
"Chen",
""
]
] |
All-in-one image restoration tasks are becoming increasingly important, especially for ultra-high-definition (UHD) images. Existing all-in-one UHD image restoration methods usually boost the model's performance by introducing prompt or customized dynamized networks for different degradation types. This may be convenient at the inference stage, but in the training stage, since the model encounters multiple degraded images of different quality in an epoch, these cluttered learning objectives might be information pollution for the model. To address this problem, we propose a new training paradigm for general image restoration models, which we name \textbf{Review Learning}, which enables image restoration models to be capable enough to handle multiple types of degradation without prior knowledge and prompts. This approach begins with sequential training of an image restoration model on several degraded datasets, combined with a review mechanism that enhances the image restoration model's memory for several previous classes of degraded datasets. In addition, we design a lightweight all-purpose image restoration network that can efficiently reason about degraded images with 4K ($3840 \times 2160$) resolution on a single consumer-grade GPU.
|
2201.05057
|
Qingzhao Zhang
|
Qingzhao Zhang, Shengtuo Hu, Jiachen Sun, Qi Alfred Chen, Z. Morley
Mao
|
On Adversarial Robustness of Trajectory Prediction for Autonomous
Vehicles
|
13 pages, 13 figures, accepted by CVPR 2022
| null | null | null |
cs.CV cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Trajectory prediction is a critical component for autonomous vehicles (AVs)
to perform safe planning and navigation. However, few studies have analyzed the
adversarial robustness of trajectory prediction or investigated whether the
worst-case prediction can still lead to safe planning. To bridge this gap, we
study the adversarial robustness of trajectory prediction models by proposing a
new adversarial attack that perturbs normal vehicle trajectories to maximize
the prediction error. Our experiments on three models and three datasets show
that the adversarial prediction increases the prediction error by more than
150%. Our case studies show that if an adversary drives a vehicle close to the
target AV following the adversarial trajectory, the AV may make an inaccurate
prediction and even make unsafe driving decisions. We also explore possible
mitigation techniques via data augmentation and trajectory smoothing. The
implementation is open source at
https://github.com/zqzqz/AdvTrajectoryPrediction.
|
[
{
"created": "Thu, 13 Jan 2022 16:33:04 GMT",
"version": "v1"
},
{
"created": "Fri, 11 Mar 2022 18:09:01 GMT",
"version": "v2"
},
{
"created": "Mon, 22 Aug 2022 01:51:52 GMT",
"version": "v3"
}
] |
2022-08-23
|
[
[
"Zhang",
"Qingzhao",
""
],
[
"Hu",
"Shengtuo",
""
],
[
"Sun",
"Jiachen",
""
],
[
"Chen",
"Qi Alfred",
""
],
[
"Mao",
"Z. Morley",
""
]
] |
Trajectory prediction is a critical component for autonomous vehicles (AVs) to perform safe planning and navigation. However, few studies have analyzed the adversarial robustness of trajectory prediction or investigated whether the worst-case prediction can still lead to safe planning. To bridge this gap, we study the adversarial robustness of trajectory prediction models by proposing a new adversarial attack that perturbs normal vehicle trajectories to maximize the prediction error. Our experiments on three models and three datasets show that the adversarial prediction increases the prediction error by more than 150%. Our case studies show that if an adversary drives a vehicle close to the target AV following the adversarial trajectory, the AV may make an inaccurate prediction and even make unsafe driving decisions. We also explore possible mitigation techniques via data augmentation and trajectory smoothing. The implementation is open source at https://github.com/zqzqz/AdvTrajectoryPrediction.
|
2006.00617
|
Casper Hansen
|
Casper Hansen and Christian Hansen and Jakob Grue Simonsen and Stephen
Alstrup and Christina Lioma
|
Content-aware Neural Hashing for Cold-start Recommendation
|
Accepted to SIGIR 2020
| null |
10.1145/3397271.3401060
| null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Content-aware recommendation approaches are essential for providing
meaningful recommendations for \textit{new} (i.e., \textit{cold-start}) items
in a recommender system. We present a content-aware neural hashing-based
collaborative filtering approach (NeuHash-CF), which generates binary hash
codes for users and items, such that the highly efficient Hamming distance can
be used for estimating user-item relevance. NeuHash-CF is modelled as an
autoencoder architecture, consisting of two joint hashing components for
generating user and item hash codes. Inspired from semantic hashing, the item
hashing component generates a hash code directly from an item's content
information (i.e., it generates cold-start and seen item hash codes in the same
manner). This contrasts with existing state-of-the-art models, which treat the
item cases separately. The user hash codes are generated directly based on user
id, through learning a user embedding matrix. We show experimentally that
NeuHash-CF significantly outperforms state-of-the-art baselines by up to 12\%
NDCG and 13\% MRR in cold-start recommendation settings, and up to 4\% in both
NDCG and MRR in standard settings where all items are present while training.
Our approach uses 2-4x shorter hash codes, while obtaining the same or better
performance compared to the state of the art, thus consequently also enabling a
notable storage reduction.
|
[
{
"created": "Sun, 31 May 2020 21:29:38 GMT",
"version": "v1"
}
] |
2020-06-02
|
[
[
"Hansen",
"Casper",
""
],
[
"Hansen",
"Christian",
""
],
[
"Simonsen",
"Jakob Grue",
""
],
[
"Alstrup",
"Stephen",
""
],
[
"Lioma",
"Christina",
""
]
] |
Content-aware recommendation approaches are essential for providing meaningful recommendations for \textit{new} (i.e., \textit{cold-start}) items in a recommender system. We present a content-aware neural hashing-based collaborative filtering approach (NeuHash-CF), which generates binary hash codes for users and items, such that the highly efficient Hamming distance can be used for estimating user-item relevance. NeuHash-CF is modelled as an autoencoder architecture, consisting of two joint hashing components for generating user and item hash codes. Inspired from semantic hashing, the item hashing component generates a hash code directly from an item's content information (i.e., it generates cold-start and seen item hash codes in the same manner). This contrasts with existing state-of-the-art models, which treat the two item cases separately. The user hash codes are generated directly based on user id, through learning a user embedding matrix. We show experimentally that NeuHash-CF significantly outperforms state-of-the-art baselines by up to 12\% NDCG and 13\% MRR in cold-start recommendation settings, and up to 4\% in both NDCG and MRR in standard settings where all items are present while training. Our approach uses 2-4x shorter hash codes, while obtaining the same or better performance compared to the state of the art, thus consequently also enabling a notable storage reduction.
|
2405.15310
|
Duke Nguyen
|
Duke Nguyen, Aditya Joshi, Flora Salim
|
Spectraformer: A Unified Random Feature Framework for Transformer
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Linearization of attention using various kernel approximation and kernel
learning techniques has shown promise. Past methods use a subset of
combinations of component functions and weight matrices within the random
features paradigm. We identify the need for a systematic comparison of
different combinations of weight matrix and component functions for attention
learning in Transformer. In this work, we introduce Spectraformer, a unified
framework for approximating and learning the kernel function in linearized
attention of the Transformer. We experiment with broad classes of component
functions and weight matrices for three textual tasks in the LRA benchmark. Our
experimentation with multiple combinations of component functions and weight
matrices leads us to a novel combination with 23.4% faster training time and
25.2% lower memory consumption over the previous SOTA random feature
Transformer, while maintaining the performance, as compared to the Original
Transformer. Our code is available at:
https://github.com/dukeraphaelng/spectraformer .
|
[
{
"created": "Fri, 24 May 2024 07:52:53 GMT",
"version": "v1"
},
{
"created": "Wed, 29 May 2024 04:45:26 GMT",
"version": "v2"
}
] |
2024-05-30
|
[
[
"Nguyen",
"Duke",
""
],
[
"Joshi",
"Aditya",
""
],
[
"Salim",
"Flora",
""
]
] |
Linearization of attention using various kernel approximation and kernel learning techniques has shown promise. Past methods use a subset of combinations of component functions and weight matrices within the random features paradigm. We identify the need for a systematic comparison of different combinations of weight matrix and component functions for attention learning in Transformer. In this work, we introduce Spectraformer, a unified framework for approximating and learning the kernel function in linearized attention of the Transformer. We experiment with broad classes of component functions and weight matrices for three textual tasks in the LRA benchmark. Our experimentation with multiple combinations of component functions and weight matrices leads us to a novel combination with 23.4% faster training time and 25.2% lower memory consumption over the previous SOTA random feature Transformer, while maintaining the performance, as compared to the Original Transformer. Our code is available at: https://github.com/dukeraphaelng/spectraformer .
|
1912.09528
|
Nirupam Gupta
|
Nirupam Gupta and Nitin H. Vaidya
|
Randomized Reactive Redundancy for Byzantine Fault-Tolerance in
Parallelized Learning
| null | null | null | null |
cs.DC cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This report considers the problem of Byzantine fault-tolerance in synchronous
parallelized learning that is founded on the parallelized stochastic gradient
descent (parallelized-SGD) algorithm. The system comprises a master, and $n$
workers, where up to $f$ of the workers are Byzantine faulty. Byzantine workers
need not follow the master's instructions correctly, and might send malicious
incorrect (or faulty) information. The identity of the Byzantine workers
remains fixed throughout the learning process, and is unknown a priori to the
master. We propose two coding schemes, a deterministic scheme and a randomized
scheme, for guaranteeing exact fault-tolerance if $2f < n$. The coding schemes
use the concept of reactive redundancy for isolating Byzantine workers that
eventually send faulty information. We note that the computation efficiencies
of the schemes compare favorably with other (deterministic or randomized)
coding schemes, for exact fault-tolerance.
|
[
{
"created": "Thu, 19 Dec 2019 20:15:28 GMT",
"version": "v1"
}
] |
2019-12-23
|
[
[
"Gupta",
"Nirupam",
""
],
[
"Vaidya",
"Nitin H.",
""
]
] |
This report considers the problem of Byzantine fault-tolerance in synchronous parallelized learning that is founded on the parallelized stochastic gradient descent (parallelized-SGD) algorithm. The system comprises a master, and $n$ workers, where up to $f$ of the workers are Byzantine faulty. Byzantine workers need not follow the master's instructions correctly, and might send malicious incorrect (or faulty) information. The identity of the Byzantine workers remains fixed throughout the learning process, and is unknown a priori to the master. We propose two coding schemes, a deterministic scheme and a randomized scheme, for guaranteeing exact fault-tolerance if $2f < n$. The coding schemes use the concept of reactive redundancy for isolating Byzantine workers that eventually send faulty information. We note that the computation efficiencies of the schemes compare favorably with other (deterministic or randomized) coding schemes, for exact fault-tolerance.
|
2112.02701
|
Xuanli He
|
Xuanli He, Qiongkai Xu, Lingjuan Lyu, Fangzhao Wu, Chenguang Wang
|
Protecting Intellectual Property of Language Generation APIs with
Lexical Watermark
|
accepted to AAAI2022
| null | null | null |
cs.CR cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Nowadays, due to the breakthrough in natural language generation (NLG),
including machine translation, document summarization, image captioning, etc.,
NLG models have been encapsulated in cloud APIs to serve over half a billion
people worldwide and process over one hundred billion word generations per day.
Thus, NLG APIs have already become essential profitable services in many
commercial companies. Due to the substantial financial and intellectual
investments, service providers adopt a pay-as-you-use policy to promote
sustainable market growth. However, recent works have shown that cloud
platforms suffer from financial losses imposed by model extraction attacks,
which aim to imitate the functionality and utility of the victim services, thus
violating the intellectual property (IP) of cloud APIs. This work aims at
protecting IP of NLG APIs by identifying the attackers who have utilized
watermarked responses from the victim NLG APIs. However, most existing
watermarking techniques are not directly amenable for IP protection of NLG
APIs. To bridge this gap, we first present a novel watermarking method for text
generation APIs by conducting lexical modification to the original outputs.
Compared with the competitive baselines, our watermark approach achieves better
identifiable performance in terms of p-value, with fewer semantic losses. In
addition, our watermarks are more understandable and intuitive to humans than
the baselines. Finally, the empirical studies show our approach is also
applicable to queries from different domains, and is effective on the attacker
trained on a mixture of the corpus which includes less than 10\% watermarked
samples.
|
[
{
"created": "Sun, 5 Dec 2021 22:54:54 GMT",
"version": "v1"
}
] |
2021-12-07
|
[
[
"He",
"Xuanli",
""
],
[
"Xu",
"Qiongkai",
""
],
[
"Lyu",
"Lingjuan",
""
],
[
"Wu",
"Fangzhao",
""
],
[
"Wang",
"Chenguang",
""
]
] |
Nowadays, due to the breakthrough in natural language generation (NLG), including machine translation, document summarization, image captioning, etc., NLG models have been encapsulated in cloud APIs to serve over half a billion people worldwide and process over one hundred billion word generations per day. Thus, NLG APIs have already become essential profitable services in many commercial companies. Due to the substantial financial and intellectual investments, service providers adopt a pay-as-you-use policy to promote sustainable market growth. However, recent works have shown that cloud platforms suffer from financial losses imposed by model extraction attacks, which aim to imitate the functionality and utility of the victim services, thus violating the intellectual property (IP) of cloud APIs. This work aims at protecting IP of NLG APIs by identifying the attackers who have utilized watermarked responses from the victim NLG APIs. However, most existing watermarking techniques are not directly amenable for IP protection of NLG APIs. To bridge this gap, we first present a novel watermarking method for text generation APIs by conducting lexical modification to the original outputs. Compared with the competitive baselines, our watermark approach achieves better identifiable performance in terms of p-value, with fewer semantic losses. In addition, our watermarks are more understandable and intuitive to humans than the baselines. Finally, the empirical studies show our approach is also applicable to queries from different domains, and is effective on the attacker trained on a mixture of the corpus which includes less than 10\% watermarked samples.
|
2006.11404
|
Safalya Pal
|
Safalya Pal
|
Auto-Encoding for Shared Cross Domain Feature Representation and
Image-to-Image Translation
| null | null | null | null |
cs.CV cs.LG eess.IV stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Image-to-image translation is a subset of computer vision and pattern
recognition problems where our goal is to learn a mapping between input images
of domain $\mathbf{X}_1$ and output images of domain $\mathbf{X}_2$. Current
methods use neural networks with an encoder-decoder structure to learn a
mapping $G:\mathbf{X}_1 \to\mathbf{X}_2$ such that the distribution of images
from $\mathbf{X}_2$ and $G(\mathbf{X}_1)$ are identical, where $G(\mathbf{X}_1)
= d_G (f_G (\mathbf{X}_1))$ and $f_G (\cdot)$ is referred to as the encoder and
$d_G(\cdot)$ is referred to as the decoder. Currently, such methods which also
compute an inverse mapping $F:\mathbf{X}_2 \to \mathbf{X}_1$ use a separate
encoder-decoder pair $d_F (f_F (\mathbf{X}_2))$ or at least a separate decoder
$d_F (\cdot)$ to do so. Here we introduce a method to perform cross domain
image-to-image translation across multiple domains using a single
encoder-decoder architecture. We use an auto-encoder network which given an
input image $\mathbf{X}_1$, first computes a latent domain encoding $Z_d = f_d
(\mathbf{X}_1)$ and a latent content encoding $Z_c = f_c (\mathbf{X}_1)$, where
the domain encoding $Z_d$ and content encoding $Z_c$ are independent. And then
a decoder network $g(Z_d,Z_c)$ creates a reconstruction of the original image
$\mathbf{\widehat{X}}_1=g(Z_d,Z_c )\approx \mathbf{X}_1$. Ideally, the domain
encoding $Z_d$ contains no information regarding the content of the image and
the content encoding $Z_c$ contains no information regarding the domain of the
image. We use this property of the encodings to find the mapping across domains
$G: X\to Y$ by simply changing the domain encoding $Z_d$ of the decoder's
input. $G(\mathbf{X}_1 )=d(f_d (\mathbf{x}_2^i ),f_c (\mathbf{X}_1))$ where
$\mathbf{x}_2^i$ is the $i^{th}$ observation of $\mathbf{X}_2$.
|
[
{
"created": "Thu, 11 Jun 2020 21:38:23 GMT",
"version": "v1"
}
] |
2020-06-23
|
[
[
"Pal",
"Safalya",
""
]
] |
Image-to-image translation is a subset of computer vision and pattern recognition problems where our goal is to learn a mapping between input images of domain $\mathbf{X}_1$ and output images of domain $\mathbf{X}_2$. Current methods use neural networks with an encoder-decoder structure to learn a mapping $G:\mathbf{X}_1 \to\mathbf{X}_2$ such that the distribution of images from $\mathbf{X}_2$ and $G(\mathbf{X}_1)$ are identical, where $G(\mathbf{X}_1) = d_G (f_G (\mathbf{X}_1))$ and $f_G (\cdot)$ is referred to as the encoder and $d_G(\cdot)$ is referred to as the decoder. Currently, such methods which also compute an inverse mapping $F:\mathbf{X}_2 \to \mathbf{X}_1$ use a separate encoder-decoder pair $d_F (f_F (\mathbf{X}_2))$ or at least a separate decoder $d_F (\cdot)$ to do so. Here we introduce a method to perform cross domain image-to-image translation across multiple domains using a single encoder-decoder architecture. We use an auto-encoder network which given an input image $\mathbf{X}_1$, first computes a latent domain encoding $Z_d = f_d (\mathbf{X}_1)$ and a latent content encoding $Z_c = f_c (\mathbf{X}_1)$, where the domain encoding $Z_d$ and content encoding $Z_c$ are independent. And then a decoder network $g(Z_d,Z_c)$ creates a reconstruction of the original image $\mathbf{\widehat{X}}_1=g(Z_d,Z_c )\approx \mathbf{X}_1$. Ideally, the domain encoding $Z_d$ contains no information regarding the content of the image and the content encoding $Z_c$ contains no information regarding the domain of the image. We use this property of the encodings to find the mapping across domains $G: X\to Y$ by simply changing the domain encoding $Z_d$ of the decoder's input. $G(\mathbf{X}_1 )=d(f_d (\mathbf{x}_2^i ),f_c (\mathbf{X}_1))$ where $\mathbf{x}_2^i$ is the $i^{th}$ observation of $\mathbf{X}_2$.
|
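The translation rule above reduces to swapping the domain code at the decoder's input. A hypothetical numpy toy makes the mechanics concrete; the "encoders" and "decoder" below are stand-in slicing functions of my own invention, not the paper's learned networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "encoders": slice a 6-d vector into a 2-d domain code and a
# 4-d content code (in the paper, f_d and f_c are learned networks).
def f_d(x):
    return x[:2]          # domain encoding Z_d

def f_c(x):
    return x[2:]          # content encoding Z_c

def g(z_d, z_c):          # decoder reassembles the vector
    return np.concatenate([z_d, z_c])

x1 = rng.normal(size=6)   # an observation from domain X_1
x2 = rng.normal(size=6)   # an observation from domain X_2

# Reconstruction: X1_hat = g(f_d(X1), f_c(X1)) ~= X1
x1_hat = g(f_d(x1), f_c(x1))
assert np.allclose(x1_hat, x1)

# Translation G(X1) = g(f_d(x2), f_c(X1)): keep X1's content,
# swap in the domain code of an observation from X_2.
x1_to_2 = g(f_d(x2), f_c(x1))
assert np.allclose(x1_to_2[:2], f_d(x2))   # domain taken from x2
assert np.allclose(x1_to_2[2:], f_c(x1))   # content kept from x1
```

With learned encoders, the same swap produces an image with X_1's content rendered in X_2's style; here the slicing makes the independence of the two codes literal.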
1608.06235
|
Yunpeng Pan
|
Yunpeng Pan, Xinyan Yan, Evangelos Theodorou and Byron Boots
|
Adaptive Probabilistic Trajectory Optimization via Efficient Approximate
Inference
| null | null | null | null |
cs.RO cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Robotic systems must be able to quickly and robustly make decisions when
operating in uncertain and dynamic environments. While Reinforcement Learning
(RL) can be used to compute optimal policies with little prior knowledge about
the environment, it suffers from slow convergence. An alternative approach is
Model Predictive Control (MPC), which optimizes policies quickly, but also
requires accurate models of the system dynamics and environment. In this paper
we propose a new approach, adaptive probabilistic trajectory optimization, that
combines the benefits of RL and MPC. Our method uses scalable approximate
inference to learn and update probabilistic models in an online incremental
fashion while also computing optimal control policies via successive local
approximations. We present two variations of our algorithm based on the Sparse
Spectrum Gaussian Process (SSGP) model, and we test our algorithm on three
learning tasks, demonstrating the effectiveness and efficiency of our approach.
|
[
{
"created": "Mon, 22 Aug 2016 17:49:50 GMT",
"version": "v1"
},
{
"created": "Sun, 11 Sep 2016 23:11:23 GMT",
"version": "v2"
}
] |
2016-09-13
|
[
[
"Pan",
"Yunpeng",
""
],
[
"Yan",
"Xinyan",
""
],
[
"Theodorou",
"Evangelos",
""
],
[
"Boots",
"Byron",
""
]
] |
Robotic systems must be able to quickly and robustly make decisions when operating in uncertain and dynamic environments. While Reinforcement Learning (RL) can be used to compute optimal policies with little prior knowledge about the environment, it suffers from slow convergence. An alternative approach is Model Predictive Control (MPC), which optimizes policies quickly, but also requires accurate models of the system dynamics and environment. In this paper we propose a new approach, adaptive probabilistic trajectory optimization, that combines the benefits of RL and MPC. Our method uses scalable approximate inference to learn and update probabilistic models in an online incremental fashion while also computing optimal control policies via successive local approximations. We present two variations of our algorithm based on the Sparse Spectrum Gaussian Process (SSGP) model, and we test our algorithm on three learning tasks, demonstrating the effectiveness and efficiency of our approach.
|
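The SSGP model named in the abstract above admits a compact sketch: a sparse spectrum GP is, in essence, Bayesian linear regression on random trigonometric features. The toy below is a sketch under stated assumptions (a 1-D sine-regression task of my own choosing, not the paper's robot dynamics); it shows why online incremental updates are cheap, since the posterior lives in a fixed 2m-dimensional feature space.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sparse Spectrum GP sketch: m spectral points sampled from the RBF
# kernel's spectral density turn GP regression into Bayesian linear
# regression on 2m fixed trigonometric features.
m, lengthscale, noise = 50, 1.0, 0.1
omega = rng.normal(scale=1.0 / lengthscale, size=(m, 1))

def phi(X):
    proj = X @ omega.T
    return np.hstack([np.cos(proj), np.sin(proj)]) / np.sqrt(m)

# Illustrative 1-D task (invented for this sketch).
X = np.linspace(-3.0, 3.0, 40).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(scale=noise, size=40)

P = phi(X)                                  # (40, 2m) feature matrix
A = P.T @ P + noise**2 * np.eye(2 * m)      # posterior precision
w = np.linalg.solve(A, P.T @ y)             # posterior mean weights

mse = float(np.mean((P @ w - y) ** 2))
assert mse < 0.05   # the finite feature basis fits the data well
```

Because the precision matrix A is a fixed-size sum over data points, a new observation updates it with one rank-one term rather than a full GP refit.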
2006.07589
|
Minseon Kim
|
Minseon Kim, Jihoon Tack, Sung Ju Hwang
|
Adversarial Self-Supervised Contrastive Learning
|
NeurIPS 2020. Code: https://github.com/Kim-Minseon/RoCL
| null | null | null |
cs.LG cs.CV stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing adversarial learning approaches mostly use class labels to generate
adversarial samples that lead to incorrect predictions, which are then used to
augment the training of the model for improved robustness. While some recent
works propose semi-supervised adversarial learning methods that utilize
unlabeled data, they still require class labels. However, do we really need
class labels at all, for adversarially robust training of deep neural networks?
In this paper, we propose a novel adversarial attack for unlabeled data, which
makes the model confuse the instance-level identities of the perturbed data
samples. Further, we present a self-supervised contrastive learning framework
to adversarially train a robust neural network without labeled data, which aims
to maximize the similarity between a random augmentation of a data sample and
its instance-wise adversarial perturbation. We validate our method, Robust
Contrastive Learning (RoCL), on multiple benchmark datasets, on which it
obtains comparable robust accuracy over state-of-the-art supervised adversarial
learning methods, and significantly improved robustness against the black box
and unseen types of attacks. Moreover, with further joint fine-tuning with
supervised adversarial loss, RoCL obtains even higher robust accuracy over
using self-supervised learning alone. Notably, RoCL also demonstrates impressive
results in robust transfer learning.
|
[
{
"created": "Sat, 13 Jun 2020 08:24:33 GMT",
"version": "v1"
},
{
"created": "Mon, 26 Oct 2020 14:04:48 GMT",
"version": "v2"
}
] |
2020-10-27
|
[
[
"Kim",
"Minseon",
""
],
[
"Tack",
"Jihoon",
""
],
[
"Hwang",
"Sung Ju",
""
]
] |
Existing adversarial learning approaches mostly use class labels to generate adversarial samples that lead to incorrect predictions, which are then used to augment the training of the model for improved robustness. While some recent works propose semi-supervised adversarial learning methods that utilize unlabeled data, they still require class labels. However, do we really need class labels at all, for adversarially robust training of deep neural networks? In this paper, we propose a novel adversarial attack for unlabeled data, which makes the model confuse the instance-level identities of the perturbed data samples. Further, we present a self-supervised contrastive learning framework to adversarially train a robust neural network without labeled data, which aims to maximize the similarity between a random augmentation of a data sample and its instance-wise adversarial perturbation. We validate our method, Robust Contrastive Learning (RoCL), on multiple benchmark datasets, on which it obtains comparable robust accuracy over state-of-the-art supervised adversarial learning methods, and significantly improved robustness against the black box and unseen types of attacks. Moreover, with further joint fine-tuning with supervised adversarial loss, RoCL obtains even higher robust accuracy over using self-supervised learning alone. Notably, RoCL also demonstrates impressive results in robust transfer learning.
|
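A minimal sketch of the instance-wise attack idea from the abstract above, under loud assumptions: the encoder is a fixed random linear map and the contrastive similarity is replaced by a squared-distance surrogate so the gradient is analytic. A single FGSM-style step then pushes the sample's embedding away from that of its own augmentation, which is exactly the "confuse the instance identity" objective (RoCL's actual attack uses a trained encoder and the full contrastive loss).

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy instance-wise attack: a fixed random linear "encoder" W and a
# squared-distance surrogate loss between the embeddings of the
# perturbed sample and its own augmentation (the positive pair).
W = rng.normal(size=(8, 16))
x = rng.normal(size=16)                         # data sample
x_aug = x + rng.normal(scale=0.05, size=16)     # random augmentation

def loss(x_adv):
    d = W @ x_adv - W @ x_aug
    return float(d @ d)

grad = 2.0 * W.T @ (W @ x - W @ x_aug)   # analytic gradient at x
eps = 8.0 / 255.0
x_adv = x + eps * np.sign(grad)          # one FGSM-style l_inf step

assert np.max(np.abs(x_adv - x)) <= eps + 1e-12  # stays in the eps-ball
assert loss(x_adv) > loss(x)   # instance identity is now more confused
```

Training then treats `x_adv` as a hard positive for `x_aug`, so the encoder learns to pull the pair back together.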
1909.06317
|
Shigeki Karita
|
Shigeki Karita, Nanxin Chen, Tomoki Hayashi, Takaaki Hori, Hirofumi
Inaguma, Ziyan Jiang, Masao Someki, Nelson Enrique Yalta Soplin, Ryuichi
Yamamoto, Xiaofei Wang, Shinji Watanabe, Takenori Yoshimura, Wangyou Zhang
|
A Comparative Study on Transformer vs RNN in Speech Applications
|
Accepted at ASRU 2019
|
IEEE Automatic Speech Recognition and Understanding Workshop 2019
|
10.1109/ASRU46091.2019.9003750
| null |
cs.CL cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sequence-to-sequence models have been widely used in end-to-end speech
processing, for example, automatic speech recognition (ASR), speech translation
(ST), and text-to-speech (TTS). This paper focuses on an emergent
sequence-to-sequence model called Transformer, which achieves state-of-the-art
performance in neural machine translation and other natural language processing
applications. We undertook intensive studies in which we experimentally
compared and analyzed Transformer and conventional recurrent neural networks
(RNN) in a total of 15 ASR, one multilingual ASR, one ST, and two TTS
benchmarks. Our experiments revealed various training tips and significant
performance benefits obtained with Transformer for each task including the
surprising superiority of Transformer in 13/15 ASR benchmarks in comparison
with RNN. We are preparing to release Kaldi-style reproducible recipes using
open source and publicly available datasets for all the ASR, ST, and TTS tasks
so that the community can reproduce our results.
|
[
{
"created": "Fri, 13 Sep 2019 16:27:08 GMT",
"version": "v1"
},
{
"created": "Sat, 28 Sep 2019 11:11:38 GMT",
"version": "v2"
}
] |
2021-06-10
|
[
[
"Karita",
"Shigeki",
""
],
[
"Chen",
"Nanxin",
""
],
[
"Hayashi",
"Tomoki",
""
],
[
"Hori",
"Takaaki",
""
],
[
"Inaguma",
"Hirofumi",
""
],
[
"Jiang",
"Ziyan",
""
],
[
"Someki",
"Masao",
""
],
[
"Soplin",
"Nelson Enrique Yalta",
""
],
[
"Yamamoto",
"Ryuichi",
""
],
[
"Wang",
"Xiaofei",
""
],
[
"Watanabe",
"Shinji",
""
],
[
"Yoshimura",
"Takenori",
""
],
[
"Zhang",
"Wangyou",
""
]
] |
Sequence-to-sequence models have been widely used in end-to-end speech processing, for example, automatic speech recognition (ASR), speech translation (ST), and text-to-speech (TTS). This paper focuses on an emergent sequence-to-sequence model called Transformer, which achieves state-of-the-art performance in neural machine translation and other natural language processing applications. We undertook intensive studies in which we experimentally compared and analyzed Transformer and conventional recurrent neural networks (RNN) in a total of 15 ASR, one multilingual ASR, one ST, and two TTS benchmarks. Our experiments revealed various training tips and significant performance benefits obtained with Transformer for each task including the surprising superiority of Transformer in 13/15 ASR benchmarks in comparison with RNN. We are preparing to release Kaldi-style reproducible recipes using open source and publicly available datasets for all the ASR, ST, and TTS tasks so that the community can reproduce our results.
|
1411.5319
|
Kota Hara
|
Kota Hara, Vignesh Jagadeesh, Robinson Piramuthu
|
Fashion Apparel Detection: The Role of Deep Convolutional Neural Network
and Pose-dependent Priors
|
Accepted for publication at IEEE Winter Conference on Applications of
Computer Vision (WACV) 2016
| null |
10.1109/WACV.2016.7477611
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we propose and address a new computer vision task, which we
call fashion item detection, where the aim is to detect various fashion items a
person in the image is wearing or carrying. The types of fashion items we
consider in this work include hat, glasses, bag, pants, shoes and so on. The
detection of fashion items can be an important first step of various e-commerce
applications for the fashion industry. Our method is based on a state-of-the-art
object detection pipeline that combines object proposal methods with a
Deep Convolutional Neural Network. Since the locations of fashion items are
strongly correlated with body joint positions, we incorporate
contextual information from body poses in order to improve the detection
performance. Through the experiments, we demonstrate the effectiveness of the
proposed method.
|
[
{
"created": "Wed, 19 Nov 2014 19:09:00 GMT",
"version": "v1"
},
{
"created": "Sun, 24 Jan 2016 19:45:37 GMT",
"version": "v2"
}
] |
2016-11-17
|
[
[
"Hara",
"Kota",
""
],
[
"Jagadeesh",
"Vignesh",
""
],
[
"Piramuthu",
"Robinson",
""
]
] |
In this work, we propose and address a new computer vision task, which we call fashion item detection, where the aim is to detect various fashion items a person in the image is wearing or carrying. The types of fashion items we consider in this work include hat, glasses, bag, pants, shoes and so on. The detection of fashion items can be an important first step of various e-commerce applications for the fashion industry. Our method is based on a state-of-the-art object detection pipeline that combines object proposal methods with a Deep Convolutional Neural Network. Since the locations of fashion items are strongly correlated with body joint positions, we incorporate contextual information from body poses in order to improve the detection performance. Through the experiments, we demonstrate the effectiveness of the proposed method.
|
2006.14808
|
Masayoshi Aritsugi
|
Riku Anegawa and Masayoshi Aritsugi
|
Text Detection on Roughly Placed Books by Leveraging a Learning-based
Model Trained with Another Domain Data
| null | null | null | null |
cs.CV cs.IR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Text detection enables us to extract rich information from images. In this
paper, we focus on how to generate bounding boxes that appropriately capture
text areas on books to help implement automatic text detection. We attempt not
to improve a learning-based model by training it with a sufficient amount of
data from the target domain, but to leverage a model that has already been
trained on data from another domain. We develop algorithms that construct the bounding boxes by
improving and leveraging the results of a learning-based method. Our algorithms
can utilize different learning-based approaches to detect scene texts.
Experimental evaluations demonstrate that our algorithms work well in various
situations where books are roughly placed.
|
[
{
"created": "Fri, 26 Jun 2020 05:53:23 GMT",
"version": "v1"
}
] |
2020-06-29
|
[
[
"Anegawa",
"Riku",
""
],
[
"Aritsugi",
"Masayoshi",
""
]
] |
Text detection enables us to extract rich information from images. In this paper, we focus on how to generate bounding boxes that appropriately capture text areas on books to help implement automatic text detection. We attempt not to improve a learning-based model by training it with a sufficient amount of data from the target domain, but to leverage a model that has already been trained on data from another domain. We develop algorithms that construct the bounding boxes by improving and leveraging the results of a learning-based method. Our algorithms can utilize different learning-based approaches to detect scene texts. Experimental evaluations demonstrate that our algorithms work well in various situations where books are roughly placed.
|
2110.04599
|
Sarah Di
|
Sarah Di, Robin Yu, Amol Kapoor
|
Embed Everything: A Method for Efficiently Co-Embedding Multi-Modal
Spaces
|
7 pages, 4 figures
| null | null | null |
cs.LG cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Any general artificial intelligence system must be able to interpret, operate
on, and produce data in a multi-modal latent space that can represent audio,
imagery, text, and more. In the last decade, deep neural networks have seen
remarkable success in unimodal data distributions, while transfer learning
techniques have seen a massive expansion of model reuse across related domains.
However, training multi-modal networks from scratch remains expensive and
elusive, while heterogeneous transfer learning (HTL) techniques remain
relatively underdeveloped. In this paper, we propose a novel and cost-effective
HTL strategy for co-embedding multi-modal spaces. Our method avoids cost
inefficiencies by preprocessing embeddings using pretrained models for all
components, without passing gradients through these models. We demonstrate the
use of this system on a joint image-audio embedding task. Our method has wide-reaching
applications, as successfully bridging the gap between different latent spaces
could provide a framework for the promised "universal" embedding.
|
[
{
"created": "Sat, 9 Oct 2021 15:39:27 GMT",
"version": "v1"
}
] |
2021-10-12
|
[
[
"Di",
"Sarah",
""
],
[
"Yu",
"Robin",
""
],
[
"Kapoor",
"Amol",
""
]
] |
Any general artificial intelligence system must be able to interpret, operate on, and produce data in a multi-modal latent space that can represent audio, imagery, text, and more. In the last decade, deep neural networks have seen remarkable success in unimodal data distributions, while transfer learning techniques have seen a massive expansion of model reuse across related domains. However, training multi-modal networks from scratch remains expensive and elusive, while heterogeneous transfer learning (HTL) techniques remain relatively underdeveloped. In this paper, we propose a novel and cost-effective HTL strategy for co-embedding multi-modal spaces. Our method avoids cost inefficiencies by preprocessing embeddings using pretrained models for all components, without passing gradients through these models. We demonstrate the use of this system on a joint image-audio embedding task. Our method has wide-reaching applications, as successfully bridging the gap between different latent spaces could provide a framework for the promised "universal" embedding.
|
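The cost-saving pattern from the abstract above (never backpropagate through the pretrained backbones) can be sketched as follows. Everything here is an illustrative assumption: the "pretrained encoders" are frozen random matrices, the paired data share a latent by construction, embeddings are computed once and cached, and the only trainable part is a linear alignment head solved in closed form rather than the paper's learned co-embedding.

```python
import numpy as np

rng = np.random.default_rng(3)

# Frozen random matrices play the role of pretrained encoders; 200
# paired samples share a 10-d latent so that an alignment exists.
n, d_lat = 200, 10
enc_img = rng.normal(size=(32, d_lat))   # "pretrained" image encoder
enc_aud = rng.normal(size=(24, d_lat))   # "pretrained" audio encoder

z = rng.normal(size=(n, d_lat))          # shared latent of the pairs
E_img = z @ enc_img.T                    # embeddings computed once,
E_aud = z @ enc_aud.T                    # cached; no gradients flow
                                         # through the encoders

# The only trainable part: a linear head mapping image-embedding space
# into audio-embedding space, fitted by least squares.
A, *_ = np.linalg.lstsq(E_img, E_aud, rcond=None)

err = np.linalg.norm(E_img @ A - E_aud) / np.linalg.norm(E_aud)
assert err < 1e-6   # the cheap head aligns the paired modalities
```

The expensive forward passes happen exactly once per sample; all subsequent training touches only the small head, which is where the cost savings come from.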
1005.1856
|
Mohsen Toorani
|
M. Toorani, A. A. Beheshti
|
An Elliptic Curve-based Signcryption Scheme with Forward Secrecy
|
13 Pages, 5 Figures, 2 Tables
|
Journal of Applied Sciences, Vol.9, No.6, pp.1025-1035, 2009
|
10.3923/jas.2009.1025.1035
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An elliptic curve-based signcryption scheme is introduced in this paper that
effectively combines the functionalities of digital signature and encryption,
and decreases the computational costs and communication overheads in comparison
with the traditional signature-then-encryption schemes. It simultaneously
provides the attributes of message confidentiality, authentication, integrity,
unforgeability, non-repudiation, public verifiability, and forward secrecy of
message confidentiality. Since it is based on elliptic curves and can use any
fast and secure symmetric algorithm for encrypting messages, it has great
advantages to be used for security establishments in store-and-forward
applications and when dealing with resource-constrained devices.
|
[
{
"created": "Tue, 11 May 2010 15:14:54 GMT",
"version": "v1"
},
{
"created": "Mon, 19 Mar 2012 20:35:20 GMT",
"version": "v2"
}
] |
2012-03-21
|
[
[
"Toorani",
"M.",
""
],
[
"Beheshti",
"A. A.",
""
]
] |
An elliptic curve-based signcryption scheme is introduced in this paper that effectively combines the functionalities of digital signature and encryption, and decreases the computational costs and communication overheads in comparison with the traditional signature-then-encryption schemes. It simultaneously provides the attributes of message confidentiality, authentication, integrity, unforgeability, non-repudiation, public verifiability, and forward secrecy of message confidentiality. Since it is based on elliptic curves and can use any fast and secure symmetric algorithm for encrypting messages, it has great advantages to be used for security establishments in store-and-forward applications and when dealing with resource-constrained devices.
|
2108.08255
|
Linan Huang
|
Linan Huang and Quanyan Zhu
|
Combating Informational Denial-of-Service (IDoS) Attacks: Modeling and
Mitigation of Attentional Human Vulnerability
| null | null |
10.1007/978-3-030-90370-1_17
| null |
cs.CR cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work proposes a new class of proactive attacks called the Informational
Denial-of-Service (IDoS) attacks that exploit the attentional human
vulnerability. By generating a large volume of feints, IDoS attacks deplete the
cognitive resources of human operators to prevent humans from identifying the
real attacks hidden among feints. This work aims to formally define IDoS
attacks, quantify their consequences, and develop human-assistive security
technologies to mitigate the severity level and risks of IDoS attacks. To this
end, we use the semi-Markov process to model the sequential arrivals of feints
and real attacks with category labels attached in the associated alerts. The
assistive technology strategically manages human attention by highlighting
selective alerts periodically to prevent the distraction of other alerts. A
data-driven approach is applied to evaluate human performance under different
Attention Management (AM) strategies. Under a representative special case, we
establish the computational equivalency between two dynamic programming
representations to reduce the computation complexity and enable online learning
with samples of reduced size and zero delays. A case study corroborates the
effectiveness of the learning framework. The numerical results illustrate how
AM strategies can alleviate the severity level and the risk of IDoS attacks.
Furthermore, the results show that the minimum risk is achieved with a proper
level of intentional inattention to alerts, which we refer to as the law of
rational risk-reduction inattention.
|
[
{
"created": "Wed, 4 Aug 2021 05:09:32 GMT",
"version": "v1"
},
{
"created": "Sat, 16 Oct 2021 19:00:01 GMT",
"version": "v2"
}
] |
2021-10-19
|
[
[
"Huang",
"Linan",
""
],
[
"Zhu",
"Quanyan",
""
]
] |
This work proposes a new class of proactive attacks called the Informational Denial-of-Service (IDoS) attacks that exploit the attentional human vulnerability. By generating a large volume of feints, IDoS attacks deplete the cognitive resources of human operators to prevent humans from identifying the real attacks hidden among feints. This work aims to formally define IDoS attacks, quantify their consequences, and develop human-assistive security technologies to mitigate the severity level and risks of IDoS attacks. To this end, we use the semi-Markov process to model the sequential arrivals of feints and real attacks with category labels attached in the associated alerts. The assistive technology strategically manages human attention by highlighting selective alerts periodically to prevent the distraction of other alerts. A data-driven approach is applied to evaluate human performance under different Attention Management (AM) strategies. Under a representative special case, we establish the computational equivalency between two dynamic programming representations to reduce the computation complexity and enable online learning with samples of reduced size and zero delays. A case study corroborates the effectiveness of the learning framework. The numerical results illustrate how AM strategies can alleviate the severity level and the risk of IDoS attacks. Furthermore, the results show that the minimum risk is achieved with a proper level of intentional inattention to alerts, which we refer to as the law of rational risk-reduction inattention.
|
1711.09360
|
Alexios Balatsoukas-Stimming
|
Alexios Balatsoukas Stimming and Athanasios P. Liavas
|
Design of LDPC Codes for the Unequal Power Two-User Gaussian Multiple
Access Channel
| null | null |
10.1109/LWC.2018.2833855
| null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we describe an LDPC code design framework for the unequal power
two-user Gaussian multiple access channel using EXIT charts. We show that the
sum-rate of the LDPC codes designed using our approach can get close to the
maximal sum-rate of the two-user Gaussian multiple access channel. Moreover, we
provide numerical simulation results that demonstrate the excellent
finite-length performance of the designed LDPC codes.
|
[
{
"created": "Sun, 26 Nov 2017 09:59:13 GMT",
"version": "v1"
},
{
"created": "Fri, 4 May 2018 08:40:08 GMT",
"version": "v2"
},
{
"created": "Wed, 16 May 2018 09:06:56 GMT",
"version": "v3"
}
] |
2018-05-17
|
[
[
"Stimming",
"Alexios Balatsoukas",
""
],
[
"Liavas",
"Athanasios P.",
""
]
] |
In this work, we describe an LDPC code design framework for the unequal power two-user Gaussian multiple access channel using EXIT charts. We show that the sum-rate of the LDPC codes designed using our approach can get close to the maximal sum-rate of the two-user Gaussian multiple access channel. Moreover, we provide numerical simulation results that demonstrate the excellent finite-length performance of the designed LDPC codes.
|
2407.14023
|
Aakash Sorathiya
|
Aakash Sorathiya, Gouri Ginde
|
Towards Extracting Ethical Concerns-related Software Requirements from
App Reviews
| null | null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
As mobile applications become increasingly integral to our daily lives,
concerns about ethics have grown drastically. Users share their experiences,
report bugs, and request new features in application reviews, often
highlighting safety, privacy, and accountability concerns. Machine learning
techniques have been used in the past to identify these
ethical concerns. However, understanding the underlying reasons behind them and
extracting requirements that could address these concerns is crucial for safer
software solution development. Thus, we propose a novel approach that leverages
a knowledge graph (KG) model to extract software requirements from app reviews,
capturing contextual data related to ethical concerns. Our framework consists
of three main components: developing an ontology with relevant entities and
relations, extracting key entities from app reviews, and creating connections
between them. This study analyzes app reviews of the Uber mobile application (a
popular taxi/ride app) and presents the preliminary results from the proposed
solution. Initial results show that KG can effectively capture contextual data
related to software ethical concerns, the underlying reasons behind these
concerns, and the corresponding potential requirements.
|
[
{
"created": "Fri, 19 Jul 2024 04:50:32 GMT",
"version": "v1"
}
] |
2024-07-22
|
[
[
"Sorathiya",
"Aakash",
""
],
[
"Ginde",
"Gouri",
""
]
] |
As mobile applications become increasingly integral to our daily lives, concerns about ethics have grown drastically. Users share their experiences, report bugs, and request new features in application reviews, often highlighting safety, privacy, and accountability concerns. Machine learning techniques have been used in the past to identify these ethical concerns. However, understanding the underlying reasons behind them and extracting requirements that could address these concerns is crucial for safer software solution development. Thus, we propose a novel approach that leverages a knowledge graph (KG) model to extract software requirements from app reviews, capturing contextual data related to ethical concerns. Our framework consists of three main components: developing an ontology with relevant entities and relations, extracting key entities from app reviews, and creating connections between them. This study analyzes app reviews of the Uber mobile application (a popular taxi/ride app) and presents the preliminary results from the proposed solution. Initial results show that KG can effectively capture contextual data related to software ethical concerns, the underlying reasons behind these concerns, and the corresponding potential requirements.
|
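A toy illustration of the third framework component above (creating connections between extracted entities). The mini-ontology, the entities, and the review identifier are all invented for this sketch and are not taken from the paper's data or ontology.

```python
# Invented mini-ontology: which entity types may be linked by which
# relations. A privacy concern extracted from a hypothetical review is
# linked to the requirement that would address it.
ontology = {
    ("Concern", "raised_in", "Review"),
    ("Concern", "addressed_by", "Requirement"),
}

triples = [
    ("location tracking", "raised_in", "review#123"),
    ("location tracking", "addressed_by",
     "allow opting out of location sharing"),
]

# Store the KG as an adjacency of labeled edges.
kg = {}
for s, r, o in triples:
    kg.setdefault(s, []).append((r, o))

# Query: which requirements address the "location tracking" concern?
reqs = [o for r, o in kg["location tracking"] if r == "addressed_by"]
assert reqs == ["allow opting out of location sharing"]
```

Traversing from a concern node to its `addressed_by` neighbors is what turns the captured contextual data into candidate software requirements.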
2101.11704
|
Mahsa Shafaei
|
Mahsa Shafaei, Christos Smailis, Ioannis A. Kakadiaris, Thamar Solorio
|
A Case Study of Deep Learning Based Multi-Modal Methods for Predicting
the Age-Suitability Rating of Movie Trailers
| null | null | null | null |
cs.LG cs.MM cs.SD eess.AS eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we explore different approaches to combine modalities for the
problem of automated age-suitability rating of movie trailers. First, we
introduce a new dataset containing videos of movie trailers in English
downloaded from IMDB and YouTube, along with their corresponding
age-suitability rating labels. Secondly, we propose a multi-modal deep learning
pipeline addressing the movie trailer age suitability rating problem. This is
the first attempt to combine video, audio, and speech information for this
problem, and our experimental results show that multi-modal approaches
significantly outperform the best mono and bimodal models in this task.
|
[
{
"created": "Tue, 26 Jan 2021 17:15:35 GMT",
"version": "v1"
}
] |
2021-01-29
|
[
[
"Shafaei",
"Mahsa",
""
],
[
"Smailis",
"Christos",
""
],
[
"Kakadiaris",
"Ioannis A.",
""
],
[
"Solorio",
"Thamar",
""
]
] |
In this work, we explore different approaches to combine modalities for the problem of automated age-suitability rating of movie trailers. First, we introduce a new dataset containing videos of movie trailers in English downloaded from IMDB and YouTube, along with their corresponding age-suitability rating labels. Secondly, we propose a multi-modal deep learning pipeline addressing the movie trailer age suitability rating problem. This is the first attempt to combine video, audio, and speech information for this problem, and our experimental results show that multi-modal approaches significantly outperform the best mono and bimodal models in this task.
|
1701.03274
|
Jun Chen
|
Jun Chen, Chaokun Wang
|
Investigating the role of musical genre in human perception of music
stretching resistance
| null | null | null | null |
cs.MM cs.SD
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To stretch a music piece to a given length is a common demand in people's
daily lives, e.g., in audio-video synchronization and animation production.
However, it is not always guaranteed that the stretched music piece is
acceptable for a general audience, since music stretching suffers from people's
perceptual artefacts. Over-stretching a music piece will make it uncomfortable
for human psychoacoustic hearing. The research on music stretching resistance
attempts to estimate the maximum stretchability of music pieces to further
avoid over-stretch. It has been observed that musical genres can significantly
improve the accuracy of automatic estimation of music stretching resistance,
but how musical genres are related to music stretching resistance has never
been explained or studied in detail in the literature. In this paper, the
characteristics of music stretching resistance are compared across different
musical genres. It is found that music stretching resistance has strong
intra-genre cohesiveness and inter-genre discrepancies in the experiments.
Moreover, the ambiguity and the symmetry of music stretching resistance are
also observed in the experimental analysis. These findings lead to a new
measurement on the similarity between different musical genres based on their
music stretching resistance. In addition, the analysis of variance (ANOVA) also
supports the findings in this paper by verifying the significance of musical
genre in shaping music stretching resistance.
|
[
{
"created": "Thu, 12 Jan 2017 09:26:22 GMT",
"version": "v1"
}
] |
2017-01-13
|
[
[
"Chen",
"Jun",
""
],
[
"Wang",
"Chaokun",
""
]
] |
To stretch a music piece to a given length is a common demand in people's daily lives, e.g., in audio-video synchronization and animation production. However, it is not always guaranteed that the stretched music piece is acceptable for a general audience, since music stretching suffers from people's perceptual artefacts. Over-stretching a music piece will make it uncomfortable for human psychoacoustic hearing. The research on music stretching resistance attempts to estimate the maximum stretchability of music pieces to further avoid over-stretch. It has been observed that musical genres can significantly improve the accuracy of automatic estimation of music stretching resistance, but how musical genres are related to music stretching resistance has never been explained or studied in detail in the literature. In this paper, the characteristics of music stretching resistance are compared across different musical genres. It is found that music stretching resistance has strong intra-genre cohesiveness and inter-genre discrepancies in the experiments. Moreover, the ambiguity and the symmetry of music stretching resistance are also observed in the experimental analysis. These findings lead to a new measurement on the similarity between different musical genres based on their music stretching resistance. In addition, the analysis of variance (ANOVA) also supports the findings in this paper by verifying the significance of musical genre in shaping music stretching resistance.
|
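The one-way ANOVA mentioned above can be sketched numerically. The genre means, spread, and sample sizes below are invented for illustration (not the paper's measurements); the F statistic is computed from the usual between-group / within-group decomposition.

```python
import numpy as np

rng = np.random.default_rng(4)

# Three invented "genres" with different mean stretching resistance.
groups = [rng.normal(loc=mu, scale=0.1, size=30)
          for mu in (1.2, 1.5, 2.0)]

k = len(groups)
n = sum(len(g) for g in groups)
grand = np.mean(np.concatenate(groups))

ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
ss_within = sum(np.sum((g - np.mean(g)) ** 2) for g in groups)

# One-way ANOVA F statistic: between-group variance (k-1 dof) over
# within-group variance (n-k dof).
F = (ss_between / (k - 1)) / (ss_within / (n - k))
assert F > 10.0   # large F: strong inter-genre discrepancy
```

A large F with small within-group spread is exactly the "intra-genre cohesiveness, inter-genre discrepancy" pattern the paper reports; `scipy.stats.f_oneway` would yield the same statistic plus a p-value.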
1208.6269
|
Hadi Katebi
|
Hadi Katebi and Karem A. Sakallah and Igor L. Markov
|
Conflict Anticipation in the Search for Graph Automorphisms
|
15 pages, 9 Figures, 1 Table, Int'l Conf. on Logic for Programming,
Artificial Intelligence and Reasoning (LPAR)
|
H. Katebi, K. A. Sakallah and I. L. Markov, "Conflict Anticipation
in the Search for Graph Automorphisms" in Proc. Int'l Conf. on Logic for
Programming, Artificial Intelligence and Reasoning (LPAR), pp. 243-257,
Merida, Venezuela, 2012
| null | null |
cs.DS math.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Effective search for graph automorphisms allows identifying symmetries in
many discrete structures, ranging from chemical molecules to microprocessor
circuits. Using this type of structure can enhance visualization as well as
speed up computational optimization and verification. Competitive algorithms
for the graph automorphism problem are based on efficient partition refinement
augmented with group-theoretic pruning techniques. In this paper, we improve
prior algorithms for the graph automorphism problem by introducing simultaneous
refinement of multiple partitions, which enables the anticipation of future
conflicts in search and leads to significant pruning, reducing overall
runtimes. Empirically, we observe an exponential speedup for the family of
Miyazaki graphs, which have been shown to impede leading graph-automorphism
algorithms.
|
[
{
"created": "Thu, 30 Aug 2012 19:10:11 GMT",
"version": "v1"
}
] |
2012-08-31
|
[
[
"Katebi",
"Hadi",
""
],
[
"Sakallah",
"Karem A.",
""
],
[
"Markov",
"Igor L.",
""
]
] |
Effective search for graph automorphisms allows identifying symmetries in many discrete structures, ranging from chemical molecules to microprocessor circuits. Using this type of structure can enhance visualization as well as speed up computational optimization and verification. Competitive algorithms for the graph automorphism problem are based on efficient partition refinement augmented with group-theoretic pruning techniques. In this paper, we improve prior algorithms for the graph automorphism problem by introducing simultaneous refinement of multiple partitions, which enables the anticipation of future conflicts in search and leads to significant pruning, reducing overall runtimes. Empirically, we observe an exponential speedup for the family of Miyazaki graphs, which have been shown to impede leading graph-automorphism algorithms.
|
2302.10891
|
Matthieu NASTORG
|
Matthieu Nastorg (TAU, IFPEN), Michele Alessandro Bucci (TAU),
Thibault Faney (IFPEN), Jean-Marc Gratien (IFPEN), Guillaume Charpiat (TAU),
Marc Schoenauer (TAU)
|
An Implicit GNN Solver for Poisson-like problems
| null | null | null | null |
cs.LG cs.AI math.AP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents $\Psi$-GNN, a novel Graph Neural Network (GNN) approach
for solving the ubiquitous Poisson PDE problems with mixed boundary conditions.
By leveraging the Implicit Layer Theory, $\Psi$-GNN models an "infinitely" deep
network, thus avoiding the empirical tuning of the number of required Message
Passing layers to attain the solution. Its original architecture explicitly
takes into account the boundary conditions, a critical prerequisite for
physical applications, and is able to adapt to any initially provided solution.
$\Psi$-GNN is trained using a "physics-informed" loss, and the training process
is stable by design, and insensitive to its initialization. Furthermore, the
consistency of the approach is theoretically proven, and its flexibility and
generalization efficiency are experimentally demonstrated: the same learned
model can accurately handle unstructured meshes of various sizes, as well as
different boundary conditions. To the best of our knowledge, $\Psi$-GNN is the
first physics-informed GNN-based method that can handle various unstructured
domains, boundary conditions and initial solutions while also providing
convergence guarantees.
|
[
{
"created": "Mon, 6 Feb 2023 10:08:42 GMT",
"version": "v1"
},
{
"created": "Thu, 23 Feb 2023 14:55:41 GMT",
"version": "v2"
},
{
"created": "Tue, 26 Mar 2024 08:50:19 GMT",
"version": "v3"
}
] |
2024-03-27
|
[
[
"Nastorg",
"Matthieu",
"",
"TAU, IFPEN"
],
[
"Bucci",
"Michele Alessandro",
"",
"TAU"
],
[
"Faney",
"Thibault",
"",
"IFPEN"
],
[
"Gratien",
"Jean-Marc",
"",
"IFPEN"
],
[
"Charpiat",
"Guillaume",
"",
"TAU"
],
[
"Schoenauer",
"Marc",
"",
"TAU"
]
] |
This paper presents $\Psi$-GNN, a novel Graph Neural Network (GNN) approach for solving the ubiquitous Poisson PDE problems with mixed boundary conditions. By leveraging the Implicit Layer Theory, $\Psi$-GNN models an "infinitely" deep network, thus avoiding the empirical tuning of the number of required Message Passing layers to attain the solution. Its original architecture explicitly takes into account the boundary conditions, a critical prerequisite for physical applications, and is able to adapt to any initially provided solution. $\Psi$-GNN is trained using a "physics-informed" loss, and the training process is stable by design, and insensitive to its initialization. Furthermore, the consistency of the approach is theoretically proven, and its flexibility and generalization efficiency are experimentally demonstrated: the same learned model can accurately handle unstructured meshes of various sizes, as well as different boundary conditions. To the best of our knowledge, $\Psi$-GNN is the first physics-informed GNN-based method that can handle various unstructured domains, boundary conditions and initial solutions while also providing convergence guarantees.
|
2301.07390
|
Alberto De Marchi
|
Luca Sciullo, Alberto De Marchi, Angelo Trotta, Federico Montori,
Luciano Bononi, Marco Di Felice
|
Relativistic Digital Twin: Bringing the IoT to the Future
|
18 pages, 10 figures, 4 tables, 6 listings
|
Future Generation Computer Systems 153, 521-536 (2024)
|
10.1016/j.future.2023.12.016
| null |
cs.NI cs.LG math.OC
|
http://creativecommons.org/licenses/by/4.0/
|
Complex IoT ecosystems often require the usage of Digital Twins (DTs) of
their physical assets in order to perform predictive analytics and simulate
what-if scenarios. DTs are able to replicate IoT devices and adapt over time to
their behavioral changes. However, DTs in IoT are typically tailored to a
specific use case, without the possibility to seamlessly adapt to different
scenarios. Further, the fragmentation of IoT poses additional challenges on how
to deploy DTs in heterogeneous scenarios characterized by the usage of multiple
data formats and IoT network protocols. In this paper, we propose the
Relativistic Digital Twin (RDT) framework, through which we automatically
generate general-purpose DTs of IoT entities and tune their behavioral models
over time by constantly observing their real counterparts. The framework relies
on the object representation via the Web of Things (WoT), to offer a
standardized interface to each of the IoT devices as well as to their DTs. To
this purpose, we extended the W3C WoT standard in order to encompass the
concept of behavioral model and define it in the Thing Description (TD) through
a new vocabulary. Finally, we evaluated the RDT framework over two disjoint use
cases to assess its correctness and learning performance, i.e., the DT of a
simulated smart home scenario with the capability of forecasting the indoor
temperature, and the DT of a real-world drone with the capability of
forecasting its trajectory in an outdoor scenario. Experiments show that the
generated DT can estimate the behavior of its real counterpart after an
observation stage, regardless of the considered scenario.
|
[
{
"created": "Wed, 18 Jan 2023 09:37:05 GMT",
"version": "v1"
},
{
"created": "Fri, 8 Sep 2023 21:13:01 GMT",
"version": "v2"
},
{
"created": "Sat, 30 Dec 2023 11:12:13 GMT",
"version": "v3"
}
] |
2024-01-02
|
[
[
"Sciullo",
"Luca",
""
],
[
"De Marchi",
"Alberto",
""
],
[
"Trotta",
"Angelo",
""
],
[
"Montori",
"Federico",
""
],
[
"Bononi",
"Luciano",
""
],
[
"Di Felice",
"Marco",
""
]
] |
Complex IoT ecosystems often require the usage of Digital Twins (DTs) of their physical assets in order to perform predictive analytics and simulate what-if scenarios. DTs are able to replicate IoT devices and adapt over time to their behavioral changes. However, DTs in IoT are typically tailored to a specific use case, without the possibility to seamlessly adapt to different scenarios. Further, the fragmentation of IoT poses additional challenges on how to deploy DTs in heterogeneous scenarios characterized by the usage of multiple data formats and IoT network protocols. In this paper, we propose the Relativistic Digital Twin (RDT) framework, through which we automatically generate general-purpose DTs of IoT entities and tune their behavioral models over time by constantly observing their real counterparts. The framework relies on the object representation via the Web of Things (WoT), to offer a standardized interface to each of the IoT devices as well as to their DTs. To this purpose, we extended the W3C WoT standard in order to encompass the concept of behavioral model and define it in the Thing Description (TD) through a new vocabulary. Finally, we evaluated the RDT framework over two disjoint use cases to assess its correctness and learning performance, i.e., the DT of a simulated smart home scenario with the capability of forecasting the indoor temperature, and the DT of a real-world drone with the capability of forecasting its trajectory in an outdoor scenario. Experiments show that the generated DT can estimate the behavior of its real counterpart after an observation stage, regardless of the considered scenario.
|
2102.09150
|
Decky Aspandi
|
Decky Aspandi, Federico Sukno, Bj\"orn Schuller and Xavier Binefa
|
An Enhanced Adversarial Network with Combined Latent Features for
Spatio-Temporal Facial Affect Estimation in the Wild
|
Accepted Version on VISAPP 2021
| null |
10.5220/0010332001720181
| null |
cs.CV cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Affective Computing has recently attracted the attention of the research
community, due to its numerous applications in diverse areas. In this context,
the emergence of video-based data makes it possible to enrich the widely used
spatial features with the inclusion of temporal information. However, such
spatio-temporal modelling often results in very high-dimensional feature spaces
and large volumes of data, making training difficult and time consuming. This
paper addresses these shortcomings by proposing a novel model that efficiently
extracts both spatial and temporal features of the data by means of its
enhanced temporal modelling based on latent features. Our proposed model
consists of three major networks, coined Generator, Discriminator, and
Combiner, which are trained in an adversarial setting combined with curriculum
learning to enable our adaptive attention modules. In our experiments, we show
the effectiveness of our approach by reporting our competitive results on both
the AFEW-VA and SEWA datasets, suggesting that temporal modelling improves the
affect estimates both in qualitative and quantitative terms. Furthermore, we
find that the inclusion of attention mechanisms leads to the highest accuracy
improvements, as its weights seem to correlate well with the appearance of
facial movements, both in terms of temporal localisation and intensity.
Finally, we observe the sequence length of around 160\,ms to be the optimum one
for temporal modelling, which is consistent with other relevant findings
utilising similar lengths.
|
[
{
"created": "Thu, 18 Feb 2021 04:10:12 GMT",
"version": "v1"
}
] |
2021-02-19
|
[
[
"Aspandi",
"Decky",
""
],
[
"Sukno",
"Federico",
""
],
[
"Schuller",
"Björn",
""
],
[
"Binefa",
"Xavier",
""
]
] |
Affective Computing has recently attracted the attention of the research community, due to its numerous applications in diverse areas. In this context, the emergence of video-based data makes it possible to enrich the widely used spatial features with the inclusion of temporal information. However, such spatio-temporal modelling often results in very high-dimensional feature spaces and large volumes of data, making training difficult and time consuming. This paper addresses these shortcomings by proposing a novel model that efficiently extracts both spatial and temporal features of the data by means of its enhanced temporal modelling based on latent features. Our proposed model consists of three major networks, coined Generator, Discriminator, and Combiner, which are trained in an adversarial setting combined with curriculum learning to enable our adaptive attention modules. In our experiments, we show the effectiveness of our approach by reporting our competitive results on both the AFEW-VA and SEWA datasets, suggesting that temporal modelling improves the affect estimates both in qualitative and quantitative terms. Furthermore, we find that the inclusion of attention mechanisms leads to the highest accuracy improvements, as its weights seem to correlate well with the appearance of facial movements, both in terms of temporal localisation and intensity. Finally, we observe the sequence length of around 160\,ms to be the optimum one for temporal modelling, which is consistent with other relevant findings utilising similar lengths.
|
1501.05153
|
Nadeem Akhtar
|
Nadeem Akhtar, Yann Le Guyadec, Flavio Oquendo
|
Formal requirement and architecture specifications of a multi-agent
robotic system
|
6 pages
|
Journal of Computing, eISSN 2151-9617. Volume 4, Issue 4, April
2012
| null | null |
cs.SE cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
One of the most challenging tasks in specification engineering for a
multi-agent robotic system is to formally specify and architect the system,
especially as a multi-agent robotic system is concurrent, involves concurrent
processing, and often operates in a dynamic environment. Formal requirement and
architecture specifications, along with step-wise refinement from abstract to
concrete concepts, play a major role in formalizing the system. This paper
proposes the formal requirement and architecture specifications aspects of an
approach that supports analysis with respect to functional as well as
non-functional properties by step-wise refinement from abstract to concrete
specifications and formal architecture definition. These formal specifications
have been exemplified by a case study. As formal specification techniques are
getting more mature, our capability to build a correct complex multi-agent
robotic system also grows quickly.
|
[
{
"created": "Wed, 21 Jan 2015 12:32:42 GMT",
"version": "v1"
}
] |
2015-01-22
|
[
[
"Akhtar",
"Nadeem",
""
],
[
"Guyadec",
"Yann Le",
""
],
[
"Oquendo",
"Flavio",
""
]
] |
One of the most challenging tasks in specification engineering for a multi-agent robotic system is to formally specify and architect the system, especially as a multi-agent robotic system is concurrent, involves concurrent processing, and often operates in a dynamic environment. Formal requirement and architecture specifications, along with step-wise refinement from abstract to concrete concepts, play a major role in formalizing the system. This paper proposes the formal requirement and architecture specifications aspects of an approach that supports analysis with respect to functional as well as non-functional properties by step-wise refinement from abstract to concrete specifications and formal architecture definition. These formal specifications have been exemplified by a case study. As formal specification techniques are getting more mature, our capability to build a correct complex multi-agent robotic system also grows quickly.
|
2207.01706
|
Raja Karmakar
|
Raja Karmakar, Georges Kaddoum, Samiran Chattopadhyay
|
Mobility Management in 5G and Beyond: A Novel Smart Handover with
Adaptive Time-to-Trigger and Hysteresis Margin
|
16 pages
|
IEEE Transactions on Mobile Computing, 2022
| null | null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
The 5th Generation (5G) New Radio (NR) and beyond technologies will support
enhanced mobile broadband, very low latency communications, and huge numbers of
mobile devices. Therefore, for very high speed users, seamless mobility needs
to be maintained during the migration from one cell to another in the handover.
Due to the presence of a massive number of mobile devices, the management of
the high mobility of a dense network becomes crucial. Moreover, a dynamic
adaptation is required for the Time-to-Trigger (TTT) and hysteresis margin,
which significantly impact the handover latency and overall throughput.
Therefore, in this paper, we propose an online learning-based mechanism, known
as Learning-based Intelligent Mobility Management (LIM2), for mobility
management in 5G and beyond, with an intelligent adaptation of the TTT and
hysteresis values. LIM2 uses a Kalman filter to predict the future signal
quality of the serving and neighbor cells, selects the target cell for the
handover using state-action-reward-state-action (SARSA)-based reinforcement
learning, and adapts the TTT and hysteresis using the epsilon-greedy policy. We
implement a prototype of the LIM2 in NS-3 and extensively analyze its
performance, where it is observed that the LIM2 algorithm can significantly
improve the handover operation in very high speed mobility scenarios.
|
[
{
"created": "Mon, 4 Jul 2022 20:12:33 GMT",
"version": "v1"
}
] |
2022-07-06
|
[
[
"Karmakar",
"Raja",
""
],
[
"Kaddoum",
"Georges",
""
],
[
"Chattopadhyay",
"Samiran",
""
]
] |
The 5th Generation (5G) New Radio (NR) and beyond technologies will support enhanced mobile broadband, very low latency communications, and huge numbers of mobile devices. Therefore, for very high speed users, seamless mobility needs to be maintained during the migration from one cell to another in the handover. Due to the presence of a massive number of mobile devices, the management of the high mobility of a dense network becomes crucial. Moreover, a dynamic adaptation is required for the Time-to-Trigger (TTT) and hysteresis margin, which significantly impact the handover latency and overall throughput. Therefore, in this paper, we propose an online learning-based mechanism, known as Learning-based Intelligent Mobility Management (LIM2), for mobility management in 5G and beyond, with an intelligent adaptation of the TTT and hysteresis values. LIM2 uses a Kalman filter to predict the future signal quality of the serving and neighbor cells, selects the target cell for the handover using state-action-reward-state-action (SARSA)-based reinforcement learning, and adapts the TTT and hysteresis using the epsilon-greedy policy. We implement a prototype of the LIM2 in NS-3 and extensively analyze its performance, where it is observed that the LIM2 algorithm can significantly improve the handover operation in very high speed mobility scenarios.
|
2402.16237
|
Giang Ngo
|
Giang Ngo, Dang Nguyen, Dat Phan-Trong, Sunil Gupta
|
Active Level Set Estimation for Continuous Search Space with Theoretical
Guarantee
| null | null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
A common problem encountered in many real-world applications is level set
estimation where the goal is to determine the region in the function domain
where the function is above or below a given threshold. When the function is
black-box and expensive to evaluate, the level sets need to be found in a
minimum set of function evaluations. Existing methods often assume a discrete
search space with a finite set of data points for evaluating the function and
estimating the level sets. When applied to a continuous search space, these
methods often need to first discretize the space which leads to poor results
while needing high computational time. While some methods cater for the
continuous setting, they still lack a proper guarantee for theoretical
convergence. To address this problem, we propose a novel algorithm that does
not need any discretization and can directly work in continuous search spaces.
Our method suggests points by constructing an acquisition function that is
defined as a measure of confidence of the function being higher or lower than
the given threshold. A theoretical analysis for the convergence of the
algorithm to an accurate solution is provided. On multiple synthetic and
real-world datasets, our algorithm successfully outperforms state-of-the-art
methods.
|
[
{
"created": "Mon, 26 Feb 2024 01:46:56 GMT",
"version": "v1"
}
] |
2024-02-27
|
[
[
"Ngo",
"Giang",
""
],
[
"Nguyen",
"Dang",
""
],
[
"Phan-Trong",
"Dat",
""
],
[
"Gupta",
"Sunil",
""
]
] |
A common problem encountered in many real-world applications is level set estimation where the goal is to determine the region in the function domain where the function is above or below a given threshold. When the function is black-box and expensive to evaluate, the level sets need to be found in a minimum set of function evaluations. Existing methods often assume a discrete search space with a finite set of data points for evaluating the function and estimating the level sets. When applied to a continuous search space, these methods often need to first discretize the space which leads to poor results while needing high computational time. While some methods cater for the continuous setting, they still lack a proper guarantee for theoretical convergence. To address this problem, we propose a novel algorithm that does not need any discretization and can directly work in continuous search spaces. Our method suggests points by constructing an acquisition function that is defined as a measure of confidence of the function being higher or lower than the given threshold. A theoretical analysis for the convergence of the algorithm to an accurate solution is provided. On multiple synthetic and real-world datasets, our algorithm successfully outperforms state-of-the-art methods.
|
2010.13407
|
Dominique Vaufreydaz
|
Niranjan Deshpande (CHROMA), Dominique Vaufreydaz (LIG), Anne
Spalanzani (CHROMA)
|
Behavioral decision-making for urban autonomous driving in the presence
of pedestrians using Deep Recurrent Q-Network
| null |
16th International Conference on Control, Automation, Robotics and
Vision (ICARCV), Dec 2020, Shenzhen, China
| null | null |
cs.NE cs.RO stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Decision making for autonomous driving in urban environments is challenging
due to the complexity of the road structure and the uncertainty in the behavior
of diverse road users. Traditional methods consist of manually designed rules
as the driving policy, which require expert domain knowledge, are difficult to
generalize and might give sub-optimal results as the environment gets complex.
Whereas, using reinforcement learning, optimal driving policy could be learned
and improved automatically through several interactions with the environment.
However, current research in the field of reinforcement learning for autonomous
driving is mainly focused on highway setup with little to no emphasis on urban
environments. In this work, a deep reinforcement learning based decision-making
approach for high-level driving behavior is proposed for urban environments in
the presence of pedestrians. For this, the use of Deep Recurrent Q-Network
(DRQN) is explored, a method combining the state-of-the-art Deep Q-Network (DQN)
with a Long Short-Term Memory (LSTM) layer, helping the agent gain a memory
of the environment. A 3-D state representation is designed as the input
combined with a well defined reward function to train the agent for learning an
appropriate behavior policy in a real-world like urban simulator. The proposed
method is evaluated for dense urban scenarios and compared with a rule-based
approach and results show that the proposed DRQN based driving behavior
decision maker outperforms the rule-based approach.
|
[
{
"created": "Mon, 26 Oct 2020 08:08:06 GMT",
"version": "v1"
}
] |
2020-10-27
|
[
[
"Deshpande",
"Niranjan",
"",
"CHROMA"
],
[
"Vaufreydaz",
"Dominique",
"",
"LIG"
],
[
"Spalanzani",
"Anne",
"",
"CHROMA"
]
] |
Decision making for autonomous driving in urban environments is challenging due to the complexity of the road structure and the uncertainty in the behavior of diverse road users. Traditional methods consist of manually designed rules as the driving policy, which require expert domain knowledge, are difficult to generalize and might give sub-optimal results as the environment gets complex. Whereas, using reinforcement learning, optimal driving policy could be learned and improved automatically through several interactions with the environment. However, current research in the field of reinforcement learning for autonomous driving is mainly focused on highway setup with little to no emphasis on urban environments. In this work, a deep reinforcement learning based decision-making approach for high-level driving behavior is proposed for urban environments in the presence of pedestrians. For this, the use of Deep Recurrent Q-Network (DRQN) is explored, a method combining the state-of-the-art Deep Q-Network (DQN) with a Long Short-Term Memory (LSTM) layer, helping the agent gain a memory of the environment. A 3-D state representation is designed as the input combined with a well defined reward function to train the agent for learning an appropriate behavior policy in a real-world like urban simulator. The proposed method is evaluated for dense urban scenarios and compared with a rule-based approach and results show that the proposed DRQN based driving behavior decision maker outperforms the rule-based approach.
|
1510.00857
|
Ramazan Gokberk Cinbis
|
Ramazan Gokberk Cinbis, Jakob Verbeek, Cordelia Schmid
|
Approximate Fisher Kernels of non-iid Image Models for Image
Categorization
|
IEEE Transactions on Pattern Analysis and Machine Intelligence, in
press, 2015
|
IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 38,
no. 6, pp. 1084-1098, June 1 2016
|
10.1109/TPAMI.2015.2484342
| null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The bag-of-words (BoW) model treats images as sets of local descriptors and
represents them by visual word histograms. The Fisher vector (FV)
representation extends BoW, by considering the first and second order
statistics of local descriptors. In both representations local descriptors are
assumed to be identically and independently distributed (iid), which is a poor
assumption from a modeling perspective. It has been experimentally observed
that the performance of BoW and FV representations can be improved by employing
discounting transformations such as power normalization. In this paper, we
introduce non-iid models by treating the model parameters as latent variables
which are integrated out, rendering all local regions dependent. Using the
Fisher kernel principle we encode an image by the gradient of the data
log-likelihood w.r.t. the model hyper-parameters. Our models naturally generate
discounting effects in the representations; suggesting that such
transformations have proven successful because they closely correspond to the
representations obtained for non-iid models. To enable tractable computation,
we rely on variational free-energy bounds to learn the hyper-parameters and to
compute approximate Fisher kernels. Our experimental evaluation results
validate that our models lead to performance improvements comparable to using
power normalization, as employed in state-of-the-art feature aggregation
methods.
|
[
{
"created": "Sat, 3 Oct 2015 19:35:38 GMT",
"version": "v1"
}
] |
2016-05-30
|
[
[
"Cinbis",
"Ramazan Gokberk",
""
],
[
"Verbeek",
"Jakob",
""
],
[
"Schmid",
"Cordelia",
""
]
] |
The bag-of-words (BoW) model treats images as sets of local descriptors and represents them by visual word histograms. The Fisher vector (FV) representation extends BoW, by considering the first and second order statistics of local descriptors. In both representations local descriptors are assumed to be identically and independently distributed (iid), which is a poor assumption from a modeling perspective. It has been experimentally observed that the performance of BoW and FV representations can be improved by employing discounting transformations such as power normalization. In this paper, we introduce non-iid models by treating the model parameters as latent variables which are integrated out, rendering all local regions dependent. Using the Fisher kernel principle we encode an image by the gradient of the data log-likelihood w.r.t. the model hyper-parameters. Our models naturally generate discounting effects in the representations; suggesting that such transformations have proven successful because they closely correspond to the representations obtained for non-iid models. To enable tractable computation, we rely on variational free-energy bounds to learn the hyper-parameters and to compute approximate Fisher kernels. Our experimental evaluation results validate that our models lead to performance improvements comparable to using power normalization, as employed in state-of-the-art feature aggregation methods.
|
2406.00034
|
Yinghao Zhu
|
Tianlong Wang, Xianfeng Jiao, Yifan He, Zhongzhi Chen, Yinghao Zhu, Xu
Chu, Junyi Gao, Yasha Wang, Liantao Ma
|
Adaptive Activation Steering: A Tuning-Free LLM Truthfulness Improvement
Method for Diverse Hallucinations Categories
|
arXiv admin note: text overlap with arXiv:2402.17811
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent studies have indicated that Large Language Models (LLMs) harbor an
inherent understanding of truthfulness, yet often fail to express it fully and
instead generate false statements. This gap between "knowing" and "telling"
poses a challenge for ensuring the truthfulness of generated content. To address
this, we introduce Adaptive Activation Steering (ACT), a tuning-free method that
adaptively shifts the LLM's activations in the "truthful" direction during inference.
ACT addresses diverse categories of hallucinations by utilizing diverse
steering vectors and adjusting the steering intensity adaptively. Applied as an
add-on across various models, ACT significantly improves truthfulness in LLaMA
($\uparrow$ 142\%), LLaMA2 ($\uparrow$ 24\%), Alpaca ($\uparrow$ 36\%), Vicuna
($\uparrow$ 28\%), and LLaMA2-Chat ($\uparrow$ 19\%). Furthermore, we verify
ACT's scalability across larger models (13B, 33B, 65B), underscoring the
adaptability of ACT to large-scale language models.
|
[
{
"created": "Sun, 26 May 2024 21:39:53 GMT",
"version": "v1"
}
] |
2024-06-06
|
[
[
"Wang",
"Tianlong",
""
],
[
"Jiao",
"Xianfeng",
""
],
[
"He",
"Yifan",
""
],
[
"Chen",
"Zhongzhi",
""
],
[
"Zhu",
"Yinghao",
""
],
[
"Chu",
"Xu",
""
],
[
"Gao",
"Junyi",
""
],
[
"Wang",
"Yasha",
""
],
[
"Ma",
"Liantao",
""
]
] |
Recent studies have indicated that Large Language Models (LLMs) harbor an inherent understanding of truthfulness, yet often fail to express it fully and instead generate false statements. This gap between "knowing" and "telling" poses a challenge for ensuring the truthfulness of generated content. To address this, we introduce Adaptive Activation Steering (ACT), a tuning-free method that adaptively shifts the LLM's activations in the "truthful" direction during inference. ACT addresses diverse categories of hallucinations by utilizing diverse steering vectors and adjusting the steering intensity adaptively. Applied as an add-on across various models, ACT significantly improves truthfulness in LLaMA ($\uparrow$ 142\%), LLaMA2 ($\uparrow$ 24\%), Alpaca ($\uparrow$ 36\%), Vicuna ($\uparrow$ 28\%), and LLaMA2-Chat ($\uparrow$ 19\%). Furthermore, we verify ACT's scalability across larger models (13B, 33B, 65B), underscoring the adaptability of ACT to large-scale language models.
|
2103.00738
|
Aoran Xiao
|
Aoran Xiao, Xiaofei Yang, Shijian Lu, Dayan Guan and Jiaxing Huang
|
FPS-Net: A Convolutional Fusion Network for Large-Scale LiDAR Point
Cloud Segmentation
|
20 pages, 7 figures
| null |
10.1016/j.isprsjprs.2021.04.011
| null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Scene understanding based on LiDAR point cloud is an essential task for
autonomous cars to drive safely, which often employs spherical projection to
map 3D point cloud into multi-channel 2D images for semantic segmentation. Most
existing methods simply stack different point attributes/modalities (e.g.
coordinates, intensity, depth, etc.) as image channels to increase information
capacity, but ignore distinct characteristics of point attributes in different
image channels. We design FPS-Net, a convolutional fusion network that exploits
the uniqueness and discrepancy among the projected image channels for optimal
point cloud segmentation. FPS-Net adopts an encoder-decoder structure. Instead
of simply stacking multiple channel images as a single input, we group them
into different modalities to first learn modality-specific features separately
and then map the learned features into a common high-dimensional feature space
for pixel-level fusion and learning. Specifically, we design a residual dense
block with multiple receptive fields as a building block in the encoder which
preserves detailed information in each modality and learns hierarchical
modality-specific and fused features effectively. In the FPS-Net decoder, we
use a recurrent convolution block likewise to hierarchically decode fused
features into output space for pixel-level classification. Extensive
experiments conducted on two widely adopted point cloud datasets show that
FPS-Net achieves superior semantic segmentation as compared with
state-of-the-art projection-based methods. In addition, the proposed modality
fusion idea is compatible with typical projection-based methods and can be
incorporated into them with consistent performance improvements.
|
[
{
"created": "Mon, 1 Mar 2021 04:08:28 GMT",
"version": "v1"
}
] |
2021-07-19
|
[
[
"Xiao",
"Aoran",
""
],
[
"Yang",
"Xiaofei",
""
],
[
"Lu",
"Shijian",
""
],
[
"Guan",
"Dayan",
""
],
[
"Huang",
"Jiaxing",
""
]
] |
Scene understanding based on LiDAR point cloud is an essential task for autonomous cars to drive safely, which often employs spherical projection to map 3D point cloud into multi-channel 2D images for semantic segmentation. Most existing methods simply stack different point attributes/modalities (e.g. coordinates, intensity, depth, etc.) as image channels to increase information capacity, but ignore distinct characteristics of point attributes in different image channels. We design FPS-Net, a convolutional fusion network that exploits the uniqueness and discrepancy among the projected image channels for optimal point cloud segmentation. FPS-Net adopts an encoder-decoder structure. Instead of simply stacking multiple channel images as a single input, we group them into different modalities to first learn modality-specific features separately and then map the learned features into a common high-dimensional feature space for pixel-level fusion and learning. Specifically, we design a residual dense block with multiple receptive fields as a building block in the encoder which preserves detailed information in each modality and learns hierarchical modality-specific and fused features effectively. In the FPS-Net decoder, we use a recurrent convolution block likewise to hierarchically decode fused features into output space for pixel-level classification. Extensive experiments conducted on two widely adopted point cloud datasets show that FPS-Net achieves superior semantic segmentation as compared with state-of-the-art projection-based methods. In addition, the proposed modality fusion idea is compatible with typical projection-based methods and can be incorporated into them with consistent performance improvements.
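The spherical projection the abstract refers to maps each LiDAR point to a pixel via its yaw and pitch angles. The sketch below uses the common formulation; the image size and field-of-view bounds are assumed placeholder values, not taken from the paper.

```python
import math

# Minimal sketch of spherical projection: map a LiDAR point (x, y, z) onto
# a W x H range image. W, H, and the vertical field of view are assumptions.

def spherical_project(x, y, z, W=2048, H=64,
                      fov_up=math.radians(3.0), fov_down=math.radians(-25.0)):
    r = math.sqrt(x * x + y * y + z * z)
    yaw = math.atan2(y, x)
    pitch = math.asin(z / r)
    u = 0.5 * (1.0 - yaw / math.pi) * W                        # column from yaw
    v = (1.0 - (pitch - fov_down) / (fov_up - fov_down)) * H   # row from pitch
    return min(W - 1, max(0, int(u))), min(H - 1, max(0, int(v)))

col, row = spherical_project(10.0, 5.0, -1.0)
```

Each projected pixel then carries the point's attributes (coordinates, intensity, depth) as channels, which is exactly the multi-modal input FPS-Net groups and fuses.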
|
2407.20502
|
Yeqing Shen
|
Yeqing Shen, Shang Li and Kun Song
|
Restoring Real-World Degraded Events Improves Deblurring Quality
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Due to its high speed and low latency, DVS is frequently employed in motion
deblurring. Ideally, high-quality events would adeptly capture intricate motion
information. However, real-world events are generally degraded, thereby
introducing significant artifacts into the deblurred results. In response to
this challenge, we model the degradation of events and propose RDNet to improve
the quality of image deblurring. Specifically, we first analyze the mechanisms
underlying degradation and simulate paired events based on that. These paired
events are then fed into the first stage of the RDNet for training the
restoration model. The events restored in this stage serve as a guide for the
second-stage deblurring process. To better assess the deblurring performance of
different methods on real-world degraded events, we present a new real-world
dataset named DavisMCR. This dataset incorporates events with diverse
degradation levels, collected by manipulating environmental brightness and
target object contrast. Our experiments are conducted on synthetic datasets
(GOPRO), real-world datasets (REBlur), and the proposed dataset (DavisMCR). The
results demonstrate that RDNet outperforms classical event denoising methods in
event restoration. Furthermore, RDNet exhibits better performance in deblurring
tasks compared to state-of-the-art methods. DavisMCR is available at
https://github.com/Yeeesir/DVS_RDNet.
|
[
{
"created": "Tue, 30 Jul 2024 02:29:59 GMT",
"version": "v1"
}
] |
2024-07-31
|
[
[
"Shen",
"Yeqing",
""
],
[
"Li",
"Shang",
""
],
[
"Song",
"Kun",
""
]
] |
Due to its high speed and low latency, DVS is frequently employed in motion deblurring. Ideally, high-quality events would adeptly capture intricate motion information. However, real-world events are generally degraded, thereby introducing significant artifacts into the deblurred results. In response to this challenge, we model the degradation of events and propose RDNet to improve the quality of image deblurring. Specifically, we first analyze the mechanisms underlying degradation and simulate paired events based on that. These paired events are then fed into the first stage of the RDNet for training the restoration model. The events restored in this stage serve as a guide for the second-stage deblurring process. To better assess the deblurring performance of different methods on real-world degraded events, we present a new real-world dataset named DavisMCR. This dataset incorporates events with diverse degradation levels, collected by manipulating environmental brightness and target object contrast. Our experiments are conducted on synthetic datasets (GOPRO), real-world datasets (REBlur), and the proposed dataset (DavisMCR). The results demonstrate that RDNet outperforms classical event denoising methods in event restoration. Furthermore, RDNet exhibits better performance in deblurring tasks compared to state-of-the-art methods. DavisMCR is available at https://github.com/Yeeesir/DVS_RDNet.
|
2212.02640
|
Quentin Stout
|
Quentin Stout
|
Low Power Mesh Algorithms for Image Problems
| null | null | null | null |
cs.DS
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
We analyze a physically motivated fine-grained mesh-connected computer model,
assuming that a word of information takes a fixed area and that it takes unit
time and unit energy to move a word unit distance. This is a representation of
computing on a chip with myriad tiny processors arranged as a mesh. While most
mesh algorithms assume all processors are active at all times, we give
algorithms that have only a few processors on at any one time, which reduces
the power required. We apply this approach to basic problems involving images,
showing that there can be dramatic reductions in the peak power with only
small, if any, changes in the time required. We also show that these algorithms
give a more efficient way to utilize power when more power is available.
|
[
{
"created": "Mon, 5 Dec 2022 22:53:57 GMT",
"version": "v1"
}
] |
2022-12-07
|
[
[
"Stout",
"Quentin",
""
]
] |
We analyze a physically motivated fine-grained mesh-connected computer model, assuming that a word of information takes a fixed area and that it takes unit time and unit energy to move a word unit distance. This is a representation of computing on a chip with myriad tiny processors arranged as a mesh. While most mesh algorithms assume all processors are active at all times, we give algorithms that have only a few processors on at any one time, which reduces the power required. We apply this approach to basic problems involving images, showing that there can be dramatic reductions in the peak power with only small, if any, changes in the time required. We also show that these algorithms give a more efficient way to utilize power when more power is available.
|
2309.12642
|
Hao Zhu
|
Hao Zhu, Fengyi Liu, Qi Zhang, Xun Cao, Zhan Ma
|
RHINO: Regularizing the Hash-based Implicit Neural Representation
|
17 pages, 11 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The use of Implicit Neural Representation (INR) through a hash-table has
demonstrated impressive effectiveness and efficiency in characterizing
intricate signals. However, current state-of-the-art methods exhibit
insufficient regularization, often yielding unreliable and noisy results during
interpolations. We find that this issue stems from broken gradient flow between
input coordinates and indexed hash-keys, where the chain rule attempts to model
discrete hash-keys, rather than the continuous coordinates. To tackle this
concern, we introduce RHINO, in which a continuous analytical function is
incorporated to facilitate regularization by connecting the input coordinate
and the network additionally without modifying the architecture of current
hash-based INRs. This connection ensures a seamless backpropagation of
gradients from the network's output back to the input coordinates, thereby
enhancing regularization. Our experimental results not only showcase the
broadened regularization capability across different hash-based INRs like DINER
and Instant NGP, but also across a variety of tasks such as image fitting,
representation of signed distance functions, and optimization of 5D static / 6D
dynamic neural radiance fields. Notably, RHINO outperforms current
state-of-the-art techniques in both quality and speed, affirming its
superiority.
|
[
{
"created": "Fri, 22 Sep 2023 06:20:41 GMT",
"version": "v1"
}
] |
2023-09-25
|
[
[
"Zhu",
"Hao",
""
],
[
"Liu",
"Fengyi",
""
],
[
"Zhang",
"Qi",
""
],
[
"Cao",
"Xun",
""
],
[
"Ma",
"Zhan",
""
]
] |
The use of Implicit Neural Representation (INR) through a hash-table has demonstrated impressive effectiveness and efficiency in characterizing intricate signals. However, current state-of-the-art methods exhibit insufficient regularization, often yielding unreliable and noisy results during interpolations. We find that this issue stems from broken gradient flow between input coordinates and indexed hash-keys, where the chain rule attempts to model discrete hash-keys, rather than the continuous coordinates. To tackle this concern, we introduce RHINO, in which a continuous analytical function is incorporated to facilitate regularization by connecting the input coordinate and the network additionally without modifying the architecture of current hash-based INRs. This connection ensures a seamless backpropagation of gradients from the network's output back to the input coordinates, thereby enhancing regularization. Our experimental results not only showcase the broadened regularization capability across different hash-based INRs like DINER and Instant NGP, but also across a variety of tasks such as image fitting, representation of signed distance functions, and optimization of 5D static / 6D dynamic neural radiance fields. Notably, RHINO outperforms current state-of-the-art techniques in both quality and speed, affirming its superiority.
|
2405.11311
|
Tianxin Zhou
|
Tianxin Zhou, Xiang Li, Haibing Lu
|
A Dual Power Grid Cascading Failure Model for the Vulnerability Analysis
| null | null | null | null |
cs.LG cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Among attacks against the power grid, one of the most effective is an attack
on transmission lines that triggers large cascading failures. Hence, the
problem of locating the most critical or vulnerable transmission lines for a
Power Grid Cascading Failure (PGCF) has drawn much attention from the research
community. There exist many deterministic solutions and stochastic
approximation algorithms aiming to analyze power grid vulnerability. However,
it has been challenging to reveal the correlations between transmission lines
so as to identify the critical ones. In this paper, we propose a novel
approach that learns such correlations via an attention mechanism, inspired by
Transformer-based models originally designed to learn the correlations of
words in sentences. Multiple modifications and adjustments are proposed so
that the attention mechanism produces an informative correlation matrix, the
Attention Matrix. With the Attention Ranking algorithm, we are able to
identify the most critical lines. The proposed Dual PGCF model provides a
novel and effective analysis that improves power grid resilience against
cascading failure, as demonstrated by extensive experimental results.
|
[
{
"created": "Sat, 18 May 2024 15:04:44 GMT",
"version": "v1"
}
] |
2024-05-21
|
[
[
"Zhou",
"Tianxin",
""
],
[
"Li",
"Xiang",
""
],
[
"Lu",
"Haibing",
""
]
] |
Among attacks against the power grid, one of the most effective is an attack on transmission lines that triggers large cascading failures. Hence, the problem of locating the most critical or vulnerable transmission lines for a Power Grid Cascading Failure (PGCF) has drawn much attention from the research community. There exist many deterministic solutions and stochastic approximation algorithms aiming to analyze power grid vulnerability. However, it has been challenging to reveal the correlations between transmission lines so as to identify the critical ones. In this paper, we propose a novel approach that learns such correlations via an attention mechanism, inspired by Transformer-based models originally designed to learn the correlations of words in sentences. Multiple modifications and adjustments are proposed so that the attention mechanism produces an informative correlation matrix, the Attention Matrix. With the Attention Ranking algorithm, we are able to identify the most critical lines. The proposed Dual PGCF model provides a novel and effective analysis that improves power grid resilience against cascading failure, as demonstrated by extensive experimental results.
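The attention-matrix idea borrowed from Transformers is, at its core, scaled dot-product attention; the sketch below computes such a matrix for toy "line feature" vectors. The 2-dimensional features are made-up placeholders, not anything from the paper.

```python
import math

# Generic scaled dot-product attention in miniature: the correlation matrix
# A = softmax(Q K^T / sqrt(d)), computed row by row. Inputs are toy vectors.

def attention_matrix(Q, K):
    d = len(Q[0])
    scores = [[sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
              for q in Q]
    out = []
    for row in scores:
        m = max(row)                      # subtract max for numerical stability
        exps = [math.exp(s - m) for s in row]
        z = sum(exps)
        out.append([e / z for e in exps])
    return out

A = attention_matrix([[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]])
```

Each row of `A` is a probability distribution over the keys, which is what makes it readable as a correlation matrix over transmission lines.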
|
2104.06599
|
Guanghui Qin
|
Guanghui Qin, Jason Eisner
|
Learning How to Ask: Querying LMs with Mixtures of Soft Prompts
|
NAACL-HLT 2021 camera-ready
| null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Natural-language prompts have recently been used to coax pretrained language
models into performing other AI tasks, using a fill-in-the-blank paradigm
(Petroni et al., 2019) or a few-shot extrapolation paradigm (Brown et al.,
2020). For example, language models retain factual knowledge from their
training corpora that can be extracted by asking them to "fill in the blank" in
a sentential prompt. However, where does this prompt come from? We explore the
idea of learning prompts by gradient descent -- either fine-tuning prompts
taken from previous work, or starting from random initialization. Our prompts
consist of "soft words," i.e., continuous vectors that are not necessarily word
type embeddings from the language model. Furthermore, for each task, we
optimize a mixture of prompts, learning which prompts are most effective and
how to ensemble them. Across multiple English LMs and tasks, our approach
hugely outperforms previous methods, showing that the implicit factual
knowledge in language models was previously underestimated. Moreover, this
knowledge is cheap to elicit: random initialization is nearly as good as
informed initialization.
|
[
{
"created": "Wed, 14 Apr 2021 02:56:14 GMT",
"version": "v1"
}
] |
2021-04-15
|
[
[
"Qin",
"Guanghui",
""
],
[
"Eisner",
"Jason",
""
]
] |
Natural-language prompts have recently been used to coax pretrained language models into performing other AI tasks, using a fill-in-the-blank paradigm (Petroni et al., 2019) or a few-shot extrapolation paradigm (Brown et al., 2020). For example, language models retain factual knowledge from their training corpora that can be extracted by asking them to "fill in the blank" in a sentential prompt. However, where does this prompt come from? We explore the idea of learning prompts by gradient descent -- either fine-tuning prompts taken from previous work, or starting from random initialization. Our prompts consist of "soft words," i.e., continuous vectors that are not necessarily word type embeddings from the language model. Furthermore, for each task, we optimize a mixture of prompts, learning which prompts are most effective and how to ensemble them. Across multiple English LMs and tasks, our approach hugely outperforms previous methods, showing that the implicit factual knowledge in language models was previously underestimated. Moreover, this knowledge is cheap to elicit: random initialization is nearly as good as informed initialization.
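The "mixture of prompts" ensemble can be illustrated numerically: each prompt scores a candidate answer, and learned mixture logits weight the prompts via a softmax. All numbers below are illustrative assumptions, not values from the paper.

```python
import math

# Toy sketch of ensembling a mixture of prompts: learned mixture logits are
# softmax-normalized into weights over per-prompt scores (all values made up).

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def mixture_score(prompt_scores, mixture_logits):
    weights = softmax(mixture_logits)
    return sum(w * s for w, s in zip(weights, prompt_scores))

score = mixture_score([0.9, 0.4, 0.7], [2.0, 0.0, 1.0])
```

In the paper both the soft prompt vectors and the mixture logits are trained by gradient descent; here only the weighted combination is shown.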
|
1603.07692
|
Diego Klabjan
|
Taeheon Jeong, Diego Klabjan, Justin Starren
|
Predictive Analytics Using Smartphone Sensors for Depressive Episodes
|
HIAI 2016, Expanding the Boundaries of Health Informatics using AI,
Phoenix, AZ
| null | null | null |
cs.CY cs.HC stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The behaviors of patients with depression are usually difficult to predict
because the patients demonstrate the symptoms of a depressive episode without a
warning at unexpected times. The goal of this research is to build algorithms
that detect signals of such unusual moments so that doctors can be proactive in
approaching already diagnosed patients before they fall into depression. Each
patient is equipped with a smartphone with the capability to track its sensors.
We first find the home location of a patient, which is then augmented with
other sensor data to identify sleep patterns and select communication patterns.
The algorithms require two to three weeks of training data to build standard
patterns, which are considered normal behaviors; and then, the methods identify
any anomalies in day-to-day data readings of sensors. Four smartphone sensors,
including the accelerometer, the gyroscope, the location probe and the
communication log probe are used for anomaly detection in sleeping and
communication patterns.
|
[
{
"created": "Thu, 24 Mar 2016 18:14:43 GMT",
"version": "v1"
}
] |
2016-03-25
|
[
[
"Jeong",
"Taeheon",
""
],
[
"Klabjan",
"Diego",
""
],
[
"Starren",
"Justin",
""
]
] |
The behaviors of patients with depression are usually difficult to predict because the patients demonstrate the symptoms of a depressive episode without a warning at unexpected times. The goal of this research is to build algorithms that detect signals of such unusual moments so that doctors can be proactive in approaching already diagnosed patients before they fall into depression. Each patient is equipped with a smartphone with the capability to track its sensors. We first find the home location of a patient, which is then augmented with other sensor data to identify sleep patterns and select communication patterns. The algorithms require two to three weeks of training data to build standard patterns, which are considered normal behaviors; and then, the methods identify any anomalies in day-to-day data readings of sensors. Four smartphone sensors, including the accelerometer, the gyroscope, the location probe and the communication log probe are used for anomaly detection in sleeping and communication patterns.
|
1811.10074
|
Roman Snytsar
|
Roman Snytsar and Yatish Turakhia
|
Parallel approach to sliding window sums
|
10 pages, 5 figures
| null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sliding window sums are widely used in bioinformatics applications, including
sequence assembly, k-mer generation, hashing and compression. New vector
algorithms which utilize the advanced vector extension (AVX) instructions
available on modern processors, or the parallel compute units on GPUs and
FPGAs, would provide a significant performance boost for the bioinformatics
applications. We develop a generic vectorized sliding sum algorithm whose
speedup for window size w and P processors is O(P/w). For a sum with a
commutative operator the speedup improves to O(P/log(w)). When applied to the
genomic application of minimizer-based k-mer table generation using AVX
instructions, we obtain a speedup of over 5X.
|
[
{
"created": "Sun, 25 Nov 2018 19:09:47 GMT",
"version": "v1"
},
{
"created": "Tue, 3 Sep 2019 14:39:58 GMT",
"version": "v2"
}
] |
2019-09-04
|
[
[
"Snytsar",
"Roman",
""
],
[
"Turakhia",
"Yatish",
""
]
] |
Sliding window sums are widely used in bioinformatics applications, including sequence assembly, k-mer generation, hashing and compression. New vector algorithms which utilize the advanced vector extension (AVX) instructions available on modern processors, or the parallel compute units on GPUs and FPGAs, would provide a significant performance boost for the bioinformatics applications. We develop a generic vectorized sliding sum algorithm whose speedup for window size w and P processors is O(P/w). For a sum with a commutative operator the speedup improves to O(P/log(w)). When applied to the genomic application of minimizer-based k-mer table generation using AVX instructions, we obtain a speedup of over 5X.
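For the commutative case, the reason an O(P/log(w)) speedup is plausible is that sliding sums reduce to prefix sums, which parallelize in O(log n) depth; each window sum is then a difference of two prefixes. A serial sketch of that reduction:

```python
from itertools import accumulate

# Sliding window sums via prefix sums (commutative operator +): compute
# prefix[i] = sum(xs[:i]) once, then every window sum is one subtraction.
# The prefix-sum step is what admits an O(log n)-depth parallel scan.

def sliding_sums(xs, w):
    prefix = [0] + list(accumulate(xs))
    return [prefix[i + w] - prefix[i] for i in range(len(xs) - w + 1)]

sums = sliding_sums([1, 2, 3, 4, 5, 6], 3)
```

The paper's vectorized algorithm generalizes this beyond subtraction-invertible operators; this sketch only covers the commutative-sum special case.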
|
1605.04761
|
Maurizio Naldi
|
Maurizio Naldi
|
Concentration in the mobile operating systems market
| null | null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Concentration phenomena concern the ICT market. Though the regulatory action
has been active mainly in the telecom network operators industry, even more
significant worldwide concentration phenomena affect other industries. The
market of mobile operating systems is analysed through two concentration
indices to get a quantitative picture of the current situation and its
evolution over time: the Hirschman Herfindahl Index (HHI) and the Four-Firm
Concentration Ratio (CR4). A strongly imbalanced oligopoly is shown to exist,
where the four major operating systems take over 99% of the market, but the
dominant operating system Android alone is installed on over 80% of the new
devices.
|
[
{
"created": "Mon, 16 May 2016 13:08:30 GMT",
"version": "v1"
}
] |
2016-05-17
|
[
[
"Naldi",
"Maurizio",
""
]
] |
Concentration phenomena concern the ICT market. Though the regulatory action has been active mainly in the telecom network operators industry, even more significant worldwide concentration phenomena affect other industries. The market of mobile operating systems is analysed through two concentration indices to get a quantitative picture of the current situation and its evolution over time: the Hirschman Herfindahl Index (HHI) and the Four-Firm Concentration Ratio (CR4). A strongly imbalanced oligopoly is shown to exist, where the four major operating systems take over 99% of the market, but the dominant operating system Android alone is installed on over 80% of the new devices.
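The two indices used in the analysis are standard and easy to state in code: HHI is the sum of squared market shares (often reported on a 0 to 10000 scale), and CR4 is the combined share of the four largest firms. The share vector below is a made-up example, not the paper's market data.

```python
# Sketch of the two concentration indices on hypothetical market shares
# (fractions summing to 1); the numbers are illustrative, not the paper's.

def hhi(shares):
    """Herfindahl index on a 0..10000 scale (shares given as fractions)."""
    return 10000 * sum(s * s for s in shares)

def cr4(shares):
    """Combined market share of the four largest firms."""
    return sum(sorted(shares, reverse=True)[:4])

shares = [0.80, 0.10, 0.05, 0.04, 0.01]   # one dominant firm, like Android
```

With one firm at 80%, HHI lands far above the ~2500 level regulators usually treat as highly concentrated, matching the "strongly imbalanced oligopoly" reading.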
|
2406.03403
|
Kangyu Zheng
|
Kangyu Zheng, Yingzhou Lu, Zaixi Zhang, Zhongwei Wan, Yao Ma, Marinka
Zitnik, Tianfan Fu
|
Structure-based Drug Design Benchmark: Do 3D Methods Really Dominate?
| null | null | null | null |
cs.LG cs.AI q-bio.QM
|
http://creativecommons.org/licenses/by/4.0/
|
Currently, the field of structure-based drug design is dominated by three
main types of algorithms: search-based algorithms, deep generative models, and
reinforcement learning. While existing works have typically focused on
comparing models within a single algorithmic category, cross-algorithm
comparisons remain scarce. In this paper, to fill the gap, we establish a
benchmark to evaluate the performance of sixteen models across these different
algorithmic foundations by assessing the pharmaceutical properties of the
generated molecules and their docking affinities with specified target
proteins. We highlight the unique advantages of each algorithmic approach and
offer recommendations for the design of future SBDD models. We emphasize that
1D/2D ligand-centric drug design methods can be used in SBDD by treating the
docking function as a black-box oracle, which is typically neglected. The
empirical results show that 1D/2D methods achieve competitive performance
compared with 3D-based methods that use the 3D structure of the target protein
explicitly. Also, AutoGrow4, a 2D molecular graph-based genetic algorithm,
dominates SBDD in terms of optimization ability. The relevant code is available
at https://github.com/zkysfls/2024-sbdd-benchmark.
|
[
{
"created": "Tue, 4 Jun 2024 15:37:14 GMT",
"version": "v1"
}
] |
2024-06-06
|
[
[
"Zheng",
"Kangyu",
""
],
[
"Lu",
"Yingzhou",
""
],
[
"Zhang",
"Zaixi",
""
],
[
"Wan",
"Zhongwei",
""
],
[
"Ma",
"Yao",
""
],
[
"Zitnik",
"Marinka",
""
],
[
"Fu",
"Tianfan",
""
]
] |
Currently, the field of structure-based drug design is dominated by three main types of algorithms: search-based algorithms, deep generative models, and reinforcement learning. While existing works have typically focused on comparing models within a single algorithmic category, cross-algorithm comparisons remain scarce. In this paper, to fill the gap, we establish a benchmark to evaluate the performance of sixteen models across these different algorithmic foundations by assessing the pharmaceutical properties of the generated molecules and their docking affinities with specified target proteins. We highlight the unique advantages of each algorithmic approach and offer recommendations for the design of future SBDD models. We emphasize that 1D/2D ligand-centric drug design methods can be used in SBDD by treating the docking function as a black-box oracle, which is typically neglected. The empirical results show that 1D/2D methods achieve competitive performance compared with 3D-based methods that use the 3D structure of the target protein explicitly. Also, AutoGrow4, a 2D molecular graph-based genetic algorithm, dominates SBDD in terms of optimization ability. The relevant code is available at https://github.com/zkysfls/2024-sbdd-benchmark.
|
1505.05231
|
Steve Hanneke
|
Liu Yang, Steve Hanneke, Jaime Carbonell
|
Bounds on the Minimax Rate for Estimating a Prior over a VC Class from
Independent Learning Tasks
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the optimal rates of convergence for estimating a prior distribution
over a VC class from a sequence of independent data sets respectively labeled
by independent target functions sampled from the prior. We specifically derive
upper and lower bounds on the optimal rates under a smoothness condition on the
correct prior, with the number of samples per data set equal the VC dimension.
These results have implications for the improvements achievable via transfer
learning. We additionally extend this setting to real-valued functions, where
establish consistency of an estimator for the prior, and discuss an additional
application to a preference elicitation problem in algorithmic economics.
|
[
{
"created": "Wed, 20 May 2015 02:43:24 GMT",
"version": "v1"
}
] |
2015-05-21
|
[
[
"Yang",
"Liu",
""
],
[
"Hanneke",
"Steve",
""
],
[
"Carbonell",
"Jaime",
""
]
] |
We study the optimal rates of convergence for estimating a prior distribution over a VC class from a sequence of independent data sets respectively labeled by independent target functions sampled from the prior. We specifically derive upper and lower bounds on the optimal rates under a smoothness condition on the correct prior, with the number of samples per data set equal the VC dimension. These results have implications for the improvements achievable via transfer learning. We additionally extend this setting to real-valued functions, where we establish consistency of an estimator for the prior, and discuss an additional application to a preference elicitation problem in algorithmic economics.
|
2101.06749
|
Mateus Roder
|
Mateus Roder, Leandro A. Passos, Luiz Carlos Felix Ribeiro, Clayton
Pereira, Jo\~ao Paulo Papa
|
A Layer-Wise Information Reinforcement Approach to Improve Learning in
Deep Belief Networks
| null | null |
10.1007/978-3-030-61401-0_22
| null |
cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
With the advent of deep learning, the number of works proposing new methods
or improving existing ones has grown exponentially in recent years. In this
scenario, "very deep" models emerged, since they were expected to extract
more intrinsic and abstract features while delivering better performance.
However, such models suffer from the vanishing gradient problem, i.e.,
backpropagation values become too close to zero in their shallower layers,
ultimately causing learning to stagnate. This issue was overcome in the
context of convolutional neural networks by creating "shortcut connections"
between layers, in a so-called deep residual learning framework. Nonetheless,
a very popular deep learning technique called the Deep Belief Network still
suffers from vanishing gradients when dealing with discriminative tasks.
Therefore, this paper proposes the Residual Deep Belief Network, which applies
layer-wise information reinforcement to improve feature extraction and
knowledge retention, supporting better discriminative performance.
Experiments conducted over three public datasets demonstrate its robustness
concerning the task of binary image classification.
|
[
{
"created": "Sun, 17 Jan 2021 18:53:18 GMT",
"version": "v1"
}
] |
2021-01-19
|
[
[
"Roder",
"Mateus",
""
],
[
"Passos",
"Leandro A.",
""
],
[
"Ribeiro",
"Luiz Carlos Felix",
""
],
[
"Pereira",
"Clayton",
""
],
[
"Papa",
"João Paulo",
""
]
] |
With the advent of deep learning, the number of works proposing new methods or improving existing ones has grown exponentially in recent years. In this scenario, "very deep" models emerged, since they were expected to extract more intrinsic and abstract features while delivering better performance. However, such models suffer from the vanishing gradient problem, i.e., backpropagation values become too close to zero in their shallower layers, ultimately causing learning to stagnate. This issue was overcome in the context of convolutional neural networks by creating "shortcut connections" between layers, in a so-called deep residual learning framework. Nonetheless, a very popular deep learning technique called the Deep Belief Network still suffers from vanishing gradients when dealing with discriminative tasks. Therefore, this paper proposes the Residual Deep Belief Network, which applies layer-wise information reinforcement to improve feature extraction and knowledge retention, supporting better discriminative performance. Experiments conducted over three public datasets demonstrate its robustness concerning the task of binary image classification.
|
2001.02801
|
Olga Moskvyak
|
Olga Moskvyak, Frederic Maire, Feras Dayoub and Mahsa Baktashmotlagh
|
Learning landmark guided embeddings for animal re-identification
|
7 pages, 7 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Re-identification of individual animals in images can be ambiguous due to
subtle variations in body markings between different individuals and no
constraints on the poses of animals in the wild. Person re-identification is a
similar task and it has been approached with a deep convolutional neural
network (CNN) that learns discriminative embeddings for images of people.
However, learning discriminative features for an individual animal is more
challenging than for a person's appearance due to the relatively small size of
ecological datasets compared to labelled datasets of people's identities. We
propose to improve embedding learning by exploiting body landmarks information
explicitly. Body landmarks are provided to the input of a CNN as confidence
heatmaps that can be obtained from a separate body landmark predictor. The
model is encouraged to use heatmaps by learning an auxiliary task of
reconstructing input heatmaps. Body landmarks guide a feature extraction
network to learn the representation of a distinctive pattern and its position
on the body. We evaluate the proposed method on a large synthetic dataset and a
small real dataset. Our method outperforms the same model without body
landmarks input by 26% and 18% on the synthetic and the real datasets
respectively. The method is robust to noise in input coordinates and can
tolerate an error in coordinates up to 10% of the image size.
|
[
{
"created": "Thu, 9 Jan 2020 01:31:00 GMT",
"version": "v1"
}
] |
2020-01-10
|
[
[
"Moskvyak",
"Olga",
""
],
[
"Maire",
"Frederic",
""
],
[
"Dayoub",
"Feras",
""
],
[
"Baktashmotlagh",
"Mahsa",
""
]
] |
Re-identification of individual animals in images can be ambiguous due to subtle variations in body markings between different individuals and no constraints on the poses of animals in the wild. Person re-identification is a similar task and it has been approached with a deep convolutional neural network (CNN) that learns discriminative embeddings for images of people. However, learning discriminative features for an individual animal is more challenging than for a person's appearance due to the relatively small size of ecological datasets compared to labelled datasets of people's identities. We propose to improve embedding learning by exploiting body landmark information explicitly. Body landmarks are provided to the input of a CNN as confidence heatmaps that can be obtained from a separate body landmark predictor. The model is encouraged to use heatmaps by learning an auxiliary task of reconstructing input heatmaps. Body landmarks guide a feature extraction network to learn the representation of a distinctive pattern and its position on the body. We evaluate the proposed method on a large synthetic dataset and a small real dataset. Our method outperforms the same model without body landmarks input by 26% and 18% on the synthetic and the real datasets respectively. The method is robust to noise in input coordinates and can tolerate an error in coordinates up to 10% of the image size.
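The confidence-heatmap encoding this abstract describes (a landmark coordinate rendered as a 2D Gaussian and fed to the CNN) is simple to sketch. The `size` and `sigma` values below are assumed for illustration, not taken from the paper.

```python
import numpy as np

def landmark_heatmap(x, y, size=64, sigma=2.0):
    """Render a body-landmark coordinate (x, y) as a 2D Gaussian
    confidence heatmap of shape (size, size), peaking at 1.0 on the
    landmark. Built as the outer product of two 1D Gaussians."""
    xs = np.arange(size)
    gx = np.exp(-((xs - x) ** 2) / (2 * sigma ** 2))  # column profile
    gy = np.exp(-((xs - y) ** 2) / (2 * sigma ** 2))  # row profile
    return np.outer(gy, gx)  # rows index y, columns index x

h = landmark_heatmap(20, 40)
print(h.shape, h.argmax() // 64, h.argmax() % 64)  # (64, 64) 40 20
```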
|
2011.05180
|
Daniel Rodriguez Criado
|
Daniel Rodriguez-Criado and Pilar Bachiller and Luis J. Manso
|
Generation of Human-aware Navigation Maps using Graph Neural Networks
|
6 pages, 4 figures, conference paper
| null | null | null |
cs.RO cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Minimising the discomfort caused by robots when navigating in social
situations is crucial for them to be accepted. The paper presents a machine
learning-based framework that bootstraps existing one-dimensional datasets to
generate a cost map dataset and a model combining Graph Neural Network and
Convolutional Neural Network layers to produce cost maps for human-aware
navigation in real-time. The proposed framework is evaluated against the
original one-dimensional dataset and in simulated navigation tasks. The results
outperform similar state-of-the-art methods in terms of accuracy on the
dataset and the navigation metrics used. The applications of the proposed
framework are not limited to human-aware navigation; it could be applied to
other fields where map generation is needed.
|
[
{
"created": "Tue, 10 Nov 2020 15:32:14 GMT",
"version": "v1"
}
] |
2020-11-11
|
[
[
"Rodriguez-Criado",
"Daniel",
""
],
[
"Bachiller",
"Pilar",
""
],
[
"Manso",
"Luis J.",
""
]
] |
Minimising the discomfort caused by robots when navigating in social situations is crucial for them to be accepted. The paper presents a machine learning-based framework that bootstraps existing one-dimensional datasets to generate a cost map dataset and a model combining Graph Neural Network and Convolutional Neural Network layers to produce cost maps for human-aware navigation in real-time. The proposed framework is evaluated against the original one-dimensional dataset and in simulated navigation tasks. The results outperform similar state-of-the-art methods in terms of accuracy on the dataset and the navigation metrics used. The applications of the proposed framework are not limited to human-aware navigation; it could be applied to other fields where map generation is needed.
|
1907.08304
|
Lavina Jain
|
Syamantak Das, Lavina Jain, Nikhil Kumar
|
A Constant Factor Approximation for Capacitated Min-Max Tree Cover
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Given a graph $G=(V,E)$ with non-negative real edge lengths and an integer
parameter $k$, the Min-Max k-Tree Cover problem seeks to find a set of at most
$k$ subtrees of $G$, such that the union of the trees is the vertex set $V$.
The objective is to minimize the maximum length among all the trees. We give
the first constant factor approximation for the hard uniform capacitated
version of this problem, where an input parameter $\lambda$ upper bounds the
number of vertices that can be covered by any of the trees. Our result extends
to the rooted version of the problem, where we are given a set of $k$ root
vertices $R$, and each of the covering trees is required to include a distinct
vertex in $R$ as the root. Prior to our work, the only result known was a
$(2k-1)$-approximation algorithm for the special case when the total number of
vertices in the graph is $k\lambda$ [Guttmann-Beck and Hassin, J. of
Algorithms, 1997].
Our technique circumvents the difficulty of using the minimum spanning tree
of the graph as a lower bound, which is standard for the uncapacitated version
of the problem [Even et al., OR Letters 2004] [Khani et al., Algorithmica
2010]. Instead, we use Steiner trees that cover $\lambda$ vertices along with
an iterative refinement procedure that ensures that the output trees have low
cost and the vertices are well distributed among the trees.
|
[
{
"created": "Thu, 18 Jul 2019 21:59:33 GMT",
"version": "v1"
},
{
"created": "Thu, 12 Dec 2019 13:44:40 GMT",
"version": "v2"
}
] |
2019-12-13
|
[
[
"Das",
"Syamantak",
""
],
[
"Jain",
"Lavina",
""
],
[
"Kumar",
"Nikhil",
""
]
] |
Given a graph $G=(V,E)$ with non-negative real edge lengths and an integer parameter $k$, the Min-Max k-Tree Cover problem seeks to find a set of at most $k$ subtrees of $G$, such that the union of the trees is the vertex set $V$. The objective is to minimize the maximum length among all the trees. We give the first constant factor approximation for the hard uniform capacitated version of this problem, where an input parameter $\lambda$ upper bounds the number of vertices that can be covered by any of the trees. Our result extends to the rooted version of the problem, where we are given a set of $k$ root vertices $R$, and each of the covering trees is required to include a distinct vertex in $R$ as the root. Prior to our work, the only result known was a $(2k-1)$-approximation algorithm for the special case when the total number of vertices in the graph is $k\lambda$ [Guttmann-Beck and Hassin, J. of Algorithms, 1997]. Our technique circumvents the difficulty of using the minimum spanning tree of the graph as a lower bound, which is standard for the uncapacitated version of the problem [Even et al., OR Letters 2004] [Khani et al., Algorithmica 2010]. Instead, we use Steiner trees that cover $\lambda$ vertices along with an iterative refinement procedure that ensures that the output trees have low cost and the vertices are well distributed among the trees.
|
2212.07536
|
Md Masudur Rahman
|
Md Masudur Rahman and Yexiang Xue
|
Robust Policy Optimization in Deep Reinforcement Learning
| null | null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The policy gradient method enjoys the simplicity of the objective where the
agent optimizes the cumulative reward directly. Moreover, in the continuous
action domain, a parameterized action distribution allows easy control of
exploration through the variance of the representing distribution. Entropy can
play an essential role in policy optimization by
selecting the stochastic policy, which eventually helps better explore the
environment in reinforcement learning (RL). However, the stochasticity often
reduces as the training progresses; thus, the policy becomes less exploratory.
Additionally, certain parametric distributions might only work for some
environments and require extensive hyperparameter tuning. This paper aims to
mitigate these issues. In particular, we propose an algorithm called Robust
Policy Optimization (RPO), which leverages a perturbed distribution. We
hypothesize that our method encourages high-entropy actions and provides a way
to represent the action space better. We further provide empirical evidence to
verify our hypothesis. We evaluated our methods on various continuous control
tasks from DeepMind Control, OpenAI Gym, Pybullet, and IsaacGym. We observed
that in many settings, RPO increases the policy entropy early in training and
then maintains a certain level of entropy throughout the training period.
Eventually, our RPO agent shows consistently improved performance compared to
PPO and other techniques: entropy regularization, different distributions, and
data augmentation. Furthermore, in several settings, our method stays robust in
performance, while other baseline mechanisms fail to improve and even worsen
the performance.
|
[
{
"created": "Wed, 14 Dec 2022 22:43:56 GMT",
"version": "v1"
}
] |
2022-12-16
|
[
[
"Rahman",
"Md Masudur",
""
],
[
"Xue",
"Yexiang",
""
]
] |
The policy gradient method enjoys the simplicity of the objective where the agent optimizes the cumulative reward directly. Moreover, in the continuous action domain, a parameterized action distribution allows easy control of exploration through the variance of the representing distribution. Entropy can play an essential role in policy optimization by selecting the stochastic policy, which eventually helps better explore the environment in reinforcement learning (RL). However, the stochasticity often reduces as the training progresses; thus, the policy becomes less exploratory. Additionally, certain parametric distributions might only work for some environments and require extensive hyperparameter tuning. This paper aims to mitigate these issues. In particular, we propose an algorithm called Robust Policy Optimization (RPO), which leverages a perturbed distribution. We hypothesize that our method encourages high-entropy actions and provides a way to represent the action space better. We further provide empirical evidence to verify our hypothesis. We evaluated our methods on various continuous control tasks from DeepMind Control, OpenAI Gym, Pybullet, and IsaacGym. We observed that in many settings, RPO increases the policy entropy early in training and then maintains a certain level of entropy throughout the training period. Eventually, our RPO agent shows consistently improved performance compared to PPO and other techniques: entropy regularization, different distributions, and data augmentation. Furthermore, in several settings, our method stays robust in performance, while other baseline mechanisms fail to improve and even worsen the performance.
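The "perturbed distribution" this abstract mentions can be illustrated with a toy sampler. The sketch below assumes, for illustration only, that uniform noise jitters the Gaussian policy mean before an action is drawn; `alpha` is a hypothetical hyperparameter, and this is not claimed to be the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def rpo_sample(mean, std, alpha=0.5):
    """Sketch of sampling from a perturbed action distribution: the
    Gaussian mean is jittered with Uniform(-alpha, alpha) noise before
    the action is drawn, which keeps sampled actions spread out even
    when training has sharpened `std` toward determinism."""
    perturbed = mean + rng.uniform(-alpha, alpha)
    return rng.normal(perturbed, std)

actions = np.array([rpo_sample(0.0, 0.01) for _ in range(2000)])
# Even with a nearly deterministic std, the mean jitter prevents the
# sampled actions from collapsing to a point.
print(actions.std() > 0.1)
```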
|
1308.2362
|
Bo young Lim
|
Yunsik Jake Jang, Bo young Lim
|
Harmonization among national cyber security and cybercrime response
organizations: New challenges of cybercrime
|
4th Asian Criminology Conference (2012)
| null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by/3.0/
|
This paper will discuss the need for national-level organizational strategies
to effectively combat cyber security threats and cybercrime. In many countries,
new agencies have been established and/or new roles have been allotted to
existing agencies to cope with the needs for cyber security or fighting against
cybercrime. The two pillars of organizational structure and functions (i.e.,
security vs. law enforcement) have posed new challenges, especially in the
context of the traditional criminal justice system. To illustrate the
challenges, a case study examining the responses to major security incidents
followed by nationwide debates and remarkable organizational changes in Korea
will be given.
|
[
{
"created": "Sun, 11 Aug 2013 03:28:07 GMT",
"version": "v1"
}
] |
2013-08-13
|
[
[
"Jang",
"Yunsik Jake",
""
],
[
"Lim",
"Bo young",
""
]
] |
This paper will discuss the need for national-level organizational strategies to effectively combat cyber security threats and cybercrime. In many countries, new agencies have been established and/or new roles have been allotted to existing agencies to cope with the needs for cyber security or fighting against cybercrime. The two pillars of organizational structure and functions (i.e., security vs. law enforcement) have posed new challenges, especially in the context of the traditional criminal justice system. To illustrate the challenges, a case study examining the responses to major security incidents followed by nationwide debates and remarkable organizational changes in Korea will be given.
|
2205.10729
|
Haoyuan Cai
|
Haoyuan Cai, Tengyu Ma, Simon Du
|
Near-Optimal Algorithms for Autonomous Exploration and Multi-Goal
Stochastic Shortest Path
|
ICML 2022
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We revisit the incremental autonomous exploration problem proposed by Lim &
Auer (2012). In this setting, the agent aims to learn a set of near-optimal
goal-conditioned policies to reach the $L$-controllable states: states that are
incrementally reachable from an initial state $s_0$ within $L$ steps in
expectation. We introduce a new algorithm with stronger sample complexity
bounds than existing ones. Furthermore, we also prove the first lower bound for
the autonomous exploration problem. In particular, the lower bound implies that
our proposed algorithm, Value-Aware Autonomous Exploration, is nearly
minimax-optimal when the number of $L$-controllable states grows polynomially
with respect to $L$. Key in our algorithm design is a connection between
autonomous exploration and multi-goal stochastic shortest path, a new problem
that naturally generalizes the classical stochastic shortest path problem. This
new problem and its connection to autonomous exploration can be of independent
interest.
|
[
{
"created": "Sun, 22 May 2022 03:54:15 GMT",
"version": "v1"
}
] |
2022-05-24
|
[
[
"Cai",
"Haoyuan",
""
],
[
"Ma",
"Tengyu",
""
],
[
"Du",
"Simon",
""
]
] |
We revisit the incremental autonomous exploration problem proposed by Lim & Auer (2012). In this setting, the agent aims to learn a set of near-optimal goal-conditioned policies to reach the $L$-controllable states: states that are incrementally reachable from an initial state $s_0$ within $L$ steps in expectation. We introduce a new algorithm with stronger sample complexity bounds than existing ones. Furthermore, we also prove the first lower bound for the autonomous exploration problem. In particular, the lower bound implies that our proposed algorithm, Value-Aware Autonomous Exploration, is nearly minimax-optimal when the number of $L$-controllable states grows polynomially with respect to $L$. Key in our algorithm design is a connection between autonomous exploration and multi-goal stochastic shortest path, a new problem that naturally generalizes the classical stochastic shortest path problem. This new problem and its connection to autonomous exploration can be of independent interest.
|
1811.06749
|
Jakub T\v{e}tek
|
Tom\'a\v{s} Gaven\v{c}iak, Jakub T\v{e}tek
|
Compact I/O-Efficient Representation of Separable Graphs and Optimal
Tree Layouts
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Compact and I/O-efficient data representations play an important role in
efficient algorithm design, as memory bandwidth and latency can present a
significant performance bottleneck, slowing the computation by orders of
magnitude. While this problem is very well explored in e.g. uniform numerical
data processing, structural data applications (e.g. on huge graphs) require
different algorithm-dependent approaches. Separable graph classes (i.e. graph
classes with balanced separators of size $\mathcal{O}(n^c)$ with $c < 1$)
include planar graphs, bounded genus graphs, and minor-free graphs.
In this article we present two generalizations of the separator theorem, to
partitions with small regions only on average and to weighted graphs. Then we
propose an I/O-efficient succinct representation and memory layout for random
walks in (weighted) separable graphs in the pointer machine model, including an
efficient algorithm to compute them. Finally, we present a worst-case
I/O-optimal tree layout algorithm for root-leaf path traversal, show an
additive (+1)-approximation of the optimal compact layout, and contrast this
with an NP-completeness proof of finding an optimal compact layout.
|
[
{
"created": "Fri, 16 Nov 2018 11:01:06 GMT",
"version": "v1"
}
] |
2018-11-19
|
[
[
"Gavenčiak",
"Tomáš",
""
],
[
"Tětek",
"Jakub",
""
]
] |
Compact and I/O-efficient data representations play an important role in efficient algorithm design, as memory bandwidth and latency can present a significant performance bottleneck, slowing the computation by orders of magnitude. While this problem is very well explored in e.g. uniform numerical data processing, structural data applications (e.g. on huge graphs) require different algorithm-dependent approaches. Separable graph classes (i.e. graph classes with balanced separators of size $\mathcal{O}(n^c)$ with $c < 1$) include planar graphs, bounded genus graphs, and minor-free graphs. In this article we present two generalizations of the separator theorem, to partitions with small regions only on average and to weighted graphs. Then we propose an I/O-efficient succinct representation and memory layout for random walks in (weighted) separable graphs in the pointer machine model, including an efficient algorithm to compute them. Finally, we present a worst-case I/O-optimal tree layout algorithm for root-leaf path traversal, show an additive (+1)-approximation of the optimal compact layout, and contrast this with an NP-completeness proof of finding an optimal compact layout.
|