id
stringlengths 9
10
| submitter
stringlengths 1
64
⌀ | authors
stringlengths 4
20.7k
| title
stringlengths 4
246
| comments
stringlengths 1
523
⌀ | journal-ref
stringlengths 4
404
⌀ | doi
stringlengths 11
153
⌀ | report-no
stringlengths 2
254
⌀ | categories
stringlengths 5
98
| license
stringclasses 9
values | orig_abstract
stringlengths 14
3.35k
| versions
listlengths 1
60
| update_date
stringlengths 10
10
| authors_parsed
listlengths 1
1.35k
| abstract
stringlengths 11
3.34k
|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2210.08885
|
Kevin R\"osch
|
Kevin R\"osch, Florian Heidecker, Julian Truetsch, Kamil Kowol,
Clemens Schicktanz, Maarten Bieshaar, Bernhard Sick, Christoph Stiller
|
Space, Time, and Interaction: A Taxonomy of Corner Cases in Trajectory
Datasets for Automated Driving
| null | null | null | null |
cs.RO cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Trajectory data analysis is an essential component for highly automated
driving. Complex models developed with these data predict other road users'
movement and behavior patterns. Based on these predictions - and additional
contextual information such as the course of the road, (traffic) rules, and
interaction with other road users - the highly automated vehicle (HAV) must be
able to reliably and safely perform the task assigned to it, e.g., moving from
point A to B. Ideally, the HAV moves safely through its environment, just as we
would expect a human driver to do. However, if unusual trajectories occur,
so-called trajectory corner cases, a human driver can usually cope well, but an
HAV can quickly get into trouble. In the definition of trajectory corner cases,
which we provide in this work, we will consider the relevance of unusual
trajectories with respect to the task at hand. Based on this, we will also
present a taxonomy of different trajectory corner cases. The categorization of
corner cases into the taxonomy will be shown with examples and is done by cause
and required data sources. To illustrate the complexity between the machine
learning (ML) model and the corner case cause, we present a general processing
chain underlying the taxonomy.
|
[
{
"created": "Mon, 17 Oct 2022 09:27:45 GMT",
"version": "v1"
}
] |
2022-10-18
|
[
[
"Rösch",
"Kevin",
""
],
[
"Heidecker",
"Florian",
""
],
[
"Truetsch",
"Julian",
""
],
[
"Kowol",
"Kamil",
""
],
[
"Schicktanz",
"Clemens",
""
],
[
"Bieshaar",
"Maarten",
""
],
[
"Sick",
"Bernhard",
""
],
[
"Stiller",
"Christoph",
""
]
] |
Trajectory data analysis is an essential component for highly automated driving. Complex models developed with these data predict other road users' movement and behavior patterns. Based on these predictions - and additional contextual information such as the course of the road, (traffic) rules, and interaction with other road users - the highly automated vehicle (HAV) must be able to reliably and safely perform the task assigned to it, e.g., moving from point A to B. Ideally, the HAV moves safely through its environment, just as we would expect a human driver to do. However, if unusual trajectories occur, so-called trajectory corner cases, a human driver can usually cope well, but an HAV can quickly get into trouble. In the definition of trajectory corner cases, which we provide in this work, we will consider the relevance of unusual trajectories with respect to the task at hand. Based on this, we will also present a taxonomy of different trajectory corner cases. The categorization of corner cases into the taxonomy will be shown with examples and is done by cause and required data sources. To illustrate the complexity between the machine learning (ML) model and the corner case cause, we present a general processing chain underlying the taxonomy.
|
1904.07944
|
Chris Donahue
|
Paarth Neekhara, Chris Donahue, Miller Puckette, Shlomo Dubnov, Julian
McAuley
|
Expediting TTS Synthesis with Adversarial Vocoding
|
Published as a conference paper at INTERSPEECH 2019
| null | null | null |
cs.SD cs.LG eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent approaches in text-to-speech (TTS) synthesis employ neural network
strategies to vocode perceptually-informed spectrogram representations directly
into listenable waveforms. Such vocoding procedures create a computational
bottleneck in modern TTS pipelines. We propose an alternative approach which
utilizes generative adversarial networks (GANs) to learn mappings from
perceptually-informed spectrograms to simple magnitude spectrograms which can
be heuristically vocoded. Through a user study, we show that our approach
significantly outperforms na\"ive vocoding strategies while being hundreds of
times faster than neural network vocoders used in state-of-the-art TTS systems.
We also show that our method can be used to achieve state-of-the-art results in
unsupervised synthesis of individual words of speech.
|
[
{
"created": "Tue, 16 Apr 2019 19:42:43 GMT",
"version": "v1"
},
{
"created": "Fri, 26 Jul 2019 03:36:52 GMT",
"version": "v2"
}
] |
2019-07-29
|
[
[
"Neekhara",
"Paarth",
""
],
[
"Donahue",
"Chris",
""
],
[
"Puckette",
"Miller",
""
],
[
"Dubnov",
"Shlomo",
""
],
[
"McAuley",
"Julian",
""
]
] |
Recent approaches in text-to-speech (TTS) synthesis employ neural network strategies to vocode perceptually-informed spectrogram representations directly into listenable waveforms. Such vocoding procedures create a computational bottleneck in modern TTS pipelines. We propose an alternative approach which utilizes generative adversarial networks (GANs) to learn mappings from perceptually-informed spectrograms to simple magnitude spectrograms which can be heuristically vocoded. Through a user study, we show that our approach significantly outperforms na\"ive vocoding strategies while being hundreds of times faster than neural network vocoders used in state-of-the-art TTS systems. We also show that our method can be used to achieve state-of-the-art results in unsupervised synthesis of individual words of speech.
|
2307.00379
|
Sokratis Anagnostopoulos
|
Sokratis J. Anagnostopoulos, Juan Diego Toscano, Nikolaos
Stergiopulos, George Em Karniadakis
|
Residual-based attention and connection to information bottleneck theory
in PINNs
| null | null | null | null |
cs.LG physics.comp-ph
|
http://creativecommons.org/licenses/by/4.0/
|
Driven by the need for more efficient and seamless integration of physical
models and data, physics-informed neural networks (PINNs) have seen a surge of
interest in recent years. However, ensuring the reliability of their
convergence and accuracy remains a challenge. In this work, we propose an
efficient, gradient-less weighting scheme for PINNs, that accelerates the
convergence of dynamic or static systems. This simple yet effective attention
mechanism is a function of the evolving cumulative residuals and aims to make
the optimizer aware of problematic regions at no extra computational cost or
adversarial learning. We illustrate that this general method consistently
achieves a relative $L^{2}$ error of the order of $10^{-5}$ using standard
optimizers on typical benchmark cases of the literature. Furthermore, by
investigating the evolution of weights during training, we identify two
distinct learning phases reminiscent of the fitting and diffusion phases
proposed by the information bottleneck (IB) theory. Subsequent gradient
analysis supports this hypothesis by aligning the transition from high to low
signal-to-noise ratio (SNR) with the transition from fitting to diffusion
regimes of the adopted weights. This novel correlation between PINNs and IB
theory could open future possibilities for understanding the underlying
mechanisms behind the training and stability of PINNs and, more broadly, of
neural operators.
|
[
{
"created": "Sat, 1 Jul 2023 16:29:55 GMT",
"version": "v1"
}
] |
2023-07-04
|
[
[
"Anagnostopoulos",
"Sokratis J.",
""
],
[
"Toscano",
"Juan Diego",
""
],
[
"Stergiopulos",
"Nikolaos",
""
],
[
"Karniadakis",
"George Em",
""
]
] |
Driven by the need for more efficient and seamless integration of physical models and data, physics-informed neural networks (PINNs) have seen a surge of interest in recent years. However, ensuring the reliability of their convergence and accuracy remains a challenge. In this work, we propose an efficient, gradient-less weighting scheme for PINNs, that accelerates the convergence of dynamic or static systems. This simple yet effective attention mechanism is a function of the evolving cumulative residuals and aims to make the optimizer aware of problematic regions at no extra computational cost or adversarial learning. We illustrate that this general method consistently achieves a relative $L^{2}$ error of the order of $10^{-5}$ using standard optimizers on typical benchmark cases of the literature. Furthermore, by investigating the evolution of weights during training, we identify two distinct learning phases reminiscent of the fitting and diffusion phases proposed by the information bottleneck (IB) theory. Subsequent gradient analysis supports this hypothesis by aligning the transition from high to low signal-to-noise ratio (SNR) with the transition from fitting to diffusion regimes of the adopted weights. This novel correlation between PINNs and IB theory could open future possibilities for understanding the underlying mechanisms behind the training and stability of PINNs and, more broadly, of neural operators.
|
1811.00538
|
Medhini Narasimhan
|
Medhini Narasimhan, Svetlana Lazebnik, Alexander G. Schwing
|
Out of the Box: Reasoning with Graph Convolution Nets for Factual Visual
Question Answering
|
Accepted to NIPS 2018
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Accurately answering a question about a given image requires combining
observations with general knowledge. While this is effortless for humans,
reasoning with general knowledge remains an algorithmic challenge. To advance
research in this direction a novel `fact-based' visual question answering
(FVQA) task has been introduced recently along with a large set of curated
facts which link two entities, i.e., two possible answers, via a relation.
Given a question-image pair, deep network techniques have been employed to
successively reduce the large set of facts until one of the two entities of the
final remaining fact is predicted as the answer. We observe that a successive
process which considers one fact at a time to form a local decision is
sub-optimal. Instead, we develop an entity graph and use a graph convolutional
network to `reason' about the correct answer by jointly considering all
entities. We show on the challenging FVQA dataset that this leads to an
improvement in accuracy of around 7% compared to the state of the art.
|
[
{
"created": "Thu, 1 Nov 2018 17:59:56 GMT",
"version": "v1"
}
] |
2018-11-02
|
[
[
"Narasimhan",
"Medhini",
""
],
[
"Lazebnik",
"Svetlana",
""
],
[
"Schwing",
"Alexander G.",
""
]
] |
Accurately answering a question about a given image requires combining observations with general knowledge. While this is effortless for humans, reasoning with general knowledge remains an algorithmic challenge. To advance research in this direction a novel `fact-based' visual question answering (FVQA) task has been introduced recently along with a large set of curated facts which link two entities, i.e., two possible answers, via a relation. Given a question-image pair, deep network techniques have been employed to successively reduce the large set of facts until one of the two entities of the final remaining fact is predicted as the answer. We observe that a successive process which considers one fact at a time to form a local decision is sub-optimal. Instead, we develop an entity graph and use a graph convolutional network to `reason' about the correct answer by jointly considering all entities. We show on the challenging FVQA dataset that this leads to an improvement in accuracy of around 7% compared to the state of the art.
|
1806.06878
|
Shawn Jones
|
Shawn M. Jones, Alexander Nwala, Michele C. Weigle, Michael L. Nelson
|
The Many Shapes of Archive-It
|
10 pages, 12 figures, to appear in the proceedings of the 15th
International Conference on Digital Preservation (iPres 2018)
| null |
10.17605/OSF.IO/EV42P
| null |
cs.DL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Web archives, a key area of digital preservation, meet the needs of
journalists, social scientists, historians, and government organizations. The
use cases for these groups often require that they guide the archiving process
themselves, selecting their own original resources, or seeds, and creating
their own web archive collections. We focus on the collections within
Archive-It, a subscription service started by the Internet Archive in 2005 for
the purpose of allowing organizations to create their own collections of
archived web pages, or mementos. Understanding these collections could be done
via their user-supplied metadata or via text analysis, but the metadata is
applied inconsistently between collections and some Archive-It collections
consist of hundreds of thousands of seeds, making it costly in terms of time to
download each memento. Our work proposes using structural metadata as an
additional way to understand these collections. We explore structural features
currently existing in these collections that can unveil curation and crawling
behaviors. We adapt the concept of the collection growth curve for
understanding Archive-It collection curation and crawling behavior. We also
introduce several seed features and come to an understanding of the diversity
of resources that make up a collection. Finally, we use the descriptions of
each collection to identify four semantic categories of Archive-It collections.
Using the identified structural features, we reviewed the results of runs with
20 classifiers and are able to predict the semantic category of a collection
using a Random Forest classifier with a weighted average F1 score of 0.720,
thus bridging the structural to the descriptive. Our method is useful because
it saves the researcher time and bandwidth. Identifying collections by their
semantic category allows further downstream processing to be tailored to these
categories.
|
[
{
"created": "Mon, 18 Jun 2018 18:27:20 GMT",
"version": "v1"
}
] |
2021-01-26
|
[
[
"Jones",
"Shawn M.",
""
],
[
"Nwala",
"Alexander",
""
],
[
"Weigle",
"Michele C.",
""
],
[
"Nelson",
"Michael L.",
""
]
] |
Web archives, a key area of digital preservation, meet the needs of journalists, social scientists, historians, and government organizations. The use cases for these groups often require that they guide the archiving process themselves, selecting their own original resources, or seeds, and creating their own web archive collections. We focus on the collections within Archive-It, a subscription service started by the Internet Archive in 2005 for the purpose of allowing organizations to create their own collections of archived web pages, or mementos. Understanding these collections could be done via their user-supplied metadata or via text analysis, but the metadata is applied inconsistently between collections and some Archive-It collections consist of hundreds of thousands of seeds, making it costly in terms of time to download each memento. Our work proposes using structural metadata as an additional way to understand these collections. We explore structural features currently existing in these collections that can unveil curation and crawling behaviors. We adapt the concept of the collection growth curve for understanding Archive-It collection curation and crawling behavior. We also introduce several seed features and come to an understanding of the diversity of resources that make up a collection. Finally, we use the descriptions of each collection to identify four semantic categories of Archive-It collections. Using the identified structural features, we reviewed the results of runs with 20 classifiers and are able to predict the semantic category of a collection using a Random Forest classifier with a weighted average F1 score of 0.720, thus bridging the structural to the descriptive. Our method is useful because it saves the researcher time and bandwidth. Identifying collections by their semantic category allows further downstream processing to be tailored to these categories.
|
2204.03262
|
TaeYoung Kang
|
TaeYoung Kang, Eunrang Kwon, Junbum Lee, Youngeun Nam, Junmo Song,
JeongKyu Suh
|
Korean Online Hate Speech Dataset for Multilabel Classification: How Can
Social Science Improve Dataset on Hate Speech?
|
12 pages, 3 tables
| null | null | null |
cs.CL cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We suggest a multilabel Korean online hate speech dataset that covers seven
categories of hate speech: (1) Race and Nationality, (2) Religion, (3)
Regionalism, (4) Ageism, (5) Misogyny, (6) Sexual Minorities, and (7) Male. Our
35K dataset consists of 24K online comments with Krippendorff's Alpha label
accordance of .713, 2.2K neutral sentences from Wikipedia, 1.7K additionally
labeled sentences generated by the Human-in-the-Loop procedure and
rule-generated 7.1K neutral sentences. The base model with 24K initial dataset
achieved the accuracy of LRAP .892, but improved to .919 after being combined
with 11K additional data. Unlike the conventional binary hate and non-hate
dichotomy approach, we designed a dataset considering both the cultural and
linguistic context to overcome the limitations of western culture-based English
texts. Thus, this paper is not only limited to presenting a local hate speech
dataset but extends as a manual for building a more generalized hate speech
dataset with diverse cultural backgrounds based on social science perspectives.
|
[
{
"created": "Thu, 7 Apr 2022 07:29:06 GMT",
"version": "v1"
},
{
"created": "Fri, 8 Apr 2022 04:04:27 GMT",
"version": "v2"
}
] |
2022-04-11
|
[
[
"Kang",
"TaeYoung",
""
],
[
"Kwon",
"Eunrang",
""
],
[
"Lee",
"Junbum",
""
],
[
"Nam",
"Youngeun",
""
],
[
"Song",
"Junmo",
""
],
[
"Suh",
"JeongKyu",
""
]
] |
We suggest a multilabel Korean online hate speech dataset that covers seven categories of hate speech: (1) Race and Nationality, (2) Religion, (3) Regionalism, (4) Ageism, (5) Misogyny, (6) Sexual Minorities, and (7) Male. Our 35K dataset consists of 24K online comments with Krippendorff's Alpha label accordance of .713, 2.2K neutral sentences from Wikipedia, 1.7K additionally labeled sentences generated by the Human-in-the-Loop procedure and rule-generated 7.1K neutral sentences. The base model with 24K initial dataset achieved the accuracy of LRAP .892, but improved to .919 after being combined with 11K additional data. Unlike the conventional binary hate and non-hate dichotomy approach, we designed a dataset considering both the cultural and linguistic context to overcome the limitations of western culture-based English texts. Thus, this paper is not only limited to presenting a local hate speech dataset but extends as a manual for building a more generalized hate speech dataset with diverse cultural backgrounds based on social science perspectives.
|
1405.2092
|
Osvaldo Simeone
|
O. Simeone, E. Erkip and S. Shamai (Shitz)
|
Full-Duplex Cloud Radio Access Networks: An Information-Theoretic
Viewpoint
|
To appear in IEEE Wireless Communications Letters
| null |
10.1109/LWC.2014.2323073
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The conventional design of cellular systems prescribes the separation of
uplink and downlink transmissions via time-division or frequency-division
duplex. Recent advances in analog and digital domain self-interference
interference cancellation challenge the need for this arrangement and open up
the possibility to operate base stations, especially low-power ones, in a
full-duplex mode. As a means to cope with the resulting downlink-to-uplink
interference among base stations, this letter investigates the impact of the
Cloud Radio Access Network (C-RAN) architecture. The analysis follows an
information theoretic approach based on the classical Wyner model. The
analytical results herein confirm the significant potential advantages of the
C-RAN architecture in the presence of full-duplex base stations, as long as
sufficient fronthaul capacity is available and appropriate mobile station
scheduling, or successive interference cancellation at the mobile stations, is
implemented.
|
[
{
"created": "Thu, 8 May 2014 20:31:42 GMT",
"version": "v1"
}
] |
2016-11-15
|
[
[
"Simeone",
"O.",
"",
"Shitz"
],
[
"Erkip",
"E.",
"",
"Shitz"
],
[
"Shamai",
"S.",
"",
"Shitz"
]
] |
The conventional design of cellular systems prescribes the separation of uplink and downlink transmissions via time-division or frequency-division duplex. Recent advances in analog and digital domain self-interference interference cancellation challenge the need for this arrangement and open up the possibility to operate base stations, especially low-power ones, in a full-duplex mode. As a means to cope with the resulting downlink-to-uplink interference among base stations, this letter investigates the impact of the Cloud Radio Access Network (C-RAN) architecture. The analysis follows an information theoretic approach based on the classical Wyner model. The analytical results herein confirm the significant potential advantages of the C-RAN architecture in the presence of full-duplex base stations, as long as sufficient fronthaul capacity is available and appropriate mobile station scheduling, or successive interference cancellation at the mobile stations, is implemented.
|
1602.02991
|
Saeed Akhoondian Amiri
|
Saeed Akhoondian Amiri, Stefan Schmid, Sebastian Siebertz
|
A local constant factor approximation for the minimum dominating set
problem on bounded genus graphs
| null | null | null | null |
cs.DC cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Minimum Dominating Set (MDS) problem is not only one of the most
fundamental problems in distributed computing, it is also one of the most
challenging ones. While it is well-known that minimum dominating sets cannot be
approximated locally on general graphs, over the last years, several
breakthroughs have been made on computing local approximations on sparse
graphs.
This paper presents a deterministic and local constant factor approximation
for minimum dominating sets on bounded genus graphs, a very large family of
sparse graphs. Our main technical contribution is a new analysis of a slightly
modified, first-order definable variant of an existing algorithm by Lenzen et
al. Interestingly, unlike existing proofs for planar graphs, our analysis does
not rely on any topological arguments. We believe that our techniques can be
useful for the study of local problems on sparse graphs beyond the scope of
this paper.
|
[
{
"created": "Tue, 9 Feb 2016 14:17:55 GMT",
"version": "v1"
},
{
"created": "Fri, 12 Feb 2016 09:11:16 GMT",
"version": "v2"
}
] |
2016-02-15
|
[
[
"Amiri",
"Saeed Akhoondian",
""
],
[
"Schmid",
"Stefan",
""
],
[
"Siebertz",
"Sebastian",
""
]
] |
The Minimum Dominating Set (MDS) problem is not only one of the most fundamental problems in distributed computing, it is also one of the most challenging ones. While it is well-known that minimum dominating sets cannot be approximated locally on general graphs, over the last years, several breakthroughs have been made on computing local approximations on sparse graphs. This paper presents a deterministic and local constant factor approximation for minimum dominating sets on bounded genus graphs, a very large family of sparse graphs. Our main technical contribution is a new analysis of a slightly modified, first-order definable variant of an existing algorithm by Lenzen et al. Interestingly, unlike existing proofs for planar graphs, our analysis does not rely on any topological arguments. We believe that our techniques can be useful for the study of local problems on sparse graphs beyond the scope of this paper.
|
1401.6523
|
Haris Aziz
|
Haris Aziz and Serge Gaspers and Nick Mattei and Nina Narodytska and
Toby Walsh
|
Strategic aspects of the probabilistic serial rule for the allocation of
goods
|
28 pages
| null | null | null |
cs.GT cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The probabilistic serial (PS) rule is one of the most prominent randomized
rules for the assignment problem. It is well-known for its superior fairness
and welfare properties. However, PS is not immune to manipulative behaviour by
the agents. We examine computational and non-computational aspects of
strategising under the PS rule. Firstly, we study the computational complexity
of an agent manipulating the PS rule. We present polynomial-time algorithms for
optimal manipulation. Secondly, we show that expected utility best responses
can cycle. Thirdly, we examine the existence and computation of Nash
equilibrium profiles under the PS rule. We show that a pure Nash equilibrium is
guaranteed to exist under the PS rule. For two agents, we identify two
different types of preference profiles that are not only in Nash equilibrium
but can also be computed in linear time. Finally, we conduct experiments to
check the frequency of manipulability of the PS rule under different
combinations of the number of agents, objects, and utility functions.
|
[
{
"created": "Sat, 25 Jan 2014 12:11:09 GMT",
"version": "v1"
}
] |
2014-01-28
|
[
[
"Aziz",
"Haris",
""
],
[
"Gaspers",
"Serge",
""
],
[
"Mattei",
"Nick",
""
],
[
"Narodytska",
"Nina",
""
],
[
"Walsh",
"Toby",
""
]
] |
The probabilistic serial (PS) rule is one of the most prominent randomized rules for the assignment problem. It is well-known for its superior fairness and welfare properties. However, PS is not immune to manipulative behaviour by the agents. We examine computational and non-computational aspects of strategising under the PS rule. Firstly, we study the computational complexity of an agent manipulating the PS rule. We present polynomial-time algorithms for optimal manipulation. Secondly, we show that expected utility best responses can cycle. Thirdly, we examine the existence and computation of Nash equilibrium profiles under the PS rule. We show that a pure Nash equilibrium is guaranteed to exist under the PS rule. For two agents, we identify two different types of preference profiles that are not only in Nash equilibrium but can also be computed in linear time. Finally, we conduct experiments to check the frequency of manipulability of the PS rule under different combinations of the number of agents, objects, and utility functions.
|
2307.13541
|
Chuanchuan Wang
|
Chuanchuan Wang, Ahmad Sufril Azlan Mohamed
|
Group Activity Recognition in Computer Vision: A Comprehensive Review,
Challenges, and Future Perspectives
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Group activity recognition is a hot topic in computer vision. Recognizing
activities through group relationships plays a vital role in group activity
recognition. It holds practical implications in various scenarios, such as
video analysis, surveillance, automatic driving, and understanding social
activities. The model's key capabilities encompass efficiently modeling
hierarchical relationships within a scene and accurately extracting distinctive
spatiotemporal features from groups. Given this technology's extensive
applicability, identifying group activities has garnered significant research
attention. This work examines the current progress in technology for
recognizing group activities, with a specific focus on global interactivity and
activities. Firstly, we comprehensively review the pertinent literature and
various group activity recognition approaches, from traditional methodologies
to the latest methods based on spatial structure, descriptors, non-deep
learning, hierarchical recurrent neural networks (HRNN), relationship models,
and attention mechanisms. Subsequently, we present the relational network and
relational architectures for each module. Thirdly, we investigate methods for
recognizing group activity and compare their performance with state-of-the-art
technologies. We summarize the existing challenges and provide comprehensive
guidance for newcomers to understand group activity recognition. Furthermore,
we review emerging perspectives in group activity recognition to explore new
directions and possibilities.
|
[
{
"created": "Tue, 25 Jul 2023 14:44:41 GMT",
"version": "v1"
}
] |
2023-07-26
|
[
[
"Wang",
"Chuanchuan",
""
],
[
"Mohamed",
"Ahmad Sufril Azlan",
""
]
] |
Group activity recognition is a hot topic in computer vision. Recognizing activities through group relationships plays a vital role in group activity recognition. It holds practical implications in various scenarios, such as video analysis, surveillance, automatic driving, and understanding social activities. The model's key capabilities encompass efficiently modeling hierarchical relationships within a scene and accurately extracting distinctive spatiotemporal features from groups. Given this technology's extensive applicability, identifying group activities has garnered significant research attention. This work examines the current progress in technology for recognizing group activities, with a specific focus on global interactivity and activities. Firstly, we comprehensively review the pertinent literature and various group activity recognition approaches, from traditional methodologies to the latest methods based on spatial structure, descriptors, non-deep learning, hierarchical recurrent neural networks (HRNN), relationship models, and attention mechanisms. Subsequently, we present the relational network and relational architectures for each module. Thirdly, we investigate methods for recognizing group activity and compare their performance with state-of-the-art technologies. We summarize the existing challenges and provide comprehensive guidance for newcomers to understand group activity recognition. Furthermore, we review emerging perspectives in group activity recognition to explore new directions and possibilities.
|
1510.04188
|
Dmytro Terletskyi
|
Dmytro Terletskyi
|
Universal and Determined Constructors of Multisets of Objects
|
arXiv admin note: text overlap with arXiv:1510.04183
|
Information Theories and Applications, Vol. 21, Number 4, 2014,
pp. 339-361
| null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper contains analysis of creation of sets and multisets as an approach
for modeling of some aspects of human thinking. The creation of sets is
considered within constructive object-oriented version of set theory (COOST),
from different sides, in particular classical set theory, object-oriented
programming (OOP) and development of intelligent information systems (IIS). The
main feature of COOST in contrast to other versions of set theory is an
opportunity to describe essences of objects more precisely, using their
properties and methods, which can be applied to them. That is why this version
of set theory is object-oriented and close to OOP. Within COOST, the author
proposes universal constructor of multisets of objects that gives us a
possibility to create arbitrary multisets of objects. In addition, a few
determined constructors of multisets of objects, which allow creating
multisets, using strictly defined schemas, also are proposed in the paper. Such
constructors are very useful in cases of very big cardinalities of multisets,
because they give us an opportunity to calculate a multiplicity of each object
and cardinality of multiset before its creation. The proposed constructors of
multisets of objects allow us to model in a sense corresponding processes of
human thought, that in turn give us an opportunity to develop IIS, using these
tools.
|
[
{
"created": "Wed, 14 Oct 2015 16:27:26 GMT",
"version": "v1"
}
] |
2015-10-15
|
[
[
"Terletskyi",
"Dmytro",
""
]
] |
This paper contains analysis of creation of sets and multisets as an approach for modeling of some aspects of human thinking. The creation of sets is considered within constructive object-oriented version of set theory (COOST), from different sides, in particular classical set theory, object-oriented programming (OOP) and development of intelligent information systems (IIS). The main feature of COOST in contrast to other versions of set theory is an opportunity to describe essences of objects more precisely, using their properties and methods, which can be applied to them. That is why this version of set theory is object-oriented and close to OOP. Within COOST, the author proposes universal constructor of multisets of objects that gives us a possibility to create arbitrary multisets of objects. In addition, a few determined constructors of multisets of objects, which allow creating multisets, using strictly defined schemas, also are proposed in the paper. Such constructors are very useful in cases of very big cardinalities of multisets, because they give us an opportunity to calculate a multiplicity of each object and cardinality of multiset before its creation. The proposed constructors of multisets of objects allow us to model in a sense corresponding processes of human thought, that in turn give us an opportunity to develop IIS, using these tools.
|
2007.08689
|
Naoki Ide
|
Naoki Ide, Tetsuya Asayama, Hiroshi Ueno and Masayuki Ohzeki
|
Maximum-Likelihood Channel Decoding with Quantum Annealing Machine
|
accepted at ISITA 2020
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We formulate maximum likelihood (ML) channel decoding as a quadratic
unconstraint binary optimization (QUBO) and simulate the decoding by the
current commercial quantum annealing machine, D-Wave 2000Q. We prepared two
implementations with Ising model formulations, generated from the generator
matrix and the parity-check matrix respectively. We evaluated these
implementations of ML decoding for low-density parity-check (LDPC) codes,
analyzing the number of spins and connections and comparing the decoding
performance with belief propagation (BP) decoding and brute-force ML decoding
with classical computers. The results show that these implementations are
superior to BP decoding in relatively short length codes, and while the
performance in the long length codes deteriorates, the implementation from the
parity-check matrix formulation still works up to 1k length with fewer spins
and connections than that of the generator matrix formulation due to the
sparseness of parity-check matrices of LDPC.
|
[
{
"created": "Thu, 16 Jul 2020 23:33:33 GMT",
"version": "v1"
},
{
"created": "Mon, 5 Oct 2020 13:51:42 GMT",
"version": "v2"
}
] |
2020-10-06
|
[
[
"Ide",
"Naoki",
""
],
[
"Asayama",
"Tetsuya",
""
],
[
"Ueno",
"Hiroshi",
""
],
[
"Ohzeki",
"Masayuki",
""
]
] |
We formulate maximum likelihood (ML) channel decoding as a quadratic unconstraint binary optimization (QUBO) and simulate the decoding by the current commercial quantum annealing machine, D-Wave 2000Q. We prepared two implementations with Ising model formulations, generated from the generator matrix and the parity-check matrix respectively. We evaluated these implementations of ML decoding for low-density parity-check (LDPC) codes, analyzing the number of spins and connections and comparing the decoding performance with belief propagation (BP) decoding and brute-force ML decoding with classical computers. The results show that these implementations are superior to BP decoding in relatively short length codes, and while the performance in the long length codes deteriorates, the implementation from the parity-check matrix formulation still works up to 1k length with fewer spins and connections than that of the generator matrix formulation due to the sparseness of parity-check matrices of LDPC.
|
2109.04260
|
Xiao-Ming Wu
|
Xiao-Ming Wu, Xin Luo, Yu-Wei Zhan, Chen-Lu Ding, Zhen-Duo Chen,
Xin-Shun Xu
|
Online Enhanced Semantic Hashing: Towards Effective and Efficient
Retrieval for Streaming Multi-Modal Data
|
9 pages, 5 figures
| null | null | null |
cs.MM cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the vigorous development of multimedia equipment and applications,
efficient retrieval of large-scale multi-modal data has become a trendy
research topic. Thereinto, hashing has become a prevalent choice due to its
retrieval efficiency and low storage cost. Although multi-modal hashing has
drawn lots of attention in recent years, there still remain some problems. The
first point is that existing methods are mainly designed in batch mode and not
able to efficiently handle streaming multi-modal data. The second point is that
all existing online multi-modal hashing methods fail to effectively handle
unseen new classes which come continuously with streaming data chunks. In this
paper, we propose a new model, termed Online enhAnced SemantIc haShing (OASIS).
We design novel semantic-enhanced representation for data, which could help
handle the new coming classes, and thereby construct the enhanced semantic
objective function. An efficient and effective discrete online optimization
algorithm is further proposed for OASIS. Extensive experiments show that our
method can exceed the state-of-the-art models. For good reproducibility and
benefiting the community, our code and data are already available in
supplementary material and will be made publicly available.
|
[
{
"created": "Thu, 9 Sep 2021 13:30:31 GMT",
"version": "v1"
},
{
"created": "Thu, 24 Mar 2022 13:55:32 GMT",
"version": "v2"
}
] |
2022-03-25
|
[
[
"Wu",
"Xiao-Ming",
""
],
[
"Luo",
"Xin",
""
],
[
"Zhan",
"Yu-Wei",
""
],
[
"Ding",
"Chen-Lu",
""
],
[
"Chen",
"Zhen-Duo",
""
],
[
"Xu",
"Xin-Shun",
""
]
] |
With the vigorous development of multimedia equipment and applications, efficient retrieval of large-scale multi-modal data has become a trendy research topic. Thereinto, hashing has become a prevalent choice due to its retrieval efficiency and low storage cost. Although multi-modal hashing has drawn lots of attention in recent years, there still remain some problems. The first point is that existing methods are mainly designed in batch mode and not able to efficiently handle streaming multi-modal data. The second point is that all existing online multi-modal hashing methods fail to effectively handle unseen new classes which come continuously with streaming data chunks. In this paper, we propose a new model, termed Online enhAnced SemantIc haShing (OASIS). We design novel semantic-enhanced representation for data, which could help handle the new coming classes, and thereby construct the enhanced semantic objective function. An efficient and effective discrete online optimization algorithm is further proposed for OASIS. Extensive experiments show that our method can exceed the state-of-the-art models. For good reproducibility and benefiting the community, our code and data are already available in supplementary material and will be made publicly available.
|
2101.10605
|
Soramichi Akiyama
|
Soramichi Akiyama, Ryota Shioya
|
The Granularity Gap Problem: A Hurdle for Applying Approximate Memory to
Complex Data Layout
|
Extended version of a conference paper published in the 12th ACM/SPEC
International Conference on Performance Engineering (ICPE'21)
| null | null | null |
cs.ET cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The main memory access latency has not much improved for more than two
decades while the CPU performance had been exponentially increasing until
recently. Approximate memory is a technique to reduce the DRAM access latency
in return of losing data integrity. It is beneficial for applications that are
robust to noisy input and intermediate data such as artificial intelligence,
multimedia processing, and graph processing. To obtain reasonable outputs from
applications on approximate memory, it is crucial to protect critical data
while accelerating accesses to non-critical data. We refer the minimum size of
a continuous memory region that the same error rate is applied in approximate
memory to as the approximation granularity. A fundamental limitation of
approximate memory is that the approximation granularity is as large as a few
kilo bytes. However, applications may have critical and non-critical data
interleaved with smaller granularity. For example, a data structure for graph
nodes can have pointers (critical) to neighboring nodes and its score
(non-critical, depending on the use-case). This data structure cannot be
directly mapped to approximate memory due to the gap between the approximation
granularity and the granularity of data criticality. We refer to this issue as
the granularity gap problem. In this paper, we first show that many
applications potentially suffer from this problem. Then we propose a framework
to quantitatively evaluate the performance overhead of a possible method to
avoid this problem using known techniques. The evaluation results show that the
performance overhead is non-negligible compared to expected benefit from
approximate memory, suggesting that the granularity gap problem is a
significant concern.
|
[
{
"created": "Tue, 26 Jan 2021 07:34:24 GMT",
"version": "v1"
}
] |
2021-01-27
|
[
[
"Akiyama",
"Soramichi",
""
],
[
"Shioya",
"Ryota",
""
]
] |
The main memory access latency has not much improved for more than two decades while the CPU performance had been exponentially increasing until recently. Approximate memory is a technique to reduce the DRAM access latency in return of losing data integrity. It is beneficial for applications that are robust to noisy input and intermediate data such as artificial intelligence, multimedia processing, and graph processing. To obtain reasonable outputs from applications on approximate memory, it is crucial to protect critical data while accelerating accesses to non-critical data. We refer the minimum size of a continuous memory region that the same error rate is applied in approximate memory to as the approximation granularity. A fundamental limitation of approximate memory is that the approximation granularity is as large as a few kilo bytes. However, applications may have critical and non-critical data interleaved with smaller granularity. For example, a data structure for graph nodes can have pointers (critical) to neighboring nodes and its score (non-critical, depending on the use-case). This data structure cannot be directly mapped to approximate memory due to the gap between the approximation granularity and the granularity of data criticality. We refer to this issue as the granularity gap problem. In this paper, we first show that many applications potentially suffer from this problem. Then we propose a framework to quantitatively evaluate the performance overhead of a possible method to avoid this problem using known techniques. The evaluation results show that the performance overhead is non-negligible compared to expected benefit from approximate memory, suggesting that the granularity gap problem is a significant concern.
|
2105.03781
|
Yingjun Du
|
Yingjun Du, Haoliang Sun, Xiantong Zhen, Jun Xu, Yilong Yin, Ling
Shao, Cees G. M. Snoek
|
MetaKernel: Learning Variational Random Features with Limited Labels
|
19 pages,7 figures. arXiv admin note: substantial text overlap with
arXiv:2006.06707
| null | null | null |
cs.LG cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Few-shot learning deals with the fundamental and challenging problem of
learning from a few annotated samples, while being able to generalize well on
new tasks. The crux of few-shot learning is to extract prior knowledge from
related tasks to enable fast adaptation to a new task with a limited amount of
data. In this paper, we propose meta-learning kernels with random Fourier
features for few-shot learning, we call MetaKernel. Specifically, we propose
learning variational random features in a data-driven manner to obtain
task-specific kernels by leveraging the shared knowledge provided by related
tasks in a meta-learning setting. We treat the random feature basis as the
latent variable, which is estimated by variational inference. The shared
knowledge from related tasks is incorporated into a context inference of the
posterior, which we achieve via a long-short term memory module. To establish
more expressive kernels, we deploy conditional normalizing flows based on
coupling layers to achieve a richer posterior distribution over random Fourier
bases. The resultant kernels are more informative and discriminative, which
further improves the few-shot learning. To evaluate our method, we conduct
extensive experiments on both few-shot image classification and regression
tasks. A thorough ablation study demonstrates that the effectiveness of each
introduced component in our method. The benchmark results on fourteen datasets
demonstrate MetaKernel consistently delivers at least comparable and often
better performance than state-of-the-art alternatives.
|
[
{
"created": "Sat, 8 May 2021 21:24:09 GMT",
"version": "v1"
}
] |
2021-05-11
|
[
[
"Du",
"Yingjun",
""
],
[
"Sun",
"Haoliang",
""
],
[
"Zhen",
"Xiantong",
""
],
[
"Xu",
"Jun",
""
],
[
"Yin",
"Yilong",
""
],
[
"Shao",
"Ling",
""
],
[
"Snoek",
"Cees G. M.",
""
]
] |
Few-shot learning deals with the fundamental and challenging problem of learning from a few annotated samples, while being able to generalize well on new tasks. The crux of few-shot learning is to extract prior knowledge from related tasks to enable fast adaptation to a new task with a limited amount of data. In this paper, we propose meta-learning kernels with random Fourier features for few-shot learning, we call MetaKernel. Specifically, we propose learning variational random features in a data-driven manner to obtain task-specific kernels by leveraging the shared knowledge provided by related tasks in a meta-learning setting. We treat the random feature basis as the latent variable, which is estimated by variational inference. The shared knowledge from related tasks is incorporated into a context inference of the posterior, which we achieve via a long-short term memory module. To establish more expressive kernels, we deploy conditional normalizing flows based on coupling layers to achieve a richer posterior distribution over random Fourier bases. The resultant kernels are more informative and discriminative, which further improves the few-shot learning. To evaluate our method, we conduct extensive experiments on both few-shot image classification and regression tasks. A thorough ablation study demonstrates that the effectiveness of each introduced component in our method. The benchmark results on fourteen datasets demonstrate MetaKernel consistently delivers at least comparable and often better performance than state-of-the-art alternatives.
|
2108.00602
|
Yang Zhang
|
Yang Zhang, Xin Yu, Xiaobo Lu, Ping Liu
|
Pro-UIGAN: Progressive Face Hallucination from Occluded Thumbnails
| null | null |
10.1109/TIP.2022.3167280
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we study the task of hallucinating an authentic
high-resolution (HR) face from an occluded thumbnail. We propose a multi-stage
Progressive Upsampling and Inpainting Generative Adversarial Network, dubbed
Pro-UIGAN, which exploits facial geometry priors to replenish and upsample (8*)
the occluded and tiny faces (16*16 pixels). Pro-UIGAN iteratively (1) estimates
facial geometry priors for low-resolution (LR) faces and (2) acquires
non-occluded HR face images under the guidance of the estimated priors. Our
multi-stage hallucination network super-resolves and inpaints occluded LR faces
in a coarse-to-fine manner, thus reducing unwanted blurriness and artifacts
significantly. Specifically, we design a novel cross-modal transformer module
for facial priors estimation, in which an input face and its landmark features
are formulated as queries and keys, respectively. Such a design encourages
joint feature learning across the input facial and landmark features, and deep
feature correspondences will be discovered by attention. Thus, facial
appearance features and facial geometry priors are learned in a mutual
promotion manner. Extensive experiments demonstrate that our Pro-UIGAN achieves
visually pleasing HR faces, reaching superior performance in downstream tasks,
i.e., face alignment, face parsing, face recognition and expression
classification, compared with other state-of-the-art (SotA) methods.
|
[
{
"created": "Mon, 2 Aug 2021 02:29:24 GMT",
"version": "v1"
},
{
"created": "Wed, 4 Aug 2021 12:53:00 GMT",
"version": "v2"
},
{
"created": "Thu, 5 Aug 2021 00:55:43 GMT",
"version": "v3"
},
{
"created": "Sun, 8 Aug 2021 08:34:07 GMT",
"version": "v4"
},
{
"created": "Sat, 29 Jan 2022 13:10:20 GMT",
"version": "v5"
},
{
"created": "Fri, 1 Apr 2022 12:32:32 GMT",
"version": "v6"
}
] |
2022-05-11
|
[
[
"Zhang",
"Yang",
""
],
[
"Yu",
"Xin",
""
],
[
"Lu",
"Xiaobo",
""
],
[
"Liu",
"Ping",
""
]
] |
In this paper, we study the task of hallucinating an authentic high-resolution (HR) face from an occluded thumbnail. We propose a multi-stage Progressive Upsampling and Inpainting Generative Adversarial Network, dubbed Pro-UIGAN, which exploits facial geometry priors to replenish and upsample (8*) the occluded and tiny faces (16*16 pixels). Pro-UIGAN iteratively (1) estimates facial geometry priors for low-resolution (LR) faces and (2) acquires non-occluded HR face images under the guidance of the estimated priors. Our multi-stage hallucination network super-resolves and inpaints occluded LR faces in a coarse-to-fine manner, thus reducing unwanted blurriness and artifacts significantly. Specifically, we design a novel cross-modal transformer module for facial priors estimation, in which an input face and its landmark features are formulated as queries and keys, respectively. Such a design encourages joint feature learning across the input facial and landmark features, and deep feature correspondences will be discovered by attention. Thus, facial appearance features and facial geometry priors are learned in a mutual promotion manner. Extensive experiments demonstrate that our Pro-UIGAN achieves visually pleasing HR faces, reaching superior performance in downstream tasks, i.e., face alignment, face parsing, face recognition and expression classification, compared with other state-of-the-art (SotA) methods.
|
2104.00452
|
Jo\v{z}e Ro\v{z}anec
|
Jo\v{z}e M. Ro\v{z}anec and Dunja Mladeni\'c
|
Semantic XAI for contextualized demand forecasting explanations
| null | null | null | null |
cs.AI cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The paper proposes a novel architecture for explainable AI based on semantic
technologies and AI. We tailor the architecture for the domain of demand
forecasting and validate it on a real-world case study. The provided
explanations combine concepts describing features relevant to a particular
forecast, related media events, and metadata regarding external datasets of
interest. The knowledge graph provides concepts that convey feature information
at a higher abstraction level. By using them, explanations do not expose
sensitive details regarding the demand forecasting models. The explanations
also emphasize actionable dimensions where suitable. We link domain knowledge,
forecasted values, and forecast explanations in a Knowledge Graph. The ontology
and dataset we developed for this use case are publicly available for further
research.
|
[
{
"created": "Thu, 1 Apr 2021 13:08:53 GMT",
"version": "v1"
}
] |
2021-04-02
|
[
[
"Rožanec",
"Jože M.",
""
],
[
"Mladenić",
"Dunja",
""
]
] |
The paper proposes a novel architecture for explainable AI based on semantic technologies and AI. We tailor the architecture for the domain of demand forecasting and validate it on a real-world case study. The provided explanations combine concepts describing features relevant to a particular forecast, related media events, and metadata regarding external datasets of interest. The knowledge graph provides concepts that convey feature information at a higher abstraction level. By using them, explanations do not expose sensitive details regarding the demand forecasting models. The explanations also emphasize actionable dimensions where suitable. We link domain knowledge, forecasted values, and forecast explanations in a Knowledge Graph. The ontology and dataset we developed for this use case are publicly available for further research.
|
2202.09950
|
Tao Wang
|
Tao Wang, Jiangyan Yi, Ruibo Fu, Jianhua Tao, Zhengqi Wen
|
CampNet: Context-Aware Mask Prediction for End-to-End Text-Based Speech
Editing
|
under review, 14 pages, 14 figures, demo page is available at
https://hairuo55.github.io/CampNet
| null | null | null |
cs.SD cs.CL eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The text-based speech editor allows the editing of speech through intuitive
cutting, copying, and pasting operations to speed up the process of editing
speech. However, the major drawback of current systems is that edited speech
often sounds unnatural due to cut-copy-paste operation. In addition, it is not
obvious how to synthesize records according to a new word not appearing in the
transcript. This paper proposes a novel end-to-end text-based speech editing
method called context-aware mask prediction network (CampNet). The model can
simulate the text-based speech editing process by randomly masking part of
speech and then predicting the masked region by sensing the speech context. It
can solve unnatural prosody in the edited region and synthesize the speech
corresponding to the unseen words in the transcript. Secondly, for the possible
operation of text-based speech editing, we design three text-based operations
based on CampNet: deletion, insertion, and replacement. These operations can
cover various situations of speech editing. Thirdly, to synthesize the speech
corresponding to long text in insertion and replacement operations, a
word-level autoregressive generation method is proposed. Fourthly, we propose a
speaker adaptation method using only one sentence for CampNet and explore the
ability of few-shot learning based on CampNet, which provides a new idea for
speech forgery tasks. The subjective and objective experiments on VCTK and
LibriTTS datasets show that the speech editing results based on CampNet are
better than TTS technology, manual editing, and VoCo method. We also conduct
detailed ablation experiments to explore the effect of the CampNet structure on
its performance. Finally, the experiment shows that speaker adaptation with
only one sentence can further improve the naturalness of speech. Examples of
generated speech can be found at https://hairuo55.github.io/CampNet.
|
[
{
"created": "Mon, 21 Feb 2022 02:05:14 GMT",
"version": "v1"
},
{
"created": "Tue, 22 Mar 2022 12:45:11 GMT",
"version": "v2"
}
] |
2022-03-23
|
[
[
"Wang",
"Tao",
""
],
[
"Yi",
"Jiangyan",
""
],
[
"Fu",
"Ruibo",
""
],
[
"Tao",
"Jianhua",
""
],
[
"Wen",
"Zhengqi",
""
]
] |
The text-based speech editor allows the editing of speech through intuitive cutting, copying, and pasting operations to speed up the process of editing speech. However, the major drawback of current systems is that edited speech often sounds unnatural due to cut-copy-paste operation. In addition, it is not obvious how to synthesize records according to a new word not appearing in the transcript. This paper proposes a novel end-to-end text-based speech editing method called context-aware mask prediction network (CampNet). The model can simulate the text-based speech editing process by randomly masking part of speech and then predicting the masked region by sensing the speech context. It can solve unnatural prosody in the edited region and synthesize the speech corresponding to the unseen words in the transcript. Secondly, for the possible operation of text-based speech editing, we design three text-based operations based on CampNet: deletion, insertion, and replacement. These operations can cover various situations of speech editing. Thirdly, to synthesize the speech corresponding to long text in insertion and replacement operations, a word-level autoregressive generation method is proposed. Fourthly, we propose a speaker adaptation method using only one sentence for CampNet and explore the ability of few-shot learning based on CampNet, which provides a new idea for speech forgery tasks. The subjective and objective experiments on VCTK and LibriTTS datasets show that the speech editing results based on CampNet are better than TTS technology, manual editing, and VoCo method. We also conduct detailed ablation experiments to explore the effect of the CampNet structure on its performance. Finally, the experiment shows that speaker adaptation with only one sentence can further improve the naturalness of speech. Examples of generated speech can be found at https://hairuo55.github.io/CampNet.
|
2204.07653
|
Susu Xu
|
Susu Xu, Joshua Dimasaka, David J. Wald, Hae Young Noh
|
Bayesian Updating of Seismic Ground Failure Estimates via Causal
Graphical Models and Satellite Imagery
|
The 17th World Conference on Earthquake Engineering, Sendai, Japan,
Sep. 2021
| null | null | null |
cs.CE
|
http://creativecommons.org/licenses/by/4.0/
|
Earthquake-induced secondary ground failure hazards, such as liquefaction and
landslides, result in catastrophic building and infrastructure damage as well
as human fatalities. To facilitate emergency responses and mitigate losses, the
U.S. Geological Survey provides a rapid hazard estimation system for
earthquake-triggered landslides and liquefaction using geospatial
susceptibility proxies and ShakeMap ground motion estimates. In this study, we
develop a generalized causal graph-based Bayesian network that models the
physical interdependencies between geospatial features, seismic ground
failures, and building damage, as well as DPMs. Geospatial features provide
physical insights for estimating ground failure occurrence while DPMs contain
event-specific surface change observations. This physics-informed causal graph
incorporates these variables with complex physical relationships in one
holistic Bayesian updating scheme to effectively fuse information from both
geospatial models and remote sensing data. This framework is scalable and
flexible enough to deal with highly complex multi-hazard combinations. We then
develop a stochastic variational inference algorithm to jointly update the
intractable posterior probabilities of unobserved landslides, liquefaction, and
building damage at different locations efficiently. In addition, a local
graphical model pruning algorithm is presented to reduce the computational cost
of large-scale seismic ground failure estimation. We apply this framework to
the September 2018 Hokkaido Iburi-Tobu, Japan (M6.6) earthquake and January
2020 Southwest Puerto Rico (M6.4) earthquake to evaluate the performance of our
algorithm.
|
[
{
"created": "Fri, 15 Apr 2022 21:42:41 GMT",
"version": "v1"
}
] |
2022-04-19
|
[
[
"Xu",
"Susu",
""
],
[
"Dimasaka",
"Joshua",
""
],
[
"Wald",
"David J.",
""
],
[
"Noh",
"Hae Young",
""
]
] |
Earthquake-induced secondary ground failure hazards, such as liquefaction and landslides, result in catastrophic building and infrastructure damage as well as human fatalities. To facilitate emergency responses and mitigate losses, the U.S. Geological Survey provides a rapid hazard estimation system for earthquake-triggered landslides and liquefaction using geospatial susceptibility proxies and ShakeMap ground motion estimates. In this study, we develop a generalized causal graph-based Bayesian network that models the physical interdependencies between geospatial features, seismic ground failures, and building damage, as well as DPMs. Geospatial features provide physical insights for estimating ground failure occurrence while DPMs contain event-specific surface change observations. This physics-informed causal graph incorporates these variables with complex physical relationships in one holistic Bayesian updating scheme to effectively fuse information from both geospatial models and remote sensing data. This framework is scalable and flexible enough to deal with highly complex multi-hazard combinations. We then develop a stochastic variational inference algorithm to jointly update the intractable posterior probabilities of unobserved landslides, liquefaction, and building damage at different locations efficiently. In addition, a local graphical model pruning algorithm is presented to reduce the computational cost of large-scale seismic ground failure estimation. We apply this framework to the September 2018 Hokkaido Iburi-Tobu, Japan (M6.6) earthquake and January 2020 Southwest Puerto Rico (M6.4) earthquake to evaluate the performance of our algorithm.
|
2305.02217
|
Zhi-Hua Zhou
|
Zhi-Hua Zhou
|
Learnability with Time-Sharing Computational Resource Concerns
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Conventional theoretical machine learning studies generally assume explicitly
or implicitly that there are enough or even infinitely supplied computational
resources. In real practice, however, computational resources are usually
limited, and the performance of machine learning depends not only on how many
data have been received, but also on how many data can be handled subject to
computational resources available. Note that most current ``intelligent
supercomputing'' facilities work like exclusive operating systems, where a
fixed amount of resources are allocated to a machine learning task without
adaptive scheduling strategies considering important factors such as the
learning performance demands and learning process status. In this article, we
introduce the notion of machine learning throughput, define Computational
Resource Efficient Learning (CoRE-Learning), and present a theoretical
framework that takes into account the influence of computational resources in
learning theory. This framework can be naturally applied to stream learning
where the incoming data streams can be potentially endless with overwhelming
size and it is impractical to assume that all received data can be handled in
time. It may also provide a theoretical perspective for the design of
intelligent supercomputing operating systems.
|
[
{
"created": "Wed, 3 May 2023 15:54:23 GMT",
"version": "v1"
},
{
"created": "Mon, 19 Jun 2023 13:20:27 GMT",
"version": "v2"
},
{
"created": "Sun, 3 Dec 2023 09:00:24 GMT",
"version": "v3"
},
{
"created": "Mon, 20 May 2024 05:43:40 GMT",
"version": "v4"
}
] |
2024-05-21
|
[
[
"Zhou",
"Zhi-Hua",
""
]
] |
Conventional theoretical machine learning studies generally assume explicitly or implicitly that there are enough or even infinitely supplied computational resources. In real practice, however, computational resources are usually limited, and the performance of machine learning depends not only on how many data have been received, but also on how many data can be handled subject to computational resources available. Note that most current ``intelligent supercomputing'' facilities work like exclusive operating systems, where a fixed amount of resources are allocated to a machine learning task without adaptive scheduling strategies considering important factors such as the learning performance demands and learning process status. In this article, we introduce the notion of machine learning throughput, define Computational Resource Efficient Learning (CoRE-Learning), and present a theoretical framework that takes into account the influence of computational resources in learning theory. This framework can be naturally applied to stream learning where the incoming data streams can be potentially endless with overwhelming size and it is impractical to assume that all received data can be handled in time. It may also provide a theoretical perspective for the design of intelligent supercomputing operating systems.
|
2305.09585
|
Wieke Prummel
|
Wieke Prummel, Jhony H. Giraldo, Anastasia Zakharova, Thierry Bouwmans
|
Inductive Graph Neural Networks for Moving Object Segmentation
|
Submitted to ICIP 2023
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Moving Object Segmentation (MOS) is a challenging problem in computer vision,
particularly in scenarios with dynamic backgrounds, abrupt lighting changes,
shadows, camouflage, and moving cameras. While graph-based methods have shown
promising results in MOS, they have mainly relied on transductive learning
which assumes access to the entire training and testing data for evaluation.
However, this assumption is not realistic in real-world applications where the
system needs to handle new data during deployment. In this paper, we propose a
novel Graph Inductive Moving Object Segmentation (GraphIMOS) algorithm based on
a Graph Neural Network (GNN) architecture. Our approach builds a generic model
capable of performing prediction on newly added data frames using the already
trained model. GraphIMOS outperforms previous inductive learning methods and is
more generic than previous transductive techniques. Our proposed algorithm
enables the deployment of graph-based MOS models in real-world applications.
|
[
{
"created": "Tue, 16 May 2023 16:32:08 GMT",
"version": "v1"
}
] |
2023-05-17
|
[
[
"Prummel",
"Wieke",
""
],
[
"Giraldo",
"Jhony H.",
""
],
[
"Zakharova",
"Anastasia",
""
],
[
"Bouwmans",
"Thierry",
""
]
] |
Moving Object Segmentation (MOS) is a challenging problem in computer vision, particularly in scenarios with dynamic backgrounds, abrupt lighting changes, shadows, camouflage, and moving cameras. While graph-based methods have shown promising results in MOS, they have mainly relied on transductive learning which assumes access to the entire training and testing data for evaluation. However, this assumption is not realistic in real-world applications where the system needs to handle new data during deployment. In this paper, we propose a novel Graph Inductive Moving Object Segmentation (GraphIMOS) algorithm based on a Graph Neural Network (GNN) architecture. Our approach builds a generic model capable of performing prediction on newly added data frames using the already trained model. GraphIMOS outperforms previous inductive learning methods and is more generic than previous transductive techniques. Our proposed algorithm enables the deployment of graph-based MOS models in real-world applications.
|
2103.15066
|
Fang Wu
|
Fang Wu, Stan Z. Li
|
InsertGNN: Can Graph Neural Networks Outperform Humans in TOEFL Sentence
Insertion Problem?
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sentence insertion is an interesting NLP problem but received insufficient
attention. Existing approaches in sentence ordering, text coherence, and
question answering are neither suitable nor good enough at solving it. To
bridge this gap, we propose InsertGNN, a simple yet effective model that
represents the problem as a graph and adopts a hierarchical graph neural
network (GNN) to learn the connection between sentences. We evaluate our method
in our newly collected TOEFL dataset and further verify its effectiveness on
the larger arXiv dataset using cross-domain learning. Extensive experiments
demonstrate that InsertGNN outperforms all baselines by a large margin with an
accuracy of 70\%, rivaling the average human test scores.
|
[
{
"created": "Sun, 28 Mar 2021 06:50:31 GMT",
"version": "v1"
},
{
"created": "Sat, 7 Jan 2023 05:24:34 GMT",
"version": "v2"
}
] |
2023-01-10
|
[
[
"Wu",
"Fang",
""
],
[
"Li",
"Stan Z.",
""
]
] |
Sentence insertion is an interesting NLP problem but received insufficient attention. Existing approaches in sentence ordering, text coherence, and question answering are neither suitable nor good enough at solving it. To bridge this gap, we propose InsertGNN, a simple yet effective model that represents the problem as a graph and adopts a hierarchical graph neural network (GNN) to learn the connection between sentences. We evaluate our method in our newly collected TOEFL dataset and further verify its effectiveness on the larger arXiv dataset using cross-domain learning. Extensive experiments demonstrate that InsertGNN outperforms all baselines by a large margin with an accuracy of 70\%, rivaling the average human test scores.
|
1902.05183
|
Ngoc Duy Nguyen
|
Ngoc Duy Nguyen, Thanh Nguyen, Saeid Nahavandi, Asim Bhatti, Glenn
Guest
|
Manipulating Soft Tissues by Deep Reinforcement Learning for Autonomous
Robotic Surgery
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In robotic surgery, pattern cutting through a deformable material is a
challenging research field. The cutting procedure requires a robot to
concurrently manipulate a scissor and a gripper to cut through a predefined
contour trajectory on the deformable sheet. The gripper ensures the cutting
accuracy by nailing a point on the sheet and continuously tensioning the pinch
point to different directions while the scissor is in action. The goal is to
find a pinch point and a corresponding tensioning policy to minimize damage to
the material and increase cutting accuracy measured by the symmetric difference
between the predefined contour and the cut contour. Previous study considers
finding one fixed pinch point during the course of cutting, which is inaccurate
and unsafe when the contour trajectory is complex. In this paper, we examine
the soft tissue cutting task by using multiple pinch points, which imitates
human operations while cutting. This approach, however, does not require the
use of a multi-gripper robot. We use a deep reinforcement learning algorithm to
find an optimal tensioning policy of a pinch point. Simulation results show
that the multi-point approach outperforms the state-of-the-art method in soft
pattern cutting task with respect to both accuracy and reliability.
|
[
{
"created": "Thu, 14 Feb 2019 01:49:32 GMT",
"version": "v1"
}
] |
2019-02-15
|
[
[
"Nguyen",
"Ngoc Duy",
""
],
[
"Nguyen",
"Thanh",
""
],
[
"Nahavandi",
"Saeid",
""
],
[
"Bhatti",
"Asim",
""
],
[
"Guest",
"Glenn",
""
]
] |
In robotic surgery, pattern cutting through a deformable material is a challenging research field. The cutting procedure requires a robot to concurrently manipulate a scissor and a gripper to cut through a predefined contour trajectory on the deformable sheet. The gripper ensures the cutting accuracy by nailing a point on the sheet and continuously tensioning the pinch point to different directions while the scissor is in action. The goal is to find a pinch point and a corresponding tensioning policy to minimize damage to the material and increase cutting accuracy measured by the symmetric difference between the predefined contour and the cut contour. Previous study considers finding one fixed pinch point during the course of cutting, which is inaccurate and unsafe when the contour trajectory is complex. In this paper, we examine the soft tissue cutting task by using multiple pinch points, which imitates human operations while cutting. This approach, however, does not require the use of a multi-gripper robot. We use a deep reinforcement learning algorithm to find an optimal tensioning policy of a pinch point. Simulation results show that the multi-point approach outperforms the state-of-the-art method in soft pattern cutting task with respect to both accuracy and reliability.
|
2110.15766
|
Connor Holmes
|
Connor Holmes, Minjia Zhang, Yuxiong He, and Bo Wu
|
NxMTransformer: Semi-Structured Sparsification for Natural Language
Understanding via ADMM
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Natural Language Processing (NLP) has recently achieved success by using huge
pre-trained Transformer networks. However, these models often contain hundreds
of millions or even billions of parameters, bringing challenges to online
deployment due to latency constraints. Recently, hardware manufacturers have
introduced dedicated hardware for NxM sparsity to provide the flexibility of
unstructured pruning with the runtime efficiency of structured approaches. NxM
sparsity permits arbitrarily selecting M parameters to retain from a contiguous
group of N in the dense representation. However, due to the extremely high
complexity of pre-trained models, the standard sparse fine-tuning techniques
often fail to generalize well on downstream tasks, which have limited data
resources. To address such an issue in a principled manner, we introduce a new
learning framework, called NxMTransformer, to induce NxM semi-structured
sparsity on pretrained language models for natural language understanding to
obtain better performance. In particular, we propose to formulate the NxM
sparsity as a constrained optimization problem and use Alternating Direction
Method of Multipliers (ADMM) to optimize the downstream tasks while taking the
underlying hardware constraints into consideration. ADMM decomposes the NxM
sparsification problem into two sub-problems that can be solved sequentially,
generating sparsified Transformer networks that achieve high accuracy while
being able to effectively execute on newly released hardware. We apply our
approach to a wide range of NLP tasks, and our proposed method is able to
achieve 1.7 points higher accuracy in GLUE score than current practices.
Moreover, we perform detailed analysis on our approach and shed light on how
ADMM affects fine-tuning accuracy for downstream tasks. Finally, we illustrate
how NxMTransformer achieves performance improvement with knowledge
distillation.
|
[
{
"created": "Thu, 28 Oct 2021 17:43:06 GMT",
"version": "v1"
}
] |
2021-11-01
|
[
[
"Holmes",
"Connor",
""
],
[
"Zhang",
"Minjia",
""
],
[
"He",
"Yuxiong",
""
],
[
"Wu",
"Bo",
""
]
] |
Natural Language Processing (NLP) has recently achieved success by using huge pre-trained Transformer networks. However, these models often contain hundreds of millions or even billions of parameters, bringing challenges to online deployment due to latency constraints. Recently, hardware manufacturers have introduced dedicated hardware for NxM sparsity to provide the flexibility of unstructured pruning with the runtime efficiency of structured approaches. NxM sparsity permits arbitrarily selecting M parameters to retain from a contiguous group of N in the dense representation. However, due to the extremely high complexity of pre-trained models, the standard sparse fine-tuning techniques often fail to generalize well on downstream tasks, which have limited data resources. To address such an issue in a principled manner, we introduce a new learning framework, called NxMTransformer, to induce NxM semi-structured sparsity on pretrained language models for natural language understanding to obtain better performance. In particular, we propose to formulate the NxM sparsity as a constrained optimization problem and use Alternating Direction Method of Multipliers (ADMM) to optimize the downstream tasks while taking the underlying hardware constraints into consideration. ADMM decomposes the NxM sparsification problem into two sub-problems that can be solved sequentially, generating sparsified Transformer networks that achieve high accuracy while being able to effectively execute on newly released hardware. We apply our approach to a wide range of NLP tasks, and our proposed method is able to achieve 1.7 points higher accuracy in GLUE score than current practices. Moreover, we perform detailed analysis on our approach and shed light on how ADMM affects fine-tuning accuracy for downstream tasks. Finally, we illustrate how NxMTransformer achieves performance improvement with knowledge distillation.
|
1803.09225
|
Ali Nasir
|
A. A. Nasir, H. D. Tuan, T. Q. Duong, and M. Debbah
|
NOMA for throughput and EE maximization in Energy Harvesting Enabled
Networks
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Wireless power transfer via radio-frequency (RF) radiation is regarded as a
potential solution to energize energy-constrained users, who are deployed close
to the base stations (near-by users). However, energy transfer requires much
more transmit power than normal information transfer, which makes it very
challenging to provide the quality of service in terms of throughput for all
near-by users and cell-edge users. Thus, it is of practical interest to employ
non-orthogonal multiple access (NOMA) to improve the throughput of all network
users, while fulfilling the energy harvesting requirements of the near-by
users. To realize both energy harvesting and information decoding, we consider
a transmit time-switching (transmit-TS) protocol. We formulate two important
beamfoming problems of users' max-min throughput optimization and energy
efficiency maximization under power constraint and energy harvesting thresholds
at the nearly-located users. For these problems, the optimization objective and
energy harvesting are non-convex in beamforming vectors. Thus, we develop
efficient path-following algorithms to solve them. In addition, we also
consider conventional power splitting (PS)-based energy harvesting receiver.
Our numerical results confirm that the proposed transmit-TS based algorithms
clearly outperform PS-based algorithms in terms of both, throughput and energy
efficiency.
|
[
{
"created": "Sun, 25 Mar 2018 09:59:10 GMT",
"version": "v1"
},
{
"created": "Sat, 23 Jun 2018 06:40:58 GMT",
"version": "v2"
}
] |
2018-06-26
|
[
[
"Nasir",
"A. A.",
""
],
[
"Tuan",
"H. D.",
""
],
[
"Duong",
"T. Q.",
""
],
[
"Debbah",
"M.",
""
]
] |
Wireless power transfer via radio-frequency (RF) radiation is regarded as a potential solution to energize energy-constrained users, who are deployed close to the base stations (near-by users). However, energy transfer requires much more transmit power than normal information transfer, which makes it very challenging to provide the quality of service in terms of throughput for all near-by users and cell-edge users. Thus, it is of practical interest to employ non-orthogonal multiple access (NOMA) to improve the throughput of all network users, while fulfilling the energy harvesting requirements of the near-by users. To realize both energy harvesting and information decoding, we consider a transmit time-switching (transmit-TS) protocol. We formulate two important beamfoming problems of users' max-min throughput optimization and energy efficiency maximization under power constraint and energy harvesting thresholds at the nearly-located users. For these problems, the optimization objective and energy harvesting are non-convex in beamforming vectors. Thus, we develop efficient path-following algorithms to solve them. In addition, we also consider conventional power splitting (PS)-based energy harvesting receiver. Our numerical results confirm that the proposed transmit-TS based algorithms clearly outperform PS-based algorithms in terms of both, throughput and energy efficiency.
|
2210.05234
|
Min Yang
|
Yuxin Song, Min Yang, Wenhao Wu, Dongliang He, Fu Li and Jingdong Wang
|
It Takes Two: Masked Appearance-Motion Modeling for Self-supervised
Video Transformer Pre-training
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Self-supervised video transformer pre-training has recently benefited from
the mask-and-predict pipeline. They have demonstrated outstanding effectiveness
on downstream video tasks and superior data efficiency on small datasets.
However, temporal relation is not fully exploited by these methods. In this
work, we explicitly investigate motion cues in videos as extra prediction
target and propose our Masked Appearance-Motion Modeling (MAM2) framework.
Specifically, we design an encoder-regressor-decoder pipeline for this task.
The regressor separates feature encoding and pretext tasks completion, such
that the feature extraction process is completed adequately by the encoder. In
order to guide the encoder to fully excavate spatial-temporal features, two
separate decoders are used for two pretext tasks of disentangled appearance and
motion prediction. We explore various motion prediction targets and figure out
RGB-difference is simple yet effective. As for appearance prediction, VQGAN
codes are leveraged as prediction target. With our pre-training pipeline,
convergence can be remarkably speed up, e.g., we only require half of epochs
than state-of-the-art VideoMAE (400 v.s. 800) to achieve the competitive
performance. Extensive experimental results prove that our method learns
generalized video representations. Notably, our MAM2 with ViT-B achieves 82.3%
on Kinects-400, 71.3% on Something-Something V2, 91.5% on UCF101, and 62.5% on
HMDB51.
|
[
{
"created": "Tue, 11 Oct 2022 08:05:18 GMT",
"version": "v1"
}
] |
2022-10-12
|
[
[
"Song",
"Yuxin",
""
],
[
"Yang",
"Min",
""
],
[
"Wu",
"Wenhao",
""
],
[
"He",
"Dongliang",
""
],
[
"Li",
"Fu",
""
],
[
"Wang",
"Jingdong",
""
]
] |
Self-supervised video transformer pre-training has recently benefited from the mask-and-predict pipeline. They have demonstrated outstanding effectiveness on downstream video tasks and superior data efficiency on small datasets. However, temporal relation is not fully exploited by these methods. In this work, we explicitly investigate motion cues in videos as extra prediction target and propose our Masked Appearance-Motion Modeling (MAM2) framework. Specifically, we design an encoder-regressor-decoder pipeline for this task. The regressor separates feature encoding and pretext tasks completion, such that the feature extraction process is completed adequately by the encoder. In order to guide the encoder to fully excavate spatial-temporal features, two separate decoders are used for two pretext tasks of disentangled appearance and motion prediction. We explore various motion prediction targets and figure out RGB-difference is simple yet effective. As for appearance prediction, VQGAN codes are leveraged as prediction target. With our pre-training pipeline, convergence can be remarkably speed up, e.g., we only require half of epochs than state-of-the-art VideoMAE (400 v.s. 800) to achieve the competitive performance. Extensive experimental results prove that our method learns generalized video representations. Notably, our MAM2 with ViT-B achieves 82.3% on Kinects-400, 71.3% on Something-Something V2, 91.5% on UCF101, and 62.5% on HMDB51.
|
1510.03023
|
Dani Lischinski
|
Haisen Zhao, Lin Lu, Yuan Wei, Dani Lischinski, Andrei Sharf, Daniel
Cohen-Or, Baoquan Chen
|
Printed Perforated Lampshades for Continuous Projective Images
|
10 pages
| null | null | null |
cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a technique for designing 3D-printed perforated lampshades, which
project continuous grayscale images onto the surrounding walls. Given the
geometry of the lampshade and a target grayscale image, our method computes a
distribution of tiny holes over the shell, such that the combined footprints of
the light emanating through the holes form the target image on a nearby diffuse
surface. Our objective is to approximate the continuous tones and the spatial
detail of the target image, to the extent possible within the constraints of
the fabrication process.
To ensure structural integrity, there are lower bounds on the thickness of
the shell, the radii of the holes, and the minimal distances between adjacent
holes. Thus, the holes are realized as thin tubes distributed over the
lampshade surface. The amount of light passing through a single tube may be
controlled by the tube's radius and by its direction (tilt angle). The core of
our technique thus consists of determining a suitable configuration of the
tubes: their distribution across the relevant portion of the lampshade, as well
as the parameters (radius, tilt angle) of each tube. This is achieved by
computing a capacity-constrained Voronoi tessellation over a suitably defined
density function, and embedding a tube inside the maximal inscribed circle of
each tessellation cell. The density function for a particular target image is
derived from a series of simulated images, each corresponding to a different
uniform density tube pattern on the lampshade.
|
[
{
"created": "Sun, 11 Oct 2015 08:13:38 GMT",
"version": "v1"
}
] |
2015-10-13
|
[
[
"Zhao",
"Haisen",
""
],
[
"Lu",
"Lin",
""
],
[
"Wei",
"Yuan",
""
],
[
"Lischinski",
"Dani",
""
],
[
"Sharf",
"Andrei",
""
],
[
"Cohen-Or",
"Daniel",
""
],
[
"Chen",
"Baoquan",
""
]
] |
We present a technique for designing 3D-printed perforated lampshades, which project continuous grayscale images onto the surrounding walls. Given the geometry of the lampshade and a target grayscale image, our method computes a distribution of tiny holes over the shell, such that the combined footprints of the light emanating through the holes form the target image on a nearby diffuse surface. Our objective is to approximate the continuous tones and the spatial detail of the target image, to the extent possible within the constraints of the fabrication process. To ensure structural integrity, there are lower bounds on the thickness of the shell, the radii of the holes, and the minimal distances between adjacent holes. Thus, the holes are realized as thin tubes distributed over the lampshade surface. The amount of light passing through a single tube may be controlled by the tube's radius and by its direction (tilt angle). The core of our technique thus consists of determining a suitable configuration of the tubes: their distribution across the relevant portion of the lampshade, as well as the parameters (radius, tilt angle) of each tube. This is achieved by computing a capacity-constrained Voronoi tessellation over a suitably defined density function, and embedding a tube inside the maximal inscribed circle of each tessellation cell. The density function for a particular target image is derived from a series of simulated images, each corresponding to a different uniform density tube pattern on the lampshade.
|
1701.00722
|
Laslo Hunhold
|
Laslo Hunhold
|
The Unum Number Format: Mathematical Foundations, Implementation and
Comparison to IEEE 754 Floating-Point Numbers
|
95 pages, 7 figures, 14 code listings
| null | null | null |
cs.NA cs.MS
|
http://creativecommons.org/licenses/by/4.0/
|
This thesis examines a modern concept for machine numbers based on interval
arithmetic called 'Unums' and compares it to IEEE 754 floating-point
arithmetic, evaluating possible uses of this format where floating-point
numbers are inadequate. In the course of this examination, this thesis builds
theoretical foundations for IEEE 754 floating-point numbers, interval
arithmetic based on the projectively extended real numbers and Unums.
|
[
{
"created": "Mon, 2 Jan 2017 23:21:43 GMT",
"version": "v1"
}
] |
2017-01-04
|
[
[
"Hunhold",
"Laslo",
""
]
] |
This thesis examines a modern concept for machine numbers based on interval arithmetic called 'Unums' and compares it to IEEE 754 floating-point arithmetic, evaluating possible uses of this format where floating-point numbers are inadequate. In the course of this examination, this thesis builds theoretical foundations for IEEE 754 floating-point numbers, interval arithmetic based on the projectively extended real numbers and Unums.
|
2311.12257
|
Weihan Xu
|
Weihan Xu, Julian McAuley, Shlomo Dubnov, Hao-Wen Dong
|
Equipping Pretrained Unconditional Music Transformers with Instrument
and Genre Controls
| null | null | null | null |
cs.SD cs.IR cs.MM eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The ''pretraining-and-finetuning'' paradigm has become a norm for training
domain-specific models in natural language processing and computer vision. In
this work, we aim to examine this paradigm for symbolic music generation
through leveraging the largest ever symbolic music dataset sourced from the
MuseScore forum. We first pretrain a large unconditional transformer model
using 1.5 million songs. We then propose a simple technique to equip this
pretrained unconditional music transformer model with instrument and genre
controls by finetuning the model with additional control tokens. Our proposed
representation offers improved high-level controllability and expressiveness
against two existing representations. The experimental results show that the
proposed model can successfully generate music with user-specified instruments
and genre. In a subjective listening test, the proposed model outperforms the
pretrained baseline model in terms of coherence, harmony, arrangement and
overall quality.
|
[
{
"created": "Tue, 21 Nov 2023 00:37:47 GMT",
"version": "v1"
}
] |
2023-11-22
|
[
[
"Xu",
"Weihan",
""
],
[
"McAuley",
"Julian",
""
],
[
"Dubnov",
"Shlomo",
""
],
[
"Dong",
"Hao-Wen",
""
]
] |
The ''pretraining-and-finetuning'' paradigm has become a norm for training domain-specific models in natural language processing and computer vision. In this work, we aim to examine this paradigm for symbolic music generation through leveraging the largest ever symbolic music dataset sourced from the MuseScore forum. We first pretrain a large unconditional transformer model using 1.5 million songs. We then propose a simple technique to equip this pretrained unconditional music transformer model with instrument and genre controls by finetuning the model with additional control tokens. Our proposed representation offers improved high-level controllability and expressiveness against two existing representations. The experimental results show that the proposed model can successfully generate music with user-specified instruments and genre. In a subjective listening test, the proposed model outperforms the pretrained baseline model in terms of coherence, harmony, arrangement and overall quality.
|
2401.00981
|
Shawana Tabassum
|
Vivek Kumar Tiwari, Premananda Indic, Shawana Tabassum
|
Machine Learning Classification of Alzheimer's Disease Stages Using
Cerebrospinal Fluid Biomarkers Alone
| null | null | null | null |
cs.LG q-bio.QM stat.AP
|
http://creativecommons.org/licenses/by/4.0/
|
Early diagnosis of Alzheimer's disease is a challenge because the existing
methodologies do not identify the patients in their preclinical stage, which
can last up to a decade prior to the onset of clinical symptoms. Several
research studies demonstrate the potential of cerebrospinal fluid biomarkers,
amyloid beta 1-42, T-tau, and P-tau, in early diagnosis of Alzheimer's disease
stages. In this work, we used machine learning models to classify different
stages of Alzheimer's disease based on the cerebrospinal fluid biomarker levels
alone. An electronic health record of patients from the National Alzheimer's
Coordinating Centre database was analyzed and the patients were subdivided
based on mini-mental state scores and clinical dementia ratings. Statistical
and correlation analyses were performed to identify significant differences
between the Alzheimer's stages. Afterward, machine learning classifiers
including K-Nearest Neighbors, Ensemble Boosted Tree, Ensemble Bagged Tree,
Support Vector Machine, Logistic Regression, and Naive Bayes classifiers were
employed to classify the Alzheimer's disease stages. The results demonstrate
that Ensemble Boosted Tree (84.4%) and Logistic Regression (73.4%) provide the
highest accuracy for binary classification, while Ensemble Bagged Tree (75.4%)
demonstrates better accuracy for multiclassification. The findings from this
research are expected to help clinicians in making an informed decision
regarding the early diagnosis of Alzheimer's from the cerebrospinal fluid
biomarkers alone, monitoring of the disease progression, and implementation of
appropriate intervention measures.
|
[
{
"created": "Tue, 2 Jan 2024 00:55:10 GMT",
"version": "v1"
}
] |
2024-01-03
|
[
[
"Tiwari",
"Vivek Kumar",
""
],
[
"Indic",
"Premananda",
""
],
[
"Tabassum",
"Shawana",
""
]
] |
Early diagnosis of Alzheimer's disease is a challenge because the existing methodologies do not identify the patients in their preclinical stage, which can last up to a decade prior to the onset of clinical symptoms. Several research studies demonstrate the potential of cerebrospinal fluid biomarkers, amyloid beta 1-42, T-tau, and P-tau, in early diagnosis of Alzheimer's disease stages. In this work, we used machine learning models to classify different stages of Alzheimer's disease based on the cerebrospinal fluid biomarker levels alone. An electronic health record of patients from the National Alzheimer's Coordinating Centre database was analyzed and the patients were subdivided based on mini-mental state scores and clinical dementia ratings. Statistical and correlation analyses were performed to identify significant differences between the Alzheimer's stages. Afterward, machine learning classifiers including K-Nearest Neighbors, Ensemble Boosted Tree, Ensemble Bagged Tree, Support Vector Machine, Logistic Regression, and Naive Bayes classifiers were employed to classify the Alzheimer's disease stages. The results demonstrate that Ensemble Boosted Tree (84.4%) and Logistic Regression (73.4%) provide the highest accuracy for binary classification, while Ensemble Bagged Tree (75.4%) demonstrates better accuracy for multiclassification. The findings from this research are expected to help clinicians in making an informed decision regarding the early diagnosis of Alzheimer's from the cerebrospinal fluid biomarkers alone, monitoring of the disease progression, and implementation of appropriate intervention measures.
|
1606.07861
|
Sam Chiu-wai Wong
|
Sam Chiu-wai Wong
|
Tight Algorithms for Vertex Cover with Hard Capacities on Multigraphs
and Hypergraphs
|
15 pages; SODA 2017
| null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we give a f-approximation algorithm for the minimum unweighted
Vertex Cover problem with Hard Capacity constraints (VCHC) on f-hypergraphs.
This problem generalizes standard vertex cover for which the best known
approximation ratio is also f and cannot be improved assuming the unique game
conjecture. Our result is therefore essentially the best possible. This
improves over the previous 2.155 (for f=2) and 2f-approximation algorithms by
Cheung, Goemans and Wong (CGW).
At the heart of our approach is to apply iterative rounding to the problem
with ideas coming from several previous works. We also give a faster
implementation of the method based on certain iteratively rounding the solution
to certain CGW-style covering LPs.
We note that independent of this work, Kao [#kao2017iterative] also recently
obtained the same result.
|
[
{
"created": "Sat, 25 Jun 2016 01:12:46 GMT",
"version": "v1"
},
{
"created": "Sun, 22 Jan 2017 13:47:42 GMT",
"version": "v2"
}
] |
2017-01-24
|
[
[
"Wong",
"Sam Chiu-wai",
""
]
] |
In this paper we give a f-approximation algorithm for the minimum unweighted Vertex Cover problem with Hard Capacity constraints (VCHC) on f-hypergraphs. This problem generalizes standard vertex cover for which the best known approximation ratio is also f and cannot be improved assuming the unique game conjecture. Our result is therefore essentially the best possible. This improves over the previous 2.155 (for f=2) and 2f-approximation algorithms by Cheung, Goemans and Wong (CGW). At the heart of our approach is to apply iterative rounding to the problem with ideas coming from several previous works. We also give a faster implementation of the method based on certain iteratively rounding the solution to certain CGW-style covering LPs. We note that independent of this work, Kao [#kao2017iterative] also recently obtained the same result.
|
1902.08226
|
Fuli Feng
|
Fuli Feng, Xiangnan He, Jie Tang, Tat-Seng Chua
|
Graph Adversarial Training: Dynamically Regularizing Based on Graph
Structure
|
Accepted by TKDE
| null | null | null |
cs.LG cs.SI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent efforts show that neural networks are vulnerable to small but
intentional perturbations on input features in visual classification tasks. Due
to the additional consideration of connections between examples (\eg articles
with citation link tend to be in the same class), graph neural networks could
be more sensitive to the perturbations, since the perturbations from connected
examples exacerbate the impact on a target example. Adversarial Training (AT),
a dynamic regularization technique, can resist the worst-case perturbations on
input features and is a promising choice to improve model robustness and
generalization. However, existing AT methods focus on standard classification,
being less effective when training models on graph since it does not model the
impact from connected examples.
In this work, we explore adversarial training on graph, aiming to improve the
robustness and generalization of models learned on graph. We propose Graph
Adversarial Training (GraphAT), which takes the impact from connected examples
into account when learning to construct and resist perturbations. We give a
general formulation of GraphAT, which can be seen as a dynamic regularization
scheme based on the graph structure. To demonstrate the utility of GraphAT, we
employ it on a state-of-the-art graph neural network model --- Graph
Convolutional Network (GCN). We conduct experiments on two citation graphs
(Citeseer and Cora) and a knowledge graph (NELL), verifying the effectiveness
of GraphAT which outperforms normal training on GCN by 4.51% in node
classification accuracy. Codes are available via:
https://github.com/fulifeng/GraphAT.
|
[
{
"created": "Wed, 20 Feb 2019 05:18:01 GMT",
"version": "v1"
},
{
"created": "Sun, 15 Dec 2019 04:14:27 GMT",
"version": "v2"
}
] |
2019-12-17
|
[
[
"Feng",
"Fuli",
""
],
[
"He",
"Xiangnan",
""
],
[
"Tang",
"Jie",
""
],
[
"Chua",
"Tat-Seng",
""
]
] |
Recent efforts show that neural networks are vulnerable to small but intentional perturbations on input features in visual classification tasks. Due to the additional consideration of connections between examples (\eg articles with citation link tend to be in the same class), graph neural networks could be more sensitive to the perturbations, since the perturbations from connected examples exacerbate the impact on a target example. Adversarial Training (AT), a dynamic regularization technique, can resist the worst-case perturbations on input features and is a promising choice to improve model robustness and generalization. However, existing AT methods focus on standard classification, being less effective when training models on graph since it does not model the impact from connected examples. In this work, we explore adversarial training on graph, aiming to improve the robustness and generalization of models learned on graph. We propose Graph Adversarial Training (GraphAT), which takes the impact from connected examples into account when learning to construct and resist perturbations. We give a general formulation of GraphAT, which can be seen as a dynamic regularization scheme based on the graph structure. To demonstrate the utility of GraphAT, we employ it on a state-of-the-art graph neural network model --- Graph Convolutional Network (GCN). We conduct experiments on two citation graphs (Citeseer and Cora) and a knowledge graph (NELL), verifying the effectiveness of GraphAT which outperforms normal training on GCN by 4.51% in node classification accuracy. Codes are available via: https://github.com/fulifeng/GraphAT.
|
1805.02523
|
Julian M\"uller
|
Julian M\"uller and Klaus Dietmayer
|
Detecting Traffic Lights by Single Shot Detection
|
Submitted to International Conference on Intelligent Transportation
Systems (ITSC2018)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent improvements in object detection are driven by the success of
convolutional neural networks (CNN). They are able to learn rich features
outperforming hand-crafted features. So far, research in traffic light
detection mainly focused on hand-crafted features, such as color, shape or
brightness of the traffic light bulb. This paper presents a deep learning
approach for accurate traffic light detection in adapting a single shot
detection (SSD) approach. SSD performs object proposals creation and
classification using a single CNN. The original SSD struggles in detecting very
small objects, which is essential for traffic light detection. By our
adaptations it is possible to detect objects much smaller than ten pixels
without increasing the input image size. We present an extensive evaluation on
the DriveU Traffic Light Dataset (DTLD). We reach both, high accuracy and low
false positive rates. The trained model is real-time capable with ten frames
per second on a Nvidia Titan Xp. Code has been made available at
https://github.com/julimueller/tl_ssd.
|
[
{
"created": "Mon, 7 May 2018 13:37:17 GMT",
"version": "v1"
},
{
"created": "Tue, 14 Aug 2018 12:19:16 GMT",
"version": "v2"
},
{
"created": "Thu, 11 Oct 2018 13:50:50 GMT",
"version": "v3"
}
] |
2018-10-12
|
[
[
"Müller",
"Julian",
""
],
[
"Dietmayer",
"Klaus",
""
]
] |
Recent improvements in object detection are driven by the success of convolutional neural networks (CNN). They are able to learn rich features outperforming hand-crafted features. So far, research in traffic light detection mainly focused on hand-crafted features, such as color, shape or brightness of the traffic light bulb. This paper presents a deep learning approach for accurate traffic light detection in adapting a single shot detection (SSD) approach. SSD performs object proposals creation and classification using a single CNN. The original SSD struggles in detecting very small objects, which is essential for traffic light detection. By our adaptations it is possible to detect objects much smaller than ten pixels without increasing the input image size. We present an extensive evaluation on the DriveU Traffic Light Dataset (DTLD). We reach both, high accuracy and low false positive rates. The trained model is real-time capable with ten frames per second on a Nvidia Titan Xp. Code has been made available at https://github.com/julimueller/tl_ssd.
|
1710.05748
|
Nikolaos Pappas
|
Ioannis Dimitriou, Nikolaos Pappas
|
Performance Analysis of a Cooperative Wireless Network with Adaptive
Relays
| null | null |
10.1016/j.adhoc.2018.12.007
| null |
cs.IT cs.NI math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we investigate a slotted-time relay assisted cooperative random
access wireless network with multipacket (MPR) reception capabilities. MPR
refers to the capability of a wireless node to successfully receive packets
from more than two other modes that transmit simultaneously at the same slot.
We consider a network of $N$ saturated sources that transmit packets to a
common destination node with the cooperation of two infinite capacity relay
nodes. The relays assist the sources by forwarding the packets that failed to
reach the destination. Moreover, the relays have also packets of their own to
transmit to the destination. We further assume that the relays employ a
state-dependent retransmission control mechanism. In particular, a relay node
accordingly adapts its transmission probability based on the status of the
other relay. Such a protocol is towards self-aware networks and leads to
substantial performance gains in terms of delay. We investigate the stability
region and the throughput performance for the full MPR model. Moreover, for the
asymmetric two-sources, two-relay case we derive the generating function of the
stationary joint queue-length distribution with the aid of the theory of
boundary value problems. For the symmetric case, we obtain explicit expressions
for the average queueing delay in a relay node without solving a boundary value
problem. Extensive numerical examples are presented and provide insights on the
system performance.
|
[
{
"created": "Thu, 12 Oct 2017 19:35:16 GMT",
"version": "v1"
},
{
"created": "Thu, 19 Oct 2017 09:02:37 GMT",
"version": "v2"
},
{
"created": "Mon, 15 Jul 2019 20:44:54 GMT",
"version": "v3"
}
] |
2019-07-17
|
[
[
"Dimitriou",
"Ioannis",
""
],
[
"Pappas",
"Nikolaos",
""
]
] |
In this work, we investigate a slotted-time relay assisted cooperative random access wireless network with multipacket (MPR) reception capabilities. MPR refers to the capability of a wireless node to successfully receive packets from more than two other modes that transmit simultaneously at the same slot. We consider a network of $N$ saturated sources that transmit packets to a common destination node with the cooperation of two infinite capacity relay nodes. The relays assist the sources by forwarding the packets that failed to reach the destination. Moreover, the relays have also packets of their own to transmit to the destination. We further assume that the relays employ a state-dependent retransmission control mechanism. In particular, a relay node accordingly adapts its transmission probability based on the status of the other relay. Such a protocol is towards self-aware networks and leads to substantial performance gains in terms of delay. We investigate the stability region and the throughput performance for the full MPR model. Moreover, for the asymmetric two-sources, two-relay case we derive the generating function of the stationary joint queue-length distribution with the aid of the theory of boundary value problems. For the symmetric case, we obtain explicit expressions for the average queueing delay in a relay node without solving a boundary value problem. Extensive numerical examples are presented and provide insights on the system performance.
|
1706.06433
|
Liang Liu
|
Liang Liu and Wei Yu
|
Massive Connectivity with Massive MIMO-Part II: Achievable Rate
Characterization
|
This paper is accepted and to be published in the IEEE Transactions
on Signal Processing
| null |
10.1109/TSP.2018.2818070
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This two-part paper aims to quantify the cost of device activity detection in
an uplink massive connectivity scenario with a large number of devices but
device activities are sporadic. Part I of this paper shows that in an
asymptotic massive multiple-input multiple-output (MIMO) regime, device
activity detection can always be made perfect. Part II of this paper
subsequently shows that despite the perfect device activity detection, there is
nevertheless significant cost due to device detection in terms of overall
achievable rate, because of the fact that non-orthogonal pilot sequences have
to be used in order to accommodate the large number of potential devices,
resulting in significantly larger channel estimation error as compared to
conventional massive MIMO systems with orthogonal pilots. Specifically, this
paper characterizes each active user's achievable rate using random matrix
theory under either maximal-ratio combining (MRC) or minimum mean-squared error
(MMSE) receive beamforming at the base-station (BS), assuming the statistics of
their estimated channels as derived in Part I. The characterization of user
rate also allows the optimization of pilot sequences length. Moreover, in
contrast to the conventional massive MIMO system, the MMSE beamforming is shown
to achieve much higher rate than the MRC beamforming for the massive
connectivity scenario under consideration. Finally, this paper illustrates the
necessity of user scheduling for rate maximization when the number of active
users is larger than the number of antennas at the BS.
|
[
{
"created": "Tue, 20 Jun 2017 13:49:44 GMT",
"version": "v1"
},
{
"created": "Thu, 15 Mar 2018 03:40:20 GMT",
"version": "v2"
}
] |
2018-05-09
|
[
[
"Liu",
"Liang",
""
],
[
"Yu",
"Wei",
""
]
] |
This two-part paper aims to quantify the cost of device activity detection in an uplink massive connectivity scenario with a large number of devices but device activities are sporadic. Part I of this paper shows that in an asymptotic massive multiple-input multiple-output (MIMO) regime, device activity detection can always be made perfect. Part II of this paper subsequently shows that despite the perfect device activity detection, there is nevertheless significant cost due to device detection in terms of overall achievable rate, because of the fact that non-orthogonal pilot sequences have to be used in order to accommodate the large number of potential devices, resulting in significantly larger channel estimation error as compared to conventional massive MIMO systems with orthogonal pilots. Specifically, this paper characterizes each active user's achievable rate using random matrix theory under either maximal-ratio combining (MRC) or minimum mean-squared error (MMSE) receive beamforming at the base-station (BS), assuming the statistics of their estimated channels as derived in Part I. The characterization of user rate also allows the optimization of pilot sequences length. Moreover, in contrast to the conventional massive MIMO system, the MMSE beamforming is shown to achieve much higher rate than the MRC beamforming for the massive connectivity scenario under consideration. Finally, this paper illustrates the necessity of user scheduling for rate maximization when the number of active users is larger than the number of antennas at the BS.
|
2406.05070
|
Zefeng Chen
|
Jian Zhu, Xiaoye Chen, Wensheng Gan, Zefeng Chen, Philip S. Yu
|
Targeted Mining Precise-positioning Episode Rules
|
IEEE TETCI, 14 pages
| null | null | null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The era characterized by an exponential increase in data has led to the
widespread adoption of data intelligence as a crucial task. Within the field of
data mining, frequent episode mining has emerged as an effective tool for
extracting valuable and essential information from event sequences. Various
algorithms have been developed to discover frequent episodes and subsequently
derive episode rules using the frequency function and anti-monotonicity
principles. However, currently, there is a lack of algorithms specifically
designed for mining episode rules that encompass user-specified query episodes.
To address this challenge and enable the mining of target episode rules, we
introduce the definition of targeted precise-positioning episode rules and
formulate the problem of targeted mining precise-positioning episode rules.
Most importantly, we develop an algorithm called Targeted Mining Precision
Episode Rules (TaMIPER) to address the problem and optimize it using four
proposed strategies, leading to significant reductions in both time and space
resource requirements. As a result, TaMIPER offers high accuracy and efficiency
in mining episode rules of user interest and holds promising potential for
prediction tasks in various domains, such as weather observation, network
intrusion, and e-commerce. Experimental results on six real datasets
demonstrate the exceptional performance of TaMIPER.
|
[
{
"created": "Fri, 7 Jun 2024 16:41:02 GMT",
"version": "v1"
}
] |
2024-06-10
|
[
[
"Zhu",
"Jian",
""
],
[
"Chen",
"Xiaoye",
""
],
[
"Gan",
"Wensheng",
""
],
[
"Chen",
"Zefeng",
""
],
[
"Yu",
"Philip S.",
""
]
] |
The era characterized by an exponential increase in data has led to the widespread adoption of data intelligence as a crucial task. Within the field of data mining, frequent episode mining has emerged as an effective tool for extracting valuable and essential information from event sequences. Various algorithms have been developed to discover frequent episodes and subsequently derive episode rules using the frequency function and anti-monotonicity principles. However, currently, there is a lack of algorithms specifically designed for mining episode rules that encompass user-specified query episodes. To address this challenge and enable the mining of target episode rules, we introduce the definition of targeted precise-positioning episode rules and formulate the problem of targeted mining precise-positioning episode rules. Most importantly, we develop an algorithm called Targeted Mining Precision Episode Rules (TaMIPER) to address the problem and optimize it using four proposed strategies, leading to significant reductions in both time and space resource requirements. As a result, TaMIPER offers high accuracy and efficiency in mining episode rules of user interest and holds promising potential for prediction tasks in various domains, such as weather observation, network intrusion, and e-commerce. Experimental results on six real datasets demonstrate the exceptional performance of TaMIPER.
|
1808.06468
|
Nitish Nag
|
Nitish Nag, Vaibhav Pandey, Ramesh C. Jain
|
Endogenous and Exogenous Multi-Modal Layers in Context Aware
Recommendation Systems for Health
|
6 pages
| null | null | null |
cs.CY cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
People care more about the solutions to their problems rather than data
alone. Inherently, this means using data to generate a list of recommendations
for a given situation. The rapid growth of multi-modal wearables and sensors
have not made this jump effectively in the domain of health. Modern user
content consumption and decision making in both cyber (e.g. entertainment,
news) and physical (eg. food, shopping) spaces rely heavily on targeted
personalized recommender systems. The utility function is the primary ranking
method to predict what a given person would explicitly prefer. In this work we
describe two unique layers of user and context modeling that can be coupled to
traditional recommender system approaches. The exogenous layer incorporates
factors outside of the person's body (eg. location, weather, social context),
while the endogenous layer integrates data to estimate the physiologic or
innate needs of the user. This is accomplished through multi-modal sensor data
integration applied to domain-specific utility functions, filters and
re-ranking weights. We showcase this concept through a nutrition guidance
system focused on controlling sodium intake at a personalized level,
dramatically improving upon the fixed recommendations.
|
[
{
"created": "Tue, 7 Aug 2018 01:12:39 GMT",
"version": "v1"
}
] |
2018-08-21
|
[
[
"Nag",
"Nitish",
""
],
[
"Pandey",
"Vaibhav",
""
],
[
"Jain",
"Ramesh C.",
""
]
] |
People care more about the solutions to their problems rather than data alone. Inherently, this means using data to generate a list of recommendations for a given situation. The rapid growth of multi-modal wearables and sensors have not made this jump effectively in the domain of health. Modern user content consumption and decision making in both cyber (e.g. entertainment, news) and physical (eg. food, shopping) spaces rely heavily on targeted personalized recommender systems. The utility function is the primary ranking method to predict what a given person would explicitly prefer. In this work we describe two unique layers of user and context modeling that can be coupled to traditional recommender system approaches. The exogenous layer incorporates factors outside of the person's body (eg. location, weather, social context), while the endogenous layer integrates data to estimate the physiologic or innate needs of the user. This is accomplished through multi-modal sensor data integration applied to domain-specific utility functions, filters and re-ranking weights. We showcase this concept through a nutrition guidance system focused on controlling sodium intake at a personalized level, dramatically improving upon the fixed recommendations.
|
1203.3513
|
Ross D. Shachter
|
Ross D. Shachter, Debarun Bhattacharjya
|
Dynamic programming in in uence diagrams with decision circuits
|
Appears in Proceedings of the Twenty-Sixth Conference on Uncertainty
in Artificial Intelligence (UAI2010)
| null | null |
UAI-P-2010-PG-509-516
|
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Decision circuits perform efficient evaluation of influence diagrams,
building on the ad- vances in arithmetic circuits for belief net- work
inference [Darwiche, 2003; Bhattachar- jya and Shachter, 2007]. We show how
even more compact decision circuits can be con- structed for dynamic
programming in influ- ence diagrams with separable value functions and
conditionally independent subproblems. Once a decision circuit has been
constructed based on the diagram's "global" graphical structure, it can be
compiled to exploit "lo- cal" structure for efficient evaluation and sen-
sitivity analysis.
|
[
{
"created": "Thu, 15 Mar 2012 11:17:56 GMT",
"version": "v1"
}
] |
2012-03-19
|
[
[
"Shachter",
"Ross D.",
""
],
[
"Bhattacharjya",
"Debarun",
""
]
] |
Decision circuits perform efficient evaluation of influence diagrams, building on the ad- vances in arithmetic circuits for belief net- work inference [Darwiche, 2003; Bhattachar- jya and Shachter, 2007]. We show how even more compact decision circuits can be con- structed for dynamic programming in influ- ence diagrams with separable value functions and conditionally independent subproblems. Once a decision circuit has been constructed based on the diagram's "global" graphical structure, it can be compiled to exploit "lo- cal" structure for efficient evaluation and sen- sitivity analysis.
|
2305.05274
|
Swarnava Dey
|
Swarnava Dey and Pallab Dasgupta and Partha P Chakrabarti
|
DietCNN: Multiplication-free Inference for Quantized CNNs
|
Supplementary for S. Dey, P. Dasgupta and P. P. Chakrabarti,
"DietCNN: Multiplication-free Inference for Quantized CNNs," 2023
International Joint Conference on Neural Networks (IJCNN), Gold Coast,
Australia, 2023, pp. 1-8, doi: 10.1109/IJCNN54540.2023.10191771
| null |
10.1109/IJCNN54540.2023.10191771
| null |
cs.CV cs.LG cs.PF
|
http://creativecommons.org/licenses/by/4.0/
|
The rising demand for networked embedded systems with machine intelligence
has been a catalyst for sustained attempts by the research community to
implement Convolutional Neural Networks (CNN) based inferencing on embedded
resource-limited devices. Redesigning a CNN by removing costly multiplication
operations has already shown promising results in terms of reducing inference
energy usage. This paper proposes a new method for replacing multiplications in
a CNN by table look-ups. Unlike existing methods that completely modify the CNN
operations, the proposed methodology preserves the semantics of the major CNN
operations. Conforming to the existing mechanism of the CNN layer operations
ensures that the reliability of a standard CNN is preserved. It is shown that
the proposed multiplication-free CNN, based on a single activation codebook,
can achieve 4.7x, 5.6x, and 3.5x reduction in energy per inference in an FPGA
implementation of MNIST-LeNet-5, CIFAR10-VGG-11, and Tiny ImageNet-ResNet-18
respectively. Our results show that the DietCNN approach significantly improves
the resource consumption and latency of deep inference for smaller models,
often used in embedded systems. Our code is available at:
https://github.com/swadeykgp/DietCNN
|
[
{
"created": "Tue, 9 May 2023 08:54:54 GMT",
"version": "v1"
},
{
"created": "Thu, 17 Aug 2023 13:10:41 GMT",
"version": "v2"
}
] |
2023-08-21
|
[
[
"Dey",
"Swarnava",
""
],
[
"Dasgupta",
"Pallab",
""
],
[
"Chakrabarti",
"Partha P",
""
]
] |
The rising demand for networked embedded systems with machine intelligence has been a catalyst for sustained attempts by the research community to implement Convolutional Neural Networks (CNN) based inferencing on embedded resource-limited devices. Redesigning a CNN by removing costly multiplication operations has already shown promising results in terms of reducing inference energy usage. This paper proposes a new method for replacing multiplications in a CNN by table look-ups. Unlike existing methods that completely modify the CNN operations, the proposed methodology preserves the semantics of the major CNN operations. Conforming to the existing mechanism of the CNN layer operations ensures that the reliability of a standard CNN is preserved. It is shown that the proposed multiplication-free CNN, based on a single activation codebook, can achieve 4.7x, 5.6x, and 3.5x reduction in energy per inference in an FPGA implementation of MNIST-LeNet-5, CIFAR10-VGG-11, and Tiny ImageNet-ResNet-18 respectively. Our results show that the DietCNN approach significantly improves the resource consumption and latency of deep inference for smaller models, often used in embedded systems. Our code is available at: https://github.com/swadeykgp/DietCNN
|
2402.02335
|
Bin Zhu
|
Bin Zhu, Kevin Flanagan, Adriano Fragomeni, Michael Wray, Dima Damen
|
Video Editing for Video Retrieval
| null | null | null | null |
cs.CV cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Though pre-training vision-language models have demonstrated significant
benefits in boosting video-text retrieval performance from large-scale web
videos, fine-tuning still plays a critical role with manually annotated clips
with start and end times, which requires considerable human effort. To address
this issue, we explore an alternative cheaper source of annotations, single
timestamps, for video-text retrieval. We initialise clips from timestamps in a
heuristic way to warm up a retrieval model. Then a video clip editing method is
proposed to refine the initial rough boundaries to improve retrieval
performance. A student-teacher network is introduced for video clip editing.
The teacher model is employed to edit the clips in the training set whereas the
student model trains on the edited clips. The teacher weights are updated from
the student's after the student's performance increases. Our method is model
agnostic and applicable to any retrieval models. We conduct experiments based
on three state-of-the-art retrieval models, COOT, VideoCLIP and CLIP4Clip.
Experiments conducted on three video retrieval datasets, YouCook2, DiDeMo and
ActivityNet-Captions show that our edited clips consistently improve retrieval
performance over initial clips across all the three retrieval models.
|
[
{
"created": "Sun, 4 Feb 2024 04:13:31 GMT",
"version": "v1"
}
] |
2024-02-07
|
[
[
"Zhu",
"Bin",
""
],
[
"Flanagan",
"Kevin",
""
],
[
"Fragomeni",
"Adriano",
""
],
[
"Wray",
"Michael",
""
],
[
"Damen",
"Dima",
""
]
] |
Though pre-training vision-language models have demonstrated significant benefits in boosting video-text retrieval performance from large-scale web videos, fine-tuning still plays a critical role with manually annotated clips with start and end times, which requires considerable human effort. To address this issue, we explore an alternative cheaper source of annotations, single timestamps, for video-text retrieval. We initialise clips from timestamps in a heuristic way to warm up a retrieval model. Then a video clip editing method is proposed to refine the initial rough boundaries to improve retrieval performance. A student-teacher network is introduced for video clip editing. The teacher model is employed to edit the clips in the training set whereas the student model trains on the edited clips. The teacher weights are updated from the student's after the student's performance increases. Our method is model agnostic and applicable to any retrieval models. We conduct experiments based on three state-of-the-art retrieval models, COOT, VideoCLIP and CLIP4Clip. Experiments conducted on three video retrieval datasets, YouCook2, DiDeMo and ActivityNet-Captions show that our edited clips consistently improve retrieval performance over initial clips across all the three retrieval models.
|
2104.08840
|
Qinyuan Ye
|
Qinyuan Ye, Belinda Z. Li, Sinong Wang, Benjamin Bolte, Hao Ma,
Wen-tau Yih, Xiang Ren, Madian Khabsa
|
On the Influence of Masking Policies in Intermediate Pre-training
|
Accepted to EMNLP 2021. Camera-ready version
| null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Current NLP models are predominantly trained through a two-stage "pre-train
then fine-tune" pipeline. Prior work has shown that inserting an intermediate
pre-training stage, using heuristic masking policies for masked language
modeling (MLM), can significantly improve final performance. However, it is
still unclear (1) in what cases such intermediate pre-training is helpful, (2)
whether hand-crafted heuristic objectives are optimal for a given task, and (3)
whether a masking policy designed for one task is generalizable beyond that
task. In this paper, we perform a large-scale empirical study to investigate
the effect of various masking policies in intermediate pre-training with nine
selected tasks across three categories. Crucially, we introduce methods to
automate the discovery of optimal masking policies via direct supervision or
meta-learning. We conclude that the success of intermediate pre-training is
dependent on appropriate pre-train corpus, selection of output format (i.e.,
masked spans or full sentence), and clear understanding of the role that MLM
plays for the downstream task. In addition, we find our learned masking
policies outperform the heuristic of masking named entities on TriviaQA, and
policies learned from one task can positively transfer to other tasks in
certain cases, inviting future research in this direction.
|
[
{
"created": "Sun, 18 Apr 2021 12:32:23 GMT",
"version": "v1"
},
{
"created": "Thu, 30 Sep 2021 23:52:48 GMT",
"version": "v2"
}
] |
2021-10-04
|
[
[
"Ye",
"Qinyuan",
""
],
[
"Li",
"Belinda Z.",
""
],
[
"Wang",
"Sinong",
""
],
[
"Bolte",
"Benjamin",
""
],
[
"Ma",
"Hao",
""
],
[
"Yih",
"Wen-tau",
""
],
[
"Ren",
"Xiang",
""
],
[
"Khabsa",
"Madian",
""
]
] |
Current NLP models are predominantly trained through a two-stage "pre-train then fine-tune" pipeline. Prior work has shown that inserting an intermediate pre-training stage, using heuristic masking policies for masked language modeling (MLM), can significantly improve final performance. However, it is still unclear (1) in what cases such intermediate pre-training is helpful, (2) whether hand-crafted heuristic objectives are optimal for a given task, and (3) whether a masking policy designed for one task is generalizable beyond that task. In this paper, we perform a large-scale empirical study to investigate the effect of various masking policies in intermediate pre-training with nine selected tasks across three categories. Crucially, we introduce methods to automate the discovery of optimal masking policies via direct supervision or meta-learning. We conclude that the success of intermediate pre-training is dependent on appropriate pre-train corpus, selection of output format (i.e., masked spans or full sentence), and clear understanding of the role that MLM plays for the downstream task. In addition, we find our learned masking policies outperform the heuristic of masking named entities on TriviaQA, and policies learned from one task can positively transfer to other tasks in certain cases, inviting future research in this direction.
|
2107.14498
|
R\'emy Leroy
|
R\'emy Leroy, Pauline Trouv\'e-Peloux, Fr\'ed\'eric Champagnat,
Bertrand Le Saux, Marcela Carvalho
|
Pix2Point: Learning Outdoor 3D Using Sparse Point Clouds and Optimal
Transport
|
5 pages, 2 figures, to be published in 2021 International Conference
on Machine Vision Applications
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Good quality reconstruction and comprehension of a scene rely on 3D
estimation methods. The 3D information was usually obtained from images by
stereo-photogrammetry, but deep learning has recently provided us with
excellent results for monocular depth estimation. Building up a sufficiently
large and rich training dataset to achieve these results requires onerous
processing. In this paper, we address the problem of learning outdoor 3D point
cloud from monocular data using a sparse ground-truth dataset. We propose
Pix2Point, a deep learning-based approach for monocular 3D point cloud
prediction, able to deal with complete and challenging outdoor scenes. Our
method relies on a 2D-3D hybrid neural network architecture, and a supervised
end-to-end minimisation of an optimal transport divergence between point
clouds. We show that, when trained on sparse point clouds, our simple promising
approach achieves a better coverage of 3D outdoor scenes than efficient
monocular depth methods.
|
[
{
"created": "Fri, 30 Jul 2021 09:03:39 GMT",
"version": "v1"
}
] |
2021-08-02
|
[
[
"Leroy",
"Rémy",
""
],
[
"Trouvé-Peloux",
"Pauline",
""
],
[
"Champagnat",
"Frédéric",
""
],
[
"Saux",
"Bertrand Le",
""
],
[
"Carvalho",
"Marcela",
""
]
] |
Good quality reconstruction and comprehension of a scene rely on 3D estimation methods. The 3D information was usually obtained from images by stereo-photogrammetry, but deep learning has recently provided us with excellent results for monocular depth estimation. Building up a sufficiently large and rich training dataset to achieve these results requires onerous processing. In this paper, we address the problem of learning outdoor 3D point cloud from monocular data using a sparse ground-truth dataset. We propose Pix2Point, a deep learning-based approach for monocular 3D point cloud prediction, able to deal with complete and challenging outdoor scenes. Our method relies on a 2D-3D hybrid neural network architecture, and a supervised end-to-end minimisation of an optimal transport divergence between point clouds. We show that, when trained on sparse point clouds, our simple promising approach achieves a better coverage of 3D outdoor scenes than efficient monocular depth methods.
|
2110.09193
|
Robin Vandaele
|
Robin Vandaele, Bo Kang, Jefrey Lijffijt, Tijl De Bie, Yvan Saeys
|
Topologically Regularized Data Embeddings
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unsupervised feature learning often finds low-dimensional embeddings that
capture the structure of complex data. For tasks for which prior expert
topological knowledge is available, incorporating this into the learned
representation may lead to higher quality embeddings. For example, this may
help one to embed the data into a given number of clusters, or to accommodate
for noise that prevents one from deriving the distribution of the data over the
model directly, which can then be learned more effectively. However, a general
tool for integrating different prior topological knowledge into embeddings is
lacking. Although differentiable topology layers have been recently developed
that can (re)shape embeddings into prespecified topological models, they have
two important limitations for representation learning, which we address in this
paper. First, the currently suggested topological losses fail to represent
simple models such as clusters and flares in a natural manner. Second, these
losses neglect all original structural (such as neighborhood) information in
the data that is useful for learning. We overcome these limitations by
introducing a new set of topological losses, and proposing their usage as a way
for topologically regularizing data embeddings to naturally represent a
prespecified model. We include thorough experiments on synthetic and real data
that highlight the usefulness and versatility of this approach, with
applications ranging from modeling high-dimensional single-cell data, to graph
embedding.
|
[
{
"created": "Mon, 18 Oct 2021 11:25:47 GMT",
"version": "v1"
},
{
"created": "Wed, 1 Dec 2021 14:34:18 GMT",
"version": "v2"
},
{
"created": "Mon, 14 Feb 2022 13:15:50 GMT",
"version": "v3"
},
{
"created": "Mon, 7 Mar 2022 10:56:53 GMT",
"version": "v4"
}
] |
2022-03-08
|
[
[
"Vandaele",
"Robin",
""
],
[
"Kang",
"Bo",
""
],
[
"Lijffijt",
"Jefrey",
""
],
[
"De Bie",
"Tijl",
""
],
[
"Saeys",
"Yvan",
""
]
] |
Unsupervised feature learning often finds low-dimensional embeddings that capture the structure of complex data. For tasks for which prior expert topological knowledge is available, incorporating this into the learned representation may lead to higher quality embeddings. For example, this may help one to embed the data into a given number of clusters, or to accommodate for noise that prevents one from deriving the distribution of the data over the model directly, which can then be learned more effectively. However, a general tool for integrating different prior topological knowledge into embeddings is lacking. Although differentiable topology layers have been recently developed that can (re)shape embeddings into prespecified topological models, they have two important limitations for representation learning, which we address in this paper. First, the currently suggested topological losses fail to represent simple models such as clusters and flares in a natural manner. Second, these losses neglect all original structural (such as neighborhood) information in the data that is useful for learning. We overcome these limitations by introducing a new set of topological losses, and proposing their usage as a way for topologically regularizing data embeddings to naturally represent a prespecified model. We include thorough experiments on synthetic and real data that highlight the usefulness and versatility of this approach, with applications ranging from modeling high-dimensional single-cell data, to graph embedding.
|
2310.15849
|
Achilleas Santi Seisa
|
Gerasimos Damigos, Achilleas Santi Seisa, Sumeet Gajanan Satpute, Tore
Lindgren, George Nikolakopoulos
|
A Resilient Framework for 5G-Edge-Connected UAVs based on Switching
Edge-MPC and Onboard-PID Control
|
8 pages, 9 figures, isie2023
|
2023 IEEE 32nd International Symposium on Industrial Electronics
(ISIE)
|
10.1109/ISIE51358.2023.10228114
| null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
In recent years, the need for resources for handling processes with high
computational complexity for mobile robots is becoming increasingly urgent.
More specifically, robots need to autonomously operate in a robust and
continuous manner, while keeping high performance, a need that led to the
utilization of edge computing to offload many computationally demanding and
time-critical robotic procedures. However, safe mechanisms should be
implemented to handle situations when it is not possible to use the offloaded
procedures, such as if the communication is challenged or the edge cluster is
not available. To this end, this article presents a switching strategy for
safety, redundancy, and optimized behavior through an edge computing-based
Model Predictive Controller (MPC) and a low-level onboard-PID controller for
edge-connected Unmanned Aerial Vehicles (UAVs). The switching strategy is based
on the communication Key Performance Indicators (KPIs) over 5G to decide
whether the UAV should be controlled by the edge-based or have a safe fallback
based on the onboard controller.
|
[
{
"created": "Tue, 24 Oct 2023 14:04:26 GMT",
"version": "v1"
}
] |
2023-10-25
|
[
[
"Damigos",
"Gerasimos",
""
],
[
"Seisa",
"Achilleas Santi",
""
],
[
"Satpute",
"Sumeet Gajanan",
""
],
[
"Lindgren",
"Tore",
""
],
[
"Nikolakopoulos",
"George",
""
]
] |
In recent years, the need for resources for handling processes with high computational complexity for mobile robots is becoming increasingly urgent. More specifically, robots need to autonomously operate in a robust and continuous manner, while keeping high performance, a need that led to the utilization of edge computing to offload many computationally demanding and time-critical robotic procedures. However, safe mechanisms should be implemented to handle situations when it is not possible to use the offloaded procedures, such as if the communication is challenged or the edge cluster is not available. To this end, this article presents a switching strategy for safety, redundancy, and optimized behavior through an edge computing-based Model Predictive Controller (MPC) and a low-level onboard-PID controller for edge-connected Unmanned Aerial Vehicles (UAVs). The switching strategy is based on the communication Key Performance Indicators (KPIs) over 5G to decide whether the UAV should be controlled by the edge-based or have a safe fallback based on the onboard controller.
|
2405.01155
|
Miruna Cretu
|
Miruna Cretu, Charles Harris, Julien Roy, Emmanuel Bengio, Pietro
Li\`o
|
SynFlowNet: Towards Molecule Design with Guaranteed Synthesis Pathways
|
Presented at ICLR 2024 GEM Workshop
| null | null | null |
cs.LG q-bio.BM
|
http://creativecommons.org/licenses/by/4.0/
|
Recent breakthroughs in generative modelling have led to a number of works
proposing molecular generation models for drug discovery. While these models
perform well at capturing drug-like motifs, they are known to often produce
synthetically inaccessible molecules. This is because they are trained to
compose atoms or fragments in a way that approximates the training
distribution, but they are not explicitly aware of the synthesis constraints
that come with making molecules in the lab. To address this issue, we introduce
SynFlowNet, a GFlowNet model whose action space uses chemically validated
reactions and reactants to sequentially build new molecules. We evaluate our
approach using synthetic accessibility scores and an independent retrosynthesis
tool. SynFlowNet consistently samples synthetically feasible molecules, while
still being able to find diverse and high-utility candidates. Furthermore, we
compare molecules designed with SynFlowNet to experimentally validated actives,
and find that they show comparable properties of interest, such as molecular
weight, SA score and predicted protein binding affinity.
|
[
{
"created": "Thu, 2 May 2024 10:15:59 GMT",
"version": "v1"
}
] |
2024-05-03
|
[
[
"Cretu",
"Miruna",
""
],
[
"Harris",
"Charles",
""
],
[
"Roy",
"Julien",
""
],
[
"Bengio",
"Emmanuel",
""
],
[
"Liò",
"Pietro",
""
]
] |
Recent breakthroughs in generative modelling have led to a number of works proposing molecular generation models for drug discovery. While these models perform well at capturing drug-like motifs, they are known to often produce synthetically inaccessible molecules. This is because they are trained to compose atoms or fragments in a way that approximates the training distribution, but they are not explicitly aware of the synthesis constraints that come with making molecules in the lab. To address this issue, we introduce SynFlowNet, a GFlowNet model whose action space uses chemically validated reactions and reactants to sequentially build new molecules. We evaluate our approach using synthetic accessibility scores and an independent retrosynthesis tool. SynFlowNet consistently samples synthetically feasible molecules, while still being able to find diverse and high-utility candidates. Furthermore, we compare molecules designed with SynFlowNet to experimentally validated actives, and find that they show comparable properties of interest, such as molecular weight, SA score and predicted protein binding affinity.
|
1910.06573
|
Cheng-En Wu
|
Cheng-En Wu, Yi-Ming Chan, Chien-Hung Chen, Wen-Cheng Chen, Chu-Song
Chen
|
IMMVP: An Efficient Daytime and Nighttime On-Road Object Detector
|
Accepted at IEEE 21st International Workshop on Multimedia Signal
Processing (MMSP 2019)
| null | null | null |
cs.CV cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
It is hard to detect on-road objects under various lighting conditions. To
improve the quality of the classifier, three techniques are used. We define
subclasses to separate daytime and nighttime samples. Then we skip similar
samples in the training set to prevent overfitting. With the help of the
outside training samples, the detection accuracy is also improved. To detect
objects in an edge device, Nvidia Jetson TX2 platform, we exert the lightweight
model ResNet-18 FPN as the backbone feature extractor. The FPN (Feature Pyramid
Network) generates good features for detecting objects over various scales.
With Cascade R-CNN technique, the bounding boxes are iteratively refined for
better results.
|
[
{
"created": "Tue, 15 Oct 2019 07:46:03 GMT",
"version": "v1"
},
{
"created": "Fri, 25 Oct 2019 02:16:59 GMT",
"version": "v2"
},
{
"created": "Mon, 28 Oct 2019 05:00:49 GMT",
"version": "v3"
}
] |
2019-10-29
|
[
[
"Wu",
"Cheng-En",
""
],
[
"Chan",
"Yi-Ming",
""
],
[
"Chen",
"Chien-Hung",
""
],
[
"Chen",
"Wen-Cheng",
""
],
[
"Chen",
"Chu-Song",
""
]
] |
It is hard to detect on-road objects under various lighting conditions. To improve the quality of the classifier, three techniques are used. We define subclasses to separate daytime and nighttime samples. Then we skip similar samples in the training set to prevent overfitting. With the help of the outside training samples, the detection accuracy is also improved. To detect objects in an edge device, Nvidia Jetson TX2 platform, we exert the lightweight model ResNet-18 FPN as the backbone feature extractor. The FPN (Feature Pyramid Network) generates good features for detecting objects over various scales. With Cascade R-CNN technique, the bounding boxes are iteratively refined for better results.
|
2304.08135
|
Alexander Wein
|
Abhishek Dhawan, Cheng Mao, Alexander S. Wein
|
Detection of Dense Subhypergraphs by Low-Degree Polynomials
|
31 pages
| null | null | null |
cs.DS cs.CC math.ST stat.ML stat.TH
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Detection of a planted dense subgraph in a random graph is a fundamental
statistical and computational problem that has been extensively studied in
recent years. We study a hypergraph version of the problem. Let $G^r(n,p)$
denote the $r$-uniform Erd\H{o}s-R\'enyi hypergraph model with $n$ vertices and
edge density $p$. We consider detecting the presence of a planted
$G^r(n^\gamma, n^{-\alpha})$ subhypergraph in a $G^r(n, n^{-\beta})$
hypergraph, where $0< \alpha < \beta < r-1$ and $0 < \gamma < 1$. Focusing on
tests that are degree-$n^{o(1)}$ polynomials of the entries of the adjacency
tensor, we determine the threshold between the easy and hard regimes for the
detection problem. More precisely, for $0 < \gamma < 1/2$, the threshold is
given by $\alpha = \beta \gamma$, and for $1/2 \le \gamma < 1$, the threshold
is given by $\alpha = \beta/2 + r(\gamma - 1/2)$.
Our results are already new in the graph case $r=2$, as we consider the
subtle log-density regime where hardness based on average-case reductions is
not known. Our proof of low-degree hardness is based on a conditional variant
of the standard low-degree likelihood calculation.
|
[
{
"created": "Mon, 17 Apr 2023 10:38:08 GMT",
"version": "v1"
}
] |
2023-04-18
|
[
[
"Dhawan",
"Abhishek",
""
],
[
"Mao",
"Cheng",
""
],
[
"Wein",
"Alexander S.",
""
]
] |
Detection of a planted dense subgraph in a random graph is a fundamental statistical and computational problem that has been extensively studied in recent years. We study a hypergraph version of the problem. Let $G^r(n,p)$ denote the $r$-uniform Erd\H{o}s-R\'enyi hypergraph model with $n$ vertices and edge density $p$. We consider detecting the presence of a planted $G^r(n^\gamma, n^{-\alpha})$ subhypergraph in a $G^r(n, n^{-\beta})$ hypergraph, where $0< \alpha < \beta < r-1$ and $0 < \gamma < 1$. Focusing on tests that are degree-$n^{o(1)}$ polynomials of the entries of the adjacency tensor, we determine the threshold between the easy and hard regimes for the detection problem. More precisely, for $0 < \gamma < 1/2$, the threshold is given by $\alpha = \beta \gamma$, and for $1/2 \le \gamma < 1$, the threshold is given by $\alpha = \beta/2 + r(\gamma - 1/2)$. Our results are already new in the graph case $r=2$, as we consider the subtle log-density regime where hardness based on average-case reductions is not known. Our proof of low-degree hardness is based on a conditional variant of the standard low-degree likelihood calculation.
|
2406.15362
|
Nuredin Ali Abdelkadir
|
Nuredin Ali, Charles Chuankai Zhang, Ned Mayo, Stevie Chancellor
|
Diverse Perspectives, Divergent Models: Cross-Cultural Evaluation of
Depression Detection on Twitter
|
6 pages, 2 figures, NAACL 2024 Main Conference
|
2024 Annual Conference of the North American Chapter of the
Association for Computational Linguistics (NAACL)
| null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Social media data has been used for detecting users with mental disorders,
such as depression. Despite the global significance of cross-cultural
representation and its potential impact on model performance, publicly
available datasets often lack crucial metadata related to this aspect. In this
work, we evaluate the generalization of benchmark datasets to build AI models
on cross-cultural Twitter data. We gather a custom geo-located Twitter dataset
of depressed users from seven countries as a test dataset. Our results show
that depression detection models do not generalize globally. The models perform
worse on Global South users compared to Global North. Pre-trained language
models achieve the best generalization compared to Logistic Regression, though
still show significant gaps in performance on depressed and non-Western users.
We quantify our findings and provide several actionable suggestions to mitigate
this issue.
|
[
{
"created": "Mon, 1 Apr 2024 03:59:12 GMT",
"version": "v1"
}
] |
2024-06-25
|
[
[
"Ali",
"Nuredin",
""
],
[
"Zhang",
"Charles Chuankai",
""
],
[
"Mayo",
"Ned",
""
],
[
"Chancellor",
"Stevie",
""
]
] |
Social media data has been used for detecting users with mental disorders, such as depression. Despite the global significance of cross-cultural representation and its potential impact on model performance, publicly available datasets often lack crucial metadata related to this aspect. In this work, we evaluate the generalization of benchmark datasets to build AI models on cross-cultural Twitter data. We gather a custom geo-located Twitter dataset of depressed users from seven countries as a test dataset. Our results show that depression detection models do not generalize globally. The models perform worse on Global South users compared to Global North. Pre-trained language models achieve the best generalization compared to Logistic Regression, though still show significant gaps in performance on depressed and non-Western users. We quantify our findings and provide several actionable suggestions to mitigate this issue.
|
2307.03480
|
Erik Daniel
|
Erik Daniel and Marcel Ebert and Florian Tschorsch
|
Improving Bitswap Privacy with Forwarding and Source Obfuscation
|
short paper, 4 pages, accepted as a short paper at 2023 IEEE 48th
Conference on Local Computer Networks (LCN)
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
IPFS is a content-addressed decentralized peer-to-peer data network, using
the Bitswap protocol for exchanging data. The data exchange leaks the
information to all neighbors, compromising a user's privacy. This paper
investigates the suitability of forwarding with source obfuscation techniques
for improving the privacy of the Bitswap protocol. The usage of forwarding can
add plausible deniability and the source obfuscation provides additional
protection against passive observers. First results showed that through
trickle-spreading the source prediction could decrease to 40 %, at the cost of
an increased content fetching time. However, assuming short distances between
content provider and consumer the content fetching time can be faster even with
the additional source obfuscation.
|
[
{
"created": "Fri, 7 Jul 2023 09:37:58 GMT",
"version": "v1"
}
] |
2023-07-10
|
[
[
"Daniel",
"Erik",
""
],
[
"Ebert",
"Marcel",
""
],
[
"Tschorsch",
"Florian",
""
]
] |
IPFS is a content-addressed decentralized peer-to-peer data network, using the Bitswap protocol for exchanging data. The data exchange leaks the information to all neighbors, compromising a user's privacy. This paper investigates the suitability of forwarding with source obfuscation techniques for improving the privacy of the Bitswap protocol. The usage of forwarding can add plausible deniability and the source obfuscation provides additional protection against passive observers. First results showed that through trickle-spreading the source prediction could decrease to 40 %, at the cost of an increased content fetching time. However, assuming short distances between content provider and consumer the content fetching time can be faster even with the additional source obfuscation.
|
2106.00839
|
Agni Orfanoudaki
|
Dimitris Bertsimas, Agni Orfanoudaki
|
Algorithmic Insurance
| null | null | null | null |
cs.LG q-fin.RM stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
As machine learning algorithms start to get integrated into the
decision-making process of companies and organizations, insurance products are
being developed to protect their owners from liability risk. Algorithmic
liability differs from human liability since it is based on a single model
compared to multiple heterogeneous decision-makers and its performance is known
a priori for a given set of data. Traditional actuarial tools for human
liability do not take these properties into consideration, primarily focusing
on the distribution of historical claims. We propose, for the first time, a
quantitative framework to estimate the risk exposure of insurance contracts for
machine-driven liability, introducing the concept of algorithmic insurance.
Specifically, we present an optimization formulation to estimate the risk
exposure of a binary classification model given a pre-defined range of
premiums. We adjust the formulation to account for uncertainty in the resulting
losses using robust optimization. Our approach outlines how properties of the
model, such as accuracy, interpretability, and generalizability, can influence
the insurance contract evaluation. To showcase a practical implementation of
the proposed framework, we present a case study of medical malpractice in the
context of breast cancer detection. Our analysis focuses on measuring the
effect of the model parameters on the expected financial loss and identifying
the aspects of algorithmic performance that predominantly affect the risk of
the contract.
|
[
{
"created": "Tue, 1 Jun 2021 22:32:02 GMT",
"version": "v1"
},
{
"created": "Wed, 14 Dec 2022 15:24:32 GMT",
"version": "v2"
}
] |
2022-12-15
|
[
[
"Bertsimas",
"Dimitris",
""
],
[
"Orfanoudaki",
"Agni",
""
]
] |
As machine learning algorithms start to get integrated into the decision-making process of companies and organizations, insurance products are being developed to protect their owners from liability risk. Algorithmic liability differs from human liability since it is based on a single model compared to multiple heterogeneous decision-makers and its performance is known a priori for a given set of data. Traditional actuarial tools for human liability do not take these properties into consideration, primarily focusing on the distribution of historical claims. We propose, for the first time, a quantitative framework to estimate the risk exposure of insurance contracts for machine-driven liability, introducing the concept of algorithmic insurance. Specifically, we present an optimization formulation to estimate the risk exposure of a binary classification model given a pre-defined range of premiums. We adjust the formulation to account for uncertainty in the resulting losses using robust optimization. Our approach outlines how properties of the model, such as accuracy, interpretability, and generalizability, can influence the insurance contract evaluation. To showcase a practical implementation of the proposed framework, we present a case study of medical malpractice in the context of breast cancer detection. Our analysis focuses on measuring the effect of the model parameters on the expected financial loss and identifying the aspects of algorithmic performance that predominantly affect the risk of the contract.
|
2110.14528
|
Sariel Har-Peled
|
Sariel Har-Peled and Jiaqi Cheng
|
On Competitive Permutations for Set Cover by Intervals
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We revisit the problem of computing an optimal partial cover of points by
intervals. We show that the greedy algorithm computes a permutation $\Pi =
\pi_1, \pi_2,\ldots$ of the intervals that is $3/4$-competitive for any prefix
of $k$ intervals. That is, for any $k$, the intervals $\pi_1 \cup \cdots \cup
\pi_k$ covers at least $3/4$-fraction of the points covered by the optimal
solution using $k$ intervals.
We also provide an approximation algorithm that, in $O(n + m/\varepsilon)$
time, computes a cover by $(1+\varepsilon)k$ intervals that is as good as the
optimal solution using $k$ intervals, where $n$ is the number of input points,
and $m$ is the number of intervals (we assume here the input is presorted).
Finally, we show a counter example illustrating that the optimal solutions
for set cover do not have the diminishing return property -- that is, the
marginal benefit from using more sets is not monotonically decreasing.
Fortunately, the diminishing returns does hold for intervals.
|
[
{
"created": "Wed, 27 Oct 2021 15:41:09 GMT",
"version": "v1"
}
] |
2021-10-28
|
[
[
"Har-Peled",
"Sariel",
""
],
[
"Cheng",
"Jiaqi",
""
]
] |
We revisit the problem of computing an optimal partial cover of points by intervals. We show that the greedy algorithm computes a permutation $\Pi = \pi_1, \pi_2,\ldots$ of the intervals that is $3/4$-competitive for any prefix of $k$ intervals. That is, for any $k$, the intervals $\pi_1 \cup \cdots \cup \pi_k$ covers at least $3/4$-fraction of the points covered by the optimal solution using $k$ intervals. We also provide an approximation algorithm that, in $O(n + m/\varepsilon)$ time, computes a cover by $(1+\varepsilon)k$ intervals that is as good as the optimal solution using $k$ intervals, where $n$ is the number of input points, and $m$ is the number of intervals (we assume here the input is presorted). Finally, we show a counter example illustrating that the optimal solutions for set cover do not have the diminishing return property -- that is, the marginal benefit from using more sets is not monotonically decreasing. Fortunately, the diminishing returns does hold for intervals.
|
2008.09965
|
Zirui Wang
|
Zirui Wang, Victor Adrian Prisacariu
|
Neighbourhood-Insensitive Point Cloud Normal Estimation Network
|
Accepted in BMVC 2020 as oral presentation. Code available at
https://code.active.vision and project page at http://ninormal.active.vision
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a novel self-attention-based normal estimation network that is
able to focus softly on relevant points and adjust the softness by learning a
temperature parameter, making it able to work naturally and effectively within
a large neighbourhood range. As a result, our model outperforms all existing
normal estimation algorithms by a large margin, achieving 94.1% accuracy in
comparison with the previous state of the art of 91.2%, with a 25x smaller
model and 12x faster inference time. We also use point-to-plane Iterative
Closest Point (ICP) as an application case to show that our normal estimations
lead to faster convergence than normal estimations from other methods, without
manually fine-tuning neighbourhood range parameters. Code available at
https://code.active.vision.
|
[
{
"created": "Sun, 23 Aug 2020 05:46:58 GMT",
"version": "v1"
},
{
"created": "Tue, 25 Aug 2020 02:36:33 GMT",
"version": "v2"
},
{
"created": "Fri, 15 Jan 2021 11:01:58 GMT",
"version": "v3"
}
] |
2021-01-18
|
[
[
"Wang",
"Zirui",
""
],
[
"Prisacariu",
"Victor Adrian",
""
]
] |
We introduce a novel self-attention-based normal estimation network that is able to focus softly on relevant points and adjust the softness by learning a temperature parameter, making it able to work naturally and effectively within a large neighbourhood range. As a result, our model outperforms all existing normal estimation algorithms by a large margin, achieving 94.1% accuracy in comparison with the previous state of the art of 91.2%, with a 25x smaller model and 12x faster inference time. We also use point-to-plane Iterative Closest Point (ICP) as an application case to show that our normal estimations lead to faster convergence than normal estimations from other methods, without manually fine-tuning neighbourhood range parameters. Code available at https://code.active.vision.
|
2307.00580
|
Hemanth Karnati
|
Hemanth Karnati
|
IoT-Based Air Quality Monitoring System with Machine Learning for
Accurate and Real-time Data Analysis
|
18 pages, 10 figures
| null | null | null |
cs.LG
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Air pollution in urban areas has severe consequences for both human health
and the environment, predominantly caused by exhaust emissions from vehicles.
To address the issue of air pollution awareness, Air Pollution Monitoring
systems are used to measure the concentration of gases like CO2, smoke,
alcohol, benzene, and NH3 present in the air. However, current mobile
applications are unable to provide users with real-time data specific to their
location. In this paper, we propose the development of a portable air quality
detection device that can be used anywhere. The data collected will be stored
and visualized using the cloud-based web app ThinkSpeak.
The device utilizes two sensors, MQ135 and MQ3, to detect harmful gases and
measure air quality in parts per million (PPM). Additionally, machine learning
analysis will be employed on the collected data.
|
[
{
"created": "Sun, 2 Jul 2023 14:18:04 GMT",
"version": "v1"
}
] |
2023-07-04
|
[
[
"Karnati",
"Hemanth",
""
]
] |
Air pollution in urban areas has severe consequences for both human health and the environment, predominantly caused by exhaust emissions from vehicles. To address the issue of air pollution awareness, Air Pollution Monitoring systems are used to measure the concentration of gases like CO2, smoke, alcohol, benzene, and NH3 present in the air. However, current mobile applications are unable to provide users with real-time data specific to their location. In this paper, we propose the development of a portable air quality detection device that can be used anywhere. The data collected will be stored and visualized using the cloud-based web app ThinkSpeak. The device utilizes two sensors, MQ135 and MQ3, to detect harmful gases and measure air quality in parts per million (PPM). Additionally, machine learning analysis will be employed on the collected data.
|
0808.0112
|
Didier Sornette
|
V.I. Yukalov and D. Sornette
|
Mathematical Structure of Quantum Decision Theory
|
40 pages
|
Advances in Complex Systems 13, 659-698 (2010)
| null | null |
cs.AI math-ph math.MP quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
One of the most complex systems is the human brain whose formalized
functioning is characterized by decision theory. We present a "Quantum Decision
Theory" of decision making, based on the mathematical theory of separable
Hilbert spaces. This mathematical structure captures the effect of
superposition of composite prospects, including many incorporated intentions,
which allows us to explain a variety of interesting fallacies and anomalies
that have been reported to particularize the decision making of real human
beings. The theory describes entangled decision making, non-commutativity of
subsequent decisions, and intention interference of composite prospects. We
demonstrate how the violation of the Savage's sure-thing principle (disjunction
effect) can be explained as a result of the interference of intentions, when
making decisions under uncertainty. The conjunction fallacy is also explained
by the presence of the interference terms. We demonstrate that all known
anomalies and paradoxes, documented in the context of classical decision
theory, are reducible to just a few mathematical archetypes, all of which
finding straightforward explanations in the frame of the developed quantum
approach.
|
[
{
"created": "Fri, 1 Aug 2008 13:14:20 GMT",
"version": "v1"
},
{
"created": "Wed, 27 Oct 2010 09:05:25 GMT",
"version": "v2"
},
{
"created": "Thu, 28 Oct 2010 10:28:03 GMT",
"version": "v3"
}
] |
2010-10-29
|
[
[
"Yukalov",
"V. I.",
""
],
[
"Sornette",
"D.",
""
]
] |
One of the most complex systems is the human brain whose formalized functioning is characterized by decision theory. We present a "Quantum Decision Theory" of decision making, based on the mathematical theory of separable Hilbert spaces. This mathematical structure captures the effect of superposition of composite prospects, including many incorporated intentions, which allows us to explain a variety of interesting fallacies and anomalies that have been reported to particularize the decision making of real human beings. The theory describes entangled decision making, non-commutativity of subsequent decisions, and intention interference of composite prospects. We demonstrate how the violation of the Savage's sure-thing principle (disjunction effect) can be explained as a result of the interference of intentions, when making decisions under uncertainty. The conjunction fallacy is also explained by the presence of the interference terms. We demonstrate that all known anomalies and paradoxes, documented in the context of classical decision theory, are reducible to just a few mathematical archetypes, all of which finding straightforward explanations in the frame of the developed quantum approach.
|
1805.01151
|
Robert Feldt
|
Jan-Philipp Stegh\"ofer, H{\aa}kan Burden, Regina Hebig, Gul Calikli,
Robert Feldt, Imed Hammouda, Jennifer Horkoff, Eric Knauss, Grischa Liebel
|
Involving External Stakeholders in Project Courses
|
Abstract shortened since arxiv.org limits length of abstracts. See
paper/pdf for full abstract. Paper is forthcoming, accepted August 2017.
Arxiv version 2 corrects misspelled author name
|
ACM Transactions on Computing Education (TOCE), acc. August 2017
| null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Problem: The involvement of external stakeholders in capstone projects and
project courses is desirable due to its potential positive effects on the
students. Capstone projects particularly profit from the inclusion of an
industrial partner to make the project relevant and help students acquire
professional skills. In addition, an increasing push towards education that is
aligned with industry and incorporates industrial partners can be observed.
However, the involvement of external stakeholders in teaching moments can
create friction and could, in the worst case, lead to frustration of all
involved parties. Contribution: We developed a model that allows analysing the
involvement of external stakeholders in university courses both in a
retrospective fashion, to gain insights from past course instances, and in a
constructive fashion, to plan the involvement of external stakeholders. Key
Concepts: The conceptual model and the accompanying guideline guide the
teachers in their analysis of stakeholder involvement. The model is comprised
of several activities (define, execute, and evaluate the collaboration). The
guideline provides questions that the teachers should answer for each of these
activities. In the constructive use, the model allows teachers to define an
action plan based on an analysis of potential stakeholders and the pedagogical
objectives. In the retrospective use, the model allows teachers to identify
issues that appeared during the project and their underlying causes. Drawing
from ideas of the reflective practitioner, the model contains an emphasis on
reflection and interpretation of the observations made by the teacher and other
groups involved in the courses. Key Lessons: Applying the model retrospectively
to a total of eight courses shows that it is possible to reveal hitherto
implicit risks and assumptions and to gain a better insight into the
interaction...
|
[
{
"created": "Thu, 3 May 2018 08:04:09 GMT",
"version": "v1"
},
{
"created": "Fri, 4 May 2018 07:13:25 GMT",
"version": "v2"
}
] |
2018-05-07
|
[
[
"Steghöfer",
"Jan-Philipp",
""
],
[
"Burden",
"Håkan",
""
],
[
"Hebig",
"Regina",
""
],
[
"Calikli",
"Gul",
""
],
[
"Feldt",
"Robert",
""
],
[
"Hammouda",
"Imed",
""
],
[
"Horkoff",
"Jennifer",
""
],
[
"Knauss",
"Eric",
""
],
[
"Liebel",
"Grischa",
""
]
] |
Problem: The involvement of external stakeholders in capstone projects and project courses is desirable due to its potential positive effects on the students. Capstone projects particularly profit from the inclusion of an industrial partner to make the project relevant and help students acquire professional skills. In addition, an increasing push towards education that is aligned with industry and incorporates industrial partners can be observed. However, the involvement of external stakeholders in teaching moments can create friction and could, in the worst case, lead to frustration of all involved parties. Contribution: We developed a model that allows analysing the involvement of external stakeholders in university courses both in a retrospective fashion, to gain insights from past course instances, and in a constructive fashion, to plan the involvement of external stakeholders. Key Concepts: The conceptual model and the accompanying guideline guide the teachers in their analysis of stakeholder involvement. The model is comprised of several activities (define, execute, and evaluate the collaboration). The guideline provides questions that the teachers should answer for each of these activities. In the constructive use, the model allows teachers to define an action plan based on an analysis of potential stakeholders and the pedagogical objectives. In the retrospective use, the model allows teachers to identify issues that appeared during the project and their underlying causes. Drawing from ideas of the reflective practitioner, the model contains an emphasis on reflection and interpretation of the observations made by the teacher and other groups involved in the courses. Key Lessons: Applying the model retrospectively to a total of eight courses shows that it is possible to reveal hitherto implicit risks and assumptions and to gain a better insight into the interaction...
|
2408.00508
|
Florian Dietz
|
Florian Dietz, Dietrich Klakow
|
Block-Operations: Using Modular Routing to Improve Compositional
Generalization
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We explore the hypothesis that poor compositional generalization in neural
networks is caused by difficulties with learning effective routing. To solve
this problem, we propose the concept of block-operations, which is based on
splitting all activation tensors in the network into uniformly sized blocks and
using an inductive bias to encourage modular routing and modification of these
blocks. Based on this concept we introduce the Multiplexer, a new architectural
component that enhances the Feed Forward Neural Network (FNN). We
experimentally confirm that Multiplexers exhibit strong compositional
generalization. On both a synthetic and a realistic task our model was able to
learn the underlying process behind the task, whereas both FNNs and
Transformers were only able to learn heuristic approximations. We propose as
future work to use the principles of block-operations to improve other existing
architectures.
|
[
{
"created": "Thu, 1 Aug 2024 12:28:22 GMT",
"version": "v1"
}
] |
2024-08-02
|
[
[
"Dietz",
"Florian",
""
],
[
"Klakow",
"Dietrich",
""
]
] |
We explore the hypothesis that poor compositional generalization in neural networks is caused by difficulties with learning effective routing. To solve this problem, we propose the concept of block-operations, which is based on splitting all activation tensors in the network into uniformly sized blocks and using an inductive bias to encourage modular routing and modification of these blocks. Based on this concept we introduce the Multiplexer, a new architectural component that enhances the Feed Forward Neural Network (FNN). We experimentally confirm that Multiplexers exhibit strong compositional generalization. On both a synthetic and a realistic task our model was able to learn the underlying process behind the task, whereas both FNNs and Transformers were only able to learn heuristic approximations. We propose as future work to use the principles of block-operations to improve other existing architectures.
|
1110.0976
|
Stefan Kratsch
|
Danny Hermelin and Stefan Kratsch and Karolina So{\l}tys and Magnus
Wahlstr\"om and Xi Wu
|
Hierarchies of Inefficient Kernelizability
| null | null | null | null |
cs.CC cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The framework of Bodlaender et al. (ICALP 2008) and Fortnow and Santhanam
(STOC 2008) allows us to exclude the existence of polynomial kernels for a
range of problems under reasonable complexity-theoretical assumptions. However,
there are also some issues that are not addressed by this framework, including
the existence of Turing kernels such as the "kernelization" of Leaf Out
Branching(k) into a disjunction over n instances of size poly(k). Observing
that Turing kernels are preserved by polynomial parametric transformations, we
define a kernelization hardness hierarchy, akin to the M- and W-hierarchy of
ordinary parameterized complexity, by the PPT-closure of problems that seem
likely to be fundamentally hard for efficient Turing kernelization. We find
that several previously considered problems are complete for our fundamental
hardness class, including Min Ones d-SAT(k), Binary NDTM Halting(k), Connected
Vertex Cover(k), and Clique(k log n), the clique problem parameterized by k log
n.
|
[
{
"created": "Wed, 5 Oct 2011 13:14:58 GMT",
"version": "v1"
}
] |
2015-03-19
|
[
[
"Hermelin",
"Danny",
""
],
[
"Kratsch",
"Stefan",
""
],
[
"Sołtys",
"Karolina",
""
],
[
"Wahlström",
"Magnus",
""
],
[
"Wu",
"Xi",
""
]
] |
The framework of Bodlaender et al. (ICALP 2008) and Fortnow and Santhanam (STOC 2008) allows us to exclude the existence of polynomial kernels for a range of problems under reasonable complexity-theoretical assumptions. However, there are also some issues that are not addressed by this framework, including the existence of Turing kernels such as the "kernelization" of Leaf Out Branching(k) into a disjunction over n instances of size poly(k). Observing that Turing kernels are preserved by polynomial parametric transformations, we define a kernelization hardness hierarchy, akin to the M- and W-hierarchy of ordinary parameterized complexity, by the PPT-closure of problems that seem likely to be fundamentally hard for efficient Turing kernelization. We find that several previously considered problems are complete for our fundamental hardness class, including Min Ones d-SAT(k), Binary NDTM Halting(k), Connected Vertex Cover(k), and Clique(k log n), the clique problem parameterized by k log n.
|
1506.08009
|
Francois Petitjean Ph.D.
|
Francois Petitjean, Tao Li, Nikolaj Tatti, Geoffrey I. Webb
|
Skopus: Mining top-k sequential patterns under leverage
| null |
Data Mining and Knowledge Discovery, September 2016, Volume 30,
Issue 5, pp 1086-1111
|
10.1007/s10618-016-0467-9
| null |
cs.AI cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a framework for exact discovery of the top-k sequential
patterns under Leverage. It combines (1) a novel definition of the expected
support for a sequential pattern - a concept on which most interestingness
measures directly rely - with (2) SkOPUS: a new branch-and-bound algorithm for
the exact discovery of top-k sequential patterns under a given measure of
interest. Our interestingness measure employs the partition approach. A pattern
is interesting to the extent that it is more frequent than can be explained by
assuming independence between any of the pairs of patterns from which it can be
composed. The larger the support compared to the expectation under
independence, the more interesting is the pattern. We build on these two
elements to exactly extract the k sequential patterns with highest leverage,
consistent with our definition of expected support. We conduct experiments on
both synthetic data with known patterns and real-world datasets; both
experiments confirm the consistency and relevance of our approach with regard
to the state of the art. This article was published in Data Mining and
Knowledge Discovery and is accessible at
http://dx.doi.org/10.1007/s10618-016-0467-9.
|
[
{
"created": "Fri, 26 Jun 2015 09:36:10 GMT",
"version": "v1"
},
{
"created": "Wed, 6 Jan 2016 04:48:08 GMT",
"version": "v2"
},
{
"created": "Mon, 8 May 2017 23:59:16 GMT",
"version": "v3"
},
{
"created": "Mon, 5 Feb 2018 01:26:34 GMT",
"version": "v4"
}
] |
2018-02-06
|
[
[
"Petitjean",
"Francois",
""
],
[
"Li",
"Tao",
""
],
[
"Tatti",
"Nikolaj",
""
],
[
"Webb",
"Geoffrey I.",
""
]
] |
This paper presents a framework for exact discovery of the top-k sequential patterns under Leverage. It combines (1) a novel definition of the expected support for a sequential pattern - a concept on which most interestingness measures directly rely - with (2) SkOPUS: a new branch-and-bound algorithm for the exact discovery of top-k sequential patterns under a given measure of interest. Our interestingness measure employs the partition approach. A pattern is interesting to the extent that it is more frequent than can be explained by assuming independence between any of the pairs of patterns from which it can be composed. The larger the support compared to the expectation under independence, the more interesting is the pattern. We build on these two elements to exactly extract the k sequential patterns with highest leverage, consistent with our definition of expected support. We conduct experiments on both synthetic data with known patterns and real-world datasets; both experiments confirm the consistency and relevance of our approach with regard to the state of the art. This article was published in Data Mining and Knowledge Discovery and is accessible at http://dx.doi.org/10.1007/s10618-016-0467-9.
|
1808.01666
|
Tshilidzi Marwala
|
Tshilidzi Marwala
|
On Robot Revolution and Taxation
| null | null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Advances in artificial intelligence are resulting in the rapid automation of
the work force. The tools that are used to automate are called robots. Bill
Gates proposed that in order to deal with the problem of the loss of jobs and
reduction of the tax revenue we ought to tax the robots. The problem with
taxing the robots is that it is not easy to know what a robot is. This article
studies the definition of a robot and the implication of advances in robotics
on taxation. It is evident from this article that it is a difficult task to
establish what a robot is and what is not a robot. It concludes that taxing
robots is the same as increasing corporate tax.
|
[
{
"created": "Sun, 5 Aug 2018 18:26:34 GMT",
"version": "v1"
}
] |
2018-08-07
|
[
[
"Marwala",
"Tshilidzi",
""
]
] |
Advances in artificial intelligence are resulting in the rapid automation of the work force. The tools that are used to automate are called robots. Bill Gates proposed that in order to deal with the problem of the loss of jobs and reduction of the tax revenue we ought to tax the robots. The problem with taxing the robots is that it is not easy to know what a robot is. This article studies the definition of a robot and the implication of advances in robotics on taxation. It is evident from this article that it is a difficult task to establish what a robot is and what is not a robot. It concludes that taxing robots is the same as increasing corporate tax.
|
2010.14858
|
Jo\~ao Ribeiro
|
Mahdi Cheraghchi, Jo\~ao Ribeiro
|
Non-Asymptotic Capacity Upper Bounds for the Discrete-Time Poisson
Channel with Positive Dark Current
|
12 pages, 2 figures. Minor revisions in the introduction
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We derive improved and easily computable upper bounds on the capacity of the
discrete-time Poisson channel under an average-power constraint and an
arbitrary constant dark current term. This is accomplished by combining a
general convex duality framework with a modified version of the digamma
distribution considered in previous work of the authors (Cheraghchi, J. ACM
2019; Cheraghchi, Ribeiro, IEEE Trans. Inf. Theory 2019). For most choices of
parameters, our upper bounds improve upon previous results even when an
additional peak-power constraint is imposed on the input.
|
[
{
"created": "Wed, 28 Oct 2020 10:10:14 GMT",
"version": "v1"
},
{
"created": "Sat, 31 Oct 2020 17:42:43 GMT",
"version": "v2"
}
] |
2020-11-03
|
[
[
"Cheraghchi",
"Mahdi",
""
],
[
"Ribeiro",
"João",
""
]
] |
We derive improved and easily computable upper bounds on the capacity of the discrete-time Poisson channel under an average-power constraint and an arbitrary constant dark current term. This is accomplished by combining a general convex duality framework with a modified version of the digamma distribution considered in previous work of the authors (Cheraghchi, J. ACM 2019; Cheraghchi, Ribeiro, IEEE Trans. Inf. Theory 2019). For most choices of parameters, our upper bounds improve upon previous results even when an additional peak-power constraint is imposed on the input.
|
2209.09775
|
Shashi Raj Pandey Dr.
|
Shashi Raj Pandey, Lam Duc Nguyen, and Petar Popovski
|
FedToken: Tokenized Incentives for Data Contribution in Federated
Learning
|
Accepted at Workshop on Federated Learning: Recent Advances and New
Challenges, in Conjunction with NeurIPS 2022 (FL-NeurIPS'22). 9 Pages, 5
Figures
| null | null | null |
cs.LG cs.DC cs.GT cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Incentives that compensate for the involved costs in the decentralized
training of a Federated Learning (FL) model act as a key stimulus for clients'
long-term participation. However, it is challenging to convince clients for
quality participation in FL due to the absence of: (i) full information on the
client's data quality and properties; (ii) the value of client's data
contributions; and (iii) the trusted mechanism for monetary incentive offers.
This often leads to poor efficiency in training and communication. While
several works focus on strategic incentive designs and client selection to
overcome this problem, there is a major knowledge gap in terms of an overall
design tailored to the foreseen digital economy, including Web 3.0, while
simultaneously meeting the learning objectives. To address this gap, we propose
a contribution-based tokenized incentive scheme, namely \texttt{FedToken},
backed by blockchain technology that ensures fair allocation of tokens amongst
the clients that corresponds to the valuation of their data during model
training. Leveraging the engineered Shapley-based scheme, we first approximate
the contribution of local models during model aggregation, then strategically
schedule clients lowering the communication rounds for convergence and anchor
ways to allocate \emph{affordable} tokens under a constrained monetary budget.
Extensive simulations demonstrate the efficacy of our proposed method.
|
[
{
"created": "Tue, 20 Sep 2022 14:58:08 GMT",
"version": "v1"
},
{
"created": "Thu, 3 Nov 2022 15:03:36 GMT",
"version": "v2"
}
] |
2022-11-04
|
[
[
"Pandey",
"Shashi Raj",
""
],
[
"Nguyen",
"Lam Duc",
""
],
[
"Popovski",
"Petar",
""
]
] |
Incentives that compensate for the involved costs in the decentralized training of a Federated Learning (FL) model act as a key stimulus for clients' long-term participation. However, it is challenging to convince clients for quality participation in FL due to the absence of: (i) full information on the client's data quality and properties; (ii) the value of client's data contributions; and (iii) the trusted mechanism for monetary incentive offers. This often leads to poor efficiency in training and communication. While several works focus on strategic incentive designs and client selection to overcome this problem, there is a major knowledge gap in terms of an overall design tailored to the foreseen digital economy, including Web 3.0, while simultaneously meeting the learning objectives. To address this gap, we propose a contribution-based tokenized incentive scheme, namely \texttt{FedToken}, backed by blockchain technology that ensures fair allocation of tokens amongst the clients that corresponds to the valuation of their data during model training. Leveraging the engineered Shapley-based scheme, we first approximate the contribution of local models during model aggregation, then strategically schedule clients lowering the communication rounds for convergence and anchor ways to allocate \emph{affordable} tokens under a constrained monetary budget. Extensive simulations demonstrate the efficacy of our proposed method.
|
2402.03846
|
Jose Cribeiro-Ramallo
|
Jose Cribeiro-Ramallo, Vadim Arzamasov, Klemens B\"ohm
|
Efficient Generation of Hidden Outliers for Improved Outlier Detection
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Outlier generation is a popular technique used for solving important outlier
detection tasks. Generating outliers with realistic behavior is challenging.
Popular existing methods tend to disregard the 'multiple views' property of
outliers in high-dimensional spaces. The only existing method accounting for
this property falls short in efficiency and effectiveness. We propose BISECT, a
new outlier generation method that creates realistic outliers mimicking said
property. To do so, BISECT employs a novel proposition introduced in this
article stating how to efficiently generate said realistic outliers. Our method
has better guarantees and complexity than the current methodology for
recreating 'multiple views'. We use the synthetic outliers generated by BISECT
to effectively enhance outlier detection in diverse datasets, for multiple use
cases. For instance, oversampling with BISECT reduced the error by up to 3
times when compared with the baselines.
|
[
{
"created": "Tue, 6 Feb 2024 09:48:33 GMT",
"version": "v1"
}
] |
2024-02-07
|
[
[
"Cribeiro-Ramallo",
"Jose",
""
],
[
"Arzamasov",
"Vadim",
""
],
[
"Böhm",
"Klemens",
""
]
] |
Outlier generation is a popular technique used for solving important outlier detection tasks. Generating outliers with realistic behavior is challenging. Popular existing methods tend to disregard the 'multiple views' property of outliers in high-dimensional spaces. The only existing method accounting for this property falls short in efficiency and effectiveness. We propose BISECT, a new outlier generation method that creates realistic outliers mimicking said property. To do so, BISECT employs a novel proposition introduced in this article stating how to efficiently generate said realistic outliers. Our method has better guarantees and complexity than the current methodology for recreating 'multiple views'. We use the synthetic outliers generated by BISECT to effectively enhance outlier detection in diverse datasets, for multiple use cases. For instance, oversampling with BISECT reduced the error by up to 3 times when compared with the baselines.
|
1705.02503
|
Lamberto Ballan
|
Federico Bartoli, Giuseppe Lisanti, Lamberto Ballan, Alberto Del Bimbo
|
Context-Aware Trajectory Prediction
|
Submitted to BMVC 2017
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human motion and behaviour in crowded spaces is influenced by several
factors, such as the dynamics of other moving agents in the scene, as well as
the static elements that might be perceived as points of attraction or
obstacles. In this work, we present a new model for human trajectory prediction
which is able to take advantage of both human-human and human-space
interactions. The future trajectory of humans, are generated by observing their
past positions and interactions with the surroundings. To this end, we propose
a "context-aware" recurrent neural network LSTM model, which can learn and
predict human motion in crowded spaces such as a sidewalk, a museum or a
shopping mall. We evaluate our model on a public pedestrian datasets, and we
contribute a new challenging dataset that collects videos of humans that
navigate in a (real) crowded space such as a big museum. Results show that our
approach can predict human trajectories better when compared to previous
state-of-the-art forecasting models.
|
[
{
"created": "Sat, 6 May 2017 16:36:32 GMT",
"version": "v1"
}
] |
2017-05-09
|
[
[
"Bartoli",
"Federico",
""
],
[
"Lisanti",
"Giuseppe",
""
],
[
"Ballan",
"Lamberto",
""
],
[
"Del Bimbo",
"Alberto",
""
]
] |
Human motion and behaviour in crowded spaces is influenced by several factors, such as the dynamics of other moving agents in the scene, as well as the static elements that might be perceived as points of attraction or obstacles. In this work, we present a new model for human trajectory prediction which is able to take advantage of both human-human and human-space interactions. The future trajectory of humans, are generated by observing their past positions and interactions with the surroundings. To this end, we propose a "context-aware" recurrent neural network LSTM model, which can learn and predict human motion in crowded spaces such as a sidewalk, a museum or a shopping mall. We evaluate our model on a public pedestrian datasets, and we contribute a new challenging dataset that collects videos of humans that navigate in a (real) crowded space such as a big museum. Results show that our approach can predict human trajectories better when compared to previous state-of-the-art forecasting models.
|
1702.06767
|
Jiasong Wu
|
Jiasong Wu, Shijie Qiu, Youyong Kong, Yang Chen, Lotfi Senhadji,
Huazhong Shu
|
MomentsNet: a simple learning-free method for binary image recognition
|
5 pages, 4 figures, 2 tables
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a new simple and learning-free deep learning
network named MomentsNet, whose convolution layer, nonlinear processing layer
and pooling layer are constructed by Moments kernels, binary hashing and
block-wise histogram, respectively. Twelve typical moments (including
geometrical moment, Zernike moment, Tchebichef moment, etc.) are used to
construct the MomentsNet whose recognition performance for binary image is
studied. The results reveal that MomentsNet has better recognition performance
than its corresponding moments in almost all cases and ZernikeNet achieves the
best recognition performance among MomentsNet constructed by twelve moments.
ZernikeNet also shows better recognition performance on binary image database
than that of PCANet, which is a learning-based deep learning network.
|
[
{
"created": "Wed, 22 Feb 2017 12:08:09 GMT",
"version": "v1"
}
] |
2017-02-23
|
[
[
"Wu",
"Jiasong",
""
],
[
"Qiu",
"Shijie",
""
],
[
"Kong",
"Youyong",
""
],
[
"Chen",
"Yang",
""
],
[
"Senhadji",
"Lotfi",
""
],
[
"Shu",
"Huazhong",
""
]
] |
In this paper, we propose a new simple and learning-free deep learning network named MomentsNet, whose convolution layer, nonlinear processing layer and pooling layer are constructed by Moments kernels, binary hashing and block-wise histogram, respectively. Twelve typical moments (including geometrical moment, Zernike moment, Tchebichef moment, etc.) are used to construct the MomentsNet whose recognition performance for binary image is studied. The results reveal that MomentsNet has better recognition performance than its corresponding moments in almost all cases and ZernikeNet achieves the best recognition performance among MomentsNet constructed by twelve moments. ZernikeNet also shows better recognition performance on binary image database than that of PCANet, which is a learning-based deep learning network.
|
2208.01520
|
Radu Iosif
|
Radu Iosif and Florian Zuleger
|
On the Expressiveness of a Logic of Separated Relations
| null | null | null | null |
cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
We compare the model-theoretic expressiveness of the existential fragment of
Separation Logic over unrestricted relational signatures (SLR) -- with only
separating conjunction as logical connective and higher-order inductive
definitions, traditionally known as the symbolic heap fragment -- with the
expressiveness of (Monadic) Second Order Logic ((M)SO). While SLR and MSO are
incomparable on structures of unbounded treewidth, it turns out that SLR can be
embedded in SO, in general, and that MSO becomes a strict subset of SLR, when
the treewidth of the models is bounded by a parameter given as input. We also
discuss the problem of defining a fragment of SLR that is equivalent to MSO
over models of bounded treewidth. Such a fragment would then become the most
general Separation Logic with a decidable entailment problem, a key ingredient
of practical verification methods for self-adapting (reconfigurable)
component-based and distributed systems.
|
[
{
"created": "Tue, 2 Aug 2022 15:11:58 GMT",
"version": "v1"
}
] |
2022-08-03
|
[
[
"Iosif",
"Radu",
""
],
[
"Zuleger",
"Florian",
""
]
] |
We compare the model-theoretic expressiveness of the existential fragment of Separation Logic over unrestricted relational signatures (SLR) -- with only separating conjunction as logical connective and higher-order inductive definitions, traditionally known as the symbolic heap fragment -- with the expressiveness of (Monadic) Second Order Logic ((M)SO). While SLR and MSO are incomparable on structures of unbounded treewidth, it turns out that SLR can be embedded in SO, in general, and that MSO becomes a strict subset of SLR, when the treewidth of the models is bounded by a parameter given as input. We also discuss the problem of defining a fragment of SLR that is equivalent to MSO over models of bounded treewidth. Such a fragment would then become the most general Separation Logic with a decidable entailment problem, a key ingredient of practical verification methods for self-adapting (reconfigurable) component-based and distributed systems.
|
1812.07050
|
Zhe Liu
|
Zhe Liu and Shunbo Zhou and Chuanzhe Suo and Yingtian Liu and Peng Yin
and Hesheng Wang and Yun-Hui Liu
|
LPD-Net: 3D Point Cloud Learning for Large-Scale Place Recognition and
Environment Analysis
|
This paper has been accepted by ICCV-2019
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Point cloud based place recognition is still an open issue due to the
difficulty in extracting local features from the raw 3D point cloud and
generating the global descriptor, and it's even harder in the large-scale
dynamic environments. In this paper, we develop a novel deep neural network,
named LPD-Net (Large-scale Place Description Network), which can extract
discriminative and generalizable global descriptors from the raw 3D point
cloud. Two modules, the adaptive local feature extraction module and the
graph-based neighborhood aggregation module, are proposed, which contribute to
extract the local structures and reveal the spatial distribution of local
features in the large-scale point cloud, with an end-to-end manner. We
implement the proposed global descriptor in solving point cloud based retrieval
tasks to achieve the large-scale place recognition. Comparison results show
that our LPD-Net is much better than PointNetVLAD and reaches the
state-of-the-art. We also compare our LPD-Net with the vision-based solutions
to show the robustness of our approach to different weather and light
conditions.
|
[
{
"created": "Tue, 11 Dec 2018 04:42:24 GMT",
"version": "v1"
},
{
"created": "Mon, 19 Aug 2019 08:02:47 GMT",
"version": "v2"
}
] |
2019-08-20
|
[
[
"Liu",
"Zhe",
""
],
[
"Zhou",
"Shunbo",
""
],
[
"Suo",
"Chuanzhe",
""
],
[
"Liu",
"Yingtian",
""
],
[
"Yin",
"Peng",
""
],
[
"Wang",
"Hesheng",
""
],
[
"Liu",
"Yun-Hui",
""
]
] |
Point cloud based place recognition is still an open issue due to the difficulty in extracting local features from the raw 3D point cloud and generating the global descriptor, and it's even harder in the large-scale dynamic environments. In this paper, we develop a novel deep neural network, named LPD-Net (Large-scale Place Description Network), which can extract discriminative and generalizable global descriptors from the raw 3D point cloud. Two modules, the adaptive local feature extraction module and the graph-based neighborhood aggregation module, are proposed, which contribute to extract the local structures and reveal the spatial distribution of local features in the large-scale point cloud, with an end-to-end manner. We implement the proposed global descriptor in solving point cloud based retrieval tasks to achieve the large-scale place recognition. Comparison results show that our LPD-Net is much better than PointNetVLAD and reaches the state-of-the-art. We also compare our LPD-Net with the vision-based solutions to show the robustness of our approach to different weather and light conditions.
|
1802.07830
|
Ana Sokolova
|
Ana Sokolova and Harald Woracek
|
Proper Semirings and Proper Convex Functors
|
FoSSaCS 2018 full version
| null | null | null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Esik and Maletti introduced the notion of a proper semiring and proved that
some important (classes of) semirings -- Noetherian semirings, natural numbers
-- are proper. Properness matters as the equivalence problem for weighted
automata over a semiring which is proper and finitely and effectively presented
is decidable. Milius generalised the notion of properness from a semiring to a
functor. As a consequence, a semiring is proper if and only if its associated
"cubic functor" is proper. Moreover, properness of a functor renders soundness
and completeness proofs for axiomatizations of equivalent behaviour.
In this paper we provide a method for proving properness of functors, and
instantiate it to cover both the known cases and several novel ones: (1)
properness of the semirings of positive rationals and positive reals, via
properness of the corresponding cubic functors; and (2) properness of two
functors on (positive) convex algebras. The latter functors are important for
axiomatizing trace equivalence of probabilistic transition systems. Our proofs
rely on results that stretch all the way back to Hilbert and Minkowski.
|
[
{
"created": "Wed, 21 Feb 2018 22:10:37 GMT",
"version": "v1"
},
{
"created": "Sat, 24 Feb 2018 08:35:07 GMT",
"version": "v2"
}
] |
2018-02-27
|
[
[
"Sokolova",
"Ana",
""
],
[
"Woracek",
"Harald",
""
]
] |
Esik and Maletti introduced the notion of a proper semiring and proved that some important (classes of) semirings -- Noetherian semirings, natural numbers -- are proper. Properness matters as the equivalence problem for weighted automata over a semiring which is proper and finitely and effectively presented is decidable. Milius generalised the notion of properness from a semiring to a functor. As a consequence, a semiring is proper if and only if its associated "cubic functor" is proper. Moreover, properness of a functor renders soundness and completeness proofs for axiomatizations of equivalent behaviour. In this paper we provide a method for proving properness of functors, and instantiate it to cover both the known cases and several novel ones: (1) properness of the semirings of positive rationals and positive reals, via properness of the corresponding cubic functors; and (2) properness of two functors on (positive) convex algebras. The latter functors are important for axiomatizing trace equivalence of probabilistic transition systems. Our proofs rely on results that stretch all the way back to Hilbert and Minkowski.
|
2401.03444
|
Meng Qin
|
Meng Qin
|
Towards a Unified Method for Network Dynamic via Adversarial Weighted
Link Prediction
| null | null | null | null |
cs.NI cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Network dynamic (e.g., traffic burst in data center networks and channel
fading in cellular WiFi networks) has a great impact on the performance of
communication networks (e.g., throughput, capacity, delay, and jitter). This
article proposes a unified prediction-based method to handle the dynamic of
various network systems. From the view of graph deep learning, I generally
formulate the dynamic prediction of networks as a temporal link prediction task
and analyze the possible challenges of the prediction of weighted networks,
where link weights have the wide-value-range and sparsity issues. Inspired by
the high-resolution video frame prediction with generative adversarial network
(GAN), I try to adopt adversarial learning to generate high-quality predicted
snapshots for network dynamic, which is expected to support the precise and
fine-grained network control. A novel high-quality temporal link prediction
(HQ-TLP) model with GAN is then developed to illustrate the potential of my
basic idea. Extensive experiments for various application scenarios further
demonstrate the powerful capability of HQ-TLP.
|
[
{
"created": "Sun, 7 Jan 2024 10:30:51 GMT",
"version": "v1"
}
] |
2024-01-09
|
[
[
"Qin",
"Meng",
""
]
] |
Network dynamic (e.g., traffic burst in data center networks and channel fading in cellular WiFi networks) has a great impact on the performance of communication networks (e.g., throughput, capacity, delay, and jitter). This article proposes a unified prediction-based method to handle the dynamic of various network systems. From the view of graph deep learning, I generally formulate the dynamic prediction of networks as a temporal link prediction task and analyze the possible challenges of the prediction of weighted networks, where link weights have the wide-value-range and sparsity issues. Inspired by the high-resolution video frame prediction with generative adversarial network (GAN), I try to adopt adversarial learning to generate high-quality predicted snapshots for network dynamic, which is expected to support the precise and fine-grained network control. A novel high-quality temporal link prediction (HQ-TLP) model with GAN is then developed to illustrate the potential of my basic idea. Extensive experiments for various application scenarios further demonstrate the powerful capability of HQ-TLP.
|
1709.04545
|
Antonio Cavalcante Araujo Neto
|
Antonio Cavalcante Araujo Neto, Joerg Sander, Ricardo J. G. B.
Campello, Mario A. Nascimento
|
Efficient Computation of Multiple Density-Based Clustering Hierarchies
|
A short version of this paper appears at IEEE ICDM 2017. Corrected
typos. Revised abstract
| null | null | null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
HDBSCAN*, a state-of-the-art density-based hierarchical clustering method,
produces a hierarchical organization of clusters in a dataset w.r.t. a
parameter mpts. While the performance of HDBSCAN* is robust w.r.t. mpts in the
sense that a small change in mpts typically leads to only a small or no change
in the clustering structure, choosing a "good" mpts value can be challenging:
depending on the data distribution, a high or low value for mpts may be more
appropriate, and certain data clusters may reveal themselves at different
values of mpts. To explore results for a range of mpts values, however, one has
to run HDBSCAN* for each value in the range independently, which is
computationally inefficient. In this paper, we propose an efficient approach to
compute all HDBSCAN* hierarchies for a range of mpts values by replacing the
graph used by HDBSCAN* with a much smaller graph that is guaranteed to contain
the required information. An extensive experimental evaluation shows that with
our approach one can obtain over one hundred hierarchies for the computational
cost equivalent to running HDBSCAN* about 2 times.
|
[
{
"created": "Wed, 13 Sep 2017 21:24:42 GMT",
"version": "v1"
},
{
"created": "Fri, 22 Sep 2017 22:08:11 GMT",
"version": "v2"
},
{
"created": "Thu, 7 Jun 2018 23:05:58 GMT",
"version": "v3"
}
] |
2018-06-11
|
[
[
"Neto",
"Antonio Cavalcante Araujo",
""
],
[
"Sander",
"Joerg",
""
],
[
"Campello",
"Ricardo J. G. B.",
""
],
[
"Nascimento",
"Mario A.",
""
]
] |
HDBSCAN*, a state-of-the-art density-based hierarchical clustering method, produces a hierarchical organization of clusters in a dataset w.r.t. a parameter mpts. While the performance of HDBSCAN* is robust w.r.t. mpts in the sense that a small change in mpts typically leads to only a small or no change in the clustering structure, choosing a "good" mpts value can be challenging: depending on the data distribution, a high or low value for mpts may be more appropriate, and certain data clusters may reveal themselves at different values of mpts. To explore results for a range of mpts values, however, one has to run HDBSCAN* for each value in the range independently, which is computationally inefficient. In this paper, we propose an efficient approach to compute all HDBSCAN* hierarchies for a range of mpts values by replacing the graph used by HDBSCAN* with a much smaller graph that is guaranteed to contain the required information. An extensive experimental evaluation shows that with our approach one can obtain over one hundred hierarchies for the computational cost equivalent to running HDBSCAN* about 2 times.
|
2310.05307
|
Soumaya Cherkaoui
|
Benjamin Kalfon, Soumaya Cherkaoui, Jean-Fr\'ed\'eric Laprade, Ola
Ahmad and Shengrui Wang
|
Successive Data Injection in Conditional Quantum GAN Applied to Time
Series Anomaly Detection
| null | null | null | null |
cs.LG cs.NI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Classical GAN architectures have shown interesting results for solving
anomaly detection problems in general and for time series anomalies in
particular, such as those arising in communication networks. In recent years,
several quantum GAN architectures have been proposed in the literature. When
detecting anomalies in time series using QGANs, huge challenges arise due to
the limited number of qubits compared to the size of the data. To address these
challenges, we propose a new high-dimensional encoding approach, named
Successive Data Injection (SuDaI). In this approach, we explore a larger
portion of the quantum state than that in the conventional angle encoding, the
method used predominantly in the literature, through repeated data injections
into the quantum state. SuDaI encoding allows us to adapt the QGAN for anomaly
detection with network data of a much higher dimensionality than with the
existing known QGANs implementations. In addition, SuDaI encoding applies to
other types of high-dimensional time series and can be used in contexts beyond
anomaly detection and QGANs, opening up therefore multiple fields of
application.
|
[
{
"created": "Sun, 8 Oct 2023 22:58:44 GMT",
"version": "v1"
}
] |
2023-10-10
|
[
[
"Kalfon",
"Benjamin",
""
],
[
"Cherkaoui",
"Soumaya",
""
],
[
"Laprade",
"Jean-Frédéric",
""
],
[
"Ahmad",
"Ola",
""
],
[
"Wang",
"Shengrui",
""
]
] |
Classical GAN architectures have shown interesting results for solving anomaly detection problems in general and for time series anomalies in particular, such as those arising in communication networks. In recent years, several quantum GAN architectures have been proposed in the literature. When detecting anomalies in time series using QGANs, huge challenges arise due to the limited number of qubits compared to the size of the data. To address these challenges, we propose a new high-dimensional encoding approach, named Successive Data Injection (SuDaI). In this approach, we explore a larger portion of the quantum state than that in the conventional angle encoding, the method used predominantly in the literature, through repeated data injections into the quantum state. SuDaI encoding allows us to adapt the QGAN for anomaly detection with network data of a much higher dimensionality than with the existing known QGANs implementations. In addition, SuDaI encoding applies to other types of high-dimensional time series and can be used in contexts beyond anomaly detection and QGANs, opening up therefore multiple fields of application.
|
2009.00952
|
Kun Zhan
|
Kun Zhan, Chaoxi Niu
|
Mutual Teaching for Graph Convolutional Networks
|
GCN, 8 pages, 1 figures
|
Future Generation Computer Systems, 2021
| null | null |
cs.LG cs.CV stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Graph convolutional networks produce good predictions of unlabeled samples
due to its transductive label propagation. Since samples have different
predicted confidences, we take high-confidence predictions as pseudo labels to
expand the label set so that more samples are selected for updating models. We
propose a new training method named as mutual teaching, i.e., we train dual
models and let them teach each other during each batch. First, each network
feeds forward all samples and selects samples with high-confidence predictions.
Second, each model is updated by samples selected by its peer network. We view
the high-confidence predictions as useful knowledge, and the useful knowledge
of one network teaches the peer network with model updating in each batch. In
mutual teaching, the pseudo-label set of a network is from its peer network.
Since we use the new strategy of network training, performance improves
significantly. Extensive experimental results demonstrate that our method
achieves superior performance over state-of-the-art methods under very low
label rates.
|
[
{
"created": "Wed, 2 Sep 2020 11:10:55 GMT",
"version": "v1"
}
] |
2020-09-07
|
[
[
"Zhan",
"Kun",
""
],
[
"Niu",
"Chaoxi",
""
]
] |
Graph convolutional networks produce good predictions of unlabeled samples due to its transductive label propagation. Since samples have different predicted confidences, we take high-confidence predictions as pseudo labels to expand the label set so that more samples are selected for updating models. We propose a new training method named as mutual teaching, i.e., we train dual models and let them teach each other during each batch. First, each network feeds forward all samples and selects samples with high-confidence predictions. Second, each model is updated by samples selected by its peer network. We view the high-confidence predictions as useful knowledge, and the useful knowledge of one network teaches the peer network with model updating in each batch. In mutual teaching, the pseudo-label set of a network is from its peer network. Since we use the new strategy of network training, performance improves significantly. Extensive experimental results demonstrate that our method achieves superior performance over state-of-the-art methods under very low label rates.
|
2403.00972
|
Yinan Hu
|
Yinan Hu, Juntao Chen, Quanyan Zhu
|
Understanding Police Force Resource Allocation using Adversarial Optimal
Transport with Incomplete Information
| null | null | null | null |
cs.GT cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
Adversarial optimal transport has been proven useful as a mathematical
formulation to model resource allocation problems to maximize the efficiency of
transportation with an adversary, who modifies the data. It is often the case,
however, that only the adversary knows which nodes are malicious and which are
not. In this paper we formulate the problem of seeking adversarial optimal
transport into Bayesian games. We construct the concept of Bayesian equilibrium
and design a distributed algorithm that achieve those equilibria, making our
model applicable to large-scale networks. Keywords: game theory, crime control,
Markov games
|
[
{
"created": "Fri, 1 Mar 2024 20:46:36 GMT",
"version": "v1"
}
] |
2024-03-05
|
[
[
"Hu",
"Yinan",
""
],
[
"Chen",
"Juntao",
""
],
[
"Zhu",
"Quanyan",
""
]
] |
Adversarial optimal transport has been proven useful as a mathematical formulation to model resource allocation problems to maximize the efficiency of transportation with an adversary, who modifies the data. It is often the case, however, that only the adversary knows which nodes are malicious and which are not. In this paper we formulate the problem of seeking adversarial optimal transport into Bayesian games. We construct the concept of Bayesian equilibrium and design a distributed algorithm that achieve those equilibria, making our model applicable to large-scale networks. Keywords: game theory, crime control, Markov games
|
2212.03575
|
Sanghwan Jang
|
Sanghwan Jang
|
Tag Embedding and Well-defined Intermediate Representation improve
Auto-Formulation of Problem Description
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this report, I address auto-formulation of problem description, the task
of converting an optimization problem into a canonical representation. I first
simplify the auto-formulation task by defining an intermediate representation,
then introduce entity tag embedding to utilize a given entity tag information.
The ablation study demonstrate the effectiveness of the proposed method, which
finally took second place in NeurIPS 2022 NL4Opt competition subtask 2.
|
[
{
"created": "Wed, 7 Dec 2022 11:23:43 GMT",
"version": "v1"
}
] |
2022-12-08
|
[
[
"Jang",
"Sanghwan",
""
]
] |
In this report, I address auto-formulation of problem description, the task of converting an optimization problem into a canonical representation. I first simplify the auto-formulation task by defining an intermediate representation, then introduce entity tag embedding to utilize a given entity tag information. The ablation study demonstrate the effectiveness of the proposed method, which finally took second place in NeurIPS 2022 NL4Opt competition subtask 2.
|
2304.08502
|
Jiankun Zhao
|
Zhiqiang Nie, Jiankun Zhao, Qicheng Li, Yong Qin
|
CyFormer: Accurate State-of-Health Prediction of Lithium-Ion Batteries
via Cyclic Attention
| null | null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Predicting the State-of-Health (SoH) of lithium-ion batteries is a
fundamental task of battery management systems on electric vehicles. It aims at
estimating future SoH based on historical aging data. Most existing deep
learning methods rely on filter-based feature extractors (e.g., CNN or Kalman
filters) and recurrent time sequence models. Though efficient, they generally
ignore cyclic features and the domain gap between training and testing
batteries. To address this problem, we present CyFormer, a transformer-based
cyclic time sequence model for SoH prediction. Instead of the conventional
CNN-RNN structure, we adopt an encoder-decoder architecture. In the encoder,
row-wise and column-wise attention blocks effectively capture intra-cycle and
inter-cycle connections and extract cyclic features. In the decoder, the SoH
queries cross-attend to these features to form the final predictions. We
further utilize a transfer learning strategy to narrow the domain gap between
the training and testing set. To be specific, we use fine-tuning to shift the
model to a target working condition. Finally, we made our model more efficient
by pruning. The experiment shows that our method attains an MAE of 0.75\% with
only 10\% data for fine-tuning on a testing battery, surpassing prior methods
by a large margin. Effective and robust, our method provides a potential
solution for all cyclic time sequence prediction tasks.
|
[
{
"created": "Mon, 17 Apr 2023 02:16:40 GMT",
"version": "v1"
}
] |
2023-04-19
|
[
[
"Nie",
"Zhiqiang",
""
],
[
"Zhao",
"Jiankun",
""
],
[
"Li",
"Qicheng",
""
],
[
"Qin",
"Yong",
""
]
] |
Predicting the State-of-Health (SoH) of lithium-ion batteries is a fundamental task of battery management systems on electric vehicles. It aims at estimating future SoH based on historical aging data. Most existing deep learning methods rely on filter-based feature extractors (e.g., CNN or Kalman filters) and recurrent time sequence models. Though efficient, they generally ignore cyclic features and the domain gap between training and testing batteries. To address this problem, we present CyFormer, a transformer-based cyclic time sequence model for SoH prediction. Instead of the conventional CNN-RNN structure, we adopt an encoder-decoder architecture. In the encoder, row-wise and column-wise attention blocks effectively capture intra-cycle and inter-cycle connections and extract cyclic features. In the decoder, the SoH queries cross-attend to these features to form the final predictions. We further utilize a transfer learning strategy to narrow the domain gap between the training and testing set. To be specific, we use fine-tuning to shift the model to a target working condition. Finally, we made our model more efficient by pruning. The experiment shows that our method attains an MAE of 0.75\% with only 10\% data for fine-tuning on a testing battery, surpassing prior methods by a large margin. Effective and robust, our method provides a potential solution for all cyclic time sequence prediction tasks.
|
2405.16918
|
Nils Philipp Walter
|
Nils Philipp Walter, Linara Adilova, Jilles Vreeken, Michael Kamp
|
The Uncanny Valley: Exploring Adversarial Robustness from a Flatness
Perspective
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Flatness of the loss surface not only correlates positively with
generalization but is also related to adversarial robustness, since
perturbations of inputs relate non-linearly to perturbations of weights. In
this paper, we empirically analyze the relation between adversarial examples
and relative flatness with respect to the parameters of one layer. We observe a
peculiar property of adversarial examples: during an iterative first-order
white-box attack, the flatness of the loss surface measured around the
adversarial example first becomes sharper until the label is flipped, but if we
keep the attack running it runs into a flat uncanny valley where the label
remains flipped. We find this phenomenon across various model architectures and
datasets. Our results also extend to large language models (LLMs), but due to
the discrete nature of the input space and comparatively weak attacks, the
adversarial examples rarely reach a truly flat region. Most importantly, this
phenomenon shows that flatness alone cannot explain adversarial robustness
unless we can also guarantee the behavior of the function around the examples.
We theoretically connect relative flatness to adversarial robustness by
bounding the third derivative of the loss surface, underlining the need for
flatness in combination with a low global Lipschitz constant for a robust
model.
|
[
{
"created": "Mon, 27 May 2024 08:10:46 GMT",
"version": "v1"
}
] |
2024-05-28
|
[
[
"Walter",
"Nils Philipp",
""
],
[
"Adilova",
"Linara",
""
],
[
"Vreeken",
"Jilles",
""
],
[
"Kamp",
"Michael",
""
]
] |
Flatness of the loss surface not only correlates positively with generalization but is also related to adversarial robustness, since perturbations of inputs relate non-linearly to perturbations of weights. In this paper, we empirically analyze the relation between adversarial examples and relative flatness with respect to the parameters of one layer. We observe a peculiar property of adversarial examples: during an iterative first-order white-box attack, the flatness of the loss surface measured around the adversarial example first becomes sharper until the label is flipped, but if we keep the attack running it runs into a flat uncanny valley where the label remains flipped. We find this phenomenon across various model architectures and datasets. Our results also extend to large language models (LLMs), but due to the discrete nature of the input space and comparatively weak attacks, the adversarial examples rarely reach a truly flat region. Most importantly, this phenomenon shows that flatness alone cannot explain adversarial robustness unless we can also guarantee the behavior of the function around the examples. We theoretically connect relative flatness to adversarial robustness by bounding the third derivative of the loss surface, underlining the need for flatness in combination with a low global Lipschitz constant for a robust model.
|
2407.19913
|
Takato Yasuno
|
Takato Yasuno
|
Cell Culture Assistive Application for Precipitation Image Diagnosis
|
18 pages, 15 figures, 5 tables
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In regenerative medicine research, we experimentally design the composition
of chemical medium. We add different components to 384-well plates and culture
the biological cells. We monitor the condition of the cells and take time-lapse
bioimages for morphological assay. In particular, precipitation can appear as
artefacts in the image and contaminate the noise in the imaging assay.
Inspecting precipitates is a tedious task for the observer, and differences in
experience can lead to variations in judgement from person to person. The
machine learning approach will remove the burden of human inspection and
provide consistent inspection. In addition, precipitation features are as small
as 10-20 {\mu}m. A 1200 pixel square well image resized under a resolution of
2.82 {\mu}m/pixel will result in a reduction in precipitation features.
Dividing the well images into 240-pixel squares and learning without resizing
preserves the resolution of the original image. In this study, we developed an
application to automatically detect precipitation on 384-well plates utilising
optical microscope images. We apply MN-pair contrastive clustering to extract
precipitation classes from approximately 20,000 patch images. To detect
precipitation features, we compare deeper FCDDs detectors with optional
backbones and build a machine learning pipeline to detect precipitation from
the maximum score of quadruplet well images using isolation Forest algorithm,
where the anomaly score is ranged from zero to one. Furthermore, using this
application we can visualise precipitation situ heatmap on a 384-well plate.
|
[
{
"created": "Mon, 29 Jul 2024 11:42:32 GMT",
"version": "v1"
}
] |
2024-07-30
|
[
[
"Yasuno",
"Takato",
""
]
] |
In regenerative medicine research, we experimentally design the composition of chemical medium. We add different components to 384-well plates and culture the biological cells. We monitor the condition of the cells and take time-lapse bioimages for morphological assay. In particular, precipitation can appear as artefacts in the image and contaminate the noise in the imaging assay. Inspecting precipitates is a tedious task for the observer, and differences in experience can lead to variations in judgement from person to person. The machine learning approach will remove the burden of human inspection and provide consistent inspection. In addition, precipitation features are as small as 10-20 {\mu}m. A 1200 pixel square well image resized under a resolution of 2.82 {\mu}m/pixel will result in a reduction in precipitation features. Dividing the well images into 240-pixel squares and learning without resizing preserves the resolution of the original image. In this study, we developed an application to automatically detect precipitation on 384-well plates utilising optical microscope images. We apply MN-pair contrastive clustering to extract precipitation classes from approximately 20,000 patch images. To detect precipitation features, we compare deeper FCDDs detectors with optional backbones and build a machine learning pipeline to detect precipitation from the maximum score of quadruplet well images using isolation Forest algorithm, where the anomaly score is ranged from zero to one. Furthermore, using this application we can visualise precipitation situ heatmap on a 384-well plate.
|
1605.00390
|
Jose Armando Oviedo
|
Jose Armando Oviedo and Hamid R. Sadjadpour
|
A New NOMA Approach for Fair Power Allocation
|
This paper is published in IEEE INFOCOM 2016 Workshop on 5G & Beyond
- Enabling Technologies and Applications; 5 pages, 3 figures
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A non-orthogonal multiple access (NOMA) approach to user signal power
allocation called Fair-NOMA is introduced. Fair-NOMA is the application of NOMA
in such a way that two mobile users have the opportunity to always achieve at
least the information capacity they can achieve by using orthogonal multiple
access (OMA), regardless of the user selection criteria, making it suitable for
implementation using any current or future scheduling paradigms. Given this
condition, the bounds of the power allocation coefficients are derived as
functions of the channel gains of the two mobile users. The NOMA power
allocation is analyzed for two scheduled users that are selected randomly with
i.i.d. channel gains. The capacity improvements made by each user and the sum
capacity improvement are derived.
|
[
{
"created": "Mon, 2 May 2016 08:44:58 GMT",
"version": "v1"
}
] |
2016-05-03
|
[
[
"Oviedo",
"Jose Armando",
""
],
[
"Sadjadpour",
"Hamid R.",
""
]
] |
A non-orthogonal multiple access (NOMA) approach to user signal power allocation called Fair-NOMA is introduced. Fair-NOMA is the application of NOMA in such a way that two mobile users have the opportunity to always achieve at least the information capacity they can achieve by using orthogonal multiple access (OMA), regardless of the user selection criteria, making it suitable for implementation using any current or future scheduling paradigms. Given this condition, the bounds of the power allocation coefficients are derived as functions of the channel gains of the two mobile users. The NOMA power allocation is analyzed for two scheduled users that are selected randomly with i.i.d. channel gains. The capacity improvements made by each user and the sum capacity improvement are derived.
|
2402.08416
|
Gelei Deng
|
Gelei Deng, Yi Liu, Kailong Wang, Yuekang Li, Tianwei Zhang, Yang Liu
|
Pandora: Jailbreak GPTs by Retrieval Augmented Generation Poisoning
|
6 pages
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large Language Models~(LLMs) have gained immense popularity and are being
increasingly applied in various domains. Consequently, ensuring the security of
these models is of paramount importance. Jailbreak attacks, which manipulate
LLMs to generate malicious content, are recognized as a significant
vulnerability. While existing research has predominantly focused on direct
jailbreak attacks on LLMs, there has been limited exploration of indirect
methods. The integration of various plugins into LLMs, notably Retrieval
Augmented Generation~(RAG), which enables LLMs to incorporate external
knowledge bases into their response generation such as GPTs, introduces new
avenues for indirect jailbreak attacks.
To fill this gap, we investigate indirect jailbreak attacks on LLMs,
particularly GPTs, introducing a novel attack vector named Retrieval Augmented
Generation Poisoning. This method, Pandora, exploits the synergy between LLMs
and RAG through prompt manipulation to generate unexpected responses. Pandora
uses maliciously crafted content to influence the RAG process, effectively
initiating jailbreak attacks. Our preliminary tests show that Pandora
successfully conducts jailbreak attacks in four different scenarios, achieving
higher success rates than direct attacks, with 64.3\% for GPT-3.5 and 34.8\%
for GPT-4.
|
[
{
"created": "Tue, 13 Feb 2024 12:40:39 GMT",
"version": "v1"
}
] |
2024-02-14
|
[
[
"Deng",
"Gelei",
""
],
[
"Liu",
"Yi",
""
],
[
"Wang",
"Kailong",
""
],
[
"Li",
"Yuekang",
""
],
[
"Zhang",
"Tianwei",
""
],
[
"Liu",
"Yang",
""
]
] |
Large Language Models~(LLMs) have gained immense popularity and are being increasingly applied in various domains. Consequently, ensuring the security of these models is of paramount importance. Jailbreak attacks, which manipulate LLMs to generate malicious content, are recognized as a significant vulnerability. While existing research has predominantly focused on direct jailbreak attacks on LLMs, there has been limited exploration of indirect methods. The integration of various plugins into LLMs, notably Retrieval Augmented Generation~(RAG), which enables LLMs to incorporate external knowledge bases into their response generation such as GPTs, introduces new avenues for indirect jailbreak attacks. To fill this gap, we investigate indirect jailbreak attacks on LLMs, particularly GPTs, introducing a novel attack vector named Retrieval Augmented Generation Poisoning. This method, Pandora, exploits the synergy between LLMs and RAG through prompt manipulation to generate unexpected responses. Pandora uses maliciously crafted content to influence the RAG process, effectively initiating jailbreak attacks. Our preliminary tests show that Pandora successfully conducts jailbreak attacks in four different scenarios, achieving higher success rates than direct attacks, with 64.3\% for GPT-3.5 and 34.8\% for GPT-4.
|
2103.14962
|
Zixiang Zhou
|
Zixiang Zhou, Yang Zhang, Hassan Foroosh
|
Panoptic-PolarNet: Proposal-free LiDAR Point Cloud Panoptic Segmentation
|
Accepted by CVPR 2021
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Panoptic segmentation presents a new challenge in exploiting the merits of
both detection and segmentation, with the aim of unifying instance segmentation
and semantic segmentation in a single framework. However, an efficient solution
for panoptic segmentation in the emerging domain of LiDAR point cloud is still
an open research problem and is very much under-explored. In this paper, we
present a fast and robust LiDAR point cloud panoptic segmentation framework,
referred to as Panoptic-PolarNet. We learn both semantic segmentation and
class-agnostic instance clustering in a single inference network using a polar
Bird's Eye View (BEV) representation, enabling us to circumvent the issue of
occlusion among instances in urban street scenes. To improve our network's
learnability, we also propose an adapted instance augmentation technique and a
novel adversarial point cloud pruning method. Our experiments show that
Panoptic-PolarNet outperforms the baseline methods on SemanticKITTI and
nuScenes datasets with an almost real-time inference speed. Panoptic-PolarNet
achieved 54.1% PQ in the public SemanticKITTI panoptic segmentation leaderboard
and leading performance for the validation set of nuScenes.
|
[
{
"created": "Sat, 27 Mar 2021 18:31:40 GMT",
"version": "v1"
}
] |
2021-03-30
|
[
[
"Zhou",
"Zixiang",
""
],
[
"Zhang",
"Yang",
""
],
[
"Foroosh",
"Hassan",
""
]
] |
Panoptic segmentation presents a new challenge in exploiting the merits of both detection and segmentation, with the aim of unifying instance segmentation and semantic segmentation in a single framework. However, an efficient solution for panoptic segmentation in the emerging domain of LiDAR point cloud is still an open research problem and is very much under-explored. In this paper, we present a fast and robust LiDAR point cloud panoptic segmentation framework, referred to as Panoptic-PolarNet. We learn both semantic segmentation and class-agnostic instance clustering in a single inference network using a polar Bird's Eye View (BEV) representation, enabling us to circumvent the issue of occlusion among instances in urban street scenes. To improve our network's learnability, we also propose an adapted instance augmentation technique and a novel adversarial point cloud pruning method. Our experiments show that Panoptic-PolarNet outperforms the baseline methods on SemanticKITTI and nuScenes datasets with an almost real-time inference speed. Panoptic-PolarNet achieved 54.1% PQ in the public SemanticKITTI panoptic segmentation leaderboard and leading performance for the validation set of nuScenes.
|
1703.03370
|
Xiaozhe Wang
|
Xiaozhe Wang
|
Estimating Dynamic Load Parameters from Ambient PMU Measurements
|
The paper has been accepted by IEEE PES general meeting 2017
| null | null | null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, a novel method to estimate dynamic load parameters via ambient
PMU measurements is proposed. Unlike conventional parameter identification
methods, the proposed algorithm does not require the existence of large
disturbance to power systems, and is able to provide up-to-date dynamic load
parameters consistently and continuously. The accuracy and robustness of the
method are demonstrated through numerical simulations.
|
[
{
"created": "Thu, 9 Mar 2017 17:47:38 GMT",
"version": "v1"
}
] |
2017-03-10
|
[
[
"Wang",
"Xiaozhe",
""
]
] |
In this paper, a novel method to estimate dynamic load parameters via ambient PMU measurements is proposed. Unlike conventional parameter identification methods, the proposed algorithm does not require the existence of large disturbance to power systems, and is able to provide up-to-date dynamic load parameters consistently and continuously. The accuracy and robustness of the method are demonstrated through numerical simulations.
|
2407.09107
|
Leonhard Faubel
|
Leonhard Faubel and Klaus Schmid
|
MLOps: A Multiple Case Study in Industry 4.0
| null | null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
As Machine Learning (ML) becomes more prevalent in Industry 4.0, there is a
growing need to understand how systematic approaches to bringing ML into
production can be practically implemented in industrial environments. Here,
MLOps comes into play. MLOps refers to the processes, tools, and organizational
structures used to develop, test, deploy, and manage ML models reliably and
efficiently. However, there is currently a lack of information on the practical
implementation of MLOps in industrial enterprises. To address this issue, we
conducted a multiple case study on MLOps in three large companies with
dedicated MLOps teams, using established tools and well-defined model
deployment processes in the Industry 4.0 environment. This study describes four
of the companies' Industry 4.0 scenarios and provides relevant insights into
their implementation and the challenges they faced in numerous projects.
Further, we discuss MLOps processes, procedures, technologies, as well as
contextual variations among companies.
|
[
{
"created": "Fri, 12 Jul 2024 09:17:26 GMT",
"version": "v1"
}
] |
2024-07-15
|
[
[
"Faubel",
"Leonhard",
""
],
[
"Schmid",
"Klaus",
""
]
] |
As Machine Learning (ML) becomes more prevalent in Industry 4.0, there is a growing need to understand how systematic approaches to bringing ML into production can be practically implemented in industrial environments. Here, MLOps comes into play. MLOps refers to the processes, tools, and organizational structures used to develop, test, deploy, and manage ML models reliably and efficiently. However, there is currently a lack of information on the practical implementation of MLOps in industrial enterprises. To address this issue, we conducted a multiple case study on MLOps in three large companies with dedicated MLOps teams, using established tools and well-defined model deployment processes in the Industry 4.0 environment. This study describes four of the companies' Industry 4.0 scenarios and provides relevant insights into their implementation and the challenges they faced in numerous projects. Further, we discuss MLOps processes, procedures, technologies, as well as contextual variations among companies.
|
2402.06591
|
Arnaud Carayol
|
Arnaud Carayol, Philippe Duchon, Florent Koechlin, Cyril Nicaud
|
Random DFA With One Added Transition
|
32 pages, 4 figures, extended version of STACS'2023
| null | null | null |
cs.FL cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
Every language recognized by a non-deterministic finite automaton can be
recognized by a deterministic automaton, at the cost of a potential increase of
the number of states, which in the worst case can go from $n$ states to $2^n$
states. In this article, we investigate this classical result in a
probabilistic setting where we take a deterministic automaton with $n$ states
uniformly at random and add just one random transition. These automata are
almost deterministic in the sense that only one state has a non-deterministic
choice when reading an input letter. In our model, each state has a fixed
probability to be final. We prove that for any $d\geq 1$, with non-negligible
probability the minimal (deterministic) automaton of the language recognized by
such an automaton has more than $n^d$ states; as a byproduct, the expected size
of its minimal automaton grows faster than any polynomial. Our result also
holds when each state is final with some probability that depends on $n$, as
long as it is not too close to $0$ and $1$, at distance at least
$\Omega(\frac1{\sqrt{n}})$ to be precise, therefore allowing models with a
sublinear number of final states in expectation.
|
[
{
"created": "Fri, 9 Feb 2024 18:11:57 GMT",
"version": "v1"
}
] |
2024-02-12
|
[
[
"Carayol",
"Arnaud",
""
],
[
"Duchon",
"Philippe",
""
],
[
"Koechlin",
"Florent",
""
],
[
"Nicaud",
"Cyril",
""
]
] |
Every language recognized by a non-deterministic finite automaton can be recognized by a deterministic automaton, at the cost of a potential increase of the number of states, which in the worst case can go from $n$ states to $2^n$ states. In this article, we investigate this classical result in a probabilistic setting where we take a deterministic automaton with $n$ states uniformly at random and add just one random transition. These automata are almost deterministic in the sense that only one state has a non-deterministic choice when reading an input letter. In our model, each state has a fixed probability to be final. We prove that for any $d\geq 1$, with non-negligible probability the minimal (deterministic) automaton of the language recognized by such an automaton has more than $n^d$ states; as a byproduct, the expected size of its minimal automaton grows faster than any polynomial. Our result also holds when each state is final with some probability that depends on $n$, as long as it is not too close to $0$ and $1$, at distance at least $\Omega(\frac1{\sqrt{n}})$ to be precise, therefore allowing models with a sublinear number of final states in expectation.
|
2311.01815
|
Jianxiong Shen
|
Jianxiong Shen and Ruijie Ren and Adria Ruiz and Francesc
Moreno-Noguer
|
Estimating 3D Uncertainty Field: Quantifying Uncertainty for Neural
Radiance Fields
| null | null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Current methods based on Neural Radiance Fields (NeRF) significantly lack the
capacity to quantify uncertainty in their predictions, particularly on the
unseen space including the occluded and outside scene content. This limitation
hinders their extensive applications in robotics, where the reliability of
model predictions has to be considered for tasks such as robotic exploration
and planning in unknown environments. To address this, we propose a novel
approach to estimate a 3D Uncertainty Field based on the learned incomplete
scene geometry, which explicitly identifies these unseen regions. By
considering the accumulated transmittance along each camera ray, our
Uncertainty Field infers 2D pixel-wise uncertainty, exhibiting high values for
rays directly casting towards occluded or outside the scene content. To
quantify the uncertainty on the learned surface, we model a stochastic radiance
field. Our experiments demonstrate that our approach is the only one that can
explicitly reason about high uncertainty both on 3D unseen regions and its
involved 2D rendered pixels, compared with recent methods. Furthermore, we
illustrate that our designed uncertainty field is ideally suited for real-world
robotics tasks, such as next-best-view selection.
|
[
{
"created": "Fri, 3 Nov 2023 09:47:53 GMT",
"version": "v1"
},
{
"created": "Sun, 26 Nov 2023 03:44:36 GMT",
"version": "v2"
}
] |
2023-11-28
|
[
[
"Shen",
"Jianxiong",
""
],
[
"Ren",
"Ruijie",
""
],
[
"Ruiz",
"Adria",
""
],
[
"Moreno-Noguer",
"Francesc",
""
]
] |
Current methods based on Neural Radiance Fields (NeRF) significantly lack the capacity to quantify uncertainty in their predictions, particularly on the unseen space including the occluded and outside scene content. This limitation hinders their extensive applications in robotics, where the reliability of model predictions has to be considered for tasks such as robotic exploration and planning in unknown environments. To address this, we propose a novel approach to estimate a 3D Uncertainty Field based on the learned incomplete scene geometry, which explicitly identifies these unseen regions. By considering the accumulated transmittance along each camera ray, our Uncertainty Field infers 2D pixel-wise uncertainty, exhibiting high values for rays directly casting towards occluded or outside the scene content. To quantify the uncertainty on the learned surface, we model a stochastic radiance field. Our experiments demonstrate that our approach is the only one that can explicitly reason about high uncertainty both on 3D unseen regions and its involved 2D rendered pixels, compared with recent methods. Furthermore, we illustrate that our designed uncertainty field is ideally suited for real-world robotics tasks, such as next-best-view selection.
|
2208.11304
|
Seongan Lim
|
Hyang-Sook Lee, Seongan Lim, Ikkwon Yie, Aaram Yun
|
On Insecure Uses of BGN for Privacy Preserving Data Aggregation
Protocols
|
11 pages
|
Fundamenta Informaticae, Volume 188, Issue 2 (March 7, 2023)
fi:9967
| null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The notion of aggregator oblivious (AO) security for privacy preserving data
aggregation was formalized with a specific construction of AO-secure blinding
technique over a cyclic group by Shi et al. Some of proposals of data
aggregation protocols use the blinding technique of Shi et al. for BGN
cryptosystem, an additive homomorphic encryption. Previously, there have been
some security analysis on some of BGN based data aggregation protocols in the
context of integrity or authenticity of data. Even with such security analysis,
the BGN cryptosystem has been a popular building block of privacy preserving
data aggregation protocol. In this paper, we study the privacy issues in the
blinding technique of Shi et al. used for BGN cryptosystem. We show that the
blinding techniques for the BGN cryptosystem used in several protocols are not
privacy preserving against the recipient, the decryptor. Our analysis is based
on the fact that the BGN cryptosystem uses a pairing e:GxG-->G_T and the
existence of the pairing makes the DDH problem on G easy to solve. We also
suggest how to prevent such privacy leakage in the blinding technique of Shi et
al. used for BGN cryptosystem.
|
[
{
"created": "Wed, 24 Aug 2022 05:21:51 GMT",
"version": "v1"
},
{
"created": "Tue, 13 Dec 2022 06:48:56 GMT",
"version": "v2"
},
{
"created": "Mon, 26 Dec 2022 04:43:53 GMT",
"version": "v3"
},
{
"created": "Fri, 3 Mar 2023 00:52:27 GMT",
"version": "v4"
},
{
"created": "Mon, 6 Mar 2023 03:21:16 GMT",
"version": "v5"
}
] |
2023-06-22
|
[
[
"Lee",
"Hyang-Sook",
""
],
[
"Lim",
"Seongan",
""
],
[
"Yie",
"Ikkwon",
""
],
[
"Yun",
"Aaram",
""
]
] |
The notion of aggregator oblivious (AO) security for privacy preserving data aggregation was formalized with a specific construction of AO-secure blinding technique over a cyclic group by Shi et al. Some of proposals of data aggregation protocols use the blinding technique of Shi et al. for BGN cryptosystem, an additive homomorphic encryption. Previously, there have been some security analysis on some of BGN based data aggregation protocols in the context of integrity or authenticity of data. Even with such security analysis, the BGN cryptosystem has been a popular building block of privacy preserving data aggregation protocol. In this paper, we study the privacy issues in the blinding technique of Shi et al. used for BGN cryptosystem. We show that the blinding techniques for the BGN cryptosystem used in several protocols are not privacy preserving against the recipient, the decryptor. Our analysis is based on the fact that the BGN cryptosystem uses a pairing e:GxG-->G_T and the existence of the pairing makes the DDH problem on G easy to solve. We also suggest how to prevent such privacy leakage in the blinding technique of Shi et al. used for BGN cryptosystem.
|
1701.08869
|
Xiaqing Pan
|
Xiaqing Pan, Yueru Chen, C.-C. Jay Kuo
|
3D Shape Retrieval via Irrelevance Filtering and Similarity Ranking
(IF/SR)
|
arXiv admin note: text overlap with arXiv:1603.01942
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A novel solution for the content-based 3D shape retrieval problem using an
unsupervised clustering approach, which does not need any label information of
3D shapes, is presented in this work. The proposed shape retrieval system
consists of two modules in cascade: the irrelevance filtering (IF) module and
the similarity ranking (SR) module. The IF module attempts to cluster gallery
shapes that are similar to each other by examining global and local features
simultaneously. However, shapes that are close in the local feature space can
be distant in the global feature space, and vice versa. To resolve this issue,
we propose a joint cost function that strikes a balance between two distances.
Irrelevant samples that are close in the local feature space but distant in the
global feature space can be removed in this stage. The remaining gallery
samples are ranked in the SR module using the local feature. The superior
performance of the proposed IF/SR method is demonstrated by extensive
experiments conducted on the popular SHREC12 dataset.
|
[
{
"created": "Mon, 30 Jan 2017 23:04:57 GMT",
"version": "v1"
}
] |
2017-02-01
|
[
[
"Pan",
"Xiaqing",
""
],
[
"Chen",
"Yueru",
""
],
[
"Kuo",
"C. -C. Jay",
""
]
] |
A novel solution for the content-based 3D shape retrieval problem using an unsupervised clustering approach, which does not need any label information of 3D shapes, is presented in this work. The proposed shape retrieval system consists of two modules in cascade: the irrelevance filtering (IF) module and the similarity ranking (SR) module. The IF module attempts to cluster gallery shapes that are similar to each other by examining global and local features simultaneously. However, shapes that are close in the local feature space can be distant in the global feature space, and vice versa. To resolve this issue, we propose a joint cost function that strikes a balance between two distances. Irrelevant samples that are close in the local feature space but distant in the global feature space can be removed in this stage. The remaining gallery samples are ranked in the SR module using the local feature. The superior performance of the proposed IF/SR method is demonstrated by extensive experiments conducted on the popular SHREC12 dataset.
|
1109.2405
|
David Monniaux
|
David Monniaux (VERIMAG - IMAG), Julien Le Guen (VERIMAG - IMAG, ST
Microelectronics)
|
Stratified Static Analysis Based on Variable Dependencies
| null | null | null | null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In static analysis by abstract interpretation, one often uses widening
operators in order to enforce convergence within finite time to an inductive
invariant. Certain widening operators, including the classical one over finite
polyhedra, exhibit an unintuitive behavior: analyzing the program over a subset
of its variables may lead a more precise result than analyzing the original
program! In this article, we present simple workarounds for such behavior.
|
[
{
"created": "Mon, 12 Sep 2011 08:48:00 GMT",
"version": "v1"
}
] |
2011-09-13
|
[
[
"Monniaux",
"David",
"",
"VERIMAG - IMAG"
],
[
"Guen",
"Julien Le",
"",
"VERIMAG - IMAG, ST\n Microelectronics"
]
] |
In static analysis by abstract interpretation, one often uses widening operators in order to enforce convergence within finite time to an inductive invariant. Certain widening operators, including the classical one over finite polyhedra, exhibit an unintuitive behavior: analyzing the program over a subset of its variables may lead a more precise result than analyzing the original program! In this article, we present simple workarounds for such behavior.
|
2103.06386
|
Bernie Wang
|
Bernie Wang, Simon Xu, Kurt Keutzer, Yang Gao, Bichen Wu
|
Improving Context-Based Meta-Reinforcement Learning with Self-Supervised
Trajectory Contrastive Learning
| null | null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Meta-reinforcement learning typically requires orders of magnitude more
samples than single task reinforcement learning methods. This is because
meta-training needs to deal with more diverse distributions and train extra
components such as context encoders. To address this, we propose a novel
self-supervised learning task, which we named Trajectory Contrastive Learning
(TCL), to improve meta-training. TCL adopts contrastive learning and trains a
context encoder to predict whether two transition windows are sampled from the
same trajectory. TCL leverages the natural hierarchical structure of
context-based meta-RL and makes minimal assumptions, allowing it to be
generally applicable to context-based meta-RL algorithms. It accelerates the
training of context encoders and improves meta-training overall. Experiments
show that TCL performs better or comparably than a strong meta-RL baseline in
most of the environments on both meta-RL MuJoCo (5 of 6) and Meta-World
benchmarks (44 out of 50).
|
[
{
"created": "Wed, 10 Mar 2021 23:31:19 GMT",
"version": "v1"
}
] |
2021-03-12
|
[
[
"Wang",
"Bernie",
""
],
[
"Xu",
"Simon",
""
],
[
"Keutzer",
"Kurt",
""
],
[
"Gao",
"Yang",
""
],
[
"Wu",
"Bichen",
""
]
] |
Meta-reinforcement learning typically requires orders of magnitude more samples than single task reinforcement learning methods. This is because meta-training needs to deal with more diverse distributions and train extra components such as context encoders. To address this, we propose a novel self-supervised learning task, which we named Trajectory Contrastive Learning (TCL), to improve meta-training. TCL adopts contrastive learning and trains a context encoder to predict whether two transition windows are sampled from the same trajectory. TCL leverages the natural hierarchical structure of context-based meta-RL and makes minimal assumptions, allowing it to be generally applicable to context-based meta-RL algorithms. It accelerates the training of context encoders and improves meta-training overall. Experiments show that TCL performs better or comparably than a strong meta-RL baseline in most of the environments on both meta-RL MuJoCo (5 of 6) and Meta-World benchmarks (44 out of 50).
|
2403.09274
|
Mingyuan Sun
|
Mingyuan Sun, Donghao Zhang, Zongyuan Ge, Jiaxu Wang, Jia Li, Zheng
Fang and Renjing Xu
|
EventRPG: Event Data Augmentation with Relevance Propagation Guidance
|
Accepted by ICLR 2024
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Event camera, a novel bio-inspired vision sensor, has drawn a lot of
attention for its low latency, low power consumption, and high dynamic range.
Currently, overfitting remains a critical problem in event-based classification
tasks for Spiking Neural Network (SNN) due to its relatively weak spatial
representation capability. Data augmentation is a simple but efficient method
to alleviate overfitting and improve the generalization ability of neural
networks, and saliency-based augmentation methods are proven to be effective in
the image processing field. However, there is no approach available for
extracting saliency maps from SNNs. Therefore, for the first time, we present
Spiking Layer-Time-wise Relevance Propagation rule (SLTRP) and Spiking
Layer-wise Relevance Propagation rule (SLRP) in order for SNN to generate
stable and accurate CAMs and saliency maps. Based on this, we propose EventRPG,
which leverages relevance propagation on the spiking neural network for more
efficient augmentation. Our proposed method has been evaluated on several SNN
structures, achieving state-of-the-art performance in object recognition tasks
including N-Caltech101, CIFAR10-DVS, with accuracies of 85.62% and 85.55%, as
well as action recognition task SL-Animals with an accuracy of 91.59%. Our code
is available at https://github.com/myuansun/EventRPG.
|
[
{
"created": "Thu, 14 Mar 2024 10:52:45 GMT",
"version": "v1"
}
] |
2024-03-15
|
[
[
"Sun",
"Mingyuan",
""
],
[
"Zhang",
"Donghao",
""
],
[
"Ge",
"Zongyuan",
""
],
[
"Wang",
"Jiaxu",
""
],
[
"Li",
"Jia",
""
],
[
"Fang",
"Zheng",
""
],
[
"Xu",
"Renjing",
""
]
] |
Event camera, a novel bio-inspired vision sensor, has drawn a lot of attention for its low latency, low power consumption, and high dynamic range. Currently, overfitting remains a critical problem in event-based classification tasks for Spiking Neural Network (SNN) due to its relatively weak spatial representation capability. Data augmentation is a simple but efficient method to alleviate overfitting and improve the generalization ability of neural networks, and saliency-based augmentation methods are proven to be effective in the image processing field. However, there is no approach available for extracting saliency maps from SNNs. Therefore, for the first time, we present Spiking Layer-Time-wise Relevance Propagation rule (SLTRP) and Spiking Layer-wise Relevance Propagation rule (SLRP) in order for SNN to generate stable and accurate CAMs and saliency maps. Based on this, we propose EventRPG, which leverages relevance propagation on the spiking neural network for more efficient augmentation. Our proposed method has been evaluated on several SNN structures, achieving state-of-the-art performance in object recognition tasks including N-Caltech101, CIFAR10-DVS, with accuracies of 85.62% and 85.55%, as well as action recognition task SL-Animals with an accuracy of 91.59%. Our code is available at https://github.com/myuansun/EventRPG.
|
2008.11321
|
Maciej Besta
|
Maciej Besta, Armon Carigiet, Zur Vonarburg-Shmaria, Kacper Janda,
Lukas Gianinazzi, Torsten Hoefler
|
High-Performance Parallel Graph Coloring with Strong Guarantees on Work,
Depth, and Quality
| null |
Proceedings of the ACM/IEEE International Conference on High
Performance Computing, Networking, Storage and Analysis (SC20), November 2020
| null | null |
cs.DS cs.DC cs.PF
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We develop the first parallel graph coloring heuristics with strong
theoretical guarantees on work and depth and coloring quality. The key idea is
to design a relaxation of the vertex degeneracy order, a well-known graph
theory concept, and to color vertices in the order dictated by this relaxation.
This introduces a tunable amount of parallelism into the degeneracy ordering
that is otherwise hard to parallelize. This simple idea enables significant
benefits in several key aspects of graph coloring. For example, one of our
algorithms ensures polylogarithmic depth and a bound on the number of used
colors that is superior to all other parallelizable schemes, while maintaining
work-efficiency. In addition to provable guarantees, the developed algorithms
have competitive run-times for several real-world graphs, while almost always
providing superior coloring quality. Our degeneracy ordering relaxation is of
separate interest for algorithms outside the context of coloring.
|
[
{
"created": "Wed, 26 Aug 2020 00:52:33 GMT",
"version": "v1"
},
{
"created": "Thu, 29 Oct 2020 22:56:42 GMT",
"version": "v2"
},
{
"created": "Wed, 11 Nov 2020 15:59:26 GMT",
"version": "v3"
}
] |
2020-11-12
|
[
[
"Besta",
"Maciej",
""
],
[
"Carigiet",
"Armon",
""
],
[
"Vonarburg-Shmaria",
"Zur",
""
],
[
"Janda",
"Kacper",
""
],
[
"Gianinazzi",
"Lukas",
""
],
[
"Hoefler",
"Torsten",
""
]
] |
We develop the first parallel graph coloring heuristics with strong theoretical guarantees on work and depth and coloring quality. The key idea is to design a relaxation of the vertex degeneracy order, a well-known graph theory concept, and to color vertices in the order dictated by this relaxation. This introduces a tunable amount of parallelism into the degeneracy ordering that is otherwise hard to parallelize. This simple idea enables significant benefits in several key aspects of graph coloring. For example, one of our algorithms ensures polylogarithmic depth and a bound on the number of used colors that is superior to all other parallelizable schemes, while maintaining work-efficiency. In addition to provable guarantees, the developed algorithms have competitive run-times for several real-world graphs, while almost always providing superior coloring quality. Our degeneracy ordering relaxation is of separate interest for algorithms outside the context of coloring.
|
1604.03882
|
Milind Gide
|
Milind S. Gide, Samuel F. Dodge, and Lina J. Karam
|
The Effect of Distortions on the Prediction of Visual Attention
|
14 pages, 2 column, 14 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing saliency models have been designed and evaluated for predicting the
saliency in distortion-free images. However, in practice, the image quality is
affected by a host of factors at several stages of the image processing
pipeline such as acquisition, compression and transmission. Several studies
have explored the effect of distortion on human visual attention; however, none
of them have considered the performance of visual saliency models in the
presence of distortion. Furthermore, given that one potential application of
visual saliency prediction is to aid pooling of objective visual quality
metrics, it is important to compare the performance of existing saliency models
on distorted images. In this paper, we evaluate several state-of-the-art visual
attention models over different databases consisting of distorted images with
various types of distortions such as blur, noise and compression with varying
levels of distortion severity. This paper also introduces new improved
performance evaluation metrics that are shown to overcome shortcomings in
existing performance metrics. We find that the performance of most models
improves with moderate and high levels of distortions as compared to the near
distortion-free case. In addition, model performance is also found to decrease
with an increase in image complexity.
|
[
{
"created": "Wed, 13 Apr 2016 17:37:54 GMT",
"version": "v1"
}
] |
2016-04-14
|
[
[
"Gide",
"Milind S.",
""
],
[
"Dodge",
"Samuel F.",
""
],
[
"Karam",
"Lina J.",
""
]
] |
Existing saliency models have been designed and evaluated for predicting the saliency in distortion-free images. However, in practice, the image quality is affected by a host of factors at several stages of the image processing pipeline such as acquisition, compression and transmission. Several studies have explored the effect of distortion on human visual attention; however, none of them have considered the performance of visual saliency models in the presence of distortion. Furthermore, given that one potential application of visual saliency prediction is to aid pooling of objective visual quality metrics, it is important to compare the performance of existing saliency models on distorted images. In this paper, we evaluate several state-of-the-art visual attention models over different databases consisting of distorted images with various types of distortions such as blur, noise and compression with varying levels of distortion severity. This paper also introduces new improved performance evaluation metrics that are shown to overcome shortcomings in existing performance metrics. We find that the performance of most models improves with moderate and high levels of distortions as compared to the near distortion-free case. In addition, model performance is also found to decrease with an increase in image complexity.
|
2201.07212
|
D David
|
David and Budi Adiperdana
|
Using Particle Swarm Optimization as Pathfinding Strategy in a Space
with Obstacles
| null | null | null | null |
cs.NE cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Particle swarm optimization (PSO) is a search algorithm based on stochastic
and population-based adaptive optimization. In this paper, a pathfinding
strategy is proposed to improve the efficiency of path planning for a broad
range of applications. This study aims to investigate the effect of PSO
parameters (numbers of particle, weight constant, particle constant, and global
constant) on algorithm performance to give solution paths. Increasing the PSO
parameters makes the swarm move faster to the target point but takes a long
time to converge because of too many random movements, and vice versa. From a
variety of simulations with different parameters, the PSO algorithm is proven
to be able to provide a solution path in a space with obstacles.
|
[
{
"created": "Thu, 16 Dec 2021 12:16:02 GMT",
"version": "v1"
},
{
"created": "Thu, 23 Jun 2022 13:03:30 GMT",
"version": "v2"
}
] |
2022-06-24
|
[
[
"David",
"",
""
],
[
"Adiperdana",
"Budi",
""
]
] |
Particle swarm optimization (PSO) is a search algorithm based on stochastic and population-based adaptive optimization. In this paper, a pathfinding strategy is proposed to improve the efficiency of path planning for a broad range of applications. This study aims to investigate the effect of PSO parameters (numbers of particle, weight constant, particle constant, and global constant) on algorithm performance to give solution paths. Increasing the PSO parameters makes the swarm move faster to the target point but takes a long time to converge because of too many random movements, and vice versa. From a variety of simulations with different parameters, the PSO algorithm is proven to be able to provide a solution path in a space with obstacles.
|
2212.07983
|
Yan-Bo Lin
|
Yan-Bo Lin, Yi-Lin Sung, Jie Lei, Mohit Bansal, Gedas Bertasius
|
Vision Transformers are Parameter-Efficient Audio-Visual Learners
|
CVPR 2023 Project Page: https://genjib.github.io/project_page/LAVISH/
| null | null | null |
cs.CV cs.CL cs.LG cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vision transformers (ViTs) have achieved impressive results on various
computer vision tasks in the last several years. In this work, we study the
capability of frozen ViTs, pretrained only on visual data, to generalize to
audio-visual data without finetuning any of its original parameters. To do so,
we propose a latent audio-visual hybrid (LAVISH) adapter that adapts pretrained
ViTs to audio-visual tasks by injecting a small number of trainable parameters
into every layer of a frozen ViT. To efficiently fuse visual and audio cues,
our LAVISH adapter uses a small set of latent tokens, which form an attention
bottleneck, thus, eliminating the quadratic cost of standard cross-attention.
Compared to the existing modality-specific audio-visual methods, our approach
achieves competitive or even better performance on various audio-visual tasks
while using fewer tunable parameters and without relying on costly audio
pretraining or external audio encoders. Our code is available at
https://genjib.github.io/project_page/LAVISH/
|
[
{
"created": "Thu, 15 Dec 2022 17:31:54 GMT",
"version": "v1"
},
{
"created": "Wed, 5 Apr 2023 17:41:12 GMT",
"version": "v2"
}
] |
2023-04-06
|
[
[
"Lin",
"Yan-Bo",
""
],
[
"Sung",
"Yi-Lin",
""
],
[
"Lei",
"Jie",
""
],
[
"Bansal",
"Mohit",
""
],
[
"Bertasius",
"Gedas",
""
]
] |
Vision transformers (ViTs) have achieved impressive results on various computer vision tasks in the last several years. In this work, we study the capability of frozen ViTs, pretrained only on visual data, to generalize to audio-visual data without finetuning any of its original parameters. To do so, we propose a latent audio-visual hybrid (LAVISH) adapter that adapts pretrained ViTs to audio-visual tasks by injecting a small number of trainable parameters into every layer of a frozen ViT. To efficiently fuse visual and audio cues, our LAVISH adapter uses a small set of latent tokens, which form an attention bottleneck, thus, eliminating the quadratic cost of standard cross-attention. Compared to the existing modality-specific audio-visual methods, our approach achieves competitive or even better performance on various audio-visual tasks while using fewer tunable parameters and without relying on costly audio pretraining or external audio encoders. Our code is available at https://genjib.github.io/project_page/LAVISH/
|
2009.05791
|
Petar Radanliev
|
Petar Radanliev, David De Roure, Rob Walton, Max Van Kleek, Rafael
Mantilla Montalvo, Omar Santos, LaTreall Maddox, Stacy Cannady
|
COVID-19 what have we learned? The rise of social machines and connected
devices in pandemic management following the concepts of predictive,
preventive and personalised medicine
| null |
EPMA Journal 11, 311 332 (2020)
|
10.1007/s13167-020-00218-x
| null |
cs.CY cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A comprehensive bibliographic review with R statistical methods of the COVID
pandemic in PubMed literature and Web of Science Core Collection, supported
with Google Scholar search. In addition, a case study review of emerging new
approaches in different regions, using medical literature, academic literature,
news articles and other reliable data sources. Public responses of mistrust
about privacy data misuse differ across countries, depending on the chosen
public communication strategy.
|
[
{
"created": "Sat, 12 Sep 2020 13:26:54 GMT",
"version": "v1"
},
{
"created": "Mon, 23 Nov 2020 21:26:57 GMT",
"version": "v2"
}
] |
2020-11-25
|
[
[
"Radanliev",
"Petar",
""
],
[
"De Roure",
"David",
""
],
[
"Walton",
"Rob",
""
],
[
"Van Kleek",
"Max",
""
],
[
"Montalvo",
"Rafael Mantilla",
""
],
[
"Santos",
"Omar",
""
],
[
"Maddox",
"LaTreall",
""
],
[
"Cannady",
"Stacy",
""
]
] |
A comprehensive bibliographic review with R statistical methods of the COVID pandemic in PubMed literature and Web of Science Core Collection, supported with Google Scholar search. In addition, a case study review of emerging new approaches in different regions, using medical literature, academic literature, news articles and other reliable data sources. Public responses of mistrust about privacy data misuse differ across countries, depending on the chosen public communication strategy.
|
2005.13110
|
Clifford Broni-Bediako
|
Clifford Broni-Bediako, Yuki Murata, Luiz Henrique Mormille and
Masayasu Atsumi
|
Evolutionary NAS with Gene Expression Programming of Cellular Encoding
|
Accepted at IEEE SSCI 2020 (7 pages, 3 figures)
| null | null | null |
cs.CV cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The renaissance of neural architecture search (NAS) has seen classical
methods such as genetic algorithms (GA) and genetic programming (GP) being
exploited for convolutional neural network (CNN) architectures. While recent
work have achieved promising performance on visual perception tasks, the direct
encoding scheme of both GA and GP has functional complexity deficiency and does
not scale well on large architectures like CNN. To address this, we present a
new generative encoding scheme -- $symbolic\ linear\ generative\ encoding$
(SLGE) -- simple, yet powerful scheme which embeds local graph transformations
in chromosomes of linear fixed-length string to develop CNN architectures of
variant shapes and sizes via evolutionary process of gene expression
programming. In experiments, the effectiveness of SLGE is shown in discovering
architectures that improve the performance of the state-of-the-art handcrafted
CNN architectures on CIFAR-10 and CIFAR-100 image classification tasks; and
achieves a competitive classification error rate with the existing NAS methods
using less GPU resources.
|
[
{
"created": "Wed, 27 May 2020 01:19:32 GMT",
"version": "v1"
},
{
"created": "Thu, 3 Dec 2020 15:41:20 GMT",
"version": "v2"
}
] |
2020-12-04
|
[
[
"Broni-Bediako",
"Clifford",
""
],
[
"Murata",
"Yuki",
""
],
[
"Mormille",
"Luiz Henrique",
""
],
[
"Atsumi",
"Masayasu",
""
]
] |
The renaissance of neural architecture search (NAS) has seen classical methods such as genetic algorithms (GA) and genetic programming (GP) being exploited for convolutional neural network (CNN) architectures. While recent work have achieved promising performance on visual perception tasks, the direct encoding scheme of both GA and GP has functional complexity deficiency and does not scale well on large architectures like CNN. To address this, we present a new generative encoding scheme -- $symbolic\ linear\ generative\ encoding$ (SLGE) -- simple, yet powerful scheme which embeds local graph transformations in chromosomes of linear fixed-length string to develop CNN architectures of variant shapes and sizes via evolutionary process of gene expression programming. In experiments, the effectiveness of SLGE is shown in discovering architectures that improve the performance of the state-of-the-art handcrafted CNN architectures on CIFAR-10 and CIFAR-100 image classification tasks; and achieves a competitive classification error rate with the existing NAS methods using less GPU resources.
|
2406.09933
|
Adham Ibrahim Ibrahim
|
Adham Ibrahim, Shady Shehata, Ajinkya Kulkarni, Mukhtar Mohamed,
Muhammad Abdul-Mageed
|
What Does it Take to Generalize SER Model Across Datasets? A
Comprehensive Benchmark
|
ACCEPTED AT INTERSPEECH 2024, GREECE
| null | null | null |
cs.SD cs.AI cs.HC cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Speech emotion recognition (SER) is essential for enhancing human-computer
interaction in speech-based applications. Despite improvements in specific
emotional datasets, there is still a research gap in SER's capability to
generalize across real-world situations. In this paper, we investigate
approaches to generalize the SER system across different emotion datasets. In
particular, we incorporate 11 emotional speech datasets and illustrate a
comprehensive benchmark on the SER task. We also address the challenge of
imbalanced data distribution using over-sampling methods when combining SER
datasets for training. Furthermore, we explore various evaluation protocols for
adeptness in the generalization of SER. Building on this, we explore the
potential of Whisper for SER, emphasizing the importance of thorough
evaluation. Our approach is designed to advance SER technology by integrating
speaker-independent methods.
|
[
{
"created": "Fri, 14 Jun 2024 11:27:19 GMT",
"version": "v1"
}
] |
2024-06-17
|
[
[
"Ibrahim",
"Adham",
""
],
[
"Shehata",
"Shady",
""
],
[
"Kulkarni",
"Ajinkya",
""
],
[
"Mohamed",
"Mukhtar",
""
],
[
"Abdul-Mageed",
"Muhammad",
""
]
] |
Speech emotion recognition (SER) is essential for enhancing human-computer interaction in speech-based applications. Despite improvements in specific emotional datasets, there is still a research gap in SER's capability to generalize across real-world situations. In this paper, we investigate approaches to generalize the SER system across different emotion datasets. In particular, we incorporate 11 emotional speech datasets and illustrate a comprehensive benchmark on the SER task. We also address the challenge of imbalanced data distribution using over-sampling methods when combining SER datasets for training. Furthermore, we explore various evaluation protocols for adeptness in the generalization of SER. Building on this, we explore the potential of Whisper for SER, emphasizing the importance of thorough evaluation. Our approach is designed to advance SER technology by integrating speaker-independent methods.
|
1307.6318
|
Tom Hirschowitz
|
Tom Hirschowitz (CNRS, Universit\'e de Savoie)
|
Cartesian closed 2-categories and permutation equivalence in
higher-order rewriting
| null |
Logical Methods in Computer Science, Volume 9, Issue 3 (September
4, 2013) lmcs:1132
|
10.2168/LMCS-9(3:10)2013
| null |
cs.LO cs.PL math.CT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a semantics for permutation equivalence in higher-order rewriting.
This semantics takes place in cartesian closed 2-categories, and is proved
sound and complete.
|
[
{
"created": "Wed, 24 Jul 2013 07:57:09 GMT",
"version": "v1"
},
{
"created": "Mon, 2 Sep 2013 22:54:21 GMT",
"version": "v2"
},
{
"created": "Fri, 20 Sep 2013 18:58:44 GMT",
"version": "v3"
}
] |
2015-07-01
|
[
[
"Hirschowitz",
"Tom",
"",
"CNRS, Université de Savoie"
]
] |
We propose a semantics for permutation equivalence in higher-order rewriting. This semantics takes place in cartesian closed 2-categories, and is proved sound and complete.
|
1209.1327
|
Philippe Kruchten
|
Philippe Kruchten
|
The frog and the octopus: a conceptual model of software development
| null | null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by-nc-sa/3.0/
|
We propose a conceptual model of software development that encompasses all
approaches: traditional or agile, light and heavy, for large and small
development efforts. The model identifies both the common aspects in all
software development, i.e., elements found in some form or another in each and
every software development project (Intent, Product, People, Work, Time,
Quality, Risk, Cost, Value), as well as the variable part, i.e., the main
factors that cause the very wide variations we can find in the software
development world (Size, Age, Criticality, Architecture stability, Business
model, Governance, Rate of change, Geographic distribution). We show how the
model can be used as an explanatory theory of software development, as a tool
for analysis of practices, techniques, processes, as the basis for curriculum
design or for software process adoption and improvement, and to support
empirical research on software development methods. This model is also proposed
as a way to depolarize the debate on agile methods versus the
rest-of-the-world: a unified model.
|
[
{
"created": "Thu, 6 Sep 2012 16:08:34 GMT",
"version": "v1"
}
] |
2012-09-07
|
[
[
"Kruchten",
"Philippe",
""
]
] |
We propose a conceptual model of software development that encompasses all approaches: traditional or agile, light and heavy, for large and small development efforts. The model identifies both the common aspects in all software development, i.e., elements found in some form or another in each and every software development project (Intent, Product, People, Work, Time, Quality, Risk, Cost, Value), as well as the variable part, i.e., the main factors that cause the very wide variations we can find in the software development world (Size, Age, Criticality, Architecture stability, Business model, Governance, Rate of change, Geographic distribution). We show how the model can be used as an explanatory theory of software development, as a tool for analysis of practices, techniques, processes, as the basis for curriculum design or for software process adoption and improvement, and to support empirical research on software development methods. This model is also proposed as a way to depolarize the debate on agile methods versus the rest-of-the-world: a unified model.
|
1812.09987
|
Batya Kenig
|
Batya Kenig and Dan Suciu
|
Integrity Constraints Revisited: From Exact to Approximate Implication
| null |
Logical Methods in Computer Science, Volume 18, Issue 1 (January
11, 2022) lmcs:6925
|
10.46298/lmcs-18(1:5)2022
| null |
cs.DB
|
http://creativecommons.org/licenses/by/4.0/
|
Integrity constraints such as functional dependencies (FD) and multi-valued
dependencies (MVD) are fundamental in database schema design. Likewise,
probabilistic conditional independences (CI) are crucial for reasoning about
multivariate probability distributions. The implication problem studies whether
a set of constraints (antecedents) implies another constraint (consequent), and
has been investigated in both the database and the AI literature, under the
assumption that all constraints hold exactly. However, many applications today
consider constraints that hold only approximately. In this paper we define an
approximate implication as a linear inequality between the degree of
satisfaction of the antecedents and consequent, and we study the relaxation
problem: when does an exact implication relax to an approximate implication? We
use information theory to define the degree of satisfaction, and prove several
results. First, we show that any implication from a set of data dependencies
(MVDs+FDs) can be relaxed to a simple linear inequality with a factor at most
quadratic in the number of variables; when the consequent is an FD, the factor
can be reduced to 1. Second, we prove that there exists an implication between
CIs that does not admit any relaxation; however, we prove that every
implication between CIs relaxes "in the limit". Then, we show that the
implication problem for differential constraints in market basket analysis also
admits a relaxation with a factor equal to 1. Finally, we show how some of the
results in the paper can be derived using the I-measure theory, which relates
between information theoretic measures and set theory. Our results recover, and
sometimes extend, previously known results about the implication problem: the
implication of MVDs and FDs can be checked by considering only 2-tuple
relations.
|
[
{
"created": "Mon, 24 Dec 2018 21:43:04 GMT",
"version": "v1"
},
{
"created": "Wed, 9 Jan 2019 20:21:11 GMT",
"version": "v2"
},
{
"created": "Wed, 3 Apr 2019 22:59:43 GMT",
"version": "v3"
},
{
"created": "Sun, 22 Nov 2020 16:42:52 GMT",
"version": "v4"
},
{
"created": "Wed, 25 Nov 2020 07:49:43 GMT",
"version": "v5"
},
{
"created": "Sun, 1 Aug 2021 12:38:54 GMT",
"version": "v6"
},
{
"created": "Mon, 29 Nov 2021 09:14:24 GMT",
"version": "v7"
},
{
"created": "Mon, 10 Jan 2022 13:51:26 GMT",
"version": "v8"
}
] |
2023-06-22
|
[
[
"Kenig",
"Batya",
""
],
[
"Suciu",
"Dan",
""
]
] |
Integrity constraints such as functional dependencies (FD) and multi-valued dependencies (MVD) are fundamental in database schema design. Likewise, probabilistic conditional independences (CI) are crucial for reasoning about multivariate probability distributions. The implication problem studies whether a set of constraints (antecedents) implies another constraint (consequent), and has been investigated in both the database and the AI literature, under the assumption that all constraints hold exactly. However, many applications today consider constraints that hold only approximately. In this paper we define an approximate implication as a linear inequality between the degree of satisfaction of the antecedents and consequent, and we study the relaxation problem: when does an exact implication relax to an approximate implication? We use information theory to define the degree of satisfaction, and prove several results. First, we show that any implication from a set of data dependencies (MVDs+FDs) can be relaxed to a simple linear inequality with a factor at most quadratic in the number of variables; when the consequent is an FD, the factor can be reduced to 1. Second, we prove that there exists an implication between CIs that does not admit any relaxation; however, we prove that every implication between CIs relaxes "in the limit". Then, we show that the implication problem for differential constraints in market basket analysis also admits a relaxation with a factor equal to 1. Finally, we show how some of the results in the paper can be derived using the I-measure theory, which relates between information theoretic measures and set theory. Our results recover, and sometimes extend, previously known results about the implication problem: the implication of MVDs and FDs can be checked by considering only 2-tuple relations.
|
1609.01575
|
Stefan Rass
|
Stefan Rass
|
On the Existence of Weak One-Way Functions
| null | null | null | null |
cs.CC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This note is an attempt to unconditionally prove the existence of weak one
way functions (OWF). Starting from a provably intractable decision problem
$L_D$ (whose existence is nonconstructively assured from the well-known
discrete time-hierarchy theorem from complexity theory), we construct another
intractable decision problem $L\subseteq \{0,1\}^*$ that has its words
scattered across $\{0,1\}^\ell$ at a relative frequency $p(\ell)$, for which
upper and lower bounds can be worked out. The value $p(\ell)$ is computed from
the density of the language within $\{0,1\}^\ell$ divided by the total word
count $2^\ell$. It corresponds to the probability of retrieving a yes-instance
of a decision problem upon a uniformly random draw from $\{0,1\}^\ell$. The
trick to find a language with known bounds on $p(\ell)$ relies on switching
from $L_D$ to $L_0:=L_D\cap L'$, where $L'$ is an easy-to-decide language with
a known density across $\{0,1\}^*$. In defining $L'$ properly (and upon a
suitable G\"odel numbering), the hardness of deciding $L_D\cap L'$ is inherited
from $L_D$, while its density is controlled by that of $L'$. The lower and
upper approximation of $p(\ell)$ then let us construct an explicit threshold
function (as in random graph theory) that can be used to efficiently and
intentionally sample yes- or no-instances of the decision problem (language)
$L_0$ (however, without any auxiliary information that could ease the decision
like a polynomial witness). In turn, this allows to construct a weak OWF that
encodes a bit string $w\in\{0,1\}^*$ by efficiently (in polynomial time)
emitting a sequence of randomly constructed intractable decision problems,
whose answers correspond to the preimage $w$.
|
[
{
"created": "Tue, 6 Sep 2016 14:35:40 GMT",
"version": "v1"
},
{
"created": "Thu, 20 Oct 2016 14:01:38 GMT",
"version": "v2"
},
{
"created": "Thu, 2 Nov 2017 14:15:57 GMT",
"version": "v3"
},
{
"created": "Tue, 18 Jul 2023 13:13:05 GMT",
"version": "v4"
}
] |
2023-07-19
|
[
[
"Rass",
"Stefan",
""
]
] |
This note is an attempt to unconditionally prove the existence of weak one way functions (OWF). Starting from a provably intractable decision problem $L_D$ (whose existence is nonconstructively assured from the well-known discrete time-hierarchy theorem from complexity theory), we construct another intractable decision problem $L\subseteq \{0,1\}^*$ that has its words scattered across $\{0,1\}^\ell$ at a relative frequency $p(\ell)$, for which upper and lower bounds can be worked out. The value $p(\ell)$ is computed from the density of the language within $\{0,1\}^\ell$ divided by the total word count $2^\ell$. It corresponds to the probability of retrieving a yes-instance of a decision problem upon a uniformly random draw from $\{0,1\}^\ell$. The trick to find a language with known bounds on $p(\ell)$ relies on switching from $L_D$ to $L_0:=L_D\cap L'$, where $L'$ is an easy-to-decide language with a known density across $\{0,1\}^*$. In defining $L'$ properly (and upon a suitable G\"odel numbering), the hardness of deciding $L_D\cap L'$ is inherited from $L_D$, while its density is controlled by that of $L'$. The lower and upper approximation of $p(\ell)$ then let us construct an explicit threshold function (as in random graph theory) that can be used to efficiently and intentionally sample yes- or no-instances of the decision problem (language) $L_0$ (however, without any auxiliary information that could ease the decision like a polynomial witness). In turn, this allows to construct a weak OWF that encodes a bit string $w\in\{0,1\}^*$ by efficiently (in polynomial time) emitting a sequence of randomly constructed intractable decision problems, whose answers correspond to the preimage $w$.
|
2110.02782
|
Eugene Kharitonov
|
Eugene Kharitonov and Marco Baroni and Dieuwke Hupkes
|
How BPE Affects Memorization in Transformers
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Training data memorization in NLP can both be beneficial (e.g., closed-book
QA) and undesirable (personal data extraction). In any case, successful model
training requires a non-trivial amount of memorization to store word spellings,
various linguistic idiosyncrasies and common knowledge. However, little is
known about what affects the memorization behavior of NLP models, as the field
tends to focus on the equally important question of generalization. In this
work, we demonstrate that the size of the subword vocabulary learned by
Byte-Pair Encoding (BPE) greatly affects both ability and tendency of standard
Transformer models to memorize training data, even when we control for the
number of learned parameters. We find that with a large subword vocabulary
size, Transformer models fit random mappings more easily and are more
vulnerable to membership inference attacks. Similarly, given a prompt,
Transformer-based language models with large subword vocabularies reproduce the
training data more often. We conjecture this effect is caused by reduction in
the sequences' length that happens as the BPE vocabulary grows. Our findings
can allow a more informed choice of hyper-parameters, that is better tailored
for a particular use-case.
|
[
{
"created": "Wed, 6 Oct 2021 14:01:56 GMT",
"version": "v1"
},
{
"created": "Thu, 2 Dec 2021 09:54:05 GMT",
"version": "v2"
}
] |
2021-12-03
|
[
[
"Kharitonov",
"Eugene",
""
],
[
"Baroni",
"Marco",
""
],
[
"Hupkes",
"Dieuwke",
""
]
] |
Training data memorization in NLP can both be beneficial (e.g., closed-book QA) and undesirable (personal data extraction). In any case, successful model training requires a non-trivial amount of memorization to store word spellings, various linguistic idiosyncrasies and common knowledge. However, little is known about what affects the memorization behavior of NLP models, as the field tends to focus on the equally important question of generalization. In this work, we demonstrate that the size of the subword vocabulary learned by Byte-Pair Encoding (BPE) greatly affects both ability and tendency of standard Transformer models to memorize training data, even when we control for the number of learned parameters. We find that with a large subword vocabulary size, Transformer models fit random mappings more easily and are more vulnerable to membership inference attacks. Similarly, given a prompt, Transformer-based language models with large subword vocabularies reproduce the training data more often. We conjecture this effect is caused by reduction in the sequences' length that happens as the BPE vocabulary grows. Our findings can allow a more informed choice of hyper-parameters, that is better tailored for a particular use-case.
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.