| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2106.01805
|
Tiange Xiang
|
Tiange Xiang, Chaoyi Zhang, Yang Song, Siqi Liu, Hongliang Yuan,
Weidong Cai
|
Partial Graph Reasoning for Neural Network Regularization
|
Technical report
| null | null | null |
cs.LG cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Regularizers help deep neural networks prevent feature co-adaptations.
Dropout, as a commonly used regularization technique, stochastically disables
neuron activations during network optimization. However, such complete feature
disposal can affect the feature representation and network understanding.
Toward better descriptions of latent representations, we present DropGraph that
learns a regularization function by constructing a stand-alone graph from the
backbone features. DropGraph first samples stochastic spatial feature vectors
and then incorporates graph reasoning methods to generate feature map
distortions. This add-on graph regularizes the network during training and can
be completely skipped during inference. We provide intuitions on the linkage
between graph reasoning and Dropout, with further discussion of how the partial
graph reasoning method reduces feature correlations. To this end, we
extensively study the modeling of graph vertex dependencies and the utilization
of the graph for distorting backbone feature maps. DropGraph was validated on 4
tasks with a total of 8 different datasets. The experimental results show that
our method outperforms other state-of-the-art regularizers while leaving the
base model structure unmodified during inference.
|
[
{
"created": "Thu, 3 Jun 2021 12:57:01 GMT",
"version": "v1"
},
{
"created": "Mon, 24 Jan 2022 09:01:59 GMT",
"version": "v2"
}
] |
2022-01-25
|
[
[
"Xiang",
"Tiange",
""
],
[
"Zhang",
"Chaoyi",
""
],
[
"Song",
"Yang",
""
],
[
"Liu",
"Siqi",
""
],
[
"Yuan",
"Hongliang",
""
],
[
"Cai",
"Weidong",
""
]
] |
Regularizers help deep neural networks prevent feature co-adaptations. Dropout, as a commonly used regularization technique, stochastically disables neuron activations during network optimization. However, such complete feature disposal can affect the feature representation and network understanding. Toward better descriptions of latent representations, we present DropGraph that learns a regularization function by constructing a stand-alone graph from the backbone features. DropGraph first samples stochastic spatial feature vectors and then incorporates graph reasoning methods to generate feature map distortions. This add-on graph regularizes the network during training and can be completely skipped during inference. We provide intuitions on the linkage between graph reasoning and Dropout, with further discussion of how the partial graph reasoning method reduces feature correlations. To this end, we extensively study the modeling of graph vertex dependencies and the utilization of the graph for distorting backbone feature maps. DropGraph was validated on 4 tasks with a total of 8 different datasets. The experimental results show that our method outperforms other state-of-the-art regularizers while leaving the base model structure unmodified during inference.
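The Dropout baseline the abstract contrasts with can be sketched in a few lines of NumPy. This is a generic inverted-dropout mask, not the DropGraph module itself:

```python
import numpy as np

def dropout(x, p=0.5, rng=None):
    """Standard (inverted) dropout: zero each activation with probability p,
    then rescale survivors by 1/(1-p) so the expected activation is unchanged."""
    rng = np.random.default_rng(0) if rng is None else rng
    mask = rng.random(x.shape) >= p  # keep with probability 1-p
    return x * mask / (1.0 - p)

x = np.ones((4, 8))
y = dropout(x, p=0.5)  # entries are either 0.0 (dropped) or 2.0 (rescaled)
```

DropGraph replaces this complete disposal of activations with learned feature-map distortions, but at inference time both approaches reduce to the identity on the backbone.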
|
2403.07072
|
Kurt Butler
|
Kurt Butler, Guanchao Feng, Petar M. Djuric
|
Explainable Learning with Gaussian Processes
|
38 pages, 7 figures
| null | null | null |
cs.LG eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
The field of explainable artificial intelligence (XAI) attempts to develop
methods that provide insight into how complicated machine learning methods make
predictions. Many methods of explanation have focused on the concept of feature
attribution, a decomposition of the model's prediction into individual
contributions corresponding to each input feature. In this work, we explore the
problem of feature attribution in the context of Gaussian process regression
(GPR). We take a principled approach to defining attributions under model
uncertainty, extending the existing literature. We show that although GPR is a
highly flexible and non-parametric approach, we can derive interpretable,
closed-form expressions for the feature attributions. When using integrated
gradients as an attribution method, we show that the attributions of a GPR
model also follow a Gaussian process distribution, which quantifies the
uncertainty in attribution arising from uncertainty in the model. We
demonstrate, both through theory and experimentation, the versatility and
robustness of this approach. We also show that, when applicable, the exact
expressions for GPR attributions are both more accurate and less
computationally expensive than the approximations currently used in practice.
The source code for this project is freely available under the MIT license at
https://github.com/KurtButler/2024_attributions_paper.
|
[
{
"created": "Mon, 11 Mar 2024 18:03:02 GMT",
"version": "v1"
}
] |
2024-03-13
|
[
[
"Butler",
"Kurt",
""
],
[
"Feng",
"Guanchao",
""
],
[
"Djuric",
"Petar M.",
""
]
] |
The field of explainable artificial intelligence (XAI) attempts to develop methods that provide insight into how complicated machine learning methods make predictions. Many methods of explanation have focused on the concept of feature attribution, a decomposition of the model's prediction into individual contributions corresponding to each input feature. In this work, we explore the problem of feature attribution in the context of Gaussian process regression (GPR). We take a principled approach to defining attributions under model uncertainty, extending the existing literature. We show that although GPR is a highly flexible and non-parametric approach, we can derive interpretable, closed-form expressions for the feature attributions. When using integrated gradients as an attribution method, we show that the attributions of a GPR model also follow a Gaussian process distribution, which quantifies the uncertainty in attribution arising from uncertainty in the model. We demonstrate, both through theory and experimentation, the versatility and robustness of this approach. We also show that, when applicable, the exact expressions for GPR attributions are both more accurate and less computationally expensive than the approximations currently used in practice. The source code for this project is freely available under the MIT license at https://github.com/KurtButler/2024_attributions_paper.
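The integrated-gradients attribution method the abstract builds on can be checked on a toy differentiable model; the key "completeness" property is that attributions sum to f(x) - f(baseline). This is a generic NumPy sketch, not the paper's closed-form GPR attributions:

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=200):
    """Midpoint-rule approximation of integrated gradients:
    IG_i = (x_i - b_i) * integral_0^1 df/dx_i(b + a*(x-b)) da."""
    alphas = (np.arange(steps) + 0.5) / steps
    total = np.zeros_like(x)
    for a in alphas:
        total += grad_f(baseline + a * (x - baseline))
    return (x - baseline) * total / steps

# toy model: f(x) = sum(x^2), so grad f = 2x and IG_i = x_i^2
f = lambda x: np.sum(x ** 2)
grad_f = lambda x: 2 * x
x = np.array([1.0, -2.0, 3.0])
b = np.zeros(3)
attr = integrated_gradients(grad_f, x, b)
```

For this quadratic model the midpoint rule is exact, so `attr` equals `[1, 4, 9]` and sums to `f(x) - f(b) = 14`.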
|
1601.06908
|
Jerome Lacan
|
Alexandre Soro, Jerome Lacan, Vincent Roca, Valentin Savin and Mathieu
Cunche
|
Enhanced Recursive Reed-Muller Erasure Decoding
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent work has shown that Reed-Muller (RM) codes achieve the erasure
channel capacity. However, this performance is obtained with maximum-likelihood
decoding, which can be costly for practical applications. In this paper, we
propose an encoding/decoding scheme for Reed-Muller codes on the packet erasure
channel based on the Plotkin construction. We present several improvements over
the generic decoding. At a small cost, they compete with maximum-likelihood
decoding performance, especially on high-rate codes, while significantly
outperforming it in terms of speed.
|
[
{
"created": "Tue, 26 Jan 2016 07:13:00 GMT",
"version": "v1"
}
] |
2016-01-27
|
[
[
"Soro",
"Alexandre",
""
],
[
"Lacan",
"Jerome",
""
],
[
"Roca",
"Vincent",
""
],
[
"Savin",
"Valentin",
""
],
[
"Cunche",
"Mathieu",
""
]
] |
Recent work has shown that Reed-Muller (RM) codes achieve the erasure channel capacity. However, this performance is obtained with maximum-likelihood decoding, which can be costly for practical applications. In this paper, we propose an encoding/decoding scheme for Reed-Muller codes on the packet erasure channel based on the Plotkin construction. We present several improvements over the generic decoding. At a small cost, they compete with maximum-likelihood decoding performance, especially on high-rate codes, while significantly outperforming it in terms of speed.
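The Plotkin (u | u+v) construction the scheme is based on can be sketched over GF(2). The `plotkin_split` helper below only illustrates the algebra on an intact codeword; it is not the paper's erasure decoder:

```python
import numpy as np

def plotkin_encode(u, v):
    """Plotkin (u | u+v) construction over GF(2): concatenate u with u XOR v."""
    return np.concatenate([u, (u + v) % 2])

def plotkin_split(c):
    """Recover (u, v) from an intact (u | u+v) codeword: v = u XOR (u+v)."""
    n = len(c) // 2
    u, upv = c[:n], c[n:]
    return u, (u + upv) % 2

u = np.array([1, 0, 1, 1])
v = np.array([0, 1, 1, 0])
c = plotkin_encode(u, v)      # length doubles: 4 -> 8
u2, v2 = plotkin_split(c)
```

Recursive application of this construction is what gives RM codes their structure, and the paper's decoder exploits that recursion on the packet erasure channel.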
|
2406.08444
|
Wei-Tung Lin
|
Wei-Tung Lin and Yong-Xiang Lin and Jyun-Wei Chen and Kai-Lung Hua
|
PixMamba: Leveraging State Space Models in a Dual-Level Architecture for
Underwater Image Enhancement
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Underwater Image Enhancement (UIE) is critical for marine research and
exploration but hindered by complex color distortions and severe blurring.
Recent deep learning-based methods have achieved remarkable results, yet these
methods struggle with high computational costs and insufficient global
modeling, resulting in locally under- or over-adjusted regions. We present
PixMamba, a novel architecture designed to overcome these challenges by
leveraging State Space Models (SSMs) for efficient global dependency modeling.
Unlike convolutional neural networks (CNNs) with limited receptive fields and
transformer networks with high computational costs, PixMamba efficiently
captures global contextual information while maintaining computational
efficiency. Our dual-level strategy features the patch-level Efficient Mamba
Net (EMNet), which reconstructs enhanced image features, and the pixel-level
PixMamba Net (PixNet), which ensures the fine-grained feature capture and global
consistency of the enhanced image that were previously difficult to obtain.
PixMamba achieves state-of-the-art performance across various underwater image
datasets and delivers visually superior results. Code is available at:
https://github.com/weitunglin/pixmamba.
|
[
{
"created": "Wed, 12 Jun 2024 17:34:38 GMT",
"version": "v1"
}
] |
2024-06-13
|
[
[
"Lin",
"Wei-Tung",
""
],
[
"Lin",
"Yong-Xiang",
""
],
[
"Chen",
"Jyun-Wei",
""
],
[
"Hua",
"Kai-Lung",
""
]
] |
Underwater Image Enhancement (UIE) is critical for marine research and exploration but hindered by complex color distortions and severe blurring. Recent deep learning-based methods have achieved remarkable results, yet these methods struggle with high computational costs and insufficient global modeling, resulting in locally under- or over-adjusted regions. We present PixMamba, a novel architecture designed to overcome these challenges by leveraging State Space Models (SSMs) for efficient global dependency modeling. Unlike convolutional neural networks (CNNs) with limited receptive fields and transformer networks with high computational costs, PixMamba efficiently captures global contextual information while maintaining computational efficiency. Our dual-level strategy features the patch-level Efficient Mamba Net (EMNet), which reconstructs enhanced image features, and the pixel-level PixMamba Net (PixNet), which ensures the fine-grained feature capture and global consistency of the enhanced image that were previously difficult to obtain. PixMamba achieves state-of-the-art performance across various underwater image datasets and delivers visually superior results. Code is available at: https://github.com/weitunglin/pixmamba.
|
1410.7330
|
Edvin Wedin
|
Peter Hegarty, Anders Martinsson and Edvin Wedin
|
The Hegselmann-Krause dynamics on the circle converge
|
9 pages, 2 figures. Version 2: A small error in the proof of Theorem
1.1 is corrected and an acknowledgement added. Bibliography updated
| null | null | null |
cs.SY math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the Hegselmann-Krause dynamics on a one-dimensional torus and
provide the first proof of convergence of this system. The proof requires only
fairly minor modifications of existing methods for proving convergence in
Euclidean space.
|
[
{
"created": "Mon, 27 Oct 2014 17:50:12 GMT",
"version": "v1"
},
{
"created": "Mon, 6 Apr 2015 17:19:06 GMT",
"version": "v2"
}
] |
2015-04-07
|
[
[
"Hegarty",
"Peter",
""
],
[
"Martinsson",
"Anders",
""
],
[
"Wedin",
"Edvin",
""
]
] |
We consider the Hegselmann-Krause dynamics on a one-dimensional torus and provide the first proof of convergence of this system. The proof requires only fairly minor modifications of existing methods for proving convergence in Euclidean space.
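A minimal simulation of the dynamics on the circle can be written in a few lines, assuming a synchronous update in which each agent moves to the circular mean of all opinions within arc distance `r` (an illustrative reading of the model, not the paper's exact formulation):

```python
import numpy as np

def hk_step(theta, r):
    """One synchronous Hegselmann-Krause update on the unit circle:
    each agent moves to the circular mean (via unit-vector averaging)
    of all opinions within arc distance r of its own."""
    theta = np.asarray(theta, dtype=float)
    new = np.empty_like(theta)
    for i, t in enumerate(theta):
        d = np.abs((theta - t + np.pi) % (2 * np.pi) - np.pi)  # arc distance
        nb = theta[d <= r]                                      # confidence set
        new[i] = np.arctan2(np.sin(nb).mean(), np.cos(nb).mean()) % (2 * np.pi)
    return new

theta = np.array([0.0, 0.1, 0.2])
after = hk_step(theta, r=0.5)  # all three agents see each other
```

With mutual visibility the cluster collapses to its circular mean (here 0.1) in one step; the paper's contribution is proving that such configurations always converge on the torus.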
|
1309.1264
|
EPTCS
|
Kenichi Morita (Hiroshima University)
|
Reversible Logic Elements with Memory and Their Universality
|
In Proceedings MCU 2013, arXiv:1309.1043
|
EPTCS 128, 2013, pp. 3-14
|
10.4204/EPTCS.128.3
| null |
cs.FL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reversible computing is a paradigm of computation that reflects physical
reversibility, one of the fundamental microscopic laws of Nature. In this
survey, we discuss topics on reversible logic elements with memory (RLEM),
which can be used to build reversible computing systems, and their
universality. An RLEM is called universal if any reversible sequential machine
(RSM) can be realized as a circuit composed only of it. Since a finite-state
control and a tape cell of a reversible Turing machine (RTM) are formalized as
RSMs, any RTM can be constructed from a universal RLEM. Here, we investigate
2-state RLEMs, and show that infinitely many kinds of non-degenerate RLEMs are
all universal, with only four exceptions. Non-universality of these
exceptional RLEMs is also argued.
|
[
{
"created": "Thu, 5 Sep 2013 08:06:55 GMT",
"version": "v1"
}
] |
2013-09-06
|
[
[
"Morita",
"Kenichi",
"",
"Hiroshima University"
]
] |
Reversible computing is a paradigm of computation that reflects physical reversibility, one of the fundamental microscopic laws of Nature. In this survey, we discuss topics on reversible logic elements with memory (RLEM), which can be used to build reversible computing systems, and their universality. An RLEM is called universal if any reversible sequential machine (RSM) can be realized as a circuit composed only of it. Since a finite-state control and a tape cell of a reversible Turing machine (RTM) are formalized as RSMs, any RTM can be constructed from a universal RLEM. Here, we investigate 2-state RLEMs, and show that infinitely many kinds of non-degenerate RLEMs are all universal, with only four exceptions. Non-universality of these exceptional RLEMs is also argued.
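The notion of a reversible sequential machine can be illustrated with a toy 2-state example: the transition map (state, input) -> (next state, output) is a bijection, so every computation step can be undone. This is a hypothetical machine for illustration, not one of the paper's RLEMs:

```python
# Toy 2-state RSM: states {H, V}, inputs {a, b}, outputs {x, y}.
# The transition map is a bijection on pairs, hence reversible.
delta = {
    ('H', 'a'): ('H', 'x'),
    ('H', 'b'): ('V', 'x'),
    ('V', 'a'): ('H', 'y'),
    ('V', 'b'): ('V', 'y'),
}
assert len(set(delta.values())) == len(delta)  # bijective => invertible
inv = {v: k for k, v in delta.items()}

def run(state, inputs):
    """Run the machine forward, collecting outputs."""
    out = []
    for s in inputs:
        state, o = delta[(state, s)]
        out.append(o)
    return state, out

def run_back(state, outputs):
    """Undo a run: recover the initial state and input sequence."""
    ins = []
    for o in reversed(outputs):
        state, s = inv[(state, o)]
        ins.append(s)
    return state, list(reversed(ins))

end, outs = run('H', ['a', 'b', 'a'])
start, ins = run_back(end, outs)   # recovers 'H' and ['a', 'b', 'a']
```

Universality of an RLEM means any such RSM can be realized by wiring together copies of that single element.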
|
1510.07176
|
Marco Levorato
|
Qing Han, Chenxi Wang, Marco Levorato and Osvaldo Simeone
|
On the Effect of Fronthaul Latency on ARQ in C-RAN Systems
|
Submitted
| null | null | null |
cs.IT cs.NI math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the Cloud Radio Access Network (C-RAN) architecture, a Control Unit (CU)
implements the baseband processing functionalities of a cluster of Base
Stations (BSs), which are connected to it through a fronthaul network. This
architecture enables centralized processing at the CU, and hence the
implementation of enhanced interference mitigation strategies, but it also
entails an increased decoding latency due to the transport on the fronthaul
network. The fronthaul latency may offset the benefits of centralized
processing when considering the performance of protocols at layer 2 and above.
This letter studies the impact of fronthaul latency on the performance of
standard Automatic Retransmission reQuest (ARQ) protocols, namely Stop and
Wait, Go-Back-N and Selective Repeat. The performance of the C-RAN architecture
in terms of throughput and efficiency is compared to that of a conventional
cellular system with local processing, as well as with that of a proposed
hybrid C-RAN system in which BSs can perform decoding. The dynamics of the
system are modeled as a multi-dimensional Markov process that includes
sub-chains to capture the temporal correlation of interference and channel
gains. Numerical results yield insights into the impact of system parameters
such as fronthaul latency and signal-to-interference ratio on different ARQ
protocols.
|
[
{
"created": "Sat, 24 Oct 2015 19:16:59 GMT",
"version": "v1"
}
] |
2015-10-27
|
[
[
"Han",
"Qing",
""
],
[
"Wang",
"Chenxi",
""
],
[
"Levorato",
"Marco",
""
],
[
"Simeone",
"Osvaldo",
""
]
] |
In the Cloud Radio Access Network (C-RAN) architecture, a Control Unit (CU) implements the baseband processing functionalities of a cluster of Base Stations (BSs), which are connected to it through a fronthaul network. This architecture enables centralized processing at the CU, and hence the implementation of enhanced interference mitigation strategies, but it also entails an increased decoding latency due to the transport on the fronthaul network. The fronthaul latency may offset the benefits of centralized processing when considering the performance of protocols at layer 2 and above. This letter studies the impact of fronthaul latency on the performance of standard Automatic Retransmission reQuest (ARQ) protocols, namely Stop and Wait, Go-Back-N and Selective Repeat. The performance of the C-RAN architecture in terms of throughput and efficiency is compared to that of a conventional cellular system with local processing, as well as with that of a proposed hybrid C-RAN system in which BSs can perform decoding. The dynamics of the system are modeled as a multi-dimensional Markov process that includes sub-chains to capture the temporal correlation of interference and channel gains. Numerical results yield insights into the impact of system parameters such as fronthaul latency and signal-to-interference ratio on different ARQ protocols.
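A rough intuition for why fronthaul latency hurts ARQ comes from the textbook Stop-and-Wait efficiency, with the fronthaul delay folded into the acknowledgment loop. This is a deliberate simplification with geometric retransmissions, not the paper's Markov analysis:

```python
def sw_arq_efficiency(t_frame, t_prop, t_fronthaul, p_err):
    """Textbook Stop-and-Wait efficiency: each frame occupies the channel for
    t_frame plus a round trip 2*(t_prop + t_fronthaul), and is retransmitted
    a geometric number of times with frame-error probability p_err."""
    rtt = 2 * (t_prop + t_fronthaul)
    expected_tx = 1.0 / (1.0 - p_err)        # mean transmissions per frame
    return t_frame / ((t_frame + rtt) * expected_tx)

no_fh   = sw_arq_efficiency(1.0, 0.1, 0.0, 0.1)  # local processing
with_fh = sw_arq_efficiency(1.0, 0.1, 0.5, 0.1)  # centralized, extra latency
```

Even in this crude model, adding fronthaul delay to the ACK loop visibly depresses efficiency, which is the effect the letter quantifies precisely for Stop and Wait, Go-Back-N and Selective Repeat.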
|
1801.09796
|
Maiara F. Bollauf
|
Maiara F. Bollauf, Vinay A. Vaishampayan, Sueli I. R. Costa
|
Communication-Efficient Search for an Approximate Closest Lattice Point
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the problem of finding the closest lattice point to a vector in
n-dimensional Euclidean space when each component of the vector is available at
a distinct node in a network. Our objectives are (i) to minimize the
communication cost and (ii) to determine the error probability. The approximate closest lattice
point considered here is the one obtained using the nearest-plane (Babai)
algorithm. Assuming a triangular special basis for the lattice, we develop
communication-efficient protocols for computing the approximate lattice point
and determine the communication cost for lattices of dimension n>1. Based on
available parameterizations of reduced bases, we determine the error
probability of the nearest plane algorithm for two dimensional lattices
analytically, and present a computational error estimation algorithm in three
dimensions. For dimensions 2 and 3, our results show that the error probability
increases with the packing density of the lattice.
|
[
{
"created": "Mon, 29 Jan 2018 23:31:36 GMT",
"version": "v1"
}
] |
2018-01-31
|
[
[
"Bollauf",
"Maiara F.",
""
],
[
"Vaishampayan",
"Vinay A.",
""
],
[
"Costa",
"Sueli I. R.",
""
]
] |
We consider the problem of finding the closest lattice point to a vector in n-dimensional Euclidean space when each component of the vector is available at a distinct node in a network. Our objectives are (i) to minimize the communication cost and (ii) to determine the error probability. The approximate closest lattice point considered here is the one obtained using the nearest-plane (Babai) algorithm. Assuming a triangular special basis for the lattice, we develop communication-efficient protocols for computing the approximate lattice point and determine the communication cost for lattices of dimension n>1. Based on available parameterizations of reduced bases, we determine the error probability of the nearest plane algorithm for two dimensional lattices analytically, and present a computational error estimation algorithm in three dimensions. For dimensions 2 and 3, our results show that the error probability increases with the packing density of the lattice.
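The nearest-plane (Babai) algorithm itself can be sketched in NumPy: walk the Gram-Schmidt vectors from last to first, rounding the coefficient at each step. This is the generic centralized version, not the paper's distributed, communication-efficient protocol:

```python
import numpy as np

def gram_schmidt(B):
    """Classical Gram-Schmidt orthogonalization of the basis rows (no normalization)."""
    Bs = B.astype(float).copy()
    for i in range(len(B)):
        for j in range(i):
            Bs[i] -= (Bs[i] @ Bs[j]) / (Bs[j] @ Bs[j]) * Bs[j]
    return Bs

def babai_nearest_plane(B, t):
    """Babai's nearest-plane algorithm on basis rows B: from the last basis
    vector down, round the Gram-Schmidt coefficient of the residual and
    accumulate the corresponding lattice vector."""
    Bs = gram_schmidt(B)
    b = np.array(t, dtype=float)
    v = np.zeros_like(b)
    for i in range(len(B) - 1, -1, -1):
        c = round((b @ Bs[i]) / (Bs[i] @ Bs[i]))
        v += c * B[i]
        b -= c * B[i]
    return v

B = np.array([[1.0, 0.0], [0.0, 1.0]])      # orthogonal basis: reduces to rounding
v = babai_nearest_plane(B, [0.4, 2.6])      # -> lattice point (0, 3)
```

With an orthogonal basis the algorithm is exact componentwise rounding; for skewed (especially unreduced) bases it is only approximate, which is where the error probability studied in the paper comes from.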
|
1311.0536
|
Nikos Bikakis
|
Nikos Bikakis, Chrisa Tsinaraki, Ioannis Stavrakantonakis, Nektarios
Gioldasis, Stavros Christodoulakis
|
The SPARQL2XQuery Interoperability Framework. Utilizing Schema Mapping,
Schema Transformation and Query Translation to Integrate XML and the Semantic
Web
|
To appear in World Wide Web Journal (WWWJ), Springer 2013
| null | null | null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Web of Data is an open environment consisting of a great number of large
inter-linked RDF datasets from various domains. In this environment,
organizations and companies adopt the Linked Data practices utilizing Semantic
Web (SW) technologies, in order to publish their data and offer SPARQL
endpoints (i.e., SPARQL-based search services). On the other hand, the dominant
standard for information exchange in the Web today is XML. The SW and XML
worlds and their developed infrastructures are based on different data models,
semantics and query languages. Thus, it is crucial to develop interoperability
mechanisms that allow the Web of Data users to access XML datasets, using
SPARQL, from their own working environments. It is unrealistic to expect that
all the existing legacy data (e.g., Relational, XML, etc.) will be transformed
into SW data. Therefore, publishing legacy data as Linked Data and providing
SPARQL endpoints over them has become a major research challenge. In this
direction, we introduce the SPARQL2XQuery Framework which creates an
interoperable environment, where SPARQL queries are automatically translated to
XQuery queries, in order to access XML data across the Web. The SPARQL2XQuery
Framework provides a mapping model for the expression of OWL-RDF/S to XML
Schema mappings as well as a method for SPARQL to XQuery translation. To this
end, our Framework supports both manual and automatic mapping specification
between ontologies and XML Schemas. In the automatic mapping specification
scenario, the SPARQL2XQuery exploits the XS2OWL component which transforms XML
Schemas into OWL ontologies. Finally, extensive experiments have been conducted
in order to evaluate the schema transformation, mapping generation, query
translation and query evaluation efficiency, using both real and synthetic
datasets.
|
[
{
"created": "Sun, 3 Nov 2013 21:57:48 GMT",
"version": "v1"
},
{
"created": "Thu, 26 Dec 2013 00:20:14 GMT",
"version": "v2"
},
{
"created": "Thu, 2 Jan 2014 02:53:19 GMT",
"version": "v3"
}
] |
2014-01-03
|
[
[
"Bikakis",
"Nikos",
""
],
[
"Tsinaraki",
"Chrisa",
""
],
[
"Stavrakantonakis",
"Ioannis",
""
],
[
"Gioldasis",
"Nektarios",
""
],
[
"Christodoulakis",
"Stavros",
""
]
] |
The Web of Data is an open environment consisting of a great number of large inter-linked RDF datasets from various domains. In this environment, organizations and companies adopt the Linked Data practices utilizing Semantic Web (SW) technologies, in order to publish their data and offer SPARQL endpoints (i.e., SPARQL-based search services). On the other hand, the dominant standard for information exchange in the Web today is XML. The SW and XML worlds and their developed infrastructures are based on different data models, semantics and query languages. Thus, it is crucial to develop interoperability mechanisms that allow the Web of Data users to access XML datasets, using SPARQL, from their own working environments. It is unrealistic to expect that all the existing legacy data (e.g., Relational, XML, etc.) will be transformed into SW data. Therefore, publishing legacy data as Linked Data and providing SPARQL endpoints over them has become a major research challenge. In this direction, we introduce the SPARQL2XQuery Framework which creates an interoperable environment, where SPARQL queries are automatically translated to XQuery queries, in order to access XML data across the Web. The SPARQL2XQuery Framework provides a mapping model for the expression of OWL-RDF/S to XML Schema mappings as well as a method for SPARQL to XQuery translation. To this end, our Framework supports both manual and automatic mapping specification between ontologies and XML Schemas. In the automatic mapping specification scenario, the SPARQL2XQuery exploits the XS2OWL component which transforms XML Schemas into OWL ontologies. Finally, extensive experiments have been conducted in order to evaluate the schema transformation, mapping generation, query translation and query evaluation efficiency, using both real and synthetic datasets.
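The flavor of the translation can be suggested with a toy example that maps a single SPARQL triple pattern onto an XQuery FLWOR expression under a fixed, hypothetical ontology-to-XML mapping. The SPARQL2XQuery Framework derives such mappings from OWL-RDF/S to XML Schema automatically; the function and mapping below are purely illustrative:

```python
def translate_triple(subject_var, xml_path, object_var, child):
    """Toy translation of one SPARQL triple pattern into an XQuery FLWOR
    expression, given a hand-supplied mapping from the predicate to an
    XML element path (illustrative only)."""
    return (f"for ${subject_var} in {xml_path}\n"
            f"let ${object_var} := ${subject_var}/{child}\n"
            f"return ${object_var}")

# SPARQL pattern:  ?book dc:title ?title
# hypothetical mapping: the ontology class maps to //book, dc:title to <title>
xq = translate_triple("book", "//book", "title", "title")
```

Real translation must additionally handle FILTERs, OPTIONALs, joins across triple patterns, and schema transformation, which is what the Framework's mapping model and translation method address.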
|
1302.2167
|
Kartik Venkat
|
Kartik Venkat, Tsachy Weissman, Yair Carmon and Shlomo Shamai
|
Information, Estimation, and Lookahead in the Gaussian channel
|
30 pages, 10 figures, submitted to IEEE Transactions on Information
Theory
| null |
10.1109/TSP.2016.2544748
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider mean squared estimation with lookahead of a continuous-time
signal corrupted by additive white Gaussian noise. We show that the mutual
information rate function, i.e., the mutual information rate as a function of the
signal-to-noise ratio (SNR), does not, in general, determine the minimum mean
squared error (MMSE) with fixed finite lookahead, in contrast to the special
cases with 0 and infinite lookahead (filtering and smoothing errors),
respectively, which were previously established in the literature. We also
establish a new expectation identity under a generalized observation model
where the Gaussian channel has an SNR jump at $t=0$, capturing the tradeoff
between lookahead and SNR.
Further, we study the class of continuous-time stationary Gauss-Markov
processes (Ornstein-Uhlenbeck processes) as channel inputs, and explicitly
characterize the behavior of the minimum mean squared error (MMSE) with finite
lookahead and signal-to-noise ratio (SNR). The MMSE with lookahead is shown to
converge exponentially rapidly to the non-causal error, with the exponent being
the reciprocal of the non-causal error. We extend our results to mixtures of
Ornstein-Uhlenbeck processes, and use the insight gained to present lower and
upper bounds on the MMSE with lookahead for a class of stationary Gaussian
input processes, whose spectrum can be expressed as a mixture of
Ornstein-Uhlenbeck spectra.
|
[
{
"created": "Fri, 8 Feb 2013 22:33:26 GMT",
"version": "v1"
}
] |
2016-11-18
|
[
[
"Venkat",
"Kartik",
""
],
[
"Weissman",
"Tsachy",
""
],
[
"Carmon",
"Yair",
""
],
[
"Shamai",
"Shlomo",
""
]
] |
We consider mean squared estimation with lookahead of a continuous-time signal corrupted by additive white Gaussian noise. We show that the mutual information rate function, i.e., the mutual information rate as a function of the signal-to-noise ratio (SNR), does not, in general, determine the minimum mean squared error (MMSE) with fixed finite lookahead, in contrast to the special cases with 0 and infinite lookahead (filtering and smoothing errors), respectively, which were previously established in the literature. We also establish a new expectation identity under a generalized observation model where the Gaussian channel has an SNR jump at $t=0$, capturing the tradeoff between lookahead and SNR. Further, we study the class of continuous-time stationary Gauss-Markov processes (Ornstein-Uhlenbeck processes) as channel inputs, and explicitly characterize the behavior of the minimum mean squared error (MMSE) with finite lookahead and signal-to-noise ratio (SNR). The MMSE with lookahead is shown to converge exponentially rapidly to the non-causal error, with the exponent being the reciprocal of the non-causal error. We extend our results to mixtures of Ornstein-Uhlenbeck processes, and use the insight gained to present lower and upper bounds on the MMSE with lookahead for a class of stationary Gaussian input processes, whose spectrum can be expressed as a mixture of Ornstein-Uhlenbeck spectra.
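For Ornstein-Uhlenbeck inputs, the convergence claim in the abstract has the following shape, with lookahead $d$ and a constant $C$ left unspecified here (this merely restates the stated rate; the exact constant is derived in the paper):

```latex
\mathrm{mmse}(d) - \mathrm{mmse}(\infty) \;\approx\; C\, e^{-d/\mathrm{mmse}(\infty)},
\qquad d \to \infty,
```

where $\mathrm{mmse}(d)$ denotes the MMSE with lookahead $d$ and $\mathrm{mmse}(\infty)$ the non-causal (smoothing) error, whose reciprocal is the decay exponent.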
|
2101.12584
|
Yakup Kutlu
|
Yakup Kutlu, Zülfü Alanoglu, Ahmet Gökçen, Mustafa Yeniad
|
Raspberry Pi Based Intelligent Robot that Recognizes and Places Puzzle
Objects
|
5 pages, in Turkish language, 8 figures, journal of intelligent
systems with applications
| null | null | null |
cs.RO cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
In this study, in order to diagnose congestive heart failure (CHF) patients,
non-linear second-order difference plots (SODPs) obtained from raw ECG records
sampled at 256 Hz and windowed over different durations are used. All of the
data rows are labelled with the subjects they belong to, so that classification
is much more realistic. SODPs are divided into quadrant regions of different
radii, and the numbers of points falling in the quadrants are computed in order
to extract feature vectors. Fisher's linear discriminant, Naive Bayes, and an
artificial neural network are used as classifiers. The results are evaluated
with two validation methods: general k-fold cross-validation and patient-based
cross-validation. As a result, it is shown that, using a neural network
classifier with features obtained from SODPs, the constructed system could
distinguish normal subjects and CHF patients with a 100% accuracy rate.
|
[
{
"created": "Wed, 20 Jan 2021 18:58:59 GMT",
"version": "v1"
}
] |
2021-02-01
|
[
[
"Kutlu",
"Yakup",
""
],
[
"Alanoglu",
"Zülfü",
""
],
[
"Gökçen",
"Ahmet",
""
],
[
"Yeniad",
"Mustafa",
""
]
] |
In this study, in order to diagnose congestive heart failure (CHF) patients, non-linear second-order difference plots (SODPs) obtained from raw ECG records sampled at 256 Hz and windowed over different durations are used. All of the data rows are labelled with the subjects they belong to, so that classification is much more realistic. SODPs are divided into quadrant regions of different radii, and the numbers of points falling in the quadrants are computed in order to extract feature vectors. Fisher's linear discriminant, Naive Bayes, and an artificial neural network are used as classifiers. The results are evaluated with two validation methods: general k-fold cross-validation and patient-based cross-validation. As a result, it is shown that, using a neural network classifier with features obtained from SODPs, the constructed system could distinguish normal subjects and CHF patients with a 100% accuracy rate.
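The second-order difference plot (SODP) feature extraction can be sketched in NumPy. The sketch below counts points per quadrant only; the radius-based sub-regions used in the study are omitted:

```python
import numpy as np

def sodp_points(x):
    """Second-order difference plot: points (x[n+1]-x[n], x[n+2]-x[n+1])."""
    d = np.diff(np.asarray(x, dtype=float))
    return d[:-1], d[1:]

def quadrant_counts(u, v):
    """Feature vector: number of SODP points in each of the four quadrants."""
    return [int(np.sum((u >= 0) & (v >= 0))),   # Q1
            int(np.sum((u < 0) & (v >= 0))),    # Q2
            int(np.sum((u < 0) & (v < 0))),     # Q3
            int(np.sum((u >= 0) & (v < 0)))]    # Q4

u, v = sodp_points([0.0, 1.0, 3.0, 2.0, 0.0])   # diffs: 1, 2, -1, -2
feats = quadrant_counts(u, v)                    # points (1,2), (2,-1), (-1,-2)
```

Such counts, computed per radius band in the study, form the feature vectors fed to the classifiers.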
|
2209.02851
|
Deepthi Raghunandan
|
Deepthi Raghunandan, Aayushi Roy, Shenzhi Shi, Niklas Elmqvist, and
Leilani Battle
|
Code Code Evolution: Understanding How People Change Data Science
Notebooks Over Time
| null | null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Sensemaking is the iterative process of identifying, extracting, and
explaining insights from data, where each iteration is referred to as the
"sensemaking loop." Although recent work observes snapshots of the sensemaking
loop within computational notebooks, none measure shifts in sensemaking
behaviors over time -- between exploration and explanation. This gap limits our
ability to understand the full scope of the sensemaking process and thus our
ability to design tools to fully support sensemaking. We contribute the first
quantitative method to characterize how sensemaking evolves within data science
computational notebooks. To this end, we conducted a quantitative study of
2,574 Jupyter notebooks mined from GitHub. First, we identify data
science-focused notebooks that have undergone significant iterations. Second,
we present regression models that automatically characterize sensemaking
activity within individual notebooks by assigning them a score representing
their position within the sensemaking spectrum. Finally, we use our regression
models to calculate and analyze shifts in notebook scores across GitHub
versions. Our results show that notebook authors participate in a diverse range
of sensemaking tasks over time, such as annotation, branching analysis, and
documentation. Finally, we propose design recommendations for extending
notebook environments to support the sensemaking behaviors we observed.
|
[
{
"created": "Tue, 6 Sep 2022 23:24:24 GMT",
"version": "v1"
},
{
"created": "Thu, 8 Sep 2022 14:07:16 GMT",
"version": "v2"
}
] |
2022-09-09
|
[
[
"Raghunandan",
"Deepthi",
""
],
[
"Roy",
"Aayushi",
""
],
[
"Shi",
"Shenzhi",
""
],
[
"Elmqvist",
"Niklas",
""
],
[
"Battle",
"Leilani",
""
]
] |
Sensemaking is the iterative process of identifying, extracting, and explaining insights from data, where each iteration is referred to as the "sensemaking loop." Although recent work observes snapshots of the sensemaking loop within computational notebooks, none measure shifts in sensemaking behaviors over time -- between exploration and explanation. This gap limits our ability to understand the full scope of the sensemaking process and thus our ability to design tools to fully support sensemaking. We contribute the first quantitative method to characterize how sensemaking evolves within data science computational notebooks. To this end, we conducted a quantitative study of 2,574 Jupyter notebooks mined from GitHub. First, we identify data science-focused notebooks that have undergone significant iterations. Second, we present regression models that automatically characterize sensemaking activity within individual notebooks by assigning them a score representing their position within the sensemaking spectrum. Third, we use our regression models to calculate and analyze shifts in notebook scores across GitHub versions. Our results show that notebook authors participate in a diverse range of sensemaking tasks over time, such as annotation, branching analysis, and documentation. Finally, we propose design recommendations for extending notebook environments to support the sensemaking behaviors we observed.
|
1510.00571
|
Jeff Erickson
|
Hsien-Chih Chang and Jeff Erickson
|
Electrical Reduction, Homotopy Moves, and Defect
|
27 pages, 15 figures
| null | null | null |
cs.CG math.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We prove the first nontrivial worst-case lower bounds for two closely related
problems. First, $\Omega(n^{3/2})$ degree-1 reductions, series-parallel
reductions, and $\Delta$Y transformations are required in the worst case to
reduce an $n$-vertex plane graph to a single vertex or edge. The lower bound is
achieved by any planar graph with treewidth $\Theta(\sqrt{n})$. Second,
$\Omega(n^{3/2})$ homotopy moves are required in the worst case to reduce a
closed curve in the plane with $n$ self-intersection points to a simple closed
curve. For both problems, the best upper bound known is $O(n^2)$, and the only
lower bound previously known was the trivial $\Omega(n)$.
The first lower bound follows from the second using medial graph techniques
ultimately due to Steinitz, together with more recent arguments of Noble and
Welsh [J. Graph Theory 2000]. The lower bound on homotopy moves follows from an
observation by Hayashi et al. [J. Knot Theory Ramif. 2012] that the standard
projections of certain torus knots have large defect, a topological invariant
of generic closed curves introduced by Aicardi and Arnold. Finally, we prove
that every closed curve in the plane with $n$ crossings has defect
$O(n^{3/2})$, which implies that better lower bounds for our algorithmic
problems will require different techniques.
|
[
{
"created": "Fri, 2 Oct 2015 12:03:29 GMT",
"version": "v1"
}
] |
2015-10-05
|
[
[
"Chang",
"Hsien-Chih",
""
],
[
"Erickson",
"Jeff",
""
]
] |
We prove the first nontrivial worst-case lower bounds for two closely related problems. First, $\Omega(n^{3/2})$ degree-1 reductions, series-parallel reductions, and $\Delta$Y transformations are required in the worst case to reduce an $n$-vertex plane graph to a single vertex or edge. The lower bound is achieved by any planar graph with treewidth $\Theta(\sqrt{n})$. Second, $\Omega(n^{3/2})$ homotopy moves are required in the worst case to reduce a closed curve in the plane with $n$ self-intersection points to a simple closed curve. For both problems, the best upper bound known is $O(n^2)$, and the only lower bound previously known was the trivial $\Omega(n)$. The first lower bound follows from the second using medial graph techniques ultimately due to Steinitz, together with more recent arguments of Noble and Welsh [J. Graph Theory 2000]. The lower bound on homotopy moves follows from an observation by Hayashi et al. [J. Knot Theory Ramif. 2012] that the standard projections of certain torus knots have large defect, a topological invariant of generic closed curves introduced by Aicardi and Arnold. Finally, we prove that every closed curve in the plane with $n$ crossings has defect $O(n^{3/2})$, which implies that better lower bounds for our algorithmic problems will require different techniques.
|
2105.12900
|
Weijia Xu
|
Weijia Xu, Shuming Ma, Dongdong Zhang, Marine Carpuat
|
How Does Distilled Data Complexity Impact the Quality and Confidence of
Non-Autoregressive Machine Translation?
|
Findings of ACL 2021
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While non-autoregressive (NAR) models are showing great promise for machine
translation, their use is limited by their dependence on knowledge distillation
from autoregressive models. To address this issue, we seek to understand why
distillation is so effective. Prior work suggests that distilled training data
is less complex than manual translations. Based on experiments with the
Levenshtein Transformer and the Mask-Predict NAR models on the WMT14
German-English task, this paper shows that different types of complexity have
different impacts: while reducing lexical diversity and decreasing reordering
complexity both help NAR learn better alignment between source and target, and
thus improve translation quality, lexical diversity is the main reason why
distillation increases model confidence, which affects the calibration of
different NAR models differently.
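One simple proxy for the lexical-diversity notion of complexity discussed above is the corpus type-token ratio; the sketch below is illustrative only and is not the paper's actual complexity measure:

```python
def type_token_ratio(sentences):
    """Type-token ratio: number of distinct tokens / total tokens.

    A lower ratio indicates lower lexical diversity, the property
    reported for distilled training data relative to manual translations.
    Whitespace tokenization here is a simplifying assumption.
    """
    tokens = [tok for sent in sentences for tok in sent.split()]
    return len(set(tokens)) / len(tokens)
```

Comparing the ratio on distilled versus reference target sides gives a quick check of whether distillation reduced lexical diversity.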
|
[
{
"created": "Thu, 27 May 2021 01:19:11 GMT",
"version": "v1"
}
] |
2021-05-28
|
[
[
"Xu",
"Weijia",
""
],
[
"Ma",
"Shuming",
""
],
[
"Zhang",
"Dongdong",
""
],
[
"Carpuat",
"Marine",
""
]
] |
While non-autoregressive (NAR) models are showing great promise for machine translation, their use is limited by their dependence on knowledge distillation from autoregressive models. To address this issue, we seek to understand why distillation is so effective. Prior work suggests that distilled training data is less complex than manual translations. Based on experiments with the Levenshtein Transformer and the Mask-Predict NAR models on the WMT14 German-English task, this paper shows that different types of complexity have different impacts: while reducing lexical diversity and decreasing reordering complexity both help NAR learn better alignment between source and target, and thus improve translation quality, lexical diversity is the main reason why distillation increases model confidence, which affects the calibration of different NAR models differently.
|
2009.01664
|
Kai Dresia
|
Kai Dresia, Simon Jentzsch, G\"unther Waxenegger-Wilfing, Robson Hahn,
Jan Deeken, Michael Oschwald, Fabio Mota
|
Multidisciplinary Design Optimization of Reusable Launch Vehicles for
Different Propellants and Objectives
| null | null |
10.2514/1.A34944
| null |
cs.NE cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Identifying the optimal design of a new launch vehicle is crucial, since
design decisions made in the early development phase limit the vehicle's later
performance and determine the associated costs. Reusing the first stage
via retro-propulsive landing increases the complexity even more. Therefore, we
develop an optimization framework for partially reusable launch vehicles, which
enables multidisciplinary design studies. The framework contains suitable mass
estimates of all essential subsystems and a routine to calculate the needed
propellant for the ascent and landing maneuvers. For design optimization, the
framework can be coupled with a genetic algorithm. The overall goal is to
reveal the implications of different propellant combinations and objective
functions on the launcher's optimal design for various mission scenarios. The
results show that the optimization objective influences the most suitable
propellant choice and the overall launcher design, concerning staging, weight,
size, and rocket engine parameters. In terms of gross lift-off weight, liquid
hydrogen seems to be favorable. When optimizing for a minimum structural mass
or an expendable structural mass, hydrocarbon-based solutions show better
results. Finally, launch vehicles using a hydrocarbon fuel in the first stage
and liquid hydrogen in the upper stage are an appealing alternative, combining
both fuels' benefits.
|
[
{
"created": "Thu, 3 Sep 2020 13:48:54 GMT",
"version": "v1"
}
] |
2021-02-17
|
[
[
"Dresia",
"Kai",
""
],
[
"Jentzsch",
"Simon",
""
],
[
"Waxenegger-Wilfing",
"Günther",
""
],
[
"Hahn",
"Robson",
""
],
[
"Deeken",
"Jan",
""
],
[
"Oschwald",
"Michael",
""
],
[
"Mota",
"Fabio",
""
]
] |
Identifying the optimal design of a new launch vehicle is crucial, since design decisions made in the early development phase limit the vehicle's later performance and determine the associated costs. Reusing the first stage via retro-propulsive landing increases the complexity even more. Therefore, we develop an optimization framework for partially reusable launch vehicles, which enables multidisciplinary design studies. The framework contains suitable mass estimates of all essential subsystems and a routine to calculate the needed propellant for the ascent and landing maneuvers. For design optimization, the framework can be coupled with a genetic algorithm. The overall goal is to reveal the implications of different propellant combinations and objective functions on the launcher's optimal design for various mission scenarios. The results show that the optimization objective influences the most suitable propellant choice and the overall launcher design, concerning staging, weight, size, and rocket engine parameters. In terms of gross lift-off weight, liquid hydrogen seems to be favorable. When optimizing for a minimum structural mass or an expendable structural mass, hydrocarbon-based solutions show better results. Finally, launch vehicles using a hydrocarbon fuel in the first stage and liquid hydrogen in the upper stage are an appealing alternative, combining both fuels' benefits.
|
2308.06378
|
Mojtaba Yeganejou
|
Mojtaba Yeganejou, Kimia Honari, Ryan Kluzinski, Scott Dick, Michael
Lipsett, James Miller
|
DCNFIS: Deep Convolutional Neuro-Fuzzy Inference System
| null | null | null | null |
cs.AI cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A key challenge in eXplainable Artificial Intelligence is the well-known
tradeoff between the transparency of an algorithm (i.e., how easily a human can
directly understand the algorithm, as opposed to receiving a post-hoc
explanation), and its accuracy. We report on the design of a new deep network
that achieves improved transparency without sacrificing accuracy. We design a
deep convolutional neuro-fuzzy inference system (DCNFIS) by hybridizing fuzzy
logic and deep learning models and show that DCNFIS performs as accurately as
existing convolutional neural networks on four well-known datasets and three
famous architectures. Our performance comparison with available fuzzy methods
shows that, to the best of our knowledge, DCNFIS is now the state-of-the-art
fuzzy system and outperforms other shallow and deep fuzzy methods. Finally, we
exploit the transparency of fuzzy logic by deriving explanations, in the form
of saliency maps, from the fuzzy rules encoded in the network, leveraging the
benefits of fuzzy logic over regular deep learning methods. We investigate the
properties of
these explanations in greater depth using the Fashion-MNIST dataset.
|
[
{
"created": "Fri, 11 Aug 2023 20:32:39 GMT",
"version": "v1"
},
{
"created": "Fri, 15 Sep 2023 17:20:48 GMT",
"version": "v2"
},
{
"created": "Sun, 17 Mar 2024 14:10:15 GMT",
"version": "v3"
}
] |
2024-03-19
|
[
[
"Yeganejou",
"Mojtaba",
""
],
[
"Honari",
"Kimia",
""
],
[
"Kluzinski",
"Ryan",
""
],
[
"Dick",
"Scott",
""
],
[
"Lipsett",
"Michael",
""
],
[
"Miller",
"James",
""
]
] |
A key challenge in eXplainable Artificial Intelligence is the well-known tradeoff between the transparency of an algorithm (i.e., how easily a human can directly understand the algorithm, as opposed to receiving a post-hoc explanation), and its accuracy. We report on the design of a new deep network that achieves improved transparency without sacrificing accuracy. We design a deep convolutional neuro-fuzzy inference system (DCNFIS) by hybridizing fuzzy logic and deep learning models and show that DCNFIS performs as accurately as existing convolutional neural networks on four well-known datasets and three famous architectures. Our performance comparison with available fuzzy methods shows that, to the best of our knowledge, DCNFIS is now the state-of-the-art fuzzy system and outperforms other shallow and deep fuzzy methods. Finally, we exploit the transparency of fuzzy logic by deriving explanations, in the form of saliency maps, from the fuzzy rules encoded in the network, leveraging the benefits of fuzzy logic over regular deep learning methods. We investigate the properties of these explanations in greater depth using the Fashion-MNIST dataset.
|
cs/0601124
|
Onur Kaya
|
Onur Kaya and Sennur Ulukus
|
Power Control for User Cooperation
|
Submitted to IEEE Transactions on Wireless Communications, October
2005
| null | null | null |
cs.IT math.IT
| null |
For a fading Gaussian multiple access channel with user cooperation, we
obtain the optimal power allocation policies that maximize the rates achievable
by block Markov superposition coding. The optimal policies result in a coding
scheme that is simpler than the one for a general multiple access channel with
generalized feedback. This simpler coding scheme also leads to the possibility
of formulating an otherwise non-concave optimization problem as a concave one.
Using the channel state information at the transmitters to adapt the powers, we
demonstrate significant gains over the achievable rates for existing
cooperative systems.
|
[
{
"created": "Mon, 30 Jan 2006 12:21:25 GMT",
"version": "v1"
}
] |
2007-07-13
|
[
[
"Kaya",
"Onur",
""
],
[
"Ulukus",
"Sennur",
""
]
] |
For a fading Gaussian multiple access channel with user cooperation, we obtain the optimal power allocation policies that maximize the rates achievable by block Markov superposition coding. The optimal policies result in a coding scheme that is simpler than the one for a general multiple access channel with generalized feedback. This simpler coding scheme also leads to the possibility of formulating an otherwise non-concave optimization problem as a concave one. Using the channel state information at the transmitters to adapt the powers, we demonstrate significant gains over the achievable rates for existing cooperative systems.
|
2302.05061
|
Zhen Wang
|
Zhen Wang, Peide Zhu, Jie Yang
|
ControversialQA: Exploring Controversy in Question Answering
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Controversy is widespread online. Previous studies mainly define controversy
based on vague assumptions of its relation to sentiment such as hate speech and
offensive words. This paper introduces the first question-answering dataset
that defines content controversy by user perception, i.e., votes from a large
number of users. It contains nearly 10K questions, and each question has a best
answer
and a most controversial answer. Experimental results reveal that controversy
detection in question answering is essential and challenging, and there is no
strong correlation between controversy and sentiment tasks.
|
[
{
"created": "Fri, 10 Feb 2023 05:39:29 GMT",
"version": "v1"
}
] |
2023-02-13
|
[
[
"Wang",
"Zhen",
""
],
[
"Zhu",
"Peide",
""
],
[
"Yang",
"Jie",
""
]
] |
Controversy is widespread online. Previous studies mainly define controversy based on vague assumptions of its relation to sentiment such as hate speech and offensive words. This paper introduces the first question-answering dataset that defines content controversy by user perception, i.e., votes from a large number of users. It contains nearly 10K questions, and each question has a best answer and a most controversial answer. Experimental results reveal that controversy detection in question answering is essential and challenging, and there is no strong correlation between controversy and sentiment tasks.
|
2110.01794
|
Yi Sui
|
Yi Sui, Ga Wu, Scott Sanner
|
Multi-axis Attentive Prediction for Sparse Event Data: An Application to
Crime Prediction
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Spatiotemporal prediction of event data is a challenging task with a long
history of research. While recent work in spatiotemporal prediction has
leveraged deep sequential models that substantially improve over classical
approaches, these models are prone to overfitting when the observation is
extremely sparse, as in the task of crime event prediction. To overcome these
sparsity issues, we present Multi-axis Attentive Prediction for Sparse Event
Data (MAPSED). We propose a purely attentional approach to extract both
short-term dynamics and long-term semantics of event propagation through two
observation angles. Unlike existing temporal prediction models that propagate
latent information primarily along the temporal dimension, MAPSED
simultaneously operates over all axes (time, 2D space, event type) of the
embedded data tensor. We additionally introduce a novel Frobenius norm-based
contrastive learning objective to improve latent representational
generalization. Empirically, we validate MAPSED on two publicly accessible urban
crime datasets for spatiotemporal sparse event prediction, where MAPSED
outperforms both classical and state-of-the-art deep learning models. The
proposed contrastive learning objective significantly enhances the MAPSED's
ability to capture the semantics and dynamics of the events, resulting in
better generalization ability to combat sparse observations.
|
[
{
"created": "Tue, 5 Oct 2021 02:38:46 GMT",
"version": "v1"
}
] |
2021-10-06
|
[
[
"Sui",
"Yi",
""
],
[
"Wu",
"Ga",
""
],
[
"Sanner",
"Scott",
""
]
] |
Spatiotemporal prediction of event data is a challenging task with a long history of research. While recent work in spatiotemporal prediction has leveraged deep sequential models that substantially improve over classical approaches, these models are prone to overfitting when the observation is extremely sparse, as in the task of crime event prediction. To overcome these sparsity issues, we present Multi-axis Attentive Prediction for Sparse Event Data (MAPSED). We propose a purely attentional approach to extract both short-term dynamics and long-term semantics of event propagation through two observation angles. Unlike existing temporal prediction models that propagate latent information primarily along the temporal dimension, MAPSED simultaneously operates over all axes (time, 2D space, event type) of the embedded data tensor. We additionally introduce a novel Frobenius norm-based contrastive learning objective to improve latent representational generalization. Empirically, we validate MAPSED on two publicly accessible urban crime datasets for spatiotemporal sparse event prediction, where MAPSED outperforms both classical and state-of-the-art deep learning models. The proposed contrastive learning objective significantly enhances the MAPSED's ability to capture the semantics and dynamics of the events, resulting in better generalization ability to combat sparse observations.
|
1907.08456
|
Frederik Kratzert
|
Frederik Kratzert, Daniel Klotz, Guy Shalev, G\"unter Klambauer, Sepp
Hochreiter, Grey Nearing
|
Towards Learning Universal, Regional, and Local Hydrological Behaviors
via Machine-Learning Applied to Large-Sample Datasets
| null | null | null | null |
cs.LG stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
Regional rainfall-runoff modeling is an old but still mostly outstanding
problem in Hydrological Sciences. The problem currently is that traditional
hydrological models degrade significantly in performance when calibrated for
multiple basins together instead of for a single basin alone. In this paper, we
propose a novel, data-driven approach using Long Short-Term Memory networks
(LSTMs), and demonstrate that under a 'big data' paradigm, this is not
necessarily the case. By training a single LSTM model on 531 basins from the
CAMELS data set using meteorological time series data and static catchment
attributes, we were able to significantly improve performance compared to a set
of several different hydrological benchmark models. Our proposed approach not
only significantly outperforms hydrological models that were calibrated
regionally but also achieves better performance than hydrological models that
were calibrated for each basin individually. Furthermore, we propose an
adaptation to the standard LSTM architecture, which we call an Entity-Aware-LSTM
(EA-LSTM), that allows catchment similarities to be learned and embedded as a
feature layer in a deep learning model. We show that this learned catchment
similarity corresponds well with what we would expect from prior hydrological
understanding.
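The Entity-Aware LSTM idea, an input gate computed once from the static catchment attributes while the other gates see the dynamic inputs, can be sketched minimally in numpy; the weight names and shapes below are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ea_lstm_step(x_d, x_s, h, c, W):
    """One step of an Entity-Aware LSTM cell (illustrative sketch).

    Unlike a standard LSTM, the input gate `i` is computed from the static
    attributes x_s alone, so it is constant over the whole sequence; the
    forget, output, and candidate gates use the dynamic input x_d and the
    previous hidden state h.
    """
    z = np.concatenate([x_d, h])
    i = sigmoid(W["Wi"] @ x_s + W["bi"])   # static (entity-aware) input gate
    f = sigmoid(W["Wf"] @ z + W["bf"])     # forget gate
    o = sigmoid(W["Wo"] @ z + W["bo"])     # output gate
    g = np.tanh(W["Wg"] @ z + W["bg"])     # candidate cell state
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new
```

Because `i` depends only on the static attributes, it acts as a per-catchment embedding that modulates how much of each dynamic update enters the cell state.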
|
[
{
"created": "Fri, 19 Jul 2019 10:52:12 GMT",
"version": "v1"
},
{
"created": "Sun, 10 Nov 2019 15:03:52 GMT",
"version": "v2"
}
] |
2019-11-12
|
[
[
"Kratzert",
"Frederik",
""
],
[
"Klotz",
"Daniel",
""
],
[
"Shalev",
"Guy",
""
],
[
"Klambauer",
"Günter",
""
],
[
"Hochreiter",
"Sepp",
""
],
[
"Nearing",
"Grey",
""
]
] |
Regional rainfall-runoff modeling is an old but still mostly out-standing problem in Hydrological Sciences. The problem currently is that traditional hydrological models degrade significantly in performance when calibrated for multiple basins together instead of for a single basin alone. In this paper, we propose a novel, data-driven approach using Long Short-Term Memory networks (LSTMs), and demonstrate that under a 'big data' paradigm, this is not necessarily the case. By training a single LSTM model on 531 basins from the CAMELS data set using meteorological time series data and static catchment attributes, we were able to significantly improve performance compared to a set of several different hydrological benchmark models. Our proposed approach not only significantly outperforms hydrological models that were calibrated regionally but also achieves better performance than hydrological models that were calibrated for each basin individually. Furthermore, we propose an adaption to the standard LSTM architecture, which we call an Entity-Aware-LSTM (EA-LSTM), that allows for learning, and embedding as a feature layer in a deep learning model, catchment similarities. We show that this learned catchment similarity corresponds well with what we would expect from prior hydrological understanding.
|
2407.00389
|
Yuan-Gen Wang
|
Chao Zhou, Xiaowen Shi and Yuan-Gen Wang
|
Query-Efficient Hard-Label Black-Box Attack against Vision Transformers
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent studies have revealed that vision transformers (ViTs) face similar
security risks from adversarial attacks as deep convolutional neural networks
(CNNs). However, directly applying attack methodologies designed for CNNs to
ViTs has been demonstrated to be ineffective, since ViTs typically work on
patch-wise
encoding. This article explores the vulnerability of ViTs against adversarial
attacks under a black-box scenario, and proposes a novel query-efficient
hard-label adversarial attack method called AdvViT. Specifically, considering
that ViTs are highly sensitive to patch modification, we propose to optimize
the adversarial perturbation on the individual patches. To reduce the dimension
of perturbation search space, we modify only a handful of low-frequency
components of each patch. Moreover, we design a weight mask matrix for all
patches to further optimize the perturbation on different regions of a whole
image. We test six mainstream ViT backbones on the ImageNet-1k dataset.
Experimental results show that compared with the state-of-the-art attacks on
CNNs, our AdvViT achieves much lower $L_2$-norm distortion under the same query
budget, sufficiently validating the vulnerability of ViTs against adversarial
attacks.
|
[
{
"created": "Sat, 29 Jun 2024 10:09:12 GMT",
"version": "v1"
}
] |
2024-07-02
|
[
[
"Zhou",
"Chao",
""
],
[
"Shi",
"Xiaowen",
""
],
[
"Wang",
"Yuan-Gen",
""
]
] |
Recent studies have revealed that vision transformers (ViTs) face similar security risks from adversarial attacks as deep convolutional neural networks (CNNs). However, directly applying attack methodologies designed for CNNs to ViTs has been demonstrated to be ineffective, since ViTs typically work on patch-wise encoding. This article explores the vulnerability of ViTs against adversarial attacks under a black-box scenario, and proposes a novel query-efficient hard-label adversarial attack method called AdvViT. Specifically, considering that ViTs are highly sensitive to patch modification, we propose to optimize the adversarial perturbation on the individual patches. To reduce the dimension of perturbation search space, we modify only a handful of low-frequency components of each patch. Moreover, we design a weight mask matrix for all patches to further optimize the perturbation on different regions of a whole image. We test six mainstream ViT backbones on the ImageNet-1k dataset. Experimental results show that compared with the state-of-the-art attacks on CNNs, our AdvViT achieves much lower $L_2$-norm distortion under the same query budget, sufficiently validating the vulnerability of ViTs against adversarial attacks.
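Restricting the perturbation search space to a handful of low-frequency components per patch, as described above, can be sketched with a 2D DCT; the use of a type-II DCT and the cut-off `k` are illustrative assumptions, since the abstract does not specify the exact transform:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of size n x n (rows are frequencies)."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def perturb_patch_low_freq(patch, delta, k=4):
    """Add perturbation `delta` to the k x k low-frequency DCT
    coefficients of a square patch, shrinking the search space from
    n*n pixels to k*k coefficients (an illustrative sketch)."""
    n = patch.shape[0]
    C = dct_matrix(n)
    coeffs = C @ patch @ C.T        # forward 2D DCT-II
    coeffs[:k, :k] += delta         # touch only low frequencies
    return C.T @ coeffs @ C         # inverse transform (C is orthogonal)
```

Optimizing over `delta` instead of over raw pixels is what keeps the query budget of such a hard-label attack manageable.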
|
2306.04950
|
Jun Zhao
|
Jun Zhao, Xin Zhao, Wenyu Zhan, Qi Zhang, Tao Gui, Zhongyu Wei, Yunwen
Chen, Xiang Gao, Xuanjing Huang
|
Open Set Relation Extraction via Unknown-Aware Training
|
Accepted by ACL2023
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The existing supervised relation extraction methods have achieved impressive
performance in a closed-set setting, where the relations during both training
and testing remain the same. In a more realistic open-set setting, unknown
relations may appear in the test set. Due to the lack of supervision signals
from unknown relations, a well-performing closed-set relation extractor can
still confidently misclassify them into known relations. In this paper, we
propose an unknown-aware training method, regularizing the model by dynamically
synthesizing negative instances. To facilitate a compact decision boundary,
``difficult'' negative instances are necessary. Inspired by text adversarial
attacks, we adaptively apply small but critical perturbations to original
training instances, thus synthesizing negative instances that are more
likely to be mistaken by the model as known relations. Experimental results
show that this method achieves SOTA unknown relation detection without
compromising the classification of known relations.
|
[
{
"created": "Thu, 8 Jun 2023 05:45:25 GMT",
"version": "v1"
}
] |
2023-06-09
|
[
[
"Zhao",
"Jun",
""
],
[
"Zhao",
"Xin",
""
],
[
"Zhan",
"Wenyu",
""
],
[
"Zhang",
"Qi",
""
],
[
"Gui",
"Tao",
""
],
[
"Wei",
"Zhongyu",
""
],
[
"Chen",
"Yunwen",
""
],
[
"Gao",
"Xiang",
""
],
[
"Huang",
"Xuanjing",
""
]
] |
The existing supervised relation extraction methods have achieved impressive performance in a closed-set setting, where the relations during both training and testing remain the same. In a more realistic open-set setting, unknown relations may appear in the test set. Due to the lack of supervision signals from unknown relations, a well-performing closed-set relation extractor can still confidently misclassify them into known relations. In this paper, we propose an unknown-aware training method, regularizing the model by dynamically synthesizing negative instances. To facilitate a compact decision boundary, ``difficult'' negative instances are necessary. Inspired by text adversarial attacks, we adaptively apply small but critical perturbations to original training instances, thus synthesizing negative instances that are more likely to be mistaken by the model as known relations. Experimental results show that this method achieves SOTA unknown relation detection without compromising the classification of known relations.
|
2408.05097
|
Paolo Mandica
|
Paolo Mandica, Luca Franco, Konstantinos Kallidromitis, Suzanne
Petryk, Fabio Galasso
|
Hyperbolic Learning with Multimodal Large Language Models
|
ECCV 2024 - Beyond Euclidean Workshop
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Hyperbolic embeddings have demonstrated their effectiveness in capturing
measures of uncertainty and hierarchical relationships across various
deep-learning tasks, including image segmentation and active learning. However,
their application in modern vision-language models (VLMs) has been limited. A
notable exception is MERU, which leverages the hierarchical properties of
hyperbolic space in the CLIP ViT-large model, consisting of hundreds of
millions of parameters. In our work, we address the challenges of scaling
multi-modal hyperbolic models by orders of magnitude in terms of parameters
(billions) and training complexity using the BLIP-2 architecture. Although
hyperbolic embeddings offer potential insights into uncertainty not present in
Euclidean embeddings, our analysis reveals that scaling these models is
particularly difficult. We propose a novel training strategy for a hyperbolic
version of BLIP-2 that achieves performance comparable to its
Euclidean counterpart, while maintaining stability throughout the training
process and showing a meaningful indication of uncertainty with each embedding.
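Hyperbolic embeddings are compared with the geodesic distance of the chosen hyperbolic model; the closed-form Poincaré-ball distance below is one standard choice, shown here for reference since the abstract does not state which model the paper uses:

```python
import numpy as np

def poincare_distance(u, v):
    """Geodesic distance between two points inside the unit Poincare ball.

    d(u, v) = arccosh(1 + 2 * ||u - v||^2 / ((1 - ||u||^2) * (1 - ||v||^2)))

    Points farther from the origin (closer to the boundary) are
    exponentially farther apart, which is what lets the embedding norm
    encode hierarchy and uncertainty.
    """
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    sq = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * sq / denom)
```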
|
[
{
"created": "Fri, 9 Aug 2024 14:39:15 GMT",
"version": "v1"
}
] |
2024-08-12
|
[
[
"Mandica",
"Paolo",
""
],
[
"Franco",
"Luca",
""
],
[
"Kallidromitis",
"Konstantinos",
""
],
[
"Petryk",
"Suzanne",
""
],
[
"Galasso",
"Fabio",
""
]
] |
Hyperbolic embeddings have demonstrated their effectiveness in capturing measures of uncertainty and hierarchical relationships across various deep-learning tasks, including image segmentation and active learning. However, their application in modern vision-language models (VLMs) has been limited. A notable exception is MERU, which leverages the hierarchical properties of hyperbolic space in the CLIP ViT-large model, consisting of hundreds of millions of parameters. In our work, we address the challenges of scaling multi-modal hyperbolic models by orders of magnitude in terms of parameters (billions) and training complexity using the BLIP-2 architecture. Although hyperbolic embeddings offer potential insights into uncertainty not present in Euclidean embeddings, our analysis reveals that scaling these models is particularly difficult. We propose a novel training strategy for a hyperbolic version of BLIP-2 that achieves performance comparable to its Euclidean counterpart, while maintaining stability throughout the training process and showing a meaningful indication of uncertainty with each embedding.
|
2301.02359
|
Jinming Zhuang
|
Jinming Zhuang, Jason Lau, Hanchen Ye, Zhuoping Yang, Yubo Du, Jack
Lo, Kristof Denolf, Stephen Neuendorffer, Alex Jones, Jingtong Hu, Deming
Chen, Jason Cong, Peipei Zhou
|
CHARM: Composing Heterogeneous Accelerators for Matrix Multiply on
Versal ACAP Architecture
| null | null | null | null |
cs.AR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Dense matrix multiply (MM) serves as one of the most heavily used kernels in
deep learning applications. To cope with the high computation demands of these
applications, heterogeneous architectures featuring both FPGA and dedicated
ASIC accelerators have emerged as promising platforms. For example, the
AMD/Xilinx Versal ACAP architecture combines general-purpose CPU cores and
programmable logic with AI Engine processors optimized for AI/ML. With 400
AIEs, it provides up to 6.4 TFLOPs performance for 32-bit floating-point data.
However, machine learning models often contain both large and small MM
operations. While large MM operations can be parallelized efficiently across
many cores, small MM operations typically cannot. We observe that executing
some small MM layers from the BERT natural language processing model on a
large, monolithic MM accelerator in Versal ACAP achieved less than 5% of the
theoretical peak performance. Therefore, one key question arises: How can we
design accelerators to fully use the abundant computation resources under
limited communication bandwidth for applications with multiple MM layers of
diverse sizes? We identify the biggest system throughput bottleneck resulting
from the mismatch of massive computation resources of one monolithic
accelerator and the various MM layers of small sizes in the application. To
resolve this problem, we propose the CHARM framework to compose multiple
diverse MM accelerator architectures working concurrently towards different
layers in one application. We deploy the CHARM framework for four different
applications, including BERT, ViT, NCF, MLP, on the AMD Versal ACAP VCK190
evaluation board. Our experiments show that we achieve 1.46 TFLOPs, 1.61
TFLOPs, 1.74 TFLOPs, and 2.94 TFLOPs inference throughput for BERT, ViT, NCF
and MLP, which obtain 5.40x, 32.51x, 1.00x and 1.00x throughput gains compared
to one monolithic accelerator.
|
[
{
"created": "Fri, 6 Jan 2023 02:05:54 GMT",
"version": "v1"
}
] |
2023-01-09
|
[
[
"Zhuang",
"Jinming",
""
],
[
"Lau",
"Jason",
""
],
[
"Ye",
"Hanchen",
""
],
[
"Yang",
"Zhuoping",
""
],
[
"Du",
"Yubo",
""
],
[
"Lo",
"Jack",
""
],
[
"Denolf",
"Kristof",
""
],
[
"Neuendorffer",
"Stephen",
""
],
[
"Jones",
"Alex",
""
],
[
"Hu",
"Jingtong",
""
],
[
"Chen",
"Deming",
""
],
[
"Cong",
"Jason",
""
],
[
"Zhou",
"Peipei",
""
]
] |
Dense matrix multiply (MM) serves as one of the most heavily used kernels in deep learning applications. To cope with the high computation demands of these applications, heterogeneous architectures featuring both FPGA and dedicated ASIC accelerators have emerged as promising platforms. For example, the AMD/Xilinx Versal ACAP architecture combines general-purpose CPU cores and programmable logic with AI Engine processors optimized for AI/ML. With 400 AIEs, it provides up to 6.4 TFLOPs performance for 32-bit floating-point data. However, machine learning models often contain both large and small MM operations. While large MM operations can be parallelized efficiently across many cores, small MM operations typically cannot. We observe that executing some small MM layers from the BERT natural language processing model on a large, monolithic MM accelerator in Versal ACAP achieved less than 5% of the theoretical peak performance. Therefore, one key question arises: How can we design accelerators to fully use the abundant computation resources under limited communication bandwidth for applications with multiple MM layers of diverse sizes? We identify the biggest system throughput bottleneck resulting from the mismatch of massive computation resources of one monolithic accelerator and the various MM layers of small sizes in the application. To resolve this problem, we propose the CHARM framework to compose multiple diverse MM accelerator architectures working concurrently towards different layers in one application. We deploy the CHARM framework for four different applications, including BERT, ViT, NCF, MLP, on the AMD Versal ACAP VCK190 evaluation board. Our experiments show that we achieve 1.46 TFLOPs, 1.61 TFLOPs, 1.74 TFLOPs, and 2.94 TFLOPs inference throughput for BERT, ViT, NCF and MLP, which obtain 5.40x, 32.51x, 1.00x and 1.00x throughput gains compared to one monolithic accelerator.
|
2010.09464
|
Vanja Dosko\v{c}
|
Vanja Dosko\v{c} and Timo K\"otzing
|
Mapping Monotonic Restrictions in Inductive Inference
| null | null | null | null |
cs.LG cs.FL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In language learning in the limit we investigate computable devices
(learners) learning formal languages. Through the years, many natural
restrictions have been imposed on the studied learners. As such, monotonic
restrictions always enjoyed particular attention as, although being a natural
requirement, monotonic learners show significantly diverse behaviour when
studied in different settings. A recent study thoroughly analysed the learning
capabilities of strongly monotone learners imposed with memory restrictions and
various additional requirements. The unveiled differences between explanatory
and behaviourally correct such learners motivate our studies of monotone
learners dealing with the same restrictions.
We reveal differences and similarities between monotone learners and their
strongly monotone counterpart when studied with various additional
restrictions. In particular, we show that explanatory monotone learners,
although known to be strictly stronger, do (almost) preserve the pairwise
relation as seen in strongly monotone learning. Contrasting this similarity, we
find substantial differences when studying behaviourally correct monotone
learners. Most notably, we show that monotone learners, as opposed to their
strongly monotone counterpart, do heavily rely on the order the information is
given in, an unusual result for behaviourally correct learners.
|
[
{
"created": "Thu, 15 Oct 2020 08:54:30 GMT",
"version": "v1"
}
] |
2020-10-20
|
[
[
"Doskoč",
"Vanja",
""
],
[
"Kötzing",
"Timo",
""
]
] |
In language learning in the limit we investigate computable devices (learners) learning formal languages. Through the years, many natural restrictions have been imposed on the studied learners. As such, monotonic restrictions always enjoyed particular attention as, although being a natural requirement, monotonic learners show significantly diverse behaviour when studied in different settings. A recent study thoroughly analysed the learning capabilities of strongly monotone learners imposed with memory restrictions and various additional requirements. The unveiled differences between explanatory and behaviourally correct such learners motivate our studies of monotone learners dealing with the same restrictions. We reveal differences and similarities between monotone learners and their strongly monotone counterpart when studied with various additional restrictions. In particular, we show that explanatory monotone learners, although known to be strictly stronger, do (almost) preserve the pairwise relation as seen in strongly monotone learning. Contrasting this similarity, we find substantial differences when studying behaviourally correct monotone learners. Most notably, we show that monotone learners, as opposed to their strongly monotone counterpart, do heavily rely on the order the information is given in, an unusual result for behaviourally correct learners.
|
1901.11211
|
Weiwen Jiang
|
Weiwen Jiang, Xinyi Zhang, Edwin H.-M. Sha, Lei Yang, Qingfeng Zhuge,
Yiyu Shi, Jingtong Hu
|
Accuracy vs. Efficiency: Achieving Both through FPGA-Implementation
Aware Neural Architecture Search
|
6 pages, 8 figures, 1 table, accepted by DAC
| null | null | null |
cs.DC cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A fundamental question lies in almost every application of deep neural
networks: what is the optimal neural architecture given a specific dataset?
Recently, several Neural Architecture Search (NAS) frameworks have been
developed that use reinforcement learning and evolutionary algorithm to search
for the solution. However, most of them take a long time to find the optimal
architecture due to the huge search space and the lengthy training process
needed to evaluate each candidate. In addition, most of them aim at accuracy
only and do not take into consideration the hardware that will be used to
implement the architecture. This will potentially lead to excessive latencies
beyond specifications, rendering the resulting architectures useless. To
address both issues, in this paper we use Field Programmable Gate Arrays
(FPGAs) as a vehicle to present a novel hardware-aware NAS framework, namely
FNAS, which will provide an optimal neural architecture with latency guaranteed
to meet the specification. In addition, with a performance abstraction model to
analyze the latency of neural architectures without training, our framework can
quickly prune architectures that do not satisfy the specification, leading to
higher efficiency. Experimental results on common data set such as ImageNet
show that in the cases where the state-of-the-art generates architectures with
latencies 7.81x longer than the specification, those from FNAS can meet the
specs with less than 1% accuracy loss. Moreover, FNAS also achieves up to
11.13x speedup for the search process. To the best of the authors' knowledge,
this is the very first hardware aware NAS.
|
[
{
"created": "Thu, 31 Jan 2019 04:57:16 GMT",
"version": "v1"
}
] |
2019-02-04
|
[
[
"Jiang",
"Weiwen",
""
],
[
"Zhang",
"Xinyi",
""
],
[
"Sha",
"Edwin H. -M.",
""
],
[
"Yang",
"Lei",
""
],
[
"Zhuge",
"Qingfeng",
""
],
[
"Shi",
"Yiyu",
""
],
[
"Hu",
"Jingtong",
""
]
] |
A fundamental question lies in almost every application of deep neural networks: what is the optimal neural architecture given a specific dataset? Recently, several Neural Architecture Search (NAS) frameworks have been developed that use reinforcement learning and evolutionary algorithm to search for the solution. However, most of them take a long time to find the optimal architecture due to the huge search space and the lengthy training process needed to evaluate each candidate. In addition, most of them aim at accuracy only and do not take into consideration the hardware that will be used to implement the architecture. This will potentially lead to excessive latencies beyond specifications, rendering the resulting architectures useless. To address both issues, in this paper we use Field Programmable Gate Arrays (FPGAs) as a vehicle to present a novel hardware-aware NAS framework, namely FNAS, which will provide an optimal neural architecture with latency guaranteed to meet the specification. In addition, with a performance abstraction model to analyze the latency of neural architectures without training, our framework can quickly prune architectures that do not satisfy the specification, leading to higher efficiency. Experimental results on common data set such as ImageNet show that in the cases where the state-of-the-art generates architectures with latencies 7.81x longer than the specification, those from FNAS can meet the specs with less than 1% accuracy loss. Moreover, FNAS also achieves up to 11.13x speedup for the search process. To the best of the authors' knowledge, this is the very first hardware aware NAS.
|
2311.12755
|
Carine Rebello
|
Carine Menezes Rebello, Johannes J\"aschkea, and Idelfonso B. R.
Nogueira
|
Digital Twin Framework for Optimal and Autonomous Decision-Making in
Cyber-Physical Systems: Enhancing Reliability and Adaptability in the Oil and
Gas Industry
| null | null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The concept of creating a virtual copy of a complete Cyber-Physical System
opens up numerous possibilities, including real-time assessments of the
physical environment and continuous learning from the system to provide
reliable and precise information. This process, known as the twinning process
or the development of a digital twin (DT), has been widely adopted across
various industries. However, challenges arise when considering the
computational demands of implementing AI models, such as those employed in
digital twins, in real-time information exchange scenarios. This work proposes
a digital twin framework for optimal and autonomous decision-making applied to
a gas-lift process in the oil and gas industry, focusing on enhancing the
robustness and adaptability of the DT. The framework combines Bayesian
inference, Monte Carlo simulations, transfer learning, online learning, and
novel strategies to confer cognition to the DT, including model
hyperdimensional reduction and cognitive tack. Consequently, creating a
framework for efficient, reliable, and trustworthy DT identification was
possible. The proposed approach addresses the current gap in the literature
regarding integrating various learning techniques and uncertainty management in
digital twin strategies. This digital twin framework aims to provide a reliable
and efficient system capable of adapting to changing environments and
incorporating prediction uncertainty, thus enhancing the overall
decision-making process in complex, real-world scenarios. Additionally, this
work lays the foundation for further developments in digital twins for process
systems engineering, potentially fostering new advancements and applications
across various industrial sectors.
|
[
{
"created": "Tue, 21 Nov 2023 18:02:52 GMT",
"version": "v1"
}
] |
2023-11-22
|
[
[
"Rebello",
"Carine Menezes",
""
],
[
"Jäschkea",
"Johannes",
""
],
[
"Nogueira",
"Idelfonso B. R.",
""
]
] |
The concept of creating a virtual copy of a complete Cyber-Physical System opens up numerous possibilities, including real-time assessments of the physical environment and continuous learning from the system to provide reliable and precise information. This process, known as the twinning process or the development of a digital twin (DT), has been widely adopted across various industries. However, challenges arise when considering the computational demands of implementing AI models, such as those employed in digital twins, in real-time information exchange scenarios. This work proposes a digital twin framework for optimal and autonomous decision-making applied to a gas-lift process in the oil and gas industry, focusing on enhancing the robustness and adaptability of the DT. The framework combines Bayesian inference, Monte Carlo simulations, transfer learning, online learning, and novel strategies to confer cognition to the DT, including model hyperdimensional reduction and cognitive tack. Consequently, creating a framework for efficient, reliable, and trustworthy DT identification was possible. The proposed approach addresses the current gap in the literature regarding integrating various learning techniques and uncertainty management in digital twin strategies. This digital twin framework aims to provide a reliable and efficient system capable of adapting to changing environments and incorporating prediction uncertainty, thus enhancing the overall decision-making process in complex, real-world scenarios. Additionally, this work lays the foundation for further developments in digital twins for process systems engineering, potentially fostering new advancements and applications across various industrial sectors.
|
2011.03909
|
Filipp Skomorokhov
|
Filipp Skomorokhov (1 and 2) and George Ovchinnikov (2) ((1) Moscow
Institute of Physics and Technology, (2) Skolkovo Institute of Science and
Technology)
|
Reinforcement Learning for Assignment problem
| null | null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper is dedicated to the application of reinforcement learning combined
with neural networks to the general formulation of user scheduling problem. Our
simulator resembles real world problems by means of stochastic changes in
environment. We applied Q-learning based method to the number of dynamic
simulations and outperformed analytical greedy-based solution in terms of total
reward, the aim of which is to get the lowest possible penalty throughout
simulation.
|
[
{
"created": "Sun, 8 Nov 2020 06:25:50 GMT",
"version": "v1"
}
] |
2020-11-10
|
[
[
"Skomorokhov",
"Filipp",
"",
"1 and 2"
],
[
"Ovchinnikov",
"George",
""
]
] |
This paper is dedicated to the application of reinforcement learning combined with neural networks to the general formulation of user scheduling problem. Our simulator resembles real world problems by means of stochastic changes in environment. We applied Q-learning based method to the number of dynamic simulations and outperformed analytical greedy-based solution in terms of total reward, the aim of which is to get the lowest possible penalty throughout simulation.
|
2403.17883
|
Liang Chao
|
Chao Liang, Jianwen Jiang, Tianyun Zhong, Gaojie Lin, Zhengkun Rong,
Jiaqi Yang, Yongming Zhu
|
Superior and Pragmatic Talking Face Generation with Teacher-Student
Framework
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Talking face generation technology creates talking videos from arbitrary
appearance and motion signal, with the "arbitrary" offering ease of use but
also introducing challenges in practical applications. Existing methods work
well with standard inputs but suffer serious performance degradation with
intricate real-world ones. Moreover, efficiency is also an important concern in
deployment. To comprehensively address these issues, we introduce SuperFace, a
teacher-student framework that balances quality, robustness, cost and
editability. We first propose a simple but effective teacher model capable of
handling inputs of varying qualities to generate high-quality results. Building
on this, we devise an efficient distillation strategy to acquire an
identity-specific student model that maintains quality with significantly
reduced computational load. Our experiments validate that SuperFace offers a
more comprehensive solution than existing methods for the four mentioned
objectives, especially in reducing FLOPs by 99\% with the student model.
SuperFace can be driven by both video and audio and allows for localized facial
attributes editing.
|
[
{
"created": "Tue, 26 Mar 2024 17:13:17 GMT",
"version": "v1"
}
] |
2024-03-27
|
[
[
"Liang",
"Chao",
""
],
[
"Jiang",
"Jianwen",
""
],
[
"Zhong",
"Tianyun",
""
],
[
"Lin",
"Gaojie",
""
],
[
"Rong",
"Zhengkun",
""
],
[
"Yang",
"Jiaqi",
""
],
[
"Zhu",
"Yongming",
""
]
] |
Talking face generation technology creates talking videos from arbitrary appearance and motion signal, with the "arbitrary" offering ease of use but also introducing challenges in practical applications. Existing methods work well with standard inputs but suffer serious performance degradation with intricate real-world ones. Moreover, efficiency is also an important concern in deployment. To comprehensively address these issues, we introduce SuperFace, a teacher-student framework that balances quality, robustness, cost and editability. We first propose a simple but effective teacher model capable of handling inputs of varying qualities to generate high-quality results. Building on this, we devise an efficient distillation strategy to acquire an identity-specific student model that maintains quality with significantly reduced computational load. Our experiments validate that SuperFace offers a more comprehensive solution than existing methods for the four mentioned objectives, especially in reducing FLOPs by 99\% with the student model. SuperFace can be driven by both video and audio and allows for localized facial attributes editing.
|
0912.3980
|
William Jackson
|
Jamal A. Hussein, Mumtaz A. AlMukhtar
|
Fair Exchange of Digital Signatures using RSA-based CEMBS and Offline
STTP
| null |
Journal of Computing, Volume 1, Issue 1, pp 87-91, December 2009
| null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
One of the essential security services needed to safeguard online
transactions is fair exchange. In fair exchange protocols two parties can
exchange their signatures in a fair manner, so that either each party gain the
other's signature or no one obtain anything useful. This paper examines
security solutions for achieving fair exchange. It proposes new security
protocols based on the "Certified Encrypted Message Being Signature" (CEMBS) by
using RSA signature scheme. This protocol relies on the help of an "off-line
Semi-Trusted Third Party" (STTP) to achieve fairness. They provide with
confidential protection from the STTP for the exchanged items by limiting the
role and power of the STTP. Three different protocols have been proposed. In
the first protocol, the two main parties exchange their signatures on a common
message. In the second protocol, the signatures are exchanged on two different
messages. While in the third one, the exchange is between confidential data and
signature.
|
[
{
"created": "Sun, 20 Dec 2009 05:15:22 GMT",
"version": "v1"
}
] |
2009-12-22
|
[
[
"Hussein",
"Jamal A.",
""
],
[
"AlMukhtar",
"Mumtaz A.",
""
]
] |
One of the essential security services needed to safeguard online transactions is fair exchange. In fair exchange protocols two parties can exchange their signatures in a fair manner, so that either each party gain the other's signature or no one obtain anything useful. This paper examines security solutions for achieving fair exchange. It proposes new security protocols based on the "Certified Encrypted Message Being Signature" (CEMBS) by using RSA signature scheme. This protocol relies on the help of an "off-line Semi-Trusted Third Party" (STTP) to achieve fairness. They provide with confidential protection from the STTP for the exchanged items by limiting the role and power of the STTP. Three different protocols have been proposed. In the first protocol, the two main parties exchange their signatures on a common message. In the second protocol, the signatures are exchanged on two different messages. While in the third one, the exchange is between confidential data and signature.
|
1311.5018
|
Teodor Cioaca
|
Teodor Cioaca and Horea Caramizaru
|
On the impact of explicit or semi-implicit integration methods over the
stability of real-time numerical simulations
|
Submitted to the ROMAI Journal of Applied Mathematics. Presented at
the CAIM 2013 Conference on Applied and Industrial Mathematics
| null | null | null |
cs.GR
|
http://creativecommons.org/licenses/by-nc-sa/3.0/
|
Physics-based animation of soft or rigid bodies for real-time applications
often suffers from numerical instabilities. We analyse one of the most common
sources of unwanted behaviour: the numerical integration strategy. To assess
the impact of popular integration methods, we consider a scenario where soft
and hard constraints are added to a custom designed deformable linear object.
Since the goal for this class of simulation methods is to attain interactive
frame-rates, we present the drawbacks of using explicit integration methods
over inherently stable, implicit integrators. To help numerical solver
designers better understand the impact of an integrator on a certain simulated
world, we have conceived a method of benchmarking the efficiency of an
integrator with respect to its speed, stability and symplecticity.
|
[
{
"created": "Wed, 20 Nov 2013 11:30:03 GMT",
"version": "v1"
}
] |
2013-11-21
|
[
[
"Cioaca",
"Teodor",
""
],
[
"Caramizaru",
"Horea",
""
]
] |
Physics-based animation of soft or rigid bodies for real-time applications often suffers from numerical instabilities. We analyse one of the most common sources of unwanted behaviour: the numerical integration strategy. To assess the impact of popular integration methods, we consider a scenario where soft and hard constraints are added to a custom designed deformable linear object. Since the goal for this class of simulation methods is to attain interactive frame-rates, we present the drawbacks of using explicit integration methods over inherently stable, implicit integrators. To help numerical solver designers better understand the impact of an integrator on a certain simulated world, we have conceived a method of benchmarking the efficiency of an integrator with respect to its speed, stability and symplecticity.
|
2301.11509
|
Jose Antonio Lara Benitez
|
J. Antonio Lara Benitez, Takashi Furuya, Florian Faucher, Anastasis
Kratsios, Xavier Tricoche, Maarten V. de Hoop
|
Out-of-distributional risk bounds for neural operators with applications
to the Helmholtz equation
| null | null | null | null |
cs.LG cs.NA math.NA stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite their remarkable success in approximating a wide range of operators
defined by PDEs, existing neural operators (NOs) do not necessarily perform
well for all physics problems. We focus here on high-frequency waves to
highlight possible shortcomings. To resolve these, we propose a subfamily of
NOs enabling an enhanced empirical approximation of the nonlinear operator
mapping wave speed to solution, or boundary values for the Helmholtz equation
on a bounded domain. The latter operator is commonly referred to as the
''forward'' operator in the study of inverse problems. Our methodology draws
inspiration from transformers and techniques such as stochastic depth. Our
experiments reveal certain surprises in the generalization and the relevance of
introducing stochastic depth. Our NOs show superior performance as compared
with standard NOs, not only for testing within the training distribution but
also for out-of-distribution scenarios. To delve into this observation, we
offer an in-depth analysis of the Rademacher complexity associated with our
modified models and prove an upper bound tied to their stochastic depth that
existing NOs do not satisfy. Furthermore, we obtain a novel out-of-distribution
risk bound tailored to Gaussian measures on Banach spaces, again relating
stochastic depth with the bound. We conclude by proposing a hypernetwork
version of the subfamily of NOs as a surrogate model for the mentioned forward
operator.
|
[
{
"created": "Fri, 27 Jan 2023 03:02:12 GMT",
"version": "v1"
},
{
"created": "Wed, 19 Apr 2023 03:06:03 GMT",
"version": "v2"
},
{
"created": "Tue, 4 Jul 2023 22:42:47 GMT",
"version": "v3"
}
] |
2023-07-06
|
[
[
"Benitez",
"J. Antonio Lara",
""
],
[
"Furuya",
"Takashi",
""
],
[
"Faucher",
"Florian",
""
],
[
"Kratsios",
"Anastasis",
""
],
[
"Tricoche",
"Xavier",
""
],
[
"de Hoop",
"Maarten V.",
""
]
] |
Despite their remarkable success in approximating a wide range of operators defined by PDEs, existing neural operators (NOs) do not necessarily perform well for all physics problems. We focus here on high-frequency waves to highlight possible shortcomings. To resolve these, we propose a subfamily of NOs enabling an enhanced empirical approximation of the nonlinear operator mapping wave speed to solution, or boundary values for the Helmholtz equation on a bounded domain. The latter operator is commonly referred to as the ''forward'' operator in the study of inverse problems. Our methodology draws inspiration from transformers and techniques such as stochastic depth. Our experiments reveal certain surprises in the generalization and the relevance of introducing stochastic depth. Our NOs show superior performance as compared with standard NOs, not only for testing within the training distribution but also for out-of-distribution scenarios. To delve into this observation, we offer an in-depth analysis of the Rademacher complexity associated with our modified models and prove an upper bound tied to their stochastic depth that existing NOs do not satisfy. Furthermore, we obtain a novel out-of-distribution risk bound tailored to Gaussian measures on Banach spaces, again relating stochastic depth with the bound. We conclude by proposing a hypernetwork version of the subfamily of NOs as a surrogate model for the mentioned forward operator.
|
2305.00350
|
Korawat Tanwisuth
|
Korawat Tanwisuth, Shujian Zhang, Huangjie Zheng, Pengcheng He,
Mingyuan Zhou
|
POUF: Prompt-oriented unsupervised fine-tuning for large pre-trained
models
|
ICML 2023; PyTorch code is available at
https://github.com/korawat-tanwisuth/POUF
| null | null | null |
cs.LG cs.AI cs.CL cs.CV stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Through prompting, large-scale pre-trained models have become more expressive
and powerful, gaining significant attention in recent years. Though these big
models have zero-shot capabilities, in general, labeled data are still required
to adapt them to downstream tasks. To overcome this critical limitation, we
propose an unsupervised fine-tuning framework to directly fine-tune the model
or prompt on the unlabeled target data. We demonstrate how to apply our method
to both language-augmented vision and masked-language models by aligning the
discrete distributions extracted from the prompts and target data. To verify
our approach's applicability, we conduct extensive experiments on image
classification, sentiment analysis, and natural language inference tasks.
Across 13 image-related tasks and 15 language-related ones, the proposed
approach achieves consistent improvements over the baselines.
|
[
{
"created": "Sat, 29 Apr 2023 22:05:22 GMT",
"version": "v1"
}
] |
2023-05-02
|
[
[
"Tanwisuth",
"Korawat",
""
],
[
"Zhang",
"Shujian",
""
],
[
"Zheng",
"Huangjie",
""
],
[
"He",
"Pengcheng",
""
],
[
"Zhou",
"Mingyuan",
""
]
] |
Through prompting, large-scale pre-trained models have become more expressive and powerful, gaining significant attention in recent years. Though these big models have zero-shot capabilities, in general, labeled data are still required to adapt them to downstream tasks. To overcome this critical limitation, we propose an unsupervised fine-tuning framework to directly fine-tune the model or prompt on the unlabeled target data. We demonstrate how to apply our method to both language-augmented vision and masked-language models by aligning the discrete distributions extracted from the prompts and target data. To verify our approach's applicability, we conduct extensive experiments on image classification, sentiment analysis, and natural language inference tasks. Across 13 image-related tasks and 15 language-related ones, the proposed approach achieves consistent improvements over the baselines.
|
1504.03213
|
Paolo Di Francesco
|
Paolo Di Francesco, Francesco Malandrino, Tim K. Forde, Luiz A.
DaSilva
|
A Sharing- and Competition-Aware Framework for Cellular Network
Evolution Planning
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mobile network operators are facing the difficult task of significantly
increasing capacity to meet projected demand while keeping CAPEX and OPEX down.
We argue that infrastructure sharing is a key consideration in operators'
planning of the evolution of their networks, and that such planning can be
viewed as a stage in the cognitive cycle. In this paper, we present a framework
to model this planning process while taking into account both the ability to
share resources and the constraints imposed by competition regulation (the
latter quantified using the Herfindahl index). Using real-world demand and
deployment data, we find that the ability to share infrastructure essentially
moves capacity from rural, sparsely populated areas (where some of the current
infrastructure can be decommissioned) to urban ones (where most of the
next-generation base stations would be deployed), with significant increases in
resource efficiency. Tight competition regulation somewhat limits the ability
to share but does not entirely jeopardize those gains, while having the
secondary effect of encouraging the wider deployment of next-generation
technologies.
|
[
{
"created": "Mon, 13 Apr 2015 15:19:48 GMT",
"version": "v1"
}
] |
2015-04-14
|
[
[
"Di Francesco",
"Paolo",
""
],
[
"Malandrino",
"Francesco",
""
],
[
"Forde",
"Tim K.",
""
],
[
"DaSilva",
"Luiz A.",
""
]
] |
Mobile network operators are facing the difficult task of significantly increasing capacity to meet projected demand while keeping CAPEX and OPEX down. We argue that infrastructure sharing is a key consideration in operators' planning of the evolution of their networks, and that such planning can be viewed as a stage in the cognitive cycle. In this paper, we present a framework to model this planning process while taking into account both the ability to share resources and the constraints imposed by competition regulation (the latter quantified using the Herfindahl index). Using real-world demand and deployment data, we find that the ability to share infrastructure essentially moves capacity from rural, sparsely populated areas (where some of the current infrastructure can be decommissioned) to urban ones (where most of the next-generation base stations would be deployed), with significant increases in resource efficiency. Tight competition regulation somewhat limits the ability to share but does not entirely jeopardize those gains, while having the secondary effect of encouraging the wider deployment of next-generation technologies.
|
1708.08142
|
Rodrigo de Lamare
|
R. C. de Lamare and Andr\'e Flores
|
Study of Set-Membership Kernel Adaptive Algorithms and Applications
|
4 figures, 6 pages
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Adaptive algorithms based on kernel structures have been a topic of
significant research over the past few years. The main advantage is that they
form a family of universal approximators, offering an elegant solution to
problems with nonlinearities. Nevertheless, these methods deal with kernel
expansions, creating a growing structure, also known as a dictionary, whose size
depends on the number of new inputs. In this paper we derive the set-membership
kernel-based normalized least-mean square (SM-NKLMS) algorithm, which is
capable of limiting the size of the dictionary created in stationary
environments. We also derive as an extension the set-membership kernelized
affine projection (SM-KAP) algorithm. Finally several experiments are presented
to compare the proposed SM-NKLMS and SM-KAP algorithms to the existing methods.
|
[
{
"created": "Sun, 27 Aug 2017 21:41:48 GMT",
"version": "v1"
}
] |
2017-08-29
|
[
[
"de Lamare",
"R. C.",
""
],
[
"Flores",
"André",
""
]
] |
Adaptive algorithms based on kernel structures have been a topic of significant research over the past few years. The main advantage is that they form a family of universal approximators, offering an elegant solution to problems with nonlinearities. Nevertheless, these methods deal with kernel expansions, creating a growing structure, also known as a dictionary, whose size depends on the number of new inputs. In this paper we derive the set-membership kernel-based normalized least-mean square (SM-NKLMS) algorithm, which is capable of limiting the size of the dictionary created in stationary environments. We also derive as an extension the set-membership kernelized affine projection (SM-KAP) algorithm. Finally several experiments are presented to compare the proposed SM-NKLMS and SM-KAP algorithms to the existing methods.
|
2007.10144
|
Guy Aridor
|
Guy Aridor and Yishay Mansour and Aleksandrs Slivkins and Zhiwei
Steven Wu
|
Competing Bandits: The Perils of Exploration Under Competition
|
merged and extended version of arXiv:1702.08533 and arXiv:1902.05590
| null | null | null |
cs.GT cs.LG econ.TH
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most online platforms strive to learn from interactions with users, and many
engage in exploration: making potentially suboptimal choices for the sake of
acquiring new information. We study the interplay between exploration and
competition: how such platforms balance the exploration for learning and the
competition for users. Here users play three distinct roles: they are customers
that generate revenue, they are sources of data for learning, and they are
self-interested agents which choose among the competing platforms.
We consider a stylized duopoly model in which two firms face the same
multi-armed bandit problem. Users arrive one by one and choose between the two
firms, so that each firm makes progress on its bandit problem only if it is
chosen. Through a mix of theoretical results and numerical simulations, we
study whether and to what extent competition incentivizes the adoption of
better bandit algorithms, and whether it leads to welfare increases for users.
We find that stark competition induces firms to commit to a "greedy" bandit
algorithm that leads to low welfare. However, weakening competition by
providing firms with some "free" users incentivizes better exploration
strategies and increases welfare. We investigate two channels for weakening the
competition: relaxing the rationality of users and giving one firm a
first-mover advantage. Our findings are closely related to the "competition vs.
innovation" relationship, and elucidate the first-mover advantage in the
digital economy.
|
[
{
"created": "Mon, 20 Jul 2020 14:19:08 GMT",
"version": "v1"
},
{
"created": "Tue, 29 Sep 2020 13:07:42 GMT",
"version": "v2"
},
{
"created": "Tue, 22 Dec 2020 14:59:25 GMT",
"version": "v3"
},
{
"created": "Tue, 2 Mar 2021 21:00:18 GMT",
"version": "v4"
},
{
"created": "Mon, 20 Sep 2021 23:30:13 GMT",
"version": "v5"
},
{
"created": "Tue, 14 Jun 2022 08:11:10 GMT",
"version": "v6"
},
{
"created": "Sun, 4 Dec 2022 19:36:50 GMT",
"version": "v7"
}
] |
2022-12-06
|
[
[
"Aridor",
"Guy",
""
],
[
"Mansour",
"Yishay",
""
],
[
"Slivkins",
"Aleksandrs",
""
],
[
"Wu",
"Zhiwei Steven",
""
]
] |
Most online platforms strive to learn from interactions with users, and many engage in exploration: making potentially suboptimal choices for the sake of acquiring new information. We study the interplay between exploration and competition: how such platforms balance the exploration for learning and the competition for users. Here users play three distinct roles: they are customers that generate revenue, they are sources of data for learning, and they are self-interested agents which choose among the competing platforms. We consider a stylized duopoly model in which two firms face the same multi-armed bandit problem. Users arrive one by one and choose between the two firms, so that each firm makes progress on its bandit problem only if it is chosen. Through a mix of theoretical results and numerical simulations, we study whether and to what extent competition incentivizes the adoption of better bandit algorithms, and whether it leads to welfare increases for users. We find that stark competition induces firms to commit to a "greedy" bandit algorithm that leads to low welfare. However, weakening competition by providing firms with some "free" users incentivizes better exploration strategies and increases welfare. We investigate two channels for weakening the competition: relaxing the rationality of users and giving one firm a first-mover advantage. Our findings are closely related to the "competition vs. innovation" relationship, and elucidate the first-mover advantage in the digital economy.
|
1407.7072
|
Seungyeon Kim
|
Seungyeon Kim, Haesun Park, Guy Lebanon
|
Fast Spammer Detection Using Structural Rank
|
8 pages, 1 figure
| null | null | null |
cs.IR cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Comments on products and news articles are growing rapidly and have become a
medium for measuring the quality of products and services. Consequently,
spammers have emerged in this area to bias comments in their favor. In this
paper, we propose an efficient spammer detection method using the structural
rank of author-specific term-document matrices. The use of structural rank was
found to be effective and far faster than similar methods.
|
[
{
"created": "Fri, 25 Jul 2014 22:33:49 GMT",
"version": "v1"
}
] |
2014-07-29
|
[
[
"Kim",
"Seungyeon",
""
],
[
"Park",
"Haesun",
""
],
[
"Lebanon",
"Guy",
""
]
] |
Comments on products and news articles are growing rapidly and have become a medium for measuring the quality of products and services. Consequently, spammers have emerged in this area to bias comments in their favor. In this paper, we propose an efficient spammer detection method using the structural rank of author-specific term-document matrices. The use of structural rank was found to be effective and far faster than similar methods.
|
1808.00143
|
Rodrigo de Lamare
|
R. M. Oliveira and R. C. de Lamare
|
Study of Polarization-Driven Shortening for Polar Codes Designed with
the Gaussian Approximation
|
5 pages, 3 figures
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a polarization-driven (PD) shortening technique for the
design of rate-compatible polar codes. The proposed shortening strategy
consists of reducing the generator matrix by relating its row index with the
channel polarization index. We assume that the shortened bits are known by both
the encoder and the decoder and employ successive cancellation (SC) for
decoding the shortened codes constructed by the proposed PD technique. A
performance analysis is then carried out based on the Spectrum Distance (SD).
Simulations show that the proposed PD-based shortened polar codes outperform
existing shortened polar codes.
|
[
{
"created": "Wed, 1 Aug 2018 02:34:45 GMT",
"version": "v1"
}
] |
2018-08-02
|
[
[
"Oliveira",
"R. M.",
""
],
[
"de Lamare",
"R. C.",
""
]
] |
This paper presents a polarization-driven (PD) shortening technique for the design of rate-compatible polar codes. The proposed shortening strategy consists of reducing the generator matrix by relating its row index with the channel polarization index. We assume that the shortened bits are known by both the encoder and the decoder and employ successive cancellation (SC) for decoding the shortened codes constructed by the proposed PD technique. A performance analysis is then carried out based on the Spectrum Distance (SD). Simulations show that the proposed PD-based shortened polar codes outperform existing shortened polar codes.
|
2303.13558
|
Yu Dong
|
Yu Dong, Jie Liang, Longbing Cao, Daniel Catchpoole
|
ClinicLens: Visual Analytics for Exploring and Optimizing the Testing
Capacity of Clinics given Uncertainty
| null | null | null | null |
cs.HC cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
Clinic testing plays a critical role in containing infectious diseases such
as COVID-19. However, one of the key research questions in fighting such
pandemics is how to optimize testing capacities across clinics. In particular,
domain experts expect to know exactly how to adjust the features that may
affect testing capacities, given that dynamics and uncertainty make this a
highly challenging problem. Hence, as a tool to support both policymakers and
clinicians, we collaborated with domain experts to build ClinicLens, an
interactive visual analytics system for exploring and optimizing the testing
capacities of clinics. ClinicLens houses a range of features based on an
aggregated set of COVID-19 data. It comprises a Back-end Engine and a Front-end
Visualization that take users through an iterative exploration chain of
extracting, training, and predicting testing-sensitive features and visual
representations. It also combines AI4VIS and visual analytics to demonstrate
how a clinic might optimize its testing capacity given the impacts of a range
of features. Three qualitative case studies along with feedback from
subject-matter experts validate that ClinicLens is both a useful and effective
tool for exploring the trends in COVID-19 and optimizing clinic testing
capacities across regions. The entire approach has been open-sourced online:
https://github.com/YuDong5018/clinic-lens.
|
[
{
"created": "Thu, 23 Mar 2023 01:34:56 GMT",
"version": "v1"
},
{
"created": "Tue, 28 Mar 2023 06:11:53 GMT",
"version": "v2"
}
] |
2023-03-29
|
[
[
"Dong",
"Yu",
""
],
[
"Liang",
"Jie",
""
],
[
"Cao",
"Longbing",
""
],
[
"Catchpoole",
"Daniel",
""
]
] |
Clinic testing plays a critical role in containing infectious diseases such as COVID-19. However, one of the key research questions in fighting such pandemics is how to optimize testing capacities across clinics. In particular, domain experts expect to know exactly how to adjust the features that may affect testing capacities, given that dynamics and uncertainty make this a highly challenging problem. Hence, as a tool to support both policymakers and clinicians, we collaborated with domain experts to build ClinicLens, an interactive visual analytics system for exploring and optimizing the testing capacities of clinics. ClinicLens houses a range of features based on an aggregated set of COVID-19 data. It comprises a Back-end Engine and a Front-end Visualization that take users through an iterative exploration chain of extracting, training, and predicting testing-sensitive features and visual representations. It also combines AI4VIS and visual analytics to demonstrate how a clinic might optimize its testing capacity given the impacts of a range of features. Three qualitative case studies along with feedback from subject-matter experts validate that ClinicLens is both a useful and effective tool for exploring the trends in COVID-19 and optimizing clinic testing capacities across regions. The entire approach has been open-sourced online: https://github.com/YuDong5018/clinic-lens.
|
1808.00356
|
Yongsung Kim
|
Yongsung Kim, Adam Fourney, Ece Kamar
|
Studying Preferences and Concerns about Information Disclosure in Email
Notifications
| null | null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The proliferation of network-connected devices and applications has resulted
in people receiving dozens, or hundreds, of notifications per day. When people
are in the presence of others, each notification poses some risk of accidental
information disclosure; onlookers may see notifications appear above the lock
screen of a mobile phone, on the periphery of a desktop or laptop display, or
projected onscreen during a presentation. In this paper, we quantify the
prevalence of these accidental disclosures in the context of email
notifications, and we study people's relevant preferences and concerns. Our
results are compiled from an exploratory retrospective survey of 131
respondents, and a separate contextual-labeling study in which 169 participants
labeled 1,040 meeting-email pairs. We find that, for 53% of people, at least 1
in 10 email notifications poses an information disclosure risk. We also find
that the real or perceived severity of these risks depends both on user
characteristics and attributes of the meeting or email (e.g. the number of
recipients or attendees). We conclude by exploring machine learning algorithms
to predict people's comfort levels given an email notification and a context,
then we present implications for the design of future contextually-relevant
notification systems.
|
[
{
"created": "Wed, 1 Aug 2018 15:07:14 GMT",
"version": "v1"
}
] |
2018-08-02
|
[
[
"Kim",
"Yongsung",
""
],
[
"Fourney",
"Adam",
""
],
[
"Kamar",
"Ece",
""
]
] |
The proliferation of network-connected devices and applications has resulted in people receiving dozens, or hundreds, of notifications per day. When people are in the presence of others, each notification poses some risk of accidental information disclosure; onlookers may see notifications appear above the lock screen of a mobile phone, on the periphery of a desktop or laptop display, or projected onscreen during a presentation. In this paper, we quantify the prevalence of these accidental disclosures in the context of email notifications, and we study people's relevant preferences and concerns. Our results are compiled from an exploratory retrospective survey of 131 respondents, and a separate contextual-labeling study in which 169 participants labeled 1,040 meeting-email pairs. We find that, for 53% of people, at least 1 in 10 email notifications poses an information disclosure risk. We also find that the real or perceived severity of these risks depends both on user characteristics and attributes of the meeting or email (e.g. the number of recipients or attendees). We conclude by exploring machine learning algorithms to predict people's comfort levels given an email notification and a context, then we present implications for the design of future contextually-relevant notification systems.
|
1805.01325
|
Li Zhang
|
Li Zhang
|
Choice revision on belief bases
| null | null | null | null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this contribution we explore choice revision, a sort of belief change in
which the new information is represented by a set of sentences and the agent
could accept some of the sentences while rejecting the others. We propose a
generalized version of expansion operation called partial expansion for
developing models of choice revision. By using the partial expansion and two
multiple contraction operations previously introduced in the literature, we
construct two kinds of choice revision on belief bases. For each of them we
propose a set of postulates and prove a partial or full representation theorem.
Furthermore, we investigate the operations of making up one's mind derived from
these two kinds of choice revision and also give the associated representation
theorems.
|
[
{
"created": "Thu, 3 May 2018 14:30:20 GMT",
"version": "v1"
}
] |
2018-05-04
|
[
[
"Zhang",
"Li",
""
]
] |
In this contribution we explore choice revision, a sort of belief change in which the new information is represented by a set of sentences and the agent could accept some of the sentences while rejecting the others. We propose a generalized version of expansion operation called partial expansion for developing models of choice revision. By using the partial expansion and two multiple contraction operations previously introduced in the literature, we construct two kinds of choice revision on belief bases. For each of them we propose a set of postulates and prove a partial or full representation theorem. Furthermore, we investigate the operations of making up one's mind derived from these two kinds of choice revision and also give the associated representation theorems.
|
1712.07861
|
Pierre Hauweele
|
Gauvain Devillez, Pierre Hauweele, Hadrien M\'elot
|
PHOEG Helps Obtaining Extremal Graphs
|
6 pages, 9 figures, 1 table, technical paper
| null | null | null |
cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Extremal Graph Theory aims to determine bounds for graph invariants as well
as the graphs attaining those bounds.
We are currently developing PHOEG, an ecosystem of tools designed to help
researchers in Extremal Graph Theory.
It uses a big relational database of undirected graphs and works with the
convex hull of the graphs as points in the invariants space in order to exactly
obtain the extremal graphs and optimal bounds on the invariants for some fixed
parameters. The results obtained on the restricted finite class of graphs can
later be used to infer conjectures. This database also allows us to make
queries on those graphs. Once the conjecture is defined, PHOEG goes one step
further by helping in the process of designing a proof guided by successive
applications of transformations from any graph to an extremal graph. To this
aim, we use a second database based on a graph data model.
The paper presents ideas and techniques used in PHOEG to assist the study of
Extremal Graph Theory.
|
[
{
"created": "Thu, 21 Dec 2017 10:31:46 GMT",
"version": "v1"
}
] |
2017-12-22
|
[
[
"Devillez",
"Gauvain",
""
],
[
"Hauweele",
"Pierre",
""
],
[
"Mélot",
"Hadrien",
""
]
] |
Extremal Graph Theory aims to determine bounds for graph invariants as well as the graphs attaining those bounds. We are currently developing PHOEG, an ecosystem of tools designed to help researchers in Extremal Graph Theory. It uses a big relational database of undirected graphs and works with the convex hull of the graphs as points in the invariants space in order to exactly obtain the extremal graphs and optimal bounds on the invariants for some fixed parameters. The results obtained on the restricted finite class of graphs can later be used to infer conjectures. This database also allows us to make queries on those graphs. Once the conjecture is defined, PHOEG goes one step further by helping in the process of designing a proof guided by successive applications of transformations from any graph to an extremal graph. To this aim, we use a second database based on a graph data model. The paper presents ideas and techniques used in PHOEG to assist the study of Extremal Graph Theory.
|
2405.01803
|
Geyu Huang
|
Xin Tan, Yan Gong, Geyu Huang, Haohua Wu, Li Zhang
|
How to Gain Commit Rights in Modern Top Open Source Communities?
|
23 pages,5 figures,FSE 2024
|
Proceedings of the ACM on Software Engineering (PACMSE) Issue FSE
2024
|
10.1145/3660784
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The success of open source software (OSS) projects relies on voluntary
contributions from various community roles. Being a committer signifies gaining
trust and higher privileges. Substantial studies have focused on the
requirements of becoming a committer, but most of them are based on interviews
or several hypotheses, lacking a comprehensive understanding of committers'
qualifications. We explore both the policies and practical implementations of
committer qualifications in modern top OSS communities. Through a thematic
analysis of these policies, we construct a taxonomy of committer
qualifications, consisting of 26 codes categorized into nine themes, including
Personnel-related to Project, Communication, and Long-term Participation. We
also highlight the variations in committer qualifications emphasized in
different OSS community governance models. For example, projects following the
core maintainer model value project comprehension, while projects following the
company-backed model place significant emphasis on user issue resolution. Then,
we propose eight sets of metrics and perform survival analysis on two
representative OSS projects to understand how these qualifications are
implemented in practice. We find that the probability of gaining commit rights
decreases as participation time passes. The selection criteria in practice are
generally consistent with the community policies. Developers who submit
high-quality code, actively engage in code review, and make extensive
contributions to related projects are more likely to be granted commit rights.
However, there are some qualifications that do not align precisely, and some
are not adequately evaluated. This study contributes to the understanding of
trust establishment in modern top OSS communities, assists communities in
better allocating commit rights, and supports developers in achieving
self-actualization through OSS participation.
|
[
{
"created": "Fri, 3 May 2024 01:23:06 GMT",
"version": "v1"
},
{
"created": "Mon, 6 May 2024 06:31:21 GMT",
"version": "v2"
},
{
"created": "Thu, 16 May 2024 10:16:20 GMT",
"version": "v3"
}
] |
2024-05-17
|
[
[
"Tan",
"Xin",
""
],
[
"Gong",
"Yan",
""
],
[
"Huang",
"Geyu",
""
],
[
"Wu",
"Haohua",
""
],
[
"Zhang",
"Li",
""
]
] |
The success of open source software (OSS) projects relies on voluntary contributions from various community roles. Being a committer signifies gaining trust and higher privileges. Substantial studies have focused on the requirements of becoming a committer, but most of them are based on interviews or several hypotheses, lacking a comprehensive understanding of committers' qualifications. We explore both the policies and practical implementations of committer qualifications in modern top OSS communities. Through a thematic analysis of these policies, we construct a taxonomy of committer qualifications, consisting of 26 codes categorized into nine themes, including Personnel-related to Project, Communication, and Long-term Participation. We also highlight the variations in committer qualifications emphasized in different OSS community governance models. For example, projects following the core maintainer model value project comprehension, while projects following the company-backed model place significant emphasis on user issue resolution. Then, we propose eight sets of metrics and perform survival analysis on two representative OSS projects to understand how these qualifications are implemented in practice. We find that the probability of gaining commit rights decreases as participation time passes. The selection criteria in practice are generally consistent with the community policies. Developers who submit high-quality code, actively engage in code review, and make extensive contributions to related projects are more likely to be granted commit rights. However, there are some qualifications that do not align precisely, and some are not adequately evaluated. This study contributes to the understanding of trust establishment in modern top OSS communities, assists communities in better allocating commit rights, and supports developers in achieving self-actualization through OSS participation.
|
2209.04800
|
Fouad Sukkar
|
Fouad Sukkar, Jennifer Wakulicz, Ki Myung Brian Lee, Weiming Zhi, and
Robert Fitch
|
Multi-query Robotic Manipulator Task Sequencing with Gromov-Hausdorff
Approximations
|
Submitted to IEEE Transactions on Robotics (TRO). 15 Pages. 13
Figures
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Robotic manipulator applications often require efficient online motion
planning. When completing multiple tasks, sequence order and choice of goal
configuration can have a drastic impact on planning performance. This is well
known as the robot task sequencing problem (RTSP). Existing general purpose
RTSP algorithms are susceptible to producing poor quality solutions or fail
entirely when available computation time is restricted. We propose a new
multi-query task sequencing method designed to operate in semi-structured
environments with a combination of static and non-static obstacles. Our method
intentionally trades off workspace generality for planning efficiency. Given a
user-defined task space with static obstacles, we compute a subspace
decomposition. The key idea is to establish approximate isometries known as
$\epsilon$-Gromov-Hausdorff approximations that identify points that are close
to one another in both task and configuration space. Importantly, we prove
bounded suboptimality guarantees on the lengths of trajectories within these
subspaces. These bounding relations further imply that trajectories within the
same subspace can be smoothly concatenated which we show is useful for
determining efficient task sequences. We evaluate our method with several
kinematic configurations in a complex simulated environment, achieving up to 3x
faster motion planning and 5x lower maximum trajectory jerk compared to
baselines.
|
[
{
"created": "Sun, 11 Sep 2022 06:31:55 GMT",
"version": "v1"
},
{
"created": "Mon, 22 Jul 2024 14:34:53 GMT",
"version": "v2"
}
] |
2024-07-23
|
[
[
"Sukkar",
"Fouad",
""
],
[
"Wakulicz",
"Jennifer",
""
],
[
"Lee",
"Ki Myung Brian",
""
],
[
"Zhi",
"Weiming",
""
],
[
"Fitch",
"Robert",
""
]
] |
Robotic manipulator applications often require efficient online motion planning. When completing multiple tasks, sequence order and choice of goal configuration can have a drastic impact on planning performance. This is well known as the robot task sequencing problem (RTSP). Existing general purpose RTSP algorithms are susceptible to producing poor quality solutions or fail entirely when available computation time is restricted. We propose a new multi-query task sequencing method designed to operate in semi-structured environments with a combination of static and non-static obstacles. Our method intentionally trades off workspace generality for planning efficiency. Given a user-defined task space with static obstacles, we compute a subspace decomposition. The key idea is to establish approximate isometries known as $\epsilon$-Gromov-Hausdorff approximations that identify points that are close to one another in both task and configuration space. Importantly, we prove bounded suboptimality guarantees on the lengths of trajectories within these subspaces. These bounding relations further imply that trajectories within the same subspace can be smoothly concatenated which we show is useful for determining efficient task sequences. We evaluate our method with several kinematic configurations in a complex simulated environment, achieving up to 3x faster motion planning and 5x lower maximum trajectory jerk compared to baselines.
|
2110.11414
|
Alice Ruget
|
Alice Ruget, Max Tyler, Germ\'an Mora Mart\'in, Stirling Scholes, Feng
Zhu, Istvan Gyongy, Brent Hearn, Steve McLaughlin, Abderrahim Halimi,
Jonathan Leach
|
Real-time, low-cost multi-person 3D pose estimation
| null | null |
10.17861/e85a6eae-13f9-4bcd-9dff-73f8107c09a2
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The process of tracking human anatomy in computer vision is referred to as pose
estimation, and it is used in fields ranging from gaming to surveillance.
Three-dimensional pose estimation traditionally requires advanced equipment,
such as multiple linked intensity cameras or high-resolution time-of-flight
cameras to produce depth images. However, there are applications, e.g.~consumer
electronics, where significant constraints are placed on the size, power
consumption, weight and cost of the usable technology. Here, we demonstrate
that computational imaging methods can achieve accurate pose estimation and
overcome the apparent limitations of time-of-flight sensors designed for much
simpler tasks. The sensor we use is already widely integrated in consumer-grade
mobile devices, and despite its low spatial resolution, only 4$\times$4 pixels,
our proposed Pixels2Pose system transforms its data into accurate depth maps
and 3D pose data of multiple people up to a distance of 3 m from the sensor. We
are able to generate depth maps at a resolution of 32$\times$32 and 3D
localization of body parts with an error of only $\approx$10 cm at a frame
rate of 7 fps. This work opens up promising real-life applications in scenarios
that were previously restricted by the advanced hardware requirements and cost
of time-of-flight technology.
|
[
{
"created": "Mon, 11 Oct 2021 12:42:00 GMT",
"version": "v1"
},
{
"created": "Fri, 19 Aug 2022 10:56:07 GMT",
"version": "v2"
},
{
"created": "Wed, 24 Aug 2022 14:57:58 GMT",
"version": "v3"
}
] |
2022-08-25
|
[
[
"Ruget",
"Alice",
""
],
[
"Tyler",
"Max",
""
],
[
"Martín",
"Germán Mora",
""
],
[
"Scholes",
"Stirling",
""
],
[
"Zhu",
"Feng",
""
],
[
"Gyongy",
"Istvan",
""
],
[
"Hearn",
"Brent",
""
],
[
"McLaughlin",
"Steve",
""
],
[
"Halimi",
"Abderrahim",
""
],
[
"Leach",
"Jonathan",
""
]
] |
The process of tracking human anatomy in computer vision is referred to as pose estimation, and it is used in fields ranging from gaming to surveillance. Three-dimensional pose estimation traditionally requires advanced equipment, such as multiple linked intensity cameras or high-resolution time-of-flight cameras to produce depth images. However, there are applications, e.g.~consumer electronics, where significant constraints are placed on the size, power consumption, weight and cost of the usable technology. Here, we demonstrate that computational imaging methods can achieve accurate pose estimation and overcome the apparent limitations of time-of-flight sensors designed for much simpler tasks. The sensor we use is already widely integrated in consumer-grade mobile devices, and despite its low spatial resolution, only 4$\times$4 pixels, our proposed Pixels2Pose system transforms its data into accurate depth maps and 3D pose data of multiple people up to a distance of 3 m from the sensor. We are able to generate depth maps at a resolution of 32$\times$32 and 3D localization of body parts with an error of only $\approx$10 cm at a frame rate of 7 fps. This work opens up promising real-life applications in scenarios that were previously restricted by the advanced hardware requirements and cost of time-of-flight technology.
|
2405.16542
|
Yang Cao
|
Yang Cao, and Wei Zhang
|
Mamba4KT: An Efficient and Effective Mamba-based Knowledge Tracing Model
| null | null | null | null |
cs.AI cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Knowledge tracing (KT) enhances student learning by leveraging past
performance to predict future performance. Current research utilizes models
based on attention mechanisms and recurrent neural network structures to
capture long-term dependencies and correlations between exercises, aiming to
improve model accuracy. The growing amount of data in smart education
scenarios poses a challenge in terms of time and space consumption for
knowledge tracing models. However, existing research often overlooks the
efficiency of model training and inference and the constraints of training
resources. Recognizing the significance of prioritizing model efficiency and
resource usage in knowledge tracing, we introduce Mamba4KT. This novel model is
the first to explore enhanced efficiency and resource utilization in knowledge
tracing. We also examine the interpretability of the Mamba structure at both
the sequence level and the exercise level.
Experimental findings across three public datasets demonstrate that Mamba4KT
achieves comparable prediction accuracy to state-of-the-art models while
significantly improving training and inference efficiency and resource
utilization. As educational data continues to grow, our work suggests a
promising research direction for knowledge tracing that improves model
prediction accuracy, model efficiency, resource utilization, and
interpretability simultaneously.
|
[
{
"created": "Sun, 26 May 2024 12:26:03 GMT",
"version": "v1"
}
] |
2024-05-28
|
[
[
"Cao",
"Yang",
""
],
[
"Zhang",
"Wei",
""
]
] |
Knowledge tracing (KT) enhances student learning by leveraging past performance to predict future performance. Current research utilizes models based on attention mechanisms and recurrent neural network structures to capture long-term dependencies and correlations between exercises, aiming to improve model accuracy. The growing amount of data in smart education scenarios poses a challenge in terms of time and space consumption for knowledge tracing models. However, existing research often overlooks the efficiency of model training and inference and the constraints of training resources. Recognizing the significance of prioritizing model efficiency and resource usage in knowledge tracing, we introduce Mamba4KT. This novel model is the first to explore enhanced efficiency and resource utilization in knowledge tracing. We also examine the interpretability of the Mamba structure at both the sequence level and the exercise level. Experimental findings across three public datasets demonstrate that Mamba4KT achieves comparable prediction accuracy to state-of-the-art models while significantly improving training and inference efficiency and resource utilization. As educational data continues to grow, our work suggests a promising research direction for knowledge tracing that improves model prediction accuracy, model efficiency, resource utilization, and interpretability simultaneously.
|
2304.02737
|
Melissa Dell
|
Jacob Carlson and Tom Bryan and Melissa Dell
|
Efficient OCR for Building a Diverse Digital History
| null | null | null | null |
cs.CV cs.DL econ.GN q-fin.EC
|
http://creativecommons.org/licenses/by/4.0/
|
Thousands of users consult digital archives daily, but the information they
can access is unrepresentative of the diversity of documentary history. The
sequence-to-sequence architecture typically used for optical character
recognition (OCR) - which jointly learns a vision and language model - is
poorly extensible to low-resource document collections, as learning a
language-vision model requires extensive labeled sequences and compute. This
study models OCR as a character level image retrieval problem, using a
contrastively trained vision encoder. Because the model only learns characters'
visual features, it is more sample efficient and extensible than existing
architectures, enabling accurate OCR in settings where existing solutions fail.
Crucially, the model opens new avenues for community engagement in making
digital history more representative of documentary history.
|
[
{
"created": "Wed, 5 Apr 2023 20:36:04 GMT",
"version": "v1"
},
{
"created": "Thu, 25 Jul 2024 20:49:45 GMT",
"version": "v2"
}
] |
2024-07-29
|
[
[
"Carlson",
"Jacob",
""
],
[
"Bryan",
"Tom",
""
],
[
"Dell",
"Melissa",
""
]
] |
Thousands of users consult digital archives daily, but the information they can access is unrepresentative of the diversity of documentary history. The sequence-to-sequence architecture typically used for optical character recognition (OCR) - which jointly learns a vision and language model - is poorly extensible to low-resource document collections, as learning a language-vision model requires extensive labeled sequences and compute. This study models OCR as a character level image retrieval problem, using a contrastively trained vision encoder. Because the model only learns characters' visual features, it is more sample efficient and extensible than existing architectures, enabling accurate OCR in settings where existing solutions fail. Crucially, the model opens new avenues for community engagement in making digital history more representative of documentary history.
|
2112.00708
|
Wei Ren
|
Wei Ren, Eleftherios Vlahakis, Nikolaos Athanasopoulos and Raphael
Jungers
|
Optimal Resource Scheduling and Allocation in Distributed Computing
Systems
|
This work has been submitted to ACC2022
| null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
The essence of distributed computing systems is how to schedule incoming
requests and how to allocate all computing nodes to minimize both time and
computation costs. In this paper, we propose a cost-aware optimal scheduling
and allocation strategy for distributed computing systems while minimizing the
cost function including response time and service cost. First, based on the
proposed cost function, we derive the optimal request scheduling policy and the
optimal resource allocation policy synchronously. Second, considering the
effects of incoming requests on the scheduling policy, the additive increase
multiplicative decrease (AIMD) mechanism is implemented to model the relation
between the request arrival and scheduling. In particular, the AIMD parameters
can be designed such that the derived optimal strategy is still valid. Finally,
a numerical example is presented to illustrate the derived results.
|
[
{
"created": "Fri, 15 Oct 2021 11:45:32 GMT",
"version": "v1"
}
] |
2021-12-02
|
[
[
"Ren",
"Wei",
""
],
[
"Vlahakis",
"Eleftherios",
""
],
[
"Athanasopoulos",
"Nikolaos",
""
],
[
"Jungers",
"Raphael",
""
]
] |
The essence of distributed computing systems is how to schedule incoming requests and how to allocate all computing nodes to minimize both time and computation costs. In this paper, we propose a cost-aware optimal scheduling and allocation strategy for distributed computing systems while minimizing the cost function including response time and service cost. First, based on the proposed cost function, we derive the optimal request scheduling policy and the optimal resource allocation policy synchronously. Second, considering the effects of incoming requests on the scheduling policy, the additive increase multiplicative decrease (AIMD) mechanism is implemented to model the relation between the request arrival and scheduling. In particular, the AIMD parameters can be designed such that the derived optimal strategy is still valid. Finally, a numerical example is presented to illustrate the derived results.
|
2202.13328
|
Idan Amir
|
Idan Amir, Roi Livni, Nathan Srebro
|
Thinking Outside the Ball: Optimal Learning with Gradient Descent for
Generalized Linear Stochastic Convex Optimization
| null | null | null | null |
cs.LG math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider linear prediction with a convex Lipschitz loss, or more
generally, stochastic convex optimization problems of generalized linear form,
i.e.~where each instantaneous loss is a scalar convex function of a linear
function. We show that in this setting, early stopped Gradient Descent (GD),
without any explicit regularization or projection, ensures excess error at most
$\epsilon$ (compared to the best possible with unit Euclidean norm) with an
optimal, up to logarithmic factors, sample complexity of
$\tilde{O}(1/\epsilon^2)$ and only $\tilde{O}(1/\epsilon^2)$ iterations. This
contrasts with general stochastic convex optimization, where
$\Omega(1/\epsilon^4)$ iterations are needed [Amir et al., 2021b]. The lower
iteration complexity is ensured by leveraging uniform convergence rather than
stability. But instead of uniform convergence in a norm ball, which we show can
guarantee suboptimal learning using $\Theta(1/\epsilon^4)$ samples, we rely on
uniform convergence in a distribution-dependent ball.
|
[
{
"created": "Sun, 27 Feb 2022 09:41:43 GMT",
"version": "v1"
},
{
"created": "Sun, 30 Oct 2022 13:16:36 GMT",
"version": "v2"
}
] |
2022-11-01
|
[
[
"Amir",
"Idan",
""
],
[
"Livni",
"Roi",
""
],
[
"Srebro",
"Nathan",
""
]
] |
We consider linear prediction with a convex Lipschitz loss, or more generally, stochastic convex optimization problems of generalized linear form, i.e.~where each instantaneous loss is a scalar convex function of a linear function. We show that in this setting, early stopped Gradient Descent (GD), without any explicit regularization or projection, ensures excess error at most $\epsilon$ (compared to the best possible with unit Euclidean norm) with an optimal, up to logarithmic factors, sample complexity of $\tilde{O}(1/\epsilon^2)$ and only $\tilde{O}(1/\epsilon^2)$ iterations. This contrasts with general stochastic convex optimization, where $\Omega(1/\epsilon^4)$ iterations are needed [Amir et al., 2021b]. The lower iteration complexity is ensured by leveraging uniform convergence rather than stability. But instead of uniform convergence in a norm ball, which we show can guarantee suboptimal learning using $\Theta(1/\epsilon^4)$ samples, we rely on uniform convergence in a distribution-dependent ball.
|
2202.02131
|
Konstantina Nikita S
|
Panagiota Karatza, Kalliopi V. Dalakleidi, Maria Athanasiou,
Konstantina S. Nikita
|
Interpretability methods of machine learning algorithms with
applications in breast cancer diagnosis
|
2021 43rd Annual International Conference of the IEEE Engineering in
Medicine & Biology Society (EMBC)
| null |
10.1109/EMBC46164.2021.9630556
| null |
cs.LG cs.AI physics.med-ph
|
http://creativecommons.org/licenses/by/4.0/
|
Early detection of breast cancer is a powerful tool towards decreasing its
socioeconomic burden. Although artificial intelligence (AI) methods have shown
remarkable results towards this goal, their "black box" nature hinders their
wide adoption in clinical practice. To address the need for AI guided breast
cancer diagnosis, interpretability methods can be utilized. In this study, we
used AI methods, i.e., Random Forests (RF), Neural Networks (NN) and Ensembles
of Neural Networks (ENN), towards this goal and explained and optimized their
performance through interpretability techniques, such as the Global Surrogate
(GS) method, the Individual Conditional Expectation (ICE) plots and the Shapley
values (SV). The Wisconsin Diagnostic Breast Cancer (WDBC) dataset of the open
UCI repository was used for the training and evaluation of the AI algorithms.
The best performance for breast cancer diagnosis was achieved by the proposed
ENN (96.6% accuracy and 0.96 area under the ROC curve), and its predictions
were explained by ICE plots, proving that its decisions were compliant with
current medical knowledge and can be further utilized to gain new insights in
the pathophysiological mechanisms of breast cancer. Feature selection based on
features' importance according to the GS model improved the performance of the
RF (leading the accuracy from 96.49% to 97.18% and the area under the ROC curve
from 0.96 to 0.97) and feature selection based on features' importance
according to SV improved the performance of the NN (leading the accuracy from
94.6% to 95.53% and the area under the ROC curve from 0.94 to 0.95). Compared
to other approaches on the same dataset, our proposed models demonstrated
state-of-the-art performance while being interpretable.
|
[
{
"created": "Fri, 4 Feb 2022 13:41:30 GMT",
"version": "v1"
}
] |
2022-02-07
|
[
[
"Karatza",
"Panagiota",
""
],
[
"Dalakleidi",
"Kalliopi V.",
""
],
[
"Athanasiou",
"Maria",
""
],
[
"Nikita",
"Konstantina S.",
""
]
] |
Early detection of breast cancer is a powerful tool towards decreasing its socioeconomic burden. Although artificial intelligence (AI) methods have shown remarkable results towards this goal, their "black box" nature hinders their wide adoption in clinical practice. To address the need for AI guided breast cancer diagnosis, interpretability methods can be utilized. In this study, we used AI methods, i.e., Random Forests (RF), Neural Networks (NN) and Ensembles of Neural Networks (ENN), towards this goal and explained and optimized their performance through interpretability techniques, such as the Global Surrogate (GS) method, the Individual Conditional Expectation (ICE) plots and the Shapley values (SV). The Wisconsin Diagnostic Breast Cancer (WDBC) dataset of the open UCI repository was used for the training and evaluation of the AI algorithms. The best performance for breast cancer diagnosis was achieved by the proposed ENN (96.6% accuracy and 0.96 area under the ROC curve), and its predictions were explained by ICE plots, proving that its decisions were compliant with current medical knowledge and can be further utilized to gain new insights in the pathophysiological mechanisms of breast cancer. Feature selection based on features' importance according to the GS model improved the performance of the RF (leading the accuracy from 96.49% to 97.18% and the area under the ROC curve from 0.96 to 0.97) and feature selection based on features' importance according to SV improved the performance of the NN (leading the accuracy from 94.6% to 95.53% and the area under the ROC curve from 0.94 to 0.95). Compared to other approaches on the same dataset, our proposed models demonstrated state-of-the-art performance while being interpretable.
|
2306.02739
|
Benjamin Alt
|
Benjamin Alt, Franklin Kenghagho Kenfack, Andrei Haidu, Darko Katic,
Rainer J\"akel, Michael Beetz
|
Knowledge-Driven Robot Program Synthesis from Human VR Demonstrations
|
10 pages, 11 figures, accepted at the 20th International Conference
on Principles of Knowledge Representation and Reasoning (KR2023,
https://kr.org/KR2023)
| null | null | null |
cs.RO cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Aging societies, labor shortages and increasing wage costs call for
assistance robots capable of autonomously performing a wide array of real-world
tasks. Such open-ended robotic manipulation requires not only powerful
knowledge representations and reasoning (KR&R) algorithms, but also methods for
humans to instruct robots what tasks to perform and how to perform them. In
this paper, we present a system for automatically generating executable robot
control programs from human task demonstrations in virtual reality (VR). We
leverage common-sense knowledge and game engine-based physics to semantically
interpret human VR demonstrations, as well as an expressive and general task
representation and automatic path planning and code generation, embedded into a
state-of-the-art cognitive architecture. We demonstrate our approach in the
context of force-sensitive fetch-and-place for a robotic shopping assistant.
The source code is available at
https://github.com/ease-crc/vr-program-synthesis.
|
[
{
"created": "Mon, 5 Jun 2023 09:37:53 GMT",
"version": "v1"
},
{
"created": "Mon, 3 Jul 2023 08:57:00 GMT",
"version": "v2"
}
] |
2023-07-04
|
[
[
"Alt",
"Benjamin",
""
],
[
"Kenfack",
"Franklin Kenghagho",
""
],
[
"Haidu",
"Andrei",
""
],
[
"Katic",
"Darko",
""
],
[
"Jäkel",
"Rainer",
""
],
[
"Beetz",
"Michael",
""
]
] |
Aging societies, labor shortages and increasing wage costs call for assistance robots capable of autonomously performing a wide array of real-world tasks. Such open-ended robotic manipulation requires not only powerful knowledge representations and reasoning (KR&R) algorithms, but also methods for humans to instruct robots what tasks to perform and how to perform them. In this paper, we present a system for automatically generating executable robot control programs from human task demonstrations in virtual reality (VR). We leverage common-sense knowledge and game engine-based physics to semantically interpret human VR demonstrations, as well as an expressive and general task representation and automatic path planning and code generation, embedded into a state-of-the-art cognitive architecture. We demonstrate our approach in the context of force-sensitive fetch-and-place for a robotic shopping assistant. The source code is available at https://github.com/ease-crc/vr-program-synthesis.
|
1710.04688
|
Shadrokh Samavi
|
Shadrokh Samavi and Mohammad Reza Jahangir
|
Reduction of Look Up Tables for Computation of Reciprocal of Square
Roots
|
6 pages, 8 figures, 3 tables
| null | null | null |
cs.NA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Among many existing algorithms, convergence methods are the most popular
means of computing the square root and the reciprocal of the square root of
numbers. An initial approximation is required in these methods. Look-up tables
(LUTs) are employed to produce the initial approximation. In this paper, a
number of methods are suggested to reduce the size of the look-up tables. The
precision of the initial approximation plays an important role in the quality
of the final result. There are constraints on the use of a LUT in terms of its
size and its access time. Therefore, the optimization of the LUTs must be done
in a way that minimizes hardware while offering acceptable convergence speed
and accuracy.
|
[
{
"created": "Thu, 12 Oct 2017 19:00:24 GMT",
"version": "v1"
}
] |
2017-10-16
|
[
[
"Samavi",
"Shadrokh",
""
],
[
"Jahangir",
"Mohammad Reza",
""
]
] |
Among many existing algorithms, convergence methods are the most popular means of computing the square root and the reciprocal of the square root of numbers. An initial approximation is required in these methods. Look-up tables (LUTs) are employed to produce the initial approximation. In this paper, a number of methods are suggested to reduce the size of the look-up tables. The precision of the initial approximation plays an important role in the quality of the final result. There are constraints on the use of a LUT in terms of its size and its access time. Therefore, the optimization of the LUTs must be done in a way that minimizes hardware while offering acceptable convergence speed and accuracy.
|
2305.10846
|
Jesse Heyninck
|
Jesse Heyninck and Bart Bogaerts
|
Non-deterministic approximation operators: ultimate operators,
semi-equilibrium semantics and aggregates (full version)
|
Paper presented at the 39th International Conference on Logic
Programming (ICLP 2023)
| null | null | null |
cs.AI cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
Approximation fixpoint theory (AFT) is an abstract and general algebraic
framework for studying the semantics of non-monotonic logics. In recent work,
AFT was generalized to non-deterministic operators, i.e.\ operators whose range
are sets of elements rather than single elements. In this paper, we make three
further contributions to non-deterministic AFT: (1) we define and study
ultimate approximations of non-deterministic operators, (2) we give an
algebraic formulation of the semi-equilibrium semantics by Amendola, et al.,
and (3) we generalize the characterisations of disjunctive logic programs to
disjunctive logic programs with aggregates.
|
[
{
"created": "Thu, 18 May 2023 09:59:12 GMT",
"version": "v1"
}
] |
2023-05-19
|
[
[
"Heyninck",
"Jesse",
""
],
[
"Bogaerts",
"Bart",
""
]
] |
Approximation fixpoint theory (AFT) is an abstract and general algebraic framework for studying the semantics of non-monotonic logics. In recent work, AFT was generalized to non-deterministic operators, i.e.\ operators whose range are sets of elements rather than single elements. In this paper, we make three further contributions to non-deterministic AFT: (1) we define and study ultimate approximations of non-deterministic operators, (2) we give an algebraic formulation of the semi-equilibrium semantics by Amendola, et al., and (3) we generalize the characterisations of disjunctive logic programs to disjunctive logic programs with aggregates.
|
2102.03939
|
Nikolaos Zioulis Mr.
|
Nikolaos Zioulis, Federico Alvarez, Dimitrios Zarpalas, Petros Daras
|
Single-Shot Cuboids: Geodesics-based End-to-end Manhattan Aligned Layout
Estimation from Spherical Panoramas
| null | null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
It has been shown that global scene understanding tasks like layout
estimation can benefit from wider fields of view, and specifically spherical
panoramas. While much progress has been made recently, all previous approaches
rely on intermediate representations and postprocessing to produce
Manhattan-aligned estimates. In this work we show how to estimate full room
layouts in a single-shot, eliminating the need for postprocessing. Our work is
the first to directly infer Manhattan-aligned outputs. To achieve this, our
data-driven model exploits direct coordinate regression and is supervised
end-to-end. As a result, we can explicitly add quasi-Manhattan constraints,
which set the necessary conditions for a homography-based Manhattan alignment
module. Finally, we introduce the geodesic heatmaps and loss and a
boundary-aware center of mass calculation that facilitate higher quality
keypoint estimation in the spherical domain. Our models and code are publicly
available at https://vcl3d.github.io/SingleShotCuboids/.
|
[
{
"created": "Sun, 7 Feb 2021 22:52:59 GMT",
"version": "v1"
},
{
"created": "Tue, 9 Feb 2021 08:46:37 GMT",
"version": "v2"
}
] |
2021-02-10
|
[
[
"Zioulis",
"Nikolaos",
""
],
[
"Alvarez",
"Federico",
""
],
[
"Zarpalas",
"Dimitrios",
""
],
[
"Daras",
"Petros",
""
]
] |
It has been shown that global scene understanding tasks like layout estimation can benefit from wider fields of view, and specifically spherical panoramas. While much progress has been made recently, all previous approaches rely on intermediate representations and postprocessing to produce Manhattan-aligned estimates. In this work we show how to estimate full room layouts in a single-shot, eliminating the need for postprocessing. Our work is the first to directly infer Manhattan-aligned outputs. To achieve this, our data-driven model exploits direct coordinate regression and is supervised end-to-end. As a result, we can explicitly add quasi-Manhattan constraints, which set the necessary conditions for a homography-based Manhattan alignment module. Finally, we introduce the geodesic heatmaps and loss and a boundary-aware center of mass calculation that facilitate higher quality keypoint estimation in the spherical domain. Our models and code are publicly available at https://vcl3d.github.io/SingleShotCuboids/.
|
2406.11342
|
Zhao Zhuo
|
Zhao Zhuo, Rongzhen Li, Kai Liu, Huhai Zou, KaiMao Li, Jie Yu, Tianhao
Sun, Qingbo Wu
|
KAOS: Large Model Multi-Agent Operating System
|
The content is highly controversial and needs to be withdrawn
| null | null | null |
cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The intelligent interaction model based on large models reduces the
differences in user experience across various system platforms but faces
challenges in multi-agent collaboration and resource sharing. To demonstrate a
uniform user experience across different foundational software platforms and
address resource coordination management challenges, this paper proposes KAOS,
a multi-agent operating system based on the open-source Kylin. The research
method involves empowering agents with large models to serve applications.
First, we introduce management role agents and vertical multi-agent
collaboration to construct or replace typical application software. Second, we
study system-level shared resource scheduling strategies to enhance user
experience and optimize resource utilization. Finally, we validate the
efficiency and superiority of the large model multi-agent operating system
through real applications and intelligence scoring. The feasibility of this
system is demonstrated, providing a new perspective for the development of
multi-agent operating systems. Experimental results show significant advantages
of multi-agent collaboration in various application scenarios.
|
[
{
"created": "Mon, 17 Jun 2024 08:59:32 GMT",
"version": "v1"
},
{
"created": "Tue, 6 Aug 2024 08:42:18 GMT",
"version": "v2"
},
{
"created": "Fri, 9 Aug 2024 01:45:39 GMT",
"version": "v3"
}
] |
2024-08-12
|
[
[
"Zhuo",
"Zhao",
""
],
[
"Li",
"Rongzhen",
""
],
[
"Liu",
"Kai",
""
],
[
"Zou",
"Huhai",
""
],
[
"Li",
"KaiMao",
""
],
[
"Yu",
"Jie",
""
],
[
"Sun",
"Tianhao",
""
],
[
"Wu",
"Qingbo",
""
]
] |
The intelligent interaction model based on large models reduces the differences in user experience across various system platforms but faces challenges in multi-agent collaboration and resource sharing. To demonstrate a uniform user experience across different foundational software platforms and address resource coordination management challenges, this paper proposes KAOS, a multi-agent operating system based on the open-source Kylin. The research method involves empowering agents with large models to serve applications. First, we introduce management role agents and vertical multi-agent collaboration to construct or replace typical application software. Second, we study system-level shared resource scheduling strategies to enhance user experience and optimize resource utilization. Finally, we validate the efficiency and superiority of the large model multi-agent operating system through real applications and intelligence scoring. The feasibility of this system is demonstrated, providing a new perspective for the development of multi-agent operating systems. Experimental results show significant advantages of multi-agent collaboration in various application scenarios.
|
2106.11569
|
Herv\'e Tal\'e Kalachi
|
Herv\'e Tale Kalachi and Hermann Tchatchiem Kamche
|
On the Rank Decoding Problem Over Finite Principal Ideal Rings
|
20 pages
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The rank decoding problem has been the subject of much attention in this last
decade. This problem, which is at the base of the security of public-key
cryptosystems based on rank metric codes, is traditionally studied over finite
fields. But the recent generalizations of certain classes of rank-metric codes
from finite fields to finite rings have naturally created interest in
tackling the rank decoding problem in the case of finite rings. In this paper, we
show that solving the rank decoding problem over finite principal ideal rings
is at least as hard as the rank decoding problem over finite fields. We also
show that computing the minimum rank distance for linear codes over finite
principal ideal rings is equivalent to the same problem for linear codes over
finite fields. Finally, we provide combinatorial type algorithms for solving
the rank decoding problem over finite chain rings together with their average
complexities.
|
[
{
"created": "Tue, 22 Jun 2021 06:59:10 GMT",
"version": "v1"
},
{
"created": "Fri, 4 Mar 2022 12:21:47 GMT",
"version": "v2"
},
{
"created": "Mon, 15 Aug 2022 14:19:47 GMT",
"version": "v3"
}
] |
2022-08-16
|
[
[
"Kalachi",
"Hervé Tale",
""
],
[
"Kamche",
"Hermann Tchatchiem",
""
]
] |
The rank decoding problem has been the subject of much attention in this last decade. This problem, which is at the base of the security of public-key cryptosystems based on rank metric codes, is traditionally studied over finite fields. But the recent generalizations of certain classes of rank-metric codes from finite fields to finite rings have naturally created interest in tackling the rank decoding problem in the case of finite rings. In this paper, we show that solving the rank decoding problem over finite principal ideal rings is at least as hard as the rank decoding problem over finite fields. We also show that computing the minimum rank distance for linear codes over finite principal ideal rings is equivalent to the same problem for linear codes over finite fields. Finally, we provide combinatorial type algorithms for solving the rank decoding problem over finite chain rings together with their average complexities.
|
1505.04713
|
Merlin Sheeba
|
G. Merlin Sheeba, Alamelu Nachiappan, P.H. Pavan Kumar, Prateek
|
Placement of Energy Aware Wireless Mesh Nodes for E-Learning in Green
Campuses
|
10 pages,4 figures
|
International Journal on Cybernetics & Informatics (IJCI) Vol. 4,
No. 2, April 2015
|
10.5121/ijci.2015.4218
| null |
cs.NI
|
http://creativecommons.org/licenses/by/3.0/
|
Energy efficiency solutions are vital for Green Mesh Network (GMN)
campuses. Today, students benefit from these e-learning methodologies.
Renewable energy sources such as solar, wind, and hydro have tremendous
applications in energy-efficient wireless networks for sustaining the
ever-growing traffic demands. One of the major issues in designing a GMN is
minimizing the number of deployed mesh routers and gateways while satisfying
the sustainable QoS-based energy constraints. During low-traffic periods, the
mesh routers are switched to power-save or sleep mode. In this paper, we
mathematically formulate a single objective function with multiple constraints
to optimize energy. The objective is to place the minimum number of mesh
routers and gateways in a set of candidate locations. The mesh nodes are
powered using solar energy to meet the traffic demands. Two global
optimization algorithms are compared in this paper to optimize energy
sustainability and guarantee seamless connectivity.
|
[
{
"created": "Mon, 18 May 2015 16:42:16 GMT",
"version": "v1"
}
] |
2015-05-19
|
[
[
"Sheeba",
"G. Merlin",
""
],
[
"Nachiappan",
"Alamelu",
""
],
[
"Kumar",
"P. H. Pavan",
""
],
[
"Prateek",
"",
""
]
] |
Energy efficiency solutions are vital for Green Mesh Network (GMN) campuses. Today, students benefit from these e-learning methodologies. Renewable energy sources such as solar, wind, and hydro have tremendous applications in energy-efficient wireless networks for sustaining the ever-growing traffic demands. One of the major issues in designing a GMN is minimizing the number of deployed mesh routers and gateways while satisfying the sustainable QoS-based energy constraints. During low-traffic periods, the mesh routers are switched to power-save or sleep mode. In this paper, we mathematically formulate a single objective function with multiple constraints to optimize energy. The objective is to place the minimum number of mesh routers and gateways in a set of candidate locations. The mesh nodes are powered using solar energy to meet the traffic demands. Two global optimization algorithms are compared in this paper to optimize energy sustainability and guarantee seamless connectivity.
|
2406.16219
|
Stefanos Chaliasos
|
Stefanos Chaliasos, Denis Firsov, Benjamin Livshits
|
Towards a Formal Foundation for Blockchain Rollups
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Blockchains like Bitcoin and Ethereum have revolutionized digital
transactions, yet scalability issues persist. Layer 2 solutions, such as
validity proof Rollups (ZK-Rollups), aim to address these challenges by
processing transactions off-chain and validating them on the main chain.
However, concerns remain about security and censorship resistance, particularly
regarding centralized control in Layer 2 and inadequate mechanisms for
enforcing these properties through Layer 1 contracts. This work presents a
formal analysis using the Alloy specification language to examine and design
key Layer 2 functionalities, including forced transaction queues, safe
blacklisting, and upgradeability. Through this analysis, we identify potential
vulnerabilities in current mechanisms and propose enhanced models to strengthen
security and censorship resistance, setting new standards for the security of
rollups.
|
[
{
"created": "Sun, 23 Jun 2024 21:12:19 GMT",
"version": "v1"
}
] |
2024-06-25
|
[
[
"Chaliasos",
"Stefanos",
""
],
[
"Firsov",
"Denis",
""
],
[
"Livshits",
"Benjamin",
""
]
] |
Blockchains like Bitcoin and Ethereum have revolutionized digital transactions, yet scalability issues persist. Layer 2 solutions, such as validity proof Rollups (ZK-Rollups), aim to address these challenges by processing transactions off-chain and validating them on the main chain. However, concerns remain about security and censorship resistance, particularly regarding centralized control in Layer 2 and inadequate mechanisms for enforcing these properties through Layer 1 contracts. This work presents a formal analysis using the Alloy specification language to examine and design key Layer 2 functionalities, including forced transaction queues, safe blacklisting, and upgradeability. Through this analysis, we identify potential vulnerabilities in current mechanisms and propose enhanced models to strengthen security and censorship resistance, setting new standards for the security of rollups.
|
2210.08472
|
Yuan-Gen Wang
|
Chao Zhou, Yuan-Gen Wang, Guopu Zhu
|
Object-Attentional Untargeted Adversarial Attack
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep neural networks are facing severe threats from adversarial attacks. Most
existing black-box attacks fool the target model by generating either global
perturbations or local patches. However, both global perturbations and local
patches easily cause annoying visual artifacts in adversarial examples.
Compared with some smooth regions of an image, the object region generally has
more edges and a more complex texture, so small perturbations on it are more
imperceptible. On the other hand, the object region is undoubtedly the
decisive part of an image for classification tasks. Motivated by these two
facts, we propose an object-attentional adversarial attack method for
facts, we propose an object-attentional adversarial attack method for
untargeted attack. Specifically, we first generate an object region by
intersecting the object detection region from YOLOv4 with the salient object
detection (SOD) region from HVPNet. Furthermore, we design an activation
strategy to avoid the reaction caused by the incomplete SOD. Then, we perform
an adversarial attack only on the detected object region by leveraging Simple
Black-box Adversarial Attack (SimBA). To verify the proposed method, we create
a unique dataset by extracting all the images containing the object defined by
COCO from ImageNet-1K, named COCO-Reduced-ImageNet in this paper. Experimental
results on ImageNet-1K and COCO-Reduced-ImageNet show that under various system
settings, our method yields adversarial examples with better perceptual
quality while saving up to 24.16\% of the query budget compared to
state-of-the-art approaches, including SimBA.
|
[
{
"created": "Sun, 16 Oct 2022 07:45:13 GMT",
"version": "v1"
}
] |
2022-10-18
|
[
[
"Zhou",
"Chao",
""
],
[
"Wang",
"Yuan-Gen",
""
],
[
"Zhu",
"Guopu",
""
]
] |
Deep neural networks are facing severe threats from adversarial attacks. Most existing black-box attacks fool the target model by generating either global perturbations or local patches. However, both global perturbations and local patches easily cause annoying visual artifacts in adversarial examples. Compared with some smooth regions of an image, the object region generally has more edges and a more complex texture, so small perturbations on it are more imperceptible. On the other hand, the object region is undoubtedly the decisive part of an image for classification tasks. Motivated by these two facts, we propose an object-attentional adversarial attack method for untargeted attack. Specifically, we first generate an object region by intersecting the object detection region from YOLOv4 with the salient object detection (SOD) region from HVPNet. Furthermore, we design an activation strategy to avoid the reaction caused by the incomplete SOD. Then, we perform an adversarial attack only on the detected object region by leveraging Simple Black-box Adversarial Attack (SimBA). To verify the proposed method, we create a unique dataset by extracting all the images containing the object defined by COCO from ImageNet-1K, named COCO-Reduced-ImageNet in this paper. Experimental results on ImageNet-1K and COCO-Reduced-ImageNet show that under various system settings, our method yields adversarial examples with better perceptual quality while saving up to 24.16\% of the query budget compared to state-of-the-art approaches, including SimBA.
|
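The attack loop the abstract builds on, SimBA, is simple enough to sketch: perturb one coordinate at a time and keep a step only when it lowers the model's confidence in the true label. This is an illustrative stand-in, not the authors' implementation; `prob_fn`, the `mask` of object-region indices, and the step sizes are all hypothetical names and values.

```python
import random

def simba_attack(x, prob_fn, true_label, eps=0.2, steps=50, seed=0, mask=None):
    """Minimal SimBA-style loop: perturb one coordinate at a time and keep a
    step only if it lowers prob_fn(x, true_label).  `mask` optionally limits
    the perturbed indices to an object region, mirroring the paper's
    object-attentional idea (all names here are illustrative)."""
    rng = random.Random(seed)
    x = list(x)
    coords = list(mask) if mask is not None else list(range(len(x)))
    p = prob_fn(x, true_label)
    for _ in range(steps):
        i = rng.choice(coords)
        for sign in (eps, -eps):
            x[i] += sign
            q = prob_fn(x, true_label)
            if q < p:          # keep the step: confidence dropped
                p = q
                break
            x[i] -= sign       # revert and try the other direction
    return x, p
```

Restricting `coords` to a detected object mask is what turns the plain SimBA loop into the object-attentional variant described above.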
1409.5980
|
Kiran Garimella
|
Kiran Garimella, Ingmar Weber, Sonia Dal Cin
|
From "I love you babe" to "leave me alone" - Romantic Relationship
Breakups on Twitter
|
To appear in the 6th International Conference on Social Informatics
(SocInfo 2014), Barcelona
| null | null | null |
cs.SI cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We use public data from Twitter to study the breakups of the romantic
relationships of 661 couples. Couples are identified through profile references
such as @user1 writing "@user2 is the best boyfriend ever!!". Using this data
set we find evidence for a number of existing hypotheses describing
psychological processes including (i) pre-relationship closeness being
indicative of post-relationship closeness, (ii) "stonewalling", i.e., ignoring
messages by a partner, being indicative of a pending breakup, and (iii)
post-breakup depression. We also observe a previously undocumented phenomenon
of "batch un-friending and being un-friended" where users who break up
experience sudden drops of 15-20 followers and friends. Our work shows that
public Twitter data can be used to gain new insights into psychological
processes surrounding relationship dissolutions, something that most people go
through at least once in their lifetime.
|
[
{
"created": "Sun, 21 Sep 2014 13:23:35 GMT",
"version": "v1"
}
] |
2014-09-23
|
[
[
"Garimella",
"Kiran",
""
],
[
"Weber",
"Ingmar",
""
],
[
"Cin",
"Sonia Dal",
""
]
] |
We use public data from Twitter to study the breakups of the romantic relationships of 661 couples. Couples are identified through profile references such as @user1 writing "@user2 is the best boyfriend ever!!". Using this data set we find evidence for a number of existing hypotheses describing psychological processes including (i) pre-relationship closeness being indicative of post-relationship closeness, (ii) "stonewalling", i.e., ignoring messages by a partner, being indicative of a pending breakup, and (iii) post-breakup depression. We also observe a previously undocumented phenomenon of "batch un-friending and being un-friended" where users who break up experience sudden drops of 15-20 followers and friends. Our work shows that public Twitter data can be used to gain new insights into psychological processes surrounding relationship dissolutions, something that most people go through at least once in their lifetime.
|
1508.02133
|
Marek Szyku{\l}a
|
Vladimir V. Gusev, Marek Szyku{\l}a
|
On the Number of Synchronizing Colorings of Digraphs
|
CIAA 2015. The final publication is available at
http://link.springer.com/chapter/10.1007/978-3-319-22360-5_11
|
In Implementation and Application of Automata (CIAA 2015), volume
9223 of LNCS, pages 127-139, Springer, 2015
|
10.1007/978-3-319-22360-5_11
| null |
cs.FL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We deal with $k$-out-regular directed multigraphs with loops (called simply
\emph{digraphs}). The edges of such a digraph can be colored by elements of
some fixed $k$-element set in such a way that outgoing edges of every vertex
have different colors. Such a coloring corresponds naturally to an automaton.
The road coloring theorem states that every primitive digraph has a
synchronizing coloring.
In the present paper we study how many synchronizing colorings can exist for
a digraph with $n$ vertices. We performed an extensive experimental
investigation of digraphs with a small number of vertices, using our dedicated
algorithm to exhaustively enumerate all small digraphs. We also present a
series of digraphs whose fraction of synchronizing colorings is equal to
$1-1/k^d$, for every $d \ge 1$, when the number of vertices is large enough.
On the basis of our results we state several conjectures and open problems.
In particular, we conjecture that $1-1/k$ is the smallest possible fraction of
synchronizing colorings, except for a single exceptional example on 6 vertices
for $k=2$.
|
[
{
"created": "Mon, 10 Aug 2015 06:15:24 GMT",
"version": "v1"
}
] |
2015-08-11
|
[
[
"Gusev",
"Vladimir V.",
""
],
[
"Szykuła",
"Marek",
""
]
] |
We deal with $k$-out-regular directed multigraphs with loops (called simply \emph{digraphs}). The edges of such a digraph can be colored by elements of some fixed $k$-element set in such a way that outgoing edges of every vertex have different colors. Such a coloring corresponds naturally to an automaton. The road coloring theorem states that every primitive digraph has a synchronizing coloring. In the present paper we study how many synchronizing colorings can exist for a digraph with $n$ vertices. We performed an extensive experimental investigation of digraphs with a small number of vertices, using our dedicated algorithm to exhaustively enumerate all small digraphs. We also present a series of digraphs whose fraction of synchronizing colorings is equal to $1-1/k^d$, for every $d \ge 1$, when the number of vertices is large enough. On the basis of our results we state several conjectures and open problems. In particular, we conjecture that $1-1/k$ is the smallest possible fraction of synchronizing colorings, except for a single exceptional example on 6 vertices for $k=2$.
|
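For context, whether a given coloring (i.e., automaton) is synchronizing can be checked with the classical pair-merging criterion: an automaton is synchronizing iff every pair of states can be collapsed to a single state by some word. A minimal sketch (the transition tables in the usage below are illustrative, not the paper's examples):

```python
from itertools import combinations

def is_synchronizing(delta):
    """delta[state][letter] gives the next state.  An automaton is
    synchronizing iff every pair of states can be collapsed to a single
    state by some word (the classical pair-automaton criterion)."""
    n, k = len(delta), len(delta[0])
    mergeable = {(p, p) for p in range(n)}  # singletons are trivially merged
    changed = True
    while changed:
        # Fixed point: a pair is mergeable if some letter sends it
        # to an already-mergeable pair.
        changed = False
        for p, q in combinations(range(n), 2):
            if (p, q) in mergeable:
                continue
            for a in range(k):
                r, s = sorted((delta[p][a], delta[q][a]))
                if (r, s) in mergeable:
                    mergeable.add((p, q))
                    changed = True
                    break
    return all(pair in mergeable for pair in combinations(range(n), 2))
```

For instance, the 4-state Cerny automaton (letter 0 a cyclic shift, letter 1 fixing all states but one) passes this check, while any automaton whose letters are all permutations fails it.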
1205.1641
|
Bhavesh Patel
|
B V Patel, B B Meshram
|
Content based video retrieval systems
|
18 Pages
| null |
10.5121/iju.2012.3202
| null |
cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the development of multimedia data types and available bandwidth, there
is a huge demand for video retrieval systems, as users shift from text-based
retrieval systems to content-based retrieval systems. The selection of
extracted features plays an important role in content-based video retrieval,
regardless of the video attributes under consideration. These features are
intended for selecting, indexing, and ranking according to their potential
interest to the user. Good feature selection also allows the time and space
costs of the retrieval process to be reduced. This survey reviews the
interesting features that can be extracted from video data for indexing and
retrieval, along with similarity measurement methods. We also identify present
research issues in the area of content-based video retrieval systems.
|
[
{
"created": "Tue, 8 May 2012 09:27:29 GMT",
"version": "v1"
}
] |
2012-05-09
|
[
[
"Patel",
"B V",
""
],
[
"Meshram",
"B B",
""
]
] |
With the development of multimedia data types and available bandwidth, there is a huge demand for video retrieval systems, as users shift from text-based retrieval systems to content-based retrieval systems. The selection of extracted features plays an important role in content-based video retrieval, regardless of the video attributes under consideration. These features are intended for selecting, indexing, and ranking according to their potential interest to the user. Good feature selection also allows the time and space costs of the retrieval process to be reduced. This survey reviews the interesting features that can be extracted from video data for indexing and retrieval, along with similarity measurement methods. We also identify present research issues in the area of content-based video retrieval systems.
|
2103.06026
|
Soeren Becker
|
Ana Juan Ferrer, Soeren Becker, Florian Schmidt, Lauritz Thamsen, Odej
Kao
|
Towards a Cognitive Compute Continuum: An Architecture for Ad-Hoc
Self-Managed Swarms
|
8 pages, CCGrid 2021 Cloud2Things Workshop
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we introduce our vision of a Cognitive Computing Continuum to
address the changing IT service provisioning towards a distributed,
opportunistic, self-managed collaboration between heterogeneous devices outside
the traditional data center boundaries. The focal point of this continuum is
cognitive devices, which have to make decisions autonomously using their
on-board computation and storage capacity based on information sensed from
their environment. Such devices are moving and cannot rely on fixed
infrastructure elements, but instead realise on-the-fly networking and thus
frequently join and leave temporal swarms. All this creates novel demands for
the underlying architecture and resource management, which must bridge the gap
from edge to cloud environments, while keeping the QoS parameters within
required boundaries. The paper presents an initial architecture and a resource
management framework for the implementation of this type of IT service
provisioning.
|
[
{
"created": "Wed, 10 Mar 2021 12:56:00 GMT",
"version": "v1"
}
] |
2021-03-11
|
[
[
"Ferrer",
"Ana Juan",
""
],
[
"Becker",
"Soeren",
""
],
[
"Schmidt",
"Florian",
""
],
[
"Thamsen",
"Lauritz",
""
],
[
"Kao",
"Odej",
""
]
] |
In this paper we introduce our vision of a Cognitive Computing Continuum to address the changing IT service provisioning towards a distributed, opportunistic, self-managed collaboration between heterogeneous devices outside the traditional data center boundaries. The focal point of this continuum is cognitive devices, which have to make decisions autonomously using their on-board computation and storage capacity based on information sensed from their environment. Such devices are moving and cannot rely on fixed infrastructure elements, but instead realise on-the-fly networking and thus frequently join and leave temporal swarms. All this creates novel demands for the underlying architecture and resource management, which must bridge the gap from edge to cloud environments, while keeping the QoS parameters within required boundaries. The paper presents an initial architecture and a resource management framework for the implementation of this type of IT service provisioning.
|
2304.01041
|
Haichao Liu
|
Haichao Liu, Kai Chen, Yulin Li, Zhenmin Huang, Jianghua Duan, Jun Ma
|
Integrated Behavior Planning and Motion Control for Autonomous Vehicles
with Traffic Rules Compliance
|
7 pages, 5 figures, accepted for publication in The 2023 IEEE
International Conference on Robotics and Biomimetics (ROBIO)
| null | null | null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this article, we propose an optimization-based integrated behavior
planning and motion control scheme, which is an interpretable and adaptable
urban autonomous driving solution that complies with complex traffic rules
while ensuring driving safety. Inherently, to ensure compliance with traffic
rules, an innovative design of potential functions (PFs) is presented to
characterize various traffic rules related to traffic lights, traversable and
non-traversable traffic line markings, etc. These PFs are further incorporated
as part of the model predictive control (MPC) formulation. In this sense,
high-level behavior planning is attained implicitly along with motion control
as an integrated architecture, facilitating flexible maneuvers with safety
guarantees. Due to the well-designed objective function of the MPC scheme, our
integrated behavior planning and motion control scheme is competent for various
urban driving scenarios and able to generate versatile behaviors, such as
overtaking with adaptive cruise control, turning in the intersection, and
merging in and out of the roundabout. As demonstrated from a series of
simulations with challenging scenarios in CARLA, it is noteworthy that the
proposed framework admits real-time performance and high generalizability.
|
[
{
"created": "Mon, 3 Apr 2023 14:44:52 GMT",
"version": "v1"
},
{
"created": "Thu, 30 Nov 2023 18:19:08 GMT",
"version": "v2"
}
] |
2023-12-01
|
[
[
"Liu",
"Haichao",
""
],
[
"Chen",
"Kai",
""
],
[
"Li",
"Yulin",
""
],
[
"Huang",
"Zhenmin",
""
],
[
"Duan",
"Jianghua",
""
],
[
"Ma",
"Jun",
""
]
] |
In this article, we propose an optimization-based integrated behavior planning and motion control scheme, which is an interpretable and adaptable urban autonomous driving solution that complies with complex traffic rules while ensuring driving safety. Inherently, to ensure compliance with traffic rules, an innovative design of potential functions (PFs) is presented to characterize various traffic rules related to traffic lights, traversable and non-traversable traffic line markings, etc. These PFs are further incorporated as part of the model predictive control (MPC) formulation. In this sense, high-level behavior planning is attained implicitly along with motion control as an integrated architecture, facilitating flexible maneuvers with safety guarantees. Due to the well-designed objective function of the MPC scheme, our integrated behavior planning and motion control scheme is competent for various urban driving scenarios and able to generate versatile behaviors, such as overtaking with adaptive cruise control, turning in the intersection, and merging in and out of the roundabout. As demonstrated from a series of simulations with challenging scenarios in CARLA, it is noteworthy that the proposed framework admits real-time performance and high generalizability.
|
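A potential function (PF) of the kind the abstract describes can be illustrated with a toy example: a mild quadratic cost near traversable lane markings and a much stiffer one near non-traversable markings, so the MPC objective implicitly discourages crossing the latter. The gains and margin below are made-up values, not those of the paper.

```python
def lane_marking_potential(dist, traversable, k_t=0.5, k_nt=50.0, margin=0.3):
    """Quadratic potential that activates within `margin` meters of a lane
    marking; non-traversable markings get a much stiffer gain.  All numeric
    values here are made up for illustration."""
    if dist >= margin:
        return 0.0
    gain = k_t if traversable else k_nt
    return 0.5 * gain * (margin - dist) ** 2
```

In an MPC formulation, terms like this would be summed into the stage cost alongside tracking and comfort terms, which is how high-level behavior emerges from the same optimization that does motion control.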
1706.04371
|
Haozhe Xie
|
Haozhe Xie, Jie Li, Hanqing Xue
|
A survey of dimensionality reduction techniques based on random
projection
|
10 pages, 6 figures
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dimensionality reduction techniques play important roles in the analysis of
big data. Traditional dimensionality reduction approaches, such as principal
component analysis (PCA) and linear discriminant analysis (LDA), have been
studied extensively in the past few decades. However, as the dimensionality of
data increases, the computational cost of traditional dimensionality reduction
methods grows exponentially, and the computation becomes prohibitively
intractable. These drawbacks have triggered the development of random
projection (RP) techniques, which map high-dimensional data onto a
low-dimensional subspace with extremely reduced time cost. However, the RP
transformation matrix is generated without considering the intrinsic structure
of the original data and usually leads to relatively high distortion.
Therefore, in recent years, methods based on RP have been proposed to address
this problem. In this paper, we summarize the methods used in different
situations to help practitioners to employ the proper techniques for their
specific applications. Meanwhile, we enumerate the benefits and limitations of
the various methods and provide further references for researchers to develop
novel RP-based approaches.
|
[
{
"created": "Wed, 14 Jun 2017 09:13:33 GMT",
"version": "v1"
},
{
"created": "Sat, 17 Jun 2017 04:09:17 GMT",
"version": "v2"
},
{
"created": "Fri, 27 Oct 2017 10:59:10 GMT",
"version": "v3"
},
{
"created": "Wed, 30 May 2018 12:47:50 GMT",
"version": "v4"
}
] |
2018-05-31
|
[
[
"Xie",
"Haozhe",
""
],
[
"Li",
"Jie",
""
],
[
"Xue",
"Hanqing",
""
]
] |
Dimensionality reduction techniques play important roles in the analysis of big data. Traditional dimensionality reduction approaches, such as principal component analysis (PCA) and linear discriminant analysis (LDA), have been studied extensively in the past few decades. However, as the dimensionality of data increases, the computational cost of traditional dimensionality reduction methods grows exponentially, and the computation becomes prohibitively intractable. These drawbacks have triggered the development of random projection (RP) techniques, which map high-dimensional data onto a low-dimensional subspace with extremely reduced time cost. However, the RP transformation matrix is generated without considering the intrinsic structure of the original data and usually leads to relatively high distortion. Therefore, in recent years, methods based on RP have been proposed to address this problem. In this paper, we summarize the methods used in different situations to help practitioners to employ the proper techniques for their specific applications. Meanwhile, we enumerate the benefits and limitations of the various methods and provide further references for researchers to develop novel RP-based approaches.
|
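The core of random projection is simple enough to sketch: multiply the data by a matrix of i.i.d. Gaussian entries with variance 1/k, which approximately preserves pairwise distances by the Johnson-Lindenstrauss lemma. A minimal pure-Python sketch (not any specific method from the survey):

```python
import math
import random

def random_projection_matrix(d, k, seed=0):
    """d x k matrix of i.i.d. N(0, 1/k) entries (Gaussian random projection)."""
    rng = random.Random(seed)
    return [[rng.gauss(0.0, 1.0 / math.sqrt(k)) for _ in range(k)]
            for _ in range(d)]

def project(x, R):
    """Map a d-dimensional vector x to k dimensions: y = x R."""
    k = len(R[0])
    return [sum(x[i] * R[i][j] for i in range(len(x))) for j in range(k)]
```

Unlike PCA or LDA, the matrix R is data-independent, which is exactly the source of both the speed advantage and the distortion the abstract mentions.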
1907.00468
|
Erick Schmidt
|
Erick Schmidt, David Akopian
|
A Fast-rate WLAN Measurement Tool for Improved Miss-rate in Indoor
Navigation
| null |
Proceedings of the 31st International Technical Meeting of the
Satellite Division of The Institute of Navigation (ION GNSS+ 2018)
|
10.33012/2018.16042
| null |
cs.NI cs.PF cs.SE eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, location-based services (LBS) have steered attention to indoor
positioning systems (IPS). WLAN-based IPSs relying on received signal strength
(RSS) measurements such as fingerprinting are gaining popularity due to proven
high accuracy of their results. Typically, sets of RSS measurements at selected
locations from several WLAN access points (APs) are used to calibrate the
system. Retrieval of such measurements from WLAN cards is commonly at a one-Hz
rate. Such measurement collection is needed for the offline radio-map surveying
stage, which aligns fingerprints to locations, and for the online navigation
stage, when collected measurements are associated with the radio-map for user
navigation. As the WLAN network is not originally designed for positioning, an
RSS measurement miss could have a high impact on the fingerprinting system.
Additionally, measurement fluctuations require laborious signal processing,
and the surveying process can be very time-consuming. This paper proposes a fast-rate
measurement collection method that addresses previously mentioned problems by
achieving a higher probability of RSS measurement collection during a given
one-second window. This translates to more data for statistical processing and
faster surveying. The fast-rate collection approach is analyzed against the
conventional measurement rate in a proposed testing methodology that mimics
real-life scenarios related to IPS surveying and online navigation.
|
[
{
"created": "Sun, 30 Jun 2019 21:29:48 GMT",
"version": "v1"
}
] |
2019-07-02
|
[
[
"Schmidt",
"Erick",
""
],
[
"Akopian",
"David",
""
]
] |
Recently, location-based services (LBS) have steered attention to indoor positioning systems (IPS). WLAN-based IPSs relying on received signal strength (RSS) measurements such as fingerprinting are gaining popularity due to the proven high accuracy of their results. Typically, sets of RSS measurements at selected locations from several WLAN access points (APs) are used to calibrate the system. Retrieval of such measurements from WLAN cards is commonly at a one-Hz rate. Such measurement collection is needed for the offline radio-map surveying stage, which aligns fingerprints to locations, and for the online navigation stage, when collected measurements are associated with the radio-map for user navigation. As the WLAN network is not originally designed for positioning, an RSS measurement miss could have a high impact on the fingerprinting system. Additionally, measurement fluctuations require laborious signal processing, and the surveying process can be very time-consuming. This paper proposes a fast-rate measurement collection method that addresses previously mentioned problems by achieving a higher probability of RSS measurement collection during a given one-second window. This translates to more data for statistical processing and faster surveying. The fast-rate collection approach is analyzed against the conventional measurement rate in a proposed testing methodology that mimics real-life scenarios related to IPS surveying and online navigation.
|
2107.09927
|
Serafeim Moustakidis
|
Zoumpolia Dikopoulou, Serafeim Moustakidis, Patrik Karlsson
|
GLIME: A new graphical methodology for interpretable model-agnostic
explanations
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Explainable artificial intelligence (XAI) is an emerging new domain in which
a set of processes and tools allow humans to better comprehend the decisions
generated by black box models. However, most of the available XAI tools are
often limited to simple explanations, mainly quantifying the impact of
individual features on the models' output. Therefore, human users are not able
to understand how the features are related to each other to make predictions,
whereas the inner workings of the trained models remain hidden. This paper
contributes to the development of a novel graphical explainability tool that
not only indicates the significant features of the model but also reveals the
conditional relationships between features and the inference, capturing both
the direct and indirect impact of features on the models' decision. The
proposed XAI methodology, termed gLIME, provides graphical model-agnostic
explanations either at the global (for the entire dataset) or the local scale
(for specific data points). It relies on a combination of local interpretable
model-agnostic explanations (LIME) with graphical least absolute shrinkage and
selection operator (GLASSO) producing undirected Gaussian graphical models.
Regularization is adopted to shrink small partial correlation coefficients to
zero providing sparser and more interpretable graphical explanations. Two
well-known classification datasets (BIOPSY and OAI) were selected to confirm
the superiority of gLIME over LIME in terms of both robustness and consistency
over multiple permutations. Specifically, gLIME accomplished increased
stability over the two datasets with respect to features' importance (76%-96%
compared to 52%-77% using LIME). gLIME demonstrates a unique potential to
extend the functionality of the current state-of-the-art in XAI by providing
informative graphical explanations that could unlock black boxes.
|
[
{
"created": "Wed, 21 Jul 2021 08:06:40 GMT",
"version": "v1"
}
] |
2021-07-22
|
[
[
"Dikopoulou",
"Zoumpolia",
""
],
[
"Moustakidis",
"Serafeim",
""
],
[
"Karlsson",
"Patrik",
""
]
] |
Explainable artificial intelligence (XAI) is an emerging new domain in which a set of processes and tools allow humans to better comprehend the decisions generated by black box models. However, most of the available XAI tools are often limited to simple explanations, mainly quantifying the impact of individual features on the models' output. Therefore, human users are not able to understand how the features are related to each other to make predictions, whereas the inner workings of the trained models remain hidden. This paper contributes to the development of a novel graphical explainability tool that not only indicates the significant features of the model but also reveals the conditional relationships between features and the inference, capturing both the direct and indirect impact of features on the models' decision. The proposed XAI methodology, termed gLIME, provides graphical model-agnostic explanations either at the global (for the entire dataset) or the local scale (for specific data points). It relies on a combination of local interpretable model-agnostic explanations (LIME) with graphical least absolute shrinkage and selection operator (GLASSO) producing undirected Gaussian graphical models. Regularization is adopted to shrink small partial correlation coefficients to zero providing sparser and more interpretable graphical explanations. Two well-known classification datasets (BIOPSY and OAI) were selected to confirm the superiority of gLIME over LIME in terms of both robustness and consistency over multiple permutations. Specifically, gLIME accomplished increased stability over the two datasets with respect to features' importance (76%-96% compared to 52%-77% using LIME). gLIME demonstrates a unique potential to extend the functionality of the current state-of-the-art in XAI by providing informative graphical explanations that could unlock black boxes.
|
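The graphical step can be approximated in a few lines: estimate the precision (inverse covariance) matrix, convert it to partial correlations, and zero out small entries to obtain a sparse undirected graph. Hard thresholding is a crude stand-in for the GLASSO penalty the paper actually uses, and the threshold value is an arbitrary illustrative choice.

```python
import numpy as np

def sparse_partial_correlations(X, threshold=0.1):
    """Estimate partial correlations from the (pseudo-)inverse covariance
    and zero out small ones.  Hard thresholding is a crude stand-in for the
    GLASSO penalty; the threshold is an arbitrary illustrative choice."""
    cov = np.cov(X, rowvar=False)
    prec = np.linalg.pinv(cov)                 # precision matrix
    d = np.sqrt(np.diag(prec))
    pcorr = -prec / np.outer(d, d)             # partial correlations
    np.fill_diagonal(pcorr, 1.0)
    pcorr[np.abs(pcorr) < threshold] = 0.0     # sparsify the graph
    return pcorr
```

The nonzero off-diagonal entries of the result define the edges of the Gaussian graphical model; gLIME would combine such a graph with LIME's local feature attributions.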
1809.04993
|
Diego Romeres
|
Diego Romeres, Devesh Jha, Alberto Dalla Libera, William Yerazunis and
Daniel Nikovski
|
Semiparametrical Gaussian Processes Learning of Forward Dynamical Models
for Navigating in a Circular Maze
|
7 pages including the references, 5 figures. Changed title, improved
the structure of the article and the images
| null | null | null |
cs.RO cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a problem of model learning for the purpose of learning
how to navigate a ball to a goal state in a circular maze environment with two
degrees of freedom. The motion of the ball in the maze environment is
influenced by several non-linear effects such as dry friction and contacts,
which are difficult to model physically. We propose a semiparametric model to
estimate the motion dynamics of the ball based on Gaussian Process Regression
equipped with basis functions obtained from physics first principles. The
accuracy of this semiparametric model is shown not only in estimation but also
in prediction n steps ahead, and it is compared with standard algorithms for
model learning. The learned model is then used in a trajectory optimization
algorithm to compute ball trajectories. We propose the system presented in the
paper as a benchmark problem for reinforcement and robot learning, for its
interesting and challenging dynamics and its relative ease of reproducibility.
|
[
{
"created": "Thu, 13 Sep 2018 14:44:05 GMT",
"version": "v1"
},
{
"created": "Tue, 18 Sep 2018 18:22:48 GMT",
"version": "v2"
}
] |
2018-09-20
|
[
[
"Romeres",
"Diego",
""
],
[
"Jha",
"Devesh",
""
],
[
"Libera",
"Alberto Dalla",
""
],
[
"Yerazunis",
"William",
""
],
[
"Nikovski",
"Daniel",
""
]
] |
This paper presents a problem of model learning for the purpose of learning how to navigate a ball to a goal state in a circular maze environment with two degrees of freedom. The motion of the ball in the maze environment is influenced by several non-linear effects such as dry friction and contacts, which are difficult to model physically. We propose a semiparametric model to estimate the motion dynamics of the ball based on Gaussian Process Regression equipped with basis functions obtained from physics first principles. The accuracy of this semiparametric model is shown not only in estimation but also in prediction n steps ahead, and it is compared with standard algorithms for model learning. The learned model is then used in a trajectory optimization algorithm to compute ball trajectories. We propose the system presented in the paper as a benchmark problem for reinforcement and robot learning, for its interesting and challenging dynamics and its relative ease of reproducibility.
|
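The semiparametric idea, a nonparametric kernel plus physics-derived basis functions, can be sketched as GP regression with a summed kernel: an RBF part for unmodeled effects and a linear kernel over the basis features for the parametric part. The hyperparameters and the basis below are illustrative assumptions, not the paper's model.

```python
import numpy as np

def semiparametric_gp_predict(X, y, Xs, basis, ell=1.0, sf=1.0, noise=1e-2):
    """GP posterior mean with a summed kernel: an RBF part plus a linear
    kernel over physics-based basis functions `basis(x)`.  Hyperparameters
    and the basis are illustrative, not the paper's model."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        rbf = sf ** 2 * np.exp(-0.5 * d2 / ell ** 2)
        PA, PB = basis(A), basis(B)
        return rbf + PA @ PB.T                 # nonparametric + parametric
    K = k(X, X) + noise * np.eye(len(X))
    alpha = np.linalg.solve(K, y)
    return k(Xs, X) @ alpha
```

For the maze system, `basis` would encode first-principles terms of the ball dynamics; here any feature map of the inputs, e.g. `lambda A: A`, serves to illustrate the construction.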
1705.00761
|
Samir Abdelrahman
|
Mahmoud Mahdi, Samir Abdelrahman, Reem Bahgat, and Ismail Ismail
|
F-tree: an algorithm for clustering transactional data using frequency
tree
|
Appeared at Al-Azhar University Engineering Journal, JAUES, Vol.5,
No. 8, Dec 2010
| null | null | null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Clustering is an important data mining technique that groups similar data
records; recently, categorical transaction clustering has received more
attention. In this research, we study the problem of categorical data
clustering for transactional data characterized by high dimensionality and
large volume. We propose a novel algorithm for clustering transactional data
called F-Tree, which is based on the idea of the frequent pattern algorithm
FP-tree, one of the fastest approaches to frequent item set mining. The simple
idea behind F-Tree is to generate small, highly pure clusters and then merge
them, which makes it fast and dynamic in clustering large transactional
datasets with high dimensions. We also present a new solution to the
overlapping problem between clusters by defining a new criterion function
based on the probability of overlap between weighted items. Our experimental
evaluation on real datasets shows that: firstly, F-Tree is effective in
finding interesting clusters; secondly, the use of the tree structure reduces
the clustering time for large datasets with many attributes; thirdly, the
proposed evaluation metric efficiently resolves the overlap between
transaction items and produces high-quality clustering results. Finally, we
conclude that merging small, pure clusters increases the purity of the
resulting clusters and reduces clustering time compared with generating
clusters directly from the dataset and then refining them.
|
[
{
"created": "Tue, 2 May 2017 01:55:44 GMT",
"version": "v1"
}
] |
2017-05-03
|
[
[
"Mahdi",
"Mahmoud",
""
],
[
"Abdelrahman",
"Samir",
""
],
[
"Bahgat",
"Reem",
""
],
[
"Ismail",
"Ismail",
""
]
] |
Clustering is an important data mining technique that groups similar data records; recently, categorical transaction clustering has received more attention. In this research, we study the problem of categorical data clustering for transactional data characterized by high dimensionality and large volume. We propose a novel algorithm for clustering transactional data called F-Tree, which is based on the idea of the frequent pattern algorithm FP-tree, one of the fastest approaches to frequent item set mining. The simple idea behind F-Tree is to generate small, highly pure clusters and then merge them, which makes it fast and dynamic in clustering large transactional datasets with high dimensions. We also present a new solution to the overlapping problem between clusters by defining a new criterion function based on the probability of overlap between weighted items. Our experimental evaluation on real datasets shows that: firstly, F-Tree is effective in finding interesting clusters; secondly, the use of the tree structure reduces the clustering time for large datasets with many attributes; thirdly, the proposed evaluation metric efficiently resolves the overlap between transaction items and produces high-quality clustering results. Finally, we conclude that merging small, pure clusters increases the purity of the resulting clusters and reduces clustering time compared with generating clusters directly from the dataset and then refining them.
|
2308.05621
|
Francesco Orabona
|
Francesco Orabona
|
Normalized Gradients for All
| null | null | null | null |
cs.LG math.OC stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this short note, I show how to adapt to H\"{o}lder smoothness using
normalized gradients in a black-box way. Moreover, the bound will depend on a
novel notion of local H\"{o}lder smoothness. The main idea directly comes from
Levy [2017].
|
[
{
"created": "Thu, 10 Aug 2023 15:10:08 GMT",
"version": "v1"
}
] |
2023-08-11
|
[
[
"Orabona",
"Francesco",
""
]
] |
In this short note, I show how to adapt to H\"{o}lder smoothness using normalized gradients in a black-box way. Moreover, the bound will depend on a novel notion of local H\"{o}lder smoothness. The main idea directly comes from Levy [2017].
|
2210.05728
|
Nikolas Lamb
|
Nikolas Lamb, Sean Banerjee, and Natasha Kholgade Banerjee
|
DeepMend: Learning Occupancy Functions to Represent Shape for Repair
|
To be published at ECCV 2022 (poster)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present DeepMend, a novel approach to reconstruct restorations to
fractured shapes using learned occupancy functions. Existing shape repair
approaches predict low-resolution voxelized restorations, or require symmetries
or access to a pre-existing complete oracle. We represent the occupancy of a
fractured shape as the conjunction of the occupancy of an underlying complete
shape and the fracture surface, which we model as functions of latent codes
using neural networks. Given occupancy samples from an input fractured shape,
we estimate latent codes using an inference loss augmented with novel penalty
terms that avoid empty or voluminous restorations. We use inferred codes to
reconstruct the restoration shape. We show results with simulated fractures on
synthetic and real-world scanned objects, and with scanned real fractured mugs.
Compared to the existing voxel approach and two baseline methods, our work
shows state-of-the-art results in accuracy and avoiding restoration artifacts
over non-fracture regions of the fractured shape.
|
[
{
"created": "Tue, 11 Oct 2022 18:42:20 GMT",
"version": "v1"
}
] |
2022-10-13
|
[
[
"Lamb",
"Nikolas",
""
],
[
"Banerjee",
"Sean",
""
],
[
"Banerjee",
"Natasha Kholgade",
""
]
] |
We present DeepMend, a novel approach to reconstruct restorations to fractured shapes using learned occupancy functions. Existing shape repair approaches predict low-resolution voxelized restorations, or require symmetries or access to a pre-existing complete oracle. We represent the occupancy of a fractured shape as the conjunction of the occupancy of an underlying complete shape and the fracture surface, which we model as functions of latent codes using neural networks. Given occupancy samples from an input fractured shape, we estimate latent codes using an inference loss augmented with novel penalty terms that avoid empty or voluminous restorations. We use inferred codes to reconstruct the restoration shape. We show results with simulated fractures on synthetic and real-world scanned objects, and with scanned real fractured mugs. Compared to the existing voxel approach and two baseline methods, our work shows state-of-the-art results in accuracy and avoiding restoration artifacts over non-fracture regions of the fractured shape.
|
1711.09352
|
Hisashi Shimodaira
|
Hisashi Shimodaira
|
Automatic Color Image Segmentation Using a Square Elemental Region-Based
Seeded Region Growing and Merging Method
|
14 pages with 9 figures and 3 tables
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents an efficient automatic color image segmentation method
using a seeded region growing and merging method based on square elemental
regions. Our segmentation method consists of three steps: generating seed
regions, merging the regions, and applying a pixel-wise boundary determination
algorithm to the resultant polygonal regions. The major features of our method
are as follows: the use of square elemental regions instead of pixels as the
processing unit, a seed generation method based on enhanced gradient values, a
seed region growing method exploiting local gradient values, a region merging
method using a similarity measure including a homogeneity distance based on
Tsallis entropy, and a termination condition of region merging using an
estimated desired number of regions. Using square regions as the processing
unit substantially reduces the time complexity of the algorithm and makes the
performance stable. The experimental results show that our method exhibits
stable performance for a variety of natural images, including heavily textured
areas, and produces good segmentation results using the same parameter values.
The results of our method are fairly comparable to, and in some respects better
than, those of existing algorithms.
|
[
{
"created": "Sun, 26 Nov 2017 09:19:05 GMT",
"version": "v1"
}
] |
2017-11-28
|
[
[
"Shimodaira",
"Hisashi",
""
]
] |
This paper presents an efficient automatic color image segmentation method using a seeded region growing and merging method based on square elemental regions. Our segmentation method consists of three steps: generating seed regions, merging the regions, and applying a pixel-wise boundary determination algorithm to the resultant polygonal regions. The major features of our method are as follows: the use of square elemental regions instead of pixels as the processing unit, a seed generation method based on enhanced gradient values, a seed region growing method exploiting local gradient values, a region merging method using a similarity measure including a homogeneity distance based on Tsallis entropy, and a termination condition of region merging using an estimated desired number of regions. Using square regions as the processing unit substantially reduces the time complexity of the algorithm and makes the performance stable. The experimental results show that our method exhibits stable performance for a variety of natural images, including heavily textured areas, and produces good segmentation results using the same parameter values. The results of our method are fairly comparable to, and in some respects better than, those of existing algorithms.
|
2308.14806
|
Wentao Xu
|
Wentao Xu, Kazutoshi Sasahara
|
Domain-based user embedding for competing events on social media
|
Computational social science application
| null | null | null |
cs.CY cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
Online social networks offer vast opportunities for computational social
science, but effective user embedding is crucial for downstream tasks.
Traditionally, researchers have used pre-defined network-based user features,
such as degree and centrality measures, and/or content-based features, such as
posts and reposts. However, these measures may not capture the complex
characteristics of social media users. In this study, we propose a user
embedding method based on the URL domain co-occurrence network, which is simple
but effective for representing social media users in competing events. We
assessed the performance of this method in binary classification tasks using
benchmark datasets that included Twitter users related to COVID-19 infodemic
topics (QAnon, Biden, Ivermectin). Our results revealed that user embeddings
generated directly from the retweet network, and those based on language,
performed below expectations. In contrast, our domain-based embeddings
outperformed these methods while reducing computation time. These findings
suggest that the domain-based user embedding can serve as an effective tool to
characterize social media users participating in competing events, such as
political campaigns and public health crises.
|
[
{
"created": "Mon, 28 Aug 2023 18:01:14 GMT",
"version": "v1"
},
{
"created": "Wed, 30 Aug 2023 07:07:05 GMT",
"version": "v2"
}
] |
2023-08-31
|
[
[
"Xu",
"Wentao",
""
],
[
"Sasahara",
"Kazutoshi",
""
]
] |
Online social networks offer vast opportunities for computational social science, but effective user embedding is crucial for downstream tasks. Traditionally, researchers have used pre-defined network-based user features, such as degree and centrality measures, and/or content-based features, such as posts and reposts. However, these measures may not capture the complex characteristics of social media users. In this study, we propose a user embedding method based on the URL domain co-occurrence network, which is simple but effective for representing social media users in competing events. We assessed the performance of this method in binary classification tasks using benchmark datasets that included Twitter users related to COVID-19 infodemic topics (QAnon, Biden, Ivermectin). Our results revealed that user embeddings generated directly from the retweet network, and those based on language, performed below expectations. In contrast, our domain-based embeddings outperformed these methods while reducing computation time. These findings suggest that the domain-based user embedding can serve as an effective tool to characterize social media users participating in competing events, such as political campaigns and public health crises.
|
1706.00523
|
Michael Chertkov
|
Michael Chertkov and Alexander Korotkevich
|
Adiabatic approach for natural gas pipeline computations
|
6 pages, 2 figures
| null | null |
LA-UR-17-22368
|
cs.SY physics.flu-dyn
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider a slowly evolving, i.e. ADIABATIC, operational regime within a
transmission-level (continental-scale) natural gas pipeline system. This allows
us to introduce a set of nodal equations of reduced complexity describing gas
transients in injection/consumption UNBALANCED (so-called line-pack) cases. We
discuss, in detail, the construction of the UNBALANCED ADIABATIC (UA)
approximation on the basic example of a single pipe. The UA approximation is
expected to play a significant "model reduction" role in solving control,
optimization and planning problems relevant for flawless functioning of modern
natural gas networks.
|
[
{
"created": "Thu, 1 Jun 2017 23:36:55 GMT",
"version": "v1"
},
{
"created": "Wed, 2 Aug 2017 22:20:44 GMT",
"version": "v2"
}
] |
2017-08-04
|
[
[
"Chertkov",
"Michael",
""
],
[
"Korotkevich",
"Alexander",
""
]
] |
We consider a slowly evolving, i.e. ADIABATIC, operational regime within a transmission-level (continental-scale) natural gas pipeline system. This allows us to introduce a set of nodal equations of reduced complexity describing gas transients in injection/consumption UNBALANCED (so-called line-pack) cases. We discuss, in detail, the construction of the UNBALANCED ADIABATIC (UA) approximation on the basic example of a single pipe. The UA approximation is expected to play a significant "model reduction" role in solving control, optimization and planning problems relevant for flawless functioning of modern natural gas networks.
|
1704.05915
|
Yuri G. Gordienko
|
S. Stirenko, Yu. Gordienko, T. Shemsedinov, O. Alienin, Yu. Kochura,
N. Gordienko, A. Rojbi, J.R. L\'opez Benito, E. Artetxe Gonz\'alez
|
User-driven Intelligent Interface on the Basis of Multimodal Augmented
Reality and Brain-Computer Interaction for People with Functional
Disabilities
|
10 pages, 11 figures, 1 table, submitted to Future of Information and
Communication Conference (FICC) 2018, 5-6 April 2018, Singapore
|
In: Arai K., Kapoor S., Bhatia R. (eds) Advances in Information
and Communication Networks. FICC 2018. Advances in Intelligent Systems and
Computing, vol 886, pp.612-631. Springer, Cham
|
10.1007/978-3-030-03402-3_43
| null |
cs.HC cs.AI cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An analysis of current attempts to integrate several modes and use cases of
user-machine interaction is presented. A new concept of a user-driven
intelligent interface is proposed on the basis of multimodal augmented reality
and brain-computer interaction for various applications: disability studies,
education, home care, health care, etc. Several use cases of multimodal
augmentation are presented. The prospects for better human comprehension
through immediate feedback over neurophysical channels by means of
brain-computer interaction are outlined. It is shown that brain-computer
interface (BCI) technology provides new strategies to overcome the limits of
currently available user interfaces, especially for people with functional
disabilities. The results of previous studies of low-end consumer and
open-source BCI devices allow us to conclude that the combination of machine
learning (ML) and multimodal interactions (visual, sound, tactile) with BCI
will profit from immediate feedback from the actual neurophysical reactions
classified by ML methods. In general, BCI in combination with other modes of
AR interaction can deliver much more information than these types of
interaction by themselves. Even in their current state, combined AR-BCI
interfaces could provide highly adaptable and personal services, especially
for people with functional disabilities.
|
[
{
"created": "Wed, 12 Apr 2017 21:03:52 GMT",
"version": "v1"
},
{
"created": "Tue, 15 Aug 2017 22:51:53 GMT",
"version": "v2"
}
] |
2018-12-11
|
[
[
"Stirenko",
"S.",
""
],
[
"Gordienko",
"Yu.",
""
],
[
"Shemsedinov",
"T.",
""
],
[
"Alienin",
"O.",
""
],
[
"Kochura",
"Yu.",
""
],
[
"Gordienko",
"N.",
""
],
[
"Rojbi",
"A.",
""
],
[
"Benito",
"J. R. López",
""
],
[
"González",
"E. Artetxe",
""
]
] |
An analysis of current attempts to integrate several modes and use cases of user-machine interaction is presented. A new concept of a user-driven intelligent interface is proposed on the basis of multimodal augmented reality and brain-computer interaction for various applications: disability studies, education, home care, health care, etc. Several use cases of multimodal augmentation are presented. The prospects for better human comprehension through immediate feedback over neurophysical channels by means of brain-computer interaction are outlined. It is shown that brain-computer interface (BCI) technology provides new strategies to overcome the limits of currently available user interfaces, especially for people with functional disabilities. The results of previous studies of low-end consumer and open-source BCI devices allow us to conclude that the combination of machine learning (ML) and multimodal interactions (visual, sound, tactile) with BCI will profit from immediate feedback from the actual neurophysical reactions classified by ML methods. In general, BCI in combination with other modes of AR interaction can deliver much more information than these types of interaction by themselves. Even in their current state, combined AR-BCI interfaces could provide highly adaptable and personal services, especially for people with functional disabilities.
|
1607.03848
|
Minghui Jiang
|
Minghui Jiang
|
Periodicity of identifying codes in strips
|
added two references [2,3] and updated introduction
| null | null | null |
cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An identifying code in a graph is a subset of vertices having a nonempty and
distinct intersection with the closed neighborhood of every vertex. We prove
that the infimum density of any identifying code in $S_k$ (an infinite strip of
$k$ rows in the square grid) can always be achieved by a periodic identifying
code with pattern length at most $2^{4k}$. Assisted by a compute program
implementing Karp's algorithm for minimum cycle mean, we find a periodic
identifying code in $S_4$ with the minimum density $11/28$, and a periodic
identifying code in $S_5$ with the minimum density $19/50$.
|
[
{
"created": "Wed, 13 Jul 2016 18:14:32 GMT",
"version": "v1"
},
{
"created": "Mon, 17 Oct 2016 18:37:32 GMT",
"version": "v2"
}
] |
2016-10-18
|
[
[
"Jiang",
"Minghui",
""
]
] |
An identifying code in a graph is a subset of vertices having a nonempty and distinct intersection with the closed neighborhood of every vertex. We prove that the infimum density of any identifying code in $S_k$ (an infinite strip of $k$ rows in the square grid) can always be achieved by a periodic identifying code with pattern length at most $2^{4k}$. Assisted by a computer program implementing Karp's algorithm for minimum cycle mean, we find a periodic identifying code in $S_4$ with the minimum density $11/28$, and a periodic identifying code in $S_5$ with the minimum density $19/50$.
|
1604.00066
|
Wenbin Li
|
Wenbin Li, Seyedmajid Azimi, Ale\v{s} Leonardis, Mario Fritz
|
To Fall Or Not To Fall: A Visual Approach to Physical Stability
Prediction
| null | null | null | null |
cs.CV cs.AI cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Understanding physical phenomena is a key competence that enables humans and
animals to act and interact under uncertain perception in previously unseen
environments containing novel objects and their configurations. Developmental
psychology has shown that such skills are acquired by infants from observations
at a very early stage.
In this paper, we contrast a more traditional approach of taking a
model-based route with explicit 3D representations and physical simulation by
an end-to-end approach that directly predicts stability and related quantities
from appearance. We ask whether, and to what extent and quality, such a skill
can be acquired directly in a data-driven way, bypassing the need for an
explicit simulation.
We present a learning-based approach based on simulated data that predicts
stability of towers comprised of wooden blocks under different conditions and
quantities related to the potential fall of the towers. The evaluation is
carried out on synthetic data and compared to human judgments on the same
stimuli.
|
[
{
"created": "Thu, 31 Mar 2016 21:53:32 GMT",
"version": "v1"
}
] |
2016-04-04
|
[
[
"Li",
"Wenbin",
""
],
[
"Azimi",
"Seyedmajid",
""
],
[
"Leonardis",
"Aleš",
""
],
[
"Fritz",
"Mario",
""
]
] |
Understanding physical phenomena is a key competence that enables humans and animals to act and interact under uncertain perception in previously unseen environments containing novel objects and their configurations. Developmental psychology has shown that such skills are acquired by infants from observations at a very early stage. In this paper, we contrast a more traditional approach of taking a model-based route with explicit 3D representations and physical simulation by an end-to-end approach that directly predicts stability and related quantities from appearance. We ask whether, and to what extent and quality, such a skill can be acquired directly in a data-driven way, bypassing the need for an explicit simulation. We present a learning-based approach based on simulated data that predicts stability of towers comprised of wooden blocks under different conditions and quantities related to the potential fall of the towers. The evaluation is carried out on synthetic data and compared to human judgments on the same stimuli.
|
1709.05172
|
Kamil Senel
|
Kamil Senel, Emil Bj\"ornson and Erik G. Larsson
|
Optimal Base Station Design with Limited Fronthaul: Massive Bandwidth or
Massive MIMO?
|
Accepted for publication in GC'17 Workshops - LSASLUB
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To reach a cost-efficient 5G architecture, the use of remote radio heads
connected through a fronthaul to baseband controllers is a promising solution.
However, the fronthaul links must support high bit rates as 5G networks are
projected to use wide bandwidths and many antennas. Upgrading all of the
existing fronthaul connections would be cumbersome, while replacing the remote
radio head and upgrading the software in the baseband controllers is relatively
simple. In this paper, we consider the uplink and seek the answer to the
question: If we have a fixed fronthaul capacity and can deploy any technology
in the remote radio head, what is the optimal technology? In particular, we
optimize the number of antennas, quantization bits and bandwidth to maximize
the sum rate under a fronthaul capacity constraint. The analytical results
suggest operating with many antennas equipped with low-resolution
analog-to-digital converters, while the interplay between the number of
antennas and bandwidth depends on various parameters. The numerical analysis provides
further insights into the design of communication systems with limited
fronthaul capacity.
|
[
{
"created": "Fri, 15 Sep 2017 12:15:53 GMT",
"version": "v1"
}
] |
2017-09-18
|
[
[
"Senel",
"Kamil",
""
],
[
"Björnson",
"Emil",
""
],
[
"Larsson",
"Erik G.",
""
]
] |
To reach a cost-efficient 5G architecture, the use of remote radio heads connected through a fronthaul to baseband controllers is a promising solution. However, the fronthaul links must support high bit rates as 5G networks are projected to use wide bandwidths and many antennas. Upgrading all of the existing fronthaul connections would be cumbersome, while replacing the remote radio head and upgrading the software in the baseband controllers is relatively simple. In this paper, we consider the uplink and seek the answer to the question: If we have a fixed fronthaul capacity and can deploy any technology in the remote radio head, what is the optimal technology? In particular, we optimize the number of antennas, quantization bits and bandwidth to maximize the sum rate under a fronthaul capacity constraint. The analytical results suggest operating with many antennas equipped with low-resolution analog-to-digital converters, while the interplay between the number of antennas and bandwidth depends on various parameters. The numerical analysis provides further insights into the design of communication systems with limited fronthaul capacity.
|
1802.05134
|
Kamil Khadiev
|
Kamil Khadiev, Aliya Khadieva, Mansur Ziatdinov, Dmitry Kravchenko,
Alexander Rivosh, Ramis Yamilov and Ilnaz Mannapov
|
Quantum versus Classical Online Streaming Algorithms with Advice
|
arXiv admin note: substantial text overlap with arXiv:1710.09595
| null | null | null |
cs.DS cs.CC quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider online algorithms with respect to the competitive ratio. Here, we
investigate quantum and classical one-way automata with non-constant size of
memory (streaming algorithms) as a model for online algorithms. We construct
problems that can be solved by quantum online streaming algorithms better than
by classical ones in the case of logarithmic or sublogarithmic memory size,
even if classical online algorithms get advice bits. Furthermore, we show that
a quantum online algorithm with a constant number of qubits can be better than
any deterministic online algorithm with a constant number of advice bits and
unlimited computational power.
|
[
{
"created": "Tue, 13 Feb 2018 12:50:12 GMT",
"version": "v1"
},
{
"created": "Thu, 20 Jun 2019 11:07:18 GMT",
"version": "v2"
}
] |
2019-06-24
|
[
[
"Khadiev",
"Kamil",
""
],
[
"Khadieva",
"Aliya",
""
],
[
"Ziatdinov",
"Mansur",
""
],
[
"Kravchenko",
"Dmitry",
""
],
[
"Rivosh",
"Alexander",
""
],
[
"Yamilov",
"Ramis",
""
],
[
"Mannapov",
"Ilnaz",
""
]
] |
We consider online algorithms with respect to the competitive ratio. Here, we investigate quantum and classical one-way automata with non-constant size of memory (streaming algorithms) as a model for online algorithms. We construct problems that can be solved by quantum online streaming algorithms better than by classical ones in the case of logarithmic or sublogarithmic memory size, even if classical online algorithms get advice bits. Furthermore, we show that a quantum online algorithm with a constant number of qubits can be better than any deterministic online algorithm with a constant number of advice bits and unlimited computational power.
|
2310.18112
|
Ayoub Raji
|
Ayoub Raji, Danilo Caporale, Francesco Gatti, Andrea Giove, Micaela
Verucchi, Davide Malatesta, Nicola Musiu, Alessandro Toschi, Silviu Roberto
Popitanu, Fabio Bagni, Massimiliano Bosi, Alexander Liniger, Marko Bertogna,
Daniele Morra, Francesco Amerotti, Luca Bartoli, Federico Martello, Riccardo
Porta
|
er.autopilot 1.0: The Full Autonomous Stack for Oval Racing at High
Speeds
|
Preprint: Accepted to Field Robotics "Opportunities and Challenges
with Autonomous Racing" Special Issue
|
Field Robotics, January 2024, Volume 4
|
10.55417/fr.2024004
| null |
cs.RO cs.AI cs.CV cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Indy Autonomous Challenge (IAC) brought together, for the first time in
history, nine autonomous racing teams competing at unprecedented speeds and in
head-to-head scenarios, using independently developed software on open-wheel
racecars. This paper presents the complete software architecture used by team
TII EuroRacing (TII-ER), covering all the modules needed to avoid static
obstacles, perform active overtakes and reach speeds above 75 m/s (270 km/h).
In addition to the most common modules related to perception, planning, and
control, we discuss the approaches used for vehicle dynamics modelling,
simulation, telemetry, and safety. Overall results and the performance of each
module are described, as well as the lessons learned during the first two
events of the competition on oval tracks, where the team placed respectively
second and third.
|
[
{
"created": "Fri, 27 Oct 2023 12:52:34 GMT",
"version": "v1"
}
] |
2024-02-08
|
[
[
"Raji",
"Ayoub",
""
],
[
"Caporale",
"Danilo",
""
],
[
"Gatti",
"Francesco",
""
],
[
"Giove",
"Andrea",
""
],
[
"Verucchi",
"Micaela",
""
],
[
"Malatesta",
"Davide",
""
],
[
"Musiu",
"Nicola",
""
],
[
"Toschi",
"Alessandro",
""
],
[
"Popitanu",
"Silviu Roberto",
""
],
[
"Bagni",
"Fabio",
""
],
[
"Bosi",
"Massimiliano",
""
],
[
"Liniger",
"Alexander",
""
],
[
"Bertogna",
"Marko",
""
],
[
"Morra",
"Daniele",
""
],
[
"Amerotti",
"Francesco",
""
],
[
"Bartoli",
"Luca",
""
],
[
"Martello",
"Federico",
""
],
[
"Porta",
"Riccardo",
""
]
] |
The Indy Autonomous Challenge (IAC) brought together, for the first time in history, nine autonomous racing teams competing at unprecedented speeds and in head-to-head scenarios, using independently developed software on open-wheel racecars. This paper presents the complete software architecture used by team TII EuroRacing (TII-ER), covering all the modules needed to avoid static obstacles, perform active overtakes and reach speeds above 75 m/s (270 km/h). In addition to the most common modules related to perception, planning, and control, we discuss the approaches used for vehicle dynamics modelling, simulation, telemetry, and safety. Overall results and the performance of each module are described, as well as the lessons learned during the first two events of the competition on oval tracks, where the team placed respectively second and third.
|
1710.09300
|
Filipe Alves Neto Verri
|
Filipe Alves Neto Verri, Renato Tin\'os, Liang Zhao
|
Feature learning in feature-sample networks using multi-objective
optimization
|
7 pages, 4 figures
| null |
10.1109/CEC.2018.8477891
| null |
cs.AI cs.LG cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Data and knowledge representation are fundamental concepts in machine
learning. The quality of the representation impacts the performance of the
learning model directly. Feature learning transforms or enhances raw data to
structures that are effectively exploited by those models. In recent years,
several works have been using complex networks for data representation and
analysis. However, no feature learning method has been proposed for such
category of techniques. Here, we present an unsupervised feature learning
mechanism that works on datasets with binary features. First, the dataset is
mapped into a feature--sample network. Then, a multi-objective optimization
process selects a set of new vertices to produce an enhanced version of the
network. The new features depend on a nonlinear function of a combination of
preexisting features. Effectively, the process projects the input data into a
higher-dimensional space. To solve the optimization problem, we design two
metaheuristics based on the lexicographic genetic algorithm and the improved
strength Pareto evolutionary algorithm (SPEA2). We show that the enhanced
network contains more information and can be exploited to improve the
performance of machine learning methods. The advantages and disadvantages of
each optimization strategy are discussed.
|
[
{
"created": "Wed, 25 Oct 2017 15:18:27 GMT",
"version": "v1"
}
] |
2021-04-26
|
[
[
"Verri",
"Filipe Alves Neto",
""
],
[
"Tinós",
"Renato",
""
],
[
"Zhao",
"Liang",
""
]
] |
Data and knowledge representation are fundamental concepts in machine learning. The quality of the representation impacts the performance of the learning model directly. Feature learning transforms or enhances raw data to structures that are effectively exploited by those models. In recent years, several works have been using complex networks for data representation and analysis. However, no feature learning method has been proposed for this category of techniques. Here, we present an unsupervised feature learning mechanism that works on datasets with binary features. First, the dataset is mapped into a feature--sample network. Then, a multi-objective optimization process selects a set of new vertices to produce an enhanced version of the network. The new features depend on a nonlinear function of a combination of preexisting features. Effectively, the process projects the input data into a higher-dimensional space. To solve the optimization problem, we design two metaheuristics based on the lexicographic genetic algorithm and the improved strength Pareto evolutionary algorithm (SPEA2). We show that the enhanced network contains more information and can be exploited to improve the performance of machine learning methods. The advantages and disadvantages of each optimization strategy are discussed.
|
2005.02829
|
Aleem Akhtar Asif
|
Aleem Akhtar
|
Role of Apache Software Foundation in Big Data Projects
| null | null | null | null |
cs.SE cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the increase in the amount of Big Data being generated each year, the
tools and technologies developed and used for storing, processing and
analyzing Big Data have also improved. Open-source software has been an
important factor in the success and innovation in the field of Big Data, and
the Apache Software Foundation (ASF) has played a crucial role in this success
and innovation by providing a number of state-of-the-art projects, free and
open to the public. ASF has classified its projects into different categories.
In this report, projects listed under the Big Data category are deeply
analyzed and discussed with reference to one of the seven sub-categories
defined. Our investigation has shown that many of the Apache Big Data projects
are autonomous, but some are built on top of other Apache projects and some
work in conjunction with other projects to improve and ease development in the
Big Data space.
|
[
{
"created": "Tue, 5 May 2020 11:12:59 GMT",
"version": "v1"
}
] |
2020-05-07
|
[
[
"Akhtar",
"Aleem",
""
]
] |
With the increase in the amount of Big Data being generated each year, the tools and technologies developed and used for storing, processing and analyzing Big Data have also improved. Open-source software has been an important factor in the success and innovation in the field of Big Data, and the Apache Software Foundation (ASF) has played a crucial role in this success and innovation by providing a number of state-of-the-art projects, free and open to the public. ASF has classified its projects into different categories. In this report, projects listed under the Big Data category are deeply analyzed and discussed with reference to one of the seven sub-categories defined. Our investigation has shown that many of the Apache Big Data projects are autonomous, but some are built on top of other Apache projects and some work in conjunction with other projects to improve and ease development in the Big Data space.
|
1909.13005
|
Xiaojiang Peng
|
Qing Li, Xiaojiang Peng, Yu Qiao, Qiang Peng
|
Learning Category Correlations for Multi-label Image Recognition with
Graph Networks
|
8 pages, 4 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-label image recognition is a task that predicts a set of object labels
in an image. As the objects co-occur in the physical world, it is desirable to
model label dependencies. Previous existing methods resort to either recurrent
networks or pre-defined label correlation graphs for this purpose. In this
paper, instead of using a pre-defined graph which is inflexible and may be
sub-optimal for multi-label classification, we propose the A-GCN, which
leverages the popular Graph Convolutional Networks with an Adaptive label
correlation graph to model label dependencies. Specifically, we introduce a
plug-and-play Label Graph (LG) module to learn label correlations with word
embeddings, and then utilize traditional GCN to map this graph into
label-dependent object classifiers which are further applied to image features.
The basic LG module incorporates two 1x1 convolutional layers and uses the dot
product to generate label graphs. In addition, we propose a sparse correlation
constraint to enhance the LG module and also explore different LG
architectures. We validate our method on two diverse multi-label datasets:
MS-COCO and Fashion550K. Experimental results show that our A-GCN significantly
improves baseline methods and achieves performance superior or comparable to
the state of the art.
|
[
{
"created": "Sat, 28 Sep 2019 02:03:25 GMT",
"version": "v1"
}
] |
2019-10-01
|
[
[
"Li",
"Qing",
""
],
[
"Peng",
"Xiaojiang",
""
],
[
"Qiao",
"Yu",
""
],
[
"Peng",
"Qiang",
""
]
] |
Multi-label image recognition is a task that predicts a set of object labels in an image. As the objects co-occur in the physical world, it is desirable to model label dependencies. Existing methods resort to either recurrent networks or pre-defined label correlation graphs for this purpose. In this paper, instead of using a pre-defined graph which is inflexible and may be sub-optimal for multi-label classification, we propose the A-GCN, which leverages the popular Graph Convolutional Networks with an Adaptive label correlation graph to model label dependencies. Specifically, we introduce a plug-and-play Label Graph (LG) module to learn label correlations with word embeddings, and then utilize traditional GCN to map this graph into label-dependent object classifiers which are further applied to image features. The basic LG module incorporates two 1x1 convolutional layers and uses the dot product to generate label graphs. In addition, we propose a sparse correlation constraint to enhance the LG module and also explore different LG architectures. We validate our method on two diverse multi-label datasets: MS-COCO and Fashion550K. Experimental results show that our A-GCN significantly improves baseline methods and achieves performance superior or comparable to the state of the art.
|
1711.06507
|
Irem Boybat
|
Irem Boybat, Manuel Le Gallo, S. R. Nandakumar, Timoleon Moraitis,
Thomas Parnell, Tomas Tuma, Bipin Rajendran, Yusuf Leblebici, Abu Sebastian,
Evangelos Eleftheriou
|
Neuromorphic computing with multi-memristive synapses
| null |
Nature Communications, volume 9, page 2514 (2018)
|
10.1038/s41467-018-04933-y
| null |
cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neuromorphic computing has emerged as a promising avenue towards building the
next generation of intelligent computing systems. It has been proposed that
memristive devices, which exhibit history-dependent conductivity modulation,
could efficiently represent the synaptic weights in artificial neural networks.
However, precise modulation of the device conductance over a wide dynamic
range, necessary to maintain high network accuracy, is proving to be
challenging. To address this, we present a multi-memristive synaptic
architecture with an efficient global counter-based arbitration scheme. We
focus on phase change memory devices, develop a comprehensive model and
demonstrate via simulations the effectiveness of the concept for both spiking
and non-spiking neural networks. Moreover, we present experimental results
involving over a million phase change memory devices for unsupervised learning
of temporal correlations using a spiking neural network. The work presents a
significant step towards the realization of large-scale and energy-efficient
neuromorphic computing systems.
|
[
{
"created": "Fri, 17 Nov 2017 12:19:54 GMT",
"version": "v1"
},
{
"created": "Sun, 24 Feb 2019 10:27:27 GMT",
"version": "v2"
}
] |
2019-02-26
|
[
[
"Boybat",
"Irem",
""
],
[
"Gallo",
"Manuel Le",
""
],
[
"Nandakumar",
"S. R.",
""
],
[
"Moraitis",
"Timoleon",
""
],
[
"Parnell",
"Thomas",
""
],
[
"Tuma",
"Tomas",
""
],
[
"Rajendran",
"Bipin",
""
],
[
"Leblebici",
"Yusuf",
""
],
[
"Sebastian",
"Abu",
""
],
[
"Eleftheriou",
"Evangelos",
""
]
] |
Neuromorphic computing has emerged as a promising avenue towards building the next generation of intelligent computing systems. It has been proposed that memristive devices, which exhibit history-dependent conductivity modulation, could efficiently represent the synaptic weights in artificial neural networks. However, precise modulation of the device conductance over a wide dynamic range, necessary to maintain high network accuracy, is proving to be challenging. To address this, we present a multi-memristive synaptic architecture with an efficient global counter-based arbitration scheme. We focus on phase change memory devices, develop a comprehensive model and demonstrate via simulations the effectiveness of the concept for both spiking and non-spiking neural networks. Moreover, we present experimental results involving over a million phase change memory devices for unsupervised learning of temporal correlations using a spiking neural network. The work presents a significant step towards the realization of large-scale and energy-efficient neuromorphic computing systems.
|
2405.06886
|
Yong Guan
|
Yong Guan, Dingxiao Liu, Jinchen Ma, Hao Peng, Xiaozhi Wang, Lei Hou,
Ru Li
|
Event GDR: Event-Centric Generative Document Retrieval
|
Accepted to WWW 2024
| null | null | null |
cs.IR cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Generative document retrieval, an emerging paradigm in information retrieval,
learns to build connections between documents and identifiers within a single
model, garnering significant attention. However, there are still two
challenges: (1) neglecting inner-content correlation during document
representation; (2) lacking explicit semantic structure during identifier
construction. Notably, events have rich relations and a well-defined
taxonomy, which could facilitate addressing the above two challenges. Inspired
by this, we propose Event GDR, an event-centric generative document retrieval
model, integrating event knowledge into this task. Specifically, we utilize an
exchange-then-reflection method based on multi-agents for event knowledge
extraction. For document representation, we employ events and relations to
model the document to guarantee the comprehensiveness and inner-content
correlation. For identifier construction, we map the events to well-defined
event taxonomy to construct the identifiers with explicit semantic structure.
Our method achieves significant improvements over the baselines on two
datasets, and we hope it will provide insights for future research.
|
[
{
"created": "Sat, 11 May 2024 02:55:11 GMT",
"version": "v1"
}
] |
2024-05-14
|
[
[
"Guan",
"Yong",
""
],
[
"Liu",
"Dingxiao",
""
],
[
"Ma",
"Jinchen",
""
],
[
"Peng",
"Hao",
""
],
[
"Wang",
"Xiaozhi",
""
],
[
"Hou",
"Lei",
""
],
[
"Li",
"Ru",
""
]
] |
Generative document retrieval, an emerging paradigm in information retrieval, learns to build connections between documents and identifiers within a single model, garnering significant attention. However, there are still two challenges: (1) neglecting inner-content correlation during document representation; (2) lacking explicit semantic structure during identifier construction. Notably, events have rich relations and a well-defined taxonomy, which could facilitate addressing the above two challenges. Inspired by this, we propose Event GDR, an event-centric generative document retrieval model, integrating event knowledge into this task. Specifically, we utilize an exchange-then-reflection method based on multi-agents for event knowledge extraction. For document representation, we employ events and relations to model the document to guarantee the comprehensiveness and inner-content correlation. For identifier construction, we map the events to well-defined event taxonomy to construct the identifiers with explicit semantic structure. Our method achieves significant improvements over the baselines on two datasets, and we hope it will provide insights for future research.
|
1806.10044
|
Johannes Zink
|
Steven Chaplick, Fabian Lipp, Alexander Wolff, Johannes Zink
|
Compact Drawings of 1-Planar Graphs with Right-Angle Crossings and Few
Bends
| null | null |
10.1016/j.comgeo.2019.07.006
| null |
cs.CG cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the following classes of beyond-planar graphs: 1-planar, IC-planar,
and NIC-planar graphs. These are the graphs that admit a 1-planar, IC-planar,
and NIC-planar drawing, respectively. A drawing of a graph is 1-planar if every
edge is crossed at most once. A 1-planar drawing is IC-planar if no two pairs
of crossing edges share a vertex. A 1-planar drawing is NIC-planar if no two
pairs of crossing edges share two vertices. We study the relations of these
beyond-planar graph classes (beyond-planar graphs is a collective term for the
primary attempts to generalize the planar graphs) to right-angle crossing (RAC)
graphs that admit compact drawings on the grid with few bends. We present four
drawing algorithms that preserve the given embeddings. First, we show that
every $n$-vertex NIC-planar graph admits a NIC-planar RAC drawing with at most
one bend per edge on a grid of size $O(n) \times O(n)$. Then, we show that
every $n$-vertex 1-planar graph admits a 1-planar RAC drawing with at most two
bends per edge on a grid of size $O(n^3) \times O(n^3)$. Finally, we make two
known algorithms embedding-preserving; for drawing 1-planar RAC graphs with at
most one bend per edge and for drawing IC-planar RAC graphs straight-line.
|
[
{
"created": "Tue, 26 Jun 2018 15:02:36 GMT",
"version": "v1"
},
{
"created": "Fri, 6 Jul 2018 11:57:24 GMT",
"version": "v2"
},
{
"created": "Tue, 28 Aug 2018 15:01:59 GMT",
"version": "v3"
},
{
"created": "Mon, 3 Sep 2018 13:58:52 GMT",
"version": "v4"
},
{
"created": "Thu, 8 Aug 2019 16:33:07 GMT",
"version": "v5"
}
] |
2019-08-12
|
[
[
"Chaplick",
"Steven",
""
],
[
"Lipp",
"Fabian",
""
],
[
"Wolff",
"Alexander",
""
],
[
"Zink",
"Johannes",
""
]
] |
We study the following classes of beyond-planar graphs: 1-planar, IC-planar, and NIC-planar graphs. These are the graphs that admit a 1-planar, IC-planar, and NIC-planar drawing, respectively. A drawing of a graph is 1-planar if every edge is crossed at most once. A 1-planar drawing is IC-planar if no two pairs of crossing edges share a vertex. A 1-planar drawing is NIC-planar if no two pairs of crossing edges share two vertices. We study the relations of these beyond-planar graph classes (beyond-planar graphs is a collective term for the primary attempts to generalize the planar graphs) to right-angle crossing (RAC) graphs that admit compact drawings on the grid with few bends. We present four drawing algorithms that preserve the given embeddings. First, we show that every $n$-vertex NIC-planar graph admits a NIC-planar RAC drawing with at most one bend per edge on a grid of size $O(n) \times O(n)$. Then, we show that every $n$-vertex 1-planar graph admits a 1-planar RAC drawing with at most two bends per edge on a grid of size $O(n^3) \times O(n^3)$. Finally, we make two known algorithms embedding-preserving; for drawing 1-planar RAC graphs with at most one bend per edge and for drawing IC-planar RAC graphs straight-line.
|
2007.11199
|
Jiahao Li
|
Jiahao Li, Meilin Cui, Jeeeun Kim, Xiang 'Anthony' Chen
|
Romeo: A Design Tool for Embedding Transformable Parts in 3D Models to
Robotically Augment Default Functionalities
| null | null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reconfiguring shapes of objects enables transforming existing passive objects
with robotic functionalities, e.g., a transformable coffee cup holder can be
attached to a chair's armrest, a piggy bank can reach out an arm to 'steal'
coins. Despite the advance in end-user 3D design and fabrication, it remains
challenging for non-experts to create such 'transformables' using existing
tools due to the requirement of specific engineering knowledge such as
mechanisms and robotic design.
We present Romeo -- a design tool for creating transformables to robotically
augment objects' default functionalities. Romeo allows users to transform an
object into a robotic arm by expressing at a high level what type of task is
expected. Users can select which part of the object to be transformed, specify
motion points in space for the transformed part to follow and the corresponding
action to be taken. Romeo then automatically generates a robotic arm embedded
in the transformable part ready for fabrication. A design session validated
this tool where participants used Romeo to accomplish controlled design tasks
and to open-endedly create coin-stealing piggy banks by transforming 3D objects
of their own choice.
|
[
{
"created": "Wed, 22 Jul 2020 04:54:43 GMT",
"version": "v1"
}
] |
2020-07-23
|
[
[
"Li",
"Jiahao",
""
],
[
"Cui",
"Meilin",
""
],
[
"Kim",
"Jeeeun",
""
],
[
"Chen",
"Xiang 'Anthony'",
""
]
] |
Reconfiguring shapes of objects enables transforming existing passive objects with robotic functionalities, e.g., a transformable coffee cup holder can be attached to a chair's armrest, a piggy bank can reach out an arm to 'steal' coins. Despite the advance in end-user 3D design and fabrication, it remains challenging for non-experts to create such 'transformables' using existing tools due to the requirement of specific engineering knowledge such as mechanisms and robotic design. We present Romeo -- a design tool for creating transformables to robotically augment objects' default functionalities. Romeo allows users to transform an object into a robotic arm by expressing at a high level what type of task is expected. Users can select which part of the object to be transformed, specify motion points in space for the transformed part to follow and the corresponding action to be taken. Romeo then automatically generates a robotic arm embedded in the transformable part ready for fabrication. A design session validated this tool where participants used Romeo to accomplish controlled design tasks and to open-endedly create coin-stealing piggy banks by transforming 3D objects of their own choice.
|
2109.03638
|
Tiago Colliri
|
Tiago Colliri
|
Evaluating Presidential Support in the Brazilian House of
Representatives Through a Network-Based Approach
| null | null | null | null |
cs.SI physics.soc-ph
|
http://creativecommons.org/licenses/by/4.0/
|
Conflicts between the executive and legislative powers are a common, and even
expected, characteristic of presidential systems, with some governments being
more successful in the activity of obtaining support from the Congress than
others. In the case of Brazil, specifically, this factor is considered crucial
in the so called "coalition governments", with direct positive or negative
consequences for the president, in terms of government performance during his
(her) term. In this work, we investigate this problem by testing and comparing
two different methods for evaluating the government support in the Brazilian
House of Representatives. The first method is a more traditional one, being
based on roll-call voting data, and measures the presidential support at the
legislators level. The second method uses a network-based approach, and
performs the same type of analysis but at the parties level. The obtained
results, when applying both methods on legislative data comprising the period
from 1998 until 2019, indicate that both methods are valid, with common
features being found not only between the results provided by the two of them,
but also when comparing their results with those obtained by previous and
relevant studies in this field, by using the same type of data but different
methodologies.
|
[
{
"created": "Wed, 8 Sep 2021 13:31:36 GMT",
"version": "v1"
}
] |
2021-09-09
|
[
[
"Colliri",
"Tiago",
""
]
] |
Conflicts between the executive and legislative powers are a common, and even expected, characteristic of presidential systems, with some governments being more successful in the activity of obtaining support from the Congress than others. In the case of Brazil, specifically, this factor is considered crucial in the so-called "coalition governments", with direct positive or negative consequences for the president, in terms of government performance during his (her) term. In this work, we investigate this problem by testing and comparing two different methods for evaluating the government support in the Brazilian House of Representatives. The first method is a more traditional one, being based on roll-call voting data, and measures the presidential support at the legislators level. The second method uses a network-based approach, and performs the same type of analysis but at the parties level. The obtained results, when applying both methods on legislative data comprising the period from 1998 until 2019, indicate that both methods are valid, with common features being found not only between the results provided by the two of them, but also when comparing their results with those obtained by previous and relevant studies in this field, by using the same type of data but different methodologies.
|
2002.03485
|
Dhairya Dalal
|
Dhairya Dalal and Byron V. Galbraith
|
Evaluating Sequence-to-Sequence Learning Models for If-Then Program
Synthesis
|
AAAI IPA workshop submission
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Implementing enterprise process automation often requires significant
technical expertise and engineering effort. It would be beneficial for
non-technical users to be able to describe a business process in natural
language and have an intelligent system generate the workflow that can be
automatically executed. A building block of process automations is the If-Then
program. In the consumer space, sites like IFTTT and Zapier allow users to
create automations by defining If-Then programs using a graphical interface. We
explore the efficacy of modeling If-Then programs as a sequence learning task.
We find Seq2Seq approaches have high potential (performing strongly on the
Zapier recipes) and can serve as a promising approach to more complex program
synthesis challenges.
|
[
{
"created": "Mon, 10 Feb 2020 00:45:03 GMT",
"version": "v1"
}
] |
2020-02-11
|
[
[
"Dalal",
"Dhairya",
""
],
[
"Galbraith",
"Byron V.",
""
]
] |
Implementing enterprise process automation often requires significant technical expertise and engineering effort. It would be beneficial for non-technical users to be able to describe a business process in natural language and have an intelligent system generate the workflow that can be automatically executed. A building block of process automations is the If-Then program. In the consumer space, sites like IFTTT and Zapier allow users to create automations by defining If-Then programs using a graphical interface. We explore the efficacy of modeling If-Then programs as a sequence learning task. We find Seq2Seq approaches have high potential (performing strongly on the Zapier recipes) and can serve as a promising approach to more complex program synthesis challenges.
|
2105.06072
|
Jincheng Mei
|
Jincheng Mei, Yue Gao, Bo Dai, Csaba Szepesvari, Dale Schuurmans
|
Leveraging Non-uniformity in First-order Non-convex Optimization
|
48 pages, 10 figures. Accepted at ICML 2021
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Classical global convergence results for first-order methods rely on uniform
smoothness and the \L{}ojasiewicz inequality. Motivated by properties of
objective functions that arise in machine learning, we propose a non-uniform
refinement of these notions, leading to \emph{Non-uniform Smoothness} (NS) and
\emph{Non-uniform \L{}ojasiewicz inequality} (N\L{}). The new definitions
inspire new geometry-aware first-order methods that are able to converge to
global optimality faster than the classical $\Omega(1/t^2)$ lower bounds. To
illustrate the power of these geometry-aware methods and their corresponding
non-uniform analysis, we consider two important problems in machine learning:
policy gradient optimization in reinforcement learning (PG), and generalized
linear model training in supervised learning (GLM). For PG, we find that
normalizing the gradient ascent method can accelerate convergence to
$O(e^{-t})$ while incurring less overhead than existing algorithms. For GLM, we
show that geometry-aware normalized gradient descent can also achieve a linear
convergence rate, which significantly improves the best known results. We
additionally show that the proposed geometry-aware descent methods escape
landscape plateaus faster than standard gradient descent. Experimental results
are used to illustrate and complement the theoretical findings.
|
[
{
"created": "Thu, 13 May 2021 04:23:07 GMT",
"version": "v1"
},
{
"created": "Fri, 17 Sep 2021 21:13:47 GMT",
"version": "v2"
},
{
"created": "Thu, 2 Jun 2022 06:44:29 GMT",
"version": "v3"
}
] |
2022-06-03
|
[
[
"Mei",
"Jincheng",
""
],
[
"Gao",
"Yue",
""
],
[
"Dai",
"Bo",
""
],
[
"Szepesvari",
"Csaba",
""
],
[
"Schuurmans",
"Dale",
""
]
] |
Classical global convergence results for first-order methods rely on uniform smoothness and the \L{}ojasiewicz inequality. Motivated by properties of objective functions that arise in machine learning, we propose a non-uniform refinement of these notions, leading to \emph{Non-uniform Smoothness} (NS) and \emph{Non-uniform \L{}ojasiewicz inequality} (N\L{}). The new definitions inspire new geometry-aware first-order methods that are able to converge to global optimality faster than the classical $\Omega(1/t^2)$ lower bounds. To illustrate the power of these geometry-aware methods and their corresponding non-uniform analysis, we consider two important problems in machine learning: policy gradient optimization in reinforcement learning (PG), and generalized linear model training in supervised learning (GLM). For PG, we find that normalizing the gradient ascent method can accelerate convergence to $O(e^{-t})$ while incurring less overhead than existing algorithms. For GLM, we show that geometry-aware normalized gradient descent can also achieve a linear convergence rate, which significantly improves the best known results. We additionally show that the proposed geometry-aware descent methods escape landscape plateaus faster than standard gradient descent. Experimental results are used to illustrate and complement the theoretical findings.
|
2205.06182
|
Satwinder Singh
|
Satwinder Singh, Ruili Wang, Feng Hou
|
Improved Meta Learning for Low Resource Speech Recognition
|
Published in IEEE ICASSP 2022
|
ICASSP 2022 - 2022 IEEE International Conference on Acoustics,
Speech and Signal Processing (ICASSP), 2022, pp. 4798-4802
|
10.1109/ICASSP43922.2022.9746899
| null |
cs.CL cs.LG cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a new meta learning based framework for low resource speech
recognition that improves the previous model agnostic meta learning (MAML)
approach. The MAML is a simple yet powerful meta learning approach. However,
the MAML presents some core deficiencies such as training instabilities and
slower convergence speed. To address these issues, we adopt multi-step loss
(MSL). The MSL aims to calculate losses at every step of the inner loop of MAML
and then combines them with a weighted importance vector. The importance vector
ensures that the loss at the last step has more importance than the previous
steps. Our empirical evaluation shows that MSL significantly improves the
stability of the training procedure and it thus also improves the accuracy of
the overall system. Our proposed system outperforms the MAML-based
low-resource ASR system on various languages in terms of character error rates
and stable
training behavior.
|
[
{
"created": "Wed, 11 May 2022 15:50:47 GMT",
"version": "v1"
}
] |
2022-05-13
|
[
[
"Singh",
"Satwinder",
""
],
[
"Wang",
"Ruili",
""
],
[
"Hou",
"Feng",
""
]
] |
We propose a new meta learning based framework for low resource speech recognition that improves the previous model agnostic meta learning (MAML) approach. The MAML is a simple yet powerful meta learning approach. However, the MAML presents some core deficiencies such as training instabilities and slower convergence speed. To address these issues, we adopt multi-step loss (MSL). The MSL aims to calculate losses at every step of the inner loop of MAML and then combines them with a weighted importance vector. The importance vector ensures that the loss at the last step has more importance than the previous steps. Our empirical evaluation shows that MSL significantly improves the stability of the training procedure and it thus also improves the accuracy of the overall system. Our proposed system outperforms the MAML-based low-resource ASR system on various languages in terms of character error rates and stable training behavior.
|
1803.09622
|
Jan Kelner
|
Jan M. Kelner, Bogdan Uljasz, Leszek Nowosielski (Military University
of Technology, Faculty of Electronics, Institute of Telecommunications,
Warsaw, Poland)
|
BER measurements in the evaluation of operation correctness of VSAT
modem traffic interfaces
|
5 pages, 5 figures, 2 tables
|
2018 14th International Conference on Advanced Trends in
Radioelectronics, Telecommunications and Computer Engineering (TCSET),
Lviv-Slavske, Ukraine, 20-24.02.2018., pp. 1-5
| null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents the use of bit error rate (BER) measurements to evaluate
the operation correctness of traffic (input-output) interfaces in the modem of
a very small aperture terminal (VSAT). Such functional tests are carried out,
for
example, when purchasing communication equipment for armed forces. Generally,
available standards do not describe measurement procedures in this area. In
this case, accredited laboratories should develop dedicated assessment
methodologies. In this paper, we show the methodology for the VSAT modems,
which is based on the BER measurements.
|
[
{
"created": "Mon, 26 Mar 2018 14:30:47 GMT",
"version": "v1"
}
] |
2018-03-28
|
[
[
"Kelner",
"Jan M.",
"",
"Military University\n of Technology, Faculty of Electronics, Institute of Telecommunications,\n Warsaw, Poland"
],
[
"Uljasz",
"Bogdan",
"",
"Military University\n of Technology, Faculty of Electronics, Institute of Telecommunications,\n Warsaw, Poland"
],
[
"Nowosielski",
"Leszek",
"",
"Military University\n of Technology, Faculty of Electronics, Institute of Telecommunications,\n Warsaw, Poland"
]
] |
This paper presents the use of bit error rate (BER) measurements to evaluate the operation correctness of traffic (input-output) interfaces in the modem of a very small aperture terminal (VSAT). Such functional tests are carried out, for example, when purchasing communication equipment for armed forces. Generally, available standards do not describe measurement procedures in this area. In this case, accredited laboratories should develop dedicated assessment methodologies. In this paper, we show the methodology for the VSAT modems, which is based on the BER measurements.
|
1310.0234
|
Yuanming Shi
|
Yuanming Shi and Jun Zhang and Khaled B. Letaief
|
Group Sparse Beamforming for Green Cloud-RAN
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A cloud radio access network (Cloud-RAN) is a network architecture that holds
the promise of meeting the explosive growth of mobile data traffic. In this
architecture, all the baseband signal processing is shifted to a single
baseband unit (BBU) pool, which enables efficient resource allocation and
interference management. Meanwhile, conventional powerful base stations can be
replaced by low-cost low-power remote radio heads (RRHs), producing a green and
low-cost infrastructure. However, as all the RRHs need to be connected to the
BBU pool through optical transport links, the transport network power
consumption becomes significant. In this paper, we propose a new framework to
design a green Cloud-RAN, which is formulated as a joint RRH selection and
power minimization beamforming problem. To efficiently solve this problem, we
first propose a greedy selection algorithm, which is shown to provide near-
optimal performance. To further reduce the complexity, a novel group sparse
beamforming method is proposed by inducing the group-sparsity of beamformers
using the weighted $\ell_1/\ell_2$-norm minimization, where the group sparsity
pattern indicates those RRHs that can be switched off. Simulation results will
show that the proposed algorithms significantly reduce the network power
consumption and demonstrate the importance of considering the transport link
power consumption.
|
[
{
"created": "Tue, 1 Oct 2013 10:46:27 GMT",
"version": "v1"
}
] |
2013-10-02
|
[
[
"Shi",
"Yuanming",
""
],
[
"Zhang",
"Jun",
""
],
[
"Letaief",
"Khaled B.",
""
]
] |
A cloud radio access network (Cloud-RAN) is a network architecture that holds the promise of meeting the explosive growth of mobile data traffic. In this architecture, all the baseband signal processing is shifted to a single baseband unit (BBU) pool, which enables efficient resource allocation and interference management. Meanwhile, conventional powerful base stations can be replaced by low-cost low-power remote radio heads (RRHs), producing a green and low-cost infrastructure. However, as all the RRHs need to be connected to the BBU pool through optical transport links, the transport network power consumption becomes significant. In this paper, we propose a new framework to design a green Cloud-RAN, which is formulated as a joint RRH selection and power minimization beamforming problem. To efficiently solve this problem, we first propose a greedy selection algorithm, which is shown to provide near-optimal performance. To further reduce the complexity, a novel group sparse beamforming method is proposed by inducing the group-sparsity of beamformers using the weighted $\ell_1/\ell_2$-norm minimization, where the group sparsity pattern indicates those RRHs that can be switched off. Simulation results will show that the proposed algorithms significantly reduce the network power consumption and demonstrate the importance of considering the transport link power consumption.
|
2006.11078
|
Alexey Zaytsev
|
I. Fursov, A. Zaytsev, N. Kluchnikov, A. Kravchenko, E. Burnaev
|
Differentiable Language Model Adversarial Attacks on Categorical
Sequence Classifiers
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An adversarial attack paradigm explores various scenarios for the
vulnerability of deep learning models: minor changes of the input can force a
model failure. Most state-of-the-art frameworks focus on adversarial
attacks for images and other structured model inputs, but not for categorical
sequence models.
Successful attacks on classifiers of categorical sequences are challenging
because the model input is tokens from finite sets, so a classifier score is
non-differentiable with respect to inputs, and gradient-based attacks are not
applicable. Common approaches deal with this problem working at a token level,
while the discrete optimization problem at hand requires a lot of resources to
solve.
We instead use a fine-tuning of a language model for adversarial attacks as a
generator of adversarial examples. To optimize the model, we define a
differentiable loss function that depends on a surrogate classifier score and
on a deep learning model that evaluates approximate edit distance. So, we
control both the adversarial quality of a generated sequence and its similarity to
the initial sequence.
As a result, we obtain semantically better samples. Moreover, they are
resistant to adversarial training and adversarial detectors. Our model works
for diverse datasets on bank transactions, electronic health records, and NLP
datasets.
|
[
{
"created": "Fri, 19 Jun 2020 11:25:36 GMT",
"version": "v1"
}
] |
2020-06-22
|
[
[
"Fursov",
"I.",
""
],
[
"Zaytsev",
"A.",
""
],
[
"Kluchnikov",
"N.",
""
],
[
"Kravchenko",
"A.",
""
],
[
"Burnaev",
"E.",
""
]
] |
An adversarial attack paradigm explores various scenarios for the vulnerability of deep learning models: minor changes of the input can force a model failure. Most state-of-the-art frameworks focus on adversarial attacks for images and other structured model inputs, but not for categorical sequence models. Successful attacks on classifiers of categorical sequences are challenging because the model input is tokens from finite sets, so a classifier score is non-differentiable with respect to inputs, and gradient-based attacks are not applicable. Common approaches deal with this problem working at a token level, while the discrete optimization problem at hand requires a lot of resources to solve. We instead use a fine-tuning of a language model for adversarial attacks as a generator of adversarial examples. To optimize the model, we define a differentiable loss function that depends on a surrogate classifier score and on a deep learning model that evaluates approximate edit distance. So, we control both the adversarial quality of a generated sequence and its similarity to the initial sequence. As a result, we obtain semantically better samples. Moreover, they are resistant to adversarial training and adversarial detectors. Our model works for diverse datasets on bank transactions, electronic health records, and NLP datasets.
|
cs/0602062
|
Francois Denis
|
Fran\c{c}ois Denis (LIF), Yann Esposito (LIF), Amaury Habrard (LIF)
|
Learning rational stochastic languages
|
15 pages
| null | null | null |
cs.LG
| null |
Given a finite set of words w1,...,wn independently drawn according to a
fixed unknown distribution law P called a stochastic language, a usual goal in
Grammatical Inference is to infer an estimate of P in some class of
probabilistic models, such as Probabilistic Automata (PA). Here, we study the
class of rational stochastic languages, which consists in stochastic languages
that can be generated by Multiplicity Automata (MA) and which strictly includes
the class of stochastic languages generated by PA. Rational stochastic
languages have a minimal normal representation which may be very concise, and
whose parameters can be efficiently estimated from stochastic samples. We
design an efficient inference algorithm DEES which aims at building a minimal
normal representation of the target. Despite the fact that no recursively
enumerable class of MA computes exactly the set of rational stochastic
languages over Q, we show that DEES strongly identifies this set in the limit.
We study the intermediary MA output by DEES and show that they compute rational
series which converge absolutely to one and which can be used to provide
stochastic languages which closely estimate the target.
|
[
{
"created": "Fri, 17 Feb 2006 08:57:44 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Denis",
"François",
"",
"LIF"
],
[
"Esposito",
"Yann",
"",
"LIF"
],
[
"Habrard",
"Amaury",
"",
"LIF"
]
] |
Given a finite set of words w1,...,wn independently drawn according to a fixed unknown distribution law P called a stochastic language, a usual goal in Grammatical Inference is to infer an estimate of P in some class of probabilistic models, such as Probabilistic Automata (PA). Here, we study the class of rational stochastic languages, which consists in stochastic languages that can be generated by Multiplicity Automata (MA) and which strictly includes the class of stochastic languages generated by PA. Rational stochastic languages have a minimal normal representation which may be very concise, and whose parameters can be efficiently estimated from stochastic samples. We design an efficient inference algorithm DEES which aims at building a minimal normal representation of the target. Despite the fact that no recursively enumerable class of MA computes exactly the set of rational stochastic languages over Q, we show that DEES strongly identifies this set in the limit. We study the intermediary MA output by DEES and show that they compute rational series which converge absolutely to one and which can be used to provide stochastic languages which closely estimate the target.
|
2203.13086
|
Aibek Alanov
|
Pavel Andreev, Aibek Alanov, Oleg Ivanov, Dmitry Vetrov
|
HiFi++: a Unified Framework for Bandwidth Extension and Speech
Enhancement
|
Accepted to ICASSP 2023
| null |
10.1109/ICASSP49357.2023.10097255
| null |
cs.SD cs.LG eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Generative adversarial networks have recently demonstrated outstanding
performance in neural vocoding, outperforming the best autoregressive and flow-based
models. In this paper, we show that this success can be extended to other tasks
of conditional audio generation. In particular, building upon HiFi vocoders, we
propose a novel HiFi++ general framework for bandwidth extension and speech
enhancement. We show that with the improved generator architecture, HiFi++
performs better or comparably with the state-of-the-art in these tasks while
spending significantly less computational resources. The effectiveness of our
approach is validated through a series of extensive experiments.
|
[
{
"created": "Thu, 24 Mar 2022 14:25:51 GMT",
"version": "v1"
},
{
"created": "Thu, 8 Sep 2022 14:02:06 GMT",
"version": "v2"
},
{
"created": "Sun, 30 Jul 2023 09:57:21 GMT",
"version": "v3"
},
{
"created": "Sun, 10 Dec 2023 13:52:10 GMT",
"version": "v4"
}
] |
2023-12-12
|
[
[
"Andreev",
"Pavel",
""
],
[
"Alanov",
"Aibek",
""
],
[
"Ivanov",
"Oleg",
""
],
[
"Vetrov",
"Dmitry",
""
]
] |
Generative adversarial networks have recently demonstrated outstanding performance in neural vocoding, outperforming the best autoregressive and flow-based models. In this paper, we show that this success can be extended to other tasks of conditional audio generation. In particular, building upon HiFi vocoders, we propose a novel HiFi++ general framework for bandwidth extension and speech enhancement. We show that with the improved generator architecture, HiFi++ performs better or comparably with the state-of-the-art in these tasks while spending significantly less computational resources. The effectiveness of our approach is validated through a series of extensive experiments.
|
2201.01819
|
Diana Kim
|
Diana Kim, Ahmed Elgammal, Marian Mazzone
|
Formal Analysis of Art: Proxy Learning of Visual Concepts from Style
Through Language Models
|
23 pages, This paper is an extended version of a paper that will be
published at the 36th AAAI Conference on Artificial Intelligence, to be held
in Vancouver, BC, Canada, February 22 - March 1, 2022
| null | null | null |
cs.LG cs.CL cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a machine learning system that can quantify fine art paintings
with a set of visual elements and principles of art. This formal analysis is
fundamental for understanding art, but developing such a system is challenging.
Paintings have high visual complexities, but it is also difficult to collect
enough training data with direct labels. To resolve these practical
limitations, we introduce a novel mechanism, called proxy learning, which
learns visual concepts in paintings through their general relation to styles.
This framework does not require any visual annotation, but only uses style
labels and a general relationship between visual concepts and style. In this
paper, we propose a novel proxy model and reformulate four pre-existing methods
in the context of proxy learning. Through quantitative and qualitative
comparison, we evaluate these methods and compare their effectiveness in
quantifying the artistic visual concepts, where the general relationship is
estimated by language models: GloVe or BERT. The language modeling is a
practical and scalable solution requiring no labeling, but it is inevitably
imperfect. We demonstrate how the new proxy model is robust to the
imperfection, while the other models are sensitively affected by it.
|
[
{
"created": "Wed, 5 Jan 2022 21:03:29 GMT",
"version": "v1"
}
] |
2022-01-14
|
[
[
"Kim",
"Diana",
""
],
[
"Elgammal",
"Ahmed",
""
],
[
"Mazzone",
"Marian",
""
]
] |
We present a machine learning system that can quantify fine art paintings with a set of visual elements and principles of art. This formal analysis is fundamental for understanding art, but developing such a system is challenging. Paintings have high visual complexities, but it is also difficult to collect enough training data with direct labels. To resolve these practical limitations, we introduce a novel mechanism, called proxy learning, which learns visual concepts in paintings through their general relation to styles. This framework does not require any visual annotation, but only uses style labels and a general relationship between visual concepts and style. In this paper, we propose a novel proxy model and reformulate four pre-existing methods in the context of proxy learning. Through quantitative and qualitative comparison, we evaluate these methods and compare their effectiveness in quantifying the artistic visual concepts, where the general relationship is estimated by language models: GloVe or BERT. The language modeling is a practical and scalable solution requiring no labeling, but it is inevitably imperfect. We demonstrate how the new proxy model is robust to the imperfection, while the other models are sensitively affected by it.
|
2203.13888
|
Jacob Rosenthal
|
David Brundage, Jacob Rosenthal, Ryan Carelli, Sophie Rand, Renato
Umeton, Massimo Loda, Luigi Marchionni
|
Whole Slide Image to DICOM Conversion as Event-Driven Cloud
Infrastructure
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The Digital Imaging and Communication in Medicine (DICOM) specification is
increasingly being adopted in digital pathology to promote data standardization
and interoperability. Efficient conversion of proprietary file formats into the
DICOM standard format is a key requirement for institutional adoption of DICOM,
necessary to ensure compatibility with existing scanners, microscopes, and data
archives. Here, we present a cloud computing architecture for DICOM conversion,
leveraging an event-driven microservices framework hosted in a serverless
computing environment in Google Cloud to enable efficient DICOM conversion at
scales ranging from individual images to institutional-scale datasets. In our
experiments, employing a microservices-based approach substantially reduced
runtime to process a batch of images relative to parallel and serial
processing. This work demonstrates the importance of designing scalable systems
for enabling enterprise-level adoption of digital pathology workflows, and
provides a blueprint for using a microservice architecture to enable efficient
DICOM conversion.
|
[
{
"created": "Fri, 25 Mar 2022 19:55:49 GMT",
"version": "v1"
}
] |
2022-03-29
|
[
[
"Brundage",
"David",
""
],
[
"Rosenthal",
"Jacob",
""
],
[
"Carelli",
"Ryan",
""
],
[
"Rand",
"Sophie",
""
],
[
"Umeton",
"Renato",
""
],
[
"Loda",
"Massimo",
""
],
[
"Marchionni",
"Luigi",
""
]
] |
The Digital Imaging and Communication in Medicine (DICOM) specification is increasingly being adopted in digital pathology to promote data standardization and interoperability. Efficient conversion of proprietary file formats into the DICOM standard format is a key requirement for institutional adoption of DICOM, necessary to ensure compatibility with existing scanners, microscopes, and data archives. Here, we present a cloud computing architecture for DICOM conversion, leveraging an event-driven microservices framework hosted in a serverless computing environment in Google Cloud to enable efficient DICOM conversion at scales ranging from individual images to institutional-scale datasets. In our experiments, employing a microservices-based approach substantially reduced runtime to process a batch of images relative to parallel and serial processing. This work demonstrates the importance of designing scalable systems for enabling enterprise-level adoption of digital pathology workflows, and provides a blueprint for using a microservice architecture to enable efficient DICOM conversion.
|
1006.4903
|
Frank Sottile
|
Luis David Garcia-Puente, Frank Sottile, Chungang Zhu
|
Toric degenerations of Bezier patches
|
21 pages, many .eps figures
| null | null | null |
cs.GR math.AG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The control polygon of a Bezier curve is well-defined and has geometric
significance---there is a sequence of weights under which the limiting position
of the curve is the control polygon. For a Bezier surface patch, there are many
possible polyhedral control structures, and none are canonical. We propose a
not necessarily polyhedral control structure for surface patches, regular
control surfaces, which are certain C^0 spline surfaces. While not unique,
regular control surfaces are exactly the possible limiting positions of a
Bezier patch when the weights are allowed to vary.
|
[
{
"created": "Fri, 25 Jun 2010 03:11:37 GMT",
"version": "v1"
},
{
"created": "Thu, 30 Dec 2010 05:17:12 GMT",
"version": "v2"
}
] |
2011-01-04
|
[
[
"Garcia-Puente",
"Luis David",
""
],
[
"Sottile",
"Frank",
""
],
[
"Zhu",
"Chungang",
""
]
] |
The control polygon of a Bezier curve is well-defined and has geometric significance---there is a sequence of weights under which the limiting position of the curve is the control polygon. For a Bezier surface patch, there are many possible polyhedral control structures, and none are canonical. We propose a not necessarily polyhedral control structure for surface patches, regular control surfaces, which are certain C^0 spline surfaces. While not unique, regular control surfaces are exactly the possible limiting positions of a Bezier patch when the weights are allowed to vary.
|
1608.00187
|
Ranjay Krishna
|
Cewu Lu, Ranjay Krishna, Michael Bernstein, Li Fei-Fei
|
Visual Relationship Detection with Language Priors
|
ECCV 2016 Oral
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visual relationships capture a wide variety of interactions between pairs of
objects in images (e.g. "man riding bicycle" and "man pushing bicycle").
Consequently, the set of possible relationships is extremely large and it is
difficult to obtain sufficient training examples for all possible
relationships. Because of this limitation, previous work on visual relationship
detection has concentrated on predicting only a handful of relationships.
Though most relationships are infrequent, their objects (e.g. "man" and
"bicycle") and predicates (e.g. "riding" and "pushing") independently occur
more frequently. We propose a model that uses this insight to train visual
models for objects and predicates individually and later combines them together
to predict multiple relationships per image. We improve on prior work by
leveraging language priors from semantic word embeddings to finetune the
likelihood of a predicted relationship. Our model can scale to predict
thousands of types of relationships from a few examples. Additionally, we
localize the objects in the predicted relationships as bounding boxes in the
image. We further demonstrate that understanding relationships can improve
content based image retrieval.
|
[
{
"created": "Sun, 31 Jul 2016 05:54:13 GMT",
"version": "v1"
}
] |
2016-08-02
|
[
[
"Lu",
"Cewu",
""
],
[
"Krishna",
"Ranjay",
""
],
[
"Bernstein",
"Michael",
""
],
[
"Fei-Fei",
"Li",
""
]
] |
Visual relationships capture a wide variety of interactions between pairs of objects in images (e.g. "man riding bicycle" and "man pushing bicycle"). Consequently, the set of possible relationships is extremely large and it is difficult to obtain sufficient training examples for all possible relationships. Because of this limitation, previous work on visual relationship detection has concentrated on predicting only a handful of relationships. Though most relationships are infrequent, their objects (e.g. "man" and "bicycle") and predicates (e.g. "riding" and "pushing") independently occur more frequently. We propose a model that uses this insight to train visual models for objects and predicates individually and later combines them together to predict multiple relationships per image. We improve on prior work by leveraging language priors from semantic word embeddings to finetune the likelihood of a predicted relationship. Our model can scale to predict thousands of types of relationships from a few examples. Additionally, we localize the objects in the predicted relationships as bounding boxes in the image. We further demonstrate that understanding relationships can improve content based image retrieval.
|