id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1805.02508 | Md Meftahul Ferdaus | Md Meftahul Ferdaus, Mahardhika Pratama, Sreenatha G. Anavatti,
Matthew A. Garratt | A Generic Self-Evolving Neuro-Fuzzy Controller based High-performance
Hexacopter Altitude Control System | submitted to the 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC2018) | null | null | null | cs.SY cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Nowadays, the application of fully autonomous systems such as rotary wing unmanned air vehicles (UAVs) is increasing sharply. Due to their complex nonlinear dynamics, there is substantial research interest in developing intelligent, self-organizing, evolving controllers based on machine learning for these vehicles, notably to address the system's dynamic characteristics. In this work, such an evolving controller, namely the Generic-controller (G-controller), is proposed to control the altitude of a rotary wing UAV, namely a hexacopter. This controller can work with very little expert domain knowledge. The evolving architecture of this controller is based on an advanced incremental learning algorithm, namely the Generic Evolving Neuro-Fuzzy Inference System (GENEFIS). The controller does not require any offline training, since it starts operating from scratch with an empty set of fuzzy rules and then adds or deletes rules on demand. The adaptation laws for the consequent parameters are derived from sliding mode control (SMC) theory. Lyapunov theory is used to guarantee the stability of the proposed controller. In addition, an auxiliary robustifying control term is implemented to obtain uniform asymptotic convergence of the tracking error to zero. Finally, the G-controller's performance is evaluated through the altitude tracking of a hexacopter for various trajectories.
| [
{
"created": "Fri, 4 May 2018 05:31:25 GMT",
"version": "v1"
}
] | 2018-05-08 | [
[
"Ferdaus",
"Md Meftahul",
""
],
[
"Pratama",
"Mahardhika",
""
],
[
"Anavatti",
"Sreenatha G.",
""
],
[
"Garratt",
"Matthew A.",
""
]
] | Nowadays, the application of fully autonomous systems such as rotary wing unmanned air vehicles (UAVs) is increasing sharply. Due to their complex nonlinear dynamics, there is substantial research interest in developing intelligent, self-organizing, evolving controllers based on machine learning for these vehicles, notably to address the system's dynamic characteristics. In this work, such an evolving controller, namely the Generic-controller (G-controller), is proposed to control the altitude of a rotary wing UAV, namely a hexacopter. This controller can work with very little expert domain knowledge. The evolving architecture of this controller is based on an advanced incremental learning algorithm, namely the Generic Evolving Neuro-Fuzzy Inference System (GENEFIS). The controller does not require any offline training, since it starts operating from scratch with an empty set of fuzzy rules and then adds or deletes rules on demand. The adaptation laws for the consequent parameters are derived from sliding mode control (SMC) theory. Lyapunov theory is used to guarantee the stability of the proposed controller. In addition, an auxiliary robustifying control term is implemented to obtain uniform asymptotic convergence of the tracking error to zero. Finally, the G-controller's performance is evaluated through the altitude tracking of a hexacopter for various trajectories. |
1208.2561 | Patrick Traxler | Patrick Traxler | The Relative Exponential Time Complexity of Approximate Counting
Satisfying Assignments | null | null | null | null | cs.CC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the exponential time complexity of approximate counting satisfying
assignments of CNFs. We reduce the problem to deciding satisfiability of a CNF.
Our reduction preserves the number of variables of the input formula and thus
also preserves the exponential complexity of approximate counting.
Our algorithm is also similar to an algorithm which works particularly well in practice but for which no approximation guarantee was known. Towards an
analysis of our reduction we provide a new inequality similar to the
Bonami-Beckner hypercontractive inequality.
| [
{
"created": "Mon, 13 Aug 2012 12:08:11 GMT",
"version": "v1"
}
] | 2012-08-14 | [
[
"Traxler",
"Patrick",
""
]
] | We study the exponential time complexity of approximate counting satisfying assignments of CNFs. We reduce the problem to deciding satisfiability of a CNF. Our reduction preserves the number of variables of the input formula and thus also preserves the exponential complexity of approximate counting. Our algorithm is also similar to an algorithm which works particularly well in practice but for which no approximation guarantee was known. Towards an analysis of our reduction we provide a new inequality similar to the Bonami-Beckner hypercontractive inequality. |
1303.1417 | Sugata Sanyal | Sugata Sanyal, Parthasarathy P. Iyer | Inter-Cloud Data Security Strategies | 5 pages, 1 Table. arXiv admin note: text overlap with
arXiv:0907.2485, arXiv:0903.0694 by other authors without attribution | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cloud computing is a complex infrastructure of software, hardware,
processing, and storage that is available as a service. Cloud computing offers
immediate access to large numbers of the world's most sophisticated
supercomputers and their corresponding processing power, interconnected at
various locations around the world, proffering speed in the tens of trillions
of computations per second. Information resides in databases and software scattered around the Internet. There are many service providers on the Internet; each service can be regarded as a cloud, and each cloud service exchanges data with other clouds, so when data is exchanged between clouds, the problem of security arises. Security is an important issue for cloud computing, both
in terms of legal compliance and user trust, and needs to be considered at
every phase of design. In contrast to traditional solutions, where the IT
services are under proper physical, logical and personnel controls, Cloud
Computing moves the application software and databases to the large data
centers, where the management of the data and services may not be trustworthy.
This unique attribute, however, poses many new security challenges. Cloud
computing seems to offer some incredible benefits for communicators.
| [
{
"created": "Wed, 6 Mar 2013 18:36:09 GMT",
"version": "v1"
}
] | 2013-03-07 | [
[
"Sanyal",
"Sugata",
""
],
[
"Iyer",
"Parthasarathy P.",
""
]
] | Cloud computing is a complex infrastructure of software, hardware, processing, and storage that is available as a service. Cloud computing offers immediate access to large numbers of the world's most sophisticated supercomputers and their corresponding processing power, interconnected at various locations around the world, proffering speed in the tens of trillions of computations per second. Information resides in databases and software scattered around the Internet. There are many service providers on the Internet; each service can be regarded as a cloud, and each cloud service exchanges data with other clouds, so when data is exchanged between clouds, the problem of security arises. Security is an important issue for cloud computing, both in terms of legal compliance and user trust, and needs to be considered at every phase of design. In contrast to traditional solutions, where the IT services are under proper physical, logical and personnel controls, Cloud Computing moves the application software and databases to the large data centers, where the management of the data and services may not be trustworthy. This unique attribute, however, poses many new security challenges. Cloud computing seems to offer some incredible benefits for communicators. |
1712.01653 | Ignacio Garcia Dorado | Aysegul Dundar and Ignacio Garcia-Dorado | Context Augmentation for Convolutional Neural Networks | 8 pages, 7 figures | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent enhancements of deep convolutional neural networks (ConvNets)
empowered by enormous amounts of labeled data have closed the gap with human
performance for many object recognition tasks. These impressive results have
generated interest in understanding and visualization of ConvNets. In this
work, we study the effect of background in the task of image classification.
Our results show that changing the backgrounds of the training datasets can
have drastic effects on testing accuracies. Furthermore, we enhance existing
augmentation techniques with the foreground segmented objects. The findings of
this work are important in increasing the accuracies when only a small dataset
is available, in creating datasets, and creating synthetic images.
| [
{
"created": "Wed, 22 Nov 2017 23:53:47 GMT",
"version": "v1"
},
{
"created": "Tue, 12 Dec 2017 01:11:35 GMT",
"version": "v2"
}
] | 2017-12-13 | [
[
"Dundar",
"Aysegul",
""
],
[
"Garcia-Dorado",
"Ignacio",
""
]
] | Recent enhancements of deep convolutional neural networks (ConvNets) empowered by enormous amounts of labeled data have closed the gap with human performance for many object recognition tasks. These impressive results have generated interest in understanding and visualization of ConvNets. In this work, we study the effect of background in the task of image classification. Our results show that changing the backgrounds of the training datasets can have drastic effects on testing accuracies. Furthermore, we enhance existing augmentation techniques with the foreground segmented objects. The findings of this work are important in increasing the accuracies when only a small dataset is available, in creating datasets, and creating synthetic images. |
1510.00208 | M\'ark F\"uzesdi | Mark F\"uzesdi | Boolean-type Retractable State-finite Automata Without Outputs | 12 pages | null | null | null | cs.FL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An automaton $\bf A$ is called a retractable automaton if, for every
subautomaton $\bf B$ of $\bf A$, there is at least one homomorphism of $\bf A$
onto $\bf B$ which leaves the elements of $B$ fixed (such homomorphism is
called a retract homomorphism of $\bf A$ onto $\bf B$). We say that a
retractable automaton ${\bf A}$=(A,X,$\delta$) is Boolean-type if there exists
a family $\{\lambda_B \mid \textrm{ B is a subautomaton of A } \}$ of retract
homomorphisms $\lambda _B$ of $\bf A$ such that, for arbitrary subautomata
${\bf B}_1$ and ${\bf B}_2$ of $\bf A$, the condition $B_1\subseteq B_2$
implies $Ker\lambda _{B_2}\subseteq Ker\lambda _{B_1}$. In this paper we
describe the Boolean-type retractable state-finite automata without outputs.
| [
{
"created": "Thu, 1 Oct 2015 13:02:22 GMT",
"version": "v1"
},
{
"created": "Mon, 5 Oct 2015 14:10:04 GMT",
"version": "v2"
}
] | 2015-10-06 | [
[
"Füzesdi",
"Mark",
""
]
] | An automaton $\bf A$ is called a retractable automaton if, for every subautomaton $\bf B$ of $\bf A$, there is at least one homomorphism of $\bf A$ onto $\bf B$ which leaves the elements of $B$ fixed (such homomorphism is called a retract homomorphism of $\bf A$ onto $\bf B$). We say that a retractable automaton ${\bf A}$=(A,X,$\delta$) is Boolean-type if there exists a family $\{\lambda_B \mid \textrm{ B is a subautomaton of A } \}$ of retract homomorphisms $\lambda _B$ of $\bf A$ such that, for arbitrary subautomata ${\bf B}_1$ and ${\bf B}_2$ of $\bf A$, the condition $B_1\subseteq B_2$ implies $Ker\lambda _{B_2}\subseteq Ker\lambda _{B_1}$. In this paper we describe the Boolean-type retractable state-finite automata without outputs. |
2209.11885 | Wendi Liu | Wendi Liu, Michael J. Pyrcz | Physics-Informed Graph Neural Network for Spatial-temporal Production
Forecasting | null | null | null | null | cs.LG physics.app-ph | http://creativecommons.org/licenses/by/4.0/ | Production forecast based on historical data provides essential value for
developing hydrocarbon resources. Classic history matching workflow is often
computationally intense and geometry-dependent. Analytical data-driven models
like decline curve analysis (DCA) and capacitance resistance models (CRM)
provide a grid-free solution with a relatively simple model capable of
integrating some degree of physics constraints. However, the analytical
solution may ignore subsurface geometries and is appropriate only for specific
flow regimes and otherwise may violate physics conditions resulting in degraded
model prediction accuracy. Machine learning-based predictive models for time series provide non-parametric, assumption-free solutions for production forecasting, but they are prone to overfitting due to training data sparsity and therefore may be accurate only over short prediction time intervals.
We propose a grid-free, physics-informed graph neural network (PI-GNN) for
production forecasting. A customized graph convolution layer aggregates
neighborhood information from historical data and has the flexibility to
integrate domain expertise into the data-driven model. The proposed method
relaxes the dependence on closed-form solutions like CRM and honors the given
physics-based constraints. Our proposed method is robust, with improved
performance and model interpretability relative to the conventional CRM and GNN
baseline without physics constraints.
| [
{
"created": "Fri, 23 Sep 2022 23:28:40 GMT",
"version": "v1"
}
] | 2022-09-27 | [
[
"Liu",
"Wendi",
""
],
[
"Pyrcz",
"Michael J.",
""
]
] | Production forecast based on historical data provides essential value for developing hydrocarbon resources. Classic history matching workflow is often computationally intense and geometry-dependent. Analytical data-driven models like decline curve analysis (DCA) and capacitance resistance models (CRM) provide a grid-free solution with a relatively simple model capable of integrating some degree of physics constraints. However, the analytical solution may ignore subsurface geometries and is appropriate only for specific flow regimes and otherwise may violate physics conditions resulting in degraded model prediction accuracy. Machine learning-based predictive models for time series provide non-parametric, assumption-free solutions for production forecasting, but they are prone to overfitting due to training data sparsity and therefore may be accurate only over short prediction time intervals. We propose a grid-free, physics-informed graph neural network (PI-GNN) for production forecasting. A customized graph convolution layer aggregates neighborhood information from historical data and has the flexibility to integrate domain expertise into the data-driven model. The proposed method relaxes the dependence on closed-form solutions like CRM and honors the given physics-based constraints. Our proposed method is robust, with improved performance and model interpretability relative to the conventional CRM and GNN baseline without physics constraints. |
2212.08769 | Omead Pooladzandi | Omead Pooladzandi, Yiming Zhou | Improving Levenberg-Marquardt Algorithm for Neural Networks | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We explore the usage of the Levenberg-Marquardt (LM) algorithm for regression
(non-linear least squares) and classification (generalized Gauss-Newton
methods) tasks in neural networks. We compare the performance of the LM method
with other popular first-order algorithms such as SGD and Adam, as well as
other second-order algorithms such as L-BFGS, Hessian-Free, and KFAC. We
further speed up the LM method by using adaptive momentum, learning rate line
search, and uphill step acceptance.
| [
{
"created": "Sat, 17 Dec 2022 00:36:46 GMT",
"version": "v1"
}
] | 2022-12-20 | [
[
"Pooladzandi",
"Omead",
""
],
[
"Zhou",
"Yiming",
""
]
] | We explore the usage of the Levenberg-Marquardt (LM) algorithm for regression (non-linear least squares) and classification (generalized Gauss-Newton methods) tasks in neural networks. We compare the performance of the LM method with other popular first-order algorithms such as SGD and Adam, as well as other second-order algorithms such as L-BFGS, Hessian-Free, and KFAC. We further speed up the LM method by using adaptive momentum, learning rate line search, and uphill step acceptance. |
2306.09049 | Andreas Fischer | Zineddine Bettouche and Andreas Fischer | Mapping Researcher Activity based on Publication Data by means of
Transformers | Proc. of the Interdisciplinary Conference on Mechanics, Computers and
Electrics (ICMECE 2022) | null | null | null | cs.CL cs.DL cs.IR cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | Modern performance on several natural language processing (NLP) tasks has
been enhanced thanks to the Transformer-based pre-trained language model BERT.
We employ this concept to investigate a local publication database. Research
papers are encoded and clustered to form a landscape view of the scientific
topics in which research is active. Authors working on similar topics can be identified by calculating the similarity between their papers. Based on this, we define a similarity metric between authors. Additionally, we introduce the
concept of self-similarity to indicate the topical variety of authors.
| [
{
"created": "Thu, 15 Jun 2023 11:13:54 GMT",
"version": "v1"
}
] | 2023-06-16 | [
[
"Bettouche",
"Zineddine",
""
],
[
"Fischer",
"Andreas",
""
]
] | Modern performance on several natural language processing (NLP) tasks has been enhanced thanks to the Transformer-based pre-trained language model BERT. We employ this concept to investigate a local publication database. Research papers are encoded and clustered to form a landscape view of the scientific topics in which research is active. Authors working on similar topics can be identified by calculating the similarity between their papers. Based on this, we define a similarity metric between authors. Additionally, we introduce the concept of self-similarity to indicate the topical variety of authors. |
1011.2235 | Konstantinos Tsianos | Konstantinos I. Tsianos and Michael G. Rabbat | Multiscale Gossip for Efficient Decentralized Averaging in Wireless
Packet Networks | (under Review) | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes and analyzes a hierarchical gossip algorithm for solving
the distributed average consensus problem in wireless sensor networks. The
network is recursively partitioned into subnetworks. Initially, nodes at the
finest scale gossip to compute local averages. Then, using geographic routing
to enable gossip between nodes that are not directly connected, these local
averages are progressively fused up the hierarchy until the global average is
computed. We show that the proposed hierarchical scheme with $k$ levels of
hierarchy is competitive with state-of-the-art randomized gossip algorithms, in
terms of message complexity, achieving $\epsilon$-accuracy with high
probability after $O\big(n \log \log n \log \frac{kn}{\epsilon} \big)$
messages. Key to our analysis is the way in which the network is recursively
partitioned. We find that the optimal scaling law is achieved when subnetworks
at scale $j$ contain $O(n^{(2/3)^j})$ nodes; then the message complexity at any
individual scale is $O(n \log \frac{kn}{\epsilon})$, and the total number of
scales in the hierarchy grows slowly, as $\Theta(\log \log n)$. Another
important consequence of hierarchical construction is that the longest distance
over which messages are exchanged is $O(n^{1/3})$ hops (at the highest scale),
and most messages (at lower scales) travel shorter distances. In networks that
use link-level acknowledgements, this results in less congestion and resource
usage by reducing message retransmissions. Simulations illustrate that the
proposed scheme is more message-efficient than existing state-of-the-art
randomized gossip algorithms based on averaging along paths.
| [
{
"created": "Tue, 9 Nov 2010 23:50:10 GMT",
"version": "v1"
},
{
"created": "Thu, 26 Jan 2012 21:23:03 GMT",
"version": "v2"
},
{
"created": "Mon, 27 Feb 2012 21:51:58 GMT",
"version": "v3"
}
] | 2012-02-29 | [
[
"Tsianos",
"Konstantinos I.",
""
],
[
"Rabbat",
"Michael G.",
""
]
] | This paper describes and analyzes a hierarchical gossip algorithm for solving the distributed average consensus problem in wireless sensor networks. The network is recursively partitioned into subnetworks. Initially, nodes at the finest scale gossip to compute local averages. Then, using geographic routing to enable gossip between nodes that are not directly connected, these local averages are progressively fused up the hierarchy until the global average is computed. We show that the proposed hierarchical scheme with $k$ levels of hierarchy is competitive with state-of-the-art randomized gossip algorithms, in terms of message complexity, achieving $\epsilon$-accuracy with high probability after $O\big(n \log \log n \log \frac{kn}{\epsilon} \big)$ messages. Key to our analysis is the way in which the network is recursively partitioned. We find that the optimal scaling law is achieved when subnetworks at scale $j$ contain $O(n^{(2/3)^j})$ nodes; then the message complexity at any individual scale is $O(n \log \frac{kn}{\epsilon})$, and the total number of scales in the hierarchy grows slowly, as $\Theta(\log \log n)$. Another important consequence of hierarchical construction is that the longest distance over which messages are exchanged is $O(n^{1/3})$ hops (at the highest scale), and most messages (at lower scales) travel shorter distances. In networks that use link-level acknowledgements, this results in less congestion and resource usage by reducing message retransmissions. Simulations illustrate that the proposed scheme is more message-efficient than existing state-of-the-art randomized gossip algorithms based on averaging along paths. |
2311.18102 | Parshuram Aarotale | Parshuram N. Aarotale, Twyla Hill, Ajita Rattani | PatchBMI-Net: Lightweight Facial Patch-based Ensemble for BMI Prediction | 7 pages,3 figures | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Due to an alarming trend related to obesity affecting 93.3 million adults in
the United States alone, body mass index (BMI) and body weight have drawn
significant interest in various health monitoring applications. Consequently,
several studies have proposed self-diagnostic facial image-based BMI prediction
methods for healthy weight monitoring. These methods have mostly used
convolutional neural network (CNN) based regression baselines, such as VGG19,
ResNet50, and Efficient-NetB0, for BMI prediction from facial images. However,
the high computational requirement of these heavy-weight CNN models limits
their deployment to resource-constrained mobile devices, thus deterring weight
monitoring using smartphones. This paper aims to develop a lightweight facial
patch-based ensemble (PatchBMI-Net) for BMI prediction to facilitate the
deployment and weight monitoring using smartphones. Extensive experiments on
BMI-annotated facial image datasets suggest that our proposed PatchBMI-Net
model can obtain Mean Absolute Error (MAE) in the range [3.58, 6.51] with a
size of about 3.3 million parameters. On cross-comparison with heavyweight
models, such as ResNet-50 and Xception, trained for BMI prediction from facial
images, our proposed PatchBMI-Net obtains equivalent MAE along with the model
size reduction of about 5.4x and the average inference time reduction of about
3x when deployed on an Apple-14 smartphone. This demonstrates performance efficiency as well as low latency for on-device deployment and weight monitoring using smartphone applications.
| [
{
"created": "Wed, 29 Nov 2023 21:39:24 GMT",
"version": "v1"
}
] | 2023-12-01 | [
[
"Aarotale",
"Parshuram N.",
""
],
[
"Hill",
"Twyla",
""
],
[
"Rattani",
"Ajita",
""
]
] | Due to an alarming trend related to obesity affecting 93.3 million adults in the United States alone, body mass index (BMI) and body weight have drawn significant interest in various health monitoring applications. Consequently, several studies have proposed self-diagnostic facial image-based BMI prediction methods for healthy weight monitoring. These methods have mostly used convolutional neural network (CNN) based regression baselines, such as VGG19, ResNet50, and Efficient-NetB0, for BMI prediction from facial images. However, the high computational requirement of these heavy-weight CNN models limits their deployment to resource-constrained mobile devices, thus deterring weight monitoring using smartphones. This paper aims to develop a lightweight facial patch-based ensemble (PatchBMI-Net) for BMI prediction to facilitate the deployment and weight monitoring using smartphones. Extensive experiments on BMI-annotated facial image datasets suggest that our proposed PatchBMI-Net model can obtain Mean Absolute Error (MAE) in the range [3.58, 6.51] with a size of about 3.3 million parameters. On cross-comparison with heavyweight models, such as ResNet-50 and Xception, trained for BMI prediction from facial images, our proposed PatchBMI-Net obtains equivalent MAE along with the model size reduction of about 5.4x and the average inference time reduction of about 3x when deployed on an Apple-14 smartphone. This demonstrates performance efficiency as well as low latency for on-device deployment and weight monitoring using smartphone applications. |
1607.00234 | Florentin Smarandache | Florentin Smarandache | Neutrosophic Overset, Neutrosophic Underset, and Neutrosophic Offset.
Similarly for Neutrosophic Over-/Under-/Off- Logic, Probability, and
Statistics | 170 pages | Pons Editions, Bruxelles, 2016 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neutrosophic Over-/Under-/Off-Set and -Logic were defined by the author in
1995 and published for the first time in 2007. We extended the neutrosophic set
respectively to Neutrosophic Overset {when some neutrosophic component is over
1}, Neutrosophic Underset {when some neutrosophic component is below 0}, and to
Neutrosophic Offset {when some neutrosophic components are off the interval [0,
1], i.e. some neutrosophic component over 1 and other neutrosophic component
below 0}. This is no surprise with respect to the classical fuzzy set/logic,
intuitionistic fuzzy set/logic, or classical/imprecise probability, where the
values are not allowed outside the interval [0, 1], since our real-world has
numerous examples and applications of over-/under-/off-neutrosophic components.
For example, person working overtime deserves a membership degree over 1, while
a person producing more damage than benefit to a company deserves a membership
below 0. Then, similarly, the Neutrosophic Logic/Measure/Probability/Statistics
etc. were extended to respectively Neutrosophic Over-/Under-/Off-Logic,
-Measure, -Probability, -Statistics etc. [Smarandache, 2007].
| [
{
"created": "Thu, 30 Jun 2016 02:17:59 GMT",
"version": "v1"
}
] | 2016-07-04 | [
[
"Smarandache",
"Florentin",
""
]
] | Neutrosophic Over-/Under-/Off-Set and -Logic were defined by the author in 1995 and published for the first time in 2007. We extended the neutrosophic set respectively to Neutrosophic Overset {when some neutrosophic component is over 1}, Neutrosophic Underset {when some neutrosophic component is below 0}, and to Neutrosophic Offset {when some neutrosophic components are off the interval [0, 1], i.e. some neutrosophic component over 1 and other neutrosophic component below 0}. This is no surprise with respect to the classical fuzzy set/logic, intuitionistic fuzzy set/logic, or classical/imprecise probability, where the values are not allowed outside the interval [0, 1], since our real-world has numerous examples and applications of over-/under-/off-neutrosophic components. For example, person working overtime deserves a membership degree over 1, while a person producing more damage than benefit to a company deserves a membership below 0. Then, similarly, the Neutrosophic Logic/Measure/Probability/Statistics etc. were extended to respectively Neutrosophic Over-/Under-/Off-Logic, -Measure, -Probability, -Statistics etc. [Smarandache, 2007]. |
1911.11351 | Mingda Wu | Mingda Wu, Di Huang, Yuanfang Guo, Yunhong Wang | Distraction-Aware Feature Learning for Human Attribute Recognition via
Coarse-to-Fine Attention Mechanism | 8 pages, 5 figures, accepted by AAAI-20 as an oral presentation | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, Human Attribute Recognition (HAR) has become a hot topic due to its
scientific challenges and application potentials, where localizing attributes
is a crucial stage but not well handled. In this paper, we propose a novel deep
learning approach to HAR, namely Distraction-aware HAR (Da-HAR). It enhances
deep CNN feature learning by improving attribute localization through a
coarse-to-fine attention mechanism. At the coarse step, a self-mask block is
built to roughly discriminate and reduce distractions, while at the fine step,
a masked attention branch is applied to further eliminate irrelevant regions.
Thanks to this mechanism, feature learning is more accurate, especially when
heavy occlusions and complex backgrounds exist. Extensive experiments are
conducted on the WIDER-Attribute and RAP databases, and state-of-the-art
results are achieved, demonstrating the effectiveness of the proposed approach.
| [
{
"created": "Tue, 26 Nov 2019 05:49:52 GMT",
"version": "v1"
}
] | 2019-11-27 | [
[
"Wu",
"Mingda",
""
],
[
"Huang",
"Di",
""
],
[
"Guo",
"Yuanfang",
""
],
[
"Wang",
"Yunhong",
""
]
] | Recently, Human Attribute Recognition (HAR) has become a hot topic due to its scientific challenges and application potentials, where localizing attributes is a crucial stage but not well handled. In this paper, we propose a novel deep learning approach to HAR, namely Distraction-aware HAR (Da-HAR). It enhances deep CNN feature learning by improving attribute localization through a coarse-to-fine attention mechanism. At the coarse step, a self-mask block is built to roughly discriminate and reduce distractions, while at the fine step, a masked attention branch is applied to further eliminate irrelevant regions. Thanks to this mechanism, feature learning is more accurate, especially when heavy occlusions and complex backgrounds exist. Extensive experiments are conducted on the WIDER-Attribute and RAP databases, and state-of-the-art results are achieved, demonstrating the effectiveness of the proposed approach. |
2011.13347 | Catarina Lopes-Dias | Catarina Lopes-Dias, Andreea I. Sburlea, Katharina Breitegger, Daniela
Wyss, Harald Drescher, Renate Wildburger and Gernot R. M\"uller-Putz | Online asynchronous detection of error-related potentials in
participants with a spinal cord injury using a generic classifier | null | J. Neural Eng. 18 046022 (2021) | 10.1088/1741-2552/abd1eb | null | cs.HC | http://creativecommons.org/licenses/by/4.0/ | A BCI user's awareness of an error is associated with a cortical signature named the error-related potential (ErrP). The incorporation of ErrP detection in BCIs can improve their performance. This work is threefold. First, we
investigate if an ErrP classifier is transferable from able-bodied participants
to participants with spinal cord injury (SCI). Second, we test this generic
ErrP classifier with SCI and control participants, in an online experiment
without offline calibration. Third, we investigate the morphology of ErrPs in
both groups of participants. We used previously recorded
electroencephalographic (EEG) data from able-bodied participants to train an
ErrP classifier. We tested the classifier asynchronously, in an online
experiment with 16 new participants: 8 participants with SCI and 8 able-bodied
control participants. The experiment had no offline calibration and
participants received feedback regarding ErrP detection from the start.
The generic classifier was not trained with the user's brain signals. Still,
its performance was optimized during the online experiment with the use of
personalized decision thresholds. Participants with SCI presented a
non-homogeneous ErrP morphology, and four of them did not present clear ErrP
signals. The generic classifier performed above chance level in participants
with clear ErrP signals, independently of the SCI (11 out of 16 participants).
Three of the five participants who obtained chance-level results with the
generic classifier would not have benefited from a personalized
classifier. This work shows the feasibility of transferring an ErrP classifier
from able-bodied participants to participants with SCI, for asynchronous
detection of ErrPs in an online experiment without offline calibration, which
provided immediate feedback to the users.
| [
{
"created": "Thu, 26 Nov 2020 15:24:41 GMT",
"version": "v1"
},
{
"created": "Sun, 6 Dec 2020 08:08:22 GMT",
"version": "v2"
},
{
"created": "Fri, 2 Apr 2021 16:15:12 GMT",
"version": "v3"
}
] | 2021-04-05 | [
[
"Lopes-Dias",
"Catarina",
""
],
[
"Sburlea",
"Andreea I.",
""
],
[
"Breitegger",
"Katharina",
""
],
[
"Wyss",
"Daniela",
""
],
[
"Drescher",
"Harald",
""
],
[
"Wildburger",
"Renate",
""
],
[
"Müller-Putz",
"Gernot R.",
""
]
] | A BCI user's awareness of an error is associated with a cortical signature named error-related potential (ErrP). Incorporating ErrP detection into BCIs can improve their performance. This work is three-fold. First, we investigate if an ErrP classifier is transferable from able-bodied participants to participants with spinal cord injury (SCI). Second, we test this generic ErrP classifier with SCI and control participants, in an online experiment without offline calibration. Third, we investigate the morphology of ErrPs in both groups of participants. We used previously recorded electroencephalographic (EEG) data from able-bodied participants to train an ErrP classifier. We tested the classifier asynchronously, in an online experiment with 16 new participants: 8 participants with SCI and 8 able-bodied control participants. The experiment had no offline calibration and participants received feedback regarding ErrP detection from the start. The generic classifier was not trained with the user's brain signals. Still, its performance was optimized during the online experiment with the use of personalized decision thresholds. Participants with SCI presented a non-homogeneous ErrP morphology, and four of them did not present clear ErrP signals. The generic classifier performed above chance level in participants with clear ErrP signals, independently of the SCI (11 out of 16 participants). Three of the five participants who obtained chance-level results with the generic classifier would not have benefited from a personalized classifier. This work shows the feasibility of transferring an ErrP classifier from able-bodied participants to participants with SCI, for asynchronous detection of ErrPs in an online experiment without offline calibration, which provided immediate feedback to the users. |
2305.09062 | Gilberto Ochoa-Ruiz | Mauricio Mendez-Ruiz, Jorge Gonzalez-Zapata, Ivan Reyes-Amezcua,
Daniel Flores-Araiza, Francisco Lopez-Tiro, Andres Mendez-Vazquez, Gilberto
Ochoa-Ruiz | SuSana Distancia is all you need: Enforcing class separability in metric
learning via two novel distance-based loss functions for few-shot image
classification | Paper submitted to a journal for publication | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Few-shot learning is a challenging area of research that aims to learn new
concepts with only a few labeled samples of data. Recent works based on
metric-learning approaches leverage the meta-learning approach, which is
encompassed by episodic tasks that make use of a support (training) set and a
query (test) set, with the objective of learning a similarity comparison metric between
those sets. Due to the lack of data, the learning process of the embedding
network becomes an important part of the few-shot task. Previous works have
addressed this problem using metric learning approaches, but the properties of
the underlying latent space and the separability of the different classes in
it were not entirely enforced. In this work, we propose two different loss
functions which consider the importance of the embedding vectors by looking at
the intra-class and inter-class distances between the few available data. The first loss
function is the Proto-Triplet Loss, which is based on the original triplet loss
with the modifications needed to better work on few-shot scenarios. The second
loss function, which we dub the ICNN loss, is based on an inter- and
intra-class nearest neighbors score, which helps us assess the quality of the
embeddings obtained from the trained network. Our results, obtained from an extensive
experimental setup, show a significant improvement in accuracy on the
miniImageNet benchmark compared to other metric-based few-shot learning
methods by a margin of 2%, demonstrating the capability of these loss functions
to allow the network to generalize better to previously unseen classes. In our
experiments, we demonstrate competitive generalization capabilities to other
domains, such as the Caltech CUB, Dogs, and Cars datasets, compared with the
state of the art.
| [
{
"created": "Mon, 15 May 2023 23:12:09 GMT",
"version": "v1"
},
{
"created": "Wed, 17 May 2023 00:58:41 GMT",
"version": "v2"
},
{
"created": "Thu, 18 May 2023 20:41:34 GMT",
"version": "v3"
}
] | 2023-05-22 | [
[
"Mendez-Ruiz",
"Mauricio",
""
],
[
"Gonzalez-Zapata",
"Jorge",
""
],
[
"Reyes-Amezcua",
"Ivan",
""
],
[
"Flores-Araiza",
"Daniel",
""
],
[
"Lopez-Tiro",
"Francisco",
""
],
[
"Mendez-Vazquez",
"Andres",
""
],
[
"Ochoa-Ruiz",
"Gilberto",
""
]
] | Few-shot learning is a challenging area of research that aims to learn new concepts with only a few labeled samples of data. Recent works based on metric-learning approaches leverage the meta-learning approach, which is encompassed by episodic tasks that make use of a support (training) set and a query (test) set, with the objective of learning a similarity comparison metric between those sets. Due to the lack of data, the learning process of the embedding network becomes an important part of the few-shot task. Previous works have addressed this problem using metric learning approaches, but the properties of the underlying latent space and the separability of the different classes in it were not entirely enforced. In this work, we propose two different loss functions which consider the importance of the embedding vectors by looking at the intra-class and inter-class distances between the few available data. The first loss function is the Proto-Triplet Loss, which is based on the original triplet loss with the modifications needed to better work on few-shot scenarios. The second loss function, which we dub the ICNN loss, is based on an inter- and intra-class nearest neighbors score, which helps us assess the quality of the embeddings obtained from the trained network. Our results, obtained from an extensive experimental setup, show a significant improvement in accuracy on the miniImageNet benchmark compared to other metric-based few-shot learning methods by a margin of 2%, demonstrating the capability of these loss functions to allow the network to generalize better to previously unseen classes. In our experiments, we demonstrate competitive generalization capabilities to other domains, such as the Caltech CUB, Dogs, and Cars datasets, compared with the state of the art. |
2107.00495 | Boxiang Dong | Boxiang Dong, Bo Zhang, Hui (Wendy) Wang | VeriDL: Integrity Verification of Outsourced Deep Learning Services
(Extended Version) | null | null | null | null | cs.CR | http://creativecommons.org/licenses/by-sa/4.0/ | Deep neural networks (DNNs) are prominent due to their superior performance
in many fields. The deep-learning-as-a-service (DLaaS) paradigm enables
individuals and organizations (clients) to outsource their DNN learning tasks
to cloud-based platforms. However, the DLaaS server may return incorrect
DNN models due to various reasons (e.g., Byzantine failures). This raises the
serious concern of how to verify if the DNN models trained by potentially
untrusted DLaaS servers are indeed correct. To address this concern, in this
paper, we design VeriDL, a framework that supports efficient correctness
verification of DNN models in the DLaaS paradigm. The key idea of VeriDL is the
design of a small-size cryptographic proof of the training process of the DNN
model, which is associated with the model and returned to the client. Through
the proof, VeriDL can verify the correctness of the DNN model returned by the
DLaaS server with a deterministic guarantee and cheap overhead. Our experiments
on four real-world datasets demonstrate the efficiency and effectiveness of
VeriDL.
| [
{
"created": "Thu, 1 Jul 2021 14:37:49 GMT",
"version": "v1"
}
] | 2021-07-02 | [
[
"Dong",
"Boxiang",
""
],
[
"Zhang",
"Bo",
""
],
[
"Wang",
"Hui (Wendy)",
""
]
] | Deep neural networks (DNNs) are prominent due to their superior performance in many fields. The deep-learning-as-a-service (DLaaS) paradigm enables individuals and organizations (clients) to outsource their DNN learning tasks to cloud-based platforms. However, the DLaaS server may return incorrect DNN models due to various reasons (e.g., Byzantine failures). This raises the serious concern of how to verify if the DNN models trained by potentially untrusted DLaaS servers are indeed correct. To address this concern, in this paper, we design VeriDL, a framework that supports efficient correctness verification of DNN models in the DLaaS paradigm. The key idea of VeriDL is the design of a small-size cryptographic proof of the training process of the DNN model, which is associated with the model and returned to the client. Through the proof, VeriDL can verify the correctness of the DNN model returned by the DLaaS server with a deterministic guarantee and cheap overhead. Our experiments on four real-world datasets demonstrate the efficiency and effectiveness of VeriDL. |
2404.06780 | Fan Lu | Fan Lu, Kwan-Yee Lin, Yan Xu, Hongsheng Li, Guang Chen, Changjun Jiang | Urban Architect: Steerable 3D Urban Scene Generation with Layout Prior | Project page: https://urbanarchitect.github.io/ | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Text-to-3D generation has achieved remarkable success via large-scale
text-to-image diffusion models. Nevertheless, there is no paradigm for scaling
up the methodology to urban scale. Urban scenes, characterized by numerous
elements, intricate arrangement relationships, and vast scale, present a
formidable barrier to the interpretability of ambiguous textual descriptions
for effective model optimization. In this work, we surmount the limitations by
introducing a compositional 3D layout representation into the text-to-3D paradigm,
serving as an additional prior. It comprises a set of semantic primitives with
simple geometric structures and explicit arrangement relationships,
complementing textual descriptions and enabling steerable generation. Upon
this, we propose two modifications -- (1) We introduce Layout-Guided
Variational Score Distillation to address model optimization inadequacies. It
conditions the score distillation sampling process with geometric and semantic
constraints of 3D layouts. (2) To handle the unbounded nature of urban scenes,
we represent the 3D scene with a Scalable Hash Grid structure, incrementally
adapting to the growing scale of urban scenes. Extensive experiments
substantiate the capability of our framework to scale text-to-3D generation to
large-scale urban scenes that cover over 1000m driving distance for the first
time. We also present various scene editing demonstrations, showing the powers
of steerable urban scene generation. Website: https://urbanarchitect.github.io.
| [
{
"created": "Wed, 10 Apr 2024 06:41:30 GMT",
"version": "v1"
}
] | 2024-04-11 | [
[
"Lu",
"Fan",
""
],
[
"Lin",
"Kwan-Yee",
""
],
[
"Xu",
"Yan",
""
],
[
"Li",
"Hongsheng",
""
],
[
"Chen",
"Guang",
""
],
[
"Jiang",
"Changjun",
""
]
] | Text-to-3D generation has achieved remarkable success via large-scale text-to-image diffusion models. Nevertheless, there is no paradigm for scaling up the methodology to urban scale. Urban scenes, characterized by numerous elements, intricate arrangement relationships, and vast scale, present a formidable barrier to the interpretability of ambiguous textual descriptions for effective model optimization. In this work, we surmount the limitations by introducing a compositional 3D layout representation into the text-to-3D paradigm, serving as an additional prior. It comprises a set of semantic primitives with simple geometric structures and explicit arrangement relationships, complementing textual descriptions and enabling steerable generation. Upon this, we propose two modifications -- (1) We introduce Layout-Guided Variational Score Distillation to address model optimization inadequacies. It conditions the score distillation sampling process with geometric and semantic constraints of 3D layouts. (2) To handle the unbounded nature of urban scenes, we represent the 3D scene with a Scalable Hash Grid structure, incrementally adapting to the growing scale of urban scenes. Extensive experiments substantiate the capability of our framework to scale text-to-3D generation to large-scale urban scenes that cover over 1000m driving distance for the first time. We also present various scene editing demonstrations, showing the powers of steerable urban scene generation. Website: https://urbanarchitect.github.io. |
2009.06110 | Gasper Begus | Ga\v{s}per Begu\v{s} | Identity-Based Patterns in Deep Convolutional Networks: Generative
Adversarial Phonology and Reduplication | Paper accepted at TACL | Transactions of the Association for Computational Linguistics 9
(2021): 1180-1196 | 10.1162/tacl_a_00421 | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper models unsupervised learning of an identity-based pattern (or
copying) in speech called reduplication from raw continuous data with deep
convolutional neural networks. We use the ciwGAN architecture Begu\v{s} (2021a;
arXiv:2006.02951) in which learning of meaningful representations in speech
emerges from a requirement that the CNNs generate informative data. We propose
a technique to wug-test CNNs trained on speech and, based on four generative
tests, argue that the network learns to represent an identity-based pattern in
its latent space. By manipulating only two categorical variables in the latent
space, we can actively turn an unreduplicated form into a reduplicated form
with no other substantial changes to the output in the majority of cases. We
also argue that the network extends the identity-based pattern to unobserved
data. Exploration of how meaningful representations of identity-based patterns
emerge in CNNs and how the latent space variables outside of the training range
correlate with identity-based patterns in the output has general implications
for neural network interpretability.
| [
{
"created": "Sun, 13 Sep 2020 23:12:49 GMT",
"version": "v1"
},
{
"created": "Sat, 17 Jul 2021 12:03:04 GMT",
"version": "v2"
}
] | 2021-11-23 | [
[
"Beguš",
"Gašper",
""
]
] | This paper models unsupervised learning of an identity-based pattern (or copying) in speech called reduplication from raw continuous data with deep convolutional neural networks. We use the ciwGAN architecture Begu\v{s} (2021a; arXiv:2006.02951) in which learning of meaningful representations in speech emerges from a requirement that the CNNs generate informative data. We propose a technique to wug-test CNNs trained on speech and, based on four generative tests, argue that the network learns to represent an identity-based pattern in its latent space. By manipulating only two categorical variables in the latent space, we can actively turn an unreduplicated form into a reduplicated form with no other substantial changes to the output in the majority of cases. We also argue that the network extends the identity-based pattern to unobserved data. Exploration of how meaningful representations of identity-based patterns emerge in CNNs and how the latent space variables outside of the training range correlate with identity-based patterns in the output has general implications for neural network interpretability. |
1207.2701 | Vijay Mankar | T. S. Das, V. H. Mankar and S. K. Sarkar | Spread Spectrum based Robust Image Watermark Authentication | ICACC 2007 International Conference, Madurai, India, 9-10 Feb, 2007 | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, a new approach to the Spread Spectrum (SS) watermarking
technique is introduced. This problem is particularly interesting for modern
multimedia applications, such as the internet, where copyright protection of
digital images is required. The approach exploits the suitability of
two-predecessor single attractor (TPSA) cellular automata (CA) to work as an
efficient authentication function in the wavelet-based SS watermarking domain.
The scheme is designed from
the analytical study of state transition behaviour of non-group CA and the
basic cryptography/encryption scheme is significantly different from the
conventional SS data hiding approaches. Experimental studies confirm that the
scheme is robust in terms of confidentiality, authentication, non-repudiation
and integrity. The transform domain blind watermarking technique offers better
visual & statistical imperceptibility and resiliency against different types of
intentional & unintentional image degradations. Interleaving and interference
cancellation methods are employed to improve the robustness performance
significantly compared to conventional matched filter detection.
| [
{
"created": "Wed, 11 Jul 2012 16:28:43 GMT",
"version": "v1"
}
] | 2012-07-12 | [
[
"Das",
"T. S.",
""
],
[
"Mankar",
"V. H.",
""
],
[
"Sarkar",
"S. K.",
""
]
] | In this paper, a new approach to the Spread Spectrum (SS) watermarking technique is introduced. This problem is particularly interesting for modern multimedia applications, such as the internet, where copyright protection of digital images is required. The approach exploits the suitability of two-predecessor single attractor (TPSA) cellular automata (CA) to work as an efficient authentication function in the wavelet-based SS watermarking domain. The scheme is designed from the analytical study of state transition behaviour of non-group CA and the basic cryptography/encryption scheme is significantly different from the conventional SS data hiding approaches. Experimental studies confirm that the scheme is robust in terms of confidentiality, authentication, non-repudiation and integrity. The transform domain blind watermarking technique offers better visual & statistical imperceptibility and resiliency against different types of intentional & unintentional image degradations. Interleaving and interference cancellation methods are employed to improve the robustness performance significantly compared to conventional matched filter detection. |
1709.02642 | Dmytro Terletskyi | Dmytro Terletskyi | Object-Oriented Knowledge Extraction using Universal Exploiters | null | Proceedings of the XIIth International Scientific and Technical
Conference Computer Science and Information Technologies, CSIT-2017, 5-8
September, 2017, Lviv, Ukraine, pp. 257-266 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper contains an analysis and extension of exploiter-based knowledge
extraction methods, which allow the generation of new knowledge from basic
knowledge. The main achievement of the paper is the proof of useful features of
some universal exploiters, which allow extending the set of basic classes and
the set of basic relations by a finite set of new classes of objects and
relations among them, forming a complete lattice. The proposed approach makes
it possible to compute the quantity of new classes that can be generated using
it, and the quantity of different types that each of the obtained classes
describes; to construct a defined hierarchy of classes with a determined
subsumption relation; to avoid some problems of inheritance; and to restore
basic knowledge within the database more efficiently.
| [
{
"created": "Fri, 8 Sep 2017 10:55:15 GMT",
"version": "v1"
}
] | 2017-09-11 | [
[
"Terletskyi",
"Dmytro",
""
]
] | This paper contains an analysis and extension of exploiter-based knowledge extraction methods, which allow the generation of new knowledge from basic knowledge. The main achievement of the paper is the proof of useful features of some universal exploiters, which allow extending the set of basic classes and the set of basic relations by a finite set of new classes of objects and relations among them, forming a complete lattice. The proposed approach makes it possible to compute the quantity of new classes that can be generated using it, and the quantity of different types that each of the obtained classes describes; to construct a defined hierarchy of classes with a determined subsumption relation; to avoid some problems of inheritance; and to restore basic knowledge within the database more efficiently. |
2304.10246 | Baris Kayalibay | Baris Kayalibay, Atanas Mirchev, Ahmed Agha, Patrick van der Smagt,
Justin Bayer | Filter-Aware Model-Predictive Control | null | null | null | null | cs.LG cs.RO cs.SY eess.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Partially-observable problems pose a trade-off between reducing costs and
gathering information. They can be solved optimally by planning in belief
space, but that is often prohibitively expensive. Model-predictive control
(MPC) takes the alternative approach of using a state estimator to form a
belief over the state, and then plan in state space. This ignores potential
future observations during planning and, as a result, cannot actively increase
or preserve the certainty of its own state estimate. We find a middle ground
between planning in belief space and completely ignoring its dynamics by only
reasoning about its future accuracy. Our approach, filter-aware MPC, penalises
the loss of information by what we call "trackability", the expected error of
the state estimator. We show that model-based simulation allows condensing
trackability into a neural network, which allows fast planning. In experiments
involving visual navigation, realistic everyday environments and a two-link
robot arm, we show that filter-aware MPC vastly improves regular MPC.
| [
{
"created": "Thu, 20 Apr 2023 12:06:41 GMT",
"version": "v1"
}
] | 2023-04-21 | [
[
"Kayalibay",
"Baris",
""
],
[
"Mirchev",
"Atanas",
""
],
[
"Agha",
"Ahmed",
""
],
[
"van der Smagt",
"Patrick",
""
],
[
"Bayer",
"Justin",
""
]
] | Partially-observable problems pose a trade-off between reducing costs and gathering information. They can be solved optimally by planning in belief space, but that is often prohibitively expensive. Model-predictive control (MPC) takes the alternative approach of using a state estimator to form a belief over the state, and then plan in state space. This ignores potential future observations during planning and, as a result, cannot actively increase or preserve the certainty of its own state estimate. We find a middle ground between planning in belief space and completely ignoring its dynamics by only reasoning about its future accuracy. Our approach, filter-aware MPC, penalises the loss of information by what we call "trackability", the expected error of the state estimator. We show that model-based simulation allows condensing trackability into a neural network, which allows fast planning. In experiments involving visual navigation, realistic everyday environments and a two-link robot arm, we show that filter-aware MPC vastly improves regular MPC. |
1808.09682 | Hung Dang | Hung Dang and Dat Le Tien and Ee-Chien Chang | Fair Marketplace for Secure Outsourced Computations | null | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The cloud computing paradigm offers clients ubiquitous and on demand access
to a shared pool of computing resources, enabling the clients to provision
scalable services with minimal management effort. Such a pool of resources,
however, is typically owned and controlled by a single service provider, making
it a single point of failure. This paper presents Kosto - a framework that
provisions a fair marketplace for secure outsourced computations, wherein the
pool of computing resources aggregates resources offered by a large cohort of
independent compute nodes. Kosto protects the confidentiality of clients'
inputs as well as the integrity of the outsourced computations and their
results using trusted hardware's enclave execution, in particular Intel SGX.
Furthermore, Kosto warrants fair exchanges between the clients' payments for
the execution of outsourced computations and the compute nodes' work in
servicing the clients' requests. Empirical evaluation on the prototype
implementation of Kosto shows that the performance overhead incurred by enclave
execution is as small as 3% for computation-intensive operations, and 1.5x for
IO-intensive operations.
| [
{
"created": "Wed, 29 Aug 2018 08:36:16 GMT",
"version": "v1"
}
] | 2018-08-30 | [
[
"Dang",
"Hung",
""
],
[
"Tien",
"Dat Le",
""
],
[
"Chang",
"Ee-Chien",
""
]
] | The cloud computing paradigm offers clients ubiquitous and on demand access to a shared pool of computing resources, enabling the clients to provision scalable services with minimal management effort. Such a pool of resources, however, is typically owned and controlled by a single service provider, making it a single point of failure. This paper presents Kosto - a framework that provisions a fair marketplace for secure outsourced computations, wherein the pool of computing resources aggregates resources offered by a large cohort of independent compute nodes. Kosto protects the confidentiality of clients' inputs as well as the integrity of the outsourced computations and their results using trusted hardware's enclave execution, in particular Intel SGX. Furthermore, Kosto warrants fair exchanges between the clients' payments for the execution of outsourced computations and the compute nodes' work in servicing the clients' requests. Empirical evaluation on the prototype implementation of Kosto shows that the performance overhead incurred by enclave execution is as small as 3% for computation-intensive operations, and 1.5x for IO-intensive operations. |
2305.00075 | Nicolas Garcia Trillos | Nicolas Garcia Trillos, Matt Jacobs, Jakwang Kim | On the existence of solutions to adversarial training in multiclass
classification | null | null | null | null | cs.LG math.OC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study three models of the problem of adversarial training in multiclass
classification designed to construct robust classifiers against adversarial
perturbations of data in the agnostic-classifier setting. We prove the
existence of Borel measurable robust classifiers in each model and provide a
unified perspective of the adversarial training problem, expanding the
connections with optimal transport initiated by the authors in previous work
and developing new connections between adversarial training in the multiclass
setting and total variation regularization. As a corollary of our results, we
prove the existence of Borel measurable solutions to the agnostic adversarial
training problem in the binary classification setting, a result that improves
results in the literature of adversarial training, where robust classifiers
were only known to exist within the enlarged universal $\sigma$-algebra of the
feature space.
| [
{
"created": "Fri, 28 Apr 2023 20:03:30 GMT",
"version": "v1"
},
{
"created": "Mon, 29 May 2023 06:19:45 GMT",
"version": "v2"
}
] | 2023-05-30 | [
[
"Trillos",
"Nicolas Garcia",
""
],
[
"Jacobs",
"Matt",
""
],
[
"Kim",
"Jakwang",
""
]
] | We study three models of the problem of adversarial training in multiclass classification designed to construct robust classifiers against adversarial perturbations of data in the agnostic-classifier setting. We prove the existence of Borel measurable robust classifiers in each model and provide a unified perspective of the adversarial training problem, expanding the connections with optimal transport initiated by the authors in previous work and developing new connections between adversarial training in the multiclass setting and total variation regularization. As a corollary of our results, we prove the existence of Borel measurable solutions to the agnostic adversarial training problem in the binary classification setting, a result that improves results in the literature of adversarial training, where robust classifiers were only known to exist within the enlarged universal $\sigma$-algebra of the feature space. |
1909.06940 | Zhao Kang | Zhao Kang and Guoxin Shi and Shudong Huang and Wenyu Chen and Xiaorong
Pu and Joey Tianyi Zhou and Zenglin Xu | Multi-graph Fusion for Multi-view Spectral Clustering | submitted to Knowledge-based Systems | null | null | null | cs.LG cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A panoply of multi-view clustering algorithms has been developed to deal with
prevalent multi-view data. Among them, spectral clustering-based methods have
drawn much attention and demonstrated promising results recently. Despite
progress, there are still two fundamental questions that stay unanswered to
date. First, how to fuse different views into one graph. More often than not,
the similarities between samples may be manifested differently by different
views. Many existing algorithms either simply take the average of multiple
views or just learn a common graph. These simple approaches fail to consider
the flexible local manifold structures of all views. Hence, the rich
heterogeneous information is not fully exploited. Second, how to learn the
explicit cluster structure. Most existing methods don't pay attention to the
quality of the graphs and perform graph learning and spectral clustering
separately. Those unreliable graphs might lead to suboptimal clustering
results. To fill these gaps, in this paper, we propose a novel multi-view
spectral clustering model which performs graph fusion and spectral clustering
simultaneously. The fusion graph approximates the original graph of each
individual view but maintains an explicit cluster structure. Experiments on
four widely used data sets confirm the superiority of the proposed method.
| [
{
"created": "Mon, 16 Sep 2019 02:22:02 GMT",
"version": "v1"
}
] | 2019-09-17 | [
[
"Kang",
"Zhao",
""
],
[
"Shi",
"Guoxin",
""
],
[
"Huang",
"Shudong",
""
],
[
"Chen",
"Wenyu",
""
],
[
"Pu",
"Xiaorong",
""
],
[
"Zhou",
"Joey Tianyi",
""
],
[
"Xu",
"Zenglin",
""
]
] | A panoply of multi-view clustering algorithms has been developed to deal with prevalent multi-view data. Among them, spectral clustering-based methods have drawn much attention and demonstrated promising results recently. Despite progress, there are still two fundamental questions that stay unanswered to date. First, how to fuse different views into one graph. More often than not, the similarities between samples may be manifested differently by different views. Many existing algorithms either simply take the average of multiple views or just learn a common graph. These simple approaches fail to consider the flexible local manifold structures of all views. Hence, the rich heterogeneous information is not fully exploited. Second, how to learn the explicit cluster structure. Most existing methods don't pay attention to the quality of the graphs and perform graph learning and spectral clustering separately. Those unreliable graphs might lead to suboptimal clustering results. To fill these gaps, in this paper, we propose a novel multi-view spectral clustering model which performs graph fusion and spectral clustering simultaneously. The fusion graph approximates the original graph of each individual view but maintains an explicit cluster structure. Experiments on four widely used data sets confirm the superiority of the proposed method. |
2305.09905 | Karim Eldefrawy | Aysajan Abidin, Karim Eldefrawy, Dave Singelee | Entanglement-based Mutual Quantum Distance Bounding | 23 pages | null | null | null | cs.CR | http://creativecommons.org/licenses/by/4.0/ | Mutual distance bounding (DB) protocols enable two distrusting parties to
establish an upper-bound on the distance between them. DB has been so far
mainly considered in classical settings and for classical applications,
especially in wireless settings, e.g., to prevent relay attacks in wireless
authentication and access control systems, and for secure localization. While
recent research has started exploring DB in quantum settings, all current
quantum DB (QDB) protocols employ quantum-bits (qubits) in the rapid-bit
exchange phase and only perform one-way DB. Specifically, the latest QDB
proposals improve the initial ones by adding resistance to photon number
splitting attacks, and improving round complexity by avoiding communication
from the prover to the verifier in the last authentication phase. This paper
presents two new QDB protocols that differ from previously proposed protocols
in several aspects: (1) to the best of our knowledge, our protocols are the
first to utilize entangled qubits in the rapid-bit exchange phase, previous
protocols relied on sending individual qubits, not those from a pair of
entangled ones; (2) our second protocol can perform mutual QDB between two
parties in one execution, previous QDB protocols had to be executed twice with
the prover and verifier roles reversed in each execution; (3) the use of
entangled qubits in our protocols thwarts attacks that previous QDB protocols
were prone to; (4) and finally, our protocols also eliminate the need for
communication from the prover to the verifier in the last authentication phase,
which was necessary in some previous QDB protocols. Our work paves the way for
several interesting research directions which we briefly discuss in detail in
the appendix.
| [
{
"created": "Wed, 17 May 2023 02:28:00 GMT",
"version": "v1"
}
] | 2023-05-18 | [
[
"Abidin",
"Aysajan",
""
],
[
"Eldefrawy",
"Karim",
""
],
[
"Singelee",
"Dave",
""
]
] ] | Mutual distance bounding (DB) protocols enable two distrusting parties to establish an upper-bound on the distance between them. DB has been so far mainly considered in classical settings and for classical applications, especially in wireless settings, e.g., to prevent relay attacks in wireless authentication and access control systems, and for secure localization. While recent research has started exploring DB in quantum settings, all current quantum DB (QDB) protocols employ quantum-bits (qubits) in the rapid-bit exchange phase and only perform one-way DB. Specifically, the latest QDB proposals improve the initial ones by adding resistance to photon number splitting attacks, and improving round complexity by avoiding communication from the prover to the verifier in the last authentication phase. This paper presents two new QDB protocols that differ from previously proposed protocols in several aspects: (1) to the best of our knowledge, our protocols are the first to utilize entangled qubits in the rapid-bit exchange phase, whereas previous protocols relied on sending individual qubits, not those from a pair of entangled ones; (2) our second protocol can perform mutual QDB between two parties in one execution, whereas previous QDB protocols had to be executed twice with the prover and verifier roles reversed in each execution; (3) the use of entangled qubits in our protocols thwarts attacks that previous QDB protocols were prone to; (4) and finally, our protocols also eliminate the need for communication from the prover to the verifier in the last authentication phase, which was necessary in some previous QDB protocols. Our work paves the way for several interesting research directions which we briefly discuss in the appendix. |
1703.01860 | Moritz Mueller | Yijia Chen, Michael Elberfeld, Moritz M\"uller | The parameterized space complexity of model-checking bounded variable
first-order logic | null | Logical Methods in Computer Science, Volume 15, Issue 3 (September
20, 2019) lmcs:3172 | 10.23638/LMCS-15(3:31)2019 | null | cs.LO | http://creativecommons.org/licenses/by/4.0/ | The parameterized model-checking problem for a class of first-order sentences
(queries) asks to decide whether a given sentence from the class holds true in
a given relational structure (database); the parameter is the length of the
sentence. We study the parameterized space complexity of the model-checking
problem for queries with a bounded number of variables. For each bound on the
quantifier alternation rank the problem becomes complete for the corresponding
level of what we call the tree hierarchy, a hierarchy of parameterized
complexity classes defined via space bounded alternating machines between
parameterized logarithmic space and fixed-parameter tractable time. We observe
that a parameterized logarithmic space model-checker for existential bounded
variable queries would allow to improve Savitch's classical simulation of
nondeterministic logarithmic space in deterministic space $O(\log^2n)$.
Further, we define a highly space efficient model-checker for queries with a
bounded number of variables and bounded quantifier alternation rank. We study
its optimality under the assumption that Savitch's Theorem is optimal.
| [
{
"created": "Mon, 6 Mar 2017 13:22:10 GMT",
"version": "v1"
},
{
"created": "Mon, 5 Nov 2018 14:56:08 GMT",
"version": "v2"
},
{
"created": "Tue, 4 Jun 2019 09:40:14 GMT",
"version": "v3"
},
{
"created": "Mon, 26 Aug 2019 06:56:34 GMT",
"version": "v4"
},
{
"created": "Mon, 9 Sep 2019 09:39:30 GMT",
"version": "v5"
},
{
"created": "Thu, 19 Sep 2019 09:53:17 GMT",
"version": "v6"
}
] | 2023-06-22 | [
[
"Chen",
"Yijia",
""
],
[
"Elberfeld",
"Michael",
""
],
[
"Müller",
"Moritz",
""
]
] ] | The parameterized model-checking problem for a class of first-order sentences (queries) asks to decide whether a given sentence from the class holds true in a given relational structure (database); the parameter is the length of the sentence. We study the parameterized space complexity of the model-checking problem for queries with a bounded number of variables. For each bound on the quantifier alternation rank the problem becomes complete for the corresponding level of what we call the tree hierarchy, a hierarchy of parameterized complexity classes defined via space bounded alternating machines between parameterized logarithmic space and fixed-parameter tractable time. We observe that a parameterized logarithmic space model-checker for existential bounded variable queries would make it possible to improve Savitch's classical simulation of nondeterministic logarithmic space in deterministic space $O(\log^2n)$. Further, we define a highly space efficient model-checker for queries with a bounded number of variables and bounded quantifier alternation rank. We study its optimality under the assumption that Savitch's Theorem is optimal. |
2405.13448 | Chengyu Wang | Yuanhao Yue, Chengyu Wang, Jun Huang, Peng Wang | Distilling Instruction-following Abilities of Large Language Models with
Task-aware Curriculum Planning | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The process of instruction tuning aligns pre-trained large language models
(LLMs) with open-domain instructions and human-preferred responses. While
several studies have explored autonomous approaches to distilling and
annotating instructions from more powerful proprietary LLMs, such as ChatGPT,
they often neglect the impact of task distributions and the varying difficulty
of instructions of the training sets. This oversight can lead to imbalanced
knowledge capabilities and poor generalization powers of small student LLMs. To
address this challenge, we introduce Task-Aware Curriculum Planning for
Instruction Refinement (TAPIR), a multi-round distillation framework with
balanced task distributions and dynamic difficulty adjustment. This approach
utilizes an oracle LLM to select instructions that are difficult for a student
LLM to follow and distill instructions with balanced task distributions. By
incorporating curriculum planning, our approach systematically escalates the
difficulty levels, progressively enhancing the student LLM's capabilities. We
rigorously evaluate TAPIR using two widely recognized benchmarks, including
AlpacaEval 2.0 and MT-Bench. The empirical results demonstrate that the student
LLMs, trained with our method and less training data, outperform larger
instruction-tuned models and strong distillation baselines. The improvement is
particularly notable in complex tasks, such as logical reasoning and code
generation.
| [
{
"created": "Wed, 22 May 2024 08:38:26 GMT",
"version": "v1"
}
] | 2024-05-24 | [
[
"Yue",
"Yuanhao",
""
],
[
"Wang",
"Chengyu",
""
],
[
"Huang",
"Jun",
""
],
[
"Wang",
"Peng",
""
]
] | The process of instruction tuning aligns pre-trained large language models (LLMs) with open-domain instructions and human-preferred responses. While several studies have explored autonomous approaches to distilling and annotating instructions from more powerful proprietary LLMs, such as ChatGPT, they often neglect the impact of task distributions and the varying difficulty of instructions of the training sets. This oversight can lead to imbalanced knowledge capabilities and poor generalization powers of small student LLMs. To address this challenge, we introduce Task-Aware Curriculum Planning for Instruction Refinement (TAPIR), a multi-round distillation framework with balanced task distributions and dynamic difficulty adjustment. This approach utilizes an oracle LLM to select instructions that are difficult for a student LLM to follow and distill instructions with balanced task distributions. By incorporating curriculum planning, our approach systematically escalates the difficulty levels, progressively enhancing the student LLM's capabilities. We rigorously evaluate TAPIR using two widely recognized benchmarks, including AlpacaEval 2.0 and MT-Bench. The empirical results demonstrate that the student LLMs, trained with our method and less training data, outperform larger instruction-tuned models and strong distillation baselines. The improvement is particularly notable in complex tasks, such as logical reasoning and code generation. |
1401.2949 | Larry Bull | Larry Bull | Exploiting generalisation symmetries in accuracy-based learning
classifier systems: An initial study | 6 pages, 13 figures | null | null | null | cs.NE cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modern learning classifier systems typically exploit a niched genetic
algorithm to facilitate rule discovery. When used for reinforcement learning,
such rules represent generalisations over the state-action-reward space. Whilst
encouraging maximal generality, the niching can potentially hinder the
formation of generalisations in the state space which are symmetrical, or very
similar, over different actions. This paper introduces the use of rules which
contain multiple actions, maintaining accuracy and reward metrics for each
action. It is shown that problem symmetries can be exploited, improving
performance, whilst not degrading performance when symmetries are reduced.
| [
{
"created": "Fri, 10 Jan 2014 12:46:56 GMT",
"version": "v1"
}
] | 2014-01-14 | [
[
"Bull",
"Larry",
""
]
] | Modern learning classifier systems typically exploit a niched genetic algorithm to facilitate rule discovery. When used for reinforcement learning, such rules represent generalisations over the state-action-reward space. Whilst encouraging maximal generality, the niching can potentially hinder the formation of generalisations in the state space which are symmetrical, or very similar, over different actions. This paper introduces the use of rules which contain multiple actions, maintaining accuracy and reward metrics for each action. It is shown that problem symmetries can be exploited, improving performance, whilst not degrading performance when symmetries are reduced. |
2308.11911 | Hyekang Park | Hyekang Park, Jongyoun Noh, Youngmin Oh, Donghyeon Baek, Bumsub Ham | ACLS: Adaptive and Conditional Label Smoothing for Network Calibration | Accepted to ICCV 2023 (Oral presentation) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address the problem of network calibration adjusting miscalibrated
confidences of deep neural networks. Many approaches to network calibration
adopt a regularization-based method that exploits a regularization term to
smooth the miscalibrated confidences. Although these approaches have shown the
effectiveness on calibrating the networks, there is still a lack of
understanding on the underlying principles of regularization in terms of
network calibration. We present in this paper an in-depth analysis of existing
regularization-based methods, providing a better understanding on how they
affect to network calibration. Specifically, we have observed that 1) the
regularization-based methods can be interpreted as variants of label smoothing,
and 2) they do not always behave desirably. Based on the analysis, we introduce
a novel loss function, dubbed ACLS, that unifies the merits of existing
regularization methods, while avoiding the limitations. We show extensive
experimental results for image classification and semantic segmentation on
standard benchmarks, including CIFAR10, Tiny-ImageNet, ImageNet, and PASCAL
VOC, demonstrating the effectiveness of our loss function.
| [
{
"created": "Wed, 23 Aug 2023 04:52:48 GMT",
"version": "v1"
},
{
"created": "Thu, 24 Aug 2023 06:35:22 GMT",
"version": "v2"
}
] | 2023-08-25 | [
[
"Park",
"Hyekang",
""
],
[
"Noh",
"Jongyoun",
""
],
[
"Oh",
"Youngmin",
""
],
[
"Baek",
"Donghyeon",
""
],
[
"Ham",
"Bumsub",
""
]
] ] | We address the problem of network calibration, adjusting the miscalibrated confidences of deep neural networks. Many approaches to network calibration adopt a regularization-based method that exploits a regularization term to smooth the miscalibrated confidences. Although these approaches have shown their effectiveness in calibrating networks, there is still a lack of understanding of the underlying principles of regularization in terms of network calibration. We present in this paper an in-depth analysis of existing regularization-based methods, providing a better understanding of how they affect network calibration. Specifically, we have observed that 1) the regularization-based methods can be interpreted as variants of label smoothing, and 2) they do not always behave desirably. Based on the analysis, we introduce a novel loss function, dubbed ACLS, that unifies the merits of existing regularization methods, while avoiding their limitations. We show extensive experimental results for image classification and semantic segmentation on standard benchmarks, including CIFAR10, Tiny-ImageNet, ImageNet, and PASCAL VOC, demonstrating the effectiveness of our loss function. |
1703.01148 | Bikash Chandra | Bikash Chandra, S. Sudarshan | Runtime Optimization of Join Location in Parallel Data Management
Systems | 17 pages | null | null | null | cs.DB cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Applications running on parallel systems often need to join a streaming
relation or a stored relation with data indexed in a parallel data storage
system. Some applications also compute UDFs on the joined tuples. The join can
be done at the data storage nodes, corresponding to reduce side joins, or by
fetching data from the storage system to compute nodes, corresponding to map
side join. Both may be suboptimal: reduce side joins may cause skew, while map
side joins may lead to a lot of data being transferred and replicated.
In this paper, we present techniques to make runtime decisions between the
two options on a per key basis, in order to improve the throughput of the join,
accounting for UDF computation if any. Our techniques are based on an extended
ski-rental algorithm and provide worst-case performance guarantees with respect
to the optimal point in the space considered by us. Our techniques use load
balancing taking into account the CPU, network and I/O costs as well as the
load on compute and storage nodes. We have implemented our techniques on
Hadoop, Spark and the Muppet stream processing engine. Our experiments show
that our optimization techniques provide a significant improvement in
throughput over existing techniques.
| [
{
"created": "Fri, 3 Mar 2017 13:21:25 GMT",
"version": "v1"
},
{
"created": "Mon, 12 Jun 2017 06:01:10 GMT",
"version": "v2"
},
{
"created": "Mon, 31 Jul 2017 09:26:37 GMT",
"version": "v3"
}
] | 2017-08-01 | [
[
"Chandra",
"Bikash",
""
],
[
"Sudarshan",
"S.",
""
]
] | Applications running on parallel systems often need to join a streaming relation or a stored relation with data indexed in a parallel data storage system. Some applications also compute UDFs on the joined tuples. The join can be done at the data storage nodes, corresponding to reduce side joins, or by fetching data from the storage system to compute nodes, corresponding to map side join. Both may be suboptimal: reduce side joins may cause skew, while map side joins may lead to a lot of data being transferred and replicated. In this paper, we present techniques to make runtime decisions between the two options on a per key basis, in order to improve the throughput of the join, accounting for UDF computation if any. Our techniques are based on an extended ski-rental algorithm and provide worst-case performance guarantees with respect to the optimal point in the space considered by us. Our techniques use load balancing taking into account the CPU, network and I/O costs as well as the load on compute and storage nodes. We have implemented our techniques on Hadoop, Spark and the Muppet stream processing engine. Our experiments show that our optimization techniques provide a significant improvement in throughput over existing techniques. |
1502.02551 | Suyog Gupta | Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, Pritish Narayanan | Deep Learning with Limited Numerical Precision | 10 pages, 6 figures, 1 table | null | null | null | cs.LG cs.NE stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Training of large-scale deep neural networks is often constrained by the
available computational resources. We study the effect of limited precision
data representation and computation on neural network training. Within the
context of low-precision fixed-point computations, we observe the rounding
scheme to play a crucial role in determining the network's behavior during
training. Our results show that deep networks can be trained using only 16-bit
wide fixed-point number representation when using stochastic rounding, and
incur little to no degradation in the classification accuracy. We also
demonstrate an energy-efficient hardware accelerator that implements
low-precision fixed-point arithmetic with stochastic rounding.
| [
{
"created": "Mon, 9 Feb 2015 16:37:29 GMT",
"version": "v1"
}
] | 2015-02-11 | [
[
"Gupta",
"Suyog",
""
],
[
"Agrawal",
"Ankur",
""
],
[
"Gopalakrishnan",
"Kailash",
""
],
[
"Narayanan",
"Pritish",
""
]
] | Training of large-scale deep neural networks is often constrained by the available computational resources. We study the effect of limited precision data representation and computation on neural network training. Within the context of low-precision fixed-point computations, we observe the rounding scheme to play a crucial role in determining the network's behavior during training. Our results show that deep networks can be trained using only 16-bit wide fixed-point number representation when using stochastic rounding, and incur little to no degradation in the classification accuracy. We also demonstrate an energy-efficient hardware accelerator that implements low-precision fixed-point arithmetic with stochastic rounding. |
1712.00656 | Alper Kose | Noyan Evirgen, Alper Kose and Hakan Gokcesu | An Asymptotically Optimal Algorithm for Communicating Multiplayer
Multi-Armed Bandit Problems | This work is an extension of the paper [arXiv:1711.01628] which has
been accepted to the 2017 IEEE ICMLA and submitted to Elsevier Signal
Processing | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider a decentralized stochastic multi-armed bandit problem with
multiple players. Each player aims to maximize his/her own reward by pulling an
arm. The arms give rewards based on i.i.d. stochastic Bernoulli distributions.
Players are not aware about the probability distributions of the arms. At the
end of each turn, the players inform their neighbors about the arm he/she
pulled and the reward he/she got. Neighbors of players are determined according
to an Erd{\H{o}}s-R{\'e}nyi graph with connectivity $\alpha$. This graph is
reproduced in the beginning of every turn with the same connectivity. When more
than one player choose the same arm in a turn, we assume that only one of the
players who is randomly chosen gets the reward where the others get nothing. We
first start by assuming players are not aware of the collision model and offer
an asymptotically optimal algorithm for $\alpha = 1$ case. Then, we extend our
prior work and offer an asymptotically optimal algorithm for any connectivity
but zero, assuming players aware of the collision model. We also study the
effect of $\alpha$, the degree of communication between players, empirically on
the cumulative regret by comparing them with traditional multi-armed bandit
algorithms.
| [
{
"created": "Sat, 2 Dec 2017 18:58:04 GMT",
"version": "v1"
}
] | 2017-12-05 | [
[
"Evirgen",
"Noyan",
""
],
[
"Kose",
"Alper",
""
],
[
"Gokcesu",
"Hakan",
""
]
] ] | We consider a decentralized stochastic multi-armed bandit problem with multiple players. Each player aims to maximize his/her own reward by pulling an arm. The arms give rewards based on i.i.d. stochastic Bernoulli distributions. Players are not aware of the probability distributions of the arms. At the end of each turn, each player informs his/her neighbors about the arm he/she pulled and the reward he/she got. Neighbors of players are determined according to an Erd{\H{o}}s-R{\'e}nyi graph with connectivity $\alpha$. This graph is reproduced at the beginning of every turn with the same connectivity. When more than one player chooses the same arm in a turn, we assume that only one of the players, who is randomly chosen, gets the reward while the others get nothing. We first start by assuming players are not aware of the collision model and offer an asymptotically optimal algorithm for the $\alpha = 1$ case. Then, we extend our prior work and offer an asymptotically optimal algorithm for any connectivity but zero, assuming players are aware of the collision model. We also study the effect of $\alpha$, the degree of communication between players, empirically on the cumulative regret by comparing them with traditional multi-armed bandit algorithms. |
1602.05837 | Arturs Backurs | Arturs Backurs and Nishanth Dikkala and Christos Tzamos | Tight Hardness Results for Maximum Weight Rectangles | null | null | null | null | cs.DS cs.CC cs.CG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given $n$ weighted points (positive or negative) in $d$ dimensions, what is
the axis-aligned box which maximizes the total weight of the points it
contains?
The best known algorithm for this problem is based on a reduction to a
related problem, the Weighted Depth problem [T. M. Chan, FOCS'13], and runs in
time $O(n^d)$. It was conjectured [Barbay et al., CCCG'13] that this runtime is
tight up to subpolynomial factors. We answer this conjecture affirmatively by
providing a matching conditional lower bound. We also provide conditional lower
bounds for the special case when points are arranged in a grid (a well studied
problem known as Maximum Subarray problem) as well as for other related
problems.
All our lower bounds are based on assumptions that the best known algorithms
for the All-Pairs Shortest Paths problem (APSP) and for the Max-Weight k-Clique
problem in edge-weighted graphs are essentially optimal.
| [
{
"created": "Thu, 18 Feb 2016 15:24:22 GMT",
"version": "v1"
},
{
"created": "Thu, 3 Mar 2016 00:12:38 GMT",
"version": "v2"
}
] | 2016-03-04 | [
[
"Backurs",
"Arturs",
""
],
[
"Dikkala",
"Nishanth",
""
],
[
"Tzamos",
"Christos",
""
]
] | Given $n$ weighted points (positive or negative) in $d$ dimensions, what is the axis-aligned box which maximizes the total weight of the points it contains? The best known algorithm for this problem is based on a reduction to a related problem, the Weighted Depth problem [T. M. Chan, FOCS'13], and runs in time $O(n^d)$. It was conjectured [Barbay et al., CCCG'13] that this runtime is tight up to subpolynomial factors. We answer this conjecture affirmatively by providing a matching conditional lower bound. We also provide conditional lower bounds for the special case when points are arranged in a grid (a well studied problem known as Maximum Subarray problem) as well as for other related problems. All our lower bounds are based on assumptions that the best known algorithms for the All-Pairs Shortest Paths problem (APSP) and for the Max-Weight k-Clique problem in edge-weighted graphs are essentially optimal. |
2106.05498 | Angela Zhou | Michelle Bao, Angela Zhou, Samantha Zottola, Brian Brubach, Sarah
Desmarais, Aaron Horowitz, Kristian Lum, Suresh Venkatasubramanian | It's COMPASlicated: The Messy Relationship between RAI Datasets and
Algorithmic Fairness Benchmarks | NeurIPS 2021 Datasets and Benchmarks | null | null | null | cs.CY | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Risk assessment instrument (RAI) datasets, particularly ProPublica's COMPAS
dataset, are commonly used in algorithmic fairness papers due to benchmarking
practices of comparing algorithms on datasets used in prior work. In many
cases, this data is used as a benchmark to demonstrate good performance without
accounting for the complexities of criminal justice (CJ) processes. However, we
show that pretrial RAI datasets can contain numerous measurement biases and
errors, and due to disparities in discretion and deployment, algorithmic
fairness applied to RAI datasets is limited in making claims about real-world
outcomes. These reasons make the datasets a poor fit for benchmarking under
assumptions of ground truth and real-world impact. Furthermore, conventional
practices of simply replicating previous data experiments may implicitly
inherit or edify normative positions without explicitly interrogating
value-laden assumptions. Without context of how interdisciplinary fields have
engaged in CJ research and context of how RAIs operate upstream and downstream,
algorithmic fairness practices are misaligned for meaningful contribution in
the context of CJ, and would benefit from transparent engagement with normative
considerations and values related to fairness, justice, and equality. These
factors prompt questions about whether benchmarks for intrinsically
socio-technical systems like the CJ system can exist in a beneficial and
ethical way.
| [
{
"created": "Thu, 10 Jun 2021 04:59:06 GMT",
"version": "v1"
},
{
"created": "Tue, 14 Dec 2021 21:22:06 GMT",
"version": "v2"
},
{
"created": "Thu, 28 Apr 2022 18:04:20 GMT",
"version": "v3"
}
] | 2022-05-02 | [
[
"Bao",
"Michelle",
""
],
[
"Zhou",
"Angela",
""
],
[
"Zottola",
"Samantha",
""
],
[
"Brubach",
"Brian",
""
],
[
"Desmarais",
"Sarah",
""
],
[
"Horowitz",
"Aaron",
""
],
[
"Lum",
"Kristian",
""
],
[
"Venkatasubramanian",
"Suresh",
""
]
] | Risk assessment instrument (RAI) datasets, particularly ProPublica's COMPAS dataset, are commonly used in algorithmic fairness papers due to benchmarking practices of comparing algorithms on datasets used in prior work. In many cases, this data is used as a benchmark to demonstrate good performance without accounting for the complexities of criminal justice (CJ) processes. However, we show that pretrial RAI datasets can contain numerous measurement biases and errors, and due to disparities in discretion and deployment, algorithmic fairness applied to RAI datasets is limited in making claims about real-world outcomes. These reasons make the datasets a poor fit for benchmarking under assumptions of ground truth and real-world impact. Furthermore, conventional practices of simply replicating previous data experiments may implicitly inherit or edify normative positions without explicitly interrogating value-laden assumptions. Without context of how interdisciplinary fields have engaged in CJ research and context of how RAIs operate upstream and downstream, algorithmic fairness practices are misaligned for meaningful contribution in the context of CJ, and would benefit from transparent engagement with normative considerations and values related to fairness, justice, and equality. These factors prompt questions about whether benchmarks for intrinsically socio-technical systems like the CJ system can exist in a beneficial and ethical way. |
1108.5212 | Gadiel Seroussi | Gadiel Seroussi, Wojciech Szpankowski, Marcelo J. Weinberger | Deinterleaving Finite Memory Processes via Penalized Maximum Likelihood | null | null | null | Hewlett-Packard Laboratories Technical Report HPL-2011-136 | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the problem of deinterleaving a set of finite-memory (Markov)
processes over disjoint finite alphabets, which have been randomly interleaved
by a finite-memory switch. The deinterleaver has access to a sample of the
resulting interleaved process, but no knowledge of the number or structure of
the component Markov processes, or of the switch. We study conditions for
uniqueness of the interleaved representation of a process, showing that certain
switch configurations, as well as memoryless component processes, can cause
ambiguities in the representation. We show that a deinterleaving scheme based
on minimizing a penalized maximum-likelihood cost function is strongly
consistent, in the sense of reconstructing, almost surely as the observed
sequence length tends to infinity, a set of component and switch Markov
processes compatible with the original interleaved process. Furthermore, under
certain conditions on the structure of the switch (including the special case
of a memoryless switch), we show that the scheme recovers \emph{all} possible
interleaved representations of the original process. Experimental results are
presented demonstrating that the proposed scheme performs well in practice,
even for relatively short input samples.
| [
{
"created": "Thu, 25 Aug 2011 22:25:48 GMT",
"version": "v1"
}
] | 2011-08-29 | [
[
"Seroussi",
"Gadiel",
""
],
[
"Szpankowski",
"Wojciech",
""
],
[
"Weinberger",
"Marcelo J.",
""
]
] | We study the problem of deinterleaving a set of finite-memory (Markov) processes over disjoint finite alphabets, which have been randomly interleaved by a finite-memory switch. The deinterleaver has access to a sample of the resulting interleaved process, but no knowledge of the number or structure of the component Markov processes, or of the switch. We study conditions for uniqueness of the interleaved representation of a process, showing that certain switch configurations, as well as memoryless component processes, can cause ambiguities in the representation. We show that a deinterleaving scheme based on minimizing a penalized maximum-likelihood cost function is strongly consistent, in the sense of reconstructing, almost surely as the observed sequence length tends to infinity, a set of component and switch Markov processes compatible with the original interleaved process. Furthermore, under certain conditions on the structure of the switch (including the special case of a memoryless switch), we show that the scheme recovers \emph{all} possible interleaved representations of the original process. Experimental results are presented demonstrating that the proposed scheme performs well in practice, even for relatively short input samples. |
2307.01069 | Konstantin Pakulev Stanislavovich | Konstantin Pakulev, Alexander Vakhitov, Gonzalo Ferrer | Shi-NeSS: Detecting Good and Stable Keypoints with a Neural Stability
Score | 10 pages, 4 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning a feature point detector presents a challenge both due to the
ambiguity of the definition of a keypoint and correspondingly the need for
specially prepared ground-truth labels for such points. In our work, we address
both of these issues by utilizing a combination of a hand-crafted Shi detector
and a neural network. We build on the principled and localized keypoints
provided by the Shi detector and perform their selection using the keypoint
stability score regressed by the neural network - Neural Stability Score
(NeSS). Therefore, our method is named Shi-NeSS since it combines the Shi
detector and the properties of the keypoint stability score, and for training
it only requires sets of images, without dataset pre-labeling or the need
for reconstructed correspondence labels. We evaluate Shi-NeSS on HPatches,
ScanNet, MegaDepth and IMC-PT, demonstrating state-of-the-art performance and
good generalization on downstream tasks.
| [
{
"created": "Mon, 3 Jul 2023 14:50:14 GMT",
"version": "v1"
}
] | 2023-07-04 | [
[
"Pakulev",
"Konstantin",
""
],
[
"Vakhitov",
"Alexander",
""
],
[
"Ferrer",
"Gonzalo",
""
]
] | Learning a feature point detector presents a challenge both due to the ambiguity of the definition of a keypoint and correspondingly the need for specially prepared ground-truth labels for such points. In our work, we address both of these issues by utilizing a combination of a hand-crafted Shi detector and a neural network. We build on the principled and localized keypoints provided by the Shi detector and perform their selection using the keypoint stability score regressed by the neural network - Neural Stability Score (NeSS). Therefore, our method is named Shi-NeSS since it combines the Shi detector and the properties of the keypoint stability score, and for training it only requires sets of images, without dataset pre-labeling or the need for reconstructed correspondence labels. We evaluate Shi-NeSS on HPatches, ScanNet, MegaDepth and IMC-PT, demonstrating state-of-the-art performance and good generalization on downstream tasks. |
2404.13400 | Linhui Xiao | Linhui Xiao, Xiaoshan Yang, Fang Peng, Yaowei Wang, Changsheng Xu | HiVG: Hierarchical Multimodal Fine-grained Modulation for Visual
Grounding | The project page: https://github.com/linhuixiao/HiVG | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual grounding, which aims to ground a visual region via natural language,
is a task that heavily relies on cross-modal alignment. Existing works utilized
uni-modal pre-trained models to transfer visual/linguistic knowledge separately
while ignoring the multimodal corresponding information. Motivated by recent
advancements in contrastive language-image pre-training and low-rank adaptation
(LoRA) methods, we aim to solve the grounding task based on multimodal
pre-training. However, there exist significant task gaps between pre-training
and grounding. Therefore, to address these gaps, we propose a concise and
efficient hierarchical multimodal fine-grained modulation framework, namely
HiVG. Specifically, HiVG consists of a multi-layer adaptive cross-modal bridge
and a hierarchical multimodal low-rank adaptation (Hi LoRA) paradigm. The
cross-modal bridge can address the inconsistency between visual features and
those required for grounding, and establish a connection between multi-level
visual and text features. Hi LoRA prevents the accumulation of perceptual
errors by adapting the cross-modal features from shallow to deep layers in a
hierarchical manner. Experimental results on five datasets demonstrate the
effectiveness of our approach and showcase the significant grounding
capabilities as well as promising energy efficiency advantages. The project
page: https://github.com/linhuixiao/HiVG.
| [
{
"created": "Sat, 20 Apr 2024 14:57:31 GMT",
"version": "v1"
}
] | 2024-04-23 | [
[
"Xiao",
"Linhui",
""
],
[
"Yang",
"Xiaoshan",
""
],
[
"Peng",
"Fang",
""
],
[
"Wang",
"Yaowei",
""
],
[
"Xu",
"Changsheng",
""
]
] | Visual grounding, which aims to ground a visual region via natural language, is a task that heavily relies on cross-modal alignment. Existing works utilized uni-modal pre-trained models to transfer visual/linguistic knowledge separately while ignoring the multimodal corresponding information. Motivated by recent advancements in contrastive language-image pre-training and low-rank adaptation (LoRA) methods, we aim to solve the grounding task based on multimodal pre-training. However, there exist significant task gaps between pre-training and grounding. Therefore, to address these gaps, we propose a concise and efficient hierarchical multimodal fine-grained modulation framework, namely HiVG. Specifically, HiVG consists of a multi-layer adaptive cross-modal bridge and a hierarchical multimodal low-rank adaptation (Hi LoRA) paradigm. The cross-modal bridge can address the inconsistency between visual features and those required for grounding, and establish a connection between multi-level visual and text features. Hi LoRA prevents the accumulation of perceptual errors by adapting the cross-modal features from shallow to deep layers in a hierarchical manner. Experimental results on five datasets demonstrate the effectiveness of our approach and showcase the significant grounding capabilities as well as promising energy efficiency advantages. The project page: https://github.com/linhuixiao/HiVG. |
1911.05649 | Xin Zhang | Songbin Xu, Yang Xue, Xin Zhang, Lianwen Jin | Air-Writing Translater: A Novel Unsupervised Domain Adaptation Method
for Inertia-Trajectory Translation of In-air Handwriting | null | null | null | null | cs.CV cs.AI eess.SP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As a new way of human-computer interaction, inertial sensor based in-air
handwriting can provide a natural and unconstrained interaction to express more
complex and richer information in 3D space. However, most of the existing
in-air handwriting work is mainly focused on handwritten character recognition,
which makes these works suffer from the poor readability of inertial signals
and a lack of labeled samples. To address these two problems, we use an
unsupervised domain adaptation method to reconstruct the trajectory of the
inertial signal and generate
inertial samples using online handwritten trajectories. In this paper, we
propose an AirWriting Translater model to learn the bi-directional translation
between trajectory domain and inertial domain in the absence of paired inertial
and trajectory samples. Through semantic-level adversarial training and latent
classification loss, the proposed model learns to extract domain-invariant
content between inertial signal and trajectory, while preserving semantic
consistency during the translation across the two domains. We carefully design
the architecture, so that the proposed framework can accept inputs of arbitrary
length and translate between different sampling rates. We also conduct
experiments on two public datasets: 6DMG (in-air handwriting dataset) and CT
(handwritten trajectory dataset); the results on the two datasets demonstrate
that the proposed network succeeds in both Inertia-to-Trajectory and
Trajectory-to-Inertia translation tasks.
| [
{
"created": "Fri, 1 Nov 2019 14:09:44 GMT",
"version": "v1"
}
] | 2019-11-14 | [
[
"Xu",
"Songbin",
""
],
[
"Xue",
"Yang",
""
],
[
"Zhang",
"Xin",
""
],
[
"Jin",
"Lianwen",
""
]
] | As a new way of human-computer interaction, inertial sensor based in-air handwriting can provide a natural and unconstrained interaction to express more complex and richer information in 3D space. However, most of the existing in-air handwriting work is mainly focused on handwritten character recognition, which makes these works suffer from the poor readability of inertial signals and a lack of labeled samples. To address these two problems, we use an unsupervised domain adaptation method to reconstruct the trajectory of the inertial signal and generate inertial samples using online handwritten trajectories. In this paper, we propose an AirWriting Translater model to learn the bi-directional translation between trajectory domain and inertial domain in the absence of paired inertial and trajectory samples. Through semantic-level adversarial training and latent classification loss, the proposed model learns to extract domain-invariant content between inertial signal and trajectory, while preserving semantic consistency during the translation across the two domains. We carefully design the architecture, so that the proposed framework can accept inputs of arbitrary length and translate between different sampling rates. We also conduct experiments on two public datasets: 6DMG (in-air handwriting dataset) and CT (handwritten trajectory dataset); the results on the two datasets demonstrate that the proposed network succeeds in both Inertia-to-Trajectory and Trajectory-to-Inertia translation tasks. |
2307.07854 | Fuxiang Chen | Iman Saberi, Fatemeh Fard and Fuxiang Chen | AdvFusion: Multilingual Adapter-based Knowledge Transfer for Code
Summarization | under submission | null | null | null | cs.SE | http://creativecommons.org/licenses/by/4.0/ | Parameter Efficient Fine-Tuning (PEFT) is an alternative to fully
fine-tuning a language model. Though PEFT methods are widely used in the
natural language domain, there are limited studies on using PEFT for language
models that are pre-trained on code and comment datasets (i.e., code-LMs).
Previous research has also shown that code summarization, a task that aims to
automatically generate a natural-language description of a given code snippet
and is known to benefit program comprehension, benefits from a multilingual
fine-tuning approach. In multilingual fine-tuning, the code-LM is fine-tuned on
a dataset consisting of different programming languages.
AdapterFusion is a specific PEFT approach that aims to extract and compose
the latent knowledge from multiple (language) adapters for a downstream task.
However, our experiments reveal that AdapterFusion still learns from the
same language, not taking advantage of other programming languages. Therefore,
we change the architecture and propose AdvFusion, a PEFT approach that forces
the model to first learn from other programming languages and then pay
attention to the language of the target task. Thus, AdvFusion emphasizes
knowledge transfer among different programming languages, as in multilingual
fine-tuning.
Our results on the CodeSearchNet dataset using two code-LMs show that
Adapters, AdapterFusion, and our proposed AdvFusion can achieve results on-par
with or higher than the full fine-tuning models for code summarization and
method name prediction. Notably, the number of trainable parameters is 123x
method name prediction. Notably, the number of trainable parameters are 123x
less and the training time is reduced by ~30%. AdvFusion exhibits a notable
enhancement compared to AdapterFusion, showcasing a 0.9 to 1.7-point increase
in BLEU-4 scores specifically for Ruby, JavaScript, and Go.
| [
{
"created": "Sat, 15 Jul 2023 17:17:16 GMT",
"version": "v1"
},
{
"created": "Fri, 2 Feb 2024 10:47:32 GMT",
"version": "v2"
}
] | 2024-02-05 | [
[
"Saberi",
"Iman",
""
],
[
"Fard",
"Fatemeh",
""
],
[
"Chen",
"Fuxiang",
""
]
] | Parameter Efficient Fine-Tuning (PEFT) is an alternative to fully fine-tuning a language model. Though PEFT methods are widely used in the natural language domain, there are limited studies on using PEFT for language models that are pre-trained on code and comment datasets (i.e., code-LMs). Previous research has also shown that code summarization, a task that aims to automatically generate a natural-language description of a given code snippet and is known to benefit program comprehension, benefits from a multilingual fine-tuning approach. In multilingual fine-tuning, the code-LM is fine-tuned on a dataset consisting of different programming languages. AdapterFusion is a specific PEFT approach that aims to extract and compose the latent knowledge from multiple (language) adapters for a downstream task. However, our experiments reveal that AdapterFusion still learns from the same language, not taking advantage of other programming languages. Therefore, we change the architecture and propose AdvFusion, a PEFT approach that forces the model to first learn from other programming languages and then pay attention to the language of the target task. Thus, AdvFusion emphasizes knowledge transfer among different programming languages, as in multilingual fine-tuning. Our results on the CodeSearchNet dataset using two code-LMs show that Adapters, AdapterFusion, and our proposed AdvFusion can achieve results on-par with or higher than the full fine-tuning models for code summarization and method name prediction. Notably, the number of trainable parameters is 123x smaller and the training time is reduced by ~30%. AdvFusion exhibits a notable enhancement compared to AdapterFusion, showcasing a 0.9 to 1.7-point increase in BLEU-4 scores specifically for Ruby, JavaScript, and Go. |
1407.0080 | Graeme Wilson N | Graeme N. Wilson, Alejandro Ramirez-Serrano, Mahmoud Mustafa, and
Krispin A. Davies | Velocity Selection for High-Speed UGVs in Rough Unknown Terrains using
Force Prediction | 10 pages, 6 figures, Proceedings of 5th International Conference on
Intelligent Robotics and Applications, Concordia University, October 3-5,
2012, Montreal, Canada | 5th International Conference, ICIRA 2012, Montreal, Canada,
October 3-5, 2012, Proceedings, Part II. 7507: 387-396 | 10.1007/978-3-642-33515-0_39 | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Enabling high speed navigation of Unmanned Ground Vehicles (UGVs) in unknown
rough terrain where limited or no information is available in advance requires
the assessment of terrain in front of the UGV. Attempts have been made to
predict the forces the terrain exerts on the UGV for the purpose of determining
the maximum allowable velocity for a given terrain. However, current methods
produce overly aggressive velocity profiles which could damage the UGV. This
paper presents three novel safer methods of force prediction that produce
effective velocity profiles. Two models, Instantaneous Elevation Change Model
(IECM) and Sinusoidal Base Excitation Model: using Excitation Force (SBEM:EF),
predict the forces exerted by the terrain on the vehicle at the ground contact
point, while another method, Sinusoidal Base Excitation Model: using
Transmitted Force (SBEM:TF), predicts the forces transmitted to the vehicle
frame by the suspension.
| [
{
"created": "Mon, 30 Jun 2014 23:43:59 GMT",
"version": "v1"
}
] | 2014-07-02 | [
[
"Wilson",
"Graeme N.",
""
],
[
"Ramirez-Serrano",
"Alejandro",
""
],
[
"Mustafa",
"Mahmoud",
""
],
[
"Davies",
"Krispin A.",
""
]
] | Enabling high speed navigation of Unmanned Ground Vehicles (UGVs) in unknown rough terrain where limited or no information is available in advance requires the assessment of terrain in front of the UGV. Attempts have been made to predict the forces the terrain exerts on the UGV for the purpose of determining the maximum allowable velocity for a given terrain. However, current methods produce overly aggressive velocity profiles which could damage the UGV. This paper presents three novel safer methods of force prediction that produce effective velocity profiles. Two models, Instantaneous Elevation Change Model (IECM) and Sinusoidal Base Excitation Model: using Excitation Force (SBEM:EF), predict the forces exerted by the terrain on the vehicle at the ground contact point, while another method, Sinusoidal Base Excitation Model: using Transmitted Force (SBEM:TF), predicts the forces transmitted to the vehicle frame by the suspension. |
1003.2441 | Kien Nguyen | Kien C. Nguyen, Dilip V. Sarwate | Up-sampling and Natural Sample Value Computation for Digital Pulse Width
Modulators | null | null | null | null | cs.SD cs.NA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Digital pulse width modulation has been considered for high-fidelity and
high-efficiency audio amplifiers for several years. It has been shown that the
distortion can be reduced and the implementation of the system can be
simplified if the switching frequency is much higher than the Nyquist rate of
the modulating waveform. Hence, the input digital source is normally upsampled
to a higher frequency. It was also proved that converting uniform samples to
natural samples will decrease the harmonic distortion. Thus, in this paper, we
examine a new approach that combines upsampling, digital interpolation and
natural sampling conversion. This approach uses poly-phase implementation of
the digital interpolation filter and digital differentiators. We will show that
the structure consists of an FIR-type linear stage and a nonlinear stage. Some
spectral simulation results of a pulse width modulation system based on this
approach will also be presented. Finally, we will discuss the improvement of
the new approach over old algorithms.
| [
{
"created": "Thu, 11 Mar 2010 23:00:15 GMT",
"version": "v1"
}
] | 2010-03-15 | [
[
"Nguyen",
"Kien C.",
""
],
[
"Sarwate",
"Dilip V.",
""
]
] | Digital pulse width modulation has been considered for high-fidelity and high-efficiency audio amplifiers for several years. It has been shown that the distortion can be reduced and the implementation of the system can be simplified if the switching frequency is much higher than the Nyquist rate of the modulating waveform. Hence, the input digital source is normally upsampled to a higher frequency. It was also proved that converting uniform samples to natural samples will decrease the harmonic distortion. Thus, in this paper, we examine a new approach that combines upsampling, digital interpolation and natural sampling conversion. This approach uses poly-phase implementation of the digital interpolation filter and digital differentiators. We will show that the structure consists of an FIR-type linear stage and a nonlinear stage. Some spectral simulation results of a pulse width modulation system based on this approach will also be presented. Finally, we will discuss the improvement of the new approach over old algorithms. |
1501.03124 | Amartansh Dubey | Amartansh Dubey and K. M. Bhurchandi | Robust and Real Time Detection of Curvy Lanes (Curves) with Desired
Slopes for Driving Assistance and Autonomous Vehicles | 13 pages, 12 figures, published in International Conference on Signal
and Image Processing (AIRCC Publishing Corporation) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Among the biggest causes of road accidents are curvy lanes and blind turns.
One of the biggest hurdles for new autonomous vehicles is detecting curvy
lanes, multiple lanes, and lanes with a lot of discontinuity and noise. This
paper presents a very efficient and advanced algorithm for detecting curves
having desired slopes (especially for detecting curvy lanes in real time) and
for detecting curves (lanes) with a lot of noise, discontinuity and
disturbances. The overall aim is to develop a robust method for this task that
is applicable even in adverse conditions. Even in some of the most famous and
useful libraries, like OpenCV and Matlab, there is no function available for
detecting curves having desired slopes, shapes, or discontinuities. Only a few
predefined shapes, like circles and ellipses, can be detected using the
presently available functions. The proposed algorithm can not only detect
curves with discontinuity, noise, and a desired slope but can also perform
shadow and illumination correction and detect/differentiate between different
curves.
| [
{
"created": "Tue, 13 Jan 2015 19:35:18 GMT",
"version": "v1"
}
] | 2015-01-14 | [
[
"Dubey",
"Amartansh",
""
],
[
"Bhurchandi",
"K. M.",
""
]
] | Among the biggest causes of road accidents are curvy lanes and blind turns. One of the biggest hurdles for new autonomous vehicles is detecting curvy lanes, multiple lanes, and lanes with a lot of discontinuity and noise. This paper presents a very efficient and advanced algorithm for detecting curves having desired slopes (especially for detecting curvy lanes in real time) and for detecting curves (lanes) with a lot of noise, discontinuity and disturbances. The overall aim is to develop a robust method for this task that is applicable even in adverse conditions. Even in some of the most famous and useful libraries, like OpenCV and Matlab, there is no function available for detecting curves having desired slopes, shapes, or discontinuities. Only a few predefined shapes, like circles and ellipses, can be detected using the presently available functions. The proposed algorithm can not only detect curves with discontinuity, noise, and a desired slope but can also perform shadow and illumination correction and detect/differentiate between different curves. |
2307.08763 | Kumar Ashutosh | Kumar Ashutosh, Santhosh Kumar Ramakrishnan, Triantafyllos Afouras,
Kristen Grauman | Video-Mined Task Graphs for Keystep Recognition in Instructional Videos | NeurIPS 2023 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Procedural activity understanding requires perceiving human actions in terms
of a broader task, where multiple keysteps are performed in sequence across a
long video to reach a final goal state -- such as the steps of a recipe or a
DIY fix-it task. Prior work largely treats keystep recognition in isolation of
this broader structure, or else rigidly confines keysteps to align with a
predefined sequential script. We propose discovering a task graph automatically
from how-to videos to represent probabilistically how people tend to execute
keysteps, and then leverage this graph to regularize keystep recognition in
novel videos. On multiple datasets of real-world instructional videos, we show
the impact: more reliable zero-shot keystep localization and improved video
representation learning, exceeding the state of the art.
| [
{
"created": "Mon, 17 Jul 2023 18:19:36 GMT",
"version": "v1"
},
{
"created": "Sun, 29 Oct 2023 04:16:11 GMT",
"version": "v2"
}
] | 2023-10-31 | [
[
"Ashutosh",
"Kumar",
""
],
[
"Ramakrishnan",
"Santhosh Kumar",
""
],
[
"Afouras",
"Triantafyllos",
""
],
[
"Grauman",
"Kristen",
""
]
] | Procedural activity understanding requires perceiving human actions in terms of a broader task, where multiple keysteps are performed in sequence across a long video to reach a final goal state -- such as the steps of a recipe or a DIY fix-it task. Prior work largely treats keystep recognition in isolation of this broader structure, or else rigidly confines keysteps to align with a predefined sequential script. We propose discovering a task graph automatically from how-to videos to represent probabilistically how people tend to execute keysteps, and then leverage this graph to regularize keystep recognition in novel videos. On multiple datasets of real-world instructional videos, we show the impact: more reliable zero-shot keystep localization and improved video representation learning, exceeding the state of the art. |
2204.07886 | Yonis Gulzar | Saira Soomro, Arjumand Bano Soomro, Tarique Bhatti and Yonis Gulzar | Gender-Wise Perception of Students Towards Blended Learning in Higher
Education: Pakistan | 5 pages | Scientific Journal of King Faisal University (2021) 22 (2),
126-130 | 10.37575/h/edu/0019 | null | cs.HC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Blended learning (BL) is a recent trend among the many options that can best
fit learners' needs, regardless of time and place. This study aimed to
discover students' perceptions of BL and the challenges faced by them while
using technology. This quantitative study used data gathered from 300 students
enrolled in four public universities in the Sindh province of Pakistan. The
findings show that students were comfortable with the use of technology and
that it has a positive effect on their academic experience. The study also
showed that the use of technology encourages peer collaboration. The
challenges found include: neither teacher support nor a training program was
provided to students for courses that needed to shift from a traditional
face-to-face paradigm to a blended format; a lack of space and of skilled
laboratory assistants for courses in a blended format; and a shortage of
high-tech computer laboratories / computer units to run these courses.
Therefore, it is recommended that the authorities develop and incorporate a
comprehensive mechanism for the effective implementation of BL in the
teaching-learning process; heads of departments should also provide additional
computing infrastructure to their departments.
| [
{
"created": "Sat, 16 Apr 2022 23:47:16 GMT",
"version": "v1"
}
] | 2022-04-19 | [
[
"Soomro",
"Saira",
""
],
[
"Soomro",
"Arjumand Bano",
""
],
[
"Bhatti",
"Tarique",
""
],
[
"Gulzar",
"Yonis",
""
]
] | Blended learning (BL) is a recent trend among the many options that can best fit learners' needs, regardless of time and place. This study aimed to discover students' perceptions of BL and the challenges faced by them while using technology. This quantitative study used data gathered from 300 students enrolled in four public universities in the Sindh province of Pakistan. The findings show that students were comfortable with the use of technology and that it has a positive effect on their academic experience. The study also showed that the use of technology encourages peer collaboration. The challenges found include: neither teacher support nor a training program was provided to students for courses that needed to shift from a traditional face-to-face paradigm to a blended format; a lack of space and of skilled laboratory assistants for courses in a blended format; and a shortage of high-tech computer laboratories / computer units to run these courses. Therefore, it is recommended that the authorities develop and incorporate a comprehensive mechanism for the effective implementation of BL in the teaching-learning process; heads of departments should also provide additional computing infrastructure to their departments. |
2109.01982 | Brian DuSell | Brian DuSell and David Chiang | Learning Hierarchical Structures with Differentiable Nondeterministic
Stacks | 17 pages, 4 figures. Published as a spotlight paper at ICLR 2022.
This revision fixes typos and minor errors | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning hierarchical structures in sequential data -- from simple
algorithmic patterns to natural language -- in a reliable, generalizable way
remains a challenging problem for neural language models. Past work has shown
that recurrent neural networks (RNNs) struggle to generalize on held-out
algorithmic or syntactic patterns without supervision or some inductive bias.
To remedy this, many papers have explored augmenting RNNs with various
differentiable stacks, by analogy with finite automata and pushdown automata
(PDAs). In this paper, we improve the performance of our recently proposed
Nondeterministic Stack RNN (NS-RNN), which uses a differentiable data structure
that simulates a nondeterministic PDA, with two important changes. First, the
model now assigns unnormalized positive weights instead of probabilities to
stack actions, and we provide an analysis of why this improves training.
Second, the model can directly observe the state of the underlying PDA. Our
model achieves lower cross-entropy than all previous stack RNNs on five
context-free language modeling tasks (within 0.05 nats of the
information-theoretic lower bound), including a task on which the NS-RNN
previously failed to outperform a deterministic stack RNN baseline. Finally, we
propose a restricted version of the NS-RNN that incrementally processes
infinitely long sequences, and we present language modeling results on the Penn
Treebank.
| [
{
"created": "Sun, 5 Sep 2021 03:25:23 GMT",
"version": "v1"
},
{
"created": "Sun, 24 Apr 2022 01:15:47 GMT",
"version": "v2"
},
{
"created": "Tue, 29 Nov 2022 23:39:55 GMT",
"version": "v3"
}
] | 2022-12-01 | [
[
"DuSell",
"Brian",
""
],
[
"Chiang",
"David",
""
]
] | Learning hierarchical structures in sequential data -- from simple algorithmic patterns to natural language -- in a reliable, generalizable way remains a challenging problem for neural language models. Past work has shown that recurrent neural networks (RNNs) struggle to generalize on held-out algorithmic or syntactic patterns without supervision or some inductive bias. To remedy this, many papers have explored augmenting RNNs with various differentiable stacks, by analogy with finite automata and pushdown automata (PDAs). In this paper, we improve the performance of our recently proposed Nondeterministic Stack RNN (NS-RNN), which uses a differentiable data structure that simulates a nondeterministic PDA, with two important changes. First, the model now assigns unnormalized positive weights instead of probabilities to stack actions, and we provide an analysis of why this improves training. Second, the model can directly observe the state of the underlying PDA. Our model achieves lower cross-entropy than all previous stack RNNs on five context-free language modeling tasks (within 0.05 nats of the information-theoretic lower bound), including a task on which the NS-RNN previously failed to outperform a deterministic stack RNN baseline. Finally, we propose a restricted version of the NS-RNN that incrementally processes infinitely long sequences, and we present language modeling results on the Penn Treebank. |
2211.02701 | M. Jorge Cardoso | M. Jorge Cardoso, Wenqi Li, Richard Brown, Nic Ma, Eric Kerfoot,
Yiheng Wang, Benjamin Murrey, Andriy Myronenko, Can Zhao, Dong Yang, Vishwesh
Nath, Yufan He, Ziyue Xu, Ali Hatamizadeh, Andriy Myronenko, Wentao Zhu, Yun
Liu, Mingxin Zheng, Yucheng Tang, Isaac Yang, Michael Zephyr, Behrooz
Hashemian, Sachidanand Alle, Mohammad Zalbagi Darestani, Charlie Budd, Marc
Modat, Tom Vercauteren, Guotai Wang, Yiwen Li, Yipeng Hu, Yunguan Fu,
Benjamin Gorman, Hans Johnson, Brad Genereaux, Barbaros S. Erdal, Vikash
Gupta, Andres Diaz-Pinto, Andre Dourson, Lena Maier-Hein, Paul F. Jaeger,
Michael Baumgartner, Jayashree Kalpathy-Cramer, Mona Flores, Justin Kirby,
Lee A.D. Cooper, Holger R. Roth, Daguang Xu, David Bericat, Ralf Floca, S.
Kevin Zhou, Haris Shuaib, Keyvan Farahani, Klaus H. Maier-Hein, Stephen
Aylward, Prerna Dogra, Sebastien Ourselin, Andrew Feng | MONAI: An open-source framework for deep learning in healthcare | www.monai.io | null | null | null | cs.LG cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | Artificial Intelligence (AI) is having a tremendous impact across most areas
of science. Applications of AI in healthcare have the potential to improve our
ability to detect, diagnose, prognose, and intervene on human disease. For AI
models to be used clinically, they need to be made safe, reproducible and
robust, and the underlying software framework must be aware of the
particularities (e.g. geometry, physiology, physics) of medical data being
processed. This work introduces MONAI, a freely available, community-supported,
and consortium-led PyTorch-based framework for deep learning in healthcare.
MONAI extends PyTorch to support medical data, with a particular focus on
imaging, and provides purpose-specific AI model architectures, transformations
and utilities that streamline the development and deployment of medical AI
models. MONAI follows best practices for software development, providing an
easy-to-use, robust, well-documented, and well-tested software framework. MONAI
preserves the simple, additive, and compositional approach of its underlying
PyTorch libraries. MONAI is being used by and receiving contributions from
research, clinical and industrial teams from around the world, who are pursuing
applications spanning nearly every aspect of healthcare.
| [
{
"created": "Fri, 4 Nov 2022 18:35:00 GMT",
"version": "v1"
}
] | 2022-11-08 | [
[
"Cardoso",
"M. Jorge",
""
],
[
"Li",
"Wenqi",
""
],
[
"Brown",
"Richard",
""
],
[
"Ma",
"Nic",
""
],
[
"Kerfoot",
"Eric",
""
],
[
"Wang",
"Yiheng",
""
],
[
"Murrey",
"Benjamin",
""
],
[
"Myronenko",
"Andriy",
""
],
[
"Zhao",
"Can",
""
],
[
"Yang",
"Dong",
""
],
[
"Nath",
"Vishwesh",
""
],
[
"He",
"Yufan",
""
],
[
"Xu",
"Ziyue",
""
],
[
"Hatamizadeh",
"Ali",
""
],
[
"Myronenko",
"Andriy",
""
],
[
"Zhu",
"Wentao",
""
],
[
"Liu",
"Yun",
""
],
[
"Zheng",
"Mingxin",
""
],
[
"Tang",
"Yucheng",
""
],
[
"Yang",
"Isaac",
""
],
[
"Zephyr",
"Michael",
""
],
[
"Hashemian",
"Behrooz",
""
],
[
"Alle",
"Sachidanand",
""
],
[
"Darestani",
"Mohammad Zalbagi",
""
],
[
"Budd",
"Charlie",
""
],
[
"Modat",
"Marc",
""
],
[
"Vercauteren",
"Tom",
""
],
[
"Wang",
"Guotai",
""
],
[
"Li",
"Yiwen",
""
],
[
"Hu",
"Yipeng",
""
],
[
"Fu",
"Yunguan",
""
],
[
"Gorman",
"Benjamin",
""
],
[
"Johnson",
"Hans",
""
],
[
"Genereaux",
"Brad",
""
],
[
"Erdal",
"Barbaros S.",
""
],
[
"Gupta",
"Vikash",
""
],
[
"Diaz-Pinto",
"Andres",
""
],
[
"Dourson",
"Andre",
""
],
[
"Maier-Hein",
"Lena",
""
],
[
"Jaeger",
"Paul F.",
""
],
[
"Baumgartner",
"Michael",
""
],
[
"Kalpathy-Cramer",
"Jayashree",
""
],
[
"Flores",
"Mona",
""
],
[
"Kirby",
"Justin",
""
],
[
"Cooper",
"Lee A. D.",
""
],
[
"Roth",
"Holger R.",
""
],
[
"Xu",
"Daguang",
""
],
[
"Bericat",
"David",
""
],
[
"Floca",
"Ralf",
""
],
[
"Zhou",
"S. Kevin",
""
],
[
"Shuaib",
"Haris",
""
],
[
"Farahani",
"Keyvan",
""
],
[
"Maier-Hein",
"Klaus H.",
""
],
[
"Aylward",
"Stephen",
""
],
[
"Dogra",
"Prerna",
""
],
[
"Ourselin",
"Sebastien",
""
],
[
"Feng",
"Andrew",
""
]
] | Artificial Intelligence (AI) is having a tremendous impact across most areas of science. Applications of AI in healthcare have the potential to improve our ability to detect, diagnose, prognose, and intervene on human disease. For AI models to be used clinically, they need to be made safe, reproducible and robust, and the underlying software framework must be aware of the particularities (e.g. geometry, physiology, physics) of medical data being processed. This work introduces MONAI, a freely available, community-supported, and consortium-led PyTorch-based framework for deep learning in healthcare. MONAI extends PyTorch to support medical data, with a particular focus on imaging, and provides purpose-specific AI model architectures, transformations and utilities that streamline the development and deployment of medical AI models. MONAI follows best practices for software development, providing an easy-to-use, robust, well-documented, and well-tested software framework. MONAI preserves the simple, additive, and compositional approach of its underlying PyTorch libraries. MONAI is being used by and receiving contributions from research, clinical and industrial teams from around the world, who are pursuing applications spanning nearly every aspect of healthcare. |
1906.09907 | Sibylle Hess | Sibylle Hess and Katharina Morik | C-SALT: Mining Class-Specific ALTerations in Boolean Matrix
Factorization | Joint European Conference on Machine Learning and Knowledge Discovery
in Databases. Springer, Cham, 2017 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given labeled data represented by a binary matrix, we consider the task to
derive a Boolean matrix factorization which identifies commonalities and
specifications among the classes. While existing works focus on rank-one
factorizations which are either specific or common to the classes, we derive
class-specific alterations from common factorizations as well. Therewith, we
broaden the applicability of our new method to datasets whose
class-dependencies have a more complex structure. On the basis of synthetic and
real-world datasets, we show on the one hand that our method is able to filter
structure which corresponds to our model assumption, and on the other hand that
our model assumption is justified in real-world application. Our method is
parameter-free.
| [
{
"created": "Mon, 17 Jun 2019 22:38:00 GMT",
"version": "v1"
}
] | 2019-06-25 | [
[
"Hess",
"Sibylle",
""
],
[
"Morik",
"Katharina",
""
]
] | Given labeled data represented by a binary matrix, we consider the task to derive a Boolean matrix factorization which identifies commonalities and specifications among the classes. While existing works focus on rank-one factorizations which are either specific or common to the classes, we derive class-specific alterations from common factorizations as well. Therewith, we broaden the applicability of our new method to datasets whose class-dependencies have a more complex structure. On the basis of synthetic and real-world datasets, we show on the one hand that our method is able to filter structure which corresponds to our model assumption, and on the other hand that our model assumption is justified in real-world application. Our method is parameter-free. |
2203.11987 | Ryan Grainger | Ryan Grainger, Thomas Paniagua, Xi Song, Naresh Cuntoor, Mun Wai Lee,
Tianfu Wu | PaCa-ViT: Learning Patch-to-Cluster Attention in Vision Transformers | CVPR 2023 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vision Transformers (ViTs) are built on the assumption of treating image
patches as ``visual tokens" and learn patch-to-patch attention. The patch
embedding based tokenizer has a semantic gap with respect to its counterpart,
the textual tokenizer. The patch-to-patch attention suffers from the quadratic
complexity issue, and also makes it non-trivial to explain learned ViTs. To
address these issues in ViT, this paper proposes to learn Patch-to-Cluster
attention (PaCa) in ViT. Queries in our PaCa-ViT start with patches, while
keys and values are directly based on clustering (with a predefined small
number of clusters). The clusters are learned end-to-end, leading to better
tokenizers and inducing joint clustering-for-attention and
attention-for-clustering for better and interpretable models. The quadratic
complexity is relaxed to linear complexity. The proposed PaCa module is used in
designing efficient and interpretable ViT backbones and semantic segmentation
head networks. In experiments, the proposed methods are tested on ImageNet-1k
image classification, MS-COCO object detection and instance segmentation and
MIT-ADE20k semantic segmentation. Compared with the prior art, it obtains
better performance in all three benchmarks than the SWin and the PVTs by
significant margins in ImageNet-1k and MIT-ADE20k. It is also significantly
more efficient than PVT models in MS-COCO and MIT-ADE20k due to the linear
complexity. The learned clusters are semantically meaningful. Code and model
checkpoints are available at https://github.com/iVMCL/PaCaViT.
| [
{
"created": "Tue, 22 Mar 2022 18:28:02 GMT",
"version": "v1"
},
{
"created": "Fri, 7 Apr 2023 00:46:43 GMT",
"version": "v2"
}
] | 2023-04-10 | [
[
"Grainger",
"Ryan",
""
],
[
"Paniagua",
"Thomas",
""
],
[
"Song",
"Xi",
""
],
[
"Cuntoor",
"Naresh",
""
],
[
"Lee",
"Mun Wai",
""
],
[
"Wu",
"Tianfu",
""
]
] ] | Vision Transformers (ViTs) are built on the assumption of treating image patches as ``visual tokens" and learn patch-to-patch attention. The patch embedding based tokenizer has a semantic gap with respect to its counterpart, the textual tokenizer. The patch-to-patch attention suffers from the quadratic complexity issue, and also makes it non-trivial to explain learned ViTs. To address these issues in ViT, this paper proposes to learn Patch-to-Cluster attention (PaCa) in ViT. Queries in our PaCa-ViT start with patches, while keys and values are directly based on clustering (with a predefined small number of clusters). The clusters are learned end-to-end, leading to better tokenizers and inducing joint clustering-for-attention and attention-for-clustering for better and interpretable models. The quadratic complexity is relaxed to linear complexity. The proposed PaCa module is used in designing efficient and interpretable ViT backbones and semantic segmentation head networks. In experiments, the proposed methods are tested on ImageNet-1k image classification, MS-COCO object detection and instance segmentation and MIT-ADE20k semantic segmentation. Compared with the prior art, it obtains better performance in all three benchmarks than the SWin and the PVTs by significant margins in ImageNet-1k and MIT-ADE20k. It is also significantly more efficient than PVT models in MS-COCO and MIT-ADE20k due to the linear complexity. The learned clusters are semantically meaningful. Code and model checkpoints are available at https://github.com/iVMCL/PaCaViT. |
2404.13456 | Hanjiang Hu | Hanjiang Hu, Jianglin Lan, Changliu Liu | Real-Time Safe Control of Neural Network Dynamic Models with Sound
Approximation | Camera-ready version of L4DC 2024, 12 pages, 3 figures, 4 tables | null | null | null | cs.LG cs.RO cs.SY eess.SY | http://creativecommons.org/licenses/by-sa/4.0/ | Safe control of neural network dynamic models (NNDMs) is important to
robotics and many applications. However, it remains challenging to compute an
optimal safe control in real time for NNDM. To enable real-time computation, we
propose to use a sound approximation of the NNDM in the control synthesis. In
particular, we propose Bernstein over-approximated neural dynamics (BOND) based
on the Bernstein polynomial over-approximation (BPO) of ReLU activation
functions in NNDM. To mitigate the errors introduced by the approximation and
to ensure persistent feasibility of the safe control problems, we synthesize a
worst-case safety index using the most unsafe approximated state within the BPO
relaxation of NNDM offline. For the online real-time optimization, we formulate
the first-order Taylor approximation of the nonlinear worst-case safety
constraint as an additional linear layer of NNDM with the l2 bounded bias term
for the higher-order remainder. Comprehensive experiments with different neural
dynamics and safety constraints show that with safety guaranteed, our NNDMs
with sound approximation are 10-100 times faster than the safe control baseline
that uses mixed integer programming (MIP), validating the effectiveness of the
worst-case safety index and scalability of the proposed BOND in real-time
large-scale settings. The code is available at
https://github.com/intelligent-control-lab/BOND.
| [
{
"created": "Sat, 20 Apr 2024 19:51:29 GMT",
"version": "v1"
},
{
"created": "Mon, 20 May 2024 21:57:31 GMT",
"version": "v2"
}
] | 2024-05-22 | [
[
"Hu",
"Hanjiang",
""
],
[
"Lan",
"Jianglin",
""
],
[
"Liu",
"Changliu",
""
]
] | Safe control of neural network dynamic models (NNDMs) is important to robotics and many applications. However, it remains challenging to compute an optimal safe control in real time for NNDM. To enable real-time computation, we propose to use a sound approximation of the NNDM in the control synthesis. In particular, we propose Bernstein over-approximated neural dynamics (BOND) based on the Bernstein polynomial over-approximation (BPO) of ReLU activation functions in NNDM. To mitigate the errors introduced by the approximation and to ensure persistent feasibility of the safe control problems, we synthesize a worst-case safety index using the most unsafe approximated state within the BPO relaxation of NNDM offline. For the online real-time optimization, we formulate the first-order Taylor approximation of the nonlinear worst-case safety constraint as an additional linear layer of NNDM with the l2 bounded bias term for the higher-order remainder. Comprehensive experiments with different neural dynamics and safety constraints show that with safety guaranteed, our NNDMs with sound approximation are 10-100 times faster than the safe control baseline that uses mixed integer programming (MIP), validating the effectiveness of the worst-case safety index and scalability of the proposed BOND in real-time large-scale settings. The code is available at https://github.com/intelligent-control-lab/BOND. |
2004.07403 | Nisheeth Vishnoi | Jonathan Leake and Nisheeth K. Vishnoi | On the computability of continuous maximum entropy distributions with
applications | 50 pages, STOC 2020 | null | null | null | cs.DS math.OC stat.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We initiate a study of the following problem: Given a continuous domain
$\Omega$ along with its convex hull $\mathcal{K}$, a point $A \in \mathcal{K}$
and a prior measure $\mu$ on $\Omega$, find the probability density over
$\Omega$ whose marginal is $A$ and that minimizes the KL-divergence to $\mu$.
This framework gives rise to several extremal distributions that arise in
mathematics, quantum mechanics, statistics, and theoretical computer science.
Our technical contributions include a polynomial bound on the norm of the
optimizer of the dual problem that holds in a very general setting and relies
on a "balance" property of the measure $\mu$ on $\Omega$, and exact algorithms
for evaluating the dual and its gradient for several interesting settings of
$\Omega$ and $\mu$. Together, along with the ellipsoid method, these results
imply polynomial-time algorithms to compute such KL-divergence minimizing
distributions in several cases. Applications of our results include: 1) an
optimization characterization of the Goemans-Williamson measure that is used to
round a positive semidefinite matrix to a vector, 2) the computability of the
entropic barrier for polytopes studied by Bubeck and Eldan, and 3) a
polynomial-time algorithm to compute the barycentric quantum entropy of a
density matrix that was proposed as an alternative to von Neumann entropy in
the 1970s: this corresponds to the case when $\Omega$ is the set of rank-one
projection matrices and $\mu$ corresponds to the Haar measure on the unit
sphere. Our techniques generalize to the setting of Hermitian rank $k$
projections using the Harish-Chandra-Itzykson-Zuber formula, and are applicable
even beyond, to adjoint orbits of compact Lie groups.
| [
{
"created": "Thu, 16 Apr 2020 00:41:40 GMT",
"version": "v1"
}
] | 2020-04-17 | [
[
"Leake",
"Jonathan",
""
],
[
"Vishnoi",
"Nisheeth K.",
""
]
] | We initiate a study of the following problem: Given a continuous domain $\Omega$ along with its convex hull $\mathcal{K}$, a point $A \in \mathcal{K}$ and a prior measure $\mu$ on $\Omega$, find the probability density over $\Omega$ whose marginal is $A$ and that minimizes the KL-divergence to $\mu$. This framework gives rise to several extremal distributions that arise in mathematics, quantum mechanics, statistics, and theoretical computer science. Our technical contributions include a polynomial bound on the norm of the optimizer of the dual problem that holds in a very general setting and relies on a "balance" property of the measure $\mu$ on $\Omega$, and exact algorithms for evaluating the dual and its gradient for several interesting settings of $\Omega$ and $\mu$. Together, along with the ellipsoid method, these results imply polynomial-time algorithms to compute such KL-divergence minimizing distributions in several cases. Applications of our results include: 1) an optimization characterization of the Goemans-Williamson measure that is used to round a positive semidefinite matrix to a vector, 2) the computability of the entropic barrier for polytopes studied by Bubeck and Eldan, and 3) a polynomial-time algorithm to compute the barycentric quantum entropy of a density matrix that was proposed as an alternative to von Neumann entropy in the 1970s: this corresponds to the case when $\Omega$ is the set of rank one projections matrices and $\mu$ corresponds to the Haar measure on the unit sphere. Our techniques generalize to the setting of Hermitian rank $k$ projections using the Harish-Chandra-Itzykson-Zuber formula, and are applicable even beyond, to adjoint orbits of compact Lie groups. |
2210.06213 | Somya Sharma | Somya Sharma, Rahul Ghosh, Arvind Renganathan, Xiang Li, Snigdhansu
Chatterjee, John Nieber, Christopher Duffy, Vipin Kumar | Probabilistic Inverse Modeling: An Application in Hydrology | null | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by-sa/4.0/ | The astounding success of these methods has made it imperative to obtain more
explainable and trustworthy estimates from these models. In hydrology, basin
characteristics can be noisy or missing, impacting streamflow prediction. For
solving inverse problems in such applications, ensuring explainability is
pivotal for tackling issues relating to data bias and large search space. We
propose a probabilistic inverse model framework that can reconstruct robust
hydrology basin characteristics from dynamic input weather driver and
streamflow response data. We address two aspects of building more explainable
inverse models, uncertainty estimation and robustness. This can help improve
the trust of water managers, handling of noisy data and reduce costs. We
propose an uncertainty-based learning method that offers a 6\% improvement in $R^2$
for streamflow prediction (forward modeling) from inverse model inferred basin
characteristic estimates, a 17\% reduction in uncertainty (40\% in presence of
noise) and a 4\% higher coverage rate for basin characteristics.
| [
{
"created": "Wed, 12 Oct 2022 14:00:37 GMT",
"version": "v1"
}
] | 2022-10-13 | [
[
"Sharma",
"Somya",
""
],
[
"Ghosh",
"Rahul",
""
],
[
"Renganathan",
"Arvind",
""
],
[
"Li",
"Xiang",
""
],
[
"Chatterjee",
"Snigdhansu",
""
],
[
"Nieber",
"John",
""
],
[
"Duffy",
"Christopher",
""
],
[
"Kumar",
"Vipin",
""
]
] ] | The astounding success of these methods has made it imperative to obtain more explainable and trustworthy estimates from these models. In hydrology, basin characteristics can be noisy or missing, impacting streamflow prediction. For solving inverse problems in such applications, ensuring explainability is pivotal for tackling issues relating to data bias and large search space. We propose a probabilistic inverse model framework that can reconstruct robust hydrology basin characteristics from dynamic input weather driver and streamflow response data. We address two aspects of building more explainable inverse models, uncertainty estimation and robustness. This can help improve the trust of water managers, handling of noisy data and reduce costs. We propose an uncertainty-based learning method that offers a 6\% improvement in $R^2$ for streamflow prediction (forward modeling) from inverse model inferred basin characteristic estimates, a 17\% reduction in uncertainty (40\% in presence of noise) and a 4\% higher coverage rate for basin characteristics. |
2012.12362 | Rolysent Paredes | Rolysent K Paredes and Alexander A. Hernandez | Designing an Adaptive Bandwidth Management for Higher Education
Institutions | null | null | 10.25147/ijcsr.2017.001.1.22 | null | cs.NI cs.DB | http://creativecommons.org/licenses/by/4.0/ | Purpose: This study proposes an adaptive bandwidth management system which
can be explicitly used by educational institutions. The primary goal of the
system is to increase the bandwidth of the users who access more on educational
websites. Through this proposed bandwidth management, the users of the campus
networks are encouraged to utilize the internet for educational purposes.
Method: The weblog from a university's pfSense proxy server was utilized and
underwent Web Usage Mining (WUM) to determine the number of educational and
non-educational websites accessed by the users. Certain formulas were used in
the computation of the bandwidth which was dynamically assigned to the users. A
prototyping technique was applied in developing the adaptive bandwidth management
system. The prototype was simulated and evaluated by experts in compliance with
ISO/IEC 14598-6 and ISO/IEC 9126-1 standards.
Results: This study found that the prototype is capable of adjusting the
bandwidth of the network users dynamically. The users who browsed more on
educational websites or contents were assigned with higher bandwidth compared
to those who did not. Further, the evaluated prototype met the software
standards of ISO.
Conclusion: The proposed adaptive bandwidth management can contribute to the
continuous development in the area of computer networking, especially in
designing and managing campus networks. It also helps the network
administrators or IT managers in allocating bandwidth with minimal effort.
| [
{
"created": "Thu, 19 Nov 2020 11:59:29 GMT",
"version": "v1"
}
] | 2020-12-24 | [
[
"Paredes",
"Rolysent K",
""
],
[
"Hernandez",
"Alexander A.",
""
]
] ] | Purpose: This study proposes an adaptive bandwidth management system which can be explicitly used by educational institutions. The primary goal of the system is to increase the bandwidth of the users who access more on educational websites. Through this proposed bandwidth management, the users of the campus networks are encouraged to utilize the internet for educational purposes. Method: The weblog from a university's pfSense proxy server was utilized and underwent Web Usage Mining (WUM) to determine the number of educational and non-educational websites accessed by the users. Certain formulas were used in the computation of the bandwidth which was dynamically assigned to the users. A prototyping technique was applied in developing the adaptive bandwidth management system. The prototype was simulated and evaluated by experts in compliance with ISO/IEC 14598-6 and ISO/IEC 9126-1 standards. Results: This study found that the prototype is capable of adjusting the bandwidth of the network users dynamically. The users who browsed more on educational websites or contents were assigned with higher bandwidth compared to those who did not. Further, the evaluated prototype met the software standards of ISO. Conclusion: The proposed adaptive bandwidth management can contribute to the continuous development in the area of computer networking, especially in designing and managing campus networks. It also helps the network administrators or IT managers in allocating bandwidth with minimal effort. |
1405.6164 | Ion Androutsopoulos | Ion Androutsopoulos, Gerasimos Lampouras, Dimitrios Galanis | Generating Natural Language Descriptions from OWL Ontologies: the
NaturalOWL System | null | Journal Of Artificial Intelligence Research, Volume 48, pages
671-715, 2013 | 10.1613/jair.4017 | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present NaturalOWL, a natural language generation system that produces
texts describing individuals or classes of OWL ontologies. Unlike simpler OWL
verbalizers, which typically express a single axiom at a time in controlled,
often not entirely fluent natural language primarily for the benefit of domain
experts, we aim to generate fluent and coherent multi-sentence texts for
end-users. With a system like NaturalOWL, one can publish information in OWL on
the Web, along with automatically produced corresponding texts in multiple
languages, making the information accessible not only to computer programs and
domain experts, but also end-users. We discuss the processing stages of
NaturalOWL, the optional domain-dependent linguistic resources that the system
can use at each stage, and why they are useful. We also present trials showing
that when the domain-dependent linguistic resources are available, NaturalOWL
produces significantly better texts compared to a simpler verbalizer, and that
the resources can be created with relatively light effort.
| [
{
"created": "Thu, 24 Apr 2014 02:47:37 GMT",
"version": "v1"
}
] | 2014-05-26 | [
[
"Androutsopoulos",
"Ion",
""
],
[
"Lampouras",
"Gerasimos",
""
],
[
"Galanis",
"Dimitrios",
""
]
] ] | We present NaturalOWL, a natural language generation system that produces texts describing individuals or classes of OWL ontologies. Unlike simpler OWL verbalizers, which typically express a single axiom at a time in controlled, often not entirely fluent natural language primarily for the benefit of domain experts, we aim to generate fluent and coherent multi-sentence texts for end-users. With a system like NaturalOWL, one can publish information in OWL on the Web, along with automatically produced corresponding texts in multiple languages, making the information accessible not only to computer programs and domain experts, but also end-users. We discuss the processing stages of NaturalOWL, the optional domain-dependent linguistic resources that the system can use at each stage, and why they are useful. We also present trials showing that when the domain-dependent linguistic resources are available, NaturalOWL produces significantly better texts compared to a simpler verbalizer, and that the resources can be created with relatively light effort. |
1801.05627 | Patrick Glauner | Patrick Glauner, Radu State, Petko Valtchev, Diogo Duarte | On the Reduction of Biases in Big Data Sets for the Detection of
Irregular Power Usage | null | Proceedings of the 13th International FLINS Conference on Data
Science and Knowledge Engineering for Sensing Decision Support (FLINS 2018) | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In machine learning, a bias occurs whenever training sets are not
representative for the test data, which results in unreliable models. The most
common biases in data are arguably class imbalance and covariate shift. In this
work, we aim to shed light on this topic in order to increase the overall
attention to this issue in the field of machine learning. We propose a scalable
novel framework for reducing multiple biases in high-dimensional data sets in
order to train more reliable predictors. We apply our methodology to the
detection of irregular power usage from real, noisy industrial data. In
emerging markets, irregular power usage, and electricity theft in particular,
may range up to 40% of the total electricity distributed. Biased data sets are
of particular issue in this domain. We show that reducing these biases
increases the accuracy of the trained predictors. Our models have the potential
to generate significant economic value in a real world application, as they are
being deployed in a commercial software for the detection of irregular power
usage.
| [
{
"created": "Wed, 17 Jan 2018 11:48:18 GMT",
"version": "v1"
},
{
"created": "Tue, 3 Apr 2018 09:06:42 GMT",
"version": "v2"
}
] | 2018-04-04 | [
[
"Glauner",
"Patrick",
""
],
[
"State",
"Radu",
""
],
[
"Valtchev",
"Petko",
""
],
[
"Duarte",
"Diogo",
""
]
] | In machine learning, a bias occurs whenever training sets are not representative for the test data, which results in unreliable models. The most common biases in data are arguably class imbalance and covariate shift. In this work, we aim to shed light on this topic in order to increase the overall attention to this issue in the field of machine learning. We propose a scalable novel framework for reducing multiple biases in high-dimensional data sets in order to train more reliable predictors. We apply our methodology to the detection of irregular power usage from real, noisy industrial data. In emerging markets, irregular power usage, and electricity theft in particular, may range up to 40% of the total electricity distributed. Biased data sets are of particular issue in this domain. We show that reducing these biases increases the accuracy of the trained predictors. Our models have the potential to generate significant economic value in a real world application, as they are being deployed in a commercial software for the detection of irregular power usage. |
2307.11073 | Oscar Michel | Oscar Michel, Anand Bhattad, Eli VanderBilt, Ranjay Krishna, Aniruddha
Kembhavi, Tanmay Gupta | OBJECT 3DIT: Language-guided 3D-aware Image Editing | null | null | null | null | cs.CV cs.AI cs.GR | http://creativecommons.org/licenses/by/4.0/ | Existing image editing tools, while powerful, typically disregard the
underlying 3D geometry from which the image is projected. As a result, edits
made using these tools may become detached from the geometry and lighting
conditions that are at the foundation of the image formation process. In this
work, we formulate the new task of language-guided 3D-aware editing, where
objects in an image should be edited according to a language instruction in
context of the underlying 3D scene. To promote progress towards this goal, we
release OBJECT: a dataset consisting of 400K editing examples created from
procedurally generated 3D scenes. Each example consists of an input image,
editing instruction in language, and the edited image. We also introduce 3DIT:
single and multi-task models for four editing tasks. Our models show impressive
abilities to understand the 3D composition of entire scenes, factoring in
surrounding objects, surfaces, lighting conditions, shadows, and
physically-plausible object configurations. Surprisingly, despite training only
on synthetic scenes from OBJECT, the editing capabilities of 3DIT generalize to
real-world images.
| [
{
"created": "Thu, 20 Jul 2023 17:53:46 GMT",
"version": "v1"
}
] | 2023-07-21 | [
[
"Michel",
"Oscar",
""
],
[
"Bhattad",
"Anand",
""
],
[
"VanderBilt",
"Eli",
""
],
[
"Krishna",
"Ranjay",
""
],
[
"Kembhavi",
"Aniruddha",
""
],
[
"Gupta",
"Tanmay",
""
]
] ] | Existing image editing tools, while powerful, typically disregard the underlying 3D geometry from which the image is projected. As a result, edits made using these tools may become detached from the geometry and lighting conditions that are at the foundation of the image formation process. In this work, we formulate the new task of language-guided 3D-aware editing, where objects in an image should be edited according to a language instruction in context of the underlying 3D scene. To promote progress towards this goal, we release OBJECT: a dataset consisting of 400K editing examples created from procedurally generated 3D scenes. Each example consists of an input image, editing instruction in language, and the edited image. We also introduce 3DIT: single and multi-task models for four editing tasks. Our models show impressive abilities to understand the 3D composition of entire scenes, factoring in surrounding objects, surfaces, lighting conditions, shadows, and physically-plausible object configurations. Surprisingly, despite training only on synthetic scenes from OBJECT, the editing capabilities of 3DIT generalize to real-world images. |
1910.10679 | Leslie Smith | Leslie N. Smith | A Useful Taxonomy for Adversarial Robustness of Neural Networks | NRL Technical Report | null | null | null | cs.LG cs.CR cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Adversarial attacks and defenses are currently active areas of research for
the deep learning community. A recent review paper divided the defense
approaches into three categories: gradient masking, robust optimization, and
adversarial example detection. We divide gradient masking and robust
optimization differently: (1) increasing intra-class compactness and
inter-class separation of the feature vectors improves adversarial robustness,
and (2) marginalization or removal of non-robust image features also improves
adversarial robustness. By reframing these topics differently, we provide a
fresh perspective that provides insight into the underlying factors that enable
training more robust networks and can help inspire novel solutions. In
addition, there are several papers in the literature of adversarial defenses
that claim there is a cost for adversarial robustness, or a trade-off between
robustness and accuracy but, under this proposed taxonomy, we hypothesize that
this is not universal. We follow up on our taxonomy with several challenges to
the deep learning research community that builds on the connections and
insights in this paper.
| [
{
"created": "Wed, 23 Oct 2019 17:33:15 GMT",
"version": "v1"
}
] | 2019-10-24 | [
[
"Smith",
"Leslie N.",
""
]
] | Adversarial attacks and defenses are currently active areas of research for the deep learning community. A recent review paper divided the defense approaches into three categories: gradient masking, robust optimization, and adversarial example detection. We divide gradient masking and robust optimization differently: (1) increasing intra-class compactness and inter-class separation of the feature vectors improves adversarial robustness, and (2) marginalization or removal of non-robust image features also improves adversarial robustness. By reframing these topics differently, we provide a fresh perspective that provides insight into the underlying factors that enable training more robust networks and can help inspire novel solutions. In addition, there are several papers in the literature of adversarial defenses that claim there is a cost for adversarial robustness, or a trade-off between robustness and accuracy but, under this proposed taxonomy, we hypothesize that this is not universal. We follow up on our taxonomy with several challenges to the deep learning research community that build on the connections and insights in this paper. |
1703.06113 | Pedro Recuero | Pedro Recuero | Toward an enumeration of unlabeled trees | 10 pages, 17 figures | null | null | null | cs.DS math.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an algorithm that, on input $n$, lists every unlabeled tree of
order $n$.
| [
{
"created": "Sat, 11 Mar 2017 02:10:17 GMT",
"version": "v1"
}
] | 2017-03-20 | [
[
"Recuero",
"Pedro",
""
]
] | We present an algorithm that, on input $n$, lists every unlabeled tree of order $n$. |
2109.09034 | Chapman Siu | Chapman Siu, Jason Traish, Richard Yi Da Xu | Greedy UnMixing for Q-Learning in Multi-Agent Reinforcement Learning | null | null | null | null | cs.LG cs.MA stat.ML | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This paper introduces Greedy UnMix (GUM) for cooperative multi-agent
reinforcement learning (MARL). Greedy UnMix aims to avoid scenarios where MARL
methods fail due to overestimation of values as part of the large joint
state-action space. It aims to address this through a conservative Q-learning
approach by restricting the state-marginal in the dataset to avoid
unobserved joint state action spaces, whilst concurrently attempting to unmix
or simplify the problem space under the centralized training with decentralized
execution paradigm. We demonstrate the adherence to Q-function lower bounds in
the Q-learning for MARL scenarios, and demonstrate superior performance to
existing Q-learning MARL approaches as well as more general MARL algorithms
over a set of benchmark MARL tasks, despite its relative simplicity compared
with state-of-the-art approaches.
| [
{
"created": "Sun, 19 Sep 2021 00:35:18 GMT",
"version": "v1"
}
] | 2021-09-21 | [
[
"Siu",
"Chapman",
""
],
[
"Traish",
"Jason",
""
],
[
"Da Xu",
"Richard Yi",
""
]
] | This paper introduces Greedy UnMix (GUM) for cooperative multi-agent reinforcement learning (MARL). Greedy UnMix aims to avoid scenarios where MARL methods fail due to overestimation of values as part of the large joint state-action space. It aims to address this through a conservative Q-learning approach by restricting the state-marginal in the dataset to avoid unobserved joint state action spaces, whilst concurrently attempting to unmix or simplify the problem space under the centralized training with decentralized execution paradigm. We demonstrate the adherence to Q-function lower bounds in the Q-learning for MARL scenarios, and demonstrate superior performance to existing Q-learning MARL approaches as well as more general MARL algorithms over a set of benchmark MARL tasks, despite its relative simplicity compared with state-of-the-art approaches. |
2110.01315 | Kritika Prakash | Andrew Trask, Kritika Prakash | Towards General-purpose Infrastructure for Protecting Scientific Data
Under Study | null | null | null | null | cs.CR cs.AI cs.LG | http://creativecommons.org/publicdomain/zero/1.0/ | The scientific method presents a key challenge to privacy because it requires
many samples to support a claim. When samples are commercially valuable or
privacy-sensitive enough, their owners have strong reasons to avoid releasing
them for scientific study. Privacy techniques seek to mitigate this tension by
enforcing limits on one's ability to use studied samples for secondary
purposes. Recent work has begun combining these techniques into end-to-end
systems for protecting data. In this work, we assemble the first such
combination which is sufficient for a privacy-layman to use familiar tools to
experiment over private data while the infrastructure automatically prohibits
privacy leakage. We support this theoretical system with a prototype within the
Syft privacy platform using the PyTorch framework.
| [
{
"created": "Mon, 4 Oct 2021 10:48:38 GMT",
"version": "v1"
}
] | 2021-10-05 | [
[
"Trask",
"Andrew",
""
],
[
"Prakash",
"Kritika",
""
]
] | The scientific method presents a key challenge to privacy because it requires many samples to support a claim. When samples are commercially valuable or privacy-sensitive enough, their owners have strong reasons to avoid releasing them for scientific study. Privacy techniques seek to mitigate this tension by enforcing limits on one's ability to use studied samples for secondary purposes. Recent work has begun combining these techniques into end-to-end systems for protecting data. In this work, we assemble the first such combination which is sufficient for a privacy-layman to use familiar tools to experiment over private data while the infrastructure automatically prohibits privacy leakage. We support this theoretical system with a prototype within the Syft privacy platform using the PyTorch framework. |
2305.08299 | Marcos Kalinowski | Silvio Alonso, Marcos Kalinowski, Bruna Ferreira, Simone D. J.
Barbosa, Helio Lopes | A Systematic Mapping Study and Practitioner Insights on the Use of
Software Engineering Practices to Develop MVPs | null | Information and Software Technology, Volume 156, April 2023,
107144 | 10.1016/j.infsof.2022.107144 | null | cs.SE | http://creativecommons.org/licenses/by/4.0/ | [Background] The MVP concept has influenced the way in which development
teams apply Software Engineering practices. However, the overall understanding
of this influence of MVPs on SE practices is still poor. [Objective] Our goal
is to characterize the publication landscape on practices that have been used
in the context of software MVPs and to gather practitioner insights on the
identified practices. [Method] We conducted a systematic mapping study and
discussed its results in two focus group sessions involving twelve industry
practitioners that extensively use MVPs in their projects to capture their
perceptions on the findings of the mapping study. [Results] We identified 33
papers published between 2013 and 2020 and observed some trends related to MVP
ideation and evaluation practices. For instance, regarding ideation, we found
six different approaches and mainly informal end-user involvement practices.
Regarding evaluation, there is an emphasis on end-user validations based on
practices such as usability tests, A/B testing, and usage data analysis.
However, there is still limited research related to MVP technical feasibility
assessment and effort estimation. Practitioners of the focus group sessions
reinforced the confidence in our results regarding ideation and evaluation
practices, being aware of most of the identified practices. They also reported
how they deal with the technical feasibility assessments and effort estimation
in practice. [Conclusion] Our analysis suggests that there are opportunities
for solution proposals and evaluation studies to address literature gaps
concerning technical feasibility assessment and effort estimation. Overall,
more effort needs to be invested into empirically evaluating the existing
MVP-related practices.
| [
{
"created": "Mon, 15 May 2023 02:00:47 GMT",
"version": "v1"
}
] | 2023-05-16 | [
[
"Alonso",
"Silvio",
""
],
[
"Kalinowski",
"Marcos",
""
],
[
"Ferreira",
"Bruna",
""
],
[
"Barbosa",
"Simone D. J.",
""
],
[
"Lopes",
"Helio",
""
]
] | [Background] The MVP concept has influenced the way in which development teams apply Software Engineering practices. However, the overall understanding of this influence of MVPs on SE practices is still poor. [Objective] Our goal is to characterize the publication landscape on practices that have been used in the context of software MVPs and to gather practitioner insights on the identified practices. [Method] We conducted a systematic mapping study and discussed its results in two focus group sessions involving twelve industry practitioners that extensively use MVPs in their projects to capture their perceptions on the findings of the mapping study. [Results] We identified 33 papers published between 2013 and 2020 and observed some trends related to MVP ideation and evaluation practices. For instance, regarding ideation, we found six different approaches and mainly informal end-user involvement practices. Regarding evaluation, there is an emphasis on end-user validations based on practices such as usability tests, A/B testing, and usage data analysis. However, there is still limited research related to MVP technical feasibility assessment and effort estimation. Practitioners of the focus group sessions reinforced the confidence in our results regarding ideation and evaluation practices, being aware of most of the identified practices. They also reported how they deal with the technical feasibility assessments and effort estimation in practice. [Conclusion] Our analysis suggests that there are opportunities for solution proposals and evaluation studies to address literature gaps concerning technical feasibility assessment and effort estimation. Overall, more effort needs to be invested into empirically evaluating the existing MVP-related practices. |
1108.1762 | Yoav Wilf | Michal Feldman and Yoav Wilf | Randomized Strategyproof Mechanisms for Facility Location and the
Mini-Sum-of-Squares Objective | null | null | null | null | cs.GT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of locating a public facility on a line, where a set
of $n$ strategic agents report their \emph{locations} and a mechanism
determines, either deterministically or randomly, the location of the facility.
Game theoretic perspectives of the facility location problem advanced in two
main directions. The first direction is concerned with the characterization of
\emph{strategyproof} (SP) mechanisms; i.e., mechanisms that induce truthful
reporting as a dominant strategy; and the second direction quantifies how well
various objective functions can be approximated when restricted to SP
mechanisms. The current paper provides contributions in both directions. First,
we construct a parameterized randomized SP mechanism, and show that all of the
previously proposed deterministic and randomized SP mechanisms for the current
settings can be formalized as special cases of this mechanism. Second, we give
tight results for the approximation ratio of SP mechanisms with respect to the
objective of minimizing the sum of squares of distances to the agents
(\emph{miniSOS}). Holzman \cite{Holzman1990} provided an axiomatic foundation
for this function, showing that it is the unique function that satisfies
unanimity, continuity and invariance. We devise a randomized mechanism that
gives a 1.5-approximation for the miniSOS function, and show that no other
randomized SP mechanism can provide a better approximation. This mechanism
chooses the average location with probability 1/2 and a \emph{random dictator}
with probability 1/2. For deterministic mechanisms, we show that the median
mechanism provides a 2-approximation, and this is tight. Together, our study
provides fundamental understanding of the miniSOS objective function and makes
a step toward the characterization of randomized SP facility location
mechanisms.
| [
{
"created": "Mon, 8 Aug 2011 17:47:11 GMT",
"version": "v1"
},
{
"created": "Tue, 3 Apr 2012 19:33:10 GMT",
"version": "v2"
},
{
"created": "Tue, 3 Jul 2012 18:06:25 GMT",
"version": "v3"
},
{
"created": "Sat, 26 Oct 2013 20:34:06 GMT",
"version": "v4"
}
] | 2013-10-29 | [
[
"Feldman",
"Michal",
""
],
[
"Wilf",
"Yoav",
""
]
] | We consider the problem of locating a public facility on a line, where a set of $n$ strategic agents report their \emph{locations} and a mechanism determines, either deterministically or randomly, the location of the facility. Game theoretic perspectives of the facility location problem advanced in two main directions. The first direction is concerned with the characterization of \emph{strategyproof} (SP) mechanisms; i.e., mechanisms that induce truthful reporting as a dominant strategy; and the second direction quantifies how well various objective functions can be approximated when restricted to SP mechanisms. The current paper provides contributions in both directions. First, we construct a parameterized randomized SP mechanism, and show that all of the previously proposed deterministic and randomized SP mechanisms for the current settings can be formalized as special cases of this mechanism. Second, we give tight results for the approximation ratio of SP mechanisms with respect to the objective of minimizing the sum of squares of distances to the agents (\emph{miniSOS}). Holzman \cite{Holzman1990} provided an axiomatic foundation for this function, showing that it is the unique function that satisfies unanimity, continuity and invariance. We devise a randomized mechanism that gives a 1.5-approximation for the miniSOS function, and show that no other randomized SP mechanism can provide a better approximation. This mechanism chooses the average location with probability 1/2 and a \emph{random dictator} with probability 1/2. For deterministic mechanisms, we show that the median mechanism provides a 2-approximation, and this is tight. Together, our study provides fundamental understanding of the miniSOS objective function and makes a step toward the characterization of randomized SP facility location mechanisms. |
2311.09071 | Fei Yuan | Fei Yuan, Shuai Yuan, Zhiyong Wu, Lei Li | How Vocabulary Sharing Facilitates Multilingualism in LLaMA? | ACL-2024 Findings | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) often show strong performance on English tasks,
while exhibiting limitations on other languages. What is an LLM's multilingual
capability when it is trained only on certain languages? The underlying
mechanism remains unclear. This study endeavors to examine the multilingual
capability of LLMs from the vocabulary sharing perspective by conducting an
exhaustive analysis across 101 languages. Through the investigation of the
performance gap before and after embedding fine-tuning, we discovered four
distinct quadrants. By delving into each quadrant we provide actionable and
efficient guidelines for tuning these languages. Extensive experiments reveal
that existing LLMs possess multilingual capabilities that surpass our
expectations, and we can significantly improve the multilingual performance of
LLMs based on these attributes of each
quadrant~\footnote{\url{https://github.com/CONE-MT/Vocabulary-Sharing-Facilitates-Multilingualism}.}.
| [
{
"created": "Wed, 15 Nov 2023 16:13:14 GMT",
"version": "v1"
},
{
"created": "Mon, 3 Jun 2024 06:11:06 GMT",
"version": "v2"
}
] | 2024-06-04 | [
[
"Yuan",
"Fei",
""
],
[
"Yuan",
"Shuai",
""
],
[
"Wu",
"Zhiyong",
""
],
[
"Li",
"Lei",
""
]
] | Large Language Models (LLMs) often show strong performance on English tasks, while exhibiting limitations on other languages. What is an LLM's multilingual capability when it is trained only on certain languages? The underlying mechanism remains unclear. This study endeavors to examine the multilingual capability of LLMs from the vocabulary sharing perspective by conducting an exhaustive analysis across 101 languages. Through the investigation of the performance gap before and after embedding fine-tuning, we discovered four distinct quadrants. By delving into each quadrant we provide actionable and efficient guidelines for tuning these languages. Extensive experiments reveal that existing LLMs possess multilingual capabilities that surpass our expectations, and we can significantly improve the multilingual performance of LLMs based on these attributes of each quadrant~\footnote{\url{https://github.com/CONE-MT/Vocabulary-Sharing-Facilitates-Multilingualism}.}. |
2210.10906 | Stanislas Lauly | Suvodeep Majumder, Stanislas Lauly, Maria Nadejde, Marcello Federico,
Georgiana Dinu | A baseline revisited: Pushing the limits of multi-segment models for
context-aware translation | null | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper addresses the task of contextual translation using multi-segment
models. Specifically we show that increasing model capacity further pushes the
limits of this approach and that deeper models are more suited to capture
context dependencies. Furthermore, improvements observed with larger models can
be transferred to smaller models using knowledge distillation. Our experiments
show that this approach achieves competitive performance across several
languages and benchmarks, without additional language-specific tuning and
task-specific architectures.
| [
{
"created": "Wed, 19 Oct 2022 22:04:25 GMT",
"version": "v1"
},
{
"created": "Fri, 21 Oct 2022 15:50:52 GMT",
"version": "v2"
}
] | 2022-10-24 | [
[
"Majumder",
"Suvodeep",
""
],
[
"Lauly",
"Stanislas",
""
],
[
"Nadejde",
"Maria",
""
],
[
"Federico",
"Marcello",
""
],
[
"Dinu",
"Georgiana",
""
]
] | This paper addresses the task of contextual translation using multi-segment models. Specifically we show that increasing model capacity further pushes the limits of this approach and that deeper models are more suited to capture context dependencies. Furthermore, improvements observed with larger models can be transferred to smaller models using knowledge distillation. Our experiments show that this approach achieves competitive performance across several languages and benchmarks, without additional language-specific tuning and task-specific architectures. |
2402.13243 | Bo Jiang | Shaoyu Chen, Bo Jiang, Hao Gao, Bencheng Liao, Qing Xu, Qian Zhang,
Chang Huang, Wenyu Liu, Xinggang Wang | VADv2: End-to-End Vectorized Autonomous Driving via Probabilistic
Planning | Project Page: https://hgao-cv.github.io/VADv2 | null | null | null | cs.CV cs.RO | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Learning a human-like driving policy from large-scale driving demonstrations
is promising, but the uncertainty and non-deterministic nature of planning make
it challenging. In this work, to cope with the uncertainty problem, we propose
VADv2, an end-to-end driving model based on probabilistic planning. VADv2 takes
multi-view image sequences as input in a streaming manner, transforms sensor
data into environmental token embeddings, outputs the probabilistic
distribution of action, and samples one action to control the vehicle. Only
with camera sensors, VADv2 achieves state-of-the-art closed-loop performance on
the CARLA Town05 benchmark, significantly outperforming all existing methods.
It runs stably in a fully end-to-end manner, even without the rule-based
wrapper. Closed-loop demos are presented at https://hgao-cv.github.io/VADv2.
| [
{
"created": "Tue, 20 Feb 2024 18:55:09 GMT",
"version": "v1"
}
] | 2024-02-21 | [
[
"Chen",
"Shaoyu",
""
],
[
"Jiang",
"Bo",
""
],
[
"Gao",
"Hao",
""
],
[
"Liao",
"Bencheng",
""
],
[
"Xu",
"Qing",
""
],
[
"Zhang",
"Qian",
""
],
[
"Huang",
"Chang",
""
],
[
"Liu",
"Wenyu",
""
],
[
"Wang",
"Xinggang",
""
]
] | Learning a human-like driving policy from large-scale driving demonstrations is promising, but the uncertainty and non-deterministic nature of planning make it challenging. In this work, to cope with the uncertainty problem, we propose VADv2, an end-to-end driving model based on probabilistic planning. VADv2 takes multi-view image sequences as input in a streaming manner, transforms sensor data into environmental token embeddings, outputs the probabilistic distribution of action, and samples one action to control the vehicle. Only with camera sensors, VADv2 achieves state-of-the-art closed-loop performance on the CARLA Town05 benchmark, significantly outperforming all existing methods. It runs stably in a fully end-to-end manner, even without the rule-based wrapper. Closed-loop demos are presented at https://hgao-cv.github.io/VADv2. |
2304.10250 | Wentian Xu | Wentian Xu and Jianbo Jiao | Revisiting Implicit Neural Representations in Low-Level Vision | Published at the ICLR 2023 Neural Fields workshop. Project Webpage:
https://wentxul.github.io/LINR-projectpage | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Implicit Neural Representation (INR) has been emerging in computer vision in
recent years. It has been shown to be effective in parameterising continuous
signals such as dense 3D models from discrete image data, e.g. the neural
radiance field (NeRF). However, INR is under-explored in 2D image processing
tasks. Considering the basic definition and the structure of INR, we are
interested in its effectiveness in low-level vision problems such as image
restoration. In this work, we revisit INR and investigate its application in
low-level image restoration tasks including image denoising, super-resolution,
inpainting, and deblurring. Extensive experimental evaluations suggest the
superior performance of INR in several low-level vision tasks with limited
resources, outperforming its counterparts by over 2dB. Code and models are
available at https://github.com/WenTXuL/LINR
| [
{
"created": "Thu, 20 Apr 2023 12:19:27 GMT",
"version": "v1"
}
] | 2023-04-21 | [
[
"Xu",
"Wentian",
""
],
[
"Jiao",
"Jianbo",
""
]
] | Implicit Neural Representation (INR) has been emerging in computer vision in recent years. It has been shown to be effective in parameterising continuous signals such as dense 3D models from discrete image data, e.g. the neural radiance field (NeRF). However, INR is under-explored in 2D image processing tasks. Considering the basic definition and the structure of INR, we are interested in its effectiveness in low-level vision problems such as image restoration. In this work, we revisit INR and investigate its application in low-level image restoration tasks including image denoising, super-resolution, inpainting, and deblurring. Extensive experimental evaluations suggest the superior performance of INR in several low-level vision tasks with limited resources, outperforming its counterparts by over 2dB. Code and models are available at https://github.com/WenTXuL/LINR |
2304.05492 | Juntao Tan | Juntao Tan, Shelby Heinecke, Zhiwei Liu, Yongjun Chen, Yongfeng Zhang,
Huan Wang | Towards More Robust and Accurate Sequential Recommendation with
Cascade-guided Adversarial Training | Accepted to present at SIAM International Conference on Data Mining
(SDM24) | null | null | null | cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sequential recommendation models, models that learn from chronological
user-item interactions, outperform traditional recommendation models in many
settings. Despite the success of sequential recommendation models, their
robustness has recently come into question. Two properties unique to the nature
of sequential recommendation models may impair their robustness - the cascade
effects induced during training and the model's tendency to rely too heavily on
temporal information. To address these vulnerabilities, we propose
Cascade-guided Adversarial training, a new adversarial training procedure that
is specifically designed for sequential recommendation models. Our approach
harnesses the intrinsic cascade effects present in sequential modeling to
produce strategic adversarial perturbations to item embeddings during training.
Experiments on training state-of-the-art sequential models on four public
datasets from different domains show that our training approach produces
superior model ranking accuracy and superior model robustness to real item
replacement perturbations when compared to both standard model training and
generic adversarial training.
| [
{
"created": "Tue, 11 Apr 2023 20:55:02 GMT",
"version": "v1"
},
{
"created": "Tue, 16 Jan 2024 18:37:59 GMT",
"version": "v2"
}
] | 2024-01-17 | [
[
"Tan",
"Juntao",
""
],
[
"Heinecke",
"Shelby",
""
],
[
"Liu",
"Zhiwei",
""
],
[
"Chen",
"Yongjun",
""
],
[
"Zhang",
"Yongfeng",
""
],
[
"Wang",
"Huan",
""
]
] | Sequential recommendation models, models that learn from chronological user-item interactions, outperform traditional recommendation models in many settings. Despite the success of sequential recommendation models, their robustness has recently come into question. Two properties unique to the nature of sequential recommendation models may impair their robustness - the cascade effects induced during training and the model's tendency to rely too heavily on temporal information. To address these vulnerabilities, we propose Cascade-guided Adversarial training, a new adversarial training procedure that is specifically designed for sequential recommendation models. Our approach harnesses the intrinsic cascade effects present in sequential modeling to produce strategic adversarial perturbations to item embeddings during training. Experiments on training state-of-the-art sequential models on four public datasets from different domains show that our training approach produces superior model ranking accuracy and superior model robustness to real item replacement perturbations when compared to both standard model training and generic adversarial training. |
1612.01431 | Mike Thelwall Prof | Mike Thelwall | Three practical field normalised alternative indicator formulae for
research evaluation | Thelwall, M. (in press). Three practical field normalised alternative
indicator formulae for research evaluation. Journal of Informetrics.
doi:10.1016/j.joi.2016.12.002 Changes from the previous version are
highlighted in yellow | null | 10.1016/j.joi.2016.12.002 | null | cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although altmetrics and other web-based alternative indicators are now
commonplace in publishers' websites, they can be difficult for research
evaluators to use because of the time or expense of the data, the need to
benchmark in order to assess their values, the high proportion of zeros in some
alternative indicators, and the time taken to calculate multiple complex
indicators. These problems are addressed here by (a) a field normalisation
formula, the Mean Normalised Log-transformed Citation Score (MNLCS) that allows
simple confidence limits to be calculated and is similar to a proposal of
Lundberg, (b) field normalisation formulae for the proportion of cited articles
in a set, the Equalised Mean-based Normalised Proportion Cited (EMNPC) and the
Mean-based Normalised Proportion Cited (MNPC), to deal with mostly uncited data
sets, (c) a sampling strategy to minimise data collection costs, and (d) free
unified software to gather the raw data, implement the sampling strategy, and
calculate the indicator formulae and confidence limits. The approach is
demonstrated (but not fully tested) by comparing the Scopus citations, Mendeley
readers and Wikipedia mentions of research funded by Wellcome, NIH, and MRC in
three large fields for 2013-2016. Within the results, statistically significant
differences in both citation counts and Mendeley reader counts were found even
for sets of articles that were less than six months old. Mendeley reader counts
were more precise than Scopus citations for the most recent articles and all
three funders could be demonstrated to have an impact in Wikipedia that was
significantly above the world average.
| [
{
"created": "Mon, 5 Dec 2016 17:02:21 GMT",
"version": "v1"
},
{
"created": "Wed, 21 Dec 2016 12:02:49 GMT",
"version": "v2"
}
] | 2016-12-22 | [
[
"Thelwall",
"Mike",
""
]
] | Although altmetrics and other web-based alternative indicators are now commonplace in publishers' websites, they can be difficult for research evaluators to use because of the time or expense of the data, the need to benchmark in order to assess their values, the high proportion of zeros in some alternative indicators, and the time taken to calculate multiple complex indicators. These problems are addressed here by (a) a field normalisation formula, the Mean Normalised Log-transformed Citation Score (MNLCS) that allows simple confidence limits to be calculated and is similar to a proposal of Lundberg, (b) field normalisation formulae for the proportion of cited articles in a set, the Equalised Mean-based Normalised Proportion Cited (EMNPC) and the Mean-based Normalised Proportion Cited (MNPC), to deal with mostly uncited data sets, (c) a sampling strategy to minimise data collection costs, and (d) free unified software to gather the raw data, implement the sampling strategy, and calculate the indicator formulae and confidence limits. The approach is demonstrated (but not fully tested) by comparing the Scopus citations, Mendeley readers and Wikipedia mentions of research funded by Wellcome, NIH, and MRC in three large fields for 2013-2016. Within the results, statistically significant differences in both citation counts and Mendeley reader counts were found even for sets of articles that were less than six months old. Mendeley reader counts were more precise than Scopus citations for the most recent articles and all three funders could be demonstrated to have an impact in Wikipedia that was significantly above the world average. |
0804.0352 | \^Hamed \"Owladeghaffari | M.Sharifzadeh, H.Owladeghaffari, K.Shahriar, E.Bakhtavar | Permeability Analysis based on information granulation theory | 8 pages,7 figures | null | null | null | cs.NE cs.AI | http://creativecommons.org/licenses/by/3.0/ | This paper describes the application of information granulation theory to the
analysis of "lugeon data". By combining a Self-Organizing Map (SOM) and a
Neuro-Fuzzy Inference System (NFIS), crisp and fuzzy granules are obtained.
Balancing of crisp granules and sub-fuzzy granules within non-fuzzy
information (initial granulation) is rendered in an open-close iteration.
Using two criteria, "simplicity of rules" and "suitable adaptive threshold
error level", the stability of the algorithm is guaranteed. In another part of
the paper, rough set theory (RST) is employed for approximate analysis. The
proposed methods are validated on a large data set of in-situ permeability in
rock masses at the Shivashan dam, Iran. The implementation of the proposed
algorithm on the lugeon data set showed that the suggested method can be
applied to approximate analysis of permeability.
| [
{
"created": "Wed, 2 Apr 2008 13:45:51 GMT",
"version": "v1"
}
] | 2008-04-03 | [
[
"Sharifzadeh",
"M.",
""
],
[
"Owladeghaffari",
"H.",
""
],
[
"Shahriar",
"K.",
""
],
[
"Bakhtavar",
"E.",
""
]
] | This paper describes the application of information granulation theory to the analysis of "lugeon data". By combining a Self-Organizing Map (SOM) and a Neuro-Fuzzy Inference System (NFIS), crisp and fuzzy granules are obtained. Balancing of crisp granules and sub-fuzzy granules within non-fuzzy information (initial granulation) is rendered in an open-close iteration. Using two criteria, "simplicity of rules" and "suitable adaptive threshold error level", the stability of the algorithm is guaranteed. In another part of the paper, rough set theory (RST) is employed for approximate analysis. The proposed methods are validated on a large data set of in-situ permeability in rock masses at the Shivashan dam, Iran. The implementation of the proposed algorithm on the lugeon data set showed that the suggested method can be applied to approximate analysis of permeability. |
2407.05480 | Wenxin Zhou | Wenxin Zhou | Biomedical Nested NER with Large Language Model and UMLS Heuristics | Submitted to CEUR-WS for the BioNNE task of BioASQ Lab in Conference
and Labs of the Evaluation Forum (CLEF) 2024 as a working note | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | In this paper, we present our system for the BioNNE English track, which aims
to extract 8 types of biomedical nested named entities from biomedical text. We
use a large language model (Mixtral 8x7B instruct) and ScispaCy NER model to
identify entities in an article and build custom heuristics based on unified
medical language system (UMLS) semantic types to categorize the entities. We
discuss the results and limitations of our system and propose future
improvements. Our system achieved an F1 score of 0.39 on the BioNNE validation
set and 0.348 on the test set.
| [
{
"created": "Sun, 7 Jul 2024 19:37:40 GMT",
"version": "v1"
}
] | 2024-07-09 | [
[
"Zhou",
"Wenxin",
""
]
] | In this paper, we present our system for the BioNNE English track, which aims to extract 8 types of biomedical nested named entities from biomedical text. We use a large language model (Mixtral 8x7B instruct) and ScispaCy NER model to identify entities in an article and build custom heuristics based on unified medical language system (UMLS) semantic types to categorize the entities. We discuss the results and limitations of our system and propose future improvements. Our system achieved an F1 score of 0.39 on the BioNNE validation set and 0.348 on the test set. |
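The record above categorizes entities via heuristics built on UMLS semantic types. A minimal sketch of that step is a lookup from semantic-type identifiers (TUIs) to output categories; the specific TUI-to-category choices below are illustrative assumptions, not the authors' actual rule table:

```python
# Hypothetical heuristic: map UMLS semantic-type TUIs to entity
# categories. The mapping entries are assumptions for illustration.
UMLS_TYPE_TO_CATEGORY = {
    "T047": "DISO",     # Disease or Syndrome
    "T121": "CHEM",     # Pharmacologic Substance
    "T023": "ANATOMY",  # Body Part, Organ, or Organ Component
}

def categorize(entity_types):
    """Return the category of the first mapped semantic type,
    falling back to a catch-all label."""
    for t in entity_types:
        if t in UMLS_TYPE_TO_CATEGORY:
            return UMLS_TYPE_TO_CATEGORY[t]
    return "OTHER"
```

In practice an entity linked against UMLS carries several semantic types, so a priority ordering over the table (as the first-match loop implies) is one simple way to resolve conflicts.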
1311.6093 | Pushkar Mishra | Pushkar Mishra | A New Algorithm for Updating and Querying Sub-arrays of Multidimensional
Arrays | 14 Pages, 3 Figures, 1 Table | null | 10.13140/RG.2.1.2394.2485 | null | cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given a $d$-dimensional array $A$, an update operation adds a given constant
$C$ to each element within a continuous sub-array of $A$. A query operation
computes the sum of all the elements within a continuous sub-array of $A$. The
one-dimensional update and query handling problem has been studied intensively
and is usually solved using segment trees with lazy propagation technique. In
this paper, we present a new algorithm incorporating Binary Indexed Trees and
Inclusion-Exclusion Principle to accomplish the same task. We extend the
algorithm to update and query sub-matrices of matrices (two-dimensional array).
Finally, we propose a general form of the algorithm for $d$-dimensions which
achieves $\mathcal{O}(4^d*\log^{d}n)$ time complexity for both updates and
queries. This is an improvement over the previously known algorithms which
utilize hierarchical data structures like quadtrees and octrees and have a
worst-case time complexity of $\Omega(n^{d-1})$ per update/query.
| [
{
"created": "Sun, 24 Nov 2013 08:18:04 GMT",
"version": "v1"
},
{
"created": "Tue, 3 Dec 2013 15:28:05 GMT",
"version": "v2"
},
{
"created": "Wed, 4 Dec 2013 17:27:11 GMT",
"version": "v3"
},
{
"created": "Thu, 23 Jan 2014 12:19:36 GMT",
"version": "v4"
},
{
"created": "Sun, 1 Nov 2015 10:34:52 GMT",
"version": "v5"
},
{
"created": "Wed, 3 Aug 2016 22:19:11 GMT",
"version": "v6"
}
] | 2016-08-05 | [
[
"Mishra",
"Pushkar",
""
]
] | Given a $d$-dimensional array $A$, an update operation adds a given constant $C$ to each element within a continuous sub-array of $A$. A query operation computes the sum of all the elements within a continuous sub-array of $A$. The one-dimensional update and query handling problem has been studied intensively and is usually solved using segment trees with lazy propagation technique. In this paper, we present a new algorithm incorporating Binary Indexed Trees and Inclusion-Exclusion Principle to accomplish the same task. We extend the algorithm to update and query sub-matrices of matrices (two-dimensional array). Finally, we propose a general form of the algorithm for $d$-dimensions which achieves $\mathcal{O}(4^d*\log^{d}n)$ time complexity for both updates and queries. This is an improvement over the previously known algorithms which utilize hierarchical data structures like quadtrees and octrees and have a worst-case time complexity of $\Omega(n^{d-1})$ per update/query. |
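The one-dimensional core of the range-update/range-sum problem in the record above is commonly handled with two Fenwick (Binary Indexed) trees; the sketch below shows that classic special case only, not the paper's inclusion-exclusion generalization to d dimensions:

```python
class RangeBIT:
    """Range update / range sum via two Fenwick trees (1-indexed)."""

    def __init__(self, n):
        self.n = n
        self.b1 = [0] * (n + 1)
        self.b2 = [0] * (n + 1)

    def _add(self, tree, i, v):
        while i <= self.n:
            tree[i] += v
            i += i & -i

    def _sum(self, tree, i):
        s = 0
        while i > 0:
            s += tree[i]
            i -= i & -i
        return s

    def range_add(self, l, r, c):
        """Add the constant c to every element in [l, r]."""
        self._add(self.b1, l, c)
        self._add(self.b1, r + 1, -c)
        self._add(self.b2, l, c * (l - 1))
        self._add(self.b2, r + 1, -c * r)

    def prefix_sum(self, i):
        # b1 stores the slope, b2 the correction term of the prefix sum
        return self._sum(self.b1, i) * i - self._sum(self.b2, i)

    def range_sum(self, l, r):
        """Sum of the elements in [l, r]."""
        return self.prefix_sum(r) - self.prefix_sum(l - 1)
```

Both operations run in O(log n); the d-dimensional algorithm of the paper layers inclusion-exclusion over nested trees of this kind to reach its stated O(4^d log^d n) bound.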
2004.08348 | Thibault Rieutord | Petr Kuznetsov, Thibault Rieutord and Yuan He | An Asynchronous Computability Theorem for Fair Adversaries | null | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes a simple topological characterization of a large class of
fair adversarial models via affine tasks: sub-complexes of the second iteration
of the standard chromatic subdivision. We show that the task computability of a
model in the class is precisely captured by iterations of the corresponding
affine task. Fair adversaries include, but are not restricted to, the models of
wait-freedom, t-resilience, and $k$-concurrency. Our results generalize and
improve all previously derived topological characterizations of the ability of
a model to solve distributed tasks.
| [
{
"created": "Fri, 17 Apr 2020 17:09:35 GMT",
"version": "v1"
}
] | 2020-04-20 | [
[
"Kuznetsov",
"Petr",
""
],
[
"Rieutord",
"Thibault",
""
],
[
"He",
"Yuan",
""
]
] | This paper proposes a simple topological characterization of a large class of fair adversarial models via affine tasks: sub-complexes of the second iteration of the standard chromatic subdivision. We show that the task computability of a model in the class is precisely captured by iterations of the corresponding affine task. Fair adversaries include, but are not restricted to, the models of wait-freedom, t-resilience, and $k$-concurrency. Our results generalize and improve all previously derived topological characterizations of the ability of a model to solve distributed tasks. |
2208.02313 | Monika Kwiatkowski | Dominik Kuhnke, Monika Kwiatkowski, Olaf Hellwich | Image-based Detection of Surface Defects in Concrete during Construction | null | null | 10.1007/978-3-658-42796-2_13 | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Defects increase the cost and duration of construction projects as they
require significant inspection and documentation efforts. Automating defect
detection could significantly reduce these efforts. This work focuses on
detecting honeycombs, a substantial defect in concrete structures that may
affect structural integrity. We compared honeycomb images scraped from the web
with images obtained from real construction inspections. We found that web
images do not capture the complete variance found in real-case scenarios and
that there is still a lack of data in this domain. Our dataset is therefore
freely available for further research. A Mask R-CNN and EfficientNet-B0 were
trained for honeycomb detection. The Mask R-CNN model allows detecting
honeycombs based on instance segmentation, whereas the EfficientNet-B0 model
allows a patch-based classification. Our experiments demonstrate that both
approaches are suitable for solving and automating honeycomb detection. In the
future, this solution can be incorporated into defect documentation systems.
| [
{
"created": "Wed, 3 Aug 2022 19:05:12 GMT",
"version": "v1"
},
{
"created": "Tue, 6 Dec 2022 15:19:33 GMT",
"version": "v2"
}
] | 2024-05-21 | [
[
"Kuhnke",
"Dominik",
""
],
[
"Kwiatkowski",
"Monika",
""
],
[
"Hellwich",
"Olaf",
""
]
] | Defects increase the cost and duration of construction projects as they require significant inspection and documentation efforts. Automating defect detection could significantly reduce these efforts. This work focuses on detecting honeycombs, a substantial defect in concrete structures that may affect structural integrity. We compared honeycomb images scraped from the web with images obtained from real construction inspections. We found that web images do not capture the complete variance found in real-case scenarios and that there is still a lack of data in this domain. Our dataset is therefore freely available for further research. A Mask R-CNN and EfficientNet-B0 were trained for honeycomb detection. The Mask R-CNN model allows detecting honeycombs based on instance segmentation, whereas the EfficientNet-B0 model allows a patch-based classification. Our experiments demonstrate that both approaches are suitable for solving and automating honeycomb detection. In the future, this solution can be incorporated into defect documentation systems. |
1811.00511 | Woon Sang Cho | Woon Sang Cho, Pengchuan Zhang, Yizhe Zhang, Xiujun Li, Michel Galley,
Chris Brockett, Mengdi Wang, Jianfeng Gao | Towards Coherent and Cohesive Long-form Text Generation | Selected for spotlight oral presentation at NAACL-HLT 2019 Workshop
on Narrative Understanding | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generating coherent and cohesive long-form texts is a challenging task.
Previous works relied on large amounts of human-generated texts to train neural
language models. However, few attempted to explicitly improve neural language
models from the perspectives of coherence and cohesion. In this work, we
propose a new neural language model that is equipped with two neural
discriminators which provide feedback signals at the levels of sentence
(cohesion) and paragraph (coherence). Our model is trained using a simple yet
efficient variant of policy gradient, called negative-critical sequence
training, which is proposed to eliminate the need of training a separate critic
for estimating baseline. Results demonstrate the effectiveness of our approach,
showing improvements over the strong baseline -- recurrent attention-based
bidirectional MLE-trained neural language model.
| [
{
"created": "Thu, 1 Nov 2018 17:30:50 GMT",
"version": "v1"
},
{
"created": "Wed, 29 May 2019 15:56:31 GMT",
"version": "v2"
}
] | 2019-05-30 | [
[
"Cho",
"Woon Sang",
""
],
[
"Zhang",
"Pengchuan",
""
],
[
"Zhang",
"Yizhe",
""
],
[
"Li",
"Xiujun",
""
],
[
"Galley",
"Michel",
""
],
[
"Brockett",
"Chris",
""
],
[
"Wang",
"Mengdi",
""
],
[
"Gao",
"Jianfeng",
""
]
] | Generating coherent and cohesive long-form texts is a challenging task. Previous works relied on large amounts of human-generated texts to train neural language models. However, few attempted to explicitly improve neural language models from the perspectives of coherence and cohesion. In this work, we propose a new neural language model that is equipped with two neural discriminators which provide feedback signals at the levels of sentence (cohesion) and paragraph (coherence). Our model is trained using a simple yet efficient variant of policy gradient, called negative-critical sequence training, which is proposed to eliminate the need of training a separate critic for estimating baseline. Results demonstrate the effectiveness of our approach, showing improvements over the strong baseline -- recurrent attention-based bidirectional MLE-trained neural language model. |
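The "negative-critical sequence training" idea above, using sampled rewards themselves as the baseline so that no separate critic is trained, can be illustrated with a toy REINFORCE-style update. Single-token "sequences" and the specific learning rate are simplifications for illustration, not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def negative_critical_step(theta, reward_fn, n_samples=64, lr=0.5):
    """One policy-gradient step where the batch mean reward serves as
    the baseline, eliminating the need for a trained critic."""
    p = softmax(theta)
    acts = rng.choice(len(theta), size=n_samples, p=p)
    rewards = np.array([reward_fn(a) for a in acts])
    adv = rewards - rewards.mean()      # batch baseline replaces a critic
    grad = np.zeros_like(theta)
    for a, w in zip(acts, adv):
        g = -p.copy()
        g[a] += 1.0                     # d log p(a) / d theta for softmax
        grad += w * g
    return theta + lr * grad / n_samples
```

Because the advantage is centered within each batch, actions with above-average reward gain probability mass and the rest lose it, with no value network in the loop.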
2307.00493 | Nhat Thanh Tran | Nhat Thanh Tran, Jack Xin | Fourier-Mixed Window Attention: Accelerating Informer for Long Sequence
Time-Series Forecasting | 19 pages (main), 11 pages (appendix), 8 figures | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study a fast local-global window-based attention method to accelerate
Informer for long sequence time-series forecasting. While window attention
being local is a considerable computational saving, it lacks the ability to
capture global token information which is compensated by a subsequent Fourier
transform block. Our method, named FWin, does not rely on query sparsity
hypothesis and an empirical approximation underlying the ProbSparse attention
of Informer. Through experiments on univariate and multivariate datasets, we
show that FWin transformers improve the overall prediction accuracies of
Informer while accelerating its inference speeds by 1.6 to 2 times. We also
provide a mathematical definition of FWin attention, and prove that it is
equivalent to the canonical full attention under the block diagonal
invertibility (BDI) condition of the attention matrix. The BDI is shown
experimentally to hold with high probability for typical benchmark datasets.
| [
{
"created": "Sun, 2 Jul 2023 06:48:19 GMT",
"version": "v1"
},
{
"created": "Thu, 15 Feb 2024 01:29:54 GMT",
"version": "v2"
},
{
"created": "Wed, 17 Apr 2024 06:37:30 GMT",
"version": "v3"
}
] | 2024-04-18 | [
[
"Tran",
"Nhat Thanh",
""
],
[
"Xin",
"Jack",
""
]
] | We study a fast local-global window-based attention method to accelerate Informer for long sequence time-series forecasting. While window attention being local is a considerable computational saving, it lacks the ability to capture global token information which is compensated by a subsequent Fourier transform block. Our method, named FWin, does not rely on query sparsity hypothesis and an empirical approximation underlying the ProbSparse attention of Informer. Through experiments on univariate and multivariate datasets, we show that FWin transformers improve the overall prediction accuracies of Informer while accelerating its inference speeds by 1.6 to 2 times. We also provide a mathematical definition of FWin attention, and prove that it is equivalent to the canonical full attention under the block diagonal invertibility (BDI) condition of the attention matrix. The BDI is shown experimentally to hold with high probability for typical benchmark datasets. |
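The local-attention-plus-Fourier structure described above can be sketched in a few lines of NumPy. The q = k = v simplification, window size, and FNet-style placement of the FFT are illustrative assumptions, not the FWin architecture itself:

```python
import numpy as np

def window_attention(x, w):
    """Softmax self-attention computed independently inside
    non-overlapping windows of length w (queries = keys = values = x
    here; a real model uses learned projections)."""
    n, d = x.shape
    out = np.empty_like(x)
    for s in range(0, n, w):
        blk = x[s:s + w]
        scores = blk @ blk.T / np.sqrt(d)
        scores -= scores.max(axis=1, keepdims=True)  # numerical stability
        att = np.exp(scores)
        att /= att.sum(axis=1, keepdims=True)
        out[s:s + w] = att @ blk
    return out

def fourier_mix(x):
    """FNet-style token mixing: real part of a 2-D DFT, reintroducing
    the cross-window information that local attention alone misses."""
    return np.real(np.fft.fft2(x))

def fwin_block(x, w=8):
    return fourier_mix(window_attention(x, w))
```

The speedup claimed for FWin comes from the window loop costing O(n·w·d) instead of the O(n^2·d) of full attention, while the FFT restores global mixing at O(n·d·log(nd)).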
2405.13857 | Eman Alashwali | Xiaoxin Shen and Eman Alashwali and Lorrie Faith Cranor | What Do Privacy Advertisements Communicate to Consumers? | This document is the author's manuscript for a paper appeared at the
Proceedings on Privacy Enhancing Technologies 2024(4) | null | null | null | cs.CR cs.CY cs.HC | http://creativecommons.org/licenses/by/4.0/ | When companies release marketing materials aimed at promoting their privacy
practices or highlighting specific privacy features, what do they actually
communicate to consumers? In this paper, we explore the impact of privacy
marketing on: (1) consumers' attitudes toward the organizations providing the
campaigns, (2) overall privacy awareness, and (3) the actionability of
suggested privacy advice. To this end, we investigated the impact of four
privacy advertising videos and one privacy game published by five different
technology companies. We conducted 24 semi-structured interviews with
participants randomly assigned to view one or two of the videos or play the
game. Our findings suggest that awareness of privacy features can contribute to
positive perceptions of a company or its products. The ads we tested were more
successful in communicating the advertised privacy features than the game we
tested. We observed that advertising a single privacy feature using a single
metaphor in a short ad increased awareness of the advertised feature. The game
failed to communicate privacy features or motivate study participants to use
the features. Our results also suggest that privacy campaigns can be useful for
raising awareness about privacy features and improving brand image, but may not
be the most effective way to teach viewers how to use privacy features.
| [
{
"created": "Wed, 22 May 2024 17:32:04 GMT",
"version": "v1"
},
{
"created": "Mon, 24 Jun 2024 13:04:31 GMT",
"version": "v2"
},
{
"created": "Wed, 24 Jul 2024 10:34:59 GMT",
"version": "v3"
}
] | 2024-07-25 | [
[
"Shen",
"Xiaoxin",
""
],
[
"Alashwali",
"Eman",
""
],
[
"Cranor",
"Lorrie Faith",
""
]
] | When companies release marketing materials aimed at promoting their privacy practices or highlighting specific privacy features, what do they actually communicate to consumers? In this paper, we explore the impact of privacy marketing on: (1) consumers' attitudes toward the organizations providing the campaigns, (2) overall privacy awareness, and (3) the actionability of suggested privacy advice. To this end, we investigated the impact of four privacy advertising videos and one privacy game published by five different technology companies. We conducted 24 semi-structured interviews with participants randomly assigned to view one or two of the videos or play the game. Our findings suggest that awareness of privacy features can contribute to positive perceptions of a company or its products. The ads we tested were more successful in communicating the advertised privacy features than the game we tested. We observed that advertising a single privacy feature using a single metaphor in a short ad increased awareness of the advertised feature. The game failed to communicate privacy features or motivate study participants to use the features. Our results also suggest that privacy campaigns can be useful for raising awareness about privacy features and improving brand image, but may not be the most effective way to teach viewers how to use privacy features. |
1910.09086 | Jindong Gu | Jindong Gu, Volker Tresp | Contextual Prediction Difference Analysis for Explaining Individual
Image Classifications | null | null | null | null | cs.LG cs.CV stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Much effort has been devoted to understanding the decisions of deep neural
networks in recent years. A number of model-aware saliency methods were
proposed to explain individual classification decisions by creating saliency
maps. However, they are not applicable when the parameters and the gradients of
the underlying models are unavailable. Recently, model-agnostic methods have
also received attention. As one of them, \textit{Prediction Difference
Analysis} (PDA), a probabilistic sound methodology, was proposed. In this work,
we first show that PDA can suffer from saturated classifiers. The saturation
phenomenon of classifiers exists widely in current neural network-based
classifiers. To explain the decisions of saturated classifiers better, we
further propose Contextual PDA, which runs hundreds of times faster than PDA.
The experiments show the superiority of our method by explaining image
classifications of the state-of-the-art deep convolutional neural networks.
| [
{
"created": "Mon, 21 Oct 2019 00:04:22 GMT",
"version": "v1"
},
{
"created": "Sat, 6 Jun 2020 00:41:19 GMT",
"version": "v2"
}
] | 2020-06-09 | [
[
"Gu",
"Jindong",
""
],
[
"Tresp",
"Volker",
""
]
] | Much effort has been devoted to understanding the decisions of deep neural networks in recent years. A number of model-aware saliency methods were proposed to explain individual classification decisions by creating saliency maps. However, they are not applicable when the parameters and the gradients of the underlying models are unavailable. Recently, model-agnostic methods have also received attention. As one of them, \textit{Prediction Difference Analysis} (PDA), a probabilistic sound methodology, was proposed. In this work, we first show that PDA can suffer from saturated classifiers. The saturation phenomenon of classifiers exists widely in current neural network-based classifiers. To explain the decisions of saturated classifiers better, we further propose Contextual PDA, which runs hundreds of times faster than PDA. The experiments show the superiority of our method by explaining image classifications of the state-of-the-art deep convolutional neural networks. |
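Plain Prediction Difference Analysis, the starting point of the record above, can be sketched directly: the relevance of a feature is the change in the model's output when that feature is marginalized over plausible values. This is vanilla PDA, not the Contextual PDA variant the paper proposes:

```python
import numpy as np

def prediction_difference(predict, x, i, samples):
    """Relevance of feature i for input x: how much the predicted
    probability drops when feature i is marginalized over `samples`
    drawn from its (approximate) distribution. `predict` maps a batch
    of inputs to a vector of probabilities."""
    base = predict(x[None, :])[0]
    xs = np.tile(x, (len(samples), 1))
    xs[:, i] = samples
    return base - predict(xs).mean()
```

A feature the classifier ignores gets relevance near zero; a feature the prediction depends on gets a large positive or negative score. The saturation problem the paper targets arises when `predict` is flat around `x`, so every replacement leaves the output unchanged.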
2105.02632 | Han Xu | Han Xu and Zhenjiang Hu | Analytical Differential Calculus with Integration | null | null | null | null | cs.PL | http://creativecommons.org/licenses/by/4.0/ | Differential lambda-calculus was first introduced by Thomas Ehrhard and
Laurent Regnier in 2003. Despite more than 15 years of history, little work has
been done on a differential calculus with integration. In this paper, we shall
propose a differential calculus with integration from a programming point of
view. We show its good correspondence with mathematics, which is manifested by
how we construct these reduction rules and how we preserve important
mathematical theorems in our calculus. Moreover, we highlight applications of
the calculus in incremental computation, automatic differentiation, and
computation approximation.
| [
{
"created": "Thu, 6 May 2021 13:06:55 GMT",
"version": "v1"
},
{
"created": "Fri, 7 May 2021 01:43:31 GMT",
"version": "v2"
}
] | 2021-05-10 | [
[
"Xu",
"Han",
""
],
[
"Hu",
"Zhenjiang",
""
]
] | Differential lambda-calculus was first introduced by Thomas Ehrhard and Laurent Regnier in 2003. Despite more than 15 years of history, little work has been done on a differential calculus with integration. In this paper, we shall propose a differential calculus with integration from a programming point of view. We show its good correspondence with mathematics, which is manifested by how we construct these reduction rules and how we preserve important mathematical theorems in our calculus. Moreover, we highlight applications of the calculus in incremental computation, automatic differentiation, and computation approximation. |
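One of the applications named above, automatic differentiation, has a compact programming-side illustration: forward-mode AD with dual numbers. This is the standard dual-number construction, not the paper's calculus:

```python
class Dual:
    """Dual number a + b*eps with eps^2 = 0: carrying the derivative
    alongside the value gives forward-mode automatic differentiation."""

    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)

    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * o.val, self.val * o.der + self.der * o.val)

    __rmul__ = __mul__

def derivative(f, x):
    """Exact derivative of a polynomial-style f at x via duals."""
    return f(Dual(x, 1.0)).der
```

Evaluating `f(Dual(x, 1.0))` pushes the seed derivative 1 through every operation, so the result's `der` field is f'(x) with no symbolic manipulation or finite differencing.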
2308.02968 | Param Hanji | Param Hanji and Rafa{\l} K. Mantiuk | Robust estimation of exposure ratios in multi-exposure image stacks | 11 pages, 11 figures, journal | Transactions on Computational Imaging, 9, pp.721-731, 2023 | 10.1109/TCI.2023.3301338 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Merging multi-exposure image stacks into a high dynamic range (HDR) image
requires knowledge of accurate exposure times. When exposure times are
inaccurate, for example, when they are extracted from a camera's EXIF metadata,
the reconstructed HDR images reveal banding artifacts at smooth gradients. To
remedy this, we propose to estimate exposure ratios directly from the input
images. We derive the exposure time estimation as an optimization problem, in
which pixels are selected from pairs of exposures to minimize estimation error
caused by camera noise. When pixel values are represented in the logarithmic
domain, the problem can be solved efficiently using a linear solver. We
demonstrate that the estimation can be easily made robust to pixel misalignment
caused by camera or object motion by collecting pixels from multiple spatial
tiles. The proposed automatic exposure estimation and alignment eliminates
banding artifacts in popular datasets and is essential for applications that
require physically accurate reconstructions, such as measuring the modulation
transfer function of a display. The code for the method is available.
| [
{
"created": "Sat, 5 Aug 2023 23:42:59 GMT",
"version": "v1"
},
{
"created": "Sat, 12 Aug 2023 10:36:52 GMT",
"version": "v2"
}
] | 2023-08-15 | [
[
"Hanji",
"Param",
""
],
[
"Mantiuk",
"Rafał K.",
""
]
] | Merging multi-exposure image stacks into a high dynamic range (HDR) image requires knowledge of accurate exposure times. When exposure times are inaccurate, for example, when they are extracted from a camera's EXIF metadata, the reconstructed HDR images reveal banding artifacts at smooth gradients. To remedy this, we propose to estimate exposure ratios directly from the input images. We derive the exposure time estimation as an optimization problem, in which pixels are selected from pairs of exposures to minimize estimation error caused by camera noise. When pixel values are represented in the logarithmic domain, the problem can be solved efficiently using a linear solver. We demonstrate that the estimation can be easily made robust to pixel misalignment caused by camera or object motion by collecting pixels from multiple spatial tiles. The proposed automatic exposure estimation and alignment eliminates banding artifacts in popular datasets and is essential for applications that require physically accurate reconstructions, such as measuring the modulation transfer function of a display. The code for the method is available. |
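The log-domain estimation described above reduces, for a single pair of exposures, to a least-squares fit of one constant. The sketch below shows that two-image core under simplifying assumptions; the paper additionally weights pixels by a camera noise model, collects pixels from spatial tiles for motion robustness, and solves jointly over the whole stack:

```python
import numpy as np

def exposure_ratio(img_a, img_b, lo=0.05, hi=0.95):
    """Estimate the exposure ratio between two registered exposures by
    least squares in the log domain, using only pixels that are well
    exposed in both images (values in (lo, hi) on a 0-1 scale)."""
    a, b = img_a.ravel(), img_b.ravel()
    ok = (a > lo) & (a < hi) & (b > lo) & (b < hi)
    # log b = log a + log ratio, so the least-squares log-ratio is the
    # mean log difference over the selected pixels
    return float(np.exp(np.mean(np.log(b[ok]) - np.log(a[ok]))))
```

Because the model is linear in the logarithm of the ratio, the multi-exposure generalization stays a linear system, which is why the paper can use an ordinary linear solver.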
2010.04918 | James Koppel | James Koppel, Jackson Kearl, Armando Solar-Lezama | Automatically Deriving Control-Flow Graph Generators from Operational
Semantics | null | null | 10.1145/3547648 | null | cs.PL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We develop the first theory of control-flow graphs from first principles, and
use it to create an algorithm for automatically synthesizing many variants of
control-flow graph generators from a language's operational semantics. Our
approach first introduces a new algorithm for converting a large class of
small-step operational semantics to an abstract machine. It next uses a
technique called "abstract rewriting" to automatically abstract the semantics
of a language, which is used both to directly generate a CFG from a program
("interpreted mode") and to generate standalone code, similar to a
human-written CFG generator, for any program in a language. We show how the
choice of two abstraction and projection parameters allow our approach to
synthesize several families of CFG-generators useful for different kinds of
tools. We prove the correspondence between the generated graphs and the
original semantics. We provide and prove an algorithm for automatically proving
the termination of interpreted-mode generators. In addition to our theoretical
results, we have implemented this algorithm in a tool called Mandate, and show
that it produces human-readable code on two medium-size languages with 60-80
rules, featuring nearly all intraprocedural control constructs common in modern
languages. We then showed these CFG-generators were sufficient to build two
static analyzers atop them. Our work is a promising step towards the grand
vision of being able to synthesize all desired tools from the semantics of a
programming language.
| [
{
"created": "Sat, 10 Oct 2020 06:28:11 GMT",
"version": "v1"
},
{
"created": "Fri, 22 Jul 2022 04:17:02 GMT",
"version": "v2"
}
] | 2022-07-25 | [
[
"Koppel",
"James",
""
],
[
"Kearl",
"Jackson",
""
],
[
"Solar-Lezama",
"Armando",
""
]
] | We develop the first theory of control-flow graphs from first principles, and use it to create an algorithm for automatically synthesizing many variants of control-flow graph generators from a language's operational semantics. Our approach first introduces a new algorithm for converting a large class of small-step operational semantics to an abstract machine. It next uses a technique called "abstract rewriting" to automatically abstract the semantics of a language, which is used both to directly generate a CFG from a program ("interpreted mode") and to generate standalone code, similar to a human-written CFG generator, for any program in a language. We show how the choice of two abstraction and projection parameters allow our approach to synthesize several families of CFG-generators useful for different kinds of tools. We prove the correspondence between the generated graphs and the original semantics. We provide and prove an algorithm for automatically proving the termination of interpreted-mode generators. In addition to our theoretical results, we have implemented this algorithm in a tool called Mandate, and show that it produces human-readable code on two medium-size languages with 60-80 rules, featuring nearly all intraprocedural control constructs common in modern languages. We then showed these CFG-generators were sufficient to build two static analyzers atop them. Our work is a promising step towards the grand vision of being able to synthesize all desired tools from the semantics of a programming language. |
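The "human-written CFG generator" that the record above says the synthesized code resembles looks like the recursive builder below. The mini-language AST shapes are invented for this sketch; the paper's point is that such code is derived automatically from a language's small-step semantics rather than written by hand:

```python
import itertools

def build_cfg(node, edges, fresh=None):
    """Return (entry, exit) CFG node ids for `node`, accumulating edges.

    AST shapes (illustrative): ("stmt",), ("seq", a, b),
    ("if", then_branch, else_branch), ("while", body).
    """
    fresh = fresh if fresh is not None else itertools.count()
    kind = node[0]
    if kind == "stmt":
        n = next(fresh)
        return n, n
    if kind == "seq":
        e1, x1 = build_cfg(node[1], edges, fresh)
        e2, x2 = build_cfg(node[2], edges, fresh)
        edges.add((x1, e2))
        return e1, x2
    if kind == "if":
        cond = next(fresh)
        join = next(fresh)
        for branch in node[1:]:
            e, x = build_cfg(branch, edges, fresh)
            edges.add((cond, e))
            edges.add((x, join))
        return cond, join
    if kind == "while":
        cond = next(fresh)
        e, x = build_cfg(node[1], edges, fresh)
        edges.add((cond, e))   # condition true: enter the body
        edges.add((x, cond))   # back edge to the loop header
        return cond, cond      # condition false: fall through the header
    raise ValueError(f"unknown node kind: {kind}")
```

A static analyzer of the kind the paper builds then runs a fixpoint over `edges`; the recursion mirrors the structure a synthesized "standalone mode" generator emits per language construct.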
1603.06850 | Przemys{\l}aw Daca | Przemys{\l}aw Daca, Thomas A. Henzinger, Andrey Kupriyanov | Array Folds Logic | null | null | null | null | cs.FL cs.LO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an extension to the quantifier-free theory of integer arrays which
allows us to express counting. The properties expressible in Array Folds Logic
(AFL) include statements such as "the first array cell contains the array
length," and "the array contains equally many minimal and maximal elements."
These properties cannot be expressed in quantified fragments of the theory of
arrays, nor in the theory of concatenation. Using reduction to counter
machines, we show that the satisfiability problem of AFL is PSPACE-complete,
and with a natural restriction the complexity decreases to NP. We also show
that adding either universal quantifiers or concatenation leads to
undecidability.
AFL contains terms that fold a function over an array. We demonstrate that
folding, a well-known concept from functional languages, allows us to concisely
summarize loops that count over arrays, which occurs frequently in real-life
programs. We provide a tool that can discharge proof obligations in AFL, and we
demonstrate on practical examples that our decision procedure can solve a broad
range of problems in symbolic testing and program verification.
| [
{
"created": "Tue, 22 Mar 2016 16:10:47 GMT",
"version": "v1"
},
{
"created": "Thu, 24 Mar 2016 19:49:04 GMT",
"version": "v2"
},
{
"created": "Thu, 12 May 2016 14:41:29 GMT",
"version": "v3"
}
] | 2016-05-13 | [
[
"Daca",
"Przemysław",
""
],
[
"Henzinger",
"Thomas A.",
""
],
[
"Kupriyanov",
"Andrey",
""
]
] | We present an extension to the quantifier-free theory of integer arrays which allows us to express counting. The properties expressible in Array Folds Logic (AFL) include statements such as "the first array cell contains the array length," and "the array contains equally many minimal and maximal elements." These properties cannot be expressed in quantified fragments of the theory of arrays, nor in the theory of concatenation. Using reduction to counter machines, we show that the satisfiability problem of AFL is PSPACE-complete, and with a natural restriction the complexity decreases to NP. We also show that adding either universal quantifiers or concatenation leads to undecidability. AFL contains terms that fold a function over an array. We demonstrate that folding, a well-known concept from functional languages, allows us to concisely summarize loops that count over arrays, which occurs frequently in real-life programs. We provide a tool that can discharge proof obligations in AFL, and we demonstrate on practical examples that our decision procedure can solve a broad range of problems in symbolic testing and program verification. |
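The two example properties quoted in the record above are exactly the kind of counting loops that folds summarize. In executable form (plain Python folds standing in for AFL terms):

```python
from functools import reduce

def first_cell_holds_length(arr):
    """AFL-style property: "the first array cell contains the array
    length", with the length computed by a counting fold."""
    return bool(arr) and reduce(lambda n, _: n + 1, arr, 0) == arr[0]

def equally_many_min_and_max(arr):
    """AFL-style property: "the array contains equally many minimal
    and maximal elements", expressed as a single fold whose
    accumulator is a pair of counters."""
    lo, hi = min(arr), max(arr)
    n_lo, n_hi = reduce(
        lambda acc, x: (acc[0] + (x == lo), acc[1] + (x == hi)),
        arr, (0, 0))
    return n_lo == n_hi
```

The pair-of-counters accumulator is the essential trick: AFL's decision procedure reduces such folds to counter machines, which is where the PSPACE bound in the abstract comes from.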
2401.16373 | Calvin Tsay | Joel A. Paulson and Calvin Tsay | Bayesian optimization as a flexible and efficient design framework for
sustainable process systems | 16 pages, 1 figure, 1 table | null | null | null | cs.LG math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bayesian optimization (BO) is a powerful technology for optimizing noisy
expensive-to-evaluate black-box functions, with a broad range of real-world
applications in science, engineering, economics, manufacturing, and beyond. In
this paper, we provide an overview of recent developments, challenges, and
opportunities in BO for design of next-generation process systems. After
describing several motivating applications, we discuss how advanced BO methods
have been developed to more efficiently tackle important problems in these
applications. We conclude the paper with a summary of challenges and
opportunities related to improving the quality of the probabilistic model, the
choice of internal optimization procedure used to select the next sample point,
and the exploitation of problem structure to improve sample efficiency.
| [
{
"created": "Mon, 29 Jan 2024 18:12:32 GMT",
"version": "v1"
}
] | 2024-01-30 | [
[
"Paulson",
"Joel A.",
""
],
[
"Tsay",
"Calvin",
""
]
] | Bayesian optimization (BO) is a powerful technology for optimizing noisy expensive-to-evaluate black-box functions, with a broad range of real-world applications in science, engineering, economics, manufacturing, and beyond. In this paper, we provide an overview of recent developments, challenges, and opportunities in BO for design of next-generation process systems. After describing several motivating applications, we discuss how advanced BO methods have been developed to more efficiently tackle important problems in these applications. We conclude the paper with a summary of challenges and opportunities related to improving the quality of the probabilistic model, the choice of internal optimization procedure used to select the next sample point, and the exploitation of problem structure to improve sample efficiency. |
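The BO loop surveyed above has a small from-scratch form: fit a Gaussian-process surrogate, pick the next sample by an acquisition function, repeat. This sketch uses a lower-confidence-bound acquisition over a finite grid as a simple stand-in for expected improvement and the other acquisitions the survey discusses; kernel, lengthscale, and budget are illustrative choices:

```python
import numpy as np

def rbf(a, b, ls=0.25):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp_posterior(xt, yt, xq, noise=1e-6):
    """Zero-mean GP regression with a unit-variance RBF kernel."""
    k = rbf(xt, xt) + noise * np.eye(len(xt))
    ks = rbf(xt, xq)
    sol = np.linalg.solve(k, ks)
    mu = sol.T @ yt
    var = np.maximum(1.0 - np.sum(ks * sol, axis=0), 1e-12)
    return mu, np.sqrt(var)

def bayes_opt(f, grid, n_init=3, n_iter=15, kappa=2.0, seed=0):
    """Minimize a black-box f over a finite candidate grid with a
    lower-confidence-bound acquisition mu - kappa * sd."""
    rng = np.random.default_rng(seed)
    xt = rng.choice(grid, size=n_init, replace=False)
    yt = f(xt)
    for _ in range(n_iter):
        mu, sd = gp_posterior(xt, yt, grid)
        x_next = grid[np.argmin(mu - kappa * sd)]  # optimistic pick
        xt = np.append(xt, x_next)
        yt = np.append(yt, f(x_next))
    best = np.argmin(yt)
    return xt[best], yt[best]
```

The survey's three challenge areas map directly onto the three pieces here: the surrogate (`gp_posterior`), the inner acquisition optimization (the `argmin` over the grid), and structure exploitation (absent in this toy, which is exactly what makes it a baseline).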
2005.11957 | Tianshi Li | Tianshi Li, Jackie (Junrui) Yang, Cori Faklaris, Jennifer King, Yuvraj
Agarwal, Laura Dabbish, Jason I. Hong | Decentralized is not risk-free: Understanding public perceptions of
privacy-utility trade-offs in COVID-19 contact-tracing apps | 21 pages, 8 figures | null | null | null | cs.HC cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Contact-tracing apps have potential benefits in helping health authorities to
act swiftly to halt the spread of COVID-19. However, their effectiveness is
heavily dependent on their installation rate, which may be influenced by
people's perceptions of the utility of these apps and any potential privacy
risks due to the collection and releasing of sensitive user data (e.g., user
identity and location). In this paper, we present a survey study that examined
people's willingness to install six different contact-tracing apps after
informing them of the risks and benefits of each design option (with a
U.S.-only sample on Amazon Mechanical Turk, $N=208$). The six app designs
covered two major design dimensions (centralized vs decentralized, basic
contact tracing vs. also providing hotspot information), grounded in our
analysis of existing contact-tracing app proposals.
Contrary to assumptions of some prior work, we found that the majority of
people in our sample preferred to install apps that use a centralized server
for contact tracing, as they are more willing to allow a centralized authority
to access the identity of app users rather than allowing tech-savvy users to
infer the identity of diagnosed users. We also found that the majority of our
sample preferred to install apps that share diagnosed users' recent locations
in public places to show hotspots of infection. Our results suggest that apps
using a centralized architecture with strong security protection to do basic
contact tracing and providing users with other useful information such as
hotspots of infection in public places may achieve a high adoption rate in the
U.S.
| [
{
"created": "Mon, 25 May 2020 07:50:51 GMT",
"version": "v1"
}
] | 2020-05-26 | [
  [
    "Li",
    "Tianshi",
    ""
  ],
  [
    "Yang",
    "Jackie (Junrui)",
    ""
  ],
[
"Faklaris",
"Cori",
""
],
[
"King",
"Jennifer",
""
],
[
"Agarwal",
"Yuvraj",
""
],
[
"Dabbish",
"Laura",
""
],
[
"Hong",
"Jason I.",
""
]
] | Contact-tracing apps have potential benefits in helping health authorities to act swiftly to halt the spread of COVID-19. However, their effectiveness is heavily dependent on their installation rate, which may be influenced by people's perceptions of the utility of these apps and any potential privacy risks due to the collection and releasing of sensitive user data (e.g., user identity and location). In this paper, we present a survey study that examined people's willingness to install six different contact-tracing apps after informing them of the risks and benefits of each design option (with a U.S.-only sample on Amazon Mechanical Turk, $N=208$). The six app designs covered two major design dimensions (centralized vs decentralized, basic contact tracing vs. also providing hotspot information), grounded in our analysis of existing contact-tracing app proposals. Contrary to assumptions of some prior work, we found that the majority of people in our sample preferred to install apps that use a centralized server for contact tracing, as they are more willing to allow a centralized authority to access the identity of app users rather than allowing tech-savvy users to infer the identity of diagnosed users. We also found that the majority of our sample preferred to install apps that share diagnosed users' recent locations in public places to show hotspots of infection. Our results suggest that apps using a centralized architecture with strong security protection to do basic contact tracing and providing users with other useful information such as hotspots of infection in public places may achieve a high adoption rate in the U.S. |
2004.01817 | Xuelu Li | Xuelu Li and Vishal Monga | Group Based Deep Shared Feature Learning for Fine-grained Image
Classification | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fine-grained image classification has emerged as a significant challenge
because objects in such images have small inter-class visual differences but
with large variations in pose, lighting, and viewpoints, etc. Most existing
work focuses on highly customized feature extraction via deep network
architectures which have been shown to deliver state of the art performance.
Given that images from distinct classes in fine-grained classification share
significant features of interest, we present a new deep network architecture
that explicitly models shared features and removes their effect to achieve
enhanced classification results. Our modeling of shared features is based on a
new group based learning wherein existing classes are divided into groups and
multiple shared feature patterns are discovered (learned). We call this
framework Group based deep Shared Feature Learning (GSFL) and the resulting
learned network as GSFL-Net. Specifically, the proposed GSFL-Net develops a
specially designed autoencoder which is constrained by a newly proposed Feature
Expression Loss to decompose a set of features into their constituent shared
and discriminative components. During inference, only the discriminative
feature component is used to accomplish the classification task. A key benefit
of our specialized autoencoder is that it is versatile and can be combined with
state-of-the-art fine-grained feature extraction models and trained together
with them to improve their performance directly. Experiments on benchmark
datasets show that GSFL-Net can enhance classification accuracy over the state
of the art with a more interpretable architecture.
| [
{
"created": "Sat, 4 Apr 2020 00:01:11 GMT",
"version": "v1"
}
] | 2020-04-07 | [
[
"Li",
"Xuelu",
""
],
[
"Monga",
"Vishal",
""
]
] | Fine-grained image classification has emerged as a significant challenge because objects in such images have small inter-class visual differences but with large variations in pose, lighting, and viewpoints, etc. Most existing work focuses on highly customized feature extraction via deep network architectures which have been shown to deliver state of the art performance. Given that images from distinct classes in fine-grained classification share significant features of interest, we present a new deep network architecture that explicitly models shared features and removes their effect to achieve enhanced classification results. Our modeling of shared features is based on a new group based learning wherein existing classes are divided into groups and multiple shared feature patterns are discovered (learned). We call this framework Group based deep Shared Feature Learning (GSFL) and the resulting learned network as GSFL-Net. Specifically, the proposed GSFL-Net develops a specially designed autoencoder which is constrained by a newly proposed Feature Expression Loss to decompose a set of features into their constituent shared and discriminative components. During inference, only the discriminative feature component is used to accomplish the classification task. A key benefit of our specialized autoencoder is that it is versatile and can be combined with state-of-the-art fine-grained feature extraction models and trained together with them to improve their performance directly. Experiments on benchmark datasets show that GSFL-Net can enhance classification accuracy over the state of the art with a more interpretable architecture. |
1404.3407 | Ruslan Shevchenko | Ruslan Shevchenko | Annotated imports | null | null | null | null | cs.PL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Presented simple extensions to scala language related to import statements:
exported imports, which provide ability to reuse sequence of import clauses in
composable form and default rewriters, which provide mechanism for pluggable
macro-based AST transformation of overall compilation unit, activated by import
of library object. Using these facilities not only allows more compact code, it
prevents application programmer from producing certain type of errors too and
allows to implement local language extension as libraries on top of standard
compiler. Part of discussed extensions is submitted to scala language committee
as pre-sip \cite{ai-presip} and can be used as first step for refining imports
semantics in the future version of scala language.
| [
{
"created": "Sun, 13 Apr 2014 18:20:40 GMT",
"version": "v1"
}
] | 2014-04-15 | [
[
"Shevchenko",
"Ruslan",
""
]
] | We present simple extensions to the Scala language related to import statements: exported imports, which provide the ability to reuse a sequence of import clauses in composable form, and default rewriters, which provide a mechanism for pluggable macro-based AST transformation of the whole compilation unit, activated by importing a library object. Using these facilities not only allows more compact code; it also prevents the application programmer from producing certain types of errors and allows local language extensions to be implemented as libraries on top of the standard compiler. Part of the discussed extensions has been submitted to the Scala language committee as a pre-SIP \cite{ai-presip} and can serve as a first step toward refining import semantics in a future version of the Scala language. |
1903.11782 | Hamed Pezeshki | Hamed Pezeshki, Masoumeh Sadeghi, Martin Haenggi, and J. Nicholas
Laneman | Anywhere Decoding: Low-Overhead Uplink Interference Management for
Wireless Networks | null | null | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Inter-cell interference (ICI) is one of the major performance-limiting
factors in the context of modern cellular systems. To tackle ICI, coordinated
multi-point (CoMP) schemes have been proposed as a key technology for
next-generation mobile communication systems. Although CoMP schemes offer
promising theoretical gains, their performance could degrade significantly
because of practical issues such as limited backhaul. To address this issue, we
explore a novel uplink interference management scheme called anywhere decoding,
which requires exchanging just a few bits of information per coding interval
among the base stations (BSs). In spite of the low overhead of anywhere
decoding, we observe considerable gains in the outage probability performance
of cell-edge users, compared to no cooperation between BSs. Additionally,
asymptotic results of the outage probability for high-SNR regimes demonstrate
that anywhere decoding schemes achieve full spatial diversity through multiple
decoding opportunities, and they are within 1.5 dB of full cooperation.
| [
{
"created": "Thu, 28 Mar 2019 04:34:02 GMT",
"version": "v1"
}
] | 2019-03-29 | [
[
"Pezeshki",
"Hamed",
""
],
[
"Sadeghi",
"Masoumeh",
""
],
[
"Haenggi",
"Martin",
""
],
[
"Laneman",
"J. Nicholas",
""
]
] | Inter-cell interference (ICI) is one of the major performance-limiting factors in the context of modern cellular systems. To tackle ICI, coordinated multi-point (CoMP) schemes have been proposed as a key technology for next-generation mobile communication systems. Although CoMP schemes offer promising theoretical gains, their performance could degrade significantly because of practical issues such as limited backhaul. To address this issue, we explore a novel uplink interference management scheme called anywhere decoding, which requires exchanging just a few bits of information per coding interval among the base stations (BSs). In spite of the low overhead of anywhere decoding, we observe considerable gains in the outage probability performance of cell-edge users, compared to no cooperation between BSs. Additionally, asymptotic results of the outage probability for high-SNR regimes demonstrate that anywhere decoding schemes achieve full spatial diversity through multiple decoding opportunities, and they are within 1.5 dB of full cooperation. |
2107.09257 | Raunak Srivastava | Raunak Srivastava, Roshan Sah and Kaushik Das | Attitude and In-orbit Residual Magnetic Moment Estimation of Small
Satellites Using only Magnetometer | 10 pages, 8 figures, Accepted in Small Satellite conference 2021 | null | null | null | cs.RO cs.SY eess.SY | http://creativecommons.org/licenses/by/4.0/ | Attitude estimation or determination is a fundamental task for satellites to
remain effectively operational. This task is furthermore complicated on small
satellites by the limited space and computational power available on-board.
This, coupled with a usually low budget, restricts small satellites from using
high precision sensors for its especially important task of attitude
estimation. On top of this, small satellites, on account of their size and
weight, are comparatively more sensitive to environmental or orbital
disturbances as compared to their larger counterparts. Magnetic disturbance
forms the major contributor to orbital disturbances on small satellites in
Lower Earth Orbits (LEO). This magnetic disturbance depends on the Residual
Magnetic Moment (RMM) of the satellite itself, which for higher accuracy should
be determined in real-time. This paper presents a method for in-orbit
estimation of the satellite magnetic dipole using a Random Walk Model in order
to circumnavigate the inaccuracy arising due to unknown orbital magnetic
disturbances. It is also ensured that the dipole as well as attitude estimation
of the satellite is done using only a magnetometer as the sensor.
| [
{
"created": "Tue, 20 Jul 2021 04:31:29 GMT",
"version": "v1"
}
] | 2021-07-21 | [
[
"Srivastava",
"Raunak",
""
],
[
"Sah",
"Roshan",
""
],
[
"Das",
"Kaushik",
""
]
] | Attitude estimation or determination is a fundamental task for satellites to remain effectively operational. This task is further complicated on small satellites by the limited space and computational power available on-board. This, coupled with a usually low budget, restricts small satellites from using high-precision sensors for the especially important task of attitude estimation. On top of this, small satellites, on account of their size and weight, are more sensitive to environmental and orbital disturbances than their larger counterparts. Magnetic disturbance is the major contributor to orbital disturbances on small satellites in Low Earth Orbit (LEO). This magnetic disturbance depends on the Residual Magnetic Moment (RMM) of the satellite itself, which for higher accuracy should be determined in real-time. This paper presents a method for in-orbit estimation of the satellite magnetic dipole using a Random Walk Model in order to circumvent the inaccuracy arising from unknown orbital magnetic disturbances. Both the dipole estimation and the attitude estimation of the satellite are performed using only a magnetometer as the sensor. |
cs/0510003 | Giuseppe Abreu | Giuseppe Thadeu Freitas de Abreu | Generalized ABBA Space-Time Block Codes | 47 pages, 6 figures, Matlab codes included | null | null | null | cs.IT math.IT | null | Linear space-time block codes (STBCs) of unitary rate and full diversity,
systematically constructed over arbitrary constellations for any number of
transmit antennas are introduced. The codes are obtained by generalizing the
existing ABBA STBCs, a.k.a quasi-orthogonal STBCs (QO-STBCs). Furthermore, a
fully orthogonal (symbol-by-symbol) decoder for the new generalized ABBA
(GABBA) codes is provided. This remarkably low-complexity decoder relies on
partition orthogonality properties of the code structure to decompose the
received signal vector into lower-dimension tuples, each dependent only on
certain subsets of the transmitted symbols. Orthogonal decodability results
from the nested application of this technique, with no matrix inversion or
iterative signal processing required. The exact bit-error-rate probability of
GABBA codes over generalized fading channels with maximum likelihood (ML)
decoding is evaluated analytically and compared against simulation results
obtained with the proposed orthogonal decoder. The comparison reveals that the
proposed GABBA solution, despite its very low complexity, achieves nearly the
same performance of the bound corresponding to the ML-decoded system,
especially in systems with large numbers of antennas.
| [
{
"created": "Sun, 2 Oct 2005 14:10:11 GMT",
"version": "v1"
}
] | 2007-07-13 | [
[
"de Abreu",
"Giuseppe Thadeu Freitas",
""
]
] | Linear space-time block codes (STBCs) of unitary rate and full diversity, systematically constructed over arbitrary constellations for any number of transmit antennas are introduced. The codes are obtained by generalizing the existing ABBA STBCs, a.k.a quasi-orthogonal STBCs (QO-STBCs). Furthermore, a fully orthogonal (symbol-by-symbol) decoder for the new generalized ABBA (GABBA) codes is provided. This remarkably low-complexity decoder relies on partition orthogonality properties of the code structure to decompose the received signal vector into lower-dimension tuples, each dependent only on certain subsets of the transmitted symbols. Orthogonal decodability results from the nested application of this technique, with no matrix inversion or iterative signal processing required. The exact bit-error-rate probability of GABBA codes over generalized fading channels with maximum likelihood (ML) decoding is evaluated analytically and compared against simulation results obtained with the proposed orthogonal decoder. The comparison reveals that the proposed GABBA solution, despite its very low complexity, achieves nearly the same performance of the bound corresponding to the ML-decoded system, especially in systems with large numbers of antennas. |
1910.02120 | Cameron R. Wolfe | Binhang Yuan and Cameron R. Wolfe and Chen Dun and Yuxin Tang and
Anastasios Kyrillidis and Christopher M. Jermaine | Distributed Learning of Deep Neural Networks using Independent Subnet
Training | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Distributed machine learning (ML) can bring more computational resources to
bear than single-machine learning, thus enabling reductions in training time.
Distributed learning partitions models and data over many machines, allowing
model and dataset sizes beyond the available compute power and memory of a
single machine. In practice though, distributed ML is challenging when
distribution is mandatory, rather than chosen by the practitioner. In such
scenarios, data could unavoidably be separated among workers due to limited
memory capacity per worker or even because of data privacy issues. There,
existing distributed methods will utterly fail due to dominant transfer costs
across workers, or do not even apply.
We propose a new approach to distributed fully connected neural network
learning, called independent subnet training (IST), to handle these cases. In
IST, the original network is decomposed into a set of narrow subnetworks with
the same depth. These subnetworks are then trained locally before parameters
are exchanged to produce new subnets and the training cycle repeats. Such a
naturally "model parallel" approach limits memory usage by storing only a
portion of network parameters on each device. Additionally, no requirements
exist for sharing data between workers (i.e., subnet training is local and
independent) and communication volume and frequency are reduced by decomposing
the original network into independent subnets. These properties of IST can cope
with issues due to distributed data, slow interconnects, or limited device
memory, making IST a suitable approach for cases of mandatory distribution. We
show experimentally that IST results in training times that are much lower than
common distributed learning approaches.
| [
{
"created": "Fri, 4 Oct 2019 19:46:16 GMT",
"version": "v1"
},
{
"created": "Fri, 7 Feb 2020 20:01:11 GMT",
"version": "v2"
},
{
"created": "Wed, 4 Mar 2020 02:17:29 GMT",
"version": "v3"
},
{
"created": "Mon, 8 Jun 2020 23:29:57 GMT",
"version": "v4"
},
{
"created": "Thu, 5 Nov 2020 15:11:47 GMT",
"version": "v5"
},
{
"created": "Mon, 7 Mar 2022 20:34:15 GMT",
"version": "v6"
},
{
"created": "Mon, 18 Apr 2022 20:19:23 GMT",
"version": "v7"
}
] | 2022-04-20 | [
[
"Yuan",
"Binhang",
""
],
[
"Wolfe",
"Cameron R.",
""
],
[
"Dun",
"Chen",
""
],
[
"Tang",
"Yuxin",
""
],
[
"Kyrillidis",
"Anastasios",
""
],
[
"Jermaine",
"Christopher M.",
""
]
] | Distributed machine learning (ML) can bring more computational resources to bear than single-machine learning, thus enabling reductions in training time. Distributed learning partitions models and data over many machines, allowing model and dataset sizes beyond the available compute power and memory of a single machine. In practice though, distributed ML is challenging when distribution is mandatory, rather than chosen by the practitioner. In such scenarios, data could unavoidably be separated among workers due to limited memory capacity per worker or even because of data privacy issues. There, existing distributed methods will utterly fail due to dominant transfer costs across workers, or do not even apply. We propose a new approach to distributed fully connected neural network learning, called independent subnet training (IST), to handle these cases. In IST, the original network is decomposed into a set of narrow subnetworks with the same depth. These subnetworks are then trained locally before parameters are exchanged to produce new subnets and the training cycle repeats. Such a naturally "model parallel" approach limits memory usage by storing only a portion of network parameters on each device. Additionally, no requirements exist for sharing data between workers (i.e., subnet training is local and independent) and communication volume and frequency are reduced by decomposing the original network into independent subnets. These properties of IST can cope with issues due to distributed data, slow interconnects, or limited device memory, making IST a suitable approach for cases of mandatory distribution. We show experimentally that IST results in training times that are much lower than common distributed learning approaches. |
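The training cycle described in the record above — partition the hidden units into disjoint narrow subnets, train each subnet locally on one worker's data shard, then recombine parameters and repartition — can be sketched for a one-hidden-layer regression network. This is a simplified illustration under stated assumptions, not the authors' implementation: the two "workers" are simulated sequentially in one process, local training is plain full-batch gradient descent, and each subnet fits a proportional share of the targets so that the recombined network's summed output approximates them.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy regression data, sharded across two simulated workers
X = rng.normal(size=(200, 4))
w_true = rng.normal(size=4)
y = np.tanh(X @ w_true) + 0.05 * rng.normal(size=200)
shards = [(X[:100], y[:100]), (X[100:], y[100:])]

H = 16  # hidden width of the full network
W1 = rng.normal(scale=0.5, size=(4, H))
W2 = rng.normal(scale=0.5, size=H)

def predict(Xb, W1, W2):
    return np.tanh(Xb @ W1) @ W2

def local_train(Xl, t, W1s, W2s, lr=0.05, steps=100):
    # local steps touch only the subnet's parameters -- no gradient exchange
    for _ in range(steps):
        h = np.tanh(Xl @ W1s)
        err = h @ W2s - t
        gW2 = h.T @ err / len(t)
        gh = np.outer(err, W2s) * (1 - h ** 2)
        gW1 = Xl.T @ gh / len(t)
        W1s -= lr * gW1
        W2s -= lr * gW2
    return W1s, W2s

for cycle in range(30):
    # repartition hidden units into disjoint subnets, one per worker
    parts = np.array_split(rng.permutation(H), len(shards))
    for (Xl, yl), idx in zip(shards, parts):
        # each subnet fits its proportional share of the targets,
        # so the recombined (summed) output approximates y
        t = yl * (len(idx) / H)
        W1[:, idx], W2[idx] = local_train(Xl, t, W1[:, idx].copy(), W2[idx].copy())

mse = float(np.mean((predict(X, W1, W2) - y) ** 2))
print("full-network MSE after IST cycles:", round(mse, 4))
```

The point of the repartition-train-exchange loop in the paper is that each device stores and communicates only its subnet's parameters; here both workers share one process, but the key property — no communication during local steps, only periodic parameter exchange — is preserved.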
1505.00278 | Michal \v{C}ertick\'y | Bj\"orn Persson Mattsson, Tom\'a\v{s} Vajda, Michal \v{C}ertick\'y | Automatic Observer Script for StarCraft: Brood War Bot Games (technical
report) | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This short report describes an automated BWAPI-based script developed for
live streams of a StarCraft Brood War bot tournament, SSCAIT. The script
controls the in-game camera in order to follow the relevant events and improve
the viewer experience. We enumerate its novel features and provide a few
implementation notes.
| [
{
"created": "Fri, 1 May 2015 20:41:19 GMT",
"version": "v1"
}
] | 2015-05-05 | [
[
"Mattsson",
"Björn Persson",
""
],
[
"Vajda",
"Tomáš",
""
],
[
"Čertický",
"Michal",
""
]
] | This short report describes an automated BWAPI-based script developed for live streams of a StarCraft Brood War bot tournament, SSCAIT. The script controls the in-game camera in order to follow the relevant events and improve the viewer experience. We enumerate its novel features and provide a few implementation notes. |
2105.14105 | Dominik Schildknecht | Dominik Schildknecht, Anastasia N. Popova, Jack Stellwagen, Matt
Thomson | Reinforcement Learning reveals fundamental limits on the mixing of
active particles | null | Soft Matter, 2022 | 10.1039/D1SM01400E | null | cs.LG cs.SY eess.SY nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The control of far-from-equilibrium physical systems, including active
materials, has emerged as an important area for the application of
reinforcement learning (RL) strategies to derive control policies for physical
systems. In active materials, non-linear dynamics and long-range interactions
between particles prohibit closed-form descriptions of the system's dynamics
and prevent explicit solutions to optimal control problems. Due to fundamental
challenges in solving for explicit control strategies, RL has emerged as an
approach to derive control strategies for far-from-equilibrium active matter
systems. However, an important open question is how the mathematical structure
and the physical properties of the active matter systems determine the
tractability of RL for learning control policies. In this work, we show that RL
can only find good strategies to the canonical active matter task of mixing for
systems that combine attractive and repulsive particle interactions. Using
mathematical results from dynamical systems theory, we relate the availability
of both interaction types with the existence of hyperbolic dynamics and the
ability of RL to find homogeneous mixing strategies. In particular, we show
that for drag-dominated translational-invariant particle systems, hyperbolic
dynamics and, therefore, mixing requires combining attractive and repulsive
interactions. Broadly, our work demonstrates how fundamental physical and
mathematical properties of dynamical systems can enable or constrain
reinforcement learning-based control.
| [
{
"created": "Fri, 28 May 2021 21:04:55 GMT",
"version": "v1"
}
] | 2021-12-23 | [
[
"Schildknecht",
"Dominik",
""
],
[
"Popova",
"Anastasia N.",
""
],
[
"Stellwagen",
"Jack",
""
],
[
"Thomson",
"Matt",
""
]
] | The control of far-from-equilibrium physical systems, including active materials, has emerged as an important area for the application of reinforcement learning (RL) strategies to derive control policies for physical systems. In active materials, non-linear dynamics and long-range interactions between particles prohibit closed-form descriptions of the system's dynamics and prevent explicit solutions to optimal control problems. Due to fundamental challenges in solving for explicit control strategies, RL has emerged as an approach to derive control strategies for far-from-equilibrium active matter systems. However, an important open question is how the mathematical structure and the physical properties of the active matter systems determine the tractability of RL for learning control policies. In this work, we show that RL can only find good strategies to the canonical active matter task of mixing for systems that combine attractive and repulsive particle interactions. Using mathematical results from dynamical systems theory, we relate the availability of both interaction types with the existence of hyperbolic dynamics and the ability of RL to find homogeneous mixing strategies. In particular, we show that for drag-dominated translational-invariant particle systems, hyperbolic dynamics and, therefore, mixing requires combining attractive and repulsive interactions. Broadly, our work demonstrates how fundamental physical and mathematical properties of dynamical systems can enable or constrain reinforcement learning-based control. |
1704.04218 | Samuel Coogan | Samuel Coogan | A Contractive Approach to Separable Lyapunov Functions for Monotone
Systems | arXiv admin note: text overlap with arXiv:1609.06258 | null | null | null | cs.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Monotone systems preserve a partial ordering of states along system
trajectories and are often amenable to separable Lyapunov functions that are
either the sum or the maximum of a collection of functions of a scalar
argument. In this paper, we consider constructing separable Lyapunov functions
for monotone systems that are also contractive, that is, the distance between
any pair of trajectories exponentially decreases. The distance is defined in
terms of a possibly state-dependent norm. When this norm is a weighted
one-norm, we obtain conditions which lead to sum-separable Lyapunov functions,
and when this norm is a weighted infinity-norm, symmetric conditions lead to
max-separable Lyapunov functions. In addition, we consider two classes of
Lyapunov functions: the first class is separable along the system's state, and
the second class is separable along components of the system's vector field.
The latter case is advantageous for many practically motivated systems for
which it is difficult to measure the system's state but easier to measure the
system's velocity or rate of change. In addition, we present an algorithm based
on sum-of-squares programming to compute such separable Lyapunov functions. We
provide several examples to demonstrate our results.
| [
{
"created": "Thu, 13 Apr 2017 17:32:57 GMT",
"version": "v1"
},
{
"created": "Tue, 24 Oct 2017 22:41:17 GMT",
"version": "v2"
}
] | 2017-10-26 | [
[
"Coogan",
"Samuel",
""
]
] | Monotone systems preserve a partial ordering of states along system trajectories and are often amenable to separable Lyapunov functions that are either the sum or the maximum of a collection of functions of a scalar argument. In this paper, we consider constructing separable Lyapunov functions for monotone systems that are also contractive, that is, the distance between any pair of trajectories exponentially decreases. The distance is defined in terms of a possibly state-dependent norm. When this norm is a weighted one-norm, we obtain conditions which lead to sum-separable Lyapunov functions, and when this norm is a weighted infinity-norm, symmetric conditions lead to max-separable Lyapunov functions. In addition, we consider two classes of Lyapunov functions: the first class is separable along the system's state, and the second class is separable along components of the system's vector field. The latter case is advantageous for many practically motivated systems for which it is difficult to measure the system's state but easier to measure the system's velocity or rate of change. In addition, we present an algorithm based on sum-of-squares programming to compute such separable Lyapunov functions. We provide several examples to demonstrate our results. |
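The weighted one-norm condition in the record above can be checked numerically on a small example. The sketch below is an illustration under assumptions, not taken from the paper: for the monotone system dx1/dt = -2 x1 + tanh(x2), dx2/dt = tanh(x1) - 2 x2, the off-diagonal Jacobian entries sech^2(.) are nonnegative and every column sum of the Jacobian is at most -2 + 1 = -1, so the (sum-separable) one-norm distance between any two trajectories should decay at least like e^{-t}.

```python
import numpy as np

def f(x):
    # monotone system: nonnegative off-diagonal Jacobian entries,
    # each Jacobian column sum <= -1 (one-norm matrix measure bound)
    return np.array([-2.0 * x[0] + np.tanh(x[1]),
                     np.tanh(x[0]) - 2.0 * x[1]])

def rk4_step(x, dt):
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# two trajectories; V(x - z) = |x1 - z1| + |x2 - z2| is the
# sum-separable distance whose decrease certifies contraction
x = np.array([1.5, -0.8])
z = np.array([-0.3, 0.9])
dt, steps = 0.01, 200   # integrate to t = 2
dists = [float(np.abs(x - z).sum())]
for _ in range(steps):
    x, z = rk4_step(x, dt), rk4_step(z, dt)
    dists.append(float(np.abs(x - z).sum()))

print(f"one-norm distance: {dists[0]:.3f} -> {dists[-1]:.3f}")
```

A decay by at least a factor e^{-2} ≈ 0.135 over t = 2 is what the matrix-measure bound predicts; the monotone decrease of the weighted one-norm distance is exactly the kind of sum-separable contraction certificate the paper constructs.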
2206.05988 | Shoki Miyagawa | Shoki Miyagawa, Atsuyoshi Yano, Naoko Sawada and Isamu Ogawa | High-Dimensional Bayesian Optimization with Constraints: Application to
Powder Weighing | 14 pages, 6 figures, accepted to PDPTA 2022 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bayesian optimization works effectively optimizing parameters in black-box
problems. However, this method did not work for high-dimensional parameters in
limited trials. Parameters can be efficiently explored by nonlinearly embedding
them into a low-dimensional space; however, the constraints cannot be
considered. We proposed combining parameter decomposition by introducing
disentangled representation learning into nonlinear embedding to consider both
known equality and unknown inequality constraints in high-dimensional Bayesian
optimization. We applied the proposed method to a powder weighing task as a
usage scenario. Based on the experimental results, the proposed method
considers the constraints and contributes to reducing the number of trials by
approximately 66% compared to manual parameter tuning.
| [
{
"created": "Mon, 13 Jun 2022 09:14:06 GMT",
"version": "v1"
}
] | 2022-06-14 | [
[
"Miyagawa",
"Shoki",
""
],
[
"Yano",
"Atsuyoshi",
""
],
[
"Sawada",
"Naoko",
""
],
[
"Ogawa",
"Isamu",
""
]
] | Bayesian optimization is effective for optimizing parameters in black-box problems, but it does not work well for high-dimensional parameters when the number of trials is limited. Parameters can be explored efficiently by nonlinearly embedding them into a low-dimensional space; however, constraints cannot then be taken into account. We propose combining parameter decomposition, by introducing disentangled representation learning into the nonlinear embedding, to handle both known equality and unknown inequality constraints in high-dimensional Bayesian optimization. We applied the proposed method to a powder weighing task as a usage scenario. The experimental results show that the proposed method respects the constraints and reduces the number of trials by approximately 66% compared to manual parameter tuning. |
1905.10990 | Frederik Diehl | Frederik Diehl | Edge Contraction Pooling for Graph Neural Networks | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph Neural Network (GNN) research has concentrated on improving
convolutional layers, with little attention paid to developing graph pooling
layers. Yet pooling layers can enable GNNs to reason over abstracted groups of
nodes instead of single nodes. To close this gap, we propose a graph pooling
layer relying on the notion of edge contraction: EdgePool learns a localized
and sparse hard pooling transform. We show that EdgePool outperforms
alternative pooling methods, can be easily integrated into most GNN models, and
improves performance on both node and graph classification.
| [
{
"created": "Mon, 27 May 2019 06:18:24 GMT",
"version": "v1"
}
] | 2019-05-28 | [
[
"Diehl",
"Frederik",
""
]
] | Graph Neural Network (GNN) research has concentrated on improving convolutional layers, with little attention paid to developing graph pooling layers. Yet pooling layers can enable GNNs to reason over abstracted groups of nodes instead of single nodes. To close this gap, we propose a graph pooling layer relying on the notion of edge contraction: EdgePool learns a localized and sparse hard pooling transform. We show that EdgePool outperforms alternative pooling methods, can be easily integrated into most GNN models, and improves performance on both node and graph classification. |
2104.04733 | Nadeem Yousaf | Nadeem Yousaf, Sarfaraz Hussein, Waqas Sultani | Estimation of BMI from Facial Images using Semantic Segmentation based
Region-Aware Pooling | Accepted for publication in computers in biology and medicine | Computers in Biology and Medicine Volume 133, June 2021, Pages
104392 | 10.1016/j.compbiomed.2021.104392 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Body-Mass-Index (BMI) conveys important information about one's life, such as
health and socio-economic conditions. Large-scale automatic estimation of BMI
can help predict several societal behaviors such as health, job opportunities,
friendships, and popularity. Recent works have employed either hand-crafted
geometrical face features or face-level deep convolutional neural network
features for face-to-BMI prediction. Although useful, hand-crafted geometrical
features lack generalizability, and face-level deep features miss the detailed
local information that is essential for exact BMI prediction. In this paper, we
propose to use deep features pooled from different face regions (eyes, nose,
eyebrows, lips, etc.) and demonstrate that this explicit pooling from face
regions can significantly boost the performance of BMI prediction. To address
the problem of accurate, pixel-level face-region localization, we propose to
use face semantic segmentation in our framework. Extensive experiments are
performed using different Convolutional Neural Network (CNN) backbones,
including FaceNet and VGG-face, on three publicly available datasets:
VisualBMI, Bollywood, and VIP attributes. Experimental results demonstrate
that, compared to recent works, the proposed Reg-GAP gives a percentage
improvement of 22.4\% on VIP-attribute, 3.3\% on VisualBMI, and 63.09\% on the
Bollywood dataset.
| [
{
"created": "Sat, 10 Apr 2021 10:53:21 GMT",
"version": "v1"
}
] | 2021-04-26 | [
[
"Yousaf",
"Nadeem",
""
],
[
"Hussein",
"Sarfaraz",
""
],
[
"Sultani",
"Waqas",
""
]
] | Body-Mass-Index (BMI) conveys important information about one's life, such as health and socio-economic conditions. Large-scale automatic estimation of BMI can help predict several societal behaviors such as health, job opportunities, friendships, and popularity. Recent works have employed either hand-crafted geometrical face features or face-level deep convolutional neural network features for face-to-BMI prediction. Although useful, hand-crafted geometrical features lack generalizability, and face-level deep features miss the detailed local information that is essential for exact BMI prediction. In this paper, we propose to use deep features pooled from different face regions (eyes, nose, eyebrows, lips, etc.) and demonstrate that this explicit pooling from face regions can significantly boost the performance of BMI prediction. To address the problem of accurate, pixel-level face-region localization, we propose to use face semantic segmentation in our framework. Extensive experiments are performed using different Convolutional Neural Network (CNN) backbones, including FaceNet and VGG-face, on three publicly available datasets: VisualBMI, Bollywood, and VIP attributes. Experimental results demonstrate that, compared to recent works, the proposed Reg-GAP gives a percentage improvement of 22.4\% on VIP-attribute, 3.3\% on VisualBMI, and 63.09\% on the Bollywood dataset. |
2404.08997 | Ryan Cotterell | Ryan Cotterell, Thomas M\"uller, Alexander Fraser, Hinrich Sch\"utze | Labeled Morphological Segmentation with Semi-Markov Models | CoNLL 2015 | null | 10.18653/v1/K15-1017 | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | We present labeled morphological segmentation, an alternative view of
morphological processing that unifies several tasks. From an annotation
standpoint, we additionally introduce a new hierarchy of morphotactic tagsets.
Finally, we develop \modelname, a discriminative morphological segmentation
system that, contrary to previous work, explicitly models morphotactics. We
show that \textsc{chipmunk} yields improved performance on three tasks for all
six languages: (i) morphological segmentation, (ii) stemming and (iii)
morphological tag classification. On morphological segmentation, our method
shows absolute improvements of 2--6 points $F_1$ over the baseline.
| [
{
"created": "Sat, 13 Apr 2024 12:51:53 GMT",
"version": "v1"
}
] | 2024-04-16 | [
[
"Cotterell",
"Ryan",
""
],
[
"Müller",
"Thomas",
""
],
[
"Fraser",
"Alexander",
""
],
[
"Schütze",
"Hinrich",
""
]
] | We present labeled morphological segmentation, an alternative view of morphological processing that unifies several tasks. From an annotation standpoint, we additionally introduce a new hierarchy of morphotactic tagsets. Finally, we develop \modelname, a discriminative morphological segmentation system that, contrary to previous work, explicitly models morphotactics. We show that \textsc{chipmunk} yields improved performance on three tasks for all six languages: (i) morphological segmentation, (ii) stemming and (iii) morphological tag classification. On morphological segmentation, our method shows absolute improvements of 2--6 points $F_1$ over the baseline. |
2212.11484 | Zohreh Azizi | Zohreh Azizi, C.-C. Jay Kuo | SALVE: Self-supervised Adaptive Low-light Video Enhancement | 12 pages, 7 figures, 4 tables | null | null | null | cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A self-supervised adaptive low-light video enhancement method, called SALVE,
is proposed in this work. SALVE first enhances a few key frames of an input
low-light video using a retinex-based low-light image enhancement technique.
For each keyframe, it learns a mapping from low-light image patches to enhanced
ones via ridge regression. These mappings are then used to enhance the
remaining frames in the low-light video. The combination of traditional
retinex-based image enhancement and learning-based ridge regression leads to a
robust, adaptive and computationally inexpensive solution to enhance low-light
videos. Our extensive experiments along with a user study show that 87% of
participants prefer SALVE over prior work.
| [
{
"created": "Thu, 22 Dec 2022 05:00:18 GMT",
"version": "v1"
},
{
"created": "Wed, 22 Feb 2023 02:37:05 GMT",
"version": "v2"
}
] | 2023-02-23 | [
[
"Azizi",
"Zohreh",
""
],
[
"Kuo",
"C. -C. Jay",
""
]
] | A self-supervised adaptive low-light video enhancement method, called SALVE, is proposed in this work. SALVE first enhances a few key frames of an input low-light video using a retinex-based low-light image enhancement technique. For each keyframe, it learns a mapping from low-light image patches to enhanced ones via ridge regression. These mappings are then used to enhance the remaining frames in the low-light video. The combination of traditional retinex-based image enhancement and learning-based ridge regression leads to a robust, adaptive and computationally inexpensive solution to enhance low-light videos. Our extensive experiments along with a user study show that 87% of participants prefer SALVE over prior work. |
0912.2303 | Kadirvelu SivaKumar | Ratish Agarwal, Dr. Mahesh Motwani | Survey of clustering algorithms for MANET | null | IJCSE Volume 1 Issue 2 2009 98-104 | null | null | cs.DC cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many clustering schemes have been proposed for ad hoc networks. A systematic
classification of these clustering schemes enables one to better understand and
make improvements. In mobile ad hoc networks, the movement of the network nodes
may quickly change the topology, resulting in increased message overhead for
topology maintenance. Protocols try to keep the number of nodes in a
cluster around a pre-defined threshold to facilitate the optimal operation of
the medium access control protocol. The clusterhead election is invoked
on-demand, and is aimed to reduce the computation and communication costs. A
large variety of approaches for ad hoc clustering have been developed by
researchers which focus on different performance metrics. This paper presents a
survey of different clustering schemes.
| [
{
"created": "Fri, 11 Dec 2009 18:17:40 GMT",
"version": "v1"
}
] | 2009-12-14 | [
[
"Agarwal",
"Ratish",
""
],
[
"Motwani",
"Dr. Mahesh",
""
]
] | Many clustering schemes have been proposed for ad hoc networks. A systematic classification of these clustering schemes enables one to better understand and make improvements. In mobile ad hoc networks, the movement of the network nodes may quickly change the topology, resulting in increased message overhead for topology maintenance. Protocols try to keep the number of nodes in a cluster around a pre-defined threshold to facilitate the optimal operation of the medium access control protocol. The clusterhead election is invoked on-demand, and is aimed to reduce the computation and communication costs. A large variety of approaches for ad hoc clustering have been developed by researchers which focus on different performance metrics. This paper presents a survey of different clustering schemes. |
2201.04425 | Pavlo Mykytyn | Pavlo Mykytyn, Marcin Brzozowski, Zoya Dyka and Peter Langendoerfer | Jamming Detection for IR-UWB Ranging Technology in Autonomous UAV Swarms | 6 pages, 1 figure | 2021 10th MEDITERRANEAN CONFERENCE ON EMBEDDED COMPUTING, p. 81-86 | 10.1109/MECO52532.2021.9460250 | null | cs.CR | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Jamming is a form of Denial-of-Service attack (J-DoS). It is a
significant threat that causes malfunction in Unmanned Aerial Vehicle systems,
especially when used in hostile environments. The attackers mainly operate in
the wireless communication environment by following a few preexisting
scenarios. In this paper, we propose an idea for a Jamming detection mechanism.
The mechanism utilizes the network parameters available to the system and some
additional measures to distinguish between bad transmission quality and Jamming
to avoid false positive alarms. After detecting a Jamming attack, appropriate
countermeasures or mitigation techniques can be applied to keep the system
safe.
| [
{
"created": "Wed, 12 Jan 2022 11:45:32 GMT",
"version": "v1"
}
] | 2022-01-13 | [
[
"Mykytyn",
"Pavlo",
""
],
[
"Brzozowski",
"Marcin",
""
],
[
"Dyka",
"Zoya",
""
],
[
"Langendoerfer",
"Peter",
""
]
] | Jamming is a form of Denial-of-Service attack (J-DoS). It is a significant threat that causes malfunction in Unmanned Aerial Vehicle systems, especially when used in hostile environments. The attackers mainly operate in the wireless communication environment by following a few preexisting scenarios. In this paper, we propose an idea for a Jamming detection mechanism. The mechanism utilizes the network parameters available to the system and some additional measures to distinguish between bad transmission quality and Jamming to avoid false positive alarms. After detecting a Jamming attack, appropriate countermeasures or mitigation techniques can be applied to keep the system safe. |
1803.01768 | Wei Hu | Sanjeev Arora, Wei Hu, Pravesh K. Kothari | An Analysis of the t-SNE Algorithm for Data Visualization | In Conference on Learning Theory (COLT) 2018 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A first line of attack in exploratory data analysis is data visualization,
i.e., generating a 2-dimensional representation of data that makes clusters of
similar points visually identifiable. Standard Johnson-Lindenstrauss
dimensionality reduction does not produce data visualizations. The t-SNE
heuristic of van der Maaten and Hinton, which is based on non-convex
optimization, has become the de facto standard for visualization in a wide
range of applications.
This work gives a formal framework for the problem of data visualization -
finding a 2-dimensional embedding of clusterable data that correctly separates
individual clusters to make them visually identifiable. We then give a rigorous
analysis of the performance of t-SNE under a natural, deterministic condition
on the "ground-truth" clusters (similar to conditions assumed in earlier
analyses of clustering) in the underlying data. These are the first provable
guarantees on t-SNE for constructing good data visualizations.
We show that our deterministic condition is satisfied by considerably general
probabilistic generative models for clusterable data such as mixtures of
well-separated log-concave distributions. Finally, we give theoretical evidence
that t-SNE provably succeeds in partially recovering cluster structure even
when the above deterministic condition is not met.
| [
{
"created": "Mon, 5 Mar 2018 16:48:58 GMT",
"version": "v1"
},
{
"created": "Wed, 6 Jun 2018 19:27:28 GMT",
"version": "v2"
}
] | 2018-06-08 | [
[
"Arora",
"Sanjeev",
""
],
[
"Hu",
"Wei",
""
],
[
"Kothari",
"Pravesh K.",
""
]
] | A first line of attack in exploratory data analysis is data visualization, i.e., generating a 2-dimensional representation of data that makes clusters of similar points visually identifiable. Standard Johnson-Lindenstrauss dimensionality reduction does not produce data visualizations. The t-SNE heuristic of van der Maaten and Hinton, which is based on non-convex optimization, has become the de facto standard for visualization in a wide range of applications. This work gives a formal framework for the problem of data visualization - finding a 2-dimensional embedding of clusterable data that correctly separates individual clusters to make them visually identifiable. We then give a rigorous analysis of the performance of t-SNE under a natural, deterministic condition on the "ground-truth" clusters (similar to conditions assumed in earlier analyses of clustering) in the underlying data. These are the first provable guarantees on t-SNE for constructing good data visualizations. We show that our deterministic condition is satisfied by considerably general probabilistic generative models for clusterable data such as mixtures of well-separated log-concave distributions. Finally, we give theoretical evidence that t-SNE provably succeeds in partially recovering cluster structure even when the above deterministic condition is not met. |
2001.05755 | Eden Belouadah | Eden Belouadah and Adrian Popescu | ScaIL: Classifier Weights Scaling for Class Incremental Learning | 8 pages, 4 figures, 2 tables, accepted in WACV2020 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Incremental learning is useful if an AI agent needs to integrate data from a
stream. The problem is nontrivial if the agent runs on a limited computational
budget and has a bounded memory of past data. In a deep learning approach, the
constant computational budget requires the use of a fixed architecture for all
incremental states. The bounded memory generates data imbalance in favor of new
classes and a prediction bias toward them appears. This bias is commonly
countered by introducing a data balancing step in addition to the basic network
training. We depart from this approach and propose simple but efficient scaling
of past class classifier weights to make them more comparable to those of new
classes. Scaling exploits incremental state level statistics and is applied to
the classifiers learned in the initial state of classes in order to profit from
all their available data. We also question the utility of the widely used
distillation loss component of incremental learning algorithms by comparing it
to vanilla fine-tuning in the presence of a bounded memory. Evaluation is done
against competitive baselines using four public datasets. Results show that the
classifier weights scaling and the removal of the distillation are both
beneficial.
| [
{
"created": "Thu, 16 Jan 2020 12:10:45 GMT",
"version": "v1"
}
] | 2020-01-17 | [
[
"Belouadah",
"Eden",
""
],
[
"Popescu",
"Adrian",
""
]
] | Incremental learning is useful if an AI agent needs to integrate data from a stream. The problem is nontrivial if the agent runs on a limited computational budget and has a bounded memory of past data. In a deep learning approach, the constant computational budget requires the use of a fixed architecture for all incremental states. The bounded memory generates data imbalance in favor of new classes and a prediction bias toward them appears. This bias is commonly countered by introducing a data balancing step in addition to the basic network training. We depart from this approach and propose simple but efficient scaling of past class classifier weights to make them more comparable to those of new classes. Scaling exploits incremental state level statistics and is applied to the classifiers learned in the initial state of classes in order to profit from all their available data. We also question the utility of the widely used distillation loss component of incremental learning algorithms by comparing it to vanilla fine-tuning in the presence of a bounded memory. Evaluation is done against competitive baselines using four public datasets. Results show that the classifier weights scaling and the removal of the distillation are both beneficial. |
2205.02764 | Sahraoui Dhelim Dr | Sahraoui Dhelim, Tahar Kechadi, Liming Chen, Nyothiri Aung, Huansheng
Ning and Luigi Atzori | Edge-enabled Metaverse: The Convergence of Metaverse and Mobile Edge
Computing | Submitted to IEEE IoTJ | null | null | null | cs.DC cs.AI cs.NI cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Metaverse is a virtual environment where users are represented by avatars
to navigate a virtual world, which has strong links with the physical one.
State-of-the-art Metaverse architectures rely on a cloud-based approach for
avatar physics emulation and graphics rendering computation. Such centralized
design is unfavorable as it suffers from several drawbacks caused by the long
latency required for cloud access, such as low quality visualization. To solve
this issue, in this paper, we propose a Fog-Edge hybrid computing architecture
for Metaverse applications that leverage an edge-enabled distributed computing
paradigm, which makes use of edge devices computing power to fulfil the
required computational cost for heavy tasks such as collision detection in
virtual universe and computation of 3D physics in virtual simulation. The
computational cost related to an entity in the Metaverse such as collision
detection or physics emulation are performed at the end-device of the
associated physical entity. To prove the effectiveness of the proposed
architecture, we simulate a distributed social metaverse application.
Simulation results shows that the proposed architecture can reduce the latency
by 50% when compared with the legacy cloud-based Metaverse applications.
| [
{
"created": "Wed, 13 Apr 2022 11:38:57 GMT",
"version": "v1"
}
] | 2022-05-06 | [
[
"Dhelim",
"Sahraoui",
""
],
[
"Kechadi",
"Tahar",
""
],
[
"Chen",
"Liming",
""
],
[
"Aung",
"Nyothiri",
""
],
[
"Ning",
"Huansheng",
""
],
[
"Atzori",
"Luigi",
""
]
] | The Metaverse is a virtual environment where users are represented by avatars to navigate a virtual world, which has strong links with the physical one. State-of-the-art Metaverse architectures rely on a cloud-based approach for avatar physics emulation and graphics rendering computation. Such centralized design is unfavorable, as it suffers from several drawbacks caused by the long latency of cloud access, such as low-quality visualization. To solve this issue, in this paper we propose a Fog-Edge hybrid computing architecture for Metaverse applications that leverages an edge-enabled distributed computing paradigm, which makes use of the computing power of edge devices to meet the computational cost of heavy tasks such as collision detection in the virtual universe and computation of 3D physics in virtual simulations. The computations related to an entity in the Metaverse, such as collision detection or physics emulation, are performed at the end device of the associated physical entity. To prove the effectiveness of the proposed architecture, we simulate a distributed social Metaverse application. Simulation results show that the proposed architecture can reduce latency by 50% compared with legacy cloud-based Metaverse applications. |