id
stringlengths 9
10
| submitter
stringlengths 1
64
⌀ | authors
stringlengths 4
20.7k
| title
stringlengths 4
246
| comments
stringlengths 1
523
⌀ | journal-ref
stringlengths 4
404
⌀ | doi
stringlengths 11
153
⌀ | report-no
stringlengths 2
254
⌀ | categories
stringlengths 5
98
| license
stringclasses 9
values | orig_abstract
stringlengths 14
3.35k
| versions
listlengths 1
60
| update_date
stringlengths 10
10
| authors_parsed
listlengths 1
1.35k
| abstract
stringlengths 11
3.34k
|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2303.01774
|
David Eriksson
|
Aryan Deshwal, Sebastian Ament, Maximilian Balandat, Eytan Bakshy,
Janardhan Rao Doppa, David Eriksson
|
Bayesian Optimization over High-Dimensional Combinatorial Spaces via
Dictionary-based Embeddings
|
Appearing in AISTATS 2023
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the problem of optimizing expensive black-box functions over
high-dimensional combinatorial spaces which arises in many science,
engineering, and ML applications. We use Bayesian Optimization (BO) and propose
a novel surrogate modeling approach for efficiently handling a large number of
binary and categorical parameters. The key idea is to select a number of
discrete structures from the input space (the dictionary) and use them to
define an ordinal embedding for high-dimensional combinatorial structures. This
allows us to use existing Gaussian process models for continuous spaces. We
develop a principled approach based on binary wavelets to construct
dictionaries for binary spaces, and propose a randomized construction method
that generalizes to categorical spaces. We provide theoretical justification to
support the effectiveness of the dictionary-based embeddings. Our experiments
on diverse real-world benchmarks demonstrate the effectiveness of our proposed
surrogate modeling approach over state-of-the-art BO methods.
|
[
{
"created": "Fri, 3 Mar 2023 08:31:42 GMT",
"version": "v1"
}
] |
2023-03-06
|
[
[
"Deshwal",
"Aryan",
""
],
[
"Ament",
"Sebastian",
""
],
[
"Balandat",
"Maximilian",
""
],
[
"Bakshy",
"Eytan",
""
],
[
"Doppa",
"Janardhan Rao",
""
],
[
"Eriksson",
"David",
""
]
] |
We consider the problem of optimizing expensive black-box functions over high-dimensional combinatorial spaces which arises in many science, engineering, and ML applications. We use Bayesian Optimization (BO) and propose a novel surrogate modeling approach for efficiently handling a large number of binary and categorical parameters. The key idea is to select a number of discrete structures from the input space (the dictionary) and use them to define an ordinal embedding for high-dimensional combinatorial structures. This allows us to use existing Gaussian process models for continuous spaces. We develop a principled approach based on binary wavelets to construct dictionaries for binary spaces, and propose a randomized construction method that generalizes to categorical spaces. We provide theoretical justification to support the effectiveness of the dictionary-based embeddings. Our experiments on diverse real-world benchmarks demonstrate the effectiveness of our proposed surrogate modeling approach over state-of-the-art BO methods.
|
1705.07202
|
Lei Cai
|
Lei Cai and Hongyang Gao and Shuiwang Ji
|
Multi-Stage Variational Auto-Encoders for Coarse-to-Fine Image
Generation
| null | null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Variational auto-encoder (VAE) is a powerful unsupervised learning framework
for image generation. One drawback of VAE is that it generates blurry images
due to its Gaussianity assumption and thus L2 loss. To allow the generation of
high quality images by VAE, we increase the capacity of decoder network by
employing residual blocks and skip connections, which also enable efficient
optimization. To overcome the limitation of L2 loss, we propose to generate
images in a multi-stage manner from coarse to fine. In the simplest case, the
proposed multi-stage VAE divides the decoder into two components in which the
second component generates refined images based on the course images generated
by the first component. Since the second component is independent of the VAE
model, it can employ other loss functions beyond the L2 loss and different
model architectures. The proposed framework can be easily generalized to
contain more than two components. Experiment results on the MNIST and CelebA
datasets demonstrate that the proposed multi-stage VAE can generate sharper
images as compared to those from the original VAE.
|
[
{
"created": "Fri, 19 May 2017 21:51:30 GMT",
"version": "v1"
}
] |
2017-05-23
|
[
[
"Cai",
"Lei",
""
],
[
"Gao",
"Hongyang",
""
],
[
"Ji",
"Shuiwang",
""
]
] |
Variational auto-encoder (VAE) is a powerful unsupervised learning framework for image generation. One drawback of VAE is that it generates blurry images due to its Gaussianity assumption and thus L2 loss. To allow the generation of high quality images by VAE, we increase the capacity of decoder network by employing residual blocks and skip connections, which also enable efficient optimization. To overcome the limitation of L2 loss, we propose to generate images in a multi-stage manner from coarse to fine. In the simplest case, the proposed multi-stage VAE divides the decoder into two components in which the second component generates refined images based on the course images generated by the first component. Since the second component is independent of the VAE model, it can employ other loss functions beyond the L2 loss and different model architectures. The proposed framework can be easily generalized to contain more than two components. Experiment results on the MNIST and CelebA datasets demonstrate that the proposed multi-stage VAE can generate sharper images as compared to those from the original VAE.
|
2407.07805
|
Xin Jin
|
Huafeng Qin, Xin Jin, Hongyu Zhu, Hongchao Liao, Moun\^im A.
El-Yacoubi and Xinbo Gao
|
SUMix: Mixup with Semantic and Uncertain Information
|
Accepted by ECCV2024 [Camera Ready] (19 pages, 7 figures) with the
source code at https://github.com/JinXins/SUMix
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mixup data augmentation approaches have been applied for various tasks of
deep learning to improve the generalization ability of deep neural networks.
Some existing approaches CutMix, SaliencyMix, etc. randomly replace a patch in
one image with patches from another to generate the mixed image. Similarly, the
corresponding labels are linearly combined by a fixed ratio $\lambda$ by l. The
objects in two images may be overlapped during the mixing process, so some
semantic information is corrupted in the mixed samples. In this case, the mixed
image does not match the mixed label information. Besides, such a label may
mislead the deep learning model training, which results in poor performance. To
solve this problem, we proposed a novel approach named SUMix to learn the
mixing ratio as well as the uncertainty for the mixed samples during the
training process. First, we design a learnable similarity function to compute
an accurate mix ratio. Second, an approach is investigated as a regularized
term to model the uncertainty of the mixed samples. We conduct experiments on
five image benchmarks, and extensive experimental results imply that our method
is capable of improving the performance of classifiers with different
cutting-based mixup approaches. The source code is available at
https://github.com/JinXins/SUMix.
|
[
{
"created": "Wed, 10 Jul 2024 16:25:26 GMT",
"version": "v1"
},
{
"created": "Wed, 17 Jul 2024 11:46:52 GMT",
"version": "v2"
}
] |
2024-07-18
|
[
[
"Qin",
"Huafeng",
""
],
[
"Jin",
"Xin",
""
],
[
"Zhu",
"Hongyu",
""
],
[
"Liao",
"Hongchao",
""
],
[
"El-Yacoubi",
"Mounîm A.",
""
],
[
"Gao",
"Xinbo",
""
]
] |
Mixup data augmentation approaches have been applied for various tasks of deep learning to improve the generalization ability of deep neural networks. Some existing approaches CutMix, SaliencyMix, etc. randomly replace a patch in one image with patches from another to generate the mixed image. Similarly, the corresponding labels are linearly combined by a fixed ratio $\lambda$ by l. The objects in two images may be overlapped during the mixing process, so some semantic information is corrupted in the mixed samples. In this case, the mixed image does not match the mixed label information. Besides, such a label may mislead the deep learning model training, which results in poor performance. To solve this problem, we proposed a novel approach named SUMix to learn the mixing ratio as well as the uncertainty for the mixed samples during the training process. First, we design a learnable similarity function to compute an accurate mix ratio. Second, an approach is investigated as a regularized term to model the uncertainty of the mixed samples. We conduct experiments on five image benchmarks, and extensive experimental results imply that our method is capable of improving the performance of classifiers with different cutting-based mixup approaches. The source code is available at https://github.com/JinXins/SUMix.
|
1901.08983
|
Xinyuan Qian
|
Xinyuan Qian, Andrea Cavallaro, Alessio Brutti, Maurizio Omologo
|
LOCATA challenge: speaker localization with a planar array
|
In Proceedings of the LOCATA ChallengeWorkshop - a satellite event of
IWAENC 2018 (arXiv:1811.08482 )
| null | null |
LOCATAchallenge/2018/05
|
cs.SD eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This document describes our submission to the 2018 LOCalization And TrAcking
(LOCATA) challenge (Tasks 1, 3, 5). We estimate the 3D position of a speaker
using the Global Coherence Field (GCF) computed from multiple microphone pairs
of a DICIT planar array. One of the main challenges when using such an array
with omnidirectional microphones is the front-back ambiguity, which is
particularly evident in Task 5. We address this challenge by post-processing
the peaks of the GCF and exploiting the attenuation introduced by the frame of
the array. Moreover, the intermittent nature of speech and the changing
orientation of the speaker make localization difficult. For Tasks 3 and 5, we
also employ a Particle Filter (PF) that favors the spatio-temporal continuity
of the localization results.
|
[
{
"created": "Fri, 25 Jan 2019 17:00:56 GMT",
"version": "v1"
}
] |
2019-01-28
|
[
[
"Qian",
"Xinyuan",
""
],
[
"Cavallaro",
"Andrea",
""
],
[
"Brutti",
"Alessio",
""
],
[
"Omologo",
"Maurizio",
""
]
] |
This document describes our submission to the 2018 LOCalization And TrAcking (LOCATA) challenge (Tasks 1, 3, 5). We estimate the 3D position of a speaker using the Global Coherence Field (GCF) computed from multiple microphone pairs of a DICIT planar array. One of the main challenges when using such an array with omnidirectional microphones is the front-back ambiguity, which is particularly evident in Task 5. We address this challenge by post-processing the peaks of the GCF and exploiting the attenuation introduced by the frame of the array. Moreover, the intermittent nature of speech and the changing orientation of the speaker make localization difficult. For Tasks 3 and 5, we also employ a Particle Filter (PF) that favors the spatio-temporal continuity of the localization results.
|
2205.10234
|
Pablo Mosteiro
|
Thomas Borger, Pablo Mosteiro, Heysem Kaya, Emil Rijcken, Albert Ali
Salah, Floortje Scheepers, Marco Spruit
|
Federated learning for violence incident prediction in a simulated
cross-institutional psychiatric setting
| null |
Expert Systems with Applications Volume 199, 1 August 2022, 116720
|
10.1016/j.eswa.2022.116720
| null |
cs.CL cs.CY cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Inpatient violence is a common and severe problem within psychiatry. Knowing
who might become violent can influence staffing levels and mitigate severity.
Predictive machine learning models can assess each patient's likelihood of
becoming violent based on clinical notes. Yet, while machine learning models
benefit from having more data, data availability is limited as hospitals
typically do not share their data for privacy preservation. Federated Learning
(FL) can overcome the problem of data limitation by training models in a
decentralised manner, without disclosing data between collaborators. However,
although several FL approaches exist, none of these train Natural Language
Processing models on clinical notes. In this work, we investigate the
application of Federated Learning to clinical Natural Language Processing,
applied to the task of Violence Risk Assessment by simulating a
cross-institutional psychiatric setting. We train and compare four models: two
local models, a federated model and a data-centralised model. Our results
indicate that the federated model outperforms the local models and has similar
performance as the data-centralised model. These findings suggest that
Federated Learning can be used successfully in a cross-institutional setting
and is a step towards new applications of Federated Learning based on clinical
notes
|
[
{
"created": "Tue, 17 May 2022 07:37:12 GMT",
"version": "v1"
}
] |
2022-05-23
|
[
[
"Borger",
"Thomas",
""
],
[
"Mosteiro",
"Pablo",
""
],
[
"Kaya",
"Heysem",
""
],
[
"Rijcken",
"Emil",
""
],
[
"Salah",
"Albert Ali",
""
],
[
"Scheepers",
"Floortje",
""
],
[
"Spruit",
"Marco",
""
]
] |
Inpatient violence is a common and severe problem within psychiatry. Knowing who might become violent can influence staffing levels and mitigate severity. Predictive machine learning models can assess each patient's likelihood of becoming violent based on clinical notes. Yet, while machine learning models benefit from having more data, data availability is limited as hospitals typically do not share their data for privacy preservation. Federated Learning (FL) can overcome the problem of data limitation by training models in a decentralised manner, without disclosing data between collaborators. However, although several FL approaches exist, none of these train Natural Language Processing models on clinical notes. In this work, we investigate the application of Federated Learning to clinical Natural Language Processing, applied to the task of Violence Risk Assessment by simulating a cross-institutional psychiatric setting. We train and compare four models: two local models, a federated model and a data-centralised model. Our results indicate that the federated model outperforms the local models and has similar performance as the data-centralised model. These findings suggest that Federated Learning can be used successfully in a cross-institutional setting and is a step towards new applications of Federated Learning based on clinical notes
|
1607.01896
|
Howard H. Yang
|
Howard H. Yang, Giovanni Geraci, Tony Q. S. Quek, Jeffrey G. Andrews
|
Cell-Edge-Aware Precoding for Downlink Massive MIMO Cellular Networks
|
13 pages, 10 figures
| null |
10.1109/TSP.2017.2690387
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a cell-edge-aware (CEA) zero forcing (ZF) precoder that exploits
the excess spatial degrees of freedom provided by a large number of base
station (BS) antennas to suppress inter-cell interference at the most
vulnerable user equipments (UEs). We evaluate the downlink performance of
CEA-ZF, as well as that of a conventional cell-edge-unaware (CEU) ZF precoder
in a network with random base station topology. Our analysis and simulations
show that the proposed CEA-ZF precoder outperforms CEU-ZF precoding in terms of
(i) aggregate per-cell data rate, (ii) coverage probability, and (iii)
95%-likely, or edge user, rate. In particular, when both perfect channel state
information and a large number of antennas N are available at the BSs, we
demonstrate that the outage probability under CEA-ZF and CEU-ZF decay as 1/N^2
and 1/N, respectively. This result identifies CEA-ZF as a more effective
precoding scheme for massive MIMO cellular networks. Our framework also reveals
the importance of scheduling the optimal number of UEs per BS, and confirms the
necessity to control the amount of pilot contamination received during the
channel estimation phase.
|
[
{
"created": "Thu, 7 Jul 2016 07:38:35 GMT",
"version": "v1"
},
{
"created": "Tue, 11 Oct 2016 14:10:51 GMT",
"version": "v2"
}
] |
2017-05-24
|
[
[
"Yang",
"Howard H.",
""
],
[
"Geraci",
"Giovanni",
""
],
[
"Quek",
"Tony Q. S.",
""
],
[
"Andrews",
"Jeffrey G.",
""
]
] |
We propose a cell-edge-aware (CEA) zero forcing (ZF) precoder that exploits the excess spatial degrees of freedom provided by a large number of base station (BS) antennas to suppress inter-cell interference at the most vulnerable user equipments (UEs). We evaluate the downlink performance of CEA-ZF, as well as that of a conventional cell-edge-unaware (CEU) ZF precoder in a network with random base station topology. Our analysis and simulations show that the proposed CEA-ZF precoder outperforms CEU-ZF precoding in terms of (i) aggregate per-cell data rate, (ii) coverage probability, and (iii) 95%-likely, or edge user, rate. In particular, when both perfect channel state information and a large number of antennas N are available at the BSs, we demonstrate that the outage probability under CEA-ZF and CEU-ZF decay as 1/N^2 and 1/N, respectively. This result identifies CEA-ZF as a more effective precoding scheme for massive MIMO cellular networks. Our framework also reveals the importance of scheduling the optimal number of UEs per BS, and confirms the necessity to control the amount of pilot contamination received during the channel estimation phase.
|
2406.11283
|
Yunsong Wang
|
Yunsong Wang, Na Zhao, Gim Hee Lee
|
Enhancing Generalizability of Representation Learning for Data-Efficient
3D Scene Understanding
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The field of self-supervised 3D representation learning has emerged as a
promising solution to alleviate the challenge presented by the scarcity of
extensive, well-annotated datasets. However, it continues to be hindered by the
lack of diverse, large-scale, real-world 3D scene datasets for source data. To
address this shortfall, we propose Generalizable Representation Learning (GRL),
where we devise a generative Bayesian network to produce diverse synthetic
scenes with real-world patterns, and conduct pre-training with a joint
objective. By jointly learning a coarse-to-fine contrastive learning task and
an occlusion-aware reconstruction task, the model is primed with transferable,
geometry-informed representations. Post pre-training on synthetic data, the
acquired knowledge of the model can be seamlessly transferred to two principal
downstream tasks associated with 3D scene understanding, namely 3D object
detection and 3D semantic segmentation, using real-world benchmark datasets. A
thorough series of experiments robustly display our method's consistent
superiority over existing state-of-the-art pre-training approaches.
|
[
{
"created": "Mon, 17 Jun 2024 07:43:53 GMT",
"version": "v1"
}
] |
2024-06-18
|
[
[
"Wang",
"Yunsong",
""
],
[
"Zhao",
"Na",
""
],
[
"Lee",
"Gim Hee",
""
]
] |
The field of self-supervised 3D representation learning has emerged as a promising solution to alleviate the challenge presented by the scarcity of extensive, well-annotated datasets. However, it continues to be hindered by the lack of diverse, large-scale, real-world 3D scene datasets for source data. To address this shortfall, we propose Generalizable Representation Learning (GRL), where we devise a generative Bayesian network to produce diverse synthetic scenes with real-world patterns, and conduct pre-training with a joint objective. By jointly learning a coarse-to-fine contrastive learning task and an occlusion-aware reconstruction task, the model is primed with transferable, geometry-informed representations. Post pre-training on synthetic data, the acquired knowledge of the model can be seamlessly transferred to two principal downstream tasks associated with 3D scene understanding, namely 3D object detection and 3D semantic segmentation, using real-world benchmark datasets. A thorough series of experiments robustly display our method's consistent superiority over existing state-of-the-art pre-training approaches.
|
1909.02490
|
Dekai Zhu
|
Dekai Zhu, Jinhu Dong, Zhongcong Xu, Canbo Ye, Yinbai Hu, Hang Su,
Zhengfa Liu, Guang Chen
|
Neuromorphic Visual Odometry System for Intelligent Vehicle Application
with Bio-inspired Vision Sensor
|
8 pages, 14 figures
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The neuromorphic camera is a brand new vision sensor that has emerged in
recent years. In contrast to the conventional frame-based camera, the
neuromorphic camera only transmits local pixel-level changes at the time of its
occurrence and provides an asynchronous event stream with low latency. It has
the advantages of extremely low signal delay, low transmission bandwidth
requirements, rich information of edges, high dynamic range etc., which make it
a promising sensor in the application of in-vehicle visual odometry system.
This paper proposes a neuromorphic in-vehicle visual odometry system using
feature tracking algorithm. To the best of our knowledge, this is the first
in-vehicle visual odometry system that only uses a neuromorphic camera, and its
performance test is carried out on actual driving datasets. In addition, an
in-depth analysis of the results of the experiment is provided. The work of
this paper verifies the feasibility of in-vehicle visual odometry system using
neuromorphic cameras.
|
[
{
"created": "Thu, 5 Sep 2019 15:42:00 GMT",
"version": "v1"
}
] |
2019-09-06
|
[
[
"Zhu",
"Dekai",
""
],
[
"Dong",
"Jinhu",
""
],
[
"Xu",
"Zhongcong",
""
],
[
"Ye",
"Canbo",
""
],
[
"Hu",
"Yinbai",
""
],
[
"Su",
"Hang",
""
],
[
"Liu",
"Zhengfa",
""
],
[
"Chen",
"Guang",
""
]
] |
The neuromorphic camera is a brand new vision sensor that has emerged in recent years. In contrast to the conventional frame-based camera, the neuromorphic camera only transmits local pixel-level changes at the time of its occurrence and provides an asynchronous event stream with low latency. It has the advantages of extremely low signal delay, low transmission bandwidth requirements, rich information of edges, high dynamic range etc., which make it a promising sensor in the application of in-vehicle visual odometry system. This paper proposes a neuromorphic in-vehicle visual odometry system using feature tracking algorithm. To the best of our knowledge, this is the first in-vehicle visual odometry system that only uses a neuromorphic camera, and its performance test is carried out on actual driving datasets. In addition, an in-depth analysis of the results of the experiment is provided. The work of this paper verifies the feasibility of in-vehicle visual odometry system using neuromorphic cameras.
|
2010.11686
|
Ren-Song Tsay
|
Tsung-Ying Lu, Hsu-Hsun Chin, Hsin-I Wu, and Ren-Song Tsay
|
A Very Compact Embedded CNN Processor Design Based on Logarithmic
Computing
| null | null | null | null |
cs.LG cs.AR cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we propose a very compact embedded CNN processor design based
on a modified logarithmic computing method using very low bit-width
representation. Our high-quality CNN processor can easily fit into edge
devices. For Yolov2, our processing circuit takes only 0.15 mm2 using TSMC 40
nm cell library. The key idea is to constrain the activation and weight values
of all layers uniformly to be within the range [-1, 1] and produce low
bit-width logarithmic representation. With the uniform representations, we
devise a unified, reusable CNN computing kernel and significantly reduce
computing resources. The proposed approach has been extensively evaluated on
many popular image classification CNN models (AlexNet, VGG16, and ResNet-18/34)
and object detection models (Yolov2). The hardware-implemented results show
that our design consumes only minimal computing and storage resources, yet
attains very high accuracy. The design is thoroughly verified on FPGAs, and the
SoC integration is underway with promising results. With extremely efficient
resource and energy usage, our design is excellent for edge computing purposes.
|
[
{
"created": "Tue, 13 Oct 2020 23:48:36 GMT",
"version": "v1"
}
] |
2020-10-23
|
[
[
"Lu",
"Tsung-Ying",
""
],
[
"Chin",
"Hsu-Hsun",
""
],
[
"Wu",
"Hsin-I",
""
],
[
"Tsay",
"Ren-Song",
""
]
] |
In this paper, we propose a very compact embedded CNN processor design based on a modified logarithmic computing method using very low bit-width representation. Our high-quality CNN processor can easily fit into edge devices. For Yolov2, our processing circuit takes only 0.15 mm2 using TSMC 40 nm cell library. The key idea is to constrain the activation and weight values of all layers uniformly to be within the range [-1, 1] and produce low bit-width logarithmic representation. With the uniform representations, we devise a unified, reusable CNN computing kernel and significantly reduce computing resources. The proposed approach has been extensively evaluated on many popular image classification CNN models (AlexNet, VGG16, and ResNet-18/34) and object detection models (Yolov2). The hardware-implemented results show that our design consumes only minimal computing and storage resources, yet attains very high accuracy. The design is thoroughly verified on FPGAs, and the SoC integration is underway with promising results. With extremely efficient resource and energy usage, our design is excellent for edge computing purposes.
|
1409.0932
|
Jeffrey Wildman
|
Jeffrey Wildman, Steven Weber
|
On Characterizing the Local Pooling Factor of Greedy Maximal Scheduling
in Random Graphs
|
16 pages, 7 figures, 1 table, 1 listing. Submitted on 2014-09-02 to
IEEE/ACM Transactions on Networking. Accepted on 2015-05-29 to IEEE/ACM
Transactions on Networking
| null |
10.1109/TNET.2015.2451090
| null |
cs.IT cs.NI math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The study of the optimality of low-complexity greedy scheduling techniques in
wireless communications networks is a very complex problem. The Local Pooling
(LoP) factor provides a single-parameter means of expressing the achievable
capacity region (and optimality) of one such scheme, greedy maximal scheduling
(GMS). The exact LoP factor for an arbitrary network graph is generally
difficult to obtain, but may be evaluated or bounded based on the network
graph's particular structure. In this paper, we provide rigorous
characterizations of the LoP factor in large networks modeled as
Erd\H{o}s-R\'enyi (ER) and random geometric (RG) graphs under the primary
interference model. We employ threshold functions to establish critical values
for either the edge probability or communication radius to yield useful bounds
on the range and expectation of the LoP factor as the network grows large. For
sufficiently dense random graphs, we find that the LoP factor is between 1/2
and 2/3, while sufficiently sparse random graphs permit GMS optimality (the LoP
factor is 1) with high probability. We then place LoP within a larger context
of commonly studied random graph properties centered around connectedness. We
observe that edge densities permitting connectivity generally admit cycle
subgraphs which forms the basis for the LoP factor upper bound of 2/3. We
conclude with simulations to explore the regime of small networks, which
suggest the probability that an ER or RG graph satisfies LoP and is connected
decays quickly in network size.
|
[
{
"created": "Wed, 3 Sep 2014 00:43:20 GMT",
"version": "v1"
},
{
"created": "Fri, 26 Jun 2015 23:26:09 GMT",
"version": "v2"
}
] |
2015-08-04
|
[
[
"Wildman",
"Jeffrey",
""
],
[
"Weber",
"Steven",
""
]
] |
The study of the optimality of low-complexity greedy scheduling techniques in wireless communications networks is a very complex problem. The Local Pooling (LoP) factor provides a single-parameter means of expressing the achievable capacity region (and optimality) of one such scheme, greedy maximal scheduling (GMS). The exact LoP factor for an arbitrary network graph is generally difficult to obtain, but may be evaluated or bounded based on the network graph's particular structure. In this paper, we provide rigorous characterizations of the LoP factor in large networks modeled as Erd\H{o}s-R\'enyi (ER) and random geometric (RG) graphs under the primary interference model. We employ threshold functions to establish critical values for either the edge probability or communication radius to yield useful bounds on the range and expectation of the LoP factor as the network grows large. For sufficiently dense random graphs, we find that the LoP factor is between 1/2 and 2/3, while sufficiently sparse random graphs permit GMS optimality (the LoP factor is 1) with high probability. We then place LoP within a larger context of commonly studied random graph properties centered around connectedness. We observe that edge densities permitting connectivity generally admit cycle subgraphs which forms the basis for the LoP factor upper bound of 2/3. We conclude with simulations to explore the regime of small networks, which suggest the probability that an ER or RG graph satisfies LoP and is connected decays quickly in network size.
|
1504.07482
|
Robin Haunschild
|
Robin Haunschild, Lutz Bornmann, and Loet Leydesdorff
|
Networks of reader and country status: An analysis of Mendeley reader
statistics
|
26 pages, 6 figures (also web-based startable), and 2 tables
|
PeerJ CompSci, 32 (2015)
|
10.7717/peerj-cs.32
| null |
cs.DL cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The number of papers published in journals indexed by the Web of Science core
collection is steadily increasing. In recent years, nearly two million new
papers were published each year; somewhat more than one million papers when
primary research papers are considered only (articles and reviews are the
document types where primary research is usually reported or reviewed).
However, who reads these papers? More precisely, which groups of researchers
from which (self-assigned) scientific disciplines and countries are reading
these papers? Is it possible to visualize readership patterns for certain
countries, scientific disciplines, or academic status groups? One popular
method to answer these questions is a network analysis. In this study, we
analyze Mendeley readership data of a set of 1,133,224 articles and 64,960
reviews with publication year 2012 to generate three different kinds of
networks: (1) The network based on disciplinary affiliations of Mendeley
readers contains four groups: (i) biology, (ii) social science and humanities
(including relevant computer science), (iii) bio-medical sciences, and (iv)
natural science and engineering. In all four groups, the category with the
addition "miscellaneous" prevails. (2) The network of co-readers in terms of
professional status shows that a common interest in papers is mainly shared
among PhD students, Master's students, and postdocs. (3) The country network
focusses on global readership patterns: a group of 53 nations is identified as
core to the scientific enterprise, including Russia and China as well as two
thirds of the OECD (Organisation for Economic Co-operation and Development)
countries.
|
[
{
"created": "Tue, 28 Apr 2015 14:08:06 GMT",
"version": "v1"
},
{
"created": "Mon, 27 Jul 2015 14:49:51 GMT",
"version": "v2"
},
{
"created": "Fri, 16 Oct 2015 13:32:40 GMT",
"version": "v3"
}
] |
2015-11-17
|
[
[
"Haunschild",
"Robin",
""
],
[
"Bornmann",
"Lutz",
""
],
[
"Leydesdorff",
"Loet",
""
]
] |
The number of papers published in journals indexed by the Web of Science core collection is steadily increasing. In recent years, nearly two million new papers were published each year; somewhat more than one million papers when primary research papers are considered only (articles and reviews are the document types where primary research is usually reported or reviewed). However, who reads these papers? More precisely, which groups of researchers from which (self-assigned) scientific disciplines and countries are reading these papers? Is it possible to visualize readership patterns for certain countries, scientific disciplines, or academic status groups? One popular method to answer these questions is a network analysis. In this study, we analyze Mendeley readership data of a set of 1,133,224 articles and 64,960 reviews with publication year 2012 to generate three different kinds of networks: (1) The network based on disciplinary affiliations of Mendeley readers contains four groups: (i) biology, (ii) social science and humanities (including relevant computer science), (iii) bio-medical sciences, and (iv) natural science and engineering. In all four groups, the category with the addition "miscellaneous" prevails. (2) The network of co-readers in terms of professional status shows that a common interest in papers is mainly shared among PhD students, Master's students, and postdocs. (3) The country network focusses on global readership patterns: a group of 53 nations is identified as core to the scientific enterprise, including Russia and China as well as two thirds of the OECD (Organisation for Economic Co-operation and Development) countries.
|
2207.11668
|
Jiaxin Wang
|
Jiaxin Wang, Zexia Shi, Yadi Wei, Fang-Wei Fu
|
Constructions of linear codes with two or three weights from vectorial
dual-bent functions
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Linear codes with a few weights are an important class of codes in coding
theory and have attracted a lot of attention. In this paper, we present several
constructions of $q$-ary linear codes with two or three weights from vectorial
dual-bent functions, where $q$ is a power of an odd prime $p$. The weight
distributions of the constructed $q$-ary linear codes are completely
determined. We illustrate that some known constructions in the literature can
be obtained by our constructions. In some special cases, our constructed linear
codes can meet the Griesmer bound. Furthermore, based on the constructed
$q$-ary linear codes, we obtain secret sharing schemes with interesting access
structures.
|
[
{
"created": "Sun, 24 Jul 2022 05:37:20 GMT",
"version": "v1"
}
] |
2022-07-26
|
[
[
"Wang",
"Jiaxin",
""
],
[
"Shi",
"Zexia",
""
],
[
"Wei",
"Yadi",
""
],
[
"Fu",
"Fang-Wei",
""
]
] |
Linear codes with a few weights are an important class of codes in coding theory and have attracted a lot of attention. In this paper, we present several constructions of $q$-ary linear codes with two or three weights from vectorial dual-bent functions, where $q$ is a power of an odd prime $p$. The weight distributions of the constructed $q$-ary linear codes are completely determined. We illustrate that some known constructions in the literature can be obtained by our constructions. In some special cases, our constructed linear codes can meet the Griesmer bound. Furthermore, based on the constructed $q$-ary linear codes, we obtain secret sharing schemes with interesting access structures.
|
2205.14006
|
Neha Thomas
|
Neha Thomas, Farimah Fazlollahi, Katherine J. Kuchenbecker, and Jeremy
D. Brown
|
The Utility of Synthetic Reflexes and Haptic Feedback for Upper-Limb
Prostheses in a Dexterous Task Without Direct Vision
| null | null |
10.1109/TNSRE.2022.3217452.
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Individuals who use myoelectric upper-limb prostheses often rely heavily on
vision to complete their daily activities. They thus struggle in situations
where vision is overloaded, such as multitasking, or unavailable, such as poor
lighting conditions. Non-amputees can easily accomplish such tasks due to
tactile reflexes and haptic sensation guiding their upper-limb motor
coordination. Based on these principles, we developed and tested two novel
prosthesis systems that incorporate autonomous controllers and provide the user
with touch-location feedback through either vibration or distributed pressure.
These capabilities were made possible by installing a custom contact-location
sensor on thefingers of a commercial prosthetic hand, along with a custom
pressure sensor on the thumb. We compared the performance of the two systems
against a standard myoelectric prosthesis and a myoelectric prosthesis with
only autonomous controllers in a difficult reach-to-pick-and-place task
conducted without direct vision. Results from 40 non-amputee participants in
this between-subjects study indicated that vibrotactile feedback combined with
synthetic reflexes proved significantly more advantageous than the standard
prosthesis in several of the task milestones. In addition, vibrotactile
feedback and synthetic reflexes improved grasp placement compared to only
synthetic reflexes or pressure feedback combined with synthetic reflexes. These
results indicate that both autonomous controllers and haptic feedback
facilitate success in dexterous tasks without vision, and that the type of
haptic display matters.
|
[
{
"created": "Fri, 27 May 2022 14:30:51 GMT",
"version": "v1"
}
] |
2022-11-18
|
[
[
"Thomas",
"Neha",
""
],
[
"Fazlollahi",
"Farimah",
""
],
[
"Kuchenbecker",
"Katherine J.",
""
],
[
"Brown",
"Jeremy D.",
""
]
] |
Individuals who use myoelectric upper-limb prostheses often rely heavily on vision to complete their daily activities. They thus struggle in situations where vision is overloaded, such as multitasking, or unavailable, such as poor lighting conditions. Non-amputees can easily accomplish such tasks due to tactile reflexes and haptic sensation guiding their upper-limb motor coordination. Based on these principles, we developed and tested two novel prosthesis systems that incorporate autonomous controllers and provide the user with touch-location feedback through either vibration or distributed pressure. These capabilities were made possible by installing a custom contact-location sensor on thefingers of a commercial prosthetic hand, along with a custom pressure sensor on the thumb. We compared the performance of the two systems against a standard myoelectric prosthesis and a myoelectric prosthesis with only autonomous controllers in a difficult reach-to-pick-and-place task conducted without direct vision. Results from 40 non-amputee participants in this between-subjects study indicated that vibrotactile feedback combined with synthetic reflexes proved significantly more advantageous than the standard prosthesis in several of the task milestones. In addition, vibrotactile feedback and synthetic reflexes improved grasp placement compared to only synthetic reflexes or pressure feedback combined with synthetic reflexes. These results indicate that both autonomous controllers and haptic feedback facilitate success in dexterous tasks without vision, and that the type of haptic display matters.
|
1604.02949
|
Marin\^es Guerreiro
|
J. J. Bernal, M. Guerreiro, J. J. Sim\'on
|
Ds-bounds for cyclic codes: new bounds for abelian codes
|
Submitted
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we develop a technique to extend any bound for cyclic codes
constructed from its defining sets (ds-bounds) to abelian (or multivariate)
codes. We use this technique to improve the searching of new bounds for abelian
codes.
|
[
{
"created": "Mon, 11 Apr 2016 13:40:24 GMT",
"version": "v1"
}
] |
2016-04-12
|
[
[
"Bernal",
"J. J.",
""
],
[
"Guerreiro",
"M.",
""
],
[
"Simón",
"J. J.",
""
]
] |
In this paper we develop a technique to extend any bound for cyclic codes constructed from its defining sets (ds-bounds) to abelian (or multivariate) codes. We use this technique to improve the searching of new bounds for abelian codes.
|
2402.02877
|
Nataliia Bielova
|
Cristiana Santos, Nataliia Bielova (PRIVATICS), Vincent Roca
(PRIVATICS), Mathieu Cunche (PRIVATICS), Gilles Mertens (PRIVATICS), Karel
Kubicek (ETHZ), Hamed Haddadi
|
Feedback to the European Data Protection Board's Guidelines 2/2023 on
Technical Scope of Art. 5(3) of ePrivacy Directive
| null | null | null | null |
cs.CR cs.CY cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We very much welcome the EDPB's Guidelines. Please find hereunder our
feedback to the Guidelines 2/2023 on Technical Scope of Art. 5(3) of ePrivacy
Directive. Our comments are presented after a quotation from the proposed text
by the EDPB in a box.
|
[
{
"created": "Mon, 5 Feb 2024 10:45:39 GMT",
"version": "v1"
}
] |
2024-02-06
|
[
[
"Santos",
"Cristiana",
"",
"PRIVATICS"
],
[
"Bielova",
"Nataliia",
"",
"PRIVATICS"
],
[
"Roca",
"Vincent",
"",
"PRIVATICS"
],
[
"Cunche",
"Mathieu",
"",
"PRIVATICS"
],
[
"Mertens",
"Gilles",
"",
"PRIVATICS"
],
[
"Kubicek",
"Karel",
"",
"ETHZ"
],
[
"Haddadi",
"Hamed",
""
]
] |
We very much welcome the EDPB's Guidelines. Please find hereunder our feedback to the Guidelines 2/2023 on Technical Scope of Art. 5(3) of ePrivacy Directive. Our comments are presented after a quotation from the proposed text by the EDPB in a box.
|
2408.06152
|
Xinqi Jin
|
Xinqi Jin, Zhui Zhu, Xikai Sun, Fan Dang, Jiangchuan Liu, Jingao Xu,
Kebin Liu, Xinlei Chen, Yunhao Liu
|
Palantir: Towards Efficient Super Resolution for Ultra-high-definition
Live Streaming
| null | null | null | null |
cs.MM cs.AI cs.CV cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neural enhancement through super-resolution deep neural networks opens up new
possibilities for ultra-high-definition live streaming over existing encoding
and networking infrastructure. Yet, the heavy SR DNN inference overhead leads
to severe deployment challenges. To reduce the overhead, existing systems
propose to apply DNN-based SR only on selected anchor frames while upscaling
non-anchor frames via the lightweight reusing-based SR approach. However,
frame-level scheduling is coarse-grained and fails to deliver optimal
efficiency. In this work, we propose Palantir, the first neural-enhanced UHD
live streaming system with fine-grained patch-level scheduling. In the
presented solutions, two novel techniques are incorporated to make good
scheduling decisions for inference overhead optimization and reduce the
scheduling latency. Firstly, under the guidance of our pioneering and
theoretical analysis, Palantir constructs a directed acyclic graph (DAG) for
lightweight yet accurate quality estimation under any possible anchor patch
set. Secondly, to further optimize the scheduling latency, Palantir improves
parallelizability by refactoring the computation subprocedure of the estimation
process into a sparse matrix-matrix multiplication operation. The evaluation
results suggest that Palantir incurs a negligible scheduling latency accounting
for less than 5.7% of the end-to-end latency requirement. When compared to the
state-of-the-art real-time frame-level scheduling strategy, Palantir reduces
the energy overhead of SR-integrated mobile clients by 38.1% at most (and 22.4%
on average) and the monetary costs of cloud-based SR by 80.1% at most (and
38.4% on average).
|
[
{
"created": "Mon, 12 Aug 2024 13:48:06 GMT",
"version": "v1"
}
] |
2024-08-13
|
[
[
"Jin",
"Xinqi",
""
],
[
"Zhu",
"Zhui",
""
],
[
"Sun",
"Xikai",
""
],
[
"Dang",
"Fan",
""
],
[
"Liu",
"Jiangchuan",
""
],
[
"Xu",
"Jingao",
""
],
[
"Liu",
"Kebin",
""
],
[
"Chen",
"Xinlei",
""
],
[
"Liu",
"Yunhao",
""
]
] |
Neural enhancement through super-resolution deep neural networks opens up new possibilities for ultra-high-definition live streaming over existing encoding and networking infrastructure. Yet, the heavy SR DNN inference overhead leads to severe deployment challenges. To reduce the overhead, existing systems propose to apply DNN-based SR only on selected anchor frames while upscaling non-anchor frames via the lightweight reusing-based SR approach. However, frame-level scheduling is coarse-grained and fails to deliver optimal efficiency. In this work, we propose Palantir, the first neural-enhanced UHD live streaming system with fine-grained patch-level scheduling. In the presented solutions, two novel techniques are incorporated to make good scheduling decisions for inference overhead optimization and reduce the scheduling latency. Firstly, under the guidance of our pioneering and theoretical analysis, Palantir constructs a directed acyclic graph (DAG) for lightweight yet accurate quality estimation under any possible anchor patch set. Secondly, to further optimize the scheduling latency, Palantir improves parallelizability by refactoring the computation subprocedure of the estimation process into a sparse matrix-matrix multiplication operation. The evaluation results suggest that Palantir incurs a negligible scheduling latency accounting for less than 5.7% of the end-to-end latency requirement. When compared to the state-of-the-art real-time frame-level scheduling strategy, Palantir reduces the energy overhead of SR-integrated mobile clients by 38.1% at most (and 22.4% on average) and the monetary costs of cloud-based SR by 80.1% at most (and 38.4% on average).
|
1109.3781
|
Zhongkui Li
|
Zhongkui Li, Zhisheng Duan, Lihua Xie, Xiangdong Liu
|
Distributed Robust Control of Linear Multi-Agent Systems with Parameter
Uncertainties
|
17 pages, 3 figures. Submitted to International Journal of Robust and
Nonlinear Control
| null | null | null |
cs.SY math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper considers the distributed robust control problems of uncertain
linear multi-agent systems with undirected communication topologies. It is
assumed that the agents have identical nominal dynamics while subject to
different norm-bounded parameter uncertainties, leading to weakly heterogeneous
multi-agent systems. Distributed controllers are designed for both continuous-
and discrete-time multi-agent systems, based on the relative states of
neighboring agents and a subset of absolute states of the agents. It is shown
for both the continuous- and discrete-time cases that the distributed robust
control problems under such controllers in the sense of quadratic stability are
equivalent to the $H_\infty$ control problems of a set of decoupled linear
systems having the same dimensions as a single agent. A two-step algorithm is
presented to construct the distributed controller for the continuous-time case,
which does not involve any conservatism and meanwhile decouples the feedback
gain design from the communication topology. Furthermore, a sufficient
existence condition in terms of linear matrix inequalities is derived for the
distributed discrete-time controller. Finally, the distributed robust
$H_\infty$ control problems of uncertain linear multi-agent systems subject to
external disturbances are discussed.
|
[
{
"created": "Sat, 17 Sep 2011 14:01:20 GMT",
"version": "v1"
}
] |
2011-09-20
|
[
[
"Li",
"Zhongkui",
""
],
[
"Duan",
"Zhisheng",
""
],
[
"Xie",
"Lihua",
""
],
[
"Liu",
"Xiangdong",
""
]
] |
This paper considers the distributed robust control problems of uncertain linear multi-agent systems with undirected communication topologies. It is assumed that the agents have identical nominal dynamics while subject to different norm-bounded parameter uncertainties, leading to weakly heterogeneous multi-agent systems. Distributed controllers are designed for both continuous- and discrete-time multi-agent systems, based on the relative states of neighboring agents and a subset of absolute states of the agents. It is shown for both the continuous- and discrete-time cases that the distributed robust control problems under such controllers in the sense of quadratic stability are equivalent to the $H_\infty$ control problems of a set of decoupled linear systems having the same dimensions as a single agent. A two-step algorithm is presented to construct the distributed controller for the continuous-time case, which does not involve any conservatism and meanwhile decouples the feedback gain design from the communication topology. Furthermore, a sufficient existence condition in terms of linear matrix inequalities is derived for the distributed discrete-time controller. Finally, the distributed robust $H_\infty$ control problems of uncertain linear multi-agent systems subject to external disturbances are discussed.
|
1908.02150
|
Moslem Azamfar
|
Jay Lee, Jaskaran Singh, Moslem Azamfar
|
Industrial Artificial Intelligence
| null | null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Artificial Intelligence (AI) is a cognitive science to enables human to
explore many intelligent ways to model our sensing and reasoning processes.
Industrial AI is a systematic discipline to enable engineers to systematically
develop and deploy AI algorithms with repeating and consistent successes. In
this paper, the key enablers for this transformative technology along with
their significant advantages are discussed. In addition, this research explains
Lighthouse Factories as an emerging status applying to the top manufacturers
that have implemented Industrial AI in their manufacturing ecosystem and gained
significant financial benefits. It is believed that this research will work as
a guideline and roadmap for researchers and industries towards the real-world
implementation of Industrial AI.
|
[
{
"created": "Sun, 4 Aug 2019 05:19:43 GMT",
"version": "v1"
},
{
"created": "Tue, 20 Aug 2019 01:06:22 GMT",
"version": "v2"
},
{
"created": "Tue, 22 Oct 2019 02:23:42 GMT",
"version": "v3"
}
] |
2019-10-23
|
[
[
"Lee",
"Jay",
""
],
[
"Singh",
"Jaskaran",
""
],
[
"Azamfar",
"Moslem",
""
]
] |
Artificial Intelligence (AI) is a cognitive science to enables human to explore many intelligent ways to model our sensing and reasoning processes. Industrial AI is a systematic discipline to enable engineers to systematically develop and deploy AI algorithms with repeating and consistent successes. In this paper, the key enablers for this transformative technology along with their significant advantages are discussed. In addition, this research explains Lighthouse Factories as an emerging status applying to the top manufacturers that have implemented Industrial AI in their manufacturing ecosystem and gained significant financial benefits. It is believed that this research will work as a guideline and roadmap for researchers and industries towards the real-world implementation of Industrial AI.
|
2003.11641
|
Rafet Durgut
|
Rafet Durgut
|
Improved Binary Artificial Bee Colony Algorithm
| null | null | null | null |
cs.NE cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Artificial Bee Colony (ABC) algorithm is an evolutionary optimization
algorithm based on swarm intelligence and inspired by the honey bees' food
search behavior. Since the ABC algorithm has been developed to achieve optimal
solutions by searching in the continuous search space, modification is required
to apply this method to binary optimization problems. In this paper, we improve
the ABC algorithm to solve binary optimization problems and call it the
improved binary Artificial Bee Colony (ibinABC). The proposed method consists
of an update mechanism based on fitness values and processing different number
of decision variables. Thus, we aim to prevent the ABC algorithm from getting
stuck in a local minimum by increasing its exploration ability. We compare the
ibinABC algorithm with three variants of the ABC and other meta-heuristic
algorithms in the literature. For comparison, we use the wellknown OR-Library
dataset containing 15 problem instances prepared for the uncapacitated facility
location problem. Computational results show that the proposed method is
superior to other methods in terms of convergence speed and robustness. The
source code of the algorithm will be available on GitHub after reviewing
process
|
[
{
"created": "Thu, 12 Mar 2020 17:22:52 GMT",
"version": "v1"
},
{
"created": "Mon, 20 Apr 2020 17:27:22 GMT",
"version": "v2"
}
] |
2020-04-21
|
[
[
"Durgut",
"Rafet",
""
]
] |
The Artificial Bee Colony (ABC) algorithm is an evolutionary optimization algorithm based on swarm intelligence and inspired by the honey bees' food search behavior. Since the ABC algorithm has been developed to achieve optimal solutions by searching in the continuous search space, modification is required to apply this method to binary optimization problems. In this paper, we improve the ABC algorithm to solve binary optimization problems and call it the improved binary Artificial Bee Colony (ibinABC). The proposed method consists of an update mechanism based on fitness values and processing different number of decision variables. Thus, we aim to prevent the ABC algorithm from getting stuck in a local minimum by increasing its exploration ability. We compare the ibinABC algorithm with three variants of the ABC and other meta-heuristic algorithms in the literature. For comparison, we use the wellknown OR-Library dataset containing 15 problem instances prepared for the uncapacitated facility location problem. Computational results show that the proposed method is superior to other methods in terms of convergence speed and robustness. The source code of the algorithm will be available on GitHub after reviewing process
|
2307.11031
|
Neel Guha
|
Neel Guha, Mayee F. Chen, Kush Bhatia, Azalia Mirhoseini, Frederic
Sala, Christopher R\'e
|
Embroid: Unsupervised Prediction Smoothing Can Improve Few-Shot
Classification
|
38 pages, 22 figures, 8 tables
| null | null | null |
cs.LG cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Recent work has shown that language models' (LMs) prompt-based learning
capabilities make them well suited for automating data labeling in domains
where manual annotation is expensive. The challenge is that while writing an
initial prompt is cheap, improving a prompt is costly -- practitioners often
require significant labeled data in order to evaluate the impact of prompt
modifications. Our work asks whether it is possible to improve prompt-based
learning without additional labeled data. We approach this problem by
attempting to modify the predictions of a prompt, rather than the prompt
itself. Our intuition is that accurate predictions should also be consistent:
samples which are similar under some feature representation should receive the
same prompt prediction. We propose Embroid, a method which computes multiple
representations of a dataset under different embedding functions, and uses the
consistency between the LM predictions for neighboring samples to identify
mispredictions. Embroid then uses these neighborhoods to create additional
predictions for each sample, and combines these predictions with a simple
latent variable graphical model in order to generate a final corrected
prediction. In addition to providing a theoretical analysis of Embroid, we
conduct a rigorous empirical evaluation across six different LMs and up to 95
different tasks. We find that (1) Embroid substantially improves performance
over original prompts (e.g., by an average of 7.3 points on GPT-JT), (2) also
realizes improvements for more sophisticated prompting strategies (e.g.,
chain-of-thought), and (3) can be specialized to domains like law through the
embedding functions.
|
[
{
"created": "Thu, 20 Jul 2023 17:07:28 GMT",
"version": "v1"
}
] |
2023-07-21
|
[
[
"Guha",
"Neel",
""
],
[
"Chen",
"Mayee F.",
""
],
[
"Bhatia",
"Kush",
""
],
[
"Mirhoseini",
"Azalia",
""
],
[
"Sala",
"Frederic",
""
],
[
"Ré",
"Christopher",
""
]
] |
Recent work has shown that language models' (LMs) prompt-based learning capabilities make them well suited for automating data labeling in domains where manual annotation is expensive. The challenge is that while writing an initial prompt is cheap, improving a prompt is costly -- practitioners often require significant labeled data in order to evaluate the impact of prompt modifications. Our work asks whether it is possible to improve prompt-based learning without additional labeled data. We approach this problem by attempting to modify the predictions of a prompt, rather than the prompt itself. Our intuition is that accurate predictions should also be consistent: samples which are similar under some feature representation should receive the same prompt prediction. We propose Embroid, a method which computes multiple representations of a dataset under different embedding functions, and uses the consistency between the LM predictions for neighboring samples to identify mispredictions. Embroid then uses these neighborhoods to create additional predictions for each sample, and combines these predictions with a simple latent variable graphical model in order to generate a final corrected prediction. In addition to providing a theoretical analysis of Embroid, we conduct a rigorous empirical evaluation across six different LMs and up to 95 different tasks. We find that (1) Embroid substantially improves performance over original prompts (e.g., by an average of 7.3 points on GPT-JT), (2) also realizes improvements for more sophisticated prompting strategies (e.g., chain-of-thought), and (3) can be specialized to domains like law through the embedding functions.
|
2407.02606
|
Yuan Sun
|
Yuan Sun, Jorge Ortiz
|
An AI-Based System Utilizing IoT-Enabled Ambient Sensors and LLMs for
Complex Activity Tracking
| null | null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Complex activity recognition plays an important role in elderly care
assistance. However, the reasoning ability of edge devices is constrained by
the classic machine learning model capacity. In this paper, we present a
non-invasive ambient sensing system that can detect multiple activities and
apply large language models (LLMs) to reason the activity sequences. This
method effectively combines edge devices and LLMs to help elderly people in
their daily activities, such as reminding them to take pills or handling
emergencies like falls. The LLM-based edge device can also serve as an
interface to interact with elderly people, especially with memory issue,
assisting them in their daily lives. By deploying such a system, we believe
that the smart sensing system can improve the quality of life for older people
and provide more efficient protection
|
[
{
"created": "Tue, 2 Jul 2024 18:46:05 GMT",
"version": "v1"
}
] |
2024-07-04
|
[
[
"Sun",
"Yuan",
""
],
[
"Ortiz",
"Jorge",
""
]
] |
Complex activity recognition plays an important role in elderly care assistance. However, the reasoning ability of edge devices is constrained by the classic machine learning model capacity. In this paper, we present a non-invasive ambient sensing system that can detect multiple activities and apply large language models (LLMs) to reason the activity sequences. This method effectively combines edge devices and LLMs to help elderly people in their daily activities, such as reminding them to take pills or handling emergencies like falls. The LLM-based edge device can also serve as an interface to interact with elderly people, especially with memory issue, assisting them in their daily lives. By deploying such a system, we believe that the smart sensing system can improve the quality of life for older people and provide more efficient protection
|
2206.05668
|
Jiaxiang Li
|
Jiaxiang Li and Shiqian Ma
|
Federated Learning on Riemannian Manifolds
| null | null | null | null |
cs.LG math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Federated learning (FL) has found many important applications in
smart-phone-APP based machine learning applications. Although many algorithms
have been studied for FL, to the best of our knowledge, algorithms for FL with
nonconvex constraints have not been studied. This paper studies FL over
Riemannian manifolds, which finds important applications such as federated PCA
and federated kPCA. We propose a Riemannian federated SVRG (RFedSVRG) method to
solve federated optimization over Riemannian manifolds. We analyze its
convergence rate under different scenarios. Numerical experiments are conducted
to compare RFedSVRG with the Riemannian counterparts of FedAvg and FedProx. We
observed from the numerical experiments that the advantages of RFedSVRG are
significant.
|
[
{
"created": "Sun, 12 Jun 2022 05:41:23 GMT",
"version": "v1"
}
] |
2022-06-14
|
[
[
"Li",
"Jiaxiang",
""
],
[
"Ma",
"Shiqian",
""
]
] |
Federated learning (FL) has found many important applications in smart-phone-APP based machine learning applications. Although many algorithms have been studied for FL, to the best of our knowledge, algorithms for FL with nonconvex constraints have not been studied. This paper studies FL over Riemannian manifolds, which finds important applications such as federated PCA and federated kPCA. We propose a Riemannian federated SVRG (RFedSVRG) method to solve federated optimization over Riemannian manifolds. We analyze its convergence rate under different scenarios. Numerical experiments are conducted to compare RFedSVRG with the Riemannian counterparts of FedAvg and FedProx. We observed from the numerical experiments that the advantages of RFedSVRG are significant.
|
2401.14749
|
Vasiliy Stanislavovich Usatyuk
|
Vasiliy Usatyuk, Denis Sapozhnikov, Sergey Egorov
|
Topology-Aware Exploration of Energy-Based Models Equilibrium: Toric
QC-LDPC Codes and Hyperbolic MET QC-LDPC Codes
|
16 pages, 29 figures. arXiv admin note: text overlap with
arXiv:2307.15778
| null | null | null |
cs.IT cs.AI cs.CV cs.LG cs.SY eess.SY math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents a method for achieving equilibrium in the ISING
Hamiltonian when confronted with unevenly distributed charges on an irregular
grid. Employing (Multi-Edge) QC-LDPC codes and the Boltzmann machine, our
approach involves dimensionally expanding the system, substituting charges with
circulants, and representing distances through circulant shifts. This results
in a systematic mapping of the charge system onto a space, transforming the
irregular grid into a uniform configuration, applicable to Torical and Circular
Hyperboloid Topologies. The paper covers fundamental definitions and notations
related to QC-LDPC Codes, Multi-Edge QC-LDPC codes, and the Boltzmann machine.
It explores the marginalization problem in code on the graph probabilistic
models for evaluating the partition function, encompassing exact and
approximate estimation techniques. Rigorous proof is provided for the
attainability of equilibrium states for the Boltzmann machine under Torical and
Circular Hyperboloid, paving the way for the application of our methodology.
Practical applications of our approach are investigated in Finite Geometry
QC-LDPC Codes, specifically in Material Science. The paper further explores its
effectiveness in the realm of Natural Language Processing Transformer Deep
Neural Networks, examining Generalized Repeat Accumulate Codes,
Spatially-Coupled and Cage-Graph QC-LDPC Codes. The versatile and impactful
nature of our topology-aware hardware-efficient quasi-cycle codes equilibrium
method is showcased across diverse scientific domains without the use of
specific section delineations.
|
[
{
"created": "Fri, 26 Jan 2024 10:14:10 GMT",
"version": "v1"
}
] |
2024-01-29
|
[
[
"Usatyuk",
"Vasiliy",
""
],
[
"Sapozhnikov",
"Denis",
""
],
[
"Egorov",
"Sergey",
""
]
] |
This paper presents a method for achieving equilibrium in the ISING Hamiltonian when confronted with unevenly distributed charges on an irregular grid. Employing (Multi-Edge) QC-LDPC codes and the Boltzmann machine, our approach involves dimensionally expanding the system, substituting charges with circulants, and representing distances through circulant shifts. This results in a systematic mapping of the charge system onto a space, transforming the irregular grid into a uniform configuration, applicable to Torical and Circular Hyperboloid Topologies. The paper covers fundamental definitions and notations related to QC-LDPC Codes, Multi-Edge QC-LDPC codes, and the Boltzmann machine. It explores the marginalization problem in code on the graph probabilistic models for evaluating the partition function, encompassing exact and approximate estimation techniques. Rigorous proof is provided for the attainability of equilibrium states for the Boltzmann machine under Torical and Circular Hyperboloid, paving the way for the application of our methodology. Practical applications of our approach are investigated in Finite Geometry QC-LDPC Codes, specifically in Material Science. The paper further explores its effectiveness in the realm of Natural Language Processing Transformer Deep Neural Networks, examining Generalized Repeat Accumulate Codes, Spatially-Coupled and Cage-Graph QC-LDPC Codes. The versatile and impactful nature of our topology-aware hardware-efficient quasi-cycle codes equilibrium method is showcased across diverse scientific domains without the use of specific section delineations.
|
2310.15529
|
Sagar Sudhakara
|
Sagar Sudhakara
|
Symmetric Strategies for Multi-Access IoT Network Optimization: A Common
Information Approach
| null | null | null | null |
cs.MA cs.GT
|
http://creativecommons.org/licenses/by/4.0/
|
In the context of IoT deployments, a multitude of devices concurrently
require network access to transmit data over a shared communication channel.
Employing symmetric strategies can effectively facilitate the collaborative use
of the communication medium among these devices. By adopting such strategies,
devices collectively optimize their transmission parameters, resulting in
minimized collisions and enhanced overall network throughput.
Our primary focus centers on the formulation of symmetric (i.e., identical)
strategies for the sensors, aiming to optimize a finite horizon team objective.
The imposition of symmetric strategies introduces novel facets and complexities
into the team problem. To address this, we embrace the common information
approach and adapt it to accommodate the use of symmetric strategies. This
adaptation yields a dynamic programming framework grounded in common
information, wherein each step entails the minimization of a single function
mapping from an agent's private information space to the space of probability
distributions over possible actions.
Our proposed policy/method incurs a reduced cumulative cost compared to other
methods employing symmetric strategies, a point substantiated by our simulation
results.
|
[
{
"created": "Tue, 24 Oct 2023 05:21:06 GMT",
"version": "v1"
}
] |
2023-10-25
|
[
[
"Sudhakara",
"Sagar",
""
]
] |
In the context of IoT deployments, a multitude of devices concurrently require network access to transmit data over a shared communication channel. Employing symmetric strategies can effectively facilitate the collaborative use of the communication medium among these devices. By adopting such strategies, devices collectively optimize their transmission parameters, resulting in minimized collisions and enhanced overall network throughput. Our primary focus centers on the formulation of symmetric (i.e., identical) strategies for the sensors, aiming to optimize a finite horizon team objective. The imposition of symmetric strategies introduces novel facets and complexities into the team problem. To address this, we embrace the common information approach and adapt it to accommodate the use of symmetric strategies. This adaptation yields a dynamic programming framework grounded in common information, wherein each step entails the minimization of a single function mapping from an agent's private information space to the space of probability distributions over possible actions. Our proposed policy/method incurs a reduced cumulative cost compared to other methods employing symmetric strategies, a point substantiated by our simulation results.
|
2106.14003
|
Alexandros Filotheou
|
Alexandros Filotheou
|
Correspondenceless scan-to-map-scan matching of homoriented 2D scans for
mobile robot localisation
|
19 pages, 19 figures
|
Rob. Auton. Syst. 149 (2022) 103957
|
10.1016/j.robot.2021.103957
| null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
The objective of this study is improving the location estimate of a mobile
robot capable of motion on a plane and mounted with a conventional 2D LIDAR
sensor, given an initial guess for its location on a 2D map of its
surroundings. Documented herein is the theoretical reasoning behind solving a
matching problem between two homoriented 2D scans, one derived from the robot's
physical sensor and one derived by simulating its operation within the map, in
a manner that does not require the establishing of correspondences between
their constituting rays. Two results are proved and subsequently shown through
experiments. The first is that the true position of the sensor can be recovered
with arbitrary precision when the physical sensor reports faultless
measurements and there is no discrepancy between the environment the robot
operates in and its perception of it by the robot. The second is that when
either is affected by disturbance, the location estimate is bound in a
neighbourhood of the true location whose radius is proportional to the
affecting disturbance.
|
[
{
"created": "Sat, 26 Jun 2021 11:41:19 GMT",
"version": "v1"
}
] |
2023-07-27
|
[
[
"Filotheou",
"Alexandros",
""
]
] |
The objective of this study is improving the location estimate of a mobile robot capable of motion on a plane and mounted with a conventional 2D LIDAR sensor, given an initial guess for its location on a 2D map of its surroundings. Documented herein is the theoretical reasoning behind solving a matching problem between two homoriented 2D scans, one derived from the robot's physical sensor and one derived by simulating its operation within the map, in a manner that does not require the establishing of correspondences between their constituting rays. Two results are proved and subsequently shown through experiments. The first is that the true position of the sensor can be recovered with arbitrary precision when the physical sensor reports faultless measurements and there is no discrepancy between the environment the robot operates in and its perception of it by the robot. The second is that when either is affected by disturbance, the location estimate is bound in a neighbourhood of the true location whose radius is proportional to the affecting disturbance.
|
2402.00048
|
Bruno Sartini
|
Bruno Sartini
|
IICONGRAPH: improved Iconographic and Iconological Statements in
Knowledge Graphs
|
18 pages
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Iconography and iconology are fundamental domains when it comes to
understanding artifacts of cultural heritage. Iconography deals with the study
and interpretation of visual elements depicted in artifacts and their
symbolism, while iconology delves deeper, exploring the underlying cultural and
historical meanings. Despite the advances in representing cultural heritage
with Linked Open Data (LOD), recent studies show persistent gaps in the
representation of iconographic and iconological statements in current knowledge
graphs (KGs). To address them, this paper presents IICONGRAPH, a KG that was
created by refining and extending the iconographic and iconological statements
of ArCo (the Italian KG of cultural heritage) and Wikidata. The development of
IICONGRAPH was also driven by a series of requirements emerging from research
case studies that were unattainable in the non-reengineered versions of the
KGs. The evaluation results demonstrate that IICONGRAPH not only outperforms
ArCo and Wikidata through domain-specific assessments from the literature but
also serves as a robust platform for addressing the formulated research
questions. IICONGRAPH is released and documented in accordance with the FAIR
principles to guarantee the resource's reusability. The algorithms used to
create it and assess the research questions have also been made available to
ensure transparency and reproducibility. While future work focuses on ingesting
more data into the KG, and on implementing it as a backbone of LLM-based
question answering systems, the current version of IICONGRAPH still emerges as
a valuable asset, contributing to the evolving landscape of cultural heritage
representation within Knowledge Graphs, the Semantic Web, and beyond.
|
[
{
"created": "Wed, 24 Jan 2024 15:44:16 GMT",
"version": "v1"
}
] |
2024-02-02
|
[
[
"Sartini",
"Bruno",
""
]
] |
Iconography and iconology are fundamental domains when it comes to understanding artifacts of cultural heritage. Iconography deals with the study and interpretation of visual elements depicted in artifacts and their symbolism, while iconology delves deeper, exploring the underlying cultural and historical meanings. Despite the advances in representing cultural heritage with Linked Open Data (LOD), recent studies show persistent gaps in the representation of iconographic and iconological statements in current knowledge graphs (KGs). To address them, this paper presents IICONGRAPH, a KG that was created by refining and extending the iconographic and iconological statements of ArCo (the Italian KG of cultural heritage) and Wikidata. The development of IICONGRAPH was also driven by a series of requirements emerging from research case studies that were unattainable in the non-reengineered versions of the KGs. The evaluation results demonstrate that IICONGRAPH not only outperforms ArCo and Wikidata through domain-specific assessments from the literature but also serves as a robust platform for addressing the formulated research questions. IICONGRAPH is released and documented in accordance with the FAIR principles to guarantee the resource's reusability. The algorithms used to create it and assess the research questions have also been made available to ensure transparency and reproducibility. While future work focuses on ingesting more data into the KG, and on implementing it as a backbone of LLM-based question answering systems, the current version of IICONGRAPH still emerges as a valuable asset, contributing to the evolving landscape of cultural heritage representation within Knowledge Graphs, the Semantic Web, and beyond.
|
2306.11489
|
Linyao Yang
|
Linyao Yang and Hongyang Chen and Zhao Li and Xiao Ding and Xindong Wu
|
Give Us the Facts: Enhancing Large Language Models with Knowledge Graphs
for Fact-aware Language Modeling
| null | null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, ChatGPT, a representative large language model (LLM), has gained
considerable attention due to its powerful emergent abilities. Some researchers
suggest that LLMs could potentially replace structured knowledge bases like
knowledge graphs (KGs) and function as parameterized knowledge bases. However,
while LLMs are proficient at learning probabilistic language patterns based on
large corpus and engaging in conversations with humans, they, like previous
smaller pre-trained language models (PLMs), still have difficulty in recalling
facts while generating knowledge-grounded contents. To overcome these
limitations, researchers have proposed enhancing data-driven PLMs with
knowledge-based KGs to incorporate explicit factual knowledge into PLMs, thus
improving their performance to generate texts requiring factual knowledge and
providing more informed responses to user queries. This paper reviews the
studies on enhancing PLMs with KGs, detailing existing knowledge graph enhanced
pre-trained language models (KGPLMs) as well as their applications. Inspired by
existing studies on KGPLM, this paper proposes to enhance LLMs with KGs by
developing knowledge graph-enhanced large language models (KGLLMs). KGLLM
provides a solution to enhance LLMs' factual reasoning ability, opening up new
avenues for LLM research.
|
[
{
"created": "Tue, 20 Jun 2023 12:21:06 GMT",
"version": "v1"
},
{
"created": "Tue, 30 Jan 2024 12:11:45 GMT",
"version": "v2"
}
] |
2024-01-31
|
[
[
"Yang",
"Linyao",
""
],
[
"Chen",
"Hongyang",
""
],
[
"Li",
"Zhao",
""
],
[
"Ding",
"Xiao",
""
],
[
"Wu",
"Xindong",
""
]
] |
Recently, ChatGPT, a representative large language model (LLM), has gained considerable attention due to its powerful emergent abilities. Some researchers suggest that LLMs could potentially replace structured knowledge bases like knowledge graphs (KGs) and function as parameterized knowledge bases. However, while LLMs are proficient at learning probabilistic language patterns based on large corpus and engaging in conversations with humans, they, like previous smaller pre-trained language models (PLMs), still have difficulty in recalling facts while generating knowledge-grounded contents. To overcome these limitations, researchers have proposed enhancing data-driven PLMs with knowledge-based KGs to incorporate explicit factual knowledge into PLMs, thus improving their performance to generate texts requiring factual knowledge and providing more informed responses to user queries. This paper reviews the studies on enhancing PLMs with KGs, detailing existing knowledge graph enhanced pre-trained language models (KGPLMs) as well as their applications. Inspired by existing studies on KGPLM, this paper proposes to enhance LLMs with KGs by developing knowledge graph-enhanced large language models (KGLLMs). KGLLM provides a solution to enhance LLMs' factual reasoning ability, opening up new avenues for LLM research.
|
1701.03568
|
Jagadeesh Harshan
|
J. Harshan, Sang-Yoon Chang, Yih-Chun Hu
|
Insider-Attacks on Physical-Layer Group Secret-Key Generation in
Wireless Networks
|
To appear in the Proc. of IEEE WCNC 2017
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Physical-layer group secret-key (GSK) generation is an effective way of
generating secret keys in wireless networks, wherein the nodes exploit inherent
randomness in the wireless channels to generate group keys, which are
subsequently applied to secure messages while broadcasting, relaying, and other
network-level communications. While existing GSK protocols focus on securing
the common source of randomness from external eavesdroppers, they assume that
the legitimate nodes of the group are trusted. In this paper, we address
insider attacks from the legitimate participants of the wireless network during
the key generation process. Instead of addressing conspicuous attacks such as
switching-off communication, injecting noise, or denying consensus on group
keys, we introduce stealth attacks that can go undetected against
state-of-the-art GSK schemes. We propose two forms of attacks, namely: (i)
different-key attacks, wherein an insider attempts to generate different keys
at different nodes, especially across nodes that are out of range so that they
fail to recover group messages despite possessing the group key, and (ii)
low-rate key attacks, wherein an insider alters the common source of randomness
so as to reduce the key-rate. We also discuss various detection techniques,
which are based on detecting anomalies and inconsistencies on the channel
measurements at the legitimate nodes. Through simulations we show that GSK
generation schemes are vulnerable to insider-threats, especially on topologies
that cannot support additional secure links between neighbouring nodes to
verify the attacks.
|
[
{
"created": "Fri, 13 Jan 2017 05:51:38 GMT",
"version": "v1"
}
] |
2017-01-16
|
[
[
"Harshan",
"J.",
""
],
[
"Chang",
"Sang-Yoon",
""
],
[
"Hu",
"Yih-Chun",
""
]
] |
Physical-layer group secret-key (GSK) generation is an effective way of generating secret keys in wireless networks, wherein the nodes exploit inherent randomness in the wireless channels to generate group keys, which are subsequently applied to secure messages while broadcasting, relaying, and other network-level communications. While existing GSK protocols focus on securing the common source of randomness from external eavesdroppers, they assume that the legitimate nodes of the group are trusted. In this paper, we address insider attacks from the legitimate participants of the wireless network during the key generation process. Instead of addressing conspicuous attacks such as switching-off communication, injecting noise, or denying consensus on group keys, we introduce stealth attacks that can go undetected against state-of-the-art GSK schemes. We propose two forms of attacks, namely: (i) different-key attacks, wherein an insider attempts to generate different keys at different nodes, especially across nodes that are out of range so that they fail to recover group messages despite possessing the group key, and (ii) low-rate key attacks, wherein an insider alters the common source of randomness so as to reduce the key-rate. We also discuss various detection techniques, which are based on detecting anomalies and inconsistencies on the channel measurements at the legitimate nodes. Through simulations we show that GSK generation schemes are vulnerable to insider-threats, especially on topologies that cannot support additional secure links between neighbouring nodes to verify the attacks.
|
2011.02787
|
Abhishek Gupta
|
Abhishek Gupta (1 and 2), Alexandrine Royer (1 and 3), Victoria Heath
(1 and 4), Connor Wright (1 and 5), Camylle Lanteigne (1, 6, and 7), Allison
Cohen (1, 8, and 9), Marianna Bergamaschi Ganapini (1 and 10), Muriam Fancy
(1, 11, and 12), Erick Galinkin (1 and 13), Ryan Khurana (1), Mo Akif (1),
Renjie Butalid (1), Falaah Arif Khan (1, 14, and 15), Masa Sweidan (1 and
16), Audrey Balogh (1 and 16) ((1) Montreal AI Ethics Institute, (2)
Microsoft, (3) University of Cambridge, (4) Creative Commons, (5) University
of Exeter, (6) Concordia University, (7) Algora Lab, (8) AI Global, (9) Mila,
(10) Union College, (11) University of Toronto, (12) University of Ottawa,
(13) Rapid7, (14) NYU Center for Responsible AI, (15) IIIT Hyderabad, (16)
McGill University)
|
The State of AI Ethics Report (October 2020)
|
158 pages
| null | null | null |
cs.CY cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The 2nd edition of the Montreal AI Ethics Institute's The State of AI Ethics
captures the most relevant developments in the field of AI Ethics since July
2020. This report aims to help anyone, from machine learning experts to human
rights activists and policymakers, quickly digest and understand the
ever-changing developments in the field. Through research and article
summaries, as well as expert commentary, this report distills the research and
reporting surrounding various domains related to the ethics of AI, including:
AI and society, bias and algorithmic justice, disinformation, humans and AI,
labor impacts, privacy, risk, and future of AI ethics.
In addition, The State of AI Ethics includes exclusive content written by
world-class AI Ethics experts from universities, research institutes,
consulting firms, and governments. These experts include: Danit Gal (Tech
Advisor, United Nations), Amba Kak (Director of Global Policy and Programs,
NYU's AI Now Institute), Rumman Chowdhury (Global Lead for Responsible AI,
Accenture), Brent Barron (Director of Strategic Projects and Knowledge
Management, CIFAR), Adam Murray (U.S. Diplomat working on tech policy, Chair of
the OECD Network on AI), Thomas Kochan (Professor, MIT Sloan School of
Management), and Katya Klinova (AI and Economy Program Lead, Partnership on
AI).
This report should be used not only as a point of reference and insight on
the latest thinking in the field of AI Ethics, but should also be used as a
tool for introspection as we aim to foster a more nuanced conversation
regarding the impacts of AI on the world.
|
[
{
"created": "Thu, 5 Nov 2020 12:36:16 GMT",
"version": "v1"
}
] |
2020-11-06
|
[
[
"Gupta",
"Abhishek",
"",
"1 and 2"
],
[
"Royer",
"Alexandrine",
"",
"1 and 3"
],
[
"Heath",
"Victoria",
"",
"1 and 4"
],
[
"Wright",
"Connor",
"",
"1 and 5"
],
[
"Lanteigne",
"Camylle",
"",
"1, 6, and 7"
],
[
"Cohen",
"Allison",
"",
"1, 8, and 9"
],
[
"Ganapini",
"Marianna Bergamaschi",
"",
"1 and 10"
],
[
"Fancy",
"Muriam",
"",
"1, 11, and 12"
],
[
"Galinkin",
"Erick",
"",
"1 and 13"
],
[
"Khurana",
"Ryan",
"",
"1, 14, and 15"
],
[
"Akif",
"Mo",
"",
"1, 14, and 15"
],
[
"Butalid",
"Renjie",
"",
"1, 14, and 15"
],
[
"Khan",
"Falaah Arif",
"",
"1, 14, and 15"
],
[
"Sweidan",
"Masa",
"",
"1 and\n 16"
],
[
"Balogh",
"Audrey",
"",
"1 and 16"
]
] |
The 2nd edition of the Montreal AI Ethics Institute's The State of AI Ethics captures the most relevant developments in the field of AI Ethics since July 2020. This report aims to help anyone, from machine learning experts to human rights activists and policymakers, quickly digest and understand the ever-changing developments in the field. Through research and article summaries, as well as expert commentary, this report distills the research and reporting surrounding various domains related to the ethics of AI, including: AI and society, bias and algorithmic justice, disinformation, humans and AI, labor impacts, privacy, risk, and future of AI ethics. In addition, The State of AI Ethics includes exclusive content written by world-class AI Ethics experts from universities, research institutes, consulting firms, and governments. These experts include: Danit Gal (Tech Advisor, United Nations), Amba Kak (Director of Global Policy and Programs, NYU's AI Now Institute), Rumman Chowdhury (Global Lead for Responsible AI, Accenture), Brent Barron (Director of Strategic Projects and Knowledge Management, CIFAR), Adam Murray (U.S. Diplomat working on tech policy, Chair of the OECD Network on AI), Thomas Kochan (Professor, MIT Sloan School of Management), and Katya Klinova (AI and Economy Program Lead, Partnership on AI). This report should be used not only as a point of reference and insight on the latest thinking in the field of AI Ethics, but should also be used as a tool for introspection as we aim to foster a more nuanced conversation regarding the impacts of AI on the world.
|
1907.03595
|
Shuo Zhang
|
Shuo Zhang and Krisztian Balog
|
Recommending Related Tables
| null | null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Tables are an extremely powerful visual and interactive tool for structuring
and manipulating data, making spreadsheet programs one of the most popular
computer applications. In this paper we introduce and address the task of
recommending related tables: given an input table, identifying and returning a
ranked list of relevant tables. One of the many possible application scenarios
for this task is to provide users of a spreadsheet program proactively with
recommendations for related structured content on the Web. At its core, the
related table recommendation task boils down to computing the similarity
between a pair of tables. We develop a theoretically sound framework for
performing table matching. Our approach hinges on the idea of representing
table elements in multiple semantic spaces, and then combining element-level
similarities using a discriminative learning model. Using a purpose-built test
collection from Wikipedia tables, we demonstrate that the proposed approach
delivers state-of-the-art performance.
|
[
{
"created": "Mon, 8 Jul 2019 13:20:28 GMT",
"version": "v1"
},
{
"created": "Thu, 25 Jul 2019 05:03:41 GMT",
"version": "v2"
}
] |
2019-07-26
|
[
[
"Zhang",
"Shuo",
""
],
[
"Balog",
"Krisztian",
""
]
] |
Tables are an extremely powerful visual and interactive tool for structuring and manipulating data, making spreadsheet programs one of the most popular computer applications. In this paper we introduce and address the task of recommending related tables: given an input table, identifying and returning a ranked list of relevant tables. One of the many possible application scenarios for this task is to provide users of a spreadsheet program proactively with recommendations for related structured content on the Web. At its core, the related table recommendation task boils down to computing the similarity between a pair of tables. We develop a theoretically sound framework for performing table matching. Our approach hinges on the idea of representing table elements in multiple semantic spaces, and then combining element-level similarities using a discriminative learning model. Using a purpose-built test collection from Wikipedia tables, we demonstrate that the proposed approach delivers state-of-the-art performance.
|
1608.05104
|
Nasim Souly
|
Nasim Souly and Mubarak Shah
|
Scene Labeling Through Knowledge-Based Rules Employing Constrained
Integer Linear Programing
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Scene labeling task is to segment the image into meaningful regions and
categorize them into classes of objects which comprised the image. Commonly
used methods typically find the local features for each segment and label them
using classifiers. Afterward, labeling is smoothed in order to make sure that
neighboring regions receive similar labels. However, they ignore expressive and
non-local dependencies among regions due to expensive training and inference.
In this paper, we propose to use high-level knowledge regarding rules in the
inference to incorporate dependencies among regions in the image to improve
scores of classification. Towards this aim, we extract these rules from data
and transform them into constraints for Integer Programming to optimize the
structured problem of assigning labels to super-pixels (consequently pixels) of
an image. In addition, we propose to use soft-constraints in some scenarios,
allowing violating the constraint by imposing a penalty, to make the model more
flexible. We assessed our approach on three datasets and obtained promising
results.
|
[
{
"created": "Wed, 17 Aug 2016 21:14:51 GMT",
"version": "v1"
}
] |
2016-08-19
|
[
[
"Souly",
"Nasim",
""
],
[
"Shah",
"Mubarak",
""
]
] |
Scene labeling task is to segment the image into meaningful regions and categorize them into classes of objects which comprised the image. Commonly used methods typically find the local features for each segment and label them using classifiers. Afterward, labeling is smoothed in order to make sure that neighboring regions receive similar labels. However, they ignore expressive and non-local dependencies among regions due to expensive training and inference. In this paper, we propose to use high-level knowledge regarding rules in the inference to incorporate dependencies among regions in the image to improve scores of classification. Towards this aim, we extract these rules from data and transform them into constraints for Integer Programming to optimize the structured problem of assigning labels to super-pixels (consequently pixels) of an image. In addition, we propose to use soft-constraints in some scenarios, allowing violating the constraint by imposing a penalty, to make the model more flexible. We assessed our approach on three datasets and obtained promising results.
|
1301.1873
|
Alexandre Pinlou
|
Pascal Ochem and Alexandre Pinlou
|
Application of entropy compression in pattern avoidance
|
11 pages
| null | null | null |
cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In combinatorics on words, a word $w$ over an alphabet $\Sigma$ is said to
avoid a pattern $p$ over an alphabet $\Delta$ if there is no factor $f$ of $w$
such that $f= (p)$ where $h: \Delta^*\to\Sigma^*$ is a non-erasing morphism. A
pattern $p$ is said to be $k$-avoidable if there exists an infinite word over a
$k$-letter alphabet that avoids $p$. We give a positive answer to Problem 3.3.2
in Lothaire's book "Algebraic combinatorics on words", that is, every pattern
with $k$ variables of length at least $2^k$ (resp. $3\times2^{k-1}$) is
3-avoidable (resp. 2-avoidable). This improves previous bounds due to Bell and
Goh, and Rampersad.
|
[
{
"created": "Wed, 9 Jan 2013 14:46:33 GMT",
"version": "v1"
}
] |
2013-01-10
|
[
[
"Ochem",
"Pascal",
""
],
[
"Pinlou",
"Alexandre",
""
]
] |
In combinatorics on words, a word $w$ over an alphabet $\Sigma$ is said to avoid a pattern $p$ over an alphabet $\Delta$ if there is no factor $f$ of $w$ such that $f= (p)$ where $h: \Delta^*\to\Sigma^*$ is a non-erasing morphism. A pattern $p$ is said to be $k$-avoidable if there exists an infinite word over a $k$-letter alphabet that avoids $p$. We give a positive answer to Problem 3.3.2 in Lothaire's book "Algebraic combinatorics on words", that is, every pattern with $k$ variables of length at least $2^k$ (resp. $3\times2^{k-1}$) is 3-avoidable (resp. 2-avoidable). This improves previous bounds due to Bell and Goh, and Rampersad.
|
2302.03235
|
Yu Duan
|
Yu Duan, Zhongfan Jia, Qian Li, Yi Zhong, Kaisheng Ma
|
Hebbian and Gradient-based Plasticity Enables Robust Memory and Rapid
Learning in RNNs
|
Published as a conference paper at ICLR 2023
| null | null | null |
cs.NE cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Rapidly learning from ongoing experiences and remembering past events with a
flexible memory system are two core capacities of biological intelligence.
While the underlying neural mechanisms are not fully understood, various
evidence supports that synaptic plasticity plays a critical role in memory
formation and fast learning. Inspired by these results, we equip Recurrent
Neural Networks (RNNs) with plasticity rules to enable them to adapt their
parameters according to ongoing experiences. In addition to the traditional
local Hebbian plasticity, we propose a global, gradient-based plasticity rule,
which allows the model to evolve towards its self-determined target. Our models
show promising results on sequential and associative memory tasks, illustrating
their ability to robustly form and retain memories. In the meantime, these
models can cope with many challenging few-shot learning problems. Comparing
different plasticity rules under the same framework shows that Hebbian
plasticity is well-suited for several memory and associative learning tasks;
however, it is outperformed by gradient-based plasticity on few-shot regression
tasks which require the model to infer the underlying mapping. Code is
available at https://github.com/yuvenduan/PlasticRNNs.
|
[
{
"created": "Tue, 7 Feb 2023 03:42:42 GMT",
"version": "v1"
}
] |
2023-02-08
|
[
[
"Duan",
"Yu",
""
],
[
"Jia",
"Zhongfan",
""
],
[
"Li",
"Qian",
""
],
[
"Zhong",
"Yi",
""
],
[
"Ma",
"Kaisheng",
""
]
] |
Rapidly learning from ongoing experiences and remembering past events with a flexible memory system are two core capacities of biological intelligence. While the underlying neural mechanisms are not fully understood, various evidence supports that synaptic plasticity plays a critical role in memory formation and fast learning. Inspired by these results, we equip Recurrent Neural Networks (RNNs) with plasticity rules to enable them to adapt their parameters according to ongoing experiences. In addition to the traditional local Hebbian plasticity, we propose a global, gradient-based plasticity rule, which allows the model to evolve towards its self-determined target. Our models show promising results on sequential and associative memory tasks, illustrating their ability to robustly form and retain memories. In the meantime, these models can cope with many challenging few-shot learning problems. Comparing different plasticity rules under the same framework shows that Hebbian plasticity is well-suited for several memory and associative learning tasks; however, it is outperformed by gradient-based plasticity on few-shot regression tasks which require the model to infer the underlying mapping. Code is available at https://github.com/yuvenduan/PlasticRNNs.
|
1205.3317
|
Gianluigi Liva
|
Gianluigi Liva, Enrico Paolini, Michael Lentmaier, Marco Chiani
|
Spatially-Coupled Random Access on Graphs
|
To be presented at IEEE ISIT 2012, Boston
| null |
10.1109/ISIT.2012.6284235
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we investigate the effect of spatial coupling applied to the
recently-proposed coded slotted ALOHA (CSA) random access protocol. Thanks to
the bridge between the graphical model describing the iterative interference
cancelation process of CSA over the random access frame and the erasure
recovery process of low-density parity-check (LDPC) codes over the binary
erasure channel (BEC), we propose an access protocol which is inspired by the
convolutional LDPC code construction. The proposed protocol exploits the
terminations of its graphical model to achieve the spatial coupling effect,
attaining performance close to the theoretical limits of CSA. As for the
convolutional LDPC code case, large iterative decoding thresholds are obtained
by simply increasing the density of the graph. We show that the threshold
saturation effect takes place by defining a suitable counterpart of the
maximum-a-posteriori decoding threshold of spatially-coupled LDPC code
ensembles. In the asymptotic setting, the proposed scheme allows sustaining a
traffic close to 1 [packets/slot].
|
[
{
"created": "Tue, 15 May 2012 10:39:06 GMT",
"version": "v1"
}
] |
2016-11-17
|
[
[
"Liva",
"Gianluigi",
""
],
[
"Paolini",
"Enrico",
""
],
[
"Lentmaier",
"Michael",
""
],
[
"Chiani",
"Marco",
""
]
] |
In this paper we investigate the effect of spatial coupling applied to the recently-proposed coded slotted ALOHA (CSA) random access protocol. Thanks to the bridge between the graphical model describing the iterative interference cancelation process of CSA over the random access frame and the erasure recovery process of low-density parity-check (LDPC) codes over the binary erasure channel (BEC), we propose an access protocol which is inspired by the convolutional LDPC code construction. The proposed protocol exploits the terminations of its graphical model to achieve the spatial coupling effect, attaining performance close to the theoretical limits of CSA. As for the convolutional LDPC code case, large iterative decoding thresholds are obtained by simply increasing the density of the graph. We show that the threshold saturation effect takes place by defining a suitable counterpart of the maximum-a-posteriori decoding threshold of spatially-coupled LDPC code ensembles. In the asymptotic setting, the proposed scheme allows sustaining a traffic close to 1 [packets/slot].
|
2012.06576
|
Harrie Oosterhuis
|
Harrie Oosterhuis
|
Learning from User Interactions with Rankings: A Unification of the
Field
|
PhD Thesis of Harrie Oosterhuis defended at the University of
Amsterdam on November 27th 2020
| null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Ranking systems form the basis for online search engines and recommendation
services. They process large collections of items, for instance web pages or
e-commerce products, and present the user with a small ordered selection. The
goal of a ranking system is to help a user find the items they are looking for
with the least amount of effort. Thus the rankings they produce should place
the most relevant or preferred items at the top of the ranking. Learning to
rank is a field within machine learning that covers methods which optimize
ranking systems w.r.t. this goal. Traditional supervised learning to rank
methods utilize expert-judgements to evaluate and learn, however, in many
situations such judgements are impossible or infeasible to obtain. As a
solution, methods have been introduced that perform learning to rank based on
user clicks instead. The difficulty with clicks is that they are not only
affected by user preferences, but also by what rankings were displayed.
Therefore, these methods have to prevent being biased by other factors than
user preference. This thesis concerns learning to rank methods based on user
clicks and specifically aims to unify the different families of these methods.
As a whole, the second part of this thesis proposes a framework that bridges
many gaps between areas of online, counterfactual, and supervised learning to
rank. It has taken approaches, previously considered independent, and unified
them into a single methodology for widely applicable and effective learning to
rank from user clicks.
|
[
{
"created": "Wed, 9 Dec 2020 14:47:59 GMT",
"version": "v1"
}
] |
2020-12-14
|
[
[
"Oosterhuis",
"Harrie",
""
]
] |
Ranking systems form the basis for online search engines and recommendation services. They process large collections of items, for instance web pages or e-commerce products, and present the user with a small ordered selection. The goal of a ranking system is to help a user find the items they are looking for with the least amount of effort. Thus the rankings they produce should place the most relevant or preferred items at the top of the ranking. Learning to rank is a field within machine learning that covers methods which optimize ranking systems w.r.t. this goal. Traditional supervised learning to rank methods utilize expert-judgements to evaluate and learn, however, in many situations such judgements are impossible or infeasible to obtain. As a solution, methods have been introduced that perform learning to rank based on user clicks instead. The difficulty with clicks is that they are not only affected by user preferences, but also by what rankings were displayed. Therefore, these methods have to prevent being biased by other factors than user preference. This thesis concerns learning to rank methods based on user clicks and specifically aims to unify the different families of these methods. As a whole, the second part of this thesis proposes a framework that bridges many gaps between areas of online, counterfactual, and supervised learning to rank. It has taken approaches, previously considered independent, and unified them into a single methodology for widely applicable and effective learning to rank from user clicks.
|
2206.08330
|
Timothy Castiglia Mr.
|
Timothy Castiglia, Anirban Das, Shiqiang Wang, Stacy Patterson
|
Compressed-VFL: Communication-Efficient Learning with Vertically
Partitioned Data
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose Compressed Vertical Federated Learning (C-VFL) for
communication-efficient training on vertically partitioned data. In C-VFL, a
server and multiple parties collaboratively train a model on their respective
features utilizing several local iterations and sharing compressed intermediate
results periodically. Our work provides the first theoretical analysis of the
effect message compression has on distributed training over vertically
partitioned data. We prove convergence of non-convex objectives at a rate of
$O(\frac{1}{\sqrt{T}})$ when the compression error is bounded over the course
of training. We provide specific requirements for convergence with common
compression techniques, such as quantization and top-$k$ sparsification.
Finally, we experimentally show compression can reduce communication by over
$90\%$ without a significant decrease in accuracy over VFL without compression.
|
[
{
"created": "Thu, 16 Jun 2022 17:34:07 GMT",
"version": "v1"
},
{
"created": "Tue, 28 Mar 2023 22:03:09 GMT",
"version": "v2"
}
] |
2023-03-30
|
[
[
"Castiglia",
"Timothy",
""
],
[
"Das",
"Anirban",
""
],
[
"Wang",
"Shiqiang",
""
],
[
"Patterson",
"Stacy",
""
]
] |
We propose Compressed Vertical Federated Learning (C-VFL) for communication-efficient training on vertically partitioned data. In C-VFL, a server and multiple parties collaboratively train a model on their respective features utilizing several local iterations and sharing compressed intermediate results periodically. Our work provides the first theoretical analysis of the effect message compression has on distributed training over vertically partitioned data. We prove convergence of non-convex objectives at a rate of $O(\frac{1}{\sqrt{T}})$ when the compression error is bounded over the course of training. We provide specific requirements for convergence with common compression techniques, such as quantization and top-$k$ sparsification. Finally, we experimentally show compression can reduce communication by over $90\%$ without a significant decrease in accuracy over VFL without compression.
|
2211.05075
|
Mohamad Fazelnia
|
Mohamad Fazelnia, Ahmet Okutan, Mehdi Mirakhorli
|
Supporting AI/ML Security Workers through an Adversarial Techniques,
Tools, and Common Knowledge (AI/ML ATT&CK) Framework
|
AI/ML ATT&CK
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper focuses on supporting AI/ML Security Workers -- professionals
involved in the development and deployment of secure AI-enabled software
systems. It presents AI/ML Adversarial Techniques, Tools, and Common Knowledge
(AI/ML ATT&CK) framework to enable AI/ML Security Workers intuitively to
explore offensive and defensive tactics.
|
[
{
"created": "Wed, 9 Nov 2022 18:07:10 GMT",
"version": "v1"
}
] |
2022-11-10
|
[
[
"Fazelnia",
"Mohamad",
""
],
[
"Okutan",
"Ahmet",
""
],
[
"Mirakhorli",
"Mehdi",
""
]
] |
This paper focuses on supporting AI/ML Security Workers -- professionals involved in the development and deployment of secure AI-enabled software systems. It presents AI/ML Adversarial Techniques, Tools, and Common Knowledge (AI/ML ATT&CK) framework to enable AI/ML Security Workers intuitively to explore offensive and defensive tactics.
|
1607.06795
|
Jesse Shore
|
Jesse Shore, Jiye Baek, and Chrysanthos Dellarocas
|
Network structure and patterns of information diversity on Twitter
| null | null | null | null |
cs.SI physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Social media have great potential to support diverse information sharing, but
there is widespread concern that platforms like Twitter do not result in
communication between those who hold contradictory viewpoints. Because users
can choose whom to follow, prior research suggests that social media users
exist in 'echo chambers' or become polarized. We seek evidence of this in a
complete cross section of hyperlinks posted on Twitter, using previously
validated measures of the political slant of news sources to study information
diversity. Contrary to prediction, we find that the average account posts links
to more politically moderate news sources than the ones they receive in their
own feed. However, members of a tiny network core do exhibit cross-sectional
evidence of polarization and are responsible for the majority of tweets
received overall due to their popularity and activity, which could explain the
widespread perception of polarization on social media.
|
[
{
"created": "Fri, 22 Jul 2016 19:13:56 GMT",
"version": "v1"
},
{
"created": "Mon, 11 Dec 2017 18:17:11 GMT",
"version": "v2"
}
] |
2017-12-12
|
[
[
"Shore",
"Jesse",
""
],
[
"Baek",
"Jiye",
""
],
[
"Dellarocas",
"Chrysanthos",
""
]
] |
Social media have great potential to support diverse information sharing, but there is widespread concern that platforms like Twitter do not result in communication between those who hold contradictory viewpoints. Because users can choose whom to follow, prior research suggests that social media users exist in 'echo chambers' or become polarized. We seek evidence of this in a complete cross section of hyperlinks posted on Twitter, using previously validated measures of the political slant of news sources to study information diversity. Contrary to prediction, we find that the average account posts links to more politically moderate news sources than the ones they receive in their own feed. However, members of a tiny network core do exhibit cross-sectional evidence of polarization and are responsible for the majority of tweets received overall due to their popularity and activity, which could explain the widespread perception of polarization on social media.
|
2302.14339
|
Shengjie Wang
|
Haotian Xu and Shengjie Wang and Zhaolei Wang and Yunzhe Zhang and
Qing Zhuo and Yang Gao and Tao Zhang
|
Efficient Exploration Using Extra Safety Budget in Constrained Policy
Optimization
|
7 pages, 8 figures
| null | null | null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reinforcement learning (RL) has achieved promising results on most robotic
control tasks. Safety of learning-based controllers is an essential notion of
ensuring the effectiveness of the controllers. Current methods adopt whole
consistency constraints during the training, thus resulting in inefficient
exploration in the early stage. In this paper, we propose an algorithm named
Constrained Policy Optimization with Extra Safety Budget (ESB-CPO) to strike a
balance between the exploration efficiency and the constraints satisfaction. In
the early stage, our method loosens the practical constraints of unsafe
transitions (adding extra safety budget) with the aid of a new metric we
propose. With the training process, the constraints in our optimization problem
become tighter. Meanwhile, theoretical analysis and practical experiments
demonstrate that our method gradually meets the cost limit's demand in the
final training stage. When evaluated on Safety-Gym and Bullet-Safety-Gym
benchmarks, our method has shown its advantages over baseline algorithms in
terms of safety and optimality. Remarkably, our method gains remarkable
performance improvement under the same cost limit compared with baselines.
|
[
{
"created": "Tue, 28 Feb 2023 06:16:34 GMT",
"version": "v1"
},
{
"created": "Fri, 28 Jul 2023 01:54:26 GMT",
"version": "v2"
}
] |
2023-07-31
|
[
[
"Xu",
"Haotian",
""
],
[
"Wang",
"Shengjie",
""
],
[
"Wang",
"Zhaolei",
""
],
[
"Zhang",
"Yunzhe",
""
],
[
"Zhuo",
"Qing",
""
],
[
"Gao",
"Yang",
""
],
[
"Zhang",
"Tao",
""
]
] |
Reinforcement learning (RL) has achieved promising results on most robotic control tasks. Safety of learning-based controllers is an essential notion of ensuring the effectiveness of the controllers. Current methods adopt whole consistency constraints during the training, thus resulting in inefficient exploration in the early stage. In this paper, we propose an algorithm named Constrained Policy Optimization with Extra Safety Budget (ESB-CPO) to strike a balance between the exploration efficiency and the constraints satisfaction. In the early stage, our method loosens the practical constraints of unsafe transitions (adding extra safety budget) with the aid of a new metric we propose. With the training process, the constraints in our optimization problem become tighter. Meanwhile, theoretical analysis and practical experiments demonstrate that our method gradually meets the cost limit's demand in the final training stage. When evaluated on Safety-Gym and Bullet-Safety-Gym benchmarks, our method has shown its advantages over baseline algorithms in terms of safety and optimality. Remarkably, our method gains remarkable performance improvement under the same cost limit compared with baselines.
|
2109.04108
|
Manqing Dong
|
Manqing Dong, Chunguang Pan, and Zhipeng Luo
|
MapRE: An Effective Semantic Mapping Approach for Low-resource Relation
Extraction
|
Accepted as a long paper in the main conference of EMNLP 2021
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Neural relation extraction models have shown promising results in recent
years; however, the model performance drops dramatically given only a few
training samples. Recent works try leveraging the advance in few-shot learning
to solve the low resource problem, where they train label-agnostic models to
directly compare the semantic similarities among context sentences in the
embedding space. However, the label-aware information, i.e., the relation label
that contains the semantic knowledge of the relation itself, is often neglected
for prediction. In this work, we propose a framework considering both
label-agnostic and label-aware semantic mapping information for low resource
relation extraction. We show that incorporating the above two types of mapping
information in both pretraining and fine-tuning can significantly improve the
model performance on low-resource relation extraction tasks.
|
[
{
"created": "Thu, 9 Sep 2021 09:02:23 GMT",
"version": "v1"
}
] |
2021-09-10
|
[
[
"Dong",
"Manqing",
""
],
[
"Pan",
"Chunguang",
""
],
[
"Luo",
"Zhipeng",
""
]
] |
Neural relation extraction models have shown promising results in recent years; however, the model performance drops dramatically given only a few training samples. Recent works try leveraging the advance in few-shot learning to solve the low resource problem, where they train label-agnostic models to directly compare the semantic similarities among context sentences in the embedding space. However, the label-aware information, i.e., the relation label that contains the semantic knowledge of the relation itself, is often neglected for prediction. In this work, we propose a framework considering both label-agnostic and label-aware semantic mapping information for low resource relation extraction. We show that incorporating the above two types of mapping information in both pretraining and fine-tuning can significantly improve the model performance on low-resource relation extraction tasks.
|
2009.14395
|
Shamil Chollampatt
|
Shamil Chollampatt, Raymond Hendy Susanto, Liling Tan, Ewa Szymanska
|
Can Automatic Post-Editing Improve NMT?
|
In EMNLP 2020
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automatic post-editing (APE) aims to improve machine translations, thereby
reducing human post-editing effort. APE has had notable success when used with
statistical machine translation (SMT) systems but has not been as successful
over neural machine translation (NMT) systems. This has raised questions on the
relevance of APE task in the current scenario. However, the training of APE
models has been heavily reliant on large-scale artificial corpora combined with
only limited human post-edited data. We hypothesize that APE models have been
underperforming in improving NMT translations due to the lack of adequate
supervision. To ascertain our hypothesis, we compile a larger corpus of human
post-edits of English to German NMT. We empirically show that a state-of-art
neural APE model trained on this corpus can significantly improve a strong
in-domain NMT system, challenging the current understanding in the field. We
further investigate the effects of varying training data sizes, using
artificial training data, and domain specificity for the APE task. We release
this new corpus under CC BY-NC-SA 4.0 license at
https://github.com/shamilcm/pedra.
|
[
{
"created": "Wed, 30 Sep 2020 02:34:19 GMT",
"version": "v1"
}
] |
2020-10-01
|
[
[
"Chollampatt",
"Shamil",
""
],
[
"Susanto",
"Raymond Hendy",
""
],
[
"Tan",
"Liling",
""
],
[
"Szymanska",
"Ewa",
""
]
] |
Automatic post-editing (APE) aims to improve machine translations, thereby reducing human post-editing effort. APE has had notable success when used with statistical machine translation (SMT) systems but has not been as successful over neural machine translation (NMT) systems. This has raised questions on the relevance of APE task in the current scenario. However, the training of APE models has been heavily reliant on large-scale artificial corpora combined with only limited human post-edited data. We hypothesize that APE models have been underperforming in improving NMT translations due to the lack of adequate supervision. To ascertain our hypothesis, we compile a larger corpus of human post-edits of English to German NMT. We empirically show that a state-of-art neural APE model trained on this corpus can significantly improve a strong in-domain NMT system, challenging the current understanding in the field. We further investigate the effects of varying training data sizes, using artificial training data, and domain specificity for the APE task. We release this new corpus under CC BY-NC-SA 4.0 license at https://github.com/shamilcm/pedra.
|
1906.12017
|
Yansheng Wu
|
Yansheng Wu, Jong Yoon Hyun and Qin Yue
|
Binary optimal linear codes from posets of the disjoint union of two
chains
|
4 pages
| null | null | null |
cs.IT math.CO math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, Chang and Hyun obtained some classes of binary optimal codes via
simplicial complexes. In this letter, we utilize posets of the disjoint union
of two chains to construct binary optimal linear codes.
|
[
{
"created": "Fri, 28 Jun 2019 02:13:26 GMT",
"version": "v1"
}
] |
2019-07-01
|
[
[
"Wu",
"Yansheng",
""
],
[
"Hyun",
"Jong Yoon",
""
],
[
"Yue",
"Qin",
""
]
] |
Recently, Chang and Hyun obtained some classes of binary optimal codes via simplicial complexes. In this letter, we utilize posets of the disjoint union of two chains to construct binary optimal linear codes.
|
1910.10777
|
Hagit Grushka-Cohen
|
Hagit Grushka-Cohen, Ofer Biller, Oded Sofer, Lior Rokach and Bracha
Shapira
|
Diversifying Database Activity Monitoring with Bandits
| null | null | null | null |
cs.LG cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Database activity monitoring (DAM) systems are commonly used by organizations
to protect the organizational data, knowledge and intellectual properties. In
order to protect organizations database DAM systems have two main roles,
monitoring (documenting activity) and alerting to anomalous activity. Due to
high-velocity streams and operating costs, such systems are restricted to
examining only a sample of the activity. Current solutions use policies,
manually crafted by experts, to decide which transactions to monitor and log.
This limits the diversity of the data collected. Bandit algorithms, which use
reward functions as the basis for optimization while adding diversity to the
recommended set, have gained increased attention in recommendation systems for
improving diversity.
In this work, we redefine the data sampling problem as a special case of the
multi-armed bandit (MAB) problem and present a novel algorithm, which combines
expert knowledge with random exploration. We analyze the effect of diversity on
coverage and downstream event detection tasks using a simulated dataset. In
doing so, we find that adding diversity to the sampling using the bandit-based
approach works well for this task and maximizing population coverage without
decreasing the quality in terms of issuing alerts about events.
|
[
{
"created": "Wed, 23 Oct 2019 19:39:51 GMT",
"version": "v1"
}
] |
2019-10-25
|
[
[
"Grushka-Cohen",
"Hagit",
""
],
[
"Biller",
"Ofer",
""
],
[
"Sofer",
"Oded",
""
],
[
"Rokach",
"Lior",
""
],
[
"Shapira",
"Bracha",
""
]
] |
Database activity monitoring (DAM) systems are commonly used by organizations to protect the organizational data, knowledge and intellectual properties. In order to protect organizations database DAM systems have two main roles, monitoring (documenting activity) and alerting to anomalous activity. Due to high-velocity streams and operating costs, such systems are restricted to examining only a sample of the activity. Current solutions use policies, manually crafted by experts, to decide which transactions to monitor and log. This limits the diversity of the data collected. Bandit algorithms, which use reward functions as the basis for optimization while adding diversity to the recommended set, have gained increased attention in recommendation systems for improving diversity. In this work, we redefine the data sampling problem as a special case of the multi-armed bandit (MAB) problem and present a novel algorithm, which combines expert knowledge with random exploration. We analyze the effect of diversity on coverage and downstream event detection tasks using a simulated dataset. In doing so, we find that adding diversity to the sampling using the bandit-based approach works well for this task and maximizing population coverage without decreasing the quality in terms of issuing alerts about events.
|
1904.06307
|
Shikui Tu
|
Wenjing Huang, Shikui Tu and Lei Xu
|
Revisit Lmser and its further development based on convolutional layers
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Proposed in 1991, Least Mean Square Error Reconstruction for self-organizing
network, shortly Lmser, was a further development of the traditional
auto-encoder (AE) by folding the architecture with respect to the central
coding layer and thus leading to the features of symmetric weights and neurons,
as well as jointly supervised and unsupervised learning. However, its
advantages were only demonstrated in a one-hidden-layer implementation due to
the lack of computing resources and big data at that time. In this paper, we
revisit Lmser from the perspective of deep learning, develop Lmser network
based on multiple convolutional layers, which is more suitable for
image-related tasks, and confirm several Lmser functions with preliminary
demonstrations on image recognition, reconstruction, association recall, and so
on. Experiments demonstrate that Lmser indeed works as indicated in the
original paper, and it has promising performance in various applications.
|
[
{
"created": "Fri, 12 Apr 2019 16:26:04 GMT",
"version": "v1"
}
] |
2019-04-15
|
[
[
"Huang",
"Wenjing",
""
],
[
"Tu",
"Shikui",
""
],
[
"Xu",
"Lei",
""
]
] |
Proposed in 1991, Least Mean Square Error Reconstruction for self-organizing network, shortly Lmser, was a further development of the traditional auto-encoder (AE) by folding the architecture with respect to the central coding layer and thus leading to the features of symmetric weights and neurons, as well as jointly supervised and unsupervised learning. However, its advantages were only demonstrated in a one-hidden-layer implementation due to the lack of computing resources and big data at that time. In this paper, we revisit Lmser from the perspective of deep learning, develop Lmser network based on multiple convolutional layers, which is more suitable for image-related tasks, and confirm several Lmser functions with preliminary demonstrations on image recognition, reconstruction, association recall, and so on. Experiments demonstrate that Lmser indeed works as indicated in the original paper, and it has promising performance in various applications.
|
2106.15711
|
Christopher Xie
|
Christopher Xie, Arsalan Mousavian, Yu Xiang, Dieter Fox
|
RICE: Refining Instance Masks in Cluttered Environments with Graph
Neural Networks
| null | null | null | null |
cs.CV cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Segmenting unseen object instances in cluttered environments is an important
capability that robots need when functioning in unstructured environments.
While previous methods have exhibited promising results, they still tend to
provide incorrect results in highly cluttered scenes. We postulate that a
network architecture that encodes relations between objects at a high-level can
be beneficial. Thus, in this work, we propose a novel framework that refines
the output of such methods by utilizing a graph-based representation of
instance masks. We train deep networks capable of sampling smart perturbations
to the segmentations, and a graph neural network, which can encode relations
between objects, to evaluate the perturbed segmentations. Our proposed method
is orthogonal to previous works and achieves state-of-the-art performance when
combined with them. We demonstrate an application that uses uncertainty
estimates generated by our method to guide a manipulator, leading to efficient
understanding of cluttered scenes. Code, models, and video can be found at
https://github.com/chrisdxie/rice .
|
[
{
"created": "Tue, 29 Jun 2021 20:29:29 GMT",
"version": "v1"
}
] |
2021-07-01
|
[
[
"Xie",
"Christopher",
""
],
[
"Mousavian",
"Arsalan",
""
],
[
"Xiang",
"Yu",
""
],
[
"Fox",
"Dieter",
""
]
] |
Segmenting unseen object instances in cluttered environments is an important capability that robots need when functioning in unstructured environments. While previous methods have exhibited promising results, they still tend to provide incorrect results in highly cluttered scenes. We postulate that a network architecture that encodes relations between objects at a high-level can be beneficial. Thus, in this work, we propose a novel framework that refines the output of such methods by utilizing a graph-based representation of instance masks. We train deep networks capable of sampling smart perturbations to the segmentations, and a graph neural network, which can encode relations between objects, to evaluate the perturbed segmentations. Our proposed method is orthogonal to previous works and achieves state-of-the-art performance when combined with them. We demonstrate an application that uses uncertainty estimates generated by our method to guide a manipulator, leading to efficient understanding of cluttered scenes. Code, models, and video can be found at https://github.com/chrisdxie/rice .
|
1102.2616
|
William Jackson
|
Sanjay Bansal and Sanjeev Sharma
|
An Improved Multiple Faults Reassignment based Recovery in Cluster
Computing
|
Online at http://journalofcomputing.org
|
Journal of Computing, Volume 2, Issue 11, November 2010, eISSN
2151-9617
| null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In case of multiple node failures performance becomes very low as compare to
single node failure. Failures of nodes in cluster computing can be tolerated by
multiple fault tolerant computing. Existing recovery schemes are efficient for
single fault but not with multiple faults. Recovery scheme proposed in this
paper having two phases; sequentially phase, concurrent phase. In sequentially
phase, loads of all working nodes are uniformly and evenly distributed by
proposed dynamic rank based and load distribution algorithm. In concurrent
phase, loads of all failure nodes as well as new job arrival are assigned
equally to all available nodes by just finding the least loaded node among the
several nodes by failure nodes job allocation algorithm. Sequential and
concurrent executions of algorithms improve the performance as well better
resource utilization. Dynamic rank based algorithm for load redistribution
works as a sequential restoration algorithm and reassignment algorithm for
distribution of failure nodes to least loaded computing nodes works as a
concurrent recovery reassignment algorithm. Since load is evenly and uniformly
distributed among all available working nodes with less number of iterations,
low iterative time and communication overheads hence performance is improved.
Dynamic ranking algorithm is low overhead, high convergence algorithm for
reassignment of tasks uniformly among all available nodes. Reassignments of
failure nodes are done by a low overhead efficient failure job allocation
algorithm. Test results to show effectiveness of the proposed scheme are
presented.
|
[
{
"created": "Sun, 13 Feb 2011 16:50:30 GMT",
"version": "v1"
}
] |
2011-02-15
|
[
[
"Bansal",
"Sanjay",
""
],
[
"Sharma",
"Sanjeev",
""
]
] |
In case of multiple node failures performance becomes very low as compare to single node failure. Failures of nodes in cluster computing can be tolerated by multiple fault tolerant computing. Existing recovery schemes are efficient for single fault but not with multiple faults. Recovery scheme proposed in this paper having two phases; sequentially phase, concurrent phase. In sequentially phase, loads of all working nodes are uniformly and evenly distributed by proposed dynamic rank based and load distribution algorithm. In concurrent phase, loads of all failure nodes as well as new job arrival are assigned equally to all available nodes by just finding the least loaded node among the several nodes by failure nodes job allocation algorithm. Sequential and concurrent executions of algorithms improve the performance as well better resource utilization. Dynamic rank based algorithm for load redistribution works as a sequential restoration algorithm and reassignment algorithm for distribution of failure nodes to least loaded computing nodes works as a concurrent recovery reassignment algorithm. Since load is evenly and uniformly distributed among all available working nodes with less number of iterations, low iterative time and communication overheads hence performance is improved. Dynamic ranking algorithm is low overhead, high convergence algorithm for reassignment of tasks uniformly among all available nodes. Reassignments of failure nodes are done by a low overhead efficient failure job allocation algorithm. Test results to show effectiveness of the proposed scheme are presented.
|
2008.13544
|
Hamada Zahera
|
Hamada M. Zahera, Rricha Jalota, Mohamed A. Sherif, Axel N. Ngomo
|
I-AID: Identifying Actionable Information from Disaster-related Tweets
| null | null | null | null |
cs.CL cs.AI cs.IR cs.LG stat.ML
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Social media plays a significant role in disaster management by providing
valuable data about affected people, donations and help requests. Recent
studies highlight the need to filter information on social media into
fine-grained content labels. However, identifying useful information from
massive amounts of social media posts during a crisis is a challenging task. In
this paper, we propose I-AID, a multimodel approach to automatically categorize
tweets into multi-label information types and filter critical information from
the enormous volume of social media data. I-AID incorporates three main
components: i) a BERT-based encoder to capture the semantics of a tweet and
represent as a low-dimensional vector, ii) a graph attention network (GAT) to
apprehend correlations between tweets' words/entities and the corresponding
information types, and iii) a Relation Network as a learnable distance metric
to compute the similarity between tweets and their corresponding information
types in a supervised way. We conducted several experiments on two real
publicly-available datasets. Our results indicate that I-AID outperforms
state-of-the-art approaches in terms of weighted average F1 score by +6% and
+4% on the TREC-IS dataset and COVID-19 Tweets, respectively.
|
[
{
"created": "Tue, 4 Aug 2020 19:07:50 GMT",
"version": "v1"
},
{
"created": "Wed, 19 May 2021 02:32:43 GMT",
"version": "v2"
}
] |
2021-05-20
|
[
[
"Zahera",
"Hamada M.",
""
],
[
"Jalota",
"Rricha",
""
],
[
"Sherif",
"Mohamed A.",
""
],
[
"Ngomo",
"Axel N.",
""
]
] |
Social media plays a significant role in disaster management by providing valuable data about affected people, donations and help requests. Recent studies highlight the need to filter information on social media into fine-grained content labels. However, identifying useful information from massive amounts of social media posts during a crisis is a challenging task. In this paper, we propose I-AID, a multimodel approach to automatically categorize tweets into multi-label information types and filter critical information from the enormous volume of social media data. I-AID incorporates three main components: i) a BERT-based encoder to capture the semantics of a tweet and represent as a low-dimensional vector, ii) a graph attention network (GAT) to apprehend correlations between tweets' words/entities and the corresponding information types, and iii) a Relation Network as a learnable distance metric to compute the similarity between tweets and their corresponding information types in a supervised way. We conducted several experiments on two real publicly-available datasets. Our results indicate that I-AID outperforms state-of-the-art approaches in terms of weighted average F1 score by +6% and +4% on the TREC-IS dataset and COVID-19 Tweets, respectively.
|
2003.10783
|
Kengo Tajiri
|
Kengo Tajiri and Yasuhiro Ikeda and Yuusuke Nakano and Keishiro
Watanabe
|
Dividing Deep Learning Model for Continuous Anomaly Detection of
Inconsistent ICT Systems
|
Accepted for IEEE/IFIP Network Operations and Management Symposium
2020 (NOMS2020)
| null | null | null |
cs.NI cs.LG stat.AP stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Health monitoring is important for maintaining reliable information and
communications technology (ICT) systems. Anomaly detection methods based on
machine learning, which train a model for describing "normality" are promising
for monitoring the state of ICT systems. However, these methods cannot be used
when the type of monitored log data changes from that of training data due to
the replacement of certain equipment. Therefore, such methods may dismiss an
anomaly that appears when log data changes. To solve this problem, we propose
an ICT-systems-monitoring method with deep learning models divided based on the
correlation of log data. We also propose an algorithm for extracting the
correlations of log data from a deep learning model and separating log data
based on the correlation. When some of the log data changes, our method can
continue health monitoring with the divided models which are not affected by
changes in the log data. We present the results from experiments involving
benchmark data and real log data, which indicate that our method using divided
models does not decrease anomaly detection accuracy and a model for anomaly
detection can be divided to continue monitoring a network state even if some
the log data change.
|
[
{
"created": "Tue, 24 Mar 2020 11:32:00 GMT",
"version": "v1"
}
] |
2020-03-25
|
[
[
"Tajiri",
"Kengo",
""
],
[
"Ikeda",
"Yasuhiro",
""
],
[
"Nakano",
"Yuusuke",
""
],
[
"Watanabe",
"Keishiro",
""
]
] |
Health monitoring is important for maintaining reliable information and communications technology (ICT) systems. Anomaly detection methods based on machine learning, which train a model for describing "normality" are promising for monitoring the state of ICT systems. However, these methods cannot be used when the type of monitored log data changes from that of training data due to the replacement of certain equipment. Therefore, such methods may dismiss an anomaly that appears when log data changes. To solve this problem, we propose an ICT-systems-monitoring method with deep learning models divided based on the correlation of log data. We also propose an algorithm for extracting the correlations of log data from a deep learning model and separating log data based on the correlation. When some of the log data changes, our method can continue health monitoring with the divided models which are not affected by changes in the log data. We present the results from experiments involving benchmark data and real log data, which indicate that our method using divided models does not decrease anomaly detection accuracy and a model for anomaly detection can be divided to continue monitoring a network state even if some the log data change.
|
2403.18513
|
Hendrik Molter
|
George B. Mertzios and Hendrik Molter and Paul G. Spirakis
|
Realizing temporal transportation trees
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we study the complexity of the \textit{periodic temporal graph
realization} problem with respect to upper bounds on the fastest path durations
among its vertices. This constraint with respect to upper bounds appears
naturally in transportation network design applications where, for example, a
road network is given, and the goal is to appropriately schedule periodic
travel routes, while not exceeding some desired upper bounds on the travel
times. This approach is in contrast to verification applications of the graph
realization problems, where exact values for the distances (respectively,
fastest travel times) are given, following some kind of precise measurement. In
our work, we focus only on underlying tree topologies, which are fundamental in
many transportation network applications.
As it turns out, the periodic upper-bounded temporal tree realization problem
(TTR) has a very different computational complexity behavior than both (i) the
classic graph realization problem with respect to shortest path distances in
static graphs and (ii) the periodic temporal graph realization problem with
exact given fastest travel times (which was recently introduced). First, we
prove that, surprisingly, TTR is NP-hard, even for a constant period $\Delta$
and when the input tree $G$ satisfies at least one of the following conditions:
(a) $G$ has a constant diameter, or (b) $G$ has constant maximum degree. In
contrast, when we are given exact values of the fastest travel delays, the
problem is known to be solvable in polynomial time. Second, we prove that TTR
is fixed-parameter tractable (FPT) with respect to the number of leaves in the
input tree $G$, via a novel combination of techniques for totally unimodular
matrices and mixed integer linear programming.
|
[
{
"created": "Wed, 27 Mar 2024 12:44:27 GMT",
"version": "v1"
}
] |
2024-03-28
|
[
[
"Mertzios",
"George B.",
""
],
[
"Molter",
"Hendrik",
""
],
[
"Spirakis",
"Paul G.",
""
]
] |
In this paper, we study the complexity of the \textit{periodic temporal graph realization} problem with respect to upper bounds on the fastest path durations among its vertices. This constraint with respect to upper bounds appears naturally in transportation network design applications where, for example, a road network is given, and the goal is to appropriately schedule periodic travel routes, while not exceeding some desired upper bounds on the travel times. This approach is in contrast to verification applications of the graph realization problems, where exact values for the distances (respectively, fastest travel times) are given, following some kind of precise measurement. In our work, we focus only on underlying tree topologies, which are fundamental in many transportation network applications. As it turns out, the periodic upper-bounded temporal tree realization problem (TTR) has a very different computational complexity behavior than both (i) the classic graph realization problem with respect to shortest path distances in static graphs and (ii) the periodic temporal graph realization problem with exact given fastest travel times (which was recently introduced). First, we prove that, surprisingly, TTR is NP-hard, even for a constant period $\Delta$ and when the input tree $G$ satisfies at least one of the following conditions: (a) $G$ has a constant diameter, or (b) $G$ has constant maximum degree. In contrast, when we are given exact values of the fastest travel delays, the problem is known to be solvable in polynomial time. Second, we prove that TTR is fixed-parameter tractable (FPT) with respect to the number of leaves in the input tree $G$, via a novel combination of techniques for totally unimodular matrices and mixed integer linear programming.
|
1105.2003
|
Justin Thaler
|
Graham Cormode, Michael Mitzenmacher, Justin Thaler
|
Practical Verified Computation with Streaming Interactive Proofs
|
39 pages, 12 figures, 2 tables. Accepted to ITCS 2012
| null | null | null |
cs.DS cs.CC cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
When delegating computation to a service provider, as in cloud computing, we
seek some reassurance that the output is correct and complete. Yet recomputing
the output as a check is inefficient and expensive, and it may not even be
feasible to store all the data locally. We are therefore interested in proof
systems which allow a service provider to prove the correctness of its output
to a streaming (sublinear space) user, who cannot store the full input or
perform the full computation herself.
Our approach is two-fold. First, we describe a carefully chosen instantiation
of one of the most efficient general-purpose constructions for arbitrary
computations (streaming or otherwise), due to Goldwasser, Kalai, and Rothblum.
This requires several new insights to make the methodology more practical. Our
main contribution is in achieving a prover who runs in time O(S(n) log S(n)),
where S(n) is the size of an arithmetic circuit computing the function of
interest. Our experimental results demonstrate that a practical general-purpose
protocol for verifiable computation may be significantly closer to reality than
previously realized.
Second, we describe techniques that achieve genuine scalability for protocols
fine-tuned for specific important problems in streaming and database
processing. Focusing in particular on non-interactive protocols for problems
ranging from matrix-vector multiplication to bipartite perfect matching, we
build on prior work to achieve a prover who runs in nearly linear-time, while
obtaining optimal tradeoffs between communication cost and the user's working
memory. Existing techniques required (substantially) superlinear time for the
prover. We argue that even if general-purpose methods improve, fine-tuned
protocols will remain valuable in real-world settings for key problems, and
hence special attention to specific problems is warranted.
|
[
{
"created": "Tue, 10 May 2011 17:34:25 GMT",
"version": "v1"
},
{
"created": "Tue, 31 May 2011 18:17:03 GMT",
"version": "v2"
},
{
"created": "Fri, 12 Aug 2011 15:20:31 GMT",
"version": "v3"
},
{
"created": "Fri, 25 Nov 2011 22:14:11 GMT",
"version": "v4"
},
{
"created": "Mon, 13 Feb 2012 02:36:57 GMT",
"version": "v5"
}
] |
2015-03-19
|
[
[
"Cormode",
"Graham",
""
],
[
"Mitzenmacher",
"Michael",
""
],
[
"Thaler",
"Justin",
""
]
] |
When delegating computation to a service provider, as in cloud computing, we seek some reassurance that the output is correct and complete. Yet recomputing the output as a check is inefficient and expensive, and it may not even be feasible to store all the data locally. We are therefore interested in proof systems which allow a service provider to prove the correctness of its output to a streaming (sublinear space) user, who cannot store the full input or perform the full computation herself. Our approach is two-fold. First, we describe a carefully chosen instantiation of one of the most efficient general-purpose constructions for arbitrary computations (streaming or otherwise), due to Goldwasser, Kalai, and Rothblum. This requires several new insights to make the methodology more practical. Our main contribution is in achieving a prover who runs in time O(S(n) log S(n)), where S(n) is the size of an arithmetic circuit computing the function of interest. Our experimental results demonstrate that a practical general-purpose protocol for verifiable computation may be significantly closer to reality than previously realized. Second, we describe techniques that achieve genuine scalability for protocols fine-tuned for specific important problems in streaming and database processing. Focusing in particular on non-interactive protocols for problems ranging from matrix-vector multiplication to bipartite perfect matching, we build on prior work to achieve a prover who runs in nearly linear-time, while obtaining optimal tradeoffs between communication cost and the user's working memory. Existing techniques required (substantially) superlinear time for the prover. We argue that even if general-purpose methods improve, fine-tuned protocols will remain valuable in real-world settings for key problems, and hence special attention to specific problems is warranted.
|
1602.07449
|
Alessandro Ugolini
|
Stefano Buzzi, Carmen D'Andrea, Tommaso Foggi, Alessandro Ugolini,
Giulio Colavolpe
|
Spectral Efficiency of MIMO Millimeter-Wave Links with Single-Carrier
Modulation for 5G Networks
|
8 pages, 8 figures, to appear in Proc. 20th International ITG
Workshop on Smart Antennas (WSA2016)
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Future wireless networks will extensively rely upon bandwidths centered on
carrier frequencies larger than 10GHz. Indeed, recent research has shown that,
despite the large path-loss, millimeter wave (mmWave) frequencies can be
successfully exploited to transmit very large data-rates over short distances
to slowly moving users. Due to hardware complexity and cost constraints,
single-carrier modulation schemes, as opposed to the popular multi-carrier
schemes, are being considered for use at mmWave frequencies. This paper
presents preliminary studies on the achievable spectral efficiency on a
wireless MIMO link operating at mmWave in a typical 5G scenario. Two different
single-carrier modem schemes are considered, i.e. a traditional modulation
scheme with linear equalization at the receiver, and a single-carrier
modulation with cyclic prefix, frequency-domain equalization and FFT-based
processing at the receiver. Our results show that the former achieves a larger
spectral efficiency than the latter. Results also confirm that the spectral
efficiency increases with the dimension of the antenna array, as well as that
performance gets severely degraded when the link length exceeds 100 meters and
the transmit power falls below 0dBW. Nonetheless, mmWave appear to be very
suited for providing very large data-rates over short distances.
|
[
{
"created": "Wed, 24 Feb 2016 09:49:57 GMT",
"version": "v1"
}
] |
2016-02-25
|
[
[
"Buzzi",
"Stefano",
""
],
[
"D'Andrea",
"Carmen",
""
],
[
"Foggi",
"Tommaso",
""
],
[
"Ugolini",
"Alessandro",
""
],
[
"Colavolpe",
"Giulio",
""
]
] |
Future wireless networks will extensively rely upon bandwidths centered on carrier frequencies larger than 10GHz. Indeed, recent research has shown that, despite the large path-loss, millimeter wave (mmWave) frequencies can be successfully exploited to transmit very large data-rates over short distances to slowly moving users. Due to hardware complexity and cost constraints, single-carrier modulation schemes, as opposed to the popular multi-carrier schemes, are being considered for use at mmWave frequencies. This paper presents preliminary studies on the achievable spectral efficiency on a wireless MIMO link operating at mmWave in a typical 5G scenario. Two different single-carrier modem schemes are considered, i.e. a traditional modulation scheme with linear equalization at the receiver, and a single-carrier modulation with cyclic prefix, frequency-domain equalization and FFT-based processing at the receiver. Our results show that the former achieves a larger spectral efficiency than the latter. Results also confirm that the spectral efficiency increases with the dimension of the antenna array, as well as that performance gets severely degraded when the link length exceeds 100 meters and the transmit power falls below 0dBW. Nonetheless, mmWave appear to be very suited for providing very large data-rates over short distances.
|
2102.06439
|
Sihao Sun
|
Bram Strack van Schijndel, Sihao Sun and Coen de Visser
|
Fast Fault Detection on a Quadrotor using Onboard Sensors and a Kalman
Filter Approach
|
7 pages, 12 figures
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a novel method for fast and robust detection of actuator
failures on quadrotors. The proposed algorithm has very little model
dependency. A Kalman filter estimator estimates a stochastic effectiveness
factor for every actuator, using only onboard RPM, gyro and accelerometer
measurements. Then, a hypothesis test identifies the failed actuator. This
algorithm is validated online in real-time, also as part of an active fault
tolerant control system. Loss of actuator effectiveness is induced by ejecting
the propellers from the motors. The robustness of this algorithm is further
investigated offline over a range of parameter settings by replaying real
flight data containing 26 propeller ejections. The detection delays are found
to be in the 30 to 130 ms range, without missed detections or false alarms
occurring.
|
[
{
"created": "Fri, 12 Feb 2021 10:55:56 GMT",
"version": "v1"
}
] |
2021-02-15
|
[
[
"van Schijndel",
"Bram Strack",
""
],
[
"Sun",
"Sihao",
""
],
[
"de Visser",
"Coen",
""
]
] |
This paper presents a novel method for fast and robust detection of actuator failures on quadrotors. The proposed algorithm has very little model dependency. A Kalman filter estimator estimates a stochastic effectiveness factor for every actuator, using only onboard RPM, gyro and accelerometer measurements. Then, a hypothesis test identifies the failed actuator. This algorithm is validated online in real-time, also as part of an active fault tolerant control system. Loss of actuator effectiveness is induced by ejecting the propellers from the motors. The robustness of this algorithm is further investigated offline over a range of parameter settings by replaying real flight data containing 26 propeller ejections. The detection delays are found to be in the 30 to 130 ms range, without missed detections or false alarms occurring.
|
2205.09402
|
Sarish Nigade
|
Archit P. Kane, Ashutosh S. Kore, Advait N. Khandale, Sarish S.
Nigade, Pranjali P. Joshi
|
Predictive Maintenance using Machine Learning
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Predictive maintenance (PdM) is a concept, which is implemented to
effectively manage maintenance plans of the assets by predicting their failures
with data driven techniques. In these scenarios, data is collected over a
certain period of time to monitor the state of equipment. The objective is to
find some correlations and patterns that can help predict and ultimately
prevent failures. Equipment in manufacturing industry are often utilized
without a planned maintenance approach. Such practise frequently results in
unexpected downtime, owing to certain unexpected failures. In scheduled
maintenance, the condition of the manufacturing equipment is checked after
fixed time interval and if any fault occurs, the component is replaced to avoid
unexpected equipment stoppages. On the flip side, this leads to increase in
time for which machine is non-functioning and cost of carrying out the
maintenance. The emergence of Industry 4.0 and smart systems have led to
increasing emphasis on predictive maintenance (PdM) strategies that can reduce
the cost of downtime and increase the availability (utilization rate) of
manufacturing equipment. PdM also has the potential to bring about new
sustainable practices in manufacturing by fully utilizing the useful lives of
components.
|
[
{
"created": "Thu, 19 May 2022 09:05:37 GMT",
"version": "v1"
}
] |
2022-05-20
|
[
[
"Kane",
"Archit P.",
""
],
[
"Kore",
"Ashutosh S.",
""
],
[
"Khandale",
"Advait N.",
""
],
[
"Nigade",
"Sarish S.",
""
],
[
"Joshi",
"Pranjali P.",
""
]
] |
Predictive maintenance (PdM) is a concept, which is implemented to effectively manage maintenance plans of the assets by predicting their failures with data driven techniques. In these scenarios, data is collected over a certain period of time to monitor the state of equipment. The objective is to find some correlations and patterns that can help predict and ultimately prevent failures. Equipment in manufacturing industry are often utilized without a planned maintenance approach. Such practise frequently results in unexpected downtime, owing to certain unexpected failures. In scheduled maintenance, the condition of the manufacturing equipment is checked after fixed time interval and if any fault occurs, the component is replaced to avoid unexpected equipment stoppages. On the flip side, this leads to increase in time for which machine is non-functioning and cost of carrying out the maintenance. The emergence of Industry 4.0 and smart systems have led to increasing emphasis on predictive maintenance (PdM) strategies that can reduce the cost of downtime and increase the availability (utilization rate) of manufacturing equipment. PdM also has the potential to bring about new sustainable practices in manufacturing by fully utilizing the useful lives of components.
|
2202.08429
|
Tanu Malik
|
Naga Nithin Manne, Shilvi Satpati, Tanu Malik, Amitabha Bagchi, Ashish
Gehani and Amitabh Chaudhary
|
CHEX: Multiversion Replay with Ordered Checkpoints
|
13 pages, 13 figures, VLDB
| null | null | null |
cs.DB
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In scientific computing and data science disciplines, it is often necessary
to share application workflows and repeat results. Current tools containerize
application workflows, and share the resulting container for repeating results.
These tools, due to containerization, do improve sharing of results. However,
they do not improve the efficiency of replay. In this paper, we present the
multiversion replay problem which arises when multiple versions of an
application are containerized, and each version must be replayed to repeat
results. To avoid executing each version separately, we develop CHEX, which
checkpoints program state and determines when it is permissible to reuse
program state across versions. It does so using system call-based execution
lineage. Our capability to identify common computations across versions enables
us to consider optimizing replay using an in-memory cache, based on a
checkpoint-restore-switch system. We show the multiversion replay problem is
NP-hard, and propose efficient heuristics for it. CHEX reduces overall replay
time by sharing common computations but avoids storing a large number of
checkpoints. We demonstrate that CHEX maintains lightweight package sharing,
and improves the total time of multiversion replay by 50% on average.
|
[
{
"created": "Thu, 17 Feb 2022 03:17:49 GMT",
"version": "v1"
}
] |
2022-02-18
|
[
[
"Manne",
"Naga Nithin",
""
],
[
"Satpati",
"Shilvi",
""
],
[
"Malik",
"Tanu",
""
],
[
"Bagchi",
"Amitabha",
""
],
[
"Gehani",
"Ashish",
""
],
[
"Chaudhary",
"Amitabh",
""
]
] |
In scientific computing and data science disciplines, it is often necessary to share application workflows and repeat results. Current tools containerize application workflows, and share the resulting container for repeating results. These tools, due to containerization, do improve sharing of results. However, they do not improve the efficiency of replay. In this paper, we present the multiversion replay problem which arises when multiple versions of an application are containerized, and each version must be replayed to repeat results. To avoid executing each version separately, we develop CHEX, which checkpoints program state and determines when it is permissible to reuse program state across versions. It does so using system call-based execution lineage. Our capability to identify common computations across versions enables us to consider optimizing replay using an in-memory cache, based on a checkpoint-restore-switch system. We show the multiversion replay problem is NP-hard, and propose efficient heuristics for it. CHEX reduces overall replay time by sharing common computations but avoids storing a large number of checkpoints. We demonstrate that CHEX maintains lightweight package sharing, and improves the total time of multiversion replay by 50% on average.
|
1303.1399
|
Owen Stephens
|
Pawe{\l} Sobocinski and Owen Stephens
|
Reachability via Compositionality in Petri nets
| null | null | null | null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a novel technique for checking reachability in Petri nets that
relies on a recently introduced compositional algebra of nets. We prove that
the technique is correct, and discuss our implementation. We report promising
experimental results on some well-known examples.
|
[
{
"created": "Wed, 6 Mar 2013 17:36:14 GMT",
"version": "v1"
},
{
"created": "Mon, 21 Apr 2014 18:55:39 GMT",
"version": "v2"
}
] |
2014-04-22
|
[
[
"Sobocinski",
"Paweł",
""
],
[
"Stephens",
"Owen",
""
]
] |
We introduce a novel technique for checking reachability in Petri nets that relies on a recently introduced compositional algebra of nets. We prove that the technique is correct, and discuss our implementation. We report promising experimental results on some well-known examples.
|
2203.04089
|
Saurav Keshari Aryal PhD
|
Saurav K. Aryal, Peter A. Keiller
|
Associating eHealth Policies and National Data Privacy Regulations
| null | null | null | null |
cs.CY cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
As electronic data becomes the lifeline of modern society, privacy concerns
increase. These concerns are reflected by the European Union's enactment of the
General Data Protection Regulation (GDPR), one of the most comprehensive and
robust privacy regulations globally. This project aims to evaluate and
highlight associations between eHealth systems' policies and personal data
privacy regulations. Using bias-corrected Cramer's V and Thiel's U tests, we
found weak and zero associations between e-health systems' rules and
protections for data privacy. A simple decision tree model is trained, which
validates the association scores obtained
|
[
{
"created": "Sun, 27 Feb 2022 21:22:48 GMT",
"version": "v1"
}
] |
2022-03-09
|
[
[
"Aryal",
"Saurav K.",
""
],
[
"Keiller",
"Peter A.",
""
]
] |
As electronic data becomes the lifeline of modern society, privacy concerns increase. These concerns are reflected by the European Union's enactment of the General Data Protection Regulation (GDPR), one of the most comprehensive and robust privacy regulations globally. This project aims to evaluate and highlight associations between eHealth systems' policies and personal data privacy regulations. Using bias-corrected Cramer's V and Thiel's U tests, we found weak and zero associations between e-health systems' rules and protections for data privacy. A simple decision tree model is trained, which validates the association scores obtained
|
2403.16638
|
Gang Cao
|
Jianfa Bai, Man Lin, Gang Cao
|
AI-Generated Video Detection via Spatio-Temporal Anomaly Learning
| null | null | null | null |
cs.CV cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The advancement of generation models has led to the emergence of highly
realistic artificial intelligence (AI)-generated videos. Malicious users can
easily create non-existent videos to spread false information. This letter
proposes an effective AI-generated video detection (AIGVDet) scheme by
capturing the forensic traces with a two-branch spatio-temporal convolutional
neural network (CNN). Specifically, two ResNet sub-detectors are learned
separately for identifying the anomalies in spatical and optical flow domains,
respectively. Results of such sub-detectors are fused to further enhance the
discrimination ability. A large-scale generated video dataset (GVD) is
constructed as a benchmark for model training and evaluation. Extensive
experimental results verify the high generalization and robustness of our
AIGVDet scheme. Code and dataset will be available at
https://github.com/multimediaFor/AIGVDet.
|
[
{
"created": "Mon, 25 Mar 2024 11:26:18 GMT",
"version": "v1"
}
] |
2024-03-26
|
[
[
"Bai",
"Jianfa",
""
],
[
"Lin",
"Man",
""
],
[
"Cao",
"Gang",
""
]
] |
The advancement of generation models has led to the emergence of highly realistic artificial intelligence (AI)-generated videos. Malicious users can easily create non-existent videos to spread false information. This letter proposes an effective AI-generated video detection (AIGVDet) scheme by capturing the forensic traces with a two-branch spatio-temporal convolutional neural network (CNN). Specifically, two ResNet sub-detectors are learned separately for identifying the anomalies in spatical and optical flow domains, respectively. Results of such sub-detectors are fused to further enhance the discrimination ability. A large-scale generated video dataset (GVD) is constructed as a benchmark for model training and evaluation. Extensive experimental results verify the high generalization and robustness of our AIGVDet scheme. Code and dataset will be available at https://github.com/multimediaFor/AIGVDet.
|
1603.02345
|
Byeongkeun Kang
|
Byeongkeun Kang, Kar-Han Tan, Nan Jiang, Hung-Shuo Tai, Daniel
Tretter, and Truong Q. Nguyen
|
Hand Segmentation for Hand-Object Interaction from Depth map
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Hand segmentation for hand-object interaction is a necessary preprocessing
step in many applications such as augmented reality, medical application, and
human-robot interaction. However, typical methods are based on color
information which is not robust to objects with skin color, skin pigment
difference, and light condition variations. Thus, we propose hand segmentation
method for hand-object interaction using only a depth map. It is challenging
because of the small depth difference between a hand and objects during an
interaction. To overcome this challenge, we propose the two-stage random
decision forest (RDF) method consisting of detecting hands and segmenting
hands. To validate the proposed method, we demonstrate results on the publicly
available dataset of hand segmentation for hand-object interaction. The
proposed method achieves high accuracy in short processing time comparing to
the other state-of-the-art methods.
|
[
{
"created": "Tue, 8 Mar 2016 00:22:59 GMT",
"version": "v1"
},
{
"created": "Tue, 20 Sep 2016 01:02:20 GMT",
"version": "v2"
},
{
"created": "Wed, 10 Jan 2018 03:20:52 GMT",
"version": "v3"
}
] |
2018-01-11
|
[
[
"Kang",
"Byeongkeun",
""
],
[
"Tan",
"Kar-Han",
""
],
[
"Jiang",
"Nan",
""
],
[
"Tai",
"Hung-Shuo",
""
],
[
"Tretter",
"Daniel",
""
],
[
"Nguyen",
"Truong Q.",
""
]
] |
Hand segmentation for hand-object interaction is a necessary preprocessing step in many applications such as augmented reality, medical application, and human-robot interaction. However, typical methods are based on color information which is not robust to objects with skin color, skin pigment difference, and light condition variations. Thus, we propose hand segmentation method for hand-object interaction using only a depth map. It is challenging because of the small depth difference between a hand and objects during an interaction. To overcome this challenge, we propose the two-stage random decision forest (RDF) method consisting of detecting hands and segmenting hands. To validate the proposed method, we demonstrate results on the publicly available dataset of hand segmentation for hand-object interaction. The proposed method achieves high accuracy in short processing time comparing to the other state-of-the-art methods.
|
1610.08904
|
Chen Huang
|
Chen Huang, Chen Change Loy, Xiaoou Tang
|
Local Similarity-Aware Deep Feature Embedding
|
9 pages, 4 figures, 2 tables. Accepted to NIPS 2016
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing deep embedding methods in vision tasks are capable of learning a
compact Euclidean space from images, where Euclidean distances correspond to a
similarity metric. To make learning more effective and efficient, hard sample
mining is usually employed, with samples identified through computing the
Euclidean feature distance. However, the global Euclidean distance cannot
faithfully characterize the true feature similarity in a complex visual feature
space, where the intraclass distance in a high-density region may be larger
than the interclass distance in low-density regions. In this paper, we
introduce a Position-Dependent Deep Metric (PDDM) unit, which is capable of
learning a similarity metric adaptive to local feature structure. The metric
can be used to select genuinely hard samples in a local neighborhood to guide
the deep embedding learning in an online and robust manner. The new layer is
appealing in that it is pluggable to any convolutional networks and is trained
end-to-end. Our local similarity-aware feature embedding not only demonstrates
faster convergence and boosted performance on two complex image retrieval
datasets, its large margin nature also leads to superior generalization results
under the large and open set scenarios of transfer learning and zero-shot
learning on ImageNet 2010 and ImageNet-10K datasets.
|
[
{
"created": "Thu, 27 Oct 2016 17:51:18 GMT",
"version": "v1"
}
] |
2016-10-28
|
[
[
"Huang",
"Chen",
""
],
[
"Loy",
"Chen Change",
""
],
[
"Tang",
"Xiaoou",
""
]
] |
Existing deep embedding methods in vision tasks are capable of learning a compact Euclidean space from images, where Euclidean distances correspond to a similarity metric. To make learning more effective and efficient, hard sample mining is usually employed, with samples identified through computing the Euclidean feature distance. However, the global Euclidean distance cannot faithfully characterize the true feature similarity in a complex visual feature space, where the intraclass distance in a high-density region may be larger than the interclass distance in low-density regions. In this paper, we introduce a Position-Dependent Deep Metric (PDDM) unit, which is capable of learning a similarity metric adaptive to local feature structure. The metric can be used to select genuinely hard samples in a local neighborhood to guide the deep embedding learning in an online and robust manner. The new layer is appealing in that it is pluggable to any convolutional networks and is trained end-to-end. Our local similarity-aware feature embedding not only demonstrates faster convergence and boosted performance on two complex image retrieval datasets, its large margin nature also leads to superior generalization results under the large and open set scenarios of transfer learning and zero-shot learning on ImageNet 2010 and ImageNet-10K datasets.
|
1207.0369
|
Benjamin Doerr
|
Benjamin Doerr, Daniel Johannsen, Timo K\"otzing, Frank Neumann,
Madeleine Theile
|
More Effective Crossover Operators for the All-Pairs Shortest Path
Problem
| null | null | null | null |
cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The all-pairs shortest path problem is the first non-artificial problem for
which it was shown that adding crossover can significantly speed up a
mutation-only evolutionary algorithm. Recently, the analysis of this algorithm
was refined and it was shown to have an expected optimization time (w.r.t. the
number of fitness evaluations) of $\Theta(n^{3.25}(\log n)^{0.25})$.
In contrast to this simple algorithm, evolutionary algorithms used in
practice usually employ refined recombination strategies in order to avoid the
creation of infeasible offspring. We study extensions of the basic algorithm by
two such concepts which are central in recombination, namely \emph{repair
mechanisms} and \emph{parent selection}. We show that repairing infeasible
offspring leads to an improved expected optimization time of
$\mathord{O}(n^{3.2}(\log n)^{0.2})$. As a second part of our study we prove
that choosing parents that guarantee feasible offspring results in an even
better optimization time of $\mathord{O}(n^{3}\log n)$.
Both results show that already simple adjustments of the recombination
operator can asymptotically improve the runtime of evolutionary algorithms.
|
[
{
"created": "Mon, 2 Jul 2012 13:14:14 GMT",
"version": "v1"
}
] |
2015-03-20
|
[
[
"Doerr",
"Benjamin",
""
],
[
"Johannsen",
"Daniel",
""
],
[
"Kötzing",
"Timo",
""
],
[
"Neumann",
"Frank",
""
],
[
"Theile",
"Madeleine",
""
]
] |
The all-pairs shortest path problem is the first non-artificial problem for which it was shown that adding crossover can significantly speed up a mutation-only evolutionary algorithm. Recently, the analysis of this algorithm was refined and it was shown to have an expected optimization time (w.r.t. the number of fitness evaluations) of $\Theta(n^{3.25}(\log n)^{0.25})$. In contrast to this simple algorithm, evolutionary algorithms used in practice usually employ refined recombination strategies in order to avoid the creation of infeasible offspring. We study extensions of the basic algorithm by two such concepts which are central in recombination, namely \emph{repair mechanisms} and \emph{parent selection}. We show that repairing infeasible offspring leads to an improved expected optimization time of $\mathord{O}(n^{3.2}(\log n)^{0.2})$. As a second part of our study we prove that choosing parents that guarantee feasible offspring results in an even better optimization time of $\mathord{O}(n^{3}\log n)$. Both results show that already simple adjustments of the recombination operator can asymptotically improve the runtime of evolutionary algorithms.
|
2206.10381
|
Yu Ma
|
Kimberly Villalobos Carballo, Liangyuan Na, Yu Ma, L\'eonard
Boussioux, Cynthia Zeng, Luis R. Soenksen, Dimitris Bertsimas
|
TabText: A Flexible and Contextual Approach to Tabular Data
Representation
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Tabular data is essential for applying machine learning tasks across various
industries. However, traditional data processing methods do not fully utilize
all the information available in the tables, ignoring important contextual
information such as column header descriptions. In addition, pre-processing
data into a tabular format can remain a labor-intensive bottleneck in model
development. This work introduces TabText, a processing and feature extraction
framework that extracts contextual information from tabular data structures.
TabText addresses processing difficulties by converting the content into
language and utilizing pre-trained large language models (LLMs). We evaluate
our framework on nine healthcare prediction tasks ranging from patient
discharge, ICU admission, and mortality. We show that 1) applying our TabText
framework enables the generation of high-performing and simple machine learning
baseline models with minimal data pre-processing, and 2) augmenting
pre-processed tabular data with TabText representations improves the average
and worst-case AUC performance of standard machine learning models by as much
as 6%.
|
[
{
"created": "Tue, 21 Jun 2022 13:28:57 GMT",
"version": "v1"
},
{
"created": "Mon, 15 Aug 2022 14:54:05 GMT",
"version": "v2"
},
{
"created": "Tue, 18 Jul 2023 13:55:14 GMT",
"version": "v3"
},
{
"created": "Fri, 21 Jul 2023 20:34:02 GMT",
"version": "v4"
}
] |
2023-07-25
|
[
[
"Carballo",
"Kimberly Villalobos",
""
],
[
"Na",
"Liangyuan",
""
],
[
"Ma",
"Yu",
""
],
[
"Boussioux",
"Léonard",
""
],
[
"Zeng",
"Cynthia",
""
],
[
"Soenksen",
"Luis R.",
""
],
[
"Bertsimas",
"Dimitris",
""
]
] |
Tabular data is essential for applying machine learning tasks across various industries. However, traditional data processing methods do not fully utilize all the information available in the tables, ignoring important contextual information such as column header descriptions. In addition, pre-processing data into a tabular format can remain a labor-intensive bottleneck in model development. This work introduces TabText, a processing and feature extraction framework that extracts contextual information from tabular data structures. TabText addresses processing difficulties by converting the content into language and utilizing pre-trained large language models (LLMs). We evaluate our framework on nine healthcare prediction tasks ranging from patient discharge, ICU admission, and mortality. We show that 1) applying our TabText framework enables the generation of high-performing and simple machine learning baseline models with minimal data pre-processing, and 2) augmenting pre-processed tabular data with TabText representations improves the average and worst-case AUC performance of standard machine learning models by as much as 6%.
|
2210.04772
|
Andrea Mazzullo
|
Massimo Carraturo, Andrea Mazzullo
|
An Ontology for Defect Detection in Metal Additive Manufacturing
| null | null | null | null |
cs.AI cs.LO
|
http://creativecommons.org/licenses/by-sa/4.0/
|
A key challenge for Industry 4.0 applications is to develop control systems
for automated manufacturing services that are capable of addressing both data
integration and semantic interoperability issues, as well as monitoring and
decision making tasks. To address such an issue in advanced manufacturing
systems, principled knowledge representation approaches based on formal
ontologies have been proposed as a foundation to information management and
maintenance in presence of heterogeneous data sources. In addition, ontologies
provide reasoning and querying capabilities to aid domain experts and end users
in the context of constraint validation and decision making. Finally,
ontology-based approaches to advanced manufacturing services can support the
explainability and interpretability of the behaviour of monitoring, control,
and simulation systems that are based on black-box machine learning algorithms.
In this work, we provide a novel ontology for the classification of
process-induced defects known from the metal additive manufacturing literature.
Together with a formal representation of the characterising features and
sources of defects, we integrate our knowledge base with state-of-the-art
ontologies in the field. Our knowledge base aims at enhancing the modelling
capabilities of additive manufacturing ontologies by adding further defect
analysis terminology and diagnostic inference features.
|
[
{
"created": "Thu, 29 Sep 2022 13:35:25 GMT",
"version": "v1"
}
] |
2022-10-11
|
[
[
"Carraturo",
"Massimo",
""
],
[
"Mazzullo",
"Andrea",
""
]
] |
A key challenge for Industry 4.0 applications is to develop control systems for automated manufacturing services that are capable of addressing both data integration and semantic interoperability issues, as well as monitoring and decision making tasks. To address such an issue in advanced manufacturing systems, principled knowledge representation approaches based on formal ontologies have been proposed as a foundation to information management and maintenance in presence of heterogeneous data sources. In addition, ontologies provide reasoning and querying capabilities to aid domain experts and end users in the context of constraint validation and decision making. Finally, ontology-based approaches to advanced manufacturing services can support the explainability and interpretability of the behaviour of monitoring, control, and simulation systems that are based on black-box machine learning algorithms. In this work, we provide a novel ontology for the classification of process-induced defects known from the metal additive manufacturing literature. Together with a formal representation of the characterising features and sources of defects, we integrate our knowledge base with state-of-the-art ontologies in the field. Our knowledge base aims at enhancing the modelling capabilities of additive manufacturing ontologies by adding further defect analysis terminology and diagnostic inference features.
|
2211.12640
|
Shahryar Zehtabi
|
Shahryar Zehtabi, Seyyedali Hosseinalipour, Christopher G. Brinton
|
Event-Triggered Decentralized Federated Learning over
Resource-Constrained Edge Devices
|
23 pages. arXiv admin note: text overlap with arXiv:2204.03726
| null | null | null |
cs.LG cs.DC math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Federated learning (FL) is a technique for distributed machine learning (ML),
in which edge devices carry out local model training on their individual
datasets. In traditional FL algorithms, trained models at the edge are
periodically sent to a central server for aggregation, utilizing a star
topology as the underlying communication graph. However, assuming access to a
central coordinator is not always practical, e.g., in ad hoc wireless network
settings. In this paper, we develop a novel methodology for fully decentralized
FL, where in addition to local training, devices conduct model aggregation via
cooperative consensus formation with their one-hop neighbors over the
decentralized underlying physical network. We further eliminate the need for a
timing coordinator by introducing asynchronous, event-triggered communications
among the devices. In doing so, to account for the inherent resource
heterogeneity challenges in FL, we define personalized communication triggering
conditions at each device that weigh the change in local model parameters
against the available local resources. We theoretically demonstrate that our
methodology converges to the globally optimal learning model at a
$O{(\frac{\ln{k}}{\sqrt{k}})}$ rate under standard assumptions in distributed
learning and consensus literature. Our subsequent numerical evaluations
demonstrate that our methodology obtains substantial improvements in
convergence speed and/or communication savings compared with existing
decentralized FL baselines.
|
[
{
"created": "Wed, 23 Nov 2022 00:04:05 GMT",
"version": "v1"
}
] |
2022-11-24
|
[
[
"Zehtabi",
"Shahryar",
""
],
[
"Hosseinalipour",
"Seyyedali",
""
],
[
"Brinton",
"Christopher G.",
""
]
] |
Federated learning (FL) is a technique for distributed machine learning (ML), in which edge devices carry out local model training on their individual datasets. In traditional FL algorithms, trained models at the edge are periodically sent to a central server for aggregation, utilizing a star topology as the underlying communication graph. However, assuming access to a central coordinator is not always practical, e.g., in ad hoc wireless network settings. In this paper, we develop a novel methodology for fully decentralized FL, where in addition to local training, devices conduct model aggregation via cooperative consensus formation with their one-hop neighbors over the decentralized underlying physical network. We further eliminate the need for a timing coordinator by introducing asynchronous, event-triggered communications among the devices. In doing so, to account for the inherent resource heterogeneity challenges in FL, we define personalized communication triggering conditions at each device that weigh the change in local model parameters against the available local resources. We theoretically demonstrate that our methodology converges to the globally optimal learning model at a $O{(\frac{\ln{k}}{\sqrt{k}})}$ rate under standard assumptions in distributed learning and consensus literature. Our subsequent numerical evaluations demonstrate that our methodology obtains substantial improvements in convergence speed and/or communication savings compared with existing decentralized FL baselines.
|
2006.03906
|
Dominik Baumann
|
Dominik Baumann, Friedrich Solowjow, Karl H. Johansson, and Sebastian
Trimpe
|
Identifying Causal Structure in Dynamical Systems
|
Accepted final versions to appear in the Transactions on Machine
Learning Research
| null | null | null |
cs.LG cs.SY eess.SY stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mathematical models are fundamental building blocks in the design of
dynamical control systems. As control systems are becoming increasingly complex
and networked, approaches for obtaining such models based on first principles
reach their limits. Data-driven methods provide an alternative. However,
without structural knowledge, these methods are prone to finding spurious
correlations in the training data, which can hamper generalization capabilities
of the obtained models. This can significantly lower control and prediction
performance when the system is exposed to unknown situations. A preceding
causal identification can prevent this pitfall. In this paper, we propose a
method that identifies the causal structure of control systems. We design
experiments based on the concept of controllability, which provides a
systematic way to compute input trajectories that steer the system to specific
regions in its state space. We then analyze the resulting data leveraging
powerful techniques from causal inference and extend them to control systems.
Further, we derive conditions that guarantee the discovery of the true causal
structure of the system. Experiments on a robot arm demonstrate reliable causal
identification from real-world data and enhanced generalization capabilities.
|
[
{
"created": "Sat, 6 Jun 2020 16:17:07 GMT",
"version": "v1"
},
{
"created": "Mon, 18 Jul 2022 06:29:31 GMT",
"version": "v2"
}
] |
2022-07-19
|
[
[
"Baumann",
"Dominik",
""
],
[
"Solowjow",
"Friedrich",
""
],
[
"Johansson",
"Karl H.",
""
],
[
"Trimpe",
"Sebastian",
""
]
] |
Mathematical models are fundamental building blocks in the design of dynamical control systems. As control systems are becoming increasingly complex and networked, approaches for obtaining such models based on first principles reach their limits. Data-driven methods provide an alternative. However, without structural knowledge, these methods are prone to finding spurious correlations in the training data, which can hamper generalization capabilities of the obtained models. This can significantly lower control and prediction performance when the system is exposed to unknown situations. A preceding causal identification can prevent this pitfall. In this paper, we propose a method that identifies the causal structure of control systems. We design experiments based on the concept of controllability, which provides a systematic way to compute input trajectories that steer the system to specific regions in its state space. We then analyze the resulting data leveraging powerful techniques from causal inference and extend them to control systems. Further, we derive conditions that guarantee the discovery of the true causal structure of the system. Experiments on a robot arm demonstrate reliable causal identification from real-world data and enhanced generalization capabilities.
|
1811.12360
|
Daniel Severin Dr.
|
Manoel Camp\^elo, Daniel Sever\'in
|
The polytope of legal sequences
| null | null | null | null |
cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A sequence of vertices in a graph is called a \emph{(total) legal dominating
sequence} if every vertex in the sequence (total) dominates at least one vertex
not dominated by those ones that precede it, and at the end all vertices of the
graph are (totally) dominated. The \emph{Grundy (total) domination number} of a
graph is the size of the largest (total) legal dominating sequence. In this
work, we address the problems of determining these two parameters by
introducing a generalized version of them. We explicitly calculate the
corresponding (general) parameter for paths and web graphs. We propose integer
programming formulations for the new problem and we study the polytope
associated to one of them. We find families of valid inequalities and derive
conditions under which they are facet-defining. Finally, we perform
computational experiments to compare the formulations as well as to test valid
inequalities as cuts in a B\&C framework.
|
[
{
"created": "Thu, 29 Nov 2018 18:11:35 GMT",
"version": "v1"
}
] |
2018-11-30
|
[
[
"Campêlo",
"Manoel",
""
],
[
"Severín",
"Daniel",
""
]
] |
A sequence of vertices in a graph is called a \emph{(total) legal dominating sequence} if every vertex in the sequence (total) dominates at least one vertex not dominated by those ones that precede it, and at the end all vertices of the graph are (totally) dominated. The \emph{Grundy (total) domination number} of a graph is the size of the largest (total) legal dominating sequence. In this work, we address the problems of determining these two parameters by introducing a generalized version of them. We explicitly calculate the corresponding (general) parameter for paths and web graphs. We propose integer programming formulations for the new problem and we study the polytope associated to one of them. We find families of valid inequalities and derive conditions under which they are facet-defining. Finally, we perform computational experiments to compare the formulations as well as to test valid inequalities as cuts in a B\&C framework.
|
2302.10366
|
Tianyin Xu
|
Jinghao Jia and YiFei Zhu and Dan Williams and Andrea Arcangeli and
Claudio Canella and Hubertus Franke and Tobin Feldman-Fitzthum and Dimitrios
Skarlatos and Daniel Gruss and Tianyin Xu
|
Programmable System Call Security with eBPF
| null | null | null | null |
cs.OS cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
System call filtering is a widely used security mechanism for protecting a
shared OS kernel against untrusted user applications. However, existing system
call filtering techniques either are too expensive due to the context switch
overhead imposed by userspace agents, or lack sufficient programmability to
express advanced policies. Seccomp, Linux's system call filtering module, is
widely used by modern container technologies, mobile apps, and system
management services. Despite the adoption of the classic BPF language (cBPF),
security policies in Seccomp are mostly limited to static allow lists,
primarily because cBPF does not support stateful policies. Consequently, many
essential security features cannot be expressed precisely and/or require kernel
modifications.
In this paper, we present a programmable system call filtering mechanism,
which enables more advanced security policies to be expressed by leveraging the
extended BPF language (eBPF). More specifically, we create a new Seccomp eBPF
program type, exposing, modifying or creating new eBPF helper functions to
safely manage filter state, access kernel and user state, and utilize
synchronization primitives. Importantly, our system integrates with existing
kernel privilege and capability mechanisms, enabling unprivileged users to
install advanced filters safely. Our evaluation shows that our eBPF-based
filtering can enhance existing policies (e.g., reducing the attack surface of
early execution phase by up to 55.4% for temporal specialization), mitigate
real-world vulnerabilities, and accelerate filters.
|
[
{
"created": "Mon, 20 Feb 2023 23:54:04 GMT",
"version": "v1"
}
] |
2023-02-22
|
[
[
"Jia",
"Jinghao",
""
],
[
"Zhu",
"YiFei",
""
],
[
"Williams",
"Dan",
""
],
[
"Arcangeli",
"Andrea",
""
],
[
"Canella",
"Claudio",
""
],
[
"Franke",
"Hubertus",
""
],
[
"Feldman-Fitzthum",
"Tobin",
""
],
[
"Skarlatos",
"Dimitrios",
""
],
[
"Gruss",
"Daniel",
""
],
[
"Xu",
"Tianyin",
""
]
] |
System call filtering is a widely used security mechanism for protecting a shared OS kernel against untrusted user applications. However, existing system call filtering techniques either are too expensive due to the context switch overhead imposed by userspace agents, or lack sufficient programmability to express advanced policies. Seccomp, Linux's system call filtering module, is widely used by modern container technologies, mobile apps, and system management services. Despite the adoption of the classic BPF language (cBPF), security policies in Seccomp are mostly limited to static allow lists, primarily because cBPF does not support stateful policies. Consequently, many essential security features cannot be expressed precisely and/or require kernel modifications. In this paper, we present a programmable system call filtering mechanism, which enables more advanced security policies to be expressed by leveraging the extended BPF language (eBPF). More specifically, we create a new Seccomp eBPF program type, exposing, modifying or creating new eBPF helper functions to safely manage filter state, access kernel and user state, and utilize synchronization primitives. Importantly, our system integrates with existing kernel privilege and capability mechanisms, enabling unprivileged users to install advanced filters safely. Our evaluation shows that our eBPF-based filtering can enhance existing policies (e.g., reducing the attack surface of early execution phase by up to 55.4% for temporal specialization), mitigate real-world vulnerabilities, and accelerate filters.
|
2401.02607
|
Yuping Ye
|
Yuping Ye, Zhan Song, Juan Zhao
|
Partition-based Nonrigid Registration for 3D Face Model
| null | null | null | null |
cs.CV
|
http://creativecommons.org/publicdomain/zero/1.0/
|
This paper presents a partition-based surface registration for 3D morphable
model(3DMM). In the 3DMM, it often requires to warp a handcrafted template
model into different captured models. The proposed method first utilizes the
landmarks to partition the template model then scale each part and finally
smooth the boundaries. This method is especially effective when the disparity
between the template model and the target model is huge. The experiment result
shows the method perform well than the traditional warp method and robust to
the local minima.
|
[
{
"created": "Fri, 5 Jan 2024 02:46:08 GMT",
"version": "v1"
}
] |
2024-01-08
|
[
[
"Ye",
"Yuping",
""
],
[
"Song",
"Zhan",
""
],
[
"Zhao",
"Juan",
""
]
] |
This paper presents a partition-based surface registration for 3D morphable model(3DMM). In the 3DMM, it often requires to warp a handcrafted template model into different captured models. The proposed method first utilizes the landmarks to partition the template model then scale each part and finally smooth the boundaries. This method is especially effective when the disparity between the template model and the target model is huge. The experiment result shows the method perform well than the traditional warp method and robust to the local minima.
|
1202.0313
|
Leslie Ann Goldberg
|
Leslie Ann Goldberg and Mark Jerrum
|
The Complexity of Computing the Sign of the Tutte Polynomial
|
minor updates. This is the final version (to appear in SICOMP)
| null | null | null |
cs.CC math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the complexity of computing the sign of the Tutte polynomial of a
graph. As there are only three possible outcomes (positive, negative, and
zero), this seems at first sight more like a decision problem than a counting
problem. Surprisingly, however, there are large regions of the parameter space
for which computing the sign of the Tutte polynomial is actually #P-hard. As a
trivial consequence, approximating the polynomial is also #P-hard in this case.
Thus, approximately evaluating the Tutte polynomial in these regions is as hard
as exactly counting the satisfying assignments to a CNF Boolean formula. For
most other points in the parameter space, we show that computing the sign of
the polynomial is in FP, whereas approximating the polynomial can be done in
polynomial time with an NP oracle. As a special case, we completely resolve the
complexity of computing the sign of the chromatic polynomial - this is easily
computable at q=2 and when q is less than or equal to 32/27, and is NP-hard to
compute for all other values of the parameter q.
|
[
{
"created": "Wed, 1 Feb 2012 22:23:34 GMT",
"version": "v1"
},
{
"created": "Sat, 30 Jun 2012 11:02:23 GMT",
"version": "v2"
},
{
"created": "Fri, 4 Apr 2014 13:17:10 GMT",
"version": "v3"
},
{
"created": "Fri, 25 Apr 2014 13:59:54 GMT",
"version": "v4"
},
{
"created": "Wed, 8 Oct 2014 21:17:36 GMT",
"version": "v5"
}
] |
2014-10-10
|
[
[
"Goldberg",
"Leslie Ann",
""
],
[
"Jerrum",
"Mark",
""
]
] |
We study the complexity of computing the sign of the Tutte polynomial of a graph. As there are only three possible outcomes (positive, negative, and zero), this seems at first sight more like a decision problem than a counting problem. Surprisingly, however, there are large regions of the parameter space for which computing the sign of the Tutte polynomial is actually #P-hard. As a trivial consequence, approximating the polynomial is also #P-hard in this case. Thus, approximately evaluating the Tutte polynomial in these regions is as hard as exactly counting the satisfying assignments to a CNF Boolean formula. For most other points in the parameter space, we show that computing the sign of the polynomial is in FP, whereas approximating the polynomial can be done in polynomial time with an NP oracle. As a special case, we completely resolve the complexity of computing the sign of the chromatic polynomial - this is easily computable at q=2 and when q is less than or equal to 32/27, and is NP-hard to compute for all other values of the parameter q.
|
2203.08908
|
Shangyuan Tong
|
Shangyuan Tong, Timur Garipov, Yang Zhang, Shiyu Chang, Tommi S.
Jaakkola
|
Adversarial Support Alignment
|
Accepted to ICLR 2022
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the problem of aligning the supports of distributions. Compared to
the existing work on distribution alignment, support alignment does not require
the densities to be matched. We propose symmetric support difference as a
divergence measure to quantify the mismatch between supports. We show that
select discriminators (e.g. discriminator trained for Jensen-Shannon
divergence) are able to map support differences as support differences in their
one-dimensional output space. Following this result, our method aligns supports
by minimizing a symmetrized relaxed optimal transport cost in the discriminator
1D space via an adversarial process. Furthermore, we show that our approach can
be viewed as a limit of existing notions of alignment by increasing
transportation assignment tolerance. We quantitatively evaluate the method
across domain adaptation tasks with shifts in label distributions. Our
experiments show that the proposed method is more robust against these shifts
than other alignment-based baselines.
|
[
{
"created": "Wed, 16 Mar 2022 19:46:09 GMT",
"version": "v1"
}
] |
2022-03-18
|
[
[
"Tong",
"Shangyuan",
""
],
[
"Garipov",
"Timur",
""
],
[
"Zhang",
"Yang",
""
],
[
"Chang",
"Shiyu",
""
],
[
"Jaakkola",
"Tommi S.",
""
]
] |
We study the problem of aligning the supports of distributions. Compared to the existing work on distribution alignment, support alignment does not require the densities to be matched. We propose symmetric support difference as a divergence measure to quantify the mismatch between supports. We show that select discriminators (e.g. discriminator trained for Jensen-Shannon divergence) are able to map support differences as support differences in their one-dimensional output space. Following this result, our method aligns supports by minimizing a symmetrized relaxed optimal transport cost in the discriminator 1D space via an adversarial process. Furthermore, we show that our approach can be viewed as a limit of existing notions of alignment by increasing transportation assignment tolerance. We quantitatively evaluate the method across domain adaptation tasks with shifts in label distributions. Our experiments show that the proposed method is more robust against these shifts than other alignment-based baselines.
|
2406.08413
|
Christopher Wolters
|
Christopher Wolters, Xiaoxuan Yang, Ulf Schlichtmann, Toyotaro
Suzumura
|
Memory Is All You Need: An Overview of Compute-in-Memory Architectures
for Accelerating Large Language Model Inference
| null | null | null | null |
cs.AR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Large language models (LLMs) have recently transformed natural language
processing, enabling machines to generate human-like text and engage in
meaningful conversations. This development necessitates speed, efficiency, and
accessibility in LLM inference as the computational and memory requirements of
these systems grow exponentially. Meanwhile, advancements in computing and
memory capabilities are lagging behind, exacerbated by the discontinuation of
Moore's law. With LLMs exceeding the capacity of single GPUs, they require
complex, expert-level configurations for parallel processing. Memory accesses
become significantly more expensive than computation, posing a challenge for
efficient scaling, known as the memory wall. Here, compute-in-memory (CIM)
technologies offer a promising solution for accelerating AI inference by
directly performing analog computations in memory, potentially reducing latency
and power consumption. By closely integrating memory and compute elements, CIM
eliminates the von Neumann bottleneck, reducing data movement and improving
energy efficiency. This survey paper provides an overview and analysis of
transformer-based models, reviewing various CIM architectures and exploring how
they can address the imminent challenges of modern AI computing systems. We
discuss transformer-related operators and their hardware acceleration schemes
and highlight challenges, trends, and insights in corresponding CIM designs.
|
[
{
"created": "Wed, 12 Jun 2024 16:57:58 GMT",
"version": "v1"
}
] |
2024-06-13
|
[
[
"Wolters",
"Christopher",
""
],
[
"Yang",
"Xiaoxuan",
""
],
[
"Schlichtmann",
"Ulf",
""
],
[
"Suzumura",
"Toyotaro",
""
]
] |
Large language models (LLMs) have recently transformed natural language processing, enabling machines to generate human-like text and engage in meaningful conversations. This development necessitates speed, efficiency, and accessibility in LLM inference as the computational and memory requirements of these systems grow exponentially. Meanwhile, advancements in computing and memory capabilities are lagging behind, exacerbated by the discontinuation of Moore's law. With LLMs exceeding the capacity of single GPUs, they require complex, expert-level configurations for parallel processing. Memory accesses become significantly more expensive than computation, posing a challenge for efficient scaling, known as the memory wall. Here, compute-in-memory (CIM) technologies offer a promising solution for accelerating AI inference by directly performing analog computations in memory, potentially reducing latency and power consumption. By closely integrating memory and compute elements, CIM eliminates the von Neumann bottleneck, reducing data movement and improving energy efficiency. This survey paper provides an overview and analysis of transformer-based models, reviewing various CIM architectures and exploring how they can address the imminent challenges of modern AI computing systems. We discuss transformer-related operators and their hardware acceleration schemes and highlight challenges, trends, and insights in corresponding CIM designs.
|
1311.0505
|
Dang Hoan Tran
|
Dang-Hoan Tran
|
Automated Change Detection and Reactive Clustering in Multivariate
Streaming Data
| null | null | null | null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many automated systems need the capability of automatic change detection
without the given detection threshold. This paper presents an automated change
detection algorithm in streaming multivariate data. Two overlapping windows are
used to quantify the changes. While a window is used as the reference window
from which the clustering is created, the other called the current window
captures the newly incoming data points. A newly incoming data point can be
considered a change point if it is not a member of any cluster. As our
clustering-based change detector does not require detection threshold, it is an
automated detector. Based on this change detector, we propose a reactive
clustering algorithm for streaming data. Our empirical results show that, our
clustering-based change detector works well with multivariate streaming data.
The detection accuracy depends on the number of clusters in the reference
window, the window width.
|
[
{
"created": "Sun, 3 Nov 2013 18:48:50 GMT",
"version": "v1"
}
] |
2013-11-05
|
[
[
"Tran",
"Dang-Hoan",
""
]
] |
Many automated systems need the capability of automatic change detection without the given detection threshold. This paper presents an automated change detection algorithm in streaming multivariate data. Two overlapping windows are used to quantify the changes. While a window is used as the reference window from which the clustering is created, the other called the current window captures the newly incoming data points. A newly incoming data point can be considered a change point if it is not a member of any cluster. As our clustering-based change detector does not require detection threshold, it is an automated detector. Based on this change detector, we propose a reactive clustering algorithm for streaming data. Our empirical results show that, our clustering-based change detector works well with multivariate streaming data. The detection accuracy depends on the number of clusters in the reference window, the window width.
|
2012.01939
|
Thomas Dalton
|
Thomas Dalton, Mauritius Schmidtler, Alireza Hadj Khodabakhshi
|
Classifying Malware Using Function Representations in a Static Call
Graph
|
12 pages, 6 figures, accepted to CSoNet 2020 Dallas, to be published
in Springer's Lecture Notes in Computer Science
| null | null | null |
cs.CR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We propose a deep learning approach for identifying malware families using
the function call graphs of x86 assembly instructions. Though prior work on
static call graph analysis exists, very little involves the application of
modern, principled feature learning techniques to the problem. In this paper,
we introduce a system utilizing an executable's function call graph where
function representations are obtained by way of a recurrent neural network
(RNN) autoencoder which maps sequences of x86 instructions into dense, latent
vectors. These function embeddings are then modeled as vertices in a graph with
edges indicating call dependencies. Capturing rich, node-level representations
as well as global, topological properties of an executable file greatly
improves malware family detection rates and contributes to a more principled
approach to the problem in a way that deliberately avoids tedious feature
engineering and domain expertise. We test our approach by performing several
experiments on a Microsoft malware classification data set and achieve
excellent separation between malware families with a classification accuracy of
99.41%.
|
[
{
"created": "Tue, 1 Dec 2020 20:36:19 GMT",
"version": "v1"
}
] |
2020-12-04
|
[
[
"Dalton",
"Thomas",
""
],
[
"Schmidtler",
"Mauritius",
""
],
[
"Khodabakhshi",
"Alireza Hadj",
""
]
] |
We propose a deep learning approach for identifying malware families using the function call graphs of x86 assembly instructions. Though prior work on static call graph analysis exists, very little involves the application of modern, principled feature learning techniques to the problem. In this paper, we introduce a system utilizing an executable's function call graph where function representations are obtained by way of a recurrent neural network (RNN) autoencoder which maps sequences of x86 instructions into dense, latent vectors. These function embeddings are then modeled as vertices in a graph with edges indicating call dependencies. Capturing rich, node-level representations as well as global, topological properties of an executable file greatly improves malware family detection rates and contributes to a more principled approach to the problem in a way that deliberately avoids tedious feature engineering and domain expertise. We test our approach by performing several experiments on a Microsoft malware classification data set and achieve excellent separation between malware families with a classification accuracy of 99.41%.
|
1512.08427
|
Li Han
|
Li Han, David Kempe, Ruixin Qiang
|
Incentivizing Exploration with Heterogeneous Value of Money
|
WINE 2015
|
LNCS 9470 (2015) 370-383
|
10.1007/978-3-662-48995-6_27
| null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, Frazier et al. proposed a natural model for crowdsourced
exploration of different a priori unknown options: a principal is interested in
the long-term welfare of a population of agents who arrive one by one in a
multi-armed bandit setting. However, each agent is myopic, so in order to
incentivize him to explore options with better long-term prospects, the
principal must offer the agent money. Frazier et al. showed that a simple class
of policies called time-expanded are optimal in the worst case, and
characterized their budget-reward tradeoff.
The previous work assumed that all agents are equally and uniformly
susceptible to financial incentives. In reality, agents may have different
utility for money. We therefore extend the model of Frazier et al. to allow
agents that have heterogeneous and non-linear utilities for money. The
principal is informed of the agent's tradeoff via a signal that could be more
or less informative.
Our main result is to show that a convex program can be used to derive a
signal-dependent time-expanded policy which achieves the best possible
Lagrangian reward in the worst case. The worst-case guarantee is matched by
so-called "Diamonds in the Rough" instances; the proof that the guarantees
match is based on showing that two different convex programs have the same
optimal solution for these specific instances. These results also extend to the
budgeted case as in Frazier et al. We also show that the optimal policy is
monotone with respect to information, i.e., the approximation ratio of the
optimal policy improves as the signals become more informative.
|
[
{
"created": "Mon, 28 Dec 2015 14:50:45 GMT",
"version": "v1"
}
] |
2015-12-29
|
[
[
"Han",
"Li",
""
],
[
"Kempe",
"David",
""
],
[
"Qiang",
"Ruixin",
""
]
] |
Recently, Frazier et al. proposed a natural model for crowdsourced exploration of different a priori unknown options: a principal is interested in the long-term welfare of a population of agents who arrive one by one in a multi-armed bandit setting. However, each agent is myopic, so in order to incentivize him to explore options with better long-term prospects, the principal must offer the agent money. Frazier et al. showed that a simple class of policies called time-expanded are optimal in the worst case, and characterized their budget-reward tradeoff. The previous work assumed that all agents are equally and uniformly susceptible to financial incentives. In reality, agents may have different utility for money. We therefore extend the model of Frazier et al. to allow agents that have heterogeneous and non-linear utilities for money. The principal is informed of the agent's tradeoff via a signal that could be more or less informative. Our main result is to show that a convex program can be used to derive a signal-dependent time-expanded policy which achieves the best possible Lagrangian reward in the worst case. The worst-case guarantee is matched by so-called "Diamonds in the Rough" instances; the proof that the guarantees match is based on showing that two different convex programs have the same optimal solution for these specific instances. These results also extend to the budgeted case as in Frazier et al. We also show that the optimal policy is monotone with respect to information, i.e., the approximation ratio of the optimal policy improves as the signals become more informative.
|
2010.06113
|
Shubham Sharma
|
Shubham Sharma, Alan H. Gee, David Paydarfar, Joydeep Ghosh
|
FaiR-N: Fair and Robust Neural Networks for Structured Data
| null | null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fairness in machine learning is crucial when individuals are subject to
automated decisions made by models in high-stake domains. Organizations that
employ these models may also need to satisfy regulations that promote
responsible and ethical A.I. While fairness metrics relying on comparing model
error rates across subpopulations have been widely investigated for the
detection and mitigation of bias, fairness in terms of the equalized ability to
achieve recourse for different protected attribute groups has been relatively
unexplored. We present a novel formulation for training neural networks that
considers the distance of data points to the decision boundary such that the
new objective: (1) reduces the average distance to the decision boundary
between two groups for individuals subject to a negative outcome in each group,
i.e. the network is more fair with respect to the ability to obtain recourse,
and (2) increases the average distance of data points to the boundary to
promote adversarial robustness. We demonstrate that training with this loss
yields more fair and robust neural networks with similar accuracies to models
trained without it. Moreover, we qualitatively motivate and empirically show
that reducing recourse disparity across groups also improves fairness measures
that rely on error rates. To the best of our knowledge, this is the first time
that recourse capabilities across groups are considered to train fairer neural
networks, and a relation between error rates based fairness and recourse based
fairness is investigated.
|
[
{
"created": "Tue, 13 Oct 2020 01:53:15 GMT",
"version": "v1"
}
] |
2020-10-14
|
[
[
"Sharma",
"Shubham",
""
],
[
"Gee",
"Alan H.",
""
],
[
"Paydarfar",
"David",
""
],
[
"Ghosh",
"Joydeep",
""
]
] |
Fairness in machine learning is crucial when individuals are subject to automated decisions made by models in high-stake domains. Organizations that employ these models may also need to satisfy regulations that promote responsible and ethical A.I. While fairness metrics relying on comparing model error rates across subpopulations have been widely investigated for the detection and mitigation of bias, fairness in terms of the equalized ability to achieve recourse for different protected attribute groups has been relatively unexplored. We present a novel formulation for training neural networks that considers the distance of data points to the decision boundary such that the new objective: (1) reduces the average distance to the decision boundary between two groups for individuals subject to a negative outcome in each group, i.e. the network is more fair with respect to the ability to obtain recourse, and (2) increases the average distance of data points to the boundary to promote adversarial robustness. We demonstrate that training with this loss yields more fair and robust neural networks with similar accuracies to models trained without it. Moreover, we qualitatively motivate and empirically show that reducing recourse disparity across groups also improves fairness measures that rely on error rates. To the best of our knowledge, this is the first time that recourse capabilities across groups are considered to train fairer neural networks, and a relation between error rates based fairness and recourse based fairness is investigated.
|
2107.07754
|
Christopher Teo
|
Christopher T.H Teo and Ngai-Man Cheung
|
Measuring Fairness in Generative Models
|
Accepted in ICML 2021 Workshop - Machine Learning for Data: Automated
Creation, Privacy, Bias
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Deep generative models have made much progress in improving training
stability and quality of generated data. Recently there has been increased
interest in the fairness of deep-generated data. Fairness is important in many
applications, e.g. law enforcement, as biases will affect efficacy. Central to
fair data generation are the fairness metrics for the assessment and evaluation
of different generative models. In this paper, we first review fairness metrics
proposed in previous works and highlight potential weaknesses. We then discuss
a performance benchmark framework along with the assessment of alternative
metrics.
|
[
{
"created": "Fri, 16 Jul 2021 08:12:44 GMT",
"version": "v1"
}
] |
2021-07-19
|
[
[
"Teo",
"Christopher T. H",
""
],
[
"Cheung",
"Ngai-Man",
""
]
] |
Deep generative models have made much progress in improving training stability and quality of generated data. Recently there has been increased interest in the fairness of deep-generated data. Fairness is important in many applications, e.g. law enforcement, as biases will affect efficacy. Central to fair data generation are the fairness metrics for the assessment and evaluation of different generative models. In this paper, we first review fairness metrics proposed in previous works and highlight potential weaknesses. We then discuss a performance benchmark framework along with the assessment of alternative metrics.
|
2404.11357
|
Hangtao Zhang
|
Hangtao Zhang, Shengshan Hu, Yichen Wang, Leo Yu Zhang, Ziqi Zhou,
Xianlong Wang, Yanjun Zhang, Chao Chen
|
Detector Collapse: Physical-World Backdooring Object Detection to
Catastrophic Overload or Blindness in Autonomous Driving
|
Accepted to IJCAI 2024
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Object detection tasks, crucial in safety-critical systems like autonomous
driving, focus on pinpointing object locations. These detectors are known to be
susceptible to backdoor attacks. However, existing backdoor techniques have
primarily been adapted from classification tasks, overlooking deeper
vulnerabilities specific to object detection. This paper is dedicated to
bridging this gap by introducing Detector Collapse} (DC), a brand-new backdoor
attack paradigm tailored for object detection. DC is designed to instantly
incapacitate detectors (i.e., severely impairing detector's performance and
culminating in a denial-of-service). To this end, we develop two innovative
attack schemes: Sponge for triggering widespread misidentifications and
Blinding for rendering objects invisible. Remarkably, we introduce a novel
poisoning strategy exploiting natural objects, enabling DC to act as a
practical backdoor in real-world environments. Our experiments on different
detectors across several benchmarks show a significant improvement
($\sim$10\%-60\% absolute and $\sim$2-7$\times$ relative) in attack efficacy
over state-of-the-art attacks.
|
[
{
"created": "Wed, 17 Apr 2024 13:12:14 GMT",
"version": "v1"
},
{
"created": "Thu, 15 Aug 2024 13:02:41 GMT",
"version": "v2"
}
] |
2024-08-16
|
[
[
"Zhang",
"Hangtao",
""
],
[
"Hu",
"Shengshan",
""
],
[
"Wang",
"Yichen",
""
],
[
"Zhang",
"Leo Yu",
""
],
[
"Zhou",
"Ziqi",
""
],
[
"Wang",
"Xianlong",
""
],
[
"Zhang",
"Yanjun",
""
],
[
"Chen",
"Chao",
""
]
] |
Object detection tasks, crucial in safety-critical systems like autonomous driving, focus on pinpointing object locations. These detectors are known to be susceptible to backdoor attacks. However, existing backdoor techniques have primarily been adapted from classification tasks, overlooking deeper vulnerabilities specific to object detection. This paper is dedicated to bridging this gap by introducing Detector Collapse} (DC), a brand-new backdoor attack paradigm tailored for object detection. DC is designed to instantly incapacitate detectors (i.e., severely impairing detector's performance and culminating in a denial-of-service). To this end, we develop two innovative attack schemes: Sponge for triggering widespread misidentifications and Blinding for rendering objects invisible. Remarkably, we introduce a novel poisoning strategy exploiting natural objects, enabling DC to act as a practical backdoor in real-world environments. Our experiments on different detectors across several benchmarks show a significant improvement ($\sim$10\%-60\% absolute and $\sim$2-7$\times$ relative) in attack efficacy over state-of-the-art attacks.
|
2302.02560
|
Mauricio Tec
|
Mauricio Tec, Oladimeji Mudele, Kevin Josey, Francesca Dominici
|
Causal Estimation of Exposure Shifts with Neural Networks: Evaluating
the Health Benefits of Stricter Air Quality Standards in the US
| null | null | null | null |
cs.LG stat.ME stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
In policy research, one of the most critical analytic tasks is to estimate
the causal effect of a policy-relevant shift to the distribution of a
continuous exposure/treatment on an outcome of interest. We call this problem
shift-response function (SRF) estimation. Existing neural network methods
involving robust causal-effect estimators lack theoretical guarantees and
practical implementations for SRF estimation. Motivated by a key
policy-relevant question in public health, we develop a neural network method
and its theoretical underpinnings to estimate SRFs with robustness and
efficiency guarantees. We then apply our method to data consisting of 68
million individuals and 27 million deaths across the U.S. to estimate the
causal effect from revising the US National Ambient Air Quality Standards
(NAAQS) for PM 2.5 from 12 $\mu g/m^3$ to 9 $\mu g/m^3$. This change has been
recently proposed by the US Environmental Protection Agency (EPA). Our goal is
to estimate, for the first time, the reduction in deaths that would result from
this anticipated revision using causal methods for SRFs. Our proposed method,
called {T}argeted {R}egularization for {E}xposure {S}hifts with Neural
{Net}works (TRESNET), contributes to the neural network literature for causal
inference in two ways: first, it proposes a targeted regularization loss with
theoretical properties that ensure double robustness and achieves asymptotic
efficiency specific for SRF estimation; second, it enables loss functions from
the exponential family of distributions to accommodate non-continuous outcome
distributions (such as hospitalization or mortality counts). We complement our
application with benchmark experiments that demonstrate TRESNET's broad
applicability and competitiveness.
|
[
{
"created": "Mon, 6 Feb 2023 04:35:08 GMT",
"version": "v1"
},
{
"created": "Sun, 29 Oct 2023 02:19:00 GMT",
"version": "v2"
},
{
"created": "Wed, 6 Dec 2023 18:55:43 GMT",
"version": "v3"
}
] |
2023-12-07
|
[
[
"Tec",
"Mauricio",
""
],
[
"Mudele",
"Oladimeji",
""
],
[
"Josey",
"Kevin",
""
],
[
"Dominici",
"Francesca",
""
]
] |
In policy research, one of the most critical analytic tasks is to estimate the causal effect of a policy-relevant shift to the distribution of a continuous exposure/treatment on an outcome of interest. We call this problem shift-response function (SRF) estimation. Existing neural network methods involving robust causal-effect estimators lack theoretical guarantees and practical implementations for SRF estimation. Motivated by a key policy-relevant question in public health, we develop a neural network method and its theoretical underpinnings to estimate SRFs with robustness and efficiency guarantees. We then apply our method to data consisting of 68 million individuals and 27 million deaths across the U.S. to estimate the causal effect from revising the US National Ambient Air Quality Standards (NAAQS) for PM 2.5 from 12 $\mu g/m^3$ to 9 $\mu g/m^3$. This change has been recently proposed by the US Environmental Protection Agency (EPA). Our goal is to estimate, for the first time, the reduction in deaths that would result from this anticipated revision using causal methods for SRFs. Our proposed method, called {T}argeted {R}egularization for {E}xposure {S}hifts with Neural {Net}works (TRESNET), contributes to the neural network literature for causal inference in two ways: first, it proposes a targeted regularization loss with theoretical properties that ensure double robustness and achieves asymptotic efficiency specific for SRF estimation; second, it enables loss functions from the exponential family of distributions to accommodate non-continuous outcome distributions (such as hospitalization or mortality counts). We complement our application with benchmark experiments that demonstrate TRESNET's broad applicability and competitiveness.
|
2112.02855
|
William Buchanan Prof
|
William Abramson, William J. Buchanan, Sarwar Sayeed, Nikolaos
Pitropakis, Owen Lo
|
PAN-DOMAIN: Privacy-preserving Sharing and Auditing of Infection
Identifier Matching
| null |
IEEE SIN 2021
| null | null |
cs.CR cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
The spread of COVID-19 has highlighted the need for a robust contact tracing
infrastructure that enables infected individuals to have their contacts traced,
and followed up with a test. The key entities involved within a contact tracing
infrastructure may include the Citizen, a Testing Centre (TC), a Health
Authority (HA), and a Government Authority (GA). Typically, these different
domains need to communicate with each other about an individual. A common
approach is when a citizen discloses his personally identifiable information to
both the HA a TC, if the test result comes positive, the information is used by
the TC to alert the HA. Along with this, there can be other trusted entities
that have other key elements of data related to the citizen. However, the
existing approaches comprise severe flaws in terms of privacy and security.
Additionally, the aforementioned approaches are not transparent and often being
questioned for the efficacy of the implementations. In order to overcome the
challenges, this paper outlines the PAN-DOMAIN infrastructure that allows for
citizen identifiers to be matched amongst the TA, the HA and the GA. PAN-DOMAIN
ensures that the citizen can keep control of the mapping between the trusted
entities using a trusted converter, and has access to an audit log.
|
[
{
"created": "Mon, 6 Dec 2021 08:26:08 GMT",
"version": "v1"
}
] |
2021-12-07
|
[
[
"Abramson",
"William",
""
],
[
"Buchanan",
"William J.",
""
],
[
"Sayeed",
"Sarwar",
""
],
[
"Pitropakis",
"Nikolaos",
""
],
[
"Lo",
"Owen",
""
]
] |
The spread of COVID-19 has highlighted the need for a robust contact tracing infrastructure that enables infected individuals to have their contacts traced, and followed up with a test. The key entities involved within a contact tracing infrastructure may include the Citizen, a Testing Centre (TC), a Health Authority (HA), and a Government Authority (GA). Typically, these different domains need to communicate with each other about an individual. A common approach is when a citizen discloses his personally identifiable information to both the HA a TC, if the test result comes positive, the information is used by the TC to alert the HA. Along with this, there can be other trusted entities that have other key elements of data related to the citizen. However, the existing approaches comprise severe flaws in terms of privacy and security. Additionally, the aforementioned approaches are not transparent and often being questioned for the efficacy of the implementations. In order to overcome the challenges, this paper outlines the PAN-DOMAIN infrastructure that allows for citizen identifiers to be matched amongst the TA, the HA and the GA. PAN-DOMAIN ensures that the citizen can keep control of the mapping between the trusted entities using a trusted converter, and has access to an audit log.
|
1709.00717
|
Minho Kim
|
Minho Kim, Seung-Woo Ko and Seong-Lyun Kim
|
Enhancing TCP End-to-End Performance in Millimeter-Wave Communications
|
5 pages, PIMRC 2017
| null | null | null |
cs.NI cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, millimeter-wave (mmWave) communications have received great
attention due to the availability of large spectrum resources. Nevertheless,
their impact on TCP performance has been overlooked, which is observed that the
said TCP performance collapse occurs owing to the significant difference in
signal quality between LOS and NLOS links. We propose a novel TCP design for
mmWave communications, a mmWave performance enhancing proxy (mmPEP), enabling
not only to overcome TCP performance collapse but also exploit the properties
of mmWave channels. The base station installs the TCP proxy to operate the two
functionalities called Ack management and batch retransmission. Specifically,
the proxy sends the said early-Ack to the server not to decrease its sending
rate even in the NLOS status. In addition, when a packet-loss is detected, the
proxy retransmits not only lost packets but also the certain number of the
following packets expected to be lost too. It is verified by ns-3 simulation
that compared with benchmark, mmPEP enhances the end-to-end rate and packet
delivery ratio by maintaining high sending rate with decreasing the loss
recovery time.
|
[
{
"created": "Sun, 3 Sep 2017 13:47:42 GMT",
"version": "v1"
},
{
"created": "Tue, 5 Sep 2017 11:13:20 GMT",
"version": "v2"
}
] |
2017-09-06
|
[
[
"Kim",
"Minho",
""
],
[
"Ko",
"Seung-Woo",
""
],
[
"Kim",
"Seong-Lyun",
""
]
] |
Recently, millimeter-wave (mmWave) communications have received great attention due to the availability of large spectrum resources. Nevertheless, their impact on TCP performance has been overlooked, which is observed that the said TCP performance collapse occurs owing to the significant difference in signal quality between LOS and NLOS links. We propose a novel TCP design for mmWave communications, a mmWave performance enhancing proxy (mmPEP), enabling not only to overcome TCP performance collapse but also exploit the properties of mmWave channels. The base station installs the TCP proxy to operate the two functionalities called Ack management and batch retransmission. Specifically, the proxy sends the said early-Ack to the server not to decrease its sending rate even in the NLOS status. In addition, when a packet-loss is detected, the proxy retransmits not only lost packets but also the certain number of the following packets expected to be lost too. It is verified by ns-3 simulation that compared with benchmark, mmPEP enhances the end-to-end rate and packet delivery ratio by maintaining high sending rate with decreasing the loss recovery time.
|
1804.00401
|
Carsten Binnig
|
Prasetya Utama, Nathaniel Weir, Fuat Basik, Carsten Binnig, Ugur
Cetintemel, Benjamin H\"attasch, Amir Ilkhechi, Shekar Ramaswamy, Arif Usta
|
An End-to-end Neural Natural Language Interface for Databases
| null | null | null | null |
cs.DB cs.CL cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The ability to extract insights from new data sets is critical for decision
making. Visual interactive tools play an important role in data exploration
since they provide non-technical users with an effective way to visually
compose queries and comprehend the results. Natural language has recently
gained traction as an alternative query interface to databases with the
potential to enable non-expert users to formulate complex questions and
information needs efficiently and effectively. However, understanding natural
language questions and translating them accurately to SQL is a challenging
task, and thus Natural Language Interfaces for Databases (NLIDBs) have not yet
made their way into practical tools and commercial products.
In this paper, we present DBPal, a novel data exploration tool with a natural
language interface. DBPal leverages recent advances in deep models to make
query understanding more robust in the following ways: First, DBPal uses a deep
model to translate natural language statements to SQL, making the translation
process more robust to paraphrasing and other linguistic variations. Second, to
support the users in phrasing questions without knowing the database schema and
the query features, DBPal provides a learned auto-completion model that
suggests partial query extensions to users during query formulation and thus
helps to write complex queries.
|
[
{
"created": "Mon, 2 Apr 2018 05:36:38 GMT",
"version": "v1"
}
] |
2018-04-03
|
[
[
"Utama",
"Prasetya",
""
],
[
"Weir",
"Nathaniel",
""
],
[
"Basik",
"Fuat",
""
],
[
"Binnig",
"Carsten",
""
],
[
"Cetintemel",
"Ugur",
""
],
[
"Hättasch",
"Benjamin",
""
],
[
"Ilkhechi",
"Amir",
""
],
[
"Ramaswamy",
"Shekar",
""
],
[
"Usta",
"Arif",
""
]
] |
The ability to extract insights from new data sets is critical for decision making. Visual interactive tools play an important role in data exploration since they provide non-technical users with an effective way to visually compose queries and comprehend the results. Natural language has recently gained traction as an alternative query interface to databases with the potential to enable non-expert users to formulate complex questions and information needs efficiently and effectively. However, understanding natural language questions and translating them accurately to SQL is a challenging task, and thus Natural Language Interfaces for Databases (NLIDBs) have not yet made their way into practical tools and commercial products. In this paper, we present DBPal, a novel data exploration tool with a natural language interface. DBPal leverages recent advances in deep models to make query understanding more robust in the following ways: First, DBPal uses a deep model to translate natural language statements to SQL, making the translation process more robust to paraphrasing and other linguistic variations. Second, to support the users in phrasing questions without knowing the database schema and the query features, DBPal provides a learned auto-completion model that suggests partial query extensions to users during query formulation and thus helps to write complex queries.
|
2004.02516
|
Sibylle Fr\"oschle
|
Sibylle Fr\"oschle, Martin Kubisch, Marlon Gr\"afing
|
Security Analysis and Design for TAGA: a Touch and Go Assistant in the
Aerospace Domain
|
19 pages, 10 figures
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There is currently a drive in the aerospace domain to introduce machine to
machine communication over wireless networks to improve ground processes at
airports such as refuelling and air conditiong. To this end a session key has
to be established between the aircraft and the respective ground unit such as a
fuel truck or a pre-conditiong unit. This is to be provided by a `touch and go
assistant in the aerospace domain' (TAGA), which allows an operator to pair up
a ground unit and an aircraft present at a parking slot with the help of a NFC
system. In this paper, we present the results of our security analysis and
co-development of requirements, security concepts, and modular verification
thereof. We show that by, and only by, a combination of advanced security
protocols and local process measures we obtain secure and resilient designs for
TAGA. In particular, the design of choice is fully resilient against long-term
key compromises and parallel escalation of attacks.
|
[
{
"created": "Mon, 6 Apr 2020 09:40:40 GMT",
"version": "v1"
}
] |
2020-04-07
|
[
[
"Fröschle",
"Sibylle",
""
],
[
"Kubisch",
"Martin",
""
],
[
"Gräfing",
"Marlon",
""
]
] |
There is currently a drive in the aerospace domain to introduce machine to machine communication over wireless networks to improve ground processes at airports such as refuelling and air conditiong. To this end a session key has to be established between the aircraft and the respective ground unit such as a fuel truck or a pre-conditiong unit. This is to be provided by a `touch and go assistant in the aerospace domain' (TAGA), which allows an operator to pair up a ground unit and an aircraft present at a parking slot with the help of a NFC system. In this paper, we present the results of our security analysis and co-development of requirements, security concepts, and modular verification thereof. We show that by, and only by, a combination of advanced security protocols and local process measures we obtain secure and resilient designs for TAGA. In particular, the design of choice is fully resilient against long-term key compromises and parallel escalation of attacks.
|
2308.11854
|
Anmol Chaure
|
Anmol Chaure, Ashok Kumar Behera, Sudip Bhattacharya
|
Finding the Perfect Fit: Applying Regression Models to ClimateBench v1.0
| null |
International Journal of Computer Applications 185(29):31-39,
August 2023
|
10.5120/ijca2023923042
| null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Climate projections using data driven machine learning models acting as
emulators, is one of the prevailing areas of research to enable policy makers
make informed decisions. Use of machine learning emulators as surrogates for
computationally heavy GCM simulators reduces time and carbon footprints. In
this direction, ClimateBench [1] is a recently curated benchmarking dataset for
evaluating the performance of machine learning emulators designed for climate
data. Recent studies have reported that despite being considered fundamental,
regression models offer several advantages pertaining to climate emulations. In
particular, by leveraging the kernel trick, regression models can capture
complex relationships and improve their predictive capabilities. This study
focuses on evaluating non-linear regression models using the aforementioned
dataset. Specifically, we compare the emulation capabilities of three
non-linear regression models. Among them, Gaussian Process Regressor
demonstrates the best-in-class performance against standard evaluation metrics
used for climate field emulation studies. However, Gaussian Process Regression
suffers from being computational resource hungry in terms of space and time
complexity. Alternatively, Support Vector and Kernel Ridge models also deliver
competitive results and but there are certain trade-offs to be addressed.
Additionally, we are actively investigating the performance of composite
kernels and techniques such as variational inference to further enhance the
performance of the regression models and effectively model complex non-linear
patterns, including phenomena like precipitation.
|
[
{
"created": "Wed, 23 Aug 2023 01:08:01 GMT",
"version": "v1"
}
] |
2023-08-24
|
[
[
"Chaure",
"Anmol",
""
],
[
"Behera",
"Ashok Kumar",
""
],
[
"Bhattacharya",
"Sudip",
""
]
] |
Climate projections using data driven machine learning models acting as emulators, is one of the prevailing areas of research to enable policy makers make informed decisions. Use of machine learning emulators as surrogates for computationally heavy GCM simulators reduces time and carbon footprints. In this direction, ClimateBench [1] is a recently curated benchmarking dataset for evaluating the performance of machine learning emulators designed for climate data. Recent studies have reported that despite being considered fundamental, regression models offer several advantages pertaining to climate emulations. In particular, by leveraging the kernel trick, regression models can capture complex relationships and improve their predictive capabilities. This study focuses on evaluating non-linear regression models using the aforementioned dataset. Specifically, we compare the emulation capabilities of three non-linear regression models. Among them, Gaussian Process Regressor demonstrates the best-in-class performance against standard evaluation metrics used for climate field emulation studies. However, Gaussian Process Regression suffers from being computational resource hungry in terms of space and time complexity. Alternatively, Support Vector and Kernel Ridge models also deliver competitive results and but there are certain trade-offs to be addressed. Additionally, we are actively investigating the performance of composite kernels and techniques such as variational inference to further enhance the performance of the regression models and effectively model complex non-linear patterns, including phenomena like precipitation.
|
1201.1684
|
Lawrence Ong
|
Lawrence Ong, Gottfried Lechner, Sarah J. Johnson, Christopher M.
Kellett
|
The Three-User Finite-Field Multi-Way Relay Channel with Correlated
Sources
|
Author's final version (accepted and to appear in IEEE Transactions
on Communications)
|
IEEE Transactions on Communications, Vol. 61, No. 8, pp.
3125-3135, Aug. 2013
|
10.1109/TCOMM.2013.13.120987
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper studies the three-user finite-field multi-way relay channel, where
the users exchange messages via a relay. The messages are arbitrarily
correlated, and the finite-field channel is linear and is subject to additive
noise of arbitrary distribution. The problem is to determine the minimum
achievable source-channel rate, defined as channel uses per source symbol
needed for reliable communication. We combine Slepian-Wolf source coding and
functional-decode-forward channel coding to obtain the solution for two classes
of source and channel combinations. Furthermore, for correlated sources that
have their common information equal their mutual information, we propose a new
coding scheme to achieve the minimum source-channel rate.
|
[
{
"created": "Mon, 9 Jan 2012 03:26:47 GMT",
"version": "v1"
},
{
"created": "Mon, 17 Jun 2013 00:53:01 GMT",
"version": "v2"
}
] |
2013-09-18
|
[
[
"Ong",
"Lawrence",
""
],
[
"Lechner",
"Gottfried",
""
],
[
"Johnson",
"Sarah J.",
""
],
[
"Kellett",
"Christopher M.",
""
]
] |
This paper studies the three-user finite-field multi-way relay channel, where the users exchange messages via a relay. The messages are arbitrarily correlated, and the finite-field channel is linear and is subject to additive noise of arbitrary distribution. The problem is to determine the minimum achievable source-channel rate, defined as channel uses per source symbol needed for reliable communication. We combine Slepian-Wolf source coding and functional-decode-forward channel coding to obtain the solution for two classes of source and channel combinations. Furthermore, for correlated sources that have their common information equal their mutual information, we propose a new coding scheme to achieve the minimum source-channel rate.
|
2303.08518
|
Daixuan Cheng
|
Daixuan Cheng, Shaohan Huang, Junyu Bi, Yuefeng Zhan, Jianfeng Liu,
Yujing Wang, Hao Sun, Furu Wei, Denvy Deng, Qi Zhang
|
UPRISE: Universal Prompt Retrieval for Improving Zero-Shot Evaluation
|
EMNLP 2023 Main Conference
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large Language Models (LLMs) are popular for their impressive abilities, but
the need for model-specific fine-tuning or task-specific prompt engineering can
hinder their generalization. We propose UPRISE (Universal Prompt Retrieval for
Improving zero-Shot Evaluation), which tunes a lightweight and versatile
retriever that automatically retrieves prompts for a given zero-shot task
input. Specifically, we demonstrate universality in a cross-task and
cross-model scenario: the retriever is tuned on a diverse set of tasks, but
tested on unseen task types; we use a small frozen LLM, GPT-Neo-2.7B, for
tuning the retriever, but test the retriever on different LLMs of much larger
scales, such as BLOOM-7.1B, OPT-66B and GPT3-175B. Additionally, we show that
UPRISE mitigates the hallucination problem in our experiments with ChatGPT,
suggesting its potential to improve even the strongest LLMs. Our model and code
are available at https://github.com/microsoft/LMOps.
|
[
{
"created": "Wed, 15 Mar 2023 10:53:49 GMT",
"version": "v1"
},
{
"created": "Wed, 22 Mar 2023 11:29:48 GMT",
"version": "v2"
},
{
"created": "Wed, 11 Oct 2023 05:40:41 GMT",
"version": "v3"
},
{
"created": "Sat, 16 Dec 2023 06:50:09 GMT",
"version": "v4"
}
] |
2023-12-19
|
[
[
"Cheng",
"Daixuan",
""
],
[
"Huang",
"Shaohan",
""
],
[
"Bi",
"Junyu",
""
],
[
"Zhan",
"Yuefeng",
""
],
[
"Liu",
"Jianfeng",
""
],
[
"Wang",
"Yujing",
""
],
[
"Sun",
"Hao",
""
],
[
"Wei",
"Furu",
""
],
[
"Deng",
"Denvy",
""
],
[
"Zhang",
"Qi",
""
]
] |
Large Language Models (LLMs) are popular for their impressive abilities, but the need for model-specific fine-tuning or task-specific prompt engineering can hinder their generalization. We propose UPRISE (Universal Prompt Retrieval for Improving zero-Shot Evaluation), which tunes a lightweight and versatile retriever that automatically retrieves prompts for a given zero-shot task input. Specifically, we demonstrate universality in a cross-task and cross-model scenario: the retriever is tuned on a diverse set of tasks, but tested on unseen task types; we use a small frozen LLM, GPT-Neo-2.7B, for tuning the retriever, but test the retriever on different LLMs of much larger scales, such as BLOOM-7.1B, OPT-66B and GPT3-175B. Additionally, we show that UPRISE mitigates the hallucination problem in our experiments with ChatGPT, suggesting its potential to improve even the strongest LLMs. Our model and code are available at https://github.com/microsoft/LMOps.
|
1805.01559
|
Georgios Paschos
|
Georgios Paschos, Nikolaos Liakopoulos, Merouane Debbah and Tong Wen
|
Computational Optimal Transport for 5G Massive C-RAN Device Association
|
6 pages, 5 figures
| null | null | null |
cs.NI cs.PF
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The massive scale of future wireless networks will create computational
bottlenecks in performance optimization. In this paper, we study the problem of
connecting mobile traffic to Cloud RAN (C-RAN) stations. To balance station
load, we steer the traffic by designing device association rules. The baseline
association rule connects each device to the station with the strongest signal,
which does not account for interference or traffic hot spots, and leads to load
imbalances and performance deterioration. Instead, we can formulate an
optimization problem to decide centrally the best association rule at each time
instance. However, in practice this optimization has such high dimensions, that
even linear programming solvers fail to solve. To address the challenge of
massive connectivity, we propose an approach based on the theory of optimal
transport, which studies the economical transfer of probability between two
distributions. Our proposed methodology can further inspire scalable algorithms
for massive optimization problems in wireless networks.
|
[
{
"created": "Thu, 3 May 2018 22:08:19 GMT",
"version": "v1"
}
] |
2018-05-07
|
[
[
"Paschos",
"Georgios",
""
],
[
"Liakopoulos",
"Nikolaos",
""
],
[
"Debbah",
"Merouane",
""
],
[
"Wen",
"Tong",
""
]
] |
The massive scale of future wireless networks will create computational bottlenecks in performance optimization. In this paper, we study the problem of connecting mobile traffic to Cloud RAN (C-RAN) stations. To balance station load, we steer the traffic by designing device association rules. The baseline association rule connects each device to the station with the strongest signal, which does not account for interference or traffic hot spots, and leads to load imbalances and performance deterioration. Instead, we can formulate an optimization problem to decide centrally the best association rule at each time instance. However, in practice this optimization has such high dimensions, that even linear programming solvers fail to solve. To address the challenge of massive connectivity, we propose an approach based on the theory of optimal transport, which studies the economical transfer of probability between two distributions. Our proposed methodology can further inspire scalable algorithms for massive optimization problems in wireless networks.
|
2012.11998
|
Fernando Hernando
|
Carlos Galindo and Fernando Hernando
|
On the generalization of the construction of quantum codes from
Hermitian self-orthogonal codes
| null |
Designs, Codes and Cryptography, volume 90, 2022
|
10.1007/s10623-022-01018-2
| null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Many $q$-ary stabilizer quantum codes can be constructed from Hermitian
self-orthogonal $q^2$-ary linear codes. This result can be generalized to $q^{2
m}$-ary linear codes, $m > 1$. We give a result for easily obtaining quantum
codes from that generalization. As a consequence we provide several new binary
stabilizer quantum codes which are records according to \cite{codet} and new
$q$-ary ones, with $q \neq 2$, improving others in the literature.
|
[
{
"created": "Tue, 22 Dec 2020 13:38:47 GMT",
"version": "v1"
},
{
"created": "Wed, 21 Jul 2021 11:21:40 GMT",
"version": "v2"
}
] |
2024-05-01
|
[
[
"Galindo",
"Carlos",
""
],
[
"Hernando",
"Fernando",
""
]
] |
Many $q$-ary stabilizer quantum codes can be constructed from Hermitian self-orthogonal $q^2$-ary linear codes. This result can be generalized to $q^{2 m}$-ary linear codes, $m > 1$. We give a result for easily obtaining quantum codes from that generalization. As a consequence we provide several new binary stabilizer quantum codes which are records according to \cite{codet} and new $q$-ary ones, with $q \neq 2$, improving others in the literature.
|
1010.2623
|
Konstantin Kolchin
|
Konstantin Kolchin
|
Surface Curvature Effects on Reflectance from Translucent Materials
|
10 pages, 2 figures. The first version of this paper was published in
the Communication Papers Proceedings of 18th International Conference on
Computer Graphics, Visualization and Computer Vision 2010 - WSCG2010
| null | null | null |
cs.GR physics.optics
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most of the physically based techniques for rendering translucent objects use
the diffusion theory of light scattering in turbid media. The widely used
dipole diffusion model (Jensen et al. 2001) applies the diffusion-theory
formula derived for a planar interface to objects of arbitrary shapes. This
paper presents first results of our investigation of how surface curvature
affects the diffuse reflectance from translucent materials.
|
[
{
"created": "Wed, 13 Oct 2010 10:35:23 GMT",
"version": "v1"
},
{
"created": "Mon, 15 Nov 2010 10:47:24 GMT",
"version": "v2"
}
] |
2010-11-16
|
[
[
"Kolchin",
"Konstantin",
""
]
] |
Most of the physically based techniques for rendering translucent objects use the diffusion theory of light scattering in turbid media. The widely used dipole diffusion model (Jensen et al. 2001) applies the diffusion-theory formula derived for a planar interface to objects of arbitrary shapes. This paper presents first results of our investigation of how surface curvature affects the diffuse reflectance from translucent materials.
|
1804.03312
|
Ke Yu
|
Ke Yu, Chao Dong, Liang Lin, Chen Change Loy
|
Crafting a Toolchain for Image Restoration by Deep Reinforcement
Learning
|
To appear at CVPR 2018 (Spotlight). Project page:
http://mmlab.ie.cuhk.edu.hk/projects/RL-Restore/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We investigate a novel approach for image restoration by reinforcement
learning. Unlike existing studies that mostly train a single large network for
a specialized task, we prepare a toolbox consisting of small-scale
convolutional networks of different complexities and specialized in different
tasks. Our method, RL-Restore, then learns a policy to select appropriate tools
from the toolbox to progressively restore the quality of a corrupted image. We
formulate a step-wise reward function proportional to how well the image is
restored at each step to learn the action policy. We also devise a joint
learning scheme to train the agent and tools for better performance in handling
uncertainty. In comparison to conventional human-designed networks, RL-Restore
is capable of restoring images corrupted with complex and unknown distortions
in a more parameter-efficient manner using the dynamically formed toolchain.
|
[
{
"created": "Tue, 10 Apr 2018 02:30:40 GMT",
"version": "v1"
}
] |
2018-04-11
|
[
[
"Yu",
"Ke",
""
],
[
"Dong",
"Chao",
""
],
[
"Lin",
"Liang",
""
],
[
"Loy",
"Chen Change",
""
]
] |
We investigate a novel approach for image restoration by reinforcement learning. Unlike existing studies that mostly train a single large network for a specialized task, we prepare a toolbox consisting of small-scale convolutional networks of different complexities and specialized in different tasks. Our method, RL-Restore, then learns a policy to select appropriate tools from the toolbox to progressively restore the quality of a corrupted image. We formulate a step-wise reward function proportional to how well the image is restored at each step to learn the action policy. We also devise a joint learning scheme to train the agent and tools for better performance in handling uncertainty. In comparison to conventional human-designed networks, RL-Restore is capable of restoring images corrupted with complex and unknown distortions in a more parameter-efficient manner using the dynamically formed toolchain.
|
1709.09235
|
Yu-Hang Tang
|
Yu-Hang Tang, Dongkun Zhang and George Em Karniadakis
|
An Atomistic Fingerprint Algorithm for Learning Ab Initio Molecular
Force Fields
| null | null |
10.1063/1.5008630
| null |
cs.CE physics.chem-ph physics.comp-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Molecular fingerprints, i.e. feature vectors describing atomistic
neighborhood configurations, is an important abstraction and a key ingredient
for data-driven modeling of potential energy surface and interatomic force. In
this paper, we present the Density-Encoded Canonically Aligned Fingerprint
(DECAF) fingerprint algorithm, which is robust and efficient, for fitting
per-atom scalar and vector quantities. The fingerprint is essentially a
continuous density field formed through the superimposition of smoothing
kernels centered on the atoms. Rotational invariance of the fingerprint is
achieved by aligning, for each fingerprint instance, the neighboring atoms onto
a local canonical coordinate frame computed from a kernel minisum optimization
procedure. We show that this approach is superior over PCA-based methods
especially when the atomistic neighborhood is sparse and/or contains symmetry.
We propose that the `distance' between the density fields be measured using a
volume integral of their pointwise difference. This can be efficiently computed
using optimal quadrature rules, which only require discrete sampling at a small
number of grid points. We also experiment on the choice of weight functions for
constructing the density fields, and characterize their performance for fitting
interatomic potentials. The applicability of the fingerprint is demonstrated
through a set of benchmark problems.
|
[
{
"created": "Tue, 26 Sep 2017 19:49:32 GMT",
"version": "v1"
},
{
"created": "Mon, 9 Oct 2017 03:44:30 GMT",
"version": "v2"
},
{
"created": "Thu, 14 Dec 2017 05:48:08 GMT",
"version": "v3"
}
] |
2018-02-14
|
[
[
"Tang",
"Yu-Hang",
""
],
[
"Zhang",
"Dongkun",
""
],
[
"Karniadakis",
"George Em",
""
]
] |
Molecular fingerprints, i.e. feature vectors describing atomistic neighborhood configurations, is an important abstraction and a key ingredient for data-driven modeling of potential energy surface and interatomic force. In this paper, we present the Density-Encoded Canonically Aligned Fingerprint (DECAF) fingerprint algorithm, which is robust and efficient, for fitting per-atom scalar and vector quantities. The fingerprint is essentially a continuous density field formed through the superimposition of smoothing kernels centered on the atoms. Rotational invariance of the fingerprint is achieved by aligning, for each fingerprint instance, the neighboring atoms onto a local canonical coordinate frame computed from a kernel minisum optimization procedure. We show that this approach is superior over PCA-based methods especially when the atomistic neighborhood is sparse and/or contains symmetry. We propose that the `distance' between the density fields be measured using a volume integral of their pointwise difference. This can be efficiently computed using optimal quadrature rules, which only require discrete sampling at a small number of grid points. We also experiment on the choice of weight functions for constructing the density fields, and characterize their performance for fitting interatomic potentials. The applicability of the fingerprint is demonstrated through a set of benchmark problems.
|
1206.5242
|
Vibhav Gogate
|
Vibhav Gogate, Bozhena Bidyuk, Rina Dechter
|
Studies in Lower Bounding Probabilities of Evidence using the Markov
Inequality
|
Appears in Proceedings of the Twenty-Third Conference on Uncertainty
in Artificial Intelligence (UAI2007)
| null | null |
UAI-P-2007-PG-141-148
|
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Computing the probability of evidence even with known error bounds is
NP-hard. In this paper we address this hard problem by settling on an easier
problem. We propose an approximation which provides high confidence lower
bounds on probability of evidence but does not have any guarantees in terms of
relative or absolute error. Our proposed approximation is a randomized
importance sampling scheme that uses the Markov inequality. However, a
straight-forward application of the Markov inequality may lead to poor lower
bounds. We therefore propose several heuristic measures to improve its
performance in practice. Empirical evaluation of our scheme with state-of-
the-art lower bounding schemes reveals the promise of our approach.
|
[
{
"created": "Wed, 20 Jun 2012 14:53:07 GMT",
"version": "v1"
}
] |
2012-06-26
|
[
[
"Gogate",
"Vibhav",
""
],
[
"Bidyuk",
"Bozhena",
""
],
[
"Dechter",
"Rina",
""
]
] |
Computing the probability of evidence even with known error bounds is NP-hard. In this paper we address this hard problem by settling on an easier problem. We propose an approximation which provides high confidence lower bounds on probability of evidence but does not have any guarantees in terms of relative or absolute error. Our proposed approximation is a randomized importance sampling scheme that uses the Markov inequality. However, a straight-forward application of the Markov inequality may lead to poor lower bounds. We therefore propose several heuristic measures to improve its performance in practice. Empirical evaluation of our scheme with state-of- the-art lower bounding schemes reveals the promise of our approach.
|
1704.00090
|
Marc-Andr\'e Gardner
|
Marc-Andr\'e Gardner, Kalyan Sunkavalli, Ersin Yumer, Xiaohui Shen,
Emiliano Gambaretto, Christian Gagn\'e, Jean-Fran\c{c}ois Lalonde
|
Learning to Predict Indoor Illumination from a Single Image
| null | null | null | null |
cs.CV cs.GR stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose an automatic method to infer high dynamic range illumination from
a single, limited field-of-view, low dynamic range photograph of an indoor
scene. In contrast to previous work that relies on specialized image capture,
user input, and/or simple scene models, we train an end-to-end deep neural
network that directly regresses a limited field-of-view photo to HDR
illumination, without strong assumptions on scene geometry, material
properties, or lighting. We show that this can be accomplished in a three step
process: 1) we train a robust lighting classifier to automatically annotate the
location of light sources in a large dataset of LDR environment maps, 2) we use
these annotations to train a deep neural network that predicts the location of
lights in a scene from a single limited field-of-view photo, and 3) we
fine-tune this network using a small dataset of HDR environment maps to predict
light intensities. This allows us to automatically recover high-quality HDR
illumination estimates that significantly outperform previous state-of-the-art
methods. Consequently, using our illumination estimates for applications like
3D object insertion, we can achieve results that are photo-realistic, which is
validated via a perceptual user study.
|
[
{
"created": "Sat, 1 Apr 2017 00:50:12 GMT",
"version": "v1"
},
{
"created": "Thu, 25 May 2017 19:20:01 GMT",
"version": "v2"
},
{
"created": "Tue, 21 Nov 2017 08:32:24 GMT",
"version": "v3"
}
] |
2017-11-22
|
[
[
"Gardner",
"Marc-André",
""
],
[
"Sunkavalli",
"Kalyan",
""
],
[
"Yumer",
"Ersin",
""
],
[
"Shen",
"Xiaohui",
""
],
[
"Gambaretto",
"Emiliano",
""
],
[
"Gagné",
"Christian",
""
],
[
"Lalonde",
"Jean-François",
""
]
] |
We propose an automatic method to infer high dynamic range illumination from a single, limited field-of-view, low dynamic range photograph of an indoor scene. In contrast to previous work that relies on specialized image capture, user input, and/or simple scene models, we train an end-to-end deep neural network that directly regresses a limited field-of-view photo to HDR illumination, without strong assumptions on scene geometry, material properties, or lighting. We show that this can be accomplished in a three step process: 1) we train a robust lighting classifier to automatically annotate the location of light sources in a large dataset of LDR environment maps, 2) we use these annotations to train a deep neural network that predicts the location of lights in a scene from a single limited field-of-view photo, and 3) we fine-tune this network using a small dataset of HDR environment maps to predict light intensities. This allows us to automatically recover high-quality HDR illumination estimates that significantly outperform previous state-of-the-art methods. Consequently, using our illumination estimates for applications like 3D object insertion, we can achieve results that are photo-realistic, which is validated via a perceptual user study.
|
2005.12788
|
Tianchi Huang
|
Tianchi Huang, Rui-Xiao Zhang, Lifeng Sun
|
Self-play Reinforcement Learning for Video Transmission
|
To appear in NOSSDAV'20
| null |
10.1145/3386290.3396930
| null |
cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Video transmission services adopt adaptive algorithms to ensure users'
demands. Existing techniques are often optimized and evaluated by a function
that linearly combines several weighted metrics. Nevertheless, we observe that
the given function fails to describe the requirement accurately. Thus, such
proposed methods might eventually violate the original needs. To eliminate this
concern, we propose \emph{Zwei}, a self-play reinforcement learning algorithm
for video transmission tasks. Zwei aims to update the policy by
straightforwardly utilizing the actual requirement. Technically, Zwei samples a
number of trajectories from the same starting point and instantly estimates the
win rate w.r.t the competition outcome. Here the competition result represents
which trajectory is closer to the assigned requirement. Subsequently, Zwei
optimizes the strategy by maximizing the win rate. To build Zwei, we develop
simulation environments, design adequate neural network models, and invent
training methods for dealing with different requirements on various video
transmission scenarios. Trace-driven analysis over two representative tasks
demonstrates that Zwei optimizes itself according to the assigned requirement
faithfully, outperforming the state-of-the-art methods under all considered
scenarios.
|
[
{
"created": "Tue, 26 May 2020 15:12:08 GMT",
"version": "v1"
}
] |
2020-05-27
|
[
[
"Huang",
"Tianchi",
""
],
[
"Zhang",
"Rui-Xiao",
""
],
[
"Sun",
"Lifeng",
""
]
] |
Video transmission services adopt adaptive algorithms to ensure users' demands. Existing techniques are often optimized and evaluated by a function that linearly combines several weighted metrics. Nevertheless, we observe that the given function fails to describe the requirement accurately. Thus, such proposed methods might eventually violate the original needs. To eliminate this concern, we propose \emph{Zwei}, a self-play reinforcement learning algorithm for video transmission tasks. Zwei aims to update the policy by straightforwardly utilizing the actual requirement. Technically, Zwei samples a number of trajectories from the same starting point and instantly estimates the win rate w.r.t the competition outcome. Here the competition result represents which trajectory is closer to the assigned requirement. Subsequently, Zwei optimizes the strategy by maximizing the win rate. To build Zwei, we develop simulation environments, design adequate neural network models, and invent training methods for dealing with different requirements on various video transmission scenarios. Trace-driven analysis over two representative tasks demonstrates that Zwei optimizes itself according to the assigned requirement faithfully, outperforming the state-of-the-art methods under all considered scenarios.
|
1812.04912
|
Nana Wang
|
Nana Wang, Li Cui, Xi Huang, Yingcong Xiang, Jing Xiao
|
EasiCSDeep: A deep learning model for Cervical Spondylosis
Identification using surface electromyography signal
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cervical spondylosis (CS) is a common chronic disease that affects up to
two-thirds of the population and poses a serious burden on individuals and
society. The early identification has significant value in improving cure rate
and reducing costs. However, the pathology is complex, and the mild symptoms
increase the difficulty of the diagnosis, especially in the early stage.
Besides, the time-consuming and costliness of hospital medical service reduces
the attention to the CS identification. Thus, a convenient, low-cost
intelligent CS identification method is imperious demanded. In this paper, we
present an intelligent method based on the deep learning to identify CS, using
the surface electromyography (sEMG) signal. Faced with the complex, high
dimensionality and weak usability of the sEMG signal, we proposed and developed
a multi-channel EasiCSDeep algorithm based on the convolutional neural network,
which consists of the feature extraction, spatial relationship representation
and classification algorithm. To the best of our knowledge, this EasiCSDeep is
the first effort to employ the deep learning and the sEMG data to identify CS.
Compared with previous state-of-the-art algorithm, our algorithm achieves a
significant improvement.
|
[
{
"created": "Wed, 12 Dec 2018 12:10:45 GMT",
"version": "v1"
}
] |
2018-12-13
|
[
[
"Wang",
"Nana",
""
],
[
"Cui",
"Li",
""
],
[
"Huang",
"Xi",
""
],
[
"Xiang",
"Yingcong",
""
],
[
"Xiao",
"Jing",
""
]
] |
Cervical spondylosis (CS) is a common chronic disease that affects up to two-thirds of the population and poses a serious burden on individuals and society. The early identification has significant value in improving cure rate and reducing costs. However, the pathology is complex, and the mild symptoms increase the difficulty of the diagnosis, especially in the early stage. Besides, the time-consuming and costliness of hospital medical service reduces the attention to the CS identification. Thus, a convenient, low-cost intelligent CS identification method is imperious demanded. In this paper, we present an intelligent method based on the deep learning to identify CS, using the surface electromyography (sEMG) signal. Faced with the complex, high dimensionality and weak usability of the sEMG signal, we proposed and developed a multi-channel EasiCSDeep algorithm based on the convolutional neural network, which consists of the feature extraction, spatial relationship representation and classification algorithm. To the best of our knowledge, this EasiCSDeep is the first effort to employ the deep learning and the sEMG data to identify CS. Compared with previous state-of-the-art algorithm, our algorithm achieves a significant improvement.
|
2407.08233
|
Ding Chen
|
Ding Chen, Chen Liu
|
Differentially Private Neural Network Training under Hidden State
Assumption
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We present a novel approach called differentially private stochastic block
coordinate descent (DP-SBCD) for training neural networks with provable
guarantees of differential privacy under the hidden state assumption. Our
methodology incorporates Lipschitz neural networks and decomposes the training
process of the neural network into sub-problems, each corresponding to the
training of a specific layer. By doing so, we extend the analysis of
differential privacy under the hidden state assumption to encompass non-convex
problems and algorithms employing proximal gradient descent. Furthermore, in
contrast to existing methods, we adopt a novel approach by utilizing calibrated
noise sampled from adaptive distributions, yielding improved empirical
trade-offs between utility and privacy.
|
[
{
"created": "Thu, 11 Jul 2024 07:14:40 GMT",
"version": "v1"
}
] |
2024-07-12
|
[
[
"Chen",
"Ding",
""
],
[
"Liu",
"Chen",
""
]
] |
We present a novel approach called differentially private stochastic block coordinate descent (DP-SBCD) for training neural networks with provable guarantees of differential privacy under the hidden state assumption. Our methodology incorporates Lipschitz neural networks and decomposes the training process of the neural network into sub-problems, each corresponding to the training of a specific layer. By doing so, we extend the analysis of differential privacy under the hidden state assumption to encompass non-convex problems and algorithms employing proximal gradient descent. Furthermore, in contrast to existing methods, we adopt a novel approach by utilizing calibrated noise sampled from adaptive distributions, yielding improved empirical trade-offs between utility and privacy.
|
2007.14580
|
T.J. Tsai
|
Mengyi Shan and TJ Tsai
|
Improved Handling of Repeats and Jumps in Audio-Sheet Image
Synchronization
|
8 pages, 5 figures. Accepted paper at the International Society for
Music Information Retrieval Conference (ISMIR) 2020
| null | null | null |
cs.MM cs.SD eess.AS eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
This paper studies the problem of automatically generating piano score
following videos given an audio recording and raw sheet music images. Whereas
previous works focus on synthetic sheet music where the data has been cleaned
and preprocessed, we instead focus on developing a system that can cope with
the messiness of raw, unprocessed sheet music PDFs from IMSLP. We investigate
how well existing systems cope with real scanned sheet music, filler pages and
unrelated pieces or movements, and discontinuities due to jumps and repeats. We
find that a significant bottleneck in system performance is handling jumps and
repeats correctly. In particular, we find that a previously proposed Jump DTW
algorithm does not perform robustly when jump locations are unknown a priori.
We propose a novel alignment algorithm called Hierarchical DTW that can handle
jumps and repeats even when jump locations are not known. It first performs
alignment at the feature level on each sheet music line, and then performs a
second alignment at the segment level. By operating at the segment level, it is
able to encode domain knowledge about how likely a particular jump is. Through
carefully controlled experiments on unprocessed sheet music PDFs from IMSLP, we
show that Hierarachical DTW significantly outperforms Jump DTW in handling
various types of jumps.
|
[
{
"created": "Wed, 29 Jul 2020 04:04:07 GMT",
"version": "v1"
}
] |
2020-07-30
|
[
[
"Shan",
"Mengyi",
""
],
[
"Tsai",
"TJ",
""
]
] |
This paper studies the problem of automatically generating piano score following videos given an audio recording and raw sheet music images. Whereas previous works focus on synthetic sheet music where the data has been cleaned and preprocessed, we instead focus on developing a system that can cope with the messiness of raw, unprocessed sheet music PDFs from IMSLP. We investigate how well existing systems cope with real scanned sheet music, filler pages and unrelated pieces or movements, and discontinuities due to jumps and repeats. We find that a significant bottleneck in system performance is handling jumps and repeats correctly. In particular, we find that a previously proposed Jump DTW algorithm does not perform robustly when jump locations are unknown a priori. We propose a novel alignment algorithm called Hierarchical DTW that can handle jumps and repeats even when jump locations are not known. It first performs alignment at the feature level on each sheet music line, and then performs a second alignment at the segment level. By operating at the segment level, it is able to encode domain knowledge about how likely a particular jump is. Through carefully controlled experiments on unprocessed sheet music PDFs from IMSLP, we show that Hierarachical DTW significantly outperforms Jump DTW in handling various types of jumps.
|
2305.07687
|
Sean Cornelius
|
Michael M. Danziger, Omkar R. Gojala, Sean P. Cornelius
|
Mastering Percolation-like Games with Deep Learning
|
8 pages, 7 figures; improved figures, references added
| null | null | null |
cs.LG nlin.AO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Though robustness of networks to random attacks has been widely studied,
intentional destruction by an intelligent agent is not tractable with previous
methods. Here we devise a single-player game on a lattice that mimics the logic
of an attacker attempting to destroy a network. The objective of the game is to
disable all nodes in the fewest number of steps. We develop a reinforcement
learning approach using deep Q-learning that is capable of learning to play
this game successfully, and in so doing, to optimally attack a network. Because
the learning algorithm is universal, we train agents on different definitions
of robustness and compare the learned strategies. We find that superficially
similar definitions of robustness induce different strategies in the trained
agent, implying that optimally attacking or defending a network is sensitive
the particular objective. Our method provides a new approach to understand
network robustness, with potential applications to other discrete processes in
disordered systems.
|
[
{
"created": "Fri, 12 May 2023 15:37:45 GMT",
"version": "v1"
},
{
"created": "Wed, 28 Jun 2023 20:35:36 GMT",
"version": "v2"
}
] |
2023-06-30
|
[
[
"Danziger",
"Michael M.",
""
],
[
"Gojala",
"Omkar R.",
""
],
[
"Cornelius",
"Sean P.",
""
]
] |
Though robustness of networks to random attacks has been widely studied, intentional destruction by an intelligent agent is not tractable with previous methods. Here we devise a single-player game on a lattice that mimics the logic of an attacker attempting to destroy a network. The objective of the game is to disable all nodes in the fewest number of steps. We develop a reinforcement learning approach using deep Q-learning that is capable of learning to play this game successfully, and in so doing, to optimally attack a network. Because the learning algorithm is universal, we train agents on different definitions of robustness and compare the learned strategies. We find that superficially similar definitions of robustness induce different strategies in the trained agent, implying that optimally attacking or defending a network is sensitive the particular objective. Our method provides a new approach to understand network robustness, with potential applications to other discrete processes in disordered systems.
|
2009.03116
|
Magnus Sahlgren
|
Tim Isbister and Magnus Sahlgren
|
Why Not Simply Translate? A First Swedish Evaluation Benchmark for
Semantic Similarity
|
SLTC 2020
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents the first Swedish evaluation benchmark for textual
semantic similarity. The benchmark is compiled by simply running the English
STS-B dataset through the Google machine translation API. This paper discusses
potential problems with using such a simple approach to compile a Swedish
evaluation benchmark, including translation errors, vocabulary variation, and
productive compounding. Despite some obvious problems with the resulting
dataset, we use the benchmark to compare the majority of the currently existing
Swedish text representations, demonstrating that native models outperform
multilingual ones, and that simple bag of words performs remarkably well.
|
[
{
"created": "Mon, 7 Sep 2020 14:07:12 GMT",
"version": "v1"
},
{
"created": "Sun, 29 Nov 2020 10:04:27 GMT",
"version": "v2"
}
] |
2020-12-01
|
[
[
"Isbister",
"Tim",
""
],
[
"Sahlgren",
"Magnus",
""
]
] |
This paper presents the first Swedish evaluation benchmark for textual semantic similarity. The benchmark is compiled by simply running the English STS-B dataset through the Google machine translation API. This paper discusses potential problems with using such a simple approach to compile a Swedish evaluation benchmark, including translation errors, vocabulary variation, and productive compounding. Despite some obvious problems with the resulting dataset, we use the benchmark to compare the majority of the currently existing Swedish text representations, demonstrating that native models outperform multilingual ones, and that simple bag of words performs remarkably well.
|
2312.10118
|
Jaeho Moon
|
Jaeho Moon, Juan Luis Gonzalez Bello, Byeongjun Kwon, Munchurl Kim
|
From-Ground-To-Objects: Coarse-to-Fine Self-supervised Monocular Depth
Estimation of Dynamic Objects with Ground Contact Prior
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Self-supervised monocular depth estimation (DE) is an approach to learning
depth without costly depth ground truths. However, it often struggles with
moving objects that violate the static scene assumption during training. To
address this issue, we introduce a coarse-to-fine training strategy leveraging
the ground contacting prior based on the observation that most moving objects
in outdoor scenes contact the ground. In the coarse training stage, we exclude
the objects in dynamic classes from the reprojection loss calculation to avoid
inaccurate depth learning. To provide precise supervision on the depth of the
objects, we present a novel Ground-contacting-prior Disparity Smoothness Loss
(GDS-Loss) that encourages a DE network to align the depth of the objects with
their ground-contacting points. Subsequently, in the fine training stage, we
refine the DE network to learn the detailed depth of the objects from the
reprojection loss, while ensuring accurate DE on the moving object regions by
employing our regularization loss with a cost-volume-based weighting factor.
Our overall coarse-to-fine training strategy can easily be integrated with
existing DE methods without any modifications, significantly enhancing DE
performance on challenging Cityscapes and KITTI datasets, especially in the
moving object regions.
|
[
{
"created": "Fri, 15 Dec 2023 11:22:17 GMT",
"version": "v1"
}
] |
2023-12-19
|
[
[
"Moon",
"Jaeho",
""
],
[
"Bello",
"Juan Luis Gonzalez",
""
],
[
"Kwon",
"Byeongjun",
""
],
[
"Kim",
"Munchurl",
""
]
] |
Self-supervised monocular depth estimation (DE) is an approach to learning depth without costly depth ground truths. However, it often struggles with moving objects that violate the static scene assumption during training. To address this issue, we introduce a coarse-to-fine training strategy leveraging the ground contacting prior based on the observation that most moving objects in outdoor scenes contact the ground. In the coarse training stage, we exclude the objects in dynamic classes from the reprojection loss calculation to avoid inaccurate depth learning. To provide precise supervision on the depth of the objects, we present a novel Ground-contacting-prior Disparity Smoothness Loss (GDS-Loss) that encourages a DE network to align the depth of the objects with their ground-contacting points. Subsequently, in the fine training stage, we refine the DE network to learn the detailed depth of the objects from the reprojection loss, while ensuring accurate DE on the moving object regions by employing our regularization loss with a cost-volume-based weighting factor. Our overall coarse-to-fine training strategy can easily be integrated with existing DE methods without any modifications, significantly enhancing DE performance on challenging Cityscapes and KITTI datasets, especially in the moving object regions.
|
1805.10515
|
Wensheng Gan
|
Wensheng Gan, Jerry Chun-Wei Lin, Philippe Fournier-Viger, Han-Chieh
Chao and Philip S. Yu
|
A Survey of Parallel Sequential Pattern Mining
|
Accepted by ACM Trans. on Knowl. Discov. Data, 33 pages
|
ACM Transactions on Knowledge Discovery from Data, 2019
|
10.1145/3314107
| null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the growing popularity of shared resources, large volumes of complex
data of different types are collected automatically. Traditional data mining
algorithms generally have problems and challenges including huge memory cost,
low processing speed, and inadequate hard disk space. As a fundamental task of
data mining, sequential pattern mining (SPM) is used in a wide variety of
real-life applications. However, it is more complex and challenging than other
pattern mining tasks, i.e., frequent itemset mining and association rule
mining, and also suffers from the above challenges when handling the
large-scale data. To solve these problems, mining sequential patterns in a
parallel or distributed computing environment has emerged as an important issue
with many applications. In this paper, an in-depth survey of the current status
of parallel sequential pattern mining (PSPM) is investigated and provided,
including detailed categorization of traditional serial SPM approaches, and
state of the art parallel SPM. We review the related work of parallel
sequential pattern mining in detail, including partition-based algorithms for
PSPM, Apriori-based PSPM, pattern growth based PSPM, and hybrid algorithms for
PSPM, and provide deep description (i.e., characteristics, advantages,
disadvantages and summarization) of these parallel approaches of PSPM. Some
advanced topics for PSPM, including parallel quantitative / weighted / utility
sequential pattern mining, PSPM from uncertain data and stream data, hardware
acceleration for PSPM, are further reviewed in details. Besides, we review and
provide some well-known open-source software of PSPM. Finally, we summarize
some challenges and opportunities of PSPM in the big data era.
|
[
{
"created": "Sat, 26 May 2018 18:44:12 GMT",
"version": "v1"
},
{
"created": "Fri, 5 Apr 2019 02:16:58 GMT",
"version": "v2"
}
] |
2021-04-01
|
[
[
"Gan",
"Wensheng",
""
],
[
"Lin",
"Jerry Chun-Wei",
""
],
[
"Fournier-Viger",
"Philippe",
""
],
[
"Chao",
"Han-Chieh",
""
],
[
"Yu",
"Philip S.",
""
]
] |
With the growing popularity of shared resources, large volumes of complex data of different types are collected automatically. Traditional data mining algorithms generally have problems and challenges including huge memory cost, low processing speed, and inadequate hard disk space. As a fundamental task of data mining, sequential pattern mining (SPM) is used in a wide variety of real-life applications. However, it is more complex and challenging than other pattern mining tasks, i.e., frequent itemset mining and association rule mining, and also suffers from the above challenges when handling the large-scale data. To solve these problems, mining sequential patterns in a parallel or distributed computing environment has emerged as an important issue with many applications. In this paper, an in-depth survey of the current status of parallel sequential pattern mining (PSPM) is investigated and provided, including detailed categorization of traditional serial SPM approaches, and state of the art parallel SPM. We review the related work of parallel sequential pattern mining in detail, including partition-based algorithms for PSPM, Apriori-based PSPM, pattern growth based PSPM, and hybrid algorithms for PSPM, and provide deep description (i.e., characteristics, advantages, disadvantages and summarization) of these parallel approaches of PSPM. Some advanced topics for PSPM, including parallel quantitative / weighted / utility sequential pattern mining, PSPM from uncertain data and stream data, hardware acceleration for PSPM, are further reviewed in details. Besides, we review and provide some well-known open-source software of PSPM. Finally, we summarize some challenges and opportunities of PSPM in the big data era.
|
2303.00387
|
Dorjan Hitaj
|
Giulio Pagnotta, Fabio De Gaspari, Dorjan Hitaj, Mauro Andreolini,
Michele Colajanni, Luigi V. Mancini
|
DOLOS: A Novel Architecture for Moving Target Defense
|
16 pages
|
IEEE Transactions on Information Forensics and Security, 2023
|
10.1109/TIFS.2023.3318964
| null |
cs.CR cs.CY
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Moving Target Defense and Cyber Deception emerged in recent years as two key
proactive cyber defense approaches, contrasting with the static nature of the
traditional reactive cyber defense. The key insight behind these approaches is
to impose an asymmetric disadvantage for the attacker by using deception and
randomization techniques to create a dynamic attack surface. Moving Target
Defense typically relies on system randomization and diversification, while
Cyber Deception is based on decoy nodes and fake systems to deceive attackers.
However, current Moving Target Defense techniques are complex to manage and can
introduce high overheads, while Cyber Deception nodes are easily recognized and
avoided by adversaries. This paper presents DOLOS, a novel architecture that
unifies Cyber Deception and Moving Target Defense approaches. DOLOS is
motivated by the insight that deceptive techniques are much more powerful when
integrated into production systems rather than deployed alongside them. DOLOS
combines typical Moving Target Defense techniques, such as randomization,
diversity, and redundancy, with cyber deception and seamlessly integrates them
into production systems through multiple layers of isolation. We extensively
evaluate DOLOS against a wide range of attackers, ranging from automated
malware to professional penetration testers, and show that DOLOS is highly
effective in slowing down attacks and protecting the integrity of production
systems. We also provide valuable insights and considerations for the future
development of MTD techniques based on our findings.
|
[
{
"created": "Wed, 1 Mar 2023 10:13:03 GMT",
"version": "v1"
},
{
"created": "Wed, 27 Sep 2023 14:22:38 GMT",
"version": "v2"
}
] |
2023-09-28
|
[
[
"Pagnotta",
"Giulio",
""
],
[
"De Gaspari",
"Fabio",
""
],
[
"Hitaj",
"Dorjan",
""
],
[
"Andreolini",
"Mauro",
""
],
[
"Colajanni",
"Michele",
""
],
[
"Mancini",
"Luigi V.",
""
]
] |
Moving Target Defense and Cyber Deception emerged in recent years as two key proactive cyber defense approaches, contrasting with the static nature of the traditional reactive cyber defense. The key insight behind these approaches is to impose an asymmetric disadvantage for the attacker by using deception and randomization techniques to create a dynamic attack surface. Moving Target Defense typically relies on system randomization and diversification, while Cyber Deception is based on decoy nodes and fake systems to deceive attackers. However, current Moving Target Defense techniques are complex to manage and can introduce high overheads, while Cyber Deception nodes are easily recognized and avoided by adversaries. This paper presents DOLOS, a novel architecture that unifies Cyber Deception and Moving Target Defense approaches. DOLOS is motivated by the insight that deceptive techniques are much more powerful when integrated into production systems rather than deployed alongside them. DOLOS combines typical Moving Target Defense techniques, such as randomization, diversity, and redundancy, with cyber deception and seamlessly integrates them into production systems through multiple layers of isolation. We extensively evaluate DOLOS against a wide range of attackers, ranging from automated malware to professional penetration testers, and show that DOLOS is highly effective in slowing down attacks and protecting the integrity of production systems. We also provide valuable insights and considerations for the future development of MTD techniques based on our findings.
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.