Column schema for the rows below (length and class ranges as given in the original preview header; ⌀ marks columns that can be null):

| column | type | range |
|---|---|---|
| id | string | lengths 9–10 |
| submitter | string, nullable (⌀) | lengths 1–64 |
| authors | string | lengths 4–20.7k |
| title | string | lengths 4–246 |
| comments | string, nullable (⌀) | lengths 1–523 |
| journal-ref | string, nullable (⌀) | lengths 4–404 |
| doi | string, nullable (⌀) | lengths 11–153 |
| report-no | string, nullable (⌀) | lengths 2–254 |
| categories | string | lengths 5–98 |
| license | string | 9 classes |
| orig_abstract | string | lengths 14–3.35k |
| versions | list | lengths 1–60 |
| update_date | string | length 10 |
| authors_parsed | list | lengths 1–1.35k |
| abstract | string | lengths 11–3.34k |

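If these rows come from a Hugging Face-style dataset, a record with this schema can be pulled and inspected roughly as sketched below. This is a minimal sketch: the repository name `your-namespace/arxiv-cs-abstracts` and the `train` split are placeholders for illustration, not names taken from this preview; only the column names match the schema above.

```python
# Minimal loading sketch. The dataset path below is a PLACEHOLDER, not the real
# repository name; only the column names follow the schema listed above.
from datasets import load_dataset

ds = load_dataset("your-namespace/arxiv-cs-abstracts", split="train")

row = ds[0]                        # each row is a plain dict keyed by the columns above
print(row["id"], row["title"])
print(row["categories"].split())   # categories are space-separated, e.g. ['cs.IT', 'math.IT']
print(row["update_date"])          # ISO date string such as '2011-07-13'
```
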
- id: 1107.2336
- submitter: Nikolaos Nikolaidis
- authors: N. S. Nikolaidis, I. N. Nikolaidis and C. C. Tsouros
- title: A Variation of the Box-Counting Algorithm Applied to Colour Images
- comments: 10 pages, 3 figures
- journal-ref / doi / report-no: null
- categories: cs.CV
- license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- versions: v1 (Tue, 12 Jul 2011 16:21:06 GMT)
- update_date: 2011-07-13
- authors_parsed: [["Nikolaidis", "N. S.", ""], ["Nikolaidis", "I. N.", ""], ["Tsouros", "C. C.", ""]]
- orig_abstract / abstract (identical): The box counting method for fractal dimension estimation had not been applied to large or colour images thus far due to the processing time required. In this letter we present a fast, easy to implement and very easily expandable to any number of dimensions variation, the box merging method. It is applied here in RGB images which are considered as sets in 5-D space.

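The nested `versions` and `authors_parsed` fields are plain JSON-style lists, so they can be handled with the standard library alone. The sketch below copies the values from the 1107.2336 row above; the helper name `author_names` is illustrative, not part of the dataset.

```python
# Sketch of working with the nested fields; the record literal is copied from the
# 1107.2336 row, and the helper is an illustrative name, not part of the dataset.
from email.utils import parsedate_to_datetime

record = {
    "versions": [{"created": "Tue, 12 Jul 2011 16:21:06 GMT", "version": "v1"}],
    "authors_parsed": [["Nikolaidis", "N. S.", ""], ["Nikolaidis", "I. N.", ""], ["Tsouros", "C. C.", ""]],
}

def author_names(parsed):
    # each entry is [last, first, suffix]; some rows carry an affiliation as a 4th item
    return [" ".join(p for p in (first, last, suffix) if p) for last, first, suffix, *_ in parsed]

latest = max(record["versions"], key=lambda v: parsedate_to_datetime(v["created"]))
print(author_names(record["authors_parsed"]))  # ['N. S. Nikolaidis', 'I. N. Nikolaidis', 'C. C. Tsouros']
print(latest["version"], parsedate_to_datetime(latest["created"]).date())  # v1 2011-07-12
```
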
- id: 2402.16508
- submitter: Fan Jiang
- authors: Fan Jiang, Tom Drummond, Trevor Cohn
- title: Pre-training Cross-lingual Open Domain Question Answering with Large-scale Synthetic Supervision
- comments / journal-ref / doi / report-no: null
- categories: cs.CL cs.IR
- license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- versions: v1 (Mon, 26 Feb 2024 11:42:29 GMT); v2 (Sun, 16 Jun 2024 08:18:25 GMT)
- update_date: 2024-06-18
- authors_parsed: [["Jiang", "Fan", ""], ["Drummond", "Tom", ""], ["Cohn", "Trevor", ""]]
- orig_abstract / abstract (identical): Cross-lingual open domain question answering (CLQA) is a complex problem, comprising cross-lingual retrieval from a multilingual knowledge base, followed by answer generation in the query language. Both steps are usually tackled by separate models, requiring substantial annotated datasets, and typically auxiliary resources, like machine translation systems to bridge between languages. In this paper, we show that CLQA can be addressed using a single encoder-decoder model. To effectively train this model, we propose a self-supervised method based on exploiting the cross-lingual link structure within Wikipedia. We demonstrate how linked Wikipedia pages can be used to synthesise supervisory signals for cross-lingual retrieval, through a form of cloze query, and generate more natural questions to supervise answer generation. Together, we show our approach, \texttt{CLASS}, outperforms comparable methods on both supervised and zero-shot language adaptation settings, including those using machine translation.

- id: 1606.02033
- submitter: Mingquan Zhong
- authors: Mingquan Zhong, Suzhi Bi, and Xiaohui Lin
- title: User Cooperation for Enhanced Throughput Fairness in Wireless Powered Communication Networks
- comments: This paper has been accepted by the IEEE International Conference on Telecommunications (ICT), Thessaloniki, Greece, May 16-18, 2016
- journal-ref: null
- doi: 10.1109/ICT.2016.7500446
- report-no: null
- categories: cs.IT cs.NI math.IT
- license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- versions: v1 (Tue, 7 Jun 2016 06:03:45 GMT); v2 (Fri, 28 Oct 2016 17:41:58 GMT)
- update_date: 2016-11-18
- authors_parsed: [["Zhong", "Mingquan", ""], ["Bi", "Suzhi", ""], ["Lin", "Xiaohui", ""]]
- orig_abstract / abstract (identical): This paper studies a novel user cooperation method in a wireless powered communication network (WPCN), where a pair of distributed terminal users first harvest wireless energy broadcasted by one energy node (EN) and then use the harvested energy to transmit information cooperatively to a destination node (DN). In particular, the two cooperating users exchange their independent information with each other to form a virtual antenna array and transmit jointly to the DN. By allowing each user to allocate part of its harvested energy to transmit the other's information, the proposed cooperation can effectively mitigate the user unfairness problem in WPCNs, where a user may suffer from very low data rate due to the poor energy harvesting performance and high data transmission consumptions. We derive the maximum common throughput achieved by the cooperation scheme through optimizing the time allocation on wireless energy transfer, user message exchange, and joint information transmissions. Through comparing with some representative benchmark schemes, our results demonstrate the effectiveness of the proposed user cooperation in enhancing the throughput performance under different setups.

- id: 2109.06873
- submitter: Ranganath Krishnan
- authors: Ranganath Krishnan, Nilesh Ahuja, Alok Sinha, Mahesh Subedar, Omesh Tickoo, Ravi Iyer
- title: Robust Contrastive Active Learning with Feature-guided Query Strategies
- comments: 20 pages with appendix. arXiv admin note: text overlap with arXiv:2109.06321
- journal-ref / doi / report-no: null
- categories: cs.LG cs.AI
- license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- versions: v1 (Mon, 13 Sep 2021 21:09:21 GMT); v2 (Sun, 14 Aug 2022 19:39:03 GMT)
- update_date: 2022-08-16
- authors_parsed: [["Krishnan", "Ranganath", ""], ["Ahuja", "Nilesh", ""], ["Sinha", "Alok", ""], ["Subedar", "Mahesh", ""], ["Tickoo", "Omesh", ""], ["Iyer", "Ravi", ""]]
- orig_abstract / abstract (identical): We introduce supervised contrastive active learning (SCAL) and propose efficient query strategies in active learning based on the feature similarity (featuresim) and principal component analysis based feature-reconstruction error (fre) to select informative data samples with diverse feature representations. We demonstrate our proposed method achieves state-of-the-art accuracy, model calibration and reduces sampling bias in an active learning setup for balanced and imbalanced datasets on image classification tasks. We also evaluate robustness of model to distributional shift derived from different query strategies in active learning setting. Using extensive experiments, we show that our proposed approach outperforms high performing compute-intensive methods by a big margin resulting in 9.9% lower mean corruption error, 7.2% lower expected calibration error under dataset shift and 8.9% higher AUROC for out-of-distribution detection.

- id: 2103.10277
- submitter: Sergio Martiradonna
- authors: Sergio Martiradonna, Andrea Abrardo, Marco Moretti, Giuseppe Piro, Gennaro Boggia
- title: Deep Reinforcement Learning-Aided RAN Slicing Enforcement for B5G Latency Sensitive Services
- comments / journal-ref / doi / report-no: null
- categories: cs.NI cs.AI cs.LG cs.PF
- license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- versions: v1 (Thu, 18 Mar 2021 14:18:34 GMT)
- update_date: 2021-03-19
- authors_parsed: [["Martiradonna", "Sergio", ""], ["Abrardo", "Andrea", ""], ["Moretti", "Marco", ""], ["Piro", "Giuseppe", ""], ["Boggia", "Gennaro", ""]]
- orig_abstract / abstract (identical): The combination of cloud computing capabilities at the network edge and artificial intelligence promise to turn future mobile networks into service- and radio-aware entities, able to address the requirements of upcoming latency-sensitive applications. In this context, a challenging research goal is to exploit edge intelligence to dynamically and optimally manage the Radio Access Network Slicing (that is a less mature and more complex technology than fifth-generation Network Slicing) and Radio Resource Management, which is a very complex task due to the mostly unpredictably nature of the wireless channel. This paper presents a novel architecture that leverages Deep Reinforcement Learning at the edge of the network in order to address Radio Access Network Slicing and Radio Resource Management optimization supporting latency-sensitive applications. The effectiveness of our proposal against baseline methodologies is investigated through computer simulation, by considering an autonomous-driving use-case.

- id: 2307.14446
- submitter: Reza Azad
- authors: Sanaz Karimijafarbigloo and Reza Azad and Dorit Merhof
- title: Self-supervised Few-shot Learning for Semantic Segmentation: An Annotation-free Approach
- comments: MICCAI 2023 workshop PRIME
- journal-ref / doi / report-no: null
- categories: cs.CV
- license: http://creativecommons.org/licenses/by/4.0/
- versions: v1 (Wed, 26 Jul 2023 18:33:30 GMT)
- update_date: 2023-07-28
- authors_parsed: [["Karimijafarbigloo", "Sanaz", ""], ["Azad", "Reza", ""], ["Merhof", "Dorit", ""]]
- orig_abstract / abstract (identical): Few-shot semantic segmentation (FSS) offers immense potential in the field of medical image analysis, enabling accurate object segmentation with limited training data. However, existing FSS techniques heavily rely on annotated semantic classes, rendering them unsuitable for medical images due to the scarcity of annotations. To address this challenge, multiple contributions are proposed: First, inspired by spectral decomposition methods, the problem of image decomposition is reframed as a graph partitioning task. The eigenvectors of the Laplacian matrix, derived from the feature affinity matrix of self-supervised networks, are analyzed to estimate the distribution of the objects of interest from the support images. Secondly, we propose a novel self-supervised FSS framework that does not rely on any annotation. Instead, it adaptively estimates the query mask by leveraging the eigenvectors obtained from the support images. This approach eliminates the need for manual annotation, making it particularly suitable for medical images with limited annotated data. Thirdly, to further enhance the decoding of the query image based on the information provided by the support image, we introduce a multi-scale large kernel attention module. By selectively emphasizing relevant features and details, this module improves the segmentation process and contributes to better object delineation. Evaluations on both natural and medical image datasets demonstrate the efficiency and effectiveness of our method. Moreover, the proposed approach is characterized by its generality and model-agnostic nature, allowing for seamless integration with various deep architectures. The code is publicly available at \href{https://github.com/mindflow-institue/annotation_free_fewshot}{\textcolor{magenta}{GitHub}}.

- id: 1911.08793
- submitter: Neema Kachappilly Davis
- authors: Neema Davis, Gaurav Raina, Krishna Jagannathan
- title: A Framework for End-to-End Deep Learning-Based Anomaly Detection in Transportation Networks
- comments: Preprint submitted to Elsevier TRIP. arXiv admin note: text overlap with arXiv:1909.06041
- journal-ref / doi / report-no: null
- categories: cs.LG eess.SP stat.ML
- license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- versions: v1 (Wed, 20 Nov 2019 09:49:58 GMT)
- update_date: 2019-11-21
- authors_parsed: [["Davis", "Neema", ""], ["Raina", "Gaurav", ""], ["Jagannathan", "Krishna", ""]]
- orig_abstract / abstract (identical): We develop an end-to-end deep learning-based anomaly detection model for temporal data in transportation networks. The proposed EVT-LSTM model is derived from the popular LSTM (Long Short-Term Memory) network and adopts an objective function that is based on fundamental results from EVT (Extreme Value Theory). We compare the EVT-LSTM model with some established statistical, machine learning, and hybrid deep learning baselines. Experiments on seven diverse real-world data sets demonstrate the superior anomaly detection performance of our proposed model over the other models considered in the comparison study.

- id: 1507.04603
- submitter: Xinyu Gao
- authors: Xinyu Gao, Linglong Dai, Chau Yuen, and Zhaocheng Wang
- title: Turbo-Like Beamforming Based on Tabu Search Algorithm for Millimeter-Wave Massive MIMO Systems
- comments / journal-ref / doi / report-no: null
- categories: cs.IT math.IT
- license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- versions: v1 (Thu, 16 Jul 2015 14:47:35 GMT)
- update_date: 2015-07-17
- authors_parsed: [["Gao", "Xinyu", ""], ["Dai", "Linglong", ""], ["Yuen", "Chau", ""], ["Wang", "Zhaocheng", ""]]
- orig_abstract / abstract (identical): For millimeter-wave (mmWave) massive MIMO systems, the codebook-based analog beamforming (including transmit precoding and receive combining) is usually used to compensate the severe attenuation of mmWave signals. However, conventional beamforming schemes involve complicated search among pre-defined codebooks to find out the optimal pair of analog precoder and analog combiner. To solve this problem, by exploring the idea of turbo equalizer together with tabu search (TS) algorithm, we propose a Turbo-like beamforming scheme based on TS, which is called Turbo-TS beamforming in this paper, to achieve the near-optimal performance with low complexity. Specifically, the proposed Turbo-TS beamforming scheme is composed of the following two key components: 1) Based on the iterative information exchange between the base station and the user, we design a Turbo-like joint search scheme to find out the near-optimal pair of analog precoder and analog combiner; 2) Inspired by the idea of TS algorithm developed in artificial intelligence, we propose a TS-based precoding/combining scheme to intelligently search the best precoder/combiner in each iteration of Turbo-like joint search with low complexity. Analysis shows that the proposed Turbo-TS beamforming can considerably reduce the searching complexity, and simulation results verify that it can achieve the near-optimal performance.

- id: 1104.1506
- submitter: Jocelyne Troccaz
- authors: Michael Baumann (TIMC), Michel Bolla, Vincent Daanen, Jean-Luc Descotes, Jean-Yves Giraud, Nikolai Hungr (TIMC), Antoine Leroy, Jean-Alexandre Long, S\'ebastien Martin (TIMC), Jocelyne Troccaz (TIMC)
- title: Prosper: image and robot-guided prostate brachytherapy
- comments / journal-ref / doi / report-no: null
- categories: cs.RO physics.med-ph
- license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- versions: v1 (Fri, 8 Apr 2011 07:37:30 GMT)
- update_date: 2011-04-11
- authors_parsed: [["Baumann", "Michael", "", "TIMC"], ["Bolla", "Michel", "", "TIMC"], ["Daanen", "Vincent", "", "TIMC"], ["Descotes", "Jean-Luc", "", "TIMC"], ["Giraud", "Jean-Yves", "", "TIMC"], ["Hungr", "Nikolai", "", "TIMC"], ["Leroy", "Antoine", "", "TIMC"], ["Long", "Jean-Alexandre", "", "TIMC"], ["Martin", "Sébastien", "", "TIMC"], ["Troccaz", "Jocelyne", "", "TIMC"]]
- orig_abstract / abstract (identical): Brachytherapy for localized prostate cancer consists in destroying cancer by introducing iodine radioactive seeds into the gland through hollow needles. The planning of the position of the seeds and their introduction into the prostate is based on intra-operative ultrasound (US) imaging. We propose to optimize the global quality of the procedure by: i) using 3D US; ii) enhancing US data with MRI registration; iii) using a specially designed needle-insertion robot, connected to the imaging data. The imaging methods have been successfully tested on patient data while the robot accuracy has been evaluated on a realistic deformable phantom.

- id: 1912.05571
- submitter: Iman Tabrizian
- authors: Saeedeh Parsaeefard, Iman Tabrizian, Alberto Leon Garcia
- title: Representation of Federated Learning via Worst-Case Robust Optimization Theory
- comments / journal-ref / doi / report-no: null
- categories: cs.LG cs.DC stat.ML
- license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- versions: v1 (Wed, 11 Dec 2019 19:02:49 GMT)
- update_date: 2019-12-13
- authors_parsed: [["Parsaeefard", "Saeedeh", ""], ["Tabrizian", "Iman", ""], ["Garcia", "Alberto Leon", ""]]
- orig_abstract / abstract (identical): Federated learning (FL) is a distributed learning approach where a set of end-user devices participate in the learning process by acting on their isolated local data sets. Here, we process local data sets of users where worst-case optimization theory is used to reformulate the FL problem where the impact of local data sets in training phase is considered as an uncertain function bounded in a closed uncertainty region. This representation allows us to compare the performance of FL with its centralized counterpart, and to replace the uncertain function with a concept of protection functions leading to more tractable formulation. The latter supports applying a regularization factor in each user cost function in FL to reach a better performance. We evaluated our model using the MNIST data set versus the protection function parameters, e.g., regularization factors.

- id: 1904.11578
- submitter: Guo YuHu
- authors: Yuhu Guo, Han Xiao, Yidong Chen, Xiaodong Shi
- title: Asynchronous "Events" are Better For Motion Estimation
- comments: Submitted at IJCAI 2019
- journal-ref / doi / report-no: null
- categories: cs.CV
- license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- versions: v1 (Wed, 24 Apr 2019 02:10:10 GMT)
- update_date: 2019-04-29
- authors_parsed: [["Guo", "Yuhu", ""], ["Xiao", "Han", ""], ["Chen", "Yidong", ""], ["Shi", "Xiaodong", ""]]
- orig_abstract / abstract (identical): Event-based camera is a bio-inspired vision sensor that records intensity changes (called event) asynchronously in each pixel. As an instance of event-based camera, Dynamic and Active-pixel Vision Sensor (DAVIS) combines a standard camera and an event-based camera. However, traditional models could not deal with the event stream asynchronously. To analyze the event stream asynchronously, most existing approaches accumulate events within a certain time interval and treat the accumulated events as a synchronous frame, which wastes the intensity change information and weakens the advantages of DAVIS. Therefore, in this paper, we present the first neural asynchronous approach to process event stream for event-based camera. Our method asynchronously extracts dynamic information from events by leveraging previous motion and critical features of gray-scale frames. To our best knowledge, this is the first neural asynchronous method to analyze event stream through a novel deep neural network. Extensive experiments demonstrate that our proposed model achieves remarkable improvements against the state-of-the-art baselines.

- id: 2107.14230
- submitter: Dongdong Chen
- authors: Shuquan Ye and Dongdong Chen and Songfang Han and Jing Liao
- title: Learning with Noisy Labels for Robust Point Cloud Segmentation
- comments: Typos fixed. ICCV 2021 Oral, Relabeled ScanNetV2 and code are available at https://shuquanye.com/PNAL_website/
- journal-ref / doi / report-no: null
- categories: cs.CV cs.GR
- license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- versions: v1 (Thu, 29 Jul 2021 17:59:54 GMT); v2 (Thu, 5 Aug 2021 17:59:08 GMT)
- update_date: 2021-08-06
- authors_parsed: [["Ye", "Shuquan", ""], ["Chen", "Dongdong", ""], ["Han", "Songfang", ""], ["Liao", "Jing", ""]]
- orig_abstract / abstract (identical): Point cloud segmentation is a fundamental task in 3D. Despite recent progress on point cloud segmentation with the power of deep networks, current deep learning methods based on the clean label assumptions may fail with noisy labels. Yet, object class labels are often mislabeled in real-world point cloud datasets. In this work, we take the lead in solving this issue by proposing a novel Point Noise-Adaptive Learning (PNAL) framework. Compared to existing noise-robust methods on image tasks, our PNAL is noise-rate blind, to cope with the spatially variant noise rate problem specific to point clouds. Specifically, we propose a novel point-wise confidence selection to obtain reliable labels based on the historical predictions of each point. A novel cluster-wise label correction is proposed with a voting strategy to generate the best possible label taking the neighbor point correlations into consideration. We conduct extensive experiments to demonstrate the effectiveness of PNAL on both synthetic and real-world noisy datasets. In particular, even with $60\%$ symmetric noisy labels, our proposed method produces much better results than its baseline counterpart without PNAL and is comparable to the ideal upper bound trained on a completely clean dataset. Moreover, we fully re-labeled the validation set of a popular but noisy real-world scene dataset ScanNetV2 to make it clean, for rigorous experiment and future research. Our code and data are available at \url{https://shuquanye.com/PNAL_website/}.

- id: 1703.02156
- submitter: Jiaming Song
- authors: Jiaming Song, Russell Stewart, Shengjia Zhao and Stefano Ermon
- title: On the Limits of Learning Representations with Label-Based Supervision
- comments: Submitted to ICLR 2017 Workshop Track
- journal-ref / doi / report-no: null
- categories: cs.LG cs.AI stat.ML
- license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- versions: v1 (Tue, 7 Mar 2017 00:09:31 GMT)
- update_date: 2017-03-08
- authors_parsed: [["Song", "Jiaming", ""], ["Stewart", "Russell", ""], ["Zhao", "Shengjia", ""], ["Ermon", "Stefano", ""]]
- orig_abstract / abstract (identical): Advances in neural network based classifiers have transformed automatic feature learning from a pipe dream of stronger AI to a routine and expected property of practical systems. Since the emergence of AlexNet every winning submission of the ImageNet challenge has employed end-to-end representation learning, and due to the utility of good representations for transfer learning, representation learning has become as an important and distinct task from supervised learning. At present, this distinction is inconsequential, as supervised methods are state-of-the-art in learning transferable representations. But recent work has shown that generative models can also be powerful agents of representation learning. Will the representations learned from these generative methods ever rival the quality of those from their supervised competitors? In this work, we argue in the affirmative, that from an information theoretic perspective, generative models have greater potential for representation learning. Based on several experimentally validated assumptions, we show that supervised learning is upper bounded in its capacity for representation learning in ways that certain generative models, such as Generative Adversarial Networks (GANs) are not. We hope that our analysis will provide a rigorous motivation for further exploration of generative representation learning.

- id: 2212.04365
- submitter: Haifeng Li
- authors: Jiawei Zhu, Mei Hong, Ronghua Du, Haifeng Li
- title: Alleviating neighbor bias: augmenting graph self-supervise learning with structural equivalent positive samples
- comments: 8 pages, 5 figures, 8 tables
- journal-ref / doi / report-no: null
- categories: cs.LG cs.AI cs.NI
- license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- versions: v1 (Thu, 8 Dec 2022 16:04:06 GMT)
- update_date: 2022-12-09
- authors_parsed: [["Zhu", "Jiawei", ""], ["Hong", "Mei", ""], ["Du", "Ronghua", ""], ["Li", "Haifeng", ""]]
- orig_abstract / abstract (identical): In recent years, using a self-supervised learning framework to learn the general characteristics of graphs has been considered a promising paradigm for graph representation learning. The core of self-supervised learning strategies for graph neural networks lies in constructing suitable positive sample selection strategies. However, existing GNNs typically aggregate information from neighboring nodes to update node representations, leading to an over-reliance on neighboring positive samples, i.e., homophilous samples; while ignoring long-range positive samples, i.e., positive samples that are far apart on the graph but structurally equivalent samples, a problem we call "neighbor bias." This neighbor bias can reduce the generalization performance of GNNs. In this paper, we argue that the generalization properties of GNNs should be determined by combining homogeneous samples and structurally equivalent samples, which we call the "GC combination hypothesis." Therefore, we propose a topological signal-driven self-supervised method. It uses a topological information-guided structural equivalence sampling strategy. First, we extract multiscale topological features using persistent homology. Then we compute the structural equivalence of node pairs based on their topological features. In particular, we design a topological loss function to pull in non-neighboring node pairs with high structural equivalence in the representation space to alleviate neighbor bias. Finally, we use the joint training mechanism to adjust the effect of structural equivalence on the model to fit datasets with different characteristics. We conducted experiments on the node classification task across seven graph datasets. The results show that the model performance can be effectively improved using a strategy of topological signal enhancement.

- id: 2211.06233
- submitter: Matias Valdenegro-Toro
- authors: Levente Foldesi and Matias Valdenegro-Toro
- title: Comparison of Uncertainty Quantification with Deep Learning in Time Series Regression
- comments: 5 pages, with appendix. RobustSeq @ NeurIPS 2022 Camera Ready
- journal-ref / doi / report-no: null
- categories: cs.LG
- license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- versions: v1 (Fri, 11 Nov 2022 14:29:13 GMT)
- update_date: 2022-11-14
- authors_parsed: [["Foldesi", "Levente", ""], ["Valdenegro-Toro", "Matias", ""]]
- orig_abstract / abstract (identical): Increasingly high-stakes decisions are made using neural networks in order to make predictions. Specifically, meteorologists and hedge funds apply these techniques to time series data. When it comes to prediction, there are certain limitations for machine learning models (such as lack of expressiveness, vulnerability of domain shifts and overconfidence) which can be solved using uncertainty estimation. There is a set of expectations regarding how uncertainty should ``behave". For instance, a wider prediction horizon should lead to more uncertainty or the model's confidence should be proportional to its accuracy. In this paper, different uncertainty estimation methods are compared to forecast meteorological time series data and evaluate these expectations. The results show how each uncertainty estimation method performs on the forecasting task, which partially evaluates the robustness of predicted uncertainty.

- id: 2206.11795
- submitter: Bowen Baker
- authors: Bowen Baker, Ilge Akkaya, Peter Zhokhov, Joost Huizinga, Jie Tang, Adrien Ecoffet, Brandon Houghton, Raul Sampedro, Jeff Clune
- title: Video PreTraining (VPT): Learning to Act by Watching Unlabeled Online Videos
- comments / journal-ref / doi / report-no: null
- categories: cs.LG cs.AI
- license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- versions: v1 (Thu, 23 Jun 2022 16:01:11 GMT)
- update_date: 2022-06-24
- authors_parsed: [["Baker", "Bowen", ""], ["Akkaya", "Ilge", ""], ["Zhokhov", "Peter", ""], ["Huizinga", "Joost", ""], ["Tang", "Jie", ""], ["Ecoffet", "Adrien", ""], ["Houghton", "Brandon", ""], ["Sampedro", "Raul", ""], ["Clune", "Jeff", ""]]
- orig_abstract / abstract (identical): Pretraining on noisy, internet-scale datasets has been heavily studied as a technique for training models with broad, general capabilities for text, images, and other modalities. However, for many sequential decision domains such as robotics, video games, and computer use, publicly available data does not contain the labels required to train behavioral priors in the same way. We extend the internet-scale pretraining paradigm to sequential decision domains through semi-supervised imitation learning wherein agents learn to act by watching online unlabeled videos. Specifically, we show that with a small amount of labeled data we can train an inverse dynamics model accurate enough to label a huge unlabeled source of online data -- here, online videos of people playing Minecraft -- from which we can then train a general behavioral prior. Despite using the native human interface (mouse and keyboard at 20Hz), we show that this behavioral prior has nontrivial zero-shot capabilities and that it can be fine-tuned, with both imitation learning and reinforcement learning, to hard-exploration tasks that are impossible to learn from scratch via reinforcement learning. For many tasks our models exhibit human-level performance, and we are the first to report computer agents that can craft diamond tools, which can take proficient humans upwards of 20 minutes (24,000 environment actions) of gameplay to accomplish.

- id: 2006.15832
- submitter: Linshan Jiang
- authors: Linshan Jiang, Rui Tan, Arvind Easwaran
- title: Resilience Bounds of Network Clock Synchronization with Fault Correction
- comments / journal-ref / doi / report-no: null
- categories: cs.NI
- license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- versions: v1 (Mon, 29 Jun 2020 06:41:24 GMT)
- update_date: 2020-06-30
- authors_parsed: [["Jiang", "Linshan", ""], ["Tan", "Rui", ""], ["Easwaran", "Arvind", ""]]
- orig_abstract / abstract (identical): The Internet of Things (IoT) will be a main data generation infrastructure for achieving better system intelligence. This paper considers the design and implementation of a practical privacy-preserving collaborative learning scheme, in which a curious learning coordinator trains a better machine learning model based on the data samples contributed by a number of IoT objects, while the confidentiality of the raw forms of the training data is protected against the coordinator. Existing distributed machine learning and data encryption approaches incur significant computation and communication overhead, rendering them ill-suited for resource-constrained IoT objects. We study an approach that applies independent random projection at each IoT object to obfuscate data and trains a deep neural network at the coordinator based on the projected data from the IoT objects. This approach introduces light computation overhead to the IoT objects and moves most workload to the coordinator that can have sufficient computing resources. Although the independent projections performed by the IoT objects address the potential collusion between the curious coordinator and some compromised IoT objects, they significantly increase the complexity of the projected data. In this paper, we leverage the superior learning capability of deep learning in capturing sophisticated patterns to maintain good learning performance. Extensive comparative evaluation shows that this approach outperforms other lightweight approaches that apply additive noisification for differential privacy and/or support vector machines for learning in the applications with light to moderate data pattern complexities.

- id: 1001.4411
- submitter: EPTCS
- authors: Igor Mozolevsky (University of Newcastle), John Fitzgerald (University of Newcastle)
- title: Common Representation of Information Flows for Dynamic Coalitions
- comments: null
- journal-ref: EPTCS 16, 2010, pp. 15-25
- doi: 10.4204/EPTCS.16.2
- report-no: null
- categories: cs.CR
- license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- versions: v1 (Mon, 25 Jan 2010 12:44:06 GMT)
- update_date: 2010-01-26
- authors_parsed: [["Mozolevsky", "Igor", "", "University of Newcastle"], ["Fitzgerald", "John", "", "University of Newcastle"]]
- orig_abstract / abstract (identical): We propose a formal foundation for reasoning about access control policies within a Dynamic Coalition, defining an abstraction over existing access control models and providing mechanisms for translation of those models into information-flow domain. The abstracted information-flow domain model, called a Common Representation, can then be used for defining a way to control the evolution of Dynamic Coalitions with respect to information flow.

- id: 1903.12090
- submitter: Fabrizio Sebastiani
- authors: Alejandro Moreo Fern\'andez, Andrea Esuli, Fabrizio Sebastiani
- title: Learning to Weight for Text Classification
- comments: To appear in IEEE Transactions on Knowledge and Data Engineering
- journal-ref: Final version published in IEEE Transactions on Data and Knowledge Engineering, 32(2):302-316, 2020
- doi / report-no: null
- categories: cs.LG cs.IR stat.ML
- license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- versions: v1 (Thu, 28 Mar 2019 16:13:35 GMT)
- update_date: 2021-09-22
- authors_parsed: [["Fernández", "Alejandro Moreo", ""], ["Esuli", "Andrea", ""], ["Sebastiani", "Fabrizio", ""]]
- orig_abstract / abstract (identical): In information retrieval (IR) and related tasks, term weighting approaches typically consider the frequency of the term in the document and in the collection in order to compute a score reflecting the importance of the term for the document. In tasks characterized by the presence of training data (such as text classification) it seems logical that the term weighting function should take into account the distribution (as estimated from training data) of the term across the classes of interest. Although `supervised term weighting' approaches that use this intuition have been described before, they have failed to show consistent improvements. In this article we analyse the possible reasons for this failure, and call consolidated assumptions into question. Following this criticism we propose a novel supervised term weighting approach that, instead of relying on any predefined formula, learns a term weighting function optimised on the training set of interest; we dub this approach \emph{Learning to Weight} (LTW). The experiments that we run on several well-known benchmarks, and using different learning methods, show that our method outperforms previous term weighting approaches in text classification.

- id: 2007.03937
- submitter: Tommaso Cesari
- authors: Tommaso Cesari (TSE), Roberto Colomboni (IIT)
- title: A Nearest Neighbor Characterization of Lebesgue Points in Metric Measure Spaces
- comments / journal-ref / doi / report-no: null
- categories: cs.LG math.PR stat.ML
- license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- versions: v1 (Wed, 8 Jul 2020 07:42:31 GMT); v2 (Mon, 21 Sep 2020 10:13:43 GMT); v3 (Fri, 11 Dec 2020 10:37:56 GMT); v4 (Tue, 12 Jan 2021 15:15:27 GMT)
- update_date: 2021-01-13
- authors_parsed: [["Cesari", "Tommaso", "", "TSE"], ["Colomboni", "Roberto", "", "IIT"]]
- orig_abstract / abstract (identical): The property of almost every point being a Lebesgue point has proven to be crucial for the consistency of several classification algorithms based on nearest neighbors. We characterize Lebesgue points in terms of a 1-Nearest Neighbor regression algorithm for pointwise estimation, fleshing out the role played by tie-breaking rules in the corresponding convergence problem. We then give an application of our results, proving the convergence of the risk of a large class of 1-Nearest Neighbor classification algorithms in general metric spaces where almost every point is a Lebesgue point.

1402.2482
|
Manuel Cebrian
|
Yury Kryvasheyeu, Haohui Chen, Esteban Moro, Pascal Van Hentenryck,
Manuel Cebrian
|
Performance of Social Network Sensors During Hurricane Sandy
| null | null |
10.1371/journal.pone.0117288
| null |
cs.SI physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Information flow during catastrophic events is a critical aspect of disaster
management. Modern communication platforms, in particular online social
networks, provide an opportunity to study such flow, and a means to derive
early-warning sensors, improving emergency preparedness and response. The
performance of the social network sensor method, based on topological and
behavioural properties derived from the "friendship paradox", is studied here
for over 50 million Twitter messages posted before, during, and after Hurricane
Sandy. We find that differences in users' network centrality effectively
translate into a moderate awareness advantage (up to 26 hours), and that the
geo-location of users within or outside of the hurricane-affected area plays a
significant role in determining the scale of such advantage. Emotional response
appears to be universal regardless of the position in the network topology, and
displays characteristic, easily detectable patterns, opening the possibility of
implementing a simple "sentiment sensing" technique to detect and locate
disasters.
|
[
{
"created": "Tue, 11 Feb 2014 13:09:21 GMT",
"version": "v1"
},
{
"created": "Fri, 20 Jun 2014 15:21:55 GMT",
"version": "v2"
}
] |
2015-06-18
|
[
[
"Kryvasheyeu",
"Yury",
""
],
[
"Chen",
"Haohui",
""
],
[
"Moro",
"Esteban",
""
],
[
"Van Hentenryck",
"Pascal",
""
],
[
"Cebrian",
"Manuel",
""
]
] |
Information flow during catastrophic events is a critical aspect of disaster management. Modern communication platforms, in particular online social networks, provide an opportunity to study such flow, and a means to derive early-warning sensors, improving emergency preparedness and response. The performance of the social network sensor method, based on topological and behavioural properties derived from the "friendship paradox", is studied here for over 50 million Twitter messages posted before, during, and after Hurricane Sandy. We find that differences in users' network centrality effectively translate into a moderate awareness advantage (up to 26 hours), and that the geo-location of users within or outside of the hurricane-affected area plays a significant role in determining the scale of such advantage. Emotional response appears to be universal regardless of the position in the network topology, and displays characteristic, easily detectable patterns, opening the possibility of implementing a simple "sentiment sensing" technique to detect and locate disasters.
|
2405.13268
|
Haosen Ge
|
Haosen Ge, Hamsa Bastani, Osbert Bastani
|
Stochastic Online Conformal Prediction with Semi-Bandit Feedback
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Conformal prediction has emerged as an effective strategy for uncertainty
quantification by modifying a model to output sets of labels instead of a
single label. These prediction sets come with the guarantee that they contain
the true label with high probability. However, conformal prediction typically
requires a large calibration dataset of i.i.d. examples. We consider the online
learning setting, where examples arrive over time, and the goal is to construct
prediction sets dynamically. Departing from existing work, we assume
semi-bandit feedback, where we only observe the true label if it is contained
in the prediction set. For instance, consider calibrating a document retrieval
model to a new domain; in this setting, a user would only be able to provide
the true label if the target document is in the prediction set of retrieved
documents. We propose a novel conformal prediction algorithm targeted at this
setting, and prove that it obtains sublinear regret compared to the optimal
conformal predictor. We evaluate our algorithm on a retrieval task and an image
classification task, and demonstrate that it empirically achieves good
performance.
|
[
{
"created": "Wed, 22 May 2024 00:42:49 GMT",
"version": "v1"
}
] |
2024-05-24
|
[
[
"Ge",
"Haosen",
""
],
[
"Bastani",
"Hamsa",
""
],
[
"Bastani",
"Osbert",
""
]
] |
Conformal prediction has emerged as an effective strategy for uncertainty quantification by modifying a model to output sets of labels instead of a single label. These prediction sets come with the guarantee that they contain the true label with high probability. However, conformal prediction typically requires a large calibration dataset of i.i.d. examples. We consider the online learning setting, where examples arrive over time, and the goal is to construct prediction sets dynamically. Departing from existing work, we assume semi-bandit feedback, where we only observe the true label if it is contained in the prediction set. For instance, consider calibrating a document retrieval model to a new domain; in this setting, a user would only be able to provide the true label if the target document is in the prediction set of retrieved documents. We propose a novel conformal prediction algorithm targeted at this setting, and prove that it obtains sublinear regret compared to the optimal conformal predictor. We evaluate our algorithm on a retrieval task and an image classification task, and demonstrate that it empirically achieves good performance.
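The abstract does not spell out the algorithm, so the sketch below is only a generic point of reference: an adaptive-conformal-style online threshold update combined with the semi-bandit constraint that the true label is revealed only when it falls inside the prediction set (a miss is still detectable because no feedback arrives). The update rule, parameters, and names are our assumptions, not the authors' method.

import numpy as np

def online_conformal_semi_bandit(scores, labels, alpha=0.1, lr=0.05, q0=0.5):
    # scores[t]: nonconformity score of each candidate label at round t (lower = more plausible)
    # labels[t]: index of the true label at round t
    q = q0
    errs = []
    for s, y in zip(scores, labels):
        pred_set = np.where(s <= q)[0]      # emit all labels whose score is below the threshold
        covered = y in pred_set
        # Semi-bandit feedback: the label itself is observed only when covered,
        # but the miss/cover indicator is always known.
        err = 0.0 if covered else 1.0
        errs.append(err)
        q += lr * (err - alpha)             # widen the sets after misses, shrink them after covers
    return 1.0 - float(np.mean(errs)), q    # empirical coverage and final threshold

# Toy usage with synthetic scores for 5 candidate labels per round.
rng = np.random.default_rng(1)
scores = rng.uniform(size=(500, 5))
labels = rng.integers(0, 5, size=500)
print(online_conformal_semi_bandit(scores, labels))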
|
2206.07099
|
James Flamino
|
Omar Malik, James Flamino, Boleslaw K. Szymanski
|
Resource-Mediated Consensus Formation
|
8 pages, 9 figures
|
Proc. SIGSIM-PADS'22: SIGSIM Conference on Principles of Advanced
Discrete Simulation, Atlanta, GA, USA, June 8-10, 2022, pp. 105-112,
|
10.1145/3518997.3534959
| null |
cs.MA cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
In social sciences, simulating opinion dynamics to study the interplay
between homophily and influence, and the subsequent formation of echo chambers,
is of great importance. As such, in this paper we investigate echo chambers by
implementing a unique social game in which we spawn a large number of agents,
each assigned one of two opinions on an issue and a finite amount of influence
in the form of a game currency. Agents attempt to hold an opinion that is the
majority at the end of the game, to obtain a reward also paid in the game
currency. At the beginning of each round, an agent is selected at random and
referred to as the speaker. A second agent is then selected within the radius
of the speaker's influence (a subset of the speaker's neighbors) to interact
with the speaker as a listener. In this interaction, the speaker proposes a
payoff in the game currency from their personal influence budget to persuade
the listener to hold the speaker's opinion in future rounds, until that agent
is chosen as a listener again. The listener can either accept or reject this
payoff to hold the speaker's opinion for future rounds. The listener's choice
is informed only by their estimate of the global majority opinion through a
limited view of the opinions of their neighboring agents. We show that the
influence game leads to the formation of "echo chambers," or homogeneous
clusters of opinions. We also investigate various scenarios to disrupt the
creation of such echo chambers, including the introduction of resource
disparity between agents with different opinions, initially preferentially
assigning opinions to agents, and the introduction of committed agents, who
never change their initial opinion.
|
[
{
"created": "Tue, 14 Jun 2022 18:36:22 GMT",
"version": "v1"
}
] |
2022-06-22
|
[
[
"Malik",
"Omar",
""
],
[
"Flamino",
"James",
""
],
[
"Szymanski",
"Boleslaw K.",
""
]
] |
In social sciences, simulating opinion dynamics to study the interplay between homophily and influence, and the subsequent formation of echo chambers, is of great importance. As such, in this paper we investigate echo chambers by implementing a unique social game in which we spawn a large number of agents, each assigned one of two opinions on an issue and a finite amount of influence in the form of a game currency. Agents attempt to hold an opinion that is the majority at the end of the game, to obtain a reward also paid in the game currency. At the beginning of each round, an agent is selected at random and referred to as the speaker. A second agent is then selected within the radius of the speaker's influence (a subset of the speaker's neighbors) to interact with the speaker as a listener. In this interaction, the speaker proposes a payoff in the game currency from their personal influence budget to persuade the listener to hold the speaker's opinion in future rounds, until that agent is chosen as a listener again. The listener can either accept or reject this payoff to hold the speaker's opinion for future rounds. The listener's choice is informed only by their estimate of the global majority opinion through a limited view of the opinions of their neighboring agents. We show that the influence game leads to the formation of "echo chambers," or homogeneous clusters of opinions. We also investigate various scenarios to disrupt the creation of such echo chambers, including the introduction of resource disparity between agents with different opinions, initially preferentially assigning opinions to agents, and the introduction of committed agents, who never change their initial opinion.
|
1506.03710
|
Ugo Dal Lago
|
Alberto Cappai, Ugo Dal Lago
|
On Equivalences, Metrics, and Polynomial Time (Long Version)
| null | null | null | null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Interactive behaviors are ubiquitous in modern cryptography, but are also
present in $\lambda$-calculi, in the form of higher-order constructions.
Traditionally, however, typed $\lambda$-calculi simply do not fit well into
cryptography, being both deterministic and too powerful in terms of the complexity
of functions they can express. We study interaction in a $\lambda$-calculus for
probabilistic polynomial time computable functions. In particular, we show how
notions of context equivalence and context metric can both be characterized by
way of traces when defined on linear contexts. We then give evidence on how
this can be turned into a proof methodology for computational
indistinguishability, a key notion in modern cryptography. We also hint at what
happens if a more general notion of a context is used.
|
[
{
"created": "Thu, 11 Jun 2015 15:32:04 GMT",
"version": "v1"
}
] |
2015-06-12
|
[
[
"Cappai",
"Alberto",
""
],
[
"Lago",
"Ugo Dal",
""
]
] |
Interactive behaviors are ubiquitous in modern cryptography, but are also present in $\lambda$-calculi, in the form of higher-order constructions. Traditionally, however, typed $\lambda$-calculi simply do not fit well into cryptography, being both deterministic and too powerful in terms of the complexity of functions they can express. We study interaction in a $\lambda$-calculus for probabilistic polynomial time computable functions. In particular, we show how notions of context equivalence and context metric can both be characterized by way of traces when defined on linear contexts. We then give evidence on how this can be turned into a proof methodology for computational indistinguishability, a key notion in modern cryptography. We also hint at what happens if a more general notion of a context is used.
|
2011.11744
|
Anshuman Misra
|
Anshuman Misra and Ajay D. Kshemkalyani
|
The Bloom Clock for Causality Testing
| null | null |
10.1007/978-3-030-65621-8_1
| null |
cs.DC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Testing for causality between events in distributed executions is a
fundamental problem. Vector clocks solve this problem but do not scale well.
The probabilistic Bloom clock can determine causality between events with lower
space, time, and message-space overhead than the vector clock; however,
predictions suffer from false positives. We give the protocol for the Bloom
clock based on Counting Bloom filters and study its properties, including the
probabilities of a positive outcome and a false positive. We show the results
of extensive experiments to determine how the above probabilities vary as a
function of
the Bloom timestamps of the two events being tested, and to determine the
accuracy, precision, and false positive rate of a slice of the execution
containing events in the temporal proximity of each other. Based on these
experiments, we make recommendations for the setting of the Bloom clock
parameters. We postulate the causality spread hypothesis from the application's
perspective to indicate whether Bloom clocks will be suitable for correct
predictions with high confidence. The Bloom clock design can serve as a viable
space-, time-, and message-space-efficient alternative to vector clocks if
false positives can be tolerated by an application.
|
[
{
"created": "Mon, 23 Nov 2020 21:43:27 GMT",
"version": "v1"
}
] |
2021-06-21
|
[
[
"Misra",
"Anshuman",
""
],
[
"Kshemkalyani",
"Ajay D.",
""
]
] |
Testing for causality between events in distributed executions is a fundamental problem. Vector clocks solve this problem but do not scale well. The probabilistic Bloom clock can determine causality between events with lower space, time, and message-space overhead than the vector clock; however, predictions suffer from false positives. We give the protocol for the Bloom clock based on Counting Bloom filters and study its properties, including the probabilities of a positive outcome and a false positive. We show the results of extensive experiments to determine how the above probabilities vary as a function of the Bloom timestamps of the two events being tested, and to determine the accuracy, precision, and false positive rate of a slice of the execution containing events in the temporal proximity of each other. Based on these experiments, we make recommendations for the setting of the Bloom clock parameters. We postulate the causality spread hypothesis from the application's perspective to indicate whether Bloom clocks will be suitable for correct predictions with high confidence. The Bloom clock design can serve as a viable space-, time-, and message-space-efficient alternative to vector clocks if false positives can be tolerated by an application.
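Because the clock itself is small, a sketch can make the idea concrete. The minimal Python illustration below follows the general scheme described in the abstract: every event increments k counters chosen by hash functions, clocks are merged elementwise on message receipt, and e1 is reported as a (possibly false-positive) causal predecessor of e2 when its counter vector is dominated elementwise. The hash choices, parameters, and merge rule are illustrative assumptions, not the paper's exact protocol.

import hashlib

class BloomClock:
    # Minimal Bloom clock sketch based on a Counting Bloom filter.
    def __init__(self, m=32, k=3):
        self.m, self.k = m, k
        self.counters = [0] * m

    def _slots(self, event_id):
        # k counter positions derived from hashes of the event identifier
        return [int(hashlib.sha256(f"{event_id}:{i}".encode()).hexdigest(), 16) % self.m
                for i in range(self.k)]

    def tick(self, event_id):
        for s in self._slots(event_id):
            self.counters[s] += 1

    def merge(self, other):
        # on message receipt: elementwise maximum of the two clocks
        self.counters = [max(a, b) for a, b in zip(self.counters, other.counters)]

    def happened_before(self, other):
        # positive (possibly false-positive) causality test: elementwise dominance
        return all(a <= b for a, b in zip(self.counters, other.counters))

# Toy usage: process p sends one message to process q.
p, q = BloomClock(), BloomClock()
p.tick("p:e1")
q.merge(p)
q.tick("q:recv")
print(p.happened_before(q))   # True: e1 is a causal predecessor of the receive event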
|
2401.13961
|
Jia Wan
|
Jia Wan, Wanhua Li, Jason Ken Adhinarta, Atmadeep Banerjee, Evelina
Sjostedt, Jingpeng Wu, Jeff Lichtman, Hanspeter Pfister, Donglai Wei
|
TriSAM: Tri-Plane SAM for zero-shot cortical blood vessel segmentation
in VEM images
|
BvEM-Mouse can be visualized at: https://tinyurl.com/yc2s38x9
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
While imaging techniques at macro and mesoscales have garnered substantial
attention and resources, microscale Volume Electron Microscopy (vEM) imaging,
capable of revealing intricate vascular details, has lacked the necessary
benchmarking infrastructure. In this paper, we address a significant gap in
this field of neuroimaging by introducing the first-in-class public benchmark,
BvEM, designed specifically for cortical blood vessel segmentation in vEM
images. Our BvEM benchmark is based on vEM image volumes from three mammals:
adult mouse, macaque, and human. We standardized the resolution, addressed
imaging variations, and meticulously annotated blood vessels through
semi-automatic, manual, and quality control processes, ensuring high-quality 3D
segmentation. Furthermore, we developed a zero-shot cortical blood vessel
segmentation method named TriSAM, which leverages the powerful segmentation
model SAM for 3D segmentation. To extend SAM from 2D to 3D volume segmentation,
TriSAM employs a multi-seed tracking framework, leveraging the reliability of
certain image planes for tracking while using others to identify potential
turning points. This approach effectively achieves long-term 3D blood vessel
segmentation without model training or fine-tuning. Experimental results show
that TriSAM achieved superior performances on the BvEM benchmark across three
species. Our dataset, code, and model are available online at
\url{https://jia-wan.github.io/bvem}.
|
[
{
"created": "Thu, 25 Jan 2024 05:50:48 GMT",
"version": "v1"
},
{
"created": "Tue, 9 Apr 2024 08:07:48 GMT",
"version": "v2"
},
{
"created": "Tue, 18 Jun 2024 03:53:12 GMT",
"version": "v3"
},
{
"created": "Thu, 15 Aug 2024 09:23:00 GMT",
"version": "v4"
}
] |
2024-08-16
|
[
[
"Wan",
"Jia",
""
],
[
"Li",
"Wanhua",
""
],
[
"Adhinarta",
"Jason Ken",
""
],
[
"Banerjee",
"Atmadeep",
""
],
[
"Sjostedt",
"Evelina",
""
],
[
"Wu",
"Jingpeng",
""
],
[
"Lichtman",
"Jeff",
""
],
[
"Pfister",
"Hanspeter",
""
],
[
"Wei",
"Donglai",
""
]
] |
While imaging techniques at macro and mesoscales have garnered substantial attention and resources, microscale Volume Electron Microscopy (vEM) imaging, capable of revealing intricate vascular details, has lacked the necessary benchmarking infrastructure. In this paper, we address a significant gap in this field of neuroimaging by introducing the first-in-class public benchmark, BvEM, designed specifically for cortical blood vessel segmentation in vEM images. Our BvEM benchmark is based on vEM image volumes from three mammals: adult mouse, macaque, and human. We standardized the resolution, addressed imaging variations, and meticulously annotated blood vessels through semi-automatic, manual, and quality control processes, ensuring high-quality 3D segmentation. Furthermore, we developed a zero-shot cortical blood vessel segmentation method named TriSAM, which leverages the powerful segmentation model SAM for 3D segmentation. To extend SAM from 2D to 3D volume segmentation, TriSAM employs a multi-seed tracking framework, leveraging the reliability of certain image planes for tracking while using others to identify potential turning points. This approach effectively achieves long-term 3D blood vessel segmentation without model training or fine-tuning. Experimental results show that TriSAM achieved superior performances on the BvEM benchmark across three species. Our dataset, code, and model are available online at \url{https://jia-wan.github.io/bvem}.
|
2307.09748
|
Zijian Zhu
|
Feiran Hu, Peng Wang, Yangyang Li, Chenlong Duan, Zijian Zhu, Fei
Wang, Faen Zhang, Yong Li, Xiu-Shen Wei
|
Watch out Venomous Snake Species: A Solution to SnakeCLEF2023
|
This work was the winner solution of the SnakeCLEF2023 challenge
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The SnakeCLEF2023 competition aims at the development of advanced algorithms
for snake species identification through the analysis of images and
accompanying metadata. This paper presents a method leveraging both images and
metadata. Modern CNN models and strong data augmentation are utilized to learn
better representations of images. To relieve the challenge of the long-tailed
distribution, the seesaw loss is utilized in our method. We also design a light
model to calculate prior probabilities using metadata features extracted from
CLIP in the post-processing stage. Besides, we attach more importance to
venomous species by assigning venomous species labels to some examples that the
model is uncertain about. Our method achieves a score of 91.31% on the final
metric, which combines F1 and other metrics, on the private leaderboard,
placing 1st among the participants. The code is available at
https://github.com/xiaoxsparraw/CLEF2023.
|
[
{
"created": "Wed, 19 Jul 2023 04:59:58 GMT",
"version": "v1"
}
] |
2023-07-20
|
[
[
"Hu",
"Feiran",
""
],
[
"Wang",
"Peng",
""
],
[
"Li",
"Yangyang",
""
],
[
"Duan",
"Chenlong",
""
],
[
"Zhu",
"Zijian",
""
],
[
"Wang",
"Fei",
""
],
[
"Zhang",
"Faen",
""
],
[
"Li",
"Yong",
""
],
[
"Wei",
"Xiu-Shen",
""
]
] |
The SnakeCLEF2023 competition aims at the development of advanced algorithms for snake species identification through the analysis of images and accompanying metadata. This paper presents a method leveraging both images and metadata. Modern CNN models and strong data augmentation are utilized to learn better representations of images. To relieve the challenge of the long-tailed distribution, the seesaw loss is utilized in our method. We also design a light model to calculate prior probabilities using metadata features extracted from CLIP in the post-processing stage. Besides, we attach more importance to venomous species by assigning venomous species labels to some examples that the model is uncertain about. Our method achieves a score of 91.31% on the final metric, which combines F1 and other metrics, on the private leaderboard, placing 1st among the participants. The code is available at https://github.com/xiaoxsparraw/CLEF2023.
|
1106.2312
|
Rathipriya R
|
R.Rathipriya, Dr. K.Thangavel and J.Bagyamani
|
Evolutionary Biclustering of Clickstream Data
| null | null | null | null |
cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Biclustering is a two-way clustering approach involving simultaneous
clustering along two dimensions of the data matrix. Finding biclusters of web
objects (i.e. web users and web pages) is an emerging topic in the context of
web usage mining. It overcomes the problems associated with traditional
clustering methods by allowing automatic discovery of browsing patterns based
on a subset of attributes. A coherent bicluster of clickstream data is a local
browsing pattern such that the users in the bicluster exhibit a correlated
browsing pattern over a subset of the pages of a web site. This paper proposes
a new application of biclustering to web data using a combination of heuristics
and meta-heuristics such as K-means, a greedy search procedure, and genetic
algorithms to identify coherent browsing patterns. Experiments are conducted on
the benchmark clickstream msnbc dataset from the UCI repository. The results
demonstrate the efficiency and beneficial outcome of the proposed method by
correlating the users and pages of a web site to a high degree. The approach
shows excellent performance at finding highly overlapping coherent biclusters
from web data.
|
[
{
"created": "Sun, 12 Jun 2011 14:34:16 GMT",
"version": "v1"
}
] |
2011-06-14
|
[
[
"Rathipriya",
"R.",
""
],
[
"Thangavel",
"Dr. K.",
""
],
[
"Bagyamani",
"J.",
""
]
] |
Biclustering is a two-way clustering approach involving simultaneous clustering along two dimensions of the data matrix. Finding biclusters of web objects (i.e. web users and web pages) is an emerging topic in the context of web usage mining. It overcomes the problems associated with traditional clustering methods by allowing automatic discovery of browsing patterns based on a subset of attributes. A coherent bicluster of clickstream data is a local browsing pattern such that the users in the bicluster exhibit a correlated browsing pattern over a subset of the pages of a web site. This paper proposes a new application of biclustering to web data using a combination of heuristics and meta-heuristics such as K-means, a greedy search procedure, and genetic algorithms to identify coherent browsing patterns. Experiments are conducted on the benchmark clickstream msnbc dataset from the UCI repository. The results demonstrate the efficiency and beneficial outcome of the proposed method by correlating the users and pages of a web site to a high degree. The approach shows excellent performance at finding highly overlapping coherent biclusters from web data.
|
1805.07430
|
Chen-Yu Wei
|
Haipeng Luo, Chen-Yu Wei, Kai Zheng
|
Efficient Online Portfolio with Logarithmic Regret
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the decades-old problem of online portfolio management and propose
the first algorithm with logarithmic regret that is not based on Cover's
Universal Portfolio algorithm and admits much faster implementation.
Specifically, Universal Portfolio enjoys optimal regret $\mathcal{O}(N\ln T)$
for $N$ financial instruments over $T$ rounds, but requires log-concave
sampling and has a large polynomial running time. Our algorithm, on the other
hand, ensures a slightly larger but still logarithmic regret of
$\mathcal{O}(N^2(\ln T)^4)$, and is based on the well-studied Online Mirror
Descent framework with a novel regularizer that can be implemented via standard
optimization methods in time $\mathcal{O}(TN^{2.5})$ per round. The regret of
all other existing works is either polynomial in $T$ or has a potentially
unbounded factor such as the inverse of the smallest price relative.
|
[
{
"created": "Fri, 18 May 2018 20:29:02 GMT",
"version": "v1"
},
{
"created": "Fri, 16 Nov 2018 00:39:47 GMT",
"version": "v2"
}
] |
2018-11-19
|
[
[
"Luo",
"Haipeng",
""
],
[
"Wei",
"Chen-Yu",
""
],
[
"Zheng",
"Kai",
""
]
] |
We study the decades-old problem of online portfolio management and propose the first algorithm with logarithmic regret that is not based on Cover's Universal Portfolio algorithm and admits much faster implementation. Specifically, Universal Portfolio enjoys optimal regret $\mathcal{O}(N\ln T)$ for $N$ financial instruments over $T$ rounds, but requires log-concave sampling and has a large polynomial running time. Our algorithm, on the other hand, ensures a slightly larger but still logarithmic regret of $\mathcal{O}(N^2(\ln T)^4)$, and is based on the well-studied Online Mirror Descent framework with a novel regularizer that can be implemented via standard optimization methods in time $\mathcal{O}(TN^{2.5})$ per round. The regret of all other existing works is either polynomial in $T$ or has a potentially unbounded factor such as the inverse of the smallest price relative.
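For readers unfamiliar with the Online Mirror Descent framework mentioned here, the sketch below shows its most common instantiation for portfolio selection, the entropic regularizer (exponentiated gradient). This is deliberately not the paper's novel regularizer and does not attain its logarithmic regret; it only illustrates the per-round structure (play a portfolio, observe price relatives, take a mirror step on the simplex).

import numpy as np

def eg_portfolio(price_relatives, eta=0.05):
    # Online Mirror Descent on the simplex with the entropic regularizer
    # (exponentiated gradient), applied to sequential portfolio selection.
    # price_relatives: T x N array; entry (t, i) is asset i's price relative in round t.
    T, N = price_relatives.shape
    w = np.full(N, 1.0 / N)            # start from the uniform portfolio
    log_wealth = 0.0
    for x in price_relatives:
        gain = float(w @ x)            # wealth multiplier of this round
        log_wealth += np.log(gain)
        grad = x / gain                # gradient of log(w @ x) with respect to w
        w = w * np.exp(eta * grad)     # mirror (multiplicative) update
        w /= w.sum()                   # renormalise onto the simplex
    return log_wealth

# Toy usage with two synthetic assets.
rng = np.random.default_rng(2)
X = 1.0 + 0.01 * rng.standard_normal((1000, 2))
print(eg_portfolio(X))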
|
2110.03380
|
You Jin Kim
|
You Jin Kim, Hee-Soo Heo, Jee-weon Jung, Youngki Kwon, Bong-Jin Lee,
Joon Son Chung
|
Advancing the dimensionality reduction of speaker embeddings for speaker
diarisation: disentangling noise and informing speech activity
|
This paper was submitted to ICASSP 2023
| null | null | null |
cs.SD cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The objective of this work is to train noise-robust speaker embeddings
adapted for speaker diarisation. Speaker embeddings play a crucial role in the
performance of diarisation systems, but they often capture spurious information
such as noise, adversely affecting performance. Our previous work has proposed
an auto-encoder-based dimensionality reduction module to help remove the
redundant information. However, it does not explicitly separate such
information and has also been found to be sensitive to hyper-parameter values.
To this end, we propose two contributions to overcome these issues: (i) a novel
dimensionality reduction framework that can disentangle spurious information
from the speaker embeddings; (ii) the use of a speech activity vector to
prevent the speaker code from representing the background noise. Through a
range of experiments conducted on four datasets, our approach consistently
demonstrates state-of-the-art performance among models without system fusion.
|
[
{
"created": "Thu, 7 Oct 2021 12:19:09 GMT",
"version": "v1"
},
{
"created": "Tue, 29 Mar 2022 09:40:45 GMT",
"version": "v2"
},
{
"created": "Thu, 3 Nov 2022 09:21:30 GMT",
"version": "v3"
}
] |
2022-11-04
|
[
[
"Kim",
"You Jin",
""
],
[
"Heo",
"Hee-Soo",
""
],
[
"Jung",
"Jee-weon",
""
],
[
"Kwon",
"Youngki",
""
],
[
"Lee",
"Bong-Jin",
""
],
[
"Chung",
"Joon Son",
""
]
] |
The objective of this work is to train noise-robust speaker embeddings adapted for speaker diarisation. Speaker embeddings play a crucial role in the performance of diarisation systems, but they often capture spurious information such as noise, adversely affecting performance. Our previous work has proposed an auto-encoder-based dimensionality reduction module to help remove the redundant information. However, it does not explicitly separate such information and has also been found to be sensitive to hyper-parameter values. To this end, we propose two contributions to overcome these issues: (i) a novel dimensionality reduction framework that can disentangle spurious information from the speaker embeddings; (ii) the use of a speech activity vector to prevent the speaker code from representing the background noise. Through a range of experiments conducted on four datasets, our approach consistently demonstrates state-of-the-art performance among models without system fusion.
|
2311.15959
|
Kaijun Tan
|
Kaijun Tan, Benzhe Dai, Jiakui Li, Wenyu Mao
|
CheapNET: Improving Light-weight speech enhancement network by projected
loss function
| null | null | null | null |
cs.SD cs.AI eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Noise suppression and echo cancellation are critical in speech enhancement
and essential for smart devices and real-time communication. Deployed in voice
processing front-ends and edge devices, these algorithms must ensure efficient
real-time inference with low computational demands. Traditional edge-based
noise suppression often uses MSE-based amplitude spectrum mask training, but
this approach has limitations. We introduce a novel projection loss function,
diverging from MSE, to enhance noise suppression. This method uses projection
techniques to isolate key audio components from noise, significantly improving
model performance. For echo cancellation, the function enables direct
predictions on LAEC pre-processed outputs, substantially enhancing performance.
Our noise suppression model achieves near state-of-the-art results with only
3.1M parameters and 0.4GFlops/s computational load. Moreover, our echo
cancellation model outperforms replicated industry-leading models, introducing
a new perspective in speech enhancement.
|
[
{
"created": "Mon, 27 Nov 2023 16:03:42 GMT",
"version": "v1"
}
] |
2023-11-28
|
[
[
"Tan",
"Kaijun",
""
],
[
"Dai",
"Benzhe",
""
],
[
"Li",
"Jiakui",
""
],
[
"Mao",
"Wenyu",
""
]
] |
Noise suppression and echo cancellation are critical in speech enhancement and essential for smart devices and real-time communication. Deployed in voice processing front-ends and edge devices, these algorithms must ensure efficient real-time inference with low computational demands. Traditional edge-based noise suppression often uses MSE-based amplitude spectrum mask training, but this approach has limitations. We introduce a novel projection loss function, diverging from MSE, to enhance noise suppression. This method uses projection techniques to isolate key audio components from noise, significantly improving model performance. For echo cancellation, the function enables direct predictions on LAEC pre-processed outputs, substantially enhancing performance. Our noise suppression model achieves near state-of-the-art results with only 3.1M parameters and 0.4GFlops/s computational load. Moreover, our echo cancellation model outperforms replicated industry-leading models, introducing a new perspective in speech enhancement.
|
1708.00636
|
Jaesung Park
|
Jae Sung Park and Nam Ik Cho
|
Generation of High Dynamic Range Illumination from a Single Image for
the Enhancement of Undesirably Illuminated Images
| null | null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents an algorithm that enhances undesirably illuminated images
by generating and fusing multi-level illuminations from a single image. The
input image is first decomposed into illumination and reflectance components by
using an edge-preserving smoothing filter. Then the reflectance component is
scaled up to improve the image details in bright areas. The illumination
component is scaled up and down to generate several illumination images that
correspond to certain camera exposure values different from the original. The
virtual multi-exposure illuminations are blended into an enhanced illumination,
where we also propose a method to generate appropriate weight maps for the tone
fusion. Finally, an enhanced image is obtained by multiplying the equalized
illumination and enhanced reflectance. Experiments show that the proposed
algorithm produces visually pleasing output and also yields comparable
objective results to the conventional enhancement methods, while requiring
modest computational loads.
|
[
{
"created": "Wed, 2 Aug 2017 08:14:18 GMT",
"version": "v1"
}
] |
2017-08-03
|
[
[
"Park",
"Jae Sung",
""
],
[
"Cho",
"Nam Ik",
""
]
] |
This paper presents an algorithm that enhances undesirably illuminated images by generating and fusing multi-level illuminations from a single image. The input image is first decomposed into illumination and reflectance components by using an edge-preserving smoothing filter. Then the reflectance component is scaled up to improve the image details in bright areas. The illumination component is scaled up and down to generate several illumination images that correspond to certain camera exposure values different from the original. The virtual multi-exposure illuminations are blended into an enhanced illumination, where we also propose a method to generate appropriate weight maps for the tone fusion. Finally, an enhanced image is obtained by multiplying the equalized illumination and enhanced reflectance. Experiments show that the proposed algorithm produces visually pleasing output and also yields comparable objective results to the conventional enhancement methods, while requiring modest computational loads.
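The stages listed in this abstract map naturally onto a few array operations. The sketch below is a rough numpy/scipy illustration of the pipeline (decompose into illumination and reflectance, synthesise virtual exposures of the illumination, blend them with per-pixel weights, recombine); it substitutes a Gaussian blur for the paper's edge-preserving filter and uses simple well-exposedness weights, so it approximates the idea rather than reproducing the authors' algorithm.

import numpy as np
from scipy.ndimage import gaussian_filter

def enhance(gray, exposures=(0.5, 1.0, 2.0), eps=1e-6):
    # gray: 2-D image with values in [0, 1].
    # 1) Decompose: smooth illumination estimate, reflectance as the ratio
    #    (the paper uses an edge-preserving filter; a Gaussian is a stand-in here).
    illum = gaussian_filter(gray, sigma=15)
    refl = gray / (illum + eps)

    # 2) Virtual multi-exposure illuminations and well-exposedness weights.
    fused_num = np.zeros_like(gray)
    fused_den = np.zeros_like(gray)
    for k in exposures:
        e = np.clip(illum * k, 0.0, 1.0)
        w = np.exp(-((e - 0.5) ** 2) / (2 * 0.2 ** 2))   # favour mid-tone pixels
        fused_num += w * e
        fused_den += w

    # 3) Blend the exposures and recombine with the reflectance.
    fused_illum = fused_num / (fused_den + eps)
    return np.clip(fused_illum * refl, 0.0, 1.0)

# Toy usage on a synthetic under-exposed gradient image.
img = np.linspace(0.02, 0.4, 256 * 256).reshape(256, 256)
print(enhance(img).mean())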
|
1802.08402
|
Shadrokh Samavi
|
Mojtaba Akbari, Majid Mohrekesh, S.M.Reza Soroushmehr, Nader Karimi,
Shadrokh Samavi, Kayvan Najarian
|
Adaptive specular reflection detection and inpainting in colonoscopy
video frames
|
5 pages, 5 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Colonoscopy video frames might be contaminated by bright spots with
unsaturated values known as specular reflection. Detection and removal of such
reflections could enhance the quality of colonoscopy images and facilitate the
diagnosis procedure. In this paper we propose a novel two-phase method for this
purpose, consisting of detection and removal phases. In the detection phase, we
employ both HSV and RGB color space information for segmentation of specular
reflections. We first train a non-linear SVM for selecting a color space based
on image statistical features extracted from each channel of the color spaces.
Then, a cost function for detection of specular reflections is introduced. In
the removal phase, we propose a two-step inpainting method which consists of
appropriate replacement patch selection and removal of the blockiness effects.
The proposed method is evaluated by testing on an available colonoscopy image
database, where an accuracy of 99.68% and a Dice score of 71.79% are achieved,
respectively.
|
[
{
"created": "Fri, 23 Feb 2018 06:25:21 GMT",
"version": "v1"
}
] |
2018-02-26
|
[
[
"Akbari",
"Mojtaba",
""
],
[
"Mohrekesh",
"Majid",
""
],
[
"Soroushmehr",
"S. M. Reza",
""
],
[
"Karimi",
"Nader",
""
],
[
"Samavi",
"Shadrokh",
""
],
[
"Najarian",
"Kayvan",
""
]
] |
Colonoscopy video frames might be contaminated by bright spots with unsaturated values known as specular reflection. Detection and removal of such reflections could enhance the quality of colonoscopy images and facilitate the diagnosis procedure. In this paper we propose a novel two-phase method for this purpose, consisting of detection and removal phases. In the detection phase, we employ both HSV and RGB color space information for segmentation of specular reflections. We first train a non-linear SVM for selecting a color space based on image statistical features extracted from each channel of the color spaces. Then, a cost function for detection of specular reflections is introduced. In the removal phase, we propose a two-step inpainting method which consists of appropriate replacement patch selection and removal of the blockiness effects. The proposed method is evaluated by testing on an available colonoscopy image database, where an accuracy of 99.68% and a Dice score of 71.79% are achieved, respectively.
|
2207.11903
|
Allen Liu
|
Allen Liu, Ankur Moitra
|
Minimax Rates for Robust Community Detection
|
To appear in FOCS 2022
| null | null | null |
cs.DS cs.LG cs.SI math.PR stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we study the problem of community detection in the stochastic
block model with adversarial node corruptions. Our main result is an efficient
algorithm that can tolerate an $\epsilon$-fraction of corruptions and achieves
error $O(\epsilon) + e^{-\frac{C}{2} (1 \pm o(1))}$ where $C = (\sqrt{a} -
\sqrt{b})^2$ is the signal-to-noise ratio and $a/n$ and $b/n$ are the
inter-community and intra-community connection probabilities respectively.
These bounds essentially match the minimax rates for the SBM without
corruptions. We also give robust algorithms for $\mathbb{Z}_2$-synchronization.
At the heart of our algorithm is a new semidefinite program that uses global
information to robustly boost the accuracy of a rough clustering. Moreover, we
show that our algorithms are doubly-robust in the sense that they work in an
even more challenging noise model that mixes adversarial corruptions with
unbounded monotone changes, from the semi-random model.
|
[
{
"created": "Mon, 25 Jul 2022 04:45:16 GMT",
"version": "v1"
}
] |
2022-07-26
|
[
[
"Liu",
"Allen",
""
],
[
"Moitra",
"Ankur",
""
]
] |
In this work, we study the problem of community detection in the stochastic block model with adversarial node corruptions. Our main result is an efficient algorithm that can tolerate an $\epsilon$-fraction of corruptions and achieves error $O(\epsilon) + e^{-\frac{C}{2} (1 \pm o(1))}$ where $C = (\sqrt{a} - \sqrt{b})^2$ is the signal-to-noise ratio and $a/n$ and $b/n$ are the inter-community and intra-community connection probabilities respectively. These bounds essentially match the minimax rates for the SBM without corruptions. We also give robust algorithms for $\mathbb{Z}_2$-synchronization. At the heart of our algorithm is a new semidefinite program that uses global information to robustly boost the accuracy of a rough clustering. Moreover, we show that our algorithms are doubly-robust in the sense that they work in an even more challenging noise model that mixes adversarial corruptions with unbounded monotone changes, from the semi-random model.
|
2407.21611
|
Jiafeng Zhong
|
Jiafeng Zhong, Bin Li, Jiangyan Yi
|
Enhancing Partially Spoofed Audio Localization with Boundary-aware
Attention Mechanism
| null | null | null | null |
cs.SD cs.AI eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The task of partially spoofed audio localization aims to accurately determine
audio authenticity at a frame level. Although some works have achieved
encouraging results, utilizing boundary information within a single model
remains an unexplored research topic. In this work, we propose a novel method
called Boundary-aware Attention Mechanism (BAM). Specifically, it consists of
two core modules: Boundary Enhancement and Boundary Frame-wise Attention. The
former assembles the intra-frame and inter-frame information to extract
discriminative boundary features that are subsequently used for boundary
position detection and authenticity decision, while the latter leverages
boundary prediction results to explicitly control the feature interaction
between frames, which achieves effective discrimination between real and fake
frames. Experimental results on PartialSpoof database demonstrate our proposed
method achieves the best performance. The code is available at
https://github.com/media-sec-lab/BAM.
|
[
{
"created": "Wed, 31 Jul 2024 13:49:17 GMT",
"version": "v1"
}
] |
2024-08-01
|
[
[
"Zhong",
"Jiafeng",
""
],
[
"Li",
"Bin",
""
],
[
"Yi",
"Jiangyan",
""
]
] |
The task of partially spoofed audio localization aims to accurately determine audio authenticity at a frame level. Although some works have achieved encouraging results, utilizing boundary information within a single model remains an unexplored research topic. In this work, we propose a novel method called Boundary-aware Attention Mechanism (BAM). Specifically, it consists of two core modules: Boundary Enhancement and Boundary Frame-wise Attention. The former assembles the intra-frame and inter-frame information to extract discriminative boundary features that are subsequently used for boundary position detection and authenticity decision, while the latter leverages boundary prediction results to explicitly control the feature interaction between frames, which achieves effective discrimination between real and fake frames. Experimental results on PartialSpoof database demonstrate our proposed method achieves the best performance. The code is available at https://github.com/media-sec-lab/BAM.
|
1912.05937
|
Rahul Gopinath
|
Rahul Gopinath, Bj\"orn Mathis, Andreas Zeller
|
Inferring Input Grammars from Dynamic Control Flow
| null | null | null | null |
cs.SE cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A program is characterized by its input model, and a formal input model can
be of use in diverse areas including vulnerability analysis, reverse
engineering, fuzzing and software testing, clone detection and refactoring.
Unfortunately, input models for typical programs are often unavailable or out
of date. While there exist algorithms that can mine the syntactical structure
of program inputs, they either produce unwieldy and incomprehensible grammars,
or require heuristics that target specific parsing patterns.
In this paper, we present a general algorithm that takes a program and a
small set of sample inputs and automatically infers a readable context-free
grammar capturing the input language of the program. We infer the syntactic
input structure only by observing access of input characters at different
locations of the input parser. This works on all program stack based recursive
descent input parsers, including PEG and parser combinators, and can do
entirely without program specific heuristics. Our Mimid prototype produced
accurate and readable grammars for a variety of evaluation subjects, including
expr, URLparse, and microJSON.
|
[
{
"created": "Thu, 12 Dec 2019 13:35:09 GMT",
"version": "v1"
}
] |
2019-12-13
|
[
[
"Gopinath",
"Rahul",
""
],
[
"Mathis",
"Björn",
""
],
[
"Zeller",
"Andreas",
""
]
] |
A program is characterized by its input model, and a formal input model can be of use in diverse areas including vulnerability analysis, reverse engineering, fuzzing and software testing, clone detection and refactoring. Unfortunately, input models for typical programs are often unavailable or out of date. While there exist algorithms that can mine the syntactical structure of program inputs, they either produce unwieldy and incomprehensible grammars, or require heuristics that target specific parsing patterns. In this paper, we present a general algorithm that takes a program and a small set of sample inputs and automatically infers a readable context-free grammar capturing the input language of the program. We infer the syntactic input structure only by observing access of input characters at different locations of the input parser. This works on all program stack based recursive descent input parsers, including PEG and parser combinators, and can do entirely without program specific heuristics. Our Mimid prototype produced accurate and readable grammars for a variety of evaluation subjects, including expr, URLparse, and microJSON.
|
1905.10855
|
Martin Sulzmann
|
Martin Sulzmann and Kai Stadtm\"uller
|
Data Race Prediction for Inaccurate Traces
|
26 pages with appendix
| null | null | null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Happens-before based data race prediction methods infer from a trace of
events a partial order to check if one event happens before another event. If
two write events are unordered, they are in a race. We observe that common
tracing methods provide no guarantee that the trace order corresponds to an
actual program run. The consequence of inaccurate tracing is that the results
(races) reported are inaccurate. We introduce diagnostic methods to examine if
(1) a race is guaranteed to be correct regardless of any potential
inaccuracies, or (2) may be incorrect due to inaccurate tracing. We have fully
implemented the approach and provide an empirical comparison with
state-of-the-art happens-before based race predictors such as FastTrack and SHB.
|
[
{
"created": "Sun, 26 May 2019 18:51:24 GMT",
"version": "v1"
},
{
"created": "Sat, 26 Oct 2019 07:57:50 GMT",
"version": "v2"
}
] |
2019-10-29
|
[
[
"Sulzmann",
"Martin",
""
],
[
"Stadtmüller",
"Kai",
""
]
] |
Happens-before based data race prediction methods infer from a trace of events a partial order to check if one event happens before another event. If two write events are unordered, they are in a race. We observe that common tracing methods provide no guarantee that the trace order corresponds to an actual program run. The consequence of inaccurate tracing is that the results (races) reported are inaccurate. We introduce diagnostic methods to examine if (1) a race is guaranteed to be correct regardless of any potential inaccuracies, or (2) may be incorrect due to inaccurate tracing. We have fully implemented the approach and provide an empirical comparison with state-of-the-art happens-before based race predictors such as FastTrack and SHB.
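The happens-before test that this family of predictors builds on fits in a few lines. The following sketch shows the standard vector-clock check for whether two write events are ordered; it is textbook background for the abstract above and does not reproduce the paper's diagnostic methods.

def happens_before(vc1, vc2):
    # True iff the event stamped vc1 happens before the event stamped vc2.
    return all(a <= b for a, b in zip(vc1, vc2)) and any(a < b for a, b in zip(vc1, vc2))

def in_race(vc_w1, vc_w2):
    # Two write events are in a race when neither happens before the other.
    return not happens_before(vc_w1, vc_w2) and not happens_before(vc_w2, vc_w1)

# Toy usage: vector clocks taken at two writes in a two-thread execution.
print(in_race([2, 0], [0, 1]))   # True: the writes are unordered -> reported as a race
print(in_race([1, 0], [2, 1]))   # False: the first write happens before the second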
|
1104.0775
|
Diora Jordan
|
Markus Wagner, Jareth Day, Diora Jordan, Trent Kroeger, Frank Neumann
|
Evolving Pacing Strategies for Team Pursuit Track Cycling
| null | null | null | null |
cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Team pursuit track cycling is a bicycle racing sport held on velodromes and
is part of the Summer Olympics. It involves the use of strategies to minimize
the overall time that a team of cyclists needs to complete a race. We present
an optimisation framework for team pursuit track cycling and show how to evolve
strategies using metaheuristics for this interesting real-world problem. Our
experimental results show that these heuristics lead to significantly better
strategies than state-of-the-art strategies that are currently used by teams of
cyclists.
|
[
{
"created": "Tue, 5 Apr 2011 08:46:33 GMT",
"version": "v1"
},
{
"created": "Thu, 16 Jun 2011 05:42:09 GMT",
"version": "v2"
}
] |
2011-06-17
|
[
[
"Wagner",
"Markus",
""
],
[
"Day",
"Jareth",
""
],
[
"Jordan",
"Diora",
""
],
[
"Kroeger",
"Trent",
""
],
[
"Neumann",
"Frank",
""
]
] |
Team pursuit track cycling is a bicycle racing sport held on velodromes and is part of the Summer Olympics. It involves the use of strategies to minimize the overall time that a team of cyclists needs to complete a race. We present an optimisation framework for team pursuit track cycling and show how to evolve strategies using metaheuristics for this interesting real-world problem. Our experimental results show that these heuristics lead to significantly better strategies than state-of-the-art strategies that are currently used by teams of cyclists.
|
1805.08028
|
Fuli Luo
|
Fuli Luo, Tianyu Liu, Qiaolin Xia, Baobao Chang and Zhifang Sui
|
Incorporating Glosses into Neural Word Sense Disambiguation
|
Accepted to ACL 2018 (long paper), added code link
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Word Sense Disambiguation (WSD) aims to identify the correct meaning of
polysemous words in a particular context. Lexical resources like WordNet have
proved to be of great help for WSD in knowledge-based methods. However,
previous neural networks for WSD always rely on massive labeled data (context),
ignoring lexical resources like glosses (sense definitions). In this paper, we
integrate the context and glosses of the target word into a unified framework
in order to make full use of both labeled data and lexical knowledge.
Therefore, we propose GAS: a gloss-augmented WSD neural network which jointly
encodes the context and glosses of the target word. GAS models the semantic
relationship between the context and the gloss in an improved memory network
framework, which breaks the barriers between the previous supervised methods
and knowledge-based methods. We further extend the original gloss of a word
sense via its semantic relations in WordNet to enrich the gloss information.
The experimental results show that our model outperforms the state-of-the-art
systems on several English all-words WSD datasets.
|
[
{
"created": "Mon, 21 May 2018 12:59:17 GMT",
"version": "v1"
},
{
"created": "Mon, 16 Jul 2018 14:58:30 GMT",
"version": "v2"
}
] |
2018-07-17
|
[
[
"Luo",
"Fuli",
""
],
[
"Liu",
"Tianyu",
""
],
[
"Xia",
"Qiaolin",
""
],
[
"Chang",
"Baobao",
""
],
[
"Sui",
"Zhifang",
""
]
] |
Word Sense Disambiguation (WSD) aims to identify the correct meaning of polysemous words in a particular context. Lexical resources like WordNet have proved to be of great help for WSD in knowledge-based methods. However, previous neural networks for WSD always rely on massive labeled data (context), ignoring lexical resources like glosses (sense definitions). In this paper, we integrate the context and glosses of the target word into a unified framework in order to make full use of both labeled data and lexical knowledge. Therefore, we propose GAS: a gloss-augmented WSD neural network which jointly encodes the context and glosses of the target word. GAS models the semantic relationship between the context and the gloss in an improved memory network framework, which breaks the barriers between the previous supervised methods and knowledge-based methods. We further extend the original gloss of a word sense via its semantic relations in WordNet to enrich the gloss information. The experimental results show that our model outperforms the state-of-the-art systems on several English all-words WSD datasets.
|
1901.10415
|
Juncai He
|
Juncai He and Jinchao Xu
|
MgNet: A Unified Framework of Multigrid and Convolutional Neural Network
|
30 pages
|
Sci. China Math. 62 (2019) 1331-1354
|
10.1007/s11425-019-9547-2
| null |
cs.CV cs.LG math.NA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We develop a unified model, known as MgNet, that simultaneously recovers some
convolutional neural networks (CNN) for image classification and multigrid (MG)
methods for solving discretized partial differential equations (PDEs). This
model is based on close connections that we have observed and uncovered between
the CNN and MG methodologies. For example, pooling operation and feature
extraction in CNN correspond directly to restriction operation and iterative
smoothers in MG, respectively. As the solution space is often the dual of the
data space in PDEs, the analogous concept of feature space and data space
(which are dual to each other) is introduced in CNN. With such connections and
the new concept in the unified model, the function of various convolution
operations and pooling used in CNN can be better understood. As a result,
modified CNN models (with fewer weights and hyper-parameters) are developed
that exhibit competitive and sometimes better performance in comparison with
existing CNN models when applied to both CIFAR-10 and CIFAR-100 data sets.
|
[
{
"created": "Tue, 29 Jan 2019 17:30:59 GMT",
"version": "v1"
},
{
"created": "Wed, 1 May 2019 20:49:32 GMT",
"version": "v2"
}
] |
2020-04-10
|
[
[
"He",
"Juncai",
""
],
[
"Xu",
"Jinchao",
""
]
] |
We develop a unified model, known as MgNet, that simultaneously recovers some convolutional neural networks (CNN) for image classification and multigrid (MG) methods for solving discretized partial differential equations (PDEs). This model is based on close connections that we have observed and uncovered between the CNN and MG methodologies. For example, pooling operation and feature extraction in CNN correspond directly to restriction operation and iterative smoothers in MG, respectively. As the solution space is often the dual of the data space in PDEs, the analogous concept of feature space and data space (which are dual to each other) is introduced in CNN. With such connections and the new concept in the unified model, the function of various convolution operations and pooling used in CNN can be better understood. As a result, modified CNN models (with fewer weights and hyper-parameters) are developed that exhibit competitive and sometimes better performance in comparison with existing CNN models when applied to both CIFAR-10 and CIFAR-100 data sets.
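The connection sketched in this abstract can be illustrated with the shared residual-correction step u <- u + B * (f - A * u), where '*' is a 2-D convolution: in classical multigrid A is the discretised operator and B a smoother, while in an MgNet-style network both kernels would be learned. The snippet below is our reading of that connection for illustration only; it omits nonlinearities, channels, and restriction/prolongation, and is not the paper's layer definition.

import numpy as np
from scipy.signal import convolve2d

def residual_correction_step(u, f, A, B):
    # One smoothing / feature-extraction step: u <- u + B * (f - A * u).
    residual = f - convolve2d(u, A, mode="same")
    return u + convolve2d(residual, B, mode="same")

# Toy usage: damp a random field toward data f with a 5-point Laplacian-like A
# and a damped-Jacobi-like smoother B.
rng = np.random.default_rng(3)
u = rng.standard_normal((32, 32))
f = np.zeros((32, 32))
A = np.array([[0.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 0.0]])
B = 0.2 * np.array([[0.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 0.0]])
for _ in range(10):
    u = residual_correction_step(u, f, A, B)
print(np.abs(u).mean())   # decreases as the iteration smooths u toward the solution of A*u = f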
|
1811.03496
|
Helge Spieker
|
Helge Spieker, Arnaud Gotlieb, Morten Mossige
|
Multi-Cycle Assignment Problems with Rotational Diversity
|
Extended journal version
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-cycle assignment problems address scenarios where a series of general
assignment problems has to be solved sequentially. Subsequent cycles can differ
from previous ones due to changing availability or creation of tasks and
agents, which makes an upfront static schedule infeasible and introduces
uncertainty in the task-agent assignment process. We consider the setting
where, besides profit maximization, it is also desired to maintain diverse
assignments for tasks and agents, such that all tasks have been assigned to all
agents over subsequent cycles. This problem of multi-cycle assignment with
rotational diversity is approached via two sub-problems: the outer problem
augments the original profit maximization objective with additional information
about the state of rotational diversity, while the inner problem solves the
adjusted general assignment problem in a single execution of the model. We
discuss strategies to augment the profit values and evaluate them
experimentally. The method's efficacy is shown in three case studies:
multi-cycle variants of the multiple knapsack and the multiple subset sum
problems, and a real-world case study on the test case selection and assignment
problem from the software engineering domain.
|
[
{
"created": "Thu, 8 Nov 2018 15:39:35 GMT",
"version": "v1"
},
{
"created": "Thu, 19 Dec 2019 16:11:28 GMT",
"version": "v2"
}
] |
2019-12-20
|
[
[
"Spieker",
"Helge",
""
],
[
"Gotlieb",
"Arnaud",
""
],
[
"Mossige",
"Morten",
""
]
] |
Multi-cycle assignment problems address scenarios where a series of general assignment problems has to be solved sequentially. Subsequent cycles can differ from previous ones due to changing availability or creation of tasks and agents, which makes an upfront static schedule infeasible and introduces uncertainty in the task-agent assignment process. We consider the setting where, besides profit maximization, it is also desired to maintain diverse assignments for tasks and agents, such that all tasks have been assigned to all agents over subsequent cycles. This problem of multi-cycle assignment with rotational diversity is approached via two sub-problems: the outer problem augments the original profit maximization objective with additional information about the state of rotational diversity, while the inner problem solves the adjusted general assignment problem in a single execution of the model. We discuss strategies to augment the profit values and evaluate them experimentally. The method's efficacy is shown in three case studies: multi-cycle variants of the multiple knapsack and the multiple subset sum problems, and a real-world case study on the test case selection and assignment problem from the software engineering domain.
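The outer/inner split described here can be made concrete with a small sketch: each cycle the base profits are augmented with a bonus for task-agent pairs that have not been assigned recently, and the adjusted assignment problem is then solved in one shot. The augmentation rule and the use of the Hungarian solver below are illustrative choices, not the strategies evaluated in the paper.

import numpy as np
from scipy.optimize import linear_sum_assignment

def rotational_assignment(profits_per_cycle, bonus=1.5):
    # profits_per_cycle: list of (n_agents x n_tasks) profit matrices, one per cycle.
    # Outer problem: augment the base profits with a bonus that grows with how long
    # a task-agent pair has gone unassigned. Inner problem: solve the resulting
    # assignment problem in a single shot (here with the Hungarian algorithm).
    n_agents, n_tasks = profits_per_cycle[0].shape
    last_used = np.full((n_agents, n_tasks), -1)        # cycle of the last assignment, -1 = never
    schedule = []
    for t, profits in enumerate(profits_per_cycle):
        staleness = t - last_used                        # larger = unassigned for longer
        augmented = profits + bonus * staleness
        rows, cols = linear_sum_assignment(augmented, maximize=True)
        for agent, task in zip(rows, cols):
            last_used[agent, task] = t
        schedule.append(list(zip(rows.tolist(), cols.tolist())))
    return schedule

# Toy usage: three cycles with identical base profits; by the third cycle the
# diversity bonus outweighs the profit gap and the assignment rotates.
base = np.array([[3.0, 1.0], [1.0, 3.0]])
print(rotational_assignment([base, base, base]))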
|
2406.11819
|
Joseph Tung
|
Joseph Tung, Gene Chou, Ruojin Cai, Guandao Yang, Kai Zhang, Gordon
Wetzstein, Bharath Hariharan, Noah Snavely
|
MegaScenes: Scene-Level View Synthesis at Scale
|
Our project page is at https://megascenes.github.io
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Scene-level novel view synthesis (NVS) is fundamental to many vision and
graphics applications. Recently, pose-conditioned diffusion models have led to
significant progress by extracting 3D information from 2D foundation models,
but these methods are limited by the lack of scene-level training data. Common
dataset choices either consist of isolated objects (Objaverse), or of
object-centric scenes with limited pose distributions (DTU, CO3D). In this
paper, we create a large-scale scene-level dataset from Internet photo
collections, called MegaScenes, which contains over 100K structure from motion
(SfM) reconstructions from around the world. Internet photos represent a
scalable data source but come with challenges such as lighting and transient
objects. We address these issues to further create a subset suitable for the
task of NVS. Additionally, we analyze failure cases of state-of-the-art NVS
methods and significantly improve generation consistency. Through extensive
experiments, we validate the effectiveness of both our dataset and method on
generating in-the-wild scenes. For details on the dataset and code, see our
project page at https://megascenes.github.io .
|
[
{
"created": "Mon, 17 Jun 2024 17:55:55 GMT",
"version": "v1"
}
] |
2024-06-18
|
[
[
"Tung",
"Joseph",
""
],
[
"Chou",
"Gene",
""
],
[
"Cai",
"Ruojin",
""
],
[
"Yang",
"Guandao",
""
],
[
"Zhang",
"Kai",
""
],
[
"Wetzstein",
"Gordon",
""
],
[
"Hariharan",
"Bharath",
""
],
[
"Snavely",
"Noah",
""
]
] |
Scene-level novel view synthesis (NVS) is fundamental to many vision and graphics applications. Recently, pose-conditioned diffusion models have led to significant progress by extracting 3D information from 2D foundation models, but these methods are limited by the lack of scene-level training data. Common dataset choices either consist of isolated objects (Objaverse), or of object-centric scenes with limited pose distributions (DTU, CO3D). In this paper, we create a large-scale scene-level dataset from Internet photo collections, called MegaScenes, which contains over 100K structure from motion (SfM) reconstructions from around the world. Internet photos represent a scalable data source but come with challenges such as lighting and transient objects. We address these issues to further create a subset suitable for the task of NVS. Additionally, we analyze failure cases of state-of-the-art NVS methods and significantly improve generation consistency. Through extensive experiments, we validate the effectiveness of both our dataset and method on generating in-the-wild scenes. For details on the dataset and code, see our project page at https://megascenes.github.io .
|
2008.02935
|
EPTCS
|
Horatiu Cirstea (LORIA, CNRS & INRIA & Universit\'e de Lorraine),
Alexis Grall (LORIA, CNRS & INRIA & Universit\'e de Lorraine), Dominique
M\'ery (LORIA, CNRS & INRIA & Universit\'e de Lorraine)
|
Generating Distributed Programs from Event-B Models
|
In Proceedings VPT/HCVS 2020, arXiv:2008.02483
|
EPTCS 320, 2020, pp. 110-124
|
10.4204/EPTCS.320.8
| null |
cs.PL cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Distributed algorithms offer challenges in checking that they meet their
specifications. Verification techniques can be extended to deal with the
verification of safety properties of distributed algorithms. In this paper, we
present an approach for combining correct-by-construction approaches and
transformations of formal models (Event-B) into programs (DistAlgo) to address
the design of verified distributed programs. We define a subset LB (Local
Event-B) of the Event-B modelling language restricted to events modelling the
classical actions of distributed programs as internal or local computations,
sending messages and receiving messages. We then define transformations of the
various elements of the LB language into DistAlgo programs. The general
methodology consists in starting from a statement of the problem to be programmed
and then progressively producing an LB model obtained after several refinement
steps of the initial LB model. The derivation of the LB model is not described
in the current paper and has already been addressed in other works. The
transformation of LB models into DistAlgo programs is illustrated through a
simple example. The refinement process and the soundness of the transformation
allow one to produce correct-by-construction distributed programs.
|
[
{
"created": "Fri, 7 Aug 2020 01:23:53 GMT",
"version": "v1"
}
] |
2020-08-10
|
[
[
"Cirstea",
"Horatiu",
"",
"LORIA, CNRS & INRIA & Université de Lorraine"
],
[
"Grall",
"Alexis",
"",
"LORIA, CNRS & INRIA & Université de Lorraine"
],
[
"Méry",
"Dominique",
"",
"LORIA, CNRS & INRIA & Université de Lorraine"
]
] |
Distributed algorithms offer challenges in checking that they meet their specifications. Verification techniques can be extended to deal with the verification of safety properties of distributed algorithms. In this paper, we present an approach for combining correct-by-construction approaches and transformations of formal models (Event-B) into programs (DistAlgo) to address the design of verified distributed programs. We define a subset LB (Local Event-B) of the Event-B modelling language restricted to events modelling the classical actions of distributed programs as internal or local computations, sending messages and receiving messages. We then define transformations of the various elements of the LB language into DistAlgo programs. The general methodology consists in starting from a statement of the problem to be programmed and then progressively producing an LB model obtained after several refinement steps of the initial LB model. The derivation of the LB model is not described in the current paper and has already been addressed in other works. The transformation of LB models into DistAlgo programs is illustrated through a simple example. The refinement process and the soundness of the transformation allow one to produce correct-by-construction distributed programs.
|
2111.05097
|
Tarek Saier
|
Tarek Saier, Michael F\"arber, Tornike Tsereteli
|
Cross-Lingual Citations in English Papers: A Large-Scale Analysis of
Prevalence, Usage, and Impact
|
to be published in the International Journal on Digital Libraries
| null |
10.1007/s00799-021-00312-z
| null |
cs.DL cs.IR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Citation information in scholarly data is an important source of insight into
the reception of publications and the scholarly discourse. Outcomes of citation
analyses and the applicability of citation based machine learning approaches
heavily depend on the completeness of such data. One particular shortcoming of
scholarly data nowadays is that non-English publications are often not included
in data sets, or that language metadata is not available. Because of this,
citations between publications of differing languages (cross-lingual citations)
have only been studied to a very limited degree. In this paper, we present an
analysis of cross-lingual citations based on over one million English papers,
spanning three scientific disciplines and a time span of three decades. Our
investigation covers differences between cited languages and disciplines,
trends over time, and the usage characteristics as well as impact of
cross-lingual citations. Among our findings are an increasing rate of citations
to publications written in Chinese, citations being primarily to local
non-English languages, and consistency in citation intent between cross- and
monolingual citations. To facilitate further research, we make our collected
data and source code publicly available.
|
[
{
"created": "Sun, 7 Nov 2021 15:34:02 GMT",
"version": "v1"
},
{
"created": "Wed, 10 Nov 2021 08:48:05 GMT",
"version": "v2"
}
] |
2022-01-12
|
[
[
"Saier",
"Tarek",
""
],
[
"Färber",
"Michael",
""
],
[
"Tsereteli",
"Tornike",
""
]
] |
Citation information in scholarly data is an important source of insight into the reception of publications and the scholarly discourse. Outcomes of citation analyses and the applicability of citation based machine learning approaches heavily depend on the completeness of such data. One particular shortcoming of scholarly data nowadays is that non-English publications are often not included in data sets, or that language metadata is not available. Because of this, citations between publications of differing languages (cross-lingual citations) have only been studied to a very limited degree. In this paper, we present an analysis of cross-lingual citations based on over one million English papers, spanning three scientific disciplines and a time span of three decades. Our investigation covers differences between cited languages and disciplines, trends over time, and the usage characteristics as well as impact of cross-lingual citations. Among our findings are an increasing rate of citations to publications written in Chinese, citations being primarily to local non-English languages, and consistency in citation intent between cross- and monolingual citations. To facilitate further research, we make our collected data and source code publicly available.
|
2302.09130
|
Mary Schlembach
|
William H. Mischo, Mary C. Schlembach, Elisandro Cabada
|
Relationships between Journal Publication, Citation, and Usage Metrics
within a Carnegie R1 University Collection: A Correlation Analysis
|
23 pages; 9 tables
|
College and Research Libraries, March 2024
| null | null |
cs.DL
|
http://creativecommons.org/licenses/by/4.0/
|
This study examines the correlational relationships between local journal
authorship, local and external citation counts, full-text downloads,
link-resolver clicks, and four global journal impact factor indices within an
all-disciplines journal collection of 12,200 titles and six subject subsets at
the University of Illinois at Urbana-Champaign (UIUC) Library. While earlier
investigations of the relationships between usage (downloads) and citation
metrics have been inconclusive, this study shows strong correlations in the
all-disciplines set and most subject subsets. The normalized Eigenfactor was
the only global impact factor index that correlated highly with local journal
metrics. Some of the identified disciplinary variances among the six subject
subsets may be explained by the journal publication aspirations of UIUC
researchers. The correlations between authorship and local citations in the six
specific subject subsets closely match national department or program rankings.
|
[
{
"created": "Fri, 17 Feb 2023 20:45:47 GMT",
"version": "v1"
}
] |
2023-02-21
|
[
[
"Mischo",
"William H.",
""
],
[
"Schlembach",
"Mary C.",
""
],
[
"Cabada",
"Elisandro",
""
]
] |
This study examines the correlational relationships between local journal authorship, local and external citation counts, full-text downloads, link-resolver clicks, and four global journal impact factor indices within an all-disciplines journal collection of 12,200 titles and six subject subsets at the University of Illinois at Urbana-Champaign (UIUC) Library. While earlier investigations of the relationships between usage (downloads) and citation metrics have been inconclusive, this study shows strong correlations in the all-disciplines set and most subject subsets. The normalized Eigenfactor was the only global impact factor index that correlated highly with local journal metrics. Some of the identified disciplinary variances among the six subject subsets may be explained by the journal publication aspirations of UIUC researchers. The correlations between authorship and local citations in the six specific subject subsets closely match national department or program rankings.
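The kind of usage/citation correlation reported above can be computed with a few lines of Python; the toy numbers and column meanings below are placeholders for illustration, not the UIUC data.

# Illustrative only: computing a usage/citation correlation of the kind the
# study reports. The column names and toy data are assumptions, not the
# authors' dataset.
import numpy as np
from scipy.stats import spearmanr, pearsonr

downloads = np.array([120, 45, 300, 80, 15, 220])   # full-text downloads per journal
citations = np.array([ 40, 10, 150, 30,  5,  90])   # local citation counts per journal

rho, p_rho = spearmanr(downloads, citations)
r, p_r = pearsonr(downloads, citations)
print(f"Spearman rho = {rho:.3f} (p = {p_rho:.3g}), Pearson r = {r:.3f} (p = {p_r:.3g})")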
|
2402.12336
|
Christian Schlarmann
|
Christian Schlarmann, Naman Deep Singh, Francesco Croce, Matthias Hein
|
Robust CLIP: Unsupervised Adversarial Fine-Tuning of Vision Embeddings
for Robust Large Vision-Language Models
|
ICML 2024 Oral
| null | null | null |
cs.LG cs.AI cs.CV stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
Multi-modal foundation models like OpenFlamingo, LLaVA, and GPT-4 are
increasingly used for various real-world tasks. Prior work has shown that these
models are highly vulnerable to adversarial attacks on the vision modality.
These attacks can be leveraged to spread fake information or defraud users, and
thus pose a significant risk, which makes the robustness of large multi-modal
foundation models a pressing problem. The CLIP model, or one of its variants,
is used as a frozen vision encoder in many large vision-language models
(LVLMs), e.g. LLaVA and OpenFlamingo. We propose an unsupervised adversarial
fine-tuning scheme to obtain a robust CLIP vision encoder, which yields
robustness on all vision down-stream tasks (LVLMs, zero-shot classification)
that rely on CLIP. In particular, we show that stealth-attacks on users of
LVLMs by a malicious third party providing manipulated images are no longer
possible once one replaces the original CLIP model with our robust one. No
retraining or fine-tuning of the down-stream LVLMs is required. The code and
robust models are available at https://github.com/chs20/RobustVLM
|
[
{
"created": "Mon, 19 Feb 2024 18:09:48 GMT",
"version": "v1"
},
{
"created": "Wed, 5 Jun 2024 15:32:03 GMT",
"version": "v2"
}
] |
2024-06-06
|
[
[
"Schlarmann",
"Christian",
""
],
[
"Singh",
"Naman Deep",
""
],
[
"Croce",
"Francesco",
""
],
[
"Hein",
"Matthias",
""
]
] |
Multi-modal foundation models like OpenFlamingo, LLaVA, and GPT-4 are increasingly used for various real-world tasks. Prior work has shown that these models are highly vulnerable to adversarial attacks on the vision modality. These attacks can be leveraged to spread fake information or defraud users, and thus pose a significant risk, which makes the robustness of large multi-modal foundation models a pressing problem. The CLIP model, or one of its variants, is used as a frozen vision encoder in many large vision-language models (LVLMs), e.g. LLaVA and OpenFlamingo. We propose an unsupervised adversarial fine-tuning scheme to obtain a robust CLIP vision encoder, which yields robustness on all vision down-stream tasks (LVLMs, zero-shot classification) that rely on CLIP. In particular, we show that stealth-attacks on users of LVLMs by a malicious third party providing manipulated images are no longer possible once one replaces the original CLIP model with our robust one. No retraining or fine-tuning of the down-stream LVLMs is required. The code and robust models are available at https://github.com/chs20/RobustVLM
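A minimal PyTorch sketch of unsupervised adversarial fine-tuning in the spirit described above: an inner PGD loop perturbs the image to push its embedding away from the frozen clean embedding, and an outer step updates the encoder to pull it back. The toy encoder, perturbation radius, and loss are assumptions made for illustration; the released method and hyperparameters are in the linked repository.

# Minimal sketch (not the released method): unsupervised adversarial
# fine-tuning of an image encoder. Inner loop: PGD perturbation that pushes
# the embedding away from the clean/frozen embedding; outer loop: update the
# encoder to pull it back. Radius, steps, and the toy encoder are assumptions.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))  # stand-in for a CLIP vision tower
frozen = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))
frozen.load_state_dict(encoder.state_dict())
for p in frozen.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(encoder.parameters(), lr=1e-4)
eps, alpha, pgd_steps = 4 / 255, 1 / 255, 5

def pgd(x):
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(pgd_steps):
        loss = (encoder(x + delta) - frozen(x)).pow(2).sum()
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    return delta.detach()

for _ in range(3):                      # a few toy steps on random "images"
    x = torch.rand(8, 3, 32, 32)
    delta = pgd(x)
    loss = (encoder(x + delta) - frozen(x)).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    print(float(loss))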
|
2010.02305
|
Jatin Ganhotra
|
Jatin Ganhotra, Haggai Roitman, Doron Cohen, Nathaniel Mills, Chulaka
Gunasekara, Yosi Mass, Sachindra Joshi, Luis Lastras and David Konopnicki
|
Conversational Document Prediction to Assist Customer Care Agents
|
EMNLP 2020. The released Twitter dataset is available at:
https://github.com/IBM/twitter-customer-care-document-prediction
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
A frequent pattern in customer care conversations is the agents responding
with appropriate webpage URLs that address users' needs. We study the task of
predicting the documents that customer care agents can use to facilitate users'
needs. We also introduce a new public dataset which supports the aforementioned
problem. Using this dataset and two others, we investigate state-of-the-art
deep learning (DL) and information retrieval (IR) models for the task.
Additionally, we analyze the practicality of such systems in terms of inference
time complexity. Our results show that a hybrid IR+DL approach provides the
best of both worlds.
|
[
{
"created": "Mon, 5 Oct 2020 19:53:41 GMT",
"version": "v1"
}
] |
2020-10-07
|
[
[
"Ganhotra",
"Jatin",
""
],
[
"Roitman",
"Haggai",
""
],
[
"Cohen",
"Doron",
""
],
[
"Mills",
"Nathaniel",
""
],
[
"Gunasekara",
"Chulaka",
""
],
[
"Mass",
"Yosi",
""
],
[
"Joshi",
"Sachindra",
""
],
[
"Lastras",
"Luis",
""
],
[
"Konopnicki",
"David",
""
]
] |
A frequent pattern in customer care conversations is the agents responding with appropriate webpage URLs that address users' needs. We study the task of predicting the documents that customer care agents can use to facilitate users' needs. We also introduce a new public dataset which supports the aforementioned problem. Using this dataset and two others, we investigate state-of-the-art deep learning (DL) and information retrieval (IR) models for the task. Additionally, we analyze the practicality of such systems in terms of inference time complexity. Our results show that a hybrid IR+DL approach provides the best of both worlds.
|
1511.00661
|
Stephen Chestnut
|
Vladimir Braverman, Stephen R. Chestnut, Nikita Ivkin, David P.
Woodruff
|
Beating CountSketch for Heavy Hitters in Insertion Streams
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Given a stream $p_1, \ldots, p_m$ of items from a universe $\mathcal{U}$,
which, without loss of generality we identify with the set of integers $\{1, 2,
\ldots, n\}$, we consider the problem of returning all $\ell_2$-heavy hitters,
i.e., those items $j$ for which $f_j \geq \epsilon \sqrt{F_2}$, where $f_j$ is
the number of occurrences of item $j$ in the stream, and $F_2 = \sum_{i \in
[n]} f_i^2$. Such a guarantee is considerably stronger than the
$\ell_1$-guarantee, which finds those $j$ for which $f_j \geq \epsilon m$. In
2002, Charikar, Chen, and Farach-Colton suggested the {\sf CountSketch} data
structure, which finds all such $j$ using $\Theta(\log^2 n)$ bits of space (for
constant $\epsilon > 0$). The only known lower bound is $\Omega(\log n)$ bits
of space, which comes from the need to specify the identities of the items
found. In this paper we show it is possible to achieve $O(\log n \log \log n)$
bits of space for this problem. Our techniques, based on Gaussian processes,
lead to a number of other new results for data streams, including
(1) The first algorithm for estimating $F_2$ simultaneously at all points in
a stream using only $O(\log n\log\log n)$ bits of space, improving a natural
union bound and the algorithm of Huang, Tai, and Yi (2014).
(2) A way to estimate the $\ell_{\infty}$ norm of a stream up to additive
error $\epsilon \sqrt{F_2}$ with $O(\log n\log\log n)$ bits of space, resolving
Open Question 3 from the IITK 2006 list for insertion only streams.
|
[
{
"created": "Mon, 2 Nov 2015 20:03:39 GMT",
"version": "v1"
}
] |
2015-11-03
|
[
[
"Braverman",
"Vladimir",
""
],
[
"Chestnut",
"Stephen R.",
""
],
[
"Ivkin",
"Nikita",
""
],
[
"Woodruff",
"David P.",
""
]
] |
Given a stream $p_1, \ldots, p_m$ of items from a universe $\mathcal{U}$, which, without loss of generality we identify with the set of integers $\{1, 2, \ldots, n\}$, we consider the problem of returning all $\ell_2$-heavy hitters, i.e., those items $j$ for which $f_j \geq \epsilon \sqrt{F_2}$, where $f_j$ is the number of occurrences of item $j$ in the stream, and $F_2 = \sum_{i \in [n]} f_i^2$. Such a guarantee is considerably stronger than the $\ell_1$-guarantee, which finds those $j$ for which $f_j \geq \epsilon m$. In 2002, Charikar, Chen, and Farach-Colton suggested the {\sf CountSketch} data structure, which finds all such $j$ using $\Theta(\log^2 n)$ bits of space (for constant $\epsilon > 0$). The only known lower bound is $\Omega(\log n)$ bits of space, which comes from the need to specify the identities of the items found. In this paper we show it is possible to achieve $O(\log n \log \log n)$ bits of space for this problem. Our techniques, based on Gaussian processes, lead to a number of other new results for data streams, including (1) The first algorithm for estimating $F_2$ simultaneously at all points in a stream using only $O(\log n\log\log n)$ bits of space, improving a natural union bound and the algorithm of Huang, Tai, and Yi (2014). (2) A way to estimate the $\ell_{\infty}$ norm of a stream up to additive error $\epsilon \sqrt{F_2}$ with $O(\log n\log\log n)$ bits of space, resolving Open Question 3 from the IITK 2006 list for insertion only streams.
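The guarantee being asked for can be stated in a few lines of Python; this non-streaming sketch only clarifies the $\ell_2$-heavy-hitter definition (a real streaming algorithm must achieve it within the stated space bounds).

# Illustrative (non-streaming) check of the l2-heavy-hitter definition:
# item j is a heavy hitter when f_j >= eps * sqrt(F_2) with F_2 = sum_i f_i^2.
from collections import Counter
from math import sqrt

def l2_heavy_hitters(stream, eps):
    freq = Counter(stream)                     # f_j for every item j
    f2 = sum(f * f for f in freq.values())     # F_2 = sum_j f_j^2
    threshold = eps * sqrt(f2)
    return {j for j, f in freq.items() if f >= threshold}

stream = [1, 2, 2, 3, 3, 3, 3, 7, 7, 7, 7, 7, 7, 7, 7]
print(l2_heavy_hitters(stream, eps=0.4))       # {3, 7} for this toy stream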
|
1904.00205
|
Taimoor Tariq Mr.
|
Taimoor Tariq, Juan Luis Gonzalez, Munchurl Kim
|
A HVS-inspired Attention to Improve Loss Metrics for CNN-based
Perception-Oriented Super-Resolution
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep Convolutional Neural Network (CNN) features have been demonstrated to be
effective perceptual quality features. The perceptual loss, based on feature
maps of pre-trained CNNs, has proven to be remarkably effective for CNN-based
perceptual image restoration problems. In this work, taking inspiration from
the Human Visual System (HVS) and visual perception, we propose a spatial
attention mechanism based on the dependency of human contrast sensitivity on
spatial frequency. We identify regions in input images, based on the underlying
spatial frequency, which are not generally well reconstructed during
Super-Resolution but are most important in terms of visual sensitivity. Based
on this prior, we design a spatial attention map that is applied to feature
maps in the perceptual loss and its variants, helping them to identify regions
that are of more perceptual importance. The results demonstrate that our
technique improves the ability of the perceptual loss and contextual loss to
deliver more natural images in CNN-based super-resolution.
|
[
{
"created": "Sat, 30 Mar 2019 12:14:50 GMT",
"version": "v1"
},
{
"created": "Sat, 27 Jul 2019 15:19:46 GMT",
"version": "v2"
}
] |
2019-07-30
|
[
[
"Tariq",
"Taimoor",
""
],
[
"Gonzalez",
"Juan Luis",
""
],
[
"Kim",
"Munchurl",
""
]
] |
Deep Convolutional Neural Network (CNN) features have been demonstrated to be effective perceptual quality features. The perceptual loss, based on feature maps of pre-trained CNNs, has proven to be remarkably effective for CNN-based perceptual image restoration problems. In this work, taking inspiration from the Human Visual System (HVS) and visual perception, we propose a spatial attention mechanism based on the dependency of human contrast sensitivity on spatial frequency. We identify regions in input images, based on the underlying spatial frequency, which are not generally well reconstructed during Super-Resolution but are most important in terms of visual sensitivity. Based on this prior, we design a spatial attention map that is applied to feature maps in the perceptual loss and its variants, helping them to identify regions that are of more perceptual importance. The results demonstrate that our technique improves the ability of the perceptual loss and contextual loss to deliver more natural images in CNN-based super-resolution.
|
1109.4433
|
Johanne Cohen
|
Olivier Bournez, J\'er\'emie Chalopin, Johanne Cohen, Xavier Koegler,
Mikael Rabie
|
Asymetric Pavlovian Populations
| null | null | null | null |
cs.DC cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Population protocols have been introduced by Angluin et al. as a model of
networks consisting of very limited mobile agents that interact in pairs but
with no control over their own movement. A collection of anonymous agents,
modeled by finite automata, interact pairwise according to some rules that
update their states. Predicates on the initial configurations that can be
computed by such protocols have been characterized as semi-linear predicates.
In an orthogonal way, several distributed systems have been termed in the
literature as being realizations of games in the sense of game theory. We
investigate under which conditions population protocols, or more generally
pairwise interaction rules, correspond to games. We show that restricting to
asymmetric games is not really a restriction: all predicates computable by
protocols can actually be computed by protocols corresponding to games, i.e.
any semi-linear predicate can be computed by a Pavlovian population
multi-protocol.
|
[
{
"created": "Tue, 20 Sep 2011 21:17:02 GMT",
"version": "v1"
}
] |
2011-09-22
|
[
[
"Bournez",
"Olivier",
""
],
[
"Chalopin",
"Jérémie",
""
],
[
"Cohen",
"Johanne",
""
],
[
"Koegler",
"Xavier",
""
],
[
"Rabie",
"Mikael",
""
]
] |
Population protocols have been introduced by Angluin et al. as a model of networks consisting of very limited mobile agents that interact in pairs but with no control over their own movement. A collection of anonymous agents, modeled by finite automata, interact pairwise according to some rules that update their states. Predicates on the initial configurations that can be computed by such protocols have been characterized as semi-linear predicates. In an orthogonal way, several distributed systems have been termed in the literature as being realizations of games in the sense of game theory. We investigate under which conditions population protocols, or more generally pairwise interaction rules, correspond to games. We show that restricting to asymmetric games is not really a restriction: all predicates computable by protocols can actually be computed by protocols corresponding to games, i.e. any semi-linear predicate can be computed by a Pavlovian population multi-protocol.
|
2004.01045
|
Dongfang Zhao
|
Dongfang Zhao
|
Topological Properties of Multi-Party Blockchain Transactions
| null | null | null | null |
cs.CR cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The cross-blockchain transaction remains one of the most challenging problems
in blockchains. The root cause of the challenge lies in the nondeterministic
nature of blockchains: An $n$-party transaction across multiple blockchains
might be partially rolled back due to the potential forks in any of the
participating blockchains---eventually, only one fork will survive in the
competition among miners. While some effort has recently been made toward
developing hierarchically distributed commit protocols to make multi-party
transactions progress, there is no systematic method to reason about the
transaction outcome. This paper tackles this problem from a perspective of
point-set topology. We construct multiple topological spaces for the
transactions and blockchain forks, and show that these spaces are internally
related through either homeomorphism or continuous functions. Combined
together, these tools allow us to reason about the cross-blockchain
transactions through the growing-fork topology, an intuitive representation of
blockchains.
|
[
{
"created": "Wed, 1 Apr 2020 03:56:42 GMT",
"version": "v1"
},
{
"created": "Fri, 3 Apr 2020 19:07:06 GMT",
"version": "v2"
},
{
"created": "Thu, 16 Apr 2020 00:46:57 GMT",
"version": "v3"
}
] |
2020-04-17
|
[
[
"Zhao",
"Dongfang",
""
]
] |
The cross-blockchain transaction remains one of the most challenging problems in blockchains. The root cause of the challenge lies in the nondeterministic nature of blockchains: An $n$-party transaction across multiple blockchains might be partially rolled back due to the potential forks in any of the participating blockchains---eventually, only one fork will survive in the competition among miners. While some effort has recently been made toward developing hierarchically distributed commit protocols to make multi-party transactions progress, there is no systematic method to reason about the transaction outcome. This paper tackles this problem from a perspective of point-set topology. We construct multiple topological spaces for the transactions and blockchain forks, and show that these spaces are internally related through either homeomorphism or continuous functions. Combined together, these tools allow us to reason about the cross-blockchain transactions through the growing-fork topology, an intuitive representation of blockchains.
|
2207.10648
|
Michael Desmond
|
Michael Desmond, Evelyn Duesterwald, Vatche Isahagian, Vinod Muthusamy
|
A No-Code Low-Code Paradigm for Authoring Business Automations Using
Natural Language
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Most business process automation is still developed using traditional
automation technologies such as workflow engines. These systems provide domain
specific languages that require both business knowledge and programming skills
to use effectively. As such, business users often lack adequate programming
skills to fully leverage these code-oriented environments. We propose a
paradigm for the construction of business automations using natural language.
The approach applies a large language model to translate business rules and
automations described in natural language, into a domain specific language
interpretable by a business rule engine. We compare the performance of various
language model configurations, across various target domains, and explore the
use of constrained decoding to ensure syntactically correct generation of
output.
|
[
{
"created": "Fri, 15 Jul 2022 19:17:55 GMT",
"version": "v1"
}
] |
2022-07-22
|
[
[
"Desmond",
"Michael",
""
],
[
"Duesterwald",
"Evelyn",
""
],
[
"Isahagian",
"Vatche",
""
],
[
"Muthusamy",
"Vinod",
""
]
] |
Most business process automation is still developed using traditional automation technologies such as workflow engines. These systems provide domain specific languages that require both business knowledge and programming skills to use effectively. As such, business users often lack adequate programming skills to fully leverage these code-oriented environments. We propose a paradigm for the construction of business automations using natural language. The approach applies a large language model to translate business rules and automations described in natural language, into a domain specific language interpretable by a business rule engine. We compare the performance of various language model configurations, across various target domains, and explore the use of constrained decoding to ensure syntactically correct generation of output.
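A toy Python sketch of the constrained-decoding idea mentioned above: candidate tokens are filtered through a hand-written grammar before the best one is chosen. The vocabulary, grammar, and scoring function are invented for illustration; they are not the paper's rule language or model.

# Toy illustration of constrained decoding: at each step the candidate tokens
# are filtered by a tiny hand-written "grammar" before picking the best one.
import random

VOCAB = ["if", "credit_score", ">", "<", "700", "then", "approve", "reject", "<eos>"]

def allowed_next(prefix):
    """A crude state machine for: if <field> <op> <value> then <action> <eos>."""
    n = len(prefix)
    states = [["if"], ["credit_score"], [">", "<"], ["700"],
              ["then"], ["approve", "reject"], ["<eos>"]]
    return states[n] if n < len(states) else ["<eos>"]

def fake_scores(prefix):
    """Stand-in for language-model logits: random scores over the vocabulary."""
    rng = random.Random(len(prefix))
    return {tok: rng.random() for tok in VOCAB}

def decode():
    out = []
    while not out or out[-1] != "<eos>":
        scores = fake_scores(out)
        legal = allowed_next(out)
        out.append(max(legal, key=lambda t: scores[t]))   # greedy over legal tokens only
    return " ".join(out[:-1])

print(decode())   # e.g. "if credit_score > 700 then approve"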
|
2112.00209
|
Yuki Okamoto
|
Yuki Okamoto, Shota Horiguchi, Masaaki Yamamoto, Keisuke Imoto, Yohei
Kawaguchi
|
Environmental Sound Extraction Using Onomatopoeic Words
|
Accepted to ICASSP2022
| null | null | null |
cs.SD cs.LG eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An onomatopoeic word, which is a character sequence that phonetically
imitates a sound, is effective in expressing characteristics of sound such as
duration, pitch, and timbre. We propose an environmental-sound-extraction
method using onomatopoeic words to specify the target sound to be extracted. By
this method, we estimate a time-frequency mask from an input mixture
spectrogram and an onomatopoeic word using a U-Net architecture, then extract
the corresponding target sound by masking the spectrogram. Experimental results
indicate that the proposed method can extract only the target sound
corresponding to the onomatopoeic word and performs better than conventional
methods that use sound-event classes to specify the target sound.
|
[
{
"created": "Wed, 1 Dec 2021 01:18:06 GMT",
"version": "v1"
},
{
"created": "Thu, 2 Dec 2021 03:55:40 GMT",
"version": "v2"
},
{
"created": "Fri, 4 Feb 2022 10:27:02 GMT",
"version": "v3"
},
{
"created": "Thu, 17 Feb 2022 04:41:59 GMT",
"version": "v4"
}
] |
2022-02-18
|
[
[
"Okamoto",
"Yuki",
""
],
[
"Horiguchi",
"Shota",
""
],
[
"Yamamoto",
"Masaaki",
""
],
[
"Imoto",
"Keisuke",
""
],
[
"Kawaguchi",
"Yohei",
""
]
] |
An onomatopoeic word, which is a character sequence that phonetically imitates a sound, is effective in expressing characteristics of sound such as duration, pitch, and timbre. We propose an environmental-sound-extraction method using onomatopoeic words to specify the target sound to be extracted. By this method, we estimate a time-frequency mask from an input mixture spectrogram and an onomatopoeic word using a U-Net architecture, then extract the corresponding target sound by masking the spectrogram. Experimental results indicate that the proposed method can extract only the target sound corresponding to the onomatopoeic word and performs better than conventional methods that use sound-event classes to specify the target sound.
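A small PyTorch sketch of the mask-and-multiply extraction described above, with a tiny MLP standing in for the paper's U-Net; all tensor shapes and the stub network are assumptions made for illustration.

# Sketch of mask-based extraction: a network maps (mixture spectrogram, word
# embedding) to a time-frequency mask, and the target is the element-wise
# product mask * spectrogram. The tiny MLP stands in for the paper's U-Net.
import torch
import torch.nn as nn

F_BINS, T_FRAMES, WORD_DIM = 257, 100, 64

class MaskEstimator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(F_BINS + WORD_DIM, 256), nn.ReLU(),
            nn.Linear(256, F_BINS), nn.Sigmoid())   # mask values in [0, 1]

    def forward(self, spec, word_emb):
        # spec: (B, T, F), word_emb: (B, D) broadcast to every frame
        w = word_emb.unsqueeze(1).expand(-1, spec.shape[1], -1)
        return self.net(torch.cat([spec, w], dim=-1))

model = MaskEstimator()
mixture = torch.rand(2, T_FRAMES, F_BINS)       # magnitude spectrogram of the mixture
word = torch.rand(2, WORD_DIM)                  # embedding of the onomatopoeic word
mask = model(mixture, word)
extracted = mask * mixture                      # masked spectrogram of the target sound
print(extracted.shape)                          # torch.Size([2, 100, 257])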
|
1908.07860
|
Zhao Zhang
|
Zhao Zhang, Lei Wang, Sheng Li, Yang Wang, Zheng Zhang, Zhengjun Zha
and Meng Wang
|
Adaptive Structure-constrained Robust Latent Low-Rank Coding for Image
Recovery
|
Accepted by ICDM 2019 as a regular paper
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a robust representation learning model called
Adaptive Structure-constrained Low-Rank Coding (AS-LRC) for the latent
representation of data. To recover the underlying subspaces more accurately,
AS-LRC seamlessly integrates an adaptive weighting based block-diagonal
structure-constrained low-rank representation and the group sparse salient
feature extraction into a unified framework. Specifically, AS-LRC performs the
latent decomposition of given data into a low-rank reconstruction by a
block-diagonal codes matrix, a group sparse locality-adaptive salient feature
part and a sparse error part. To enforce the block-diagonal structures adaptive
to different real datasets for the low-rank recovery, AS-LRC clearly computes
an auto-weighting matrix based on the locality-adaptive features and multiplies
by the low-rank coefficients for direct minimization at the same time. This
encourages the codes to be block-diagonal and can avoid the tricky issue of
choosing optimal neighborhood size or kernel width for the weight assignment,
an issue suffered by most local geometrical structure-preserving low-rank coding
methods. In addition, our AS-LRC selects the L2,1-norm on the projection for
extracting group sparse features rather than learning low-rank features by
Nuclear-norm regularization, which can make learnt features robust to noise and
outliers in samples, and can also make the feature coding process efficient.
Extensive visualizations and numerical results demonstrate the effectiveness of
our AS-LRC for image representation and recovery.
|
[
{
"created": "Wed, 21 Aug 2019 13:17:55 GMT",
"version": "v1"
},
{
"created": "Thu, 22 Aug 2019 02:28:14 GMT",
"version": "v2"
}
] |
2019-08-23
|
[
[
"Zhang",
"Zhao",
""
],
[
"Wang",
"Lei",
""
],
[
"Li",
"Sheng",
""
],
[
"Wang",
"Yang",
""
],
[
"Zhang",
"Zheng",
""
],
[
"Zha",
"Zhengjun",
""
],
[
"Wang",
"Meng",
""
]
] |
In this paper, we propose a robust representation learning model called Adaptive Structure-constrained Low-Rank Coding (AS-LRC) for the latent representation of data. To recover the underlying subspaces more accurately, AS-LRC seamlessly integrates an adaptive weighting based block-diagonal structure-constrained low-rank representation and the group sparse salient feature extraction into a unified framework. Specifically, AS-LRC performs the latent decomposition of given data into a low-rank reconstruction by a block-diagonal codes matrix, a group sparse locality-adaptive salient feature part and a sparse error part. To enforce the block-diagonal structures adaptive to different real datasets for the low-rank recovery, AS-LRC clearly computes an auto-weighting matrix based on the locality-adaptive features and multiplies by the low-rank coefficients for direct minimization at the same time. This encourages the codes to be block-diagonal and can avoid the tricky issue of choosing optimal neighborhood size or kernel width for the weight assignment, an issue suffered by most local geometrical structure-preserving low-rank coding methods. In addition, our AS-LRC selects the L2,1-norm on the projection for extracting group sparse features rather than learning low-rank features by Nuclear-norm regularization, which can make learnt features robust to noise and outliers in samples, and can also make the feature coding process efficient. Extensive visualizations and numerical results demonstrate the effectiveness of our AS-LRC for image representation and recovery.
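For reference, the L2,1 norm mentioned above is the sum of row-wise $\ell_2$ norms; minimizing it drives whole rows of the projection to zero, which is what produces group sparsity. A short numpy illustration (row-wise grouping is the common convention; the paper's exact orientation is not restated here):

# The L2,1 norm used for group sparsity: the sum of the l2 norms of the rows.
import numpy as np

def l21_norm(P):
    return np.sum(np.linalg.norm(P, axis=1))   # sum_i ||P[i, :]||_2

P = np.array([[3.0, 4.0],      # row norm 5
              [0.0, 0.0],      # row norm 0 (a "switched-off" feature group)
              [1.0, 0.0]])     # row norm 1
print(l21_norm(P))             # 6.0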
|
2310.01405
|
Andy Zou
|
Andy Zou, Long Phan, Sarah Chen, James Campbell, Phillip Guo, Richard
Ren, Alexander Pan, Xuwang Yin, Mantas Mazeika, Ann-Kathrin Dombrowski,
Shashwat Goel, Nathaniel Li, Michael J. Byun, Zifan Wang, Alex Mallen, Steven
Basart, Sanmi Koyejo, Dawn Song, Matt Fredrikson, J. Zico Kolter, Dan
Hendrycks
|
Representation Engineering: A Top-Down Approach to AI Transparency
|
Code is available at
https://github.com/andyzoujm/representation-engineering
| null | null | null |
cs.LG cs.AI cs.CL cs.CV cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we identify and characterize the emerging area of
representation engineering (RepE), an approach to enhancing the transparency of
AI systems that draws on insights from cognitive neuroscience. RepE places
population-level representations, rather than neurons or circuits, at the
center of analysis, equipping us with novel methods for monitoring and
manipulating high-level cognitive phenomena in deep neural networks (DNNs). We
provide baselines and an initial analysis of RepE techniques, showing that they
offer simple yet effective solutions for improving our understanding and
control of large language models. We showcase how these methods can provide
traction on a wide range of safety-relevant problems, including honesty,
harmlessness, power-seeking, and more, demonstrating the promise of top-down
transparency research. We hope that this work catalyzes further exploration of
RepE and fosters advancements in the transparency and safety of AI systems.
|
[
{
"created": "Mon, 2 Oct 2023 17:59:07 GMT",
"version": "v1"
},
{
"created": "Tue, 3 Oct 2023 08:39:09 GMT",
"version": "v2"
},
{
"created": "Tue, 10 Oct 2023 08:00:53 GMT",
"version": "v3"
}
] |
2023-10-11
|
[
[
"Zou",
"Andy",
""
],
[
"Phan",
"Long",
""
],
[
"Chen",
"Sarah",
""
],
[
"Campbell",
"James",
""
],
[
"Guo",
"Phillip",
""
],
[
"Ren",
"Richard",
""
],
[
"Pan",
"Alexander",
""
],
[
"Yin",
"Xuwang",
""
],
[
"Mazeika",
"Mantas",
""
],
[
"Dombrowski",
"Ann-Kathrin",
""
],
[
"Goel",
"Shashwat",
""
],
[
"Li",
"Nathaniel",
""
],
[
"Byun",
"Michael J.",
""
],
[
"Wang",
"Zifan",
""
],
[
"Mallen",
"Alex",
""
],
[
"Basart",
"Steven",
""
],
[
"Koyejo",
"Sanmi",
""
],
[
"Song",
"Dawn",
""
],
[
"Fredrikson",
"Matt",
""
],
[
"Kolter",
"J. Zico",
""
],
[
"Hendrycks",
"Dan",
""
]
] |
In this paper, we identify and characterize the emerging area of representation engineering (RepE), an approach to enhancing the transparency of AI systems that draws on insights from cognitive neuroscience. RepE places population-level representations, rather than neurons or circuits, at the center of analysis, equipping us with novel methods for monitoring and manipulating high-level cognitive phenomena in deep neural networks (DNNs). We provide baselines and an initial analysis of RepE techniques, showing that they offer simple yet effective solutions for improving our understanding and control of large language models. We showcase how these methods can provide traction on a wide range of safety-relevant problems, including honesty, harmlessness, power-seeking, and more, demonstrating the promise of top-down transparency research. We hope that this work catalyzes further exploration of RepE and fosters advancements in the transparency and safety of AI systems.
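One common concrete instance of population-level representation reading is a difference-of-means direction over hidden activations for contrasting prompt sets. The numpy sketch below uses synthetic activations and illustrates the general idea only; it is not claimed to be the paper's exact pipeline.

# Minimal sketch of a population-level "reading" recipe: take activations for
# two contrasting prompt sets, form a difference-of-means direction, and score
# new activations by projection. Synthetic data, illustration only.
import numpy as np

rng = np.random.default_rng(0)
d = 64
honest_acts = rng.normal(0.0, 1.0, size=(200, d)) + 0.8      # activations for one concept
dishonest_acts = rng.normal(0.0, 1.0, size=(200, d)) - 0.8    # activations for the contrast

direction = honest_acts.mean(axis=0) - dishonest_acts.mean(axis=0)
direction /= np.linalg.norm(direction)

new_act = rng.normal(0.0, 1.0, size=d) + 0.8
score = new_act @ direction            # larger => closer to the first concept
print(f"concept score: {score:.2f}")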
|
2302.11732
|
Mingke Yang
|
Mingke Yang and Yuming Zhou and Bixin Li and Yutian Tang
|
On Code Reuse from StackOverflow: An Exploratory Study on Jupyter
Notebook
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Jupyter Notebook is a popular tool among data analysts and scientists for
working with data. It provides a way to combine code, documentation, and
visualizations in a single, interactive environment, facilitating code reuse.
While code reuse can improve programming efficiency, it can also decrease
readability, security, and overall performance. We conduct a large-scale
exploratory study of code reuse practices in the Jupyter Notebook development
community on the Stack Overflow platform to understand the potential negative
impacts of code reuse. Our findings identified 1,097,470 Jupyter Notebook clone
pairs that reuse Stack Overflow code snippets, and the average code snippet has
7.91 code quality violations. Through our research, we gain insight into the
reasons behind Jupyter Notebook developers' decision to reuse code and the
potential drawbacks of this practice.
|
[
{
"created": "Thu, 23 Feb 2023 01:47:53 GMT",
"version": "v1"
}
] |
2023-02-24
|
[
[
"Yang",
"Mingke",
""
],
[
"Zhou",
"Yuming",
""
],
[
"Li",
"Bixin",
""
],
[
"Tang",
"Yutian",
""
]
] |
Jupyter Notebook is a popular tool among data analysts and scientists for working with data. It provides a way to combine code, documentation, and visualizations in a single, interactive environment, facilitating code reuse. While code reuse can improve programming efficiency, it can also decrease readability, security, and overall performance. We conduct a large-scale exploratory study of code reuse practices in the Jupyter Notebook development community on the Stack Overflow platform to understand the potential negative impacts of code reuse. Our findings identified 1,097,470 Jupyter Notebook clone pairs that reuse Stack Overflow code snippets, and the average code snippet has 7.91 code quality violations. Through our research, we gain insight into the reasons behind Jupyter Notebook developers' decision to reuse code and the potential drawbacks of this practice.
|
2312.04344
|
Pengcheng Chen
|
Pengcheng Chen, Ziyan Huang, Zhongying Deng, Tianbin Li, Yanzhou Su,
Haoyu Wang, Jin Ye, Yu Qiao, Junjun He
|
Enhancing Medical Task Performance in GPT-4V: A Comprehensive Study on
Prompt Engineering Strategies
| null | null | null | null |
cs.CL cs.AI cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
OpenAI's latest large vision-language model (LVLM), GPT-4V(ision), has piqued
considerable interest for its potential in medical applications. Despite its
promise, recent studies and internal reviews highlight its underperformance in
specialized medical tasks. This paper explores the boundary of GPT-4V's
capabilities in medicine, particularly in processing complex imaging data from
endoscopies, CT scans, and MRIs etc. Leveraging open-source datasets, we
assessed its foundational competencies, identifying substantial areas for
enhancement. Our research emphasizes prompt engineering, an often-underutilized
strategy for improving AI responsiveness. Through iterative testing, we refined
the model's prompts, significantly improving its interpretative accuracy and
relevance in medical imaging. From our comprehensive evaluations, we distilled
10 effective prompt engineering techniques, each fortifying GPT-4V's medical
acumen. These methodical enhancements facilitate more reliable, precise, and
clinically valuable insights from GPT-4V, advancing its operability in critical
healthcare environments. Our findings are pivotal for those employing AI in
medicine, providing clear, actionable guidance on harnessing GPT-4V's full
diagnostic potential.
|
[
{
"created": "Thu, 7 Dec 2023 15:05:59 GMT",
"version": "v1"
},
{
"created": "Tue, 12 Dec 2023 06:37:53 GMT",
"version": "v2"
}
] |
2023-12-13
|
[
[
"Chen",
"Pengcheng",
""
],
[
"Huang",
"Ziyan",
""
],
[
"Deng",
"Zhongying",
""
],
[
"Li",
"Tianbin",
""
],
[
"Su",
"Yanzhou",
""
],
[
"Wang",
"Haoyu",
""
],
[
"Ye",
"Jin",
""
],
[
"Qiao",
"Yu",
""
],
[
"He",
"Junjun",
""
]
] |
OpenAI's latest large vision-language model (LVLM), GPT-4V(ision), has piqued considerable interest for its potential in medical applications. Despite its promise, recent studies and internal reviews highlight its underperformance in specialized medical tasks. This paper explores the boundary of GPT-4V's capabilities in medicine, particularly in processing complex imaging data from endoscopies, CT scans, and MRIs etc. Leveraging open-source datasets, we assessed its foundational competencies, identifying substantial areas for enhancement. Our research emphasizes prompt engineering, an often-underutilized strategy for improving AI responsiveness. Through iterative testing, we refined the model's prompts, significantly improving its interpretative accuracy and relevance in medical imaging. From our comprehensive evaluations, we distilled 10 effective prompt engineering techniques, each fortifying GPT-4V's medical acumen. These methodical enhancements facilitate more reliable, precise, and clinically valuable insights from GPT-4V, advancing its operability in critical healthcare environments. Our findings are pivotal for those employing AI in medicine, providing clear, actionable guidance on harnessing GPT-4V's full diagnostic potential.
|
2101.10729
|
Hyoungsung Kim
|
Hyoungsung Kim, Jehyuk Jang, Sangjun Park, Heung-no Lee
|
Ethereum ECCPoW
|
It is under review at IEEE Access
|
IEEE Access, vol. 9, pp. 135942-135952, 2021
|
10.1109/ACCESS.2021.3113522
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The error-correction code based proof-of-work (ECCPoW) algorithm is based on
a low-density parity-check (LDPC) code. The ECCPoW can impair ASIC mining
through its capability of time-varying the parameters of the LDPC code. Previous
research on the ECCPoW algorithm has presented its theory and implementation
on Bitcoin, but it does not discuss how stable the block generation time is. A
finite mean block generation time (BGT) and a non-heavy-tailed BGT distribution
are the focus of this study. In the ECCPoW algorithm, BGT may show
a long-tailed distribution due to time-varying cryptographic puzzles. Thus, it
is of interest to see if the BGT distribution is not heavy-tailed and if it
shows a finite mean. If the distribution is heavy-tailed, then confirmation of
a transaction cannot be guaranteed. We present implementation, simulation, and
validation of ECCPoW Ethereum. In implementation, we explain how the ECCPoW
algorithm is integrated into Ethereum 1.0 as a new consensus algorithm. In the
simulation, we perform a multinode simulation to show that the ECCPoW Ethereum
works well with automatic difficulty change. In the validation, we present the
statistical results of the two-sample Anderson-Darling test to show that the
distribution of BGT satisfies the necessary condition of the exponential
distribution. Our implementation is downloadable at
https://github.com/cryptoecc/ETH-ECC.
|
[
{
"created": "Tue, 26 Jan 2021 11:50:06 GMT",
"version": "v1"
}
] |
2022-08-10
|
[
[
"Kim",
"Hyoungsung",
""
],
[
"Jang",
"Jehyuk",
""
],
[
"Park",
"Sangjun",
""
],
[
"Lee",
"Heung-no",
""
]
] |
The error-correction code based proof-of-work (ECCPoW) algorithm is based on a low-density parity-check (LDPC) code. The ECCPoW can impair ASIC mining through its capability of time-varying the parameters of the LDPC code. Previous research on the ECCPoW algorithm has presented its theory and implementation on Bitcoin, but it does not discuss how stable the block generation time is. A finite mean block generation time (BGT) and a non-heavy-tailed BGT distribution are the focus of this study. In the ECCPoW algorithm, BGT may show a long-tailed distribution due to time-varying cryptographic puzzles. Thus, it is of interest to see if the BGT distribution is not heavy-tailed and if it shows a finite mean. If the distribution is heavy-tailed, then confirmation of a transaction cannot be guaranteed. We present implementation, simulation, and validation of ECCPoW Ethereum. In implementation, we explain how the ECCPoW algorithm is integrated into Ethereum 1.0 as a new consensus algorithm. In the simulation, we perform a multinode simulation to show that the ECCPoW Ethereum works well with automatic difficulty change. In the validation, we present the statistical results of the two-sample Anderson-Darling test to show that the distribution of BGT satisfies the necessary condition of the exponential distribution. Our implementation is downloadable at https://github.com/cryptoecc/ETH-ECC.
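The validation step above relies on a two-sample Anderson-Darling test; a minimal Python sketch with synthetic exponential samples (not measurements from the ECCPoW chain) looks as follows.

# Two-sample Anderson-Darling test comparing an "observed" block-generation-time
# sample against an exponential reference with the same mean. Synthetic data.
import numpy as np
from scipy.stats import anderson_ksamp

rng = np.random.default_rng(1)
observed_bgt = rng.exponential(scale=13.0, size=500)   # e.g. seconds per block
reference = rng.exponential(scale=observed_bgt.mean(), size=500)

result = anderson_ksamp([observed_bgt, reference])
print(result.statistic, result.significance_level)      # high p => no evidence against same distribution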
|
cs/0304040
|
Russell Impagliazzo
|
Russell Impagliazzo
|
Hardness as randomness: a survey of universal derandomization
| null |
Proceedings of the ICM, Beijing 2002, vol. 3, 659--672
| null | null |
cs.CC
| null |
We survey recent developments in the study of probabilistic complexity
classes. While the evidence seems to support the conjecture that probabilism
can be deterministically simulated with relatively low overhead, i.e., that
$P=BPP$, it also indicates that this may be a difficult question to resolve. In
fact, proving that probabilistic algorithms have non-trivial deterministic
simulations is basically equivalent to proving circuit lower bounds, either in
the algebraic or Boolean models.
|
[
{
"created": "Mon, 28 Apr 2003 22:50:34 GMT",
"version": "v1"
}
] |
2008-12-15
|
[
[
"Impagliazzo",
"Russell",
""
]
] |
We survey recent developments in the study of probabilistic complexity classes. While the evidence seems to support the conjecture that probabilism can be deterministically simulated with relatively low overhead, i.e., that $P=BPP$, it also indicates that this may be a difficult question to resolve. In fact, proving that probabilistic algorithms have non-trivial deterministic simulations is basically equivalent to proving circuit lower bounds, either in the algebraic or Boolean models.
|
1811.05669
|
Luc Le Magoarou
|
Antoine Le Calvez (IRT b-com), Luc Le Magoarou (IRT b-com), St\'ephane
Paquelet (IRT b-com)
|
Massive MIMO channel estimation taking into account spherical waves
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Together with millimeter waves (mmWaves), massive multiple-input
multiple-output (MIMO) systems are key technological components of fifth
generation (5G) wireless communication systems. In such a context, geometric
considerations show that the largely adopted plane wave model (PWM) of the
channel potentially loses its validity. An alternative is to consider the more
accurate but more complex spherical wave model (SWM). This paper introduces an
intermediate parabolic wave model (ParWM), more accurate than the PWM while
less complex than the SWM. The validity domains of those three physical models
are assessed in a novel way. Finally, estimation algorithms for the SWM and
ParWM are proposed and compared with classical algorithms, showing a promising
performance complexity trade-off.
|
[
{
"created": "Wed, 14 Nov 2018 07:33:17 GMT",
"version": "v1"
}
] |
2018-11-15
|
[
[
"Calvez",
"Antoine Le",
"",
"IRT b-com"
],
[
"Magoarou",
"Luc Le",
"",
"IRT b-com"
],
[
"Paquelet",
"Stéphane",
"",
"IRT b-com"
]
] |
Together with millimeter waves (mmWaves), massive multiple-input multiple-output (MIMO) systems are key technological components of fifth generation (5G) wireless communication systems. In such a context, geometric considerations show that the largely adopted plane wave model (PWM) of the channel potentially loses its validity. An alternative is to consider the more accurate but more complex spherical wave model (SWM). This paper introduces an intermediate parabolic wave model (ParWM), more accurate than the PWM while less complex than the SWM. The validity domains of those three physical models are assessed in a novel way. Finally, estimation algorithms for the SWM and ParWM are proposed and compared with classical algorithms, showing a promising performance complexity trade-off.
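The gap between the plane wave and spherical wave models can be illustrated numerically: the sketch below compares the per-antenna phases each model predicts for a uniform linear array, with arbitrary geometry values assumed for illustration (the parabolic model of the paper is not reproduced here).

# Numpy sketch contrasting the two channel models: the plane wave model (PWM)
# uses only the angle of arrival, the spherical wave model (SWM) uses the
# exact antenna-to-user distances.
import numpy as np

wavelength = 0.01                 # 30 GHz carrier -> 1 cm wavelength
n_ant = 64
spacing = wavelength / 2
ant_pos = np.stack([np.arange(n_ant) * spacing, np.zeros(n_ant)], axis=1)  # ULA on the x-axis

user = np.array([0.4, 2.0])       # user 2 m in front of the array (near field for a large array)
theta = np.arctan2(user[0] - ant_pos[0, 0], user[1])   # angle seen from the first antenna

# Plane wave model: linear phase progression along the array.
pwm = np.exp(-2j * np.pi / wavelength * np.sin(theta) * np.arange(n_ant) * spacing)

# Spherical wave model: phase from the true per-antenna distance.
dist = np.linalg.norm(user - ant_pos, axis=1)
swm = np.exp(-2j * np.pi / wavelength * (dist - dist[0]))

print("max phase mismatch (rad):", np.abs(np.angle(pwm * np.conj(swm))).max())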
|
cs/0202032
|
Daniel Lehmann
|
Rica Gonen and Daniel Lehmann
|
Optimal Solutions for Multi-Unit Combinatorial Auctions: Branch and
Bound Heuristics
|
Presented at EC'00
|
Second ACM Conference on Electronic Commerce (EC'00) Minneapolis,
Minnesota, October 2000, pp. 13-20
| null | null |
cs.GT cs.AI
| null |
Finding optimal solutions for multi-unit combinatorial auctions is a hard
problem and finding approximations to the optimal solution is also hard. We
investigate the use of Branch-and-Bound techniques: they require both a way to
bound from above the value of the best allocation and a good criterion to
decide which bids are to be tried first. Different methods for efficiently
bounding from above the value of the best allocation are considered.
Original theoretical results characterize the best approximation ratio and the
ordering criterion that provides it. We suggest using this criterion.
|
[
{
"created": "Wed, 20 Feb 2002 14:38:39 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Gonen",
"Rica",
""
],
[
"Lehmann",
"Daniel",
""
]
] |
Finding optimal solutions for multi-unit combinatorial auctions is a hard problem and finding approximations to the optimal solution is also hard. We investigate the use of Branch-and-Bound techniques: they require both a way to bound from above the value of the best allocation and a good criterion to decide which bids are to be tried first. Different methods for efficiently bounding from above the value of the best allocation are considered. Original theoretical results characterize the best approximation ratio and the ordering criterion that provides it. We suggest using this criterion.
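A toy Python branch-and-bound for winner determination illustrates the two ingredients named above, an upper bound and a bid ordering. Both are deliberately simplistic placeholders here (sum of remaining bid prices as the bound, price-descending order), not the paper's criteria, and single-unit bundles are used to keep the sketch short.

# Toy branch-and-bound for winner determination: branch on taking or skipping
# each bid, prune with a crude upper bound (value so far plus all remaining
# bid prices).
def solve(bids):
    bids = sorted(bids, key=lambda b: -b[1])          # (bundle, price), high price first
    suffix = [0] * (len(bids) + 1)
    for i in range(len(bids) - 1, -1, -1):
        suffix[i] = suffix[i + 1] + bids[i][1]        # upper bound on what can still be added

    best = [0]

    def branch(i, used, value):
        if value + suffix[i] <= best[0]:              # prune: even the loose bound cannot win
            return
        if i == len(bids):
            best[0] = max(best[0], value)
            return
        bundle, price = bids[i]
        if not (bundle & used):                       # take the bid if its items are free
            branch(i + 1, used | bundle, value + price)
        branch(i + 1, used, value)                    # or skip it

    branch(0, frozenset(), 0)
    return best[0]

bids = [(frozenset("AB"), 10), (frozenset("BC"), 8), (frozenset("C"), 5), (frozenset("A"), 4)]
print(solve(bids))   # 15: take {A,B} for 10 and {C} for 5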
|
1905.11544
|
Naveed Akhtar Dr.
|
Naveed Akhtar, Mohammad A. A. K. Jalwana, Mohammed Bennamoun, Ajmal
Mian
|
Label Universal Targeted Attack
| null | null | null | null |
cs.CR cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce Label Universal Targeted Attack (LUTA) that makes a deep model
predict a label of attacker's choice for `any' sample of a given source class
with high probability. Our attack stochastically maximizes the log-probability
of the target label for the source class with first order gradient
optimization, while accounting for the gradient moments. It also suppresses the
leakage of attack information to the non-source classes for avoiding the attack
suspicions. The perturbations resulting from our attack achieve high fooling
ratios on the large-scale ImageNet and VGGFace models, and transfer well to the
Physical World. Given full control over the perturbation scope in LUTA, we also
demonstrate it as a tool for deep model autopsy. The proposed attack reveals
interesting perturbation patterns and observations regarding the deep models.
|
[
{
"created": "Mon, 27 May 2019 23:53:00 GMT",
"version": "v1"
},
{
"created": "Sat, 1 Jun 2019 05:11:50 GMT",
"version": "v2"
}
] |
2019-06-04
|
[
[
"Akhtar",
"Naveed",
""
],
[
"Jalwana",
"Mohammad A. A. K.",
""
],
[
"Bennamoun",
"Mohammed",
""
],
[
"Mian",
"Ajmal",
""
]
] |
We introduce Label Universal Targeted Attack (LUTA) that makes a deep model predict a label of attacker's choice for `any' sample of a given source class with high probability. Our attack stochastically maximizes the log-probability of the target label for the source class with first order gradient optimization, while accounting for the gradient moments. It also suppresses the leakage of attack information to the non-source classes for avoiding the attack suspicions. The perturbations resulting from our attack achieve high fooling ratios on the large-scale ImageNet and VGGFace models, and transfer well to the Physical World. Given full control over the perturbation scope in LUTA, we also demonstrate it as a tool for deep model autopsy. The proposed attack reveals interesting perturbation patterns and observations regarding the deep models.
|
2401.04594
|
Noam Zilberstein
|
Noam Zilberstein
|
A Relatively Complete Program Logic for Effectful Branching
| null | null | null | null |
cs.LO cs.PL
|
http://creativecommons.org/licenses/by/4.0/
|
Starting with Hoare Logic over 50 years ago, numerous sound and relatively
complete program logics have been devised to reason about the diverse programs
encountered in the real world. This includes reasoning about computational
effects, particularly those effects that cause the program execution to branch
into multiple paths due to, e.g., nondeterministic or probabilistic choice.
The recently introduced Outcome Logic reimagines Hoare Logic with effects at
its core, using an algebraic representation of choice to capture a variety of
effects. In this paper, we give the first relatively complete proof system for
Outcome Logic, handling general purpose looping for the first time. We also
show that this proof system applies to programs with various effects and that
it facilitates the reuse of proof fragments across different kinds of
specifications.
|
[
{
"created": "Tue, 9 Jan 2024 14:55:54 GMT",
"version": "v1"
}
] |
2024-01-10
|
[
[
"Zilberstein",
"Noam",
""
]
] |
Starting with Hoare Logic over 50 years ago, numerous sound and relatively complete program logics have been devised to reason about the diverse programs encountered in the real world. This includes reasoning about computational effects, particularly those effects that cause the program execution to branch into multiple paths due to, e.g., nondeterministic or probabilistic choice. The recently introduced Outcome Logic reimagines Hoare Logic with effects at its core, using an algebraic representation of choice to capture a variety of effects. In this paper, we give the first relatively complete proof system for Outcome Logic, handling general purpose looping for the first time. We also show that this proof system applies to programs with various effects and that it facilitates the reuse of proof fragments across different kinds of specifications.
|
1606.06854
|
Xingyi Zhou
|
Xingyi Zhou, Qingfu Wan, Wei Zhang, Xiangyang Xue, Yichen Wei
|
Model-based Deep Hand Pose Estimation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Previous learning-based hand pose estimation methods do not fully exploit
the prior information in hand model geometry. Instead, they usually rely on a
separate model fitting step to generate valid hand poses. Such post-processing
is inconvenient and sub-optimal. In this work, we propose a model-based deep
learning approach that adopts a forward kinematics based layer to
ensure the geometric validity of estimated poses. For the first time, we show
that embedding such a non-linear generative process in deep learning is
feasible for hand pose estimation. Our approach is verified on challenging
public datasets and achieves state-of-the-art performance.
|
[
{
"created": "Wed, 22 Jun 2016 08:47:06 GMT",
"version": "v1"
}
] |
2016-06-23
|
[
[
"Zhou",
"Xingyi",
""
],
[
"Wan",
"Qingfu",
""
],
[
"Zhang",
"Wei",
""
],
[
"Xue",
"Xiangyang",
""
],
[
"Wei",
"Yichen",
""
]
] |
Previous learning-based hand pose estimation methods do not fully exploit the prior information in hand model geometry. Instead, they usually rely on a separate model fitting step to generate valid hand poses. Such post-processing is inconvenient and sub-optimal. In this work, we propose a model-based deep learning approach that adopts a forward kinematics based layer to ensure the geometric validity of estimated poses. For the first time, we show that embedding such a non-linear generative process in deep learning is feasible for hand pose estimation. Our approach is verified on challenging public datasets and achieves state-of-the-art performance.
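The idea of a forward-kinematics layer can be illustrated with a planar toy chain: the network would predict joint angles, and a deterministic kinematic function maps them to joint positions, so every output is a geometrically valid pose. The 3-joint planar "finger" below is an assumption standing in for the full 3D hand model.

# Tiny sketch of the idea behind a forward-kinematics layer.
import numpy as np

def forward_kinematics(angles, bone_lengths):
    """angles: relative joint angles (rad); returns the (x, y) of each joint."""
    pts, pos, heading = [np.zeros(2)], np.zeros(2), 0.0
    for theta, length in zip(angles, bone_lengths):
        heading += theta                                   # angles accumulate along the chain
        pos = pos + length * np.array([np.cos(heading), np.sin(heading)])
        pts.append(pos)
    return np.stack(pts)

angles = np.array([0.3, 0.4, 0.5])        # predicted by the network in the real system
bones = np.array([4.0, 3.0, 2.0])         # fixed bone lengths from the hand model prior
print(forward_kinematics(angles, bones))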
|
2210.09900
|
Housheng Xie
|
Housheng Xie, Junhui Qiu, Yuan Dai, Yang Yang, Changcheng Xiang,
Yukuan Zhang
|
SA-DNet: A on-demand semantic object registration network adapting to
non-rigid deformation
|
15 pages, 12 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As an essential processing step before the fusing of infrared and visible
images, the performance of image registration determines whether the two images
can be fused at the correct spatial position. In practical scenarios, varied
imaging devices may lead to a change in perspective or a time gap between shots,
creating significant non-rigid spatial relationships between infrared and visible
images. Even if a large number of feature points are matched, the registration
accuracy may still be inadequate, affecting the result of image fusion and
other vision tasks. To alleviate this problem, we propose a Semantic-Aware
on-Demand registration network (SA-DNet), which mainly purpose is to confine
the feature matching process to the semantic region of interest (sROI) by
designing semantic-aware module (SAM) and HOL-Deep hybrid matching module
(HDM). After utilizing TPS to transform infrared and visible images based on
the corresponding feature points in sROI, the registered images are fused using
image fusion module (IFM) to achieve a fully functional registration and fusion
network. Moreover, we point out that for different demands, this type of
approach allows us to select semantic objects for feature matching as needed
and accomplishes task-specific registration based on specific requirements. To
demonstrate the robustness of SA-DNet for non-rigid distortions, we conduct
extensive experiments by comparing SA-DNet with five state-of-the-art infrared
and visible image feature matching methods, and the experimental results show
that our method adapts better to the presence of non-rigid distortions in the
images and provides semantically well-registered images.
|
[
{
"created": "Tue, 18 Oct 2022 14:41:28 GMT",
"version": "v1"
},
{
"created": "Wed, 26 Oct 2022 02:13:42 GMT",
"version": "v2"
}
] |
2022-10-27
|
[
[
"Xie",
"Housheng",
""
],
[
"Qiu",
"Junhui",
""
],
[
"Dai",
"Yuan",
""
],
[
"Yang",
"Yang",
""
],
[
"Xiang",
"Changcheng",
""
],
[
"Zhang",
"Yukuan",
""
]
] |
As an essential processing step before the fusion of infrared and visible images, the performance of image registration determines whether the two images can be fused at the correct spatial position. In practical scenarios, different imaging devices may lead to a change in perspective or a time gap between shots, resulting in significant non-rigid spatial relationships between infrared and visible images. Even if a large number of feature points are matched, the registration accuracy may still be inadequate, affecting the result of image fusion and other vision tasks. To alleviate this problem, we propose a Semantic-Aware on-Demand registration network (SA-DNet), whose main purpose is to confine the feature matching process to the semantic region of interest (sROI) by designing a semantic-aware module (SAM) and a HOL-Deep hybrid matching module (HDM). After utilizing TPS to transform infrared and visible images based on the corresponding feature points in sROI, the registered images are fused using an image fusion module (IFM) to achieve a fully functional registration and fusion network. Moreover, we point out that for different demands, this type of approach allows us to select semantic objects for feature matching as needed and accomplishes task-specific registration based on specific requirements. To demonstrate the robustness of SA-DNet for non-rigid distortions, we conduct extensive experiments by comparing SA-DNet with five state-of-the-art infrared and visible image feature matching methods, and the experimental results show that our method adapts better to the presence of non-rigid distortions in the images and provides semantically well-registered images.
|
2404.19117
|
Sergi Liesegang
|
Sergi Liesegang and Stefano Buzzi
|
Coexistence of eMBB+ and mMTC+ in Uplink Cell-Free Massive MIMO Networks
|
This work has been submitted to IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper tackles the problem of designing proper uplink multiple access
(MA) schemes for coexistence between enhanced mobile broadband+ (eMBB+) users
and massive machine-type communications+ (mMTC+) devices in a terminal-centric
cell-free massive MIMO system. Specifically, the use of a time-frequency
spreading technique for the mMTC+ devices has been proposed. Coupled with the
assumption of imperfect channel knowledge, closed-form bounds of the achievable
(ergodic) rate for the two types of data services are derived. Using suitable
power control mechanisms, we show it is possible to efficiently multiplex eMBB+
and mMTC+ traffic in the same time-frequency resource grid. Numerical
experiments reveal interesting trade-offs in the selection of the spreading
gain and the number of serving access points within the system. Results also
demonstrate that the performance of the mMTC+ devices is slightly affected by
the presence of the eMBB+ users. Overall, our approach can provide good quality
of service to both 6G cornerstones at once.
|
[
{
"created": "Mon, 29 Apr 2024 21:32:30 GMT",
"version": "v1"
}
] |
2024-05-01
|
[
[
"Liesegang",
"Sergi",
""
],
[
"Buzzi",
"Stefano",
""
]
] |
This paper tackles the problem of designing proper uplink multiple access (MA) schemes for coexistence between enhanced mobile broadband+ (eMBB+) users and massive machine-type communications+ (mMTC+) devices in a terminal-centric cell-free massive MIMO system. Specifically, the use of a time-frequency spreading technique for the mMTC+ devices has been proposed. Coupled with the assumption of imperfect channel knowledge, closed-form bounds of the achievable (ergodic) rate for the two types of data services are derived. Using suitable power control mechanisms, we show it is possible to efficiently multiplex eMBB+ and mMTC+ traffic in the same time-frequency resource grid. Numerical experiments reveal interesting trade-offs in the selection of the spreading gain and the number of serving access points within the system. Results also demonstrate that the performance of the mMTC+ devices is slightly affected by the presence of the eMBB+ users. Overall, our approach can provide good quality of service to both 6G cornerstones at once.
|
2205.00861
|
Vipin Singh Sehrawat
|
Vipin Singh Sehrawat, Foo Yee Yeo, Dmitriy Vassilyev
|
Star-specific Key-homomorphic PRFs from Learning with Linear Regression
|
This is the preprint of a paper published in IEEE Access, vol. 11,
pp. 73235-73267, 2023
|
IEEE Access, vol. 11, pp. 73235-73267, 2023
|
10.1109/ACCESS.2023.3294844
| null |
cs.CR cs.IT math.CO math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a novel method to derandomize the learning with errors (LWE)
problem by generating deterministic yet sufficiently independent LWE instances
that are constructed by using linear regression models, which are generated via
(wireless) communication errors. We also introduce star-specific
key-homomorphic (SSKH) pseudorandom functions (PRFs), which are defined by the
respective sets of parties that construct them. We use our derandomized variant
of LWE to construct a SSKH PRF family. The sets of parties constructing SSKH
PRFs are arranged as star graphs with possibly shared vertices, i.e., the pairs
of sets may have non-empty intersections. We reduce the security of our SSKH
PRF family to the hardness of LWE. To establish the maximum number of SSKH PRFs
that can be constructed -- by a set of parties -- in the presence of
passive/active and external/internal adversaries, we prove several bounds on
the size of maximally cover-free at most $t$-intersecting $k$-uniform family of
sets $\mathcal{H}$, where the three properties are defined as: (i) $k$-uniform:
$\forall A \in \mathcal{H}: |A| = k$, (ii) at most $t$-intersecting: $\forall
A, B \in \mathcal{H}, B \neq A: |A \cap B| \leq t$, (iii) maximally cover-free:
$\forall A \in \mathcal{H}: A \not\subseteq \bigcup\limits_{\substack{B \in
\mathcal{H} \\ B \neq A}} B$. For the same purpose, we define and compute the
mutual information between different linear regression hypotheses that are
generated from overlapping training datasets.
|
[
{
"created": "Mon, 2 May 2022 12:44:26 GMT",
"version": "v1"
},
{
"created": "Fri, 3 Mar 2023 01:21:15 GMT",
"version": "v2"
},
{
"created": "Fri, 28 Jul 2023 17:22:54 GMT",
"version": "v3"
}
] |
2023-07-31
|
[
[
"Sehrawat",
"Vipin Singh",
""
],
[
"Yeo",
"Foo Yee",
""
],
[
"Vassilyev",
"Dmitriy",
""
]
] |
We introduce a novel method to derandomize the learning with errors (LWE) problem by generating deterministic yet sufficiently independent LWE instances that are constructed by using linear regression models, which are generated via (wireless) communication errors. We also introduce star-specific key-homomorphic (SSKH) pseudorandom functions (PRFs), which are defined by the respective sets of parties that construct them. We use our derandomized variant of LWE to construct a SSKH PRF family. The sets of parties constructing SSKH PRFs are arranged as star graphs with possibly shared vertices, i.e., the pairs of sets may have non-empty intersections. We reduce the security of our SSKH PRF family to the hardness of LWE. To establish the maximum number of SSKH PRFs that can be constructed -- by a set of parties -- in the presence of passive/active and external/internal adversaries, we prove several bounds on the size of maximally cover-free at most $t$-intersecting $k$-uniform family of sets $\mathcal{H}$, where the three properties are defined as: (i) $k$-uniform: $\forall A \in \mathcal{H}: |A| = k$, (ii) at most $t$-intersecting: $\forall A, B \in \mathcal{H}, B \neq A: |A \cap B| \leq t$, (iii) maximally cover-free: $\forall A \in \mathcal{H}: A \not\subseteq \bigcup\limits_{\substack{B \in \mathcal{H} \\ B \neq A}} B$. For the same purpose, we define and compute the mutual information between different linear regression hypotheses that are generated from overlapping training datasets.
|
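The three set-family properties stated in the record above (k-uniform, at most t-intersecting, maximally cover-free) can be checked directly for small families. The Python sketch below does exactly that; the example family is invented for illustration and has nothing to do with the paper's bounds.

# Check the three properties of a family of sets used in the SSKH PRF bounds:
# k-uniform, at most t-intersecting, and maximally cover-free.
from itertools import combinations

def k_uniform(family, k):
    return all(len(A) == k for A in family)

def at_most_t_intersecting(family, t):
    return all(len(A & B) <= t for A, B in combinations(family, 2))

def maximally_cover_free(family):
    # Every set must contain an element not covered by the union of the others.
    return all(
        not A <= set().union(*(B for B in family if B is not A))
        for A in family
    )

# Hypothetical 3-uniform family for illustration only.
H = [frozenset({1, 2, 3}), frozenset({3, 4, 5}), frozenset({5, 6, 7})]
print(k_uniform(H, 3), at_most_t_intersecting(H, 1), maximally_cover_free(H))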
2307.11550
|
Arul Selvam Periyasamy
|
Arul Selvam Periyasamy, Arash Amini, Vladimir Tsaturyan, and Sven
Behnke
|
YOLOPose V2: Understanding and Improving Transformer-based 6D Pose
Estimation
|
Robotics and Autonomous Systems Journal, Elsevier, to appear 2023.
arXiv admin note: substantial text overlap with arXiv:2205.02536
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
6D object pose estimation is a crucial prerequisite for autonomous robot
manipulation applications. The state-of-the-art models for pose estimation are
convolutional neural network (CNN)-based. Lately, Transformers, an architecture
originally proposed for natural language processing, have been achieving
state-of-the-art results in many computer vision tasks as well. Equipped with
the multi-head self-attention mechanism, Transformers enable simple
single-stage end-to-end architectures for learning object detection and 6D
object pose estimation jointly. In this work, we propose YOLOPose (short form
for You Only Look Once Pose estimation), a Transformer-based multi-object 6D
pose estimation method based on keypoint regression and an improved variant of
the YOLOPose model. In contrast to the standard heatmaps for predicting
keypoints in an image, we directly regress the keypoints. Additionally, we
employ a learnable orientation estimation module to predict the orientation
from the keypoints. Along with a separate translation estimation module, our
model is end-to-end differentiable. Our method is suitable for real-time
applications and achieves results comparable to state-of-the-art methods. We
analyze the role of object queries in our architecture and reveal that the
object queries specialize in detecting objects in specific image regions.
Furthermore, we quantify the accuracy trade-off of using datasets of smaller
sizes to train our model.
|
[
{
"created": "Fri, 21 Jul 2023 12:53:54 GMT",
"version": "v1"
}
] |
2023-07-24
|
[
[
"Periyasamy",
"Arul Selvam",
""
],
[
"Amini",
"Arash",
""
],
[
"Tsaturyan",
"Vladimir",
""
],
[
"Behnke",
"Sven",
""
]
] |
6D object pose estimation is a crucial prerequisite for autonomous robot manipulation applications. The state-of-the-art models for pose estimation are convolutional neural network (CNN)-based. Lately, Transformers, an architecture originally proposed for natural language processing, have been achieving state-of-the-art results in many computer vision tasks as well. Equipped with the multi-head self-attention mechanism, Transformers enable simple single-stage end-to-end architectures for learning object detection and 6D object pose estimation jointly. In this work, we propose YOLOPose (short form for You Only Look Once Pose estimation), a Transformer-based multi-object 6D pose estimation method based on keypoint regression and an improved variant of the YOLOPose model. In contrast to the standard heatmaps for predicting keypoints in an image, we directly regress the keypoints. Additionally, we employ a learnable orientation estimation module to predict the orientation from the keypoints. Along with a separate translation estimation module, our model is end-to-end differentiable. Our method is suitable for real-time applications and achieves results comparable to state-of-the-art methods. We analyze the role of object queries in our architecture and reveal that the object queries specialize in detecting objects in specific image regions. Furthermore, we quantify the accuracy trade-off of using datasets of smaller sizes to train our model.
|
2403.09495
|
Kevin Kraschewski
|
Kevin Kraschewski, Gregory P. Phlipot, Dennis M. Kochmann
|
A mixed-order quasicontinuum approach for beam-based architected
materials with application to fracture
| null | null | null | null |
cs.CE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Predicting the mechanics of large structural networks, such as beam-based
architected materials, requires a multiscale computational strategy that
preserves information about the discrete structure while being applicable to
large assemblies of struts. Especially the fracture properties of such beam
lattices necessitate a two-scale modeling strategy, since the fracture
toughness depends on discrete beam failure events, while the application of
remote loads requires large simulation domains. As classical homogenization
techniques fail in the absence of a separation of scales at the crack tip, we
present a concurrent multiscale technique: a fully-nonlocal quasicontinuum (QC)
multi-lattice formulation for beam networks, based on a conforming mesh. Like
the original atomistic QC formulation, we maintain discrete resolution where
needed (such as around a crack tip) while efficiently coarse-graining in the
remaining simulation domain. A key challenge is a suitable model in the
coarse-grained domain, where classical QC uses affine interpolations. This
formulation fails in bending-dominated lattices, as it overconstrains the
lattice by preventing bending without stretching of beams. Therefore, we here
present a beam QC formulation based on mixed-order interpolation in the
coarse-grained region -- combining the efficiency of linear interpolation where
possible with the accuracy advantages of quadratic interpolation where needed.
This results in a powerful computational framework, which, as we demonstrate
through our validation and benchmark examples, overcomes the deficiencies of
previous QC formulations and enables, e.g., the prediction of the fracture
toughness and the diverse nature of stress distributions of stretching- and
bending-dominated beam lattices in two and three dimensions.
|
[
{
"created": "Thu, 14 Mar 2024 15:35:30 GMT",
"version": "v1"
}
] |
2024-03-15
|
[
[
"Kraschewski",
"Kevin",
""
],
[
"Phlipot",
"Gregory P.",
""
],
[
"Kochmann",
"Dennis M.",
""
]
] |
Predicting the mechanics of large structural networks, such as beam-based architected materials, requires a multiscale computational strategy that preserves information about the discrete structure while being applicable to large assemblies of struts. Especially the fracture properties of such beam lattices necessitate a two-scale modeling strategy, since the fracture toughness depends on discrete beam failure events, while the application of remote loads requires large simulation domains. As classical homogenization techniques fail in the absence of a separation of scales at the crack tip, we present a concurrent multiscale technique: a fully-nonlocal quasicontinuum (QC) multi-lattice formulation for beam networks, based on a conforming mesh. Like the original atomistic QC formulation, we maintain discrete resolution where needed (such as around a crack tip) while efficiently coarse-graining in the remaining simulation domain. A key challenge is a suitable model in the coarse-grained domain, where classical QC uses affine interpolations. This formulation fails in bending-dominated lattices, as it overconstrains the lattice by preventing bending without stretching of beams. Therefore, we here present a beam QC formulation based on mixed-order interpolation in the coarse-grained region -- combining the efficiency of linear interpolation where possible with the accuracy advantages of quadratic interpolation where needed. This results in a powerful computational framework, which, as we demonstrate through our validation and benchmark examples, overcomes the deficiencies of previous QC formulations and enables, e.g., the prediction of the fracture toughness and the diverse nature of stress distributions of stretching- and bending-dominated beam lattices in two and three dimensions.
|
2202.05965
|
Haonan Wang
|
Haonan Wang, Ang Li, Ya-Feng Liu, Qibo Qin, Lingyang Song, and Yonghui
Li
|
Achievable Rate Maximization Pattern Design for Reconfigurable MIMO
Antenna Array
|
This work has been accepted by IEEE Transactions on Wireless
Communications
|
IEEE Transactions on Wireless Communications, 2023, (Early Access)
|
10.1109/TWC.2023.3238069
| null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reconfigurable multiple-input multiple-output can provide performance gains
over traditional MIMO by reshaping the channels, i.e., introducing more channel
realizations. In this paper, we focus on the achievable rate maximization
pattern design for reconfigurable MIMO systems. Firstly, we introduce the
matrix representation of pattern reconfigurable MIMO (PR-MIMO), based on which
a pattern design problem is formulated. To further reveal the effect of the
radiation pattern on the wireless channel, we consider pattern design for both
the single-pattern case where the optimized radiation pattern is the same for
all the antenna elements, and the multi-pattern case where different antenna
elements can adopt different radiation patterns. For the single-pattern case,
we show that the pattern design is equivalent to a redistribution of gains
among all scattering paths, and an eigenvalue optimization based solution is
obtained. For the multi-pattern case, we propose a sequential optimization
framework with manifold optimization and eigenvalue decomposition to obtain
near-optimal solutions. Numerical results validate the superiority of PR-MIMO
systems over traditional MIMO in terms of achievable rate, and also show the
effectiveness of the proposed solutions.
|
[
{
"created": "Sat, 12 Feb 2022 03:27:56 GMT",
"version": "v1"
},
{
"created": "Mon, 13 Feb 2023 11:27:58 GMT",
"version": "v2"
}
] |
2023-02-14
|
[
[
"Wang",
"Haonan",
""
],
[
"Li",
"Ang",
""
],
[
"Liu",
"Ya-Feng",
""
],
[
"Qin",
"Qibo",
""
],
[
"Song",
"Lingyang",
""
],
[
"Li",
"Yonghui",
""
]
] |
Reconfigurable multiple-input multiple-output can provide performance gains over traditional MIMO by reshaping the channels, i.e., introducing more channel realizations. In this paper, we focus on the achievable rate maximization pattern design for reconfigurable MIMO systems. Firstly, we introduce the matrix representation of pattern reconfigurable MIMO (PR-MIMO), based on which a pattern design problem is formulated. To further reveal the effect of the radiation pattern on the wireless channel, we consider pattern design for both the single-pattern case where the optimized radiation pattern is the same for all the antenna elements, and the multi-pattern case where different antenna elements can adopt different radiation patterns. For the single-pattern case, we show that the pattern design is equivalent to a redistribution of gains among all scattering paths, and an eigenvalue optimization based solution is obtained. For the multi-pattern case, we propose a sequential optimization framework with manifold optimization and eigenvalue decomposition to obtain near-optimal solutions. Numerical results validate the superiority of PR-MIMO systems over traditional MIMO in terms of achievable rate, and also show the effectiveness of the proposed solutions.
|
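For a concrete, simplified view of what scoring radiation patterns by achievable rate involves, the sketch below evaluates the standard MIMO rate expression log2 det(I + (SNR/Nt) H H^H) for a set of candidate patterns, each assumed to induce a different effective channel. The random channels are placeholders; the paper instead parameterises the channel through scattering paths and optimises the pattern analytically.

# Score candidate radiation patterns by the MIMO achievable rate they induce.
# Channels are random placeholders; the paper derives them from scattering paths.
import numpy as np

def achievable_rate(H, snr):
    """Rate of an Nr x Nt MIMO channel with equal power allocation."""
    nr, nt = H.shape
    gram = H @ H.conj().T
    det_val = np.linalg.det(np.eye(nr) + (snr / nt) * gram)
    return float(np.log2(det_val.real))

rng = np.random.default_rng(0)
snr = 10.0                                   # linear SNR
nt, nr, num_patterns = 4, 4, 8

# One effective channel per candidate pattern (placeholder Rayleigh fading).
channels = (rng.standard_normal((num_patterns, nr, nt))
            + 1j * rng.standard_normal((num_patterns, nr, nt))) / np.sqrt(2)

rates = [achievable_rate(H, snr) for H in channels]
best = int(np.argmax(rates))
print(f"best pattern index: {best}, rate: {rates[best]:.2f} bit/s/Hz")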
2202.02429
|
Chelsea Sidrane
|
Chelsea Sidrane, Sydney Katz, Anthony Corso, Mykel J. Kochenderfer
|
Verifying Inverse Model Neural Networks
|
Reformatted and fixed typos
| null | null | null |
cs.LG cs.LO
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Inverse problems exist in a wide variety of physical domains from aerospace
engineering to medical imaging. The goal is to infer the underlying state from
a set of observations. When the forward model that produced the observations is
nonlinear and stochastic, solving the inverse problem is very challenging.
Neural networks are an appealing solution for solving inverse problems as they
can be trained from noisy data and once trained are computationally efficient
to run. However, inverse model neural networks do not have guarantees of
correctness built-in, which makes them unreliable for use in safety and
accuracy-critical contexts. In this work we introduce a method for verifying
the correctness of inverse model neural networks. Our approach is to
overapproximate a nonlinear, stochastic forward model with piecewise linear
constraints and encode both the overapproximate forward model and the neural
network inverse model as a mixed-integer program. We demonstrate this
verification procedure on a real-world airplane fuel gauge case study. The
ability to verify and consequently trust inverse model neural networks allows
their use in a wide variety of contexts, from aerospace to medicine.
|
[
{
"created": "Fri, 4 Feb 2022 23:13:22 GMT",
"version": "v1"
},
{
"created": "Thu, 5 Jan 2023 00:53:35 GMT",
"version": "v2"
}
] |
2023-01-06
|
[
[
"Sidrane",
"Chelsea",
""
],
[
"Katz",
"Sydney",
""
],
[
"Corso",
"Anthony",
""
],
[
"Kochenderfer",
"Mykel J.",
""
]
] |
Inverse problems exist in a wide variety of physical domains from aerospace engineering to medical imaging. The goal is to infer the underlying state from a set of observations. When the forward model that produced the observations is nonlinear and stochastic, solving the inverse problem is very challenging. Neural networks are an appealing solution for solving inverse problems as they can be trained from noisy data and once trained are computationally efficient to run. However, inverse model neural networks do not have guarantees of correctness built-in, which makes them unreliable for use in safety and accuracy-critical contexts. In this work we introduce a method for verifying the correctness of inverse model neural networks. Our approach is to overapproximate a nonlinear, stochastic forward model with piecewise linear constraints and encode both the overapproximate forward model and the neural network inverse model as a mixed-integer program. We demonstrate this verification procedure on a real-world airplane fuel gauge case study. The ability to verify and consequently trust inverse model neural networks allows their use in a wide variety of contexts, from aerospace to medicine.
|
1307.6923
|
Rajib Rana
|
Rajib Rana, Mingrui Yang, Tim Wark, Chun Tung Chou, Wen Hu
|
A Deterministic Construction of Projection matrix for Adaptive
Trajectory Compression
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Compressive Sensing, which offers exact reconstruction of sparse signals from
a small number of measurements, has tremendous potential for trajectory
compression. In order to optimize the compression, trajectory compression
algorithms need to adapt the compression ratio to the compressibility of
the trajectory. Intuitively, the trajectory of an object moving on a straight
road is more compressible than that of an object moving on winding roads;
therefore, higher compression is achievable in the former case than in the
latter. We propose an in-situ compression technique underpinned by support
vector regression theory, which accurately predicts the compressibility of a
trajectory given the mean speed of the object, and then apply compressive
sensing to adapt the compression to the compressibility of the trajectory. The
conventional encoding and decoding process of compressive
sensing uses predefined dictionary and measurement (or projection) matrix
pairs. However, the selection of an optimal pair is nontrivial and exhaustive,
and random selection of a pair does not guarantee the best compression
performance. In this paper, we propose a deterministic and data driven
construction for the projection matrix which is obtained by applying singular
value decomposition to a sparsifying dictionary learned from the dataset. We
analyze case studies of pedestrian and animal trajectory datasets including GPS
trajectory data from 127 subjects. The experimental results suggest that the
proposed adaptive compression algorithm, incorporating the deterministic
construction of projection matrix, offers significantly better compression
performance compared to the state-of-the-art alternatives.
|
[
{
"created": "Fri, 26 Jul 2013 04:59:26 GMT",
"version": "v1"
}
] |
2013-07-29
|
[
[
"Rana",
"Rajib",
""
],
[
"Yang",
"Mingrui",
""
],
[
"Wark",
"Tim",
""
],
[
"Chou",
"Chun Tung",
""
],
[
"Hu",
"Wen",
""
]
] |
Compressive Sensing, which offers exact reconstruction of sparse signals from a small number of measurements, has tremendous potential for trajectory compression. In order to optimize the compression, trajectory compression algorithms need to adapt the compression ratio to the compressibility of the trajectory. Intuitively, the trajectory of an object moving on a straight road is more compressible than that of an object moving on winding roads; therefore, higher compression is achievable in the former case than in the latter. We propose an in-situ compression technique underpinned by support vector regression theory, which accurately predicts the compressibility of a trajectory given the mean speed of the object, and then apply compressive sensing to adapt the compression to the compressibility of the trajectory. The conventional encoding and decoding process of compressive sensing uses predefined dictionary and measurement (or projection) matrix pairs. However, the selection of an optimal pair is nontrivial and exhaustive, and random selection of a pair does not guarantee the best compression performance. In this paper, we propose a deterministic and data driven construction for the projection matrix which is obtained by applying singular value decomposition to a sparsifying dictionary learned from the dataset. We analyze case studies of pedestrian and animal trajectory datasets including GPS trajectory data from 127 subjects. The experimental results suggest that the proposed adaptive compression algorithm, incorporating the deterministic construction of projection matrix, offers significantly better compression performance compared to the state-of-the-art alternatives.
|
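The projection-matrix construction summarised in the record above is data driven: take the singular value decomposition of a sparsifying dictionary learned from trajectories and build the measurement matrix from it. The numpy sketch below shows one plausible reading of that pipeline, using the leading left singular vectors as measurement rows; the dictionary here is random rather than learned, and the exact construction in the paper may differ.

# Deterministic projection matrix from the SVD of a sparsifying dictionary.
# The dictionary here is random; in the paper it is learned from trajectory data.
import numpy as np

rng = np.random.default_rng(1)
n, num_atoms, m = 64, 128, 16          # signal length, dictionary atoms, measurements

D = rng.standard_normal((n, num_atoms))            # stand-in learned dictionary
U, s, Vt = np.linalg.svd(D, full_matrices=False)   # D = U diag(s) Vt

# Use the m leading left singular vectors as the rows of the projection matrix,
# so measurements capture the directions the dictionary represents best.
Phi = U[:, :m].T                                    # (m, n) measurement matrix

# A synthetic sparse-in-D trajectory segment and its compressed measurements.
x = D @ np.where(rng.random(num_atoms) < 0.05, rng.standard_normal(num_atoms), 0.0)
y = Phi @ x
print(Phi.shape, y.shape)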
2008.06805
|
Michael Wehar
|
Andr\'as Z. Salamon and Michael Wehar
|
Superlinear Lower Bounds Based on ETH
|
Accepted at STACS 2022
| null |
10.4230/LIPIcs.STACS.2022.55
| null |
cs.CC cs.FL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce techniques for proving superlinear conditional lower bounds for
polynomial time problems. In particular, we show that CircuitSAT for circuits
with m gates and log(m) inputs (denoted by log-CircuitSAT) is not decidable in
essentially-linear time unless the exponential time hypothesis (ETH) is false
and k-Clique is decidable in essentially-linear time in terms of the graph's
size for all fixed k. Such conditional lower bounds have previously only been
demonstrated relative to the strong exponential time hypothesis (SETH). Our
results therefore offer significant progress towards proving unconditional
superlinear time complexity lower bounds for natural problems in polynomial
time.
|
[
{
"created": "Sat, 15 Aug 2020 23:00:23 GMT",
"version": "v1"
},
{
"created": "Sun, 8 Nov 2020 23:25:24 GMT",
"version": "v2"
},
{
"created": "Tue, 10 Nov 2020 04:56:02 GMT",
"version": "v3"
},
{
"created": "Mon, 1 Mar 2021 01:11:24 GMT",
"version": "v4"
},
{
"created": "Sun, 23 Jan 2022 02:32:34 GMT",
"version": "v5"
}
] |
2022-05-18
|
[
[
"Salamon",
"András Z.",
""
],
[
"Wehar",
"Michael",
""
]
] |
We introduce techniques for proving superlinear conditional lower bounds for polynomial time problems. In particular, we show that CircuitSAT for circuits with m gates and log(m) inputs (denoted by log-CircuitSAT) is not decidable in essentially-linear time unless the exponential time hypothesis (ETH) is false and k-Clique is decidable in essentially-linear time in terms of the graph's size for all fixed k. Such conditional lower bounds have previously only been demonstrated relative to the strong exponential time hypothesis (SETH). Our results therefore offer significant progress towards proving unconditional superlinear time complexity lower bounds for natural problems in polynomial time.
|
1712.05726
|
Gourab Ghatak
|
Gourab Ghatak, Antonio De Domenico, and Marceau Coupechoux
|
Coverage Analysis and Load Balancing in HetNets with mmWave Multi-RAT
Small Cells
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We characterize a two-tier heterogeneous network, consisting of classical
sub-6GHz macro cells and multi-Radio Access Technology (RAT) small cells able
to operate in sub-6GHz and millimeter-wave (mm-wave) bands. To optimize
coverage and balance loads, we propose a two-step mechanism based on two
biases for tuning the tier and RAT selection, where the sub-6GHz band is used
to speed up the initial access procedure in the mm-wave RAT. First, we
investigate the effect of the biases in terms of signal to interference plus
noise ratio (SINR) distribution, cell load, and user throughput. More
specifically, we obtain the optimal biases that maximize either the SINR
coverage or the user downlink throughput. Then, we characterize the cell load
using the mean cell approach and derive upper bounds on the overloading
probabilities. Finally, for a given traffic density, we provide the small cell
density required to satisfy system constraints in terms of overloading and
outage probabilities. Our analysis highlights the importance of deploying dual
band small cells in particular when small cells are sparsely deployed or in
case of heavy traffic.
|
[
{
"created": "Fri, 15 Dec 2017 16:15:01 GMT",
"version": "v1"
}
] |
2017-12-18
|
[
[
"Ghatak",
"Gourab",
""
],
[
"De Domenico",
"Antonio",
""
],
[
"Coupechoux",
"Marceau",
""
]
] |
We characterize a two-tier heterogeneous network, consisting of classical sub-6GHz macro cells and multi-Radio Access Technology (RAT) small cells able to operate in sub-6GHz and millimeter-wave (mm-wave) bands. To optimize coverage and balance loads, we propose a two-step mechanism based on two biases for tuning the tier and RAT selection, where the sub-6GHz band is used to speed up the initial access procedure in the mm-wave RAT. First, we investigate the effect of the biases in terms of signal to interference plus noise ratio (SINR) distribution, cell load, and user throughput. More specifically, we obtain the optimal biases that maximize either the SINR coverage or the user downlink throughput. Then, we characterize the cell load using the mean cell approach and derive upper bounds on the overloading probabilities. Finally, for a given traffic density, we provide the small cell density required to satisfy system constraints in terms of overloading and outage probabilities. Our analysis highlights the importance of deploying dual band small cells in particular when small cells are sparsely deployed or in case of heavy traffic.
|
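As a toy picture of the two-step, bias-driven selection described above, the sketch below first compares biased received powers to choose between the macro tier and the small-cell tier, and then uses a second bias to choose between the small cell's sub-6GHz and mm-wave RATs. The rule and the dB values are illustrative assumptions, not the stochastic-geometry analysis of the paper.

# Toy two-step tier/RAT selection with biases (dB values are placeholders).
def select_tier_and_rat(p_macro_dbm, p_small_sub6_dbm, p_small_mmwave_dbm,
                        tier_bias_db, rat_bias_db):
    # Step 1: tier selection, with the small-cell power boosted by a tier bias.
    if p_small_sub6_dbm + tier_bias_db > p_macro_dbm:
        # Step 2: RAT selection inside the small cell, mm-wave boosted by a RAT bias.
        if p_small_mmwave_dbm + rat_bias_db > p_small_sub6_dbm:
            return "small cell, mm-wave RAT"
        return "small cell, sub-6GHz RAT"
    return "macro cell, sub-6GHz"

print(select_tier_and_rat(-85.0, -90.0, -95.0, tier_bias_db=8.0, rat_bias_db=12.0))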
1703.05305
|
Ilya Dumer
|
Ilya Dumer and Kirill Shabunov
|
Soft decision decoding of Reed-Muller codes: recursive lists
| null |
IEEE Trans. Info. Theory, vol. 52, no. 3, pp. 1260-1266, 2006
| null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recursive list decoding is considered for Reed-Muller (RM) codes. The
algorithm repeatedly relegates itself to the shorter RM codes by recalculating
the posterior probabilities of their symbols. Intermediate decodings are only
performed when these recalculations reach the trivial RM codes. In turn, the
updated lists of most plausible codewords are used in subsequent decodings. The
algorithm is further improved by using permutation techniques on code positions
and by eliminating the most error-prone information bits. Simulation results
show that for all RM codes of length 256 and many subcodes of length 512, these
algorithms approach maximum-likelihood (ML) performance within a margin of 0.1
dB. As a result, we present tight experimental bounds on ML performance for
these codes.
|
[
{
"created": "Tue, 14 Mar 2017 23:08:24 GMT",
"version": "v1"
}
] |
2017-03-17
|
[
[
"Dumer",
"Ilya",
""
],
[
"Shabunov",
"Kirill",
""
]
] |
Recursive list decoding is considered for Reed-Muller (RM) codes. The algorithm repeatedly relegates itself to the shorter RM codes by recalculating the posterior probabilities of their symbols. Intermediate decodings are only performed when these recalculations reach the trivial RM codes. In turn, the updated lists of most plausible codewords are used in subsequent decodings. The algorithm is further improved by using permutation techniques on code positions and by eliminating the most error-prone information bits. Simulation results show that for all RM codes of length 256 and many subcodes of length 512, these algorithms approach maximum-likelihood (ML) performance within a margin of 0.1 dB. As a result, we present tight experimental bounds on ML performance for these codes.
|
1505.02428
|
Daniel Kulesz
|
Daniel Kulesz, Fabian Toth, Fabian Beck
|
Live Inspection of Spreadsheets
|
In Proceedings of the 2nd Workshop on Software Engineering Methods in
Spreadsheets (http://spreadsheetlab.org/sems15/)
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Existing approaches for detecting anomalies in spreadsheets can help to
discover faults, but they are often applied too late in the spreadsheet
lifecycle. By contrast, our approach detects anomalies immediately whenever
users change their spreadsheets. This live inspection approach has been
implemented as part of the Spreadsheet Inspection Framework, enabling the tool
to visually report findings without disturbing the users' workflow. An advanced
list representation allows users to keep track of the latest findings,
prioritize open problems, and check progress on solving the issues. Results
from a first user study indicate that users find the approach useful.
|
[
{
"created": "Sun, 10 May 2015 19:56:14 GMT",
"version": "v1"
}
] |
2015-05-12
|
[
[
"Kulesz",
"Daniel",
""
],
[
"Toth",
"Fabian",
""
],
[
"Beck",
"Fabian",
""
]
] |
Existing approaches for detecting anomalies in spreadsheets can help to discover faults, but they are often applied too late in the spreadsheet lifecycle. By contrast, our approach detects anomalies immediately whenever users change their spreadsheets. This live inspection approach has been implemented as part of the Spreadsheet Inspection Framework, enabling the tool to visually report findings without disturbing the users' workflow. An advanced list representation allows users to keep track of the latest findings, prioritize open problems, and check progress on solving the issues. Results from a first user study indicate that users find the approach useful.
|
2212.08729
|
Jonathan Francis
|
Jonathan Francis, Bingqing Chen, Weiran Yao, Eric Nyberg, Jean Oh
|
Distribution-aware Goal Prediction and Conformant Model-based Planning
for Safe Autonomous Driving
|
Accepted: 1st Workshop on Safe Learning for Autonomous Driving, at
the International Conference on Machine Learning (ICML 2022); Best Paper
Award
| null | null | null |
cs.RO cs.AI cs.CV cs.LG cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The feasibility of collecting a large amount of expert demonstrations has
inspired growing research interests in learning-to-drive settings, where models
learn by imitating the driving behaviour from experts. However, exclusively
relying on imitation can limit agents' generalisability to novel scenarios that
are outside the support of the training data. In this paper, we address this
challenge by factorising the driving task, based on the intuition that modular
architectures are more generalisable and more robust to changes in the
environment compared to monolithic, end-to-end frameworks. Specifically, we
draw inspiration from the trajectory forecasting community and reformulate the
learning-to-drive task as obstacle-aware perception and grounding,
distribution-aware goal prediction, and model-based planning. Firstly, we train
the obstacle-aware perception module to extract salient representation of the
visual context. Then, we learn a multi-modal goal distribution by performing
conditional density-estimation using normalising flow. Finally, we ground
candidate trajectory predictions road geometry, and plan the actions based on
on vehicle dynamics. Under the CARLA simulator, we report state-of-the-art
results on the CARNOVEL benchmark.
|
[
{
"created": "Fri, 16 Dec 2022 21:51:51 GMT",
"version": "v1"
}
] |
2022-12-20
|
[
[
"Francis",
"Jonathan",
""
],
[
"Chen",
"Bingqing",
""
],
[
"Yao",
"Weiran",
""
],
[
"Nyberg",
"Eric",
""
],
[
"Oh",
"Jean",
""
]
] |
The feasibility of collecting a large amount of expert demonstrations has inspired growing research interests in learning-to-drive settings, where models learn by imitating the driving behaviour from experts. However, exclusively relying on imitation can limit agents' generalisability to novel scenarios that are outside the support of the training data. In this paper, we address this challenge by factorising the driving task, based on the intuition that modular architectures are more generalisable and more robust to changes in the environment compared to monolithic, end-to-end frameworks. Specifically, we draw inspiration from the trajectory forecasting community and reformulate the learning-to-drive task as obstacle-aware perception and grounding, distribution-aware goal prediction, and model-based planning. Firstly, we train the obstacle-aware perception module to extract salient representation of the visual context. Then, we learn a multi-modal goal distribution by performing conditional density-estimation using normalising flow. Finally, we ground candidate trajectory predictions in road geometry, and plan the actions based on vehicle dynamics. Under the CARLA simulator, we report state-of-the-art results on the CARNOVEL benchmark.
|
2112.01713
|
Yuhong Guo
|
Xuejun Han, Yuhong Guo
|
Contrastive Continual Learning with Feature Propagation
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Classical machine learners are designed to tackle only one task, without the
capability of accommodating newly emerging tasks or classes, whereas such
capacity is more practical and human-like in the real world. To address this
shortcoming, continual machine learners are developed to effectively learn a
stream of tasks with domain and class shifts among different tasks. In this paper, we
propose a general feature-propagation based contrastive continual learning
method which is capable of handling multiple continual learning scenarios.
Specifically, we align the current and previous representation spaces by means
of feature propagation and contrastive representation learning to bridge the
domain shifts among distinct tasks. To further mitigate the class-wise shifts
of the feature representation, a supervised contrastive loss is exploited to
make the example embeddings of the same class closer than those of different
classes. The extensive experimental results demonstrate the outstanding
performance of the proposed method on six continual learning benchmarks
compared to a group of cutting-edge continual learning methods.
|
[
{
"created": "Fri, 3 Dec 2021 04:55:28 GMT",
"version": "v1"
}
] |
2021-12-06
|
[
[
"Han",
"Xuejun",
""
],
[
"Guo",
"Yuhong",
""
]
] |
Classical machine learners are designed to tackle only one task, without the capability of accommodating newly emerging tasks or classes, whereas such capacity is more practical and human-like in the real world. To address this shortcoming, continual machine learners are developed to effectively learn a stream of tasks with domain and class shifts among different tasks. In this paper, we propose a general feature-propagation based contrastive continual learning method which is capable of handling multiple continual learning scenarios. Specifically, we align the current and previous representation spaces by means of feature propagation and contrastive representation learning to bridge the domain shifts among distinct tasks. To further mitigate the class-wise shifts of the feature representation, a supervised contrastive loss is exploited to make the example embeddings of the same class closer than those of different classes. The extensive experimental results demonstrate the outstanding performance of the proposed method on six continual learning benchmarks compared to a group of cutting-edge continual learning methods.
|
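The supervised contrastive loss mentioned in the record above pulls embeddings of the same class together while pushing other classes apart. The numpy sketch below follows the usual SupCon formulation, which may differ in detail from the variant used in the paper; the embeddings and labels are random placeholders.

# Supervised contrastive loss on L2-normalised embeddings (SupCon-style).
import numpy as np

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature                        # pairwise similarities
    n = len(labels)
    mask_self = np.eye(n, dtype=bool)
    # Denominator: all other samples; numerator: other samples with the same label.
    exp_sim = np.exp(sim)
    exp_sim[mask_self] = 0.0
    log_prob = sim - np.log(exp_sim.sum(axis=1, keepdims=True))
    same_label = (labels[:, None] == labels[None, :]) & ~mask_self
    # Average log-probability over each anchor's positives, then over anchors.
    per_anchor = (log_prob * same_label).sum(axis=1) / np.maximum(same_label.sum(axis=1), 1)
    return -per_anchor.mean()

rng = np.random.default_rng(0)
emb = rng.standard_normal((8, 16))
lab = np.array([0, 0, 1, 1, 2, 2, 0, 1])
print(supervised_contrastive_loss(emb, lab))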
1810.05617
|
Philipp Allgeuer
|
Philipp Allgeuer and Sven Behnke
|
Bipedal Walking with Corrective Actions in the Tilt Phase Space
|
International Conference on Humanoid Robots (Humanoids), Beijing,
China, 2018
|
International Conference on Humanoid Robots (Humanoids), Beijing,
China, 2018
| null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many methods exist for a bipedal robot to keep its balance while walking. In
addition to step size and timing, other strategies are possible that influence
the stability of the robot without interfering with the target direction and
speed of locomotion. This paper introduces a multifaceted feedback controller
that uses numerous different feedback mechanisms, collectively termed
corrective actions, to stabilise a core keypoint-based gait. The feedback
controller is experimentally effective, yet free of any physical model of the
robot, very computationally inexpensive, and requires only a single 6-axis IMU
sensor. Due to these low requirements, the approach is deemed to be highly
portable between robots, and was specifically also designed to target lower
cost robots that have suboptimal sensing, actuation and computational
resources. The IMU data is used to estimate the yaw-independent tilt
orientation of the robot, expressed in the so-called tilt phase space, and is
the source of all feedback provided by the controller. Experimental validation
is performed in simulation as well as on real robot hardware.
|
[
{
"created": "Fri, 12 Oct 2018 17:18:57 GMT",
"version": "v1"
}
] |
2018-10-15
|
[
[
"Allgeuer",
"Philipp",
""
],
[
"Behnke",
"Sven",
""
]
] |
Many methods exist for a bipedal robot to keep its balance while walking. In addition to step size and timing, other strategies are possible that influence the stability of the robot without interfering with the target direction and speed of locomotion. This paper introduces a multifaceted feedback controller that uses numerous different feedback mechanisms, collectively termed corrective actions, to stabilise a core keypoint-based gait. The feedback controller is experimentally effective, yet free of any physical model of the robot, very computationally inexpensive, and requires only a single 6-axis IMU sensor. Due to these low requirements, the approach is deemed to be highly portable between robots, and was specifically also designed to target lower cost robots that have suboptimal sensing, actuation and computational resources. The IMU data is used to estimate the yaw-independent tilt orientation of the robot, expressed in the so-called tilt phase space, and is the source of all feedback provided by the controller. Experimental validation is performed in simulation as well as on real robot hardware.
|
1609.09656
|
Yilong Yang
|
Yilong Yang
|
Automated Enterprise Applications Generation from Requirements Model
|
update version from 2016
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Enterprise applications can be automatically generated from a sophisticated
OO design model based on a model-driven approach. The design model contains
information about how to decompose the system into components, how to
encapsulate the system operations into classes, and how the objects of classes
collaborate to fulfill the functionality of the system operations. However, the
efforts to build the design model from a validated requirements model are not
proportional to the return. In practice, it is very desirable to have an
approach that can automatically generate standardized enterprise applications
directly from the validated requirements models. In this paper, we propose an
approach named RM2EA, which can reach this goal based on the contract-based
requirements model. We demonstrate the proposed approach through 13 case
studies. The evaluation results show that the quality and efficiency of the
generated applications are almost equal to those of applications implemented by
developers: firstly, we demonstrate that a popular type of enterprise
applications (i.e., a Jakarta EE application) can be successfully generated by
customizing and improving the set of rules; secondly, RM2EA can generate more
readable or maintainable code; thirdly, the enterprise applications generated
by RM2EA achieve similar performance in test results. Overall, the result is
satisfactory, and the implementation of the proposed approach can be further
enhanced and applied to software development in the industry.
|
[
{
"created": "Fri, 30 Sep 2016 09:53:22 GMT",
"version": "v1"
},
{
"created": "Sun, 16 Jan 2022 10:58:27 GMT",
"version": "v2"
},
{
"created": "Tue, 22 Mar 2022 17:28:44 GMT",
"version": "v3"
}
] |
2022-03-23
|
[
[
"Yang",
"Yilong",
""
]
] |
Enterprise applications can be automatically generated from a sophisticated OO design model based on a model-driven approach. The design model contains information about how to decompose the system into components, how to encapsulate the system operations into classes, and how the objects of classes collaborate to fulfill the functionality of the system operations. However, the efforts to build the design model from a validated requirements model are not proportional to the return. In practice, it is very desirable to have an approach that can automatically generate standardized enterprise applications directly from the validated requirements models. In this paper, we propose an approach named RM2EA, which can reach this goal based on the contract-based requirements model. We demonstrate the proposed approach through 13 case studies. The evaluation results show that the quality and efficiency of the generated applications are almost equal to those of applications implemented by developers: firstly, we demonstrate that a popular type of enterprise applications (i.e., a Jakarta EE application) can be successfully generated by customizing and improving the set of rules; secondly, RM2EA can generate more readable or maintainable code; thirdly, the enterprise applications generated by RM2EA achieve similar performance in test results. Overall, the result is satisfactory, and the implementation of the proposed approach can be further enhanced and applied to software development in the industry.
|
1803.01599
|
Jogendra Nath Kundu
|
Jogendra Nath Kundu, Phani Krishna Uppala, Anuj Pahuja, R. Venkatesh
Babu
|
AdaDepth: Unsupervised Content Congruent Adaptation for Depth Estimation
|
CVPR 2018
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Supervised deep learning methods have shown promising results for the task of
monocular depth estimation; but acquiring ground truth is costly, and prone to
noise as well as inaccuracies. While synthetic datasets have been used to
circumvent the above problems, the resultant models do not generalize well to
natural scenes due to the inherent domain shift. Recent adversarial approaches
for domain adaptation have performed well in mitigating the differences between
the source and target domains. But these methods are mostly limited to a
classification setup and do not scale well for fully-convolutional
architectures. In this work, we propose AdaDepth - an unsupervised domain
adaptation strategy for the pixel-wise regression task of monocular depth
estimation. The proposed approach is devoid of the above limitations through a)
adversarial learning and b) explicit imposition of content consistency on the
adapted target representation. Our unsupervised approach performs competitively
with other established approaches on depth estimation tasks and achieves
state-of-the-art results in a semi-supervised setting.
|
[
{
"created": "Mon, 5 Mar 2018 10:55:58 GMT",
"version": "v1"
},
{
"created": "Thu, 7 Jun 2018 11:06:01 GMT",
"version": "v2"
}
] |
2018-06-08
|
[
[
"Kundu",
"Jogendra Nath",
""
],
[
"Uppala",
"Phani Krishna",
""
],
[
"Pahuja",
"Anuj",
""
],
[
"Babu",
"R. Venkatesh",
""
]
] |
Supervised deep learning methods have shown promising results for the task of monocular depth estimation; but acquiring ground truth is costly, and prone to noise as well as inaccuracies. While synthetic datasets have been used to circumvent the above problems, the resultant models do not generalize well to natural scenes due to the inherent domain shift. Recent adversarial approaches for domain adaptation have performed well in mitigating the differences between the source and target domains. But these methods are mostly limited to a classification setup and do not scale well for fully-convolutional architectures. In this work, we propose AdaDepth - an unsupervised domain adaptation strategy for the pixel-wise regression task of monocular depth estimation. The proposed approach is devoid of the above limitations through a) adversarial learning and b) explicit imposition of content consistency on the adapted target representation. Our unsupervised approach performs competitively with other established approaches on depth estimation tasks and achieves state-of-the-art results in a semi-supervised setting.
|
1711.06729
|
Laura Gwilliams
|
Laura Gwilliams, David Poeppel, Alec Marantz and Tal Linzen
|
Phonological (un)certainty weights lexical activation
|
6 pages, 4 figures, accepted at: Cognitive Modeling and Computational
Linguistics (CMCL) 2018
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Spoken word recognition involves at least two basic computations. First is
matching acoustic input to phonological categories (e.g. /b/, /p/, /d/). Second
is activating words consistent with those phonological categories. Here we test
the hypothesis that the listener's probability distribution over lexical items
is weighted by the outcome of both computations: uncertainty about phonological
discretisation and the frequency of the selected word(s). To test this, we
record neural responses in auditory cortex using magnetoencephalography, and
model this activity as a function of the size and relative activation of
lexical candidates. Our findings indicate that towards the beginning of a word,
the processing system indeed weights lexical candidates by both phonological
certainty and lexical frequency; however, later into the word, activation is
weighted by frequency alone.
|
[
{
"created": "Fri, 17 Nov 2017 21:17:20 GMT",
"version": "v1"
}
] |
2017-11-21
|
[
[
"Gwilliams",
"Laura",
""
],
[
"Poeppel",
"David",
""
],
[
"Marantz",
"Alec",
""
],
[
"Linzen",
"Tal",
""
]
] |
Spoken word recognition involves at least two basic computations. First is matching acoustic input to phonological categories (e.g. /b/, /p/, /d/). Second is activating words consistent with those phonological categories. Here we test the hypothesis that the listener's probability distribution over lexical items is weighted by the outcome of both computations: uncertainty about phonological discretisation and the frequency of the selected word(s). To test this, we record neural responses in auditory cortex using magnetoencephalography, and model this activity as a function of the size and relative activation of lexical candidates. Our findings indicate that towards the beginning of a word, the processing system indeed weights lexical candidates by both phonological certainty and lexical frequency; however, later into the word, activation is weighted by frequency alone.
|
1909.13221
|
Chao Wei
|
Chao Wei, Weiru Zhang, Shengjie Sun, Fei Li, Xiaonan Meng, Yi Hu and
Hao Wang
|
Optimal Delivery with Budget Constraint in E-Commerce Advertising
|
13 pages, 5 figures
| null | null | null |
cs.LG cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Online advertising in E-commerce platforms provides sellers an opportunity to
reach potential audiences with different target goals. Ad serving systems
(like display and search advertising systems) that assign ads to pages should
satisfy objectives such as ample audience reach for branding advertisers and
clicks or conversions for performance-based advertisers, while at the same time
trying to maximize the overall revenue of the platform. In this paper, we
propose an approach based on linear programming subject to constraints in order to optimize the
revenue and improve different performance goals simultaneously. We have
validated our algorithm by implementing an offline simulation system in Alibaba
E-commerce platform and running the auctions from online requests which takes
system performance, ranking and pricing schemas into account. We have also
compared our algorithm with related work, and the results show that our
algorithm can effectively improve campaign performance and revenue of the
platform.
|
[
{
"created": "Sun, 29 Sep 2019 07:11:10 GMT",
"version": "v1"
},
{
"created": "Tue, 8 Oct 2019 12:52:24 GMT",
"version": "v2"
}
] |
2019-10-09
|
[
[
"Wei",
"Chao",
""
],
[
"Zhang",
"Weiru",
""
],
[
"Sun",
"Shengjie",
""
],
[
"Li",
"Fei",
""
],
[
"Meng",
"Xiaonan",
""
],
[
"Hu",
"Yi",
""
],
[
"Wang",
"Hao",
""
]
] |
Online advertising in E-commerce platforms provides sellers an opportunity to reach potential audiences with different target goals. Ad serving systems (like display and search advertising systems) that assign ads to pages should satisfy objectives such as ample audience reach for branding advertisers and clicks or conversions for performance-based advertisers, while at the same time trying to maximize the overall revenue of the platform. In this paper, we propose an approach based on linear programming subject to constraints in order to optimize the revenue and improve different performance goals simultaneously. We have validated our algorithm by implementing an offline simulation system in Alibaba E-commerce platform and running the auctions from online requests which takes system performance, ranking and pricing schemas into account. We have also compared our algorithm with related work, and the results show that our algorithm can effectively improve campaign performance and revenue of the platform.
|
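To make "linear programming subject to constraints" concrete, the toy below allocates page impressions to campaigns so as to maximise expected revenue under page inventory and campaign budget constraints, using scipy's linprog. All numbers are invented and the model is far simpler than the formulation deployed in the Alibaba platform.

# Toy ad-delivery LP: choose impressions x[i, j] of page i given to campaign j
# to maximise expected revenue, subject to page inventory and campaign budgets.
import numpy as np
from scipy.optimize import linprog

inventory = np.array([1000.0, 800.0])          # impressions available per page
budget = np.array([300.0, 450.0, 200.0])       # max spend per campaign
value = np.array([[0.30, 0.50, 0.20],          # expected revenue per impression
                  [0.40, 0.25, 0.35]])         # (page x campaign), invented numbers
cost = value                                   # assume spend equals revenue here

num_pages, num_campaigns = value.shape
c = -value.flatten()                           # linprog minimises, so negate revenue

# Inventory constraints: sum_j x[i, j] <= inventory[i]
A_inv = np.kron(np.eye(num_pages), np.ones(num_campaigns))
# Budget constraints: sum_i cost[i, j] * x[i, j] <= budget[j]
A_bud = np.zeros((num_campaigns, num_pages * num_campaigns))
for j in range(num_campaigns):
    for i in range(num_pages):
        A_bud[j, i * num_campaigns + j] = cost[i, j]

res = linprog(c, A_ub=np.vstack([A_inv, A_bud]),
              b_ub=np.concatenate([inventory, budget]), bounds=(0, None))
print("max expected revenue:", -res.fun)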
2403.06319
|
Hamid Mozaffari
|
Hamid Mozaffari, Sunav Choudhary, and Amir Houmansadr
|
Fake or Compromised? Making Sense of Malicious Clients in Federated
Learning
| null | null | null | null |
cs.LG cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Federated learning (FL) is a distributed machine learning paradigm that
enables training models on decentralized data. The field of FL security against
poisoning attacks is plagued with confusion due to the proliferation of
research that makes different assumptions about the capabilities of adversaries
and the adversary models they operate under. Our work aims to clarify this
confusion by presenting a comprehensive analysis of the various poisoning
attacks and defensive aggregation rules (AGRs) proposed in the literature, and
connecting them under a common framework. To connect existing adversary models,
we present a hybrid adversary model, which lies in the middle of the spectrum
of adversaries, where the adversary compromises a few clients, trains a
generative (e.g., DDPM) model with their compromised samples, and generates new
synthetic data to solve an optimization for a stronger (e.g., cheaper, more
practical) attack against different robust aggregation rules. By presenting the
spectrum of FL adversaries, we aim to provide practitioners and researchers
with a clear understanding of the different types of threats they need to
consider when designing FL systems, and identify areas where further research
is needed.
|
[
{
"created": "Sun, 10 Mar 2024 21:37:21 GMT",
"version": "v1"
}
] |
2024-03-12
|
[
[
"Mozaffari",
"Hamid",
""
],
[
"Choudhary",
"Sunav",
""
],
[
"Houmansadr",
"Amir",
""
]
] |
Federated learning (FL) is a distributed machine learning paradigm that enables training models on decentralized data. The field of FL security against poisoning attacks is plagued with confusion due to the proliferation of research that makes different assumptions about the capabilities of adversaries and the adversary models they operate under. Our work aims to clarify this confusion by presenting a comprehensive analysis of the various poisoning attacks and defensive aggregation rules (AGRs) proposed in the literature, and connecting them under a common framework. To connect existing adversary models, we present a hybrid adversary model, which lies in the middle of the spectrum of adversaries, where the adversary compromises a few clients, trains a generative (e.g., DDPM) model with their compromised samples, and generates new synthetic data to solve an optimization for a stronger (e.g., cheaper, more practical) attack against different robust aggregation rules. By presenting the spectrum of FL adversaries, we aim to provide practitioners and researchers with a clear understanding of the different types of threats they need to consider when designing FL systems, and identify areas where further research is needed.
|
2008.03959
|
Nadav Merlis
|
Nadav Merlis, Shie Mannor
|
Lenient Regret for Multi-Armed Bandits
|
Accepted to AAAI2021
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the Multi-Armed Bandit (MAB) problem, where an agent sequentially
chooses actions and observes rewards for the actions it took. While the
majority of algorithms try to minimize the regret, i.e., the cumulative
difference between the reward of the best action and the agent's action, this
criterion might lead to undesirable results. For example, in large problems, or
when the interaction with the environment is brief, finding an optimal arm is
infeasible, and regret-minimizing algorithms tend to over-explore. To overcome
this issue, algorithms for such settings should instead focus on playing
near-optimal arms. To this end, we suggest a new, more lenient, regret
criterion that ignores suboptimality gaps smaller than some $\epsilon$. We then
present a variant of the Thompson Sampling (TS) algorithm, called
$\epsilon$-TS, and prove its asymptotic optimality in terms of the lenient
regret. Importantly, we show that when the mean of the optimal arm is high
enough, the lenient regret of $\epsilon$-TS is bounded by a constant. Finally,
we show that $\epsilon$-TS can be applied to improve the performance when the
agent knows a lower bound of the suboptimality gaps.
|
[
{
"created": "Mon, 10 Aug 2020 08:30:52 GMT",
"version": "v1"
},
{
"created": "Sun, 13 Sep 2020 07:45:54 GMT",
"version": "v2"
},
{
"created": "Wed, 16 Dec 2020 13:15:49 GMT",
"version": "v3"
},
{
"created": "Sun, 12 Sep 2021 12:22:50 GMT",
"version": "v4"
}
] |
2021-09-14
|
[
[
"Merlis",
"Nadav",
""
],
[
"Mannor",
"Shie",
""
]
] |
We consider the Multi-Armed Bandit (MAB) problem, where an agent sequentially chooses actions and observes rewards for the actions it took. While the majority of algorithms try to minimize the regret, i.e., the cumulative difference between the reward of the best action and the agent's action, this criterion might lead to undesirable results. For example, in large problems, or when the interaction with the environment is brief, finding an optimal arm is infeasible, and regret-minimizing algorithms tend to over-explore. To overcome this issue, algorithms for such settings should instead focus on playing near-optimal arms. To this end, we suggest a new, more lenient, regret criterion that ignores suboptimality gaps smaller than some $\epsilon$. We then present a variant of the Thompson Sampling (TS) algorithm, called $\epsilon$-TS, and prove its asymptotic optimality in terms of the lenient regret. Importantly, we show that when the mean of the optimal arm is high enough, the lenient regret of $\epsilon$-TS is bounded by a constant. Finally, we show that $\epsilon$-TS can be applied to improve the performance when the agent knows a lower bound of the suboptimality gaps.
|
2202.07464
|
Ruoxi Chen
|
Haibo Jin, Ruoxi Chen, Haibin Zheng, Jinyin Chen, Yao Cheng, Yue Yu,
Xianglong Liu
|
Excitement Surfeited Turns to Errors: Deep Learning Testing Framework
Based on Excitable Neurons
|
32 pages
| null | null | null |
cs.LG cs.AI cs.CR cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite impressive capabilities and outstanding performance, deep neural
networks (DNNs) have attracted increasing public concern about their security
problems, due to their frequently occurring erroneous behaviors. Therefore, it
is necessary to conduct systematic testing of DNNs before they are deployed in
real-world applications. Existing testing methods have provided fine-grained
metrics based on neuron coverage and proposed various approaches to improve
such metrics. However, it has been gradually realized that a higher neuron
coverage does \textit{not} necessarily represent better capabilities in
identifying defects that lead to errors. Besides, coverage-guided methods
cannot hunt errors caused by faulty training procedures, so the robustness
improvement of DNNs obtained by retraining on these testing examples is
unsatisfactory. To address this challenge, we introduce the concept of
excitable neurons based on the Shapley value and design a novel white-box
testing framework for DNNs, namely DeepSensor. It is motivated by our
observation that neurons bearing larger responsibility for model loss changes
under small perturbations are more likely related to incorrect corner cases
caused by potential defects. By maximizing the number of excitable neurons with
respect to various wrong behaviors of models, DeepSensor can generate testing
examples that effectively trigger more errors caused by adversarial inputs,
polluted data and incomplete training. Extensive experiments on both image
classification models and speaker recognition models have demonstrated the
superiority of DeepSensor.
|
[
{
"created": "Sat, 12 Feb 2022 16:44:15 GMT",
"version": "v1"
},
{
"created": "Sun, 20 Nov 2022 14:50:45 GMT",
"version": "v2"
}
] |
2022-11-22
|
[
[
"Jin",
"Haibo",
""
],
[
"Chen",
"Ruoxi",
""
],
[
"Zheng",
"Haibin",
""
],
[
"Chen",
"Jinyin",
""
],
[
"Cheng",
"Yao",
""
],
[
"Yu",
"Yue",
""
],
[
"Liu",
"Xianglong",
""
]
] |
Despite impressive capabilities and outstanding performance, deep neural networks (DNNs) have attracted increasing public concern about their security problems, due to their frequently occurring erroneous behaviors. Therefore, it is necessary to conduct systematic testing of DNNs before they are deployed in real-world applications. Existing testing methods have provided fine-grained metrics based on neuron coverage and proposed various approaches to improve such metrics. However, it has been gradually realized that a higher neuron coverage does \textit{not} necessarily represent better capabilities in identifying defects that lead to errors. Besides, coverage-guided methods cannot hunt errors caused by faulty training procedures, so the robustness improvement of DNNs obtained by retraining on these testing examples is unsatisfactory. To address this challenge, we introduce the concept of excitable neurons based on the Shapley value and design a novel white-box testing framework for DNNs, namely DeepSensor. It is motivated by our observation that neurons bearing larger responsibility for model loss changes under small perturbations are more likely related to incorrect corner cases caused by potential defects. By maximizing the number of excitable neurons with respect to various wrong behaviors of models, DeepSensor can generate testing examples that effectively trigger more errors caused by adversarial inputs, polluted data and incomplete training. Extensive experiments on both image classification models and speaker recognition models have demonstrated the superiority of DeepSensor.
|
2406.03227
|
Martin Langhammer
|
Martin Langhammer (1 and 2), George A. Constantinides (2) ((1) Intel
Corporation, (2) Imperial College London)
|
Soft GPGPU versus IP cores: Quantifying and Reducing the Performance Gap
| null | null | null | null |
cs.AR
|
http://creativecommons.org/licenses/by/4.0/
|
eGPU, a recently-reported soft GPGPU for FPGAs, has demonstrated very high
clock frequencies (more than 750 MHz) and small footprint. This means that for
the first time, commercial soft processors may be competitive for the kind of
heavy numerical computations common in FPGA-based digital signal processing. In
this paper we take a deep dive into the performance of the eGPU family on FFT
computation, in order to quantify the performance gap between state-of-the-art
soft processors and commercial IP cores specialized for this task. In the
process, we propose two novel architectural features for the eGPU that improve
the efficiency of the design by 50\% when executing the FFTs. The end-result is
that our modified GPGPU takes only 3 times the performance-area product of a
specialized IP core, yet as a programmable processor is able to execute
arbitrary software-defined algorithms. Further comparison to Nvidia A100 GPGPUs
demonstrates the superior efficiency of eGPU on FFTs of the size studied (256
to 4096-point).
|
[
{
"created": "Wed, 5 Jun 2024 13:04:25 GMT",
"version": "v1"
}
] |
2024-06-06
|
[
[
"Langhammer",
"Martin",
"",
"1 and 2"
],
[
"Constantinides",
"George A.",
""
]
] |
eGPU, a recently-reported soft GPGPU for FPGAs, has demonstrated very high clock frequencies (more than 750 MHz) and small footprint. This means that for the first time, commercial soft processors may be competitive for the kind of heavy numerical computations common in FPGA-based digital signal processing. In this paper we take a deep dive into the performance of the eGPU family on FFT computation, in order to quantify the performance gap between state-of-the-art soft processors and commercial IP cores specialized for this task. In the process, we propose two novel architectural features for the eGPU that improve the efficiency of the design by 50\% when executing the FFTs. The end-result is that our modified GPGPU takes only 3 times the performance-area product of a specialized IP core, yet as a programmable processor is able to execute arbitrary software-defined algorithms. Further comparison to Nvidia A100 GPGPUs demonstrates the superior efficiency of eGPU on FFTs of the size studied (256 to 4096-point).
|
2211.11156
|
Ankit Chakraborty
|
Ankit Chakraborty and Georg May
|
A Continuous $hp-$Mesh Model for Discontinuous Petrov-Galerkin Finite
Element Schemes with Optimal Test Functions
| null | null | null | null |
cs.CE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present an anisotropic $hp-$mesh adaptation strategy using a continuous
mesh model for discontinuous Petrov-Galerkin (DPG) finite element schemes with
optimal test functions, extending our previous work on $h-$adaptation. The
proposed strategy utilizes the inbuilt residual-based error estimator of the
DPG discretization to compute both the polynomial distribution and the
anisotropy of the mesh elements. In order to predict the optimal order of
approximation, we solve local problems on element patches, thus making these
computations highly parallelizable. The continuous mesh model is formulated
either with respect to the error in the solution, measured in a suitable norm,
or with respect to certain admissible target functionals. We demonstrate the
performance of the proposed strategy using several numerical examples on
triangular grids.
Keywords: Discontinuous Petrov-Galerkin, Continuous mesh models, $hp-$
adaptations, Anisotropy
|
[
{
"created": "Mon, 21 Nov 2022 02:51:16 GMT",
"version": "v1"
}
] |
2022-11-22
|
[
[
"Chakraborty",
"Ankit",
""
],
[
"May",
"Georg",
""
]
] |
We present an anisotropic $hp-$mesh adaptation strategy using a continuous mesh model for discontinuous Petrov-Galerkin (DPG) finite element schemes with optimal test functions, extending our previous work on $h-$adaptation. The proposed strategy utilizes the inbuilt residual-based error estimator of the DPG discretization to compute both the polynomial distribution and the anisotropy of the mesh elements. In order to predict the optimal order of approximation, we solve local problems on element patches, thus making these computations highly parallelizable. The continuous mesh model is formulated either with respect to the error in the solution, measured in a suitable norm, or with respect to certain admissible target functionals. We demonstrate the performance of the proposed strategy using several numerical examples on triangular grids. Keywords: Discontinuous Petrov-Galerkin, Continuous mesh models, $hp-$ adaptations, Anisotropy
|
1810.00462
|
Longsheng Jiang
|
Longsheng Jiang, Yue Wang
|
A Human-Computer Interface Design for Quantitative Measure of Regret
Theory
|
6 pages, 5 figures
| null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Regret theory describes human decision-making under risk. The key to
obtaining a quantitative model of regret theory is to measure the preferences
in humans' minds when they choose among a set of options. Unlike
physical quantities, measuring psychological preference is not procedure
invariant, i.e. the readings alter when the methods change. In this work, we
alleviate this influence by choosing the procedure compatible with the way that
an individual makes a choice. We believe the resulting model is closer to the
nature of human decision-making. The preference elicitation process is
decomposed into a series of short surveys to reduce cognitive workload and
increase response accuracy. To make the questions natural and familiar to the
subjects, we follow the insight that humans generate, quantify and communicate
preference in natural language. The fuzzy-set theory is hence utilized to model
responses from subjects. Based on these ideas, a graphical human-computer
interface (HCI) is designed to articulate the information as well as to
efficiently collect human responses. The design also accounts for human
heuristics and biases, e.g. range effect and anchoring effect, to enhance its
reliability. The overall performance of the survey is satisfactory because the
measured model shows prediction accuracy equivalent to the revisit-performance
of the subjects.
|
[
{
"created": "Sun, 30 Sep 2018 20:37:22 GMT",
"version": "v1"
}
] |
2018-10-02
|
[
[
"Jiang",
"Longsheng",
""
],
[
"Wang",
"Yue",
""
]
] |
Regret theory describes human decision-making under risk. The key to obtaining a quantitative model of regret theory is to measure the preferences in humans' minds when they choose among a set of options. Unlike physical quantities, measuring psychological preference is not procedure invariant, i.e. the readings alter when the methods change. In this work, we alleviate this influence by choosing the procedure compatible with the way that an individual makes a choice. We believe the resulting model is closer to the nature of human decision-making. The preference elicitation process is decomposed into a series of short surveys to reduce cognitive workload and increase response accuracy. To make the questions natural and familiar to the subjects, we follow the insight that humans generate, quantify and communicate preference in natural language. The fuzzy-set theory is hence utilized to model responses from subjects. Based on these ideas, a graphical human-computer interface (HCI) is designed to articulate the information as well as to efficiently collect human responses. The design also accounts for human heuristics and biases, e.g. range effect and anchoring effect, to enhance its reliability. The overall performance of the survey is satisfactory because the measured model shows prediction accuracy equivalent to the revisit-performance of the subjects.
|
1911.00988
|
Bahador Saket
|
Bahador Saket, Subhajit Das, Bum Chul Kwon, Alex Endert
|
Geono-Cluster: Interactive Visual Cluster Analysis for Biologists
| null | null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Biologists often perform clustering analysis to derive meaningful patterns,
relationships, and structures from data instances and attributes. Though
clustering plays a pivotal role in biologists' data exploration, it takes
non-trivial effort for biologists to find the best grouping in their data
using existing tools. Visual cluster analysis is currently performed either
programmatically or through menus and dialogues in many tools, which require
parameter adjustments over several steps of trial-and-error. In this paper, we
introduce Geono-Cluster, a novel visual analysis tool designed to support
cluster analysis for biologists who do not have formal data science training.
Geono-Cluster enables biologists to apply their domain expertise to
clustering results by visually demonstrating what their expected clustering
outputs should look like on a small sample of data instances. The system then
predicts users' intentions and generates potential clustering results. Our
study follows the design study protocol to derive biologists' tasks and
requirements, design the system, and evaluate the system with experts on their
own dataset. Results of our study with six biologists provide initial evidence
that Geono-Cluster enables biologists to create, refine, and evaluate
clustering results to effectively analyze their data and gain data-driven
insights. At the end, we discuss lessons learned and the implications of our
study.
|
[
{
"created": "Sun, 3 Nov 2019 23:10:31 GMT",
"version": "v1"
}
] |
2019-11-05
|
[
[
"Saket",
"Bahador",
""
],
[
"Das",
"Subhajit",
""
],
[
"Kwon",
"Bum Chul",
""
],
[
"Endert",
"Alex",
""
]
] |
Biologists often perform clustering analysis to derive meaningful patterns, relationships, and structures from data instances and attributes. Though clustering plays a pivotal role in biologists' data exploration, it takes non-trivial effort for biologists to find the best grouping in their data using existing tools. Visual cluster analysis is currently performed either programmatically or through menus and dialogues in many tools, which require parameter adjustments over several steps of trial-and-error. In this paper, we introduce Geono-Cluster, a novel visual analysis tool designed to support cluster analysis for biologists who do not have formal data science training. Geono-Cluster enables biologists to apply their domain expertise to clustering results by visually demonstrating what their expected clustering outputs should look like on a small sample of data instances. The system then predicts users' intentions and generates potential clustering results. Our study follows the design study protocol to derive biologists' tasks and requirements, design the system, and evaluate the system with experts on their own dataset. Results of our study with six biologists provide initial evidence that Geono-Cluster enables biologists to create, refine, and evaluate clustering results to effectively analyze their data and gain data-driven insights. At the end, we discuss lessons learned and the implications of our study.
|
1810.03115
|
Alexandre Quemy
|
Alexandre Quemy
|
European Court of Human Right Open Data project
|
Preprint submitted to Data Mining and Knowledge Discovery
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents thirteen datasets for binary, multiclass and multilabel
classification based on the European Court of Human Rights judgments since its
creation. The interest of such datasets is explained through the prism of the
researcher, the data scientist, the citizen and the legal practitioner.
Unlike many datasets, the creation process, from the collection of raw
data to the feature transformation, is provided in the form of a collection
of fully automated and open-source scripts. This ensures reproducibility and a
high level of confidence in the processed data, which are among the most
important issues in data governance nowadays. A first experimental campaign is
performed to study some predictability properties and to establish baseline
results on popular machine learning algorithms. The results are consistently
good across the binary datasets, with an accuracy between 75.86% and
98.32% for an average accuracy of 96.45%.
|
[
{
"created": "Sun, 7 Oct 2018 09:36:27 GMT",
"version": "v1"
},
{
"created": "Mon, 4 Feb 2019 16:09:14 GMT",
"version": "v2"
}
] |
2019-02-05
|
[
[
"Quemy",
"Alexandre",
""
]
] |
This paper presents thirteen datasets for binary, multiclass and multilabel classification based on the European Court of Human Rights judgments since its creation. The interest of such datasets is explained through the prism of the researcher, the data scientist, the citizen and the legal practitioner. Unlike many datasets, the creation process, from the collection of raw data to the feature transformation, is provided in the form of a collection of fully automated and open-source scripts. This ensures reproducibility and a high level of confidence in the processed data, which are among the most important issues in data governance nowadays. A first experimental campaign is performed to study some predictability properties and to establish baseline results on popular machine learning algorithms. The results are consistently good across the binary datasets, with an accuracy between 75.86% and 98.32% for an average accuracy of 96.45%.
|
1205.6846
|
Hassan Mansour
|
Hassan Mansour and Ozgur Yilmaz
|
Support driven reweighted $\ell_1$ minimization
|
Proc. of the IEEE International Conference on Acoustics, Speech, and
Signal Processing (ICASSP), March, 2012
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a support driven reweighted $\ell_1$ minimization
algorithm (SDRL1) that solves a sequence of weighted $\ell_1$ problems and
relies on the support estimate accuracy. Our SDRL1 algorithm is related to the
IRL1 algorithm proposed by Cand{\`e}s, Wakin, and Boyd. We demonstrate that it
is sufficient to find support estimates with \emph{good} accuracy and apply
constant weights instead of using the inverse coefficient magnitudes to achieve
gains similar to those of IRL1. We then prove that given a support estimate
with sufficient accuracy, if the signal decays according to a specific rate,
the solution to the weighted $\ell_1$ minimization problem results in a support
estimate with higher accuracy than the initial estimate. We also show that
under certain conditions, it is possible to achieve higher estimate accuracy
when the intersection of support estimates is considered. We demonstrate the
performance of SDRL1 through numerical simulations and compare it with that of
IRL1 and standard $\ell_1$ minimization.
|
[
{
"created": "Wed, 30 May 2012 22:16:18 GMT",
"version": "v1"
}
] |
2012-06-01
|
[
[
"Mansour",
"Hassan",
""
],
[
"Yilmaz",
"Ozgur",
""
]
] |
In this paper, we propose a support driven reweighted $\ell_1$ minimization algorithm (SDRL1) that solves a sequence of weighted $\ell_1$ problems and relies on the support estimate accuracy. Our SDRL1 algorithm is related to the IRL1 algorithm proposed by Cand{\`e}s, Wakin, and Boyd. We demonstrate that it is sufficient to find support estimates with \emph{good} accuracy and apply constant weights instead of using the inverse coefficient magnitudes to achieve gains similar to those of IRL1. We then prove that given a support estimate with sufficient accuracy, if the signal decays according to a specific rate, the solution to the weighted $\ell_1$ minimization problem results in a support estimate with higher accuracy than the initial estimate. We also show that under certain conditions, it is possible to achieve higher estimate accuracy when the intersection of support estimates is considered. We demonstrate the performance of SDRL1 through numerical simulations and compare it with that of IRL1 and standard $\ell_1$ minimization.
|
1210.4145
|
Sacha Sokoloski
|
Sacha Sokoloski
|
A Biologically Realistic Model of Saccadic Eye Control with
Probabilistic Population Codes
| null | null | null | null |
cs.NE q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The posterior parietal cortex is believed to direct eye movements, especially
with regard to target-tracking tasks, and a number of debates exist over the
precise nature of the computations performed by the parietal cortex, with each
side supported by different sets of biological evidence. In this paper I
present a model which navigates a course between some of these debates, with
the aim of explaining some of the competing interpretations among the data
sets. In particular, rather than assuming that proprioception or efference
copies form the key source of information for computing eye position, I use a
biologically plausible implementation
of a Kalman filter to optimally combine the two signals, and a simple gain
control mechanism in order to accommodate the latency of the proprioceptive
signal. Fitting within the Bayesian brain hypothesis, the result is a Bayes
optimal solution to the eye control problem, with a range of data supporting
claims of biological plausibility.
|
[
{
"created": "Mon, 15 Oct 2012 19:33:27 GMT",
"version": "v1"
}
] |
2012-10-16
|
[
[
"Sokoloski",
"Sacha",
""
]
] |
The posterior parietal cortex is believed to direct eye movements, especially with regard to target-tracking tasks, and a number of debates exist over the precise nature of the computations performed by the parietal cortex, with each side supported by different sets of biological evidence. In this paper I present a model which navigates a course between some of these debates, with the aim of explaining some of the competing interpretations among the data sets. In particular, rather than assuming that proprioception or efference copies form the key source of information for computing eye position, I use a biologically plausible implementation of a Kalman filter to optimally combine the two signals, and a simple gain control mechanism in order to accommodate the latency of the proprioceptive signal. Fitting within the Bayesian brain hypothesis, the result is a Bayes optimal solution to the eye control problem, with a range of data supporting claims of biological plausibility.
|
2211.04086
|
Anders Eklund
|
M{\aa}ns Larsson, Muhammad Usman Akbar, Anders Eklund
|
Does an ensemble of GANs lead to better performance when training
segmentation networks with synthetic images?
|
5 pages, submitted to ISBI 2023
| null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Large annotated datasets are required to train segmentation networks. In
medical imaging, it is often difficult, time consuming and expensive to create
such datasets, and it may also be difficult to share these datasets with other
researchers. Different AI models can today generate very realistic synthetic
images, which can potentially be openly shared as they do not belong to
specific persons. However, recent work has shown that using synthetic images
for training deep networks often leads to worse performance compared to using
real images. Here we demonstrate that using synthetic images and annotations
from an ensemble of 20 GANs, instead of from a single GAN, increases the Dice
score on real test images by 4.7% to 14.0% on specific classes.
|
[
{
"created": "Tue, 8 Nov 2022 08:35:15 GMT",
"version": "v1"
},
{
"created": "Sun, 12 Mar 2023 13:42:25 GMT",
"version": "v2"
}
] |
2023-03-14
|
[
[
"Larsson",
"Måns",
""
],
[
"Akbar",
"Muhammad Usman",
""
],
[
"Eklund",
"Anders",
""
]
] |
Large annotated datasets are required to train segmentation networks. In medical imaging, it is often difficult, time consuming and expensive to create such datasets, and it may also be difficult to share these datasets with other researchers. Different AI models can today generate very realistic synthetic images, which can potentially be openly shared as they do not belong to specific persons. However, recent work has shown that using synthetic images for training deep networks often leads to worse performance compared to using real images. Here we demonstrate that using synthetic images and annotations from an ensemble of 20 GANs, instead of from a single GAN, increases the Dice score on real test images by 4.7% to 14.0% on specific classes.
|
1807.02553
|
Shi Li
|
Janardhan Kulkarni and Shi Li
|
Flow-time Optimization For Concurrent Open-Shop and Precedence
Constrained Scheduling Models
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Scheduling a set of jobs over a collection of machines is a fundamental
problem that needs to be solved millions of times a day in various computing
platforms: in operating systems, in large data clusters, and in data centers.
Along with makespan, flow-time, which measures the length of time a job spends
in a system before it completes, is arguably the most important metric to
measure the performance of a scheduling algorithm. In recent years, there has
been remarkable progress in understanding flow-time-based objective functions
in diverse settings such as unrelated machines scheduling, broadcast
scheduling, multi-dimensional scheduling, to name a few.
Yet, our understanding of the flow-time objective is limited mostly to the
scenarios where jobs have simple structures; in particular, each job is a
single self-contained entity. On the other hand, in almost all real-world
applications (think of MapReduce settings, for example), jobs have more complex
structures. In this paper, we consider two classical scheduling models that
capture complex job structures: 1) concurrent open-shop scheduling and 2)
precedence constrained scheduling. Our main motivation to study these problems
specifically comes from their relevance to two scheduling problems that have
gained importance in the context of data centers: co-flow scheduling and DAG
scheduling. We design almost optimal approximation algorithms for open-shop
scheduling and precedence constrained scheduling, and show hardness results.
|
[
{
"created": "Fri, 6 Jul 2018 19:46:49 GMT",
"version": "v1"
}
] |
2018-07-10
|
[
[
"Kulkarni",
"Janardhan",
""
],
[
"Li",
"Shi",
""
]
] |
Scheduling a set of jobs over a collection of machines is a fundamental problem that needs to be solved millions of times a day in various computing platforms: in operating systems, in large data clusters, and in data centers. Along with makespan, flow-time, which measures the length of time a job spends in a system before it completes, is arguably the most important metric to measure the performance of a scheduling algorithm. In recent years, there has been remarkable progress in understanding flow-time-based objective functions in diverse settings such as unrelated machines scheduling, broadcast scheduling, multi-dimensional scheduling, to name a few. Yet, our understanding of the flow-time objective is limited mostly to the scenarios where jobs have simple structures; in particular, each job is a single self-contained entity. On the other hand, in almost all real-world applications (think of MapReduce settings, for example), jobs have more complex structures. In this paper, we consider two classical scheduling models that capture complex job structures: 1) concurrent open-shop scheduling and 2) precedence constrained scheduling. Our main motivation to study these problems specifically comes from their relevance to two scheduling problems that have gained importance in the context of data centers: co-flow scheduling and DAG scheduling. We design almost optimal approximation algorithms for open-shop scheduling and precedence constrained scheduling, and show hardness results.
|
1911.02737
|
Wei Zhang
|
Wei Zhang, Feifei Lin, Xiaodong Wang, Zhenshuang Liang, Zhen Huang
|
SubCharacter Chinese-English Neural Machine Translation with Wubi
encoding
|
10 pages, 3 figures, 7 tables
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neural machine translation (NMT) is one of the best methods for understanding
the differences in semantic rules between two languages. Especially for
Indo-European languages, subword-level models have achieved impressive results.
However, when the translation task involves Chinese, semantic granularity
remains at the word and character level, so there is still a need for a more
fine-grained translation model of Chinese. In this paper, we introduce a simple
and effective method for Chinese translation at the sub-character level. Our
approach uses the Wubi method to translate Chinese into English; byte-pair
encoding (BPE) is then applied. Our method for Chinese-English translation
eliminates the need for a complicated word segmentation algorithm during
preprocessing. Furthermore, our method allows for sub-character-level neural
translation based on recurrent neural network (RNN) architecture, without
preprocessing. The empirical results show that for Chinese-English translation
tasks, our sub-character-level model has a comparable BLEU score to the subword
model, despite having a much smaller vocabulary. Additionally, the small
vocabulary is highly advantageous for NMT model compression.
|
[
{
"created": "Thu, 7 Nov 2019 03:13:26 GMT",
"version": "v1"
}
] |
2019-11-11
|
[
[
"Zhang",
"Wei",
""
],
[
"Lin",
"Feifei",
""
],
[
"Wang",
"Xiaodong",
""
],
[
"Liang",
"Zhenshuang",
""
],
[
"Huang",
"Zhen",
""
]
] |
Neural machine translation (NMT) is one of the best methods for understanding the differences in semantic rules between two languages. Especially for Indo-European languages, subword-level models have achieved impressive results. However, when the translation task involves Chinese, semantic granularity remains at the word and character level, so there is still a need for a more fine-grained translation model of Chinese. In this paper, we introduce a simple and effective method for Chinese translation at the sub-character level. Our approach uses the Wubi method to translate Chinese into English; byte-pair encoding (BPE) is then applied. Our method for Chinese-English translation eliminates the need for a complicated word segmentation algorithm during preprocessing. Furthermore, our method allows for sub-character-level neural translation based on recurrent neural network (RNN) architecture, without preprocessing. The empirical results show that for Chinese-English translation tasks, our sub-character-level model has a comparable BLEU score to the subword model, despite having a much smaller vocabulary. Additionally, the small vocabulary is highly advantageous for NMT model compression.
|
2005.02696
|
Li Wang
|
Li Wang, Dawei Zhao, Tao Wu, Hao Fu, Zhiyu Wang, Liang Xiao, Xin Xu
and Bin Dai
|
Drosophila-Inspired 3D Moving Object Detection Based on Point Clouds
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3D moving object detection is one of the most critical tasks in dynamic scene
analysis. In this paper, we propose a novel Drosophila-inspired 3D moving
object detection method using Lidar sensors. According to the theory of
the elementary motion detector, we have developed a motion detector based on the
shallow visual neural pathway of Drosophila. This detector is sensitive to the
movement of objects and can well suppress background noise. Designing neural
circuits with different connection modes, the approach searches for motion
areas in a coarse-to-fine fashion and extracts point clouds of each motion area
to form moving object proposals. An improved 3D object detection network is
then used to estimate the point clouds of each proposal and efficiently
generate the 3D bounding boxes and the object categories. We evaluate the
proposed approach on the widely used KITTI benchmark, and it achieves
state-of-the-art performance on the task of motion detection.
|
[
{
"created": "Wed, 6 May 2020 10:04:23 GMT",
"version": "v1"
}
] |
2020-05-07
|
[
[
"Wang",
"Li",
""
],
[
"Zhao",
"Dawei",
""
],
[
"Wu",
"Tao",
""
],
[
"Fu",
"Hao",
""
],
[
"Wang",
"Zhiyu",
""
],
[
"Xiao",
"Liang",
""
],
[
"Xu",
"Xin",
""
],
[
"Dai",
"Bin",
""
]
] |
3D moving object detection is one of the most critical tasks in dynamic scene analysis. In this paper, we propose a novel Drosophila-inspired 3D moving object detection method using Lidar sensors. According to the theory of the elementary motion detector, we have developed a motion detector based on the shallow visual neural pathway of Drosophila. This detector is sensitive to the movement of objects and can well suppress background noise. Designing neural circuits with different connection modes, the approach searches for motion areas in a coarse-to-fine fashion and extracts point clouds of each motion area to form moving object proposals. An improved 3D object detection network is then used to estimate the point clouds of each proposal and efficiently generate the 3D bounding boxes and the object categories. We evaluate the proposed approach on the widely used KITTI benchmark, and it achieves state-of-the-art performance on the task of motion detection.
|
2006.00030
|
Onel Luis Alcaraz Lopez
|
Onel L. A. L\'opez and Nurul Huda Mahmood and Hirley Alves and Matti
Latva-aho
|
CSI-free vs CSI-based multi-antenna WET for massive low-power Internet
of Things
|
16 pages, 11 figures, Accepted in IEEE TWC
| null | null | null |
cs.NI cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Wireless Energy Transfer (WET) is a promising solution for powering massive
Internet of Things deployments. An important question is whether the costly
Channel State Information (CSI) acquisition procedure is necessary for optimum
performance. In this paper, we shed some light on this matter by evaluating
CSI-based and CSI-free multi-antenna WET schemes in a setup with WET in the
downlink, and periodic or Poisson-traffic Wireless Information Transfer (WIT)
in the uplink. When CSI is available, we show that a maximum ratio transmission
beamformer is close to optimum whenever the farthest node experiences at least
3 dB of power attenuation more than the remaining devices. On the other hand,
although the adopted CSI-free mechanism is not capable of providing average
harvesting gains, it does provide greater WET/WIT diversity with lower energy
requirements when compared with the CSI-based scheme. Our numerical results
show that the CSI-free scheme performs favorably under periodic traffic
conditions, but it may be deficient in the case of Poisson traffic, especially if
the setup is not optimally configured. Finally, we show the prominent
performance results when the uplink transmissions are periodic, while
highlighting the need for a minimum mean square error equalizer rather than
zero-forcing for information decoding.
|
[
{
"created": "Fri, 29 May 2020 18:38:34 GMT",
"version": "v1"
},
{
"created": "Tue, 22 Dec 2020 09:29:08 GMT",
"version": "v2"
}
] |
2020-12-23
|
[
[
"López",
"Onel L. A.",
""
],
[
"Mahmood",
"Nurul Huda",
""
],
[
"Alves",
"Hirley",
""
],
[
"Latva-aho",
"Matti",
""
]
] |
Wireless Energy Transfer (WET) is a promising solution for powering massive Internet of Things deployments. An important question is whether the costly Channel State Information (CSI) acquisition procedure is necessary for optimum performance. In this paper, we shed some light on this matter by evaluating CSI-based and CSI-free multi-antenna WET schemes in a setup with WET in the downlink, and periodic or Poisson-traffic Wireless Information Transfer (WIT) in the uplink. When CSI is available, we show that a maximum ratio transmission beamformer is close to optimum whenever the farthest node experiences at least 3 dB of power attenuation more than the remaining devices. On the other hand, although the adopted CSI-free mechanism is not capable of providing average harvesting gains, it does provide greater WET/WIT diversity with lower energy requirements when compared with the CSI-based scheme. Our numerical results show that the CSI-free scheme performs favorably under periodic traffic conditions, but it may be deficient in the case of Poisson traffic, especially if the setup is not optimally configured. Finally, we show the prominent performance results when the uplink transmissions are periodic, while highlighting the need for a minimum mean square error equalizer rather than zero-forcing for information decoding.
|
1310.3607
|
Albrecht Zimmermann
|
Albrecht Zimmermann, Sruthi Moorthy and Zifan Shi
|
Predicting college basketball match outcomes using machine learning
techniques: some results and lessons learned
| null | null | null | null |
cs.LG stat.AP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most existing work on predicting NCAAB matches has been developed in a
statistical context. Trusting the capabilities of ML techniques, particularly
classification learners, to uncover the importance of features and learn their
relationships, we evaluated a number of different paradigms on this task. In
this paper, we summarize our work, pointing out that attributes seem to be more
important than models, and that there seems to be an upper limit to predictive
quality.
|
[
{
"created": "Mon, 14 Oct 2013 09:42:54 GMT",
"version": "v1"
}
] |
2013-10-15
|
[
[
"Zimmermann",
"Albrecht",
""
],
[
"Moorthy",
"Sruthi",
""
],
[
"Shi",
"Zifan",
""
]
] |
Most existing work on predicting NCAAB matches has been developed in a statistical context. Trusting the capabilities of ML techniques, particularly classification learners, to uncover the importance of features and learn their relationships, we evaluated a number of different paradigms on this task. In this paper, we summarize our work, pointing out that attributes seem to be more important than models, and that there seems to be an upper limit to predictive quality.
|
2404.04952
|
Victor Kebande
|
Victor R. Kebande
|
The Impact of Virtual Laboratories on Active Learning and Engagement in
Cybersecurity Distance Education
|
13 pages, 4 figures
| null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Virtual Laboratories (V Labs) have in the recent past become part and parcel
of remote teaching in practical hands-on approaches, particularly in
Cybersecurity distance courses. Their potential is meant to assist learners
with hands-on practical laboratory exercises irrespective of geographical
location. Nevertheless, adopting V Labs in didactic approaches in higher
education has seen both merits and demerits. Based on this premise, this study
investigates the impact of V Labs on Active Learning (AL) and engagement in
cybersecurity distance education. A survey with a limited number of learners
and educators who have had experience with cybersecurity distance courses
that leveraged V Labs in their practical lab assignments was conducted at
Blekinge Tekniska H\"ogskola, Sweden, to assess the impact of V Labs on AL and
engagement in Cybersecurity Distance Education. 29% of the learners and 73% of
the educators responded to the remotely administered survey, whose
questionnaires showed good internal consistency based on Cronbach's Alpha; the
results showed that learners and educators had a positive perception of using V
Labs to enhance AL in cybersecurity distance education. The key focus of the
study was on AL, engagement, and problem-solving abilities when V Labs
are used. Both the learners and educators found the V Labs to be engaging,
interactive, and effective in improving their understanding of cybersecurity
concepts.
|
[
{
"created": "Sun, 7 Apr 2024 13:16:37 GMT",
"version": "v1"
}
] |
2024-04-09
|
[
[
"Kebande",
"Victor R.",
""
]
] |
Virtual Laboratories (V Labs) have in the recent past become part and parcel of remote teaching in practical hands-on approaches, particularly in Cybersecurity distance courses. Their potential is meant to assist learners with hands-on practical laboratory exercises irrespective of geographical location. Nevertheless, adopting V Labs in didactic approaches in higher education has seen both merits and demerits. Based on this premise, this study investigates the impact of V Labs on Active Learning (AL) and engagement in cybersecurity distance education. A survey with a limited number of learners and educators who have had experience with cybersecurity distance courses that leveraged V Labs in their practical lab assignments was conducted at Blekinge Tekniska H\"ogskola, Sweden, to assess the impact of V Labs on AL and engagement in Cybersecurity Distance Education. 29% of the learners and 73% of the educators responded to the remotely administered survey, whose questionnaires showed good internal consistency based on Cronbach's Alpha; the results showed that learners and educators had a positive perception of using V Labs to enhance AL in cybersecurity distance education. The key focus of the study was on AL, engagement, and problem-solving abilities when V Labs are used. Both the learners and educators found the V Labs to be engaging, interactive, and effective in improving their understanding of cybersecurity concepts.
|