Dataset schema (column statistics as reported by the dataset viewer; ⌀ marks nullable columns):

| column | type | length / size | nullable |
|---|---|---|---|
| id | string | 9-10 chars | no |
| submitter | string | 1-64 chars | yes (⌀) |
| authors | string | 4-20.7k chars | no |
| title | string | 4-246 chars | no |
| comments | string | 1-523 chars | yes (⌀) |
| journal-ref | string | 4-404 chars | yes (⌀) |
| doi | string | 11-153 chars | yes (⌀) |
| report-no | string | 2-254 chars | yes (⌀) |
| categories | string | 5-98 chars | no |
| license | string | 9 distinct values | no |
| orig_abstract | string | 14-3.35k chars | no |
| versions | list | 1-60 items | no |
| update_date | string | 10 chars | no |
| authors_parsed | list | 1-1.35k items | no |
| abstract | string | 11-3.34k chars | no |
id: 1304.3610
submitter: Hardik Parekh
authors: Hardik M. Parekh, Vipul K. Dabhi
title: Modified Soft Brood Crossover in Genetic Programming
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.NE
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Premature convergence is one of the most important issues when using Genetic Programming for data modeling. It can be avoided by improving population diversity, and intelligent genetic operators can help achieve this. Crossover is an important operator in Genetic Programming, so we analyzed a number of intelligent crossover operators and propose an algorithm that modifies the soft brood crossover operator, helping to improve population diversity and reduce premature convergence. We performed experiments on three different symbolic regression problems and compared the performance of our proposed crossover (Modified Soft Brood Crossover) with the existing soft brood crossover and subtree crossover operators.
versions: [{"created": "Fri, 12 Apr 2013 11:54:35 GMT", "version": "v1"}]
update_date: 2013-04-15
authors_parsed: [["Parekh", "Hardik M.", ""], ["Dabhi", "Vipul K.", ""]]

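The brood crossover family analyzed in the abstract above can be illustrated generically: one crossover event produces a brood of several candidate offspring, and a selection step keeps the fittest. The sketch below is a minimal, generic version of brood crossover with brood selection, using a nested-list tree representation; the representation, function names, and `fitness` callable are illustrative assumptions, not the authors' modified soft brood operator.

```python
import random

def subtree_paths(tree, path=()):
    """Enumerate index paths to every subtree of a nested-list tree."""
    yield path
    if isinstance(tree, list):
        # tree[0] is the operator; children start at index 1
        for i, child in enumerate(tree[1:], start=1):
            yield from subtree_paths(child, path + (i,))

def get_subtree(tree, path):
    for i in path:
        tree = tree[i]
    return tree

def replace_subtree(tree, path, new):
    """Return a copy of tree with the subtree at path replaced by new."""
    if not path:
        return new
    tree = list(tree)  # shallow-copy along the path; parents stay intact
    tree[path[0]] = replace_subtree(tree[path[0]], path[1:], new)
    return tree

def crossover(p1, p2, rng):
    """Swap a random subtree of p1 with a random subtree of p2; one child."""
    a = rng.choice(list(subtree_paths(p1)))
    b = rng.choice(list(subtree_paths(p2)))
    return replace_subtree(p1, a, get_subtree(p2, b))

def brood_crossover(p1, p2, fitness, brood_size=4, rng=random):
    """Create a brood of children and keep the fittest (brood selection)."""
    brood = [crossover(p1, p2, rng) for _ in range(brood_size)]
    return min(brood, key=fitness)  # convention here: lower is better
```

Generating several children per mating and selecting among them is what lets brood-style operators raise offspring quality without extra matings; "soft" variants relax this selection to preserve diversity.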
id: 2402.01423
submitter: Siyao Peng
authors: Siyao Peng, Zihang Sun, Sebastian Loftus, Barbara Plank
title: Different Tastes of Entities: Investigating Human Label Variation in Named Entity Annotations
comments: 9 pages; Accepted at UnImplicit workshop at EACL 2024
journal-ref: null
doi: null
report-no: null
categories: cs.CL
license: http://creativecommons.org/licenses/by/4.0/
abstract: Named Entity Recognition (NER) is a key information extraction task with a long-standing tradition. While recent studies address and aim to correct annotation errors via re-labeling efforts, little is known about the sources of human label variation, such as text ambiguity, annotation error, or guideline divergence. This is especially the case for high-quality datasets and for languages beyond English CoNLL03. This paper studies disagreements in expert-annotated named entity datasets for three languages: English, Danish, and Bavarian. We show that text ambiguity and artificial guideline changes are dominant factors for diverse annotations among high-quality revisions. We survey student annotations on a subset of difficult entities and substantiate the feasibility and necessity of manifold annotations for understanding named entity ambiguities from a distributional perspective.
versions: [{"created": "Fri, 2 Feb 2024 14:08:34 GMT", "version": "v1"}]
update_date: 2024-02-05
authors_parsed: [["Peng", "Siyao", ""], ["Sun", "Zihang", ""], ["Loftus", "Sebastian", ""], ["Plank", "Barbara", ""]]

id: 2402.04216
submitter: Md Ferdous Pervej
authors: Md Ferdous Pervej and Andreas F. Molisch
title: Resource-Aware Hierarchical Federated Learning in Wireless Video Caching Networks
comments: Under review for possible publication in IEEE Transactions on Wireless Communications
journal-ref: null
doi: null
report-no: null
categories: cs.NI cs.LG cs.SY eess.SY
license: http://creativecommons.org/licenses/by/4.0/
abstract: Backhaul traffic congestion caused by the video traffic of a few popular files can be alleviated by storing the to-be-requested content at various levels in wireless video caching networks. Typically, content service providers (CSPs) own the content, and users request their preferred content from the CSPs through their (wireless) internet service providers (ISPs). As these parties do not reveal their private information and business secrets, traditional techniques may not be readily usable to predict the dynamic changes in users' future demands. Motivated by this, we propose a novel resource-aware hierarchical federated learning (RawHFL) solution for predicting users' future content requests. A practical data acquisition technique is used that allows a user to update its local training dataset based on its requested content. Moreover, since networking and other computational resources are limited, we derive the convergence bound of the proposed algorithm under the assumption that only a subset of the users participates in model training. Based on this bound, we minimize a weighted utility function to jointly configure the controllable parameters so that RawHFL trains energy-efficiently under practical resource constraints. Our extensive simulation results validate the proposed algorithm's superiority, in terms of test accuracy and energy cost, over existing baselines.
versions: [{"created": "Tue, 6 Feb 2024 18:17:02 GMT", "version": "v1"}, {"created": "Wed, 19 Jun 2024 19:18:42 GMT", "version": "v2"}]
update_date: 2024-06-21
authors_parsed: [["Pervej", "Md Ferdous", ""], ["Molisch", "Andreas F.", ""]]

id: 1303.1454
submitter: Marek J. Druzdzel
authors: Marek J. Druzdzel, Herbert A. Simon
title: Causality in Bayesian Belief Networks
comments: Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993)
journal-ref: null
doi: null
report-no: UAI-P-1993-PG-3-11
categories: cs.AI
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: We address the problem of causal interpretation of the graphical structure of Bayesian belief networks (BBNs). We review the concept of causality explicated in the domain of structural equation models and show that it is applicable to BBNs. In this view, which we call mechanism-based, causality is defined within models, and causal asymmetries arise when mechanisms are placed in the context of a system. We establish the link between structural equation models and BBNs and formulate the conditions under which the latter can be given a causal interpretation.
versions: [{"created": "Wed, 6 Mar 2013 14:18:23 GMT", "version": "v1"}]
update_date: 2013-03-08
authors_parsed: [["Druzdzel", "Marek J.", ""], ["Simon", "Herbert A.", ""]]

id: 2401.11281
submitter: Kaylea Champion
authors: Kaylea Champion and Benjamin Mako Hill
title: Sources of Underproduction in Open Source Software
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.SE cs.CY cs.HC
license: http://creativecommons.org/licenses/by/4.0/
abstract: Because open source software relies on individuals who select their own tasks, it is often underproduced -- a term used by software engineering researchers to describe when a piece of software's relative quality is lower than its relative importance. We examine the social and technical factors associated with underproduction through a comparison of software packaged by the Debian GNU/Linux community. We test a series of hypotheses developed from a reading of prior research in software engineering. Although we find that software age and programming language age offer a partial explanation for variation in underproduction, we were surprised to find that the association between underproduction and package age is weaker at high levels of programming language age. With respect to maintenance efforts, we find that additional resources are not always tied to better outcomes. In particular, having higher numbers of contributors is associated with higher underproduction risk. Also, contrary to our expectations, maintainer turnover and maintenance by a declared team are not associated with lower rates of underproduction. Finally, we find that the people working on bugs in underproduced packages tend to be those who are more central to the community's collaboration network structure, although contributors' betweenness centrality (often associated with brokerage in social networks) is not associated with underproduction.
versions: [{"created": "Sat, 20 Jan 2024 17:21:24 GMT", "version": "v1"}]
update_date: 2024-01-23
authors_parsed: [["Champion", "Kaylea", ""], ["Hill", "Benjamin Mako", ""]]

id: 2310.12004
submitter: Feng Luo
authors: Feng Luo, Jinxi Xiang, Jun Zhang, Xiao Han, Wei Yang
title: Image Super-resolution Via Latent Diffusion: A Sampling-space Mixture Of Experts And Frequency-augmented Decoder Approach
comments: 15 pages, 7 figures
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: The recent use of diffusion priors, enhanced by pre-trained text-image models, has markedly elevated the performance of image super-resolution (SR). To alleviate the huge computational cost required by pixel-based diffusion SR, latent-based methods utilize a feature encoder to transform the image and then implement SR image generation in a compact latent space. Nevertheless, two major issues limit the performance of latent-based diffusion. First, the compression of the latent space usually causes reconstruction distortion. Second, the huge computational cost constrains the parameter scale of the diffusion model. To counteract these issues, we first propose a frequency compensation module that enhances the frequency components from latent space to pixel space, so that the reconstruction distortion (especially for high-frequency information) is significantly decreased. Then, we propose a Sample-Space Mixture of Experts (SS-MoE) to achieve more powerful latent-based SR, which steadily improves the capacity of the model without a significant increase in inference costs. These carefully crafted designs contribute to performance improvements on widely explored 4x blind super-resolution benchmarks and extend to large magnification factors, i.e., 8x image SR benchmarks. The code is available at https://github.com/amandaluof/moe_sr.
versions: [{"created": "Wed, 18 Oct 2023 14:39:25 GMT", "version": "v1"}, {"created": "Fri, 20 Oct 2023 16:17:58 GMT", "version": "v2"}, {"created": "Wed, 13 Dec 2023 13:08:29 GMT", "version": "v3"}]
update_date: 2023-12-14
authors_parsed: [["Luo", "Feng", ""], ["Xiang", "Jinxi", ""], ["Zhang", "Jun", ""], ["Han", "Xiao", ""], ["Yang", "Wei", ""]]

id: 1806.07842
submitter: Yuri G. Gordienko
authors: Olga Barkova, Natalia Pysarevska, Oleg Allenin, Serhii Hamotsky, Nikita Gordienko, Vladyslav Sarnatskyi, Vadym Ovcharenko, Mariia Tkachenko, Yurii Gordienko, Sergei Stirenko
title: Gamification for Education of the Digitally Native Generation by Means of Virtual Reality, Augmented Reality, Machine Learning, and Brain-Computing Interfaces in Museums
comments: 16 pages, 8 figures, http://uncommonculture.org/ojs/index.php/UC/article/view/9238
journal-ref: Uncommon Culture, vol. 7, no.1/2(13/14), pp.86-101 (2018)
doi: null
report-no: null
categories: cs.CY cs.DL cs.HC
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Particularly close attention is being paid today by researchers in social science disciplines to aspects of learning in the digital age, especially for the Digitally Native Generation. In the context of museums, the questions are: how can rich learning experiences be provided for increasingly technologically advanced young visitors, and which high-tech platforms and solutions do museums need to focus on? At the same time, the software games business is growing fast and is now finding its way into non-entertainment contexts, helping to deliver substantial benefits, particularly in education, training, research, and health. This article outlines some aspects facing Digitally Native learners in museums through an analysis of several radically new key technologies: Interactivity, Wearables, Virtual Reality, and Augmented Reality. Special attention is paid to use cases for the application of game-based scenarios via these technologies in non-leisure contexts, specifically for educational purposes in museums.
versions: [{"created": "Wed, 20 Jun 2018 17:03:52 GMT", "version": "v1"}]
update_date: 2018-06-21
authors_parsed: [["Barkova", "Olga", ""], ["Pysarevska", "Natalia", ""], ["Allenin", "Oleg", ""], ["Hamotsky", "Serhii", ""], ["Gordienko", "Nikita", ""], ["Sarnatskyi", "Vladyslav", ""], ["Ovcharenko", "Vadym", ""], ["Tkachenko", "Mariia", ""], ["Gordienko", "Yurii", ""], ["Stirenko", "Sergei", ""]]

id: 1411.2132
submitter: George Grispos
authors: George Grispos, William Bradley Glisson, J. Harold Pardue, Mike Dickson
title: Identifying User Behavior from Residual Data in Cloud-based Synchronized Apps
comments: Please cite this paper as: G. Grispos, W.B. Glisson, J.H. Pardue and M. Dickson (2014). Identifying User Behavior from Residual Data in Cloud-based Synchronized Apps. Conference on Information Systems Applied Research (CONISAR 2014), 6-9 November 2014, Baltimore Maryland, USA
journal-ref: null
doi: null
report-no: null
categories: cs.CR cs.CY
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: As the distinction between personal and organizational device usage continues to blur, the combination of applications that interact increases the need to investigate potential security issues. Although security and forensic researchers have been able to recover a variety of artifacts, empirical research has not examined a suite of application artifacts from the perspective of high-level pattern identification. This research presents a preliminary investigation into the idea that residual artifacts generated by cloud-based synchronized applications can be used to identify broad user behavior patterns. To accomplish this, the researchers conducted a single-case, pretest-posttest, quasi-experiment using a smartphone device and a suite of Google mobile applications. The contribution of this paper is two-fold. First, it provides a proof of concept of the extent to which residual data from cloud-based synchronized applications can be used to broadly identify user behavior patterns from device data patterns. Second, it highlights the need for security controls to prevent and manage information flow between BYOD mobile devices and cloud synchronization services.
Keywords: Residual Data, Cloud, Apps, Digital Forensics, BYOD
versions: [{"created": "Sat, 8 Nov 2014 16:01:33 GMT", "version": "v1"}]
update_date: 2014-11-11
authors_parsed: [["Grispos", "George", ""], ["Glisson", "William Bradley", ""], ["Pardue", "J. Harold", ""], ["Dickson", "Mike", ""]]

id: 2207.04183
submitter: Haoxuan Che
authors: Haoxuan Che and Haibo Jin and Hao Chen
title: Learning Robust Representation for Joint Grading of Ophthalmic Diseases via Adaptive Curriculum and Feature Disentanglement
comments: Accepted by MICCAI22
journal-ref: null
doi: 10.1007/978-3-031-16437-8_50
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Diabetic retinopathy (DR) and diabetic macular edema (DME) are leading causes of permanent blindness worldwide. Designing an automatic grading system with good generalization ability for DR and DME is vital in clinical practice. However, prior works either grade DR or DME independently, without considering internal correlations between them, or grade them jointly by shared feature representation, yet ignoring potential generalization issues caused by difficult samples and data bias. Aiming to address these problems, we propose a framework for joint grading with the dynamic difficulty-aware weighted loss (DAW) and the dual-stream disentangled learning architecture (DETACH). Inspired by curriculum learning, DAW learns from simple samples to difficult samples dynamically via measuring difficulty adaptively. DETACH separates features of grading tasks to avoid potential emphasis on the bias. With the addition of DAW and DETACH, the model learns robust disentangled feature representations to explore internal correlations between DR and DME and achieve better grading performance. Experiments on three benchmarks show the effectiveness and robustness of our framework under both the intra-dataset and cross-dataset tests.
versions: [{"created": "Sat, 9 Jul 2022 03:02:36 GMT", "version": "v1"}, {"created": "Sun, 26 Mar 2023 18:05:48 GMT", "version": "v2"}]
update_date: 2023-03-28
authors_parsed: [["Che", "Haoxuan", ""], ["Jin", "Haibo", ""], ["Chen", "Hao", ""]]

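The dynamic difficulty-aware weighting idea in the abstract above can be illustrated with a generic weighted cross-entropy that emphasizes currently easy samples, in the curriculum-learning spirit described. The weighting rule, function name, and `gamma` parameter below are assumptions for illustration only, not the paper's DAW formulation.

```python
import numpy as np

def difficulty_weighted_ce(probs, labels, gamma=2.0):
    """probs: (N, C) predicted class probabilities; labels: (N,) int labels.
    A sample the model already predicts confidently (high p_true) is
    treated as easy and up-weighted, so early training focuses on easy
    samples; annealing gamma toward 0 would recover plain cross-entropy."""
    p_true = probs[np.arange(len(labels)), labels]
    weights = p_true ** gamma                    # easy samples weighted up
    return np.mean(-weights * np.log(p_true + 1e-12))
```

In an actual training loop the weighting schedule would change dynamically as the measured per-sample difficulty changes, which is the "adaptive" part the abstract refers to.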
id: 2203.02167
submitter: Liang Wang
authors: Liang Wang, Wei Zhao, Zhuoyu Wei, Jingming Liu
title: SimKGC: Simple Contrastive Knowledge Graph Completion with Pre-trained Language Models
comments: ACL 2022, 14 pages
journal-ref: null
doi: null
report-no: null
categories: cs.CL
license: http://creativecommons.org/licenses/by/4.0/
abstract: Knowledge graph completion (KGC) aims to reason over known facts and infer the missing links. Text-based methods such as KGBERT (Yao et al., 2019) learn entity representations from natural language descriptions, and have the potential for inductive KGC. However, the performance of text-based methods still largely lags behind that of graph embedding-based methods like TransE (Bordes et al., 2013) and RotatE (Sun et al., 2019b). In this paper, we identify that the key issue is efficient contrastive learning. To improve the learning efficiency, we introduce three types of negatives: in-batch negatives, pre-batch negatives, and self-negatives, which act as a simple form of hard negatives. Combined with the InfoNCE loss, our proposed model SimKGC substantially outperforms embedding-based methods on several benchmark datasets. In terms of mean reciprocal rank (MRR), we advance the state-of-the-art by +19% on WN18RR, +6.8% on the Wikidata5M transductive setting, and +22% on the Wikidata5M inductive setting. Thorough analyses are conducted to gain insights into each component. Our code is available at https://github.com/intfloat/SimKGC.
versions: [{"created": "Fri, 4 Mar 2022 07:36:30 GMT", "version": "v1"}]
update_date: 2022-03-07
authors_parsed: [["Wang", "Liang", ""], ["Zhao", "Wei", ""], ["Wei", "Zhuoyu", ""], ["Liu", "Jingming", ""]]

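The in-batch negative scheme mentioned in the SimKGC abstract can be sketched with a plain InfoNCE loss: each (head, relation) embedding treats its own tail as the positive and every other tail in the batch as a negative. The NumPy sketch below is a minimal illustration under assumed shapes; the function name, temperature value, and shapes are illustrative rather than the paper's code, and pre-batch and self-negatives are omitted.

```python
import numpy as np

def info_nce_in_batch(hr, t, temperature=0.05):
    """hr: (B, d) head+relation embeddings; t: (B, d) tail embeddings.
    Row i's positive is t[i]; the other B-1 tails are in-batch negatives."""
    hr = hr / np.linalg.norm(hr, axis=1, keepdims=True)
    t = t / np.linalg.norm(t, axis=1, keepdims=True)
    logits = hr @ t.T / temperature                 # (B, B) scaled cosine sims
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))             # cross-entropy on diagonal
```

A low temperature sharpens the softmax so that hard negatives (tails similar to the positive) dominate the gradient, which is one reason contrastive-learning efficiency hinges on the choice of negatives.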
id: 1109.1325
submitter: Edith Cohen
authors: Edith Cohen and Haim Kaplan
title: Get the Most out of Your Sample: Optimal Unbiased Estimators using Partial Information
comments: This is a full version of a PODS 2011 paper
journal-ref: null
doi: null
report-no: null
categories: cs.DB cs.DS cs.NI math.ST stat.TH
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract: Random sampling is an essential tool in the processing and transmission of data. It is used to summarize data too large to store or manipulate and to meet resource constraints on bandwidth or battery power. Estimators that are applied to the sample facilitate fast approximate processing of queries posed over the original data, and the value of the sample hinges on the quality of these estimators. Our work targets data sets such as request and traffic logs and sensor measurements, where data is repeatedly collected over multiple instances: time periods, locations, or snapshots. We are interested in queries that span multiple instances, such as distinct counts and distance measures over selected records. These queries are used for applications ranging from planning to anomaly and change detection. Unbiased low-variance estimators are particularly effective, as the relative error decreases with the number of selected record keys. The Horvitz-Thompson estimator, known to minimize variance for sampling with "all or nothing" outcomes (which reveal exact values or no information about the estimated quantity), is not optimal for multi-instance operations, for which an outcome may provide partial information. We present a general, principled methodology for the derivation of (Pareto) optimal unbiased estimators over sampled instances and aim to understand its potential. We demonstrate significant improvements in the estimation accuracy of fundamental queries for common sampling schemes.
versions: [{"created": "Tue, 6 Sep 2011 23:42:06 GMT", "version": "v1"}]
update_date: 2015-03-19
authors_parsed: [["Cohen", "Edith", ""], ["Kaplan", "Haim", ""]]

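The Horvitz-Thompson estimator discussed in the abstract above admits a compact illustration: each sampled value is weighted by the inverse of its inclusion probability, which makes the estimate of the population total unbiased. The population values and probabilities below are made up for the example.

```python
import random

def horvitz_thompson_total(values, probs, rng=random):
    """Poisson sampling: item i is included independently with probability
    probs[i]; included values are inverse-probability weighted."""
    return sum(v / p for v, p in zip(values, probs) if rng.random() < p)

values = [10.0, 20.0, 30.0, 40.0]   # true total: 100.0
probs = [0.2, 0.4, 0.6, 0.8]        # unequal inclusion probabilities
rng = random.Random(0)
runs = 20000
avg = sum(horvitz_thompson_total(values, probs, rng) for _ in range(runs)) / runs
# Averaged over many draws, avg concentrates near the true total, 100.0.
```

This is the "all or nothing" regime the abstract refers to: each item's value is either observed exactly or not at all. The paper's point is that multi-instance outcomes can reveal partial information about a quantity, where Horvitz-Thompson is no longer variance-optimal.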
2406.16384
|
Jaime Corsetti
|
Jaime Corsetti, Davide Boscaini, Francesco Giuliari, Changjae Oh,
Andrea Cavallaro, Fabio Poiesi
|
High-resolution open-vocabulary object 6D pose estimation
|
Technical report. Extension of CVPR paper "Open-vocabulary object 6D
pose estimation". Project page: https://jcorsetti.github.io/oryon
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The generalisation to unseen objects in the 6D pose estimation task is very
challenging. While Vision-Language Models (VLMs) enable using natural language
descriptions to support 6D pose estimation of unseen objects, these solutions
underperform compared to model-based methods. In this work we present Horyon,
an open-vocabulary VLM-based architecture that addresses relative pose
estimation between two scenes of an unseen object, described by a textual
prompt only. We use the textual prompt to identify the unseen object in the
scenes and then obtain high-resolution multi-scale features. These features are
used to extract cross-scene matches for registration. We evaluate our model on
a benchmark with a large variety of unseen objects across four datasets, namely
REAL275, Toyota-Light, Linemod, and YCB-Video. Our method achieves
state-of-the-art performance on all datasets, outperforming the previous
best-performing approach by 12.6 in Average Recall.
|
[
{
"created": "Mon, 24 Jun 2024 07:53:46 GMT",
"version": "v1"
},
{
"created": "Thu, 11 Jul 2024 17:03:29 GMT",
"version": "v2"
}
] |
2024-07-12
|
[
[
"Corsetti",
"Jaime",
""
],
[
"Boscaini",
"Davide",
""
],
[
"Giuliari",
"Francesco",
""
],
[
"Oh",
"Changjae",
""
],
[
"Cavallaro",
"Andrea",
""
],
[
"Poiesi",
"Fabio",
""
]
] |
The generalisation to unseen objects in the 6D pose estimation task is very challenging. While Vision-Language Models (VLMs) enable using natural language descriptions to support 6D pose estimation of unseen objects, these solutions underperform compared to model-based methods. In this work we present Horyon, an open-vocabulary VLM-based architecture that addresses relative pose estimation between two scenes of an unseen object, described by a textual prompt only. We use the textual prompt to identify the unseen object in the scenes and then obtain high-resolution multi-scale features. These features are used to extract cross-scene matches for registration. We evaluate our model on a benchmark with a large variety of unseen objects across four datasets, namely REAL275, Toyota-Light, Linemod, and YCB-Video. Our method achieves state-of-the-art performance on all datasets, outperforming the previous best-performing approach by 12.6 in Average Recall.
|
1602.04860
|
Thorsten Wissmann
|
G. A. Kavvos
|
Dual-Context Calculi for Modal Logic
|
Full version of article previously presented at LICS 2017 (see
arXiv:1602.04860v4 or doi: 10.1109/LICS.2017.8005089)
|
Logical Methods in Computer Science, Volume 16, Issue 3 (August
19, 2020) lmcs:4740
|
10.23638/LMCS-16(3:10)2020
| null |
cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
We present natural deduction systems and associated modal lambda calculi for
the necessity fragments of the normal modal logics K, T, K4, GL and S4. These
systems are in the dual-context style: they feature two distinct zones of
assumptions, one of which can be thought of as modal, and the other as
intuitionistic. We show that these calculi have their roots in sequent
calculi. We then investigate their metatheory, equip them with a confluent and
strongly normalizing notion of reduction, and show that they coincide with the
usual Hilbert systems up to provability. Finally, we investigate a categorical
semantics which interprets the modality as a product-preserving functor.
|
[
{
"created": "Mon, 15 Feb 2016 23:03:24 GMT",
"version": "v1"
},
{
"created": "Sat, 7 Jan 2017 13:03:59 GMT",
"version": "v2"
},
{
"created": "Fri, 3 Mar 2017 16:21:21 GMT",
"version": "v3"
},
{
"created": "Wed, 16 Aug 2017 00:04:52 GMT",
"version": "v4"
},
{
"created": "Sun, 5 Aug 2018 00:13:05 GMT",
"version": "v5"
},
{
"created": "Mon, 30 Mar 2020 15:55:37 GMT",
"version": "v6"
},
{
"created": "Sun, 2 Aug 2020 14:14:00 GMT",
"version": "v7"
},
{
"created": "Tue, 18 Aug 2020 13:43:05 GMT",
"version": "v8"
}
] |
2023-06-22
|
[
[
"Kavvos",
"G. A.",
""
]
] |
We present natural deduction systems and associated modal lambda calculi for the necessity fragments of the normal modal logics K, T, K4, GL and S4. These systems are in the dual-context style: they feature two distinct zones of assumptions, one of which can be thought of as modal, and the other as intuitionistic. We show that these calculi have their roots in sequent calculi. We then investigate their metatheory, equip them with a confluent and strongly normalizing notion of reduction, and show that they coincide with the usual Hilbert systems up to provability. Finally, we investigate a categorical semantics which interprets the modality as a product-preserving functor.
|
2208.08982
|
Pierre-Olivier C\^ot\'e
|
Pierre-Olivier C\^ot\'e, Amin Nikanjam, Rached Bouchoucha, Foutse
Khomh
|
Quality issues in Machine Learning Software Systems
|
Accepted as a registered report by ICSME 2022
| null | null | null |
cs.SE cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Context: An increasing demand is observed in various domains to employ
Machine Learning (ML) for solving complex problems. ML models are implemented
as software components and deployed in Machine Learning Software Systems
(MLSSs). Problem: There is a strong need for ensuring the serving quality of
MLSSs. False or poor decisions of such systems can lead to malfunction of other
systems, significant financial losses, or even threats to human life. The
quality assurance of MLSSs is considered a challenging task and is currently
a hot research topic. Moreover, it is important to cover all the various
aspects of the quality in MLSSs. Objective: This paper aims to investigate the
characteristics of real quality issues in MLSSs from the viewpoint of
practitioners. This empirical study aims to identify a catalog of bad-practices
related to poor quality in MLSSs. Method: We plan to conduct a set of
interviews with practitioners/experts, believing that interviews are the best
method to retrieve their experience and practices when dealing with quality
issues. We expect that the catalog of issues developed at this step will also
help us later to identify the severity, root causes, and possible remedy for
quality issues of MLSSs, allowing us to develop efficient quality assurance
tools for ML models and MLSSs.
|
[
{
"created": "Thu, 18 Aug 2022 17:55:18 GMT",
"version": "v1"
},
{
"created": "Mon, 22 Aug 2022 17:43:10 GMT",
"version": "v2"
}
] |
2022-08-23
|
[
[
"Côté",
"Pierre-Olivier",
""
],
[
"Nikanjam",
"Amin",
""
],
[
"Bouchoucha",
"Rached",
""
],
[
"Khomh",
"Foutse",
""
]
] |
Context: An increasing demand is observed in various domains to employ Machine Learning (ML) for solving complex problems. ML models are implemented as software components and deployed in Machine Learning Software Systems (MLSSs). Problem: There is a strong need for ensuring the serving quality of MLSSs. False or poor decisions of such systems can lead to malfunction of other systems, significant financial losses, or even threats to human life. The quality assurance of MLSSs is considered a challenging task and is currently a hot research topic. Moreover, it is important to cover all the various aspects of the quality in MLSSs. Objective: This paper aims to investigate the characteristics of real quality issues in MLSSs from the viewpoint of practitioners. This empirical study aims to identify a catalog of bad-practices related to poor quality in MLSSs. Method: We plan to conduct a set of interviews with practitioners/experts, believing that interviews are the best method to retrieve their experience and practices when dealing with quality issues. We expect that the catalog of issues developed at this step will also help us later to identify the severity, root causes, and possible remedy for quality issues of MLSSs, allowing us to develop efficient quality assurance tools for ML models and MLSSs.
|
2406.18776
|
Muhammed Saeed
|
Muhammed Saeed, Peter Bourgonje, Vera Demberg
|
Implicit Discourse Relation Classification For Nigerian Pidgin
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Despite attempts to make Large Language Models multi-lingual, many of the
world's languages are still severely under-resourced. This widens the
performance gap between NLP and AI applications aimed at well-financed
languages and those aimed at less-resourced ones. In this paper, we focus on Nigerian
Pidgin (NP), which is spoken by nearly 100 million people, but has
comparatively very few NLP resources and corpora. We address the task of
Implicit Discourse Relation Classification (IDRC) and systematically compare an
approach translating NP data to English and then using a well-resourced IDRC
tool and back-projecting the labels versus creating a synthetic discourse
corpus for NP, in which we translate PDTB and project PDTB labels, and then
train an NP IDR classifier. The latter approach of learning a "native" NP
classifier outperforms our baseline by 13.27\% and 33.98\% in f$_{1}$ score for
4-way and 11-way classification, respectively.
|
[
{
"created": "Wed, 26 Jun 2024 22:10:15 GMT",
"version": "v1"
}
] |
2024-06-28
|
[
[
"Saeed",
"Muhammed",
""
],
[
"Bourgonje",
"Peter",
""
],
[
"Demberg",
"Vera",
""
]
] |
Despite attempts to make Large Language Models multi-lingual, many of the world's languages are still severely under-resourced. This widens the performance gap between NLP and AI applications aimed at well-financed languages and those aimed at less-resourced ones. In this paper, we focus on Nigerian Pidgin (NP), which is spoken by nearly 100 million people, but has comparatively very few NLP resources and corpora. We address the task of Implicit Discourse Relation Classification (IDRC) and systematically compare an approach translating NP data to English and then using a well-resourced IDRC tool and back-projecting the labels versus creating a synthetic discourse corpus for NP, in which we translate PDTB and project PDTB labels, and then train an NP IDR classifier. The latter approach of learning a "native" NP classifier outperforms our baseline by 13.27\% and 33.98\% in f$_{1}$ score for 4-way and 11-way classification, respectively.
|
2204.11509
|
Tobias Pfandzelter
|
Tobias Pfandzelter and S\"oren Henning and Trever Schirmer and Wilhelm
Hasselbring and David Bermbach
|
Streaming vs. Functions: A Cost Perspective on Cloud Event Processing
|
Accepted for Publication at the 10th IEEE International Conference on
Cloud Engineering (IC2E 2022)
| null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In cloud event processing, data generated at the edge is processed in
real-time by cloud resources. Both distributed stream processing (DSP) and
Function-as-a-Service (FaaS) have been proposed to implement such event
processing applications. FaaS emphasizes fast development and easy operation,
while DSP emphasizes efficient handling of large data volumes. Despite their
architectural differences, both can be used to model and implement
loosely-coupled job graphs.
In this paper, we consider the selection of FaaS and DSP from a cost
perspective. We implement stateless and stateful workflows from the Theodolite
benchmarking suite using cloud FaaS and DSP. In an extensive evaluation, we
show how application type, cloud service provider, and runtime environment can
influence the cost of application deployments and derive decision guidelines
for cloud engineers.
|
[
{
"created": "Mon, 25 Apr 2022 08:42:39 GMT",
"version": "v1"
},
{
"created": "Fri, 12 Aug 2022 15:28:26 GMT",
"version": "v2"
}
] |
2022-08-15
|
[
[
"Pfandzelter",
"Tobias",
""
],
[
"Henning",
"Sören",
""
],
[
"Schirmer",
"Trever",
""
],
[
"Hasselbring",
"Wilhelm",
""
],
[
"Bermbach",
"David",
""
]
] |
In cloud event processing, data generated at the edge is processed in real-time by cloud resources. Both distributed stream processing (DSP) and Function-as-a-Service (FaaS) have been proposed to implement such event processing applications. FaaS emphasizes fast development and easy operation, while DSP emphasizes efficient handling of large data volumes. Despite their architectural differences, both can be used to model and implement loosely-coupled job graphs. In this paper, we consider the selection of FaaS and DSP from a cost perspective. We implement stateless and stateful workflows from the Theodolite benchmarking suite using cloud FaaS and DSP. In an extensive evaluation, we show how application type, cloud service provider, and runtime environment can influence the cost of application deployments and derive decision guidelines for cloud engineers.
|
1403.1279
|
Hamid A. Toussi
|
Hamid A. Toussi and Bahram Sadeghi Bigham
|
Design, Implementation and Evaluation of MTBDD based Fuzzy Sets and
Binary Fuzzy Relations
|
A shorter version was published in Proceeding of International
Conference on Computer, Information Technology and Digital Media, Tehran,
Iran, 2013
| null | null | null |
cs.DS cs.SC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For fast and efficient analysis of large sets of fuzzy data, elimination of
redundancies in the memory representation is needed. We used MTBDDs as the
underlying data-structure to represent fuzzy sets and binary fuzzy relations.
This leads to elimination of redundancies in the representation, fewer
computations, and faster analyses. We have also extended a BDD package (BuDDy)
to support MTBDDs in general and fuzzy sets and relations in particular.
Different fuzzy operations such as max, min and max-min composition were
implemented based on our representation. Effectiveness of our representation is
shown by applying it to the fuzzy connectedness and image segmentation problem.
Compared to a base implementation, the running time of our MTBDD based
implementation was faster (in our test cases) by a factor ranging from 2 to 27.
Also, when the MTBDD based data-structure was employed, the memory needed to
represent the final results was improved by a factor ranging from 37.9 to
265.5.
|
[
{
"created": "Wed, 5 Mar 2014 21:48:58 GMT",
"version": "v1"
}
] |
2014-03-07
|
[
[
"Toussi",
"Hamid A.",
""
],
[
"Bigham",
"Bahram Sadeghi",
""
]
] |
For fast and efficient analysis of large sets of fuzzy data, elimination of redundancies in the memory representation is needed. We used MTBDDs as the underlying data-structure to represent fuzzy sets and binary fuzzy relations. This leads to elimination of redundancies in the representation, fewer computations, and faster analyses. We have also extended a BDD package (BuDDy) to support MTBDDs in general and fuzzy sets and relations in particular. Different fuzzy operations such as max, min and max-min composition were implemented based on our representation. Effectiveness of our representation is shown by applying it to the fuzzy connectedness and image segmentation problem. Compared to a base implementation, the running time of our MTBDD based implementation was faster (in our test cases) by a factor ranging from 2 to 27. Also, when the MTBDD based data-structure was employed, the memory needed to represent the final results was improved by a factor ranging from 37.9 to 265.5.
|
1302.6840
|
Runping Qi
|
Runping Qi, Nevin Lianwen Zhang, David L. Poole
|
Solving Asymmetric Decision Problems with Influence Diagrams
|
Appears in Proceedings of the Tenth Conference on Uncertainty in
Artificial Intelligence (UAI1994)
| null | null |
UAI-P-1994-PG-491-497
|
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While influence diagrams have many advantages as a representation framework
for Bayesian decision problems, they have a serious drawback in handling
asymmetric decision problems. To be represented in an influence diagram, an
asymmetric decision problem must be symmetrized. A considerable amount of
unnecessary computation may be involved when a symmetrized influence diagram is
evaluated by conventional algorithms. In this paper we present an approach for
avoiding such unnecessary computation in influence diagram evaluation.
|
[
{
"created": "Wed, 27 Feb 2013 14:19:16 GMT",
"version": "v1"
}
] |
2013-02-28
|
[
[
"Qi",
"Runping",
""
],
[
"Zhang",
"Nevin Lianwen",
""
],
[
"Poole",
"David L.",
""
]
] |
While influence diagrams have many advantages as a representation framework for Bayesian decision problems, they have a serious drawback in handling asymmetric decision problems. To be represented in an influence diagram, an asymmetric decision problem must be symmetrized. A considerable amount of unnecessary computation may be involved when a symmetrized influence diagram is evaluated by conventional algorithms. In this paper we present an approach for avoiding such unnecessary computation in influence diagram evaluation.
|
1811.03196
|
Yanchun Xie
|
Yanchun Xie, Jimin Xiao, Kaizhu Huang, Jeyarajan Thiyagalingam, Yao
Zhao
|
Correlation Filter Selection for Visual Tracking Using Reinforcement
Learning
|
13 pages, 11 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Correlation filter has been proven to be an effective tool for a number of
approaches in visual tracking, particularly for seeking a good balance between
tracking accuracy and speed. However, correlation filter based models are
susceptible to wrong updates stemming from inaccurate tracking results. To
date, little effort has been devoted towards handling the correlation filter
update problem. In this paper, we propose a novel approach to address the
correlation filter update problem. In our approach, we update and maintain
multiple correlation filter models in parallel, and we use deep reinforcement
learning for the selection of an optimal correlation filter model among them.
To facilitate the decision process in an efficient manner, we propose a
decision-net to deal with target appearance modeling, which is trained through
hundreds of challenging videos using proximal policy optimization and a
lightweight learning network. An exhaustive evaluation of the proposed approach
on the OTB100 and OTB2013 benchmarks shows that the approach is effective enough
to achieve the average success rate of 62.3% and the average precision score of
81.2%, both exceeding the performance of traditional correlation filter based
trackers.
|
[
{
"created": "Thu, 8 Nov 2018 00:24:42 GMT",
"version": "v1"
}
] |
2018-11-09
|
[
[
"Xie",
"Yanchun",
""
],
[
"Xiao",
"Jimin",
""
],
[
"Huang",
"Kaizhu",
""
],
[
"Thiyagalingam",
"Jeyarajan",
""
],
[
"Zhao",
"Yao",
""
]
] |
Correlation filter has been proven to be an effective tool for a number of approaches in visual tracking, particularly for seeking a good balance between tracking accuracy and speed. However, correlation filter based models are susceptible to wrong updates stemming from inaccurate tracking results. To date, little effort has been devoted towards handling the correlation filter update problem. In this paper, we propose a novel approach to address the correlation filter update problem. In our approach, we update and maintain multiple correlation filter models in parallel, and we use deep reinforcement learning for the selection of an optimal correlation filter model among them. To facilitate the decision process in an efficient manner, we propose a decision-net to deal with target appearance modeling, which is trained through hundreds of challenging videos using proximal policy optimization and a lightweight learning network. An exhaustive evaluation of the proposed approach on the OTB100 and OTB2013 benchmarks shows that the approach is effective enough to achieve the average success rate of 62.3% and the average precision score of 81.2%, both exceeding the performance of traditional correlation filter based trackers.
|
2305.18193
|
Igor Spasojevic
|
Igor Spasojevic, Xu Liu, Alejandro Ribeiro, George J. Pappas, Vijay
Kumar
|
Active Collaborative Localization in Heterogeneous Robot Teams
|
To appear in Robotics: Science and Systems (RSS) 2023
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Accurate and robust state estimation is critical for autonomous navigation of
robot teams. This task is especially challenging for large groups of size,
weight, and power (SWAP) constrained aerial robots operating in
perceptually-degraded GPS-denied environments. We can, however, actively
increase the amount of perceptual information available to such robots by
augmenting them with a small number of more expensive, but less
resource-constrained, agents. Specifically, the latter can serve as sources of
perceptual information themselves. In this paper, we study the problem of
optimally positioning (and potentially navigating) a small number of more
capable agents to enhance the perceptual environment for their
lightweight, inexpensive teammates that only need to rely on cameras and IMUs.
We propose a numerically robust, computationally efficient approach to solve
this problem via nonlinear optimization. Our method outperforms the standard
approach based on the greedy algorithm, while matching the accuracy of a
heuristic evolutionary scheme for global optimization at a fraction of its
running time. Ultimately, we validate our solution in both photorealistic
simulations and real-world experiments. In these experiments, we use
lidar-based autonomous ground vehicles as the more capable agents, and
vision-based aerial robots as their SWAP-constrained teammates. Our method is
able to reduce drift in visual-inertial odometry by as much as 90%, and it
outperforms random positioning of lidar-equipped agents by a significant
margin. Furthermore, our method can be generalized to different types of robot
teams with heterogeneous perception capabilities. It has a wide range of
applications, such as surveying and mapping challenging dynamic environments,
and enabling resilience to large-scale perturbations that can be caused by
earthquakes or storms.
|
[
{
"created": "Mon, 29 May 2023 16:44:06 GMT",
"version": "v1"
}
] |
2023-05-30
|
[
[
"Spasojevic",
"Igor",
""
],
[
"Liu",
"Xu",
""
],
[
"Ribeiro",
"Alejandro",
""
],
[
"Pappas",
"George J.",
""
],
[
"Kumar",
"Vijay",
""
]
] |
Accurate and robust state estimation is critical for autonomous navigation of robot teams. This task is especially challenging for large groups of size, weight, and power (SWAP) constrained aerial robots operating in perceptually-degraded GPS-denied environments. We can, however, actively increase the amount of perceptual information available to such robots by augmenting them with a small number of more expensive, but less resource-constrained, agents. Specifically, the latter can serve as sources of perceptual information themselves. In this paper, we study the problem of optimally positioning (and potentially navigating) a small number of more capable agents to enhance the perceptual environment for their lightweight, inexpensive teammates that only need to rely on cameras and IMUs. We propose a numerically robust, computationally efficient approach to solve this problem via nonlinear optimization. Our method outperforms the standard approach based on the greedy algorithm, while matching the accuracy of a heuristic evolutionary scheme for global optimization at a fraction of its running time. Ultimately, we validate our solution in both photorealistic simulations and real-world experiments. In these experiments, we use lidar-based autonomous ground vehicles as the more capable agents, and vision-based aerial robots as their SWAP-constrained teammates. Our method is able to reduce drift in visual-inertial odometry by as much as 90%, and it outperforms random positioning of lidar-equipped agents by a significant margin. Furthermore, our method can be generalized to different types of robot teams with heterogeneous perception capabilities. It has a wide range of applications, such as surveying and mapping challenging dynamic environments, and enabling resilience to large-scale perturbations that can be caused by earthquakes or storms.
|
1608.02257
|
Chang Liu
|
Chang Liu, Bo Li, Yevgeniy Vorobeychik, Alina Oprea
|
Robust High-Dimensional Linear Regression
| null | null | null | null |
cs.LG cs.CR stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The effectiveness of supervised learning techniques has made them ubiquitous
in research and practice. In high-dimensional settings, supervised learning
commonly relies on dimensionality reduction to improve performance and identify
the most important factors in predicting outcomes. However, the economic
importance of learning has made it a natural target for adversarial
manipulation of training data, which we term poisoning attacks. Prior
approaches to dealing with robust supervised learning rely on strong
assumptions about the nature of the feature matrix, such as feature
independence and sub-Gaussian noise with low variance. We propose an integrated
method for robust regression that relaxes these assumptions, assuming only that
the feature matrix can be well approximated by a low-rank matrix. Our
techniques integrate improved robust low-rank matrix approximation and robust
principal component regression, and yield strong performance guarantees.
Moreover, we experimentally show that our methods significantly outperform
state of the art both in running time and prediction error.
|
[
{
"created": "Sun, 7 Aug 2016 19:03:52 GMT",
"version": "v1"
},
{
"created": "Tue, 9 Aug 2016 20:20:17 GMT",
"version": "v2"
}
] |
2016-08-11
|
[
[
"Liu",
"Chang",
""
],
[
"Li",
"Bo",
""
],
[
"Vorobeychik",
"Yevgeniy",
""
],
[
"Oprea",
"Alina",
""
]
] |
The effectiveness of supervised learning techniques has made them ubiquitous in research and practice. In high-dimensional settings, supervised learning commonly relies on dimensionality reduction to improve performance and identify the most important factors in predicting outcomes. However, the economic importance of learning has made it a natural target for adversarial manipulation of training data, which we term poisoning attacks. Prior approaches to dealing with robust supervised learning rely on strong assumptions about the nature of the feature matrix, such as feature independence and sub-Gaussian noise with low variance. We propose an integrated method for robust regression that relaxes these assumptions, assuming only that the feature matrix can be well approximated by a low-rank matrix. Our techniques integrate improved robust low-rank matrix approximation and robust principal component regression, and yield strong performance guarantees. Moreover, we experimentally show that our methods significantly outperform state of the art both in running time and prediction error.
|
2407.19547
|
Yushi Huang
|
Yushi Huang, Ruihao Gong, Xianglong Liu, Jing Liu, Yuhang Li, Jiwen
Lu, Dacheng Tao
|
Temporal Feature Matters: A Framework for Diffusion Model Quantization
|
arXiv admin note: substantial text overlap with arXiv:2311.16503
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Diffusion models, widely used for image generation, face significant
challenges related to their broad applicability due to prolonged inference
times and high memory demands. Efficient Post-Training Quantization (PTQ) is
crucial to address these issues. However, unlike traditional models, diffusion
models critically rely on the time-step for the multi-round denoising.
Typically, each time-step is encoded into a hypersensitive temporal feature by
several modules. Despite this, existing PTQ methods do not optimize these
modules individually. Instead, they employ unsuitable reconstruction objectives
and complex calibration methods, leading to significant disturbances in the
temporal feature and denoising trajectory, as well as reduced compression
efficiency. To address these challenges, we introduce a novel quantization
framework that includes three strategies: 1) TIB-based Maintenance: Based on
our innovative Temporal Information Block (TIB) definition, Temporal
Information-aware Reconstruction (TIAR) and Finite Set Calibration (FSC) are
developed to efficiently align original temporal features. 2) Cache-based
Maintenance: Instead of indirect and complex optimization for the related
modules, pre-computing and caching quantized counterparts of temporal features
are developed to minimize errors. 3) Disturbance-aware Selection: Employ
temporal feature errors to guide a fine-grained selection between the two
maintenance strategies for further disturbance reduction. This framework
preserves most of the temporal information and ensures high-quality end-to-end
generation. Extensive testing on various datasets, diffusion models and
hardware confirms our superior performance and acceleration.
|
[
{
"created": "Sun, 28 Jul 2024 17:46:15 GMT",
"version": "v1"
},
{
"created": "Wed, 7 Aug 2024 20:43:10 GMT",
"version": "v2"
}
] |
2024-08-09
|
[
[
"Huang",
"Yushi",
""
],
[
"Gong",
"Ruihao",
""
],
[
"Liu",
"Xianglong",
""
],
[
"Liu",
"Jing",
""
],
[
"Li",
"Yuhang",
""
],
[
"Lu",
"Jiwen",
""
],
[
"Tao",
"Dacheng",
""
]
] |
Diffusion models, widely used for image generation, face significant challenges related to their broad applicability due to prolonged inference times and high memory demands. Efficient Post-Training Quantization (PTQ) is crucial to address these issues. However, unlike traditional models, diffusion models critically rely on the time-step for the multi-round denoising. Typically, each time-step is encoded into a hypersensitive temporal feature by several modules. Despite this, existing PTQ methods do not optimize these modules individually. Instead, they employ unsuitable reconstruction objectives and complex calibration methods, leading to significant disturbances in the temporal feature and denoising trajectory, as well as reduced compression efficiency. To address these challenges, we introduce a novel quantization framework that includes three strategies: 1) TIB-based Maintenance: Based on our innovative Temporal Information Block (TIB) definition, Temporal Information-aware Reconstruction (TIAR) and Finite Set Calibration (FSC) are developed to efficiently align original temporal features. 2) Cache-based Maintenance: Instead of indirect and complex optimization for the related modules, pre-computing and caching quantized counterparts of temporal features are developed to minimize errors. 3) Disturbance-aware Selection: Employ temporal feature errors to guide a fine-grained selection between the two maintenance strategies for further disturbance reduction. This framework preserves most of the temporal information and ensures high-quality end-to-end generation. Extensive testing on various datasets, diffusion models and hardware confirms our superior performance and acceleration.
|
1412.3709
|
Abel Gonzalez-Garcia
|
Abel Gonzalez-Garcia, Alexander Vezhnevets, Vittorio Ferrari
|
An active search strategy for efficient object class detection
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Object class detectors typically apply a window classifier to all the windows
in a large set, either in a sliding window manner or using object proposals. In
this paper, we develop an active search strategy that sequentially chooses the
next window to evaluate based on all the information gathered before. This
results in a substantial reduction in the number of classifier evaluations and
in a more elegant approach in general. Our search strategy is guided by two
forces. First, we exploit context as the statistical relation between the
appearance of a window and its location relative to the object, as observed in
the training set. This enables the search to jump across distant regions in the image
(e.g. observing a sky region suggests that cars might be far below) and is done
efficiently in a Random Forest framework. Second, we exploit the score of the
classifier to attract the search to promising areas surrounding a highly scored
window, and to keep away from areas near low scored ones. Our search strategy
can be applied on top of any classifier as it treats it as a black-box. In
experiments with R-CNN on the challenging SUN2012 dataset, our method matches
the detection accuracy of evaluating all windows independently, while
evaluating 9x fewer windows.
|
[
{
"created": "Thu, 11 Dec 2014 16:23:38 GMT",
"version": "v1"
},
{
"created": "Tue, 14 Apr 2015 11:29:51 GMT",
"version": "v2"
}
] |
2015-04-15
|
[
[
"Gonzalez-Garcia",
"Abel",
""
],
[
"Vezhnevets",
"Alexander",
""
],
[
"Ferrari",
"Vittorio",
""
]
] |
Object class detectors typically apply a window classifier to all the windows in a large set, either in a sliding window manner or using object proposals. In this paper, we develop an active search strategy that sequentially chooses the next window to evaluate based on all the information gathered before. This results in a substantial reduction in the number of classifier evaluations and in a more elegant approach in general. Our search strategy is guided by two forces. First, we exploit context as the statistical relation between the appearance of a window and its location relative to the object, as observed in the training set. This enables jumping across distant regions in the image (e.g. observing a sky region suggests that cars might be far below) and is done efficiently in a Random Forest framework. Second, we exploit the score of the classifier to attract the search to promising areas surrounding a highly scored window, and to keep away from areas near low scored ones. Our search strategy can be applied on top of any classifier as it treats it as a black-box. In experiments with R-CNN on the challenging SUN2012 dataset, our method matches the detection accuracy of evaluating all windows independently, while evaluating 9x fewer windows.
|
2302.13375
|
Linghao Chen
|
Linghao Chen, Yunzhou Song, Hujun Bao, Xiaowei Zhou
|
Perceiving Unseen 3D Objects by Poking the Objects
|
Accepted to ICRA 2023. Project page:
https://zju3dv.github.io/poking_perception
| null | null | null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a novel approach to interactive 3D object perception for robots.
Unlike previous perception algorithms that rely on known object models or a
large amount of annotated training data, we propose a poking-based approach
that automatically discovers and reconstructs 3D objects. The poking process
not only enables the robot to discover unseen 3D objects but also produces
multi-view observations for 3D reconstruction of the objects. The reconstructed
objects are then memorized by neural networks with regular supervised learning
and can be recognized in new test images. The experiments on real-world data
show that our approach can discover and reconstruct unseen 3D objects with
high quality without supervision, and facilitate real-world applications such as
robotic grasping. The code and supplementary materials are available at the
project page: https://zju3dv.github.io/poking_perception.
|
[
{
"created": "Sun, 26 Feb 2023 18:22:13 GMT",
"version": "v1"
}
] |
2023-02-28
|
[
[
"Chen",
"Linghao",
""
],
[
"Song",
"Yunzhou",
""
],
[
"Bao",
"Hujun",
""
],
[
"Zhou",
"Xiaowei",
""
]
] |
We present a novel approach to interactive 3D object perception for robots. Unlike previous perception algorithms that rely on known object models or a large amount of annotated training data, we propose a poking-based approach that automatically discovers and reconstructs 3D objects. The poking process not only enables the robot to discover unseen 3D objects but also produces multi-view observations for 3D reconstruction of the objects. The reconstructed objects are then memorized by neural networks with regular supervised learning and can be recognized in new test images. The experiments on real-world data show that our approach can discover and reconstruct unseen 3D objects with high quality without supervision, and facilitate real-world applications such as robotic grasping. The code and supplementary materials are available at the project page: https://zju3dv.github.io/poking_perception.
|
2406.04268
|
Michael Dennis
|
Edward Hughes, Michael Dennis, Jack Parker-Holder, Feryal Behbahani,
Aditi Mavalankar, Yuge Shi, Tom Schaul, Tim Rocktaschel
|
Open-Endedness is Essential for Artificial Superhuman Intelligence
| null | null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
In recent years there has been a tremendous surge in the general capabilities
of AI systems, mainly fuelled by training foundation models on internet-scale
data. Nevertheless, the creation of open-ended, ever self-improving AI remains
elusive. In this position paper, we argue that the ingredients are now in place
to achieve open-endedness in AI systems with respect to a human observer.
Furthermore, we claim that such open-endedness is an essential property of any
artificial superhuman intelligence (ASI). We begin by providing a concrete
formal definition of open-endedness through the lens of novelty and
learnability. We then illustrate a path towards ASI via open-ended systems
built on top of foundation models, capable of making novel, human-relevant
discoveries. We conclude by examining the safety implications of
generally-capable open-ended AI. We expect that open-ended foundation models
will prove to be an increasingly fertile and safety-critical area of research
in the near future.
|
[
{
"created": "Thu, 6 Jun 2024 17:15:02 GMT",
"version": "v1"
}
] |
2024-06-07
|
[
[
"Hughes",
"Edward",
""
],
[
"Dennis",
"Michael",
""
],
[
"Parker-Holder",
"Jack",
""
],
[
"Behbahani",
"Feryal",
""
],
[
"Mavalankar",
"Aditi",
""
],
[
"Shi",
"Yuge",
""
],
[
"Schaul",
"Tom",
""
],
[
"Rocktaschel",
"Tim",
""
]
] |
In recent years there has been a tremendous surge in the general capabilities of AI systems, mainly fuelled by training foundation models on internet-scale data. Nevertheless, the creation of open-ended, ever self-improving AI remains elusive. In this position paper, we argue that the ingredients are now in place to achieve open-endedness in AI systems with respect to a human observer. Furthermore, we claim that such open-endedness is an essential property of any artificial superhuman intelligence (ASI). We begin by providing a concrete formal definition of open-endedness through the lens of novelty and learnability. We then illustrate a path towards ASI via open-ended systems built on top of foundation models, capable of making novel, human-relevant discoveries. We conclude by examining the safety implications of generally-capable open-ended AI. We expect that open-ended foundation models will prove to be an increasingly fertile and safety-critical area of research in the near future.
|
2103.05793
|
Zhifeng Kong
|
Zhifeng Kong, Kamalika Chaudhuri
|
Universal Approximation of Residual Flows in Maximum Mean Discrepancy
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Normalizing flows are a class of flexible deep generative models that offer
easy likelihood computation. Despite their empirical success, there is little
theoretical understanding of their expressiveness. In this work, we study
residual flows, a class of normalizing flows composed of Lipschitz residual
blocks. We prove residual flows are universal approximators in maximum mean
discrepancy. We provide upper bounds on the number of residual blocks to
achieve approximation under different assumptions.
|
[
{
"created": "Wed, 10 Mar 2021 00:16:33 GMT",
"version": "v1"
},
{
"created": "Fri, 25 Jun 2021 03:38:13 GMT",
"version": "v2"
}
] |
2021-06-28
|
[
[
"Kong",
"Zhifeng",
""
],
[
"Chaudhuri",
"Kamalika",
""
]
] |
Normalizing flows are a class of flexible deep generative models that offer easy likelihood computation. Despite their empirical success, there is little theoretical understanding of their expressiveness. In this work, we study residual flows, a class of normalizing flows composed of Lipschitz residual blocks. We prove residual flows are universal approximators in maximum mean discrepancy. We provide upper bounds on the number of residual blocks to achieve approximation under different assumptions.
|
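The record above concerns residual flows: invertible maps of the form y = x + g(x) where g is a Lipschitz residual block with Lip(g) < 1. A minimal numerical sketch of why such a block is invertible — the network g, the contraction constant, and the spectral normalization below are illustrative assumptions, not the paper's construction:

```python
import numpy as np

def make_lipschitz_g(rng, dim, scale=0.5):
    """g(x) = scale * tanh(W x), with W rescaled to unit spectral norm,
    so Lip(g) <= scale < 1 (tanh is 1-Lipschitz)."""
    W = rng.standard_normal((dim, dim))
    W = W / np.linalg.norm(W, 2)  # divide by the largest singular value
    return lambda x: scale * np.tanh(W @ x)

def forward(g, x):
    return x + g(x)

def invert(g, y, n_iter=100):
    """Solve y = x + g(x) for x by Banach fixed-point iteration x <- y - g(x);
    the error shrinks geometrically because g is a contraction."""
    x = y.copy()
    for _ in range(n_iter):
        x = y - g(x)
    return x

rng = np.random.default_rng(0)
g = make_lipschitz_g(rng, dim=4)
x = rng.standard_normal(4)
x_rec = invert(g, forward(g, x))
print(float(np.max(np.abs(x - x_rec))))  # near machine precision
```

This invertibility is what makes the composition of such blocks a normalizing flow; the paper's question of *which* distributions such compositions can approximate (in maximum mean discrepancy) is a separate, theoretical matter.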
2304.10537
|
Ziyu Wan
|
Ziyu Wan, Christian Richardt, Alja\v{z} Bo\v{z}i\v{c}, Chao Li, Vijay
Rengarajan, Seonghyeon Nam, Xiaoyu Xiang, Tuotuo Li, Bo Zhu, Rakesh Ranjan,
Jing Liao
|
Learning Neural Duplex Radiance Fields for Real-Time View Synthesis
|
CVPR 2023. Project page: http://raywzy.com/NDRF
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neural radiance fields (NeRFs) enable novel view synthesis with unprecedented
visual quality. However, to render photorealistic images, NeRFs require
hundreds of deep multilayer perceptron (MLP) evaluations - for each pixel. This
is prohibitively expensive and makes real-time rendering infeasible, even on
powerful modern GPUs. In this paper, we propose a novel approach to distill and
bake NeRFs into highly efficient mesh-based neural representations that are
fully compatible with the massively parallel graphics rendering pipeline. We
represent scenes as neural radiance features encoded on a two-layer duplex
mesh, which effectively overcomes the inherent inaccuracies in 3D surface
reconstruction by learning the aggregated radiance information from a reliable
interval of ray-surface intersections. To exploit local geometric relationships
of nearby pixels, we leverage screen-space convolutions instead of the MLPs
used in NeRFs to achieve high-quality appearance. Finally, the performance of
the whole framework is further boosted by a novel multi-view distillation
optimization strategy. We demonstrate the effectiveness and superiority of our
approach via extensive experiments on a range of standard datasets.
|
[
{
"created": "Thu, 20 Apr 2023 17:59:52 GMT",
"version": "v1"
}
] |
2023-04-21
|
[
[
"Wan",
"Ziyu",
""
],
[
"Richardt",
"Christian",
""
],
[
"Božič",
"Aljaž",
""
],
[
"Li",
"Chao",
""
],
[
"Rengarajan",
"Vijay",
""
],
[
"Nam",
"Seonghyeon",
""
],
[
"Xiang",
"Xiaoyu",
""
],
[
"Li",
"Tuotuo",
""
],
[
"Zhu",
"Bo",
""
],
[
"Ranjan",
"Rakesh",
""
],
[
"Liao",
"Jing",
""
]
] |
Neural radiance fields (NeRFs) enable novel view synthesis with unprecedented visual quality. However, to render photorealistic images, NeRFs require hundreds of deep multilayer perceptron (MLP) evaluations - for each pixel. This is prohibitively expensive and makes real-time rendering infeasible, even on powerful modern GPUs. In this paper, we propose a novel approach to distill and bake NeRFs into highly efficient mesh-based neural representations that are fully compatible with the massively parallel graphics rendering pipeline. We represent scenes as neural radiance features encoded on a two-layer duplex mesh, which effectively overcomes the inherent inaccuracies in 3D surface reconstruction by learning the aggregated radiance information from a reliable interval of ray-surface intersections. To exploit local geometric relationships of nearby pixels, we leverage screen-space convolutions instead of the MLPs used in NeRFs to achieve high-quality appearance. Finally, the performance of the whole framework is further boosted by a novel multi-view distillation optimization strategy. We demonstrate the effectiveness and superiority of our approach via extensive experiments on a range of standard datasets.
|
2109.00627
|
Guangzhi Sun
|
Guangzhi Sun, Chao Zhang, Philip C. Woodland
|
Tree-constrained Pointer Generator for End-to-end Contextual Speech
Recognition
|
To appear in ASRU 2021
| null | null | null |
cs.CL cs.SD
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Contextual knowledge is important for real-world automatic speech recognition
(ASR) applications. In this paper, a novel tree-constrained pointer generator
(TCPGen) component is proposed that incorporates such knowledge as a list of
biasing words into both attention-based encoder-decoder and transducer
end-to-end ASR models in a neural-symbolic way. TCPGen structures the biasing
words into an efficient prefix tree to serve as its symbolic input and creates
a neural shortcut between the tree and the final ASR output distribution to
facilitate recognising biasing words during decoding. Systems were trained and
evaluated on the Librispeech corpus where biasing words were extracted at the
scales of an utterance, a chapter, or a book to simulate different application
scenarios. Experimental results showed that TCPGen consistently improved word
error rates (WERs) compared to the baselines, and in particular, achieved
significant WER reductions on the biasing words. TCPGen is highly efficient: it
can handle 5,000 biasing words and distractors and only add a small overhead to
memory use and computation cost.
|
[
{
"created": "Wed, 1 Sep 2021 21:41:59 GMT",
"version": "v1"
},
{
"created": "Fri, 3 Sep 2021 09:38:53 GMT",
"version": "v2"
},
{
"created": "Fri, 17 Sep 2021 15:47:21 GMT",
"version": "v3"
}
] |
2021-09-20
|
[
[
"Sun",
"Guangzhi",
""
],
[
"Zhang",
"Chao",
""
],
[
"Woodland",
"Philip C.",
""
]
] |
Contextual knowledge is important for real-world automatic speech recognition (ASR) applications. In this paper, a novel tree-constrained pointer generator (TCPGen) component is proposed that incorporates such knowledge as a list of biasing words into both attention-based encoder-decoder and transducer end-to-end ASR models in a neural-symbolic way. TCPGen structures the biasing words into an efficient prefix tree to serve as its symbolic input and creates a neural shortcut between the tree and the final ASR output distribution to facilitate recognising biasing words during decoding. Systems were trained and evaluated on the Librispeech corpus where biasing words were extracted at the scales of an utterance, a chapter, or a book to simulate different application scenarios. Experimental results showed that TCPGen consistently improved word error rates (WERs) compared to the baselines, and in particular, achieved significant WER reductions on the biasing words. TCPGen is highly efficient: it can handle 5,000 biasing words and distractors and only add a small overhead to memory use and computation cost.
|
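The abstract above describes structuring biasing words into an efficient prefix tree. A toy sketch of that symbolic component follows — a character-level trie with illustrative names; the actual TCPGen operates on word pieces inside a neural model and fuses the tree with the ASR output distribution, none of which is modeled here.

```python
def build_prefix_tree(biasing_words):
    """Trie over characters; '$' marks the end of a complete biasing word."""
    root = {}
    for word in biasing_words:
        node = root
        for ch in word:
            node = node.setdefault(ch, {})
        node["$"] = True
    return root

def valid_next(tree, prefix):
    """Symbols that can extend `prefix` toward some biasing word;
    the empty set once the prefix has left the tree."""
    node = tree
    for ch in prefix:
        if ch not in node:
            return set()
        node = node[ch]
    return set(node)

tree = build_prefix_tree(["turner", "turin", "wall"])
print(sorted(valid_next(tree, "tur")))   # ['i', 'n']
print(valid_next(tree, "xyz"))           # set()
```

During decoding, the set returned by `valid_next` is what lets the model restrict its biasing shortcut to tokens that still lie on a path to some biasing word, which is why lookups stay cheap even with thousands of entries.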
1912.06088
|
Dibya Ghosh
|
Dibya Ghosh, Abhishek Gupta, Ashwin Reddy, Justin Fu, Coline Devin,
Benjamin Eysenbach, Sergey Levine
|
Learning to Reach Goals via Iterated Supervised Learning
|
First two authors contributed equally. Code available at
https://github.com/dibyaghosh/gcsl
| null | null | null |
cs.LG cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Current reinforcement learning (RL) algorithms can be brittle and difficult
to use, especially when learning goal-reaching behaviors from sparse rewards.
Although supervised imitation learning provides a simple and stable
alternative, it requires access to demonstrations from a human supervisor. In
this paper, we study RL algorithms that use imitation learning to acquire goal
reaching policies from scratch, without the need for expert demonstrations or a
value function. In lieu of demonstrations, we leverage the property that any
trajectory is a successful demonstration for reaching the final state in that
same trajectory. We propose a simple algorithm in which an agent continually
relabels and imitates the trajectories it generates to progressively learn
goal-reaching behaviors from scratch. At each iteration, the agent collects new
trajectories using the latest policy, and maximizes the likelihood of the
actions along these trajectories under the goal that was actually reached, so
as to improve the policy. We formally show that this iterated supervised
learning procedure optimizes a bound on the RL objective, derive performance
bounds of the learned policy, and empirically demonstrate improved
goal-reaching performance and robustness over current RL algorithms in several
benchmark tasks.
|
[
{
"created": "Thu, 12 Dec 2019 17:26:47 GMT",
"version": "v1"
},
{
"created": "Fri, 13 Dec 2019 01:42:38 GMT",
"version": "v2"
},
{
"created": "Wed, 10 Jun 2020 17:22:46 GMT",
"version": "v3"
},
{
"created": "Fri, 2 Oct 2020 19:49:10 GMT",
"version": "v4"
}
] |
2020-10-06
|
[
[
"Ghosh",
"Dibya",
""
],
[
"Gupta",
"Abhishek",
""
],
[
"Reddy",
"Ashwin",
""
],
[
"Fu",
"Justin",
""
],
[
"Devin",
"Coline",
""
],
[
"Eysenbach",
"Benjamin",
""
],
[
"Levine",
"Sergey",
""
]
] |
Current reinforcement learning (RL) algorithms can be brittle and difficult to use, especially when learning goal-reaching behaviors from sparse rewards. Although supervised imitation learning provides a simple and stable alternative, it requires access to demonstrations from a human supervisor. In this paper, we study RL algorithms that use imitation learning to acquire goal reaching policies from scratch, without the need for expert demonstrations or a value function. In lieu of demonstrations, we leverage the property that any trajectory is a successful demonstration for reaching the final state in that same trajectory. We propose a simple algorithm in which an agent continually relabels and imitates the trajectories it generates to progressively learn goal-reaching behaviors from scratch. At each iteration, the agent collects new trajectories using the latest policy, and maximizes the likelihood of the actions along these trajectories under the goal that was actually reached, so as to improve the policy. We formally show that this iterated supervised learning procedure optimizes a bound on the RL objective, derive performance bounds of the learned policy, and empirically demonstrate improved goal-reaching performance and robustness over current RL algorithms in several benchmark tasks.
|
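The core loop described in the record above — relabel each collected trajectory with the goal it actually reached, then imitate it — can be sketched in a few lines. This is a toy 1-D illustration, not the authors' code; the chain environment, the random policy, and all names here are invented for the example.

```python
import random

def rollout(policy, start, horizon, rng):
    """Toy 1-D chain environment: each action is a -1/+1 step.
    Returns the visited states s_0..s_H and the actions a_0..a_{H-1}."""
    states, actions = [start], []
    s = start
    for _ in range(horizon):
        a = policy(s)
        actions.append(a)
        s = s + a
        states.append(s)
    return states, actions

def relabel(states, actions):
    """Hindsight relabeling: every trajectory is a valid demonstration for
    reaching its own final state, so emit ((state, reached_goal), action)
    supervised pairs conditioned on that goal."""
    goal = states[-1]
    return [((s, goal), a) for s, a in zip(states, actions)]

rng = random.Random(0)
random_policy = lambda s: rng.choice([-1, 1])
dataset = []
for _ in range(10):
    states, actions = rollout(random_policy, start=0, horizon=5, rng=rng)
    dataset.extend(relabel(states, actions))
print(len(dataset))  # 50 goal-conditioned (input, action) pairs: 10 rollouts x 5 steps
```

A policy trained by supervised learning on `dataset` would then generate the next batch of trajectories, which are relabeled in turn — the iterated loop the abstract describes.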
2011.00057
|
Nishant Raj
|
Harshit Jain and Nishant Raj and Suyash Mishra
|
A Sui Generis QA Approach using RoBERTa for Adverse Drug Event
Identification
| null |
BMC Bioinformatics 22, 330 (2021)
|
10.1186/s12859-021-04249-7
| null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Extraction of adverse drug events from biomedical literature and other
textual data is an important component of drug-safety monitoring, and this has
attracted the attention of many researchers in healthcare. Existing works are more
pivoted around entity-relation extraction using bidirectional long short term
memory networks (Bi-LSTM) which does not attain the best feature
representations. In this paper, we introduce a question answering framework
that exploits the robustness, masking and dynamic attention capabilities of
RoBERTa by a technique of domain adaptation and attempt to overcome the
aforementioned limitations. Our model outperforms the prior work by 9.53%
F1-Score.
|
[
{
"created": "Fri, 30 Oct 2020 19:09:48 GMT",
"version": "v1"
}
] |
2021-10-22
|
[
[
"Jain",
"Harshit",
""
],
[
"Raj",
"Nishant",
""
],
[
"Mishra",
"Suyash",
""
]
] |
Extraction of adverse drug events from biomedical literature and other textual data is an important component of drug-safety monitoring, and this has attracted the attention of many researchers in healthcare. Existing works are more pivoted around entity-relation extraction using bidirectional long short term memory networks (Bi-LSTM) which does not attain the best feature representations. In this paper, we introduce a question answering framework that exploits the robustness, masking and dynamic attention capabilities of RoBERTa by a technique of domain adaptation and attempt to overcome the aforementioned limitations. Our model outperforms the prior work by 9.53% F1-Score.
|
1810.10615
|
Guilherme Miguel Teixeira Rito
|
Guilherme Rito and Herv\'e Paulino
|
Scheduling computations with provably low synchronization overheads
| null | null | null | null |
cs.DS cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Work Stealing has been a very successful algorithm for scheduling parallel
computations, and is known to achieve high performances even for computations
exhibiting fine-grained parallelism. We present a variant of Work Stealing that provably
avoids most synchronization overheads by keeping processors' deques entirely
private by default, and only exposing work when requested by thieves. This is
the first paper that obtains bounds on the synchronization overheads that are
(essentially) independent of the total amount of work, thus corresponding to a
great improvement, in both algorithm design and theory, over state-of-the-art
Work Stealing algorithms. Consider any computation with work $T_{1}$ and critical-path
length $T_{\infty}$ executed by $P$ processors using our scheduler. Our
analysis shows that the expected execution time is $O\left(\frac{T_{1}}{P} +
T_{\infty}\right)$, and the expected synchronization overheads incurred during
the execution are at most $O\left(\left(C_{CAS} +
C_{MFence}\right)PT_{\infty}\right)$, where $C_{CAS}$ and $C_{MFence}$
respectively denote the maximum cost of executing a Compare-And-Swap
instruction and a Memory Fence instruction.
|
[
{
"created": "Wed, 24 Oct 2018 20:54:48 GMT",
"version": "v1"
},
{
"created": "Fri, 26 Apr 2019 18:32:59 GMT",
"version": "v2"
}
] |
2019-04-30
|
[
[
"Rito",
"Guilherme",
""
],
[
"Paulino",
"Hervé",
""
]
] |
Work Stealing has been a very successful algorithm for scheduling parallel computations, and is known to achieve high performances even for computations exhibiting fine-grained parallelism. We present a variant of Work Stealing that provably avoids most synchronization overheads by keeping processors' deques entirely private by default, and only exposing work when requested by thieves. This is the first paper that obtains bounds on the synchronization overheads that are (essentially) independent of the total amount of work, thus corresponding to a great improvement, in both algorithm design and theory, over state-of-the-art Work Stealing algorithms. Consider any computation with work $T_{1}$ and critical-path length $T_{\infty}$ executed by $P$ processors using our scheduler. Our analysis shows that the expected execution time is $O\left(\frac{T_{1}}{P} + T_{\infty}\right)$, and the expected synchronization overheads incurred during the execution are at most $O\left(\left(C_{CAS} + C_{MFence}\right)PT_{\infty}\right)$, where $C_{CAS}$ and $C_{MFence}$ respectively denote the maximum cost of executing a Compare-And-Swap instruction and a Memory Fence instruction.
|
2210.15082
|
Nan Wang
|
Nan Wang and Ricardo G. Sanfelice
|
A Rapidly-Exploring Random Trees Motion Planning Algorithm for Hybrid
Dynamical Systems
|
This paper has been accepted for publication at the 2022 Conference
on Decision and Control (CDC)
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes a rapidly-exploring random trees (RRT) algorithm to solve
the motion planning problem for hybrid systems. At each iteration, the proposed
algorithm, called HyRRT, randomly picks a state sample and extends the search
tree by flow or jump, which is also chosen randomly when both regimes are
possible. Through a definition of concatenation of functions defined on hybrid
time domains, we show that HyRRT is probabilistically complete, namely, the
probability of failing to find a motion plan approaches zero as the number of
iterations of the algorithm increases. This property is guaranteed under mild
conditions on the data defining the motion plan, which include a relaxation of
the usual positive clearance assumption imposed in the literature of classical
systems. The motion plan is computed through the solution of two optimization
problems, one associated with the flow and the other with the jumps of the
system. The proposed algorithm is applied to a walking robot so as to highlight
its generality and computational features.
|
[
{
"created": "Wed, 26 Oct 2022 23:36:30 GMT",
"version": "v1"
}
] |
2022-10-28
|
[
[
"Wang",
"Nan",
""
],
[
"Sanfelice",
"Ricardo G.",
""
]
] |
This paper proposes a rapidly-exploring random trees (RRT) algorithm to solve the motion planning problem for hybrid systems. At each iteration, the proposed algorithm, called HyRRT, randomly picks a state sample and extends the search tree by flow or jump, which is also chosen randomly when both regimes are possible. Through a definition of concatenation of functions defined on hybrid time domains, we show that HyRRT is probabilistically complete, namely, the probability of failing to find a motion plan approaches zero as the number of iterations of the algorithm increases. This property is guaranteed under mild conditions on the data defining the motion plan, which include a relaxation of the usual positive clearance assumption imposed in the literature of classical systems. The motion plan is computed through the solution of two optimization problems, one associated with the flow and the other with the jumps of the system. The proposed algorithm is applied to a walking robot so as to highlight its generality and computational features.
|
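For context on the record above: HyRRT extends the classical RRT loop (sample, find the nearest tree vertex, extend toward the sample) to hybrid flow/jump dynamics. The sketch below shows only the generic planar RRT skeleton that the paper builds on — not the hybrid algorithm; the sampling region, step size, and all names are illustrative assumptions.

```python
import math
import random

def rrt(start, goal, is_free, n_iter=2000, step=0.5, seed=0):
    """Basic RRT in the plane: sample a point, find the nearest tree vertex,
    extend toward the sample by `step`, stop once a vertex lands near the goal."""
    rng = random.Random(seed)
    tree = {start: None}                          # vertex -> parent
    for _ in range(n_iter):
        sample = (rng.uniform(-5, 5), rng.uniform(-5, 5))
        near = min(tree, key=lambda v: math.dist(v, sample))
        d = math.dist(near, sample)
        if d == 0.0:
            continue
        # Step exactly `step` along the unit vector from `near` toward `sample`.
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if not is_free(new):
            continue
        tree[new] = near
        if math.dist(new, goal) < step:           # close enough: extract the plan
            path = [new]
            while tree[path[-1]] is not None:
                path.append(tree[path[-1]])
            return path[::-1]
    return None

path = rrt(start=(0.0, 0.0), goal=(3.0, 3.0), is_free=lambda p: True)
print(path is not None)
```

HyRRT replaces the straight-line extension with solutions of the flow and jump optimization problems described in the abstract, so each tree edge is a hybrid arc rather than a segment.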
2402.11542
|
Xinbang Dai
|
Xinbang Dai, Huiying Li, Guilin Qi
|
Question Answering Over Spatio-Temporal Knowledge Graph
|
11 pages, 4 figures
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Spatio-temporal knowledge graphs (STKGs) extend the concept of knowledge
graphs (KGs) by incorporating time and location information. While the research
community has focused on Knowledge Graph Question Answering (KGQA), answering
questions that incorporate spatio-temporal information based on STKGs remains
largely unexplored. Furthermore, a lack of comprehensive datasets has also
hindered progress in this area. To address this issue, we present
STQAD, a dataset comprising 10,000 natural language questions for
spatio-temporal knowledge graph question answering (STKGQA). Unfortunately,
various state-of-the-art KGQA approaches fall far short of achieving
satisfactory performance on our dataset. In response, we propose STCQA, a new
spatio-temporal KGQA approach that utilizes a novel STKG embedding method named
STComplEx. By extracting temporal and spatial information from a question, our
QA model can better comprehend the question and retrieve accurate answers from
the STKG. Through extensive experiments, we demonstrate the quality of our
dataset and the effectiveness of our STKGQA method.
|
[
{
"created": "Sun, 18 Feb 2024 10:44:48 GMT",
"version": "v1"
}
] |
2024-02-20
|
[
[
"Dai",
"Xinbang",
""
],
[
"Li",
"Huiying",
""
],
[
"Qi",
"Guilin",
""
]
] |
Spatio-temporal knowledge graphs (STKGs) extend the concept of knowledge graphs (KGs) by incorporating time and location information. While the research community has focused on Knowledge Graph Question Answering (KGQA), answering questions that incorporate spatio-temporal information based on STKGs remains largely unexplored. Furthermore, a lack of comprehensive datasets has also hindered progress in this area. To address this issue, we present STQAD, a dataset comprising 10,000 natural language questions for spatio-temporal knowledge graph question answering (STKGQA). Unfortunately, various state-of-the-art KGQA approaches fall far short of achieving satisfactory performance on our dataset. In response, we propose STCQA, a new spatio-temporal KGQA approach that utilizes a novel STKG embedding method named STComplEx. By extracting temporal and spatial information from a question, our QA model can better comprehend the question and retrieve accurate answers from the STKG. Through extensive experiments, we demonstrate the quality of our dataset and the effectiveness of our STKGQA method.
|
2309.04654
|
Huaibo Zhao
|
Huaibo Zhao, Yosuke Higuchi, Yusuke Kida, Tetsuji Ogawa, Tetsunori
Kobayashi
|
Mask-CTC-based Encoder Pre-training for Streaming End-to-End Speech
Recognition
|
Accepted to EUSIPCO 2023
| null | null | null |
cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Achieving high accuracy with low latency has always been a challenge in
streaming end-to-end automatic speech recognition (ASR) systems. By attending
to more future contexts, a streaming ASR model achieves higher accuracy but
results in larger latency, which hurts the streaming performance. In the
Mask-CTC framework, an encoder network is trained to learn the feature
representation that anticipates long-term contexts, which is desirable for
streaming ASR. Mask-CTC-based encoder pre-training has been shown beneficial in
achieving low latency and high accuracy for triggered attention-based ASR.
However, the effectiveness of this method has not been demonstrated for various
model architectures, nor has it been verified that the encoder has the expected
look-ahead capability to reduce latency. This study, therefore, examines the
effectiveness of Mask-CTC-based pre-training for models with different
architectures, such as Transformer-Transducer and contextual block streaming
ASR. We also discuss the effect of the proposed pre-training method on
obtaining accurate output spike timing.
|
[
{
"created": "Sat, 9 Sep 2023 01:05:59 GMT",
"version": "v1"
}
] |
2023-09-12
|
[
[
"Zhao",
"Huaibo",
""
],
[
"Higuchi",
"Yosuke",
""
],
[
"Kida",
"Yusuke",
""
],
[
"Ogawa",
"Tetsuji",
""
],
[
"Kobayashi",
"Tetsunori",
""
]
] |
Achieving high accuracy with low latency has always been a challenge in streaming end-to-end automatic speech recognition (ASR) systems. By attending to more future contexts, a streaming ASR model achieves higher accuracy but results in larger latency, which hurts the streaming performance. In the Mask-CTC framework, an encoder network is trained to learn the feature representation that anticipates long-term contexts, which is desirable for streaming ASR. Mask-CTC-based encoder pre-training has been shown beneficial in achieving low latency and high accuracy for triggered attention-based ASR. However, the effectiveness of this method has not been demonstrated for various model architectures, nor has it been verified that the encoder has the expected look-ahead capability to reduce latency. This study, therefore, examines the effectiveness of Mask-CTC-based pre-training for models with different architectures, such as Transformer-Transducer and contextual block streaming ASR. We also discuss the effect of the proposed pre-training method on obtaining accurate output spike timing.
|
2303.15595
|
Robert H\"onig
|
Robert H\"onig and Jan Ackermann, Mingyuan Chi
|
Bi-Encoder Cascades for Efficient Image Search
|
Under review as a short paper at the ICCV '23 RCV workshop
| null | null | null |
cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
Modern neural encoders offer unprecedented text-image retrieval (TIR)
accuracy, but their high computational cost impedes their adoption for
large-scale image search. To lower this cost, model cascades use an expensive encoder to
refine the ranking of a cheap encoder. However, existing cascading algorithms
focus on cross-encoders, which jointly process text-image pairs, but do not
consider cascades of bi-encoders, which separately process texts and images. We
introduce the small-world search scenario as a realistic setting where
bi-encoder cascades can reduce costs. We then propose a cascading algorithm
that leverages the small-world search scenario to reduce lifetime image
encoding costs of a TIR system. Our experiments show cost reductions by up to
6x.
|
[
{
"created": "Mon, 27 Mar 2023 20:54:49 GMT",
"version": "v1"
},
{
"created": "Fri, 4 Aug 2023 14:42:02 GMT",
"version": "v2"
}
] |
2023-08-07
|
[
[
"Hönig",
"Robert",
""
],
[
"Ackermann",
"Jan",
""
],
[
"Chi",
"Mingyuan",
""
]
] |
Modern neural encoders offer unprecedented text-image retrieval (TIR) accuracy, but their high computational cost impedes their adoption for large-scale image search. To lower this cost, model cascades use an expensive encoder to refine the ranking of a cheap encoder. However, existing cascading algorithms focus on cross-encoders, which jointly process text-image pairs, but do not consider cascades of bi-encoders, which separately process texts and images. We introduce the small-world search scenario as a realistic setting where bi-encoder cascades can reduce costs. We then propose a cascading algorithm that leverages the small-world search scenario to reduce lifetime image encoding costs of a TIR system. Our experiments show cost reductions by up to 6x.
|
1905.02320
|
Songyao Jiang
|
Songyao Jiang, Hongfu Liu, Yue Wu, Yun Fu
|
Spatially Constrained GAN for Face and Fashion Synthesis
|
Accepted to IEEE International Conference on Automatic Face and
Gesture Recognition (FG), 2021
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Image generation has attracted tremendous attention in both academic and
industrial areas, especially for the conditional and target-oriented image
generation, such as criminal portrait and fashion design. Although the current
studies have achieved preliminary results along this direction, they always
focus on class labels as the condition where spatial contents are randomly
generated from latent vectors. Edge details are usually blurred since spatial
information is difficult to preserve. In light of this, we propose a novel
Spatially Constrained Generative Adversarial Network (SCGAN), which decouples
the spatial constraints from the latent vector and makes these constraints
feasible as additional controllable signals. To enhance the spatial
controllability, a generator network is specially designed to take a semantic
segmentation, a latent vector and an attribute-level label as inputs step by
step. Besides, a segmentor network is constructed to impose spatial constraints
on the generator. Experimentally, we provide both visual and quantitative
results on CelebA and DeepFashion datasets, and demonstrate that the proposed
SCGAN is very effective in controlling the spatial contents as well as
generating high-quality images.
|
[
{
"created": "Tue, 7 May 2019 02:00:03 GMT",
"version": "v1"
},
{
"created": "Mon, 6 Dec 2021 08:02:15 GMT",
"version": "v2"
}
] |
2021-12-07
|
[
[
"Jiang",
"Songyao",
""
],
[
"Liu",
"Hongfu",
""
],
[
"Wu",
"Yue",
""
],
[
"Fu",
"Yun",
""
]
] |
Image generation has attracted tremendous attention in both academic and industrial areas, especially for the conditional and target-oriented image generation, such as criminal portrait and fashion design. Although the current studies have achieved preliminary results along this direction, they always focus on class labels as the condition where spatial contents are randomly generated from latent vectors. Edge details are usually blurred since spatial information is difficult to preserve. In light of this, we propose a novel Spatially Constrained Generative Adversarial Network (SCGAN), which decouples the spatial constraints from the latent vector and makes these constraints feasible as additional controllable signals. To enhance the spatial controllability, a generator network is specially designed to take a semantic segmentation, a latent vector and an attribute-level label as inputs step by step. Besides, a segmentor network is constructed to impose spatial constraints on the generator. Experimentally, we provide both visual and quantitative results on CelebA and DeepFashion datasets, and demonstrate that the proposed SCGAN is very effective in controlling the spatial contents as well as generating high-quality images.
|
2102.07482
|
Pedro Gomes
|
Pedro Gomes, Silvia Rossi, Laura Toni
|
Spatio-temporal Graph-RNN for Point Cloud Prediction
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we propose an end-to-end learning network to predict future
frames in a point cloud sequence. As its main novelty, an initial layer learns
topological information of point clouds as geometric features, to form
representative spatio-temporal neighborhoods. This module is followed by
multiple Graph-RNN cells. Each cell learns point dynamics (i.e., RNN states)
by processing each point jointly with its spatio-temporal neighbouring points.
We tested the network performance on an MNIST dataset of moving digits, a
synthetic human body motion dataset, and the JPEG dynamic bodies dataset.
Simulation results demonstrate that our method outperforms baselines that
neglect geometric feature information.
|
[
{
"created": "Mon, 15 Feb 2021 11:39:40 GMT",
"version": "v1"
},
{
"created": "Wed, 17 Feb 2021 11:43:31 GMT",
"version": "v2"
},
{
"created": "Mon, 22 Feb 2021 15:10:02 GMT",
"version": "v3"
}
] |
2021-02-23
|
[
[
"Gomes",
"Pedro",
""
],
[
"Rossi",
"Silvia",
""
],
[
"Toni",
"Laura",
""
]
] |
In this paper, we propose an end-to-end learning network to predict future frames in a point cloud sequence. As its main novelty, an initial layer learns topological information of point clouds as geometric features, to form representative spatio-temporal neighborhoods. This module is followed by multiple Graph-RNN cells. Each cell learns point dynamics (i.e., RNN states) by processing each point jointly with its spatio-temporal neighbouring points. We tested the network performance on an MNIST dataset of moving digits, a synthetic human body motion dataset, and the JPEG dynamic bodies dataset. Simulation results demonstrate that our method outperforms baselines that neglect geometric feature information.
|
2307.07131
|
Holden Lee
|
Holden Lee
|
Parallelising Glauber dynamics
|
v3: Corrected proposal distribution for Parallel Ising, obtained
polylog dependence on epsilon, added p-spin model. To appear in RANDOM 2024
| null | null | null |
cs.DS math.PR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For distributions over discrete product spaces $\prod_{i=1}^n \Omega_i'$,
Glauber dynamics is a Markov chain that at each step, resamples a random
coordinate conditioned on the other coordinates. We show that $k$-Glauber
dynamics, which resamples a random subset of $k$ coordinates, mixes $k$ times
faster in $\chi^2$-divergence, and assuming approximate tensorization of
entropy, mixes $k$ times faster in KL-divergence. We apply this to obtain
parallel algorithms in two settings: (1) For the Ising model
$\mu_{J,h}(x)\propto \exp(\frac1 2\left\langle x,Jx \right\rangle + \langle
h,x\rangle)$ with $\|J\|<1-c$ (the regime where fast mixing is known), we show
that we can implement each step of $\widetilde \Theta(n/\|J\|_F)$-Glauber
dynamics efficiently with a parallel algorithm, resulting in a parallel
algorithm with running time $\widetilde O(\|J\|_F) = \widetilde O(\sqrt n)$.
(2) For the mixed $p$-spin model at high enough temperature, we show that with
high probability we can implement each step of $\widetilde \Theta(\sqrt
n)$-Glauber dynamics efficiently and obtain running time $\widetilde O(\sqrt
n)$.
|
[
{
"created": "Fri, 14 Jul 2023 02:59:28 GMT",
"version": "v1"
},
{
"created": "Tue, 12 Sep 2023 20:06:23 GMT",
"version": "v2"
},
{
"created": "Tue, 28 Nov 2023 01:21:39 GMT",
"version": "v3"
},
{
"created": "Wed, 10 Jul 2024 08:11:28 GMT",
"version": "v4"
}
] |
2024-07-11
|
[
[
"Lee",
"Holden",
""
]
] |
For distributions over discrete product spaces $\prod_{i=1}^n \Omega_i'$, Glauber dynamics is a Markov chain that at each step, resamples a random coordinate conditioned on the other coordinates. We show that $k$-Glauber dynamics, which resamples a random subset of $k$ coordinates, mixes $k$ times faster in $\chi^2$-divergence, and assuming approximate tensorization of entropy, mixes $k$ times faster in KL-divergence. We apply this to obtain parallel algorithms in two settings: (1) For the Ising model $\mu_{J,h}(x)\propto \exp(\frac1 2\left\langle x,Jx \right\rangle + \langle h,x\rangle)$ with $\|J\|<1-c$ (the regime where fast mixing is known), we show that we can implement each step of $\widetilde \Theta(n/\|J\|_F)$-Glauber dynamics efficiently with a parallel algorithm, resulting in a parallel algorithm with running time $\widetilde O(\|J\|_F) = \widetilde O(\sqrt n)$. (2) For the mixed $p$-spin model at high enough temperature, we show that with high probability we can implement each step of $\widetilde \Theta(\sqrt n)$-Glauber dynamics efficiently and obtain running time $\widetilde O(\sqrt n)$.
|
2209.05170
|
Sarit Kraus
|
Yohai Trabelsi, Abhijin Adiga, Sarit Kraus, S.S. Ravi
|
Resource Allocation to Agents with Restrictions: Maximizing Likelihood
with Minimum Compromise
| null | null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Many scenarios where agents with restrictions compete for resources can be
cast as maximum matching problems on bipartite graphs. Our focus is on resource
allocation problems where agents may have restrictions that make them
incompatible with some resources. We assume that a Principal chooses a maximum
matching randomly so that each agent is matched to a resource with some
probability. Agents would like to improve their chances of being matched by
modifying their restrictions within certain limits. The Principal's goal is to
advise an unsatisfied agent to relax its restrictions so that the total cost of
relaxation is within a budget (chosen by the agent) and the increase in the
probability of being assigned a resource is maximized. We establish hardness
results for some variants of this budget-constrained maximization problem and
present algorithmic results for other variants. We experimentally evaluate our
methods on synthetic datasets as well as on two novel real-world datasets: a
vacation activities dataset and a classrooms dataset.
|
[
{
"created": "Mon, 12 Sep 2022 11:58:19 GMT",
"version": "v1"
}
] |
2022-09-13
|
[
[
"Trabelsi",
"Yohai",
""
],
[
"Adiga",
"Abhijin",
""
],
[
"Kraus",
"Sarit",
""
],
[
"Ravi",
"S. S.",
""
]
] |
Many scenarios where agents with restrictions compete for resources can be cast as maximum matching problems on bipartite graphs. Our focus is on resource allocation problems where agents may have restrictions that make them incompatible with some resources. We assume that a Principal chooses a maximum matching randomly so that each agent is matched to a resource with some probability. Agents would like to improve their chances of being matched by modifying their restrictions within certain limits. The Principal's goal is to advise an unsatisfied agent to relax its restrictions so that the total cost of relaxation is within a budget (chosen by the agent) and the increase in the probability of being assigned a resource is maximized. We establish hardness results for some variants of this budget-constrained maximization problem and present algorithmic results for other variants. We experimentally evaluate our methods on synthetic datasets as well as on two novel real-world datasets: a vacation activities dataset and a classrooms dataset.
|
2001.05171
|
\c{C}a\u{g}atay Demiralp
|
Xiong Zhang and Jonathan Engel and Sara Evensen and Yuliang Li and
\c{C}a\u{g}atay Demiralp and Wang-Chiew Tan
|
Teddy: A System for Interactive Review Analysis
|
CHI'20
| null | null | null |
cs.HC cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reviews are integral to e-commerce services and products. They contain a
wealth of information about the opinions and experiences of users, which can
help better understand consumer decisions and improve user experience with
products and services. Today, data scientists analyze reviews by developing
rules and models to extract, aggregate, and understand information embedded in
the review text. However, working with thousands of reviews, which are
typically noisy, incomplete text, can be daunting without proper tools. Here we
first contribute results from an interview study that we conducted with fifteen
data scientists who work with review text, providing insights into their
practices and challenges. Results suggest data scientists need interactive
systems for many review analysis tasks. In response we introduce Teddy, an
interactive system that enables data scientists to quickly obtain insights from
reviews and improve their extraction and modeling pipelines.
|
[
{
"created": "Wed, 15 Jan 2020 08:19:01 GMT",
"version": "v1"
}
] |
2020-01-16
|
[
[
"Zhang",
"Xiong",
""
],
[
"Engel",
"Jonathan",
""
],
[
"Evensen",
"Sara",
""
],
[
"Li",
"Yuliang",
""
],
[
"Demiralp",
"Çağatay",
""
],
[
"Tan",
"Wang-Chiew",
""
]
] |
Reviews are integral to e-commerce services and products. They contain a wealth of information about the opinions and experiences of users, which can help better understand consumer decisions and improve user experience with products and services. Today, data scientists analyze reviews by developing rules and models to extract, aggregate, and understand information embedded in the review text. However, working with thousands of reviews, which are typically noisy, incomplete text, can be daunting without proper tools. Here we first contribute results from an interview study that we conducted with fifteen data scientists who work with review text, providing insights into their practices and challenges. Results suggest data scientists need interactive systems for many review analysis tasks. In response we introduce Teddy, an interactive system that enables data scientists to quickly obtain insights from reviews and improve their extraction and modeling pipelines.
|
2401.12259
|
Sascha Ossowski
|
Holger Billhardt, Alberto Fern\'andez, Marin Lujak, Sascha Ossowski
|
Agreement Technologies for Coordination in Smart Cities
| null |
Applied Sciences, Volume 8, Issue 5 (2018)
|
10.3390/app8050816
| null |
cs.MA cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Many challenges in today's society can be tackled by distributed open
systems. This is particularly true for domains that are commonly perceived
under the umbrella of smart cities, such as intelligent transportation, smart
energy grids, or participative governance. When designing computer applications
for these domains, it is necessary to account for the fact that the elements of
such systems, often called software agents, are usually made by different
designers and act on behalf of particular stakeholders. Furthermore, it is
unknown at design time when such agents will enter or leave the system, and
what interests new agents will represent. To instil coordination in such
systems is particularly demanding, as usually only part of them can be directly
controlled at runtime. Agreement technologies refer to a sandbox of tools and
mechanisms for the development of such open multiagent systems, which are based
on the notion of agreement. In this paper, we argue that agreement technologies
are a suitable means for achieving coordination in smart city domains, and back
our claim through examples of several real-world applications.
|
[
{
"created": "Sun, 21 Jan 2024 17:43:08 GMT",
"version": "v1"
}
] |
2024-01-24
|
[
[
"Billhardt",
"Holger",
""
],
[
"Fernández",
"Alberto",
""
],
[
"Lujak",
"Marin",
""
],
[
"Ossowski",
"Sascha",
""
]
] |
Many challenges in today's society can be tackled by distributed open systems. This is particularly true for domains that are commonly perceived under the umbrella of smart cities, such as intelligent transportation, smart energy grids, or participative governance. When designing computer applications for these domains, it is necessary to account for the fact that the elements of such systems, often called software agents, are usually made by different designers and act on behalf of particular stakeholders. Furthermore, it is unknown at design time when such agents will enter or leave the system, and what interests new agents will represent. To instil coordination in such systems is particularly demanding, as usually only part of them can be directly controlled at runtime. Agreement technologies refer to a sandbox of tools and mechanisms for the development of such open multiagent systems, which are based on the notion of agreement. In this paper, we argue that agreement technologies are a suitable means for achieving coordination in smart city domains, and back our claim through examples of several real-world applications.
|
2308.11103
|
Joel Niklaus
|
Alex Nyffenegger, Matthias St\"urmer, Joel Niklaus
|
Anonymity at Risk? Assessing Re-Identification Capabilities of Large
Language Models
|
Accepted to NAACL Findings 2024
| null | null | null |
cs.CL cs.AI cs.IR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Anonymity of both natural and legal persons in court rulings is a critical
aspect of privacy protection in the European Union and Switzerland. With the
advent of LLMs, concerns about large-scale re-identification of anonymized
persons are growing. In accordance with the Federal Supreme Court of
Switzerland, we explore the potential of LLMs to re-identify individuals in
court rulings by constructing a proof-of-concept using actual legal data from
the Swiss federal supreme court. Following the initial experiment, we
constructed an anonymized Wikipedia dataset as a more rigorous testing ground
to further investigate the findings. With the introduction and application of
the new task of re-identifying people in texts, we also introduce new metrics
to measure performance. We systematically analyze the factors that influence
successful re-identifications, identifying model size, input length, and
instruction tuning among the most critical determinants. Despite high
re-identification rates on Wikipedia, even the best LLMs struggled with court
decisions. The complexity is attributed to the lack of test datasets, the
necessity for substantial training resources, and data sparsity in the
information used for re-identification. In conclusion, this study demonstrates
that re-identification using LLMs may not be feasible for now, but as the
proof-of-concept on Wikipedia showed, it might become possible in the future.
We hope that our system can help enhance the confidence in the security of
anonymized decisions, thus leading to the courts being more confident to
publish decisions.
|
[
{
"created": "Tue, 22 Aug 2023 00:57:36 GMT",
"version": "v1"
},
{
"created": "Sun, 19 May 2024 09:25:45 GMT",
"version": "v2"
}
] |
2024-05-21
|
[
[
"Nyffenegger",
"Alex",
""
],
[
"Stürmer",
"Matthias",
""
],
[
"Niklaus",
"Joel",
""
]
] |
Anonymity of both natural and legal persons in court rulings is a critical aspect of privacy protection in the European Union and Switzerland. With the advent of LLMs, concerns about large-scale re-identification of anonymized persons are growing. In accordance with the Federal Supreme Court of Switzerland, we explore the potential of LLMs to re-identify individuals in court rulings by constructing a proof-of-concept using actual legal data from the Swiss federal supreme court. Following the initial experiment, we constructed an anonymized Wikipedia dataset as a more rigorous testing ground to further investigate the findings. With the introduction and application of the new task of re-identifying people in texts, we also introduce new metrics to measure performance. We systematically analyze the factors that influence successful re-identifications, identifying model size, input length, and instruction tuning among the most critical determinants. Despite high re-identification rates on Wikipedia, even the best LLMs struggled with court decisions. The complexity is attributed to the lack of test datasets, the necessity for substantial training resources, and data sparsity in the information used for re-identification. In conclusion, this study demonstrates that re-identification using LLMs may not be feasible for now, but as the proof-of-concept on Wikipedia showed, it might become possible in the future. We hope that our system can help enhance the confidence in the security of anonymized decisions, thus leading to the courts being more confident to publish decisions.
|
2009.06390
|
Yuxi Huan
|
Yuxi Huan, Fan Wu, Michail Basios, Leslie Kanthan, Lingbo Li, Baowen
Xu
|
IEO: Intelligent Evolutionary Optimisation for Hyperparameter Tuning
| null | null | null | null |
cs.LG cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Hyperparameter optimisation is a crucial process in searching the optimal
machine learning model. The efficiency of finding the optimal hyperparameter
settings has been a big concern in recent research, since the optimisation
process could be time-consuming, especially when the objective functions are
highly expensive to evaluate. In this paper, we introduce an intelligent
evolutionary optimisation algorithm which applies machine learning techniques to
the traditional evolutionary algorithm to accelerate the overall optimisation
process of tuning machine learning models in classification problems. We
demonstrate our Intelligent Evolutionary Optimisation (IEO) in a series of
controlled experiments, comparing with traditional evolutionary optimisation in
hyperparameter tuning. The empirical study shows that our approach accelerates
the optimisation speed by 30.40% on average and up to 77.06% in the best
scenarios.
|
[
{
"created": "Thu, 10 Sep 2020 18:47:04 GMT",
"version": "v1"
}
] |
2020-09-15
|
[
[
"Huan",
"Yuxi",
""
],
[
"Wu",
"Fan",
""
],
[
"Basios",
"Michail",
""
],
[
"Kanthan",
"Leslie",
""
],
[
"Li",
"Lingbo",
""
],
[
"Xu",
"Baowen",
""
]
] |
Hyperparameter optimisation is a crucial process in searching the optimal machine learning model. The efficiency of finding the optimal hyperparameter settings has been a big concern in recent research, since the optimisation process could be time-consuming, especially when the objective functions are highly expensive to evaluate. In this paper, we introduce an intelligent evolutionary optimisation algorithm which applies machine learning techniques to the traditional evolutionary algorithm to accelerate the overall optimisation process of tuning machine learning models in classification problems. We demonstrate our Intelligent Evolutionary Optimisation (IEO) in a series of controlled experiments, comparing with traditional evolutionary optimisation in hyperparameter tuning. The empirical study shows that our approach accelerates the optimisation speed by 30.40% on average and up to 77.06% in the best scenarios.
|
1706.00722
|
Mohammad Hajiesmaili
|
Mohammad H. Hajiesmaili, Desmond Cai, and Enrique Mallada
|
Understanding the Inefficiency of Security-Constrained Economic Dispatch
| null | null | null | null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The security-constrained economic dispatch (SCED) problem tries to maintain
the reliability of a power network by ensuring that a single failure does not
lead to a global outage. Previous research has mainly investigated SCED by
formulating the problem in different modalities, e.g. preventive or corrective,
and devising efficient solutions for SCED. In this paper, we tackle a novel and
important direction, and analyze the economic cost of incorporating security
constraints in economic dispatch. Inspired by existing inefficiency metrics in
game theory and computer science, we introduce the notion of the price of
security as a
metric that formally characterizes the economic inefficiency of
security-constrained economic dispatch as compared to the original problem
without security constraints. Then, we focus on the preventive approach in a
simple topology comprising two buses and two lines, and investigate the impact
of generation availability and demand distribution on the price of security.
Moreover, we explicitly derive the worst-case input instance that leads to the
maximum price of security. By extensive experimental study on two test-cases,
we verify the analytical results and provide insights for characterizing the
price of security in general networks.
|
[
{
"created": "Fri, 2 Jun 2017 15:25:26 GMT",
"version": "v1"
}
] |
2017-06-05
|
[
[
"Hajiesmaili",
"Mohammad H.",
""
],
[
"Cai",
"Desmond",
""
],
[
"Mallada",
"Enrique",
""
]
] |
The security-constrained economic dispatch (SCED) problem tries to maintain the reliability of a power network by ensuring that a single failure does not lead to a global outage. Previous research has mainly investigated SCED by formulating the problem in different modalities, e.g. preventive or corrective, and devising efficient solutions for SCED. In this paper, we tackle a novel and important direction, and analyze the economic cost of incorporating security constraints in economic dispatch. Inspired by existing inefficiency metrics in game theory and computer science, we introduce the notion of the price of security as a metric that formally characterizes the economic inefficiency of security-constrained economic dispatch as compared to the original problem without security constraints. Then, we focus on the preventive approach in a simple topology comprising two buses and two lines, and investigate the impact of generation availability and demand distribution on the price of security. Moreover, we explicitly derive the worst-case input instance that leads to the maximum price of security. By extensive experimental study on two test-cases, we verify the analytical results and provide insights for characterizing the price of security in general networks.
|
2110.15231
|
Junjiao Tian
|
Junjiao Tian, Yen-Change Hsu, Yilin Shen, Hongxia Jin, Zsolt Kira
|
Exploring Covariate and Concept Shift for Detection and Calibration of
Out-of-Distribution Data
|
A short version of the paper is accepted to NeurIPS DistShift
Workshop 2021
| null | null | null |
cs.LG cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Moving beyond testing on in-distribution data, works on Out-of-Distribution
(OOD) detection have recently increased in popularity. A recent attempt to
categorize OOD data introduces the concept of near and far OOD detection.
Specifically, prior works define characteristics of OOD data in terms of
detection difficulty. We propose to characterize the spectrum of OOD data using
two types of distribution shifts: covariate shift and concept shift, where
covariate shift corresponds to change in style, e.g., noise, and concept shift
indicates a change in semantics. This characterization reveals that sensitivity
to each type of shift is important to the detection and confidence calibration
of OOD data. Consequently, we investigate score functions that capture
sensitivity to each type of dataset shift and methods that improve them. To
this end, we theoretically derive two score functions for OOD detection, the
covariate shift score and concept shift score, based on the decomposition of
KL-divergence for both scores, and propose a geometrically-inspired method
(Geometric ODIN) to improve OOD detection under both shifts with only
in-distribution data. Additionally, the proposed method naturally leads to an
expressive post-hoc calibration function which yields state-of-the-art
calibration performance on both in-distribution and out-of-distribution data.
We are the first to propose a method that works well across both OOD detection
and calibration and under different types of shifts. View project page at
https://sites.google.com/view/geometric-decomposition.
|
[
{
"created": "Thu, 28 Oct 2021 15:42:55 GMT",
"version": "v1"
},
{
"created": "Sun, 21 Nov 2021 20:35:07 GMT",
"version": "v2"
}
] |
2021-11-23
|
[
[
"Tian",
"Junjiao",
""
],
[
"Hsu",
"Yen-Change",
""
],
[
"Shen",
"Yilin",
""
],
[
"Jin",
"Hongxia",
""
],
[
"Kira",
"Zsolt",
""
]
] |
Moving beyond testing on in-distribution data, works on Out-of-Distribution (OOD) detection have recently increased in popularity. A recent attempt to categorize OOD data introduces the concept of near and far OOD detection. Specifically, prior works define characteristics of OOD data in terms of detection difficulty. We propose to characterize the spectrum of OOD data using two types of distribution shifts: covariate shift and concept shift, where covariate shift corresponds to change in style, e.g., noise, and concept shift indicates a change in semantics. This characterization reveals that sensitivity to each type of shift is important to the detection and confidence calibration of OOD data. Consequently, we investigate score functions that capture sensitivity to each type of dataset shift and methods that improve them. To this end, we theoretically derive two score functions for OOD detection, the covariate shift score and concept shift score, based on the decomposition of KL-divergence for both scores, and propose a geometrically-inspired method (Geometric ODIN) to improve OOD detection under both shifts with only in-distribution data. Additionally, the proposed method naturally leads to an expressive post-hoc calibration function which yields state-of-the-art calibration performance on both in-distribution and out-of-distribution data. We are the first to propose a method that works well across both OOD detection and calibration and under different types of shifts. View project page at https://sites.google.com/view/geometric-decomposition.
|
2001.01401
|
Yeongtae Hwang
|
Yeongtae Hwang, Hyemin Cho, Hongsun Yang, Dong-Ok Won, Insoo Oh, and
Seong-Whan Lee
|
Mel-spectrogram augmentation for sequence to sequence voice conversion
|
5pages, 1 figures, 8 tables
| null | null | null |
cs.LG cs.SD stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For training the sequence-to-sequence voice conversion model, we need to
handle the issue of insufficient data, i.e., the limited number of speech pairs
that contain the same utterance. This study experimentally investigated the
effects of Mel-spectrogram augmentation on training the sequence-to-sequence
voice conversion (VC) model from scratch. For Mel-spectrogram augmentation, we
adopted the policies proposed in SpecAugment. In addition, we proposed new
policies (i.e., frequency warping, loudness and time length control) for more
data variations. Moreover, to find the appropriate hyperparameters of
augmentation policies without training the VC model, we proposed a
hyperparameter search strategy and a new metric for reducing experimental cost,
namely
deformation per deteriorating ratio. We compared the effect of these
Mel-spectrogram augmentation methods based on various sizes of training set and
augmentation policies. In the experimental results, the time axis warping based
policies (i.e., time length control and time warping) showed better
performance than other policies. These results indicate that the use of the
Mel-spectrogram augmentation is more beneficial for training the VC model.
|
[
{
"created": "Mon, 6 Jan 2020 05:14:09 GMT",
"version": "v1"
},
{
"created": "Mon, 15 Jun 2020 09:39:47 GMT",
"version": "v2"
}
] |
2020-06-16
|
[
[
"Hwang",
"Yeongtae",
""
],
[
"Cho",
"Hyemin",
""
],
[
"Yang",
"Hongsun",
""
],
[
"Won",
"Dong-Ok",
""
],
[
"Oh",
"Insoo",
""
],
[
"Lee",
"Seong-Whan",
""
]
] |
For training the sequence-to-sequence voice conversion model, we need to handle the issue of insufficient data, i.e., the limited number of speech pairs that contain the same utterance. This study experimentally investigated the effects of Mel-spectrogram augmentation on training the sequence-to-sequence voice conversion (VC) model from scratch. For Mel-spectrogram augmentation, we adopted the policies proposed in SpecAugment. In addition, we proposed new policies (i.e., frequency warping, loudness and time length control) for more data variations. Moreover, to find the appropriate hyperparameters of augmentation policies without training the VC model, we proposed a hyperparameter search strategy and a new metric for reducing experimental cost, namely deformation per deteriorating ratio. We compared the effect of these Mel-spectrogram augmentation methods based on various sizes of training set and augmentation policies. In the experimental results, the time axis warping based policies (i.e., time length control and time warping) showed better performance than other policies. These results indicate that the use of the Mel-spectrogram augmentation is more beneficial for training the VC model.
|
2406.11200
|
Shirley Wu
|
Shirley Wu, Shiyu Zhao, Qian Huang, Kexin Huang, Michihiro Yasunaga,
Kaidi Cao, Vassilis N. Ioannidis, Karthik Subbian, Jure Leskovec, James Zou
|
AvaTaR: Optimizing LLM Agents for Tool-Assisted Knowledge Retrieval
|
19 pages, 8 figures, 6 tables
| null | null | null |
cs.LG cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large language model (LLM) agents have demonstrated impressive capability in
utilizing external tools and knowledge to boost accuracy and reduce
hallucinations. However, developing the prompting techniques that make LLM
agents able to effectively use external tools and knowledge is a heuristic and
laborious task. Here, we introduce AvaTaR, a novel and automatic framework that
optimizes an LLM agent to effectively use the provided tools and improve its
performance on a given task/domain. During optimization, we design a comparator
module to iteratively provide insightful and holistic prompts to the LLM agent
via reasoning between positive and negative examples sampled from training
data. We demonstrate AvaTaR on four complex multimodal retrieval datasets
featuring textual, visual, and relational information. We find AvaTaR
consistently outperforms state-of-the-art approaches across all four
challenging tasks and exhibits strong generalization ability when applied to
novel cases, achieving an average relative improvement of 14% on the Hit@1
metric. Code and dataset are available at https://github.com/zou-group/avatar.
|
[
{
"created": "Mon, 17 Jun 2024 04:20:02 GMT",
"version": "v1"
},
{
"created": "Tue, 18 Jun 2024 01:39:57 GMT",
"version": "v2"
}
] |
2024-06-19
|
[
[
"Wu",
"Shirley",
""
],
[
"Zhao",
"Shiyu",
""
],
[
"Huang",
"Qian",
""
],
[
"Huang",
"Kexin",
""
],
[
"Yasunaga",
"Michihiro",
""
],
[
"Cao",
"Kaidi",
""
],
[
"Ioannidis",
"Vassilis N.",
""
],
[
"Subbian",
"Karthik",
""
],
[
"Leskovec",
"Jure",
""
],
[
"Zou",
"James",
""
]
] |
Large language model (LLM) agents have demonstrated impressive capability in utilizing external tools and knowledge to boost accuracy and reduce hallucinations. However, developing the prompting techniques that make LLM agents able to effectively use external tools and knowledge is a heuristic and laborious task. Here, we introduce AvaTaR, a novel and automatic framework that optimizes an LLM agent to effectively use the provided tools and improve its performance on a given task/domain. During optimization, we design a comparator module to iteratively provide insightful and holistic prompts to the LLM agent via reasoning between positive and negative examples sampled from training data. We demonstrate AvaTaR on four complex multimodal retrieval datasets featuring textual, visual, and relational information. We find AvaTaR consistently outperforms state-of-the-art approaches across all four challenging tasks and exhibits strong generalization ability when applied to novel cases, achieving an average relative improvement of 14% on the Hit@1 metric. Code and dataset are available at https://github.com/zou-group/avatar.
|
2206.01309
|
Peixian Liang
|
Peixian Liang, Yizhe Zhang, Yifan Ding, Jianxu Chen, Chinedu S.
Madukoma, Tim Weninger, Joshua D. Shrout, Danny Z. Chen
|
H-EMD: A Hierarchical Earth Mover's Distance Method for Instance
Segmentation
|
Accepted at IEEE Transactions On Medical Imaging (TMI)
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Deep learning (DL) based semantic segmentation methods have achieved
excellent performance in biomedical image segmentation, producing high quality
probability maps to allow extraction of rich instance information to facilitate
good instance segmentation. While numerous efforts were put into developing new
DL semantic segmentation models, less attention was paid to a key issue of how
to effectively explore their probability maps to attain the best possible
instance segmentation. We observe that probability maps by DL semantic
segmentation models can be used to generate many possible instance candidates,
and accurate instance segmentation can be achieved by selecting from them a set
of "optimized" candidates as output instances. Further, the generated instance
candidates form a well-behaved hierarchical structure (a forest), which allows
selecting instances in an optimized manner. Hence, we propose a novel
framework, called hierarchical earth mover's distance (H-EMD), for instance
segmentation in biomedical 2D+time videos and 3D images, which judiciously
incorporates consistent instance selection with semantic-segmentation-generated
probability maps. H-EMD contains two main stages. (1) Instance candidate
generation: capturing instance-structured information in probability maps by
generating many instance candidates in a forest structure. (2) Instance
candidate selection: selecting instances from the candidate set for final
instance segmentation. We formulate a key instance selection problem on the
instance candidate forest as an optimization problem based on the earth mover's
distance (EMD), and solve it by integer linear programming. Extensive
experiments on eight biomedical video or 3D datasets demonstrate that H-EMD
consistently boosts DL semantic segmentation models and is highly competitive
with state-of-the-art methods.
|
[
{
"created": "Thu, 2 Jun 2022 21:27:27 GMT",
"version": "v1"
}
] |
2022-06-06
|
[
[
"Liang",
"Peixian",
""
],
[
"Zhang",
"Yizhe",
""
],
[
"Ding",
"Yifan",
""
],
[
"Chen",
"Jianxu",
""
],
[
"Madukoma",
"Chinedu S.",
""
],
[
"Weninger",
"Tim",
""
],
[
"Shrout",
"Joshua D.",
""
],
[
"Chen",
"Danny Z.",
""
]
] |
Deep learning (DL) based semantic segmentation methods have achieved excellent performance in biomedical image segmentation, producing high quality probability maps to allow extraction of rich instance information to facilitate good instance segmentation. While numerous efforts were put into developing new DL semantic segmentation models, less attention was paid to a key issue of how to effectively explore their probability maps to attain the best possible instance segmentation. We observe that probability maps by DL semantic segmentation models can be used to generate many possible instance candidates, and accurate instance segmentation can be achieved by selecting from them a set of "optimized" candidates as output instances. Further, the generated instance candidates form a well-behaved hierarchical structure (a forest), which allows selecting instances in an optimized manner. Hence, we propose a novel framework, called hierarchical earth mover's distance (H-EMD), for instance segmentation in biomedical 2D+time videos and 3D images, which judiciously incorporates consistent instance selection with semantic-segmentation-generated probability maps. H-EMD contains two main stages. (1) Instance candidate generation: capturing instance-structured information in probability maps by generating many instance candidates in a forest structure. (2) Instance candidate selection: selecting instances from the candidate set for final instance segmentation. We formulate a key instance selection problem on the instance candidate forest as an optimization problem based on the earth mover's distance (EMD), and solve it by integer linear programming. Extensive experiments on eight biomedical video or 3D datasets demonstrate that H-EMD consistently boosts DL semantic segmentation models and is highly competitive with state-of-the-art methods.
|
2402.06560
|
Aneesh Vartakavi
|
Amir Ziai, Aneesh Vartakavi
|
Video Annotator: A framework for efficiently building video classifiers
using vision-language models and active learning
|
Submitted for review to KDD '24 (ADS Track)
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
High-quality and consistent annotations are fundamental to the successful
development of robust machine learning models. Traditional data annotation
methods are resource-intensive and inefficient, often leading to a reliance on
third-party annotators who are not the domain experts. Hard samples, which are
usually the most informative for model training, tend to be difficult to label
accurately and consistently without business context. These can arise
unpredictably during the annotation process, requiring a variable number of
iterations and rounds of feedback, leading to unforeseen expenses and time
commitments to guarantee quality.
We posit that more direct involvement of domain experts, using a
human-in-the-loop system, can resolve many of these practical challenges. We
propose a novel framework we call Video Annotator (VA) for annotating,
managing, and iterating on video classification datasets. Our approach offers a
new paradigm for an end-user-centered model development process, enhancing the
efficiency, usability, and effectiveness of video classifiers. Uniquely, VA
allows for a continuous annotation process, seamlessly integrating data
collection and model training.
We leverage the zero-shot capabilities of vision-language foundation models
combined with active learning techniques, and demonstrate that VA enables the
efficient creation of high-quality models. VA achieves a median 6.8 point
improvement in Average Precision relative to the most competitive baseline
across a wide-ranging assortment of tasks. We release a dataset with 153k
labels across 56 video understanding tasks annotated by three professional
video editors using VA, and also release code to replicate our experiments at:
http://github.com/netflix/videoannotator.
|
[
{
"created": "Fri, 9 Feb 2024 17:19:05 GMT",
"version": "v1"
}
] |
2024-02-12
|
[
[
"Ziai",
"Amir",
""
],
[
"Vartakavi",
"Aneesh",
""
]
] |
High-quality and consistent annotations are fundamental to the successful development of robust machine learning models. Traditional data annotation methods are resource-intensive and inefficient, often leading to a reliance on third-party annotators who are not the domain experts. Hard samples, which are usually the most informative for model training, tend to be difficult to label accurately and consistently without business context. These can arise unpredictably during the annotation process, requiring a variable number of iterations and rounds of feedback, leading to unforeseen expenses and time commitments to guarantee quality. We posit that more direct involvement of domain experts, using a human-in-the-loop system, can resolve many of these practical challenges. We propose a novel framework we call Video Annotator (VA) for annotating, managing, and iterating on video classification datasets. Our approach offers a new paradigm for an end-user-centered model development process, enhancing the efficiency, usability, and effectiveness of video classifiers. Uniquely, VA allows for a continuous annotation process, seamlessly integrating data collection and model training. We leverage the zero-shot capabilities of vision-language foundation models combined with active learning techniques, and demonstrate that VA enables the efficient creation of high-quality models. VA achieves a median 6.8 point improvement in Average Precision relative to the most competitive baseline across a wide-ranging assortment of tasks. We release a dataset with 153k labels across 56 video understanding tasks annotated by three professional video editors using VA, and also release code to replicate our experiments at: http://github.com/netflix/videoannotator.
|
2311.06149
|
Slimane Djema
|
Slimane Djema, Zoubir Abdeslem Benselama, Ramdane Hedjar, Krabi
Abdallah
|
Dense Visual Odometry Using Genetic Algorithm
|
9 pages, 9 figures
|
International Journal of Intelligent Systems and Applications in
Engineering, Volume 11, issue 3, Pages 611-619, published date 2023/7/16
| null | null |
cs.RO cs.AI cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Our work aims to estimate the camera motion mounted on the head of a mobile
robot or a moving object from RGB-D images in a static scene. The problem of
motion estimation is transformed into a nonlinear least squares function.
Methods for solving such problems are iterative. Various classic methods gave
an iterative solution by linearizing this function. We can also use the
metaheuristic optimization method to solve this problem and improve results. In
this paper, a new algorithm is developed for visual odometry using a sequence
of RGB-D images. This algorithm is based on a genetic algorithm. The proposed
iterative genetic algorithm searches using particles to estimate the optimal
motion and then compares it to the traditional methods. To evaluate our method,
we use the root mean square error to compare it with the energy-based method
and another metaheuristic method. We prove the efficiency of our innovative
algorithm on a large set of images.
|
[
{
"created": "Fri, 10 Nov 2023 16:09:01 GMT",
"version": "v1"
}
] |
2023-11-13
|
[
[
"Djema",
"Slimane",
""
],
[
"Benselama",
"Zoubir Abdeslem",
""
],
[
"Hedjar",
"Ramdane",
""
],
[
"Abdallah",
"Krabi",
""
]
] |
Our work aims to estimate the camera motion mounted on the head of a mobile robot or a moving object from RGB-D images in a static scene. The problem of motion estimation is transformed into a nonlinear least squares function. Methods for solving such problems are iterative. Various classic methods gave an iterative solution by linearizing this function. We can also use the metaheuristic optimization method to solve this problem and improve results. In this paper, a new algorithm is developed for visual odometry using a sequence of RGB-D images. This algorithm is based on a genetic algorithm. The proposed iterative genetic algorithm searches using particles to estimate the optimal motion and then compares it to the traditional methods. To evaluate our method, we use the root mean square error to compare it with the energy-based method and another metaheuristic method. We prove the efficiency of our innovative algorithm on a large set of images.
|
0704.2808
|
Aditya Ramamoorthy
|
Aditya Ramamoorthy
|
Minimum cost distributed source coding over a network
|
First version appeared in the Proceedings of the 2007 IEEE
International Symposium on Information Theory, Nice, France, June 24 - 29,
2007. The second version is an expanded journal submission under
consideration at the IEEE Transactions on Information Theory
| null | null | null |
cs.IT cs.NI math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work considers the problem of transmitting multiple compressible sources
over a network at minimum cost. The aim is to find the optimal rates at which
the sources should be compressed and the network flows using which they should
be transmitted so that the cost of the transmission is minimal. We consider
networks with capacity constraints and linear cost functions. The problem is
complicated by the fact that the description of the feasible rate region of
distributed source coding problems typically has a number of constraints that
is exponential in the number of sources. This renders general purpose solvers
inefficient. We present a framework in which these problems can be solved
efficiently by exploiting the structure of the feasible rate regions coupled
with dual decomposition and optimization techniques such as the subgradient
method and the proximal bundle method.
|
[
{
"created": "Mon, 23 Apr 2007 17:41:35 GMT",
"version": "v1"
},
{
"created": "Wed, 12 Aug 2009 22:56:01 GMT",
"version": "v2"
}
] |
2009-08-13
|
[
[
"Ramamoorthy",
"Aditya",
""
]
] |
This work considers the problem of transmitting multiple compressible sources over a network at minimum cost. The aim is to find the optimal rates at which the sources should be compressed and the network flows using which they should be transmitted so that the cost of the transmission is minimal. We consider networks with capacity constraints and linear cost functions. The problem is complicated by the fact that the description of the feasible rate region of distributed source coding problems typically has a number of constraints that is exponential in the number of sources. This renders general purpose solvers inefficient. We present a framework in which these problems can be solved efficiently by exploiting the structure of the feasible rate regions coupled with dual decomposition and optimization techniques such as the subgradient method and the proximal bundle method.
|
1206.6428
|
Alexandru Niculescu-Mizil
|
Abhishek Kumar (University of Maryland), Alexandru Niculescu-Mizil
(NEC Laboratories America), Koray Kavukcuoglu (NEC Laboratories America), Hal
Daume III (University of Maryland)
|
A Binary Classification Framework for Two-Stage Multiple Kernel Learning
|
Appears in Proceedings of the 29th International Conference on
Machine Learning (ICML 2012)
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the advent of kernel methods, automating the task of specifying a
suitable kernel has become increasingly important. In this context, the
Multiple Kernel Learning (MKL) problem of finding a combination of
pre-specified base kernels that is suitable for the task at hand has received
significant attention from researchers. In this paper we show that Multiple
Kernel Learning can be framed as a standard binary classification problem with
additional constraints that ensure the positive definiteness of the learned
kernel. Framing MKL in this way has the distinct advantage that it makes it
easy to leverage the extensive research in binary classification to develop
better performing and more scalable MKL algorithms that are conceptually
simpler, and, arguably, more accessible to practitioners. Experiments on nine
data sets from different domains show that, despite its simplicity, the
proposed technique compares favorably with current leading MKL approaches.
|
[
{
"created": "Wed, 27 Jun 2012 19:59:59 GMT",
"version": "v1"
}
] |
2012-07-03
|
[
[
"Kumar",
"Abhishek",
"",
"University of Maryland"
],
[
"Niculescu-Mizil",
"Alexandru",
"",
"NEC Laboratories America"
],
[
"Kavukcuoglu",
"Koray",
"",
"NEC Laboratories America"
],
[
"Daume",
"Hal",
"III",
"University of Maryland"
]
] |
With the advent of kernel methods, automating the task of specifying a suitable kernel has become increasingly important. In this context, the Multiple Kernel Learning (MKL) problem of finding a combination of pre-specified base kernels that is suitable for the task at hand has received significant attention from researchers. In this paper we show that Multiple Kernel Learning can be framed as a standard binary classification problem with additional constraints that ensure the positive definiteness of the learned kernel. Framing MKL in this way has the distinct advantage that it makes it easy to leverage the extensive research in binary classification to develop better performing and more scalable MKL algorithms that are conceptually simpler, and, arguably, more accessible to practitioners. Experiments on nine data sets from different domains show that, despite its simplicity, the proposed technique compares favorably with current leading MKL approaches.
|
1812.04431
|
Apostolos Rikos
|
Apostolos I. Rikos
|
Distributed Weight Balancing in Directed Topologies
|
doctoral thesis
| null | null | null |
cs.DC cs.DS eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This doctoral thesis concerns novel distributed algorithms for weight
balancing over directed (communication) topologies. A directed topology
(digraph) with nonnegative (or positive) weights assigned on each edge is
weight-balanced if, for each node, the sum of the weights of in-coming edges
equals the sum of the weights of out-going edges. The novel algorithms
introduced in this thesis can facilitate the development of strategies for
generating weight balanced digraphs, in a distributed manner, and find numerous
applications in coordination and control of multi-component systems. In the
first part of this thesis, we introduce a novel distributed algorithm that
operates over a static topology and solves the weight balancing problem when
the weights are restricted to be nonnegative integers. In the second part of
the thesis, we present a novel distributed algorithm which solves the integer
weight balancing problem in the presence of arbitrary (time-varying and
inhomogeneous) delays that might affect the transmission at a particular link
at a particular time. In the third part of this thesis, we present a novel
distributed algorithm for obtaining admissible and balanced integer weights for
the case when there are lower and upper weight constraints on the communication
links. In the fourth part of this thesis we present a novel distributed
algorithm which solves the integer weight balancing problem under lower and
upper weight constraints over the communication links for the case where
arbitrary (time-varying and inhomogeneous) time delays and possible packet
drops affect the transmission at a particular link at a particular time.
|
[
{
"created": "Thu, 6 Dec 2018 00:06:54 GMT",
"version": "v1"
}
] |
2018-12-12
|
[
[
"Rikos",
"Apostolos I.",
""
]
] |
This doctoral thesis concerns novel distributed algorithms for weight balancing over directed (communication) topologies. A directed topology (digraph) with nonnegative (or positive) weights assigned on each edge is weight-balanced if, for each node, the sum of the weights of in-coming edges equals the sum of the weights of out-going edges. The novel algorithms introduced in this thesis can facilitate the development of strategies for generating weight balanced digraphs, in a distributed manner, and find numerous applications in coordination and control of multi-component systems. In the first part of this thesis, we introduce a novel distributed algorithm that operates over a static topology and solves the weight balancing problem when the weights are restricted to be nonnegative integers. In the second part of the thesis, we present a novel distributed algorithm which solves the integer weight balancing problem in the presence of arbitrary (time-varying and inhomogeneous) delays that might affect the transmission at a particular link at a particular time. In the third part of this thesis, we present a novel distributed algorithm for obtaining admissible and balanced integer weights for the case when there are lower and upper weight constraints on the communication links. In the fourth part of this thesis we present a novel distributed algorithm which solves the integer weight balancing problem under lower and upper weight constraints over the communication links for the case where arbitrary (time-varying and inhomogeneous) time delays and possible packet drops affect the transmission at a particular link at a particular time.
|
1610.07045
|
Yixuan (Julie) Zhu
|
Julie Yixuan Zhu, Chao Zhang, Huichu Zhang, Shi Zhi, Victor O.K. Li,
Jiawei Han, Yu Zheng
|
pg-Causality: Identifying Spatiotemporal Causal Pathways for Air
Pollutants with Urban Big Data
| null | null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many countries are suffering from severe air pollution. Understanding how
different air pollutants accumulate and propagate is critical to making
relevant public policies. In this paper, we use urban big data (air quality
data and meteorological data) to identify the \emph{spatiotemporal (ST) causal
pathways} for air pollutants. This problem is challenging because: (1) there
are numerous noisy and low-pollution periods in the raw air quality data, which
may lead to unreliable causality analysis, (2) for large-scale data in the ST
space, the computational complexity of constructing a causal structure is very
high, and (3) the \emph{ST causal pathways} are complex due to the interactions
of multiple pollutants and the influence of environmental factors. Therefore,
we present \emph{p-Causality}, a novel pattern-aided causality analysis
approach that combines the strengths of \emph{pattern mining} and
\emph{Bayesian learning} to efficiently and faithfully identify the \emph{ST
causal pathways}. First, \emph{Pattern mining} helps suppress the noise by
capturing frequent evolving patterns (FEPs) of each monitoring sensor, and
greatly reduce the complexity by selecting the pattern-matched sensors as
"causers". Then, \emph{Bayesian learning} carefully encodes the local and ST
causal relations with a Gaussian Bayesian network (GBN)-based graphical model,
which also integrates environmental influences to minimize biases in the final
results. We evaluate our approach with three real-world data sets containing
982 air quality sensors, in three regions of China from 01-Jun-2013 to
19-Dec-2015. Results show that our approach outperforms the traditional causal
structure learning methods in time efficiency, inference accuracy and
interpretability.
|
[
{
"created": "Sat, 22 Oct 2016 13:17:28 GMT",
"version": "v1"
},
{
"created": "Thu, 9 Nov 2017 08:30:29 GMT",
"version": "v2"
},
{
"created": "Wed, 18 Apr 2018 07:39:53 GMT",
"version": "v3"
}
] |
2018-04-19
|
[
[
"Zhu",
"Julie Yixuan",
""
],
[
"Zhang",
"Chao",
""
],
[
"Zhang",
"Huichu",
""
],
[
"Zhi",
"Shi",
""
],
[
"Li",
"Victor O. K.",
""
],
[
"Han",
"Jiawei",
""
],
[
"Zheng",
"Yu",
""
]
] |
Many countries are suffering from severe air pollution. Understanding how different air pollutants accumulate and propagate is critical to making relevant public policies. In this paper, we use urban big data (air quality data and meteorological data) to identify the \emph{spatiotemporal (ST) causal pathways} for air pollutants. This problem is challenging because: (1) there are numerous noisy and low-pollution periods in the raw air quality data, which may lead to unreliable causality analysis, (2) for large-scale data in the ST space, the computational complexity of constructing a causal structure is very high, and (3) the \emph{ST causal pathways} are complex due to the interactions of multiple pollutants and the influence of environmental factors. Therefore, we present \emph{p-Causality}, a novel pattern-aided causality analysis approach that combines the strengths of \emph{pattern mining} and \emph{Bayesian learning} to efficiently and faithfully identify the \emph{ST causal pathways}. First, \emph{Pattern mining} helps suppress the noise by capturing frequent evolving patterns (FEPs) of each monitoring sensor, and greatly reduce the complexity by selecting the pattern-matched sensors as "causers". Then, \emph{Bayesian learning} carefully encodes the local and ST causal relations with a Gaussian Bayesian network (GBN)-based graphical model, which also integrates environmental influences to minimize biases in the final results. We evaluate our approach with three real-world data sets containing 982 air quality sensors, in three regions of China from 01-Jun-2013 to 19-Dec-2015. Results show that our approach outperforms the traditional causal structure learning methods in time efficiency, inference accuracy and interpretability.
|
2406.17608
|
Xiao Ma
|
Xiao Ma, Yuhui Tao, Yuhan Zhang, Zexuan Ji, Yizhe Zhang, Qiang Chen
|
Test-Time Generative Augmentation for Medical Image Segmentation
|
12pages, 2figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose a novel approach to enhance medical image
segmentation during test time. Instead of employing hand-crafted transforms or
functions on the input test image to create multiple views for test-time
augmentation, we advocate for the utilization of an advanced domain-fine-tuned
generative model (GM), e.g., stable diffusion (SD), for test-time augmentation.
Given that the GM has been trained to comprehend and encapsulate comprehensive
domain data knowledge, it is superior to segmentation models in terms of
representing the data characteristics and distribution. Hence, by integrating
the GM into test-time augmentation, we can effectively generate multiple views
of a given test sample, aligning with the content and appearance
characteristics of the sample and the related local data distribution. This
approach renders the augmentation process more adaptable and resilient compared
to conventional handcrafted transforms. Comprehensive experiments conducted
across three medical image segmentation tasks (nine datasets) demonstrate the
efficacy and versatility of the proposed TTGA in enhancing segmentation
outcomes. Moreover, TTGA significantly improves pixel-wise error estimation,
thereby facilitating the deployment of a more reliable segmentation system.
Code will be released at: https://github.com/maxiao0234/TTGA.
|
[
{
"created": "Tue, 25 Jun 2024 14:53:01 GMT",
"version": "v1"
}
] |
2024-06-26
|
[
[
"Ma",
"Xiao",
""
],
[
"Tao",
"Yuhui",
""
],
[
"Zhang",
"Yuhan",
""
],
[
"Ji",
"Zexuan",
""
],
[
"Zhang",
"Yizhe",
""
],
[
"Chen",
"Qiang",
""
]
] |
In this paper, we propose a novel approach to enhance medical image segmentation during test time. Instead of employing hand-crafted transforms or functions on the input test image to create multiple views for test-time augmentation, we advocate for the utilization of an advanced domain-fine-tuned generative model (GM), e.g., stable diffusion (SD), for test-time augmentation. Given that the GM has been trained to comprehend and encapsulate comprehensive domain data knowledge, it is superior to segmentation models in terms of representing the data characteristics and distribution. Hence, by integrating the GM into test-time augmentation, we can effectively generate multiple views of a given test sample, aligning with the content and appearance characteristics of the sample and the related local data distribution. This approach renders the augmentation process more adaptable and resilient compared to conventional handcrafted transforms. Comprehensive experiments conducted across three medical image segmentation tasks (nine datasets) demonstrate the efficacy and versatility of the proposed TTGA in enhancing segmentation outcomes. Moreover, TTGA significantly improves pixel-wise error estimation, thereby facilitating the deployment of a more reliable segmentation system. Code will be released at: https://github.com/maxiao0234/TTGA.
|
2310.19630
|
Olivier Rukundo
|
Olivier Rukundo, Andrea Behanova, Riccardo De Feo, Seppo Ronkko, Joni
Oja, Jussi Tohka
|
Convolutional Neural Networks for Automatic Detection of Intact
Adenovirus from TEM Imaging with Debris, Broken and Artefacts Particles
|
13 pages, 8 figures
| null | null | null |
cs.CV cs.NE
|
http://creativecommons.org/licenses/by/4.0/
|
Regular monitoring of the primary particles and purity profiles of a drug
product during development and manufacturing processes is essential for
manufacturers to avoid product variability and contamination. Transmission
electron microscopy (TEM) imaging helps manufacturers predict how changes
affect particle characteristics and purity for virus-based gene therapy vector
products and intermediates. Since intact particles can characterize efficacious
products, it is beneficial to automate the detection of intact adenovirus
against a non-intact-viral background mixed with debris, broken, and artefact
particles. In the presence of such particles, detecting intact adenoviruses
becomes more challenging. To overcome the challenge, due to such a presence, we
developed a software tool for semi-automatic annotation and segmentation of
adenoviruses and a software tool for automatic segmentation and detection of
intact adenoviruses in TEM imaging systems. The developed semi-automatic tool
exploited conventional image analysis techniques while the automatic tool was
built based on convolutional neural networks and image analysis techniques. Our
quantitative and qualitative evaluations showed outstanding true positive
detection rates compared to false positive and negative rates where
adenoviruses were nicely detected without mistaking them for real debris,
broken adenoviruses, and/or staining artefacts.
|
[
{
"created": "Mon, 30 Oct 2023 15:23:25 GMT",
"version": "v1"
},
{
"created": "Mon, 6 Nov 2023 14:58:19 GMT",
"version": "v2"
},
{
"created": "Thu, 9 Nov 2023 15:18:03 GMT",
"version": "v3"
}
] |
2023-11-10
|
[
[
"Rukundo",
"Olivier",
""
],
[
"Behanova",
"Andrea",
""
],
[
"De Feo",
"Riccardo",
""
],
[
"Ronkko",
"Seppo",
""
],
[
"Oja",
"Joni",
""
],
[
"Tohka",
"Jussi",
""
]
] |
Regular monitoring of the primary particles and purity profiles of a drug product during development and manufacturing processes is essential for manufacturers to avoid product variability and contamination. Transmission electron microscopy (TEM) imaging helps manufacturers predict how changes affect particle characteristics and purity for virus-based gene therapy vector products and intermediates. Since intact particles can characterize efficacious products, it is beneficial to automate the detection of intact adenovirus against a non-intact-viral background mixed with debris, broken, and artefact particles. In the presence of such particles, detecting intact adenoviruses becomes more challenging. To overcome the challenge, due to such a presence, we developed a software tool for semi-automatic annotation and segmentation of adenoviruses and a software tool for automatic segmentation and detection of intact adenoviruses in TEM imaging systems. The developed semi-automatic tool exploited conventional image analysis techniques while the automatic tool was built based on convolutional neural networks and image analysis techniques. Our quantitative and qualitative evaluations showed outstanding true positive detection rates compared to false positive and negative rates where adenoviruses were nicely detected without mistaking them for real debris, broken adenoviruses, and/or staining artefacts.
|
1202.1596
|
Vasilis Ntranos
|
Vasileios Ntranos, Giuseppe Caire, Alexandros G. Dimakis
|
Allocations for Heterogenous Distributed Storage
|
5 pages, 1 figure
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the problem of storing a data object in a set of data nodes that
fail independently with given probabilities. Our problem is a natural
generalization of a homogeneous storage allocation problem where all the nodes
had the same reliability and is naturally motivated for peer-to-peer and cloud
storage systems with different types of nodes. Assuming optimal erasure coding
(MDS), the goal is to find a storage allocation (i.e., how much to store in each
node) to maximize the probability of successful recovery. This problem turns
out to be a challenging combinatorial optimization problem. In this work we
introduce an approximation framework based on large deviation inequalities and
convex optimization. We propose two approximation algorithms and study the
asymptotic performance of the resulting allocations.
|
[
{
"created": "Wed, 8 Feb 2012 04:18:14 GMT",
"version": "v1"
}
] |
2012-02-09
|
[
[
"Ntranos",
"Vasileios",
""
],
[
"Caire",
"Giuseppe",
""
],
[
"Dimakis",
"Alexandros G.",
""
]
] |
We study the problem of storing a data object in a set of data nodes that fail independently with given probabilities. Our problem is a natural generalization of a homogeneous storage allocation problem where all the nodes had the same reliability and is naturally motivated for peer-to-peer and cloud storage systems with different types of nodes. Assuming optimal erasure coding (MDS), the goal is to find a storage allocation (i.e., how much to store in each node) to maximize the probability of successful recovery. This problem turns out to be a challenging combinatorial optimization problem. In this work we introduce an approximation framework based on large deviation inequalities and convex optimization. We propose two approximation algorithms and study the asymptotic performance of the resulting allocations.
|
2310.16704
|
AnneMarie Borg
|
Suzan Zuurmond, AnneMarie Borg, Matthijs van Kempen and Remi Wieten
|
Human-centred explanation of rule-based decision-making systems in the
legal domain
|
This is the full version of a demo at the 36th International
Conference on Legal Knowledge and Information Systems (JURIX'23)
| null | null | null |
cs.AI cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
We propose a human-centred explanation method for rule-based automated
decision-making systems in the legal domain. Firstly, we establish a conceptual
framework for developing explanation methods, representing its key internal
components (content, communication and adaptation) and external dependencies
(decision-making system, human recipient and domain). Secondly, we propose an
explanation method that uses a graph database to enable question-driven
explanations and multimedia display. This way, we can tailor the explanation to
the user. Finally, we show how our conceptual framework is applicable to a
real-world scenario at the Dutch Tax and Customs Administration and implement
our explanation method for this scenario.
|
[
{
"created": "Wed, 25 Oct 2023 15:20:05 GMT",
"version": "v1"
}
] |
2023-10-26
|
[
[
"Zuurmond",
"Suzan",
""
],
[
"Borg",
"AnneMarie",
""
],
[
"van Kempen",
"Matthijs",
""
],
[
"Wieten",
"Remi",
""
]
] |
We propose a human-centred explanation method for rule-based automated decision-making systems in the legal domain. Firstly, we establish a conceptual framework for developing explanation methods, representing its key internal components (content, communication and adaptation) and external dependencies (decision-making system, human recipient and domain). Secondly, we propose an explanation method that uses a graph database to enable question-driven explanations and multimedia display. This way, we can tailor the explanation to the user. Finally, we show how our conceptual framework is applicable to a real-world scenario at the Dutch Tax and Customs Administration and implement our explanation method for this scenario.
|
1208.0384
|
Liu Shaoli
|
Shaoli Liu, Yunji Chen, Tianshi Chen, Ling Li, Chao Lu
|
Global Adaptive Routing Algorithm Without Additional Congestion
Propagation Network
| null | null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by-nc-sa/3.0/
|
Adaptive routing algorithms have been employed in multichip interconnection
networks in order to improve network performance. Does an algorithm use local
or global network state? This is the key question in adaptive routing. In many
traffic patterns, ignorance of the global network state, which leads to
routing selection based only on local congestion information, tends to violate
global load balance. To attack the load-balance issue in adaptive routing,
some global adaptive routing algorithms, such as Regional Congestion Awareness
(RCA) and Destination-Based Adaptive Routing (DBAR), introduce a congestion
propagation network to obtain global network status information.
However, the congestion propagation network incurs additional power and area
consumption that cannot be ignored. From another point of view, if we simply
used the wires of the congestion propagation network to increase the bandwidth
between neighboring nodes, the network performance could be improved as well.
In this paper, we propose a global adaptive routing algorithm that does not
employ an additional congestion propagation network. Our algorithm obtains the
global network state in a novel way and offers significant improvement over
the baseline local adaptive routing algorithm (the xy-adaptive algorithm,
which selects routes based on local congestion information at each hop) for
both medium and high injection rates.
In wormhole flow control, all the routing information (flit id, source node
id, destination node id, vc id and address) is contained in the head flit, and
data is carried in body flits. As a result, there are always many free bits in
the head flit, especially when the bandwidth is 128 bits, which is common in
interconnection network design. We can therefore use these free bits in the
head flit to propagate global congestion information without increasing the
number of flits.
|
[
{
"created": "Thu, 2 Aug 2012 02:25:59 GMT",
"version": "v1"
}
] |
2012-08-03
|
[
[
"Liu",
"Shaoli",
""
],
[
"Chen",
"Yunji",
""
],
[
"Chen",
"Tianshi",
""
],
[
"Li",
"Ling",
""
],
[
"Lu",
"Chao",
""
]
] |
Adaptive routing algorithms have been employed in multichip interconnection networks in order to improve network performance. Does an algorithm use local or global network state? This is the key question in adaptive routing. In many traffic patterns, ignorance of the global network state, which leads to routing selection based only on local congestion information, tends to violate global load balance. To attack the load-balance issue in adaptive routing, some global adaptive routing algorithms, such as Regional Congestion Awareness (RCA) and Destination-Based Adaptive Routing (DBAR), introduce a congestion propagation network to obtain global network status information. However, the congestion propagation network incurs additional power and area consumption that cannot be ignored. From another point of view, if we simply used the wires of the congestion propagation network to increase the bandwidth between neighboring nodes, the network performance could be improved as well. In this paper, we propose a global adaptive routing algorithm that does not employ an additional congestion propagation network. Our algorithm obtains the global network state in a novel way and offers significant improvement over the baseline local adaptive routing algorithm (the xy-adaptive algorithm, which selects routes based on local congestion information at each hop) for both medium and high injection rates. In wormhole flow control, all the routing information (flit id, source node id, destination node id, vc id and address) is contained in the head flit, and data is carried in body flits. As a result, there are always many free bits in the head flit, especially when the bandwidth is 128 bits, which is common in interconnection network design. We can therefore use these free bits in the head flit to propagate global congestion information without increasing the number of flits.
|
2310.20354
|
Keith Malcolm Smith
|
Keith Malcolm Smith and Jason P. Smith
|
Statistical Complexity of Heterogeneous Geometric Networks
|
12 pages, 6 figures
| null | null | null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Heterogeneity and geometry are key explanatory components underlying the
structure of real-world networks. The relationship between these components and
the statistical complexity of networks is not well understood. We introduce a
parsimonious normalised measure of statistical complexity for networks --
normalised hierarchical complexity. The measure is trivially 0 in regular
graphs and we prove that this measure tends to 0 in Erd\"os-R\'enyi random
graphs in the thermodynamic limit. We go on to demonstrate that greater
complexity arises from the combination of hierarchical and geometric components
to the network structure than either on their own. Further, the levels of
complexity achieved are similar to those found in many real-world networks. We
also find that real-world networks establish connections in a way which
increases hierarchical complexity and which our null models and a range of
attachment mechanisms fail to explain. This underlines the non-trivial nature
of statistical complexity in real-world networks and provides foundations for
the comparative analysis of network complexity within and across disciplines.
|
[
{
"created": "Tue, 31 Oct 2023 10:51:10 GMT",
"version": "v1"
},
{
"created": "Thu, 29 Feb 2024 17:02:14 GMT",
"version": "v2"
}
] |
2024-03-01
|
[
[
"Smith",
"Keith Malcolm",
""
],
[
"Smith",
"Jason P.",
""
]
] |
Heterogeneity and geometry are key explanatory components underlying the structure of real-world networks. The relationship between these components and the statistical complexity of networks is not well understood. We introduce a parsimonious normalised measure of statistical complexity for networks -- normalised hierarchical complexity. The measure is trivially 0 in regular graphs and we prove that this measure tends to 0 in Erd\"os-R\'enyi random graphs in the thermodynamic limit. We go on to demonstrate that greater complexity arises from the combination of hierarchical and geometric components to the network structure than either on their own. Further, the levels of complexity achieved are similar to those found in many real-world networks. We also find that real-world networks establish connections in a way which increases hierarchical complexity and which our null models and a range of attachment mechanisms fail to explain. This underlines the non-trivial nature of statistical complexity in real-world networks and provides foundations for the comparative analysis of network complexity within and across disciplines.
|
1310.5839
|
Volker Weinberg
|
David Brayford, Momme Allalen and Volker Weinberg
|
Extreme Scaling of Lattice Quantum Chromodynamics
|
5 pages, 2 figures, talk given at the "Extreme Scaling on SuperMUC"
Minisymposium during ParCo 2013, International Conference on Parallel
Computing, 10-13 September 2013, Munich
| null | null | null |
cs.DC cs.PF hep-lat physics.comp-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As the complexity and size of challenges in science and engineering are
continually increasing, it is highly important that applications are able to
scale strongly to very large numbers of cores (>100,000 cores) to enable HPC
systems to be utilised efficiently. This paper presents results of strong
scaling tests performed with an MPI only and a hybrid MPI + OpenMP version of
the Lattice QCD application BQCD on the European Tier-0 system SuperMUC at LRZ.
|
[
{
"created": "Tue, 22 Oct 2013 08:48:03 GMT",
"version": "v1"
}
] |
2013-10-23
|
[
[
"Brayford",
"David",
""
],
[
"Allalen",
"Momme",
""
],
[
"Weinberg",
"Volker",
""
]
] |
As the complexity and size of challenges in science and engineering are continually increasing, it is highly important that applications are able to scale strongly to very large numbers of cores (>100,000 cores) to enable HPC systems to be utilised efficiently. This paper presents results of strong scaling tests performed with an MPI only and a hybrid MPI + OpenMP version of the Lattice QCD application BQCD on the European Tier-0 system SuperMUC at LRZ.
|
2404.19431
|
Mohammad Javad Ahmadi
|
Mohammad Javad Ahmadi, Rafael F. Schaefer, H. Vincent Poor
|
Integrated Sensing and Communications for Unsourced Random Access:
Fundamental Limits
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
This work considers the problem of integrated sensing and communication
(ISAC) with a massive number of unsourced and uncoordinated users. In the
proposed model, known as the unsourced ISAC system (UNISAC), all active
communication and sensing users share a short frame to transmit their signals,
without requiring scheduling with the base station (BS). Hence, the signal
received from each user is affected by significant interference from numerous
interfering users, making it challenging to extract the transmitted signals.
UNISAC aims to decode the transmitted message sequences from communication
users while simultaneously detecting active sensing users, regardless of the
identity of the decoded and detected users. In this paper, we derive an
achievable performance limit for UNISAC and demonstrate its superiority over
conventional approaches such as ALOHA, time-division multiple access, treating
interference as noise, and multiple signal classification. Through numerical
simulations, we validate the UNISAC's effectiveness in detecting and decoding a
large number of users.
|
[
{
"created": "Tue, 30 Apr 2024 10:26:04 GMT",
"version": "v1"
},
{
"created": "Wed, 1 May 2024 05:17:35 GMT",
"version": "v2"
}
] |
2024-05-02
|
[
[
"Ahmadi",
"Mohammad Javad",
""
],
[
"Schaefer",
"Rafael F.",
""
],
[
"Poor",
"H. Vincent",
""
]
] |
This work considers the problem of integrated sensing and communication (ISAC) with a massive number of unsourced and uncoordinated users. In the proposed model, known as the unsourced ISAC system (UNISAC), all active communication and sensing users share a short frame to transmit their signals, without requiring scheduling with the base station (BS). Hence, the signal received from each user is affected by significant interference from numerous interfering users, making it challenging to extract the transmitted signals. UNISAC aims to decode the transmitted message sequences from communication users while simultaneously detecting active sensing users, regardless of the identity of the decoded and detected users. In this paper, we derive an achievable performance limit for UNISAC and demonstrate its superiority over conventional approaches such as ALOHA, time-division multiple access, treating interference as noise, and multiple signal classification. Through numerical simulations, we validate the UNISAC's effectiveness in detecting and decoding a large number of users.
|
2305.19212
|
Markus Hecher
|
Johannes K. Fichte, Markus Hecher, Michael Morak, Patrick Thier,
Stefan Woltran
|
Solving Projected Model Counting by Utilizing Treewidth and its Limits
|
arXiv admin note: substantial text overlap with arXiv:1805.05445
| null | null | null |
cs.CC cs.AI cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we introduce a novel algorithm to solve projected model
counting (PMC). PMC asks to count solutions of a Boolean formula with respect
to a given set of projection variables, where multiple solutions that are
identical when restricted to the projection variables count as only one
solution. Inspired by the observation that the so-called "treewidth" is one of
the most prominent structural parameters, our algorithm utilizes small
treewidth of the primal graph of the input instance. More precisely, it runs in
time O(2^{2^{k+4}} n^2), where k is the treewidth and n is the input size of the
instance. In other words, we obtain that the problem PMC is fixed-parameter
tractable when parameterized by treewidth. Further, we take the exponential
time hypothesis (ETH) into consideration and establish lower bounds of bounded
treewidth algorithms for PMC, yielding asymptotically tight runtime bounds of
our algorithm. While the algorithm above serves as a first theoretical upper
bound and although it might be quite appealing for small values of k,
unsurprisingly a naive implementation adhering to this runtime bound suffers
already from instances of relatively small width. Therefore, we turn our
attention to several measures in order to resolve this issue towards exploiting
treewidth in practice: We present a technique called nested dynamic
programming, where different levels of abstractions of the primal graph are
used to (recursively) compute and refine tree decompositions of a given
instance. Finally, we provide a nested dynamic programming algorithm and an
implementation that relies on database technology for PMC and a prominent
special case of PMC, namely model counting (#Sat). Experiments indicate that
the advancements are promising, allowing us to solve instances of treewidth
upper bounds beyond 200.
|
[
{
"created": "Tue, 30 May 2023 17:02:07 GMT",
"version": "v1"
},
{
"created": "Wed, 31 May 2023 00:51:58 GMT",
"version": "v2"
}
] |
2023-06-01
|
[
[
"Fichte",
"Johannes K.",
""
],
[
"Hecher",
"Markus",
""
],
[
"Morak",
"Michael",
""
],
[
"Thier",
"Patrick",
""
],
[
"Woltran",
"Stefan",
""
]
] |
In this paper, we introduce a novel algorithm to solve projected model counting (PMC). PMC asks to count solutions of a Boolean formula with respect to a given set of projection variables, where multiple solutions that are identical when restricted to the projection variables count as only one solution. Inspired by the observation that the so-called "treewidth" is one of the most prominent structural parameters, our algorithm utilizes small treewidth of the primal graph of the input instance. More precisely, it runs in time O(2^{2^{k+4}} n^2), where k is the treewidth and n is the input size of the instance. In other words, we obtain that the problem PMC is fixed-parameter tractable when parameterized by treewidth. Further, we take the exponential time hypothesis (ETH) into consideration and establish lower bounds of bounded treewidth algorithms for PMC, yielding asymptotically tight runtime bounds of our algorithm. While the algorithm above serves as a first theoretical upper bound and although it might be quite appealing for small values of k, unsurprisingly a naive implementation adhering to this runtime bound suffers already from instances of relatively small width. Therefore, we turn our attention to several measures in order to resolve this issue towards exploiting treewidth in practice: We present a technique called nested dynamic programming, where different levels of abstractions of the primal graph are used to (recursively) compute and refine tree decompositions of a given instance. Finally, we provide a nested dynamic programming algorithm and an implementation that relies on database technology for PMC and a prominent special case of PMC, namely model counting (#Sat). Experiments indicate that the advancements are promising, allowing us to solve instances of treewidth upper bounds beyond 200.
|
1902.04744
|
Peixin Wang
|
Peixin Wang, Hongfei Fu, Krishnendu Chatterjee, Yuxin Deng, Ming Xu
|
Proving Expected Sensitivity of Probabilistic Programs with Randomized
Variable-Dependent Termination Time
| null | null | null | null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The notion of program sensitivity (aka Lipschitz continuity) specifies that
changes in the program input result in proportional changes to the program
output. For probabilistic programs the notion is naturally extended to expected
sensitivity. A previous approach develops a relational program logic framework
for proving expected sensitivity of probabilistic while loops, where the number
of iterations is fixed and bounded. In this work, we consider probabilistic
while loops where the number of iterations is not fixed, but randomized and
depends on the initial input values. We present a sound approach for proving
expected sensitivity of such programs. Our sound approach is martingale-based
and can be automated through existing martingale-synthesis algorithms.
Furthermore, our approach is compositional for sequential composition of while
loops under a mild side condition. We demonstrate the effectiveness of our
approach on several classical examples from Gambler's Ruin, stochastic hybrid
systems and stochastic gradient descent. We also present experimental results
showing that our automated approach can handle various probabilistic programs
in the literature.
|
[
{
"created": "Wed, 13 Feb 2019 05:32:09 GMT",
"version": "v1"
},
{
"created": "Sat, 13 Jul 2019 09:31:56 GMT",
"version": "v2"
},
{
"created": "Mon, 28 Oct 2019 13:59:53 GMT",
"version": "v3"
}
] |
2019-10-29
|
[
[
"Wang",
"Peixin",
""
],
[
"Fu",
"Hongfei",
""
],
[
"Chatterjee",
"Krishnendu",
""
],
[
"Deng",
"Yuxin",
""
],
[
"Xu",
"Ming",
""
]
] |
The notion of program sensitivity (aka Lipschitz continuity) specifies that changes in the program input result in proportional changes to the program output. For probabilistic programs the notion is naturally extended to expected sensitivity. A previous approach develops a relational program logic framework for proving expected sensitivity of probabilistic while loops, where the number of iterations is fixed and bounded. In this work, we consider probabilistic while loops where the number of iterations is not fixed, but randomized and depends on the initial input values. We present a sound approach for proving expected sensitivity of such programs. Our sound approach is martingale-based and can be automated through existing martingale-synthesis algorithms. Furthermore, our approach is compositional for sequential composition of while loops under a mild side condition. We demonstrate the effectiveness of our approach on several classical examples from Gambler's Ruin, stochastic hybrid systems and stochastic gradient descent. We also present experimental results showing that our automated approach can handle various probabilistic programs in the literature.
|
2308.13712
|
Jiawei Liu
|
Jiawei Liu, Qiang Wang, Huijie Fan, Yinong Wang, Yandong Tang,
Liangqiong Qu
|
Residual Denoising Diffusion Models
|
Accepted to CVPR2024
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose residual denoising diffusion models (RDDM), a novel dual diffusion
process that decouples the traditional single denoising diffusion process into
residual diffusion and noise diffusion. This dual diffusion framework expands
the denoising-based diffusion models, initially uninterpretable for image
restoration, into a unified and interpretable model for both image generation
and restoration by introducing residuals. Specifically, our residual diffusion
represents directional diffusion from the target image to the degraded input
image and explicitly guides the reverse generation process for image
restoration, while noise diffusion represents random perturbations in the
diffusion process. The residual prioritizes certainty, while the noise
emphasizes diversity, enabling RDDM to effectively unify tasks with varying
certainty or diversity requirements, such as image generation and restoration.
We demonstrate that our sampling process is consistent with that of DDPM and
DDIM through coefficient transformation, and propose a partially
path-independent generation process to better understand the reverse process.
Notably, our RDDM enables a generic UNet, trained with only an L1 loss and a
batch size of 1, to compete with state-of-the-art image restoration methods. We
provide code and pre-trained models to encourage further exploration,
application, and development of our innovative framework
(https://github.com/nachifur/RDDM).
|
[
{
"created": "Fri, 25 Aug 2023 23:54:15 GMT",
"version": "v1"
},
{
"created": "Sat, 7 Oct 2023 14:32:30 GMT",
"version": "v2"
},
{
"created": "Fri, 22 Mar 2024 15:30:57 GMT",
"version": "v3"
}
] |
2024-03-25
|
[
[
"Liu",
"Jiawei",
""
],
[
"Wang",
"Qiang",
""
],
[
"Fan",
"Huijie",
""
],
[
"Wang",
"Yinong",
""
],
[
"Tang",
"Yandong",
""
],
[
"Qu",
"Liangqiong",
""
]
] |
We propose residual denoising diffusion models (RDDM), a novel dual diffusion process that decouples the traditional single denoising diffusion process into residual diffusion and noise diffusion. This dual diffusion framework expands the denoising-based diffusion models, initially uninterpretable for image restoration, into a unified and interpretable model for both image generation and restoration by introducing residuals. Specifically, our residual diffusion represents directional diffusion from the target image to the degraded input image and explicitly guides the reverse generation process for image restoration, while noise diffusion represents random perturbations in the diffusion process. The residual prioritizes certainty, while the noise emphasizes diversity, enabling RDDM to effectively unify tasks with varying certainty or diversity requirements, such as image generation and restoration. We demonstrate that our sampling process is consistent with that of DDPM and DDIM through coefficient transformation, and propose a partially path-independent generation process to better understand the reverse process. Notably, our RDDM enables a generic UNet, trained with only an L1 loss and a batch size of 1, to compete with state-of-the-art image restoration methods. We provide code and pre-trained models to encourage further exploration, application, and development of our innovative framework (https://github.com/nachifur/RDDM).
|
2305.04724
|
Banupriya V
|
V. Banupriya and S. Anusuya
|
Strategy for Rapid Diabetic Retinopathy Exposure Based on Enhanced
Feature Extraction Processing
| null | null |
10.32604/cmc.2023.038696
| null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
In the modern world, one of the most severe eye infections brought on by
diabetes is known as diabetic retinopathy, which will result in retinal damage,
and, thus, lead to blindness. Diabetic retinopathy can be well treated with
early diagnosis. Retinal fundus images of humans are used to screen for lesions
in the retina. However, detecting DR in the early stages is challenging due to
the minimal symptoms. Furthermore, the occurrence of diseases linked to
vascular anomalies brought on by DR aids in diagnosing the condition.
Nevertheless, the resources required for manually identifying the lesions are
high. Similarly, training for Convolutional Neural Networks is more
time-consuming. This proposed research aims to improve diabetic retinopathy
diagnosis by developing an enhanced deep learning model for timely DR
identification that is potentially more accurate than existing CNN-based
models. The proposed model will detect various lesions from retinal images in
the early stages. First, characteristics are retrieved from the retinal fundus
picture and put into the EDLM for classification. For dimensionality reduction,
EDLM is used. Additionally, the classification and feature extraction processes
are optimized using the stochastic gradient descent optimizer. The EDLM
effectiveness is assessed on the KAGGLE dataset with 3459 retinal images, and
results are compared against VGG16, VGG19, RESNET18, RESNET34, and RESNET50.
|
[
{
"created": "Mon, 8 May 2023 14:17:33 GMT",
"version": "v1"
}
] |
2023-05-09
|
[
[
"Banupriya",
"V.",
""
],
[
"Anusuya",
"S.",
""
]
] |
In the modern world, one of the most severe eye infections brought on by diabetes is known as diabetic retinopathy, which will result in retinal damage, and, thus, lead to blindness. Diabetic retinopathy can be well treated with early diagnosis. Retinal fundus images of humans are used to screen for lesions in the retina. However, detecting DR in the early stages is challenging due to the minimal symptoms. Furthermore, the occurrence of diseases linked to vascular anomalies brought on by DR aids in diagnosing the condition. Nevertheless, the resources required for manually identifying the lesions are high. Similarly, training for Convolutional Neural Networks is more time-consuming. This proposed research aims to improve diabetic retinopathy diagnosis by developing an enhanced deep learning model for timely DR identification that is potentially more accurate than existing CNN-based models. The proposed model will detect various lesions from retinal images in the early stages. First, characteristics are retrieved from the retinal fundus picture and put into the EDLM for classification. For dimensionality reduction, EDLM is used. Additionally, the classification and feature extraction processes are optimized using the stochastic gradient descent optimizer. The EDLM effectiveness is assessed on the KAGGLE dataset with 3459 retinal images, and results are compared against VGG16, VGG19, RESNET18, RESNET34, and RESNET50.
|
2007.10602
|
Makis Arsenis
|
Makis Arsenis, Odysseas Drosis, Robert Kleinberg
|
Revenue Monotonicity Under Misspecified Bidders
| null | null | null | null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We investigate revenue guarantees for auction mechanisms in a model where a
distribution is specified for each bidder, but only some of the distributions
are correct. The subset of bidders whose distribution is correctly specified
(henceforth, the "green bidders") is unknown to the auctioneer. The question we
address is whether the auctioneer can run a mechanism that is guaranteed to
obtain at least as much revenue, in expectation, as would be obtained by
running an optimal mechanism on the green bidders only. For single-parameter
feasibility environments, we find that the answer depends on the feasibility
constraint. For matroid environments, running the optimal mechanism using all
the specified distributions (including the incorrect ones) guarantees at least
as much revenue in expectation as running the optimal mechanism on the green
bidders. For any feasibility constraint that is not a matroid, there exists a
way of setting the specified distributions and the true distributions such that
the opposite conclusion holds.
|
[
{
"created": "Tue, 21 Jul 2020 05:11:35 GMT",
"version": "v1"
}
] |
2020-07-22
|
[
[
"Arsenis",
"Makis",
""
],
[
"Drosis",
"Odysseas",
""
],
[
"Kleinberg",
"Robert",
""
]
] |
We investigate revenue guarantees for auction mechanisms in a model where a distribution is specified for each bidder, but only some of the distributions are correct. The subset of bidders whose distribution is correctly specified (henceforth, the "green bidders") is unknown to the auctioneer. The question we address is whether the auctioneer can run a mechanism that is guaranteed to obtain at least as much revenue, in expectation, as would be obtained by running an optimal mechanism on the green bidders only. For single-parameter feasibility environments, we find that the answer depends on the feasibility constraint. For matroid environments, running the optimal mechanism using all the specified distributions (including the incorrect ones) guarantees at least as much revenue in expectation as running the optimal mechanism on the green bidders. For any feasibility constraint that is not a matroid, there exists a way of setting the specified distributions and the true distributions such that the opposite conclusion holds.
|
2404.08607
|
Liangzhi Wang
|
Liangzhi Wang, Chen Chen, Carlo Fischione, and Jie Zhang
|
Learning-Based Joint Antenna Selection and Precoding Design for
Cell-Free MIMO Networks
|
This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper considers a downlink cell-free multiple-input multiple-output
(MIMO) network in which multiple multi-antenna base stations (BSs) serve
multiple users via coherent joint transmission. In order to reduce the energy
consumption by radio frequency components, each BS selects a subset of antennas
for downlink data transmission after estimating the channel state information
(CSI). We aim to maximize the sum spectral efficiency by jointly optimizing the
antenna selection and precoding design. To alleviate the fronthaul overhead and
enable real-time network operation, we propose a distributed scalable machine
learning algorithm. In particular, at each BS, we deploy a convolutional neural
network (CNN) for antenna selection and a graph neural network (GNN) for
precoding design. Different from conventional centralized solutions that
require a large amount of CSI and signaling exchange among the BSs, the
proposed distributed machine learning algorithm takes only locally estimated
CSI as input. With well-trained learning models, it is shown that the proposed
algorithm significantly outperforms the distributed baseline schemes and
achieves a sum spectral efficiency comparable to its centralized counterpart.
|
[
{
"created": "Fri, 12 Apr 2024 17:13:50 GMT",
"version": "v1"
}
] |
2024-04-15
|
[
[
"Wang",
"Liangzhi",
""
],
[
"Chen",
"Chen",
""
],
[
"Fischione",
"Carlo",
""
],
[
"Zhang",
"Jie",
""
]
] |
This paper considers a downlink cell-free multiple-input multiple-output (MIMO) network in which multiple multi-antenna base stations (BSs) serve multiple users via coherent joint transmission. In order to reduce the energy consumption by radio frequency components, each BS selects a subset of antennas for downlink data transmission after estimating the channel state information (CSI). We aim to maximize the sum spectral efficiency by jointly optimizing the antenna selection and precoding design. To alleviate the fronthaul overhead and enable real-time network operation, we propose a distributed scalable machine learning algorithm. In particular, at each BS, we deploy a convolutional neural network (CNN) for antenna selection and a graph neural network (GNN) for precoding design. Different from conventional centralized solutions that require a large amount of CSI and signaling exchange among the BSs, the proposed distributed machine learning algorithm takes only locally estimated CSI as input. With well-trained learning models, it is shown that the proposed algorithm significantly outperforms the distributed baseline schemes and achieves a sum spectral efficiency comparable to its centralized counterpart.
|
2010.07024
|
Xiang Hui Nicholas Lim
|
Nicholas Lim, Bryan Hooi, See-Kiong Ng, Xueou Wang, Yong Liang Goh,
Renrong Weng, Jagannadan Varadarajan
|
STP-UDGAT: Spatial-Temporal-Preference User Dimensional Graph Attention
Network for Next POI Recommendation
|
To appear in Proceedings of the 29th ACM International Conference on
Information and Knowledge Management (CIKM), 2020
| null |
10.1145/3340531.3411876
| null |
cs.IR cs.LG cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Next Point-of-Interest (POI) recommendation is a longstanding problem across
the domains of Location-Based Social Networks (LBSN) and transportation. Recent
Recurrent Neural Network (RNN) based approaches learn POI-POI relationships in
a local view based on independent user visit sequences. This limits the model's
ability to directly connect and learn across users in a global view to
recommend semantically trained POIs. In this work, we propose a
Spatial-Temporal-Preference User Dimensional Graph Attention Network
(STP-UDGAT), a novel explore-exploit model that concurrently exploits
personalized user preferences and explores new POIs in global
spatial-temporal-preference (STP) neighbourhoods, while allowing users to
selectively learn from other users. In addition, we propose random walks as a
masked self-attention option to leverage the STP graphs' structures and find
new higher-order POI neighbours during exploration. Experimental results on six
real-world datasets show that our model significantly outperforms baseline and
state-of-the-art methods.
|
[
{
"created": "Tue, 6 Oct 2020 04:03:42 GMT",
"version": "v1"
}
] |
2020-10-15
|
[
[
"Lim",
"Nicholas",
""
],
[
"Hooi",
"Bryan",
""
],
[
"Ng",
"See-Kiong",
""
],
[
"Wang",
"Xueou",
""
],
[
"Goh",
"Yong Liang",
""
],
[
"Weng",
"Renrong",
""
],
[
"Varadarajan",
"Jagannadan",
""
]
] |
Next Point-of-Interest (POI) recommendation is a longstanding problem across the domains of Location-Based Social Networks (LBSN) and transportation. Recent Recurrent Neural Network (RNN) based approaches learn POI-POI relationships in a local view based on independent user visit sequences. This limits the model's ability to directly connect and learn across users in a global view to recommend semantically trained POIs. In this work, we propose a Spatial-Temporal-Preference User Dimensional Graph Attention Network (STP-UDGAT), a novel explore-exploit model that concurrently exploits personalized user preferences and explores new POIs in global spatial-temporal-preference (STP) neighbourhoods, while allowing users to selectively learn from other users. In addition, we propose random walks as a masked self-attention option to leverage the STP graphs' structures and find new higher-order POI neighbours during exploration. Experimental results on six real-world datasets show that our model significantly outperforms baseline and state-of-the-art methods.
|
2202.09391
|
Sourabh Balgi Mr.
|
Sourabh Balgi, Jose M. Pe\~na, Adel Daoud
|
Counterfactual Analysis of the Impact of the IMF Program on Child
Poverty in the Global-South Region using Causal-Graphical Normalizing Flows
|
8(+6) pages, 3(+3) figures, arXiv admin note: text overlap with
arXiv:2202.03281
| null | null | null |
cs.AI econ.EM stat.AP
|
http://creativecommons.org/licenses/by/4.0/
|
This work demonstrates the application of a particular branch of causal
inference and deep learning models: \emph{causal-Graphical Normalizing Flows
(c-GNFs)}. In a recent contribution, scholars showed that normalizing flows
carry certain properties, making them particularly suitable for causal and
counterfactual analysis. However, c-GNFs have only been tested in a simulated
data setting, and no contribution to date has evaluated the application of
c-GNFs on large-scale real-world data. Focusing on \emph{AI for social good},
our study provides a counterfactual analysis of the impact of the
International Monetary Fund (IMF) program on child poverty using c-GNFs. The
analysis relies on large-scale real-world observational data: 1,941,734
children under the age of 18, cared for by 567,344 families residing in 67
countries of the Global South. While the primary objective of the IMF is to
support governments in achieving economic stability, our results find that an
IMF program reduces child poverty as a positive side-effect by about
1.2$\pm$0.24 degree (`0' equals no poverty and `7' is maximum poverty). Thus,
our article shows how c-GNFs further the use of deep learning and causal
inference in AI for social good. It shows how learning algorithms can be used
for addressing the untapped potential for a significant social impact through
counterfactual inference at population level (ACE), sub-population level
(CACE), and individual level (ICE). In contrast to most works that model ACE or
CACE but not ICE, c-GNFs enable personalization using \emph{`The First Law of
Causal Inference'}.
|
[
{
"created": "Thu, 17 Feb 2022 12:18:14 GMT",
"version": "v1"
}
] |
2022-02-22
|
[
[
"Balgi",
"Sourabh",
""
],
[
"Peña",
"Jose M.",
""
],
[
"Daoud",
"Adel",
""
]
] |
This work demonstrates the application of a particular branch of causal inference and deep learning models: \emph{causal-Graphical Normalizing Flows (c-GNFs)}. In a recent contribution, scholars showed that normalizing flows carry certain properties, making them particularly suitable for causal and counterfactual analysis. However, c-GNFs have only been tested in a simulated data setting, and no contribution to date has evaluated the application of c-GNFs on large-scale real-world data. Focusing on \emph{AI for social good}, our study provides a counterfactual analysis of the impact of the International Monetary Fund (IMF) program on child poverty using c-GNFs. The analysis relies on large-scale real-world observational data: 1,941,734 children under the age of 18, cared for by 567,344 families residing in 67 countries of the Global South. While the primary objective of the IMF is to support governments in achieving economic stability, our results find that an IMF program reduces child poverty as a positive side-effect by about 1.2$\pm$0.24 degree (`0' equals no poverty and `7' is maximum poverty). Thus, our article shows how c-GNFs further the use of deep learning and causal inference in AI for social good. It shows how learning algorithms can be used for addressing the untapped potential for a significant social impact through counterfactual inference at population level (ACE), sub-population level (CACE), and individual level (ICE). In contrast to most works that model ACE or CACE but not ICE, c-GNFs enable personalization using \emph{`The First Law of Causal Inference'}.
|
1810.07132
|
Wei Dai
|
Wei Dai, Kenji Yoshigoe, William Parsley
|
Improving Data Quality through Deep Learning and Statistical Models
|
8 pages, 6 figures, and 3 tables
|
Dai, Wei, Kenji Yoshigoe, and William Parsley. "Improving Data
Quality Through Deep Learning and Statistical Models." In Information
Technology-New Generations, pp. 515-522. Springer, Cham, 2018
|
10.1007/978-3-319-54978-1_66
| null |
cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Traditional data quality control methods are based on users' experience or
previously established business rules, which limits performance in addition
to being a very time-consuming process with lower-than-desirable accuracy.
Utilizing deep learning, we can leverage computing resources and advanced
techniques to overcome these challenges and provide greater value to users. In
this paper, we first review relevant works and discuss machine
learning techniques, tools, and statistical quality models. Second, we offer a
creative data quality framework based on deep learning and a statistical model
algorithm for identifying data quality. Third, we use data involving salary
levels from an open dataset published by the state of Arkansas to demonstrate
how to identify outlier data and how to improve data quality via deep learning.
Finally, we discuss future work.
|
[
{
"created": "Tue, 16 Oct 2018 16:57:07 GMT",
"version": "v1"
}
] |
2018-10-17
|
[
[
"Dai",
"Wei",
""
],
[
"Yoshigoe",
"Kenji",
""
],
[
"Parsley",
"William",
""
]
] |
Traditional data quality control methods are based on users' experience or previously established business rules, which limits performance in addition to being a very time-consuming process with lower-than-desirable accuracy. Utilizing deep learning, we can leverage computing resources and advanced techniques to overcome these challenges and provide greater value to users. In this paper, we first review relevant works and discuss machine learning techniques, tools, and statistical quality models. Second, we offer a creative data quality framework based on deep learning and a statistical model algorithm for identifying data quality. Third, we use data involving salary levels from an open dataset published by the state of Arkansas to demonstrate how to identify outlier data and how to improve data quality via deep learning. Finally, we discuss future work.
|
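As a hedged illustration of the statistical-model half of such a framework (a toy sketch under invented salary data, not the paper's actual pipeline), a simple z-score check can flag outlier records:

```python
# Hypothetical sketch: flag salary records whose z-score exceeds a
# threshold. Data values are made up for illustration only.
from statistics import mean, stdev

def zscore_outliers(salaries, threshold=2.0):
    """Return the values lying more than `threshold` standard
    deviations from the mean. Note: a single dominating outlier
    inflates the standard deviation, so a moderate threshold is used."""
    mu, sigma = mean(salaries), stdev(salaries)
    return [s for s in salaries if abs(s - mu) / sigma > threshold]

data = [42000, 45000, 47000, 44000, 46000, 43000, 900000]
print(zscore_outliers(data))  # [900000]
```

A deep-learning component would then refine such candidate outliers, but that part is beyond a short sketch.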
1907.05406
|
Bing Yao
|
Bing Yao, Yarong Mu, Yirong Sun, Hui Sun, Xiaohui Zhang, Hongyu Wang,
Jing Su, Mingjun Zhang, Sihua Yang, Meimei Zhao, Xiaomin Wang, Fei Ma, Ming
Yao, Chao Yang, Jianming Xie
|
Using Chinese Characters To Generate Text-Based Passwords For
Information Security
| null | null | null | null |
cs.IT math.CO math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
Graphical passwords (GPWs) are used in many areas of the modern world.
Topological graphic passwords (Topsnut-gpws) are a new type of cryptography,
and they differ from existing GPWs. A Topsnut-gpw consists of two parts:
one is a topological structure (graph), and one is a set of discrete elements
(a graph labelling, or coloring); the topological structure connects these
discrete elements together to form an interesting "story". Our idea is to
input Chinese characters into computers and electronic devices with touch
screens by speech, handwriting, and keyboard, forming Hanzi-graphs and
Hanzi-gpws. We will use Hanzi-gpws to produce text-based passwords (TB-paws).
We will introduce flawed graph labellings on disconnected Hanzi-graphs.
|
[
{
"created": "Thu, 11 Jul 2019 17:46:56 GMT",
"version": "v1"
}
] |
2019-07-12
|
[
[
"Yao",
"Bing",
""
],
[
"Mu",
"Yarong",
""
],
[
"Sun",
"Yirong",
""
],
[
"Sun",
"Hui",
""
],
[
"Zhang",
"Xiaohui",
""
],
[
"Wang",
"Hongyu",
""
],
[
"Su",
"Jing",
""
],
[
"Zhang",
"Mingjun",
""
],
[
"Yang",
"Sihua",
""
],
[
"Zhao",
"Meimei",
""
],
[
"Wang",
"Xiaomin",
""
],
[
"Ma",
"Fei",
""
],
[
"Yao",
"Ming",
""
],
[
"Yang",
"Chao",
""
],
[
"Xie",
"Jianming",
""
]
] |
Graphical passwords (GPWs) are used in many areas of the modern world. Topological graphic passwords (Topsnut-gpws) are a new type of cryptography, and they differ from existing GPWs. A Topsnut-gpw consists of two parts: one is a topological structure (graph), and one is a set of discrete elements (a graph labelling, or coloring); the topological structure connects these discrete elements together to form an interesting "story". Our idea is to input Chinese characters into computers and electronic devices with touch screens by speech, handwriting, and keyboard, forming Hanzi-graphs and Hanzi-gpws. We will use Hanzi-gpws to produce text-based passwords (TB-paws). We will introduce flawed graph labellings on disconnected Hanzi-graphs.
|
1709.07658
|
Jordan Ivanchev
|
Jordan Ivanchev, Alois Knoll, Daniel Zehe, Suraj Nair, David Eckhoff
|
Potentials and Implications of Dedicated Highway Lanes for Autonomous
Vehicles
|
12 pages, 7 figures
| null | null | null |
cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The introduction of autonomous vehicles (AVs) will have far-reaching effects
on road traffic in cities and on highways. The implementation of an automated
highway system (AHS), possibly with a dedicated lane only for AVs, is believed
to be a requirement to maximise the benefit from the advantages of AVs. We
study the ramifications of an increasing percentage of AVs on the traffic
system with and without the introduction of a dedicated AV lane on highways. We
conduct an analytical evaluation of a simplified scenario and a macroscopic
simulation of the city of Singapore under user equilibrium conditions with a
realistic traffic demand. We present findings regarding average travel time,
fuel consumption, throughput and road usage. Instead of only considering the
highways, we also focus on the effects on the remaining road network. Our
results show a reduction of average travel time and fuel consumption as a
result of increasing the portion of AVs in the system. We show that the
introduction of an AV lane is not beneficial in terms of average commute time.
Examining the effects of the AV population only, however, the AV lane provides
a considerable reduction of travel time (approx. 25%) at the price of delaying
conventional vehicles (approx. 7%). Furthermore, a notable shift of travel
demand away from the highways towards major and small roads is noticed in early
stages of AV penetration of the system. Finally, our findings show that after a
certain threshold percentage of AVs the differences between AV and no AV lane
scenarios become negligible.
|
[
{
"created": "Fri, 22 Sep 2017 09:45:40 GMT",
"version": "v1"
}
] |
2017-09-25
|
[
[
"Ivanchev",
"Jordan",
""
],
[
"Knoll",
"Alois",
""
],
[
"Zehe",
"Daniel",
""
],
[
"Nair",
"Suraj",
""
],
[
"Eckhoff",
"David",
""
]
] |
The introduction of autonomous vehicles (AVs) will have far-reaching effects on road traffic in cities and on highways. The implementation of an automated highway system (AHS), possibly with a dedicated lane only for AVs, is believed to be a requirement to maximise the benefit from the advantages of AVs. We study the ramifications of an increasing percentage of AVs on the traffic system with and without the introduction of a dedicated AV lane on highways. We conduct an analytical evaluation of a simplified scenario and a macroscopic simulation of the city of Singapore under user equilibrium conditions with a realistic traffic demand. We present findings regarding average travel time, fuel consumption, throughput and road usage. Instead of only considering the highways, we also focus on the effects on the remaining road network. Our results show a reduction of average travel time and fuel consumption as a result of increasing the portion of AVs in the system. We show that the introduction of an AV lane is not beneficial in terms of average commute time. Examining the effects of the AV population only, however, the AV lane provides a considerable reduction of travel time (approx. 25%) at the price of delaying conventional vehicles (approx. 7%). Furthermore, a notable shift of travel demand away from the highways towards major and small roads is noticed in early stages of AV penetration of the system. Finally, our findings show that after a certain threshold percentage of AVs the differences between AV and no AV lane scenarios become negligible.
|
1006.0334
|
Rui Costa
|
Rui A. Costa and Michael Langberg and Jo\~ao Barros
|
One-Shot Capacity of Discrete Channels
|
ISIT 2010
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Shannon defined channel capacity as the highest rate at which there exists a
sequence of codes of block length $n$ such that the error probability goes to
zero as $n$ goes to infinity. In this definition, it is implicit that the block
length, which can be viewed as the number of available channel uses, is
unlimited. This is not the case when the transmission power must be
concentrated on a single transmission, most notably in military scenarios with
adversarial conditions or delay-tolerant networks with random short encounters.
A natural question arises: how much information can we transmit in a single use
of the channel? We give a precise characterization of the one-shot capacity of
discrete channels, defined as the maximum number of bits that can be
transmitted in a single use of a channel with an error probability that does
not exceed a prescribed value. This capacity definition is shown to be useful
and significantly different from the zero-error problem statement.
|
[
{
"created": "Wed, 2 Jun 2010 09:31:35 GMT",
"version": "v1"
}
] |
2010-06-03
|
[
[
"Costa",
"Rui A.",
""
],
[
"Langberg",
"Michael",
""
],
[
"Barros",
"João",
""
]
] |
Shannon defined channel capacity as the highest rate at which there exists a sequence of codes of block length $n$ such that the error probability goes to zero as $n$ goes to infinity. In this definition, it is implicit that the block length, which can be viewed as the number of available channel uses, is unlimited. This is not the case when the transmission power must be concentrated on a single transmission, most notably in military scenarios with adversarial conditions or delay-tolerant networks with random short encounters. A natural question arises: how much information can we transmit in a single use of the channel? We give a precise characterization of the one-shot capacity of discrete channels, defined as the maximum number of bits that can be transmitted in a single use of a channel with an error probability that does not exceed a prescribed value. This capacity definition is shown to be useful and significantly different from the zero-error problem statement.
|
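The definition in this abstract can be illustrated with a toy brute-force computation (a hedged sketch of the stated definition, not the authors' characterization): enumerate codebooks over a small discrete channel, decode by maximum likelihood, and take the largest codebook whose worst-case error does not exceed the prescribed value.

```python
# Toy illustration: one-shot capacity of a small discrete channel,
# i.e. log2 of the max number of messages sendable in a single use
# with per-codeword error probability <= eps. Brute force only works
# for tiny input alphabets; this is not an efficient characterization.
from itertools import combinations
from math import log2

def one_shot_capacity(W, eps):
    """W[x][y] = P(receive y | send x); rows index channel inputs."""
    n_x, n_y = len(W), len(W[0])
    best = 1  # a single codeword always succeeds (error 0)
    for size in range(2, n_x + 1):
        for code in combinations(range(n_x), size):
            # Maximum-likelihood decoder over the chosen codebook.
            decode = {y: max(code, key=lambda x: W[x][y])
                      for y in range(n_y)}
            # Worst-case error probability over the codewords.
            err = max(1 - sum(W[x][y] for y in range(n_y)
                              if decode[y] == x)
                      for x in code)
            if err <= eps:
                best = max(best, size)
    return log2(best)

# Binary symmetric channel with crossover probability 0.1:
bsc = [[0.9, 0.1], [0.1, 0.9]]
print(one_shot_capacity(bsc, 0.2))   # 1.0: both inputs usable
print(one_shot_capacity(bsc, 0.05))  # 0.0: only one message fits
```

Setting eps = 0 recovers a one-shot zero-error quantity, which is exactly the regime the paper distinguishes its capacity definition from.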
2203.05707
|
Da Ma
|
Ghazal Mirabnahrazam, Da Ma, Sieun Lee, Karteek Popuri, Hyunwoo Lee,
Jiguo Cao, Lei Wang, James E Galvin, Mirza Faisal Beg, and the Alzheimer's
Disease Neuroimaging Initiative
|
Machine Learning Based Multimodal Neuroimaging Genomics Dementia Score
for Predicting Future Conversion to Alzheimer's Disease
| null |
J Alzheimers Dis 1 Jan. (2022) 1-21
|
10.3233/JAD-220021
| null |
cs.LG cs.AI eess.IV q-bio.GN
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Background: The increasing availability of databases containing both magnetic
resonance imaging (MRI) and genetic data allows researchers to utilize
multimodal data to better understand the characteristics of dementia of
Alzheimer's type (DAT). Objective: The goal of this study was to develop and
analyze novel biomarkers that can help predict the development and progression
of DAT. Methods: We used feature selection and an ensemble learning classifier to
develop an image/genotype-based DAT score that represents a subject's
likelihood of developing DAT in the future. Three feature types were used: MRI
only, genetic only, and combined multimodal data. We used a novel data
stratification method to better represent different stages of DAT. Using a
pre-defined 0.5 threshold on DAT scores, we predicted whether or not a subject
would develop DAT in the future. Results: Our results on the Alzheimer's Disease
Neuroimaging Initiative (ADNI) database showed that dementia scores using
genetic data could better predict future DAT progression for currently normal
control subjects (Accuracy=0.857) compared to MRI (Accuracy=0.143), while MRI
can better characterize subjects with stable mild cognitive impairment
(Accuracy=0.614) compared to genetics (Accuracy=0.356). Combining MRI and
genetic data showed improved classification performance in the remaining
stratified groups. Conclusion: MRI and genetic data can contribute to DAT
prediction in different ways. MRI data reflects anatomical changes in the
brain, while genetic data can detect the risk of DAT progression prior to the
symptomatic onset. Combining information from multimodal data in the right way
can improve prediction performance.
|
[
{
"created": "Fri, 11 Mar 2022 01:35:30 GMT",
"version": "v1"
}
] |
2022-04-25
|
[
[
"Mirabnahrazam",
"Ghazal",
""
],
[
"Ma",
"Da",
""
],
[
"Lee",
"Sieun",
""
],
[
"Popuri",
"Karteek",
""
],
[
"Lee",
"Hyunwoo",
""
],
[
"Cao",
"Jiguo",
""
],
[
"Wang",
"Lei",
""
],
[
"Galvin",
"James E",
""
],
[
"Beg",
"Mirza Faisal",
""
],
[
"Initiative",
"the Alzheimer's Disease Neuroimaging",
""
]
] |
Background: The increasing availability of databases containing both magnetic resonance imaging (MRI) and genetic data allows researchers to utilize multimodal data to better understand the characteristics of dementia of Alzheimer's type (DAT). Objective: The goal of this study was to develop and analyze novel biomarkers that can help predict the development and progression of DAT. Methods: We used feature selection and an ensemble learning classifier to develop an image/genotype-based DAT score that represents a subject's likelihood of developing DAT in the future. Three feature types were used: MRI only, genetic only, and combined multimodal data. We used a novel data stratification method to better represent different stages of DAT. Using a pre-defined 0.5 threshold on DAT scores, we predicted whether or not a subject would develop DAT in the future. Results: Our results on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database showed that dementia scores using genetic data could better predict future DAT progression for currently normal control subjects (Accuracy=0.857) compared to MRI (Accuracy=0.143), while MRI can better characterize subjects with stable mild cognitive impairment (Accuracy=0.614) compared to genetics (Accuracy=0.356). Combining MRI and genetic data showed improved classification performance in the remaining stratified groups. Conclusion: MRI and genetic data can contribute to DAT prediction in different ways. MRI data reflects anatomical changes in the brain, while genetic data can detect the risk of DAT progression prior to the symptomatic onset. Combining information from multimodal data in the right way can improve prediction performance.
|
2108.11215
|
Reto Gubelmann
|
Reto Gubelmann (1), Peter Hongler (1), Siegfried Handschuh (1) ((1)
University of St.Gallen (HSG))
|
Exploring the Promises of Transformer-Based LMs for the Representation
of Normative Claims in the Legal Domain
|
11 pages, 3 figures
| null | null | null |
cs.CL cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this article, we explore the potential of transformer-based language
models (LMs) to correctly represent normative statements in the legal domain,
taking tax law as our use case. In our experiment, we use a variety of LMs as
bases for both word- and sentence-based clusterers that are then evaluated on a
small, expert-compiled test-set, consisting of real-world samples from tax law
research literature that can be clearly assigned to one of four normative
theories. The results of the experiment show that clusterers based on
sentence-BERT-embeddings deliver the most promising results. Based on this main
experiment, we make first attempts at using the best performing models in a
bootstrapping loop to build classifiers that map normative claims onto one of
these four normative theories.
|
[
{
"created": "Wed, 25 Aug 2021 13:03:04 GMT",
"version": "v1"
}
] |
2021-08-26
|
[
[
"Gubelmann",
"Reto",
""
],
[
"Hongler",
"Peter",
""
],
[
"Handschuh",
"Siegfried",
""
]
] |
In this article, we explore the potential of transformer-based language models (LMs) to correctly represent normative statements in the legal domain, taking tax law as our use case. In our experiment, we use a variety of LMs as bases for both word- and sentence-based clusterers that are then evaluated on a small, expert-compiled test-set, consisting of real-world samples from tax law research literature that can be clearly assigned to one of four normative theories. The results of the experiment show that clusterers based on sentence-BERT-embeddings deliver the most promising results. Based on this main experiment, we make first attempts at using the best performing models in a bootstrapping loop to build classifiers that map normative claims onto one of these four normative theories.
|
2011.05301
|
Eda Bayram
|
Eda Bayram and Alberto Garcia-Duran and Robert West
|
Node Attribute Completion in Knowledge Graphs with Multi-Relational
Propagation
|
7 pages
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The existing literature on knowledge graph completion mostly focuses on the
link prediction task. However, knowledge graphs have an additional
incompleteness problem: their nodes possess numerical attributes, whose values
are often missing. Our approach, denoted as MrAP, imputes the values of missing
attributes by propagating information across the multi-relational structure of
a knowledge graph. It employs regression functions for predicting one node
attribute from another depending on the relationship between the nodes and the
type of the attributes. The propagation mechanism operates iteratively in a
message passing scheme that collects the predictions at every iteration and
updates the value of the node attributes. Experiments over two benchmark
datasets show the effectiveness of our approach.
|
[
{
"created": "Tue, 10 Nov 2020 18:36:33 GMT",
"version": "v1"
}
] |
2020-11-11
|
[
[
"Bayram",
"Eda",
""
],
[
"Garcia-Duran",
"Alberto",
""
],
[
"West",
"Robert",
""
]
] |
The existing literature on knowledge graph completion mostly focuses on the link prediction task. However, knowledge graphs have an additional incompleteness problem: their nodes possess numerical attributes, whose values are often missing. Our approach, denoted as MrAP, imputes the values of missing attributes by propagating information across the multi-relational structure of a knowledge graph. It employs regression functions for predicting one node attribute from another depending on the relationship between the nodes and the type of the attributes. The propagation mechanism operates iteratively in a message passing scheme that collects the predictions at every iteration and updates the value of the node attributes. Experiments over two benchmark datasets show the effectiveness of our approach.
|
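The propagation mechanism described above can be illustrated with a deliberately simplified sketch (hypothetical names and parameters, not the authors' code): each directed edge carries a per-relation regressor predicting one node's attribute from another, and missing values are updated iteratively from incoming predictions.

```python
# Toy sketch in the spirit of MrAP: per-relation linear regressors
# a_v ~ w_r * a_u + b_r, applied iteratively in a message-passing loop.
# Observed attribute values are never overwritten.

def propagate(values, known, edges, params, n_iters=20):
    """values: dict node -> attribute estimate (None if missing)
    known:  set of nodes with observed values
    edges:  list of (u, relation, v) triples
    params: dict relation -> (w, b) regression coefficients"""
    vals = dict(values)
    for _ in range(n_iters):
        msgs = {}  # node -> incoming predictions this round
        for u, r, v in edges:
            if vals[u] is not None:
                w, b = params[r]
                msgs.setdefault(v, []).append(w * vals[u] + b)
        for v, preds in msgs.items():
            if v not in known:  # only impute missing attributes
                vals[v] = sum(preds) / len(preds)
    return vals

# Tiny example: birth years propagated along a 'parentOf' relation,
# assuming a child is born roughly 30 years after the parent.
params = {"parentOf": (1.0, 30.0)}
values = {"alice": 1950.0, "bob": None, "carol": None}
edges = [("alice", "parentOf", "bob"), ("bob", "parentOf", "carol")]
out = propagate(values, known={"alice"}, edges=edges, params=params)
print(out["bob"], out["carol"])  # 1980.0 2010.0
```

The full method additionally handles relation direction, attribute types, and learned (rather than hand-set) regression functions; this sketch only shows the iterative-imputation skeleton.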
2201.02946
|
Linh Ma Van
|
Linh Van Ma, Tin Trung Tran, Moongu Jeon
|
Resolving Camera Position for a Practical Application of Gaze Estimation
on Edge Devices
|
6 pages, 11 figures, conference paper
|
ICAIIC 2022 (The 4th International Conference on Artificial
Intelligence in Information and Communication February 21 (Mon.) ~ 24
(Thur.), 2022, Guam, USA & Virtual Conference)
| null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Most gaze estimation research only works under a setup condition in which a
camera perfectly captures eye gaze. It has not explicitly specified how to set
up a camera correctly for a given position of a person. In this paper, we
carry out a study on gaze estimation with a logical camera setup position. We
further bring our research into a practical application by using inexpensive
edge devices in a realistic scenario. That is, we first set up a shopping
environment where we want to grasp customers' gazing behaviors. This setup
needs an optimal camera position in order to maintain the estimation accuracy
of existing gaze estimation research. We then apply the state of the art in
few-shot learning gaze estimation to reduce training samples in the inference
phase. In the
experiment, we perform our implemented research on NVIDIA Jetson TX2 and
achieve a reasonable speed, 12 FPS, which is faster compared with our reference
work, without much degradation of gaze estimation accuracy. The source code is
released at https://github.com/linh-gist/GazeEstimationTX2.
|
[
{
"created": "Sun, 9 Jan 2022 07:19:59 GMT",
"version": "v1"
},
{
"created": "Sat, 15 Jan 2022 23:20:22 GMT",
"version": "v2"
}
] |
2022-01-19
|
[
[
"Van Ma",
"Linh",
""
],
[
"Tran",
"Tin Trung",
""
],
[
"Jeon",
"Moongu",
""
]
] |
Most gaze estimation research only works under a setup condition in which a camera perfectly captures eye gaze. It has not explicitly specified how to set up a camera correctly for a given position of a person. In this paper, we carry out a study on gaze estimation with a logical camera setup position. We further bring our research into a practical application by using inexpensive edge devices in a realistic scenario. That is, we first set up a shopping environment where we want to grasp customers' gazing behaviors. This setup needs an optimal camera position in order to maintain the estimation accuracy of existing gaze estimation research. We then apply the state of the art in few-shot learning gaze estimation to reduce training samples in the inference phase. In the experiment, we perform our implemented research on NVIDIA Jetson TX2 and achieve a reasonable speed, 12 FPS, which is faster compared with our reference work, without much degradation of gaze estimation accuracy. The source code is released at https://github.com/linh-gist/GazeEstimationTX2.
|
1905.06911
|
Derek Weitzel
|
Derek Weitzel, Marian Zvada, Ilija Vukotic, Rob Gardner, Brian
Bockelman, Mats Rynge, Edgar Fajardo Hernandez, Brian Lin, and Matyas Selmeci
|
StashCache: A Distributed Caching Federation for the Open Science Grid
|
In Practice and Experience in Advanced Research Computing (PEARC 19),
July 28-August 1, 2019, Chicago, IL, USA. ACM, New York, NY, USA, 7 pages
| null |
10.1145/3332186.3332212
| null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Data distribution for opportunistic users is challenging as they own neither
the computing resources they are using nor any nearby storage. Users are
motivated to use opportunistic computing to expand their data processing
capacity, but they require storage and fast networking to distribute data to
that processing. Since it requires significant management overhead, it is rare
for resource providers to allow opportunistic access to storage. Additionally,
in order to use opportunistic storage at several distributed sites, users
assume the responsibility to maintain their data. In this paper we present
StashCache, a distributed caching federation that enables opportunistic users
to utilize nearby opportunistic storage. StashCache is composed of four
components: data origins, redirectors, caches, and clients. StashCache has been
deployed in the Open Science Grid for several years and has been used by many
projects. Caches are deployed in geographically distributed locations across
the U.S. and Europe. We will present the architecture of StashCache, as well as
utilization information of the infrastructure. We will also present performance
analysis comparing distributed HTTP proxies with StashCache.
|
[
{
"created": "Thu, 16 May 2019 17:14:44 GMT",
"version": "v1"
}
] |
2019-05-17
|
[
[
"Weitzel",
"Derek",
""
],
[
"Zvada",
"Marian",
""
],
[
"Vukotic",
"Ilija",
""
],
[
"Gardner",
"Rob",
""
],
[
"Bockelman",
"Brian",
""
],
[
"Rynge",
"Mats",
""
],
[
"Hernandez",
"Edgar Fajardo",
""
],
[
"Lin",
"Brian",
""
],
[
"Selmeci",
"Matyas",
""
]
] |
Data distribution for opportunistic users is challenging as they own neither the computing resources they are using nor any nearby storage. Users are motivated to use opportunistic computing to expand their data processing capacity, but they require storage and fast networking to distribute data to that processing. Since it requires significant management overhead, it is rare for resource providers to allow opportunistic access to storage. Additionally, in order to use opportunistic storage at several distributed sites, users assume the responsibility to maintain their data. In this paper we present StashCache, a distributed caching federation that enables opportunistic users to utilize nearby opportunistic storage. StashCache is composed of four components: data origins, redirectors, caches, and clients. StashCache has been deployed in the Open Science Grid for several years and has been used by many projects. Caches are deployed in geographically distributed locations across the U.S. and Europe. We will present the architecture of StashCache, as well as utilization information of the infrastructure. We will also present performance analysis comparing distributed HTTP proxies with StashCache.
|
2002.01916
|
Ali Maatouk
|
Ali Maatouk, Yin Sun, Anthony Ephremides, Mohamad Assaad
|
Status Updates with Priorities: Lexicographic Optimality
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we consider a transmission scheduling problem, in which
several streams of status update packets with diverse priority levels are sent
through a shared channel to their destinations. We introduce a notion of
Lexicographic age optimality, or simply lex-age-optimality, to evaluate the
performance of multi-class status update policies. In particular, a
lex-age-optimal scheduling policy first minimizes the Age of Information (AoI)
metrics for high-priority streams, and then, within the set of optimal policies
for high-priority streams, achieves the minimum AoI metrics for low-priority
streams. We propose a new scheduling policy named Preemptive Priority, Maximum
Age First, Last-Generated, First-Served (PP-MAF-LGFS), and prove that the
PP-MAF-LGFS scheduling policy is lex-age-optimal. This result holds (i) for
minimizing any time-dependent, symmetric, and non-decreasing age penalty
function; (ii) for minimizing any non-decreasing functional of the stochastic
process formed by the age penalty function; and (iii) for the cases where
different priority classes have distinct arrival traffic patterns, age penalty
functions, and age penalty functionals. For example, the PP-MAF-LGFS scheduling
policy is lex-age-optimal for minimizing the mean peak age of a high-priority
stream and the time-average age of a low-priority stream. Numerical results are
provided to illustrate our theoretical findings.
|
[
{
"created": "Wed, 5 Feb 2020 18:43:16 GMT",
"version": "v1"
}
] |
2020-02-06
|
[
[
"Maatouk",
"Ali",
""
],
[
"Sun",
"Yin",
""
],
[
"Ephremides",
"Anthony",
""
],
[
"Assaad",
"Mohamad",
""
]
] |
In this paper, we consider a transmission scheduling problem, in which several streams of status update packets with diverse priority levels are sent through a shared channel to their destinations. We introduce a notion of Lexicographic age optimality, or simply lex-age-optimality, to evaluate the performance of multi-class status update policies. In particular, a lex-age-optimal scheduling policy first minimizes the Age of Information (AoI) metrics for high-priority streams, and then, within the set of optimal policies for high-priority streams, achieves the minimum AoI metrics for low-priority streams. We propose a new scheduling policy named Preemptive Priority, Maximum Age First, Last-Generated, First-Served (PP-MAF-LGFS), and prove that the PP-MAF-LGFS scheduling policy is lex-age-optimal. This result holds (i) for minimizing any time-dependent, symmetric, and non-decreasing age penalty function; (ii) for minimizing any non-decreasing functional of the stochastic process formed by the age penalty function; and (iii) for the cases where different priority classes have distinct arrival traffic patterns, age penalty functions, and age penalty functionals. For example, the PP-MAF-LGFS scheduling policy is lex-age-optimal for minimizing the mean peak age of a high-priority stream and the time-average age of a low-priority stream. Numerical results are provided to illustrate our theoretical findings.
|
1307.6303
|
Junyan Wang
|
Junyan Wang and Kap Luk Chan
|
Matching-Constrained Active Contours
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/3.0/
|
In object segmentation by active contours, the initial contour is often
required. Conventionally, the initial contour is provided by the user. This
paper extends the conventional active contour model by incorporating feature
matching in the formulation, which gives rise to a novel matching-constrained
active contour. The numerical solution to the new optimization model provides
an automated framework of object segmentation without user intervention. The
main idea is to incorporate feature point matching as a constraint in active
contour models. To this end, we obtain a mathematical model relating interior
points to the boundary contour, such that matching of interior feature points
gives contour alignment, and we formulate the matching score as a constraint on
the active contour model such that the maximum-score feature matching that gives
the contour alignment provides the initial feasible solution to the constrained
optimization model of segmentation. The constraint also ensures that the
optimal contour does not deviate too much from the initial contour.
Projected-gradient descent equations are derived to solve the constrained
optimization. In the experiments, we show that our method is capable of
achieving automatic object segmentation, and it outperforms the related
methods.
|
[
{
"created": "Wed, 24 Jul 2013 06:18:44 GMT",
"version": "v1"
}
] |
2013-07-25
|
[
[
"Wang",
"Junyan",
""
],
[
"Chan",
"Kap Luk",
""
]
] |
In object segmentation by active contours, the initial contour is often required. Conventionally, the initial contour is provided by the user. This paper extends the conventional active contour model by incorporating feature matching in the formulation, which gives rise to a novel matching-constrained active contour. The numerical solution to the new optimization model provides an automated framework of object segmentation without user intervention. The main idea is to incorporate feature point matching as a constraint in active contour models. To this end, we obtain a mathematical model relating interior points to the boundary contour, such that matching of interior feature points gives contour alignment, and we formulate the matching score as a constraint on the active contour model such that the maximum-score feature matching that gives the contour alignment provides the initial feasible solution to the constrained optimization model of segmentation. The constraint also ensures that the optimal contour does not deviate too much from the initial contour. Projected-gradient descent equations are derived to solve the constrained optimization. In the experiments, we show that our method is capable of achieving automatic object segmentation, and it outperforms the related methods.
|
2303.12237
|
Pulkit Khandelwal
|
Pulkit Khandelwal, Michael Tran Duong, Shokufeh Sadaghiani, Sydney
Lim, Amanda Denning, Eunice Chung, Sadhana Ravikumar, Sanaz Arezoumandan,
Claire Peterson, Madigan Bedard, Noah Capp, Ranjit Ittyerah, Elyse Migdal,
Grace Choi, Emily Kopp, Bridget Loja, Eusha Hasan, Jiacheng Li, Alejandra
Bahena, Karthik Prabhakaran, Gabor Mizsei, Marianna Gabrielyan, Theresa
Schuck, Winifred Trotman, John Robinson, Daniel Ohm, Edward B. Lee, John Q.
Trojanowski, Corey McMillan, Murray Grossman, David J. Irwin, John Detre, M.
Dylan Tisdall, Sandhitsu R. Das, Laura E.M. Wisse, David A. Wolk, Paul A.
Yushkevich
|
Automated deep learning segmentation of high-resolution 7 T postmortem
MRI for quantitative analysis of structure-pathology correlations in
neurodegenerative diseases
|
Preprint submitted to NeuroImage Project website:
https://pulkit-khandelwal.github.io/exvivo-brain-upenn
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Postmortem MRI allows brain anatomy to be examined at high resolution and to
link pathology measures with morphometric measurements. However, automated
segmentation methods for brain mapping in postmortem MRI are not well
developed, primarily due to limited availability of labeled datasets, and
heterogeneity in scanner hardware and acquisition protocols. In this work, we
present a high-resolution dataset of 135 postmortem human brain tissue
specimens imaged at 0.3 mm$^{3}$ isotropic resolution using a T2w sequence on a
7T whole-body MRI scanner.
We developed a deep learning pipeline to segment the cortical mantle by
benchmarking the performance of nine deep neural architectures, followed by
post-hoc topological correction. We then segment four subcortical structures
(caudate, putamen, globus pallidus, and thalamus), white matter
hyperintensities, and the normal appearing white matter. We show generalizing
capabilities across whole brain hemispheres in different specimens, and also on
unseen images acquired with a T2*w FLASH sequence at 7T at 0.28 mm$^{3}$ and
0.16 mm$^{3}$ isotropic resolution. We then compute localized cortical
thickness and volumetric measurements
across key regions, and link them with semi-quantitative neuropathological
ratings. Our code, Jupyter notebooks, and the containerized executables are
publicly available at: https://pulkit-khandelwal.github.io/exvivo-brain-upenn
|
[
{
"created": "Tue, 21 Mar 2023 23:44:02 GMT",
"version": "v1"
},
{
"created": "Tue, 17 Oct 2023 20:50:05 GMT",
"version": "v2"
}
] |
2023-10-19
|
[
[
"Khandelwal",
"Pulkit",
""
],
[
"Duong",
"Michael Tran",
""
],
[
"Sadaghiani",
"Shokufeh",
""
],
[
"Lim",
"Sydney",
""
],
[
"Denning",
"Amanda",
""
],
[
"Chung",
"Eunice",
""
],
[
"Ravikumar",
"Sadhana",
""
],
[
"Arezoumandan",
"Sanaz",
""
],
[
"Peterson",
"Claire",
""
],
[
"Bedard",
"Madigan",
""
],
[
"Capp",
"Noah",
""
],
[
"Ittyerah",
"Ranjit",
""
],
[
"Migdal",
"Elyse",
""
],
[
"Choi",
"Grace",
""
],
[
"Kopp",
"Emily",
""
],
[
"Loja",
"Bridget",
""
],
[
"Hasan",
"Eusha",
""
],
[
"Li",
"Jiacheng",
""
],
[
"Bahena",
"Alejandra",
""
],
[
"Prabhakaran",
"Karthik",
""
],
[
"Mizsei",
"Gabor",
""
],
[
"Gabrielyan",
"Marianna",
""
],
[
"Schuck",
"Theresa",
""
],
[
"Trotman",
"Winifred",
""
],
[
"Robinson",
"John",
""
],
[
"Ohm",
"Daniel",
""
],
[
"Lee",
"Edward B.",
""
],
[
"Trojanowski",
"John Q.",
""
],
[
"McMillan",
"Corey",
""
],
[
"Grossman",
"Murray",
""
],
[
"Irwin",
"David J.",
""
],
[
"Detre",
"John",
""
],
[
"Tisdall",
"M. Dylan",
""
],
[
"Das",
"Sandhitsu R.",
""
],
[
"Wisse",
"Laura E. M.",
""
],
[
"Wolk",
"David A.",
""
],
[
"Yushkevich",
"Paul A.",
""
]
] |
Postmortem MRI allows brain anatomy to be examined at high resolution and to link pathology measures with morphometric measurements. However, automated segmentation methods for brain mapping in postmortem MRI are not well developed, primarily due to limited availability of labeled datasets, and heterogeneity in scanner hardware and acquisition protocols. In this work, we present a high-resolution dataset of 135 postmortem human brain tissue specimens imaged at 0.3 mm$^{3}$ isotropic resolution using a T2w sequence on a 7T whole-body MRI scanner. We developed a deep learning pipeline to segment the cortical mantle by benchmarking the performance of nine deep neural architectures, followed by post-hoc topological correction. We then segment four subcortical structures (caudate, putamen, globus pallidus, and thalamus), white matter hyperintensities, and the normal appearing white matter. We show generalizing capabilities across whole brain hemispheres in different specimens, and also on unseen images acquired with a T2*w FLASH sequence at 7T at 0.28 mm$^{3}$ and 0.16 mm$^{3}$ isotropic resolution. We then compute localized cortical thickness and volumetric measurements across key regions, and link them with semi-quantitative neuropathological ratings. Our code, Jupyter notebooks, and the containerized executables are publicly available at: https://pulkit-khandelwal.github.io/exvivo-brain-upenn
|
2111.13236
|
Swaminathan Gurumurthy
|
Swaminathan Gurumurthy, Shaojie Bai, Zachary Manchester, J. Zico
Kolter
|
Joint inference and input optimization in equilibrium networks
|
Neurips 2021
|
Neurips 2021
| null | null |
cs.LG cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Many tasks in deep learning involve optimizing over the \emph{inputs} to a
network to minimize or maximize some objective; examples include optimization
over latent spaces in a generative model to match a target image, or
adversarially perturbing an input to worsen classifier performance. Performing
such optimization, however, is traditionally quite costly, as it involves a
complete forward and backward pass through the network for each gradient step.
In a separate line of work, a recent thread of research has developed the deep
equilibrium (DEQ) model, a class of models that foregoes traditional network
depth and instead computes the output of a network by finding the fixed point
of a single nonlinear layer. In this paper, we show that there is a natural
synergy between these two settings. Although naively using DEQs for these
optimization problems is expensive (owing to the time needed to compute a fixed
point for each gradient step), we can leverage the fact that gradient-based
optimization can \emph{itself} be cast as a fixed point iteration to
substantially improve the overall speed. That is, we \emph{simultaneously} both
solve for the DEQ fixed point \emph{and} optimize over network inputs, all
within a single ``augmented'' DEQ model that jointly encodes both the original
network and the optimization process. Indeed, the procedure is fast enough that
it allows us to efficiently \emph{train} DEQ models for tasks traditionally
relying on an ``inner'' optimization loop. We demonstrate this strategy on
various tasks such as training generative models while optimizing over latent
codes, training models for inverse problems like denoising and inpainting,
adversarial training and gradient based meta-learning.
|
[
{
"created": "Thu, 25 Nov 2021 19:59:33 GMT",
"version": "v1"
}
] |
2021-11-29
|
[
[
"Gurumurthy",
"Swaminathan",
""
],
[
"Bai",
"Shaojie",
""
],
[
"Manchester",
"Zachary",
""
],
[
"Kolter",
"J. Zico",
""
]
] |
Many tasks in deep learning involve optimizing over the \emph{inputs} to a network to minimize or maximize some objective; examples include optimization over latent spaces in a generative model to match a target image, or adversarially perturbing an input to worsen classifier performance. Performing such optimization, however, is traditionally quite costly, as it involves a complete forward and backward pass through the network for each gradient step. In a separate line of work, a recent thread of research has developed the deep equilibrium (DEQ) model, a class of models that foregoes traditional network depth and instead computes the output of a network by finding the fixed point of a single nonlinear layer. In this paper, we show that there is a natural synergy between these two settings. Although naively using DEQs for these optimization problems is expensive (owing to the time needed to compute a fixed point for each gradient step), we can leverage the fact that gradient-based optimization can \emph{itself} be cast as a fixed point iteration to substantially improve the overall speed. That is, we \emph{simultaneously} both solve for the DEQ fixed point \emph{and} optimize over network inputs, all within a single ``augmented'' DEQ model that jointly encodes both the original network and the optimization process. Indeed, the procedure is fast enough that it allows us to efficiently \emph{train} DEQ models for tasks traditionally relying on an ``inner'' optimization loop. We demonstrate this strategy on various tasks such as training generative models while optimizing over latent codes, training models for inverse problems like denoising and inpainting, adversarial training and gradient based meta-learning.
|
2311.05267
|
Nikela Papadopoulou
|
Minyu Cui, Nikela Papadopoulou, Miquel Peric\`as
|
Analysis and Characterization of Performance Variability for OpenMP
Runtime
|
To appear at ROSS 2023 (International Workshop on Runtime and
Operating Systems for Supercomputers), held in conjunction with SC23
| null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
In the high performance computing (HPC) domain, performance variability is a
major scalability issue for parallel computing applications with heavy
synchronization and communication. In this paper, we present an experimental
performance analysis of OpenMP benchmarks regarding the variation of execution
time, and determine the potential factors causing performance variability. Our
work offers some understanding of performance distributions and directions for
future work on how to mitigate variability for OpenMP-based applications. Two
representative OpenMP benchmarks from the EPCC OpenMP micro-benchmark suite and
BabelStream are run across two x86 multicore platforms featuring up to 256
threads. From the obtained results, we characterize and explain the execution
time variability as a function of thread-pinning, simultaneous multithreading
(SMT) and core frequency variation.
|
[
{
"created": "Thu, 9 Nov 2023 10:50:17 GMT",
"version": "v1"
}
] |
2023-11-10
|
[
[
"Cui",
"Minyu",
""
],
[
"Papadopoulou",
"Nikela",
""
],
[
"Pericàs",
"Miquel",
""
]
] |
In the high performance computing (HPC) domain, performance variability is a major scalability issue for parallel computing applications with heavy synchronization and communication. In this paper, we present an experimental performance analysis of OpenMP benchmarks regarding the variation of execution time, and determine the potential factors causing performance variability. Our work offers some understanding of performance distributions and directions for future work on how to mitigate variability for OpenMP-based applications. Two representative OpenMP benchmarks from the EPCC OpenMP micro-benchmark suite and BabelStream are run across two x86 multicore platforms featuring up to 256 threads. From the obtained results, we characterize and explain the execution time variability as a function of thread-pinning, simultaneous multithreading (SMT) and core frequency variation.
|
2309.14653
|
Francis Lau C.M.
|
Jia Zhan and Francis C.M. Lau
|
Joint Design of Source-Channel Codes with Linear Source Encoding
Complexity and Good Channel Thresholds Based on Double-Protograph LDPC Codes
|
7 pages, 5 figures, 3 tables, to appear in IEEE Communications
Letters
| null |
10.1109/LCOMM.2023.3320105
| null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We propose the use of a lower or upper triangular sub-base matrix to replace
the identity matrix in the source-check-channel-variable linking protomatrix of
a double-protograph low-density parity-check joint-source-channel code (DP-LDPC
JSCC). The elements along the diagonal of the proposed lower or upper
triangular sub-base matrix are assigned as "1" and the other non-zero elements
can take any non-negative integral values. Compared with the traditional
DP-LDPC JSCC designs, the new designs show a theoretical channel threshold
improvement of up to 0.41 dB and a simulated source symbol error rate
improvement of up to 0.5 dB at an error rate of 1e-6.
|
[
{
"created": "Tue, 26 Sep 2023 04:13:00 GMT",
"version": "v1"
}
] |
2023-10-17
|
[
[
"Zhan",
"Jia",
""
],
[
"Lau",
"Francis C. M.",
""
]
] |
We propose the use of a lower or upper triangular sub-base matrix to replace the identity matrix in the source-check-channel-variable linking protomatrix of a double-protograph low-density parity-check joint-source-channel code (DP-LDPC JSCC). The elements along the diagonal of the proposed lower or upper triangular sub-base matrix are assigned as "1" and the other non-zero elements can take any non-negative integral values. Compared with the traditional DP-LDPC JSCC designs, the new designs show a theoretical channel threshold improvement of up to 0.41 dB and a simulated source symbol error rate improvement of up to 0.5 dB at an error rate of 1e-6.
|
2309.05072
|
Xiaowei Gao
|
Xiaowei Gao, Xinke Jiang, Dingyi Zhuang, Huanfa Chen, Shenhao Wang,
Stephen Law, James Haworth
|
Uncertainty-Aware Probabilistic Graph Neural Networks for Road-Level
Traffic Accident Prediction
| null | null | null | null |
cs.LG cs.AI cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Traffic accidents present substantial challenges to human safety and
socio-economic development in urban areas. Developing a reliable and
responsible traffic accident prediction model is crucial to addressing growing
public safety concerns and enhancing the safety of urban mobility systems.
Traditional methods face limitations at fine spatiotemporal scales due to the
sporadic nature of high-risk accidents and the predominance of non-accident
characteristics. Furthermore, while most current models show promising
occurrence prediction, they overlook the uncertainties arising from the
inherent nature of accidents, and then fail to adequately map the hierarchical
ranking of accident risk values for more precise insights. To address these
issues, we introduce the Spatiotemporal Zero-Inflated Tweedie Graph Neural
Network (STZITDGNN) -- the first uncertainty-aware probabilistic graph deep
learning model for multi-step road-level traffic accident prediction. This
model integrates the interpretability of the statistical Tweedie family model
and the expressive power of graph neural networks. Its decoder innovatively
employs a compound Tweedie model: a Poisson distribution to model the frequency
of accident occurrences and a Gamma distribution to assess injury severity,
supplemented by a zero-inflated component to effectively identify excessive
non-incident instances. Empirical tests using real-world traffic data from
London, UK, demonstrate that the STZITDGNN surpasses other baseline models
across multiple benchmarks and metrics, including accident risk value
prediction, uncertainty minimisation, non-accident road identification and
accident occurrence accuracy. Our study demonstrates that STZITDGNN can
effectively inform targeted road monitoring, thereby improving urban road
safety strategies.
|
[
{
"created": "Sun, 10 Sep 2023 16:35:47 GMT",
"version": "v1"
},
{
"created": "Fri, 21 Jun 2024 13:45:44 GMT",
"version": "v2"
},
{
"created": "Wed, 10 Jul 2024 19:05:37 GMT",
"version": "v3"
},
{
"created": "Sat, 27 Jul 2024 10:40:53 GMT",
"version": "v4"
}
] |
2024-07-30
|
[
[
"Gao",
"Xiaowei",
""
],
[
"Jiang",
"Xinke",
""
],
[
"Zhuang",
"Dingyi",
""
],
[
"Chen",
"Huanfa",
""
],
[
"Wang",
"Shenhao",
""
],
[
"Law",
"Stephen",
""
],
[
"Haworth",
"James",
""
]
] |
Traffic accidents present substantial challenges to human safety and socio-economic development in urban areas. Developing a reliable and responsible traffic accident prediction model is crucial to addressing growing public safety concerns and enhancing the safety of urban mobility systems. Traditional methods face limitations at fine spatiotemporal scales due to the sporadic nature of high-risk accidents and the predominance of non-accident characteristics. Furthermore, while most current models show promising occurrence prediction, they overlook the uncertainties arising from the inherent nature of accidents, and then fail to adequately map the hierarchical ranking of accident risk values for more precise insights. To address these issues, we introduce the Spatiotemporal Zero-Inflated Tweedie Graph Neural Network (STZITDGNN) -- the first uncertainty-aware probabilistic graph deep learning model for multi-step road-level traffic accident prediction. This model integrates the interpretability of the statistical Tweedie family model and the expressive power of graph neural networks. Its decoder innovatively employs a compound Tweedie model: a Poisson distribution to model the frequency of accident occurrences and a Gamma distribution to assess injury severity, supplemented by a zero-inflated component to effectively identify excessive non-incident instances. Empirical tests using real-world traffic data from London, UK, demonstrate that the STZITDGNN surpasses other baseline models across multiple benchmarks and metrics, including accident risk value prediction, uncertainty minimisation, non-accident road identification and accident occurrence accuracy. Our study demonstrates that STZITDGNN can effectively inform targeted road monitoring, thereby improving urban road safety strategies.
|
2211.03653
|
Esmaeil Delfaraz Pahlevanloo
|
Gianlorenzo D'Angelo and Esmaeil Delfaraz
|
Approximation algorithms for Node-weighted Steiner Problems: Digraphs
with Additive Prizes and Graphs with Submodular Prizes
| null | null | null | null |
cs.DS cs.CC
|
http://creativecommons.org/licenses/by/4.0/
|
In the \emph{budgeted rooted node-weighted Steiner tree} problem, we are
given a graph $G$ with $n$ nodes, a predefined node $r$, and two weights
associated with each node, modelling costs and prizes. The aim is to find a
tree in $G$
rooted at $r$ such that the total cost of its nodes is at most a given budget
$B$ and the total prize is maximized. In the \emph{quota rooted node-weighted
Steiner tree} problem, we are given a real-valued quota $Q$, instead of the
budget, and we aim at minimizing the cost of a tree rooted at $r$ whose overall
prize is at least $Q$.
For the case of directed graphs with additive prize function, we develop a
technique relying on a standard flow-based linear programming relaxation to
compute a tree with good trade-off between prize and cost, which allows us to
provide very simple polynomial time approximation algorithms for both the
budgeted and the quota problems. For the \emph{budgeted} problem, our algorithm
achieves a bicriteria $(1+\epsilon,
O(\frac{1}{\epsilon^2}n^{2/3}\ln{n}))$-approximation, for any $\epsilon \in (0,
1]$. For the \emph{quota} problem, our algorithm guarantees a bicriteria
approximation factor of $(2, O(n^{2/3}\ln{n}))$. Next, by using the flow-based
LP, we provide a surprisingly simple polynomial time $O((1+\epsilon)\sqrt{n}
\ln {n})$-approximation algorithm for the node-weighted version of the directed
Steiner tree problem, for any $\epsilon>0$.
For the case of undirected graphs with monotone submodular prize functions
over subsets of nodes, we provide a polynomial time
$O(\frac{1}{\epsilon^3}\sqrt{n}\log{n})$-approximation algorithm for the
budgeted problem that violates the budget constraint by a factor of at most
$1+\epsilon$, for any $\epsilon \in (0, 1]$. Our technique allows us to provide
a good approximation also for the quota problem.
|
[
{
"created": "Mon, 7 Nov 2022 16:07:03 GMT",
"version": "v1"
},
{
"created": "Sat, 12 Nov 2022 14:19:26 GMT",
"version": "v2"
}
] |
2022-11-15
|
[
[
"D'Angelo",
"Gianlorenzo",
""
],
[
"Delfaraz",
"Esmaeil",
""
]
] |
In the \emph{budgeted rooted node-weighted Steiner tree} problem, we are given a graph $G$ with $n$ nodes, a predefined node $r$, and two weights associated with each node, modelling costs and prizes. The aim is to find a tree in $G$ rooted at $r$ such that the total cost of its nodes is at most a given budget $B$ and the total prize is maximized. In the \emph{quota rooted node-weighted Steiner tree} problem, we are given a real-valued quota $Q$, instead of the budget, and we aim at minimizing the cost of a tree rooted at $r$ whose overall prize is at least $Q$. For the case of directed graphs with additive prize function, we develop a technique relying on a standard flow-based linear programming relaxation to compute a tree with good trade-off between prize and cost, which allows us to provide very simple polynomial time approximation algorithms for both the budgeted and the quota problems. For the \emph{budgeted} problem, our algorithm achieves a bicriteria $(1+\epsilon, O(\frac{1}{\epsilon^2}n^{2/3}\ln{n}))$-approximation, for any $\epsilon \in (0, 1]$. For the \emph{quota} problem, our algorithm guarantees a bicriteria approximation factor of $(2, O(n^{2/3}\ln{n}))$. Next, by using the flow-based LP, we provide a surprisingly simple polynomial time $O((1+\epsilon)\sqrt{n} \ln {n})$-approximation algorithm for the node-weighted version of the directed Steiner tree problem, for any $\epsilon>0$. For the case of undirected graphs with monotone submodular prize functions over subsets of nodes, we provide a polynomial time $O(\frac{1}{\epsilon^3}\sqrt{n}\log{n})$-approximation algorithm for the budgeted problem that violates the budget constraint by a factor of at most $1+\epsilon$, for any $\epsilon \in (0, 1]$. Our technique allows us to provide a good approximation also for the quota problem.
|
1912.12628
|
Jose Mena Rold\'an
|
Jos\'e Mena, Oriol Pujol, Jordi Vitri\`a
|
Dirichlet uncertainty wrappers for actionable algorithm accuracy
accountability and auditability
|
13 pages, 5 figures and 1 table
| null | null | null |
cs.LG cs.CL stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Nowadays, the use of machine learning models is becoming a utility in many
applications. Companies deliver pre-trained models encapsulated as application
programming interfaces (APIs) that developers combine with third party
components and their own models and data to create complex data products to
solve specific problems. The complexity of such products and the lack of
control and knowledge of the internals of each component used cause unavoidable
effects, such as lack of transparency, difficulty in auditability, and
emergence of potential uncontrolled risks. They are effectively black-boxes.
Accountability of such solutions is a challenge for the auditors and the
machine learning community. In this work, we propose a wrapper that, given a
black-box model, enriches its output prediction with a measure of uncertainty.
By using this wrapper, we make the black-box auditable for the accuracy risk
(risk derived from low quality or uncertain decisions) and at the same time we
provide an actionable mechanism to mitigate that risk in the form of decision
rejection; we can choose not to issue a prediction when the risk or uncertainty
in that decision is significant. Based on the resulting uncertainty measure, we
advocate for a rejection system that selects the more confident predictions,
discarding those more uncertain, leading to an improvement in the trustability
of the resulting system. We showcase the proposed technique and methodology in
a practical scenario where a simulated sentiment analysis API based on natural
language processing is applied to different domains. Results demonstrate the
effectiveness of the uncertainty computed by the wrapper and its high
correlation to bad quality predictions and misclassifications.
|
[
{
"created": "Sun, 29 Dec 2019 11:05:47 GMT",
"version": "v1"
}
] |
2020-01-01
|
[
[
"Mena",
"José",
""
],
[
"Pujol",
"Oriol",
""
],
[
"Vitrià",
"Jordi",
""
]
] |
Nowadays, the use of machine learning models is becoming a utility in many applications. Companies deliver pre-trained models encapsulated as application programming interfaces (APIs) that developers combine with third party components and their own models and data to create complex data products to solve specific problems. The complexity of such products and the lack of control and knowledge of the internals of each component used cause unavoidable effects, such as lack of transparency, difficulty in auditability, and emergence of potential uncontrolled risks. They are effectively black-boxes. Accountability of such solutions is a challenge for the auditors and the machine learning community. In this work, we propose a wrapper that, given a black-box model, enriches its output prediction with a measure of uncertainty. By using this wrapper, we make the black-box auditable for the accuracy risk (risk derived from low quality or uncertain decisions) and at the same time we provide an actionable mechanism to mitigate that risk in the form of decision rejection; we can choose not to issue a prediction when the risk or uncertainty in that decision is significant. Based on the resulting uncertainty measure, we advocate for a rejection system that selects the more confident predictions, discarding those more uncertain, leading to an improvement in the trustability of the resulting system. We showcase the proposed technique and methodology in a practical scenario where a simulated sentiment analysis API based on natural language processing is applied to different domains. Results demonstrate the effectiveness of the uncertainty computed by the wrapper and its high correlation to bad quality predictions and misclassifications.
|
1712.02036
|
Yan Huang
|
Yan Huang, Qi Wu, Liang Wang
|
Learning Semantic Concepts and Order for Image and Sentence Matching
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Image and sentence matching has made great progress recently, but it remains
challenging due to the large visual-semantic discrepancy. This mainly arises
from the fact that the pixel-level image representation usually lacks the
high-level semantic information present in its matched sentence. In this work, we propose a
semantic-enhanced image and sentence matching model, which can improve the
image representation by learning semantic concepts and then organizing them in
a correct semantic order. Given an image, we first use a multi-regional
multi-label CNN to predict its semantic concepts, including objects,
properties, actions, etc. Then, considering that different orders of semantic
concepts lead to diverse semantic meanings, we use a context-gated sentence
generation scheme for semantic order learning. It simultaneously uses the image
global context containing concept relations as reference and the groundtruth
semantic order in the matched sentence as supervision. After obtaining the
improved image representation, we learn the sentence representation with a
conventional LSTM, and then jointly perform image and sentence matching and
sentence generation for model learning. Extensive experiments demonstrate the
effectiveness of our learned semantic concepts and order, by achieving the
state-of-the-art results on two public benchmark datasets.
|
[
{
"created": "Wed, 6 Dec 2017 04:36:40 GMT",
"version": "v1"
}
] |
2017-12-07
|
[
[
"Huang",
"Yan",
""
],
[
"Wu",
"Qi",
""
],
[
"Wang",
"Liang",
""
]
] |
Image and sentence matching has made great progress recently, but it remains challenging due to the large visual-semantic discrepancy. This mainly arises from the fact that the pixel-level image representation usually lacks the high-level semantic information present in its matched sentence. In this work, we propose a semantic-enhanced image and sentence matching model, which can improve the image representation by learning semantic concepts and then organizing them in a correct semantic order. Given an image, we first use a multi-regional multi-label CNN to predict its semantic concepts, including objects, properties, actions, etc. Then, considering that different orders of semantic concepts lead to diverse semantic meanings, we use a context-gated sentence generation scheme for semantic order learning. It simultaneously uses the image global context containing concept relations as reference and the groundtruth semantic order in the matched sentence as supervision. After obtaining the improved image representation, we learn the sentence representation with a conventional LSTM, and then jointly perform image and sentence matching and sentence generation for model learning. Extensive experiments demonstrate the effectiveness of our learned semantic concepts and order, by achieving the state-of-the-art results on two public benchmark datasets.
|
2309.07920
|
Ziang Cao
|
Ziang Cao, Fangzhou Hong, Tong Wu, Liang Pan, Ziwei Liu
|
Large-Vocabulary 3D Diffusion Model with Transformer
|
Project page at https://ziangcao0312.github.io/difftf_pages/
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Creating diverse and high-quality 3D assets with an automatic generative
model is highly desirable. Despite extensive efforts on 3D generation, most
existing works focus on the generation of a single category or a few
categories. In this paper, we introduce a diffusion-based feed-forward
framework for synthesizing massive categories of real-world 3D objects with a
single generative model. Notably, there are three major challenges for this
large-vocabulary 3D generation: a) the need for expressive yet efficient 3D
representation; b) large diversity in geometry and texture across categories;
c) complexity in the appearances of real-world objects. To this end, we propose
a novel triplane-based 3D-aware Diffusion model with TransFormer, DiffTF, for
handling challenges via three aspects. 1) Considering efficiency and
robustness, we adopt a revised triplane representation and improve the fitting
speed and accuracy. 2) To handle the drastic variations in geometry and
texture, we regard the features of all 3D objects as a combination of
generalized 3D knowledge and specialized 3D features. To extract generalized 3D
knowledge from diverse categories, we propose a novel 3D-aware transformer with
shared cross-plane attention. It learns the cross-plane relations across
different planes and aggregates the generalized 3D knowledge with specialized
3D features. 3) In addition, we devise the 3D-aware encoder/decoder to enhance
the generalized 3D knowledge in the encoded triplanes for handling categories
with complex appearances. Extensive experiments on ShapeNet and OmniObject3D
(over 200 diverse real-world categories) convincingly demonstrate that a single
DiffTF model achieves state-of-the-art large-vocabulary 3D object generation
performance with large diversity, rich semantics, and high quality.
|
[
{
"created": "Thu, 14 Sep 2023 17:59:53 GMT",
"version": "v1"
},
{
"created": "Fri, 15 Sep 2023 07:56:34 GMT",
"version": "v2"
}
] |
2023-09-18
|
[
[
"Cao",
"Ziang",
""
],
[
"Hong",
"Fangzhou",
""
],
[
"Wu",
"Tong",
""
],
[
"Pan",
"Liang",
""
],
[
"Liu",
"Ziwei",
""
]
] |
Creating diverse and high-quality 3D assets with an automatic generative model is highly desirable. Despite extensive efforts on 3D generation, most existing works focus on the generation of a single category or a few categories. In this paper, we introduce a diffusion-based feed-forward framework for synthesizing massive categories of real-world 3D objects with a single generative model. Notably, there are three major challenges for this large-vocabulary 3D generation: a) the need for expressive yet efficient 3D representation; b) large diversity in geometry and texture across categories; c) complexity in the appearances of real-world objects. To this end, we propose a novel triplane-based 3D-aware Diffusion model with TransFormer, DiffTF, for handling challenges via three aspects. 1) Considering efficiency and robustness, we adopt a revised triplane representation and improve the fitting speed and accuracy. 2) To handle the drastic variations in geometry and texture, we regard the features of all 3D objects as a combination of generalized 3D knowledge and specialized 3D features. To extract generalized 3D knowledge from diverse categories, we propose a novel 3D-aware transformer with shared cross-plane attention. It learns the cross-plane relations across different planes and aggregates the generalized 3D knowledge with specialized 3D features. 3) In addition, we devise the 3D-aware encoder/decoder to enhance the generalized 3D knowledge in the encoded triplanes for handling categories with complex appearances. Extensive experiments on ShapeNet and OmniObject3D (over 200 diverse real-world categories) convincingly demonstrate that a single DiffTF model achieves state-of-the-art large-vocabulary 3D object generation performance with large diversity, rich semantics, and high quality.
|
1306.4363
|
Sears Merritt
|
Sears Merritt and Aaron Clauset
|
Social Network Dynamics in a Massive Online Game: Network Turnover,
Non-densification, and Team Engagement in Halo Reach
|
8 pages, 13 figures
| null | null | null |
cs.SI physics.data-an physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Online multiplayer games are a popular form of social interaction, used by
hundreds of millions of individuals. However, little is known about the social
networks within these online games, or how they evolve over time. Understanding
human social dynamics within massive online games can shed new light on social
interactions in general and inform the development of more engaging systems.
Here, we study a novel, large friendship network, inferred from nearly 18
billion social interactions over 44 weeks between 17 million individuals in the
popular online game Halo: Reach. This network is one of the largest, most
detailed temporal interaction networks studied to date, and provides a novel
perspective on the dynamics of online friendship networks, as opposed to mere
interaction graphs. Initially, this network exhibits strong structural turnover
and decays rapidly from a peak size. In the following period, however, both
network size and turnover stabilize, producing a dynamic structural
equilibrium. In contrast to other studies, we find that the Halo friendship
network is non-densifying: both the mean degree and the average pairwise
distance are stable, suggesting that densification cannot occur when
maintaining friendships is costly. Finally, players with greater long-term
engagement exhibit stronger local clustering, suggesting a group-level social
engagement process. These results demonstrate the utility of online games for
studying social networks, shed new light on empirical temporal graph patterns,
and clarify the claims of universality of network densification.
|
[
{
"created": "Tue, 18 Jun 2013 21:36:55 GMT",
"version": "v1"
}
] |
2013-06-20
|
[
[
"Merritt",
"Sears",
""
],
[
"Clauset",
"Aaron",
""
]
] |
Online multiplayer games are a popular form of social interaction, used by hundreds of millions of individuals. However, little is known about the social networks within these online games, or how they evolve over time. Understanding human social dynamics within massive online games can shed new light on social interactions in general and inform the development of more engaging systems. Here, we study a novel, large friendship network, inferred from nearly 18 billion social interactions over 44 weeks between 17 million individuals in the popular online game Halo: Reach. This network is one of the largest, most detailed temporal interaction networks studied to date, and provides a novel perspective on the dynamics of online friendship networks, as opposed to mere interaction graphs. Initially, this network exhibits strong structural turnover and decays rapidly from a peak size. In the following period, however, both network size and turnover stabilize, producing a dynamic structural equilibrium. In contrast to other studies, we find that the Halo friendship network is non-densifying: both the mean degree and the average pairwise distance are stable, suggesting that densification cannot occur when maintaining friendships is costly. Finally, players with greater long-term engagement exhibit stronger local clustering, suggesting a group-level social engagement process. These results demonstrate the utility of online games for studying social networks, shed new light on empirical temporal graph patterns, and clarify the claims of universality of network densification.
|
1608.01947
|
Jean-Marc Valin
|
Jean-Marc Valin, Timothy B. Terriberry, Nathan E. Egge, Thomas Daede,
Yushin Cho, Christopher Montgomery, Michael Bebenita
|
Daala: Building A Next-Generation Video Codec From Unconventional
Technology
|
6 pages, accepted for multimedia signal processing (MMSP) workshop,
2016
| null | null | null |
cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Daala is a new royalty-free video codec that attempts to compete with
state-of-the-art royalty-bearing codecs. To do so, it must achieve good
compression while avoiding all of their patented techniques. We use technology
that is as different as possible from traditional approaches to achieve this.
This paper describes the technology behind Daala and discusses where it fits in
the newly created AV1 codec from the Alliance for Open Media. We show that
Daala is approaching the performance level of more mature, state-of-the-art
video codecs and can contribute to improving AV1.
|
[
{
"created": "Fri, 5 Aug 2016 17:36:51 GMT",
"version": "v1"
}
] |
2016-08-08
|
[
[
"Valin",
"Jean-Marc",
""
],
[
"Terriberry",
"Timothy B.",
""
],
[
"Egge",
"Nathan E.",
""
],
[
"Daede",
"Thomas",
""
],
[
"Cho",
"Yushin",
""
],
[
"Montgomery",
"Christopher",
""
],
[
"Bebenita",
"Michael",
""
]
] |
Daala is a new royalty-free video codec that attempts to compete with state-of-the-art royalty-bearing codecs. To do so, it must achieve good compression while avoiding all of their patented techniques. We use technology that is as different as possible from traditional approaches to achieve this. This paper describes the technology behind Daala and discusses where it fits in the newly created AV1 codec from the Alliance for Open Media. We show that Daala is approaching the performance level of more mature, state-of-the-art video codecs and can contribute to improving AV1.
|
1302.1258
|
Lele Wang
|
Lele Wang, Eren Sasoglu, Bernd Bandemer, and Young-Han Kim
|
A Comparison of Superposition Coding Schemes
|
5 pages, 3 figures, 1 table, submitted to IEEE International
Symposium on Information Theory (ISIT 2013)
| null |
10.1109/ISIT.2013.6620770
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There are two variants of superposition coding schemes. Cover's original
superposition coding scheme has code clouds of the identical shape, while
Bergmans's superposition coding scheme has code clouds of independently
generated shapes. These two schemes yield identical achievable rate regions in
several scenarios, such as the capacity region for degraded broadcast channels.
This paper shows that under the optimal maximum likelihood decoding, these two
superposition coding schemes can result in different rate regions. In
particular, it is shown that for the two-receiver broadcast channel, Cover's
superposition coding scheme can achieve rates strictly larger than Bergmans's
scheme.
|
[
{
"created": "Wed, 6 Feb 2013 04:13:40 GMT",
"version": "v1"
}
] |
2016-11-15
|
[
[
"Wang",
"Lele",
""
],
[
"Sasoglu",
"Eren",
""
],
[
"Bandemer",
"Bernd",
""
],
[
"Kim",
"Young-Han",
""
]
] |
There are two variants of superposition coding schemes. Cover's original superposition coding scheme has code clouds of the identical shape, while Bergmans's superposition coding scheme has code clouds of independently generated shapes. These two schemes yield identical achievable rate regions in several scenarios, such as the capacity region for degraded broadcast channels. This paper shows that under the optimal maximum likelihood decoding, these two superposition coding schemes can result in different rate regions. In particular, it is shown that for the two-receiver broadcast channel, Cover's superposition coding scheme can achieve rates strictly larger than Bergmans's scheme.
|
2006.11589
|
Calvin Beideman
|
Calvin Beideman, Karthekeyan Chandrasekaran and Chao Xu
|
Multicriteria Cuts and Size-Constrained $k$-cuts in Hypergraphs
|
Accepted to RANDOM 2020
| null | null | null |
cs.DS cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We address counting and optimization variants of multicriteria global min-cut
and size-constrained min-$k$-cut in hypergraphs.
1. For an $r$-rank $n$-vertex hypergraph endowed with $t$ hyperedge-cost
functions, we show that the number of multiobjective min-cuts is
$O(r2^{tr}n^{3t-1})$. In particular, this shows that the number of parametric
min-cuts in constant rank hypergraphs for a constant number of criteria is
strongly polynomial, thus resolving an open question by Aissi, Mahjoub,
McCormick, and Queyranne (Math Programming, 2015). In addition, we give
randomized algorithms to enumerate all multiobjective min-cuts and all
pareto-optimal cuts in strongly polynomial-time.
2. We also address node-budgeted multiobjective min-cuts: For an $n$-vertex
hypergraph endowed with $t$ vertex-weight functions, we show that the number of
node-budgeted multiobjective min-cuts is $O(r2^{r}n^{t+2})$, where $r$ is the
rank of the hypergraph, and the number of node-budgeted $b$-multiobjective
min-cuts for a fixed budget-vector $b$ is $O(n^2)$.
3. We show that min-$k$-cut in hypergraphs subject to constant lower bounds
on part sizes is solvable in polynomial-time for constant $k$, thus resolving
an open problem posed by Queyranne. Our technique also shows that the number of
optimal solutions is polynomial.
All of our results build on the random contraction approach of Karger (SODA,
1993). Our techniques illustrate the versatility of the random contraction
approach to address counting and algorithmic problems concerning multiobjective
min-cuts and size-constrained $k$-cuts in hypergraphs.
|
[
{
"created": "Sat, 20 Jun 2020 14:41:00 GMT",
"version": "v1"
}
] |
2020-06-23
|
[
[
"Beideman",
"Calvin",
""
],
[
"Chandrasekaran",
"Karthekeyan",
""
],
[
"Xu",
"Chao",
""
]
] |
We address counting and optimization variants of multicriteria global min-cut and size-constrained min-$k$-cut in hypergraphs. 1. For an $r$-rank $n$-vertex hypergraph endowed with $t$ hyperedge-cost functions, we show that the number of multiobjective min-cuts is $O(r2^{tr}n^{3t-1})$. In particular, this shows that the number of parametric min-cuts in constant rank hypergraphs for a constant number of criteria is strongly polynomial, thus resolving an open question by Aissi, Mahjoub, McCormick, and Queyranne (Math Programming, 2015). In addition, we give randomized algorithms to enumerate all multiobjective min-cuts and all pareto-optimal cuts in strongly polynomial-time. 2. We also address node-budgeted multiobjective min-cuts: For an $n$-vertex hypergraph endowed with $t$ vertex-weight functions, we show that the number of node-budgeted multiobjective min-cuts is $O(r2^{r}n^{t+2})$, where $r$ is the rank of the hypergraph, and the number of node-budgeted $b$-multiobjective min-cuts for a fixed budget-vector $b$ is $O(n^2)$. 3. We show that min-$k$-cut in hypergraphs subject to constant lower bounds on part sizes is solvable in polynomial-time for constant $k$, thus resolving an open problem posed by Queyranne. Our technique also shows that the number of optimal solutions is polynomial. All of our results build on the random contraction approach of Karger (SODA, 1993). Our techniques illustrate the versatility of the random contraction approach to address counting and algorithmic problems concerning multiobjective min-cuts and size-constrained $k$-cuts in hypergraphs.
|
2405.15135
|
Xianglin Yang
|
Xianglin Yang, Jin Song Dong
|
Exploring the Evolution of Hidden Activations with Live-Update
Visualization
|
Preprint
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Monitoring the training of neural networks is essential for identifying
potential data anomalies, enabling timely interventions and conserving
significant computational resources. Apart from the commonly used metrics such
as losses and validation accuracies, the hidden representation could give more
insight into the model progression. To this end, we introduce SentryCam, an
automated, real-time visualization tool that reveals the progression of hidden
representations during training. Our results show that this visualization
offers a more comprehensive view of the learning dynamics compared to basic
metrics such as loss and accuracy over various datasets. Furthermore, we show
that SentryCam could facilitate detailed analysis such as task transfer and
catastrophic forgetting in a continual learning setting. The code is available
at https://github.com/xianglinyang/SentryCam.
|
[
{
"created": "Fri, 24 May 2024 01:23:20 GMT",
"version": "v1"
}
] |
2024-05-27
|
[
[
"Yang",
"Xianglin",
""
],
[
"Dong",
"Jin Song",
""
]
] |
Monitoring the training of neural networks is essential for identifying potential data anomalies, enabling timely interventions and conserving significant computational resources. Apart from the commonly used metrics such as losses and validation accuracies, the hidden representation could give more insight into the model progression. To this end, we introduce SentryCam, an automated, real-time visualization tool that reveals the progression of hidden representations during training. Our results show that this visualization offers a more comprehensive view of the learning dynamics compared to basic metrics such as loss and accuracy over various datasets. Furthermore, we show that SentryCam could facilitate detailed analysis such as task transfer and catastrophic forgetting in a continual learning setting. The code is available at https://github.com/xianglinyang/SentryCam.
|
2306.15864
|
Peide Huang
|
Peide Huang, Xilun Zhang, Ziang Cao, Shiqi Liu, Mengdi Xu, Wenhao
Ding, Jonathan Francis, Bingqing Chen, Ding Zhao
|
What Went Wrong? Closing the Sim-to-Real Gap via Differentiable Causal
Discovery
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Training control policies in simulation is more appealing than on real robots
directly, as it allows for exploring diverse states in an efficient manner.
Yet, robot simulators inevitably exhibit disparities from the real-world
dynamics, yielding inaccuracies that manifest as the dynamical
simulation-to-reality (sim-to-real) gap. Existing literature has proposed to
close this gap by actively modifying specific simulator parameters to align the
simulated data with real-world observations. However, the set of tunable
parameters is usually manually selected to reduce the search space in a
case-by-case manner, which is hard to scale up for complex systems and requires
extensive domain knowledge. To address the scalability issue and automate the
parameter-tuning process, we introduce COMPASS, which aligns the simulator with
the real world by discovering the causal relationship between the environment
parameters and the sim-to-real gap. Concretely, our method learns a
differentiable mapping from the environment parameters to the differences
between simulated and real-world robot-object trajectories. This mapping is
governed by a simultaneously learned causal graph to help prune the search
space of parameters, provide better interpretability, and improve
generalization on unseen parameters. We perform experiments to achieve both
sim-to-sim and sim-to-real transfer, and show that our method has significant
improvements in trajectory alignment and task success rate over strong
baselines in several challenging manipulation tasks.
|
[
{
"created": "Wed, 28 Jun 2023 01:32:45 GMT",
"version": "v1"
},
{
"created": "Thu, 19 Oct 2023 18:41:07 GMT",
"version": "v2"
}
] |
2023-10-23
|
[
[
"Huang",
"Peide",
""
],
[
"Zhang",
"Xilun",
""
],
[
"Cao",
"Ziang",
""
],
[
"Liu",
"Shiqi",
""
],
[
"Xu",
"Mengdi",
""
],
[
"Ding",
"Wenhao",
""
],
[
"Francis",
"Jonathan",
""
],
[
"Chen",
"Bingqing",
""
],
[
"Zhao",
"Ding",
""
]
] |
Training control policies in simulation is more appealing than on real robots directly, as it allows for exploring diverse states in an efficient manner. Yet, robot simulators inevitably exhibit disparities from the real-world dynamics, yielding inaccuracies that manifest as the dynamical simulation-to-reality (sim-to-real) gap. Existing literature has proposed to close this gap by actively modifying specific simulator parameters to align the simulated data with real-world observations. However, the set of tunable parameters is usually manually selected to reduce the search space in a case-by-case manner, which is hard to scale up for complex systems and requires extensive domain knowledge. To address the scalability issue and automate the parameter-tuning process, we introduce COMPASS, which aligns the simulator with the real world by discovering the causal relationship between the environment parameters and the sim-to-real gap. Concretely, our method learns a differentiable mapping from the environment parameters to the differences between simulated and real-world robot-object trajectories. This mapping is governed by a simultaneously learned causal graph to help prune the search space of parameters, provide better interpretability, and improve generalization on unseen parameters. We perform experiments to achieve both sim-to-sim and sim-to-real transfer, and show that our method has significant improvements in trajectory alignment and task success rate over strong baselines in several challenging manipulation tasks.
|
2304.14329
|
Aviv Netanyahu
|
Aviv Netanyahu, Abhishek Gupta, Max Simchowitz, Kaiqing Zhang, Pulkit
Agrawal
|
Learning to Extrapolate: A Transductive Approach
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Machine learning systems, especially with overparameterized deep neural
networks, can generalize to novel test instances drawn from the same
distribution as the training data. However, they fare poorly when evaluated on
out-of-support test points. In this work, we tackle the problem of developing
machine learning systems that retain the power of overparameterized function
approximators while enabling extrapolation to out-of-support test points when
possible. This is accomplished by noting that under certain conditions, a
"transductive" reparameterization can convert an out-of-support extrapolation
problem into a problem of within-support combinatorial generalization. We
propose a simple strategy based on bilinear embeddings to enable this type of
combinatorial generalization, thereby addressing the out-of-support
extrapolation problem under certain conditions. We instantiate a simple,
practical algorithm applicable to various supervised learning and imitation
learning tasks.
|
[
{
"created": "Thu, 27 Apr 2023 17:00:51 GMT",
"version": "v1"
}
] |
2023-04-28
|
[
[
"Netanyahu",
"Aviv",
""
],
[
"Gupta",
"Abhishek",
""
],
[
"Simchowitz",
"Max",
""
],
[
"Zhang",
"Kaiqing",
""
],
[
"Agrawal",
"Pulkit",
""
]
] |
Machine learning systems, especially with overparameterized deep neural networks, can generalize to novel test instances drawn from the same distribution as the training data. However, they fare poorly when evaluated on out-of-support test points. In this work, we tackle the problem of developing machine learning systems that retain the power of overparameterized function approximators while enabling extrapolation to out-of-support test points when possible. This is accomplished by noting that under certain conditions, a "transductive" reparameterization can convert an out-of-support extrapolation problem into a problem of within-support combinatorial generalization. We propose a simple strategy based on bilinear embeddings to enable this type of combinatorial generalization, thereby addressing the out-of-support extrapolation problem under certain conditions. We instantiate a simple, practical algorithm applicable to various supervised learning and imitation learning tasks.
|
1503.07991
|
Thim Strothmann
|
Joshua J. Daymude, Zahra Derakhshandeh, Robert Gmyr, Thim Strothmann,
Rida Bazzi, Andr\'ea W. Richa, Christian Scheideler
|
Leader Election and Shape Formation with Self-Organizing Programmable
Matter
| null | null | null | null |
cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider programmable matter consisting of simple computational elements,
called particles, that can establish and release bonds and can actively move in
a self-organized way, and we investigate the feasibility of solving fundamental
problems relevant for programmable matter. As a suitable model for such
self-organizing particle systems, we will use a generalization of the geometric
amoebot model first proposed in SPAA 2014. Based on the geometric model, we
present efficient local-control algorithms for leader election and line
formation requiring only particles with constant size memory, and we also
discuss the limitations of solving these problems within the general amoebot
model.
|
[
{
"created": "Fri, 27 Mar 2015 08:57:41 GMT",
"version": "v1"
},
{
"created": "Thu, 31 Mar 2016 07:45:33 GMT",
"version": "v2"
}
] |
2016-04-01
|
[
[
"Daymude",
"Joshua J.",
""
],
[
"Derakhshandeh",
"Zahra",
""
],
[
"Gmyr",
"Robert",
""
],
[
"Strothmann",
"Thim",
""
],
[
"Bazzi",
"Rida",
""
],
[
"Richa",
"Andréa W.",
""
],
[
"Scheideler",
"Christian",
""
]
] |
We consider programmable matter consisting of simple computational elements, called particles, that can establish and release bonds and can actively move in a self-organized way, and we investigate the feasibility of solving fundamental problems relevant for programmable matter. As a suitable model for such self-organizing particle systems, we will use a generalization of the geometric amoebot model first proposed in SPAA 2014. Based on the geometric model, we present efficient local-control algorithms for leader election and line formation requiring only particles with constant size memory, and we also discuss the limitations of solving these problems within the general amoebot model.
|
2209.00943
|
Ricardo Morla
|
Gon\c{c}alo Xavier, Carlos Novo, Ricardo Morla
|
Tweaking Metasploit to Evade Encrypted C2 Traffic Detection
| null | null | null | null |
cs.CR cs.LG cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Command and Control (C2) communication is a key component of any structured
cyber-attack. As such, security operations actively try to detect this type of
communication in their networks. This poses a problem for legitimate pentesters
that try to remain undetected, since commonly used pentesting tools, such as
Metasploit, generate constant traffic patterns that are easily distinguishable
from regular web traffic. In this paper we start with these identifiable
patterns in Metasploit's C2 traffic and show that a machine learning-based
detector is able to detect the presence of such traffic with high accuracy,
even when encrypted. We then outline and implement a set of modifications to
the Metasploit framework in order to decrease the detection rates of such a
classifier. To evaluate the performance of these modifications, we use two
threat models with increasing awareness of these modifications. We look at the
detection evasion performance and at the byte count and runtime overhead of the
modifications. Our results show that for the second, increased-awareness threat
model the framework-side traffic modifications yield a better detection
avoidance rate (90%) than payload-side only modifications (50%). We also show
that although the modifications use up to 3 times more TLS payload bytes than
the original, the runtime does not significantly change and the total number of
bytes (including TLS payload) reduces.
|
[
{
"created": "Fri, 2 Sep 2022 10:56:15 GMT",
"version": "v1"
}
] |
2022-09-05
|
[
[
"Xavier",
"Gonçalo",
""
],
[
"Novo",
"Carlos",
""
],
[
"Morla",
"Ricardo",
""
]
] |
Command and Control (C2) communication is a key component of any structured cyber-attack. As such, security operations actively try to detect this type of communication in their networks. This poses a problem for legitimate pentesters that try to remain undetected, since commonly used pentesting tools, such as Metasploit, generate constant traffic patterns that are easily distinguishable from regular web traffic. In this paper we start with these identifiable patterns in Metasploit's C2 traffic and show that a machine learning-based detector is able to detect the presence of such traffic with high accuracy, even when encrypted. We then outline and implement a set of modifications to the Metasploit framework in order to decrease the detection rates of such a classifier. To evaluate the performance of these modifications, we use two threat models with increasing awareness of these modifications. We look at the detection evasion performance and at the byte count and runtime overhead of the modifications. Our results show that for the second, increased-awareness threat model the framework-side traffic modifications yield a better detection avoidance rate (90%) than payload-side only modifications (50%). We also show that although the modifications use up to 3 times more TLS payload bytes than the original, the runtime does not significantly change and the total number of bytes (including TLS payload) reduces.
|
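The detector in the paper above is machine-learning based and its exact features are not given in this record; as a hedged illustration of why Metasploit's "constant traffic patterns" remain distinguishable even when encrypted, the sketch below scores the regularity of connection inter-arrival times with a coefficient of variation. The 0.2 threshold and the sample timings are invented for illustration, not taken from the paper.

```python
import statistics

def beacon_score(inter_arrival_times):
    """Coefficient of variation of inter-arrival times.

    Near-constant check-in intervals (classic C2 beaconing) give a score
    close to 0; human-driven web browsing is far burstier and scores high.
    """
    mean = statistics.mean(inter_arrival_times)
    return statistics.pstdev(inter_arrival_times) / mean

def looks_like_c2(inter_arrival_times, threshold=0.2):
    """Flag a flow whose timing is suspiciously regular (illustrative threshold)."""
    return beacon_score(inter_arrival_times) < threshold

# Metasploit-style constant polling vs. bursty browsing (synthetic seconds)
c2_timings = [5.0, 5.1, 4.9, 5.0, 5.0, 5.1]
web_timings = [0.2, 12.0, 0.5, 30.0, 1.0, 7.5]
assert looks_like_c2(c2_timings)
assert not looks_like_c2(web_timings)
```

This is only one timing feature; the paper's classifier presumably also exploits payload-size patterns, which is why its framework-side modifications target both timing and byte counts.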
1903.03695
|
Ashwini Tonge
|
Ashwini Tonge and Cornelia Caragea
|
Image Privacy Prediction Using Deep Neural Networks
| null | null | null | null |
cs.CV cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Images today are increasingly shared online on social networking sites such
as Facebook, Flickr, Foursquare, and Instagram. Although current social
networking sites allow users to change their privacy preferences, this is often
a cumbersome task for the vast majority of users on the Web, who face
difficulties in assigning and managing privacy settings. Thus, automatically
predicting images' privacy to warn users about private or sensitive content
before uploading these images on social networking sites has become a necessity
in our current interconnected world.
In this paper, we explore learning models to automatically predict
appropriate images' privacy as private or public using carefully identified
image-specific features. We study deep visual semantic features that are
derived from various layers of Convolutional Neural Networks (CNNs) as well as
textual features such as user tags and deep tags generated from deep CNNs.
Particularly, we extract deep (visual and tag) features from four pre-trained
CNN architectures for object recognition, i.e., AlexNet, GoogLeNet, VGG-16, and
ResNet, and compare their performance for image privacy prediction. Results of
our experiments on a Flickr dataset of over thirty thousand images show that
the learning models trained on features extracted from ResNet outperform the
state-of-the-art models for image privacy prediction. We further investigate
the combination of user tags and deep tags derived from CNN architectures using
two settings: (1) SVM on the bag-of-tags features; and (2) text-based CNN. Our
results show that even though the models trained on the visual features perform
better than those trained on the tag features, the combination of deep visual
features with image tags shows improvements in performance over the individual
feature sets.
|
[
{
"created": "Fri, 8 Mar 2019 23:12:12 GMT",
"version": "v1"
}
] |
2019-03-12
|
[
[
"Tonge",
"Ashwini",
""
],
[
"Caragea",
"Cornelia",
""
]
] |
Images today are increasingly shared online on social networking sites such as Facebook, Flickr, Foursquare, and Instagram. Although current social networking sites allow users to change their privacy preferences, this is often a cumbersome task for the vast majority of users on the Web, who face difficulties in assigning and managing privacy settings. Thus, automatically predicting images' privacy to warn users about private or sensitive content before uploading these images on social networking sites has become a necessity in our current interconnected world. In this paper, we explore learning models to automatically predict appropriate images' privacy as private or public using carefully identified image-specific features. We study deep visual semantic features that are derived from various layers of Convolutional Neural Networks (CNNs) as well as textual features such as user tags and deep tags generated from deep CNNs. Particularly, we extract deep (visual and tag) features from four pre-trained CNN architectures for object recognition, i.e., AlexNet, GoogLeNet, VGG-16, and ResNet, and compare their performance for image privacy prediction. Results of our experiments on a Flickr dataset of over thirty thousand images show that the learning models trained on features extracted from ResNet outperform the state-of-the-art models for image privacy prediction. We further investigate the combination of user tags and deep tags derived from CNN architectures using two settings: (1) SVM on the bag-of-tags features; and (2) text-based CNN. Our results show that even though the models trained on the visual features perform better than those trained on the tag features, the combination of deep visual features with image tags shows improvements in performance over the individual feature sets.
|
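As a hedged sketch of the "SVM on the bag-of-tags features" setting mentioned in the abstract above, the toy below builds binary bag-of-tags vectors and trains a perceptron as a stand-in linear classifier (the actual experiments use an SVM on a Flickr dataset of over thirty thousand images). The vocabulary, tags, and private/public labels here are invented for illustration only.

```python
def bag_of_tags(tags, vocab):
    """Binary bag-of-tags vector over a fixed vocabulary."""
    tagset = set(tags)
    return [1.0 if t in tagset else 0.0 for t in vocab]

def train_perceptron(X, y, epochs=20, lr=1.0):
    """Tiny perceptron: a stand-in for the paper's linear SVM on tag features."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for x, target in zip(X, y):        # target in {+1 (private), -1 (public)}
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            if target * score <= 0:        # misclassified example -> update
                w = [wi + lr * target * xi for wi, xi in zip(w, x)]
                b += lr * target
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# invented toy data: personal tags lean private, scenery tags lean public
vocab = ["family", "kids", "home", "beach", "landscape", "sunset"]
train = [
    (["family", "kids", "home"], 1),
    (["kids", "home"], 1),
    (["beach", "landscape"], -1),
    (["sunset", "landscape"], -1),
]
X = [bag_of_tags(tags, vocab) for tags, _ in train]
y = [label for _, label in train]
w, b = train_perceptron(X, y)
assert predict(w, b, bag_of_tags(["family", "home"], vocab)) == 1
assert predict(w, b, bag_of_tags(["sunset", "beach"], vocab)) == -1
```

A real reproduction would instead use scikit-learn's `LinearSVC` on the bag-of-tags matrix, and would combine these tag features with deep visual features extracted from a pre-trained ResNet, which is the paper's best-performing configuration.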