id
stringlengths 9
10
| submitter
stringlengths 1
64
⌀ | authors
stringlengths 4
20.7k
| title
stringlengths 4
246
| comments
stringlengths 1
523
⌀ | journal-ref
stringlengths 4
404
⌀ | doi
stringlengths 11
153
⌀ | report-no
stringlengths 2
254
⌀ | categories
stringlengths 5
98
| license
stringclasses 9
values | orig_abstract
stringlengths 14
3.35k
| versions
listlengths 1
60
| update_date
stringlengths 10
10
| authors_parsed
listlengths 1
1.35k
| abstract
stringlengths 11
3.34k
|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1207.4121
|
Fabio Gagliardi Cozman
|
Fabio Gagliardi Cozman, Cassio Polpo de Campos, Jaime Ide, Jose Carlos
Ferreira da Rocha
|
Propositional and Relational Bayesian Networks Associated with Imprecise
and Qualitative Probabilistic Assesments
|
Appears in Proceedings of the Twentieth Conference on Uncertainty in
Artificial Intelligence (UAI2004)
| null | null |
UAI-P-2004-PG-104-111
|
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper investigates a representation language with flexibility inspired
by probabilistic logic and compactness inspired by relational Bayesian
networks. The goal is to handle propositional and first-order constructs
together with precise, imprecise, indeterminate and qualitative probabilistic
assessments. The paper shows how this can be achieved through the theory of
credal networks. New exact and approximate inference algorithms based on
multilinear programming and iterated/loopy propagation of interval
probabilities are presented; their superior performance, compared to existing
ones, is shown empirically.
|
[
{
"created": "Wed, 11 Jul 2012 14:45:39 GMT",
"version": "v1"
}
] |
2012-07-19
|
[
[
"Cozman",
"Fabio Gagliardi",
""
],
[
"de Campos",
"Cassio Polpo",
""
],
[
"Ide",
"Jaime",
""
],
[
"da Rocha",
"Jose Carlos Ferreira",
""
]
] |
This paper investigates a representation language with flexibility inspired by probabilistic logic and compactness inspired by relational Bayesian networks. The goal is to handle propositional and first-order constructs together with precise, imprecise, indeterminate and qualitative probabilistic assessments. The paper shows how this can be achieved through the theory of credal networks. New exact and approximate inference algorithms based on multilinear programming and iterated/loopy propagation of interval probabilities are presented; their superior performance, compared to existing ones, is shown empirically.
|
2010.11425
|
Abhimanyu Dubey
|
Abhimanyu Dubey and Alex Pentland
|
Differentially-Private Federated Linear Bandits
|
22 pages. Camera-ready for NeurIPS 2020
| null | null | null |
cs.LG cs.CR cs.MA stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The rapid proliferation of decentralized learning systems mandates the need
for differentially-private cooperative learning. In this paper, we study this
in context of the contextual linear bandit: we consider a collection of agents
cooperating to solve a common contextual bandit, while ensuring that their
communication remains private. For this problem, we devise \textsc{FedUCB}, a
multiagent private algorithm for both centralized and decentralized
(peer-to-peer) federated learning. We provide a rigorous technical analysis of
its utility in terms of regret, improving several results in cooperative bandit
learning, and provide rigorous privacy guarantees as well. Our algorithms
provide competitive performance both in terms of pseudoregret bounds and
empirical benchmark performance in various multi-agent settings.
|
[
{
"created": "Thu, 22 Oct 2020 03:58:39 GMT",
"version": "v1"
}
] |
2020-10-23
|
[
[
"Dubey",
"Abhimanyu",
""
],
[
"Pentland",
"Alex",
""
]
] |
The rapid proliferation of decentralized learning systems mandates the need for differentially-private cooperative learning. In this paper, we study this in context of the contextual linear bandit: we consider a collection of agents cooperating to solve a common contextual bandit, while ensuring that their communication remains private. For this problem, we devise \textsc{FedUCB}, a multiagent private algorithm for both centralized and decentralized (peer-to-peer) federated learning. We provide a rigorous technical analysis of its utility in terms of regret, improving several results in cooperative bandit learning, and provide rigorous privacy guarantees as well. Our algorithms provide competitive performance both in terms of pseudoregret bounds and empirical benchmark performance in various multi-agent settings.
|
1404.1462
|
Pallavi Sudhakar
|
Pallavi.V.S, Dr. Rukmani Devi.D
|
Design of a High Speed FPGA-Based Classifier for Efficient Packet
Classification
|
6 pages, 6 figures, "Published with International Journal of Computer
Trends and Technology (IJCTT)", "National Conference on Modern Electronics
and Signal Processing (2014)- Velammal Engineering College", "Recent Trends
in Information Technology (2014)- R.M.K College of Engineering and
Technology"
|
Pallavi.V.S, Dr.Rukmani Devi.D Article: Design of a High Speed
FPGA-Based Classifier for Efficient Packet Classification. International
Journal of Computer Trends and Technology(IJCTT) 9(3):123-128,Mar 2014
|
10.14445/22312803/IJCTT-V9P126
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Packet classification is a vital and complicated task as the processing of
packets should be done at a specified line speed. In order to classify a packet
as belonging to a particular flow or set of flows, network nodes must perform a
search over a set of filters using multiple fields of the packet as the search
key. Hence the matching of packets should be much faster and simpler for quick
processing and classification. A hardware accelerator or a classifier has been
proposed here using a modified version of the HyperCuts packet classification
algorithm. A new pre-cutting process has been implemented to reduce the memory
size to fit in an FPGA. This classifier can classify packets with high speed
and with a power consumption factor of less than 3W. This methodology removes
the need for floating point division to be performed by replacing the region
compaction scheme of HyperCuts by pre-cutting, while classifying the packets
and concentrates on classifying the packets at the core of the network.
|
[
{
"created": "Sat, 5 Apr 2014 11:06:40 GMT",
"version": "v1"
}
] |
2014-04-08
|
[
[
"S",
"Pallavi. V.",
""
],
[
"D",
"Dr. Rukmani Devi.",
""
]
] |
Packet classification is a vital and complicated task as the processing of packets should be done at a specified line speed. In order to classify a packet as belonging to a particular flow or set of flows, network nodes must perform a search over a set of filters using multiple fields of the packet as the search key. Hence the matching of packets should be much faster and simpler for quick processing and classification. A hardware accelerator or a classifier has been proposed here using a modified version of the HyperCuts packet classification algorithm. A new pre-cutting process has been implemented to reduce the memory size to fit in an FPGA. This classifier can classify packets with high speed and with a power consumption factor of less than 3W. This methodology removes the need for floating point division to be performed by replacing the region compaction scheme of HyperCuts by pre-cutting, while classifying the packets and concentrates on classifying the packets at the core of the network.
|
2307.03741
|
Vladan Stojni\'c
|
Vladan Stojni\'c, Zakaria Laskar, Giorgos Tolias
|
Training Ensembles with Inliers and Outliers for Semi-supervised Active
Learning
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep active learning in the presence of outlier examples poses a realistic
yet challenging scenario. Acquiring unlabeled data for annotation requires a
delicate balance between avoiding outliers to conserve the annotation budget
and prioritizing useful inlier examples for effective training. In this work,
we present an approach that leverages three highly synergistic components,
which are identified as key ingredients: joint classifier training with inliers
and outliers, semi-supervised learning through pseudo-labeling, and model
ensembling. Our work demonstrates that ensembling significantly enhances the
accuracy of pseudo-labeling and improves the quality of data acquisition. By
enabling semi-supervision through the joint training process, where outliers
are properly handled, we observe a substantial boost in classifier accuracy
through the use of all available unlabeled examples. Notably, we reveal that
the integration of joint training renders explicit outlier detection
unnecessary; a conventional component for acquisition in prior work. The three
key components align seamlessly with numerous existing approaches. Through
empirical evaluations, we showcase that their combined use leads to a
performance increase. Remarkably, despite its simplicity, our proposed approach
outperforms all other methods in terms of performance. Code:
https://github.com/vladan-stojnic/active-outliers
|
[
{
"created": "Fri, 7 Jul 2023 17:50:07 GMT",
"version": "v1"
}
] |
2023-07-10
|
[
[
"Stojnić",
"Vladan",
""
],
[
"Laskar",
"Zakaria",
""
],
[
"Tolias",
"Giorgos",
""
]
] |
Deep active learning in the presence of outlier examples poses a realistic yet challenging scenario. Acquiring unlabeled data for annotation requires a delicate balance between avoiding outliers to conserve the annotation budget and prioritizing useful inlier examples for effective training. In this work, we present an approach that leverages three highly synergistic components, which are identified as key ingredients: joint classifier training with inliers and outliers, semi-supervised learning through pseudo-labeling, and model ensembling. Our work demonstrates that ensembling significantly enhances the accuracy of pseudo-labeling and improves the quality of data acquisition. By enabling semi-supervision through the joint training process, where outliers are properly handled, we observe a substantial boost in classifier accuracy through the use of all available unlabeled examples. Notably, we reveal that the integration of joint training renders explicit outlier detection unnecessary; a conventional component for acquisition in prior work. The three key components align seamlessly with numerous existing approaches. Through empirical evaluations, we showcase that their combined use leads to a performance increase. Remarkably, despite its simplicity, our proposed approach outperforms all other methods in terms of performance. Code: https://github.com/vladan-stojnic/active-outliers
|
2408.05518
|
Hongsheng Qin
|
Zhengang Lu, Hongsheng Qin, Jing Li, Ming Sun and Jiubin Tan
|
Long working distance portable smartphone microscopy for metallic mesh
defect detection
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Metallic mesh is a transparent electromagnetic shielding film with a fine
metal line structure. However, it can develop defects that affect the
optoelectronic performance whether in the production preparation or in actual
use. The development of in-situ non-destructive testing (NDT) devices for
metallic mesh requires long working distances, reflective optical path design,
and miniaturization. To address the limitations of existing smartphone
microscopes, which feature short working distances and inadequate transmission
imaging for industrial in-situ inspection, we propose a novel long-working
distance reflective smartphone microscopy system (LD-RSM). LD-RSM builds a 4f
optical imaging system with external optical components and a smartphone,
utilizing a beam splitter to achieve reflective imaging with the illumination
system and imaging system on the same side of the sample. It achieves an
optical resolution of 4.92$\mu$m and a working distance of up to 22.23 mm.
Additionally, we introduce a dual prior weighted Robust Principal Component
Analysis (DW-RPCA) for defect detection. This approach leverages spectral
filter fusion and Hough transform to model different defect types, enhancing
the accuracy and efficiency of defect identification. Coupled with an optimized
threshold segmentation algorithm, DW-RPCA method achieves a pixel-level
accuracy of 84.8%. Our work showcases strong potential for growth in the field
of in-situ on-line inspection of industrial products.
|
[
{
"created": "Sat, 10 Aug 2024 11:02:03 GMT",
"version": "v1"
},
{
"created": "Tue, 13 Aug 2024 05:16:07 GMT",
"version": "v2"
}
] |
2024-08-14
|
[
[
"Lu",
"Zhengang",
""
],
[
"Qin",
"Hongsheng",
""
],
[
"Li",
"Jing",
""
],
[
"Sun",
"Ming",
""
],
[
"Tan",
"Jiubin",
""
]
] |
Metallic mesh is a transparent electromagnetic shielding film with a fine metal line structure. However, it can develop defects that affect the optoelectronic performance whether in the production preparation or in actual use. The development of in-situ non-destructive testing (NDT) devices for metallic mesh requires long working distances, reflective optical path design, and miniaturization. To address the limitations of existing smartphone microscopes, which feature short working distances and inadequate transmission imaging for industrial in-situ inspection, we propose a novel long-working distance reflective smartphone microscopy system (LD-RSM). LD-RSM builds a 4f optical imaging system with external optical components and a smartphone, utilizing a beam splitter to achieve reflective imaging with the illumination system and imaging system on the same side of the sample. It achieves an optical resolution of 4.92$\mu$m and a working distance of up to 22.23 mm. Additionally, we introduce a dual prior weighted Robust Principal Component Analysis (DW-RPCA) for defect detection. This approach leverages spectral filter fusion and Hough transform to model different defect types, enhancing the accuracy and efficiency of defect identification. Coupled with an optimized threshold segmentation algorithm, DW-RPCA method achieves a pixel-level accuracy of 84.8%. Our work showcases strong potential for growth in the field of in-situ on-line inspection of industrial products.
|
2205.07854
|
Haoteng Tang
|
Haoteng Tang, Xiyao Fu, Lei Guo, Yalin Wang, Scott Mackin, Olusola
Ajilore, Alex Leow, Paul Thompson, Heng Huang, Liang Zhan
|
Functional2Structural: Cross-Modality Brain Networks Representation
Learning
| null | null | null | null |
cs.LG cs.AI cs.CV eess.IV q-bio.NC
|
http://creativecommons.org/licenses/by/4.0/
|
MRI-based modeling of brain networks has been widely used to understand
functional and structural interactions and connections among brain regions, and
factors that affect them, such as brain development and disease. Graph mining
on brain networks may facilitate the discovery of novel biomarkers for clinical
phenotypes and neurodegenerative diseases. Since brain networks derived from
functional and structural MRI describe the brain topology from different
perspectives, exploring a representation that combines these cross-modality
brain networks is non-trivial. Most current studies aim to extract a fused
representation of the two types of brain network by projecting the structural
network to the functional counterpart. Since the functional network is dynamic
and the structural network is static, mapping a static object to a dynamic
object is suboptimal. However, mapping in the opposite direction is not
feasible due to the non-negativity requirement of current graph learning
techniques. Here, we propose a novel graph learning framework, known as Deep
Signed Brain Networks (DSBN), with a signed graph encoder that, from an
opposite perspective, learns the cross-modality representations by projecting
the functional network to the structural counterpart. We validate our framework
on clinical phenotype and neurodegenerative disease prediction tasks using two
independent, publicly available datasets (HCP and OASIS). The experimental
results clearly demonstrate the advantages of our model compared to several
state-of-the-art methods.
|
[
{
"created": "Fri, 6 May 2022 03:45:36 GMT",
"version": "v1"
}
] |
2022-05-18
|
[
[
"Tang",
"Haoteng",
""
],
[
"Fu",
"Xiyao",
""
],
[
"Guo",
"Lei",
""
],
[
"Wang",
"Yalin",
""
],
[
"Mackin",
"Scott",
""
],
[
"Ajilore",
"Olusola",
""
],
[
"Leow",
"Alex",
""
],
[
"Thompson",
"Paul",
""
],
[
"Huang",
"Heng",
""
],
[
"Zhan",
"Liang",
""
]
] |
MRI-based modeling of brain networks has been widely used to understand functional and structural interactions and connections among brain regions, and factors that affect them, such as brain development and disease. Graph mining on brain networks may facilitate the discovery of novel biomarkers for clinical phenotypes and neurodegenerative diseases. Since brain networks derived from functional and structural MRI describe the brain topology from different perspectives, exploring a representation that combines these cross-modality brain networks is non-trivial. Most current studies aim to extract a fused representation of the two types of brain network by projecting the structural network to the functional counterpart. Since the functional network is dynamic and the structural network is static, mapping a static object to a dynamic object is suboptimal. However, mapping in the opposite direction is not feasible due to the non-negativity requirement of current graph learning techniques. Here, we propose a novel graph learning framework, known as Deep Signed Brain Networks (DSBN), with a signed graph encoder that, from an opposite perspective, learns the cross-modality representations by projecting the functional network to the structural counterpart. We validate our framework on clinical phenotype and neurodegenerative disease prediction tasks using two independent, publicly available datasets (HCP and OASIS). The experimental results clearly demonstrate the advantages of our model compared to several state-of-the-art methods.
|
1210.5041
|
Thomas Maugey
|
Thomas Maugey, Ismael Daribo, Gene Cheung, Pascal Frossard
|
Navigation domain representation for interactive multiview imaging
| null | null |
10.1109/TIP.2013.2270183
| null |
cs.MM cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Enabling users to interactively navigate through different viewpoints of a
static scene is a new interesting functionality in 3D streaming systems. While
it opens exciting perspectives towards rich multimedia applications, it
requires the design of novel representations and coding techniques in order to
solve the new challenges imposed by interactive navigation. Interactivity
clearly brings new design constraints: the encoder is unaware of the exact
decoding process, while the decoder has to reconstruct information from
incomplete subsets of data since the server can generally not transmit images
for all possible viewpoints due to resource constrains. In this paper, we
propose a novel multiview data representation that permits to satisfy bandwidth
and storage constraints in an interactive multiview streaming system. In
particular, we partition the multiview navigation domain into segments, each of
which is described by a reference image and some auxiliary information. The
auxiliary information enables the client to recreate any viewpoint in the
navigation segment via view synthesis. The decoder is then able to navigate
freely in the segment without further data request to the server; it requests
additional data only when it moves to a different segment. We discuss the
benefits of this novel representation in interactive navigation systems and
further propose a method to optimize the partitioning of the navigation domain
into independent segments, under bandwidth and storage constraints.
Experimental results confirm the potential of the proposed representation;
namely, our system leads to similar compression performance as classical
inter-view coding, while it provides the high level of flexibility that is
required for interactive streaming. Hence, our new framework represents a
promising solution for 3D data representation in novel interactive multimedia
services.
|
[
{
"created": "Thu, 18 Oct 2012 07:41:17 GMT",
"version": "v1"
},
{
"created": "Mon, 17 Jun 2013 09:32:50 GMT",
"version": "v2"
}
] |
2015-06-11
|
[
[
"Maugey",
"Thomas",
""
],
[
"Daribo",
"Ismael",
""
],
[
"Cheung",
"Gene",
""
],
[
"Frossard",
"Pascal",
""
]
] |
Enabling users to interactively navigate through different viewpoints of a static scene is a new interesting functionality in 3D streaming systems. While it opens exciting perspectives towards rich multimedia applications, it requires the design of novel representations and coding techniques in order to solve the new challenges imposed by interactive navigation. Interactivity clearly brings new design constraints: the encoder is unaware of the exact decoding process, while the decoder has to reconstruct information from incomplete subsets of data since the server can generally not transmit images for all possible viewpoints due to resource constrains. In this paper, we propose a novel multiview data representation that permits to satisfy bandwidth and storage constraints in an interactive multiview streaming system. In particular, we partition the multiview navigation domain into segments, each of which is described by a reference image and some auxiliary information. The auxiliary information enables the client to recreate any viewpoint in the navigation segment via view synthesis. The decoder is then able to navigate freely in the segment without further data request to the server; it requests additional data only when it moves to a different segment. We discuss the benefits of this novel representation in interactive navigation systems and further propose a method to optimize the partitioning of the navigation domain into independent segments, under bandwidth and storage constraints. Experimental results confirm the potential of the proposed representation; namely, our system leads to similar compression performance as classical inter-view coding, while it provides the high level of flexibility that is required for interactive streaming. Hence, our new framework represents a promising solution for 3D data representation in novel interactive multimedia services.
|
1904.11327
|
Saverio Giallorenzo
|
Saverio Giallorenzo, Fabrizio Montesi, Larisa Safina, Stefano Pio
Zingaro
|
Ephemeral Data Handling in Microservices - Technical Report
| null | null | null | null |
cs.PL cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In modern application areas for software systems --- like eHealth, the
Internet-of-Things, and Edge Computing --- data is encoded in heterogeneous,
tree-shaped data-formats, it must be processed in real-time, and it must be
ephemeral, i.e., not persist in the system. While it is preferable to use a
query language to express complex data-handling logic, their typical execution
engine, a database external from the main application, is unfit in scenarios of
ephemeral data-handling. A better option is represented by integrated query
frameworks, which benefit from existing development support tools (e.g., syntax
and type checkers) and execute within the application memory. In this paper, we
propose one such framework that, for the first time, targets tree-shaped,
document-oriented queries. We formalise an instantiation of MQuery, a sound
variant of the widely-used MongoDB query language, which we implemented in the
Jolie language. Jolie programs are microservices, the building blocks of modern
software systems. Moreover, since Jolie supports native tree data-structures
and automatic management of heterogeneous data-encodings, we can provide a
uniform way to use MQuery on any data-format supported by the language. We
present a non-trivial use case from eHealth, use it to concretely evaluate our
model, and to illustrate our formalism.
|
[
{
"created": "Thu, 25 Apr 2019 13:31:33 GMT",
"version": "v1"
}
] |
2019-04-26
|
[
[
"Giallorenzo",
"Saverio",
""
],
[
"Montesi",
"Fabrizio",
""
],
[
"Safina",
"Larisa",
""
],
[
"Zingaro",
"Stefano Pio",
""
]
] |
In modern application areas for software systems --- like eHealth, the Internet-of-Things, and Edge Computing --- data is encoded in heterogeneous, tree-shaped data-formats, it must be processed in real-time, and it must be ephemeral, i.e., not persist in the system. While it is preferable to use a query language to express complex data-handling logic, their typical execution engine, a database external from the main application, is unfit in scenarios of ephemeral data-handling. A better option is represented by integrated query frameworks, which benefit from existing development support tools (e.g., syntax and type checkers) and execute within the application memory. In this paper, we propose one such framework that, for the first time, targets tree-shaped, document-oriented queries. We formalise an instantiation of MQuery, a sound variant of the widely-used MongoDB query language, which we implemented in the Jolie language. Jolie programs are microservices, the building blocks of modern software systems. Moreover, since Jolie supports native tree data-structures and automatic management of heterogeneous data-encodings, we can provide a uniform way to use MQuery on any data-format supported by the language. We present a non-trivial use case from eHealth, use it to concretely evaluate our model, and to illustrate our formalism.
|
2404.12704
|
Haoyu Sun
|
Jiazhu Dai, Haoyu Sun
|
A Clean-graph Backdoor Attack against Graph Convolutional Networks with
Poisoned Label Only
| null | null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Graph Convolutional Networks (GCNs) have shown excellent performance in
dealing with various graph structures such as node classification, graph
classification and other tasks. However,recent studies have shown that GCNs are
vulnerable to a novel threat known as backdoor attacks. However, all existing
backdoor attacks in the graph domain require modifying the training samples to
accomplish the backdoor injection, which may not be practical in many realistic
scenarios where adversaries have no access to modify the training samples and
may leads to the backdoor attack being detected easily. In order to explore the
backdoor vulnerability of GCNs and create a more practical and stealthy
backdoor attack method, this paper proposes a clean-graph backdoor attack
against GCNs (CBAG) in the node classification task,which only poisons the
training labels without any modification to the training samples, revealing
that GCNs have this security vulnerability. Specifically, CBAG designs a new
trigger exploration method to find important feature dimensions as the trigger
patterns to improve the attack performance. By poisoning the training labels, a
hidden backdoor is injected into the GCNs model. Experimental results show that
our clean graph backdoor can achieve 99% attack success rate while maintaining
the functionality of the GCNs model on benign samples.
|
[
{
"created": "Fri, 19 Apr 2024 08:21:54 GMT",
"version": "v1"
}
] |
2024-04-22
|
[
[
"Dai",
"Jiazhu",
""
],
[
"Sun",
"Haoyu",
""
]
] |
Graph Convolutional Networks (GCNs) have shown excellent performance in dealing with various graph structures such as node classification, graph classification and other tasks. However,recent studies have shown that GCNs are vulnerable to a novel threat known as backdoor attacks. However, all existing backdoor attacks in the graph domain require modifying the training samples to accomplish the backdoor injection, which may not be practical in many realistic scenarios where adversaries have no access to modify the training samples and may leads to the backdoor attack being detected easily. In order to explore the backdoor vulnerability of GCNs and create a more practical and stealthy backdoor attack method, this paper proposes a clean-graph backdoor attack against GCNs (CBAG) in the node classification task,which only poisons the training labels without any modification to the training samples, revealing that GCNs have this security vulnerability. Specifically, CBAG designs a new trigger exploration method to find important feature dimensions as the trigger patterns to improve the attack performance. By poisoning the training labels, a hidden backdoor is injected into the GCNs model. Experimental results show that our clean graph backdoor can achieve 99% attack success rate while maintaining the functionality of the GCNs model on benign samples.
|
2102.04762
|
Linwei Ye
|
Linwei Ye, Mrigank Rochan, Zhi Liu, Xiaoqin Zhang and Yang Wang
|
Referring Segmentation in Images and Videos with Cross-Modal
Self-Attention Network
|
14 pages, 8 figures. arXiv admin note: substantial text overlap with
arXiv:1904.04745
| null |
10.1109/TPAMI.2021.3054384
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
We consider the problem of referring segmentation in images and videos with
natural language. Given an input image (or video) and a referring expression,
the goal is to segment the entity referred by the expression in the image or
video. In this paper, we propose a cross-modal self-attention (CMSA) module to
utilize fine details of individual words and the input image or video, which
effectively captures the long-range dependencies between linguistic and visual
features. Our model can adaptively focus on informative words in the referring
expression and important regions in the visual input. We further propose a
gated multi-level fusion (GMLF) module to selectively integrate self-attentive
cross-modal features corresponding to different levels of visual features. This
module controls the feature fusion of information flow of features at different
levels with high-level and low-level semantic information related to different
attentive words. Besides, we introduce cross-frame self-attention (CFSA) module
to effectively integrate temporal information in consecutive frames which
extends our method in the case of referring segmentation in videos. Experiments
on benchmark datasets of four referring image datasets and two actor and action
video segmentation datasets consistently demonstrate that our proposed approach
outperforms existing state-of-the-art methods.
|
[
{
"created": "Tue, 9 Feb 2021 11:27:59 GMT",
"version": "v1"
}
] |
2021-02-10
|
[
[
"Ye",
"Linwei",
""
],
[
"Rochan",
"Mrigank",
""
],
[
"Liu",
"Zhi",
""
],
[
"Zhang",
"Xiaoqin",
""
],
[
"Wang",
"Yang",
""
]
] |
We consider the problem of referring segmentation in images and videos with natural language. Given an input image (or video) and a referring expression, the goal is to segment the entity referred by the expression in the image or video. In this paper, we propose a cross-modal self-attention (CMSA) module to utilize fine details of individual words and the input image or video, which effectively captures the long-range dependencies between linguistic and visual features. Our model can adaptively focus on informative words in the referring expression and important regions in the visual input. We further propose a gated multi-level fusion (GMLF) module to selectively integrate self-attentive cross-modal features corresponding to different levels of visual features. This module controls the feature fusion of information flow of features at different levels with high-level and low-level semantic information related to different attentive words. Besides, we introduce cross-frame self-attention (CFSA) module to effectively integrate temporal information in consecutive frames which extends our method in the case of referring segmentation in videos. Experiments on benchmark datasets of four referring image datasets and two actor and action video segmentation datasets consistently demonstrate that our proposed approach outperforms existing state-of-the-art methods.
|
1112.0958
|
Qianxue Wang
|
Jacques M. Bahi, Xiaole Fang, Christophe Guyeux, and Qianxue Wang
|
On the design of a family of CI pseudo-random number generators
|
4 pages, In WICOM'11, 7th Int. IEEE Conf. on Wireless Communications,
Networking and Mobile Computing, Wuhan, China, pages 1--4, September 2011
| null |
10.1109/wicom.2011.6040161
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Chaos and its applications in the field of secure communications have
attracted a lot of attention. Chaos-based pseudo-random number generators are
critical to guarantee security over open networks as the Internet. We have
previously demonstrated that it is possible to define such generators with good
statistical properties by using a tool called "chaotic iterations", which
depends on an iteration function. An approach to find update functions such
that the associated generator presents a random-like and chaotic behavior is
proposed in this research work. To do so, we use the vectorial Boolean negation
as a prototype and explain how to modify this iteration function without
deflating the good properties of the associated generator. Simulation results
and basic security analysis are then presented to evaluate the randomness of
this new family of generators.
|
[
{
"created": "Mon, 5 Dec 2011 15:07:16 GMT",
"version": "v1"
}
] |
2016-11-17
|
[
[
"Bahi",
"Jacques M.",
""
],
[
"Fang",
"Xiaole",
""
],
[
"Guyeux",
"Christophe",
""
],
[
"Wang",
"Qianxue",
""
]
] |
Chaos and its applications in the field of secure communications have attracted a lot of attention. Chaos-based pseudo-random number generators are critical to guarantee security over open networks as the Internet. We have previously demonstrated that it is possible to define such generators with good statistical properties by using a tool called "chaotic iterations", which depends on an iteration function. An approach to find update functions such that the associated generator presents a random-like and chaotic behavior is proposed in this research work. To do so, we use the vectorial Boolean negation as a prototype and explain how to modify this iteration function without deflating the good properties of the associated generator. Simulation results and basic security analysis are then presented to evaluate the randomness of this new family of generators.
|
1604.07044
|
Xianming Liu
|
Xianming Liu, Min-Hsuan Tsai, Thomas Huang
|
Analyzing User Preference for Social Image Recommendation
| null | null | null | null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the incredibly growing amount of multimedia data shared on the social
media platforms, recommender systems have become an important necessity to ease
users' burden on the information overload. In such a scenario, extensive amount
of heterogeneous information such as tags, image content, in addition to the
user-to-item preferences, is extremely valuable for making effective
recommendations. In this paper, we explore a novel hybrid algorithm termed {\em
STM}, for image recommendation. STM jointly considers the problem of image
content analysis with the users' preferences on the basis of sparse
representation. STM is able to tackle the challenges of highly sparse user
feedbacks and cold-start problmes in the social network scenario. In addition,
our model is based on the classical probabilistic matrix factorization and can
be easily extended to incorporate other useful information such as the social
relationships. We evaluate our approach with a newly collected 0.3 million
social image data set from Flickr. The experimental results demonstrate that
sparse topic modeling of the image content leads to more effective
recommendations, , with a significant performance gain over the
state-of-the-art alternatives.
|
[
{
"created": "Sun, 24 Apr 2016 15:54:02 GMT",
"version": "v1"
}
] |
2016-04-26
|
[
[
"Liu",
"Xianming",
""
],
[
"Tsai",
"Min-Hsuan",
""
],
[
"Huang",
"Thomas",
""
]
] |
With the incredibly growing amount of multimedia data shared on the social media platforms, recommender systems have become an important necessity to ease users' burden on the information overload. In such a scenario, extensive amount of heterogeneous information such as tags, image content, in addition to the user-to-item preferences, is extremely valuable for making effective recommendations. In this paper, we explore a novel hybrid algorithm termed {\em STM}, for image recommendation. STM jointly considers the problem of image content analysis with the users' preferences on the basis of sparse representation. STM is able to tackle the challenges of highly sparse user feedbacks and cold-start problmes in the social network scenario. In addition, our model is based on the classical probabilistic matrix factorization and can be easily extended to incorporate other useful information such as the social relationships. We evaluate our approach with a newly collected 0.3 million social image data set from Flickr. The experimental results demonstrate that sparse topic modeling of the image content leads to more effective recommendations, , with a significant performance gain over the state-of-the-art alternatives.
|
1504.00037
|
Alex Horn
|
Alex Horn and Daniel Kroening
|
On partial order semantics for SAT/SMT-based symbolic encodings of weak
memory concurrency
|
15 pages, 3 figures
| null | null | null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Concurrent systems are notoriously difficult to analyze, and technological
advances such as weak memory architectures greatly compound this problem. This
has renewed interest in partial order semantics as a theoretical foundation for
formal verification techniques. Among these, symbolic techniques have been
shown to be particularly effective at finding concurrency-related bugs because
they can leverage highly optimized decision procedures such as SAT/SMT solvers.
This paper gives new fundamental results on partial order semantics for
SAT/SMT-based symbolic encodings of weak memory concurrency. In particular, we
give the theoretical basis for a decision procedure that can handle a fragment
of concurrent programs endowed with least fixed point operators. In addition,
we show that a certain partial order semantics of relaxed sequential
consistency is equivalent to the conjunction of three extensively studied weak
memory axioms by Alglave et al. An important consequence of this equivalence is
an asymptotically smaller symbolic encoding for bounded model checking which
has only a quadratic number of partial order constraints compared to the
state-of-the-art cubic-size encoding.
|
[
{
"created": "Tue, 31 Mar 2015 21:03:30 GMT",
"version": "v1"
}
] |
2015-04-02
|
[
[
"Horn",
"Alex",
""
],
[
"Kroening",
"Daniel",
""
]
] |
Concurrent systems are notoriously difficult to analyze, and technological advances such as weak memory architectures greatly compound this problem. This has renewed interest in partial order semantics as a theoretical foundation for formal verification techniques. Among these, symbolic techniques have been shown to be particularly effective at finding concurrency-related bugs because they can leverage highly optimized decision procedures such as SAT/SMT solvers. This paper gives new fundamental results on partial order semantics for SAT/SMT-based symbolic encodings of weak memory concurrency. In particular, we give the theoretical basis for a decision procedure that can handle a fragment of concurrent programs endowed with least fixed point operators. In addition, we show that a certain partial order semantics of relaxed sequential consistency is equivalent to the conjunction of three extensively studied weak memory axioms by Alglave et al. An important consequence of this equivalence is an asymptotically smaller symbolic encoding for bounded model checking which has only a quadratic number of partial order constraints compared to the state-of-the-art cubic-size encoding.
|
1801.00377
|
Walid Shalaby
|
Walid Shalaby, BahaaEddin AlAila, Mohammed Korayem, Layla Pournajaf,
Khalifeh AlJadda, Shannon Quinn, and Wlodek Zadrozny
|
Help Me Find a Job: A Graph-based Approach for Job Recommendation at
Scale
|
Accepted at 2017 IEEE International Conference on Big Data (BIGDATA)
| null | null | null |
cs.IR cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Online job boards are one of the central components of modern recruitment
industry. With millions of candidates browsing through job postings everyday,
the need for accurate, effective, meaningful, and transparent job
recommendations is apparent more than ever. While recommendation systems are
successfully advancing in variety of online domains by creating social and
commercial value, the job recommendation domain is less explored. Existing
systems are mostly focused on content analysis of resumes and job descriptions,
relying heavily on the accuracy and coverage of the semantic analysis and
modeling of the content in which case, they end up usually suffering from
rigidity and the lack of implicit semantic relations that are uncovered from
users' behavior and could be captured by Collaborative Filtering (CF) methods.
Few works which utilize CF do not address the scalability challenges of
real-world systems and the problem of cold-start. In this paper, we propose a
scalable item-based recommendation system for online job recommendations. Our
approach overcomes the major challenges of sparsity and scalability by
leveraging a directed graph of jobs connected by multi-edges representing
various behavioral and contextual similarity signals. The short lived nature of
the items (jobs) in the system and the rapid rate in which new users and jobs
enter the system make the cold-start a serious problem hindering CF methods. We
address this problem by harnessing the power of deep learning in addition to
user behavior to serve hybrid recommendations. Our technique has been leveraged
by CareerBuilder.com which is one of the largest job boards in the world to
generate high-quality recommendations for millions of users.
|
[
{
"created": "Mon, 1 Jan 2018 00:47:44 GMT",
"version": "v1"
}
] |
2018-01-03
|
[
[
"Shalaby",
"Walid",
""
],
[
"AlAila",
"BahaaEddin",
""
],
[
"Korayem",
"Mohammed",
""
],
[
"Pournajaf",
"Layla",
""
],
[
"AlJadda",
"Khalifeh",
""
],
[
"Quinn",
"Shannon",
""
],
[
"Zadrozny",
"Wlodek",
""
]
] |
Online job boards are one of the central components of modern recruitment industry. With millions of candidates browsing through job postings everyday, the need for accurate, effective, meaningful, and transparent job recommendations is apparent more than ever. While recommendation systems are successfully advancing in variety of online domains by creating social and commercial value, the job recommendation domain is less explored. Existing systems are mostly focused on content analysis of resumes and job descriptions, relying heavily on the accuracy and coverage of the semantic analysis and modeling of the content in which case, they end up usually suffering from rigidity and the lack of implicit semantic relations that are uncovered from users' behavior and could be captured by Collaborative Filtering (CF) methods. Few works which utilize CF do not address the scalability challenges of real-world systems and the problem of cold-start. In this paper, we propose a scalable item-based recommendation system for online job recommendations. Our approach overcomes the major challenges of sparsity and scalability by leveraging a directed graph of jobs connected by multi-edges representing various behavioral and contextual similarity signals. The short lived nature of the items (jobs) in the system and the rapid rate in which new users and jobs enter the system make the cold-start a serious problem hindering CF methods. We address this problem by harnessing the power of deep learning in addition to user behavior to serve hybrid recommendations. Our technique has been leveraged by CareerBuilder.com which is one of the largest job boards in the world to generate high-quality recommendations for millions of users.
|
2201.06801
|
Subhasis Koley
|
Susobhan Bandopadhyay, Sasthi C. Ghosh and Subhasis Koley
|
Improved Bounds on the Span of $L(1,2)$-edge Labeling of Some Infinite
Regular Grids
| null | null | null | null |
cs.DM math.CO
|
http://creativecommons.org/licenses/by/4.0/
|
For two given nonnegative integers $h$ and $k$, an $L(h,k)$-edge labeling of
a graph $G$ is the assignment of labels $\{0,1, \cdots, n\}$ to the edges so
that two edges having a common vertex are labeled with difference at least $h$
and two edges not having any common vertex but having a common edge connecting
them are labeled with difference at least $k$. The span $\lambda'_{h,k}{(G)}$
is the minimum $n$ such that $G$ admits an $L(h,k)$-edge labeling. Here our
main focus is on finding $\lambda'_{1,2}{(G)}$ for $L(1,2)$-edge labeling of
infinite regular hexagonal ($T_3$), square ($T_4$), triangular ($T_6$) and
octagonal ($T_8$) grids. It was known that $7 \leq \lambda'_{1,2}{(T_3)} \leq
8$, $10 \leq \lambda'_{1,2}{(T_4)} \leq 11$, $16 \leq \lambda'_{1,2}{(T_6)}
\leq 20$ and $25 \leq \lambda'_{1,2}{(T_8)} \leq 28$. Here we settle two long
standing open questions i.e. $\lambda'_{1,2}{(T_3)}$ and
$\lambda'_{1,2}{(T_4)}$. We show $\lambda'_{1,2}{(T_3)} =7$,
$\lambda'_{1,2}{(T_4)}= 11$. We also improve the bound for $T_6$ and $T_8$ and
prove $\lambda'_{1,2}{(T_6)} \geq 18$, $ \lambda'_{1,2}{(T_8)} \geq 26$.
|
[
{
"created": "Tue, 18 Jan 2022 07:58:13 GMT",
"version": "v1"
}
] |
2022-01-19
|
[
[
"Bandopadhyay",
"Susobhan",
""
],
[
"Ghosh",
"Sasthi C.",
""
],
[
"Koley",
"Subhasis",
""
]
] |
For two given nonnegative integers $h$ and $k$, an $L(h,k)$-edge labeling of a graph $G$ is the assignment of labels $\{0,1, \cdots, n\}$ to the edges so that two edges having a common vertex are labeled with difference at least $h$ and two edges not having any common vertex but having a common edge connecting them are labeled with difference at least $k$. The span $\lambda'_{h,k}{(G)}$ is the minimum $n$ such that $G$ admits an $L(h,k)$-edge labeling. Here our main focus is on finding $\lambda'_{1,2}{(G)}$ for $L(1,2)$-edge labeling of infinite regular hexagonal ($T_3$), square ($T_4$), triangular ($T_6$) and octagonal ($T_8$) grids. It was known that $7 \leq \lambda'_{1,2}{(T_3)} \leq 8$, $10 \leq \lambda'_{1,2}{(T_4)} \leq 11$, $16 \leq \lambda'_{1,2}{(T_6)} \leq 20$ and $25 \leq \lambda'_{1,2}{(T_8)} \leq 28$. Here we settle two long standing open questions i.e. $\lambda'_{1,2}{(T_3)}$ and $\lambda'_{1,2}{(T_4)}$. We show $\lambda'_{1,2}{(T_3)} =7$, $\lambda'_{1,2}{(T_4)}= 11$. We also improve the bound for $T_6$ and $T_8$ and prove $\lambda'_{1,2}{(T_6)} \geq 18$, $ \lambda'_{1,2}{(T_8)} \geq 26$.
|
1908.10042
|
EPTCS
|
Wolfgang Ahrendt (Chalmers University of Technology), Ludovic Henrio
(Univ Lyon, EnsL, UCBL, CNRS, Inria, LIP), Wytse Oortwijn (University of
Twente)
|
Who is to Blame? Runtime Verification of Distributed Objects with Active
Monitors
|
In Proceedings VORTEX 2018, arXiv:1908.09302
|
EPTCS 302, 2019, pp. 32-46
|
10.4204/EPTCS.302.3
| null |
cs.SE cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Since distributed software systems are ubiquitous, their correct functioning
is crucially important. Static verification is possible in principle, but
requires high expertise and effort which is not feasible in many eco-systems.
Runtime verification can serve as a lean alternative, where monitoring
mechanisms are automatically generated from property specifications, to check
compliance at runtime. This paper contributes a practical solution for powerful
and flexible runtime verification of distributed, object-oriented applications,
via a combination of the runtime verification tool Larva and the active object
framework ProActive. Even if Larva supports in itself only the generation of
local, sequential monitors, we empower Larva for distributed monitoring by
connecting monitors with active objects, turning them into active,
communicating monitors. We discuss how this allows for a variety of monitoring
architectures. Further, we show how property specifications, and thereby the
generated monitors, provide a model that splits the blame between the local
object and its environment. While Larva itself focuses on monitoring of
control-oriented properties, we use the Larva front-end StaRVOOrS to also
capture data-oriented (pre/post) properties in the distributed monitoring. We
demonstrate this approach to distributed runtime verification with a case
study, a distributed key/value store.
|
[
{
"created": "Tue, 27 Aug 2019 06:20:22 GMT",
"version": "v1"
}
] |
2019-08-28
|
[
[
"Ahrendt",
"Wolfgang",
"",
"Chalmers University of Technology"
],
[
"Henrio",
"Ludovic",
"",
"Univ Lyon, EnsL, UCBL, CNRS, Inria, LIP"
],
[
"Oortwijn",
"Wytse",
"",
"University of\n Twente"
]
] |
Since distributed software systems are ubiquitous, their correct functioning is crucially important. Static verification is possible in principle, but requires high expertise and effort which is not feasible in many eco-systems. Runtime verification can serve as a lean alternative, where monitoring mechanisms are automatically generated from property specifications, to check compliance at runtime. This paper contributes a practical solution for powerful and flexible runtime verification of distributed, object-oriented applications, via a combination of the runtime verification tool Larva and the active object framework ProActive. Even if Larva supports in itself only the generation of local, sequential monitors, we empower Larva for distributed monitoring by connecting monitors with active objects, turning them into active, communicating monitors. We discuss how this allows for a variety of monitoring architectures. Further, we show how property specifications, and thereby the generated monitors, provide a model that splits the blame between the local object and its environment. While Larva itself focuses on monitoring of control-oriented properties, we use the Larva front-end StaRVOOrS to also capture data-oriented (pre/post) properties in the distributed monitoring. We demonstrate this approach to distributed runtime verification with a case study, a distributed key/value store.
|
1911.02590
|
Jonathan Lorraine
|
Jonathan Lorraine, Paul Vicol, David Duvenaud
|
Optimizing Millions of Hyperparameters by Implicit Differentiation
|
Submitted to AISTATS 2020
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose an algorithm for inexpensive gradient-based hyperparameter
optimization that combines the implicit function theorem (IFT) with efficient
inverse Hessian approximations. We present results about the relationship
between the IFT and differentiating through optimization, motivating our
algorithm. We use the proposed approach to train modern network architectures
with millions of weights and millions of hyper-parameters. For example, we
learn a data-augmentation network - where every weight is a hyperparameter
tuned for validation performance - outputting augmented training examples.
Jointly tuning weights and hyperparameters with our approach is only a few
times more costly in memory and compute than standard training.
|
[
{
"created": "Wed, 6 Nov 2019 19:04:16 GMT",
"version": "v1"
}
] |
2019-11-11
|
[
[
"Lorraine",
"Jonathan",
""
],
[
"Vicol",
"Paul",
""
],
[
"Duvenaud",
"David",
""
]
] |
We propose an algorithm for inexpensive gradient-based hyperparameter optimization that combines the implicit function theorem (IFT) with efficient inverse Hessian approximations. We present results about the relationship between the IFT and differentiating through optimization, motivating our algorithm. We use the proposed approach to train modern network architectures with millions of weights and millions of hyper-parameters. For example, we learn a data-augmentation network - where every weight is a hyperparameter tuned for validation performance - outputting augmented training examples. Jointly tuning weights and hyperparameters with our approach is only a few times more costly in memory and compute than standard training.
|
2012.12507
|
Se Young Chun
|
Dongwon Park, Dong Un Kang, Se Young Chun
|
Blur More To Deblur Better: Multi-Blur2Deblur For Efficient Video
Deblurring
|
9 pages, 7 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
One of the key components for video deblurring is how to exploit neighboring
frames. Recent state-of-the-art methods either used aligned adjacent frames to
the center frame or propagated the information on past frames to the current
frame recurrently. Here we propose multi-blur-to-deblur (MB2D), a novel concept
to exploit neighboring frames for efficient video deblurring. Firstly, inspired
by unsharp masking, we argue that using more blurred images with long exposures
as additional inputs significantly improves performance. Secondly, we propose
multi-blurring recurrent neural network (MBRNN) that can synthesize more
blurred images from neighboring frames, yielding substantially improved
performance with existing video deblurring methods. Lastly, we propose
multi-scale deblurring with connecting recurrent feature map from MBRNN (MSDR)
to achieve state-of-the-art performance on the popular GoPro and Su datasets in
fast and memory efficient ways.
|
[
{
"created": "Wed, 23 Dec 2020 06:17:31 GMT",
"version": "v1"
}
] |
2020-12-24
|
[
[
"Park",
"Dongwon",
""
],
[
"Kang",
"Dong Un",
""
],
[
"Chun",
"Se Young",
""
]
] |
One of the key components for video deblurring is how to exploit neighboring frames. Recent state-of-the-art methods either used aligned adjacent frames to the center frame or propagated the information on past frames to the current frame recurrently. Here we propose multi-blur-to-deblur (MB2D), a novel concept to exploit neighboring frames for efficient video deblurring. Firstly, inspired by unsharp masking, we argue that using more blurred images with long exposures as additional inputs significantly improves performance. Secondly, we propose multi-blurring recurrent neural network (MBRNN) that can synthesize more blurred images from neighboring frames, yielding substantially improved performance with existing video deblurring methods. Lastly, we propose multi-scale deblurring with connecting recurrent feature map from MBRNN (MSDR) to achieve state-of-the-art performance on the popular GoPro and Su datasets in fast and memory efficient ways.
|
2307.14686
|
Josep Marti-Saumell
|
Josep Mart\'i-Saumell, Hugo Duarte, Patrick Grosch, Juan
Andrade-Cetto, Angel Santamaria-Navarro, Joan Sol\`a
|
Borinot: an open thrust-torque-controlled robot for research on agile
aerial-contact motion
|
14 pages, 13 figures. See related video at
https://youtu.be/Ob7IIVB6P_A
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces Borinot, an open-source aerial robotic platform
designed to conduct research on hybrid agile locomotion and manipulation using
flight and contacts. This platform features an agile and powerful hexarotor
that can be outfitted with torque-actuated limbs of diverse architecture,
allowing for whole-body dynamic control. As a result, Borinot can perform agile
tasks such as aggressive or acrobatic maneuvers with the participation of the
whole-body dynamics.
The limbs attached to Borinot can be utilized in various ways; during
contact, they can be used as legs to create contact-based locomotion, or as
arms to manipulate objects. In free flight, they can be used as tails to
contribute to dynamics, mimicking the movements of many animals. This allows
for any hybridization of these dynamic modes, making Borinot an ideal
open-source platform for research on hybrid aerial-contact agile motion.
To demonstrate the key capabilities of Borinot in terms of agility with
hybrid motion modes, we have fitted a planar 2DoF limb and implemented a
whole-body torque-level model-predictive-control. The result is a capable and
adaptable platform that, we believe, opens up new avenues of research in the
field of agile robotics. Interesting links\footnote{Documentation:
\url{www.iri.upc.edu/borinot}}\footnote{Video:
\url{https://youtu.be/Ob7IIVB6P_A}}.
|
[
{
"created": "Thu, 27 Jul 2023 08:19:47 GMT",
"version": "v1"
}
] |
2023-07-28
|
[
[
"Martí-Saumell",
"Josep",
""
],
[
"Duarte",
"Hugo",
""
],
[
"Grosch",
"Patrick",
""
],
[
"Andrade-Cetto",
"Juan",
""
],
[
"Santamaria-Navarro",
"Angel",
""
],
[
"Solà",
"Joan",
""
]
] |
This paper introduces Borinot, an open-source aerial robotic platform designed to conduct research on hybrid agile locomotion and manipulation using flight and contacts. This platform features an agile and powerful hexarotor that can be outfitted with torque-actuated limbs of diverse architecture, allowing for whole-body dynamic control. As a result, Borinot can perform agile tasks such as aggressive or acrobatic maneuvers with the participation of the whole-body dynamics. The limbs attached to Borinot can be utilized in various ways; during contact, they can be used as legs to create contact-based locomotion, or as arms to manipulate objects. In free flight, they can be used as tails to contribute to dynamics, mimicking the movements of many animals. This allows for any hybridization of these dynamic modes, making Borinot an ideal open-source platform for research on hybrid aerial-contact agile motion. To demonstrate the key capabilities of Borinot in terms of agility with hybrid motion modes, we have fitted a planar 2DoF limb and implemented a whole-body torque-level model-predictive-control. The result is a capable and adaptable platform that, we believe, opens up new avenues of research in the field of agile robotics. Interesting links\footnote{Documentation: \url{www.iri.upc.edu/borinot}}\footnote{Video: \url{https://youtu.be/Ob7IIVB6P_A}}.
|
2208.11483
|
Suncheng Xiang
|
Hongwei Xu, Suncheng Xiang, Dahong Qian
|
SubFace: Learning with Softmax Approximation for Face Recognition
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The softmax-based loss functions and its variants (e.g., cosface, sphereface,
and arcface) significantly improve the face recognition performance in wild
unconstrained scenes. A common practice of these algorithms is to perform
optimizations on the multiplication between the embedding features and the
linear transformation matrix. However in most cases, the dimension of embedding
features is given based on traditional design experience, and there is
less-studied on improving performance using the feature itself when giving a
fixed size. To address this challenge, this paper presents a softmax
approximation method called SubFace, which employs the subspace feature to
promote the performance of face recognition. Specifically, we dynamically
select the non-overlapping subspace features in each batch during training, and
then use the subspace features to approximate full-feature among softmax-based
loss, so the discriminability of the deep model can be significantly enhanced
for face recognition. Comprehensive experiments conducted on benchmark datasets
demonstrate that our method can significantly improve the performance of
vanilla CNN baseline, which strongly proves the effectiveness of subspace
strategy with the margin-based loss.
|
[
{
"created": "Wed, 24 Aug 2022 12:31:08 GMT",
"version": "v1"
}
] |
2022-08-25
|
[
[
"Xu",
"Hongwei",
""
],
[
"Xiang",
"Suncheng",
""
],
[
"Qian",
"Dahong",
""
]
] |
The softmax-based loss functions and its variants (e.g., cosface, sphereface, and arcface) significantly improve the face recognition performance in wild unconstrained scenes. A common practice of these algorithms is to perform optimizations on the multiplication between the embedding features and the linear transformation matrix. However in most cases, the dimension of embedding features is given based on traditional design experience, and there is less-studied on improving performance using the feature itself when giving a fixed size. To address this challenge, this paper presents a softmax approximation method called SubFace, which employs the subspace feature to promote the performance of face recognition. Specifically, we dynamically select the non-overlapping subspace features in each batch during training, and then use the subspace features to approximate full-feature among softmax-based loss, so the discriminability of the deep model can be significantly enhanced for face recognition. Comprehensive experiments conducted on benchmark datasets demonstrate that our method can significantly improve the performance of vanilla CNN baseline, which strongly proves the effectiveness of subspace strategy with the margin-based loss.
|
1711.02950
|
Federico Costantini
|
Federico Costantini
|
MaaS and GDPR: an overview
|
Paper (10 pages) accepted at the international conference Intelligent
Transport Systems. From research and development to the market uptake,
Helsinki (Finland) November 29/30, 2017. Not published nor in press
| null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In MaaS (Mobility as a Service), means of transport are virtualized in
mobility resources and provided to users using the Internet. From a legal
perspective, this model of ITS (Intelligent Transport System) raises several
concerns with regard to data protection. This contribution, after a short
description of MaaS and an introduction to the issues of data protection in
ITS, explores the impact of GDPR (General Data Protection Regulation) in the
European Union, detecting possible threats and remedies and suggesting a
plausible approach.
|
[
{
"created": "Wed, 8 Nov 2017 14:14:15 GMT",
"version": "v1"
}
] |
2017-11-09
|
[
[
"Costantini",
"Federico",
""
]
] |
In MaaS (Mobility as a Service), means of transport are virtualized in mobility resources and provided to users using the Internet. From a legal perspective, this model of ITS (Intelligent Transport System) raises several concerns with regard to data protection. This contribution, after a short description of MaaS and an introduction to the issues of data protection in ITS, explores the impact of GDPR (General Data Protection Regulation) in the European Union, detecting possible threats and remedies and suggesting a plausible approach.
|
2104.06772
|
Anthony Ngo
|
Anthony Ngo, Max Paul Bauer, Michael Resch
|
Deep Evaluation Metric: Learning to Evaluate Simulated Radar Point
Clouds for Virtual Testing of Autonomous Driving
|
2021 IEEE Radar Conference (IEEE RadarConf 2021)
| null |
10.1109/RadarConf2147009.2021.9455235
| null |
cs.CV cs.AI eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The usage of environment sensor models for virtual testing is a promising
approach to reduce the testing effort of autonomous driving. However, in order
to deduce any statements regarding the performance of an autonomous driving
function based on simulation, the sensor model has to be validated to determine
the discrepancy between the synthetic and real sensor data. Since a certain
degree of divergence can be assumed to exist, the sufficient level of fidelity
must be determined, which poses a major challenge. In particular, a method for
quantifying the fidelity of a sensor model does not exist and the problem of
defining an appropriate metric remains. In this work, we train a neural network
to distinguish real and simulated radar sensor data with the purpose of
learning the latent features of real radar point clouds. Furthermore, we
propose the classifier's confidence score for the `real radar point cloud'
class as a metric to determine the degree of fidelity of synthetically
generated radar data. The presented approach is evaluated and it can be
demonstrated that the proposed deep evaluation metric outperforms conventional
metrics in terms of its capability to identify characteristic differences
between real and simulated radar data.
|
[
{
"created": "Wed, 14 Apr 2021 11:04:50 GMT",
"version": "v1"
},
{
"created": "Mon, 21 Jun 2021 08:30:09 GMT",
"version": "v2"
}
] |
2021-06-22
|
[
[
"Ngo",
"Anthony",
""
],
[
"Bauer",
"Max Paul",
""
],
[
"Resch",
"Michael",
""
]
] |
The usage of environment sensor models for virtual testing is a promising approach to reduce the testing effort of autonomous driving. However, in order to deduce any statements regarding the performance of an autonomous driving function based on simulation, the sensor model has to be validated to determine the discrepancy between the synthetic and real sensor data. Since a certain degree of divergence can be assumed to exist, the sufficient level of fidelity must be determined, which poses a major challenge. In particular, a method for quantifying the fidelity of a sensor model does not exist and the problem of defining an appropriate metric remains. In this work, we train a neural network to distinguish real and simulated radar sensor data with the purpose of learning the latent features of real radar point clouds. Furthermore, we propose the classifier's confidence score for the `real radar point cloud' class as a metric to determine the degree of fidelity of synthetically generated radar data. The presented approach is evaluated and it can be demonstrated that the proposed deep evaluation metric outperforms conventional metrics in terms of its capability to identify characteristic differences between real and simulated radar data.
|
2311.10529
|
Yichi Zhang
|
Yichi Zhang, Shiyao Hu, Sijie Ren, Chen Jiang, Yuan Cheng, Yuan Qi
|
Enhancing the Reliability of Segment Anything Model for Auto-Prompting
Medical Image Segmentation with Uncertainty Rectification
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Segment Anything Model (SAM) has recently emerged as a groundbreaking
foundation model for prompt-driven image segmentation tasks. However, both the
original SAM and its medical variants require slice-by-slice manual prompting
of target structures, which directly increase the burden for applications.
Despite attempts of auto-prompting to turn SAM into a fully automatic manner,
it still exhibits subpar performance and lacks of reliability especially in the
field of medical imaging. In this paper, we propose UR-SAM, an uncertainty
rectified SAM framework to enhance the reliability for auto-prompting medical
image segmentation. Building upon a localization framework for automatic prompt
generation, our method incorporates a prompt augmentation module to obtain a
series of input prompts for SAM for uncertainty estimation and an
uncertainty-based rectification module to further utilize the distribution of
estimated uncertainty to improve the segmentation performance. Extensive
experiments on two public 3D medical datasets covering the segmentation of 35
organs demonstrate that without supplementary training or fine-tuning, our
method further improves the segmentation performance with up to 10.7 % and 13.8
% in dice similarity coefficient, demonstrating efficiency and broad
capabilities for medical image segmentation without manual prompting.
|
[
{
"created": "Fri, 17 Nov 2023 13:49:00 GMT",
"version": "v1"
},
{
"created": "Wed, 13 Dec 2023 04:57:47 GMT",
"version": "v2"
},
{
"created": "Mon, 18 Mar 2024 08:47:03 GMT",
"version": "v3"
}
] |
2024-03-19
|
[
[
"Zhang",
"Yichi",
""
],
[
"Hu",
"Shiyao",
""
],
[
"Ren",
"Sijie",
""
],
[
"Jiang",
"Chen",
""
],
[
"Cheng",
"Yuan",
""
],
[
"Qi",
"Yuan",
""
]
] |
The Segment Anything Model (SAM) has recently emerged as a groundbreaking foundation model for prompt-driven image segmentation tasks. However, both the original SAM and its medical variants require slice-by-slice manual prompting of target structures, which directly increase the burden for applications. Despite attempts of auto-prompting to turn SAM into a fully automatic manner, it still exhibits subpar performance and lacks of reliability especially in the field of medical imaging. In this paper, we propose UR-SAM, an uncertainty rectified SAM framework to enhance the reliability for auto-prompting medical image segmentation. Building upon a localization framework for automatic prompt generation, our method incorporates a prompt augmentation module to obtain a series of input prompts for SAM for uncertainty estimation and an uncertainty-based rectification module to further utilize the distribution of estimated uncertainty to improve the segmentation performance. Extensive experiments on two public 3D medical datasets covering the segmentation of 35 organs demonstrate that without supplementary training or fine-tuning, our method further improves the segmentation performance with up to 10.7 % and 13.8 % in dice similarity coefficient, demonstrating efficiency and broad capabilities for medical image segmentation without manual prompting.
|
1809.09849
|
Richard Torkar
|
Richard Torkar, Carlo A. Furia, Robert Feldt, Francisco Gomes de
Oliveira Neto, Lucas Gren, Per Lenberg, Neil A. Ernst
|
A Method to Assess and Argue for Practical Significance in Software
Engineering
|
13 pages, 9 figures, 3 tables. Minor rev update
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A key goal of empirical research in software engineering is to assess
practical significance, which answers whether the observed effects of some
compared treatments show a relevant difference in practice in realistic
scenarios. Even though plenty of standard techniques exist to assess
statistical significance, connecting it to practical significance is not
straightforward or routinely done; indeed, only a few empirical studies in
software engineering assess practical significance in a principled and
systematic way.
In this paper, we argue that Bayesian data analysis provides suitable tools
to assess practical significance rigorously. We demonstrate our claims in a
case study comparing different test techniques. The case study's data was
previously analyzed (Afzal et al., 2015) using standard techniques focusing on
statistical significance. Here, we build a multilevel model of the same data,
which we fit and validate using Bayesian techniques. Our method is to apply
cumulative prospect theory on top of the statistical model to quantitatively
connect our statistical analysis output to a practically meaningful context.
This is then the basis both for assessing and arguing for practical
significance.
Our study demonstrates that Bayesian analysis provides a technically rigorous
yet practical framework for empirical software engineering. A substantial side
effect is that any uncertainty in the underlying data will be propagated
through the statistical model, and its effects on practical significance are
made clear.
Thus, in combination with cumulative prospect theory, Bayesian analysis
supports seamlessly assessing practical significance in an empirical software
engineering context, thus potentially clarifying and extending the relevance of
research for practitioners.
|
[
{
"created": "Wed, 26 Sep 2018 08:39:46 GMT",
"version": "v1"
},
{
"created": "Wed, 18 Mar 2020 10:35:12 GMT",
"version": "v2"
},
{
"created": "Fri, 3 Apr 2020 14:50:33 GMT",
"version": "v3"
},
{
"created": "Wed, 29 Apr 2020 14:11:03 GMT",
"version": "v4"
},
{
"created": "Sat, 31 Oct 2020 08:44:00 GMT",
"version": "v5"
},
{
"created": "Tue, 3 Nov 2020 08:57:13 GMT",
"version": "v6"
},
{
"created": "Fri, 25 Dec 2020 13:35:33 GMT",
"version": "v7"
}
] |
2020-12-29
|
[
[
"Torkar",
"Richard",
""
],
[
"Furia",
"Carlo A.",
""
],
[
"Feldt",
"Robert",
""
],
[
"Neto",
"Francisco Gomes de Oliveira",
""
],
[
"Gren",
"Lucas",
""
],
[
"Lenberg",
"Per",
""
],
[
"Ernst",
"Neil A.",
""
]
] |
A key goal of empirical research in software engineering is to assess practical significance, which answers whether the observed effects of some compared treatments show a relevant difference in practice in realistic scenarios. Even though plenty of standard techniques exist to assess statistical significance, connecting it to practical significance is not straightforward or routinely done; indeed, only a few empirical studies in software engineering assess practical significance in a principled and systematic way. In this paper, we argue that Bayesian data analysis provides suitable tools to assess practical significance rigorously. We demonstrate our claims in a case study comparing different test techniques. The case study's data was previously analyzed (Afzal et al., 2015) using standard techniques focusing on statistical significance. Here, we build a multilevel model of the same data, which we fit and validate using Bayesian techniques. Our method is to apply cumulative prospect theory on top of the statistical model to quantitatively connect our statistical analysis output to a practically meaningful context. This is then the basis both for assessing and arguing for practical significance. Our study demonstrates that Bayesian analysis provides a technically rigorous yet practical framework for empirical software engineering. A substantial side effect is that any uncertainty in the underlying data will be propagated through the statistical model, and its effects on practical significance are made clear. Thus, in combination with cumulative prospect theory, Bayesian analysis supports seamlessly assessing practical significance in an empirical software engineering context, thus potentially clarifying and extending the relevance of research for practitioners.
|
1808.01280
|
ShihChung Lo Ph.D.
|
ShihChung B. Lo, Ph.D., Matthew T. Freedman, M.D., Seong K. Mun,
Ph.D., and Heang-Ping Chan, Ph.D
|
Geared Rotationally Identical and Invariant Convolutional Neural Network
Systems
|
14 pages, 6 figures, 8 tables
| null | null | null |
cs.NE cs.LG cs.SI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Theorems and techniques to form different types of transformationally
invariant processing and to produce the same output quantitatively based on
either transformationally invariant operators or symmetric operations have
recently been introduced by the authors. In this study, we further propose to
compose a geared rotationally identical CNN system (GRI-CNN) with a small step
angle by connecting networks of participated processes at the first flatten
layer. Using an ordinary CNN structure as a base, requirements for constructing
a GRI-CNN include the use of either symmetric input vector or kernels with an
angle increment that can form a complete cycle as a "gearwheel". Four basic
GRI-CNN structures were studied. Each of them can produce quantitatively
identical output results when a rotation angle of the input vector is evenly
divisible by the step angle of the gear. Our study showed when an input vector
rotated with an angle does not match to a step angle, the GRI-CNN can also
produce a highly consistent result. With a design of using an ultra-fine
gear-tooth step angle (e.g., 1 degree or 0.1 degree), all four GRI-CNN systems
can be constructed virtually isotropically.
|
[
{
"created": "Fri, 3 Aug 2018 02:27:40 GMT",
"version": "v1"
},
{
"created": "Wed, 8 Aug 2018 15:08:37 GMT",
"version": "v2"
},
{
"created": "Fri, 10 Aug 2018 11:26:09 GMT",
"version": "v3"
}
] |
2018-08-13
|
[
[
"Lo",
"ShihChung B.",
""
],
[
"D.",
"Ph.",
""
],
[
"Freedman",
"Matthew T.",
""
],
[
"D.",
"M.",
""
],
[
"Mun",
"Seong K.",
""
],
[
"D.",
"Ph.",
""
],
[
"Chan",
"Heang-Ping",
""
],
[
"D",
"Ph.",
""
]
] |
Theorems and techniques to form different types of transformationally invariant processing and to produce the same output quantitatively based on either transformationally invariant operators or symmetric operations have recently been introduced by the authors. In this study, we further propose to compose a geared rotationally identical CNN system (GRI-CNN) with a small step angle by connecting networks of participated processes at the first flatten layer. Using an ordinary CNN structure as a base, requirements for constructing a GRI-CNN include the use of either symmetric input vector or kernels with an angle increment that can form a complete cycle as a "gearwheel". Four basic GRI-CNN structures were studied. Each of them can produce quantitatively identical output results when a rotation angle of the input vector is evenly divisible by the step angle of the gear. Our study showed when an input vector rotated with an angle does not match to a step angle, the GRI-CNN can also produce a highly consistent result. With a design of using an ultra-fine gear-tooth step angle (e.g., 1 degree or 0.1 degree), all four GRI-CNN systems can be constructed virtually isotropically.
|
1607.05408
|
Matthias Gall\'e
|
Will Radford, Matthias Galle
|
Discriminating between similar languages in Twitter using label
propagation
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Identifying the language of social media messages is an important first step
in linguistic processing. Existing models for Twitter focus on content
analysis, which is successful for dissimilar language pairs. We propose a label
propagation approach that takes the social graph of tweet authors into account
as well as content to better tease apart similar languages. This results in
state-of-the-art shared task performance of $76.63\%$, $1.4\%$ higher than the
top system.
|
[
{
"created": "Tue, 19 Jul 2016 05:38:58 GMT",
"version": "v1"
}
] |
2016-07-20
|
[
[
"Radford",
"Will",
""
],
[
"Galle",
"Matthias",
""
]
] |
Identifying the language of social media messages is an important first step in linguistic processing. Existing models for Twitter focus on content analysis, which is successful for dissimilar language pairs. We propose a label propagation approach that takes the social graph of tweet authors into account as well as content to better tease apart similar languages. This results in state-of-the-art shared task performance of $76.63\%$, $1.4\%$ higher than the top system.
|
2209.02101
|
Wolfgang Mulzer
|
Michaela Borzechowski and Wolfgang Mulzer
|
Unique Sink Orientations of Grids is in Unique End of Potential Line
|
8 pages, 5 figures
| null | null | null |
cs.CG
|
http://creativecommons.org/licenses/by/4.0/
|
The complexity classes Unique End of Potential Line (UEOPL) and its promise
version PUEOPL were introduced in 2018 by Fearnly et al. UEOPL captures search
problems where the instances are promised to have a unique solution. UEOPL
captures total search versions of these promise problems. The promise problems
can be made total by defining violations that are returned as a short
certificate of an unfulfilled promise.
GridUSO is the problem of finding the sink in a grid with a unique sink
orientation. It was introduced by G\"artner et al. We describe a promise
preserving reduction from GridUSO to UniqueForwardEOPL, a UEOPL-complete
problem. Thus, we show that GridUSO is in UEOPL and its promise version is in
PUEOPL.
|
[
{
"created": "Mon, 5 Sep 2022 19:06:53 GMT",
"version": "v1"
}
] |
2022-09-07
|
[
[
"Borzechowski",
"Michaela",
""
],
[
"Mulzer",
"Wolfgang",
""
]
] |
The complexity classes Unique End of Potential Line (UEOPL) and its promise version PUEOPL were introduced in 2018 by Fearnly et al. UEOPL captures search problems where the instances are promised to have a unique solution. UEOPL captures total search versions of these promise problems. The promise problems can be made total by defining violations that are returned as a short certificate of an unfulfilled promise. GridUSO is the problem of finding the sink in a grid with a unique sink orientation. It was introduced by G\"artner et al. We describe a promise preserving reduction from GridUSO to UniqueForwardEOPL, a UEOPL-complete problem. Thus, we show that GridUSO is in UEOPL and its promise version is in PUEOPL.
|
0708.1411
|
Sajad Sadough
|
Sajad Sadough (LSS), Pablo Piantanida (LSS), Pierre Duhamel (LSS)
|
Achievable Outage Rates with Improved Decoding of Bicm Multiband Ofdm
Under Channel Estimation Errors
| null |
Dans 40th Asilomar Conference on Signals, Systems, and Computers -
40th Asilomar Conference on Signals, Systems, and Computers, Monterey :
\'Etats-Unis d'Am\'erique (2007)
| null | null |
cs.NI
| null |
We consider the decoding of bit interleaved coded modulation (BICM) applied
to multiband OFDM for practical scenarios where only a noisy (possibly very
bad) estimate of the channel is available at the receiver. First, a decoding
metric based on the channel it a posteriori probability density, conditioned on
the channel estimate is derived and used for decoding BICM multiband OFDM.
Then, we characterize the limits of reliable information rates in terms of the
maximal achievable outage rates associated to the proposed metric. We also
compare our results with the outage rates of a system using a theoretical
decoder. Our results are useful for designing a communication system where a
prescribed quality of service (QoS), in terms of achievable target rates with
small error probability, must be satisfied even in the presence of imperfect
channel estimation. Numerical results over both realistic UWB and theoretical
Rayleigh fading channels show that the proposed method provides significant
gain in terms of BER and outage rates compared to the classical mismatched
detector, without introducing any additional complexity.
|
[
{
"created": "Fri, 10 Aug 2007 12:13:51 GMT",
"version": "v1"
}
] |
2007-08-13
|
[
[
"Sadough",
"Sajad",
"",
"LSS"
],
[
"Piantanida",
"Pablo",
"",
"LSS"
],
[
"Duhamel",
"Pierre",
"",
"LSS"
]
] |
We consider the decoding of bit interleaved coded modulation (BICM) applied to multiband OFDM for practical scenarios where only a noisy (possibly very bad) estimate of the channel is available at the receiver. First, a decoding metric based on the channel it a posteriori probability density, conditioned on the channel estimate is derived and used for decoding BICM multiband OFDM. Then, we characterize the limits of reliable information rates in terms of the maximal achievable outage rates associated to the proposed metric. We also compare our results with the outage rates of a system using a theoretical decoder. Our results are useful for designing a communication system where a prescribed quality of service (QoS), in terms of achievable target rates with small error probability, must be satisfied even in the presence of imperfect channel estimation. Numerical results over both realistic UWB and theoretical Rayleigh fading channels show that the proposed method provides significant gain in terms of BER and outage rates compared to the classical mismatched detector, without introducing any additional complexity.
|
2110.00785
|
Delano Oliveira
|
Delano Oliveira, Reydne Bruno, Fernanda Madeiral, Fernando Castor
|
Evaluating Code Readability and Legibility: An Examination of
Human-centric Studies
| null | null |
10.1109/ICSME46990.2020.00041
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reading code is an essential activity in software maintenance and evolution.
Several studies with human subjects have investigated how different factors,
such as the employed programming constructs and naming conventions, can impact
code readability, i.e., what makes a program easier or harder to read and
apprehend by developers, and code legibility, i.e., what influences the ease of
identifying elements of a program. These studies evaluate readability and
legibility by means of different comprehension tasks and response variables. In
this paper, we examine these tasks and variables in studies that compare
programming constructs, coding idioms, naming conventions, and formatting
guidelines, e.g., recursive vs. iterative code. To that end, we have conducted
a systematic literature review where we found 54 relevant papers. Most of these
studies evaluate code readability and legibility by measuring the correctness
of the subjects' results (83.3%) or simply asking their opinions (55.6%). Some
studies (16.7%) rely exclusively on the latter variable.There are still few
studies that monitor subjects' physical signs, such as brain activation regions
(5%). Moreover, our study shows that some variables are multi-faceted. For
instance, correctness can be measured as the ability to predict the output of a
program, answer questions about its behavior, or recall parts of it. These
results make it clear that different evaluation approaches require different
competencies from subjects, e.g., tracing the program vs. summarizing its goal
vs. memorizing its text. To assist researchers in the design of new studies and
improve our comprehension of existing ones, we model program comprehension as a
learning activity by adapting a preexisting learning taxonomy. This adaptation
indicates that some competencies are often exercised in these evaluations
whereas others are rarely targeted.
|
[
{
"created": "Sat, 2 Oct 2021 11:21:15 GMT",
"version": "v1"
}
] |
2021-10-05
|
[
[
"Oliveira",
"Delano",
""
],
[
"Bruno",
"Reydne",
""
],
[
"Madeiral",
"Fernanda",
""
],
[
"Castor",
"Fernando",
""
]
] |
Reading code is an essential activity in software maintenance and evolution. Several studies with human subjects have investigated how different factors, such as the employed programming constructs and naming conventions, can impact code readability, i.e., what makes a program easier or harder to read and apprehend by developers, and code legibility, i.e., what influences the ease of identifying elements of a program. These studies evaluate readability and legibility by means of different comprehension tasks and response variables. In this paper, we examine these tasks and variables in studies that compare programming constructs, coding idioms, naming conventions, and formatting guidelines, e.g., recursive vs. iterative code. To that end, we have conducted a systematic literature review where we found 54 relevant papers. Most of these studies evaluate code readability and legibility by measuring the correctness of the subjects' results (83.3%) or simply asking their opinions (55.6%). Some studies (16.7%) rely exclusively on the latter variable.There are still few studies that monitor subjects' physical signs, such as brain activation regions (5%). Moreover, our study shows that some variables are multi-faceted. For instance, correctness can be measured as the ability to predict the output of a program, answer questions about its behavior, or recall parts of it. These results make it clear that different evaluation approaches require different competencies from subjects, e.g., tracing the program vs. summarizing its goal vs. memorizing its text. To assist researchers in the design of new studies and improve our comprehension of existing ones, we model program comprehension as a learning activity by adapting a preexisting learning taxonomy. This adaptation indicates that some competencies are often exercised in these evaluations whereas others are rarely targeted.
|
2103.10631
|
Eric Schwenker
|
Eric Schwenker, Weixin Jiang, Trevor Spreadbury, Nicola Ferrier,
Oliver Cossairt, Maria K. Y. Chan
|
EXSCLAIM! -- An automated pipeline for the construction of labeled
materials imaging datasets from literature
| null | null | null | null |
cs.IR cond-mat.mtrl-sci
|
http://creativecommons.org/licenses/by/4.0/
|
Due to recent improvements in image resolution and acquisition speed,
materials microscopy is experiencing an explosion of published imaging data.
The standard publication format, while sufficient for traditional data
ingestion scenarios where a select number of images can be critically examined
and curated manually, is not conducive to large-scale data aggregation or
analysis, hindering data sharing and reuse. Most images in publications are
presented as components of a larger figure with their explicit context buried
in the main body or caption text, so even if aggregated, collections of images
with weak or no digitized contextual labels have limited value. To solve the
problem of curating labeled microscopy data from literature, this work
introduces the EXSCLAIM! Python toolkit for the automatic EXtraction,
Separation, and Caption-based natural Language Annotation of IMages from
scientific literature. We highlight the methodology behind the construction of
EXSCLAIM! and demonstrate its ability to extract and label open-source
scientific images at high volume.
|
[
{
"created": "Fri, 19 Mar 2021 04:48:12 GMT",
"version": "v1"
}
] |
2021-03-22
|
[
[
"Schwenker",
"Eric",
""
],
[
"Jiang",
"Weixin",
""
],
[
"Spreadbury",
"Trevor",
""
],
[
"Ferrier",
"Nicola",
""
],
[
"Cossairt",
"Oliver",
""
],
[
"Chan",
"Maria K. Y.",
""
]
] |
Due to recent improvements in image resolution and acquisition speed, materials microscopy is experiencing an explosion of published imaging data. The standard publication format, while sufficient for traditional data ingestion scenarios where a select number of images can be critically examined and curated manually, is not conducive to large-scale data aggregation or analysis, hindering data sharing and reuse. Most images in publications are presented as components of a larger figure with their explicit context buried in the main body or caption text, so even if aggregated, collections of images with weak or no digitized contextual labels have limited value. To solve the problem of curating labeled microscopy data from literature, this work introduces the EXSCLAIM! Python toolkit for the automatic EXtraction, Separation, and Caption-based natural Language Annotation of IMages from scientific literature. We highlight the methodology behind the construction of EXSCLAIM! and demonstrate its ability to extract and label open-source scientific images at high volume.
|
2306.15951
|
Zhiyi Zhang
|
Zhiyi Zhang, Pengfei Zhang, Zhuopin Xu, Qi Wang
|
Reduce Computational Complexity for Convolutional Layers by Skipping
Zeros
| null | null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Convolutional neural networks necessitate good algorithms to reduce
complexity, and sufficient utilization of parallel processors for acceleration.
Within convolutional layers, there are three types of operators: convolution
used in forward propagation, deconvolution and dilated-convolution utilized in
backward propagation. During the execution of these operators, zeros are
typically added to tensors, leading to redundant calculations and unnecessary
strain on hardware. To circumvent these inefficiencies, we propose the C-K-S
algorithm, accompanied by efficient GPU implementations. C-K-S trims filters to
exclude zero-padding. For deconvolution and dilated-convolution, C-K-S
transforms sparse tensors into dense tensors, and standardizes the local
computational rules to simplify the hardware control. The experimental results
demonstrate that C-K-S offers good performance in terms of speed and
convergence, surpassing the capabilities of PyTorch and cuDNN in certain
scenarios.
|
[
{
"created": "Wed, 28 Jun 2023 06:21:22 GMT",
"version": "v1"
},
{
"created": "Wed, 12 Jul 2023 08:18:30 GMT",
"version": "v2"
},
{
"created": "Sun, 5 Nov 2023 12:51:53 GMT",
"version": "v3"
}
] |
2023-11-07
|
[
[
"Zhang",
"Zhiyi",
""
],
[
"Zhang",
"Pengfei",
""
],
[
"Xu",
"Zhuopin",
""
],
[
"Wang",
"Qi",
""
]
] |
Convolutional neural networks necessitate good algorithms to reduce complexity, and sufficient utilization of parallel processors for acceleration. Within convolutional layers, there are three types of operators: convolution used in forward propagation, deconvolution and dilated-convolution utilized in backward propagation. During the execution of these operators, zeros are typically added to tensors, leading to redundant calculations and unnecessary strain on hardware. To circumvent these inefficiencies, we propose the C-K-S algorithm, accompanied by efficient GPU implementations. C-K-S trims filters to exclude zero-padding. For deconvolution and dilated-convolution, C-K-S transforms sparse tensors into dense tensors, and standardizes the local computational rules to simplify the hardware control. The experimental results demonstrate that C-K-S offers good performance in terms of speed and convergence, surpassing the capabilities of PyTorch and cuDNN in certain scenarios.
|
0806.0172
|
Grenville Croll
|
David Chadwick
|
EuSpRIG TEAM work:Tools, Education, Audit, Management
|
7 Pages, 1 Figure
|
Proc. European Spreadsheet Risks Int. Grp. (EuSpRIG) 2003 1-6 ISBN
1 86166 199 1
| null | null |
cs.HC cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Research on spreadsheet errors began over fifteen years ago. During that
time, there has been ample evidence demonstrating that spreadsheet errors are
common and nontrivial. Quite simply, spreadsheet error rates are comparable to
error rates in other human cognitive activities and are caused by fundamental
limitations in human cognition, not mere sloppiness. Nor does ordinary "being
careful" eliminate errors or reduce them to acceptable levels.
|
[
{
"created": "Sun, 1 Jun 2008 21:01:15 GMT",
"version": "v1"
}
] |
2008-06-03
|
[
[
"Chadwick",
"David",
""
]
] |
Research on spreadsheet errors began over fifteen years ago. During that time, there has been ample evidence demonstrating that spreadsheet errors are common and nontrivial. Quite simply, spreadsheet error rates are comparable to error rates in other human cognitive activities and are caused by fundamental limitations in human cognition, not mere sloppiness. Nor does ordinary "being careful" eliminate errors or reduce them to acceptable levels.
|
2404.14183
|
Yuxia Wang
|
Yuxia Wang, Jonibek Mansurov, Petar Ivanov, Jinyan Su, Artem
Shelmanov, Akim Tsvigun, Osama Mohammed Afzal, Tarek Mahmoud, Giovanni
Puccetti, Thomas Arnold, Chenxi Whitehouse, Alham Fikri Aji, Nizar Habash,
Iryna Gurevych, Preslav Nakov
|
SemEval-2024 Task 8: Multidomain, Multimodel and Multilingual
Machine-Generated Text Detection
|
23 pages, 12 tables
|
Proceedings of the 18th International Workshop on Semantic
Evaluation (SemEval-2024)
| null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We present the results and the main findings of SemEval-2024 Task 8:
Multigenerator, Multidomain, and Multilingual Machine-Generated Text Detection.
The task featured three subtasks. Subtask A is a binary classification task
determining whether a text is written by a human or generated by a machine.
This subtask has two tracks: a monolingual track focused solely on English
texts and a multilingual track. Subtask B is to detect the exact source of a
text, discerning whether it is written by a human or generated by a specific
LLM. Subtask C aims to identify the changing point within a text, at which the
authorship transitions from human to machine. The task attracted a large number
of participants: subtask A monolingual (126), subtask A multilingual (59),
subtask B (70), and subtask C (30). In this paper, we present the task, analyze
the results, and discuss the system submissions and the methods they used. For
all subtasks, the best systems used LLMs.
|
[
{
"created": "Mon, 22 Apr 2024 13:56:07 GMT",
"version": "v1"
}
] |
2024-04-23
|
[
[
"Wang",
"Yuxia",
""
],
[
"Mansurov",
"Jonibek",
""
],
[
"Ivanov",
"Petar",
""
],
[
"Su",
"Jinyan",
""
],
[
"Shelmanov",
"Artem",
""
],
[
"Tsvigun",
"Akim",
""
],
[
"Afzal",
"Osama Mohammed",
""
],
[
"Mahmoud",
"Tarek",
""
],
[
"Puccetti",
"Giovanni",
""
],
[
"Arnold",
"Thomas",
""
],
[
"Whitehouse",
"Chenxi",
""
],
[
"Aji",
"Alham Fikri",
""
],
[
"Habash",
"Nizar",
""
],
[
"Gurevych",
"Iryna",
""
],
[
"Nakov",
"Preslav",
""
]
] |
We present the results and the main findings of SemEval-2024 Task 8: Multigenerator, Multidomain, and Multilingual Machine-Generated Text Detection. The task featured three subtasks. Subtask A is a binary classification task determining whether a text is written by a human or generated by a machine. This subtask has two tracks: a monolingual track focused solely on English texts and a multilingual track. Subtask B is to detect the exact source of a text, discerning whether it is written by a human or generated by a specific LLM. Subtask C aims to identify the changing point within a text, at which the authorship transitions from human to machine. The task attracted a large number of participants: subtask A monolingual (126), subtask A multilingual (59), subtask B (70), and subtask C (30). In this paper, we present the task, analyze the results, and discuss the system submissions and the methods they used. For all subtasks, the best systems used LLMs.
|
2403.06201
|
Huanqi Yang
|
Huanqi Yang, Sijie Ji, Rucheng Wu, Weitao Xu
|
Are You Being Tracked? Discover the Power of Zero-Shot Trajectory
Tracing with LLMs!
| null | null | null | null |
cs.CL cs.AI cs.HC cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There is a burgeoning discussion around the capabilities of Large Language
Models (LLMs) in acting as fundamental components that can be seamlessly
incorporated into Artificial Intelligence of Things (AIoT) to interpret complex
trajectories. This study introduces LLMTrack, a model that illustrates how LLMs
can be leveraged for Zero-Shot Trajectory Recognition by employing a novel
single-prompt technique that combines role-play and think step-by-step
methodologies with unprocessed Inertial Measurement Unit (IMU) data. We
evaluate the model using real-world datasets designed to challenge it with
distinct trajectories characterized by indoor and outdoor scenarios. In both
test scenarios, LLMTrack not only meets but exceeds the performance benchmarks
set by traditional machine learning approaches and even contemporary
state-of-the-art deep learning models, all without the requirement of training
on specialized datasets. The results of our research suggest that, with
strategically designed prompts, LLMs can tap into their extensive knowledge
base and are well-equipped to analyze raw sensor data with remarkable
effectiveness.
|
[
{
"created": "Sun, 10 Mar 2024 12:50:35 GMT",
"version": "v1"
}
] |
2024-03-12
|
[
[
"Yang",
"Huanqi",
""
],
[
"Ji",
"Sijie",
""
],
[
"Wu",
"Rucheng",
""
],
[
"Xu",
"Weitao",
""
]
] |
There is a burgeoning discussion around the capabilities of Large Language Models (LLMs) in acting as fundamental components that can be seamlessly incorporated into Artificial Intelligence of Things (AIoT) to interpret complex trajectories. This study introduces LLMTrack, a model that illustrates how LLMs can be leveraged for Zero-Shot Trajectory Recognition by employing a novel single-prompt technique that combines role-play and think step-by-step methodologies with unprocessed Inertial Measurement Unit (IMU) data. We evaluate the model using real-world datasets designed to challenge it with distinct trajectories characterized by indoor and outdoor scenarios. In both test scenarios, LLMTrack not only meets but exceeds the performance benchmarks set by traditional machine learning approaches and even contemporary state-of-the-art deep learning models, all without the requirement of training on specialized datasets. The results of our research suggest that, with strategically designed prompts, LLMs can tap into their extensive knowledge base and are well-equipped to analyze raw sensor data with remarkable effectiveness.
|
2307.02627
|
Jacqueline Harding
|
Jacqueline Harding
|
Proxy Selection in Transitive Proxy Voting
| null |
Social Choice and Welfare 58, 69-99 (2022)
|
10.1007/s00355-021-01345-8
| null |
cs.GT cs.MA econ.GN q-fin.EC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Transitive proxy voting (or "liquid democracy") is a novel form of collective
decision making, often framed as an attractive hybrid of direct and
representative democracy. Although the ideas behind liquid democracy have
garnered widespread support, there have been relatively few attempts to model
it formally. This paper makes three main contributions. First, it proposes a
new social choice-theoretic model of liquid democracy, which is distinguished
by taking a richer formal perspective on the process by which a voter chooses a
proxy. Second, it examines the model from an axiomatic perspective, proving (a)
a proxy vote analogue of May's Theorem and (b) an impossibility result
concerning monotonicity properties in a proxy vote setting. Third, it explores
the topic of manipulation in transitive proxy votes. Two forms of manipulation
specific to the proxy vote setting are defined, and it is shown that
manipulation occurs in strictly more cases in proxy votes than in classical
votes.
|
[
{
"created": "Wed, 5 Jul 2023 19:59:06 GMT",
"version": "v1"
}
] |
2023-07-07
|
[
[
"Harding",
"Jacqueline",
""
]
] |
Transitive proxy voting (or "liquid democracy") is a novel form of collective decision making, often framed as an attractive hybrid of direct and representative democracy. Although the ideas behind liquid democracy have garnered widespread support, there have been relatively few attempts to model it formally. This paper makes three main contributions. First, it proposes a new social choice-theoretic model of liquid democracy, which is distinguished by taking a richer formal perspective on the process by which a voter chooses a proxy. Second, it examines the model from an axiomatic perspective, proving (a) a proxy vote analogue of May's Theorem and (b) an impossibility result concerning monotonicity properties in a proxy vote setting. Third, it explores the topic of manipulation in transitive proxy votes. Two forms of manipulation specific to the proxy vote setting are defined, and it is shown that manipulation occurs in strictly more cases in proxy votes than in classical votes.
|
2001.00088
|
Andrew Perrault
|
Andrew Perrault, Fei Fang, Arunesh Sinha, Milind Tambe
|
AI for Social Impact: Learning and Planning in the Data-to-Deployment
Pipeline
|
AI Magazine, Winter 2020
| null |
10.1609/aimag.v41i4.5296
| null |
cs.CY cs.GT cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the maturing of AI and multiagent systems research, we have a tremendous
opportunity to direct these advances towards addressing complex societal
problems. In pursuit of this goal of AI for Social Impact, we as AI researchers
must go beyond improvements in computational methodology; it is important to
step out in the field to demonstrate social impact. To this end, we focus on
the problems of public safety and security, wildlife conservation, and public
health in low-resource communities, and present research advances in multiagent
systems to address one key cross-cutting challenge: how to effectively deploy
our limited intervention resources in these problem domains. We present case
studies from our deployments around the world as well as lessons learned that
we hope are of use to researchers who are interested in AI for Social Impact.
In pushing this research agenda, we believe AI can indeed play an important
role in fighting social injustice and improving society.
|
[
{
"created": "Mon, 16 Dec 2019 18:10:56 GMT",
"version": "v1"
},
{
"created": "Sun, 12 Jun 2022 18:23:50 GMT",
"version": "v2"
}
] |
2022-06-14
|
[
[
"Perrault",
"Andrew",
""
],
[
"Fang",
"Fei",
""
],
[
"Sinha",
"Arunesh",
""
],
[
"Tambe",
"Milind",
""
]
] |
With the maturing of AI and multiagent systems research, we have a tremendous opportunity to direct these advances towards addressing complex societal problems. In pursuit of this goal of AI for Social Impact, we as AI researchers must go beyond improvements in computational methodology; it is important to step out in the field to demonstrate social impact. To this end, we focus on the problems of public safety and security, wildlife conservation, and public health in low-resource communities, and present research advances in multiagent systems to address one key cross-cutting challenge: how to effectively deploy our limited intervention resources in these problem domains. We present case studies from our deployments around the world as well as lessons learned that we hope are of use to researchers who are interested in AI for Social Impact. In pushing this research agenda, we believe AI can indeed play an important role in fighting social injustice and improving society.
|
cs/0510080
|
Joseph Y. Halpern
|
Peter D. Grunwald and Joseph Y. Halpern
|
When Ignorance is Bliss
|
In Proceedings of the Twentieth Conference on Uncertainty in AI,
2004, pp. 226-234
| null | null | null |
cs.AI cs.LG
| null |
It is commonly-accepted wisdom that more information is better, and that
information should never be ignored. Here we argue, using both a Bayesian and a
non-Bayesian analysis, that in some situations you are better off ignoring
information if your uncertainty is represented by a set of probability
measures. These include situations in which the information is relevant for the
prediction task at hand. In the non-Bayesian analysis, we show how ignoring
information avoids dilation, the phenomenon that additional pieces of
information sometimes lead to an increase in uncertainty. In the Bayesian
analysis, we show that for small sample sizes and certain prediction tasks, the
Bayesian posterior based on a noninformative prior yields worse predictions
than simply ignoring the given information.
|
[
{
"created": "Tue, 25 Oct 2005 22:14:33 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Grunwald",
"Peter D.",
""
],
[
"Halpern",
"Joseph Y.",
""
]
] |
It is commonly-accepted wisdom that more information is better, and that information should never be ignored. Here we argue, using both a Bayesian and a non-Bayesian analysis, that in some situations you are better off ignoring information if your uncertainty is represented by a set of probability measures. These include situations in which the information is relevant for the prediction task at hand. In the non-Bayesian analysis, we show how ignoring information avoids dilation, the phenomenon that additional pieces of information sometimes lead to an increase in uncertainty. In the Bayesian analysis, we show that for small sample sizes and certain prediction tasks, the Bayesian posterior based on a noninformative prior yields worse predictions than simply ignoring the given information.
|
1405.1894
|
Tillmann Miltzow
|
Michael Hoffmann, Vincent Kusters, Tillmann Miltzow
|
Halving Balls in Deterministic Linear Time
| null | null | null | null |
cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Let $\D$ be a set of $n$ pairwise disjoint unit balls in $\R^d$ and $P$ the
set of their center points. A hyperplane $\Hy$ is an \emph{$m$-separator} for
$\D$ if each closed halfspace bounded by $\Hy$ contains at least $m$ points
from $P$. This generalizes the notion of halving hyperplanes, which correspond
to $n/2$-separators. The analogous notion for point sets has been well studied.
Separators have various applications, for instance, in divide-and-conquer
schemes. In such a scheme any ball that is intersected by the separating
hyperplane may still interact with both sides of the partition. Therefore it is
desirable that the separating hyperplane intersects a small number of balls
only. We present three deterministic algorithms to bisect or approximately
bisect a given set of disjoint unit balls by a hyperplane: Firstly, we present
a simple linear-time algorithm to construct an $\alpha n$-separator for balls
in $\R^d$, for any $0<\alpha<1/2$, that intersects at most $cn^{(d-1)/d}$
balls, for some constant $c$ that depends on $d$ and $\alpha$. The number of
intersected balls is best possible up to the constant $c$. Secondly, we present
a near-linear time algorithm to construct an $(n/2-o(n))$-separator in $\R^d$
that intersects $o(n)$ balls. Finally, we give a linear-time algorithm to
construct a halving line in $\R^2$ that intersects $O(n^{(5/6)+\epsilon})$
disks.
Our results improve the runtime of a disk sliding algorithm by Bereg,
Dumitrescu and Pach. In addition, our results improve and derandomize an
algorithm to construct a space decomposition used by L{\"o}ffler and Mulzer to
construct an onion (convex layer) decomposition for imprecise points (any point
resides at an unknown location within a given disk).
|
[
{
"created": "Thu, 8 May 2014 11:47:51 GMT",
"version": "v1"
}
] |
2014-05-09
|
[
[
"Hoffmann",
"Michael",
""
],
[
"Kusters",
"Vincent",
""
],
[
"Miltzow",
"Tillmann",
""
]
] |
Let $\D$ be a set of $n$ pairwise disjoint unit balls in $\R^d$ and $P$ the set of their center points. A hyperplane $\Hy$ is an \emph{$m$-separator} for $\D$ if each closed halfspace bounded by $\Hy$ contains at least $m$ points from $P$. This generalizes the notion of halving hyperplanes, which correspond to $n/2$-separators. The analogous notion for point sets has been well studied. Separators have various applications, for instance, in divide-and-conquer schemes. In such a scheme any ball that is intersected by the separating hyperplane may still interact with both sides of the partition. Therefore it is desirable that the separating hyperplane intersects a small number of balls only. We present three deterministic algorithms to bisect or approximately bisect a given set of disjoint unit balls by a hyperplane: Firstly, we present a simple linear-time algorithm to construct an $\alpha n$-separator for balls in $\R^d$, for any $0<\alpha<1/2$, that intersects at most $cn^{(d-1)/d}$ balls, for some constant $c$ that depends on $d$ and $\alpha$. The number of intersected balls is best possible up to the constant $c$. Secondly, we present a near-linear time algorithm to construct an $(n/2-o(n))$-separator in $\R^d$ that intersects $o(n)$ balls. Finally, we give a linear-time algorithm to construct a halving line in $\R^2$ that intersects $O(n^{(5/6)+\epsilon})$ disks. Our results improve the runtime of a disk sliding algorithm by Bereg, Dumitrescu and Pach. In addition, our results improve and derandomize an algorithm to construct a space decomposition used by L{\"o}ffler and Mulzer to construct an onion (convex layer) decomposition for imprecise points (any point resides at an unknown location within a given disk).
|
1705.02116
|
Shuoyao Wang
|
Shuoyao Wang, Suzhi Bi, Ying Jun (Angela) Zhang, Jianwei Huang
|
Electrical Vehicle Charging Station Profit Maximization: Admission,
Pricing, and Online Scheduling
|
This paper has been submitted to IEEE Transactions on Sustainable
Energy for potential journal publication
| null | null | null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The rapid emergence of electric vehicles (EVs) demands an advanced
infrastructure of publicly accessible charging stations that provide efficient
charging services. In this paper, we propose a new charging station operation
mechanism, the JoAP, which jointly optimizes the EV admission control, pricing,
and charging scheduling to maximize the charging station's profit. More
specifically, by introducing a tandem queueing network model, we analytically
characterize the average charging station profit as a function of the admission
control and pricing policies. Based on the analysis, we characterize the
optimal JoAP algorithm. Through extensive simulations, we demonstrate that the
proposed JoAP algorithm on average can achieve 330% and 531% higher profit than
a widely adopted benchmark method under two representative waiting-time penalty
rates.
|
[
{
"created": "Fri, 5 May 2017 07:59:36 GMT",
"version": "v1"
},
{
"created": "Thu, 7 Sep 2017 08:18:08 GMT",
"version": "v2"
}
] |
2017-09-08
|
[
[
"Wang",
"Shuoyao",
"",
"Angela"
],
[
"Bi",
"Suzhi",
"",
"Angela"
],
[
"Jun",
"Ying",
"",
"Angela"
],
[
"Zhang",
"",
""
],
[
"Huang",
"Jianwei",
""
]
] |
The rapid emergence of electric vehicles (EVs) demands an advanced infrastructure of publicly accessible charging stations that provide efficient charging services. In this paper, we propose a new charging station operation mechanism, the JoAP, which jointly optimizes the EV admission control, pricing, and charging scheduling to maximize the charging station's profit. More specifically, by introducing a tandem queueing network model, we analytically characterize the average charging station profit as a function of the admission control and pricing policies. Based on the analysis, we characterize the optimal JoAP algorithm. Through extensive simulations, we demonstrate that the proposed JoAP algorithm on average can achieve 330% and 531% higher profit than a widely adopted benchmark method under two representative waiting-time penalty rates.
|
2012.08740
|
Yuhang Yao
|
Yuhang Yao, Carlee Joe-Wong
|
Interpretable Clustering on Dynamic Graphs with Recurrent Graph Neural
Networks
|
AAAI 2021
|
AAAI 2021: 4608-4616
| null | null |
cs.LG cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the problem of clustering nodes in a dynamic graph, where the
connections between nodes and nodes' cluster memberships may change over time,
e.g., due to community migration. We first propose a dynamic stochastic block
model that captures these changes, and a simple decay-based clustering
algorithm that clusters nodes based on weighted connections between them, where
the weight decreases at a fixed rate over time. This decay rate can then be
interpreted as signifying the importance of including historical connection
information in the clustering. However, the optimal decay rate may differ for
clusters with different rates of turnover. We characterize the optimal decay
rate for each cluster and propose a clustering method that achieves almost
exact recovery of the true clusters. We then demonstrate the efficacy of our
clustering algorithm with optimized decay rates on simulated graph data.
Recurrent neural networks (RNNs), a popular algorithm for sequence learning,
use a similar decay-based method, and we use this insight to propose two new
RNN-GCN (graph convolutional network) architectures for semi-supervised graph
clustering. We finally demonstrate that the proposed architectures perform well
on real data compared to state-of-the-art graph clustering algorithms.
|
[
{
"created": "Wed, 16 Dec 2020 04:31:19 GMT",
"version": "v1"
},
{
"created": "Tue, 22 Jun 2021 20:13:53 GMT",
"version": "v2"
}
] |
2021-06-24
|
[
[
"Yao",
"Yuhang",
""
],
[
"Joe-Wong",
"Carlee",
""
]
] |
We study the problem of clustering nodes in a dynamic graph, where the connections between nodes and nodes' cluster memberships may change over time, e.g., due to community migration. We first propose a dynamic stochastic block model that captures these changes, and a simple decay-based clustering algorithm that clusters nodes based on weighted connections between them, where the weight decreases at a fixed rate over time. This decay rate can then be interpreted as signifying the importance of including historical connection information in the clustering. However, the optimal decay rate may differ for clusters with different rates of turnover. We characterize the optimal decay rate for each cluster and propose a clustering method that achieves almost exact recovery of the true clusters. We then demonstrate the efficacy of our clustering algorithm with optimized decay rates on simulated graph data. Recurrent neural networks (RNNs), a popular algorithm for sequence learning, use a similar decay-based method, and we use this insight to propose two new RNN-GCN (graph convolutional network) architectures for semi-supervised graph clustering. We finally demonstrate that the proposed architectures perform well on real data compared to state-of-the-art graph clustering algorithms.
|
1009.5346
|
Murugesan Kuttikrishnan
|
Murugesan Kuttikrishnan
|
A Novel Approach for Cardiac Disease Prediction and Classification Using
Intelligent Agents
|
8 pages 2 figures and 7 tables
|
(IJCSIS) International Journal of Computer Science and Information
Security, Vol. 8, No. 5, August 2010
| null | null |
cs.MA cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The goal is to develop a novel approach for cardiac disease prediction and
diagnosis using intelligent agents. Initially the symptoms are preprocessed
using filter and wrapper based agents. The filter removes the missing or
irrelevant symptoms. Wrapper is used to extract the data in the data set
according to the threshold limits. Dependency of each symptom is identified
using dependency checker agent. The classification is based on the prior and
posterior probability of the symptoms with the evidence value. Finally the
symptoms are classified in to five classes namely absence, starting, mild,
moderate and serious. Using the cooperative approach the cardiac problem is
solved and verified.
|
[
{
"created": "Mon, 27 Sep 2010 18:20:56 GMT",
"version": "v1"
}
] |
2010-09-28
|
[
[
"Kuttikrishnan",
"Murugesan",
""
]
] |
The goal is to develop a novel approach for cardiac disease prediction and diagnosis using intelligent agents. Initially the symptoms are preprocessed using filter and wrapper based agents. The filter removes the missing or irrelevant symptoms. Wrapper is used to extract the data in the data set according to the threshold limits. Dependency of each symptom is identified using dependency checker agent. The classification is based on the prior and posterior probability of the symptoms with the evidence value. Finally the symptoms are classified in to five classes namely absence, starting, mild, moderate and serious. Using the cooperative approach the cardiac problem is solved and verified.
|
2302.05706
|
Wenxuan Wang
|
Wenxuan Wang, Jen-tse Huang, Weibin Wu, Jianping Zhang, Yizhan Huang,
Shuqing Li, Pinjia He, Michael Lyu
|
MTTM: Metamorphic Testing for Textual Content Moderation Software
|
Accepted by ICSE 2023
| null | null | null |
cs.CL cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The exponential growth of social media platforms such as Twitter and Facebook
has revolutionized textual communication and textual content publication in
human society. However, they have been increasingly exploited to propagate
toxic content, such as hate speech, malicious advertisement, and pornography,
which can lead to highly negative impacts (e.g., harmful effects on teen mental
health). Researchers and practitioners have been enthusiastically developing
and extensively deploying textual content moderation software to address this
problem. However, we find that malicious users can evade moderation by changing
only a few words in the toxic content. Moreover, modern content moderation
software performance against malicious inputs remains underexplored. To this
end, we propose MTTM, a Metamorphic Testing framework for Textual content
Moderation software. Specifically, we conduct a pilot study on 2,000 text
messages collected from real users and summarize eleven metamorphic relations
across three perturbation levels: character, word, and sentence. MTTM employs
these metamorphic relations on toxic textual contents to generate test cases,
which are still toxic yet likely to evade moderation. In our evaluation, we
employ MTTM to test three commercial textual content moderation software and
two state-of-the-art moderation algorithms against three kinds of toxic
content. The results show that MTTM achieves up to 83.9%, 51%, and 82.5% error
finding rates (EFR) when testing commercial moderation software provided by
Google, Baidu, and Huawei, respectively, and it obtains up to 91.2% EFR when
testing the state-of-the-art algorithms from the academy. In addition, we
leverage the test cases generated by MTTM to retrain the model we explored,
which largely improves model robustness (0% to 5.9% EFR) while maintaining the
accuracy on the original test set.
|
[
{
"created": "Sat, 11 Feb 2023 14:44:39 GMT",
"version": "v1"
}
] |
2023-02-14
|
[
[
"Wang",
"Wenxuan",
""
],
[
"Huang",
"Jen-tse",
""
],
[
"Wu",
"Weibin",
""
],
[
"Zhang",
"Jianping",
""
],
[
"Huang",
"Yizhan",
""
],
[
"Li",
"Shuqing",
""
],
[
"He",
"Pinjia",
""
],
[
"Lyu",
"Michael",
""
]
] |
The exponential growth of social media platforms such as Twitter and Facebook has revolutionized textual communication and textual content publication in human society. However, they have been increasingly exploited to propagate toxic content, such as hate speech, malicious advertisement, and pornography, which can lead to highly negative impacts (e.g., harmful effects on teen mental health). Researchers and practitioners have been enthusiastically developing and extensively deploying textual content moderation software to address this problem. However, we find that malicious users can evade moderation by changing only a few words in the toxic content. Moreover, modern content moderation software performance against malicious inputs remains underexplored. To this end, we propose MTTM, a Metamorphic Testing framework for Textual content Moderation software. Specifically, we conduct a pilot study on 2,000 text messages collected from real users and summarize eleven metamorphic relations across three perturbation levels: character, word, and sentence. MTTM employs these metamorphic relations on toxic textual contents to generate test cases, which are still toxic yet likely to evade moderation. In our evaluation, we employ MTTM to test three commercial textual content moderation software and two state-of-the-art moderation algorithms against three kinds of toxic content. The results show that MTTM achieves up to 83.9%, 51%, and 82.5% error finding rates (EFR) when testing commercial moderation software provided by Google, Baidu, and Huawei, respectively, and it obtains up to 91.2% EFR when testing the state-of-the-art algorithms from the academy. In addition, we leverage the test cases generated by MTTM to retrain the model we explored, which largely improves model robustness (0% to 5.9% EFR) while maintaining the accuracy on the original test set.
|
2312.03173
|
Jaromir Savelka
|
Jacob Doughty, Zipiao Wan, Anishka Bompelli, Jubahed Qayum, Taozhi
Wang, Juran Zhang, Yujia Zheng, Aidan Doyle, Pragnya Sridhar, Arav Agarwal,
Christopher Bogart, Eric Keylor, Can Kultur, Jaromir Savelka, Majd Sakr
|
A Comparative Study of AI-Generated (GPT-4) and Human-crafted MCQs in
Programming Education
| null | null |
10.1145/3636243.3636256
| null |
cs.CY cs.AI cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
There is a constant need for educators to develop and maintain effective
up-to-date assessments. While there is a growing body of research in computing
education on utilizing large language models (LLMs) in generation and
engagement with coding exercises, the use of LLMs for generating programming
MCQs has not been extensively explored. We analyzed the capability of GPT-4 to
produce multiple-choice questions (MCQs) aligned with specific learning
objectives (LOs) from Python programming classes in higher education.
Specifically, we developed an LLM-powered (GPT-4) system for generation of MCQs
from high-level course context and module-level LOs. We evaluated 651
LLM-generated and 449 human-crafted MCQs aligned to 246 LOs from 6 Python
courses. We found that GPT-4 was capable of producing MCQs with clear language,
a single correct choice, and high-quality distractors. We also observed that
the generated MCQs appeared to be well-aligned with the LOs. Our findings can
be leveraged by educators wishing to take advantage of the state-of-the-art
generative models to support MCQ authoring efforts.
|
[
{
"created": "Tue, 5 Dec 2023 22:29:43 GMT",
"version": "v1"
}
] |
2023-12-07
|
[
[
"Doughty",
"Jacob",
""
],
[
"Wan",
"Zipiao",
""
],
[
"Bompelli",
"Anishka",
""
],
[
"Qayum",
"Jubahed",
""
],
[
"Wang",
"Taozhi",
""
],
[
"Zhang",
"Juran",
""
],
[
"Zheng",
"Yujia",
""
],
[
"Doyle",
"Aidan",
""
],
[
"Sridhar",
"Pragnya",
""
],
[
"Agarwal",
"Arav",
""
],
[
"Bogart",
"Christopher",
""
],
[
"Keylor",
"Eric",
""
],
[
"Kultur",
"Can",
""
],
[
"Savelka",
"Jaromir",
""
],
[
"Sakr",
"Majd",
""
]
] |
There is a constant need for educators to develop and maintain effective up-to-date assessments. While there is a growing body of research in computing education on utilizing large language models (LLMs) in generation and engagement with coding exercises, the use of LLMs for generating programming MCQs has not been extensively explored. We analyzed the capability of GPT-4 to produce multiple-choice questions (MCQs) aligned with specific learning objectives (LOs) from Python programming classes in higher education. Specifically, we developed an LLM-powered (GPT-4) system for generation of MCQs from high-level course context and module-level LOs. We evaluated 651 LLM-generated and 449 human-crafted MCQs aligned to 246 LOs from 6 Python courses. We found that GPT-4 was capable of producing MCQs with clear language, a single correct choice, and high-quality distractors. We also observed that the generated MCQs appeared to be well-aligned with the LOs. Our findings can be leveraged by educators wishing to take advantage of the state-of-the-art generative models to support MCQ authoring efforts.
|
2210.05896
|
Zhijie Wang
|
Shuangzhi Li, Zhijie Wang, Felix Juefei-Xu, Qing Guo, Xingyu Li and
Lei Ma
|
Common Corruption Robustness of Point Cloud Detectors: Benchmark and
Enhancement
|
16 pages, 6 figures
| null | null | null |
cs.CV cs.LG cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Object detection through LiDAR-based point cloud has recently been important
in autonomous driving. Although achieving high accuracy on public benchmarks,
the state-of-the-art detectors may still go wrong and cause a heavy loss due to
the widespread corruptions in the real world like rain, snow, sensor noise,
etc. Nevertheless, there is a lack of a large-scale dataset covering diverse
scenes and realistic corruption types with different severities to develop
practical and robust point cloud detectors, which is challenging due to the
heavy collection costs. To alleviate the challenge and start the first step for
robust point cloud detection, we propose the physical-aware simulation methods
to generate degraded point clouds under different real-world common
corruptions. Then, for the first attempt, we construct a benchmark based on the
physical-aware common corruptions for point cloud detectors, which contains a
total of 1,122,150 examples covering 7,481 scenes, 25 common corruption types,
and 6 severities. With such a novel benchmark, we conduct extensive empirical
studies on 8 state-of-the-art detectors that contain 6 different detection
frameworks. Thus we get several insight observations revealing the
vulnerabilities of the detectors and indicating the enhancement directions.
Moreover, we further study the effectiveness of existing robustness enhancement
methods based on data augmentation and data denoising. The benchmark can
potentially be a new platform for evaluating point cloud detectors, opening a
door for developing novel robustness enhancement methods.
|
[
{
"created": "Wed, 12 Oct 2022 03:23:35 GMT",
"version": "v1"
}
] |
2022-10-13
|
[
[
"Li",
"Shuangzhi",
""
],
[
"Wang",
"Zhijie",
""
],
[
"Juefei-Xu",
"Felix",
""
],
[
"Guo",
"Qing",
""
],
[
"Li",
"Xingyu",
""
],
[
"Ma",
"Lei",
""
]
] |
Object detection through LiDAR-based point cloud has recently been important in autonomous driving. Although achieving high accuracy on public benchmarks, the state-of-the-art detectors may still go wrong and cause a heavy loss due to the widespread corruptions in the real world like rain, snow, sensor noise, etc. Nevertheless, there is a lack of a large-scale dataset covering diverse scenes and realistic corruption types with different severities to develop practical and robust point cloud detectors, which is challenging due to the heavy collection costs. To alleviate the challenge and start the first step for robust point cloud detection, we propose the physical-aware simulation methods to generate degraded point clouds under different real-world common corruptions. Then, for the first attempt, we construct a benchmark based on the physical-aware common corruptions for point cloud detectors, which contains a total of 1,122,150 examples covering 7,481 scenes, 25 common corruption types, and 6 severities. With such a novel benchmark, we conduct extensive empirical studies on 8 state-of-the-art detectors that contain 6 different detection frameworks. Thus we get several insight observations revealing the vulnerabilities of the detectors and indicating the enhancement directions. Moreover, we further study the effectiveness of existing robustness enhancement methods based on data augmentation and data denoising. The benchmark can potentially be a new platform for evaluating point cloud detectors, opening a door for developing novel robustness enhancement methods.
|
1012.4870
|
Ying Ding
|
Erjia Yan, Ying Ding (School of Library and Information Science,
Indiana University, Bloomington, IN, United States)
|
Discovering author impact: A PageRank perspective
|
17 pages, 5 figures
| null | null | null |
cs.DL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This article provides an alternative perspective for measuring author impact
by applying PageRank algorithm to a coauthorship network. A weighted PageRank
algorithm considering citation and coauthorship network topology is proposed.
We test this algorithm under different damping factors by evaluating author
impact in the informetrics research community. In addition, we also compare
this weighted PageRank with the h-index, citation, and program committee (PC)
membership of the International Society for Scientometrics and Informetrics
(ISSI) conferences. Findings show that this weighted PageRank algorithm
provides reliable results in measuring author impact.
|
[
{
"created": "Wed, 22 Dec 2010 03:05:20 GMT",
"version": "v1"
}
] |
2010-12-23
|
[
[
"Yan",
"Erjia",
"",
"School of Library and Information Science,\n Indiana University, Bloomington, IN, United States"
],
[
"Ding",
"Ying",
"",
"School of Library and Information Science,\n Indiana University, Bloomington, IN, United States"
]
] |
This article provides an alternative perspective for measuring author impact by applying PageRank algorithm to a coauthorship network. A weighted PageRank algorithm considering citation and coauthorship network topology is proposed. We test this algorithm under different damping factors by evaluating author impact in the informetrics research community. In addition, we also compare this weighted PageRank with the h-index, citation, and program committee (PC) membership of the International Society for Scientometrics and Informetrics (ISSI) conferences. Findings show that this weighted PageRank algorithm provides reliable results in measuring author impact.
|
2206.07875
|
Risheng Liu
|
Risheng Liu, Xuan Liu, Shangzhi Zeng, Jin Zhang and Yixuan Zhang
|
Optimization-Derived Learning with Essential Convergence Analysis of
Training and Hyper-training
|
Accepted by ICML 2022
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Recently, Optimization-Derived Learning (ODL) has attracted attention from
learning and vision areas, which designs learning models from the perspective
of optimization. However, previous ODL approaches regard the training and
hyper-training procedures as two separated stages, meaning that the
hyper-training variables have to be fixed during the training process, and thus
it is also impossible to simultaneously obtain the convergence of training and
hyper-training variables. In this work, we design a Generalized
Krasnoselskii-Mann (GKM) scheme based on fixed-point iterations as our
fundamental ODL module, which unifies existing ODL methods as special cases.
Under the GKM scheme, a Bilevel Meta Optimization (BMO) algorithmic framework
is constructed to solve the optimal training and hyper-training variables
together. We rigorously prove the essential joint convergence of the
fixed-point iteration for training and the process of optimizing
hyper-parameters for hyper-training, both on the approximation quality, and on
the stationary analysis. Experiments demonstrate the efficiency of BMO with
competitive performance on sparse coding and real-world applications such as
image deconvolution and rain streak removal.
|
[
{
"created": "Thu, 16 Jun 2022 01:50:25 GMT",
"version": "v1"
}
] |
2022-06-17
|
[
[
"Liu",
"Risheng",
""
],
[
"Liu",
"Xuan",
""
],
[
"Zeng",
"Shangzhi",
""
],
[
"Zhang",
"Jin",
""
],
[
"Zhang",
"Yixuan",
""
]
] |
Recently, Optimization-Derived Learning (ODL) has attracted attention from learning and vision areas, which designs learning models from the perspective of optimization. However, previous ODL approaches regard the training and hyper-training procedures as two separated stages, meaning that the hyper-training variables have to be fixed during the training process, and thus it is also impossible to simultaneously obtain the convergence of training and hyper-training variables. In this work, we design a Generalized Krasnoselskii-Mann (GKM) scheme based on fixed-point iterations as our fundamental ODL module, which unifies existing ODL methods as special cases. Under the GKM scheme, a Bilevel Meta Optimization (BMO) algorithmic framework is constructed to solve the optimal training and hyper-training variables together. We rigorously prove the essential joint convergence of the fixed-point iteration for training and the process of optimizing hyper-parameters for hyper-training, both on the approximation quality, and on the stationary analysis. Experiments demonstrate the efficiency of BMO with competitive performance on sparse coding and real-world applications such as image deconvolution and rain streak removal.
|
1710.11381
|
Yannic Kilcher
|
Yannic Kilcher, Aurelien Lucchi, Thomas Hofmann
|
Semantic Interpolation in Implicit Models
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In implicit models, one often interpolates between sampled points in latent
space. As we show in this paper, care needs to be taken to match-up the
distributional assumptions on code vectors with the geometry of the
interpolating paths. Otherwise, typical assumptions about the quality and
semantics of in-between points may not be justified. Based on our analysis we
propose to modify the prior code distribution to put significantly more
probability mass closer to the origin. As a result, linear interpolation paths
are not only shortest paths, but they are also guaranteed to pass through
high-density regions, irrespective of the dimensionality of the latent space.
Experiments on standard benchmark image datasets demonstrate clear visual
improvements in the quality of the generated samples and exhibit more
meaningful interpolation paths.
|
[
{
"created": "Tue, 31 Oct 2017 09:11:17 GMT",
"version": "v1"
},
{
"created": "Wed, 3 Jan 2018 08:56:11 GMT",
"version": "v2"
},
{
"created": "Fri, 2 Feb 2018 09:56:08 GMT",
"version": "v3"
}
] |
2018-02-05
|
[
[
"Kilcher",
"Yannic",
""
],
[
"Lucchi",
"Aurelien",
""
],
[
"Hofmann",
"Thomas",
""
]
] |
In implicit models, one often interpolates between sampled points in latent space. As we show in this paper, care needs to be taken to match-up the distributional assumptions on code vectors with the geometry of the interpolating paths. Otherwise, typical assumptions about the quality and semantics of in-between points may not be justified. Based on our analysis we propose to modify the prior code distribution to put significantly more probability mass closer to the origin. As a result, linear interpolation paths are not only shortest paths, but they are also guaranteed to pass through high-density regions, irrespective of the dimensionality of the latent space. Experiments on standard benchmark image datasets demonstrate clear visual improvements in the quality of the generated samples and exhibit more meaningful interpolation paths.
|
2205.06632
|
In\^es Terrucha
|
In\^es Terrucha, Elias Fern\'andez Domingos, Francisco C. Santos,
Pieter Simoens and Tom Lenaerts
|
The art of compensation: how hybrid teams solve collective risk dilemmas
|
8 pages, 5 figures, accepted at workshop ALA 2022 (AAMAS 2022)
| null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
It is widely known how the human ability to cooperate has influenced the
thriving of our species. However, as we move towards a hybrid human-machine
future, it is still unclear how the introduction of AI agents in our social
interactions will affect this cooperative capacity. Within the context of the
one-shot collective risk dilemma, where enough members of a group must
cooperate in order to avoid a collective disaster, we study the evolutionary
dynamics of cooperation in a hybrid population made of both adaptive and
fixed-behavior agents. Specifically, we show how the first learn to adapt their
behavior to compensate for the behavior of the latter. The less the
(artificially) fixed agents cooperate, the more the adaptive population is
motivated to cooperate, and vice-versa, especially when the risk is higher. By
pinpointing how adaptive agents avoid their share of costly cooperation if the
fixed-behavior agents implement a cooperative policy, our work hints towards an
unbalanced hybrid world. On one hand, this means that introducing cooperative
AI agents within our society might unburden human efforts. Nevertheless, it is
important to note that costless artificial cooperation might not be realistic,
and more than deploying AI systems that carry the cooperative effort, we must
focus on mechanisms that nudge shared cooperation among all members in the
hybrid system.
|
[
{
"created": "Fri, 13 May 2022 13:23:42 GMT",
"version": "v1"
}
] |
2022-05-16
|
[
[
"Terrucha",
"Inês",
""
],
[
"Domingos",
"Elias Fernández",
""
],
[
"Santos",
"Francisco C.",
""
],
[
"Simoens",
"Pieter",
""
],
[
"Lenaerts",
"Tom",
""
]
] |
It is widely known how the human ability to cooperate has influenced the thriving of our species. However, as we move towards a hybrid human-machine future, it is still unclear how the introduction of AI agents in our social interactions will affect this cooperative capacity. Within the context of the one-shot collective risk dilemma, where enough members of a group must cooperate in order to avoid a collective disaster, we study the evolutionary dynamics of cooperation in a hybrid population made of both adaptive and fixed-behavior agents. Specifically, we show how the first learn to adapt their behavior to compensate for the behavior of the latter. The less the (artificially) fixed agents cooperate, the more the adaptive population is motivated to cooperate, and vice-versa, especially when the risk is higher. By pinpointing how adaptive agents avoid their share of costly cooperation if the fixed-behavior agents implement a cooperative policy, our work hints towards an unbalanced hybrid world. On one hand, this means that introducing cooperative AI agents within our society might unburden human efforts. Nevertheless, it is important to note that costless artificial cooperation might not be realistic, and more than deploying AI systems that carry the cooperative effort, we must focus on mechanisms that nudge shared cooperation among all members in the hybrid system.
|
2112.04163
|
Jiayi Guo
|
Jiayi Guo, Chaoqun Du, Jiangshan Wang, Huijuan Huang, Pengfei Wan, Gao
Huang
|
Assessing a Single Image in Reference-Guided Image Synthesis
|
Accepted by AAAI 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Assessing the performance of Generative Adversarial Networks (GANs) has been
an important topic due to its practical significance. Although several
evaluation metrics have been proposed, they generally assess the quality of the
whole generated image distribution. For Reference-guided Image Synthesis (RIS)
tasks, i.e., rendering a source image in the style of another reference image,
where assessing the quality of a single generated image is crucial, these
metrics are not applicable. In this paper, we propose a general learning-based
framework, Reference-guided Image Synthesis Assessment (RISA) to quantitatively
evaluate the quality of a single generated image. Notably, the training of RISA
does not require human annotations. In specific, the training data for RISA are
acquired by the intermediate models from the training procedure in RIS, and
weakly annotated by the number of models' iterations, based on the positive
correlation between image quality and iterations. As this annotation is too
coarse as a supervision signal, we introduce two techniques: 1) a pixel-wise
interpolation scheme to refine the coarse labels, and 2) multiple binary
classifiers to replace a na\"ive regressor. In addition, an unsupervised
contrastive loss is introduced to effectively capture the style similarity
between a generated image and its reference image. Empirical results on various
datasets demonstrate that RISA is highly consistent with human preference and
transfers well across models.
|
[
{
"created": "Wed, 8 Dec 2021 08:22:14 GMT",
"version": "v1"
}
] |
2021-12-09
|
[
[
"Guo",
"Jiayi",
""
],
[
"Du",
"Chaoqun",
""
],
[
"Wang",
"Jiangshan",
""
],
[
"Huang",
"Huijuan",
""
],
[
"Wan",
"Pengfei",
""
],
[
"Huang",
"Gao",
""
]
] |
Assessing the performance of Generative Adversarial Networks (GANs) has been an important topic due to its practical significance. Although several evaluation metrics have been proposed, they generally assess the quality of the whole generated image distribution. For Reference-guided Image Synthesis (RIS) tasks, i.e., rendering a source image in the style of another reference image, where assessing the quality of a single generated image is crucial, these metrics are not applicable. In this paper, we propose a general learning-based framework, Reference-guided Image Synthesis Assessment (RISA) to quantitatively evaluate the quality of a single generated image. Notably, the training of RISA does not require human annotations. In specific, the training data for RISA are acquired by the intermediate models from the training procedure in RIS, and weakly annotated by the number of models' iterations, based on the positive correlation between image quality and iterations. As this annotation is too coarse as a supervision signal, we introduce two techniques: 1) a pixel-wise interpolation scheme to refine the coarse labels, and 2) multiple binary classifiers to replace a na\"ive regressor. In addition, an unsupervised contrastive loss is introduced to effectively capture the style similarity between a generated image and its reference image. Empirical results on various datasets demonstrate that RISA is highly consistent with human preference and transfers well across models.
|
1910.10451
|
Manuel Steve Mbankeu Patchou
|
Manuel Patchou and Benjamin Sliwa and Christian Wietfeld
|
Unmanned Aerial Vehicles in Logistics: Efficiency Gains and
Communication Performance of Hybrid Combinations of Ground and Aerial
Vehicles
| null | null | null | null |
cs.NI eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unmanned Aerial Vehicles (UAVs) have drastically gained popularity in various
Intelligent Transportation System (ITS) applications to improve the safety and
efficiency of transportation systems. In this context, the combination of
ground vehicles, such as delivery trucks, with drones to assist in the last
mile pick-up and delivery of the parcels has been recently proposed. While
aerial vehicles promise increased efficiency based on flexible routes and
parallelized operation, highly reliable wireless communication is also required
for the control and coordination of potentially many drones acting in a
self-organized way. In this paper, we analyze the improvements procured by
drone usage in parcel delivery compared to traditional delivery and propose a
simulation framework to further quantify the efficiency gains of the parcel
delivery logistics and to analyze the performance of different wireless
communications options. To this end, we consider a heterogeneous vehicle
routing problem with various constraints. We consider two approaches regarding
the dispatching and recovery of drones and evaluate their benefits as opposed
to parcel delivery with a classic truck only. Furthermore, we compare two
networking technologies for enabling coordination of the self-organizing teams
of drones with a realistically modeled environment: one approach relying on
base station oriented Long Term Evolution (LTE) vs. a more decentralized
Cellular Vehicle-to-Everything (C-V2X) solution. The results show time savings
of nearly 40% can be achieved through drone usage and that the negative impact
of urban shadowing on network communications in the base station oriented LTE
approach can be compensated by leveraging decentralized C-V2X communications
|
[
{
"created": "Wed, 23 Oct 2019 10:35:42 GMT",
"version": "v1"
},
{
"created": "Thu, 7 Nov 2019 14:42:37 GMT",
"version": "v2"
}
] |
2019-11-11
|
[
[
"Patchou",
"Manuel",
""
],
[
"Sliwa",
"Benjamin",
""
],
[
"Wietfeld",
"Christian",
""
]
] |
Unmanned Aerial Vehicles (UAVs) have drastically gained popularity in various Intelligent Transportation System (ITS) applications to improve the safety and efficiency of transportation systems. In this context, the combination of ground vehicles, such as delivery trucks, with drones to assist in the last mile pick-up and delivery of the parcels has been recently proposed. While aerial vehicles promise increased efficiency based on flexible routes and parallelized operation, highly reliable wireless communication is also required for the control and coordination of potentially many drones acting in a self-organized way. In this paper, we analyze the improvements procured by drone usage in parcel delivery compared to traditional delivery and propose a simulation framework to further quantify the efficiency gains of the parcel delivery logistics and to analyze the performance of different wireless communications options. To this end, we consider a heterogeneous vehicle routing problem with various constraints. We consider two approaches regarding the dispatching and recovery of drones and evaluate their benefits as opposed to parcel delivery with a classic truck only. Furthermore, we compare two networking technologies for enabling coordination of the self-organizing teams of drones with a realistically modeled environment: one approach relying on base station oriented Long Term Evolution (LTE) vs. a more decentralized Cellular Vehicle-to-Everything (C-V2X) solution. The results show time savings of nearly 40% can be achieved through drone usage and that the negative impact of urban shadowing on network communications in the base station oriented LTE approach can be compensated by leveraging decentralized C-V2X communications
|
0712.2638
|
Steve Oudot
|
Fr\'ed\'eric Chazal (INRIA Sophia Antipolis), Steve Oudot (INRIA
Sophia Antipolis)
|
Towards Persistence-Based Reconstruction in Euclidean Spaces
| null | null | null | null |
cs.CG math.AT
| null |
Manifold reconstruction has been extensively studied for the last decade or
so, especially in two and three dimensions. Recently, significant improvements
were made in higher dimensions, leading to new methods to reconstruct large
classes of compact subsets of Euclidean space $\R^d$. However, the complexities
of these methods scale up exponentially with d, which makes them impractical in
medium or high dimensions, even for handling low-dimensional submanifolds. In
this paper, we introduce a novel approach that stands in-between classical
reconstruction and topological estimation, and whose complexity scales up with
the intrinsic dimension of the data. Specifically, when the data points are
sufficiently densely sampled from a smooth $m$-submanifold of $\R^d$, our
method retrieves the homology of the submanifold in time at most $c(m)n^5$,
where $n$ is the size of the input and $c(m)$ is a constant depending solely on
$m$. It can also provably well handle a wide range of compact subsets of
$\R^d$, though with worse complexities. Along the way to proving the
correctness of our algorithm, we obtain new results on \v{C}ech, Rips, and
witness complex filtrations in Euclidean spaces.
|
[
{
"created": "Mon, 17 Dec 2007 06:30:08 GMT",
"version": "v1"
},
{
"created": "Tue, 18 Dec 2007 10:26:34 GMT",
"version": "v2"
}
] |
2007-12-18
|
[
[
"Chazal",
"Frédéric",
"",
"INRIA Sophia Antipolis"
],
[
"Oudot",
"Steve",
"",
"INRIA\n Sophia Antipolis"
]
] |
Manifold reconstruction has been extensively studied for the last decade or so, especially in two and three dimensions. Recently, significant improvements were made in higher dimensions, leading to new methods to reconstruct large classes of compact subsets of Euclidean space $\R^d$. However, the complexities of these methods scale up exponentially with d, which makes them impractical in medium or high dimensions, even for handling low-dimensional submanifolds. In this paper, we introduce a novel approach that stands in-between classical reconstruction and topological estimation, and whose complexity scales up with the intrinsic dimension of the data. Specifically, when the data points are sufficiently densely sampled from a smooth $m$-submanifold of $\R^d$, our method retrieves the homology of the submanifold in time at most $c(m)n^5$, where $n$ is the size of the input and $c(m)$ is a constant depending solely on $m$. It can also provably well handle a wide range of compact subsets of $\R^d$, though with worse complexities. Along the way to proving the correctness of our algorithm, we obtain new results on \v{C}ech, Rips, and witness complex filtrations in Euclidean spaces.
|
2406.10415
|
Terrence Neumann
|
Terrence Neumann and Bryan Jones
|
PRISM: A Design Framework for Open-Source Foundation Model Safety
| null | null | null | null |
cs.CY cs.AI cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
The rapid advancement of open-source foundation models has brought
transparency and accessibility to this groundbreaking technology. However, this
openness has also enabled the development of highly-capable, unsafe models, as
exemplified by recent instances such as WormGPT and FraudGPT, which are
specifically designed to facilitate criminal activity. As the capabilities of
open foundation models continue to grow, potentially outpacing those of
closed-source models, the risk of misuse by bad actors poses an increasingly
serious threat to society. This paper addresses the critical question of how
open foundation model developers should approach model safety in light of these
challenges. Our analysis reveals that open-source foundation model companies
often provide less restrictive acceptable use policies (AUPs) compared to their
closed-source counterparts, likely due to the inherent difficulties in
enforcing such policies once the models are released. To tackle this issue, we
introduce PRISM, a design framework for open-source foundation model safety
that emphasizes Private, Robust, Independent Safety measures, at Minimal
marginal cost of compute. The PRISM framework proposes the use of modular
functions that moderate prompts and outputs independently of the core language
model, offering a more adaptable and resilient approach to safety compared to
the brittle reinforcement learning methods currently used for value alignment.
By focusing on identifying AUP violations and engaging the developer community
in establishing consensus around safety design decisions, PRISM aims to create
a safer open-source ecosystem that maximizes the potential of these powerful
technologies while minimizing the risks to individuals and society as a whole.
|
[
{
"created": "Fri, 14 Jun 2024 21:26:15 GMT",
"version": "v1"
}
] |
2024-06-18
|
[
[
"Neumann",
"Terrence",
""
],
[
"Jones",
"Bryan",
""
]
] |
The rapid advancement of open-source foundation models has brought transparency and accessibility to this groundbreaking technology. However, this openness has also enabled the development of highly-capable, unsafe models, as exemplified by recent instances such as WormGPT and FraudGPT, which are specifically designed to facilitate criminal activity. As the capabilities of open foundation models continue to grow, potentially outpacing those of closed-source models, the risk of misuse by bad actors poses an increasingly serious threat to society. This paper addresses the critical question of how open foundation model developers should approach model safety in light of these challenges. Our analysis reveals that open-source foundation model companies often provide less restrictive acceptable use policies (AUPs) compared to their closed-source counterparts, likely due to the inherent difficulties in enforcing such policies once the models are released. To tackle this issue, we introduce PRISM, a design framework for open-source foundation model safety that emphasizes Private, Robust, Independent Safety measures, at Minimal marginal cost of compute. The PRISM framework proposes the use of modular functions that moderate prompts and outputs independently of the core language model, offering a more adaptable and resilient approach to safety compared to the brittle reinforcement learning methods currently used for value alignment. By focusing on identifying AUP violations and engaging the developer community in establishing consensus around safety design decisions, PRISM aims to create a safer open-source ecosystem that maximizes the potential of these powerful technologies while minimizing the risks to individuals and society as a whole.
|
2205.01818
|
Ziyi Yang
|
Ziyi Yang, Yuwei Fang, Chenguang Zhu, Reid Pryzant, Dongdong Chen, Yu
Shi, Yichong Xu, Yao Qian, Mei Gao, Yi-Ling Chen, Liyang Lu, Yujia Xie,
Robert Gmyr, Noel Codella, Naoyuki Kanda, Bin Xiao, Lu Yuan, Takuya Yoshioka,
Michael Zeng, Xuedong Huang
|
i-Code: An Integrative and Composable Multimodal Learning Framework
| null | null | null | null |
cs.LG cs.AI cs.CL cs.CV eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human intelligence is multimodal; we integrate visual, linguistic, and
acoustic signals to maintain a holistic worldview. Most current pretraining
methods, however, are limited to one or two modalities. We present i-Code, a
self-supervised pretraining framework where users may flexibly combine the
modalities of vision, speech, and language into unified and general-purpose
vector representations. In this framework, data from each modality are first
given to pretrained single-modality encoders. The encoder outputs are then
integrated with a multimodal fusion network, which uses novel attention
mechanisms and other architectural innovations to effectively combine
information from the different modalities. The entire system is pretrained
end-to-end with new objectives including masked modality unit modeling and
cross-modality contrastive learning. Unlike previous research using only video
for pretraining, the i-Code framework can dynamically process single, dual, and
triple-modality data during training and inference, flexibly projecting
different combinations of modalities into a single representation space.
Experimental results demonstrate how i-Code can outperform state-of-the-art
techniques on five video understanding tasks and the GLUE NLP benchmark,
improving by as much as 11% and demonstrating the power of integrative
multimodal pretraining.
|
[
{
"created": "Tue, 3 May 2022 23:38:50 GMT",
"version": "v1"
},
{
"created": "Thu, 5 May 2022 06:35:23 GMT",
"version": "v2"
}
] |
2022-05-06
|
[
[
"Yang",
"Ziyi",
""
],
[
"Fang",
"Yuwei",
""
],
[
"Zhu",
"Chenguang",
""
],
[
"Pryzant",
"Reid",
""
],
[
"Chen",
"Dongdong",
""
],
[
"Shi",
"Yu",
""
],
[
"Xu",
"Yichong",
""
],
[
"Qian",
"Yao",
""
],
[
"Gao",
"Mei",
""
],
[
"Chen",
"Yi-Ling",
""
],
[
"Lu",
"Liyang",
""
],
[
"Xie",
"Yujia",
""
],
[
"Gmyr",
"Robert",
""
],
[
"Codella",
"Noel",
""
],
[
"Kanda",
"Naoyuki",
""
],
[
"Xiao",
"Bin",
""
],
[
"Yuan",
"Lu",
""
],
[
"Yoshioka",
"Takuya",
""
],
[
"Zeng",
"Michael",
""
],
[
"Huang",
"Xuedong",
""
]
] |
Human intelligence is multimodal; we integrate visual, linguistic, and acoustic signals to maintain a holistic worldview. Most current pretraining methods, however, are limited to one or two modalities. We present i-Code, a self-supervised pretraining framework where users may flexibly combine the modalities of vision, speech, and language into unified and general-purpose vector representations. In this framework, data from each modality are first given to pretrained single-modality encoders. The encoder outputs are then integrated with a multimodal fusion network, which uses novel attention mechanisms and other architectural innovations to effectively combine information from the different modalities. The entire system is pretrained end-to-end with new objectives including masked modality unit modeling and cross-modality contrastive learning. Unlike previous research using only video for pretraining, the i-Code framework can dynamically process single, dual, and triple-modality data during training and inference, flexibly projecting different combinations of modalities into a single representation space. Experimental results demonstrate how i-Code can outperform state-of-the-art techniques on five video understanding tasks and the GLUE NLP benchmark, improving by as much as 11% and demonstrating the power of integrative multimodal pretraining.
|
1007.1593
|
Yakov Nekrich
|
Yakov Nekrich
|
A Fast Algorithm for Three-Dimensional Layers of Maxima Problem
| null | null | null | null |
cs.DS cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We show that the three-dimensional layers-of-maxima problem can be solved in
$o(n\log n)$ time in the word RAM model. Our algorithm runs in $O(n(\log \log
n)^3)$ deterministic time or $O(n(\log\log n)^2)$ expected time and uses O(n)
space. We also describe an algorithm that uses optimal O(n) space and solves
the three-dimensional layers-of-maxima problem in $O(n\log n)$ time in the
pointer machine model.
|
[
{
"created": "Fri, 9 Jul 2010 13:45:05 GMT",
"version": "v1"
},
{
"created": "Tue, 3 May 2011 13:08:33 GMT",
"version": "v2"
}
] |
2011-05-04
|
[
[
"Nekrich",
"Yakov",
""
]
] |
We show that the three-dimensional layers-of-maxima problem can be solved in $o(n\log n)$ time in the word RAM model. Our algorithm runs in $O(n(\log \log n)^3)$ deterministic time or $O(n(\log\log n)^2)$ expected time and uses O(n) space. We also describe an algorithm that uses optimal O(n) space and solves the three-dimensional layers-of-maxima problem in $O(n\log n)$ time in the pointer machine model.
|
2103.09191
|
Luigi Antonio Lavazza
|
Luigi Lavazza and Sandro Morasca
|
Understanding and Modeling AI-Intensive System Development
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Developers of AI-Intensive Systems--i.e., systems that involve both
"traditional" software and Artificial Intelligence"are recognizing the need to
organize development systematically and use engineered methods and tools. Since
an AI-Intensive System (AIIS) relies heavily on software, it is expected that
Software Engineering (SE) methods and tools can help. However, AIIS development
differs from the development of "traditional" software systems in a few
substantial aspects. Hence, traditional SE methods and tools are not suitable
or sufficient by themselves and need to be adapted and extended. A quest for
"SE for AI" methods and tools has started. We believe that, in this effort, we
should learn from experience and avoid repeating some of the mistakes made in
the quest for SE in past years. To this end, a fundamental instrument is a set
of concepts and a notation to deal with AIIS and the problems that characterize
their development processes. In this paper, we propose to describe AIIS via a
notation that was proposed for SE and embeds a set of concepts that are
suitable to represent AIIS as well. We demonstrate the usage of the notation by
modeling some characteristics that are particularly relevant for AIIS.
|
[
{
"created": "Tue, 16 Mar 2021 16:42:45 GMT",
"version": "v1"
}
] |
2021-03-17
|
[
[
"Lavazza",
"Luigi",
""
],
[
"Morasca",
"Sandro",
""
]
] |
Developers of AI-Intensive Systems--i.e., systems that involve both "traditional" software and Artificial Intelligence"are recognizing the need to organize development systematically and use engineered methods and tools. Since an AI-Intensive System (AIIS) relies heavily on software, it is expected that Software Engineering (SE) methods and tools can help. However, AIIS development differs from the development of "traditional" software systems in a few substantial aspects. Hence, traditional SE methods and tools are not suitable or sufficient by themselves and need to be adapted and extended. A quest for "SE for AI" methods and tools has started. We believe that, in this effort, we should learn from experience and avoid repeating some of the mistakes made in the quest for SE in past years. To this end, a fundamental instrument is a set of concepts and a notation to deal with AIIS and the problems that characterize their development processes. In this paper, we propose to describe AIIS via a notation that was proposed for SE and embeds a set of concepts that are suitable to represent AIIS as well. We demonstrate the usage of the notation by modeling some characteristics that are particularly relevant for AIIS.
|
2408.01091
|
Jin Gao
|
Jin Gao, Lei Gan, Yuankai Li, Yixin Ye, Dequan Wang
|
Dissecting Dissonance: Benchmarking Large Multimodal Models Against
Self-Contradictory Instructions
|
Accepted by the 18th European Conference on Computer Vision ECCV 2024
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large multimodal models (LMMs) excel in adhering to human instructions.
However, self-contradictory instructions may arise due to the increasing trend
of multimodal interaction and context length, which is challenging for language
beginners and vulnerable populations. We introduce the Self-Contradictory
Instructions benchmark to evaluate the capability of LMMs in recognizing
conflicting commands. It comprises 20,000 conflicts, evenly distributed between
language and vision paradigms. It is constructed by a novel automatic dataset
creation framework, which expedites the process and enables us to encompass a
wide range of instruction forms. Our comprehensive evaluation reveals current
LMMs consistently struggle to identify multimodal instruction discordance due
to a lack of self-awareness. Hence, we propose the Cognitive Awakening
Prompting to inject cognition from external, largely enhancing dissonance
detection. The dataset and code are here: https://selfcontradiction.github.io/.
|
[
{
"created": "Fri, 2 Aug 2024 08:11:11 GMT",
"version": "v1"
},
{
"created": "Mon, 5 Aug 2024 06:56:44 GMT",
"version": "v2"
}
] |
2024-08-06
|
[
[
"Gao",
"Jin",
""
],
[
"Gan",
"Lei",
""
],
[
"Li",
"Yuankai",
""
],
[
"Ye",
"Yixin",
""
],
[
"Wang",
"Dequan",
""
]
] |
Large multimodal models (LMMs) excel in adhering to human instructions. However, self-contradictory instructions may arise due to the increasing trend of multimodal interaction and context length, which is challenging for language beginners and vulnerable populations. We introduce the Self-Contradictory Instructions benchmark to evaluate the capability of LMMs in recognizing conflicting commands. It comprises 20,000 conflicts, evenly distributed between language and vision paradigms. It is constructed by a novel automatic dataset creation framework, which expedites the process and enables us to encompass a wide range of instruction forms. Our comprehensive evaluation reveals current LMMs consistently struggle to identify multimodal instruction discordance due to a lack of self-awareness. Hence, we propose the Cognitive Awakening Prompting to inject cognition from external, largely enhancing dissonance detection. The dataset and code are here: https://selfcontradiction.github.io/.
|
1509.01756
|
Xueru Li
|
Xueru Li, Emil Bj\"ornson, Erik G. Larsson, Shidong Zhou, Jing Wang
|
A Multi-cell MMSE Detector for Massive MIMO Systems and New Large System
Analysis
|
6 pages, 3 figures, accepted by Globecom 2015
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, a new multi-cell MMSE detector is proposed for massive MIMO
systems. Let $K$ and $B$ denote the number of users in each cell and the number
of available pilot sequences in the network, respectively, with $B = \beta K$,
where $\beta \ge 1 $ is called the pilot reuse factor. The novelty of the
multi-cell MMSE detector is that it utilizes all $B$ channel directions that
can be estimated locally at a base station, so that intra-cell interference,
parts of the inter-cell interference and the noise can all be actively
suppressed, while conventional detectors only use the $K$ intra-cell channels.
Furthermore, in the large-system limit, a deterministic equivalent expression
of the uplink SINR for the proposed multi-cell MMSE is derived. The expression
is easy to compute and accounts for power control for the pilot and payload,
imperfect channel estimation and arbitrary pilot allocation. Numerical results
show that significant sum spectral efficiency gains can be obtained by the
multi-cell MMSE over the conventional single-cell MMSE and the recent
multi-cell ZF, and the gains become more significant as $\beta$ and/or $K$
increases. Furthermore, the deterministic equivalent is shown to be very
accurate even for relatively small system dimensions.
|
[
{
"created": "Sun, 6 Sep 2015 02:06:27 GMT",
"version": "v1"
}
] |
2015-09-08
|
[
[
"Li",
"Xueru",
""
],
[
"Björnson",
"Emil",
""
],
[
"Larsson",
"Erik G.",
""
],
[
"Zhou",
"Shidong",
""
],
[
"Wang",
"Jing",
""
]
] |
In this paper, a new multi-cell MMSE detector is proposed for massive MIMO systems. Let $K$ and $B$ denote the number of users in each cell and the number of available pilot sequences in the network, respectively, with $B = \beta K$, where $\beta \ge 1 $ is called the pilot reuse factor. The novelty of the multi-cell MMSE detector is that it utilizes all $B$ channel directions that can be estimated locally at a base station, so that intra-cell interference, parts of the inter-cell interference and the noise can all be actively suppressed, while conventional detectors only use the $K$ intra-cell channels. Furthermore, in the large-system limit, a deterministic equivalent expression of the uplink SINR for the proposed multi-cell MMSE is derived. The expression is easy to compute and accounts for power control for the pilot and payload, imperfect channel estimation and arbitrary pilot allocation. Numerical results show that significant sum spectral efficiency gains can be obtained by the multi-cell MMSE over the conventional single-cell MMSE and the recent multi-cell ZF, and the gains become more significant as $\beta$ and/or $K$ increases. Furthermore, the deterministic equivalent is shown to be very accurate even for relatively small system dimensions.
|
2203.05683
|
Mayur Mallya
|
Mayur Mallya and Ghassan Hamarneh
|
Deep Multimodal Guidance for Medical Image Classification
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Medical imaging is a cornerstone of therapy and diagnosis in modern medicine.
However, the choice of imaging modality for a particular theranostic task
typically involves trade-offs between the feasibility of using a particular
modality (e.g., short wait times, low cost, fast acquisition, reduced
radiation/invasiveness) and the expected performance on a clinical task (e.g.,
diagnostic accuracy, efficacy of treatment planning and guidance). In this
work, we aim to apply the knowledge learned from the less feasible but
better-performing (superior) modality to guide the utilization of the
more-feasible yet under-performing (inferior) modality and steer it towards
improved performance. We focus on the application of deep learning for
image-based diagnosis. We develop a light-weight guidance model that leverages
the latent representation learned from the superior modality, when training a
model that consumes only the inferior modality. We examine the advantages of
our method in the context of two clinical applications: multi-task skin lesion
classification from clinical and dermoscopic images and brain tumor
classification from multi-sequence magnetic resonance imaging (MRI) and
histopathology images. For both these scenarios we show a boost in diagnostic
performance of the inferior modality without requiring the superior modality.
Furthermore, in the case of brain tumor classification, our method outperforms
the model trained on the superior modality while producing comparable results
to the model that uses both modalities during inference.
|
[
{
"created": "Thu, 10 Mar 2022 23:50:08 GMT",
"version": "v1"
},
{
"created": "Thu, 21 Jul 2022 15:41:26 GMT",
"version": "v2"
}
] |
2022-07-22
|
[
[
"Mallya",
"Mayur",
""
],
[
"Hamarneh",
"Ghassan",
""
]
] |
Medical imaging is a cornerstone of therapy and diagnosis in modern medicine. However, the choice of imaging modality for a particular theranostic task typically involves trade-offs between the feasibility of using a particular modality (e.g., short wait times, low cost, fast acquisition, reduced radiation/invasiveness) and the expected performance on a clinical task (e.g., diagnostic accuracy, efficacy of treatment planning and guidance). In this work, we aim to apply the knowledge learned from the less feasible but better-performing (superior) modality to guide the utilization of the more-feasible yet under-performing (inferior) modality and steer it towards improved performance. We focus on the application of deep learning for image-based diagnosis. We develop a light-weight guidance model that leverages the latent representation learned from the superior modality, when training a model that consumes only the inferior modality. We examine the advantages of our method in the context of two clinical applications: multi-task skin lesion classification from clinical and dermoscopic images and brain tumor classification from multi-sequence magnetic resonance imaging (MRI) and histopathology images. For both these scenarios we show a boost in diagnostic performance of the inferior modality without requiring the superior modality. Furthermore, in the case of brain tumor classification, our method outperforms the model trained on the superior modality while producing comparable results to the model that uses both modalities during inference.
|
2203.14531
|
Ye Zheng
|
Xiaoke Jiang, Donghai Li, Hao Chen, Ye Zheng, Rui Zhao and Liwei Wu
|
Uni6D: A Unified CNN Framework without Projection Breakdown for 6D Pose
Estimation
|
CVPR2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As RGB-D sensors become more affordable, using RGB-D images to obtain
high-accuracy 6D pose estimation results becomes a better option.
State-of-the-art approaches typically use different backbones to extract
features for RGB and depth images. They use a 2D CNN for RGB images and a
per-pixel point cloud network for depth data, as well as a fusion network for
feature fusion. We find that the essential reason for using two independent
backbones is the "projection breakdown" problem. In the depth image plane, the
projected 3D structure of the physical world is preserved by the 1D depth value
and its built-in 2D pixel coordinate (UV). Any spatial transformation that
modifies UV, such as resize, flip, crop, or pooling operations in the CNN
pipeline, breaks the binding between the pixel value and UV coordinate. As a
consequence, the 3D structure is no longer preserved by a modified depth image
or feature. To address this issue, we propose a simple yet effective method
denoted as Uni6D that explicitly takes the extra UV data along with RGB-D
images as input. Our method has a Unified CNN framework for 6D pose estimation
with a single CNN backbone. In particular, the architecture of our method is
based on Mask R-CNN with two extra heads, one named RT head for directly
predicting 6D pose and the other named abc head for guiding the network to map
the visible points to their coordinates in the 3D model as an auxiliary module.
This end-to-end approach balances simplicity and accuracy, achieving comparable
accuracy with state of the arts and 7.2x faster inference speed on the
YCB-Video dataset.
|
[
{
"created": "Mon, 28 Mar 2022 07:05:27 GMT",
"version": "v1"
},
{
"created": "Tue, 5 Apr 2022 04:04:54 GMT",
"version": "v2"
}
] |
2022-04-06
|
[
[
"Jiang",
"Xiaoke",
""
],
[
"Li",
"Donghai",
""
],
[
"Chen",
"Hao",
""
],
[
"Zheng",
"Ye",
""
],
[
"Zhao",
"Rui",
""
],
[
"Wu",
"Liwei",
""
]
] |
As RGB-D sensors become more affordable, using RGB-D images to obtain high-accuracy 6D pose estimation results becomes a better option. State-of-the-art approaches typically use different backbones to extract features for RGB and depth images. They use a 2D CNN for RGB images and a per-pixel point cloud network for depth data, as well as a fusion network for feature fusion. We find that the essential reason for using two independent backbones is the "projection breakdown" problem. In the depth image plane, the projected 3D structure of the physical world is preserved by the 1D depth value and its built-in 2D pixel coordinate (UV). Any spatial transformation that modifies UV, such as resize, flip, crop, or pooling operations in the CNN pipeline, breaks the binding between the pixel value and UV coordinate. As a consequence, the 3D structure is no longer preserved by a modified depth image or feature. To address this issue, we propose a simple yet effective method denoted as Uni6D that explicitly takes the extra UV data along with RGB-D images as input. Our method has a Unified CNN framework for 6D pose estimation with a single CNN backbone. In particular, the architecture of our method is based on Mask R-CNN with two extra heads, one named RT head for directly predicting 6D pose and the other named abc head for guiding the network to map the visible points to their coordinates in the 3D model as an auxiliary module. This end-to-end approach balances simplicity and accuracy, achieving comparable accuracy with state of the arts and 7.2x faster inference speed on the YCB-Video dataset.
|
1108.3525
|
Jason Corso
|
Yingjie Miao and Jason J. Corso
|
Hamiltonian Streamline Guided Feature Extraction with Applications to
Face Detection
| null | null | null | null |
cs.CV math.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a new feature extraction method based on two dynamical systems
induced by intensity landscape: the negative gradient system and the
Hamiltonian system. We build features based on the Hamiltonian streamlines.
These features contain nice global topological information about the intensity
landscape, and can be used for object detection. We show that for training
images of same size, our feature space is much smaller than that generated by
Haar-like features. The training time is extremely short, and detection speed
and accuracy is similar to Haar-like feature based classifiers.
|
[
{
"created": "Wed, 17 Aug 2011 17:06:41 GMT",
"version": "v1"
}
] |
2011-08-18
|
[
[
"Miao",
"Yingjie",
""
],
[
"Corso",
"Jason J.",
""
]
] |
We propose a new feature extraction method based on two dynamical systems induced by intensity landscape: the negative gradient system and the Hamiltonian system. We build features based on the Hamiltonian streamlines. These features contain nice global topological information about the intensity landscape, and can be used for object detection. We show that for training images of same size, our feature space is much smaller than that generated by Haar-like features. The training time is extremely short, and detection speed and accuracy is similar to Haar-like feature based classifiers.
|
1910.06813
|
Anindya Sarkar
|
Anindya Sarkar, Anirudh Sunder Raj, Raghu Sesha Iyengar
|
ODE guided Neural Data Augmentation Techniques for Time Series Data and
its Benefits on Robustness
|
8 pages, 5 figures, International Conference on Machine Learning and
Applications
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Exploring adversarial attack vectors and studying their effects on machine
learning algorithms has been of interest to researchers. Deep neural networks
working with time series data have received lesser interest compared to their
image counterparts in this context. In a recent finding, it has been revealed
that current state-of-the-art deep learning time series classifiers are
vulnerable to adversarial attacks. In this paper, we introduce two local
gradient based and one spectral density based time series data augmentation
techniques. We show that a model trained with data obtained using our
techniques obtains state-of-the-art classification accuracy on various time
series benchmarks. In addition, it improves the robustness of the model against
some of the most common corruption techniques,such as Fast Gradient Sign Method
(FGSM) and Basic Iterative Method (BIM).
|
[
{
"created": "Tue, 15 Oct 2019 14:37:18 GMT",
"version": "v1"
},
{
"created": "Thu, 2 Jul 2020 10:31:48 GMT",
"version": "v2"
},
{
"created": "Sun, 27 Sep 2020 17:53:53 GMT",
"version": "v3"
}
] |
2020-09-29
|
[
[
"Sarkar",
"Anindya",
""
],
[
"Raj",
"Anirudh Sunder",
""
],
[
"Iyengar",
"Raghu Sesha",
""
]
] |
Exploring adversarial attack vectors and studying their effects on machine learning algorithms has been of interest to researchers. Deep neural networks working with time series data have received lesser interest compared to their image counterparts in this context. In a recent finding, it has been revealed that current state-of-the-art deep learning time series classifiers are vulnerable to adversarial attacks. In this paper, we introduce two local gradient based and one spectral density based time series data augmentation techniques. We show that a model trained with data obtained using our techniques obtains state-of-the-art classification accuracy on various time series benchmarks. In addition, it improves the robustness of the model against some of the most common corruption techniques,such as Fast Gradient Sign Method (FGSM) and Basic Iterative Method (BIM).
|
2106.13731
|
Nestor Demeure
|
Less Wright and Nestor Demeure
|
Ranger21: a synergistic deep learning optimizer
|
for associated code, see https://github.com/lessw2020/Ranger21
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As optimizers are critical to the performances of neural networks, every year
a large number of papers innovating on the subject are published. However,
while most of these publications provide incremental improvements to existing
algorithms, they tend to be presented as new optimizers rather than composable
algorithms. Thus, many worthwhile improvements are rarely seen out of their
initial publication. Taking advantage of this untapped potential, we introduce
Ranger21, a new optimizer which combines AdamW with eight components, carefully
selected after reviewing and testing ideas from the literature. We found that
the resulting optimizer provides significantly improved validation accuracy and
training speed, smoother training curves, and is even able to train a ResNet50
on ImageNet2012 without Batch Normalization layers. A problem on which AdamW
stays systematically stuck in a bad initial state.
|
[
{
"created": "Fri, 25 Jun 2021 16:07:59 GMT",
"version": "v1"
},
{
"created": "Sat, 7 Aug 2021 01:18:28 GMT",
"version": "v2"
}
] |
2021-08-10
|
[
[
"Wright",
"Less",
""
],
[
"Demeure",
"Nestor",
""
]
] |
As optimizers are critical to the performances of neural networks, every year a large number of papers innovating on the subject are published. However, while most of these publications provide incremental improvements to existing algorithms, they tend to be presented as new optimizers rather than composable algorithms. Thus, many worthwhile improvements are rarely seen out of their initial publication. Taking advantage of this untapped potential, we introduce Ranger21, a new optimizer which combines AdamW with eight components, carefully selected after reviewing and testing ideas from the literature. We found that the resulting optimizer provides significantly improved validation accuracy and training speed, smoother training curves, and is even able to train a ResNet50 on ImageNet2012 without Batch Normalization layers. A problem on which AdamW stays systematically stuck in a bad initial state.
|
1903.00620
|
Jie Li
|
Jie Li, Yu Liu, Dong Gong, Qinfeng Shi, Xia Yuan, Chunxia Zhao, Ian
Reid
|
RGBD Based Dimensional Decomposition Residual Network for 3D Semantic
Scene Completion
|
CVPR2019
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
RGB images differentiate from depth images as they carry more details about
the color and texture information, which can be utilized as a vital
complementary to depth for boosting the performance of 3D semantic scene
completion (SSC). SSC is composed of 3D shape completion (SC) and semantic
scene labeling while most of the existing methods use depth as the sole input
which causes the performance bottleneck. Moreover, the state-of-the-art methods
employ 3D CNNs which have cumbersome networks and tremendous parameters. We
introduce a light-weight Dimensional Decomposition Residual network (DDR) for
3D dense prediction tasks. The novel factorized convolution layer is effective
for reducing the network parameters, and the proposed multi-scale fusion
mechanism for depth and color image can improve the completion and segmentation
accuracy simultaneously. Our method demonstrates excellent performance on two
public datasets. Compared with the latest method SSCNet, we achieve 5.9% gains
in SC-IoU and 5.7% gains in SSC-IOU, albeit with only 21% network parameters
and 16.6% FLOPs employed compared with that of SSCNet.
|
[
{
"created": "Sat, 2 Mar 2019 04:14:31 GMT",
"version": "v1"
},
{
"created": "Wed, 1 May 2019 03:54:34 GMT",
"version": "v2"
}
] |
2019-05-02
|
[
[
"Li",
"Jie",
""
],
[
"Liu",
"Yu",
""
],
[
"Gong",
"Dong",
""
],
[
"Shi",
"Qinfeng",
""
],
[
"Yuan",
"Xia",
""
],
[
"Zhao",
"Chunxia",
""
],
[
"Reid",
"Ian",
""
]
] |
RGB images differentiate from depth images as they carry more details about the color and texture information, which can be utilized as a vital complementary to depth for boosting the performance of 3D semantic scene completion (SSC). SSC is composed of 3D shape completion (SC) and semantic scene labeling while most of the existing methods use depth as the sole input which causes the performance bottleneck. Moreover, the state-of-the-art methods employ 3D CNNs which have cumbersome networks and tremendous parameters. We introduce a light-weight Dimensional Decomposition Residual network (DDR) for 3D dense prediction tasks. The novel factorized convolution layer is effective for reducing the network parameters, and the proposed multi-scale fusion mechanism for depth and color image can improve the completion and segmentation accuracy simultaneously. Our method demonstrates excellent performance on two public datasets. Compared with the latest method SSCNet, we achieve 5.9% gains in SC-IoU and 5.7% gains in SSC-IOU, albeit with only 21% network parameters and 16.6% FLOPs employed compared with that of SSCNet.
|
2204.04960
|
Adil Erzin I
|
Adil Erzin, Roman Plotnikov, Ilya Ladygin
|
Constrained Shortest Path and Hierarchical Structures
| null | null | null | null |
cs.DS cs.DM
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The Constraint Shortest Path (CSP) problem is as follows. An $n$-vertex graph
is given, each edge/arc assigned two weights. Let us call them "cost" and
"length" for definiteness. Finding a min-cost upper-bounded length path between
a given pair of vertices is required. The problem is NP-hard even when the
lengths of all edges are the same. Therefore, various approximation algorithms
have been proposed in the literature for it. The constraint on path length can
be accounted for by considering one edge weight equals to a linear combination
of cost and length. By varying the multiplier value in a linear combination, a
feasible solution delivers a minimum to the function with new weights. At the
same time, Dijkstra's algorithm or its modifications are used to construct the
shortest path with the current weights of the edges. However, with
insufficiently large graphs, this approach may turn out to be time-consuming.
In this article, we propose to look for a solution, not in the original graph
but specially constructed hierarchical structures (HS). We show that the
shortest path in the HS is constructed with $O(m)$-time complexity, where $m$
is the number of edges/arcs of the graph, and the approximate solution in the
case of integer costs and lengths of the edges is found with $O(m\log n)$-time
complexity. The a priori estimate of the algorithm's accuracy turned out to
depend on the parameters of the problem and can be significant. Therefore, to
evaluate the algorithm's effectiveness, we conducted a numerical experiment on
the graphs of roads of megalopolis and randomly constructed unit-disk graphs
(UDGs). The numerical experiment results show that in the HS, a solution close
to optimal one is built 10--100 times faster than in the methods which use
Dijkstra's algorithm to build a min-weight path in the original graph.
|
[
{
"created": "Mon, 11 Apr 2022 09:13:43 GMT",
"version": "v1"
}
] |
2022-04-12
|
[
[
"Erzin",
"Adil",
""
],
[
"Plotnikov",
"Roman",
""
],
[
"Ladygin",
"Ilya",
""
]
] |
The Constraint Shortest Path (CSP) problem is as follows. An $n$-vertex graph is given, each edge/arc assigned two weights. Let us call them "cost" and "length" for definiteness. Finding a min-cost upper-bounded length path between a given pair of vertices is required. The problem is NP-hard even when the lengths of all edges are the same. Therefore, various approximation algorithms have been proposed in the literature for it. The constraint on path length can be accounted for by considering one edge weight equals to a linear combination of cost and length. By varying the multiplier value in a linear combination, a feasible solution delivers a minimum to the function with new weights. At the same time, Dijkstra's algorithm or its modifications are used to construct the shortest path with the current weights of the edges. However, with insufficiently large graphs, this approach may turn out to be time-consuming. In this article, we propose to look for a solution, not in the original graph but specially constructed hierarchical structures (HS). We show that the shortest path in the HS is constructed with $O(m)$-time complexity, where $m$ is the number of edges/arcs of the graph, and the approximate solution in the case of integer costs and lengths of the edges is found with $O(m\log n)$-time complexity. The a priori estimate of the algorithm's accuracy turned out to depend on the parameters of the problem and can be significant. Therefore, to evaluate the algorithm's effectiveness, we conducted a numerical experiment on the graphs of roads of megalopolis and randomly constructed unit-disk graphs (UDGs). The numerical experiment results show that in the HS, a solution close to optimal one is built 10--100 times faster than in the methods which use Dijkstra's algorithm to build a min-weight path in the original graph.
|
1908.02723
|
Ian Fox
|
Ian Fox and Jenna Wiens
|
Advocacy Learning: Learning through Competition and Class-Conditional
Representations
|
Accepted IJCAI 2019
| null | null | null |
cs.LG cs.CV stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce advocacy learning, a novel supervised training scheme for
attention-based classification problems. Advocacy learning relies on a
framework consisting of two connected networks: 1) $N$ Advocates (one for each
class), each of which outputs an argument in the form of an attention map over
the input, and 2) a Judge, which predicts the class label based on these
arguments. Each Advocate produces a class-conditional representation with the
goal of convincing the Judge that the input example belongs to their class,
even when the input belongs to a different class. Applied to several different
classification tasks, we show that advocacy learning can lead to small
improvements in classification accuracy over an identical supervised baseline.
Though a series of follow-up experiments, we analyze when and how such
class-conditional representations improve discriminative performance. Though
somewhat counter-intuitive, a framework in which subnetworks are trained to
competitively provide evidence in support of their class shows promise, in many
cases performing on par with standard learning approaches. This provides a
foundation for further exploration into competition and class-conditional
representations in supervised learning.
|
[
{
"created": "Wed, 7 Aug 2019 16:55:44 GMT",
"version": "v1"
}
] |
2019-08-08
|
[
[
"Fox",
"Ian",
""
],
[
"Wiens",
"Jenna",
""
]
] |
We introduce advocacy learning, a novel supervised training scheme for attention-based classification problems. Advocacy learning relies on a framework consisting of two connected networks: 1) $N$ Advocates (one for each class), each of which outputs an argument in the form of an attention map over the input, and 2) a Judge, which predicts the class label based on these arguments. Each Advocate produces a class-conditional representation with the goal of convincing the Judge that the input example belongs to their class, even when the input belongs to a different class. Applied to several different classification tasks, we show that advocacy learning can lead to small improvements in classification accuracy over an identical supervised baseline. Though a series of follow-up experiments, we analyze when and how such class-conditional representations improve discriminative performance. Though somewhat counter-intuitive, a framework in which subnetworks are trained to competitively provide evidence in support of their class shows promise, in many cases performing on par with standard learning approaches. This provides a foundation for further exploration into competition and class-conditional representations in supervised learning.
|
2003.02576
|
Antoine Amarilli
|
Antoine Amarilli, Pierre Bourhis, Stefan Mengel, Matthias Niewerth
|
Constant-Delay Enumeration for Nondeterministic Document Spanners
|
29 pages. Extended version of arXiv:1807.09320. Integrates all
corrections following reviewer feedback. Outside of some minor formatting
differences and tweaks, this paper is the same as the paper to appear in the
ACM TODS journal
| null |
10.1145/3436487
| null |
cs.DB cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the information extraction framework known as document spanners,
and study the problem of efficiently computing the results of the extraction
from an input document, where the extraction task is described as a sequential
variable-set automaton (VA). We pose this problem in the setting of enumeration
algorithms, where we can first run a preprocessing phase and must then produce
the results with a small delay between any two consecutive results. Our goal is
to have an algorithm which is tractable in combined complexity, i.e., in the
sizes of the input document and the VA; while ensuring the best possible data
complexity bounds in the input document size, i.e., constant delay in the
document size. Several recent works at PODS'18 proposed such algorithms but
with linear delay in the document size or with an exponential dependency in
size of the (generally nondeterministic) input VA. In particular, Florenzano et
al. suggest that our desired runtime guarantees cannot be met for general
sequential VAs. We refute this and show that, given a nondeterministic
sequential VA and an input document, we can enumerate the mappings of the VA on
the document with the following bounds: the preprocessing is linear in the
document size and polynomial in the size of the VA, and the delay is
independent of the document and polynomial in the size of the VA. The resulting
algorithm thus achieves tractability in combined complexity and the best
possible data complexity bounds. Moreover, it is rather easy to describe, in
particular for the restricted case of so-called extended VAs. Finally, we
evaluate our algorithm empirically using a prototype implementation.
|
[
{
"created": "Thu, 5 Mar 2020 12:49:56 GMT",
"version": "v1"
},
{
"created": "Fri, 25 Sep 2020 07:48:10 GMT",
"version": "v2"
},
{
"created": "Mon, 7 Dec 2020 13:51:51 GMT",
"version": "v3"
}
] |
2023-09-06
|
[
[
"Amarilli",
"Antoine",
""
],
[
"Bourhis",
"Pierre",
""
],
[
"Mengel",
"Stefan",
""
],
[
"Niewerth",
"Matthias",
""
]
] |
We consider the information extraction framework known as document spanners, and study the problem of efficiently computing the results of the extraction from an input document, where the extraction task is described as a sequential variable-set automaton (VA). We pose this problem in the setting of enumeration algorithms, where we can first run a preprocessing phase and must then produce the results with a small delay between any two consecutive results. Our goal is to have an algorithm which is tractable in combined complexity, i.e., in the sizes of the input document and the VA; while ensuring the best possible data complexity bounds in the input document size, i.e., constant delay in the document size. Several recent works at PODS'18 proposed such algorithms but with linear delay in the document size or with an exponential dependency in size of the (generally nondeterministic) input VA. In particular, Florenzano et al. suggest that our desired runtime guarantees cannot be met for general sequential VAs. We refute this and show that, given a nondeterministic sequential VA and an input document, we can enumerate the mappings of the VA on the document with the following bounds: the preprocessing is linear in the document size and polynomial in the size of the VA, and the delay is independent of the document and polynomial in the size of the VA. The resulting algorithm thus achieves tractability in combined complexity and the best possible data complexity bounds. Moreover, it is rather easy to describe, in particular for the restricted case of so-called extended VAs. Finally, we evaluate our algorithm empirically using a prototype implementation.
|
2204.01855
|
Shima Khoshraftar
|
Shima Khoshraftar, Aijun An
|
A Survey on Graph Representation Learning Methods
| null | null | null | null |
cs.LG cs.SI
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Graphs representation learning has been a very active research area in recent
years. The goal of graph representation learning is to generate graph
representation vectors that capture the structure and features of large graphs
accurately. This is especially important because the quality of the graph
representation vectors will affect the performance of these vectors in
downstream tasks such as node classification, link prediction and anomaly
detection. Many techniques are proposed for generating effective graph
representation vectors. Two of the most prevalent categories of graph
representation learning are graph embedding methods without using graph neural
nets (GNN), which we denote as non-GNN based graph embedding methods, and graph
neural nets (GNN) based methods. Non-GNN graph embedding methods are based on
techniques such as random walks, temporal point processes and neural network
learning methods. GNN-based methods, on the other hand, are the application of
deep learning on graph data. In this survey, we provide an overview of these
two categories and cover the current state-of-the-art methods for both static
and dynamic graphs. Finally, we explore some open and ongoing research
directions for future work.
|
[
{
"created": "Mon, 4 Apr 2022 21:18:48 GMT",
"version": "v1"
},
{
"created": "Wed, 15 Jun 2022 17:26:31 GMT",
"version": "v2"
}
] |
2022-06-16
|
[
[
"Khoshraftar",
"Shima",
""
],
[
"An",
"Aijun",
""
]
] |
Graphs representation learning has been a very active research area in recent years. The goal of graph representation learning is to generate graph representation vectors that capture the structure and features of large graphs accurately. This is especially important because the quality of the graph representation vectors will affect the performance of these vectors in downstream tasks such as node classification, link prediction and anomaly detection. Many techniques are proposed for generating effective graph representation vectors. Two of the most prevalent categories of graph representation learning are graph embedding methods without using graph neural nets (GNN), which we denote as non-GNN based graph embedding methods, and graph neural nets (GNN) based methods. Non-GNN graph embedding methods are based on techniques such as random walks, temporal point processes and neural network learning methods. GNN-based methods, on the other hand, are the application of deep learning on graph data. In this survey, we provide an overview of these two categories and cover the current state-of-the-art methods for both static and dynamic graphs. Finally, we explore some open and ongoing research directions for future work.
|
2310.10917
|
Boqun Zhao
|
Boqun Zhao, Chongjun Ouyang, Yuanwei Liu, Xingqi Zhang, H. Vincent
Poor
|
Modeling and Analysis of Near-Field ISAC
|
Accepted by IEEE Journal of Selected Topics in Signal Processing
| null |
10.1109/JSTSP.2024.3386054
| null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As the technical trends for the next-generation wireless network
significantly extend the near-field region, a performance reevaluation of
integrated sensing and communications (ISAC) with an appropriate channel model
to account for the effects introduced by the near field becomes essential. In
this paper, a near-field ISAC framework is proposed for both downlink and
uplink scenarios based on an accurate channel model. A uniform planar array is
equipped at a base station, where the impacts of the effective aperture and
polarization of antennas are considered. For the downlink case, three distinct
designs are studied: a communications-centric (C-C) design, a sensing-centric
(S-C) design, and a Pareto optimal design. Regarding the uplink case, the C-C
design, the S-C design and a time-sharing strategy are considered. Within each
design, sensing rates (SRs) and communication rates (CRs) are derived. To gain
further insights, high signal-to-noise ratio slopes and rate scaling laws
concerning the number of antennas are examined. The attainable near-field SR-CR
regions of ISAC and the baseline frequency-division S&C are also characterized.
Numerical results reveal that, as the number of antennas in the array grows,
the SRs and CRs under our accurate model converge to finite values, while those
under conventional far- and near-field models exhibit unbounded growth,
highlighting the importance of precisely modeling the channels for near-field
ISAC.
|
[
{
"created": "Tue, 17 Oct 2023 01:25:23 GMT",
"version": "v1"
},
{
"created": "Wed, 18 Oct 2023 20:05:53 GMT",
"version": "v2"
},
{
"created": "Thu, 30 Nov 2023 15:17:27 GMT",
"version": "v3"
},
{
"created": "Fri, 12 Apr 2024 23:26:36 GMT",
"version": "v4"
}
] |
2024-04-16
|
[
[
"Zhao",
"Boqun",
""
],
[
"Ouyang",
"Chongjun",
""
],
[
"Liu",
"Yuanwei",
""
],
[
"Zhang",
"Xingqi",
""
],
[
"Poor",
"H. Vincent",
""
]
] |
As the technical trends for the next-generation wireless network significantly extend the near-field region, a performance reevaluation of integrated sensing and communications (ISAC) with an appropriate channel model to account for the effects introduced by the near field becomes essential. In this paper, a near-field ISAC framework is proposed for both downlink and uplink scenarios based on an accurate channel model. A uniform planar array is equipped at a base station, where the impacts of the effective aperture and polarization of antennas are considered. For the downlink case, three distinct designs are studied: a communications-centric (C-C) design, a sensing-centric (S-C) design, and a Pareto optimal design. Regarding the uplink case, the C-C design, the S-C design and a time-sharing strategy are considered. Within each design, sensing rates (SRs) and communication rates (CRs) are derived. To gain further insights, high signal-to-noise ratio slopes and rate scaling laws concerning the number of antennas are examined. The attainable near-field SR-CR regions of ISAC and the baseline frequency-division S&C are also characterized. Numerical results reveal that, as the number of antennas in the array grows, the SRs and CRs under our accurate model converge to finite values, while those under conventional far- and near-field models exhibit unbounded growth, highlighting the importance of precisely modeling the channels for near-field ISAC.
|
2102.13185
|
Zhuangdi Zhu
|
Zhuangdi Zhu, Kaixiang Lin, Bo Dai, Jiayu Zhou
|
Off-Policy Imitation Learning from Observations
| null | null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Learning from Observations (LfO) is a practical reinforcement learning
scenario from which many applications can benefit through the reuse of
incomplete resources. Compared to conventional imitation learning (IL), LfO is
more challenging because of the lack of expert action guidance. In both
conventional IL and LfO, distribution matching is at the heart of their
foundation. Traditional distribution matching approaches are sample-costly
which depend on on-policy transitions for policy learning. Towards
sample-efficiency, some off-policy solutions have been proposed, which,
however, either lack comprehensive theoretical justifications or depend on the
guidance of expert actions. In this work, we propose a sample-efficient LfO
approach that enables off-policy optimization in a principled manner. To
further accelerate the learning procedure, we regulate the policy update with
an inverse action model, which assists distribution matching from the
perspective of mode-covering. Extensive empirical results on challenging
locomotion tasks indicate that our approach is comparable with state-of-the-art
in terms of both sample-efficiency and asymptotic performance.
|
[
{
"created": "Thu, 25 Feb 2021 21:33:47 GMT",
"version": "v1"
}
] |
2021-03-01
|
[
[
"Zhu",
"Zhuangdi",
""
],
[
"Lin",
"Kaixiang",
""
],
[
"Dai",
"Bo",
""
],
[
"Zhou",
"Jiayu",
""
]
] |
Learning from Observations (LfO) is a practical reinforcement learning scenario from which many applications can benefit through the reuse of incomplete resources. Compared to conventional imitation learning (IL), LfO is more challenging because of the lack of expert action guidance. In both conventional IL and LfO, distribution matching is at the heart of their foundation. Traditional distribution matching approaches are sample-costly which depend on on-policy transitions for policy learning. Towards sample-efficiency, some off-policy solutions have been proposed, which, however, either lack comprehensive theoretical justifications or depend on the guidance of expert actions. In this work, we propose a sample-efficient LfO approach that enables off-policy optimization in a principled manner. To further accelerate the learning procedure, we regulate the policy update with an inverse action model, which assists distribution matching from the perspective of mode-covering. Extensive empirical results on challenging locomotion tasks indicate that our approach is comparable with state-of-the-art in terms of both sample-efficiency and asymptotic performance.
|
1812.02937
|
Idoia Ruiz
|
Idoia Ruiz, Bogdan Raducanu, Rakesh Mehta, Jaume Amores
|
Optimizing speed/accuracy trade-off for person re-identification via
knowledge distillation
|
Published on the journal "Engineering Applications of Artificial
Intelligence"
|
Engineering Applications of Artificial Intelligence, Volume 87,
January 2020, 103309
|
10.1016/j.engappai.2019.103309
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Finding a person across a camera network plays an important role in video
surveillance. For a real-world person re-identification application, in order
to guarantee an optimal time response, it is crucial to find the balance
between accuracy and speed. We analyse this trade-off, comparing a classical
method, that comprises hand-crafted feature description and metric learning, in
particular, LOMO and XQDA, to deep learning based techniques, using image
classification networks, ResNet and MobileNets. Additionally, we propose and
analyse network distillation as a learning strategy to reduce the computational
cost of the deep learning approach at test time. We evaluate both methods on
the Market-1501 and DukeMTMC-reID large-scale datasets, showing that
distillation helps reducing the computational cost at inference time while even
increasing the accuracy performance.
|
[
{
"created": "Fri, 7 Dec 2018 08:11:06 GMT",
"version": "v1"
},
{
"created": "Thu, 5 Dec 2019 16:40:11 GMT",
"version": "v2"
}
] |
2019-12-06
|
[
[
"Ruiz",
"Idoia",
""
],
[
"Raducanu",
"Bogdan",
""
],
[
"Mehta",
"Rakesh",
""
],
[
"Amores",
"Jaume",
""
]
] |
Finding a person across a camera network plays an important role in video surveillance. For a real-world person re-identification application, in order to guarantee an optimal time response, it is crucial to find the balance between accuracy and speed. We analyse this trade-off, comparing a classical method, that comprises hand-crafted feature description and metric learning, in particular, LOMO and XQDA, to deep learning based techniques, using image classification networks, ResNet and MobileNets. Additionally, we propose and analyse network distillation as a learning strategy to reduce the computational cost of the deep learning approach at test time. We evaluate both methods on the Market-1501 and DukeMTMC-reID large-scale datasets, showing that distillation helps reducing the computational cost at inference time while even increasing the accuracy performance.
|
2110.06195
|
Brian Cho
|
Brian Y. Cho, Tucker Hermans, Alan Kuntz
|
Planning Sensing Sequences for Subsurface 3D Tumor Mapping
|
7 pages, 9 figures, to be published in the proceedings of the 2021
International Symposium on Medical Robotics (ISMR)
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Surgical automation has the potential to enable increased precision and
reduce the per-patient workload of overburdened human surgeons. An effective
automation system must be able to sense and map subsurface anatomy, such as
tumors, efficiently and accurately. In this work, we present a method that
plans a sequence of sensing actions to map the 3D geometry of subsurface
tumors. We leverage a sequential Bayesian Hilbert map to create a 3D
probabilistic occupancy model that represents the likelihood that any given
point in the anatomy is occupied by a tumor, conditioned on sensor readings. We
iteratively update the map, utilizing Bayesian optimization to determine
sensing poses that explore unsensed regions of anatomy and exploit the
knowledge gained by previous sensing actions. We demonstrate our method's
efficiency and accuracy in three anatomical scenarios including a liver tumor
scenario generated from a real patient's CT scan. The results show that our
proposed method significantly outperforms comparison methods in terms of
efficiency while detecting subsurface tumors with high accuracy.
|
[
{
"created": "Tue, 12 Oct 2021 17:48:41 GMT",
"version": "v1"
}
] |
2021-10-13
|
[
[
"Cho",
"Brian Y.",
""
],
[
"Hermans",
"Tucker",
""
],
[
"Kuntz",
"Alan",
""
]
] |
Surgical automation has the potential to enable increased precision and reduce the per-patient workload of overburdened human surgeons. An effective automation system must be able to sense and map subsurface anatomy, such as tumors, efficiently and accurately. In this work, we present a method that plans a sequence of sensing actions to map the 3D geometry of subsurface tumors. We leverage a sequential Bayesian Hilbert map to create a 3D probabilistic occupancy model that represents the likelihood that any given point in the anatomy is occupied by a tumor, conditioned on sensor readings. We iteratively update the map, utilizing Bayesian optimization to determine sensing poses that explore unsensed regions of anatomy and exploit the knowledge gained by previous sensing actions. We demonstrate our method's efficiency and accuracy in three anatomical scenarios including a liver tumor scenario generated from a real patient's CT scan. The results show that our proposed method significantly outperforms comparison methods in terms of efficiency while detecting subsurface tumors with high accuracy.
|
2011.06964
|
Micha\"el Fanuel
|
Micha\"el Fanuel, Joachim Schreurs, Johan A.K. Suykens
|
Determinantal Point Processes Implicitly Regularize Semi-parametric
Regression Problems
|
26 pages. Extended results. Typos corrected
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Semi-parametric regression models are used in several applications which
require comprehensibility without sacrificing accuracy. Typical examples are
spline interpolation in geophysics, or non-linear time series problems, where
the system includes a linear and non-linear component. We discuss here the use
of a finite Determinantal Point Process (DPP) for approximating semi-parametric
models. Recently, Barthelm\'e, Tremblay, Usevich, and Amblard introduced a
novel representation of some finite DPPs. These authors formulated extended
L-ensembles that can conveniently represent partial-projection DPPs and suggest
their use for optimal interpolation. With the help of this formalism, we derive
a key identity illustrating the implicit regularization effect of determinantal
sampling for semi-parametric regression and interpolation. Also, a novel
projected Nystr\"om approximation is defined and used to derive a bound on the
expected risk for the corresponding approximation of semi-parametric
regression. This work naturally extends similar results obtained for kernel
ridge regression.
|
[
{
"created": "Fri, 13 Nov 2020 15:22:16 GMT",
"version": "v1"
},
{
"created": "Tue, 9 Mar 2021 13:47:11 GMT",
"version": "v2"
}
] |
2021-03-10
|
[
[
"Fanuel",
"Michaël",
""
],
[
"Schreurs",
"Joachim",
""
],
[
"Suykens",
"Johan A. K.",
""
]
] |
Semi-parametric regression models are used in several applications which require comprehensibility without sacrificing accuracy. Typical examples are spline interpolation in geophysics, or non-linear time series problems, where the system includes a linear and non-linear component. We discuss here the use of a finite Determinantal Point Process (DPP) for approximating semi-parametric models. Recently, Barthelm\'e, Tremblay, Usevich, and Amblard introduced a novel representation of some finite DPPs. These authors formulated extended L-ensembles that can conveniently represent partial-projection DPPs and suggest their use for optimal interpolation. With the help of this formalism, we derive a key identity illustrating the implicit regularization effect of determinantal sampling for semi-parametric regression and interpolation. Also, a novel projected Nystr\"om approximation is defined and used to derive a bound on the expected risk for the corresponding approximation of semi-parametric regression. This work naturally extends similar results obtained for kernel ridge regression.
|
2308.06998
|
Hao Shen
|
Hao Shen, Zhong-Qiu Zhao, Yulun Zhang, Zhao Zhang
|
Mutual Information-driven Triple Interaction Network for Efficient Image
Dehazing
|
Accepted in ACM MM 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-stage architectures have exhibited efficacy in image dehazing, which
usually decomposes a challenging task into multiple more tractable sub-tasks
and progressively estimates latent hazy-free images. Despite the remarkable
progress, existing methods still suffer from the following shortcomings: (1)
limited exploration of frequency domain information; (2) insufficient
information interaction; (3) severe feature redundancy. To remedy these issues,
we propose a novel Mutual Information-driven Triple interaction Network
(MITNet) based on spatial-frequency dual domain information and two-stage
architecture. To be specific, the first stage, named amplitude-guided haze
removal, aims to recover the amplitude spectrum of the hazy images for haze
removal. And the second stage, named phase-guided structure refined, devotes to
learning the transformation and refinement of the phase spectrum. To facilitate
the information exchange between two stages, an Adaptive Triple Interaction
Module (ATIM) is developed to simultaneously aggregate cross-domain,
cross-scale, and cross-stage features, where the fused features are further
used to generate content-adaptive dynamic filters so that applying them to
enhance global context representation. In addition, we impose the mutual
information minimization constraint on paired scale encoder and decoder
features from both stages. Such an operation can effectively reduce information
redundancy and enhance cross-stage feature complementarity. Extensive
experiments on multiple public datasets exhibit that our MITNet performs
superior performance with lower model complexity.The code and models are
available at https://github.com/it-hao/MITNet.
|
[
{
"created": "Mon, 14 Aug 2023 08:23:58 GMT",
"version": "v1"
}
] |
2023-08-15
|
[
[
"Shen",
"Hao",
""
],
[
"Zhao",
"Zhong-Qiu",
""
],
[
"Zhang",
"Yulun",
""
],
[
"Zhang",
"Zhao",
""
]
] |
Multi-stage architectures have exhibited efficacy in image dehazing, which usually decomposes a challenging task into multiple more tractable sub-tasks and progressively estimates latent hazy-free images. Despite the remarkable progress, existing methods still suffer from the following shortcomings: (1) limited exploration of frequency domain information; (2) insufficient information interaction; (3) severe feature redundancy. To remedy these issues, we propose a novel Mutual Information-driven Triple interaction Network (MITNet) based on spatial-frequency dual domain information and two-stage architecture. To be specific, the first stage, named amplitude-guided haze removal, aims to recover the amplitude spectrum of the hazy images for haze removal. And the second stage, named phase-guided structure refined, devotes to learning the transformation and refinement of the phase spectrum. To facilitate the information exchange between two stages, an Adaptive Triple Interaction Module (ATIM) is developed to simultaneously aggregate cross-domain, cross-scale, and cross-stage features, where the fused features are further used to generate content-adaptive dynamic filters so that applying them to enhance global context representation. In addition, we impose the mutual information minimization constraint on paired scale encoder and decoder features from both stages. Such an operation can effectively reduce information redundancy and enhance cross-stage feature complementarity. Extensive experiments on multiple public datasets exhibit that our MITNet performs superior performance with lower model complexity.The code and models are available at https://github.com/it-hao/MITNet.
|
2404.08509
|
Haoran Qiu
|
Haoran Qiu, Weichao Mao, Archit Patke, Shengkun Cui, Saurabh Jha, Chen
Wang, Hubertus Franke, Zbigniew T. Kalbarczyk, Tamer Ba\c{s}ar, Ravishankar
K. Iyer
|
Efficient Interactive LLM Serving with Proxy Model-based Sequence Length
Prediction
|
Accepted at AIOps'24
| null | null | null |
cs.DC cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Large language models (LLMs) have been driving a new wave of interactive AI
applications across numerous domains. However, efficiently serving LLM
inference requests is challenging due to their unpredictable execution times
originating from the autoregressive nature of generative models. Existing LLM
serving systems exploit first-come-first-serve (FCFS) scheduling, suffering
from head-of-line blocking issues. To address the non-deterministic nature of
LLMs and enable efficient interactive LLM serving, we present a speculative
shortest-job-first (SSJF) scheduler that uses a light proxy model to predict
LLM output sequence lengths. Our open-source SSJF implementation does not
require changes to memory management or batching strategies. Evaluations on
real-world datasets and production workload traces show that SSJF reduces
average job completion times by 30.5-39.6% and increases throughput by 2.2-3.6x
compared to FCFS schedulers, across no batching, dynamic batching, and
continuous batching settings.
|
[
{
"created": "Fri, 12 Apr 2024 14:46:15 GMT",
"version": "v1"
}
] |
2024-04-15
|
[
[
"Qiu",
"Haoran",
""
],
[
"Mao",
"Weichao",
""
],
[
"Patke",
"Archit",
""
],
[
"Cui",
"Shengkun",
""
],
[
"Jha",
"Saurabh",
""
],
[
"Wang",
"Chen",
""
],
[
"Franke",
"Hubertus",
""
],
[
"Kalbarczyk",
"Zbigniew T.",
""
],
[
"Başar",
"Tamer",
""
],
[
"Iyer",
"Ravishankar K.",
""
]
] |
Large language models (LLMs) have been driving a new wave of interactive AI applications across numerous domains. However, efficiently serving LLM inference requests is challenging due to their unpredictable execution times originating from the autoregressive nature of generative models. Existing LLM serving systems exploit first-come-first-serve (FCFS) scheduling, suffering from head-of-line blocking issues. To address the non-deterministic nature of LLMs and enable efficient interactive LLM serving, we present a speculative shortest-job-first (SSJF) scheduler that uses a light proxy model to predict LLM output sequence lengths. Our open-source SSJF implementation does not require changes to memory management or batching strategies. Evaluations on real-world datasets and production workload traces show that SSJF reduces average job completion times by 30.5-39.6% and increases throughput by 2.2-3.6x compared to FCFS schedulers, across no batching, dynamic batching, and continuous batching settings.
|
1304.2714
|
Henry E. Kyburg Jr.
|
Henry E. Kyburg Jr
|
Higher Order Probabilities
|
Appears in Proceedings of the Third Conference on Uncertainty in
Artificial Intelligence (UAI1987)
| null | null |
UAI-P-1987-PG-30-38
|
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A number of writers have supposed that for the full specification of belief,
higher order probabilities are required. Some have even supposed that there may
be an unending sequence of higher order probabilities of probabilities of
probabilities.... In the present paper we show that higher order probabilities
can always be replaced by the marginal distributions of joint probability
distributions. We consider both the case in which higher order probabilities
are of the same sort as lower order probabilities and that in which higher
order probabilities are distinct in character, as when lower order
probabilities are construed as frequencies and higher order probabilities are
construed as subjective degrees of belief. In neither case do higher order
probabilities appear to offer any advantages, either conceptually or
computationally.
|
[
{
"created": "Wed, 27 Mar 2013 19:46:32 GMT",
"version": "v1"
}
] |
2013-04-11
|
[
[
"Kyburg",
"Henry E.",
"Jr"
]
] |
A number of writers have supposed that for the full specification of belief, higher order probabilities are required. Some have even supposed that there may be an unending sequence of higher order probabilities of probabilities of probabilities.... In the present paper we show that higher order probabilities can always be replaced by the marginal distributions of joint probability distributions. We consider both the case in which higher order probabilities are of the same sort as lower order probabilities and that in which higher order probabilities are distinct in character, as when lower order probabilities are construed as frequencies and higher order probabilities are construed as subjective degrees of belief. In neither case do higher order probabilities appear to offer any advantages, either conceptually or computationally.
|
2108.09241
|
Jie Huang
|
Jie Huang, Kevin Chen-Chuan Chang, Jinjun Xiong, Wen-mei Hwu
|
Open Relation Modeling: Learning to Define Relations between Entities
|
Accepted to Findings of ACL 2022
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Relations between entities can be represented by different instances, e.g., a
sentence containing both entities or a fact in a Knowledge Graph (KG). However,
these instances may not well capture the general relations between entities,
may be difficult to understand by humans, even may not be found due to the
incompleteness of the knowledge source. In this paper, we introduce the Open
Relation Modeling problem - given two entities, generate a coherent sentence
describing the relation between them. To solve this problem, we propose to
teach machines to generate definition-like relation descriptions by letting
them learn from defining entities. Specifically, we fine-tune Pre-trained
Language Models (PLMs) to produce definitions conditioned on extracted entity
pairs. To help PLMs reason between entities and provide additional relational
knowledge to PLMs for open relation modeling, we incorporate reasoning paths in
KGs and include a reasoning path selection mechanism. Experimental results show
that our model can generate concise but informative relation descriptions that
capture the representative characteristics of entities.
|
[
{
"created": "Fri, 20 Aug 2021 16:03:23 GMT",
"version": "v1"
},
{
"created": "Thu, 3 Mar 2022 04:36:32 GMT",
"version": "v2"
}
] |
2022-03-04
|
[
[
"Huang",
"Jie",
""
],
[
"Chang",
"Kevin Chen-Chuan",
""
],
[
"Xiong",
"Jinjun",
""
],
[
"Hwu",
"Wen-mei",
""
]
] |
Relations between entities can be represented by different instances, e.g., a sentence containing both entities or a fact in a Knowledge Graph (KG). However, these instances may not well capture the general relations between entities, may be difficult to understand by humans, even may not be found due to the incompleteness of the knowledge source. In this paper, we introduce the Open Relation Modeling problem - given two entities, generate a coherent sentence describing the relation between them. To solve this problem, we propose to teach machines to generate definition-like relation descriptions by letting them learn from defining entities. Specifically, we fine-tune Pre-trained Language Models (PLMs) to produce definitions conditioned on extracted entity pairs. To help PLMs reason between entities and provide additional relational knowledge to PLMs for open relation modeling, we incorporate reasoning paths in KGs and include a reasoning path selection mechanism. Experimental results show that our model can generate concise but informative relation descriptions that capture the representative characteristics of entities.
|
2106.07505
|
Vanessa Hahn
|
Vanessa Hahn, Dana Ruiter, Thomas Kleinbauer, Dietrich Klakow
|
Modeling Profanity and Hate Speech in Social Media with Semantic
Subspaces
|
9 pages, 4 figures, accepted as a long paper at Workshop on Online
Abuse and Harms 2021
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Hate speech and profanity detection suffer from data sparsity, especially for
languages other than English, due to the subjective nature of the tasks and the
resulting annotation incompatibility of existing corpora. In this study, we
identify profane subspaces in word and sentence representations and explore
their generalization capability on a variety of similar and distant target
tasks in a zero-shot setting. This is done monolingually (German) and
cross-lingually to closely-related (English), distantly-related (French) and
non-related (Arabic) tasks. We observe that, on both similar and distant target
tasks and across all languages, the subspace-based representations transfer
more effectively than standard BERT representations in the zero-shot setting,
with improvements between F1 +10.9 and F1 +42.9 over the baselines across all
tested monolingual and cross-lingual scenarios.
|
[
{
"created": "Mon, 14 Jun 2021 15:34:37 GMT",
"version": "v1"
},
{
"created": "Fri, 18 Jun 2021 10:04:11 GMT",
"version": "v2"
}
] |
2021-06-21
|
[
[
"Hahn",
"Vanessa",
""
],
[
"Ruiter",
"Dana",
""
],
[
"Kleinbauer",
"Thomas",
""
],
[
"Klakow",
"Dietrich",
""
]
] |
Hate speech and profanity detection suffer from data sparsity, especially for languages other than English, due to the subjective nature of the tasks and the resulting annotation incompatibility of existing corpora. In this study, we identify profane subspaces in word and sentence representations and explore their generalization capability on a variety of similar and distant target tasks in a zero-shot setting. This is done monolingually (German) and cross-lingually to closely-related (English), distantly-related (French) and non-related (Arabic) tasks. We observe that, on both similar and distant target tasks and across all languages, the subspace-based representations transfer more effectively than standard BERT representations in the zero-shot setting, with improvements between F1 +10.9 and F1 +42.9 over the baselines across all tested monolingual and cross-lingual scenarios.
|
1211.7113
|
Tinku Rasheed
|
Djamal-Eddine Meddour, Tinku Rasheed and Yvon Gourhant
|
On the Role of Infrastructure sharing for Mobile Network Operators in
Emerging Markets
| null |
The International Journal of Computer and Telecommunications
Networking, Volume 55, Issue 7, 2011, Pages 1576-1591
|
10.1016/j.comnet.2011.01.023
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The traditional model of single ownership of all the physical network
elements and network layers by mobile network operators is beginning to be
challenged. This has been attributed to the rapid and complex technology
migration compounded with rigorous regulatory requirements and ever increasing
capital expenditures. These trends, combined together with the increasing
competition, rapid commoditization of telecommunication equipments and rising
separation of network and service provisioning are pushing the operators to
adopt multiple strategies, with network infrastructure sharing in the core and
radio access networks emerging as a more radical mechanism to substantially and
sustainably improve network costs. Through infrastructure sharing, developing
countries and other emerging economies can harness the technological, market
and regulatory developments that have fostered affordable access to mobile and
broadband services. Similarly, the network operators entering or consolidating
in the emerging markets can aim for substantial savings on capital and
operating expenses. The present paper aims to investigate the current
technological solutions and regulatory and the technical-economical dimensions
in connection with the sharing of mobile telecommunication networks in emerging
countries. We analyze the estimated savings on capital and operating expenses,
while assessing the technical constraints, applicability and benefits of the
network sharing solutions in an emerging market context.
|
[
{
"created": "Thu, 29 Nov 2012 22:51:56 GMT",
"version": "v1"
}
] |
2012-12-03
|
[
[
"Meddour",
"Djamal-Eddine",
""
],
[
"Rasheed",
"Tinku",
""
],
[
"Gourhant",
"Yvon",
""
]
] |
The traditional model of single ownership of all the physical network elements and network layers by mobile network operators is beginning to be challenged. This has been attributed to the rapid and complex technology migration compounded with rigorous regulatory requirements and ever increasing capital expenditures. These trends, combined together with the increasing competition, rapid commoditization of telecommunication equipments and rising separation of network and service provisioning are pushing the operators to adopt multiple strategies, with network infrastructure sharing in the core and radio access networks emerging as a more radical mechanism to substantially and sustainably improve network costs. Through infrastructure sharing, developing countries and other emerging economies can harness the technological, market and regulatory developments that have fostered affordable access to mobile and broadband services. Similarly, the network operators entering or consolidating in the emerging markets can aim for substantial savings on capital and operating expenses. The present paper aims to investigate the current technological solutions and regulatory and the technical-economical dimensions in connection with the sharing of mobile telecommunication networks in emerging countries. We analyze the estimated savings on capital and operating expenses, while assessing the technical constraints, applicability and benefits of the network sharing solutions in an emerging market context.
|
1309.2350
|
Shahin Shahrampour
|
Shahin Shahrampour and Ali Jadbabaie
|
Exponentially Fast Parameter Estimation in Networks Using Distributed
Dual Averaging
|
6 pages, To appear in Conference on Decision and Control 2013
| null | null | null |
cs.LG cs.SI math.OC stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we present an optimization-based view of distributed parameter
estimation and observational social learning in networks. Agents receive a
sequence of random, independent and identically distributed (i.i.d.) signals,
each of which individually may not be informative about the underlying true
state, but the signals together are globally informative enough to make the
true state identifiable. Using an optimization-based characterization of
Bayesian learning as proximal stochastic gradient descent (with
Kullback-Leibler divergence from a prior as a proximal function), we show how
to efficiently use a distributed, online variant of Nesterov's dual averaging
method to solve the estimation with purely local information. When the true
state is globally identifiable, and the network is connected, we prove that
agents eventually learn the true parameter using a randomized gossip scheme. We
demonstrate that with high probability the convergence is exponentially fast
with a rate dependent on the KL divergence of observations under the true state
from observations under the second likeliest state. Furthermore, our work also
highlights the possibility of learning under continuous adaptation of network
which is a consequence of employing constant, unit stepsize for the algorithm.
|
[
{
"created": "Tue, 10 Sep 2013 00:36:44 GMT",
"version": "v1"
}
] |
2013-09-11
|
[
[
"Shahrampour",
"Shahin",
""
],
[
"Jadbabaie",
"Ali",
""
]
] |
In this paper we present an optimization-based view of distributed parameter estimation and observational social learning in networks. Agents receive a sequence of random, independent and identically distributed (i.i.d.) signals, each of which individually may not be informative about the underlying true state, but the signals together are globally informative enough to make the true state identifiable. Using an optimization-based characterization of Bayesian learning as proximal stochastic gradient descent (with Kullback-Leibler divergence from a prior as a proximal function), we show how to efficiently use a distributed, online variant of Nesterov's dual averaging method to solve the estimation with purely local information. When the true state is globally identifiable, and the network is connected, we prove that agents eventually learn the true parameter using a randomized gossip scheme. We demonstrate that with high probability the convergence is exponentially fast with a rate dependent on the KL divergence of observations under the true state from observations under the second likeliest state. Furthermore, our work also highlights the possibility of learning under continuous adaptation of network which is a consequence of employing constant, unit stepsize for the algorithm.
|
2309.16347
|
Eleftherios Triantafyllidis Mr.
|
Eleftherios Triantafyllidis, Filippos Christianos and Zhibin Li
|
Intrinsic Language-Guided Exploration for Complex Long-Horizon Robotic
Manipulation Tasks
|
Accepted at the International Conference on Robotics and Automation
(ICRA), 2024. The manuscript consists of 10 pages and 6 figures
| null | null | null |
cs.RO cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Current reinforcement learning algorithms struggle in sparse and complex
environments, most notably in long-horizon manipulation tasks entailing a
plethora of different sequences. In this work, we propose the Intrinsically
Guided Exploration from Large Language Models (IGE-LLMs) framework. By
leveraging LLMs as an assistive intrinsic reward, IGE-LLMs guides the
exploratory process in reinforcement learning to address intricate long-horizon
with sparse rewards robotic manipulation tasks. We evaluate our framework and
related intrinsic learning methods in an environment challenged with
exploration, and a complex robotic manipulation task challenged by both
exploration and long-horizons. Results show IGE-LLMs (i) exhibit notably higher
performance over related intrinsic methods and the direct use of LLMs in
decision-making, (ii) can be combined and complement existing learning methods
highlighting its modularity, (iii) are fairly insensitive to different
intrinsic scaling parameters, and (iv) maintain robustness against increased
levels of uncertainty and horizons.
|
[
{
"created": "Thu, 28 Sep 2023 11:14:52 GMT",
"version": "v1"
},
{
"created": "Thu, 7 Mar 2024 17:53:35 GMT",
"version": "v2"
}
] |
2024-03-08
|
[
[
"Triantafyllidis",
"Eleftherios",
""
],
[
"Christianos",
"Filippos",
""
],
[
"Li",
"Zhibin",
""
]
] |
Current reinforcement learning algorithms struggle in sparse and complex environments, most notably in long-horizon manipulation tasks entailing a plethora of different sequences. In this work, we propose the Intrinsically Guided Exploration from Large Language Models (IGE-LLMs) framework. By leveraging LLMs as an assistive intrinsic reward, IGE-LLMs guides the exploratory process in reinforcement learning to address intricate long-horizon with sparse rewards robotic manipulation tasks. We evaluate our framework and related intrinsic learning methods in an environment challenged with exploration, and a complex robotic manipulation task challenged by both exploration and long-horizons. Results show IGE-LLMs (i) exhibit notably higher performance over related intrinsic methods and the direct use of LLMs in decision-making, (ii) can be combined and complement existing learning methods highlighting its modularity, (iii) are fairly insensitive to different intrinsic scaling parameters, and (iv) maintain robustness against increased levels of uncertainty and horizons.
|
2305.18445
|
Sunitha Basodi
|
Sunitha Basodi, Krishna Pusuluri, Xueli Xiao, Yi Pan
|
Intelligent gradient amplification for deep neural networks
| null | null | null | null |
cs.LG cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Deep learning models offer superior performance compared to other machine
learning techniques for a variety of tasks and domains, but pose their own
challenges. In particular, deep learning models require larger training times
as the depth of a model increases, and suffer from vanishing gradients. Several
solutions address these problems independently, but there have been minimal
efforts to identify an integrated solution that improves the performance of a
model by addressing vanishing gradients, as well as accelerates the training
process to achieve higher performance at larger learning rates. In this work,
we intelligently determine which layers of a deep learning model to apply
gradient amplification to, using a formulated approach that analyzes gradient
fluctuations of layers during training. Detailed experiments are performed for
simpler and deeper neural networks using two different intelligent measures and
two different thresholds that determine the amplification layers, and a
training strategy where gradients are amplified only during certain epochs.
Results show that our amplification offers better performance compared to the
original models, and achieves accuracy improvement of around 2.5% on CIFAR- 10
and around 4.5% on CIFAR-100 datasets, even when the models are trained with
higher learning rates.
|
[
{
"created": "Mon, 29 May 2023 03:38:09 GMT",
"version": "v1"
}
] |
2023-05-31
|
[
[
"Basodi",
"Sunitha",
""
],
[
"Pusuluri",
"Krishna",
""
],
[
"Xiao",
"Xueli",
""
],
[
"Pan",
"Yi",
""
]
] |
Deep learning models offer superior performance compared to other machine learning techniques for a variety of tasks and domains, but pose their own challenges. In particular, deep learning models require larger training times as the depth of a model increases, and suffer from vanishing gradients. Several solutions address these problems independently, but there have been minimal efforts to identify an integrated solution that improves the performance of a model by addressing vanishing gradients, as well as accelerates the training process to achieve higher performance at larger learning rates. In this work, we intelligently determine which layers of a deep learning model to apply gradient amplification to, using a formulated approach that analyzes gradient fluctuations of layers during training. Detailed experiments are performed for simpler and deeper neural networks using two different intelligent measures and two different thresholds that determine the amplification layers, and a training strategy where gradients are amplified only during certain epochs. Results show that our amplification offers better performance compared to the original models, and achieves accuracy improvement of around 2.5% on CIFAR- 10 and around 4.5% on CIFAR-100 datasets, even when the models are trained with higher learning rates.
|
2001.02091
|
Chengkun Lang
|
Huiwei Zhou, Zhuang Liu1, Shixian Ning, Chengkun Lang, Yingyu Lin, Lei
Du
|
Knowledge-aware Attention Network for Protein-Protein Interaction
Extraction
|
Published on Journal of Biomedical Informatics, 14 pages, 5 figures
|
Journal of Biomedical Informatics, 2019, 96: 103234
|
10.1016/j.jbi.2019.103234
| null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Protein-protein interaction (PPI) extraction from published scientific
literature provides additional support for precision medicine efforts. However,
many of the current PPI extraction methods need extensive feature engineering
and cannot make full use of the prior knowledge in knowledge bases (KB). KBs
contain huge amounts of structured information about entities and
relationships, therefore plays a pivotal role in PPI extraction. This paper
proposes a knowledge-aware attention network (KAN) to fuse prior knowledge
about protein-protein pairs and context information for PPI extraction. The
proposed model first adopts a diagonal-disabled multi-head attention mechanism
to encode context sequence along with knowledge representations learned from
KB. Then a novel multi-dimensional attention mechanism is used to select the
features that can best describe the encoded context. Experiment results on the
BioCreative VI PPI dataset show that the proposed approach could acquire
knowledge-aware dependencies between different words in a sequence and lead to
a new state-of-the-art performance.
|
[
{
"created": "Tue, 7 Jan 2020 15:02:28 GMT",
"version": "v1"
}
] |
2020-01-08
|
[
[
"Zhou",
"Huiwei",
""
],
[
"Liu1",
"Zhuang",
""
],
[
"Ning",
"Shixian",
""
],
[
"Lang",
"Chengkun",
""
],
[
"Lin",
"Yingyu",
""
],
[
"Du",
"Lei",
""
]
] |
Protein-protein interaction (PPI) extraction from published scientific literature provides additional support for precision medicine efforts. However, many of the current PPI extraction methods need extensive feature engineering and cannot make full use of the prior knowledge in knowledge bases (KB). KBs contain huge amounts of structured information about entities and relationships, therefore plays a pivotal role in PPI extraction. This paper proposes a knowledge-aware attention network (KAN) to fuse prior knowledge about protein-protein pairs and context information for PPI extraction. The proposed model first adopts a diagonal-disabled multi-head attention mechanism to encode context sequence along with knowledge representations learned from KB. Then a novel multi-dimensional attention mechanism is used to select the features that can best describe the encoded context. Experiment results on the BioCreative VI PPI dataset show that the proposed approach could acquire knowledge-aware dependencies between different words in a sequence and lead to a new state-of-the-art performance.
|
1207.2567
|
Nadeem Javaid
|
S. Hayat, N. Javaid, Z. A. Khan, A. Shareef, A. Mahmood, S. H. Bouk
|
Energy Efficient MAC Protocols
| null |
5th AHPCN in conjunction with 14th HPCC-2012, Liverpool, UK
| null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a survey of energy efficiency of Medium Access Control
(MAC) protocols for Wireless Body Area Sensor Networks (WBASNs). We highlight
the features of MAC protocols along with their advantages and limitations in
context of WBASNs. Comparison of Low Power Listening (LPL), Scheduled
Contention and Time Division Multiple Access (TDMA) is also elaborated. MAC
protocols with respect to different approaches and techniques which are used
for energy minimization, traffic control mechanisms for collision avoidance are
discussed.We also present a survey of path loss models for In-body, On-body and
Off-body communications in WBASNs and analytically discuss that path loss is
maximum in In-body communication because of low energy levels to take care of
tissues and organs located inside the body. Survey of Power model for WBANs of
CSMA/CA and beacon mode is also presented.
|
[
{
"created": "Wed, 11 Jul 2012 09:05:36 GMT",
"version": "v1"
}
] |
2012-07-12
|
[
[
"Hayat",
"S.",
""
],
[
"Javaid",
"N.",
""
],
[
"Khan",
"Z. A.",
""
],
[
"Shareef",
"A.",
""
],
[
"Mahmood",
"A.",
""
],
[
"Bouk",
"S. H.",
""
]
] |
This paper presents a survey of energy efficiency of Medium Access Control (MAC) protocols for Wireless Body Area Sensor Networks (WBASNs). We highlight the features of MAC protocols along with their advantages and limitations in context of WBASNs. Comparison of Low Power Listening (LPL), Scheduled Contention and Time Division Multiple Access (TDMA) is also elaborated. MAC protocols with respect to different approaches and techniques which are used for energy minimization, traffic control mechanisms for collision avoidance are discussed.We also present a survey of path loss models for In-body, On-body and Off-body communications in WBASNs and analytically discuss that path loss is maximum in In-body communication because of low energy levels to take care of tissues and organs located inside the body. Survey of Power model for WBANs of CSMA/CA and beacon mode is also presented.
|
1910.06299
|
Gamal Sallam
|
Gamal Sallam, Zizhan Zheng, Bo Ji
|
Placement and Allocation of Virtual Network Functions: Multi-dimensional
Case
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Network function virtualization (NFV) is an emerging design paradigm that
replaces physical middlebox devices with software modules running on general
purpose commodity servers. While gradually transitioning to NFV, Internet
service providers face the problem of where to introduce NFV in order to make
the most benefit of that; here, we measure the benefit by the amount of traffic
that can be served in an NFV-enabled network. This problem is non-trivial as it
is composed of two challenging subproblems: 1) placement of nodes to support
virtual network functions (referred to as VNF-nodes); 2) allocation of the
VNF-nodes' resources to network flows. This problem has been studied for the
one-dimensional setting, where all network flows require one network function,
which requires a unit of resource to process a unit of flow. In this work, we
consider the multi-dimensional setting, where flows must be processed by
multiple network functions, which require a different amount of each resource
to process a unit of flow. The multi-dimensional setting introduces new
challenges in addition to those of the one-dimensional setting (e.g.,
NP-hardness and non-submodularity) and also makes the resource allocation
subproblem a multi-dimensional generalization of the generalized assignment
problem with assignment restrictions. To address these difficulties, we propose
a novel two-level relaxation method that allows us to draw a connection to the
sequence submodular theory and utilize the property of sequence submodularity
along with the primal-dual technique to design two approximation algorithms. We
further prove that the proposed algorithms have a non-trivial approximation
ratio that depends on the number of VNF-nodes, resources, and a measure of the
available resource compared to flow demand. Finally, we perform trace-driven
simulations to show the effectiveness of the proposed algorithms.
|
[
{
"created": "Mon, 14 Oct 2019 17:27:59 GMT",
"version": "v1"
},
{
"created": "Sun, 31 May 2020 22:39:43 GMT",
"version": "v2"
},
{
"created": "Sat, 19 Feb 2022 18:41:15 GMT",
"version": "v3"
}
] |
2022-02-22
|
[
[
"Sallam",
"Gamal",
""
],
[
"Zheng",
"Zizhan",
""
],
[
"Ji",
"Bo",
""
]
] |
Network function virtualization (NFV) is an emerging design paradigm that replaces physical middlebox devices with software modules running on general purpose commodity servers. While gradually transitioning to NFV, Internet service providers face the problem of where to introduce NFV in order to make the most benefit of that; here, we measure the benefit by the amount of traffic that can be served in an NFV-enabled network. This problem is non-trivial as it is composed of two challenging subproblems: 1) placement of nodes to support virtual network functions (referred to as VNF-nodes); 2) allocation of the VNF-nodes' resources to network flows. This problem has been studied for the one-dimensional setting, where all network flows require one network function, which requires a unit of resource to process a unit of flow. In this work, we consider the multi-dimensional setting, where flows must be processed by multiple network functions, which require a different amount of each resource to process a unit of flow. The multi-dimensional setting introduces new challenges in addition to those of the one-dimensional setting (e.g., NP-hardness and non-submodularity) and also makes the resource allocation subproblem a multi-dimensional generalization of the generalized assignment problem with assignment restrictions. To address these difficulties, we propose a novel two-level relaxation method that allows us to draw a connection to the sequence submodular theory and utilize the property of sequence submodularity along with the primal-dual technique to design two approximation algorithms. We further prove that the proposed algorithms have a non-trivial approximation ratio that depends on the number of VNF-nodes, resources, and a measure of the available resource compared to flow demand. Finally, we perform trace-driven simulations to show the effectiveness of the proposed algorithms.
|
2005.13751
|
Iraklis Moutidis
|
Iraklis Moutidis and Hywel T.P. Williams
|
Complex networks for event detection in heterogeneous high volume news
streams
| null | null | null | null |
cs.SI cs.CL cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Detecting important events in high volume news streams is an important task
for a variety of purposes.The volume and rate of online news increases the need
for automated event detection methods thatcan operate in real time. In this
paper we develop a network-based approach that makes the workingassumption that
important news events always involve named entities (such as persons,
locationsand organizations) that are linked in news articles. Our approach uses
natural language processingtechniques to detect these entities in a stream of
news articles and then creates a time-stamped seriesof networks in which the
detected entities are linked by co-occurrence in articles and sentences. Inthis
prototype, weighted node degree is tracked over time and change-point detection
used to locateimportant events. Potential events are characterized and
distinguished using community detectionon KeyGraphs that relate named entities
and informative noun-phrases from related articles. Thismethodology already
produces promising results and will be extended in future to include a
widervariety of complex network analysis techniques.
|
[
{
"created": "Thu, 28 May 2020 02:45:43 GMT",
"version": "v1"
}
] |
2020-05-29
|
[
[
"Moutidis",
"Iraklis",
""
],
[
"Williams",
"Hywel T. P.",
""
]
] |
Detecting important events in high volume news streams is an important task for a variety of purposes.The volume and rate of online news increases the need for automated event detection methods thatcan operate in real time. In this paper we develop a network-based approach that makes the workingassumption that important news events always involve named entities (such as persons, locationsand organizations) that are linked in news articles. Our approach uses natural language processingtechniques to detect these entities in a stream of news articles and then creates a time-stamped seriesof networks in which the detected entities are linked by co-occurrence in articles and sentences. Inthis prototype, weighted node degree is tracked over time and change-point detection used to locateimportant events. Potential events are characterized and distinguished using community detectionon KeyGraphs that relate named entities and informative noun-phrases from related articles. Thismethodology already produces promising results and will be extended in future to include a widervariety of complex network analysis techniques.
|
2209.15351
|
Xiuzhen Guo
|
Xiuzhen Guo, Longfei Shangguan, Yuan He, Jia Zhang, Haotian Jiang,
Awais Ahmad Siddiqi, Yunhao Liu
|
Efficient Ambient LoRa Backscatter with On-Off Keying Modulation
| null | null | null | null |
cs.NI eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
Backscatter communication holds potential for ubiquitous and low-cost
connectivity among low-power IoT devices. To avoid interference between the
carrier signal and the backscatter signal, recent works propose a
frequency-shifting technique to separate these two signals in the frequency
domain. Such proposals, however, have to occupy the precious wireless spectrum
that is already overcrowded, and increase the power, cost, and complexity of
the backscatter tag. In this paper, we revisit the classic ON-OFF Keying (OOK)
modulation and propose Aloba, a backscatter system that takes the ambient LoRa
transmissions as the excitation and piggybacks the in-band OOK modulated
signals over the LoRa transmissions. Our design enables the backsactter signal
to work in the same frequency band of the carrier signal, meanwhile achieving
flexible data rate at different transmission range. The key contributions of
Aloba include: (1) the design of a low-power backscatter tag that can pick up
the ambient LoRa signals from other signals. (2) a novel decoding algorithm to
demodulate both the carrier signal and the backscatter signal from their
superposition. We further adopt link coding mechanism and interleave operation
to enhance the reliability of backscatter signal decoding. We implement Aloba
and conduct head-to-head comparison with the state-of-the-art LoRa backscatter
system PLoRa in various settings. The experiment results show Aloba can achieve
199.4 Kbps data rate at various distances, 52.4 times higher than PLoRa.
|
[
{
"created": "Fri, 30 Sep 2022 10:16:43 GMT",
"version": "v1"
}
] |
2022-10-03
|
[
[
"Guo",
"Xiuzhen",
""
],
[
"Shangguan",
"Longfei",
""
],
[
"He",
"Yuan",
""
],
[
"Zhang",
"Jia",
""
],
[
"Jiang",
"Haotian",
""
],
[
"Siddiqi",
"Awais Ahmad",
""
],
[
"Liu",
"Yunhao",
""
]
] |
Backscatter communication holds potential for ubiquitous and low-cost connectivity among low-power IoT devices. To avoid interference between the carrier signal and the backscatter signal, recent works propose a frequency-shifting technique to separate these two signals in the frequency domain. Such proposals, however, have to occupy the precious wireless spectrum that is already overcrowded, and increase the power, cost, and complexity of the backscatter tag. In this paper, we revisit the classic ON-OFF Keying (OOK) modulation and propose Aloba, a backscatter system that takes the ambient LoRa transmissions as the excitation and piggybacks the in-band OOK modulated signals over the LoRa transmissions. Our design enables the backsactter signal to work in the same frequency band of the carrier signal, meanwhile achieving flexible data rate at different transmission range. The key contributions of Aloba include: (1) the design of a low-power backscatter tag that can pick up the ambient LoRa signals from other signals. (2) a novel decoding algorithm to demodulate both the carrier signal and the backscatter signal from their superposition. We further adopt link coding mechanism and interleave operation to enhance the reliability of backscatter signal decoding. We implement Aloba and conduct head-to-head comparison with the state-of-the-art LoRa backscatter system PLoRa in various settings. The experiment results show Aloba can achieve 199.4 Kbps data rate at various distances, 52.4 times higher than PLoRa.
|
1808.01688
|
Huan Zhang
|
Dong Su, Huan Zhang, Hongge Chen, Jinfeng Yi, Pin-Yu Chen, Yupeng Gao
|
Is Robustness the Cost of Accuracy? -- A Comprehensive Study on the
Robustness of 18 Deep Image Classification Models
|
Accepted by the European Conference on Computer Vision (ECCV) 2018
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The prediction accuracy has been the long-lasting and sole standard for
comparing the performance of different image classification models, including
the ImageNet competition. However, recent studies have highlighted the lack of
robustness in well-trained deep neural networks to adversarial examples.
Visually imperceptible perturbations to natural images can easily be crafted
and mislead the image classifiers towards misclassification. To demystify the
trade-offs between robustness and accuracy, in this paper we thoroughly
benchmark 18 ImageNet models using multiple robustness metrics, including the
distortion, success rate and transferability of adversarial examples between
306 pairs of models. Our extensive experimental results reveal several new
insights: (1) linear scaling law - the empirical $\ell_2$ and $\ell_\infty$
distortion metrics scale linearly with the logarithm of classification error;
(2) model architecture is a more critical factor to robustness than model size,
and the disclosed accuracy-robustness Pareto frontier can be used as an
evaluation criterion for ImageNet model designers; (3) for a similar network
architecture, increasing network depth slightly improves robustness in
$\ell_\infty$ distortion; (4) there exist models (in VGG family) that exhibit
high adversarial transferability, while most adversarial examples crafted from
one model can only be transferred within the same family. Experiment code is
publicly available at \url{https://github.com/huanzhang12/Adversarial_Survey}.
|
[
{
"created": "Sun, 5 Aug 2018 21:43:01 GMT",
"version": "v1"
},
{
"created": "Mon, 4 Mar 2019 00:35:26 GMT",
"version": "v2"
}
] |
2019-03-05
|
[
[
"Su",
"Dong",
""
],
[
"Zhang",
"Huan",
""
],
[
"Chen",
"Hongge",
""
],
[
"Yi",
"Jinfeng",
""
],
[
"Chen",
"Pin-Yu",
""
],
[
"Gao",
"Yupeng",
""
]
] |
The prediction accuracy has been the long-lasting and sole standard for comparing the performance of different image classification models, including the ImageNet competition. However, recent studies have highlighted the lack of robustness in well-trained deep neural networks to adversarial examples. Visually imperceptible perturbations to natural images can easily be crafted and mislead the image classifiers towards misclassification. To demystify the trade-offs between robustness and accuracy, in this paper we thoroughly benchmark 18 ImageNet models using multiple robustness metrics, including the distortion, success rate and transferability of adversarial examples between 306 pairs of models. Our extensive experimental results reveal several new insights: (1) linear scaling law - the empirical $\ell_2$ and $\ell_\infty$ distortion metrics scale linearly with the logarithm of classification error; (2) model architecture is a more critical factor to robustness than model size, and the disclosed accuracy-robustness Pareto frontier can be used as an evaluation criterion for ImageNet model designers; (3) for a similar network architecture, increasing network depth slightly improves robustness in $\ell_\infty$ distortion; (4) there exist models (in VGG family) that exhibit high adversarial transferability, while most adversarial examples crafted from one model can only be transferred within the same family. Experiment code is publicly available at \url{https://github.com/huanzhang12/Adversarial_Survey}.
|
2303.10087
|
Shuai Chen
|
Shuai Chen, Yash Bhalgat, Xinghui Li, Jiawang Bian, Kejie Li, Zirui
Wang, Victor Adrian Prisacariu
|
Neural Refinement for Absolute Pose Regression with Feature Synthesis
|
Paper Accepted by CVPR 2024. Project Page:
http://nefes.active.vision. Code will be released at
https://github.com/ActiveVisionLab/NeFeS
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Absolute Pose Regression (APR) methods use deep neural networks to directly
regress camera poses from RGB images. However, the predominant APR
architectures only rely on 2D operations during inference, resulting in limited
accuracy of pose estimation due to the lack of 3D geometry constraints or
priors. In this work, we propose a test-time refinement pipeline that leverages
implicit geometric constraints using a robust feature field to enhance the
ability of APR methods to use 3D information during inference. We also
introduce a novel Neural Feature Synthesizer (NeFeS) model, which encodes 3D
geometric features during training and directly renders dense novel view
features at test time to refine APR methods. To enhance the robustness of our
model, we introduce a feature fusion module and a progressive training
strategy. Our proposed method achieves state-of-the-art single-image APR
accuracy on indoor and outdoor datasets.
|
[
{
"created": "Fri, 17 Mar 2023 16:10:50 GMT",
"version": "v1"
},
{
"created": "Fri, 1 Mar 2024 01:40:52 GMT",
"version": "v2"
}
] |
2024-03-04
|
[
[
"Chen",
"Shuai",
""
],
[
"Bhalgat",
"Yash",
""
],
[
"Li",
"Xinghui",
""
],
[
"Bian",
"Jiawang",
""
],
[
"Li",
"Kejie",
""
],
[
"Wang",
"Zirui",
""
],
[
"Prisacariu",
"Victor Adrian",
""
]
] |
Absolute Pose Regression (APR) methods use deep neural networks to directly regress camera poses from RGB images. However, the predominant APR architectures only rely on 2D operations during inference, resulting in limited accuracy of pose estimation due to the lack of 3D geometry constraints or priors. In this work, we propose a test-time refinement pipeline that leverages implicit geometric constraints using a robust feature field to enhance the ability of APR methods to use 3D information during inference. We also introduce a novel Neural Feature Synthesizer (NeFeS) model, which encodes 3D geometric features during training and directly renders dense novel view features at test time to refine APR methods. To enhance the robustness of our model, we introduce a feature fusion module and a progressive training strategy. Our proposed method achieves state-of-the-art single-image APR accuracy on indoor and outdoor datasets.
|
2306.11112
|
Emily Diana
|
Emily Diana and Alexander Williams Tolbert
|
Correcting Underrepresentation and Intersectional Bias for
Classification
| null | null | null | null |
cs.LG cs.CY cs.DS stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the problem of learning from data corrupted by
underrepresentation bias, where positive examples are filtered from the data at
different, unknown rates for a fixed number of sensitive groups. We show that
with a small amount of unbiased data, we can efficiently estimate the
group-wise drop-out rates, even in settings where intersectional group
membership makes learning each intersectional rate computationally infeasible.
Using these estimates, we construct a reweighting scheme that allows us to
approximate the loss of any hypothesis on the true distribution, even if we
only observe the empirical error on a biased sample. From this, we present an
algorithm encapsulating this learning and reweighting process along with a
thorough empirical investigation. Finally, we define a bespoke notion of PAC
learnability for the underrepresentation and intersectional bias setting and
show that our algorithm permits efficient learning for model classes of finite
VC dimension.
|
[
{
"created": "Mon, 19 Jun 2023 18:25:44 GMT",
"version": "v1"
},
{
"created": "Wed, 19 Jul 2023 21:08:41 GMT",
"version": "v2"
},
{
"created": "Tue, 26 Sep 2023 19:02:34 GMT",
"version": "v3"
},
{
"created": "Mon, 3 Jun 2024 20:57:56 GMT",
"version": "v4"
}
] |
2024-06-05
|
[
[
"Diana",
"Emily",
""
],
[
"Tolbert",
"Alexander Williams",
""
]
] |
We consider the problem of learning from data corrupted by underrepresentation bias, where positive examples are filtered from the data at different, unknown rates for a fixed number of sensitive groups. We show that with a small amount of unbiased data, we can efficiently estimate the group-wise drop-out rates, even in settings where intersectional group membership makes learning each intersectional rate computationally infeasible. Using these estimates, we construct a reweighting scheme that allows us to approximate the loss of any hypothesis on the true distribution, even if we only observe the empirical error on a biased sample. From this, we present an algorithm encapsulating this learning and reweighting process along with a thorough empirical investigation. Finally, we define a bespoke notion of PAC learnability for the underrepresentation and intersectional bias setting and show that our algorithm permits efficient learning for model classes of finite VC dimension.
|
2108.04343
|
Maryem Rhanoui
|
Siham Yousfi and Maryem Rhanoui and Dalila Chiadmi
|
Towards a Generic Multimodal Architecture for Batch and Streaming Big
Data Integration
| null |
Journal of Computer Science, Volume 15 No. 1, 2019, 207-220
|
10.3844/jcssp.2019.207.220
| null |
cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Big Data are rapidly produced from various heterogeneous data sources. They
are of different types (text, image, video or audio) and have different levels
of reliability and completeness. One of the most interesting architectures that
deal with the large amount of emerging data at high velocity is called the
lambda architecture. In fact, it combines two different processing layers
namely batch and speed layers, each providing specific views of data while
ensuring robustness, fast and scalable data processing. However, most papers
dealing with the lambda architecture are focusing one single type of data
generally produced by a single data source. Besides, the layers of the
architecture are implemented independently, or, at best, are combined to
perform basic processing without assessing either the data reliability or
completeness. Therefore, inspired by the lambda architecture, we propose in
this paper a generic multimodal architecture that combines both batch and
streaming processing in order to build a complete, global and accurate insight
in near-real-time based on the knowledge extracted from multiple heterogeneous
Big Data sources. Our architecture uses batch processing to analyze the data
structures and contents, build the learning models and calculate the
reliability index of the involved sources, while the streaming processing uses
the built-in models of the batch layer to immediately process incoming data and
rapidly provide results. We validate our architecture in the context of urban
traffic management systems in order to detect congestions.
|
[
{
"created": "Mon, 9 Aug 2021 20:50:01 GMT",
"version": "v1"
}
] |
2021-08-11
|
[
[
"Yousfi",
"Siham",
""
],
[
"Rhanoui",
"Maryem",
""
],
[
"Chiadmi",
"Dalila",
""
]
] |
Big Data are rapidly produced from various heterogeneous data sources. They are of different types (text, image, video or audio) and have different levels of reliability and completeness. One of the most interesting architectures that deal with the large amount of emerging data at high velocity is called the lambda architecture. In fact, it combines two different processing layers namely batch and speed layers, each providing specific views of data while ensuring robustness, fast and scalable data processing. However, most papers dealing with the lambda architecture are focusing one single type of data generally produced by a single data source. Besides, the layers of the architecture are implemented independently, or, at best, are combined to perform basic processing without assessing either the data reliability or completeness. Therefore, inspired by the lambda architecture, we propose in this paper a generic multimodal architecture that combines both batch and streaming processing in order to build a complete, global and accurate insight in near-real-time based on the knowledge extracted from multiple heterogeneous Big Data sources. Our architecture uses batch processing to analyze the data structures and contents, build the learning models and calculate the reliability index of the involved sources, while the streaming processing uses the built-in models of the batch layer to immediately process incoming data and rapidly provide results. We validate our architecture in the context of urban traffic management systems in order to detect congestions.
|
0810.1424
|
Sidharth Jaggi
|
Bikash Kumar Dey, Sidharth Jaggi, and Michael Langberg
|
"Real" Slepian-Wolf Codes
|
20 pages. Preliminary version presented at ISIT 2008, Toronto, Canada
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We provide a novel achievability proof of the Slepian-Wolf theorem for i.i.d.
sources over finite alphabets. We demonstrate that random codes that are linear
over the real field achieve the classical Slepian-Wolf rate-region. For finite
alphabets we show that typicality decoding is equivalent to solving an integer
program. Minimum entropy decoding is also shown to achieve exponentially small
probability of error. The techniques used may be of independent interest for
code design for a wide class of information theory problems, and for the field
of compressed sensing.
|
[
{
"created": "Wed, 8 Oct 2008 12:57:51 GMT",
"version": "v1"
}
] |
2008-10-09
|
[
[
"Dey",
"Bikash Kumar",
""
],
[
"Jaggi",
"Sidharth",
""
],
[
"Langberg",
"Michael",
""
]
] |
We provide a novel achievability proof of the Slepian-Wolf theorem for i.i.d. sources over finite alphabets. We demonstrate that random codes that are linear over the real field achieve the classical Slepian-Wolf rate-region. For finite alphabets we show that typicality decoding is equivalent to solving an integer program. Minimum entropy decoding is also shown to achieve exponentially small probability of error. The techniques used may be of independent interest for code design for a wide class of information theory problems, and for the field of compressed sensing.
|
2204.05311
|
M.Z. Naser
|
M.Z. Naser, Aybike Ozyuksel Ciftcioglu
|
Causal Discovery and Causal Learning for Fire Resistance Evaluation:
Incorporating Domain Knowledge
| null | null | null | null |
cs.LG stat.AP
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Experiments remain the gold standard to establish an understanding of
fire-related phenomena. A primary goal in designing tests is to uncover the
data generating process (i.e., the how and why the observations we see come to
be); or simply what causes such observations. Uncovering such a process not
only advances our knowledge but also provides us with the capability to be able
to predict phenomena accurately. This paper presents an approach that leverages
causal discovery and causal inference to evaluate the fire resistance of
structural members. In this approach, causal discovery algorithms are adopted
to uncover the causal structure between key variables pertaining to the fire
resistance of reinforced concrete (RC) columns. Then, companion inference
algorithms are applied to infer (estimate) the influence of each variable on
the fire resistance given a specific intervention. Finally, this study ends by
contrasting the algorithmic causal discovery with that obtained from domain
knowledge and traditional machine learning. Our findings clearly show the
potential and merit of adopting causality into our domain.
|
[
{
"created": "Mon, 11 Apr 2022 22:35:52 GMT",
"version": "v1"
}
] |
2022-04-13
|
[
[
"Naser",
"M. Z.",
""
],
[
"Ciftcioglu",
"Aybike Ozyuksel",
""
]
] |
Experiments remain the gold standard to establish an understanding of fire-related phenomena. A primary goal in designing tests is to uncover the data generating process (i.e., the how and why the observations we see come to be); or simply what causes such observations. Uncovering such a process not only advances our knowledge but also provides us with the capability to be able to predict phenomena accurately. This paper presents an approach that leverages causal discovery and causal inference to evaluate the fire resistance of structural members. In this approach, causal discovery algorithms are adopted to uncover the causal structure between key variables pertaining to the fire resistance of reinforced concrete (RC) columns. Then, companion inference algorithms are applied to infer (estimate) the influence of each variable on the fire resistance given a specific intervention. Finally, this study ends by contrasting the algorithmic causal discovery with that obtained from domain knowledge and traditional machine learning. Our findings clearly show the potential and merit of adopting causality into our domain.
|
2211.05295
|
Jianye Yi
|
Jianye Yi, Xiaopin Zhong, Weixiang Liu, Zongze Wu, Yuanlong Deng and
Zhengguang Wu
|
Harmonizing output imbalance for defect segmentation on
extremely-imbalanced photovoltaic module cells images
|
19 pages, 16 figures, 3 appendixes
| null | null | null |
cs.CV cs.LG eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The continuous development of the photovoltaic (PV) industry has raised high
requirements for the quality of monocrystalline of PV module cells. When
learning to segment defect regions in PV module cell images, Tiny Hidden Cracks
(THC) lead to extremely-imbalanced samples. The ratio of defect pixels to
normal pixels can be as low as 1:2000. This extreme imbalance makes it
difficult to segment the THC of PV module cells, which is also a challenge for
semantic segmentation. To address the problem of segmenting defects on
extremely-imbalanced THC data, the paper makes contributions from three
aspects: (1) it proposes an explicit measure for output imbalance; (2) it
generalizes a distribution-based loss that can handle different types of output
imbalances; and (3) it introduces a compound loss with our adaptive
hyperparameter selection algorithm that can keep the consistency of training
and inference for harmonizing the output imbalance on extremelyimbalanced input
data. The proposed method is evaluated on four widely-used deep learning
architectures and four datasets with varying degrees of input imbalance. The
experimental results show that the proposed method outperforms existing
methods.
|
[
{
"created": "Thu, 10 Nov 2022 02:05:17 GMT",
"version": "v1"
},
{
"created": "Mon, 12 Dec 2022 04:18:22 GMT",
"version": "v2"
},
{
"created": "Tue, 13 Dec 2022 02:09:40 GMT",
"version": "v3"
},
{
"created": "Tue, 24 Oct 2023 10:08:10 GMT",
"version": "v4"
}
] |
2023-10-25
|
[
[
"Yi",
"Jianye",
""
],
[
"Zhong",
"Xiaopin",
""
],
[
"Liu",
"Weixiang",
""
],
[
"Wu",
"Zongze",
""
],
[
"Deng",
"Yuanlong",
""
],
[
"Wu",
"Zhengguang",
""
]
] |
The continuous development of the photovoltaic (PV) industry has raised high requirements for the quality of monocrystalline of PV module cells. When learning to segment defect regions in PV module cell images, Tiny Hidden Cracks (THC) lead to extremely-imbalanced samples. The ratio of defect pixels to normal pixels can be as low as 1:2000. This extreme imbalance makes it difficult to segment the THC of PV module cells, which is also a challenge for semantic segmentation. To address the problem of segmenting defects on extremely-imbalanced THC data, the paper makes contributions from three aspects: (1) it proposes an explicit measure for output imbalance; (2) it generalizes a distribution-based loss that can handle different types of output imbalances; and (3) it introduces a compound loss with our adaptive hyperparameter selection algorithm that can keep the consistency of training and inference for harmonizing the output imbalance on extremelyimbalanced input data. The proposed method is evaluated on four widely-used deep learning architectures and four datasets with varying degrees of input imbalance. The experimental results show that the proposed method outperforms existing methods.
|
2105.05716
|
Adrian Remonda
|
Adrian Remonda, Eduardo Veas, Granit Luzhnica
|
Acting upon Imagination: when to trust imagined trajectories in model
based reinforcement learning
| null | null | null | null |
cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Model-based reinforcement learning (MBRL) aims to learn model(s) of the
environment dynamics that can predict the outcome of its actions. Forward
application of the model yields so called imagined trajectories (sequences of
action, predicted state-reward) used to optimize the set of candidate actions
that maximize expected reward. The outcome, an ideal imagined trajectory or
plan, is imperfect and typically MBRL relies on model predictive control (MPC)
to overcome this by continuously re-planning from scratch, incurring thus major
computational cost and increasing complexity in tasks with longer receding
horizon. We propose uncertainty estimation methods for online evaluation of
imagined trajectories to assess whether further planned actions can be trusted
to deliver acceptable reward. These methods include comparing the error after
performing the last action with the standard expected error and using model
uncertainty to assess the deviation from expected outcomes. Additionally, we
introduce methods that exploit the forward propagation of the dynamics model to
evaluate if the remainder of the plan aligns with expected results and assess
the remainder of the plan in terms of the expected reward. Our experiments
demonstrate the effectiveness of the proposed uncertainty estimation methods by
applying them to avoid unnecessary trajectory replanning in a shooting MBRL
setting. Results highlight significant reduction on computational costs without
sacrificing performance.
|
[
{
"created": "Wed, 12 May 2021 15:04:07 GMT",
"version": "v1"
},
{
"created": "Thu, 13 May 2021 10:26:30 GMT",
"version": "v2"
},
{
"created": "Wed, 9 Nov 2022 17:34:08 GMT",
"version": "v3"
},
{
"created": "Fri, 11 Nov 2022 10:47:43 GMT",
"version": "v4"
},
{
"created": "Thu, 18 Apr 2024 23:45:00 GMT",
"version": "v5"
},
{
"created": "Tue, 30 Jul 2024 14:25:07 GMT",
"version": "v6"
}
] |
2024-07-31
|
[
[
"Remonda",
"Adrian",
""
],
[
"Veas",
"Eduardo",
""
],
[
"Luzhnica",
"Granit",
""
]
] |
Model-based reinforcement learning (MBRL) aims to learn model(s) of the environment dynamics that can predict the outcome of its actions. Forward application of the model yields so called imagined trajectories (sequences of action, predicted state-reward) used to optimize the set of candidate actions that maximize expected reward. The outcome, an ideal imagined trajectory or plan, is imperfect and typically MBRL relies on model predictive control (MPC) to overcome this by continuously re-planning from scratch, incurring thus major computational cost and increasing complexity in tasks with longer receding horizon. We propose uncertainty estimation methods for online evaluation of imagined trajectories to assess whether further planned actions can be trusted to deliver acceptable reward. These methods include comparing the error after performing the last action with the standard expected error and using model uncertainty to assess the deviation from expected outcomes. Additionally, we introduce methods that exploit the forward propagation of the dynamics model to evaluate if the remainder of the plan aligns with expected results and assess the remainder of the plan in terms of the expected reward. Our experiments demonstrate the effectiveness of the proposed uncertainty estimation methods by applying them to avoid unnecessary trajectory replanning in a shooting MBRL setting. Results highlight significant reduction on computational costs without sacrificing performance.
|
2309.08369
|
Yinqi Li
|
Zhupeng Ye, Yinqi Li, Zejian Yuan
|
An Efficient Wide-Range Pseudo-3D Vehicle Detection Using A Single
Camera
|
11 pages, 27 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Wide-range and fine-grained vehicle detection plays a critical role in
enabling active safety features in intelligent driving systems. However,
existing vehicle detection methods based on rectangular bounding boxes (BBox)
often struggle with perceiving wide-range objects, especially small objects at
long distances. And BBox expression cannot provide detailed geometric shape and
pose information of vehicles. This paper proposes a novel wide-range Pseudo-3D
Vehicle Detection method based on images from a single camera and incorporates
efficient learning methods. This model takes a spliced image as input, which is
obtained by combining two sub-window images from a high-resolution image. This
image format maximizes the utilization of limited image resolution to retain
essential information about wide-range vehicle objects. To detect pseudo-3D
objects, our model adopts specifically designed detection heads. These heads
simultaneously output extended BBox and Side Projection Line (SPL)
representations, which capture vehicle shapes and poses, enabling
high-precision detection. To further enhance the performance of detection, a
joint constraint loss combining both the object box and SPL is designed during
model training, improving the efficiency, stability, and prediction accuracy of
the model. Experimental results on our self-built dataset demonstrate that our
model achieves favorable performance in wide-range pseudo-3D vehicle detection
across multiple evaluation metrics. Our demo video has been placed at
https://www.youtube.com/watch?v=1gk1PmsQ5Q8.
|
[
{
"created": "Fri, 15 Sep 2023 12:50:09 GMT",
"version": "v1"
}
] |
2023-09-18
|
[
[
"Ye",
"Zhupeng",
""
],
[
"Li",
"Yinqi",
""
],
[
"Yuan",
"Zejian",
""
]
] |
Wide-range and fine-grained vehicle detection plays a critical role in enabling active safety features in intelligent driving systems. However, existing vehicle detection methods based on rectangular bounding boxes (BBox) often struggle with perceiving wide-range objects, especially small objects at long distances. And BBox expression cannot provide detailed geometric shape and pose information of vehicles. This paper proposes a novel wide-range Pseudo-3D Vehicle Detection method based on images from a single camera and incorporates efficient learning methods. This model takes a spliced image as input, which is obtained by combining two sub-window images from a high-resolution image. This image format maximizes the utilization of limited image resolution to retain essential information about wide-range vehicle objects. To detect pseudo-3D objects, our model adopts specifically designed detection heads. These heads simultaneously output extended BBox and Side Projection Line (SPL) representations, which capture vehicle shapes and poses, enabling high-precision detection. To further enhance the performance of detection, a joint constraint loss combining both the object box and SPL is designed during model training, improving the efficiency, stability, and prediction accuracy of the model. Experimental results on our self-built dataset demonstrate that our model achieves favorable performance in wide-range pseudo-3D vehicle detection across multiple evaluation metrics. Our demo video has been placed at https://www.youtube.com/watch?v=1gk1PmsQ5Q8.
|
1805.01129
|
Palakorn Achananuparp
|
Palakorn Achananuparp, Ee-Peng Lim, Vibhanshu Abhishek
|
Does Journaling Encourage Healthier Choices? Analyzing Healthy Eating
Behaviors of Food Journalers
|
Published at Digital Health 2018
| null |
10.1145/3194658.3194663
| null |
cs.SI cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Past research has shown the benefits of food journaling in promoting mindful
eating and healthier food choices. However, the links between journaling and
healthy eating have not been thoroughly examined. Beyond caloric restriction,
do journalers consistently and sufficiently consume healthful diets? How
different are their eating habits compared to those of average consumers who
tend to be less conscious about health? In this study, we analyze the healthy
eating behaviors of active food journalers using data from MyFitnessPal.
Surprisingly, our findings show that food journalers do not eat as healthily as
they should despite their proclivity to health eating and their food choices
resemble those of the general populace. Furthermore, we find that the
journaling duration is only a marginal determinant of healthy eating outcomes
and sociodemographic factors, such as gender and regions of residence, are much
more predictive of healthy food choices.
|
[
{
"created": "Thu, 3 May 2018 05:59:22 GMT",
"version": "v1"
}
] |
2020-10-28
|
[
[
"Achananuparp",
"Palakorn",
""
],
[
"Lim",
"Ee-Peng",
""
],
[
"Abhishek",
"Vibhanshu",
""
]
] |
Past research has shown the benefits of food journaling in promoting mindful eating and healthier food choices. However, the links between journaling and healthy eating have not been thoroughly examined. Beyond caloric restriction, do journalers consistently and sufficiently consume healthful diets? How different are their eating habits compared to those of average consumers who tend to be less conscious about health? In this study, we analyze the healthy eating behaviors of active food journalers using data from MyFitnessPal. Surprisingly, our findings show that food journalers do not eat as healthily as they should despite their proclivity to health eating and their food choices resemble those of the general populace. Furthermore, we find that the journaling duration is only a marginal determinant of healthy eating outcomes and sociodemographic factors, such as gender and regions of residence, are much more predictive of healthy food choices.
|
2310.02076
|
Cindy Xiong Bearfield
|
Cindy Xiong Bearfield, Chase Stokes, Andrew Lovett, Steven Franconeri
|
What Does the Chart Say? Grouping Cues Guide Viewer Comparisons and
Conclusions in Bar Charts
| null | null |
10.1109/TVCG.2023.3289292
| null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Reading a visualization is like reading a paragraph. Each sentence is a
comparison: the mean of these is higher than those; this difference is smaller
than that. What determines which comparisons are made first? The viewer's goals
and expertise matter, but the way that values are visually grouped together
within the chart also impacts those comparisons. Research from psychology
suggests that comparisons involve multiple steps. First, the viewer divides the
visualization into a set of units. This might include a single bar or a grouped
set of bars. Then the viewer selects and compares two of these units, perhaps
noting that one pair of bars is longer than another. Viewers might take an
additional third step and perform a second-order comparison, perhaps
determining that the difference between one pair of bars is greater than the
difference between another pair. We create a visual comparison taxonomy that
allows us to develop and test a sequence of hypotheses about which comparisons
people are more likely to make when reading a visualization. We find that
people tend to compare two groups before comparing two individual bars and that
second-order comparisons are rare. Visual cues like spatial proximity and color
can influence which elements are grouped together and selected for comparison,
with spatial proximity being a stronger grouping cue. Interestingly, once the
viewer grouped together and compared a set of bars, regardless of whether the
group is formed by spatial proximity or color similarity, they no longer
consider other possible groupings in their comparisons.
|
[
{
"created": "Tue, 3 Oct 2023 14:16:25 GMT",
"version": "v1"
}
] |
2023-10-04
|
[
[
"Bearfield",
"Cindy Xiong",
""
],
[
"Stokes",
"Chase",
""
],
[
"Lovett",
"Andrew",
""
],
[
"Franconeri",
"Steven",
""
]
] |
Reading a visualization is like reading a paragraph. Each sentence is a comparison: the mean of these is higher than those; this difference is smaller than that. What determines which comparisons are made first? The viewer's goals and expertise matter, but the way that values are visually grouped together within the chart also impacts those comparisons. Research from psychology suggests that comparisons involve multiple steps. First, the viewer divides the visualization into a set of units. This might include a single bar or a grouped set of bars. Then the viewer selects and compares two of these units, perhaps noting that one pair of bars is longer than another. Viewers might take an additional third step and perform a second-order comparison, perhaps determining that the difference between one pair of bars is greater than the difference between another pair. We create a visual comparison taxonomy that allows us to develop and test a sequence of hypotheses about which comparisons people are more likely to make when reading a visualization. We find that people tend to compare two groups before comparing two individual bars and that second-order comparisons are rare. Visual cues like spatial proximity and color can influence which elements are grouped together and selected for comparison, with spatial proximity being a stronger grouping cue. Interestingly, once the viewer grouped together and compared a set of bars, regardless of whether the group is formed by spatial proximity or color similarity, they no longer consider other possible groupings in their comparisons.
|
2404.15697
|
Orazio Pontorno
|
Orazio Pontorno (1), Luca Guarnera (1), Sebastiano Battiato (1) ((1)
University of Catania)
|
DeepFeatureX Net: Deep Features eXtractors based Network for
discriminating synthetic from real images
| null | null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deepfakes, synthetic images generated by deep learning algorithms, represent
one of the biggest challenges in the field of Digital Forensics. The scientific
community is working to develop approaches that can discriminate the origin of
digital images (real or AI-generated). However, these methodologies face the
challenge of generalization, that is, the ability to discern the nature of an
image even if it is generated by an architecture not seen during training. This
usually leads to a drop in performance. In this context, we propose a novel
approach based on three blocks called Base Models, each of which is responsible
for extracting the discriminative features of a specific image class (Diffusion
Model-generated, GAN-generated, or real) as it is trained by exploiting
deliberately unbalanced datasets. The features extracted from each block are
then concatenated and processed to discriminate the origin of the input image.
Experimental results showed that this approach not only demonstrates good
robust capabilities to JPEG compression but also outperforms state-of-the-art
methods in several generalization tests. Code, models and dataset are available
at https://github.com/opontorno/block-based_deepfake-detection.
|
[
{
"created": "Wed, 24 Apr 2024 07:25:36 GMT",
"version": "v1"
}
] |
2024-04-25
|
[
[
"Pontorno",
"Orazio",
""
],
[
"Guarnera",
"Luca",
""
],
[
"Battiato",
"Sebastiano",
""
]
] |
Deepfakes, synthetic images generated by deep learning algorithms, represent one of the biggest challenges in the field of Digital Forensics. The scientific community is working to develop approaches that can discriminate the origin of digital images (real or AI-generated). However, these methodologies face the challenge of generalization, that is, the ability to discern the nature of an image even if it is generated by an architecture not seen during training. This usually leads to a drop in performance. In this context, we propose a novel approach based on three blocks called Base Models, each of which is responsible for extracting the discriminative features of a specific image class (Diffusion Model-generated, GAN-generated, or real) as it is trained by exploiting deliberately unbalanced datasets. The features extracted from each block are then concatenated and processed to discriminate the origin of the input image. Experimental results showed that this approach not only demonstrates good robust capabilities to JPEG compression but also outperforms state-of-the-art methods in several generalization tests. Code, models and dataset are available at https://github.com/opontorno/block-based_deepfake-detection.
|
1311.0413
|
Gordana Dodig Crnkovic
|
Gordana Dodig-Crnkovic
|
Information, Computation, Cognition. Agency-based Hierarchies of Levels
|
5 pages
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/3.0/
|
Nature can be seen as informational structure with computational dynamics
(info-computationalism), where an (info-computational) agent is needed for the
potential information of the world to actualize. Starting from the definition
of information as the difference in one physical system that makes a difference
in another physical system, which combines Bateson and Hewitt definitions, the
argument is advanced for natural computation as a computational model of the
dynamics of the physical world where information processing is constantly going
on, on a variety of levels of organization. This setting helps elucidating the
relationships between computation, information, agency and cognition, within
the common conceptual framework, which has special relevance for biology and
robotics.
|
[
{
"created": "Sat, 2 Nov 2013 21:33:11 GMT",
"version": "v1"
}
] |
2013-11-05
|
[
[
"Dodig-Crnkovic",
"Gordana",
""
]
] |
Nature can be seen as informational structure with computational dynamics (info-computationalism), where an (info-computational) agent is needed for the potential information of the world to actualize. Starting from the definition of information as the difference in one physical system that makes a difference in another physical system, which combines Bateson and Hewitt definitions, the argument is advanced for natural computation as a computational model of the dynamics of the physical world where information processing is constantly going on, on a variety of levels of organization. This setting helps elucidating the relationships between computation, information, agency and cognition, within the common conceptual framework, which has special relevance for biology and robotics.
|
2306.05582
|
Denizhan Oak
|
Denizhan Pak, Donsuk Lee, Samantha M. W. Wood, Justin N. Wood
|
A newborn embodied Turing test for view-invariant object recognition
|
7 Pages. 4 figures, 1 table. This paper was accepted to the CogSci
2023 Conference. (https://cognitivesciencesociety.org/)
| null | null | null |
cs.AI q-bio.NC
|
http://creativecommons.org/licenses/by/4.0/
|
Recent progress in artificial intelligence has renewed interest in building
machines that learn like animals. Almost all of the work comparing learning
across biological and artificial systems comes from studies where animals and
machines received different training data, obscuring whether differences
between animals and machines emerged from differences in learning mechanisms
versus training data. We present an experimental approach-a "newborn embodied
Turing Test"-that allows newborn animals and machines to be raised in the same
environments and tested with the same tasks, permitting direct comparison of
their learning abilities. To make this platform, we first collected
controlled-rearing data from newborn chicks, then performed "digital twin"
experiments in which machines were raised in virtual environments that mimicked
the rearing conditions of the chicks. We found that (1) machines (deep
reinforcement learning agents with intrinsic motivation) can spontaneously
develop visually guided preference behavior, akin to imprinting in newborn
chicks, and (2) machines are still far from newborn-level performance on object
recognition tasks. Almost all of the chicks developed view-invariant object
recognition, whereas the machines tended to develop view-dependent recognition.
The learning outcomes were also far more constrained in the chicks versus
machines. Ultimately, we anticipate that this approach will help researchers
develop embodied AI systems that learn like newborn animals.
|
[
{
"created": "Thu, 8 Jun 2023 22:46:31 GMT",
"version": "v1"
}
] |
2023-06-12
|
[
[
"Pak",
"Denizhan",
""
],
[
"Lee",
"Donsuk",
""
],
[
"Wood",
"Samantha M. W.",
""
],
[
"Wood",
"Justin N.",
""
]
] |
Recent progress in artificial intelligence has renewed interest in building machines that learn like animals. Almost all of the work comparing learning across biological and artificial systems comes from studies where animals and machines received different training data, obscuring whether differences between animals and machines emerged from differences in learning mechanisms versus training data. We present an experimental approach-a "newborn embodied Turing Test"-that allows newborn animals and machines to be raised in the same environments and tested with the same tasks, permitting direct comparison of their learning abilities. To make this platform, we first collected controlled-rearing data from newborn chicks, then performed "digital twin" experiments in which machines were raised in virtual environments that mimicked the rearing conditions of the chicks. We found that (1) machines (deep reinforcement learning agents with intrinsic motivation) can spontaneously develop visually guided preference behavior, akin to imprinting in newborn chicks, and (2) machines are still far from newborn-level performance on object recognition tasks. Almost all of the chicks developed view-invariant object recognition, whereas the machines tended to develop view-dependent recognition. The learning outcomes were also far more constrained in the chicks versus machines. Ultimately, we anticipate that this approach will help researchers develop embodied AI systems that learn like newborn animals.
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.