| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1805.10807
|
Suofei Zhang
|
Suofei Zhang, Wei Zhao, Xiaofu Wu, Quan Zhou
|
Fast Dynamic Routing Based on Weighted Kernel Density Estimation
|
16 pages, 4 figures, submitted to eccv 2018
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Capsules, together with the dynamic routing between them, are recently
proposed structures for deep neural networks. A capsule groups data into
vectors or matrices as poses, rather than conventional scalars, to represent
specific properties of a target instance. Besides its pose, a capsule should
also carry a probability (often denoted as its activation) for its presence.
Dynamic routing helps capsules achieve greater generalization capacity with
many fewer model parameters. However, the bottleneck that prevents widespread
application of capsules is the computational expense of routing. To address
this problem, we generalize existing routing methods within the framework of
weighted kernel density estimation and propose two fast routing methods with
different optimization strategies. Our methods improve the time efficiency of
routing by nearly 40\% with negligible performance degradation. By stacking a
hybrid of convolutional layers and capsule layers, we construct a network
architecture that handles inputs at a resolution of $64\times{64}$ pixels. The
proposed models achieve performance on par with other leading methods on
multiple benchmarks.
|
[
{
"created": "Mon, 28 May 2018 08:18:31 GMT",
"version": "v1"
},
{
"created": "Sat, 1 Sep 2018 03:14:29 GMT",
"version": "v2"
}
] |
2018-09-05
|
[
[
"Zhang",
"Suofei",
""
],
[
"Zhao",
"Wei",
""
],
[
"Wu",
"Xiaofu",
""
],
[
"Zhou",
"Quan",
""
]
] |
Capsules, together with the dynamic routing between them, are recently proposed structures for deep neural networks. A capsule groups data into vectors or matrices as poses, rather than conventional scalars, to represent specific properties of a target instance. Besides its pose, a capsule should also carry a probability (often denoted as its activation) for its presence. Dynamic routing helps capsules achieve greater generalization capacity with many fewer model parameters. However, the bottleneck that prevents widespread application of capsules is the computational expense of routing. To address this problem, we generalize existing routing methods within the framework of weighted kernel density estimation and propose two fast routing methods with different optimization strategies. Our methods improve the time efficiency of routing by nearly 40\% with negligible performance degradation. By stacking a hybrid of convolutional layers and capsule layers, we construct a network architecture that handles inputs at a resolution of $64\times{64}$ pixels. The proposed models achieve performance on par with other leading methods on multiple benchmarks.
|
2212.05757
|
Sheikh Salman Hassan
|
Sheikh Salman Hassan, Yu Min Park, Yan Kyaw Tun, Walid Saad, Zhu Han,
Choong Seon Hong
|
Satellite-based ITS Data Offloading & Computation in 6G Networks: A
Cooperative Multi-Agent Proximal Policy Optimization DRL with Attention
Approach
|
18 Pages, 20 Figures, Submitted to IEEE Transactions on Mobile
Computing (TMC)-(Under Major Revision)
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The proliferation of intelligent transportation systems (ITS) has led to
increasing demand for diverse network applications. However, conventional
terrestrial access networks (TANs) are inadequate in accommodating various
applications for remote ITS nodes, e.g., airplanes and ships. In contrast,
satellite access networks (SANs) offer supplementary support for TANs, in terms
of coverage flexibility and availability. In this study, we propose a novel
approach to ITS data offloading and computation services based on SANs. We use
low-Earth orbit (LEO) and cube satellites (CubeSats) as independent mobile edge
computing (MEC) servers that schedule the processing of data generated by ITS
nodes. To optimize offloading task selection, computing, and bandwidth resource
allocation for different satellite servers, we formulate a joint delay and
rental price minimization problem that is mixed-integer non-linear programming
(MINLP) and NP-hard. We propose a cooperative multi-agent proximal policy
optimization (Co-MAPPO) deep reinforcement learning (DRL) approach with an
attention mechanism to deal with intelligent offloading decisions. We also
decompose the remaining subproblem into three independent subproblems for
resource allocation and use convex optimization techniques to obtain their
optimal closed-form analytical solutions. We conduct extensive simulations and
compare our proposed approach with baselines, obtaining performance
improvements of 9.9%, 5.2%, and 4.2% over the respective baselines.
|
[
{
"created": "Mon, 12 Dec 2022 08:14:57 GMT",
"version": "v1"
},
{
"created": "Thu, 15 Jun 2023 02:16:54 GMT",
"version": "v2"
}
] |
2023-06-16
|
[
[
"Hassan",
"Sheikh Salman",
""
],
[
"Park",
"Yu Min",
""
],
[
"Tun",
"Yan Kyaw",
""
],
[
"Saad",
"Walid",
""
],
[
"Han",
"Zhu",
""
],
[
"Hong",
"Choong Seon",
""
]
] |
The proliferation of intelligent transportation systems (ITS) has led to increasing demand for diverse network applications. However, conventional terrestrial access networks (TANs) are inadequate in accommodating various applications for remote ITS nodes, e.g., airplanes and ships. In contrast, satellite access networks (SANs) offer supplementary support for TANs, in terms of coverage flexibility and availability. In this study, we propose a novel approach to ITS data offloading and computation services based on SANs. We use low-Earth orbit (LEO) and cube satellites (CubeSats) as independent mobile edge computing (MEC) servers that schedule the processing of data generated by ITS nodes. To optimize offloading task selection, computing, and bandwidth resource allocation for different satellite servers, we formulate a joint delay and rental price minimization problem that is mixed-integer non-linear programming (MINLP) and NP-hard. We propose a cooperative multi-agent proximal policy optimization (Co-MAPPO) deep reinforcement learning (DRL) approach with an attention mechanism to deal with intelligent offloading decisions. We also decompose the remaining subproblem into three independent subproblems for resource allocation and use convex optimization techniques to obtain their optimal closed-form analytical solutions. We conduct extensive simulations and compare our proposed approach with baselines, obtaining performance improvements of 9.9%, 5.2%, and 4.2% over the respective baselines.
|
2305.05062
|
Hyeokhyen Kwon
|
Hyeokhyen Kwon, Chaitra Hegde, Yashar Kiarashi, Venkata Siva Krishna
Madala, Ratan Singh, ArjunSinh Nakum, Robert Tweedy, Leandro Miletto Tonetto,
Craig M. Zimring, Matthew Doiron, Amy D. Rodriguez, Allan I. Levey, and Gari
D. Clifford
|
A Feasibility Study on Indoor Localization and Multi-person Tracking
Using Sparsely Distributed Camera Network with Edge Computing
| null | null |
10.1109/JISPIN.2023.3337189
| null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Camera-based activity monitoring systems are becoming an attractive solution
for smart building applications with the advances in computer vision and edge
computing technologies. In this paper, we present a feasibility study and
systematic analysis of a camera-based indoor localization and multi-person
tracking system implemented on edge computing devices within a large indoor
space. To this end, we deployed an end-to-end edge computing pipeline that
utilizes multiple cameras to achieve localization, body orientation estimation,
and tracking of multiple individuals within a large therapeutic space spanning
$1700m^2$, all while maintaining a strong focus on preserving privacy. Our
pipeline consists of 39 edge computing camera systems equipped with Tensor
Processing Units (TPUs) mounted on the ceiling of the indoor space. To ensure
the privacy of individuals, a real-time multi-person pose estimation algorithm
runs on the TPU of each camera system. This algorithm extracts poses and
bounding boxes, which are utilized for indoor localization, body orientation
estimation, and multi-person tracking. Our pipeline demonstrated an average
localization error of 1.41 meters, a multiple-object tracking accuracy score of
88.6\%, and a mean absolute body orientation error of 29\degree. These results
show that localization and tracking of individuals in a large indoor space is
feasible even under these privacy constraints.
|
[
{
"created": "Mon, 8 May 2023 21:38:42 GMT",
"version": "v1"
},
{
"created": "Wed, 29 Nov 2023 14:23:09 GMT",
"version": "v2"
}
] |
2023-11-30
|
[
[
"Kwon",
"Hyeokhyen",
""
],
[
"Hegde",
"Chaitra",
""
],
[
"Kiarashi",
"Yashar",
""
],
[
"Madala",
"Venkata Siva Krishna",
""
],
[
"Singh",
"Ratan",
""
],
[
"Nakum",
"ArjunSinh",
""
],
[
"Tweedy",
"Robert",
""
],
[
"Tonetto",
"Leandro Miletto",
""
],
[
"Zimring",
"Craig M.",
""
],
[
"Doiron",
"Matthew",
""
],
[
"Rodriguez",
"Amy D.",
""
],
[
"Levey",
"Allan I.",
""
],
[
"Clifford",
"Gari D.",
""
]
] |
Camera-based activity monitoring systems are becoming an attractive solution for smart building applications with the advances in computer vision and edge computing technologies. In this paper, we present a feasibility study and systematic analysis of a camera-based indoor localization and multi-person tracking system implemented on edge computing devices within a large indoor space. To this end, we deployed an end-to-end edge computing pipeline that utilizes multiple cameras to achieve localization, body orientation estimation, and tracking of multiple individuals within a large therapeutic space spanning $1700m^2$, all while maintaining a strong focus on preserving privacy. Our pipeline consists of 39 edge computing camera systems equipped with Tensor Processing Units (TPUs) mounted on the ceiling of the indoor space. To ensure the privacy of individuals, a real-time multi-person pose estimation algorithm runs on the TPU of each camera system. This algorithm extracts poses and bounding boxes, which are utilized for indoor localization, body orientation estimation, and multi-person tracking. Our pipeline demonstrated an average localization error of 1.41 meters, a multiple-object tracking accuracy score of 88.6\%, and a mean absolute body orientation error of 29\degree. These results show that localization and tracking of individuals in a large indoor space is feasible even under these privacy constraints.
|
2407.08442
|
Linglong Qian
|
Linglong Qian, Tao Wang, Jun Wang, Hugh Logan Ellis, Robin Mitra,
Richard Dobson, Zina Ibrahim
|
How Deep is your Guess? A Fresh Perspective on Deep Learning for Medical
Time-Series Imputation
| null | null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a novel classification framework for time-series imputation
using deep learning, with a particular focus on clinical data. By identifying
conceptual gaps in the literature and existing reviews, we devise a taxonomy
grounded on the inductive bias of neural imputation frameworks, resulting in a
classification of existing deep imputation strategies based on their
suitability for specific imputation scenarios and data-specific properties. Our
review further examines the existing methodologies employed to benchmark deep
imputation models, evaluating their effectiveness in capturing the missingness
scenarios found in clinical data and emphasising the importance of reconciling
mathematical abstraction with clinical insights. Our classification aims to
serve as a guide for researchers to facilitate the selection of appropriate
deep learning imputation techniques tailored to their specific clinical data.
Our novel perspective also highlights the significance of bridging the gap
between computational methodologies and medical insights to achieve clinically
sound imputation models.
|
[
{
"created": "Thu, 11 Jul 2024 12:33:28 GMT",
"version": "v1"
}
] |
2024-07-12
|
[
[
"Qian",
"Linglong",
""
],
[
"Wang",
"Tao",
""
],
[
"Wang",
"Jun",
""
],
[
"Ellis",
"Hugh Logan",
""
],
[
"Mitra",
"Robin",
""
],
[
"Dobson",
"Richard",
""
],
[
"Ibrahim",
"Zina",
""
]
] |
We introduce a novel classification framework for time-series imputation using deep learning, with a particular focus on clinical data. By identifying conceptual gaps in the literature and existing reviews, we devise a taxonomy grounded on the inductive bias of neural imputation frameworks, resulting in a classification of existing deep imputation strategies based on their suitability for specific imputation scenarios and data-specific properties. Our review further examines the existing methodologies employed to benchmark deep imputation models, evaluating their effectiveness in capturing the missingness scenarios found in clinical data and emphasising the importance of reconciling mathematical abstraction with clinical insights. Our classification aims to serve as a guide for researchers to facilitate the selection of appropriate deep learning imputation techniques tailored to their specific clinical data. Our novel perspective also highlights the significance of bridging the gap between computational methodologies and medical insights to achieve clinically sound imputation models.
|
2011.11542
|
Chi Ian Tang
|
Chi Ian Tang, Ignacio Perez-Pozuelo, Dimitris Spathis, Cecilia Mascolo
|
Exploring Contrastive Learning in Human Activity Recognition for
Healthcare
|
Presented at Machine Learning for Mobile Health Workshop at NeurIPS
2020, Vancouver, Canada
| null | null | null |
cs.LG eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human Activity Recognition (HAR) constitutes one of the most important tasks
for wearable and mobile sensing given its implications in human well-being and
health monitoring. Motivated by the limitations of labeled datasets in HAR,
particularly when employed in healthcare-related applications, this work
explores the adoption and adaptation of SimCLR, a contrastive learning
technique for visual representations, to HAR. The use of contrastive learning
objectives causes the representations of corresponding views to be more
similar, and those of non-corresponding views to be more different. After an
extensive evaluation exploring 64 combinations of different signal
transformations for augmenting the data, we observed significant performance
differences owing to the order and the choice of the transformation functions.
In particular, preliminary results indicated an improvement over supervised and
unsupervised learning methods when using fine-tuning and random rotation for
augmentation; however, future work should explore under which conditions SimCLR
is beneficial for HAR systems and other healthcare-related applications.
|
[
{
"created": "Mon, 23 Nov 2020 16:55:22 GMT",
"version": "v1"
},
{
"created": "Thu, 24 Dec 2020 00:12:54 GMT",
"version": "v2"
},
{
"created": "Thu, 11 Feb 2021 05:41:43 GMT",
"version": "v3"
}
] |
2021-02-12
|
[
[
"Tang",
"Chi Ian",
""
],
[
"Perez-Pozuelo",
"Ignacio",
""
],
[
"Spathis",
"Dimitris",
""
],
[
"Mascolo",
"Cecilia",
""
]
] |
Human Activity Recognition (HAR) constitutes one of the most important tasks for wearable and mobile sensing given its implications in human well-being and health monitoring. Motivated by the limitations of labeled datasets in HAR, particularly when employed in healthcare-related applications, this work explores the adoption and adaptation of SimCLR, a contrastive learning technique for visual representations, to HAR. The use of contrastive learning objectives causes the representations of corresponding views to be more similar, and those of non-corresponding views to be more different. After an extensive evaluation exploring 64 combinations of different signal transformations for augmenting the data, we observed significant performance differences owing to the order and the choice of the transformation functions. In particular, preliminary results indicated an improvement over supervised and unsupervised learning methods when using fine-tuning and random rotation for augmentation; however, future work should explore under which conditions SimCLR is beneficial for HAR systems and other healthcare-related applications.
|
1906.05202
|
Chia-Wen Kuo
|
Chia-Wen Kuo, Chih-Yao Ma, Jia-Bin Huang, Zsolt Kira
|
Manifold Graph with Learned Prototypes for Semi-Supervised Image
Classification
|
Project site:
https://sites.google.com/view/manifold-graph-with-prototypes/home
| null | null | null |
cs.LG cs.CV stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent advances in semi-supervised learning methods rely on estimating the
categories of unlabeled data using a model trained on the labeled data
(pseudo-labeling) and using the unlabeled data for various consistency-based
regularization. In this work, we propose to explicitly leverage the structure
of the data manifold based on a Manifold Graph constructed over the image
instances within the feature space. Specifically, we propose an architecture
based on graph networks that jointly optimizes feature extraction, graph
connectivity, and feature propagation and aggregation to unlabeled data in an
end-to-end manner. Further, we present a novel Prototype Generator for
producing a diverse set of prototypes that compactly represent each category,
which supports feature propagation. To evaluate our method, we first contribute
a strong baseline that combines two consistency-based regularizers that already
achieves state-of-the-art results especially with fewer labels. We then show
that when combined with these regularizers, the proposed method facilitates the
propagation of information from generated prototypes to image data to further
improve results. We provide extensive qualitative and quantitative experimental
results on semi-supervised benchmarks demonstrating the improvements arising
from our design and show that our method achieves state-of-the-art performance
when compared with existing single-model methods, and performance comparable
to that of ensemble methods. Specifically, we achieve error rates of 3.35% on
SVHN, 8.27% on CIFAR-10, and 33.83% on CIFAR-100. With far fewer labels, we
surpass the state of the art by a significant margin of 41% relative error
decrease on average.
|
[
{
"created": "Wed, 12 Jun 2019 15:18:36 GMT",
"version": "v1"
},
{
"created": "Thu, 13 Jun 2019 02:46:21 GMT",
"version": "v2"
}
] |
2019-06-14
|
[
[
"Kuo",
"Chia-Wen",
""
],
[
"Ma",
"Chih-Yao",
""
],
[
"Huang",
"Jia-Bin",
""
],
[
"Kira",
"Zsolt",
""
]
] |
Recent advances in semi-supervised learning methods rely on estimating the categories of unlabeled data using a model trained on the labeled data (pseudo-labeling) and using the unlabeled data for various consistency-based regularization. In this work, we propose to explicitly leverage the structure of the data manifold based on a Manifold Graph constructed over the image instances within the feature space. Specifically, we propose an architecture based on graph networks that jointly optimizes feature extraction, graph connectivity, and feature propagation and aggregation to unlabeled data in an end-to-end manner. Further, we present a novel Prototype Generator for producing a diverse set of prototypes that compactly represent each category, which supports feature propagation. To evaluate our method, we first contribute a strong baseline that combines two consistency-based regularizers that already achieves state-of-the-art results especially with fewer labels. We then show that when combined with these regularizers, the proposed method facilitates the propagation of information from generated prototypes to image data to further improve results. We provide extensive qualitative and quantitative experimental results on semi-supervised benchmarks demonstrating the improvements arising from our design and show that our method achieves state-of-the-art performance when compared with existing single-model methods, and performance comparable to that of ensemble methods. Specifically, we achieve error rates of 3.35% on SVHN, 8.27% on CIFAR-10, and 33.83% on CIFAR-100. With far fewer labels, we surpass the state of the art by a significant margin of 41% relative error decrease on average.
|
1811.10830
|
Rowan Zellers
|
Rowan Zellers, Yonatan Bisk, Ali Farhadi, Yejin Choi
|
From Recognition to Cognition: Visual Commonsense Reasoning
|
CVPR 2019 oral. Project page at https://visualcommonsense.com
| null | null | null |
cs.CV cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Visual understanding goes well beyond object recognition. With one glance at
an image, we can effortlessly imagine the world beyond the pixels: for
instance, we can infer people's actions, goals, and mental states. While this
task is easy for humans, it is tremendously difficult for today's vision
systems, requiring higher-order cognition and commonsense reasoning about the
world. We formalize this task as Visual Commonsense Reasoning. Given a
challenging question about an image, a machine must answer correctly and then
provide a rationale justifying its answer.
Next, we introduce a new dataset, VCR, consisting of 290k multiple choice QA
problems derived from 110k movie scenes. The key recipe for generating
non-trivial and high-quality problems at scale is Adversarial Matching, a new
approach to transform rich annotations into multiple choice questions with
minimal bias. Experimental results show that while humans find VCR easy (over
90% accuracy), state-of-the-art vision models struggle (~45%).
To move towards cognition-level understanding, we present a new reasoning
engine, Recognition to Cognition Networks (R2C), that models the necessary
layered inferences for grounding, contextualization, and reasoning. R2C helps
narrow the gap between humans and machines (~65%); still, the challenge is far
from solved, and we provide analysis that suggests avenues for future work.
|
[
{
"created": "Tue, 27 Nov 2018 06:22:26 GMT",
"version": "v1"
},
{
"created": "Tue, 26 Mar 2019 17:50:34 GMT",
"version": "v2"
}
] |
2019-03-27
|
[
[
"Zellers",
"Rowan",
""
],
[
"Bisk",
"Yonatan",
""
],
[
"Farhadi",
"Ali",
""
],
[
"Choi",
"Yejin",
""
]
] |
Visual understanding goes well beyond object recognition. With one glance at an image, we can effortlessly imagine the world beyond the pixels: for instance, we can infer people's actions, goals, and mental states. While this task is easy for humans, it is tremendously difficult for today's vision systems, requiring higher-order cognition and commonsense reasoning about the world. We formalize this task as Visual Commonsense Reasoning. Given a challenging question about an image, a machine must answer correctly and then provide a rationale justifying its answer. Next, we introduce a new dataset, VCR, consisting of 290k multiple choice QA problems derived from 110k movie scenes. The key recipe for generating non-trivial and high-quality problems at scale is Adversarial Matching, a new approach to transform rich annotations into multiple choice questions with minimal bias. Experimental results show that while humans find VCR easy (over 90% accuracy), state-of-the-art vision models struggle (~45%). To move towards cognition-level understanding, we present a new reasoning engine, Recognition to Cognition Networks (R2C), that models the necessary layered inferences for grounding, contextualization, and reasoning. R2C helps narrow the gap between humans and machines (~65%); still, the challenge is far from solved, and we provide analysis that suggests avenues for future work.
|
2002.04547
|
Steffen Haas
|
Steffen Haas, Robin Sommer, Mathias Fischer
|
zeek-osquery: Host-Network Correlation for Advanced Monitoring and
Intrusion Detection
|
Accepted for publication at ICT Systems Security and Privacy
Protection (IFIP) SEC 2020
| null |
10.1007/978-3-030-58201-2_17
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Intrusion Detection Systems (IDSs) can analyze network traffic for signs of
attacks and intrusions. However, encrypted communication limits their
visibility and sophisticated attackers additionally try to evade their
detection. To overcome these limitations, we extend the scope of Network IDSs
(NIDSs) with additional data from the hosts. For that, we propose the
integrated open-source zeek-osquery platform that combines the Zeek IDS with
the osquery host monitor. Our platform can collect, process, and correlate host
and network data at large scale, e.g., to attribute network flows to processes
and users. The platform can be flexibly extended with custom detection scripts
that use not only the already correlated data, but also additional,
dynamically retrieved host data. A distributed deployment enables it to scale
with an arbitrary number of
osquery hosts. Our evaluation results indicate that a single Zeek instance can
manage more than 870 osquery hosts and can attribute more than 96% of TCP
connections to host-side applications and users in real-time.
|
[
{
"created": "Tue, 11 Feb 2020 17:06:36 GMT",
"version": "v1"
},
{
"created": "Wed, 11 Mar 2020 14:15:05 GMT",
"version": "v2"
}
] |
2020-12-17
|
[
[
"Haas",
"Steffen",
""
],
[
"Sommer",
"Robin",
""
],
[
"Fischer",
"Mathias",
""
]
] |
Intrusion Detection Systems (IDSs) can analyze network traffic for signs of attacks and intrusions. However, encrypted communication limits their visibility and sophisticated attackers additionally try to evade their detection. To overcome these limitations, we extend the scope of Network IDSs (NIDSs) with additional data from the hosts. For that, we propose the integrated open-source zeek-osquery platform that combines the Zeek IDS with the osquery host monitor. Our platform can collect, process, and correlate host and network data at large scale, e.g., to attribute network flows to processes and users. The platform can be flexibly extended with custom detection scripts that use not only the already correlated data, but also additional, dynamically retrieved host data. A distributed deployment enables it to scale with an arbitrary number of osquery hosts. Our evaluation results indicate that a single Zeek instance can manage more than 870 osquery hosts and can attribute more than 96% of TCP connections to host-side applications and users in real-time.
|
2010.07080
|
Felix A. Wolf
|
Felix A. Wolf and Malte Schwerhoff and Peter M\"uller
|
Concise Outlines for a Complex Logic: A Proof Outline Checker for TaDA
(Full Paper)
| null | null | null | null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern separation logics allow one to prove rich properties of intricate
code, e.g. functional correctness and linearizability of non-blocking
concurrent code. However, this expressiveness leads to a complexity that makes
these logics difficult to apply. Manual proofs or proofs in interactive theorem
provers consist of a large number of steps, often with subtle side conditions.
On the other hand, automation with dedicated verifiers typically requires
sophisticated proof search algorithms that are specific to the given program
logic, resulting in limited tool support that makes it difficult to experiment
with program logics, e.g. when learning, improving, or comparing them. Proof
outline checkers fill this gap. Their input is a program annotated with the
most essential proof steps, just like the proof outlines typically presented in
papers. The tool then checks automatically that this outline represents a valid
proof in the program logic. In this paper, we systematically develop a proof
outline checker for the TaDA logic, which reduces the checking to a simpler
verification problem, for which automated tools exist. Our approach leads to
proof outline checkers that provide substantially more automation than
interactive provers, but are much simpler to develop than custom automatic
verifiers.
|
[
{
"created": "Wed, 14 Oct 2020 13:35:53 GMT",
"version": "v1"
},
{
"created": "Thu, 15 Oct 2020 17:20:41 GMT",
"version": "v2"
},
{
"created": "Fri, 13 Aug 2021 15:18:13 GMT",
"version": "v3"
}
] |
2021-08-16
|
[
[
"Wolf",
"Felix A.",
""
],
[
"Schwerhoff",
"Malte",
""
],
[
"Müller",
"Peter",
""
]
] |
Modern separation logics allow one to prove rich properties of intricate code, e.g. functional correctness and linearizability of non-blocking concurrent code. However, this expressiveness leads to a complexity that makes these logics difficult to apply. Manual proofs or proofs in interactive theorem provers consist of a large number of steps, often with subtle side conditions. On the other hand, automation with dedicated verifiers typically requires sophisticated proof search algorithms that are specific to the given program logic, resulting in limited tool support that makes it difficult to experiment with program logics, e.g. when learning, improving, or comparing them. Proof outline checkers fill this gap. Their input is a program annotated with the most essential proof steps, just like the proof outlines typically presented in papers. The tool then checks automatically that this outline represents a valid proof in the program logic. In this paper, we systematically develop a proof outline checker for the TaDA logic, which reduces the checking to a simpler verification problem, for which automated tools exist. Our approach leads to proof outline checkers that provide substantially more automation than interactive provers, but are much simpler to develop than custom automatic verifiers.
|
2406.07485
|
Adnan Abbas
|
Adnan Abbas, Sang Won Lee
|
PITCH: Productivity and Mental Well-being Coaching through Daily
Conversational Interaction
| null | null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Efficient task planning is essential for productivity and mental well-being,
yet individuals often struggle to create realistic plans and reflect upon their
productivity. Leveraging the advancement in artificial intelligence (AI),
conversational agents have emerged as a promising tool for enhancing
productivity. Our work focuses on externalizing plans through conversation,
aiming to solidify users' intentions and foster focused action, thereby
positively impacting their productivity and mental well-being. We share our
plan of
designing a conversational agent to offer insightful questions and reflective
prompts for increasing plan adherence by leveraging the social interactivity of
natural conversations. Previous studies have shown the effectiveness of such
agents, but many interventions remain static, leading to decreased user
engagement over time. To address this limitation, we propose a novel rotation
and context-aware prompting strategy, providing users with varied interventions
daily. Our system, PITCH, utilizes large language models (LLMs) to facilitate
externalization and reflection on daily plans. Through this study, we
investigate the impact of externalizing tasks with conversational agents on
productivity and mental well-being, and the effectiveness of a rotation
strategy in maintaining user engagement.
|
[
{
"created": "Tue, 11 Jun 2024 17:26:58 GMT",
"version": "v1"
}
] |
2024-06-12
|
[
[
"Abbas",
"Adnan",
""
],
[
"Lee",
"Sang Won",
""
]
] |
Efficient task planning is essential for productivity and mental well-being, yet individuals often struggle to create realistic plans and reflect upon their productivity. Leveraging the advancement in artificial intelligence (AI), conversational agents have emerged as a promising tool for enhancing productivity. Our work focuses on externalizing plans through conversation, aiming to solidify users' intentions and foster focused action, thereby positively impacting their productivity and mental well-being. We share our plan of designing a conversational agent to offer insightful questions and reflective prompts for increasing plan adherence by leveraging the social interactivity of natural conversations. Previous studies have shown the effectiveness of such agents, but many interventions remain static, leading to decreased user engagement over time. To address this limitation, we propose a novel rotation and context-aware prompting strategy, providing users with varied interventions daily. Our system, PITCH, utilizes large language models (LLMs) to facilitate externalization and reflection on daily plans. Through this study, we investigate the impact of externalizing tasks with conversational agents on productivity and mental well-being, and the effectiveness of a rotation strategy in maintaining user engagement.
|
2302.09257
|
Zhenyu Li
|
Zhenyu Li, Ozan Alp Topal, \"Ozlem Tu\u{g}fe Demir, Emil Bj\"ornson,
Cicek Cavdar
|
mmWave Coverage Extension Using Reconfigurable Intelligent Surfaces in
Indoor Dense Spaces
|
6 pages 8 figures. Accepted to be presented in IEEE ICC 2023
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we consider the deployment of reconfigurable intelligent
surfaces (RISs) to extend the coverage of a millimeter-wave (mmWave) network in
indoor dense spaces. We first integrate RIS into ray-tracing simulations to
realistically capture the propagation characteristics, then formulate a
non-convex optimization problem that minimizes the number of RISs under rate
constraints. We propose a feasible point pursuit and successive convex
approximation-based algorithm, which solves the problem by jointly selecting
the RIS locations, optimizing the RIS phase-shifts, and allocating time
resources to user equipments (UEs). The numerical results demonstrate
substantial coverage extension by using at least four RISs, and a data rate of
130 Mbit/s is guaranteed for UEs in the considered area of an airplane cabin.
|
[
{
"created": "Sat, 18 Feb 2023 08:19:39 GMT",
"version": "v1"
}
] |
2023-02-21
|
[
[
"Li",
"Zhenyu",
""
],
[
"Topal",
"Ozan Alp",
""
],
[
"Demir",
"Özlem Tuğfe",
""
],
[
"Björnson",
"Emil",
""
],
[
"Cavdar",
"Cicek",
""
]
] |
In this work, we consider the deployment of reconfigurable intelligent surfaces (RISs) to extend the coverage of a millimeter-wave (mmWave) network in indoor dense spaces. We first integrate RIS into ray-tracing simulations to realistically capture the propagation characteristics, then formulate a non-convex optimization problem that minimizes the number of RISs under rate constraints. We propose a feasible point pursuit and successive convex approximation-based algorithm, which solves the problem by jointly selecting the RIS locations, optimizing the RIS phase-shifts, and allocating time resources to user equipments (UEs). The numerical results demonstrate substantial coverage extension by using at least four RISs, and a data rate of 130 Mbit/s is guaranteed for UEs in the considered area of an airplane cabin.
|
1101.1118
|
Giuliano Andrea Pagani
|
Giuliano Andrea Pagani and Marco Aiello
|
Towards Decentralized Trading: A Topological Investigation of the Dutch
Medium and Low Voltage Grids
| null | null | null | null |
cs.CE cs.DM cs.SI physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The traditional Power Grid has been designed in a hierarchical fashion, with
energy pushed from the large-scale production facilities towards the end
users. But with the increasing availability of micro- and medium-scale
generating facilities, the situation is changing. Many end users can now
produce energy and share it over the Power Grid. Naturally, end users need
incentives to do so and might want to be able to act in an open decentralized
energy market. In the present work, we offer a novel analysis of the Medium
and Low Voltage Power Grids of the North Netherlands using statistical tools
from the field of Complex Network Analysis. We use a weighted model based on
actual Grid data and propose a set of statistical measures to evaluate the
adequacy of the current infrastructure for a decentralized energy market.
Further, we use the insight gained by the analysis to propose parameters that
tie the statistical topological measures to economic factors that might
influence the attractiveness to end users of participating in such a
decentralized energy market, thus identifying the important topological
parameters to work on in order to facilitate such open decentralized markets.
|
[
{
"created": "Thu, 6 Jan 2011 00:11:57 GMT",
"version": "v1"
},
{
"created": "Fri, 7 Jan 2011 14:27:25 GMT",
"version": "v2"
},
{
"created": "Thu, 3 Feb 2011 13:08:01 GMT",
"version": "v3"
},
{
"created": "Mon, 14 Feb 2011 15:02:53 GMT",
"version": "v4"
}
] |
2015-03-17
|
[
[
"Pagani",
"Giuliano Andrea",
""
],
[
"Aiello",
"Marco",
""
]
] |
The traditional Power Grid has been designed in a hierarchical fashion, with energy pushed from the large-scale production facilities towards the end users. But with the increasing availability of micro- and medium-scale generating facilities, the situation is changing. Many end users can now produce energy and share it over the Power Grid. Naturally, end users need incentives to do so and might want to be able to act in an open decentralized energy market. In the present work, we offer a novel analysis of the Medium and Low Voltage Power Grids of the North Netherlands using statistical tools from the field of Complex Network Analysis. We use a weighted model based on actual Grid data and propose a set of statistical measures to evaluate the adequacy of the current infrastructure for a decentralized energy market. Further, we use the insight gained by the analysis to propose parameters that tie the statistical topological measures to economic factors that might influence the attractiveness to end users of participating in such a decentralized energy market, thus identifying the important topological parameters to work on in order to facilitate such open decentralized markets.
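The paper's specific weighted measures are not reproduced here, but one standard Complex Network Analysis statistic over a weighted grid graph — the characteristic (average shortest) path length, a proxy for how costly it is to route between arbitrary nodes — can be sketched as:

```python
import heapq

def avg_weighted_path_length(adj):
    # adj: {node: {neighbor: weight}} for an undirected weighted graph.
    # Runs Dijkstra from every node and averages the shortest-path
    # distances over all connected ordered pairs.
    nodes = list(adj)
    total, pairs = 0.0, 0
    for src in nodes:
        dist = {src: 0.0}
        heap = [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue  # stale heap entry
            for v, w in adj[u].items():
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        for dst in nodes:
            if dst != src and dst in dist:
                total += dist[dst]
                pairs += 1
    return total / pairs if pairs else float("inf")
```

On real grid data the edge weights would come from line properties (e.g. resistance or length); the dict-of-dicts representation here is an illustrative assumption.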
|
2305.12079
|
Gerdus Benade
|
Gerdus Benad\`e and Ariel D. Procaccia and Jamie Tucker-Foltz
|
You Can Have Your Cake and Redistrict It Too
|
EC2023
| null | null | null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The design of algorithms for political redistricting generally takes one of
two approaches: optimize an objective such as compactness or, drawing on fair
division, construct a protocol whose outcomes guarantee partisan fairness. We
aim to have the best of both worlds by optimizing an objective subject to a
binary fairness constraint. As the fairness constraint we adopt the geometric
target, which requires the number of seats won by each party to be at least the
average (rounded down) of its outcomes under the worst and best partitions of
the state.
To study the feasibility of this approach, we introduce a new model of
redistricting that closely mirrors the classic model of cake-cutting. This
model has two innovative features. First, in any part of the state there is an
underlying 'density' of voters with political leanings toward any given party,
making it impossible to finely separate voters for different parties into
different districts. This captures a realistic constraint that previously
existing theoretical models of redistricting tend to ignore. Second, parties
may disagree on the distribution of voters - whether by genuine disagreement or
attempted strategic behavior. In the absence of a 'ground truth' distribution,
a redistricting algorithm must therefore aim to simultaneously be fair to each
party with respect to its own reported data. Our main theoretical result is
that, surprisingly, the geometric target is always feasible with respect to
arbitrarily diverging data sets on how voters are distributed.
Any standard for fairness is only useful if it can be readily satisfied in
practice. Our empirical results, which use real election data and maps of six
US states, demonstrate that the geometric target is always feasible, and that
imposing it as a fairness constraint comes at almost no cost to three
well-studied optimization objectives.
|
[
{
"created": "Sat, 20 May 2023 03:33:53 GMT",
"version": "v1"
}
] |
2023-05-23
|
[
[
"Benadè",
"Gerdus",
""
],
[
"Procaccia",
"Ariel D.",
""
],
[
"Tucker-Foltz",
"Jamie",
""
]
] |
The design of algorithms for political redistricting generally takes one of two approaches: optimize an objective such as compactness or, drawing on fair division, construct a protocol whose outcomes guarantee partisan fairness. We aim to have the best of both worlds by optimizing an objective subject to a binary fairness constraint. As the fairness constraint we adopt the geometric target, which requires the number of seats won by each party to be at least the average (rounded down) of its outcomes under the worst and best partitions of the state. To study the feasibility of this approach, we introduce a new model of redistricting that closely mirrors the classic model of cake-cutting. This model has two innovative features. First, in any part of the state there is an underlying 'density' of voters with political leanings toward any given party, making it impossible to finely separate voters for different parties into different districts. This captures a realistic constraint that previously existing theoretical models of redistricting tend to ignore. Second, parties may disagree on the distribution of voters - whether by genuine disagreement or attempted strategic behavior. In the absence of a 'ground truth' distribution, a redistricting algorithm must therefore aim to simultaneously be fair to each party with respect to its own reported data. Our main theoretical result is that, surprisingly, the geometric target is always feasible with respect to arbitrarily diverging data sets on how voters are distributed. Any standard for fairness is only useful if it can be readily satisfied in practice. Our empirical results, which use real election data and maps of six US states, demonstrate that the geometric target is always feasible, and that imposing it as a fairness constraint comes at almost no cost to three well-studied optimization objectives.
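The geometric target defined in this abstract — each party must win at least the floor of the average of its worst-case and best-case seat counts — is easy to state in code. This is a sketch with party seat counts as plain dicts, not the paper's implementation:

```python
def geometric_target(worst_seats: int, best_seats: int) -> int:
    # Average of the party's seat counts under its worst and best
    # partitions of the state, rounded down.
    return (worst_seats + best_seats) // 2

def satisfies_geometric_target(seats_won, worst, best):
    # A districting plan meets the fairness constraint if every party
    # wins at least its geometric target number of seats.
    return all(
        seats_won[p] >= geometric_target(worst[p], best[p])
        for p in seats_won
    )
```

Computing `worst` and `best` themselves requires optimizing over all valid partitions, which is the hard part the paper addresses; here they are assumed given.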
|
2407.00693
|
Gihun Lee
|
Gihun Lee, Minchan Jeong, Yujin Kim, Hojung Jung, Jaehoon Oh, Sangmook
Kim, Se-Young Yun
|
BAPO: Base-Anchored Preference Optimization for Personalized Alignment
in Large Language Models
|
under review
| null | null | null |
cs.AI cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
While learning to align Large Language Models (LLMs) with human preferences
has shown remarkable success, aligning these models to diverse user
preferences presents the further challenge of preserving previous knowledge.
This paper examines the impact of personalized preference optimization on
LLMs, revealing that the extent of knowledge loss varies significantly with
preference heterogeneity. Although previous approaches have utilized the KL
constraint between the reference model and the policy model, we observe that
they fail to maintain general knowledge and alignment when facing
personalized preferences. To this end, we introduce Base-Anchored Preference
Optimization (BAPO), a simple yet effective approach that utilizes the
initial responses of the reference model to mitigate forgetting while
accommodating personalized alignment. BAPO effectively adapts to diverse user
preferences while minimally affecting global knowledge or general alignment.
Our experiments demonstrate the efficacy of BAPO in various setups.
|
[
{
"created": "Sun, 30 Jun 2024 13:30:04 GMT",
"version": "v1"
}
] |
2024-07-02
|
[
[
"Lee",
"Gihun",
""
],
[
"Jeong",
"Minchan",
""
],
[
"Kim",
"Yujin",
""
],
[
"Jung",
"Hojung",
""
],
[
"Oh",
"Jaehoon",
""
],
[
"Kim",
"Sangmook",
""
],
[
"Yun",
"Se-Young",
""
]
] |
While learning to align Large Language Models (LLMs) with human preferences has shown remarkable success, aligning these models to diverse user preferences presents the further challenge of preserving previous knowledge. This paper examines the impact of personalized preference optimization on LLMs, revealing that the extent of knowledge loss varies significantly with preference heterogeneity. Although previous approaches have utilized the KL constraint between the reference model and the policy model, we observe that they fail to maintain general knowledge and alignment when facing personalized preferences. To this end, we introduce Base-Anchored Preference Optimization (BAPO), a simple yet effective approach that utilizes the initial responses of the reference model to mitigate forgetting while accommodating personalized alignment. BAPO effectively adapts to diverse user preferences while minimally affecting global knowledge or general alignment. Our experiments demonstrate the efficacy of BAPO in various setups.
|
2408.04336
|
Yin Gu
|
Yin Gu, Qi Liu, Zhi Li, Kai Zhang
|
KnowPC: Knowledge-Driven Programmatic Reinforcement Learning for
Zero-shot Coordination
| null | null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Zero-shot coordination (ZSC) remains a major challenge in the cooperative AI
field; it aims to train an agent to cooperate with an unseen partner in
training environments or even novel environments. In recent years, a popular
ZSC solution paradigm has been deep reinforcement learning (DRL) combined
with advanced self-play or population-based methods to enhance the neural
policy's ability to handle unseen partners. Despite some success, these
approaches usually rely on black-box neural networks as the policy function.
However, neural networks typically lack interpretability and logic, making
the learned policies difficult for partners (e.g., humans) to understand and
limiting their generalization ability. These shortcomings hinder the
application of reinforcement learning methods in diverse cooperative
scenarios. We propose representing the agent's policy with an interpretable
program. Unlike neural networks, programs contain stable logic, but they are
non-differentiable and difficult to optimize. To automatically learn such
programs, we introduce Knowledge-driven Programmatic reinforcement learning
for zero-shot Coordination (KnowPC). We first define a foundational
Domain-Specific Language (DSL), including program structures, conditional
primitives, and action primitives. A significant challenge is the vast
program search space, which makes it difficult to find high-performing
programs efficiently. To address this, KnowPC integrates an extractor and a
reasoner. The extractor discovers environmental transition knowledge from
multi-agent interaction trajectories, while the reasoner deduces the
preconditions of each action primitive based on the transition knowledge.
|
[
{
"created": "Thu, 8 Aug 2024 09:43:54 GMT",
"version": "v1"
}
] |
2024-08-09
|
[
[
"Gu",
"Yin",
""
],
[
"Liu",
"Qi",
""
],
[
"Li",
"Zhi",
""
],
[
"Zhang",
"Kai",
""
]
] |
Zero-shot coordination (ZSC) remains a major challenge in the cooperative AI field; it aims to train an agent to cooperate with an unseen partner in training environments or even novel environments. In recent years, a popular ZSC solution paradigm has been deep reinforcement learning (DRL) combined with advanced self-play or population-based methods to enhance the neural policy's ability to handle unseen partners. Despite some success, these approaches usually rely on black-box neural networks as the policy function. However, neural networks typically lack interpretability and logic, making the learned policies difficult for partners (e.g., humans) to understand and limiting their generalization ability. These shortcomings hinder the application of reinforcement learning methods in diverse cooperative scenarios. We propose representing the agent's policy with an interpretable program. Unlike neural networks, programs contain stable logic, but they are non-differentiable and difficult to optimize. To automatically learn such programs, we introduce Knowledge-driven Programmatic reinforcement learning for zero-shot Coordination (KnowPC). We first define a foundational Domain-Specific Language (DSL), including program structures, conditional primitives, and action primitives. A significant challenge is the vast program search space, which makes it difficult to find high-performing programs efficiently. To address this, KnowPC integrates an extractor and a reasoner. The extractor discovers environmental transition knowledge from multi-agent interaction trajectories, while the reasoner deduces the preconditions of each action primitive based on the transition knowledge.
|
1609.05245
|
Marco Coraggio Mr
|
Marco Coraggio, Martin Homer, Oliver D. Payton, Mario di Bernardo
|
Improved Control Strategies for Intermittent Contact Mode Atomic Force
Microscopes
|
11 pages
| null |
10.1109/TCST.2017.2734046
| null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Atomic force microscopes have proved to be fundamental research tools in many
situations where a gentle imaging process is required, and in a variety of
environmental conditions, such as the study of biological samples. Among the
possible modes of operation, intermittent contact mode is one that causes less
wear to both the sample and the instrument; therefore, it is ideal when imaging
soft samples. However, intermittent contact mode is not particularly fast when
compared to other imaging strategies. In this paper, we introduce three
enhanced control approaches, applied at both the dither and z-axis piezos, to
address the limitations of existing control schemes. Our proposed strategies
are able to eliminate different image artefacts, automatically adapt scan speed
to the sample being scanned and predict its features in real time. The result
is that both the image quality and the scan time are improved.
|
[
{
"created": "Fri, 16 Sep 2016 21:43:18 GMT",
"version": "v1"
}
] |
2023-01-05
|
[
[
"Coraggio",
"Marco",
""
],
[
"Homer",
"Martin",
""
],
[
"Payton",
"Oliver D.",
""
],
[
"di Bernardo",
"Mario",
""
]
] |
Atomic force microscopes have proved to be fundamental research tools in many situations where a gentle imaging process is required, and in a variety of environmental conditions, such as the study of biological samples. Among the possible modes of operation, intermittent contact mode is one that causes less wear to both the sample and the instrument; therefore, it is ideal when imaging soft samples. However, intermittent contact mode is not particularly fast when compared to other imaging strategies. In this paper, we introduce three enhanced control approaches, applied at both the dither and z-axis piezos, to address the limitations of existing control schemes. Our proposed strategies are able to eliminate different image artefacts, automatically adapt scan speed to the sample being scanned and predict its features in real time. The result is that both the image quality and the scan time are improved.
|
1304.1513
|
A. C. Kak
|
A. C. Kak, K. M. Andress, C. Lopez-Abadia, M. S. Carroll, J. R. Lewis
|
Hierarchical Evidence Accumulation in the Pseiki System and Experiments
in Model-Driven Mobile Robot Navigation
|
Appears in Proceedings of the Fifth Conference on Uncertainty in
Artificial Intelligence (UAI1989)
| null | null |
UAI-P-1989-PG-194-207
|
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we will review the process of evidence accumulation in the
PSEIKI system for expectation-driven interpretation of images of 3-D scenes.
Expectations are presented to PSEIKI as a geometrical hierarchy of
abstractions. PSEIKI's job is then to construct abstraction hierarchies in the
perceived image taking cues from the abstraction hierarchies in the
expectations. The Dempster-Shafer formalism is used for associating belief
values with the different possible labels for the constructed abstractions in
the perceived image. This system has been used successfully for autonomous
navigation of a mobile robot in indoor environments.
|
[
{
"created": "Wed, 27 Mar 2013 19:38:59 GMT",
"version": "v1"
}
] |
2013-04-08
|
[
[
"Kak",
"A. C.",
""
],
[
"Andress",
"K. M.",
""
],
[
"Lopez-Abadia",
"C.",
""
],
[
"Carroll",
"M. S.",
""
],
[
"Lewis",
"J. R.",
""
]
] |
In this paper, we will review the process of evidence accumulation in the PSEIKI system for expectation-driven interpretation of images of 3-D scenes. Expectations are presented to PSEIKI as a geometrical hierarchy of abstractions. PSEIKI's job is then to construct abstraction hierarchies in the perceived image taking cues from the abstraction hierarchies in the expectations. The Dempster-Shafer formalism is used for associating belief values with the different possible labels for the constructed abstractions in the perceived image. This system has been used successfully for autonomous navigation of a mobile robot in indoor environments.
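The Dempster-Shafer belief combination this record relies on can be sketched for a finite frame of discernment. Focal elements are represented as frozensets and the label sets below are hypothetical, not PSEIKI's actual hypotheses:

```python
def dempster_combine(m1, m2):
    # Dempster's rule of combination for two basic belief assignments
    # (dicts mapping frozenset focal elements to masses summing to 1).
    combined = {}
    conflict = 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                # Mass assigned to the empty set counts as conflict.
                conflict += w1 * w2
    # Normalize by the non-conflicting mass.
    k = 1.0 - conflict
    return {s: w / k for s, w in combined.items()}
```

PSEIKI applies this kind of evidence accumulation hierarchically over candidate labels for image abstractions; this sketch only shows the single pairwise combination step.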
|
1612.07303
|
Johannes Feldmaier
|
Johannes Feldmaier, Tamara Marmat, Johannes Kuhn, Klaus Diepold
|
Evaluation of a RGB-LED-based Emotion Display for Affective Agents
| null | null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Technology has become an essential part of every aspect of our lives.
However, the key to the successful implementation of a technology depends on
its acceptance by the general public. Various approaches can be applied to
increase this acceptance. In this paper, we examine human-robot emotional
interaction by investigating the capabilities of a custom-developed
low-resolution RGB-LED display in the context of artificial emotions. We
focus on four of the most representative human emotions: happiness, anger,
sadness and fear. We work with colors and dynamic light patterns that are
intended to evoke various associations. In an experiment, the use of these
patterns as expressions of emotions is validated. The results of the
conducted study show that some of the considered basic emotions can be
recognized by human observers.
|
[
{
"created": "Wed, 21 Dec 2016 20:12:08 GMT",
"version": "v1"
}
] |
2016-12-22
|
[
[
"Feldmaier",
"Johannes",
""
],
[
"Marmat",
"Tamara",
""
],
[
"Kuhn",
"Johannes",
""
],
[
"Diepold",
"Klaus",
""
]
] |
Technology has become an essential part of every aspect of our lives. However, the key to the successful implementation of a technology depends on its acceptance by the general public. Various approaches can be applied to increase this acceptance. In this paper, we examine human-robot emotional interaction by investigating the capabilities of a custom-developed low-resolution RGB-LED display in the context of artificial emotions. We focus on four of the most representative human emotions: happiness, anger, sadness and fear. We work with colors and dynamic light patterns that are intended to evoke various associations. In an experiment, the use of these patterns as expressions of emotions is validated. The results of the conducted study show that some of the considered basic emotions can be recognized by human observers.
|
2312.04412
|
Miroslav Popovic
|
Miroslav Popovic, Marko Popovic, Ivan Kastelan, Miodrag Djukic, Ilija
Basicevic
|
Developing Elementary Federated Learning Algorithms Leveraging the
ChatGPT
|
4 pages, 6 tables, submitted to TELFOR 2023, Published by IEEE Xplore
| null |
10.1109/TELFOR59449.2023.10372714
| null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Python Testbed for Federated Learning Algorithms is a simple Python FL
framework easy to use by ML&AI developers who do not need to be professional
programmers, and this paper shows that it is also amenable to emerging AI
tools. In this paper, we successfully developed three elementary FL algorithms
using the following three steps process: (i) specify context, (ii) ask ChatGPT
to complete server and clients' callback functions, and (iii) verify the
generated code.
|
[
{
"created": "Thu, 7 Dec 2023 16:34:47 GMT",
"version": "v1"
},
{
"created": "Mon, 8 Jan 2024 19:10:11 GMT",
"version": "v2"
}
] |
2024-01-10
|
[
[
"Popovic",
"Miroslav",
""
],
[
"Popovic",
"Marko",
""
],
[
"Kastelan",
"Ivan",
""
],
[
"Djukic",
"Miodrag",
""
],
[
"Basicevic",
"Ilija",
""
]
] |
The Python Testbed for Federated Learning Algorithms is a simple Python FL framework easy to use by ML&AI developers who do not need to be professional programmers, and this paper shows that it is also amenable to emerging AI tools. In this paper, we successfully developed three elementary FL algorithms using the following three-step process: (i) specify the context, (ii) ask ChatGPT to complete server and clients' callback functions, and (iii) verify the generated code.
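Step (ii) above has ChatGPT fill in server and client callback functions. A typical server-side callback in such FL settings is a FedAvg-style parameter average; the sketch below assumes client models are equal-length lists of floats, not the testbed's actual API:

```python
def fedavg(client_models):
    # Server callback sketch: elementwise average of the parameter
    # vectors returned by the clients (federated averaging).
    n = len(client_models)
    return [sum(ws) / n for ws in zip(*client_models)]
```

For example, `fedavg([[1.0, 2.0], [3.0, 4.0]])` averages two clients' two-parameter models into `[2.0, 3.0]`.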
|
1404.5683
|
Chen Song
|
Eva C. Song, Paul Cuff, H. Vincent Poor
|
The Likelihood Encoder for Lossy Source Compression
|
5 pages, 2 figures, ISIT 2014
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, a likelihood encoder is studied in the context of lossy source
compression. The analysis of the likelihood encoder is based on a soft-covering
lemma. It is demonstrated that the use of a likelihood encoder together with
the soft-covering lemma gives alternative achievability proofs for classical
source coding problems. The case of the rate-distortion function with side
information at the decoder (i.e. the Wyner-Ziv problem) is carefully examined
and an application of the likelihood encoder to the multi-terminal source
coding inner bound (i.e. the Berger-Tung region) is outlined.
|
[
{
"created": "Wed, 23 Apr 2014 02:10:32 GMT",
"version": "v1"
}
] |
2014-04-24
|
[
[
"Song",
"Eva C.",
""
],
[
"Cuff",
"Paul",
""
],
[
"Poor",
"H. Vincent",
""
]
] |
In this work, a likelihood encoder is studied in the context of lossy source compression. The analysis of the likelihood encoder is based on a soft-covering lemma. It is demonstrated that the use of a likelihood encoder together with the soft-covering lemma gives alternative achievability proofs for classical source coding problems. The case of the rate-distortion function with side information at the decoder (i.e. the Wyner-Ziv problem) is carefully examined and an application of the likelihood encoder to the multi-terminal source coding inner bound (i.e. the Berger-Tung region) is outlined.
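For context, the two classical quantities this abstract refers to can be written compactly. These are the standard textbook forms, not this paper's achievability expressions:

```latex
% Rate-distortion function of a memoryless source X under distortion d:
R(D) = \min_{p(\hat{x}\mid x)\,:\;\mathbb{E}[d(X,\hat{X})] \le D} I(X;\hat{X})

% Wyner-Ziv rate-distortion function with side information Y at the
% decoder, minimized over auxiliary variables U with U - X - Y forming a
% Markov chain and decoders g(U,Y) meeting the distortion constraint:
R_{\mathrm{WZ}}(D) = \min_{p(u\mid x),\,g\,:\;\mathbb{E}[d(X,g(U,Y))] \le D} I(X;U\mid Y)
```

The paper's contribution is re-deriving achievability for such regions using a likelihood encoder and the soft-covering lemma rather than the usual typicality arguments.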
|
2402.18584
|
Chengqing Li
|
Xuenan Peng, Chengqing Li, Yicheng Zeng, Chun-Lai Li
|
Adjusting Dynamics of Hopfield Neural Network via Time-variant Stimulus
|
14 pages, 21 figures
| null | null | null |
cs.NE nlin.CD
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As a paradigmatic model for nonlinear dynamics studies, the Hopfield Neural
Network (HNN) demonstrates a high susceptibility to external disturbances owing
to its intricate structure. This paper delves into the challenge of modulating
HNN dynamics through time-variant stimuli. The effects of adjustments using two
distinct types of time-variant stimuli, namely the Weight Matrix Stimulus (WMS)
and the State Variable Stimulus (SVS), along with a Constant Stimulus (CS) are
reported. The findings reveal that deploying four WMSs enables the HNN to
generate either a four-scroll or a coexisting two-scroll attractor. When
combined with one SVS, four WMSs can lead to the formation of an eight-scroll
or four-scroll attractor, while the integration of four WMSs and multiple SVSs
can induce grid-multi-scroll attractors. Moreover, the introduction of a CS and
an SVS can significantly disrupt the dynamic behavior of the HNN. Consequently,
suitable adjustment methods are crucial for enhancing the network's dynamics,
whereas inappropriate applications can lead to the loss of its chaotic
characteristics. To empirically validate these enhancement effects, the study
employs an FPGA hardware platform. Subsequently, an image encryption scheme is
designed to demonstrate the practical application benefits of the dynamically
adjusted HNN in secure multimedia communication. This exploration into the
dynamic modulation of HNN via time-variant stimuli offers insightful
contributions to the advancement of secure communication technologies.
|
[
{
"created": "Mon, 15 Jan 2024 13:31:56 GMT",
"version": "v1"
}
] |
2024-03-01
|
[
[
"Peng",
"Xuenan",
""
],
[
"Li",
"Chengqing",
""
],
[
"Zeng",
"Yicheng",
""
],
[
"Li",
"Chun-Lai",
""
]
] |
As a paradigmatic model for nonlinear dynamics studies, the Hopfield Neural Network (HNN) demonstrates a high susceptibility to external disturbances owing to its intricate structure. This paper delves into the challenge of modulating HNN dynamics through time-variant stimuli. The effects of adjustments using two distinct types of time-variant stimuli, namely the Weight Matrix Stimulus (WMS) and the State Variable Stimulus (SVS), along with a Constant Stimulus (CS) are reported. The findings reveal that deploying four WMSs enables the HNN to generate either a four-scroll or a coexisting two-scroll attractor. When combined with one SVS, four WMSs can lead to the formation of an eight-scroll or four-scroll attractor, while the integration of four WMSs and multiple SVSs can induce grid-multi-scroll attractors. Moreover, the introduction of a CS and an SVS can significantly disrupt the dynamic behavior of the HNN. Consequently, suitable adjustment methods are crucial for enhancing the network's dynamics, whereas inappropriate applications can lead to the loss of its chaotic characteristics. To empirically validate these enhancement effects, the study employs an FPGA hardware platform. Subsequently, an image encryption scheme is designed to demonstrate the practical application benefits of the dynamically adjusted HNN in secure multimedia communication. This exploration into the dynamic modulation of HNN via time-variant stimuli offers insightful contributions to the advancement of secure communication technologies.
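As a rough illustration of what adjusting HNN dynamics via a time-variant stimulus means, here is a minimal sketch of a continuous Hopfield-type model integrated with forward Euler, where an assumed sinusoidal state-variable stimulus can be switched on. The generic form x' = -x + W tanh(x) + s(t) and the chosen parameters are illustrative assumptions; the paper's WMS/SVS constructions are more elaborate:

```python
import numpy as np

def simulate_hnn(W, steps=1000, dt=0.01, stimulus=None):
    # Integrate x' = -x + W @ tanh(x) + s(t) with forward Euler.
    # `stimulus` is an optional function t -> vector acting as a
    # time-variant state-variable stimulus.
    n = W.shape[0]
    x = 0.1 * np.ones(n)              # small nonzero initial state
    traj = np.empty((steps, n))
    for k in range(steps):
        s = stimulus(k * dt) if stimulus is not None else 0.0
        x = x + dt * (-x + W @ np.tanh(x) + s)
        traj[k] = x
    return traj

# Example: an assumed sinusoidal stimulus applied to the first neuron.
W = np.array([[0.0, 1.2], [-1.2, 0.0]])
svs = lambda t: np.array([0.5 * np.sin(2 * np.pi * t), 0.0])
traj = simulate_hnn(W, steps=500, stimulus=svs)
```

A weight-matrix stimulus would instead make `W` itself a function of time inside the loop, which is the other adjustment mechanism the abstract contrasts.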
|
2301.00626
|
Alejandro Vigna-Gomez
|
Alejandro Vigna-G\'omez, Javier Murillo, Manelik Ramirez, Alberto
Borbolla, Ian M\'arquez and Prasun K. Ray
|
Design and analysis of tweet-based election models for the 2021 Mexican
legislative election
|
Accepted for publication in EPJ Data Science. 20 pages, 7 figures, 1
table
| null |
10.1140/epjds/s13688-023-00401-w
| null |
cs.SI cs.CL cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
Modelling and forecasting real-life human behaviour using online social media
is an active endeavour of interest in politics, government, academia, and
industry. Since its creation in 2006, Twitter has been proposed as a potential
laboratory that could be used to gauge and predict social behaviour. During the
last decade, the user base of Twitter has been growing and becoming more
representative of the general population. Here we analyse this user base in the
context of the 2021 Mexican Legislative Election. To do so, we use a dataset of
15 million election-related tweets in the six months preceding election day. We
explore different election models that assign political preference to either
the ruling parties or the opposition. We find that models using data with
geographical attributes determine the results of the election with better
precision and accuracy than conventional polling methods. These results
demonstrate that analysis of public online data can outperform conventional
polling methods, and that political analysis and general forecasting would
likely benefit from incorporating such data in the immediate future. Moreover,
the same Twitter dataset with geographical attributes is positively correlated
with results from official census data on population and internet usage in
Mexico. These findings suggest that we have reached a period in time when
online activity, appropriately curated, can provide an accurate representation
of offline behaviour.
|
[
{
"created": "Mon, 2 Jan 2023 12:40:05 GMT",
"version": "v1"
},
{
"created": "Wed, 21 Jun 2023 08:01:38 GMT",
"version": "v2"
}
] |
2023-08-15
|
[
[
"Vigna-Gómez",
"Alejandro",
""
],
[
"Murillo",
"Javier",
""
],
[
"Ramirez",
"Manelik",
""
],
[
"Borbolla",
"Alberto",
""
],
[
"Márquez",
"Ian",
""
],
[
"Ray",
"Prasun K.",
""
]
] |
Modelling and forecasting real-life human behaviour using online social media is an active endeavour of interest in politics, government, academia, and industry. Since its creation in 2006, Twitter has been proposed as a potential laboratory that could be used to gauge and predict social behaviour. During the last decade, the user base of Twitter has been growing and becoming more representative of the general population. Here we analyse this user base in the context of the 2021 Mexican Legislative Election. To do so, we use a dataset of 15 million election-related tweets in the six months preceding election day. We explore different election models that assign political preference to either the ruling parties or the opposition. We find that models using data with geographical attributes determine the results of the election with better precision and accuracy than conventional polling methods. These results demonstrate that analysis of public online data can outperform conventional polling methods, and that political analysis and general forecasting would likely benefit from incorporating such data in the immediate future. Moreover, the same Twitter dataset with geographical attributes is positively correlated with results from official census data on population and internet usage in Mexico. These findings suggest that we have reached a period in time when online activity, appropriately curated, can provide an accurate representation of offline behaviour.
|
2010.03424
|
Phuong Le-Hong
|
The Viet Bui, Phuong Le-Hong
|
Cross-lingual Extended Named Entity Classification of Wikipedia Articles
|
Accepted to NTCIR-15
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The FPT.AI team participated in the SHINRA2020-ML subtask of the NTCIR-15
SHINRA task. This paper describes our method for solving the problem and
discusses the official results. Our method focuses on learning cross-lingual
representations, both on the word level and document level for page
classification. We propose a three-stage approach including multilingual model
pre-training, monolingual model fine-tuning and cross-lingual voting. Our
system is able to achieve the best scores for 25 out of 30 languages; and its
accuracy gaps to the best performing systems of the other five languages are
relatively small.
|
[
{
"created": "Wed, 7 Oct 2020 14:06:09 GMT",
"version": "v1"
},
{
"created": "Sat, 17 Oct 2020 09:06:42 GMT",
"version": "v2"
}
] |
2020-10-20
|
[
[
"Bui",
"The Viet",
""
],
[
"Le-Hong",
"Phuong",
""
]
] |
The FPT.AI team participated in the SHINRA2020-ML subtask of the NTCIR-15 SHINRA task. This paper describes our method for solving the problem and discusses the official results. Our method focuses on learning cross-lingual representations, both on the word level and document level for page classification. We propose a three-stage approach including multilingual model pre-training, monolingual model fine-tuning and cross-lingual voting. Our system is able to achieve the best scores for 25 out of 30 languages; and its accuracy gaps to the best performing systems of the other five languages are relatively small.
|
1902.10796
|
Ashwini Tonge
|
Ashwini Tonge and Cornelia Caragea
|
Dynamic Deep Multi-modal Fusion for Image Privacy Prediction
|
Accepted by The Web Conference (WWW) 2019
| null |
10.1145/3308558.3313691
| null |
cs.CV cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
With millions of images that are shared online on social networking sites,
effective methods for image privacy prediction are highly needed. In this
paper, we propose an approach for fusing object, scene context, and image tags
modalities derived from convolutional neural networks for accurately predicting
the privacy of images shared online. Specifically, our approach identifies the
set of most competent modalities on the fly, according to each new target image
whose privacy has to be predicted. The approach considers three stages to
predict the privacy of a target image, wherein we first identify the
neighborhood images that are visually similar and/or have similar sensitive
content as the target image. Then, we estimate the competence of the modalities
based on the neighborhood images. Finally, we fuse the decisions of the most
competent modalities and predict the privacy label for the target image.
Experimental results show that our approach predicts the sensitive (or private)
content more accurately than the models trained on individual modalities
(object, scene, and tags) and prior privacy prediction works. Also, our
approach outperforms strong baselines that train meta-classifiers to obtain an
optimal combination of modalities.
|
[
{
"created": "Wed, 27 Feb 2019 21:42:08 GMT",
"version": "v1"
},
{
"created": "Wed, 6 Mar 2019 15:54:24 GMT",
"version": "v2"
}
] |
2019-03-07
|
[
[
"Tonge",
"Ashwini",
""
],
[
"Caragea",
"Cornelia",
""
]
] |
With millions of images that are shared online on social networking sites, effective methods for image privacy prediction are highly needed. In this paper, we propose an approach for fusing object, scene context, and image tags modalities derived from convolutional neural networks for accurately predicting the privacy of images shared online. Specifically, our approach identifies the set of most competent modalities on the fly, according to each new target image whose privacy has to be predicted. The approach considers three stages to predict the privacy of a target image, wherein we first identify the neighborhood images that are visually similar and/or have similar sensitive content as the target image. Then, we estimate the competence of the modalities based on the neighborhood images. Finally, we fuse the decisions of the most competent modalities and predict the privacy label for the target image. Experimental results show that our approach predicts the sensitive (or private) content more accurately than the models trained on individual modalities (object, scene, and tags) and prior privacy prediction works. Also, our approach outperforms strong baselines that train meta-classifiers to obtain an optimal combination of modalities.
|
2004.08641
|
Ihab Mohamed
|
Ihab S. Mohamed, Guillaume Allibert, Philippe Martinet
|
Model Predictive Path Integral Control Framework for Partially
Observable Navigation: A Quadrotor Case Study
|
8 pages, 8 figures, 3 tables
|
International Conference on Control, Automation, Robotics and
Vision (ICARCV), Shenzhen of China, December 13-15, 2020
| null | null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
Recently, the Model Predictive Path Integral (MPPI) control algorithm has been
extensively applied to autonomous navigation tasks, where the cost map is
mostly assumed to be known and only 2D navigation tasks are performed. In
this paper, we propose a generic MPPI control framework that can be used for 2D
or 3D autonomous navigation tasks in either fully or partially observable
environments, which are the most prevalent in robotics applications. This
framework directly exploits the 3D-voxel grid acquired from an on-board sensing
system for performing collision-free navigation. We test the framework, in
realistic RotorS-based simulation, on goal-oriented quadrotor navigation tasks
in a cluttered environment, for both fully and partially observable scenarios.
Preliminary results demonstrate that the proposed framework works perfectly,
under partial observability, in 2D and 3D cluttered environments.
|
[
{
"created": "Sat, 18 Apr 2020 15:18:17 GMT",
"version": "v1"
},
{
"created": "Mon, 27 Apr 2020 12:15:26 GMT",
"version": "v2"
},
{
"created": "Wed, 14 Oct 2020 08:26:52 GMT",
"version": "v3"
}
] |
2020-10-15
|
[
[
"Mohamed",
"Ihab S.",
""
],
[
"Allibert",
"Guillaume",
""
],
[
"Martinet",
"Philippe",
""
]
] |
Recently, the Model Predictive Path Integral (MPPI) control algorithm has been extensively applied to autonomous navigation tasks, where the cost map is mostly assumed to be known and only 2D navigation tasks are performed. In this paper, we propose a generic MPPI control framework that can be used for 2D or 3D autonomous navigation tasks in either fully or partially observable environments, which are the most prevalent in robotics applications. This framework directly exploits the 3D-voxel grid acquired from an on-board sensing system for performing collision-free navigation. We test the framework, in realistic RotorS-based simulation, on goal-oriented quadrotor navigation tasks in a cluttered environment, for both fully and partially observable scenarios. Preliminary results demonstrate that the proposed framework works perfectly, under partial observability, in 2D and 3D cluttered environments.
|
2009.05859
|
Young-Ho Kim
|
Young-Ho Kim, Jarrod Collins, Zhongyu Li, Ponraj Chinnadurai, Ankur
Kapoor, C. Huie Lin, Tommaso Mansi
|
Towards Automatic Manipulation of Intra-cardiac Echocardiography
Catheter
| null | null | null | null |
cs.RO cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Intra-cardiac Echocardiography (ICE) is a powerful imaging modality for
guiding electrophysiology and structural heart interventions. ICE provides
real-time observation of anatomy, catheters, and emergent complications.
However, this increased reliance on intraprocedural imaging creates a high
cognitive demand on physicians who can often serve as interventionalist and
imager. We present a robotic manipulator for ICE catheters to assist physicians
with imaging and serve as a platform for developing processes for procedural
automation. Herein, we introduce two application modules towards these goals:
(1) a view recovery process that allows physicians to save views during
intervention and automatically return with the push of a button and (2) a
data-driven approach to compensate kinematic model errors that result from
non-linear behaviors in catheter bending, providing more precise control of the
catheter tip. View recovery is validated by repeated catheter positioning in
cardiac phantom and animal experiments with position- and image-based analysis.
We present a simplified calibration approach for error compensation and verify
with complex rotation of the catheter in benchtop and phantom experiments under
varying realistic curvature conditions. Results support that a robotic
manipulator for ICE can provide an efficient and reproducible tool, potentially
reducing execution time and promoting greater utilization of ICE imaging.
|
[
{
"created": "Sat, 12 Sep 2020 20:14:49 GMT",
"version": "v1"
},
{
"created": "Wed, 28 Oct 2020 15:18:07 GMT",
"version": "v2"
},
{
"created": "Fri, 29 Jan 2021 22:16:34 GMT",
"version": "v3"
}
] |
2021-02-02
|
[
[
"Kim",
"Young-Ho",
""
],
[
"Collins",
"Jarrod",
""
],
[
"Li",
"Zhongyu",
""
],
[
"Chinnadurai",
"Ponraj",
""
],
[
"Kapoor",
"Ankur",
""
],
[
"Lin",
"C. Huie",
""
],
[
"Mansi",
"Tommaso",
""
]
] |
Intra-cardiac Echocardiography (ICE) is a powerful imaging modality for guiding electrophysiology and structural heart interventions. ICE provides real-time observation of anatomy, catheters, and emergent complications. However, this increased reliance on intraprocedural imaging creates a high cognitive demand on physicians who can often serve as interventionalist and imager. We present a robotic manipulator for ICE catheters to assist physicians with imaging and serve as a platform for developing processes for procedural automation. Herein, we introduce two application modules towards these goals: (1) a view recovery process that allows physicians to save views during intervention and automatically return with the push of a button and (2) a data-driven approach to compensate kinematic model errors that result from non-linear behaviors in catheter bending, providing more precise control of the catheter tip. View recovery is validated by repeated catheter positioning in cardiac phantom and animal experiments with position- and image-based analysis. We present a simplified calibration approach for error compensation and verify with complex rotation of the catheter in benchtop and phantom experiments under varying realistic curvature conditions. Results support that a robotic manipulator for ICE can provide an efficient and reproducible tool, potentially reducing execution time and promoting greater utilization of ICE imaging.
|
1802.01074
|
Minh C. Phan
|
Minh C. Phan and Aixin Sun and Yi Tay and Jialong Han and Chenliang Li
|
Pair-Linking for Collective Entity Disambiguation: Two Could Be Better
Than All
| null | null | null | null |
cs.IR cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Collective entity disambiguation aims to jointly resolve multiple mentions by
linking them to their associated entities in a knowledge base. Previous works
are primarily based on the underlying assumption that entities within the same
document are highly related. However, the extent to which these mentioned
entities are actually connected in reality is rarely studied and therefore
raises interesting research questions. For the first time, we show that the
semantic relationships between the mentioned entities are in fact less dense
than expected. This could be attributed to several reasons such as noise, data
sparsity and knowledge base incompleteness. As a remedy, we introduce MINTREE,
a new tree-based objective for the entity disambiguation problem. The key
intuition behind MINTREE is the concept of coherence relaxation which utilizes
the weight of a minimum spanning tree to measure the coherence between
entities. Based on this new objective, we design a novel entity disambiguation
algorithm, which we call Pair-Linking. Instead of considering all the given
mentions, Pair-Linking iteratively selects a pair with the highest confidence
at each step for decision making. Via extensive experiments, we show that our
approach is not only more accurate but also surprisingly faster than many
state-of-the-art collective linking algorithms.
|
[
{
"created": "Sun, 4 Feb 2018 05:50:39 GMT",
"version": "v1"
},
{
"created": "Sat, 7 Jul 2018 21:17:33 GMT",
"version": "v2"
},
{
"created": "Mon, 16 Jul 2018 05:24:37 GMT",
"version": "v3"
}
] |
2018-07-17
|
[
[
"Phan",
"Minh C.",
""
],
[
"Sun",
"Aixin",
""
],
[
"Tay",
"Yi",
""
],
[
"Han",
"Jialong",
""
],
[
"Li",
"Chenliang",
""
]
] |
Collective entity disambiguation aims to jointly resolve multiple mentions by linking them to their associated entities in a knowledge base. Previous works are primarily based on the underlying assumption that entities within the same document are highly related. However, the extent to which these mentioned entities are actually connected in reality is rarely studied and therefore raises interesting research questions. For the first time, we show that the semantic relationships between the mentioned entities are in fact less dense than expected. This could be attributed to several reasons such as noise, data sparsity and knowledge base incompleteness. As a remedy, we introduce MINTREE, a new tree-based objective for the entity disambiguation problem. The key intuition behind MINTREE is the concept of coherence relaxation which utilizes the weight of a minimum spanning tree to measure the coherence between entities. Based on this new objective, we design a novel entity disambiguation algorithm, which we call Pair-Linking. Instead of considering all the given mentions, Pair-Linking iteratively selects a pair with the highest confidence at each step for decision making. Via extensive experiments, we show that our approach is not only more accurate but also surprisingly faster than many state-of-the-art collective linking algorithms.
|
2205.03707
|
Federico Olmedo
|
Marcelo Navarro and Federico Olmedo
|
Slicing of Probabilistic Programs based on Specifications
| null | null | null | null |
cs.PL
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper presents the first slicing approach for probabilistic programs
based on specifications. We show that when probabilistic programs are
accompanied by their specifications in the form of pre- and post-condition, we
can exploit this semantic information to produce specification-preserving
slices strictly more precise than slices yielded by conventional techniques
based on data/control dependency.
To achieve this goal, our technique is based on the backward propagation of
post-conditions via the greatest pre-expectation transformer -- the
probabilistic counterpart of Dijkstra's weakest pre-condition transformer. The
technique is termination-sensitive, allowing it to preserve the partial as well as
the total correctness of probabilistic programs w.r.t. their specifications. It
is modular, featuring a local reasoning principle, and is formally proved
correct.
As fundamental technical ingredients of our technique, we design and prove
sound verification condition generators for establishing the partial and total
correctness of probabilistic programs, which are of interest on their own and
can be exploited elsewhere for other purposes.
On the practical side, we demonstrate the applicability of our approach by
means of a few illustrative examples and a case study from the probabilistic
modelling field. We also describe an algorithm for computing least slices among
the space of slices derived by our technique.
|
[
{
"created": "Sat, 7 May 2022 19:28:02 GMT",
"version": "v1"
}
] |
2022-05-10
|
[
[
"Navarro",
"Marcelo",
""
],
[
"Olmedo",
"Federico",
""
]
] |
This paper presents the first slicing approach for probabilistic programs based on specifications. We show that when probabilistic programs are accompanied by their specifications in the form of pre- and post-condition, we can exploit this semantic information to produce specification-preserving slices strictly more precise than slices yielded by conventional techniques based on data/control dependency. To achieve this goal, our technique is based on the backward propagation of post-conditions via the greatest pre-expectation transformer -- the probabilistic counterpart of Dijkstra's weakest pre-condition transformer. The technique is termination-sensitive, allowing it to preserve the partial as well as the total correctness of probabilistic programs w.r.t. their specifications. It is modular, featuring a local reasoning principle, and is formally proved correct. As fundamental technical ingredients of our technique, we design and prove sound verification condition generators for establishing the partial and total correctness of probabilistic programs, which are of interest on their own and can be exploited elsewhere for other purposes. On the practical side, we demonstrate the applicability of our approach by means of a few illustrative examples and a case study from the probabilistic modelling field. We also describe an algorithm for computing least slices among the space of slices derived by our technique.
|
1504.00198
|
Nils Jansen
|
Friedrich Gretz, Nils Jansen, Benjamin Lucien Kaminski, Joost-Pieter
Katoen, Annabelle McIver, Federico Olmedo
|
Conditioning in Probabilistic Programming
| null | null | null | null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We investigate the semantic intricacies of conditioning, a main feature in
probabilistic programming. We provide a weakest (liberal) pre-condition (w(l)p)
semantics for the elementary probabilistic programming language pGCL extended
with conditioning. We prove that quantitative weakest (liberal) pre-conditions
coincide with conditional (liberal) expected rewards in Markov chains and show
that semantically conditioning is a truly conservative extension. We present
two program transformations which entirely eliminate conditioning from any
program and prove their correctness using the w(l)p-semantics. Finally, we show
how the w(l)p-semantics can be used to determine conditional probabilities in a
parametric anonymity protocol and show that an inductive w(l)p-semantics for
conditioning in non-deterministic probabilistic programs cannot exist.
|
[
{
"created": "Wed, 1 Apr 2015 12:29:10 GMT",
"version": "v1"
}
] |
2015-04-02
|
[
[
"Gretz",
"Friedrich",
""
],
[
"Jansen",
"Nils",
""
],
[
"Kaminski",
"Benjamin Lucien",
""
],
[
"Katoen",
"Joost-Pieter",
""
],
[
"McIver",
"Annabelle",
""
],
[
"Olmedo",
"Federico",
""
]
] |
We investigate the semantic intricacies of conditioning, a main feature in probabilistic programming. We provide a weakest (liberal) pre-condition (w(l)p) semantics for the elementary probabilistic programming language pGCL extended with conditioning. We prove that quantitative weakest (liberal) pre-conditions coincide with conditional (liberal) expected rewards in Markov chains and show that semantically conditioning is a truly conservative extension. We present two program transformations which entirely eliminate conditioning from any program and prove their correctness using the w(l)p-semantics. Finally, we show how the w(l)p-semantics can be used to determine conditional probabilities in a parametric anonymity protocol and show that an inductive w(l)p-semantics for conditioning in non-deterministic probabilistic programs cannot exist.
|
2405.01636
|
Rokas Gipi\v{s}kis
|
Rokas Gipi\v{s}kis, Chun-Wei Tsai, and Olga Kurasova
|
Explainable AI (XAI) in Image Segmentation in Medicine, Industry, and
Beyond: A Survey
|
35 pages, 9 figures, 2 tables
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Explainable Artificial Intelligence (XAI) has found numerous applications in computer
vision. While image classification-based explainability techniques have
garnered significant attention, their counterparts in semantic segmentation
have been relatively neglected. Given the prevalent use of image segmentation,
ranging from medical to industrial deployments, these techniques warrant a
systematic look. In this paper, we present the first comprehensive survey on
XAI in semantic image segmentation. This work focuses on techniques that were
either specifically introduced for dense prediction tasks or were extended for
them by modifying existing methods in classification. We analyze and categorize
the literature based on application categories and domains, as well as the
evaluation metrics and datasets used. We also propose a taxonomy for
interpretable semantic segmentation, and discuss potential challenges and
future research directions.
|
[
{
"created": "Thu, 2 May 2024 18:00:25 GMT",
"version": "v1"
}
] |
2024-05-06
|
[
[
"Gipiškis",
"Rokas",
""
],
[
"Tsai",
"Chun-Wei",
""
],
[
"Kurasova",
"Olga",
""
]
] |
Explainable Artificial Intelligence (XAI) has found numerous applications in computer vision. While image classification-based explainability techniques have garnered significant attention, their counterparts in semantic segmentation have been relatively neglected. Given the prevalent use of image segmentation, ranging from medical to industrial deployments, these techniques warrant a systematic look. In this paper, we present the first comprehensive survey on XAI in semantic image segmentation. This work focuses on techniques that were either specifically introduced for dense prediction tasks or were extended for them by modifying existing methods in classification. We analyze and categorize the literature based on application categories and domains, as well as the evaluation metrics and datasets used. We also propose a taxonomy for interpretable semantic segmentation, and discuss potential challenges and future research directions.
|
2205.15052
|
Fatima Ezzahra Airod
|
Fatima Ezzahra Airod, Mattia Merluzzi, Paolo Di Lorenzo, Emilio
Calvanese Strinati
|
Reconfigurable Intelligent Surface Aided Mobile Edge Computing over
Intermittent mmWave Links
| null | null | null | null |
cs.IT cs.ET math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
The advent of Reconfigurable Intelligent Surfaces (RISs) in wireless
communication networks paves the way to supporting high frequency radio access
(e.g. in millimeter wave) while overcoming its sensitivity to the presence of
deep fading and blockages. In support of this vision, this work presents the
forward-looking idea of using RISs to enhance the connectivity of the
communication links in edge computing scenarios, to support computation
offloading services. We consider a multi-user MIMO system, and we formulate a
long-term optimization problem aiming to ensure a bounded end-to-end delay with
the minimum average user transmit power, by jointly selecting uplink user
precoding, RIS reflectivity parameters, and computation resources at a mobile
edge host. Thanks to the marriage of Lyapunov stochastic optimization,
projected gradient techniques and convex optimization, the problem is
efficiently solved on a per-slot basis, requiring only the observation of
instantaneous realizations of time-varying radio channels and task arrivals,
and that of communication and computing buffers. Numerical simulations show the
effectiveness of our method and the benefits of the RIS, in striking the best
trade-off between power consumption and delay for different blocking
conditions, also when different levels of channel knowledge are assumed.
|
[
{
"created": "Mon, 30 May 2022 12:31:58 GMT",
"version": "v1"
}
] |
2022-05-31
|
[
[
"Airod",
"Fatima Ezzahra",
""
],
[
"Merluzzi",
"Mattia",
""
],
[
"Di Lorenzo",
"Paolo",
""
],
[
"Strinati",
"Emilio Calvanese",
""
]
] |
The advent of Reconfigurable Intelligent Surfaces (RISs) in wireless communication networks paves the way to supporting high frequency radio access (e.g. in millimeter wave) while overcoming its sensitivity to the presence of deep fading and blockages. In support of this vision, this work presents the forward-looking idea of using RISs to enhance the connectivity of the communication links in edge computing scenarios, to support computation offloading services. We consider a multi-user MIMO system, and we formulate a long-term optimization problem aiming to ensure a bounded end-to-end delay with the minimum average user transmit power, by jointly selecting uplink user precoding, RIS reflectivity parameters, and computation resources at a mobile edge host. Thanks to the marriage of Lyapunov stochastic optimization, projected gradient techniques and convex optimization, the problem is efficiently solved on a per-slot basis, requiring only the observation of instantaneous realizations of time-varying radio channels and task arrivals, and that of communication and computing buffers. Numerical simulations show the effectiveness of our method and the benefits of the RIS, in striking the best trade-off between power consumption and delay for different blocking conditions, also when different levels of channel knowledge are assumed.
|
2203.10172
|
Eric Graves
|
Eric Graves and Sina Ghiassian
|
Importance Sampling Placement in Off-Policy Temporal-Difference Methods
|
5 pages, 2 figures
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A central challenge to applying many off-policy reinforcement learning
algorithms to real world problems is the variance introduced by importance
sampling. In off-policy learning, the agent learns about a different policy
than the one being executed. To account for the difference, importance sampling
ratios are often used, but can increase variance in the algorithms and reduce
the rate of learning. Several variations of importance sampling have been
proposed to reduce variance, with per-decision importance sampling being the
most popular. However, the update rules for most off-policy algorithms in the
literature depart from per-decision importance sampling in a subtle way; they
correct the entire TD error instead of just the TD target. In this work, we
show how this slight change can be interpreted as a control variate for the TD
target, reducing variance and improving performance. Experiments over a wide
range of algorithms show this subtle modification results in improved
performance.
|
[
{
"created": "Fri, 18 Mar 2022 21:54:09 GMT",
"version": "v1"
},
{
"created": "Thu, 16 Jun 2022 19:54:42 GMT",
"version": "v2"
}
] |
2022-06-20
|
[
[
"Graves",
"Eric",
""
],
[
"Ghiassian",
"Sina",
""
]
] |
A central challenge to applying many off-policy reinforcement learning algorithms to real world problems is the variance introduced by importance sampling. In off-policy learning, the agent learns about a different policy than the one being executed. To account for the difference, importance sampling ratios are often used, but can increase variance in the algorithms and reduce the rate of learning. Several variations of importance sampling have been proposed to reduce variance, with per-decision importance sampling being the most popular. However, the update rules for most off-policy algorithms in the literature depart from per-decision importance sampling in a subtle way; they correct the entire TD error instead of just the TD target. In this work, we show how this slight change can be interpreted as a control variate for the TD target, reducing variance and improving performance. Experiments over a wide range of algorithms show this subtle modification results in improved performance.
|
1901.04859
|
Herman Shen
|
Sharad Rawat and M.-H. Herman Shen
|
A Novel Topology Optimization Approach using Conditional Deep Learning
|
8 Pages, 6 Figures, 1 Table. arXiv admin note: text overlap with
arXiv:1808.02334
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this study, a novel topology optimization approach based on conditional
Wasserstein generative adversarial networks (CWGAN) is developed to replicate
the conventional topology optimization algorithms in an extremely
computationally inexpensive way. CWGAN consists of a generator and a
discriminator, both of which are deep convolutional neural networks (CNN). The
limited samples of data, quasi-optimal planar structures, needed for training
purposes are generated using the conventional topology optimization algorithms.
With CWGANs, the topology optimization conditions can be set to a required
value before generating samples. CWGAN truncates the global design space by
introducing an equality constraint by the designer. The results are validated
by generating an optimized planar structure using the conventional algorithms
with the same settings. A proof of concept is presented, which is, to our
knowledge, the first such illustration of the fusion of CWGANs and topology
optimization.
|
[
{
"created": "Mon, 14 Jan 2019 15:21:44 GMT",
"version": "v1"
}
] |
2019-01-16
|
[
[
"Rawat",
"Sharad",
""
],
[
"Shen",
"M. -H. Herman",
""
]
] |
In this study, a novel topology optimization approach based on conditional Wasserstein generative adversarial networks (CWGAN) is developed to replicate the conventional topology optimization algorithms in an extremely computationally inexpensive way. CWGAN consists of a generator and a discriminator, both of which are deep convolutional neural networks (CNN). The limited samples of data, quasi-optimal planar structures, needed for training purposes are generated using the conventional topology optimization algorithms. With CWGANs, the topology optimization conditions can be set to a required value before generating samples. CWGAN truncates the global design space by introducing an equality constraint by the designer. The results are validated by generating an optimized planar structure using the conventional algorithms with the same settings. A proof of concept is presented, which is, to our knowledge, the first such illustration of the fusion of CWGANs and topology optimization.
|
2312.12030
|
Jiachun Pan
|
Jiachun Pan, Hanshu Yan, Jun Hao Liew, Jiashi Feng, Vincent Y. F. Tan
|
Towards Accurate Guided Diffusion Sampling through Symplectic Adjoint
Method
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Training-free guided sampling in diffusion models leverages off-the-shelf
pre-trained networks, such as an aesthetic evaluation model, to guide the
generation process. Current training-free guided sampling algorithms obtain the
guidance energy function based on a one-step estimate of the clean image.
However, since the off-the-shelf pre-trained networks are trained on clean
images, the one-step estimation procedure of the clean image may be inaccurate,
especially in the early stages of the generation process in diffusion models.
This causes the guidance in the early time steps to be inaccurate. To overcome
this problem, we propose Symplectic Adjoint Guidance (SAG), which calculates
the gradient guidance in two inner stages. Firstly, SAG estimates the clean
image via $n$ function calls, where $n$ serves as a flexible hyperparameter
that can be tailored to meet specific image quality requirements. Secondly, SAG
uses the symplectic adjoint method to obtain the gradients accurately and
efficiently in terms of the memory requirements. Extensive experiments
demonstrate that SAG generates images with higher qualities compared to the
baselines in both guided image and video generation tasks.
|
[
{
"created": "Tue, 19 Dec 2023 10:30:31 GMT",
"version": "v1"
}
] |
2023-12-20
|
[
[
"Pan",
"Jiachun",
""
],
[
"Yan",
"Hanshu",
""
],
[
"Liew",
"Jun Hao",
""
],
[
"Feng",
"Jiashi",
""
],
[
"Tan",
"Vincent Y. F.",
""
]
] |
Training-free guided sampling in diffusion models leverages off-the-shelf pre-trained networks, such as an aesthetic evaluation model, to guide the generation process. Current training-free guided sampling algorithms obtain the guidance energy function based on a one-step estimate of the clean image. However, since the off-the-shelf pre-trained networks are trained on clean images, the one-step estimation procedure of the clean image may be inaccurate, especially in the early stages of the generation process in diffusion models. This causes the guidance in the early time steps to be inaccurate. To overcome this problem, we propose Symplectic Adjoint Guidance (SAG), which calculates the gradient guidance in two inner stages. Firstly, SAG estimates the clean image via $n$ function calls, where $n$ serves as a flexible hyperparameter that can be tailored to meet specific image quality requirements. Secondly, SAG uses the symplectic adjoint method to obtain the gradients accurately and efficiently in terms of the memory requirements. Extensive experiments demonstrate that SAG generates images with higher qualities compared to the baselines in both guided image and video generation tasks.
|
0911.2551
|
Jayakrishnan Unnikrishnan
|
Jayakrishnan Unnikrishnan, Venugopal V. Veeravalli, Sean Meyn
|
Minimax Robust Quickest Change Detection
|
Submitted to IEEE Transactions on Information Theory, Nov. 2009.
Revised May 2010
| null |
10.1109/TIT.2011.2104993
| null |
cs.IT math.IT math.ST stat.TH
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The popular criteria of optimality for quickest change detection procedures
are the Lorden criterion, the Shiryaev-Roberts-Pollak criterion, and the
Bayesian criterion. In this paper a robust version of these quickest change
detection problems is considered when the pre-change and post-change
distributions are not known exactly but belong to known uncertainty classes of
distributions. For uncertainty classes that satisfy a specific condition, it is
shown that one can identify least favorable distributions (LFDs) from the
uncertainty classes, such that the detection rule designed for the LFDs is
optimal for the robust problem in a minimax sense. The condition is similar to
that required for the identification of LFDs for the robust hypothesis testing
problem originally studied by Huber. An upper bound on the delay incurred by
the robust test is also obtained in the asymptotic setting under the Lorden
criterion of optimality. This bound quantifies the delay penalty incurred to
guarantee robustness. When the LFDs can be identified, the proposed test is
easier to implement than the CUSUM test based on the Generalized Likelihood
Ratio (GLR) statistic which is a popular approach for such robust change
detection problems. The proposed test is also shown to give better performance
than the GLR test in simulations for some parameter values.
|
[
{
"created": "Fri, 13 Nov 2009 07:07:50 GMT",
"version": "v1"
},
{
"created": "Tue, 1 Jun 2010 03:57:48 GMT",
"version": "v2"
}
] |
2016-11-15
|
[
[
"Unnikrishnan",
"Jayakrishnan",
""
],
[
"Veeravalli",
"Venugopal V.",
""
],
[
"Meyn",
"Sean",
""
]
] |
The popular criteria of optimality for quickest change detection procedures are the Lorden criterion, the Shiryaev-Roberts-Pollak criterion, and the Bayesian criterion. In this paper a robust version of these quickest change detection problems is considered when the pre-change and post-change distributions are not known exactly but belong to known uncertainty classes of distributions. For uncertainty classes that satisfy a specific condition, it is shown that one can identify least favorable distributions (LFDs) from the uncertainty classes, such that the detection rule designed for the LFDs is optimal for the robust problem in a minimax sense. The condition is similar to that required for the identification of LFDs for the robust hypothesis testing problem originally studied by Huber. An upper bound on the delay incurred by the robust test is also obtained in the asymptotic setting under the Lorden criterion of optimality. This bound quantifies the delay penalty incurred to guarantee robustness. When the LFDs can be identified, the proposed test is easier to implement than the CUSUM test based on the Generalized Likelihood Ratio (GLR) statistic which is a popular approach for such robust change detection problems. The proposed test is also shown to give better performance than the GLR test in simulations for some parameter values.
|
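For context on the CUSUM test that the abstract above benchmarks against, here is a minimal sketch (a toy Gaussian mean-shift example of our own, not the paper's robust minimax procedure or its GLR-based competitor):

```python
import numpy as np

def cusum(samples, llr, threshold):
    """Return the first alarm index (or None) for the CUSUM statistic
    W_n = max(0, W_{n-1} + llr(x_n)), where llr is the log-likelihood
    ratio log(f1(x)/f0(x)) of post- vs. pre-change densities."""
    w = 0.0
    for n, x in enumerate(samples):
        w = max(0.0, w + llr(x))
        if w > threshold:
            return n
    return None

# Toy example: N(0,1) pre-change, N(1,1) post-change, change at t=100.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 1, 100), rng.normal(1, 1, 100)])
llr = lambda x: x - 0.5   # log N(x;1,1)/N(x;0,1) simplifies to x - 1/2
alarm = cusum(data, llr, threshold=8.0)
```

When the least favorable distributions can be identified, the paper's proposed test plugs their fixed log-likelihood ratio into exactly this kind of recursion, which is what makes it cheaper than a GLR statistic.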
2111.10711
|
Sadat Shahriar
|
Sadat Shahriar, Arjun Mukherjee, Omprakash Gnawali
|
A Domain-Independent Holistic Approach to Deception Detection
| null | null |
10.26615/978-954-452-072-4_147
| null |
cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
Deception in text can take different forms in different domains, including
fake news, rumor tweets, and spam emails. Irrespective of the domain, the main
intent of deceptive text is to deceive the reader. Although domain-specific
deception detection exists, domain-independent deception detection can provide
a holistic picture, which can be crucial to understanding how deception occurs
in text. In this paper, we detect deception in a domain-independent setting
using deep learning architectures. Our method outperforms the
State-of-the-Art (SOTA) performance on most benchmark datasets with an overall
accuracy of 93.42% and F1-Score of 93.22%. The domain-independent training
allows us to capture subtler nuances of deceptive writing style. Furthermore,
we analyze how much in-domain data may be helpful to accurately detect
deception, especially for cases where data may not be readily available for
training. Our results and analysis indicate that there may be a universal
pattern of deception in text, independent of the domain, which can create a
novel area of research and open up new avenues in the field of deception
detection.
|
[
{
"created": "Sun, 21 Nov 2021 01:52:38 GMT",
"version": "v1"
}
] |
2021-11-23
|
[
[
"Shahriar",
"Sadat",
""
],
[
"Mukherjee",
"Arjun",
""
],
[
"Gnawali",
"Omprakash",
""
]
] |
Deception in text can take different forms in different domains, including fake news, rumor tweets, and spam emails. Irrespective of the domain, the main intent of deceptive text is to deceive the reader. Although domain-specific deception detection exists, domain-independent deception detection can provide a holistic picture, which can be crucial to understanding how deception occurs in text. In this paper, we detect deception in a domain-independent setting using deep learning architectures. Our method outperforms the State-of-the-Art (SOTA) performance on most benchmark datasets with an overall accuracy of 93.42% and F1-Score of 93.22%. The domain-independent training allows us to capture subtler nuances of deceptive writing style. Furthermore, we analyze how much in-domain data may be helpful to accurately detect deception, especially for cases where data may not be readily available for training. Our results and analysis indicate that there may be a universal pattern of deception in text, independent of the domain, which can create a novel area of research and open up new avenues in the field of deception detection.
|
1903.01611
|
Jonathan Frankle
|
Jonathan Frankle, Gintare Karolina Dziugaite, Daniel M. Roy, Michael
Carbin
|
Stabilizing the Lottery Ticket Hypothesis
|
This article has been subsumed by "Linear Mode Connectivity and the
Lottery Ticket Hypothesis" (arXiv:1912.05671, ICML 2020). Please read/cite
that article instead
| null | null | null |
cs.LG cs.CV stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Pruning is a well-established technique for removing unnecessary structure
from neural networks after training to improve the performance of inference.
Several recent results have explored the possibility of pruning at
initialization time to provide similar benefits during training. In particular,
the "lottery ticket hypothesis" conjectures that typical neural networks
contain small subnetworks that can train to similar accuracy in a commensurate
number of steps. The evidence for this claim is that a procedure based on
iterative magnitude pruning (IMP) reliably finds such subnetworks retroactively
on small vision tasks. However, IMP fails on deeper networks, and proposed
methods to prune before training or train pruned networks encounter similar
scaling limitations. In this paper, we argue that these efforts have struggled
on deeper networks because they have focused on pruning precisely at
initialization. We modify IMP to search for subnetworks that could have been
obtained by pruning early in training (0.1% to 7% through) rather than at
iteration 0. With this change, it finds small subnetworks of deeper networks
(e.g., 80% sparsity on Resnet-50) that can complete the training process to
match the accuracy of the original network on more challenging tasks (e.g.,
ImageNet). In situations where IMP fails at iteration 0, the accuracy benefits
of delaying pruning accrue rapidly over the earliest iterations of training. To
explain these behaviors, we study subnetwork "stability," finding that - as
accuracy improves in this fashion - IMP subnetworks train to parameters closer
to those of the full network and do so with improved consistency in the face of
gradient noise. These results offer new insights into the opportunity to prune
large-scale networks early in training and the behaviors underlying the lottery
ticket hypothesis.
|
[
{
"created": "Tue, 5 Mar 2019 00:52:12 GMT",
"version": "v1"
},
{
"created": "Fri, 7 Jun 2019 23:40:16 GMT",
"version": "v2"
},
{
"created": "Mon, 20 Jul 2020 16:50:33 GMT",
"version": "v3"
}
] |
2020-09-29
|
[
[
"Frankle",
"Jonathan",
""
],
[
"Dziugaite",
"Gintare Karolina",
""
],
[
"Roy",
"Daniel M.",
""
],
[
"Carbin",
"Michael",
""
]
] |
Pruning is a well-established technique for removing unnecessary structure from neural networks after training to improve the performance of inference. Several recent results have explored the possibility of pruning at initialization time to provide similar benefits during training. In particular, the "lottery ticket hypothesis" conjectures that typical neural networks contain small subnetworks that can train to similar accuracy in a commensurate number of steps. The evidence for this claim is that a procedure based on iterative magnitude pruning (IMP) reliably finds such subnetworks retroactively on small vision tasks. However, IMP fails on deeper networks, and proposed methods to prune before training or train pruned networks encounter similar scaling limitations. In this paper, we argue that these efforts have struggled on deeper networks because they have focused on pruning precisely at initialization. We modify IMP to search for subnetworks that could have been obtained by pruning early in training (0.1% to 7% through) rather than at iteration 0. With this change, it finds small subnetworks of deeper networks (e.g., 80% sparsity on Resnet-50) that can complete the training process to match the accuracy of the original network on more challenging tasks (e.g., ImageNet). In situations where IMP fails at iteration 0, the accuracy benefits of delaying pruning accrue rapidly over the earliest iterations of training. To explain these behaviors, we study subnetwork "stability," finding that - as accuracy improves in this fashion - IMP subnetworks train to parameters closer to those of the full network and do so with improved consistency in the face of gradient noise. These results offer new insights into the opportunity to prune large-scale networks early in training and the behaviors underlying the lottery ticket hypothesis
|
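The modified IMP procedure described above, prune by magnitude and then rewind the surviving weights to an early-training checkpoint rather than to initialization, can be sketched as follows. This is a toy NumPy illustration with a stand-in training function; the sparsity fraction, round count, and step budget are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def prune_by_magnitude(weights, mask, frac):
    """Zero out the smallest-magnitude fraction `frac` of currently
    surviving weights (global magnitude pruning)."""
    surviving = np.abs(weights[mask.astype(bool)])
    cutoff = np.quantile(surviving, frac)
    return mask * (np.abs(weights) > cutoff)

def imp_with_rewinding(train, w_init, rounds=3, frac=0.2, rewind_step=500):
    """Sketch of IMP with rewinding: instead of resetting surviving
    weights to iteration 0, reset them to their values from early in
    training (here, `rewind_step` iterations in)."""
    mask = np.ones_like(w_init)
    w_rewind = train(w_init, mask, steps=rewind_step)   # early checkpoint
    w = train(w_rewind, mask, steps=10_000 - rewind_step)
    for _ in range(rounds):
        mask = prune_by_magnitude(w, mask, frac)
        # Rewind survivors, then retrain the sparse subnetwork.
        w = train(w_rewind * mask, mask, steps=10_000 - rewind_step)
    return mask, w

# Stand-in "training" that just grows weight magnitudes under the mask.
rng = np.random.default_rng(1)
w0 = rng.normal(size=1000)
toy_train = lambda w, mask, steps: (w + 0.0001 * steps * np.sign(w)) * mask
mask, w = imp_with_rewinding(toy_train, w0)
```

Three rounds at 20% per round leave roughly 0.8^3 ≈ 51% of weights, matching the iterative (rather than one-shot) character of IMP.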
1905.09815
|
Marco Tezzele
|
Andrea Mola and Marco Tezzele and Mahmoud Gadalla and Federica
Valdenazzi and Davide Grassi and Roberta Padovan and Gianluigi Rozza
|
Efficient Reduction in Shape Parameter Space Dimension for Ship
Propeller Blade Design
| null | null | null | null |
cs.CE math.NA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we present the results of a ship propeller design optimization
campaign carried out in the framework of the research project PRELICA, funded
by the Friuli Venezia Giulia regional government. The main idea of this work is
to operate on a multidisciplinary level to identify propeller shapes that lead
to reduced tip vortex-induced pressure and increased efficiency without
altering the thrust. First, a specific tool for the bottom-up construction of
parameterized propeller blade geometries has been developed. The proposed
algorithm operates with a user-defined number of arbitrarily shaped or NACA
airfoil sections, and employs arbitrary-degree NURBS to represent the chord,
pitch, skew and rake distributions as functions of the blade radial coordinate.
The control points of such curves have been modified to generate, in a fully
automated way, a family of blade geometries depending on as many as 20 shape
parameters. Such geometries have then been used to carry out potential flow
simulations with the Boundary Element Method based software PROCAL. Given the
high number of parameters considered, such a preliminary stage allowed for a
fast evaluation of the performance of several hundreds of shapes. In addition,
the data obtained from the potential flow simulation allowed for the
application of a parameter space reduction methodology based on active
subspaces (AS) property, which suggested that the main propeller performance
indices are, at a first but rather accurate approximation, only depending on a
single parameter which is a linear combination of all the original geometric
ones. AS analysis has also been used to carry out a constrained optimization
exploiting response surface method in the reduced parameter space, and a
sensitivity analysis based on such surrogate model. The few selected shapes
were finally used to set up high fidelity RANS simulations and select an
optimal shape.
|
[
{
"created": "Wed, 15 May 2019 08:29:00 GMT",
"version": "v1"
}
] |
2019-05-24
|
[
[
"Mola",
"Andrea",
""
],
[
"Tezzele",
"Marco",
""
],
[
"Gadalla",
"Mahmoud",
""
],
[
"Valdenazzi",
"Federica",
""
],
[
"Grassi",
"Davide",
""
],
[
"Padovan",
"Roberta",
""
],
[
"Rozza",
"Gianluigi",
""
]
] |
In this work, we present the results of a ship propeller design optimization campaign carried out in the framework of the research project PRELICA, funded by the Friuli Venezia Giulia regional government. The main idea of this work is to operate on a multidisciplinary level to identify propeller shapes that lead to reduced tip vortex-induced pressure and increased efficiency without altering the thrust. First, a specific tool for the bottom-up construction of parameterized propeller blade geometries has been developed. The algorithm proposed operates with a user defined number of arbitrary shaped or NACA airfoil sections, and employs arbitrary degree NURBS to represent the chord, pitch, skew and rake distribution as a function of the blade radial coordinate. The control points of such curves have been modified to generate, in a fully automated way, a family of blade geometries depending on as many as 20 shape parameters. Such geometries have then been used to carry out potential flow simulations with the Boundary Element Method based software PROCAL. Given the high number of parameters considered, such a preliminary stage allowed for a fast evaluation of the performance of several hundreds of shapes. In addition, the data obtained from the potential flow simulation allowed for the application of a parameter space reduction methodology based on active subspaces (AS) property, which suggested that the main propeller performance indices are, at a first but rather accurate approximation, only depending on a single parameter which is a linear combination of all the original geometric ones. AS analysis has also been used to carry out a constrained optimization exploiting response surface method in the reduced parameter space, and a sensitivity analysis based on such surrogate model. The few selected shapes were finally used to set up high fidelity RANS simulations and select an optimal shape.
|
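The active subspaces (AS) reduction mentioned above can be illustrated with a minimal sketch: estimate C = E[∇f ∇fᵀ] from sampled gradients and keep its leading eigenvectors. The toy 20-parameter function below, which genuinely depends on a single linear combination of the inputs (echoing the paper's finding for the propeller performance indices), is our own assumption, not the PRELICA data:

```python
import numpy as np

def active_subspace(grads, k=1):
    """Estimate a k-dimensional active subspace from sampled gradients:
    the eigenvectors of C = (1/N) sum_i g_i g_i^T with the largest
    eigenvalues."""
    C = grads.T @ grads / len(grads)
    eigvals, eigvecs = np.linalg.eigh(C)      # eigh returns ascending order
    order = np.argsort(eigvals)[::-1]
    return eigvals[order], eigvecs[:, order[:k]]

# Toy f(x) = sin(a . x) over 20 parameters: its gradient cos(a . x) * a
# always points along a, so the active subspace is one-dimensional.
rng = np.random.default_rng(0)
a = rng.normal(size=20)
a /= np.linalg.norm(a)
X = rng.uniform(-1, 1, size=(500, 20))
grads = np.cos(X @ a)[:, None] * a
eigvals, W = active_subspace(grads, k=1)
```

A sharp eigenvalue gap like the one here is what justifies building a response surface over the single reduced coordinate Wᵀx.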
1710.01605
|
Dirk Slock
|
Elisabeth de Carvalho and Dirk Slock
|
Cram\'er-Rao Bounds for Blind Multichannel Estimation
|
22 pages, 3 figures
| null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In some estimation problems, not all the parameters can be identified, which
results in singularity of the Fisher Information Matrix (FIM). The Cram\'er-Rao
Bound (CRB), which is the inverse of the FIM, is then not defined. To
regularize the estimation problem, one can impose constraints on the parameters
and derive the corresponding CRBs. The correspondence between local
identifiability and FIM regularity is studied here. Furthermore the number of
FIM singularities is shown to be equal to the number of independent constraints
necessary to have a well-defined constrained CRB and local identifiability. In
general, many sets of constraints can render the parameters identifiable,
giving different values for the CRB, that are not always relevant. When the
constraints can be chosen, we propose a constrained CRB, the pseudo-inverse of
the FIM, which gives, for a minimum number of constraints, the lowest bound on
the mean squared estimation error. These results are applied to two approaches
to blind FIR multichannel estimation which allow identification of the channel
up to a scale or phase factor. These two approaches correspond to deterministic
and Gaussian models for the unknown channel inputs. The singularities of the
FIMs and local identifiability are studied and the corresponding constrained
CRBs are derived and interpreted.
|
[
{
"created": "Wed, 4 Oct 2017 13:47:41 GMT",
"version": "v1"
}
] |
2018-07-24
|
[
[
"de Carvalho",
"Elisabeth",
""
],
[
"Slock",
"Dirk",
""
]
] |
In some estimation problems, not all the parameters can be identified, which results in singularity of the Fisher Information Matrix (FIM). The Cram\'er-Rao Bound (CRB), which is the inverse of the FIM, is then not defined. To regularize the estimation problem, one can impose constraints on the parameters and derive the corresponding CRBs. The correspondence between local identifiability and FIM regularity is studied here. Furthermore the number of FIM singularities is shown to be equal to the number of independent constraints necessary to have a well-defined constrained CRB and local identifiability. In general, many sets of constraints can render the parameters identifiable, giving different values for the CRB, that are not always relevant. When the constraints can be chosen, we propose a constrained CRB, the pseudo-inverse of the FIM, which gives, for a minimum number of constraints, the lowest bound on the mean squared estimation error. These results are applied to two approaches to blind FIR multichannel estimation which allow identification of the channel up to a scale or phase factor. These two approaches correspond to deterministic and Gaussian models for the unknown channel inputs. The singularities of the FIMs and local identifiability are studied and the corresponding constrained CRBs are derived and interpreted.
|
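The proposal above, taking the pseudo-inverse of a singular FIM as a constrained CRB with the number of required constraints equal to the number of FIM singularities, can be illustrated on a toy 2x2 rank-1 FIM. The example is ours; the scale ambiguity mimics a blind channel identified only up to a scale factor:

```python
import numpy as np

# Toy FIM for theta = (h1, h2) observable only up to a common scale:
# J = v v^T with v = (2, 1), so the scale direction is unidentifiable.
J = np.array([[4.0, 2.0],
              [2.0, 1.0]])

rank = np.linalg.matrix_rank(J)
n_constraints = J.shape[0] - rank   # constraints needed = # singularities
crb = np.linalg.pinv(J)             # pseudo-inverse as the constrained CRB
```

For this rank-1 FIM one constraint (e.g., fixing the scale) regularizes the problem, and the pseudo-inverse evaluates in closed form to J / 25.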
1802.05118
|
Torgeir Dings{\o}yr
|
Torgeir Dings{\o}yr, Tore Dyb{\aa}, Mette Gjertsen, Anette Odgaard
Jacobsen, Tor-Erik Mathisen, Jan Ole Nordfjord, Kjetil R{\o}e, Kjetil Strand
|
Key Lessons from Tailoring Agile Methods for Large-Scale Software
Development
|
Accepted for publication in IEEE IT Professional
| null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We describe advice derived from one of the largest development programs in
Norway, where twelve Scrum teams combined agile practices with traditional
project management. The Perform program delivered 12 releases over a four-year
period, and finished on budget and on time. In this article, we summarize 12
key lessons on five crucial topics, relevant to other large development
projects seeking to combine Scrum with traditional project management.
|
[
{
"created": "Wed, 14 Feb 2018 14:45:36 GMT",
"version": "v1"
}
] |
2018-02-15
|
[
[
"Dingsøyr",
"Torgeir",
""
],
[
"Dybå",
"Tore",
""
],
[
"Gjertsen",
"Mette",
""
],
[
"Jacobsen",
"Anette Odgaard",
""
],
[
"Mathisen",
"Tor-Erik",
""
],
[
"Nordfjord",
"Jan Ole",
""
],
[
"Røe",
"Kjetil",
""
],
[
"Strand",
"Kjetil",
""
]
] |
We describe advice derived from one of the largest development programs in Norway, where twelve Scrum teams combined agile practices with traditional project management. The Perform program delivered 12 releases over a four-year period, and finished on budget and on time. In this article, we summarize 12 key lessons on five crucial topics, relevant to other large development projects seeking to combine Scrum with traditional project management.
|
2406.07536
|
Wenxiao Wang
|
Wenxiao Wang, Weiming Zhuang, Lingjuan Lyu
|
Towards Fundamentally Scalable Model Selection: Asymptotically Fast
Update and Selection
|
19 pages, 8 figures
| null | null | null |
cs.LG cs.CV stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The advancement of deep learning technologies is bringing new models every
day, motivating the study of scalable model selection. An ideal model selection
scheme should minimally support two operations efficiently over a large pool of
candidate models: update, which involves either adding a new candidate model or
removing an existing candidate model, and selection, which involves locating
highly performing models for a given task. However, previous solutions to model
selection require high computational complexity for at least one of these two
operations. In this work, we target fundamentally (more) scalable model
selection that supports asymptotically fast update and asymptotically fast
selection at the same time. Firstly, we define isolated model embedding, a
family of model selection schemes supporting asymptotically fast update and
selection: With respect to the number of candidate models $m$, the update
complexity is O(1) and the selection consists of a single sweep over $m$
vectors in addition to O(1) model operations. Isolated model embedding also
implies several desirable properties for applications. Secondly, we present
Standardized Embedder, an empirical realization of isolated model embedding. We
assess its effectiveness by using it to select representations from a pool of
100 pre-trained vision models for classification tasks and measuring the
performance gaps between the selected models and the best candidates with a
linear probing protocol. Experiments suggest our realization is effective in
selecting models with competitive performances and highlight isolated model
embedding as a promising direction towards model selection that is
fundamentally (more) scalable.
|
[
{
"created": "Tue, 11 Jun 2024 17:57:49 GMT",
"version": "v1"
}
] |
2024-06-12
|
[
[
"Wang",
"Wenxiao",
""
],
[
"Zhuang",
"Weiming",
""
],
[
"Lyu",
"Lingjuan",
""
]
] |
The advancement of deep learning technologies is bringing new models every day, motivating the study of scalable model selection. An ideal model selection scheme should minimally support two operations efficiently over a large pool of candidate models: update, which involves either adding a new candidate model or removing an existing candidate model, and selection, which involves locating highly performing models for a given task. However, previous solutions to model selection require high computational complexity for at least one of these two operations. In this work, we target fundamentally (more) scalable model selection that supports asymptotically fast update and asymptotically fast selection at the same time. Firstly, we define isolated model embedding, a family of model selection schemes supporting asymptotically fast update and selection: With respect to the number of candidate models $m$, the update complexity is O(1) and the selection consists of a single sweep over $m$ vectors in addition to O(1) model operations. Isolated model embedding also implies several desirable properties for applications. Secondly, we present Standardized Embedder, an empirical realization of isolated model embedding. We assess its effectiveness by using it to select representations from a pool of 100 pre-trained vision models for classification tasks and measuring the performance gaps between the selected models and the best candidates with a linear probing protocol. Experiments suggest our realization is effective in selecting models with competitive performances and highlight isolated model embedding as a promising direction towards model selection that is fundamentally (more) scalable.
|
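The two operations described above, O(1) update and selection via a single sweep over m stored vectors, can be sketched as follows. The cosine-similarity score and the dictionary layout are our illustrative assumptions; the paper's Standardized Embedder realization is not reproduced here:

```python
import numpy as np

class IsolatedModelIndex:
    """Sketch of isolated model embedding: each candidate model is mapped
    to a fixed vector independently of the others, so adding or removing
    a model is O(1); selection is one sweep over the m stored vectors."""

    def __init__(self, dim):
        self.dim = dim
        self.embeddings = {}                      # model_id -> unit vector

    def update(self, model_id, vec):              # O(1): add or replace
        self.embeddings[model_id] = vec / np.linalg.norm(vec)

    def remove(self, model_id):                   # O(1): delete
        self.embeddings.pop(model_id, None)

    def select(self, task_vec, top_k=1):          # single sweep over m
        t = task_vec / np.linalg.norm(task_vec)
        scores = {mid: float(v @ t) for mid, v in self.embeddings.items()}
        return sorted(scores, key=scores.get, reverse=True)[:top_k]

rng = np.random.default_rng(0)
index = IsolatedModelIndex(dim=8)
for i in range(100):                              # a pool of 100 models
    index.update(f"model_{i}", rng.normal(size=8))
# A task embedding close to one model's embedding should select it.
task = index.embeddings["model_42"] + 0.01 * rng.normal(size=8)
best = index.select(task, top_k=3)
```

Because no model's embedding depends on any other model, the pool can grow or shrink without recomputation, which is the scalability property the abstract emphasizes.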
2101.11944
|
Mauro Innocente
|
Mauro S. Innocente, Johann Sienz
|
Coefficients' Settings in Particle Swarm Optimization: Insight and
Guidelines
|
Preprint submitted to E. Dvorkin, M. Goldschmit, & M. Storti (Eds.),
Mec\'anica Computacional: Computational Intelligence Techniques for
Optimization and Data Modeling (B) (Vol. XXIX, pp. 9253-9269). Asociaci\'on
Argentina de Mec\'anica Computacional, Buenos Aires, Argentina, 2010. Open
access published version here:
https://cimec.org.ar/ojs/index.php/mc/article/view/3666
| null | null | null |
cs.NE math.OC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Particle Swarm Optimization is a population-based and gradient-free
optimization method developed by mimicking social behaviour observed in nature.
Its ability to optimize is not specifically implemented but emerges at the
global level from local interactions. In its canonical version, there are three
factors that govern a particle's trajectory: 1) inertia from its previous
displacement; 2) attraction to its best experience; and 3) attraction to a
given neighbour's best experience. The importance given to each of these
factors is regulated by three coefficients: 1) the inertia; 2) the
individuality; and 3) the sociality weights. Their settings rule the trajectory
of the particle when pulled by these two attractors. Different speeds and forms
of convergence of a particle towards its attractor(s) take place for different
settings of the coefficients. A more general formulation is presented, aiming
for better control of the embedded randomness. Guidelines for selecting the
coefficients' settings to obtain the desired behaviour are offered. The
convergence speed of the algorithm also depends on the speed of spread of
information within the swarm. The latter is governed by the structure of the
neighbourhood, whose study is beyond the scope of this paper. The objective
here is to help understand the core of the PSO paradigm from the bottom up by
offering some insight into the form of the particles' trajectories, and to
provide some guidelines on how to set the coefficients in the particles'
velocity update equation in the proposed formulation to obtain the type of
behaviour desired for the problem at hand. General-purpose settings are also
suggested. The relationship between the proposed formulation and both the
classical and constricted PSO formulations is also provided.
|
[
{
"created": "Thu, 28 Jan 2021 11:49:45 GMT",
"version": "v1"
}
] |
2021-01-29
|
[
[
"Innocente",
"Mauro S.",
""
],
[
"Sienz",
"Johann",
""
]
] |
Particle Swarm Optimization is a population-based and gradient-free optimization method developed by mimicking social behaviour observed in nature. Its ability to optimize is not specifically implemented but emerges at the global level from local interactions. In its canonical version, there are three factors that govern a particle's trajectory: 1) inertia from its previous displacement; 2) attraction to its best experience; and 3) attraction to a given neighbour's best experience. The importance given to each of these factors is regulated by three coefficients: 1) the inertia; 2) the individuality; and 3) the sociality weights. Their settings rule the trajectory of the particle when pulled by these two attractors. Different speeds and forms of convergence of a particle towards its attractor(s) take place for different settings of the coefficients. A more general formulation is presented, aiming for better control of the embedded randomness. Guidelines for selecting the coefficients' settings to obtain the desired behaviour are offered. The convergence speed of the algorithm also depends on the speed of spread of information within the swarm. The latter is governed by the structure of the neighbourhood, whose study is beyond the scope of this paper. The objective here is to help understand the core of the PSO paradigm from the bottom up by offering some insight into the form of the particles' trajectories, and to provide some guidelines on how to set the coefficients in the particles' velocity update equation in the proposed formulation to obtain the type of behaviour desired for the problem at hand. General-purpose settings are also suggested. The relationship between the proposed formulation and both the classical and constricted PSO formulations is also provided.
|
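The canonical velocity update discussed above (inertia plus individuality and sociality pulls) can be sketched as follows. The coefficient values used are common constriction-equivalent defaults, not necessarily the settings the paper recommends, and the sphere function is a stand-in test problem:

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7298, c1=1.4962, c2=1.4962, rng=None):
    """One canonical PSO update: inertia w, individuality weight c1 pulling
    toward the particle's own best, sociality weight c2 pulling toward the
    neighbourhood best, with fresh uniform randomness r1, r2 per component."""
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v

# Minimise the 2-D sphere function with a tiny swarm.
rng = np.random.default_rng(0)
f = lambda x: np.sum(x**2, axis=-1)
X = rng.uniform(-5, 5, size=(20, 2))
V = np.zeros_like(X)
P = X.copy()                          # personal bests
G = X[np.argmin(f(X))].copy()         # global best (fully connected swarm)
for _ in range(200):
    X, V = pso_step(X, V, P, G, rng=rng)
    better = f(X) < f(P)
    P[better] = X[better]
    G = P[np.argmin(f(P))].copy()
```

Shrinking w (or c1, c2) damps the oscillation of each particle around the weighted average of its two attractors, which is exactly the trajectory behaviour the paper's guidelines are about.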
2003.13617
|
Javad Ghofrani
|
Kirill Loisha, Javad Ghofrani, Dirk Reichelt
|
A Systematic Mapping Study on Blockchain Technology for Digital
Protection of Communication with Industrial Control
|
8 pages
| null | null | null |
cs.CR cs.DC cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the next few years, blockchain will play a central role in IoT as a
technology. It enables the traceability of processes between multiple parties
independently of a central instance, and makes those processes more
transparent, cheaper, and safer. This research was conducted as a systematic
literature search. Our aim is to understand the current state of
implementation of blockchain technology for the digital protection of
communication in industrial cyber-physical systems. We extracted 28 primary
papers from scientific databases and classified them into different categories
using visualizations. The results show that around 14\% of the papers focus on
solution proposals and implementations of the use case "Secure transfer of
order data" using the Ethereum blockchain, while 7\% of the papers apply
Hyperledger Fabric and Multichain. The majority of the research (around 43\%)
focuses on solution development for supply chain and process traceability.
|
[
{
"created": "Mon, 30 Mar 2020 16:49:11 GMT",
"version": "v1"
}
] |
2020-03-31
|
[
[
"Loisha",
"Kirill",
""
],
[
"Ghofrani",
"Javad",
""
],
[
"Reichelt",
"Dirk",
""
]
] |
In the next few years, blockchain will play a central role in IoT as a technology. It enables the traceability of processes between multiple parties independently of a central instance, and makes those processes more transparent, cheaper, and safer. This research was conducted as a systematic literature search. Our aim is to understand the current state of implementation of blockchain technology for the digital protection of communication in industrial cyber-physical systems. We extracted 28 primary papers from scientific databases and classified them into different categories using visualizations. The results show that around 14\% of the papers focus on solution proposals and implementations of the use case "Secure transfer of order data" using the Ethereum blockchain, while 7\% of the papers apply Hyperledger Fabric and Multichain. The majority of the research (around 43\%) focuses on solution development for supply chain and process traceability.
|
2302.13865
|
Yigit Sever
|
Ilter Taha Aktolga, Elif Sena Kuru, Yigit Sever, Pelin Angin
|
AI-Driven Container Security Approaches for 5G and Beyond: A Survey
|
14 pages, 2 tables, 1 figure, submitted to Special issue on AI-driven
security in 5G and beyond
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The rising use of microservices-based software deployment in the cloud
leverages containerized software extensively. Applications running inside
containers, as well as the container environment itself, are critical
infrastructure in cloud and 5G settings. To address these security concerns,
research efforts have focused on container security, with subfields such as
intrusion detection, malware detection and container placement strategies.
These efforts are roughly divided into two categories: rule-based approaches
and machine learning approaches that can respond to novel threats. In this
study, we survey the container security literature, focusing on approaches
that leverage machine learning to address security challenges.
|
[
{
"created": "Mon, 27 Feb 2023 15:05:53 GMT",
"version": "v1"
},
{
"created": "Fri, 31 Mar 2023 19:25:03 GMT",
"version": "v2"
}
] |
2023-04-04
|
[
[
"Aktolga",
"Ilter Taha",
""
],
[
"Kuru",
"Elif Sena",
""
],
[
"Sever",
"Yigit",
""
],
[
"Angin",
"Pelin",
""
]
] |
The rising use of microservices-based software deployment in the cloud leverages containerized software extensively. Applications running inside containers, as well as the container environment itself, are critical infrastructure in cloud and 5G settings. To address these security concerns, research efforts have focused on container security, with subfields such as intrusion detection, malware detection and container placement strategies. These efforts are roughly divided into two categories: rule-based approaches and machine learning approaches that can respond to novel threats. In this study, we survey the container security literature, focusing on approaches that leverage machine learning to address security challenges.
|
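The record above contrasts rule-based container defenses with machine-learning ones. As a minimal, hypothetical sketch of the latter category (the syscall feature names and thresholds are invented for illustration, not drawn from any surveyed system), a z-score profile learned from benign container telemetry can flag deviating behavior:

```python
import numpy as np

def fit_profile(baseline):
    """Learn a per-feature mean/std profile from benign container telemetry."""
    baseline = np.asarray(baseline, dtype=float)
    return baseline.mean(axis=0), baseline.std(axis=0) + 1e-9

def anomaly_score(sample, mean, std):
    """Mean absolute z-score of one telemetry sample against the profile."""
    return float(np.mean(np.abs((np.asarray(sample, dtype=float) - mean) / std)))

# Hypothetical per-interval syscall counts: [read, write, execve, connect]
benign = [[100, 80, 2, 5], [110, 75, 3, 4], [95, 85, 2, 6]]
mean, std = fit_profile(benign)

normal = anomaly_score([105, 82, 2, 5], mean, std)   # resembles the baseline
attack = anomaly_score([90, 70, 40, 60], mean, std)  # execve/connect spike
```

Unlike a fixed rule, the profile adapts to whatever baseline it is fit on, which is the property that lets learned detectors respond to novel threats.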
1507.08467
|
David Sousa-Rodrigues
|
Cristian Jimenez-Romero and David Sousa-Rodrigues and Jeffrey H.
Johnson and Vitorino Ramos
|
A Model for Foraging Ants, Controlled by Spiking Neural Networks and
Double Pheromones
|
This work has been accepted for presentation at the UK Workshop on
Computational Intelligence --- University of Exeter, September 2015
http://www.ukci2015.ex.ac.uk/
| null | null | null |
cs.NE cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A model of an Ant System where ants are controlled by a spiking neural
circuit and a second-order pheromone mechanism in a foraging task is presented.
A neural circuit is trained for individual ants and subsequently the ants are
exposed to a virtual environment where a swarm of ants performs a resource
foraging task. The model comprises an associative and unsupervised learning
strategy for the neural circuit of the ant. The neural circuit adapts to the
environment by means of classical conditioning. The initially unknown
environment includes different types of stimuli representing food and obstacles
which, when they come in direct contact with the ant, elicit a reflex response
in the motor neural system of the ant: moving towards or away from the source
of the stimulus. The ants are released on a landscape with multiple food
sources where one ant alone would have difficulty harvesting the landscape to
maximum efficiency. The introduction of a double pheromone mechanism yields
better results than traditional ant colony optimization strategies. Traditional
ant systems include mainly a positive reinforcement pheromone. This approach
uses a second pheromone that acts as a marker for forbidden paths (negative
feedback). This blockade is not permanent and is controlled by the evaporation
rate of the pheromones. The combined action of both pheromones acts as a
collective stigmergic memory of the swarm, which reduces the search space of
the problem. This paper explores how the adaptation and learning abilities
observed in biologically inspired cognitive architectures are synergistically
enhanced by swarm optimization strategies. The model portrays two forms of
artificial intelligent behaviour: at the individual level the spiking neural
network is the main controller and at the collective level the pheromone
distribution is a map towards the solution that emerges from the colony.
|
[
{
"created": "Thu, 30 Jul 2015 11:57:54 GMT",
"version": "v1"
},
{
"created": "Mon, 3 Aug 2015 09:25:03 GMT",
"version": "v2"
},
{
"created": "Fri, 18 Sep 2015 14:17:39 GMT",
"version": "v3"
}
] |
2015-09-21
|
[
[
"Jimenez-Romero",
"Cristian",
""
],
[
"Sousa-Rodrigues",
"David",
""
],
[
"Johnson",
"Jeffrey H.",
""
],
[
"Ramos",
"Vitorino",
""
]
] |
A model of an Ant System where ants are controlled by a spiking neural circuit and a second-order pheromone mechanism in a foraging task is presented. A neural circuit is trained for individual ants and subsequently the ants are exposed to a virtual environment where a swarm of ants performs a resource foraging task. The model comprises an associative and unsupervised learning strategy for the neural circuit of the ant. The neural circuit adapts to the environment by means of classical conditioning. The initially unknown environment includes different types of stimuli representing food and obstacles which, when they come in direct contact with the ant, elicit a reflex response in the motor neural system of the ant: moving towards or away from the source of the stimulus. The ants are released on a landscape with multiple food sources where one ant alone would have difficulty harvesting the landscape to maximum efficiency. The introduction of a double pheromone mechanism yields better results than traditional ant colony optimization strategies. Traditional ant systems include mainly a positive reinforcement pheromone. This approach uses a second pheromone that acts as a marker for forbidden paths (negative feedback). This blockade is not permanent and is controlled by the evaporation rate of the pheromones. The combined action of both pheromones acts as a collective stigmergic memory of the swarm, which reduces the search space of the problem. This paper explores how the adaptation and learning abilities observed in biologically inspired cognitive architectures are synergistically enhanced by swarm optimization strategies. The model portrays two forms of artificial intelligent behaviour: at the individual level the spiking neural network is the main controller and at the collective level the pheromone distribution is a map towards the solution that emerges from the colony.
|
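The double-pheromone mechanism described in the abstract above can be sketched in a few lines; the evaporation rate and deposit amounts below are illustrative placeholders, not the paper's tuned values:

```python
def update_pheromones(reward_map, penalty_map, visited, success,
                      rho=0.1, deposit=1.0):
    """One step of a double-pheromone field: both maps evaporate at rate rho,
    then visited cells get a reward deposit (food found) or a penalty deposit
    marking the path as temporarily forbidden (negative feedback)."""
    reward = {c: (1 - rho) * v for c, v in reward_map.items()}
    penalty = {c: (1 - rho) * v for c, v in penalty_map.items()}
    target = reward if success else penalty
    for c in visited:
        target[c] = target.get(c, 0.0) + deposit
    return reward, penalty

def desirability(cell, reward, penalty):
    """Net attractiveness: positive pheromone minus the blocking pheromone."""
    return reward.get(cell, 0.0) - penalty.get(cell, 0.0)
```

Evaporation is what makes the blockade temporary: a penalized cell's negative mark decays at rate rho until the path becomes eligible again.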
1609.08475
|
Ester Hait
|
Ester Hait and Guy Gilboa
|
Blind Facial Image Quality Enhancement using Non-Rigid Semantic Patches
|
Please see the updated published version: Hait, Ester, and Guy
Gilboa. Blind Facial Image Quality Enhancement using Non-Rigid Semantic
Patches. IEEE Transactions on Image Processing 26.6 (2017): 2705
| null |
10.1109/TIP.2017.2686003
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose to combine semantic data and registration algorithms to solve
various image processing problems such as denoising, super-resolution and
color-correction. It is shown how such new techniques can achieve significant
quality enhancement, both visually and quantitatively, in the case of facial
image enhancement. Our model assumes prior high-quality data of the person to
be processed, but no knowledge of the degradation model. We try to overcome the
classical processing limits by using semantically-aware patches, regions of
coherent structure and context with adaptive size and location, as building
blocks. The method is demonstrated on the problem of cellular photography
enhancement of dark facial images for different identities, expressions and
poses.
|
[
{
"created": "Tue, 27 Sep 2016 14:29:33 GMT",
"version": "v1"
},
{
"created": "Sun, 30 Apr 2017 07:39:56 GMT",
"version": "v2"
}
] |
2023-07-19
|
[
[
"Hait",
"Ester",
""
],
[
"Gilboa",
"Guy",
""
]
] |
We propose to combine semantic data and registration algorithms to solve various image processing problems such as denoising, super-resolution and color-correction. It is shown how such new techniques can achieve significant quality enhancement, both visually and quantitatively, in the case of facial image enhancement. Our model assumes prior high-quality data of the person to be processed, but no knowledge of the degradation model. We try to overcome the classical processing limits by using semantically-aware patches, regions of coherent structure and context with adaptive size and location, as building blocks. The method is demonstrated on the problem of cellular photography enhancement of dark facial images for different identities, expressions and poses.
|
2311.00915
|
Zedian Xiao
|
Zedian Xiao, William Held, Yanchen Liu, and Diyi Yang
|
Task-Agnostic Low-Rank Adapters for Unseen English Dialects
|
EMNLP 2023
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Large Language Models (LLMs) are trained on corpora disproportionately
weighted in favor of Standard American English. As a result, speakers of other
dialects experience significantly more failures when interacting with these
technologies. In practice, these speakers often accommodate their speech to be
better understood. Our work shares the belief that language technologies should
be designed to accommodate the diversity in English dialects and not the other
way around. However, prior work on dialects struggles to generalize to
evolving and emerging dialects in a scalable manner. To fill this gap, our
method, HyperLoRA, leverages expert linguistic knowledge to enable
resource-efficient adaptation via hypernetworks. By disentangling
dialect-specific and cross-dialectal information, HyperLoRA improves
generalization to unseen dialects in a task-agnostic fashion. Not only is
HyperLoRA more scalable in the number of parameters, but it also achieves the
best or most competitive performance across 5 dialects in a zero-shot setting.
In this way, our approach facilitates access to language technology for
billions of English dialect speakers who are traditionally underrepresented.
|
[
{
"created": "Thu, 2 Nov 2023 01:17:29 GMT",
"version": "v1"
}
] |
2023-11-03
|
[
[
"Xiao",
"Zedian",
""
],
[
"Held",
"William",
""
],
[
"Liu",
"Yanchen",
""
],
[
"Yang",
"Diyi",
""
]
] |
Large Language Models (LLMs) are trained on corpora disproportionately weighted in favor of Standard American English. As a result, speakers of other dialects experience significantly more failures when interacting with these technologies. In practice, these speakers often accommodate their speech to be better understood. Our work shares the belief that language technologies should be designed to accommodate the diversity in English dialects and not the other way around. However, prior work on dialects struggles to generalize to evolving and emerging dialects in a scalable manner. To fill this gap, our method, HyperLoRA, leverages expert linguistic knowledge to enable resource-efficient adaptation via hypernetworks. By disentangling dialect-specific and cross-dialectal information, HyperLoRA improves generalization to unseen dialects in a task-agnostic fashion. Not only is HyperLoRA more scalable in the number of parameters, but it also achieves the best or most competitive performance across 5 dialects in a zero-shot setting. In this way, our approach facilitates access to language technology for billions of English dialect speakers who are traditionally underrepresented.
|
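HyperLoRA builds on low-rank (LoRA-style) adapters generated by a hypernetwork from linguistic features. The sketch below shows only that parameterization in toy dimensions; the feature mapping, shapes, and initialization are invented for illustration and are not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, f = 6, 2, 3  # layer width, adapter rank, dialect-feature dimension

W = rng.standard_normal((d, d))             # frozen pretrained weight
H = rng.standard_normal((f, r * d)) * 0.01  # toy hypernetwork: features -> A

def lora_forward(x, W, B, A):
    """Linear layer with a low-rank additive update: y = x (W + B A)^T."""
    return x @ (W + B @ A).T

def generate_adapter(features, H):
    """Hypernetwork sketch: map dialect features to the low-rank factor A.
    B is zero-initialized, so a new dialect starts from the base model."""
    A = (features @ H).reshape(r, d)
    B = np.zeros((d, r))
    return B, A

x = rng.standard_normal((1, d))
B, A = generate_adapter(np.array([1.0, 0.0, 0.5]), H)
y = lora_forward(x, W, B, A)
```

Zero-initializing B means an unseen dialect begins exactly at the base model and only departs from it as the adapter factors are trained or generated.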
2407.16142
|
Renming Huang
|
Renming Huang, Yunqiang Pei, Guoqing Wang, Yangming Zhang, Yang Yang,
Peng Wang and Hengtao Shen
|
Diffusion Models as Optimizers for Efficient Planning in Offline RL
|
The paper was accepted by ECCV2024
| null | null | null |
cs.LG cs.AI cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Diffusion models have shown strong competitiveness in offline reinforcement
learning tasks by formulating decision-making as sequential generation.
However, the practicality of these methods is limited due to the lengthy
inference processes they require. In this paper, we address this problem by
decomposing the sampling process of diffusion models into two decoupled
subprocesses: 1) generating a feasible trajectory, which is a time-consuming
process, and 2) optimizing the trajectory. With this decomposition approach, we
are able to partially separate efficiency and quality factors, enabling us to
gain efficiency advantages while still ensuring quality. We
propose the Trajectory Diffuser, which utilizes a faster autoregressive model
to handle the generation of feasible trajectories while retaining the
trajectory optimization process of diffusion models. This allows us to achieve
more efficient planning without sacrificing capability. To evaluate the
effectiveness and efficiency of the Trajectory Diffuser, we conduct experiments
on the D4RL benchmarks. The results demonstrate that our method achieves $\it
3$-$\it 10 \times$ faster inference speed compared to previous sequence
modeling methods, while also outperforming them in terms of overall
performance. https://github.com/RenMing-Huang/TrajectoryDiffuser
Keywords: Reinforcement Learning and Efficient Planning and Diffusion Model
|
[
{
"created": "Tue, 23 Jul 2024 03:00:01 GMT",
"version": "v1"
}
] |
2024-07-24
|
[
[
"Huang",
"Renming",
""
],
[
"Pei",
"Yunqiang",
""
],
[
"Wang",
"Guoqing",
""
],
[
"Zhang",
"Yangming",
""
],
[
"Yang",
"Yang",
""
],
[
"Wang",
"Peng",
""
],
[
"Shen",
"Hengtao",
""
]
] |
Diffusion models have shown strong competitiveness in offline reinforcement learning tasks by formulating decision-making as sequential generation. However, the practicality of these methods is limited due to the lengthy inference processes they require. In this paper, we address this problem by decomposing the sampling process of diffusion models into two decoupled subprocesses: 1) generating a feasible trajectory, which is a time-consuming process, and 2) optimizing the trajectory. With this decomposition approach, we are able to partially separate efficiency and quality factors, enabling us to gain efficiency advantages while still ensuring quality. We propose the Trajectory Diffuser, which utilizes a faster autoregressive model to handle the generation of feasible trajectories while retaining the trajectory optimization process of diffusion models. This allows us to achieve more efficient planning without sacrificing capability. To evaluate the effectiveness and efficiency of the Trajectory Diffuser, we conduct experiments on the D4RL benchmarks. The results demonstrate that our method achieves $\it 3$-$\it 10 \times$ faster inference speed compared to previous sequence modeling methods, while also outperforming them in terms of overall performance. https://github.com/RenMing-Huang/TrajectoryDiffuser Keywords: Reinforcement Learning and Efficient Planning and Diffusion Model
|
2103.13744
|
Christian Reiser
|
Christian Reiser and Songyou Peng and Yiyi Liao and Andreas Geiger
|
KiloNeRF: Speeding up Neural Radiance Fields with Thousands of Tiny MLPs
|
ICCV 2021. Code, pretrained models and an interactive viewer are
available at https://github.com/creiser/kilonerf/
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
NeRF synthesizes novel views of a scene with unprecedented quality by fitting
a neural radiance field to RGB images. However, NeRF requires querying a deep
Multi-Layer Perceptron (MLP) millions of times, leading to slow rendering
times, even on modern GPUs. In this paper, we demonstrate that real-time
rendering is possible by utilizing thousands of tiny MLPs instead of one single
large MLP. In our setting, each individual MLP only needs to represent parts of
the scene, thus smaller and faster-to-evaluate MLPs can be used. By combining
this divide-and-conquer strategy with further optimizations, rendering is
accelerated by three orders of magnitude compared to the original NeRF model
without incurring high storage costs. Further, using teacher-student
distillation for training, we show that this speed-up can be achieved without
sacrificing visual quality.
|
[
{
"created": "Thu, 25 Mar 2021 10:53:05 GMT",
"version": "v1"
},
{
"created": "Mon, 2 Aug 2021 15:58:25 GMT",
"version": "v2"
}
] |
2021-08-03
|
[
[
"Reiser",
"Christian",
""
],
[
"Peng",
"Songyou",
""
],
[
"Liao",
"Yiyi",
""
],
[
"Geiger",
"Andreas",
""
]
] |
NeRF synthesizes novel views of a scene with unprecedented quality by fitting a neural radiance field to RGB images. However, NeRF requires querying a deep Multi-Layer Perceptron (MLP) millions of times, leading to slow rendering times, even on modern GPUs. In this paper, we demonstrate that real-time rendering is possible by utilizing thousands of tiny MLPs instead of one single large MLP. In our setting, each individual MLP only needs to represent parts of the scene, thus smaller and faster-to-evaluate MLPs can be used. By combining this divide-and-conquer strategy with further optimizations, rendering is accelerated by three orders of magnitude compared to the original NeRF model without incurring high storage costs. Further, using teacher-student distillation for training, we show that this speed-up can be achieved without sacrificing visual quality.
|
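The divide-and-conquer core of the KiloNeRF record above (assigning each spatial region to its own tiny MLP) reduces, at sampling time, to mapping each 3D point to a grid-cell index. A rough sketch with an assumed uniform grid (the grid resolution and bounding box are placeholders):

```python
import numpy as np

def mlp_index(points, bbox_min, bbox_max, grid=(4, 4, 4)):
    """Map 3D sample points to the flat index of the tiny MLP owning their
    uniform grid cell (points outside the box clamp to the boundary cells)."""
    pts = np.asarray(points, dtype=float)
    g = np.asarray(grid)
    norm = (pts - bbox_min) / (np.asarray(bbox_max, dtype=float) - bbox_min)
    cell = np.clip((norm * g).astype(int), 0, g - 1)
    return cell[:, 0] * g[1] * g[2] + cell[:, 1] * g[2] + cell[:, 2]

idx = mlp_index([[0.1, 0.2, 0.3], [0.9, 0.9, 0.9]], 0.0, 1.0)
```

Each batch of samples is then routed to, and evaluated by, only the small networks its indices select, which is where the rendering speed-up comes from.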
2307.12643
|
Waqas Aman Dr.
|
Waqas Aman, Flavio Giorgi, Giulio Attenni, Saif Al-Kuwari, Elmehdi
Illi, Marwa Qaraqe, Gaia Maselli, Roberto Di Pietro
|
Expanding Boundaries: Cross-Media Routing for Seamless Underwater and
Aerial Communication
|
Submitted to IEEE Communications Magazine
| null | null | null |
cs.NI cs.ET eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The colossal evolution of wireless communication technologies over the past
few years has driven increased interest in its integration in a variety of
less-explored environments, such as the underwater medium. In this magazine
paper, we present a comprehensive discussion on a novel concept of routing
protocol known as cross-media routing, incorporating the marine and aerial
interfaces. In this regard, we discuss the limitation of single-media routing
and advocate the need for cross-media routing along with the current status of
research development in this direction. To this end, we also propose a novel
cross-media routing protocol known as bubble routing for autonomous marine
systems where different sets of AUVs, USVs, and airborne nodes are considered
for the routing problem. We evaluate the performance of the proposed routing
protocol by using the two key performance metrics, i.e., packet delivery ratio
(PDR) and end-to-end delay. Moreover, we delve into the challenges encountered
in cross-media routing, unveiling exciting opportunities for future research
and innovation. As wireless communication expands its horizons to encompass the
underwater and aerial domains, understanding and addressing these challenges
will pave the way for enhanced cross-media communication and exploration.
|
[
{
"created": "Mon, 24 Jul 2023 09:35:35 GMT",
"version": "v1"
}
] |
2023-07-25
|
[
[
"Aman",
"Waqas",
""
],
[
"Giorgi",
"Flavio",
""
],
[
"Attenni",
"Giulio",
""
],
[
"Al-Kuwari",
"Saif",
""
],
[
"Illi",
"Elmehdi",
""
],
[
"Qaraqe",
"Marwa",
""
],
[
"Maselli",
"Gaia",
""
],
[
"Di Pietro",
"Roberto",
""
]
] |
The colossal evolution of wireless communication technologies over the past few years has driven increased interest in its integration in a variety of less-explored environments, such as the underwater medium. In this magazine paper, we present a comprehensive discussion on a novel concept of routing protocol known as cross-media routing, incorporating the marine and aerial interfaces. In this regard, we discuss the limitation of single-media routing and advocate the need for cross-media routing along with the current status of research development in this direction. To this end, we also propose a novel cross-media routing protocol known as bubble routing for autonomous marine systems where different sets of AUVs, USVs, and airborne nodes are considered for the routing problem. We evaluate the performance of the proposed routing protocol by using the two key performance metrics, i.e., packet delivery ratio (PDR) and end-to-end delay. Moreover, we delve into the challenges encountered in cross-media routing, unveiling exciting opportunities for future research and innovation. As wireless communication expands its horizons to encompass the underwater and aerial domains, understanding and addressing these challenges will pave the way for enhanced cross-media communication and exploration.
|
1912.09147
|
Enmei Tu
|
Xiao Han, Zihao Wang, Enmei Tu, Gunnam Suryanarayana, Jie Yang
|
Semi-Supervised Deep Learning Using Improved Unsupervised Discriminant
Projection
|
1 figure
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep learning demands a huge amount of well-labeled data to train the network
parameters. How to use the least amount of labeled data to obtain the desired
classification accuracy is of great practical significance, because for many
real-world applications (such as medical diagnosis), it is difficult to obtain
so many labeled samples. In this paper, modify the unsupervised discriminant
projection algorithm from dimension reduction and apply it as a regularization
term to propose a new semi-supervised deep learning algorithm, which is able to
utilize both the local and nonlocal distribution of abundant unlabeled samples
to improve classification performance. Experiments show that given dozens of
labeled samples, the proposed algorithm can train a deep network to attain
satisfactory classification results.
|
[
{
"created": "Thu, 19 Dec 2019 11:55:12 GMT",
"version": "v1"
}
] |
2019-12-20
|
[
[
"Han",
"Xiao",
""
],
[
"Wang",
"Zihao",
""
],
[
"Tu",
"Enmei",
""
],
[
"Suryanarayana",
"Gunnam",
""
],
[
"Yang",
"Jie",
""
]
] |
Deep learning demands a huge amount of well-labeled data to train the network parameters. How to use the least amount of labeled data to obtain the desired classification accuracy is of great practical significance, because for many real-world applications (such as medical diagnosis), it is difficult to obtain so many labeled samples. In this paper, we modify the unsupervised discriminant projection algorithm for dimension reduction and apply it as a regularization term to propose a new semi-supervised deep learning algorithm, which is able to utilize both the local and nonlocal distribution of abundant unlabeled samples to improve classification performance. Experiments show that given dozens of labeled samples, the proposed algorithm can train a deep network to attain satisfactory classification results.
|
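A common way to let abundant unlabeled samples regularize a network, in the spirit of the local/nonlocal structure preservation described in the abstract above, is a graph-Laplacian penalty. This is a generic sketch, not the paper's exact UDP-derived term:

```python
import numpy as np

def laplacian_regularizer(F, W):
    """Structure-preserving penalty sum_ij W_ij ||f_i - f_j||^2, equal to
    2 tr(F^T L F) with graph Laplacian L = D - W over (unlabeled) samples.
    F holds one embedding per row; W is a symmetric affinity matrix."""
    L = np.diag(W.sum(axis=1)) - W
    return float(2.0 * np.trace(F.T @ L @ F))

def semi_supervised_loss(supervised_loss, F, W, lam=0.1):
    """Total objective: labeled-data loss plus the weighted unlabeled penalty."""
    return supervised_loss + lam * laplacian_regularizer(F, W)
```

The penalty is zero when connected samples share an embedding and grows as their embeddings drift apart, pulling the network toward the data's local structure.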
1903.03746
|
Gal Kaplun
|
Dimitris Kalimeris and Gal Kaplun and Yaron Singer
|
Robust Influence Maximization for Hyperparametric Models
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we study the problem of robust influence maximization in the
independent cascade model under a hyperparametric assumption. In social
networks users influence and are influenced by individuals with similar
characteristics and as such, they are associated with some features. A recent
surging research direction in influence maximization focuses on the case where
the edge probabilities on the graph are not arbitrary but are generated as a
function of the features of the users and a global hyperparameter. We propose a
model where the objective is to maximize the worst-case number of influenced
users for any possible value of that hyperparameter. We provide theoretical
results showing that finding a proper robust solution in our model is NP-hard
and an algorithm that achieves improper robust optimization. We make use of
sampling-based techniques and of the renowned multiplicative weight updates
algorithm. Additionally, we validate our method empirically and show that it
outperforms the state-of-the-art robust influence maximization techniques.
|
[
{
"created": "Sat, 9 Mar 2019 06:23:11 GMT",
"version": "v1"
},
{
"created": "Mon, 13 May 2019 00:32:44 GMT",
"version": "v2"
}
] |
2019-05-14
|
[
[
"Kalimeris",
"Dimitris",
""
],
[
"Kaplun",
"Gal",
""
],
[
"Singer",
"Yaron",
""
]
] |
In this paper, we study the problem of robust influence maximization in the independent cascade model under a hyperparametric assumption. In social networks users influence and are influenced by individuals with similar characteristics and as such, they are associated with some features. A recent surging research direction in influence maximization focuses on the case where the edge probabilities on the graph are not arbitrary but are generated as a function of the features of the users and a global hyperparameter. We propose a model where the objective is to maximize the worst-case number of influenced users for any possible value of that hyperparameter. We provide theoretical results showing that finding a proper robust solution in our model is NP-hard and an algorithm that achieves improper robust optimization. We make use of sampling-based techniques and of the renowned multiplicative weight updates algorithm. Additionally, we validate our method empirically and show that it outperforms the state-of-the-art robust influence maximization techniques.
|
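The worst-case objective in the record above can be made concrete with a toy Monte-Carlo sketch: simulate independent cascades for each candidate hyperparameter value (here reduced to a single uniform edge probability) and keep the minimum expected spread. All constants are illustrative:

```python
import random

def independent_cascade(neighbors, seeds, p, rng):
    """One independent-cascade simulation with a uniform edge probability p
    (standing in for the global hyperparameter); returns the spread."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in neighbors.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def worst_case_spread(neighbors, seeds, p_values, runs=200, seed=0):
    """Monte-Carlo estimate of the worst-case expected spread of a seed set
    over all candidate values of the hyperparameter."""
    rng = random.Random(seed)
    return min(
        sum(independent_cascade(neighbors, seeds, p, rng) for _ in range(runs)) / runs
        for p in p_values
    )
```

A robust seed set is one that maximizes this minimum, rather than the spread under any single assumed parameter value.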
2203.15399
|
George Alexandropoulos
|
A. Mokh and J. de Rosny and G. C. Alexandropoulos and R. Khayatzadeh
and M. Kamoun and A. Ourir and A. Tourin and M. Fink
|
Time Reversal for Multiple Access and Mobility: Algorithmic Design and
Experimental Results
|
6 pages, 6 figures, to be presented at IEEE Wireless Communications
and Networking Conference 2022
| null | null | null |
cs.IT eess.SP math.IT
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Time Reversal (TR) has been proposed as a competitive precoding strategy for
low-complexity wireless devices relying on Ultra-WideBand (UWB) signal
waveforms. However, when TR is applied for multiple access, the signals
received by the multiple users suffer from significant levels of inter-symbol
and inter-user interference, which requires additional processing for
mitigation by each receiving user. In this paper, we present an iterative
Time-Reversal Division Multiple Access (TRDMA) approach that aims to reduce
these interference levels. The performance of iterative TRDMA is evaluated
experimentally in a reverberation chamber that mimics a rich scattering indoor
wireless propagation environment. The improved efficiency, in terms of the
number of algorithmic iterations, of the proposed approach compared to
conventional TRDMA, is demonstrated. We also consider a mobile user
configuration, where the position of the receiver changes between the channel
estimation and data transmission steps. It is showcased, even for this
experimental setup, that the proposed iterative TRDMA approach is more
efficient than conventional precoding schemes.
|
[
{
"created": "Tue, 29 Mar 2022 09:44:23 GMT",
"version": "v1"
}
] |
2022-03-30
|
[
[
"Mokh",
"A.",
""
],
[
"de Rosny",
"J.",
""
],
[
"Alexandropoulos",
"G. C.",
""
],
[
"Khayatzadeh",
"R.",
""
],
[
"Kamoun",
"M.",
""
],
[
"Ourir",
"A.",
""
],
[
"Tourin",
"A.",
""
],
[
"Fink",
"M.",
""
]
] |
Time Reversal (TR) has been proposed as a competitive precoding strategy for low-complexity wireless devices relying on Ultra-WideBand (UWB) signal waveforms. However, when TR is applied for multiple access, the signals received by the multiple users suffer from significant levels of inter-symbol and inter-user interference, which requires additional processing for mitigation by each receiving user. In this paper, we present an iterative Time-Reversal Division Multiple Access (TRDMA) approach that aims to reduce these interference levels. The performance of iterative TRDMA is evaluated experimentally in a reverberation chamber that mimics a rich scattering indoor wireless propagation environment. The improved efficiency, in terms of the number of algorithmic iterations, of the proposed approach compared to conventional TRDMA, is demonstrated. We also consider a mobile user configuration, where the position of the receiver changes between the channel estimation and data transmission steps. It is showcased, even for this experimental setup, that the proposed iterative TRDMA approach is more efficient than conventional precoding schemes.
|
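The TR precoding idea underlying TRDMA can be sketched numerically: prefiltering with the time-reversed, conjugated channel impulse response turns the end-to-end response into the channel autocorrelation, which focuses energy into a single dominant peak. The channel taps below are made-up values, not measured data:

```python
import numpy as np

def tr_prefilter(h):
    """Time-reversal prefilter: the time-reversed, conjugated channel impulse
    response, so that prefilter * channel forms the channel autocorrelation."""
    return np.conj(h[::-1])

# A toy multipath channel impulse response (illustrative complex taps).
h = np.array([0.6 + 0.2j, -0.3j, 0.1 + 0.1j])

# A single impulse sent through prefilter then channel arrives focused:
y = np.convolve(np.convolve(np.array([1.0 + 0.0j]), tr_prefilter(h)), h)
peak = int(np.argmax(np.abs(y)))
```

In TRDMA each user gets its own prefilter built from its own CIR; the residual sidelobes around the peak are the inter-symbol and inter-user interference that the iterative scheme targets.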
1811.01306
|
Domenico G. Sorrenti
|
Augusto L. Ballardini, Daniele Cattaneo, and Domenico G. Sorrenti
|
A dataset for benchmarking vision-based localization at intersections
|
7 pages, 26 figures, report describing the work done to prepare a
dataset of sequences of a vehicle approaching an intersection, using the
sequences recorded in the KITTI dataset
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
In this report we present the work performed in order to build a dataset for
benchmarking vision-based localization at intersections, i.e., a set of stereo
video sequences taken from a road vehicle that is approaching an intersection,
together with a reliable measure of the observer position. This report is
meant to complement our paper "Vision-Based Localization at Intersections using
Digital Maps" submitted to ICRA2019. It complements the paper because the paper
uses the dataset, but it had no space for describing the work done to obtain
it. Moreover, the dataset is of interest for all those tackling the task of
online localization at intersections for road vehicles, e.g., for a
quantitative comparison with the proposal in our submitted paper, and it is
therefore appropriate to put the dataset description in a separate report. We
considered all datasets from road vehicles that we could find as of the end of
August 2018. After our evaluation, we kept only sub-sequences from the KITTI
dataset. In the future we will increase the collection of sequences with data
from our vehicle.
|
[
{
"created": "Sun, 4 Nov 2018 01:10:09 GMT",
"version": "v1"
}
] |
2018-11-06
|
[
[
"Ballardini",
"Augusto L.",
""
],
[
"Cattaneo",
"Daniele",
""
],
[
"Sorrenti",
"Domenico G.",
""
]
] |
In this report we present the work performed in order to build a dataset for benchmarking vision-based localization at intersections, i.e., a set of stereo video sequences taken from a road vehicle that is approaching an intersection, together with a reliable measure of the observer position. This report is meant to complement our paper "Vision-Based Localization at Intersections using Digital Maps" submitted to ICRA2019. It complements the paper because the paper uses the dataset, but it had no space for describing the work done to obtain it. Moreover, the dataset is of interest for all those tackling the task of online localization at intersections for road vehicles, e.g., for a quantitative comparison with the proposal in our submitted paper, and it is therefore appropriate to put the dataset description in a separate report. We considered all datasets from road vehicles that we could find as of the end of August 2018. After our evaluation, we kept only sub-sequences from the KITTI dataset. In the future we will increase the collection of sequences with data from our vehicle.
|
1609.08433
|
Lantian Li Mr.
|
Chenghui Zhao, Lantian Li, Dong Wang, April Pu
|
Local Training for PLDA in Speaker Verification
|
O-COCOSDA 2016
| null | null | null |
cs.SD cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
PLDA is a popular normalization approach for the i-vector model, and it has
delivered state-of-the-art performance in speaker verification. However, PLDA
training requires a large amount of labeled development data, which is highly
expensive in most cases. A possible approach to mitigate the problem is various
unsupervised adaptation methods, which use unlabeled data to adapt the PLDA
scattering matrices to the target domain.
In this paper, we present a new `local training' approach that utilizes
inaccurate but much cheaper local labels to train the PLDA model. These local
labels discriminate speakers within a single conversation only, and so are much
easier to obtain compared to the normal `global labels'. Our experiments show
that the proposed approach can deliver significant performance improvement,
particularly with limited globally-labeled data.
|
[
{
"created": "Tue, 27 Sep 2016 13:37:13 GMT",
"version": "v1"
}
] |
2016-09-28
|
[
[
"Zhao",
"Chenghui",
""
],
[
"Li",
"Lantian",
""
],
[
"Wang",
"Dong",
""
],
[
"Pu",
"April",
""
]
] |
PLDA is a popular normalization approach for the i-vector model, and it has delivered state-of-the-art performance in speaker verification. However, PLDA training requires a large amount of labeled development data, which is highly expensive in most cases. A possible approach to mitigate the problem is various unsupervised adaptation methods, which use unlabeled data to adapt the PLDA scattering matrices to the target domain. In this paper, we present a new `local training' approach that utilizes inaccurate but much cheaper local labels to train the PLDA model. These local labels discriminate speakers within a single conversation only, and so are much easier to obtain compared to the normal `global labels'. Our experiments show that the proposed approach can deliver significant performance improvement, particularly with limited globally-labeled data.
|
2107.03698
|
Lukas Lamm
|
L. Lamm and H. Holthusen and T. Brepols and S. Jockenh\"ovel and S.
Reese
|
A macroscopic approach for stress driven anisotropic growth in
bioengineered soft tissues
| null | null |
10.1007/s10237-021-01554-1
| null |
cs.CE cond-mat.soft
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The simulation of growth processes within soft biological tissues is of
utmost importance for many applications in the medical sector. Within this
contribution we propose a new macroscopic approach fro modelling stress-driven
volumetric growth occurring in soft tissues. Instead of using the standard
approach of a-priori defining the structure of the growth tensor, we postulate
the existence of a general growth potential. Such a potential describes all
eligible homeostatic stress states that can ultimately be reached as a result
of the growth process. Making use of well established methods from
visco-plasticity, the evolution of the growth related right Cauchy-Green tensor
is subsequently defined as a time dependent associative evolution law with
respect to the introduced potential. This approach naturally leads to a
formulation that is able to cover both isotropic and anisotropic growth
related changes in geometry. It furthermore allows the model to flexibly adapt
to changing boundary and loading conditions. Besides the theoretical
development, we also describe the algorithmic implementation and furthermore
compare the newly derived model with a standard formulation of isotropic
growth.
|
[
{
"created": "Thu, 8 Jul 2021 09:21:33 GMT",
"version": "v1"
}
] |
2022-01-21
|
[
[
"Lamm",
"L.",
""
],
[
"Holthusen",
"H.",
""
],
[
"Brepols",
"T.",
""
],
[
"Jockenhövel",
"S.",
""
],
[
"Reese",
"S.",
""
]
] |
The simulation of growth processes within soft biological tissues is of utmost importance for many applications in the medical sector. Within this contribution we propose a new macroscopic approach for modelling stress-driven volumetric growth occurring in soft tissues. Instead of using the standard approach of a-priori defining the structure of the growth tensor, we postulate the existence of a general growth potential. Such a potential describes all eligible homeostatic stress states that can ultimately be reached as a result of the growth process. Making use of well established methods from visco-plasticity, the evolution of the growth related right Cauchy-Green tensor is subsequently defined as a time dependent associative evolution law with respect to the introduced potential. This approach naturally leads to a formulation that is able to cover both isotropic and anisotropic growth related changes in geometry. It furthermore allows the model to flexibly adapt to changing boundary and loading conditions. Besides the theoretical development, we also describe the algorithmic implementation and furthermore compare the newly derived model with a standard formulation of isotropic growth.
|
cs/0612002
|
Willemien Visser
|
Willemien Visser (INRIA Rocquencourt), Brigitte Trousse (INRIA Sophia
Antipolis)
|
Reuse of designs: Desperately seeking an interdisciplinary cognitive
approach
| null |
Dans IJCAI Thirteenth International Joint Conference on Artificial
Intelligence Workshop "Reuse of designs: An interdisciplinary cognitive
approach" (1993)
| null | null |
cs.HC cs.AI
| null |
This text analyses the papers accepted for the workshop "Reuse of designs: an
interdisciplinary cognitive approach". Several dimensions and questions
considered as important (by the authors and/or by us) are addressed: What about
the "interdisciplinary cognitive" character of the approaches adopted by the
authors? Is design indeed a domain where the use of CBR is particularly
suitable? Are there important distinctions between CBR and other approaches?
Which types of knowledge -other than cases- are being, or might be, used in CBR
systems? With respect to cases: are there different "types" of case and
different types of case use? which formats are adopted for their
representation? do cases have "components"? how are cases organised in the case
memory? Concerning their retrieval: which types of index are used? on which
types of relation is retrieval based? how does one retrieve only a selected
number of cases, i.e., how does one retrieve only the "best" cases? which
processes and strategies are used, by the system and by its user? Finally, some
important aspects of CBR system development are shortly discussed: should CBR
systems be assistance or autonomous systems? how can case knowledge be
"acquired"? what about the empirical evaluation of CBR systems? The conclusion
points out some shortcomings: not much attention is paid to the user, and few
papers have indeed adopted an interdisciplinary cognitive approach.
|
[
{
"created": "Thu, 30 Nov 2006 22:38:49 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Visser",
"Willemien",
"",
"INRIA Rocquencourt"
],
[
"Trousse",
"Brigitte",
"",
"INRIA Sophia\n Antipolis"
]
] |
This text analyses the papers accepted for the workshop "Reuse of designs: an interdisciplinary cognitive approach". Several dimensions and questions considered as important (by the authors and/or by us) are addressed: What about the "interdisciplinary cognitive" character of the approaches adopted by the authors? Is design indeed a domain where the use of CBR is particularly suitable? Are there important distinctions between CBR and other approaches? Which types of knowledge -other than cases- are being, or might be, used in CBR systems? With respect to cases: are there different "types" of case and different types of case use? which formats are adopted for their representation? do cases have "components"? how are cases organised in the case memory? Concerning their retrieval: which types of index are used? on which types of relation is retrieval based? how does one retrieve only a selected number of cases, i.e., how does one retrieve only the "best" cases? which processes and strategies are used, by the system and by its user? Finally, some important aspects of CBR system development are shortly discussed: should CBR systems be assistance or autonomous systems? how can case knowledge be "acquired"? what about the empirical evaluation of CBR systems? The conclusion points out some shortcomings: not much attention is paid to the user, and few papers have indeed adopted an interdisciplinary cognitive approach.
|
2308.11079
|
Luke Ditria
|
Luke Ditria, Tom Drummond
|
Long-Term Prediction of Natural Video Sequences with Robust Video
Predictors
| null | null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Predicting high dimensional video sequences is a curiously difficult problem.
The number of possible futures for a given video sequence grows exponentially
over time due to uncertainty. This is especially evident when trying to predict
complicated natural video scenes from a limited snapshot of the world. The
inherent uncertainty accumulates the further into the future you predict making
long-term prediction very difficult. In this work we introduce a number of
improvements to existing work that aid in creating Robust Video Predictors
(RoViPs). We show that with a combination of deep Perceptual and
uncertainty-based reconstruction losses we are able to create high quality
short-term predictions. Attention-based skip connections are utilised to allow
for long range spatial movement of input features to further improve
performance. Finally, we show that by simply making the predictor robust to its
own prediction errors, it is possible to produce very long, realistic natural
video sequences using an iterated single-step prediction task.
|
[
{
"created": "Mon, 21 Aug 2023 23:16:58 GMT",
"version": "v1"
}
] |
2023-08-23
|
[
[
"Ditria",
"Luke",
""
],
[
"Drummond",
"Tom",
""
]
] |
Predicting high dimensional video sequences is a curiously difficult problem. The number of possible futures for a given video sequence grows exponentially over time due to uncertainty. This is especially evident when trying to predict complicated natural video scenes from a limited snapshot of the world. The inherent uncertainty accumulates the further into the future you predict making long-term prediction very difficult. In this work we introduce a number of improvements to existing work that aid in creating Robust Video Predictors (RoViPs). We show that with a combination of deep Perceptual and uncertainty-based reconstruction losses we are able to create high quality short-term predictions. Attention-based skip connections are utilised to allow for long range spatial movement of input features to further improve performance. Finally, we show that by simply making the predictor robust to its own prediction errors, it is possible to produce very long, realistic natural video sequences using an iterated single-step prediction task.
|
2310.09672
|
Chang Lu
|
Chang Lu, Chandan K. Reddy, Ping Wang, Yue Ning
|
Towards Semi-Structured Automatic ICD Coding via Tree-based Contrastive
Learning
|
Accepted by NeurIPS 2023
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automatic coding of International Classification of Diseases (ICD) is a
multi-label text categorization task that involves extracting disease or
procedure codes from clinical notes. Despite the application of
state-of-the-art natural language processing (NLP) techniques, there are still
challenges including limited availability of data due to privacy constraints
and the high variability of clinical notes caused by different writing habits
of medical professionals and various pathological features of patients. In this
work, we investigate the semi-structured nature of clinical notes and propose
an automatic algorithm to segment them into sections. To address the
variability issues in existing ICD coding models with limited data, we
introduce a contrastive pre-training approach on sections using a soft
multi-label similarity metric based on tree edit distance. Additionally, we
design a masked section training strategy to enable ICD coding models to locate
sections related to ICD codes. Extensive experimental results demonstrate that
our proposed training strategies effectively enhance the performance of
existing ICD coding methods.
|
[
{
"created": "Sat, 14 Oct 2023 22:07:13 GMT",
"version": "v1"
}
] |
2023-10-17
|
[
[
"Lu",
"Chang",
""
],
[
"Reddy",
"Chandan K.",
""
],
[
"Wang",
"Ping",
""
],
[
"Ning",
"Yue",
""
]
] |
Automatic coding of International Classification of Diseases (ICD) is a multi-label text categorization task that involves extracting disease or procedure codes from clinical notes. Despite the application of state-of-the-art natural language processing (NLP) techniques, there are still challenges including limited availability of data due to privacy constraints and the high variability of clinical notes caused by different writing habits of medical professionals and various pathological features of patients. In this work, we investigate the semi-structured nature of clinical notes and propose an automatic algorithm to segment them into sections. To address the variability issues in existing ICD coding models with limited data, we introduce a contrastive pre-training approach on sections using a soft multi-label similarity metric based on tree edit distance. Additionally, we design a masked section training strategy to enable ICD coding models to locate sections related to ICD codes. Extensive experimental results demonstrate that our proposed training strategies effectively enhance the performance of existing ICD coding methods.
|
1606.06452
|
Mihalis Psarakis
|
Mihalis Psarakis
|
Reliability-Aware Overlay Architectures for FPGAs: Features and Design
Challenges
|
Presented at 2nd International Workshop on Overlay Architectures for
FPGAs (OLAF 2016) arXiv:1605.08149
| null | null |
OLAF/2016/04
|
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
FPGA overlay architectures have mainly been proposed to improve design
productivity, circuit portability and system debugging. In this paper, we
address the use of overlay architectures for building fault tolerant SRAM-based
FPGA systems and discuss the main features and design challenges of a
reliability-aware overlay architecture.
|
[
{
"created": "Tue, 21 Jun 2016 07:25:05 GMT",
"version": "v1"
}
] |
2016-06-22
|
[
[
"Psarakis",
"Mihalis",
""
]
] |
FPGA overlay architectures have mainly been proposed to improve design productivity, circuit portability and system debugging. In this paper, we address the use of overlay architectures for building fault tolerant SRAM-based FPGA systems and discuss the main features and design challenges of a reliability-aware overlay architecture.
|
1911.10023
|
Mateusz Juda
|
Mateusz Juda
|
Unsupervised Features Learning for Sampled Vector Fields
| null | null | null | null |
cs.LG math.AT math.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we introduce a new approach to computing hidden features of
sampled vector fields. The basic idea is to convert the vector field data to a
graph structure and use tools designed for automatic, unsupervised analysis of
graphs. Using a few data sets we show that the collected features of the vector
fields are correlated with the dynamics known for analytic models which
generate the data. In particular the method may be useful in analysis of data
sets where the analytic model is poorly understood or not known.
|
[
{
"created": "Fri, 22 Nov 2019 13:17:13 GMT",
"version": "v1"
},
{
"created": "Tue, 11 Aug 2020 10:38:40 GMT",
"version": "v2"
}
] |
2020-08-12
|
[
[
"Juda",
"Mateusz",
""
]
] |
In this paper we introduce a new approach to computing hidden features of sampled vector fields. The basic idea is to convert the vector field data to a graph structure and use tools designed for automatic, unsupervised analysis of graphs. Using a few data sets we show that the collected features of the vector fields are correlated with the dynamics known for analytic models which generate the data. In particular the method may be useful in analysis of data sets where the analytic model is poorly understood or not known.
|
1704.08021
|
Nir Shlezinger
|
Nir Shlezinger, Ron Dabora, and Yonina C. Eldar
|
Measurement Matrix Design for Phase Retrieval Based on Mutual
Information
|
This paper was presented in part at the 2017 International Symposium
on Information Theory
| null |
10.1109/TSP.2017.2759101
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In phase retrieval problems, a signal of interest (SOI) is reconstructed
based on the magnitude of a linear transformation of the SOI observed with
additive noise. The linear transform is typically referred to as a measurement
matrix. Many works on phase retrieval assume that the measurement matrix is a
random Gaussian matrix, which, in the noiseless scenario with sufficiently many
measurements, guarantees invertibility of the transformation between the SOI
and the observations, up to an inherent phase ambiguity. However, in many
practical applications, the measurement matrix corresponds to an underlying
physical setup, and is therefore deterministic, possibly with structural
constraints. In this work we study the design of deterministic measurement
matrices, based on maximizing the mutual information between the SOI and the
observations. We characterize necessary conditions for the optimality of a
measurement matrix, and analytically obtain the optimal matrix in the low
signal-to-noise ratio regime. Practical methods for designing general
measurement matrices and masked Fourier measurements are proposed. Simulation
tests demonstrate the performance gain achieved by the proposed techniques
compared to random Gaussian measurements for various phase recovery algorithms.
|
[
{
"created": "Wed, 26 Apr 2017 09:08:43 GMT",
"version": "v1"
},
{
"created": "Mon, 18 Sep 2017 18:24:14 GMT",
"version": "v2"
},
{
"created": "Sun, 24 Sep 2017 06:47:02 GMT",
"version": "v3"
}
] |
2018-02-14
|
[
[
"Shlezinger",
"Nir",
""
],
[
"Dabora",
"Ron",
""
],
[
"Eldar",
"Yonina C.",
""
]
] |
In phase retrieval problems, a signal of interest (SOI) is reconstructed based on the magnitude of a linear transformation of the SOI observed with additive noise. The linear transform is typically referred to as a measurement matrix. Many works on phase retrieval assume that the measurement matrix is a random Gaussian matrix, which, in the noiseless scenario with sufficiently many measurements, guarantees invertibility of the transformation between the SOI and the observations, up to an inherent phase ambiguity. However, in many practical applications, the measurement matrix corresponds to an underlying physical setup, and is therefore deterministic, possibly with structural constraints. In this work we study the design of deterministic measurement matrices, based on maximizing the mutual information between the SOI and the observations. We characterize necessary conditions for the optimality of a measurement matrix, and analytically obtain the optimal matrix in the low signal-to-noise ratio regime. Practical methods for designing general measurement matrices and masked Fourier measurements are proposed. Simulation tests demonstrate the performance gain achieved by the proposed techniques compared to random Gaussian measurements for various phase recovery algorithms.
|
2110.07301
|
Michael Ruchte
|
Michael Ruchte and Josif Grabocka
|
Multi-task problems are not multi-objective
| null | null | null | null |
cs.LG stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
Multi-objective optimization (MOO) aims at finding a set of optimal
configurations for a given set of objectives. A recent line of work applies MOO
methods to the typical Machine Learning (ML) setting, which becomes
multi-objective if a model should optimize more than one objective, for
instance in fair machine learning. These works also use Multi-Task Learning
(MTL) problems to benchmark MOO algorithms treating each task as independent
objective.
In this work we show that MTL problems do not resemble the characteristics of
MOO problems. In particular, MTL losses are not competing in case of a
sufficiently expressive single model. As a consequence, a single model can
perform just as well as optimizing all objectives with independent models,
rendering MOO inapplicable. We provide evidence with extensive experiments on
the widely used Multi-Fashion-MNIST datasets. Our results call for new
benchmarks to evaluate MOO algorithms for ML. Our code is available at:
https://github.com/ruchtem/moo-mtl.
|
[
{
"created": "Thu, 14 Oct 2021 12:08:46 GMT",
"version": "v1"
}
] |
2021-10-15
|
[
[
"Ruchte",
"Michael",
""
],
[
"Grabocka",
"Josif",
""
]
] |
Multi-objective optimization (MOO) aims at finding a set of optimal configurations for a given set of objectives. A recent line of work applies MOO methods to the typical Machine Learning (ML) setting, which becomes multi-objective if a model should optimize more than one objective, for instance in fair machine learning. These works also use Multi-Task Learning (MTL) problems to benchmark MOO algorithms treating each task as independent objective. In this work we show that MTL problems do not resemble the characteristics of MOO problems. In particular, MTL losses are not competing in case of a sufficiently expressive single model. As a consequence, a single model can perform just as well as optimizing all objectives with independent models, rendering MOO inapplicable. We provide evidence with extensive experiments on the widely used Multi-Fashion-MNIST datasets. Our results call for new benchmarks to evaluate MOO algorithms for ML. Our code is available at: https://github.com/ruchtem/moo-mtl.
|
1911.09365
|
Javier Segovia Aguas
|
Javier Segovia-Aguas and Sergio Jim\'enez and Anders Jonsson
|
Generalized Planning with Positive and Negative Examples
|
Accepted at AAAI-20 (oral presentation)
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Generalized planning aims at computing an algorithm-like structure
(generalized plan) that solves a set of multiple planning instances. In this
paper we define negative examples for generalized planning as planning
instances that must not be solved by a generalized plan. In this regard, the
paper extends the notion of validation of a generalized plan as the problem of
verifying that a given generalized plan solves the set of input positive
instances while it fails to solve a given input set of negative examples. This
notion of plan validation allows us to define quantitative metrics to assess the
generalization capacity of generalized plans. The paper also shows how to
incorporate this new notion of plan validation into a compilation for plan
synthesis that takes both positive and negative instances as input. Experiments
show that incorporating negative examples can accelerate plan synthesis in
several domains and leverage quantitative metrics to evaluate the
generalization capacity of the synthesized plans.
|
[
{
"created": "Thu, 21 Nov 2019 09:41:56 GMT",
"version": "v1"
}
] |
2019-11-22
|
[
[
"Segovia-Aguas",
"Javier",
""
],
[
"Jiménez",
"Sergio",
""
],
[
"Jonsson",
"Anders",
""
]
] |
Generalized planning aims at computing an algorithm-like structure (generalized plan) that solves a set of multiple planning instances. In this paper we define negative examples for generalized planning as planning instances that must not be solved by a generalized plan. In this regard, the paper extends the notion of validation of a generalized plan as the problem of verifying that a given generalized plan solves the set of input positive instances while it fails to solve a given input set of negative examples. This notion of plan validation allows us to define quantitative metrics to assess the generalization capacity of generalized plans. The paper also shows how to incorporate this new notion of plan validation into a compilation for plan synthesis that takes both positive and negative instances as input. Experiments show that incorporating negative examples can accelerate plan synthesis in several domains and leverage quantitative metrics to evaluate the generalization capacity of the synthesized plans.
|
2308.06342
|
Jaesung Tae
|
Jaesung Tae
|
Mirror Diffusion Models
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Diffusion models have successfully been applied to generative tasks in
various continuous domains. However, applying diffusion to discrete categorical
data remains a non-trivial task. Moreover, generation in continuous domains
often requires clipping in practice, which motivates the need for a theoretical
framework for adapting diffusion to constrained domains. Inspired by the mirror
Langevin algorithm for the constrained sampling problem, in this theoretical
report we propose Mirror Diffusion Models (MDMs). We demonstrate MDMs in the
context of simplex diffusion and propose natural extensions to popular domains
such as image and text generation.
|
[
{
"created": "Fri, 11 Aug 2023 18:31:54 GMT",
"version": "v1"
},
{
"created": "Fri, 18 Aug 2023 03:21:55 GMT",
"version": "v2"
}
] |
2023-08-21
|
[
[
"Tae",
"Jaesung",
""
]
] |
Diffusion models have successfully been applied to generative tasks in various continuous domains. However, applying diffusion to discrete categorical data remains a non-trivial task. Moreover, generation in continuous domains often requires clipping in practice, which motivates the need for a theoretical framework for adapting diffusion to constrained domains. Inspired by the mirror Langevin algorithm for the constrained sampling problem, in this theoretical report we propose Mirror Diffusion Models (MDMs). We demonstrate MDMs in the context of simplex diffusion and propose natural extensions to popular domains such as image and text generation.
|
2004.13587
|
Zhongchao Qian
|
Zhongchao Qian, Tyler L. Hayes, Kushal Kafle, Christopher Kanan
|
Do We Need Fully Connected Output Layers in Convolutional Networks?
| null | null | null | null |
cs.CV cs.LG eess.IV stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Traditionally, deep convolutional neural networks consist of a series of
convolutional and pooling layers followed by one or more fully connected (FC)
layers to perform the final classification. While this design has been
successful, for datasets with a large number of categories, the fully connected
layers often account for a large percentage of the network's parameters. For
applications with memory constraints, such as mobile devices and embedded
platforms, this is not ideal. Recently, a family of architectures that involve
replacing the learned fully connected output layer with a fixed layer has been
proposed as a way to achieve better efficiency. In this paper we examine this
idea further and demonstrate that fixed classifiers offer no additional benefit
compared to simply removing the output layer along with its parameters. We
further demonstrate that the typical approach of having a fully connected final
output layer is inefficient in terms of parameter count. We are able to achieve
comparable performance to a traditionally learned fully connected
classification output layer on the ImageNet-1K, CIFAR-100, Stanford Cars-196,
and Oxford Flowers-102 datasets, while not having a fully connected output
layer at all.
|
[
{
"created": "Tue, 28 Apr 2020 15:21:44 GMT",
"version": "v1"
},
{
"created": "Wed, 29 Apr 2020 03:20:47 GMT",
"version": "v2"
}
] |
2020-04-30
|
[
[
"Qian",
"Zhongchao",
""
],
[
"Hayes",
"Tyler L.",
""
],
[
"Kafle",
"Kushal",
""
],
[
"Kanan",
"Christopher",
""
]
] |
Traditionally, deep convolutional neural networks consist of a series of convolutional and pooling layers followed by one or more fully connected (FC) layers to perform the final classification. While this design has been successful, for datasets with a large number of categories, the fully connected layers often account for a large percentage of the network's parameters. For applications with memory constraints, such as mobile devices and embedded platforms, this is not ideal. Recently, a family of architectures that involve replacing the learned fully connected output layer with a fixed layer has been proposed as a way to achieve better efficiency. In this paper we examine this idea further and demonstrate that fixed classifiers offer no additional benefit compared to simply removing the output layer along with its parameters. We further demonstrate that the typical approach of having a fully connected final output layer is inefficient in terms of parameter count. We are able to achieve comparable performance to a traditionally learned fully connected classification output layer on the ImageNet-1K, CIFAR-100, Stanford Cars-196, and Oxford Flowers-102 datasets, while not having a fully connected output layer at all.
|
2404.02589
|
Zhiyuan Wen
|
Zhiyuan Wen, Jiannong Cao, Yu Yang, Ruosong Yang, Shuaiqi Liu
|
Affective-NLI: Towards Accurate and Interpretable Personality
Recognition in Conversation
|
Accepted by IEEE PerCom 2024
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Personality Recognition in Conversation (PRC) aims to identify the
personality traits of speakers through textual dialogue content. It is
essential for providing personalized services in various applications of
Human-Computer Interaction (HCI), such as AI-based mental therapy and companion
robots for the elderly. Most recent studies analyze the dialog content for
personality classification yet overlook two major concerns that hinder their
performance. First, crucial implicit factors contained in conversation, such as
emotions that reflect the speakers' personalities, are ignored. Second, only
focusing on the input dialog content disregards the semantic understanding of
personality itself, which reduces the interpretability of the results. In this
paper, we propose Affective Natural Language Inference (Affective-NLI) for
accurate and interpretable PRC. To utilize affectivity within dialog content
for accurate personality recognition, we fine-tuned a pre-trained language
model specifically for emotion recognition in conversations, facilitating
real-time affective annotations for utterances. For interpretability of
recognition results, we formulate personality recognition as an NLI problem by
determining whether the textual description of personality labels is entailed
by the dialog content. Extensive experiments on two daily conversation datasets
suggest that Affective-NLI significantly outperforms (by 6%-7%)
state-of-the-art approaches. Additionally, our Flow experiment demonstrates
that Affective-NLI can accurately recognize the speaker's personality in the
early stages of conversations, surpassing state-of-the-art methods by
22%-34%.
|
[
{
"created": "Wed, 3 Apr 2024 09:14:24 GMT",
"version": "v1"
}
] |
2024-04-04
|
[
[
"Wen",
"Zhiyuan",
""
],
[
"Cao",
"Jiannong",
""
],
[
"Yang",
"Yu",
""
],
[
"Yang",
"Ruosong",
""
],
[
"Liu",
"Shuaiqi",
""
]
] |
Personality Recognition in Conversation (PRC) aims to identify the personality traits of speakers through textual dialogue content. It is essential for providing personalized services in various applications of Human-Computer Interaction (HCI), such as AI-based mental therapy and companion robots for the elderly. Most recent studies analyze the dialog content for personality classification yet overlook two major concerns that hinder their performance. First, crucial implicit factors contained in conversation, such as emotions that reflect the speakers' personalities, are ignored. Second, only focusing on the input dialog content disregards the semantic understanding of personality itself, which reduces the interpretability of the results. In this paper, we propose Affective Natural Language Inference (Affective-NLI) for accurate and interpretable PRC. To utilize affectivity within dialog content for accurate personality recognition, we fine-tuned a pre-trained language model specifically for emotion recognition in conversations, facilitating real-time affective annotations for utterances. For interpretability of recognition results, we formulate personality recognition as an NLI problem by determining whether the textual description of personality labels is entailed by the dialog content. Extensive experiments on two daily conversation datasets suggest that Affective-NLI significantly outperforms (by 6%-7%) state-of-the-art approaches. Additionally, our Flow experiment demonstrates that Affective-NLI can accurately recognize the speaker's personality in the early stages of conversations, surpassing state-of-the-art methods by 22%-34%.
|
2102.08308
|
Ecenaz Erdemir
|
Ecenaz Erdemir and Pier Luigi Dragotti and Deniz Gunduz
|
Active Privacy-utility Trade-off Against a Hypothesis Testing Adversary
|
Accepted to IEEE International Conference on Acoustics, Speech and
Signal Processing (ICASSP 2021)
| null | null | null |
cs.IT cs.CR cs.LG math.IT stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider a user releasing her data containing some personal information in
return for a service. We model the user's personal information as two correlated
random variables, one of them, called the secret variable, is to be kept
private, while the other, called the useful variable, is to be disclosed for
utility. We consider active sequential data release, where at each time step
the user chooses from among a finite set of release mechanisms, each revealing
some information about the user's personal information, i.e., the true
hypotheses, albeit with different statistics. The user manages data release in
an online fashion such that maximum amount of information is revealed about the
latent useful variable, while the confidence for the sensitive variable is kept
below a predefined level. For the utility, we consider both the probability of
correct detection of the useful variable and the mutual information (MI)
between the useful variable and released data. We formulate both problems as a
Markov decision process (MDP), and numerically solve them by advantage
actor-critic (A2C) deep reinforcement learning (RL).
|
[
{
"created": "Tue, 16 Feb 2021 17:49:31 GMT",
"version": "v1"
},
{
"created": "Thu, 18 Feb 2021 11:59:00 GMT",
"version": "v2"
}
] |
2021-02-19
|
[
[
"Erdemir",
"Ecenaz",
""
],
[
"Dragotti",
"Pier Luigi",
""
],
[
"Gunduz",
"Deniz",
""
]
] |
We consider a user releasing her data containing some personal information in return for a service. We model the user's personal information as two correlated random variables, one of them, called the secret variable, is to be kept private, while the other, called the useful variable, is to be disclosed for utility. We consider active sequential data release, where at each time step the user chooses from among a finite set of release mechanisms, each revealing some information about the user's personal information, i.e., the true hypotheses, albeit with different statistics. The user manages data release in an online fashion such that the maximum amount of information is revealed about the latent useful variable, while the confidence for the sensitive variable is kept below a predefined level. For the utility, we consider both the probability of correct detection of the useful variable and the mutual information (MI) between the useful variable and released data. We formulate both problems as a Markov decision process (MDP), and numerically solve them by advantage actor-critic (A2C) deep reinforcement learning (RL).
|
2209.10687
|
Stephanie Newdick
|
Stephanie Newdick, Nitin Ongole, Tony G. Chen, Edward Schmerling, Mark
R. Cutkosky, Marco Pavone
|
Motion Planning for a Climbing Robot with Stochastic Grasps
|
7 pages, 7 figures
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Motion planning for a multi-limbed climbing robot must consider the robot's
posture, joint torques, and how it uses contact forces to interact with its
environment. This paper focuses on motion planning for a robot that uses
nontraditional locomotion to explore unpredictable environments such as Martian
caves. Our robotic concept, ReachBot, uses extendable and retractable booms as
limbs to achieve a large reachable workspace while climbing. Each extendable
boom is capped by a microspine gripper designed for grasping rocky surfaces.
ReachBot leverages its large workspace to navigate around obstacles, over
crevasses, and through challenging terrain. Our planning approach must be
versatile to accommodate variable terrain features and robust to mitigate risks
from the stochastic nature of grasping with spines. In this paper, we introduce
a graph traversal algorithm to select a discrete sequence of grasps based on
available terrain features suitable for grasping. This discrete plan is
complemented by a decoupled motion planner that considers the alternating
phases of body movement and end-effector movement, using a combination of
sampling-based planning and sequential convex programming to optimize
individual phases. We use our motion planner to plan a trajectory across a
simulated 2D cave environment with at least 95% probability of success and
demonstrate improved robustness over a baseline trajectory. Finally, we verify
our motion planning algorithm through experimentation on a 2D planar prototype.
|
[
{
"created": "Wed, 21 Sep 2022 22:25:11 GMT",
"version": "v1"
}
] |
2022-09-23
|
[
[
"Newdick",
"Stephanie",
""
],
[
"Ongole",
"Nitin",
""
],
[
"Chen",
"Tony G.",
""
],
[
"Schmerling",
"Edward",
""
],
[
"Cutkosky",
"Mark R.",
""
],
[
"Pavone",
"Marco",
""
]
] |
Motion planning for a multi-limbed climbing robot must consider the robot's posture, joint torques, and how it uses contact forces to interact with its environment. This paper focuses on motion planning for a robot that uses nontraditional locomotion to explore unpredictable environments such as Martian caves. Our robotic concept, ReachBot, uses extendable and retractable booms as limbs to achieve a large reachable workspace while climbing. Each extendable boom is capped by a microspine gripper designed for grasping rocky surfaces. ReachBot leverages its large workspace to navigate around obstacles, over crevasses, and through challenging terrain. Our planning approach must be versatile to accommodate variable terrain features and robust to mitigate risks from the stochastic nature of grasping with spines. In this paper, we introduce a graph traversal algorithm to select a discrete sequence of grasps based on available terrain features suitable for grasping. This discrete plan is complemented by a decoupled motion planner that considers the alternating phases of body movement and end-effector movement, using a combination of sampling-based planning and sequential convex programming to optimize individual phases. We use our motion planner to plan a trajectory across a simulated 2D cave environment with at least 95% probability of success and demonstrate improved robustness over a baseline trajectory. Finally, we verify our motion planning algorithm through experimentation on a 2D planar prototype.
|
2112.10699
|
Siddhartha Datta
|
Siddhartha Datta, Konrad Kollnig, Nigel Shadbolt
|
Mind-proofing Your Phone: Navigating the Digital Minefield with
GreaseTerminator
|
Accepted in ACM IUI 2022
| null |
10.1145/3490099.3511152
| null |
cs.HC cs.CR cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Digital harms are widespread in the mobile ecosystem. As these devices gain
ever more prominence in our daily lives, so too increases the potential for
malicious attacks against individuals. The last line of defense against a range
of digital harms - including digital distraction, political polarisation
through hate speech, and children being exposed to damaging material - is the
user interface. This work introduces GreaseTerminator to enable researchers to
develop, deploy, and test interventions against these harms with end-users. We
demonstrate the ease of intervention development and deployment, as well as the
broad range of harms potentially covered with GreaseTerminator in five in-depth
case studies.
|
[
{
"created": "Mon, 20 Dec 2021 17:35:02 GMT",
"version": "v1"
},
{
"created": "Mon, 31 Jan 2022 12:14:49 GMT",
"version": "v2"
},
{
"created": "Tue, 1 Feb 2022 13:03:09 GMT",
"version": "v3"
}
] |
2022-12-20
|
[
[
"Datta",
"Siddhartha",
""
],
[
"Kollnig",
"Konrad",
""
],
[
"Shadbolt",
"Nigel",
""
]
] |
Digital harms are widespread in the mobile ecosystem. As these devices gain ever more prominence in our daily lives, so too increases the potential for malicious attacks against individuals. The last line of defense against a range of digital harms - including digital distraction, political polarisation through hate speech, and children being exposed to damaging material - is the user interface. This work introduces GreaseTerminator to enable researchers to develop, deploy, and test interventions against these harms with end-users. We demonstrate the ease of intervention development and deployment, as well as the broad range of harms potentially covered with GreaseTerminator in five in-depth case studies.
|
2404.07520
|
Anant Khandelwal
|
Anant Khandelwal
|
PromptSync: Bridging Domain Gaps in Vision-Language Models through
Class-Aware Prototype Alignment and Discrimination
|
Accepted at CVPR 2024 LIMIT, 12 pages, 8 Tables, 2 Figures
| null | null | null |
cs.CV cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The potential for zero-shot generalization in vision-language (V-L) models
such as CLIP has spurred their widespread adoption in addressing numerous
downstream tasks. Previous methods have employed test-time prompt tuning to
adapt the model to unseen domains, but they overlooked the issue of imbalanced
class distributions. In this study, we explicitly address this problem by
employing class-aware prototype alignment weighted by mean class probabilities
obtained for the test sample and filtered augmented views. Additionally, we
ensure that the class probabilities are as accurate as possible by performing
prototype discrimination using contrastive learning. The combination of
alignment and discriminative loss serves as a geometric regularizer, preventing
the prompt representation from collapsing onto a single class and effectively
bridging the distribution gap between the source and test domains. Our method,
named PromptSync, synchronizes the prompts for each test sample on both the
text and vision branches of the V-L model. In empirical evaluations on the
domain generalization benchmark, our method outperforms previous best methods
by 2.33% in overall performance, by 1% in base-to-novel generalization, and by
2.84% in cross-dataset transfer tasks.
|
[
{
"created": "Thu, 11 Apr 2024 07:26:00 GMT",
"version": "v1"
},
{
"created": "Fri, 12 Apr 2024 17:01:04 GMT",
"version": "v2"
}
] |
2024-04-15
|
[
[
"Khandelwal",
"Anant",
""
]
] |
The potential for zero-shot generalization in vision-language (V-L) models such as CLIP has spurred their widespread adoption in addressing numerous downstream tasks. Previous methods have employed test-time prompt tuning to adapt the model to unseen domains, but they overlooked the issue of imbalanced class distributions. In this study, we explicitly address this problem by employing class-aware prototype alignment weighted by mean class probabilities obtained for the test sample and filtered augmented views. Additionally, we ensure that the class probabilities are as accurate as possible by performing prototype discrimination using contrastive learning. The combination of alignment and discriminative loss serves as a geometric regularizer, preventing the prompt representation from collapsing onto a single class and effectively bridging the distribution gap between the source and test domains. Our method, named PromptSync, synchronizes the prompts for each test sample on both the text and vision branches of the V-L model. In empirical evaluations on the domain generalization benchmark, our method outperforms previous best methods by 2.33% in overall performance, by 1% in base-to-novel generalization, and by 2.84% in cross-dataset transfer tasks.
|
2102.07113
|
Jasmin Bogatinovski Mr.
|
Jasmin Bogatinovski, Ljup\v{c}o Todorovski, Sa\v{s}o D\v{z}eroski,
Dragi Kocev
|
Comprehensive Comparative Study of Multi-Label Classification Methods
| null | null | null | null |
cs.LG cs.AI cs.CC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Multi-label classification (MLC) has recently received increasing interest
from the machine learning community. Several studies provide reviews of methods
and datasets for MLC and a few provide empirical comparisons of MLC methods.
However, they are limited in the number of methods and datasets considered.
This work provides a comprehensive empirical study of a wide range of MLC
methods on a plethora of datasets from various domains. More specifically, our
study evaluates 26 methods on 42 benchmark datasets using 20 evaluation
measures. The adopted evaluation methodology adheres to the highest literature
standards for designing and executing large-scale, time-budgeted experimental
studies. First, the methods are selected based on their usage by the community,
assuring representation of methods across the MLC taxonomy of methods and
different base learners. Second, the datasets cover a wide range of complexity
and domains of application. The selected evaluation measures assess the
predictive performance and the efficiency of the methods. The results of the
analysis identify RFPCT, RFDTBR, ECCJ48, EBRJ48 and AdaBoostMH as the best-performing
methods across the spectrum of performance measures. Whenever a new
method is introduced, it should be compared to different subsets of MLC
methods, determined on the basis of the different evaluation criteria.
|
[
{
"created": "Sun, 14 Feb 2021 09:38:15 GMT",
"version": "v1"
},
{
"created": "Tue, 16 Feb 2021 05:29:43 GMT",
"version": "v2"
}
] |
2021-02-17
|
[
[
"Bogatinovski",
"Jasmin",
""
],
[
"Todorovski",
"Ljupčo",
""
],
[
"Džeroski",
"Sašo",
""
],
[
"Kocev",
"Dragi",
""
]
] |
Multi-label classification (MLC) has recently received increasing interest from the machine learning community. Several studies provide reviews of methods and datasets for MLC and a few provide empirical comparisons of MLC methods. However, they are limited in the number of methods and datasets considered. This work provides a comprehensive empirical study of a wide range of MLC methods on a plethora of datasets from various domains. More specifically, our study evaluates 26 methods on 42 benchmark datasets using 20 evaluation measures. The adopted evaluation methodology adheres to the highest literature standards for designing and executing large-scale, time-budgeted experimental studies. First, the methods are selected based on their usage by the community, assuring representation of methods across the MLC taxonomy of methods and different base learners. Second, the datasets cover a wide range of complexity and domains of application. The selected evaluation measures assess the predictive performance and the efficiency of the methods. The results of the analysis identify RFPCT, RFDTBR, ECCJ48, EBRJ48 and AdaBoostMH as the best-performing methods across the spectrum of performance measures. Whenever a new method is introduced, it should be compared to different subsets of MLC methods, determined on the basis of the different evaluation criteria.
|
1804.06612
|
Constantin Enea
|
Ahmed Bouajjani, Constantin Enea, Kailiang Ji, Shaz Qadeer
|
On the Completeness of Verifying Message Passing Programs under Bounded
Asynchrony
| null | null | null | null |
cs.PL cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We address the problem of verifying message passing programs, defined as a
set of parallel processes communicating through unbounded FIFO buffers. We
introduce a bounded analysis that explores a special type of computations,
called k-synchronous. These computations can be viewed as (unbounded) sequences
of interaction phases, each phase allowing at most k send actions (by different
processes), followed by a sequence of receives corresponding to sends in the
same phase. We give a procedure for deciding k-synchronizability of a program,
i.e., whether every computation is equivalent (has the same happens-before
relation) to one of its k-synchronous computations. We also show that
reachability over k-synchronous computations and checking k-synchronizability
are both PSPACE-complete. Furthermore, we introduce a class of programs called
{\em flow-bounded} for which the problem of deciding whether there exists a k>0
such that the program is k-synchronizable is decidable.
|
[
{
"created": "Wed, 18 Apr 2018 09:11:10 GMT",
"version": "v1"
}
] |
2018-04-20
|
[
[
"Bouajjani",
"Ahmed",
""
],
[
"Enea",
"Constantin",
""
],
[
"Ji",
"Kailiang",
""
],
[
"Qadeer",
"Shaz",
""
]
] |
We address the problem of verifying message passing programs, defined as a set of parallel processes communicating through unbounded FIFO buffers. We introduce a bounded analysis that explores a special type of computations, called k-synchronous. These computations can be viewed as (unbounded) sequences of interaction phases, each phase allowing at most k send actions (by different processes), followed by a sequence of receives corresponding to sends in the same phase. We give a procedure for deciding k-synchronizability of a program, i.e., whether every computation is equivalent (has the same happens-before relation) to one of its k-synchronous computations. We also show that reachability over k-synchronous computations and checking k-synchronizability are both PSPACE-complete. Furthermore, we introduce a class of programs called {\em flow-bounded} for which the problem of deciding whether there exists a k>0 such that the program is k-synchronizable is decidable.
|
1604.05525
|
Sonse Shimaoka
|
Sonse Shimaoka, Pontus Stenetorp, Kentaro Inui, Sebastian Riedel
|
An Attentive Neural Architecture for Fine-grained Entity Type
Classification
|
6 pages, 2 figures
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work we propose a novel attention-based neural network model for the
task of fine-grained entity type classification that, unlike previously proposed
models, recursively composes representations of entity mention contexts. Our
model achieves state-of-the-art performance with 74.94% loose micro F1-score on
the well-established FIGER dataset, a relative improvement of 2.59%. We also
investigate the behavior of the attention mechanism of our model and observe
that it can learn contextual linguistic expressions that indicate the
fine-grained category memberships of an entity.
|
[
{
"created": "Tue, 19 Apr 2016 11:39:53 GMT",
"version": "v1"
}
] |
2016-04-20
|
[
[
"Shimaoka",
"Sonse",
""
],
[
"Stenetorp",
"Pontus",
""
],
[
"Inui",
"Kentaro",
""
],
[
"Riedel",
"Sebastian",
""
]
] |
In this work we propose a novel attention-based neural network model for the task of fine-grained entity type classification that, unlike previously proposed models, recursively composes representations of entity mention contexts. Our model achieves state-of-the-art performance with 74.94% loose micro F1-score on the well-established FIGER dataset, a relative improvement of 2.59%. We also investigate the behavior of the attention mechanism of our model and observe that it can learn contextual linguistic expressions that indicate the fine-grained category memberships of an entity.
|
0711.2399
|
Alexander Tiskin
|
Vladimir Deineko and Alexander Tiskin
|
Minimum-weight double-tree shortcutting for Metric TSP: Bounding the
approximation ratio
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Metric Traveling Salesman Problem (TSP) is a classical NP-hard
optimization problem. The double-tree shortcutting method for Metric TSP yields
an exponentially-sized space of TSP tours, each of which approximates the
optimal solution within at most a factor of 2. We consider the problem of
finding among these tours the one that gives the closest approximation, i.e.\
the \emph{minimum-weight double-tree shortcutting}. Previously, we gave an
efficient algorithm for this problem, and carried out its experimental
analysis. In this paper, we address the related question of the worst-case
approximation ratio for the minimum-weight double-tree shortcutting method. In
particular, we give lower bounds on the approximation ratio in some specific
metric spaces: the ratio of 2 in the discrete shortest path metric, 1.622 in
the planar Euclidean metric, and 1.666 in the planar Minkowski metric. The
first of these lower bounds is tight; we conjecture that the other two bounds
are also tight, and in particular that the minimum-weight double-tree method
provides a 1.622-approximation for planar Euclidean TSP.
|
[
{
"created": "Thu, 15 Nov 2007 13:19:01 GMT",
"version": "v1"
},
{
"created": "Tue, 16 Dec 2008 11:58:25 GMT",
"version": "v2"
},
{
"created": "Sun, 28 Dec 2008 17:28:18 GMT",
"version": "v3"
}
] |
2008-12-30
|
[
[
"Deineko",
"Vladimir",
""
],
[
"Tiskin",
"Alexander",
""
]
] |
The Metric Traveling Salesman Problem (TSP) is a classical NP-hard optimization problem. The double-tree shortcutting method for Metric TSP yields an exponentially-sized space of TSP tours, each of which approximates the optimal solution within at most a factor of 2. We consider the problem of finding among these tours the one that gives the closest approximation, i.e.\ the \emph{minimum-weight double-tree shortcutting}. Previously, we gave an efficient algorithm for this problem, and carried out its experimental analysis. In this paper, we address the related question of the worst-case approximation ratio for the minimum-weight double-tree shortcutting method. In particular, we give lower bounds on the approximation ratio in some specific metric spaces: the ratio of 2 in the discrete shortest path metric, 1.622 in the planar Euclidean metric, and 1.666 in the planar Minkowski metric. The first of these lower bounds is tight; we conjecture that the other two bounds are also tight, and in particular that the minimum-weight double-tree method provides a 1.622-approximation for planar Euclidean TSP.
|
1710.11310
|
Masato Tajima
|
Masato Tajima
|
An Innovations Approach to Viterbi Decoding of Convolutional Codes
|
Accepted for publication in IEEE Trans. Inf. Theory
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce the notion of innovations for Viterbi decoding of convolutional
codes. First we define a kind of innovation corresponding to the received data,
i.e., the input to a Viterbi decoder. Then the structure of a
Scarce-State-Transition (SST) Viterbi decoder is derived in a natural manner.
It is shown that the newly defined innovation is just the input to the main
decoder in an SST Viterbi decoder and generates the same syndrome as the
original received data does. A similar result holds for Quick-Look-In (QLI)
codes as well. In this case, however, the precise innovation is not defined. We
see that this innovation-like quantity is related to the linear smoothed
estimate of the information. The essence of the innovations approach to a linear
filtering problem is first to whiten the observed data, and then to treat the
resulting simpler white-noise observations problem. In our case, this
corresponds to the reduction of decoding complexity in the main decoder in an
SST Viterbi decoder. We show that the distributions related to the main decoder
(i.e., the input distribution and the state distribution in the code trellis
for the main decoder) are strongly biased under moderately noisy conditions. We see
that these biased distributions actually lead to the complexity reduction in
the main decoder. Furthermore, it is shown that the proposed innovations
approach can be extended to maximum-likelihood (ML) decoding of block codes as
well.
|
[
{
"created": "Tue, 31 Oct 2017 03:21:59 GMT",
"version": "v1"
},
{
"created": "Mon, 14 May 2018 08:58:22 GMT",
"version": "v2"
},
{
"created": "Sun, 4 Nov 2018 00:39:49 GMT",
"version": "v3"
}
] |
2018-11-06
|
[
[
"Tajima",
"Masato",
""
]
] |
We introduce the notion of innovations for Viterbi decoding of convolutional codes. First we define a kind of innovation corresponding to the received data, i.e., the input to a Viterbi decoder. Then the structure of a Scarce-State-Transition (SST) Viterbi decoder is derived in a natural manner. It is shown that the newly defined innovation is just the input to the main decoder in an SST Viterbi decoder and generates the same syndrome as the original received data does. A similar result holds for Quick-Look-In (QLI) codes as well. In this case, however, the precise innovation is not defined. We see that this innovation-like quantity is related to the linear smoothed estimate of the information. The essence of the innovations approach to a linear filtering problem is first to whiten the observed data, and then to treat the resulting simpler white-noise observations problem. In our case, this corresponds to the reduction of decoding complexity in the main decoder in an SST Viterbi decoder. We show that the distributions related to the main decoder (i.e., the input distribution and the state distribution in the code trellis for the main decoder) are strongly biased under moderately noisy conditions. We see that these biased distributions actually lead to the complexity reduction in the main decoder. Furthermore, it is shown that the proposed innovations approach can be extended to maximum-likelihood (ML) decoding of block codes as well.
|
2309.09309
|
James O'Keeffe
|
James O'Keeffe, Alan Gregory Millard
|
Predictive Fault Tolerance for Autonomous Robot Swarms
|
This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Active fault tolerance is essential for robot swarms to retain long-term
autonomy. Previous work on swarm fault tolerance focuses on reacting to
electro-mechanical faults that are spontaneously injected into robot sensors
and actuators. Resolving faults once they have manifested as failures is an
inefficient approach, and there are some safety-critical scenarios in which any
kind of robot failure is unacceptable. We propose a predictive approach to
fault tolerance, based on the principle of preemptive maintenance, in which
potential faults are autonomously detected and resolved before they manifest as
failures. Our approach is shown to improve swarm performance and prevent robot
failure in the cases tested.
|
[
{
"created": "Sun, 17 Sep 2023 15:54:48 GMT",
"version": "v1"
}
] |
2023-09-19
|
[
[
"O'Keeffe",
"James",
""
],
[
"Millard",
"Alan Gregory",
""
]
] |
Active fault tolerance is essential for robot swarms to retain long-term autonomy. Previous work on swarm fault tolerance focuses on reacting to electro-mechanical faults that are spontaneously injected into robot sensors and actuators. Resolving faults once they have manifested as failures is an inefficient approach, and there are some safety-critical scenarios in which any kind of robot failure is unacceptable. We propose a predictive approach to fault tolerance, based on the principle of preemptive maintenance, in which potential faults are autonomously detected and resolved before they manifest as failures. Our approach is shown to improve swarm performance and prevent robot failure in the cases tested.
|
1401.3428
|
Nicolas Meuleau
|
Nicolas Meuleau, Emmanuel Benazera, Ronen I. Brafman, Eric A. Hansen,
Mausam
|
A Heuristic Search Approach to Planning with Continuous Resources in
Stochastic Domains
| null |
Journal Of Artificial Intelligence Research, Volume 34, pages
27-59, 2009
|
10.1613/jair.2529
| null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the problem of optimal planning in stochastic domains with
resource constraints, where the resources are continuous and the choice of
action at each step depends on resource availability. We introduce the HAO*
algorithm, a generalization of the AO* algorithm that performs search in a
hybrid state space that is modeled using both discrete and continuous state
variables, where the continuous variables represent monotonic resources. Like
other heuristic search algorithms, HAO* leverages knowledge of the start state
and an admissible heuristic to focus computational effort on those parts of the
state space that could be reached from the start state by following an optimal
policy. We show that this approach is especially effective when resource
constraints limit how much of the state space is reachable. Experimental
results demonstrate its effectiveness in the domain that motivates our
research: automated planning for planetary exploration rovers.
|
[
{
"created": "Wed, 15 Jan 2014 04:46:00 GMT",
"version": "v1"
}
] |
2014-01-16
|
[
[
"Meuleau",
"Nicolas",
""
],
[
"Benazera",
"Emmanuel",
""
],
[
"Brafman",
"Ronen I.",
""
],
[
"Hansen",
"Eric A.",
""
],
[
"Mausam",
"",
""
]
] |
We consider the problem of optimal planning in stochastic domains with resource constraints, where the resources are continuous and the choice of action at each step depends on resource availability. We introduce the HAO* algorithm, a generalization of the AO* algorithm that performs search in a hybrid state space that is modeled using both discrete and continuous state variables, where the continuous variables represent monotonic resources. Like other heuristic search algorithms, HAO* leverages knowledge of the start state and an admissible heuristic to focus computational effort on those parts of the state space that could be reached from the start state by following an optimal policy. We show that this approach is especially effective when resource constraints limit how much of the state space is reachable. Experimental results demonstrate its effectiveness in the domain that motivates our research: automated planning for planetary exploration rovers.
|
1405.2430
|
Giuseppe Antonio Di Luna
|
G. A. Di Luna, P. Flocchini, S. Gan Chaudhuri, N. Santoro, G.
Viglietta
|
Robots with Lights: Overcoming Obstructed Visibility Without Colliding
| null | null | null | null |
cs.DC cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Robots with lights is a model of autonomous mobile computational entities
operating in the plane in Look-Compute-Move cycles: each agent has an
externally visible light which can assume colors from a fixed set; the lights
are persistent (i.e., the color is not erased at the end of a cycle), but
otherwise the agents are oblivious. The investigation of computability in this
model, initially suggested by Peleg, is under way, and several results have
been recently established. In these investigations, however, an agent is
assumed to be capable of seeing through another agent. In this paper we start the
study of computing when visibility is obstructable, and investigate the most
basic problem for this setting, Complete Visibility: The agents must reach
within finite time a configuration where they can all see each other and
terminate. We do not make any assumption on a priori knowledge of the number of
agents, on rigidity of movements nor on chirality. The local coordinate system
of an agent may change at each activation. Also, by definition of lights, an
agent can communicate and remember only a constant number of bits in each
cycle. In spite of these weak conditions, we prove that Complete Visibility is
always solvable, even in the asynchronous setting, without collisions and using
a small constant number of colors. The proof is constructive. We also show how
to extend our protocol for Complete Visibility so that, with the same number of
colors, the agents solve the (non-uniform) Circle Formation problem with
obstructed visibility.
|
[
{
"created": "Sat, 10 May 2014 13:05:25 GMT",
"version": "v1"
}
] |
2014-05-13
|
[
[
"Di Luna",
"G. A.",
""
],
[
"Flocchini",
"P.",
""
],
[
"Chaudhuri",
"S. Gan",
""
],
[
"Santoro",
"N.",
""
],
[
"Viglietta",
"G.",
""
]
] |
Robots with lights is a model of autonomous mobile computational entities operating in the plane in Look-Compute-Move cycles: each agent has an externally visible light which can assume colors from a fixed set; the lights are persistent (i.e., the color is not erased at the end of a cycle), but otherwise the agents are oblivious. The investigation of computability in this model, initially suggested by Peleg, is under way, and several results have been recently established. In these investigations, however, an agent is assumed to be capable of seeing through another agent. In this paper we start the study of computing when visibility is obstructable, and investigate the most basic problem for this setting, Complete Visibility: The agents must reach within finite time a configuration where they can all see each other and terminate. We do not make any assumption on a priori knowledge of the number of agents, on rigidity of movements nor on chirality. The local coordinate system of an agent may change at each activation. Also, by definition of lights, an agent can communicate and remember only a constant number of bits in each cycle. In spite of these weak conditions, we prove that Complete Visibility is always solvable, even in the asynchronous setting, without collisions and using a small constant number of colors. The proof is constructive. We also show how to extend our protocol for Complete Visibility so that, with the same number of colors, the agents solve the (non-uniform) Circle Formation problem with obstructed visibility.
|
2107.00977
|
Hadrien Reynaud
|
Hadrien Reynaud, Athanasios Vlontzos, Benjamin Hou, Arian Beqiri, Paul
Leeson, Bernhard Kainz
|
Ultrasound Video Transformers for Cardiac Ejection Fraction Estimation
|
Accepted for MICCAI 2021
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cardiac ultrasound imaging is used to diagnose various heart diseases. Common
analysis pipelines involve manual processing of the video frames by expert
clinicians. This suffers from intra- and inter-observer variability. We propose
a novel approach to ultrasound video analysis using a transformer architecture
based on a Residual Auto-Encoder Network and a BERT model adapted for token
classification. This enables videos of any length to be processed. We apply our
model to the task of End-Systolic (ES) and End-Diastolic (ED) frame detection
and the automated computation of the left ventricular ejection fraction. We
achieve an average frame distance of 3.36 frames for the ES and 7.17 frames for
the ED on videos of arbitrary length. Our end-to-end learnable approach can
estimate the ejection fraction with a MAE of 5.95 and $R^2$ of 0.52 in 0.15s
per video, showing that segmentation is not the only way to predict ejection
fraction. Code and models are available at https://github.com/HReynaud/UVT.
|
[
{
"created": "Fri, 2 Jul 2021 11:23:09 GMT",
"version": "v1"
}
] |
2021-07-05
|
[
[
"Reynaud",
"Hadrien",
""
],
[
"Vlontzos",
"Athanasios",
""
],
[
"Hou",
"Benjamin",
""
],
[
"Beqiri",
"Arian",
""
],
[
"Leeson",
"Paul",
""
],
[
"Kainz",
"Bernhard",
""
]
] |
Cardiac ultrasound imaging is used to diagnose various heart diseases. Common analysis pipelines involve manual processing of the video frames by expert clinicians. This suffers from intra- and inter-observer variability. We propose a novel approach to ultrasound video analysis using a transformer architecture based on a Residual Auto-Encoder Network and a BERT model adapted for token classification. This enables videos of any length to be processed. We apply our model to the task of End-Systolic (ES) and End-Diastolic (ED) frame detection and the automated computation of the left ventricular ejection fraction. We achieve an average frame distance of 3.36 frames for the ES and 7.17 frames for the ED on videos of arbitrary length. Our end-to-end learnable approach can estimate the ejection fraction with a MAE of 5.95 and $R^2$ of 0.52 in 0.15s per video, showing that segmentation is not the only way to predict ejection fraction. Code and models are available at https://github.com/HReynaud/UVT.
|
1303.4434
|
Pinghua Gong
|
Pinghua Gong, Changshui Zhang, Zhaosong Lu, Jianhua Huang, Jieping Ye
|
A General Iterative Shrinkage and Thresholding Algorithm for Non-convex
Regularized Optimization Problems
| null | null | null | null |
cs.LG cs.NA stat.CO stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Non-convex sparsity-inducing penalties have recently received considerable
attention in sparse learning. Recent theoretical investigations have
demonstrated their superiority over the convex counterparts in several sparse
learning settings. However, solving the non-convex optimization problems
associated with non-convex penalties remains a big challenge. A commonly used
approach is the Multi-Stage (MS) convex relaxation (or DC programming), which
relaxes the original non-convex problem to a sequence of convex problems. This
approach is usually not very practical for large-scale problems because its
computational cost is a multiple of solving a single convex problem. In this
paper, we propose a General Iterative Shrinkage and Thresholding (GIST)
algorithm to solve the nonconvex optimization problem for a large class of
non-convex penalties. The GIST algorithm iteratively solves a proximal operator
problem, which in turn has a closed-form solution for many commonly used
penalties. At each outer iteration of the algorithm, we use a line search
initialized by the Barzilai-Borwein (BB) rule that allows finding an
appropriate step size quickly. The paper also presents a detailed convergence
analysis of the GIST algorithm. The efficiency of the proposed algorithm is
demonstrated by extensive experiments on large-scale data sets.
|
[
{
"created": "Mon, 18 Mar 2013 21:41:53 GMT",
"version": "v1"
}
] |
2013-03-20
|
[
[
"Gong",
"Pinghua",
""
],
[
"Zhang",
"Changshui",
""
],
[
"Lu",
"Zhaosong",
""
],
[
"Huang",
"Jianhua",
""
],
[
"Ye",
"Jieping",
""
]
] |
Non-convex sparsity-inducing penalties have recently received considerable attention in sparse learning. Recent theoretical investigations have demonstrated their superiority over the convex counterparts in several sparse learning settings. However, solving the non-convex optimization problems associated with non-convex penalties remains a big challenge. A commonly used approach is the Multi-Stage (MS) convex relaxation (or DC programming), which relaxes the original non-convex problem to a sequence of convex problems. This approach is usually not very practical for large-scale problems because its computational cost is a multiple of solving a single convex problem. In this paper, we propose a General Iterative Shrinkage and Thresholding (GIST) algorithm to solve the nonconvex optimization problem for a large class of non-convex penalties. The GIST algorithm iteratively solves a proximal operator problem, which in turn has a closed-form solution for many commonly used penalties. At each outer iteration of the algorithm, we use a line search initialized by the Barzilai-Borwein (BB) rule that allows finding an appropriate step size quickly. The paper also presents a detailed convergence analysis of the GIST algorithm. The efficiency of the proposed algorithm is demonstrated by extensive experiments on large-scale data sets.
|
2310.07638
|
Ziyue Huang
|
Ziyue Huang, Mingming Zhang, Qingjie Liu, Wei Wang, Zhe Dong, and
Yunhong Wang
|
Context-Enhanced Detector For Building Detection From Remote Sensing
Images
|
12 pages, 7 figures
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The field of building detection from remote sensing images has made
significant progress, but faces challenges in achieving high-accuracy detection
due to the diversity in building appearances and the complexity of vast scenes.
To address these challenges, we propose a novel approach called
Context-Enhanced Detector (CEDet). Our approach utilizes a three-stage cascade
structure to enhance the extraction of contextual information and improve
building detection accuracy. Specifically, we introduce two modules: the
Semantic Guided Contextual Mining (SGCM) module, which aggregates multi-scale
contexts and incorporates an attention mechanism to capture long-range
interactions, and the Instance Context Mining Module (ICMM), which captures
instance-level relationship context by constructing a spatial relationship
graph and aggregating instance features. Additionally, we introduce a semantic
segmentation loss based on pseudo-masks to guide contextual information
extraction. Our method achieves state-of-the-art performance on three building
detection benchmarks, including CNBuilding-9P, CNBuilding-23P, and SpaceNet.
|
[
{
"created": "Wed, 11 Oct 2023 16:33:30 GMT",
"version": "v1"
},
{
"created": "Tue, 9 Jul 2024 07:03:02 GMT",
"version": "v2"
}
] |
2024-07-10
|
[
[
"Huang",
"Ziyue",
""
],
[
"Zhang",
"Mingming",
""
],
[
"Liu",
"Qingjie",
""
],
[
"Wang",
"Wei",
""
],
[
"Dong",
"Zhe",
""
],
[
"Wang",
"Yunhong",
""
]
] |
The field of building detection from remote sensing images has made significant progress, but faces challenges in achieving high-accuracy detection due to the diversity in building appearances and the complexity of vast scenes. To address these challenges, we propose a novel approach called Context-Enhanced Detector (CEDet). Our approach utilizes a three-stage cascade structure to enhance the extraction of contextual information and improve building detection accuracy. Specifically, we introduce two modules: the Semantic Guided Contextual Mining (SGCM) module, which aggregates multi-scale contexts and incorporates an attention mechanism to capture long-range interactions, and the Instance Context Mining Module (ICMM), which captures instance-level relationship context by constructing a spatial relationship graph and aggregating instance features. Additionally, we introduce a semantic segmentation loss based on pseudo-masks to guide contextual information extraction. Our method achieves state-of-the-art performance on three building detection benchmarks, including CNBuilding-9P, CNBuilding-23P, and SpaceNet.
|
2312.00949
|
Christophe Tribes
|
Christophe Tribes, Sacha Benarroch-Lelong, Peng Lu, Ivan Kobyzev
|
Hyperparameter Optimization for Large Language Model Instruction-Tuning
| null | null | null | null |
cs.CL math.OC
|
http://creativecommons.org/licenses/by/4.0/
|
The fine-tuning of Large Language Models (LLMs) has enabled them to recently
achieve milestones in natural language processing applications. The emergence
of ever larger LLMs has paved the way for more efficient fine-tuning methods.
Among these, the Low-Rank Adaptation (LoRA) method keeps most of the weights of
the pre-trained LLM frozen while introducing a low-rank decomposition of the
weight matrix, enabling the tuning of only a very small proportion of the
network. The performance on downstream tasks of models fine-tuned with LoRA
heavily relies on a set of hyperparameters including the rank of the
decomposition. In this work, we investigate the choice of these hyperparameters
through two main blackbox optimization (BBO) techniques. We examine the whole
pipeline of performing fine-tuning and validation on a pre-trained LLM as a
blackbox and efficiently explore the space of hyperparameters with the \nomad
algorithm, achieving a boost in performance and human alignment of the tuned
model.
|
[
{
"created": "Fri, 1 Dec 2023 22:03:12 GMT",
"version": "v1"
},
{
"created": "Tue, 30 Jan 2024 21:32:31 GMT",
"version": "v2"
}
] |
2024-02-01
|
[
[
"Tribes",
"Christophe",
""
],
[
"Benarroch-Lelong",
"Sacha",
""
],
[
"Lu",
"Peng",
""
],
[
"Kobyzev",
"Ivan",
""
]
] |
The fine-tuning of Large Language Models (LLMs) has enabled them to recently achieve milestones in natural language processing applications. The emergence of ever larger LLMs has paved the way for more efficient fine-tuning methods. Among these, the Low-Rank Adaptation (LoRA) method keeps most of the weights of the pre-trained LLM frozen while introducing a low-rank decomposition of the weight matrix, enabling the tuning of only a very small proportion of the network. The performance on downstream tasks of models fine-tuned with LoRA heavily relies on a set of hyperparameters including the rank of the decomposition. In this work, we investigate the choice of these hyperparameters through two main blackbox optimization (BBO) techniques. We examine the whole pipeline of performing fine-tuning and validation on a pre-trained LLM as a blackbox and efficiently explore the space of hyperparameters with the \nomad algorithm, achieving a boost in performance and human alignment of the tuned model.
|
2001.11396
|
Matthew Willetts
|
Miguel Morin, Matthew Willetts
|
Non-Determinism in TensorFlow ResNets
|
4 pages
| null | null | null |
cs.LG cs.NE stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We show that the stochasticity in training ResNets for image classification
on GPUs in TensorFlow is dominated by the non-determinism from GPUs, rather
than by the initialisation of the weights and biases of the network or by the
sequence of minibatches given. The standard deviation of test set accuracy is
0.02 with fixed seeds, compared to 0.027 with different seeds---nearly 74\% of
the standard deviation of a ResNet model is non-deterministic. For test set
loss the ratio of standard deviations is more than 80\%. These results call for
more robust evaluation strategies of deep learning models, as a significant
amount of the variation in results across runs can arise simply from GPU
randomness.
|
[
{
"created": "Thu, 30 Jan 2020 15:29:13 GMT",
"version": "v1"
}
] |
2020-01-31
|
[
[
"Morin",
"Miguel",
""
],
[
"Willetts",
"Matthew",
""
]
] |
We show that the stochasticity in training ResNets for image classification on GPUs in TensorFlow is dominated by the non-determinism from GPUs, rather than by the initialisation of the weights and biases of the network or by the sequence of minibatches given. The standard deviation of test set accuracy is 0.02 with fixed seeds, compared to 0.027 with different seeds---nearly 74\% of the standard deviation of a ResNet model is non-deterministic. For test set loss the ratio of standard deviations is more than 80\%. These results call for more robust evaluation strategies of deep learning models, as a significant amount of the variation in results across runs can arise simply from GPU randomness.
|
1604.08379
|
Debasis Mishra
|
Debasis Mishra and Tridib Sharma
|
Balanced Ranking Mechanisms
| null | null | null | null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the private values single object auction model, we construct a
satisfactory mechanism - a symmetric, dominant strategy incentive compatible,
and budget-balanced mechanism. Our mechanism allocates the object to the
highest valued agent with more than 99% probability provided there are at least
14 agents. It is also ex-post individually rational. We show that our mechanism
is optimal in a restricted class of satisfactory ranking mechanisms. Since
achieving efficiency through a dominant strategy incentive compatible and
budget-balanced mechanism is impossible in this model, our results illustrate
the limits of this impossibility.
|
[
{
"created": "Thu, 28 Apr 2016 11:41:02 GMT",
"version": "v1"
}
] |
2016-10-04
|
[
[
"Mishra",
"Debasis",
""
],
[
"Sharma",
"Tridib",
""
]
] |
In the private values single object auction model, we construct a satisfactory mechanism - a symmetric, dominant strategy incentive compatible, and budget-balanced mechanism. Our mechanism allocates the object to the highest valued agent with more than 99% probability provided there are at least 14 agents. It is also ex-post individually rational. We show that our mechanism is optimal in a restricted class of satisfactory ranking mechanisms. Since achieving efficiency through a dominant strategy incentive compatible and budget-balanced mechanism is impossible in this model, our results illustrate the limits of this impossibility.
|
1410.0562
|
Jacopo Pantaleoni
|
Jacopo Pantaleoni
|
A massively parallel algorithm for constructing the BWT of large string
sets
| null | null | null |
NVR-2014-002
|
cs.DS cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a new scalable, lightweight algorithm to incrementally construct
the BWT and FM-index of large string sets such as those produced by Next
Generation Sequencing. The algorithm is designed for massive parallelism and
can effectively exploit the combination of low capacity high bandwidth memory
and slower external system memory typical of GPU accelerated systems.
Particularly, for a string set of n characters from an alphabet with \sigma
symbols, it uses a constant amount of high-bandwidth memory and at most 3n
log(\sigma) bits of system memory. Given that deep memory hierarchies are
becoming a pervasive trait of high performance computing architectures, we
believe this to be a relevant feature. The implementation can handle reads of
arbitrary length and is up to 2 and 6.5 times faster than the state of the
art for short and long genomic reads, respectively.
|
[
{
"created": "Thu, 2 Oct 2014 14:25:51 GMT",
"version": "v1"
}
] |
2014-10-03
|
[
[
"Pantaleoni",
"Jacopo",
""
]
] |
We present a new scalable, lightweight algorithm to incrementally construct the BWT and FM-index of large string sets such as those produced by Next Generation Sequencing. The algorithm is designed for massive parallelism and can effectively exploit the combination of low capacity high bandwidth memory and slower external system memory typical of GPU accelerated systems. Particularly, for a string set of n characters from an alphabet with \sigma symbols, it uses a constant amount of high-bandwidth memory and at most 3n log(\sigma) bits of system memory. Given that deep memory hierarchies are becoming a pervasive trait of high performance computing architectures, we believe this to be a relevant feature. The implementation can handle reads of arbitrary length and is up to 2 and 6.5 times faster than the state of the art for short and long genomic reads, respectively.
|
1612.05543
|
Wasiur R. KhudaBukhsh
|
Wasiur R. KhudaBukhsh, Sounak Kar, Amr Rizk, and Heinz Koeppl
|
A Generalized Performance Evaluation Framework for Parallel Systems with
Output Synchronization
| null | null | null | null |
cs.PF
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Frameworks such as MapReduce and Hadoop are abundant nowadays. They seek to
reap benefits of parallelization, albeit subject to a synchronization
constraint at the output. Fork-Join (FJ) queuing models are used to analyze
such systems. Arriving jobs are split into tasks each of which is mapped to
exactly one server. A job leaves the system when all of its tasks are executed.
As a metric of performance, we consider waiting times for both
work-conserving and non-work-conserving server systems under a mathematical
set-up general enough to take into account possible phase-type behavior of the
servers and, as suggested by recent evidence, bursty arrivals.
To this end, we present a Markov-additive process framework for an FJ system
and provide computable bounds on tail probabilities of steady-state waiting
times, for both types of servers separately. We apply our results to three
scenarios, namely, non-renewal (Markov-modulated) arrivals, servers showing
phase-type behavior, and Markov-modulated arrivals and services. We compare our
bounds against estimates obtained through simulations and also provide a
theoretical conceptualization of provisions in FJ systems. Finally, we
calibrate our model with real data traces, and illustrate how our bounds can be
used to devise provisions.
|
[
{
"created": "Fri, 16 Dec 2016 16:33:34 GMT",
"version": "v1"
}
] |
2016-12-19
|
[
[
"KhudaBukhsh",
"Wasiur R.",
""
],
[
"Kar",
"Sounak",
""
],
[
"Rizk",
"Amr",
""
],
[
"Koeppl",
"Heinz",
""
]
] |
Frameworks such as MapReduce and Hadoop are abundant nowadays. They seek to reap benefits of parallelization, albeit subject to a synchronization constraint at the output. Fork-Join (FJ) queuing models are used to analyze such systems. Arriving jobs are split into tasks each of which is mapped to exactly one server. A job leaves the system when all of its tasks are executed. As a metric of performance, we consider waiting times for both work-conserving and non-work-conserving server systems under a mathematical set-up general enough to take into account possible phase-type behavior of the servers and, as suggested by recent evidence, bursty arrivals. To this end, we present a Markov-additive process framework for an FJ system and provide computable bounds on tail probabilities of steady-state waiting times, for both types of servers separately. We apply our results to three scenarios, namely, non-renewal (Markov-modulated) arrivals, servers showing phase-type behavior, and Markov-modulated arrivals and services. We compare our bounds against estimates obtained through simulations and also provide a theoretical conceptualization of provisions in FJ systems. Finally, we calibrate our model with real data traces, and illustrate how our bounds can be used to devise provisions.
|
1604.02035
|
Vivek Kumar
|
Vivek Kumar, Srijita Barthwal, Rishabh Kishore, Ruchika Saklani, Anuj
Sharma, Sandeep Sharma
|
Lossy Data Compression Using Logarithm
|
Presented in National Conference on Inspired Learning (2015)
| null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Lossy compression algorithms take advantage of the inherent limitations of
the human eye and discard information that cannot be seen. In the present
paper, a technique termed as Lossy Data Compression using Logarithm (LDCL) is
proposed to compress incoming binary data in the form of a resultant matrix
containing the logarithmic values of different chosen numeric sets. The
proposed method is able to achieve a compression ratio of up to 60 in many major
cases.
Keywords: LDCL, Lossy Data Compression, Binary Reduction, Logarithmic
Approach
|
[
{
"created": "Thu, 7 Apr 2016 15:20:17 GMT",
"version": "v1"
}
] |
2016-04-08
|
[
[
"Kumar",
"Vivek",
""
],
[
"Barthwal",
"Srijita",
""
],
[
"Kishore",
"Rishabh",
""
],
[
"Saklani",
"Ruchika",
""
],
[
"Sharma",
"Anuj",
""
],
[
"Sharma",
"Sandeep",
""
]
] |
Lossy compression algorithms take advantage of the inherent limitations of the human eye and discard information that cannot be seen. In the present paper, a technique termed as Lossy Data Compression using Logarithm (LDCL) is proposed to compress incoming binary data in the form of a resultant matrix containing the logarithmic values of different chosen numeric sets. The proposed method is able to achieve a compression ratio of up to 60 in many major cases. Keywords: LDCL, Lossy Data Compression, Binary Reduction, Logarithmic Approach
|
2104.05657
|
Jiyang Tang
|
Jiyang Tang, Ming Li
|
End-to-End Mandarin Tone Classification with Short Term Context
Information
|
Accepted by APSIPA ASC 2021
| null | null | null |
cs.SD cs.LG eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose an end-to-end Mandarin tone classification method
from continuous speech utterances utilizing both the spectrogram and the
short-term context information as the input. Both spectrograms and context
segment features are used to train the tone classifier. We first divide the
spectrogram frames into syllable segments using forced alignment results
produced by an ASR model. Then we extract the short-term segment features to
capture the context information across multiple syllables. Feeding both the
spectrogram and the short-term context segment features into an end-to-end
model could significantly improve the performance. Experiments are performed on
a large-scale open-source Mandarin speech dataset to evaluate the proposed
method. Results show that this method improves the classification accuracy from
79.5% to 92.6% on the AISHELL3 database.
|
[
{
"created": "Mon, 12 Apr 2021 17:27:39 GMT",
"version": "v1"
},
{
"created": "Mon, 12 Jul 2021 07:38:20 GMT",
"version": "v2"
},
{
"created": "Fri, 17 Dec 2021 14:08:26 GMT",
"version": "v3"
}
] |
2021-12-20
|
[
[
"Tang",
"Jiyang",
""
],
[
"Li",
"Ming",
""
]
] |
In this paper, we propose an end-to-end Mandarin tone classification method from continuous speech utterances utilizing both the spectrogram and the short-term context information as the input. Both spectrograms and context segment features are used to train the tone classifier. We first divide the spectrogram frames into syllable segments using forced alignment results produced by an ASR model. Then we extract the short-term segment features to capture the context information across multiple syllables. Feeding both the spectrogram and the short-term context segment features into an end-to-end model could significantly improve the performance. Experiments are performed on a large-scale open-source Mandarin speech dataset to evaluate the proposed method. Results show that this method improves the classification accuracy from 79.5% to 92.6% on the AISHELL3 database.
|
1203.4319
|
Azni Haslizan Ab Halim
|
A. H. Azni, Rabiah Ahmad, Zul Azri Muhamad Noh, Abd Samad Hasan Basari
and Burairah Hussin
|
Correlated Node Behavior Model based on Semi Markov Process for MANETS
|
IJCSI Volume 9, Issue 1, January 2012
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces a new model for node behavior, namely the Correlated
Node Behavior Model, an extension of the Node Behavior Model. The model
adopts a continuous-time semi-Markov process that clusters correlated nodes.
The key parameter of the process is determined by five probabilistic
parameters based on the Markovian model. Computed from the transition
probabilities of the semi-Markov process, the impact of node correlation on
network survivability and resilience can be measured quantitatively. From
the result, the quantitative analysis of correlated node behavior on
survivability is obtained through mathematical description, and the
effectiveness and rationality of the proposed model are verified through
numerical analysis. The analytical results show that the effect of
correlated failure nodes on network survivability is much more severe than
that of other misbehaviors.
|
[
{
"created": "Tue, 20 Mar 2012 05:05:50 GMT",
"version": "v1"
}
] |
2012-03-21
|
[
[
"Azni",
"A. H.",
""
],
[
"Ahmad",
"Rabiah",
""
],
[
"Noh",
"Zul Azri Muhamad",
""
],
[
"Basari",
"Abd Samad Hasan",
""
],
[
"Hussin",
"Burairah",
""
]
] |
This paper introduces a new model for node behavior, namely the Correlated Node Behavior Model, an extension of the Node Behavior Model. The model adopts a continuous-time semi-Markov process that clusters correlated nodes. The key parameter of the process is determined by five probabilistic parameters based on the Markovian model. Computed from the transition probabilities of the semi-Markov process, the impact of node correlation on network survivability and resilience can be measured quantitatively. From the result, the quantitative analysis of correlated node behavior on survivability is obtained through mathematical description, and the effectiveness and rationality of the proposed model are verified through numerical analysis. The analytical results show that the effect of correlated failure nodes on network survivability is much more severe than that of other misbehaviors.
|
2312.12936
|
Gabriele Ciravegna
|
Eleonora Poeta, Gabriele Ciravegna, Eliana Pastor, Tania Cerquitelli,
Elena Baralis
|
Concept-based Explainable Artificial Intelligence: A Survey
| null | null | null | null |
cs.AI cs.HC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The field of explainable artificial intelligence emerged in response to the
growing need for more transparent and reliable models. However, using raw
features to provide explanations has been disputed in several works lately,
advocating for more user-understandable explanations. To address this issue, a
wide range of papers proposing Concept-based eXplainable Artificial
Intelligence (C-XAI) methods have arisen in recent years. Nevertheless, a
unified categorization and precise field definition are still missing. This
paper fills the gap by offering a thorough review of C-XAI approaches. We
define and identify different concepts and explanation types. We provide a
taxonomy identifying nine categories and propose guidelines for selecting a
suitable category based on the development context. Additionally, we report
common evaluation strategies including metrics, human evaluations and datasets
employed, aiming to assist the development of future methods. We believe this
survey will serve researchers, practitioners, and domain experts in
comprehending and advancing this innovative field.
|
[
{
"created": "Wed, 20 Dec 2023 11:27:21 GMT",
"version": "v1"
}
] |
2023-12-21
|
[
[
"Poeta",
"Eleonora",
""
],
[
"Ciravegna",
"Gabriele",
""
],
[
"Pastor",
"Eliana",
""
],
[
"Cerquitelli",
"Tania",
""
],
[
"Baralis",
"Elena",
""
]
] |
The field of explainable artificial intelligence emerged in response to the growing need for more transparent and reliable models. However, using raw features to provide explanations has been disputed in several works lately, advocating for more user-understandable explanations. To address this issue, a wide range of papers proposing Concept-based eXplainable Artificial Intelligence (C-XAI) methods have arisen in recent years. Nevertheless, a unified categorization and precise field definition are still missing. This paper fills the gap by offering a thorough review of C-XAI approaches. We define and identify different concepts and explanation types. We provide a taxonomy identifying nine categories and propose guidelines for selecting a suitable category based on the development context. Additionally, we report common evaluation strategies including metrics, human evaluations and datasets employed, aiming to assist the development of future methods. We believe this survey will serve researchers, practitioners, and domain experts in comprehending and advancing this innovative field.
|
2404.08648
|
Zhenyun Xie
|
Zhenyun Xie, David S\'anchez-J\'acome, Luis Torrijos-Mor\'an, Daniel
P\'erez-L\'opez
|
Software-defined optical networking applications enabled by programmable
integrated photonics
| null | null | null | null |
cs.NI cs.ET physics.optics
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Data center networks are experiencing unprecedented exponential growth,
mostly driven by the continuous computing demands in machine learning and
artificial intelligence algorithms. Within this realm, optical networking
offers numerous advantages, including low latency, energy efficiency, and
bandwidth transparency, positioning it as a compelling alternative to its
electronic counterparts. In this work, we showcase a range of software-defined
optical networking applications deployed on a general-purpose programmable
integrated photonic processor. Leveraging graph-based theory, we experimentally
demonstrate dynamic optical interconnects, circuit switching, and multicasting
on the same photonic platform, yielding remarkable results in terms of
crosstalk and reconfiguration speed. Our approach harnesses the benefits of
reconfigurability and reliability, paving the way for a new generation of
high-performance optical devices tailored for data center and computing
clusters.
|
[
{
"created": "Mon, 4 Mar 2024 21:13:32 GMT",
"version": "v1"
}
] |
2024-04-16
|
[
[
"Xie",
"Zhenyun",
""
],
[
"Sánchez-Jácome",
"David",
""
],
[
"Torrijos-Morán",
"Luis",
""
],
[
"Pérez-López",
"Daniel",
""
]
] |
Data center networks are experiencing unprecedented exponential growth, mostly driven by the continuous computing demands in machine learning and artificial intelligence algorithms. Within this realm, optical networking offers numerous advantages, including low latency, energy efficiency, and bandwidth transparency, positioning it as a compelling alternative to its electronic counterparts. In this work, we showcase a range of software-defined optical networking applications deployed on a general-purpose programmable integrated photonic processor. Leveraging graph-based theory, we experimentally demonstrate dynamic optical interconnects, circuit switching, and multicasting on the same photonic platform, yielding remarkable results in terms of crosstalk and reconfiguration speed. Our approach harnesses the benefits of reconfigurability and reliability, paving the way for a new generation of high-performance optical devices tailored for data center and computing clusters.
|
1812.10477
|
Yulun Zhang
|
Yulun Zhang, Yapeng Tian, Yu Kong, Bineng Zhong, Yun Fu
|
Residual Dense Network for Image Restoration
|
To appear in TPAMI. arXiv admin note: substantial text overlap with
arXiv:1802.08797
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Convolutional neural networks have recently achieved great success for image
restoration (IR) and also offer hierarchical features. However, most deep CNN
based IR models do not make full use of the hierarchical features from the
original low-quality images, thereby achieving relatively-low performance. In
this paper, we propose a novel residual dense network (RDN) to address this
problem in IR. We fully exploit the hierarchical features from all the
convolutional layers. Specifically, we propose residual dense block (RDB) to
extract abundant local features via densely connected convolutional layers. RDB
further allows direct connections from the state of preceding RDB to all the
layers of current RDB, leading to a contiguous memory mechanism. To adaptively
learn more effective features from preceding and current local features and
stabilize the training of a wider network, we propose local feature fusion in
RDB. After fully obtaining dense local features, we use global feature fusion
to jointly and adaptively learn global hierarchical features in a holistic way.
We demonstrate the effectiveness of RDN with several representative IR
applications, single image super-resolution, Gaussian image denoising, image
compression artifact reduction, and image deblurring. Experiments on benchmark
and real-world datasets show that our RDN achieves favorable performance
against state-of-the-art methods for each IR task quantitatively and visually.
|
[
{
"created": "Tue, 25 Dec 2018 03:45:44 GMT",
"version": "v1"
},
{
"created": "Thu, 23 Jan 2020 01:10:07 GMT",
"version": "v2"
}
] |
2020-01-24
|
[
[
"Zhang",
"Yulun",
""
],
[
"Tian",
"Yapeng",
""
],
[
"Kong",
"Yu",
""
],
[
"Zhong",
"Bineng",
""
],
[
"Fu",
"Yun",
""
]
] |
Convolutional neural network has recently achieved great success for image restoration (IR) and also offered hierarchical features. However, most deep CNN based IR models do not make full use of the hierarchical features from the original low-quality images, thereby achieving relatively-low performance. In this paper, we propose a novel residual dense network (RDN) to address this problem in IR. We fully exploit the hierarchical features from all the convolutional layers. Specifically, we propose residual dense block (RDB) to extract abundant local features via densely connected convolutional layers. RDB further allows direct connections from the state of preceding RDB to all the layers of current RDB, leading to a contiguous memory mechanism. To adaptively learn more effective features from preceding and current local features and stabilize the training of wider network, we proposed local feature fusion in RDB. After fully obtaining dense local features, we use global feature fusion to jointly and adaptively learn global hierarchical features in a holistic way. We demonstrate the effectiveness of RDN with several representative IR applications, single image super-resolution, Gaussian image denoising, image compression artifact reduction, and image deblurring. Experiments on benchmark and real-world datasets show that our RDN achieves favorable performance against state-of-the-art methods for each IR task quantitatively and visually.
|
2105.12786
|
Amit Amram
|
Eran Dahan, Tzvi Diskin, Amit Amram, Amit Moryossef, Omer Koren
|
cofga: A Dataset for Fine Grained Classification of Objects from Aerial
Imagery
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Detection and classification of objects in overhead images are two important
and challenging problems in computer vision. Among various research areas in
this domain, the task of fine-grained classification of objects in overhead
images has become ubiquitous in diverse real-world applications, due to recent
advances in high-resolution satellite and airborne imaging systems. The small
inter-class variations and the large intra class variations caused by the fine
grained nature make it a challenging task, especially in low-resource cases. In
this paper, we introduce COFGA, a new open dataset for the advancement of
fine-grained classification research. The 2,104 images in the dataset are
collected from an airborne imaging system at 5-15 cm ground sampling distance,
providing higher spatial resolution than most public overhead imagery datasets.
The 14,256 annotated objects in the dataset were classified into 2 classes, 15
subclasses, 14 unique features, and 8 perceived colors, a total of 37 distinct
labels, making it suitable to the task of fine-grained classification more than
any other publicly available overhead imagery dataset. We compare COFGA to
other overhead imagery datasets and then describe some distinguished fine-grain
classification approaches that were explored during an open data-science
competition we have conducted for this task.
|
[
{
"created": "Wed, 26 May 2021 18:39:47 GMT",
"version": "v1"
}
] |
2021-05-28
|
[
[
"Dahan",
"Eran",
""
],
[
"Diskin",
"Tzvi",
""
],
[
"Amram",
"Amit",
""
],
[
"Moryossef",
"Amit",
""
],
[
"Koren",
"Omer",
""
]
] |
Detection and classification of objects in overhead images are two important and challenging problems in computer vision. Among various research areas in this domain, the task of fine-grained classification of objects in overhead images has become ubiquitous in diverse real-world applications, due to recent advances in high-resolution satellite and airborne imaging systems. The small inter-class variations and the large intra class variations caused by the fine grained nature make it a challenging task, especially in low-resource cases. In this paper, we introduce COFGA, a new open dataset for the advancement of fine-grained classification research. The 2,104 images in the dataset are collected from an airborne imaging system at 5-15 cm ground sampling distance, providing higher spatial resolution than most public overhead imagery datasets. The 14,256 annotated objects in the dataset were classified into 2 classes, 15 subclasses, 14 unique features, and 8 perceived colors, a total of 37 distinct labels, making it suitable to the task of fine-grained classification more than any other publicly available overhead imagery dataset. We compare COFGA to other overhead imagery datasets and then describe some distinguished fine-grain classification approaches that were explored during an open data-science competition we have conducted for this task.
|
2112.15001
|
Josep Domingo-Ferrer
|
Josep Domingo-Ferrer and Jes\'us Manj\'on
|
Circuit-Free General-Purpose Multi-Party Computation via Co-Utile
Unlinkable Outsourcing
|
IEEE Transactions on Dependable and Secure Computing, to appear
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multiparty computation (MPC) consists in several parties engaging in joint
computation in such a way that each party's input and output remain private to
that party. Whereas MPC protocols for specific computations have existed since
the 1980s, only recently general-purpose compilers have been developed to allow
MPC on arbitrary functions. Yet, using today's MPC compilers requires
substantial programming effort and skill on the user's side, among other things
because nearly all compilers translate the code of the computation into a
Boolean or arithmetic circuit. In particular, the circuit representation
requires unrolling loops and recursive calls, which forces programmers to
(often manually) define loop bounds and hardly use recursion. We present an
approach allowing MPC on an arbitrary computation expressed as ordinary code
with all functionalities that does not need to be translated into a circuit.
Our notion of input and output privacy is predicated on unlinkability. Our
method leverages co-utile computation outsourcing using anonymous channels via
decentralized reputation, makes a minimalistic use of cryptography and does not
require participants to be honest-but-curious: it works as long as participants
are rational (self-interested), which may include rationally malicious peers
(who become attackers if this is advantageous to them). We present example
applications, including e-voting. Our empirical work shows that reputation
captures well the behavior of peers and ensures that parties with high
reputation obtain correct results.
|
[
{
"created": "Thu, 30 Dec 2021 10:26:13 GMT",
"version": "v1"
}
] |
2022-01-03
|
[
[
"Domingo-Ferrer",
"Josep",
""
],
[
"Manjón",
"Jesús",
""
]
] |
Multiparty computation (MPC) consists in several parties engaging in joint computation in such a way that each party's input and output remain private to that party. Whereas MPC protocols for specific computations have existed since the 1980s, only recently general-purpose compilers have been developed to allow MPC on arbitrary functions. Yet, using today's MPC compilers requires substantial programming effort and skill on the user's side, among other things because nearly all compilers translate the code of the computation into a Boolean or arithmetic circuit. In particular, the circuit representation requires unrolling loops and recursive calls, which forces programmers to (often manually) define loop bounds and hardly use recursion. We present an approach allowing MPC on an arbitrary computation expressed as ordinary code with all functionalities that does not need to be translated into a circuit. Our notion of input and output privacy is predicated on unlinkability. Our method leverages co-utile computation outsourcing using anonymous channels via decentralized reputation, makes a minimalistic use of cryptography and does not require participants to be honest-but-curious: it works as long as participants are rational (self-interested), which may include rationally malicious peers (who become attackers if this is advantageous to them). We present example applications, including e-voting. Our empirical work shows that reputation captures well the behavior of peers and ensures that parties with high reputation obtain correct results.
|
2208.12463
|
Qingqiang Sun
|
Qingqiang Sun, Xuemin Lin, Ying Zhang, Wenjie Zhang, Chaoqi Chen
|
Towards Higher-order Topological Consistency for Unsupervised Network
Alignment
|
Accepted by IEEE International Conference on Data Engineering (ICDE),
2023
| null | null | null |
cs.LG cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Network alignment task, which aims to identify corresponding nodes in
different networks, is of great significance for many subsequent applications.
Without the need for labeled anchor links, unsupervised alignment methods have
been attracting more and more attention. However, the topological consistency
assumptions defined by existing methods are generally low-order and less
accurate because only the edge-indiscriminative topological pattern is
considered, which is especially risky in an unsupervised setting. To reposition
the focus of the alignment process from low-order to higher-order topological
consistency, in this paper, we propose a fully unsupervised network alignment
framework named HTC. The proposed higher-order topological consistency is
formulated based on edge orbits, which is merged into the information
aggregation process of a graph convolutional network so that the alignment
consistencies are transformed into the similarity of node embeddings.
Furthermore, the encoder is trained to be multi-orbit-aware and then be refined
to identify more trusted anchor links. Node correspondence is comprehensively
evaluated by integrating all different orders of consistency. In addition to
sound theoretical analysis, the superiority of the proposed method is also
empirically demonstrated through extensive experimental evaluation. On three
pairs of real-world datasets and two pairs of synthetic datasets, our HTC
consistently outperforms a wide variety of unsupervised and supervised methods
with the least or comparable time consumption. It also exhibits robustness to
structural noise as a result of our multi-orbit-aware training mechanism.
|
[
{
"created": "Fri, 26 Aug 2022 07:09:13 GMT",
"version": "v1"
}
] |
2022-08-29
|
[
[
"Sun",
"Qingqiang",
""
],
[
"Lin",
"Xuemin",
""
],
[
"Zhang",
"Ying",
""
],
[
"Zhang",
"Wenjie",
""
],
[
"Chen",
"Chaoqi",
""
]
] |
Network alignment task, which aims to identify corresponding nodes in different networks, is of great significance for many subsequent applications. Without the need for labeled anchor links, unsupervised alignment methods have been attracting more and more attention. However, the topological consistency assumptions defined by existing methods are generally low-order and less accurate because only the edge-indiscriminative topological pattern is considered, which is especially risky in an unsupervised setting. To reposition the focus of the alignment process from low-order to higher-order topological consistency, in this paper, we propose a fully unsupervised network alignment framework named HTC. The proposed higher-order topological consistency is formulated based on edge orbits, which is merged into the information aggregation process of a graph convolutional network so that the alignment consistencies are transformed into the similarity of node embeddings. Furthermore, the encoder is trained to be multi-orbit-aware and then be refined to identify more trusted anchor links. Node correspondence is comprehensively evaluated by integrating all different orders of consistency. In addition to sound theoretical analysis, the superiority of the proposed method is also empirically demonstrated through extensive experimental evaluation. On three pairs of real-world datasets and two pairs of synthetic datasets, our HTC consistently outperforms a wide variety of unsupervised and supervised methods with the least or comparable time consumption. It also exhibits robustness to structural noise as a result of our multi-orbit-aware training mechanism.
|
1303.5234
|
Piotr Dendek
|
Piotr Jan Dendek and Artur Czeczko and Mateusz Fedoryszak and Adam
Kawa and Piotr Wendykier and Lukasz Bolikowski
|
How to perform research in Hadoop environment not losing mental
equilibrium - case study
|
This paper (with changed content) appeared under the title "Chrum:
The Tool for Convenient Generation of Apache Oozie Workflows" in "Intelligent
Tools for Building a Scientific Information Platform: From Research to
Implementation", "Studies in Computational Intelligence", Volume 541, 2014,
http://link.springer.com/book/10.1007/978-3-319-04714-0
| null | null | null |
cs.SE cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Conducting research in an efficient, repetitive, evaluable, but also
convenient (in terms of development) way has always been a challenge. To
satisfy those requirements in a long term and simultaneously minimize costs of
the software engineering process, one has to follow a certain set of
guidelines. This article describes such guidelines based on the research
environment called Content Analysis System (CoAnSys) created in the Center for
Open Science (CeON). Best practices and tools for working in the Apache Hadoop
environment, as well as the process of establishing these rules are portrayed.
|
[
{
"created": "Thu, 21 Mar 2013 11:53:37 GMT",
"version": "v1"
},
{
"created": "Tue, 2 Apr 2013 10:17:14 GMT",
"version": "v2"
},
{
"created": "Sun, 16 Mar 2014 22:30:10 GMT",
"version": "v3"
}
] |
2014-03-18
|
[
[
"Dendek",
"Piotr Jan",
""
],
[
"Czeczko",
"Artur",
""
],
[
"Fedoryszak",
"Mateusz",
""
],
[
"Kawa",
"Adam",
""
],
[
"Wendykier",
"Piotr",
""
],
[
"Bolikowski",
"Lukasz",
""
]
] |
Conducting research in an efficient, repetitive, evaluable, but also convenient (in terms of development) way has always been a challenge. To satisfy those requirements in a long term and simultaneously minimize costs of the software engineering process, one has to follow a certain set of guidelines. This article describes such guidelines based on the research environment called Content Analysis System (CoAnSys) created in the Center for Open Science (CeON). Best practices and tools for working in the Apache Hadoop environment, as well as the process of establishing these rules are portrayed.
|
2301.06635
|
Fuchang Gao
|
Fuchang Gao, Boyu Zhang
|
Data-aware customization of activation functions reduces neural network
error
|
13 pages. arXiv admin note: substantial text overlap with
arXiv:2011.11713
| null | null | null |
cs.LG cs.NE stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Activation functions play critical roles in neural networks, yet current
off-the-shelf neural networks pay little attention to the specific choice of
activation functions used. Here we show that data-aware customization of
activation functions can result in striking reductions in neural network error.
We first give a simple linear algebraic explanation of the role of activation
functions in neural networks; then, through connection with the
Diaconis-Shahshahani Approximation Theorem, we propose a set of criteria for
good activation functions. As a case study, we consider regression tasks with a
partially exchangeable target function, \emph{i.e.} $f(u,v,w)=f(v,u,w)$ for
$u,v\in \mathbb{R}^d$ and $w\in \mathbb{R}^k$, and prove that for such a target
function, using an even activation function in at least one of the layers
guarantees that the prediction preserves partial exchangeability for best
performance. Since even activation functions are seldom used in practice, we
designed the ``seagull'' even activation function $\log(1+x^2)$ according to
our criteria. Empirical testing on over two dozen 9-25 dimensional examples
with different local smoothness, curvature, and degree of exchangeability
revealed that a simple substitution with the ``seagull'' activation function in
an already-refined neural network can lead to an order-of-magnitude reduction
in error. This improvement was most pronounced when the activation function
substitution was applied to the layer in which the exchangeable variables are
connected for the first time. While the improvement is greatest for
low-dimensional data, experiments on the CIFAR10 image classification dataset
showed that use of ``seagull'' can reduce error even for high-dimensional
cases. These results collectively highlight the potential of customizing
activation functions as a general approach to improve neural network
performance.
|
[
{
"created": "Mon, 16 Jan 2023 23:38:37 GMT",
"version": "v1"
}
] |
2023-01-18
|
[
[
"Gao",
"Fuchang",
""
],
[
"Zhang",
"Boyu",
""
]
] |
Activation functions play critical roles in neural networks, yet current off-the-shelf neural networks pay little attention to the specific choice of activation functions used. Here we show that data-aware customization of activation functions can result in striking reductions in neural network error. We first give a simple linear algebraic explanation of the role of activation functions in neural networks; then, through connection with the Diaconis-Shahshahani Approximation Theorem, we propose a set of criteria for good activation functions. As a case study, we consider regression tasks with a partially exchangeable target function, \emph{i.e.} $f(u,v,w)=f(v,u,w)$ for $u,v\in \mathbb{R}^d$ and $w\in \mathbb{R}^k$, and prove that for such a target function, using an even activation function in at least one of the layers guarantees that the prediction preserves partial exchangeability for best performance. Since even activation functions are seldom used in practice, we designed the ``seagull'' even activation function $\log(1+x^2)$ according to our criteria. Empirical testing on over two dozen 9-25 dimensional examples with different local smoothness, curvature, and degree of exchangeability revealed that a simple substitution with the ``seagull'' activation function in an already-refined neural network can lead to an order-of-magnitude reduction in error. This improvement was most pronounced when the activation function substitution was applied to the layer in which the exchangeable variables are connected for the first time. While the improvement is greatest for low-dimensional data, experiments on the CIFAR10 image classification dataset showed that use of ``seagull'' can reduce error even for high-dimensional cases. These results collectively highlight the potential of customizing activation functions as a general approach to improve neural network performance.
|
1905.08212
|
Xinyi Wang
|
Xinyi Wang, Graham Neubig
|
Target Conditioned Sampling: Optimizing Data Selection for Multilingual
Neural Machine Translation
|
Accepted at ACL 2019
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To improve low-resource Neural Machine Translation (NMT) with multilingual
corpora, training on the most related high-resource language only is often more
effective than using all data available (Neubig and Hu, 2018). However, it is
possible that an intelligent data selection strategy can further improve
low-resource NMT with data from other auxiliary languages. In this paper, we
seek to construct a sampling distribution over all multilingual data, so that
it minimizes the training loss of the low-resource language. Based on this
formulation, we propose an efficient algorithm, Target Conditioned Sampling
(TCS), which first samples a target sentence, and then conditionally samples
its source sentence. Experiments show that TCS brings significant gains of up
to 2 BLEU on three of four languages we test, with minimal training overhead.
|
[
{
"created": "Mon, 20 May 2019 16:54:43 GMT",
"version": "v1"
}
] |
2019-05-21
|
[
[
"Wang",
"Xinyi",
""
],
[
"Neubig",
"Graham",
""
]
] |
To improve low-resource Neural Machine Translation (NMT) with multilingual corpora, training on the most related high-resource language only is often more effective than using all data available (Neubig and Hu, 2018). However, it is possible that an intelligent data selection strategy can further improve low-resource NMT with data from other auxiliary languages. In this paper, we seek to construct a sampling distribution over all multilingual data, so that it minimizes the training loss of the low-resource language. Based on this formulation, we propose an efficient algorithm, Target Conditioned Sampling (TCS), which first samples a target sentence, and then conditionally samples its source sentence. Experiments show that TCS brings significant gains of up to 2 BLEU on three of four languages we test, with minimal training overhead.
|
2011.11829
|
Zhiwen Xiao
|
Zhiwen Xiao, Xin Xu, Huanlai Xing, Shouxi Luo, Penglin Dai, Dawei Zhan
|
RTFN: A Robust Temporal Feature Network for Time Series Classification
|
41 pages, 7 figures, Revised Paper
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Time series data usually contains local and global patterns. Most of the
existing feature networks pay more attention to local features rather than the
relationships among them. The latter is, however, also important yet more
difficult to explore. To obtain sufficient representations by a feature network
is still challenging. To this end, we propose a novel robust temporal feature
network (RTFN) for feature extraction in time series classification, containing
a temporal feature network (TFN) and an LSTM-based attention network (LSTMaN).
TFN is a residual structure with multiple convolutional layers. It functions as
a local-feature extraction network to mine sufficient local features from data.
LSTMaN is composed of two identical layers, where attention and long short-term
memory (LSTM) networks are hybridized. This network acts as a relation
extraction network to discover the intrinsic relationships among the extracted
features at different positions in sequential data. In experiments, we embed
RTFN into a supervised structure as a feature extractor and into an
unsupervised structure as an encoder, respectively. The results show that the
RTFN-based structures achieve excellent supervised and unsupervised performance
on a large number of UCR2018 and UEA2018 datasets.
|
[
{
"created": "Tue, 24 Nov 2020 01:24:04 GMT",
"version": "v1"
},
{
"created": "Tue, 29 Dec 2020 02:23:58 GMT",
"version": "v2"
}
] |
2021-01-01
|
[
[
"Xiao",
"Zhiwen",
""
],
[
"Xu",
"Xin",
""
],
[
"Xing",
"Huanlai",
""
],
[
"Luo",
"Shouxi",
""
],
[
"Dai",
"Penglin",
""
],
[
"Zhan",
"Dawei",
""
]
] |
Time series data usually contains local and global patterns. Most of the existing feature networks pay more attention to local features rather than the relationships among them. The latter is, however, also important yet more difficult to explore. To obtain sufficient representations by a feature network is still challenging. To this end, we propose a novel robust temporal feature network (RTFN) for feature extraction in time series classification, containing a temporal feature network (TFN) and an LSTM-based attention network (LSTMaN). TFN is a residual structure with multiple convolutional layers. It functions as a local-feature extraction network to mine sufficient local features from data. LSTMaN is composed of two identical layers, where attention and long short-term memory (LSTM) networks are hybridized. This network acts as a relation extraction network to discover the intrinsic relationships among the extracted features at different positions in sequential data. In experiments, we embed RTFN into a supervised structure as a feature extractor and into an unsupervised structure as an encoder, respectively. The results show that the RTFN-based structures achieve excellent supervised and unsupervised performance on a large number of UCR2018 and UEA2018 datasets.
|