| id (string, 9–10 chars) | submitter (string, 1–64 chars, nullable) | authors (string, 4–20.7k chars) | title (string, 4–246 chars) | comments (string, 1–523 chars, nullable) | journal-ref (string, 4–404 chars, nullable) | doi (string, 11–153 chars, nullable) | report-no (string, 2–254 chars, nullable) | categories (string, 5–98 chars) | license (string, 9 classes) | orig_abstract (string, 14–3.35k chars) | versions (list, 1–60 items) | update_date (string, 10 chars) | authors_parsed (list, 1–1.35k items) | abstract (string, 11–3.34k chars) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1808.00888
|
Patrick Slade
|
Patrick Slade, Zachary N. Sunberg, Mykel J. Kochenderfer
|
Estimation and Control Using Sampling-Based Bayesian Reinforcement
Learning
|
10 pages, 6 figures. arXiv admin note: text overlap with
arXiv:1707.09055
| null | null | null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Real-world autonomous systems operate under uncertainty about both their pose
and dynamics. Autonomous control systems must simultaneously perform estimation
and control tasks to maintain robustness to changing dynamics or modeling
errors. However, information gathering actions often conflict with optimal
actions for reaching control objectives, requiring a trade-off between
exploration and exploitation. The specific problem setting considered here is
for discrete-time nonlinear systems with process noise, input constraints, and
parameter uncertainty. This article frames this problem as a Bayes-adaptive
Markov decision process and solves it online using Monte Carlo tree search with
an unscented Kalman filter to account for process noise and parameter
uncertainty. This method is compared with certainty equivalent model predictive
control and a tree search method that approximates the QMDP solution, providing
insight into when information gathering is useful. Discrete time simulations
characterize performance over a range of process noise and bounds on unknown
parameters. An offline optimization method is used to select the Monte Carlo
tree search parameters without hand-tuning. In lieu of recursive feasibility
guarantees, a probabilistic bounding heuristic is offered that increases the
probability of keeping the state within a desired region.
|
[
{
"created": "Wed, 1 Aug 2018 01:55:37 GMT",
"version": "v1"
}
] |
2018-08-03
|
[
[
"Slade",
"Patrick",
""
],
[
"Sunberg",
"Zachary N.",
""
],
[
"Kochenderfer",
"Mykel J.",
""
]
] |
Real-world autonomous systems operate under uncertainty about both their pose and dynamics. Autonomous control systems must simultaneously perform estimation and control tasks to maintain robustness to changing dynamics or modeling errors. However, information gathering actions often conflict with optimal actions for reaching control objectives, requiring a trade-off between exploration and exploitation. The specific problem setting considered here is for discrete-time nonlinear systems with process noise, input constraints, and parameter uncertainty. This article frames this problem as a Bayes-adaptive Markov decision process and solves it online using Monte Carlo tree search with an unscented Kalman filter to account for process noise and parameter uncertainty. This method is compared with certainty equivalent model predictive control and a tree search method that approximates the QMDP solution, providing insight into when information gathering is useful. Discrete time simulations characterize performance over a range of process noise and bounds on unknown parameters. An offline optimization method is used to select the Monte Carlo tree search parameters without hand-tuning. In lieu of recursive feasibility guarantees, a probabilistic bounding heuristic is offered that increases the probability of keeping the state within a desired region.
|
2303.11153
|
Yuquan Xiao
|
Yuquan Xiao and Qinghe Du
|
Statistical Age-of-Information Optimization for Status Update over
Multi-State Fading Channels
|
This paper has been accepted by IEEE Transactions on Vehicular
Technology
| null |
10.1109/TVT.2023.3336728
| null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Age of information (AoI) is a powerful metric to evaluate the freshness of information, where minimization of average statistics, such as the average AoI and average peak AoI, currently prevails in guiding freshness optimization for related applications. Although minimizing these statistics does improve the freshness of the received information for status update systems on average, the time-varying fading characteristics of wireless channels often cause uncertain yet frequent age violations. The recently proposed statistical AoI metric, which evaluates the achievable minimum peak AoI under a given constraint on the age violation probability, can better characterize the features of AoI dynamics. In this paper, we study the statistical AoI minimization problem for status update systems over multi-state fading channels, which can effectively upper-bound the AoI violation probability but introduces prohibitively high computing complexity. To resolve this issue, we tackle the problem with a two-fold approach. For a small AoI exponent, the problem is approximated via a fractional programming problem. For a large AoI exponent, the problem is converted to a convex problem. Solving the two problems respectively, we derive the near-optimal sampling interval for diverse status update systems. Insightful observations are obtained on how the sampling interval should be tuned as a decreasing function of the channel state information (CSI). Surprisingly, for extremely stringent AoI requirements, the sampling interval converges to a constant regardless of CSI's variation. Numerical results verify the effectiveness and superiority of our proposed scheme.
|
[
{
"created": "Mon, 20 Mar 2023 14:35:39 GMT",
"version": "v1"
},
{
"created": "Sat, 16 Sep 2023 03:28:35 GMT",
"version": "v2"
},
{
"created": "Tue, 28 Nov 2023 03:22:28 GMT",
"version": "v3"
}
] |
2023-11-29
|
[
[
"Xiao",
"Yuquan",
""
],
[
"Du",
"Qinghe",
""
]
] |
Age of information (AoI) is a powerful metric to evaluate the freshness of information, where minimization of average statistics, such as the average AoI and average peak AoI, currently prevails in guiding freshness optimization for related applications. Although minimizing these statistics does improve the freshness of the received information for status update systems on average, the time-varying fading characteristics of wireless channels often cause uncertain yet frequent age violations. The recently proposed statistical AoI metric, which evaluates the achievable minimum peak AoI under a given constraint on the age violation probability, can better characterize the features of AoI dynamics. In this paper, we study the statistical AoI minimization problem for status update systems over multi-state fading channels, which can effectively upper-bound the AoI violation probability but introduces prohibitively high computing complexity. To resolve this issue, we tackle the problem with a two-fold approach. For a small AoI exponent, the problem is approximated via a fractional programming problem. For a large AoI exponent, the problem is converted to a convex problem. Solving the two problems respectively, we derive the near-optimal sampling interval for diverse status update systems. Insightful observations are obtained on how the sampling interval should be tuned as a decreasing function of the channel state information (CSI). Surprisingly, for extremely stringent AoI requirements, the sampling interval converges to a constant regardless of CSI's variation. Numerical results verify the effectiveness and superiority of our proposed scheme.
|
2210.12316
|
Yupeng Hou
|
Yupeng Hou, Zhankui He, Julian McAuley, Wayne Xin Zhao
|
Learning Vector-Quantized Item Representation for Transferable
Sequential Recommenders
|
Accepted by TheWebConf (WWW) 2023
| null | null | null |
cs.IR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Recently, the generality of natural language text has been leveraged to
develop transferable recommender systems. The basic idea is to employ
pre-trained language models (PLMs) to encode item text into item
representations. Despite the promising transferability, the binding between
item text and item representations might be too tight, leading to potential
problems such as over-emphasizing the effect of text features and exaggerating
the negative impact of domain gap. To address this issue, this paper proposes
VQ-Rec, a novel approach to learning Vector-Quantized item representations for
transferable sequential Recommenders. The main novelty of our approach lies in
the new item representation scheme: it first maps item text into a vector of
discrete indices (called item code), and then employs these indices to look up
the code embedding table for deriving item representations. Such a scheme can
be denoted as "text $\Longrightarrow$ code $\Longrightarrow$ representation".
Based on this representation scheme, we further propose an enhanced contrastive
pre-training approach, using semi-synthetic and mixed-domain code
representations as hard negatives. Furthermore, we design a new cross-domain
fine-tuning method based on a differentiable permutation-based network.
Extensive experiments conducted on six public benchmarks demonstrate the
effectiveness of the proposed approach, in both cross-domain and cross-platform
settings. Code and pre-trained model are available at:
https://github.com/RUCAIBox/VQ-Rec.
|
[
{
"created": "Sat, 22 Oct 2022 00:43:14 GMT",
"version": "v1"
},
{
"created": "Sun, 12 Feb 2023 08:20:46 GMT",
"version": "v2"
}
] |
2023-02-14
|
[
[
"Hou",
"Yupeng",
""
],
[
"He",
"Zhankui",
""
],
[
"McAuley",
"Julian",
""
],
[
"Zhao",
"Wayne Xin",
""
]
] |
Recently, the generality of natural language text has been leveraged to develop transferable recommender systems. The basic idea is to employ pre-trained language models (PLMs) to encode item text into item representations. Despite the promising transferability, the binding between item text and item representations might be too tight, leading to potential problems such as over-emphasizing the effect of text features and exaggerating the negative impact of domain gap. To address this issue, this paper proposes VQ-Rec, a novel approach to learning Vector-Quantized item representations for transferable sequential Recommenders. The main novelty of our approach lies in the new item representation scheme: it first maps item text into a vector of discrete indices (called item code), and then employs these indices to look up the code embedding table for deriving item representations. Such a scheme can be denoted as "text $\Longrightarrow$ code $\Longrightarrow$ representation". Based on this representation scheme, we further propose an enhanced contrastive pre-training approach, using semi-synthetic and mixed-domain code representations as hard negatives. Furthermore, we design a new cross-domain fine-tuning method based on a differentiable permutation-based network. Extensive experiments conducted on six public benchmarks demonstrate the effectiveness of the proposed approach, in both cross-domain and cross-platform settings. Code and pre-trained model are available at: https://github.com/RUCAIBox/VQ-Rec.
|
2312.03562
|
Yassine Himeur
|
El Ouanas Belabbaci, Mohammed Khammari, Ammar Chouchane, Mohcene
Bessaoudi, Abdelmalik Ouamane, Yassine Himeur, Shadi Atalla and Wathiq
Mansoor
|
Enhancing Kinship Verification through Multiscale Retinex and Combined
Deep-Shallow features
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The challenge of kinship verification from facial images represents a
cutting-edge and formidable frontier in the realms of pattern recognition and
computer vision. This area of study holds a myriad of potential applications,
spanning from image annotation and forensic analysis to social media research.
Our research stands out by integrating a preprocessing method named Multiscale
Retinex (MSR), which elevates image quality and amplifies contrast, ultimately
bolstering the end results. Strategically, our methodology capitalizes on the
harmonious blend of deep and shallow texture descriptors, merging them
proficiently at the score level through the Logistic Regression (LR) method. To
elucidate, we employ the Local Phase Quantization (LPQ) descriptor to extract
shallow texture characteristics. For deep feature extraction, we turn to the prowess of the VGG16 model, a pre-trained convolutional neural network (CNN). The robustness and efficacy of our method have been put to the
test through meticulous experiments on three rigorous kinship datasets, namely:
Cornell Kin Face, UB Kin Face, and TS Kin Face.
|
[
{
"created": "Wed, 6 Dec 2023 15:52:31 GMT",
"version": "v1"
}
] |
2023-12-07
|
[
[
"Belabbaci",
"El Ouanas",
""
],
[
"Khammari",
"Mohammed",
""
],
[
"Chouchane",
"Ammar",
""
],
[
"Bessaoudi",
"Mohcene",
""
],
[
"Ouamane",
"Abdelmalik",
""
],
[
"Himeur",
"Yassine",
""
],
[
"Atalla",
"Shadi",
""
],
[
"Mansoor",
"Wathiq",
""
]
] |
The challenge of kinship verification from facial images represents a cutting-edge and formidable frontier in the realms of pattern recognition and computer vision. This area of study holds a myriad of potential applications, spanning from image annotation and forensic analysis to social media research. Our research stands out by integrating a preprocessing method named Multiscale Retinex (MSR), which elevates image quality and amplifies contrast, ultimately bolstering the end results. Strategically, our methodology capitalizes on the harmonious blend of deep and shallow texture descriptors, merging them proficiently at the score level through the Logistic Regression (LR) method. To elucidate, we employ the Local Phase Quantization (LPQ) descriptor to extract shallow texture characteristics. For deep feature extraction, we turn to the prowess of the VGG16 model, a pre-trained convolutional neural network (CNN). The robustness and efficacy of our method have been put to the test through meticulous experiments on three rigorous kinship datasets, namely: Cornell Kin Face, UB Kin Face, and TS Kin Face.
|
1804.09635
|
Dongyeop Kang
|
Dongyeop Kang and Waleed Ammar and Bhavana Dalvi and Madeleine van
Zuylen and Sebastian Kohlmeier and Eduard Hovy and Roy Schwartz
|
A Dataset of Peer Reviews (PeerRead): Collection, Insights and NLP
Applications
|
NAACL 2018
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Peer reviewing is a central component in the scientific publishing process.
We present the first public dataset of scientific peer reviews available for
research purposes (PeerRead v1), providing an opportunity to study this
important artifact. The dataset consists of 14.7K paper drafts and the
corresponding accept/reject decisions in top-tier venues including ACL, NIPS
and ICLR. The dataset also includes 10.7K textual peer reviews written by
experts for a subset of the papers. We describe the data collection process and
report interesting observed phenomena in the peer reviews. We also propose two
novel NLP tasks based on this dataset and provide simple baseline models. In
the first task, we show that simple models can predict whether a paper is
accepted with up to 21% error reduction compared to the majority baseline. In
the second task, we predict the numerical scores of review aspects and show
that simple models can outperform the mean baseline for aspects with high
variance such as 'originality' and 'impact'.
|
[
{
"created": "Wed, 25 Apr 2018 15:41:15 GMT",
"version": "v1"
}
] |
2018-04-26
|
[
[
"Kang",
"Dongyeop",
""
],
[
"Ammar",
"Waleed",
""
],
[
"Dalvi",
"Bhavana",
""
],
[
"van Zuylen",
"Madeleine",
""
],
[
"Kohlmeier",
"Sebastian",
""
],
[
"Hovy",
"Eduard",
""
],
[
"Schwartz",
"Roy",
""
]
] |
Peer reviewing is a central component in the scientific publishing process. We present the first public dataset of scientific peer reviews available for research purposes (PeerRead v1), providing an opportunity to study this important artifact. The dataset consists of 14.7K paper drafts and the corresponding accept/reject decisions in top-tier venues including ACL, NIPS and ICLR. The dataset also includes 10.7K textual peer reviews written by experts for a subset of the papers. We describe the data collection process and report interesting observed phenomena in the peer reviews. We also propose two novel NLP tasks based on this dataset and provide simple baseline models. In the first task, we show that simple models can predict whether a paper is accepted with up to 21% error reduction compared to the majority baseline. In the second task, we predict the numerical scores of review aspects and show that simple models can outperform the mean baseline for aspects with high variance such as 'originality' and 'impact'.
|
1411.3229
|
Tian Cao
|
Tian Cao, Christopher Zach, Shannon Modla, Debbie Powell, Kirk Czymmek
and Marc Niethammer
|
Multi-modal Image Registration for Correlative Microscopy
|
24 pages
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Correlative microscopy is a methodology combining the functionality of light
microscopy with the high resolution of electron microscopy and other microscopy
technologies. Image registration for correlative microscopy is quite
challenging because it is a multi-modal, multi-scale and multi-dimensional
registration problem. In this report, I introduce two methods of image
registration for correlative microscopy. The first method is based on fiducials
(beads). I generate landmarks from the fiducials and compute the similarity
transformation matrix based on three pairs of nearest corresponding landmarks.
A least-squares matching process is applied afterwards to further refine the
registration. The second method is inspired by the image analogies approach. I
introduce the sparse representation model into image analogies. I first train
representative image patches (dictionaries) for pre-registered datasets from
two different modalities, and then I use the sparse coding technique to
transfer a given image to a predicted image from one modality to another based
on the learned dictionaries. The final image registration is between the
predicted image and the original image corresponding to the given image in the
different modality. The method transforms a multi-modal registration problem to
a mono-modal one. I test my approaches on Transmission Electron Microscopy
(TEM) and confocal microscopy images. Experimental results of the methods are
also shown in this report.
|
[
{
"created": "Wed, 12 Nov 2014 16:32:17 GMT",
"version": "v1"
},
{
"created": "Tue, 13 Jan 2015 15:44:08 GMT",
"version": "v2"
}
] |
2015-01-14
|
[
[
"Cao",
"Tian",
""
],
[
"Zach",
"Christopher",
""
],
[
"Modla",
"Shannon",
""
],
[
"Powell",
"Debbie",
""
],
[
"Czymmek",
"Kirk",
""
],
[
"Niethammer",
"Marc",
""
]
] |
Correlative microscopy is a methodology combining the functionality of light microscopy with the high resolution of electron microscopy and other microscopy technologies. Image registration for correlative microscopy is quite challenging because it is a multi-modal, multi-scale and multi-dimensional registration problem. In this report, I introduce two methods of image registration for correlative microscopy. The first method is based on fiducials (beads). I generate landmarks from the fiducials and compute the similarity transformation matrix based on three pairs of nearest corresponding landmarks. A least-squares matching process is applied afterwards to further refine the registration. The second method is inspired by the image analogies approach. I introduce the sparse representation model into image analogies. I first train representative image patches (dictionaries) for pre-registered datasets from two different modalities, and then I use the sparse coding technique to transfer a given image to a predicted image from one modality to another based on the learned dictionaries. The final image registration is between the predicted image and the original image corresponding to the given image in the different modality. The method transforms a multi-modal registration problem to a mono-modal one. I test my approaches on Transmission Electron Microscopy (TEM) and confocal microscopy images. Experimental results of the methods are also shown in this report.
|
2406.03248
|
Xiaoyu Zhang
|
Xiaoyu Zhang, Yishan Li, Jiayin Wang, Bowen Sun, Weizhi Ma, Peijie
Sun, Min Zhang
|
Large Language Models as Evaluators for Recommendation Explanations
| null | null | null | null |
cs.IR cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The explainability of recommender systems has attracted significant attention
in academia and industry. Many efforts have been made for explainable
recommendations, yet evaluating the quality of the explanations remains a
challenging and unresolved issue. In recent years, leveraging LLMs as
evaluators presents a promising avenue in Natural Language Processing tasks
(e.g., sentiment classification, information extraction), as they demonstrate
strong capabilities in instruction following and common-sense reasoning.
However, evaluating recommendation explanatory texts is different from these
NLP tasks, as its criteria are related to human perceptions and are usually
subjective. In this paper, we investigate whether LLMs can serve as evaluators
of recommendation explanations. To answer the question, we utilize real user
feedback on explanations given from previous work and additionally collect
third-party annotations and LLM evaluations. We design and apply a 3-level meta
evaluation strategy to measure the correlation between evaluator labels and the
ground truth provided by users. Our experiments reveal that LLMs, such as GPT4,
can provide comparable evaluations with appropriate prompts and settings. We
also provide further insights into combining human labels with the LLM
evaluation process and utilizing ensembles of multiple heterogeneous LLM
evaluators to enhance the accuracy and stability of evaluations. Our study
verifies that utilizing LLMs as evaluators can be an accurate, reproducible and
cost-effective solution for evaluating recommendation explanation texts. Our
code is available at https://github.com/Xiaoyu-SZ/LLMasEvaluator.
|
[
{
"created": "Wed, 5 Jun 2024 13:23:23 GMT",
"version": "v1"
},
{
"created": "Thu, 6 Jun 2024 04:31:37 GMT",
"version": "v2"
}
] |
2024-06-07
|
[
[
"Zhang",
"Xiaoyu",
""
],
[
"Li",
"Yishan",
""
],
[
"Wang",
"Jiayin",
""
],
[
"Sun",
"Bowen",
""
],
[
"Ma",
"Weizhi",
""
],
[
"Sun",
"Peijie",
""
],
[
"Zhang",
"Min",
""
]
] |
The explainability of recommender systems has attracted significant attention in academia and industry. Many efforts have been made for explainable recommendations, yet evaluating the quality of the explanations remains a challenging and unresolved issue. In recent years, leveraging LLMs as evaluators presents a promising avenue in Natural Language Processing tasks (e.g., sentiment classification, information extraction), as they demonstrate strong capabilities in instruction following and common-sense reasoning. However, evaluating recommendation explanatory texts is different from these NLP tasks, as its criteria are related to human perceptions and are usually subjective. In this paper, we investigate whether LLMs can serve as evaluators of recommendation explanations. To answer the question, we utilize real user feedback on explanations given from previous work and additionally collect third-party annotations and LLM evaluations. We design and apply a 3-level meta evaluation strategy to measure the correlation between evaluator labels and the ground truth provided by users. Our experiments reveal that LLMs, such as GPT4, can provide comparable evaluations with appropriate prompts and settings. We also provide further insights into combining human labels with the LLM evaluation process and utilizing ensembles of multiple heterogeneous LLM evaluators to enhance the accuracy and stability of evaluations. Our study verifies that utilizing LLMs as evaluators can be an accurate, reproducible and cost-effective solution for evaluating recommendation explanation texts. Our code is available at https://github.com/Xiaoyu-SZ/LLMasEvaluator.
|
2304.10140
|
Maksymilian Wojnar
|
Wojciech Ciezobka, Maksymilian Wojnar, Katarzyna Kosek-Szott, Szymon
Szott, Krzysztof Rusek
|
FTMRate: Collision-Immune Distance-based Data Rate Selection for IEEE
802.11 Networks
|
11 pages, 8 figures, 5 tables
|
IEEE 24th International Symposium on a World of Wireless, Mobile
and Multimedia Networks (WoWMoM), Boston, MA, USA, 2023, pp. 242-251
|
10.1109/WoWMoM57956.2023.00039
| null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Data rate selection algorithms for Wi-Fi devices are an important area of
research because they directly impact performance. Most of the proposals are
based on measuring the transmission success probability for a given data rate.
In dense scenarios, however, this probing approach will fail because frame
collisions are misinterpreted as erroneous data rate selection. We propose
FTMRate which uses the fine timing measurement (FTM) feature, recently
introduced in IEEE 802.11. FTM allows stations to measure their distance from
the AP. We argue that knowledge of the distance from the receiver can be useful
in determining which data rate to use. We apply statistical learning (a form of
machine learning) to estimate the distance based on measurements, estimate
channel quality from the distance, and select data rates based on channel
quality. We evaluate three distinct estimation approaches: exponential
smoothing, Kalman filter, and particle filter. We present a performance
evaluation of the three variants of FTMRate and show, in several dense and
mobile (though line-of-sight only) scenarios, that it can outperform two
benchmarks and provide close to optimal results in IEEE 802.11ax networks.
|
[
{
"created": "Thu, 20 Apr 2023 08:02:14 GMT",
"version": "v1"
},
{
"created": "Wed, 9 Aug 2023 08:10:40 GMT",
"version": "v2"
}
] |
2023-08-10
|
[
[
"Ciezobka",
"Wojciech",
""
],
[
"Wojnar",
"Maksymilian",
""
],
[
"Kosek-Szott",
"Katarzyna",
""
],
[
"Szott",
"Szymon",
""
],
[
"Rusek",
"Krzysztof",
""
]
] |
Data rate selection algorithms for Wi-Fi devices are an important area of research because they directly impact performance. Most of the proposals are based on measuring the transmission success probability for a given data rate. In dense scenarios, however, this probing approach will fail because frame collisions are misinterpreted as erroneous data rate selection. We propose FTMRate which uses the fine timing measurement (FTM) feature, recently introduced in IEEE 802.11. FTM allows stations to measure their distance from the AP. We argue that knowledge of the distance from the receiver can be useful in determining which data rate to use. We apply statistical learning (a form of machine learning) to estimate the distance based on measurements, estimate channel quality from the distance, and select data rates based on channel quality. We evaluate three distinct estimation approaches: exponential smoothing, Kalman filter, and particle filter. We present a performance evaluation of the three variants of FTMRate and show, in several dense and mobile (though line-of-sight only) scenarios, that it can outperform two benchmarks and provide close to optimal results in IEEE 802.11ax networks.
|
2009.10430
|
Zhi Chen
|
Zhi Chen, Lu Chen, Yanbin Zhao, Su Zhu and Kai Yu
|
Dual Learning for Dialogue State Tracking
|
7 pages, 4 figures
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In task-oriented multi-turn dialogue systems, dialogue state refers to a
compact representation of the user goal in the context of dialogue history.
Dialogue state tracking (DST) is to estimate the dialogue state at each turn.
Due to the dependency on complicated dialogue history contexts, DST data
annotation is more expensive than single-sentence language understanding, which
makes the task more challenging. In this work, we formulate DST as a sequence
generation problem and propose a novel dual-learning framework to make full use
of unlabeled data. In the dual-learning framework, there are two agents: the
primal tracker agent (utterance-to-state generator) and the dual utterance
generator agent (state-to-utterance generator). Compared with the traditional supervised learning framework, dual learning can iteratively update both agents through the reconstruction error and reward signal respectively without labeled data. The reward sparsity problem is hard to solve in previous DST methods. In this work, the reformulation of DST as a sequence generation model effectively alleviates this problem. We call this primal tracker agent dual-DST. Experimental results on the MultiWOZ2.1 dataset show that the proposed dual-DST works very well, especially when labeled data is limited. It achieves comparable performance to the system where labeled data is fully used.
|
[
{
"created": "Tue, 22 Sep 2020 10:15:09 GMT",
"version": "v1"
}
] |
2020-09-23
|
[
[
"Chen",
"Zhi",
""
],
[
"Chen",
"Lu",
""
],
[
"Zhao",
"Yanbin",
""
],
[
"Zhu",
"Su",
""
],
[
"Yu",
"Kai",
""
]
] |
In task-oriented multi-turn dialogue systems, dialogue state refers to a compact representation of the user goal in the context of dialogue history. Dialogue state tracking (DST) is to estimate the dialogue state at each turn. Due to the dependency on complicated dialogue history contexts, DST data annotation is more expensive than single-sentence language understanding, which makes the task more challenging. In this work, we formulate DST as a sequence generation problem and propose a novel dual-learning framework to make full use of unlabeled data. In the dual-learning framework, there are two agents: the primal tracker agent (utterance-to-state generator) and the dual utterance generator agent (state-to-utterance generator). Compared with the traditional supervised learning framework, dual learning can iteratively update both agents through the reconstruction error and reward signal respectively without labeled data. The reward sparsity problem is hard to solve in previous DST methods. In this work, the reformulation of DST as a sequence generation model effectively alleviates this problem. We call this primal tracker agent dual-DST. Experimental results on the MultiWOZ2.1 dataset show that the proposed dual-DST works very well, especially when labeled data is limited. It achieves comparable performance to the system where labeled data is fully used.
|
2109.14309
|
Vladimir V'yugin
|
Vladimir V'yugin and Vladimir Trunov
|
Online Aggregation of Probability Forecasts with Confidence
|
32 pages, 10 figures
|
Pattern Recognition, Volume 121, January 2022, 108193
| null | null |
cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The paper presents numerical experiments and some theoretical developments in
prediction with expert advice (PEA). One experiment deals with predicting
electricity consumption depending on temperature and uses real data. As the
pattern of dependence can change with season and time of the day, the domain
naturally admits PEA formulation with experts having different ``areas of
expertise''. We consider the case where several competing methods produce
online predictions in the form of probability distribution functions. The
dissimilarity between a probability forecast and an outcome is measured by a
loss function (scoring rule). A popular example of scoring rule for continuous
outcomes is Continuous Ranked Probability Score (CRPS). In this paper the
problem of combining probabilistic forecasts is considered in the PEA
framework. We show that CRPS is a mixable loss function and then the
time-independent upper bound for the regret of the Vovk aggregating algorithm
using CRPS as a loss function can be obtained. Also, we incorporate a
``smooth'' version of the method of specialized experts in this scheme which
allows us to combine the probabilistic predictions of the specialized experts
with overlapping domains of their competence.
|
[
{
"created": "Wed, 29 Sep 2021 09:49:16 GMT",
"version": "v1"
}
] |
2021-09-30
|
[
[
"V'yugin",
"Vladimir",
""
],
[
"Trunov",
"Vladimir",
""
]
] |
The paper presents numerical experiments and some theoretical developments in prediction with expert advice (PEA). One experiment deals with predicting electricity consumption depending on temperature and uses real data. As the pattern of dependence can change with season and time of the day, the domain naturally admits PEA formulation with experts having different ``areas of expertise''. We consider the case where several competing methods produce online predictions in the form of probability distribution functions. The dissimilarity between a probability forecast and an outcome is measured by a loss function (scoring rule). A popular example of scoring rule for continuous outcomes is Continuous Ranked Probability Score (CRPS). In this paper the problem of combining probabilistic forecasts is considered in the PEA framework. We show that CRPS is a mixable loss function, so that a time-independent upper bound on the regret of the Vovk aggregating algorithm using CRPS as a loss function can be obtained. Also, we incorporate a ``smooth'' version of the method of specialized experts in this scheme which allows us to combine the probabilistic predictions of the specialized experts with overlapping domains of their competence.
|
1401.3837
|
Moshe Babaioff
|
Moshe Babaioff, Michal Feldman, Noam Nisan
|
Mixed Strategies in Combinatorial Agency
| null |
Journal Of Artificial Intelligence Research, Volume 38, pages
339-369, 2010
|
10.1613/jair.2961
| null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In many multiagent domains a set of agents exert effort towards a joint
outcome, yet the individual effort levels cannot be easily observed. A typical
example for such a scenario is routing in communication networks, where the
sender can only observe whether the packet reached its destination, but often
has no information about the actions of the intermediate routers, which
influences the final outcome. We study a setting where a principal needs to
motivate a team of agents whose combination of hidden efforts stochastically
determines an outcome. In a companion paper we devise and study a basic
combinatorial agency model for this setting, where the principal is restricted
to inducing a pure Nash equilibrium. Here we study various implications of this
restriction. First, we show that, in contrast to the case of observable
efforts, inducing a mixed-strategies equilibrium may be beneficial for the
principal. Second, we present a sufficient condition for technologies for which
no gain can be generated. Third, we bound the principal's gain for various
families of technologies. Finally, we study the robustness of mixed equilibria
to coalitional deviations and the computational hardness of the optimal mixed
equilibria.
|
[
{
"created": "Thu, 16 Jan 2014 04:51:30 GMT",
"version": "v1"
}
] |
2014-01-17
|
[
[
"Babaioff",
"Moshe",
""
],
[
"Feldman",
"Michal",
""
],
[
"Nisan",
"Noam",
""
]
] |
In many multiagent domains a set of agents exert effort towards a joint outcome, yet the individual effort levels cannot be easily observed. A typical example for such a scenario is routing in communication networks, where the sender can only observe whether the packet reached its destination, but often has no information about the actions of the intermediate routers, which influences the final outcome. We study a setting where a principal needs to motivate a team of agents whose combination of hidden efforts stochastically determines an outcome. In a companion paper we devise and study a basic combinatorial agency model for this setting, where the principal is restricted to inducing a pure Nash equilibrium. Here we study various implications of this restriction. First, we show that, in contrast to the case of observable efforts, inducing a mixed-strategies equilibrium may be beneficial for the principal. Second, we present a sufficient condition for technologies for which no gain can be generated. Third, we bound the principal's gain for various families of technologies. Finally, we study the robustness of mixed equilibria to coalitional deviations and the computational hardness of the optimal mixed equilibria.
|
2312.03335
|
Yao Zhang
|
Yao Zhang, Xiaofei Xie, Yi Li, Sen Chen, Cen Zhang, Xiaohong Li
|
EndWatch: A Practical Method for Detecting Non-Termination in Real-World
Software
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Detecting non-termination is crucial for ensuring program correctness and
security, such as preventing denial-of-service attacks. While termination
analysis has been studied for many years, existing methods have limited
scalability and are only effective on small programs. To address this issue, we
propose a practical termination checking technique, called EndWatch, for
detecting non-termination caused by infinite loops through testing.
Specifically, we introduce two methods to generate non-termination oracles
based on checking state revisits, i.e., if the program returns to a previously
visited state at the same program location, it does not terminate. The
non-termination oracles can be incorporated into testing tools (e.g., AFL used
in this paper) to detect non-termination in large programs. For linear loops,
we perform symbolic execution on individual loops to infer State Revisit
Conditions (SRCs) and instrument SRCs into target loops. For non-linear loops,
we instrument target loops for checking concrete state revisits during
execution. We evaluated EndWatch on standard benchmarks with small-sized
programs and real-world projects with large-sized programs. The evaluation
results show that EndWatch is more effective than the state-of-the-art tools on
standard benchmarks (detecting 87% of non-terminating programs while the best
baseline detects only 67%), and useful in detecting non-termination in
real-world projects (detecting 90% of known non-termination CVEs and 4 unknown
bugs).
|
[
{
"created": "Wed, 6 Dec 2023 08:13:30 GMT",
"version": "v1"
}
] |
2023-12-07
|
[
[
"Zhang",
"Yao",
""
],
[
"Xie",
"Xiaofei",
""
],
[
"Li",
"Yi",
""
],
[
"Chen",
"Sen",
""
],
[
"Zhang",
"Cen",
""
],
[
"Li",
"Xiaohong",
""
]
] |
Detecting non-termination is crucial for ensuring program correctness and security, such as preventing denial-of-service attacks. While termination analysis has been studied for many years, existing methods have limited scalability and are only effective on small programs. To address this issue, we propose a practical termination checking technique, called EndWatch, for detecting non-termination caused by infinite loops through testing. Specifically, we introduce two methods to generate non-termination oracles based on checking state revisits, i.e., if the program returns to a previously visited state at the same program location, it does not terminate. The non-termination oracles can be incorporated into testing tools (e.g., AFL used in this paper) to detect non-termination in large programs. For linear loops, we perform symbolic execution on individual loops to infer State Revisit Conditions (SRCs) and instrument SRCs into target loops. For non-linear loops, we instrument target loops for checking concrete state revisits during execution. We evaluated EndWatch on standard benchmarks with small-sized programs and real-world projects with large-sized programs. The evaluation results show that EndWatch is more effective than the state-of-the-art tools on standard benchmarks (detecting 87% of non-terminating programs while the best baseline detects only 67%), and useful in detecting non-termination in real-world projects (detecting 90% of known non-termination CVEs and 4 unknown bugs).
|
1909.12913
|
Prabin Sharma
|
Prabin Sharma, Shubham Joshi, Subash Gautam, Sneha Maharjan, Salik Ram
Khanal, Manuel Cabral Reis, Jo\~ao Barroso, V\'itor Manuel de Jesus Filipe
|
Student Engagement Detection Using Emotion Analysis, Eye Tracking and
Head Movement with Machine Learning
|
9 pages, 9 Figures, 2 tables
| null | null | null |
cs.CV cs.CY cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
With the increase of distance learning, in general, and e-learning, in
particular, having a system capable of determining the engagement of students
is of primordial importance, and one of the biggest challenges, for teachers, researchers, and policy makers alike. Here, we present a system to detect
the engagement level of the students. It uses only information provided by the
typical built-in web-camera present in a laptop computer, and was designed to
work in real time. We combine information about the movements of the eyes and
head, and facial emotions to produce a concentration index with three classes
of engagement: "very engaged", "nominally engaged" and "not engaged at all".
The system was tested in a typical e-learning scenario, and the results show
that it correctly identifies each period of time where students were "very
engaged", "nominally engaged" and "not engaged at all". Additionally, the
results also show that the students with best scores also have higher
concentration indexes.
|
[
{
"created": "Wed, 18 Sep 2019 15:46:48 GMT",
"version": "v1"
},
{
"created": "Fri, 8 Nov 2019 09:28:54 GMT",
"version": "v2"
},
{
"created": "Mon, 28 Sep 2020 16:56:12 GMT",
"version": "v3"
},
{
"created": "Sat, 26 Dec 2020 19:05:15 GMT",
"version": "v4"
},
{
"created": "Thu, 23 Mar 2023 16:43:29 GMT",
"version": "v5"
}
] |
2023-03-24
|
[
[
"Sharma",
"Prabin",
""
],
[
"Joshi",
"Shubham",
""
],
[
"Gautam",
"Subash",
""
],
[
"Maharjan",
"Sneha",
""
],
[
"Khanal",
"Salik Ram",
""
],
[
"Reis",
"Manuel Cabral",
""
],
[
"Barroso",
"João",
""
],
[
"Filipe",
"Vítor Manuel de Jesus",
""
]
] |
With the increase of distance learning, in general, and e-learning, in particular, having a system capable of determining the engagement of students is of primordial importance, and one of the biggest challenges, for teachers, researchers, and policy makers alike. Here, we present a system to detect the engagement level of the students. It uses only information provided by the typical built-in web-camera present in a laptop computer, and was designed to work in real time. We combine information about the movements of the eyes and head, and facial emotions to produce a concentration index with three classes of engagement: "very engaged", "nominally engaged" and "not engaged at all". The system was tested in a typical e-learning scenario, and the results show that it correctly identifies each period of time where students were "very engaged", "nominally engaged" and "not engaged at all". Additionally, the results show that the students with the best scores also have higher concentration indexes.
|
1903.02725
|
Wyatt Felt
|
Wyatt Felt
|
An Inverting-Tube Clutching Contractile Soft Pneumatic Actuator
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents the simple synergistic combination of a novel contracting
soft pneumatic actuator with a soft clutch (linear brake). The device is
designated the Inverting-tube Vacuum ACtuator with Clutch (InVACC). The
actuator alone (no clutch) is designated "InVAC" and uses vacuum pressure to
invert a thin tube into a shorter section of reinforced flexible tubing. The
inverting tube acts as a rolling diaphragm and a flexible tendon. This allows the
actuator to contract to one third of its extended length. The
contractile-force-per-unit-pressure is approximately constant over the stroke.
The theoretical maximum of this force is the product of the vacuum gauge
pressure and half the interior cross-sectional area of the tube. The
experimental evaluation revealed hysteretic losses that depend on the actuation
direction and rate. With -81 kPa, the prototype produced 12.7 N of tension
during extension and 7.5 N during retraction. The reinforced tubing of the
InVAC was integrated with an inner collapsible "clutching" tube to create an
InVACC. The clutch is engaged by applying a positive pressure between the
reinforced tube and the clutching tube, which collapses the clutching tube onto
the flexible tendon. With a pressure of 50 kPa, the InVACC clutch tested in
this work was able to support a peak tensile load of 120 N before slipping.
Though the fatigue life of the current prototypes is limited, improved
fabrication methods for this novel actuator/clutch concept will enable new
applications in robotics and wearable haptic systems.
|
[
{
"created": "Thu, 7 Mar 2019 04:39:17 GMT",
"version": "v1"
}
] |
2019-03-08
|
[
[
"Felt",
"Wyatt",
""
]
] |
This paper presents the simple synergistic combination of a novel contracting soft pneumatic actuator with a soft clutch (linear brake). The device is designated the Inverting-tube Vacuum ACtuator with Clutch (InVACC). The actuator alone (no clutch) is designated "InVAC" and uses vacuum pressure to invert a thin tube into a shorter section of reinforced flexible tubing. The inverting tube acts as a rolling diaphragm and a flexible tendon. This allows the actuator to contract to one third of its extended length. The contractile-force-per-unit-pressure is approximately constant over the stroke. The theoretical maximum of this force is the product of the vacuum gauge pressure and half the interior cross-sectional area of the tube. The experimental evaluation revealed hysteretic losses that depend on the actuation direction and rate. With -81 kPa, the prototype produced 12.7 N of tension during extension and 7.5 N during retraction. The reinforced tubing of the InVAC was integrated with an inner collapsible "clutching" tube to create an InVACC. The clutch is engaged by applying a positive pressure between the reinforced tube and the clutching tube, which collapses the clutching tube onto the flexible tendon. With a pressure of 50 kPa, the InVACC clutch tested in this work was able to support a peak tensile load of 120 N before slipping. Though the fatigue life of the current prototypes is limited, improved fabrication methods for this novel actuator/clutch concept will enable new applications in robotics and wearable haptic systems.
|
2404.06619
|
Jane Dwivedi-Yu
|
Jane Dwivedi-Yu and Raaz Dwivedi and Timo Schick
|
FairPair: A Robust Evaluation of Biases in Language Models through
Paired Perturbations
| null | null | null | null |
cs.CL cs.CY cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The accurate evaluation of differential treatment of specific groups by language models is critical to ensuring a positive and safe user experience. An
ideal evaluation should have the properties of being robust, extendable to new
groups or attributes, and being able to capture biases that appear in typical
usage (rather than just extreme, rare cases). Relatedly, bias evaluation should
surface not only egregious biases but also ones that are subtle and
commonplace, such as a likelihood for talking about appearances with regard to
women. We present FairPair, an evaluation framework for assessing differential
treatment that occurs during ordinary usage. FairPair operates through
counterfactual pairs, but crucially, the paired continuations are grounded in
the same demographic group, which ensures equivalent comparison. Additionally,
unlike prior work, our method factors in the inherent variability that comes
from the generation process itself by measuring the sampling variability. We
present an evaluation of several commonly used generative models and a
qualitative analysis that indicates a preference for discussing family and
hobbies with regard to women.
|
[
{
"created": "Tue, 9 Apr 2024 21:09:22 GMT",
"version": "v1"
}
] |
2024-04-11
|
[
[
"Dwivedi-Yu",
"Jane",
""
],
[
"Dwivedi",
"Raaz",
""
],
[
"Schick",
"Timo",
""
]
] |
The accurate evaluation of differential treatment of specific groups by language models is critical to ensuring a positive and safe user experience. An ideal evaluation should have the properties of being robust, extendable to new groups or attributes, and being able to capture biases that appear in typical usage (rather than just extreme, rare cases). Relatedly, bias evaluation should surface not only egregious biases but also ones that are subtle and commonplace, such as a likelihood for talking about appearances with regard to women. We present FairPair, an evaluation framework for assessing differential treatment that occurs during ordinary usage. FairPair operates through counterfactual pairs, but crucially, the paired continuations are grounded in the same demographic group, which ensures equivalent comparison. Additionally, unlike prior work, our method factors in the inherent variability that comes from the generation process itself by measuring the sampling variability. We present an evaluation of several commonly used generative models and a qualitative analysis that indicates a preference for discussing family and hobbies with regard to women.
|
2309.07933
|
EPTCS
|
Rob van Glabbeek (University of Edinburgh), Peter H\"ofner (Australian
National University, Canberra), Weiyou Wang (Australian National University,
Canberra)
|
A Lean-Congruence Format for EP-Bisimilarity
|
In Proceedings EXPRESS/SOS2023, arXiv:2309.05788. A full version of
this paper, enriched with two appendices, is available at arXiv:2308.16350
|
EPTCS 387, 2023, pp. 59-75
|
10.4204/EPTCS.387.6
|
EPTCS 387-6
|
cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
Enabling preserving bisimilarity is a refinement of strong bisimilarity that
preserves safety as well as liveness properties. To define it properly,
labelled transition systems needed to be upgraded with a successor relation,
capturing concurrency between transitions enabled in the same state. We enrich
the well-known De Simone format to handle inductive definitions of this
successor relation. We then establish that ep-bisimilarity is a congruence for
the operators, as well as lean congruence for recursion, for all (enriched) De
Simone languages.
|
[
{
"created": "Wed, 13 Sep 2023 20:51:32 GMT",
"version": "v1"
}
] |
2023-09-18
|
[
[
"van Glabbeek",
"Rob",
"",
"University of Edinburgh"
],
[
"Höfner",
"Peter",
"",
"Australian\n National University, Canberra"
],
[
"Wang",
"Weiyou",
"",
"Australian National University,\n Canberra"
]
] |
Enabling preserving bisimilarity is a refinement of strong bisimilarity that preserves safety as well as liveness properties. To define it properly, labelled transition systems needed to be upgraded with a successor relation, capturing concurrency between transitions enabled in the same state. We enrich the well-known De Simone format to handle inductive definitions of this successor relation. We then establish that ep-bisimilarity is a congruence for the operators, as well as lean congruence for recursion, for all (enriched) De Simone languages.
|
2311.13385
|
Bo Zhao
|
Yuxin Du, Fan Bai, Tiejun Huang, Bo Zhao
|
SegVol: Universal and Interactive Volumetric Medical Image Segmentation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Precise image segmentation provides clinical study with instructive
information. Despite the remarkable progress achieved in medical image
segmentation, there is still an absence of a 3D foundation segmentation model
that can segment a wide range of anatomical categories with easy user
interaction. In this paper, we propose a 3D foundation segmentation model,
named SegVol, supporting universal and interactive volumetric medical image
segmentation. By scaling up training data to 90K unlabeled Computed Tomography
(CT) volumes and 6K labeled CT volumes, this foundation model supports the
segmentation of over 200 anatomical categories using semantic and spatial
prompts. Extensive experiments on 10 internal validation tasks and 18 external
validation tasks verify that SegVol outperforms the state of the art by a large
margin. Through its capacity to provide precise volumetric segmentation across
various anatomical categories, SegVol has the potential to accelerate
advancements in medical imaging diagnosis and facilitate treatment
optimization. The model and code are publicly available at:
https://github.com/BAAI-DCAI/SegVol.
|
[
{
"created": "Wed, 22 Nov 2023 13:27:36 GMT",
"version": "v1"
},
{
"created": "Thu, 29 Feb 2024 03:27:28 GMT",
"version": "v2"
},
{
"created": "Tue, 26 Mar 2024 10:21:46 GMT",
"version": "v3"
}
] |
2024-03-27
|
[
[
"Du",
"Yuxin",
""
],
[
"Bai",
"Fan",
""
],
[
"Huang",
"Tiejun",
""
],
[
"Zhao",
"Bo",
""
]
] |
Precise image segmentation provides clinical study with instructive information. Despite the remarkable progress achieved in medical image segmentation, there is still an absence of a 3D foundation segmentation model that can segment a wide range of anatomical categories with easy user interaction. In this paper, we propose a 3D foundation segmentation model, named SegVol, supporting universal and interactive volumetric medical image segmentation. By scaling up training data to 90K unlabeled Computed Tomography (CT) volumes and 6K labeled CT volumes, this foundation model supports the segmentation of over 200 anatomical categories using semantic and spatial prompts. Extensive experiments on 10 internal validation tasks and 18 external validation tasks verify that SegVol outperforms the state of the art by a large margin. Through its capacity to provide precise volumetric segmentation across various anatomical categories, SegVol has the potential to accelerate advancements in medical imaging diagnosis and facilitate treatment optimization. The model and code are publicly available at: https://github.com/BAAI-DCAI/SegVol.
|
2104.00876
|
Aryia Dattamajumdar
|
Aryia Dattamajumdar
|
An early warning AI-powered portable system to reduce workload and
inspect environmental damage after natural disasters
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
With 1.3 million household fires, 3,400 civilian deaths, and 23 billion dollars in damage, a fire department is called to respond every 24 seconds. Many firefighters are injured during search and rescue operations due to hidden dangers. Additionally, fire-retardant water runoff pollution can threaten human health. My goal is to develop a system to monitor calamity-induced environmental damage to provide early intelligence to incident commanders. I have developed a
multi-spectral sensing system to inspect air and water quality for safer and
accessible hazardous environment operations. Key components include a) drone
mounted with four sensors (gas sensors, thermal camera, GPS sensor, visual
camera) and wireless communicator for inspection, b) AI-powered computer vision
base-station to identify targets, c) low-cost, portable, spectral water quality
analyzer and d) robotic retriever. The prototype demonstrates the potential for
safer and more accessible search and rescue operations for fire-fighters and
scientists. The gas sensor could identify thick smoke situations (thresholds >
400). The visual and thermal cameras detected hidden hot objects and sent images to an AI-powered analyzer to identify and localize the target with rescue GPS coordinates for robotic retrieval. Water quality was analyzed with spectral
signatures to indicate turbidity levels that correlate with potential
pollutants (threshold > 1.3). Prototype results were shown to the Sunnyvale
fire department and received encouraging feedback. Future goals include
monitoring firefighter health and overexertion with smart clothes.
|
[
{
"created": "Fri, 2 Apr 2021 03:51:47 GMT",
"version": "v1"
}
] |
2021-04-05
|
[
[
"Dattamajumdar",
"Aryia",
""
]
] |
With 1.3 million household fires, 3,400 civilian deaths, and 23 billion dollars in damage, a fire department is called to respond every 24 seconds. Many firefighters are injured during search and rescue operations due to hidden dangers. Additionally, fire-retardant water runoff pollution can threaten human health. My goal is to develop a system to monitor calamity-induced environmental damage to provide early intelligence to incident commanders. I have developed a multi-spectral sensing system to inspect air and water quality for safer and accessible hazardous environment operations. Key components include a) drone mounted with four sensors (gas sensors, thermal camera, GPS sensor, visual camera) and wireless communicator for inspection, b) AI-powered computer vision base-station to identify targets, c) low-cost, portable, spectral water quality analyzer and d) robotic retriever. The prototype demonstrates the potential for safer and more accessible search and rescue operations for fire-fighters and scientists. The gas sensor could identify thick smoke situations (thresholds > 400). The visual and thermal cameras detected hidden hot objects and sent images to an AI-powered analyzer to identify and localize the target with rescue GPS coordinates for robotic retrieval. Water quality was analyzed with spectral signatures to indicate turbidity levels that correlate with potential pollutants (threshold > 1.3). Prototype results were shown to the Sunnyvale fire department and received encouraging feedback. Future goals include monitoring firefighter health and overexertion with smart clothes.
|
2408.06324
|
Spyros Kontogiannis
|
Spyros Kontogiannis and Andreas Paraskevopoulos and Christos
Zaroliagis
|
Online Vehicle Routing with Pickups and Deliveries under Time-Dependent
Travel-Time Constraints
|
25 pages, extended version of the ATMOS 2024 accepted paper
| null | null | null |
cs.CE cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
The Vehicle Routing Problem with pickups, deliveries and spatiotemporal
service constraints ($VRPPDSTC$) is a quite challenging algorithmic problem
that can be dealt with in either an offline or an online fashion. In this work,
we focus on a generalization, called $VRPPDSTCtd$, in which the travel-time
metric is \emph{time-dependent}: the traversal-time per road segment
(represented as a directed arc) is determined by some function of the
departure-time from its tail towards its head. Time-dependence makes things
much more complicated, even for the simpler problem of computing
earliest-arrival-time paths which is a crucial subroutine to be solved
(numerous times) by $VRPPDSTCtd$ schedulers.
We propose two \emph{online} schedulers of requests to workers: one is a
time-dependent variant of the classical Plain-Insertion heuristic, and the
other extends it to incorporate forecasts of future demands for service. We
enrich these two online schedulers with two additional heuristics, one
targeting distance-balanced assignments of workloads to the workers and
another that makes local-search improvements to the produced solutions.
We conduct a careful experimental evaluation of the proposed algorithms on a
real-world instance, with or without these heuristics, and compare their
quality with human-curated assignments provided by professional experts (human
operators at actual pickup-and-delivery control centers), and also with
feasible solutions constructed from a relaxed MILP formulation of $VRPPDSTCtd$,
which is also introduced in this paper.
Our findings are quite encouraging, demonstrating that the proposed
algorithms produce solutions which (i) are significant improvements over the
human-curated assignments, and (ii) have overall quality quite close to that
of the (extremely time-consuming) solutions provided by an exact solver for the
MILP formulation.
|
[
{
"created": "Mon, 12 Aug 2024 17:43:48 GMT",
"version": "v1"
},
{
"created": "Tue, 13 Aug 2024 06:32:22 GMT",
"version": "v2"
}
] |
2024-08-14
|
[
[
"Kontogiannis",
"Spyros",
""
],
[
"Paraskevopoulos",
"Andreas",
""
],
[
"Zaroliagis",
"Christos",
""
]
] |
The Vehicle Routing Problem with pickups, deliveries and spatiotemporal service constraints ($VRPPDSTC$) is a quite challenging algorithmic problem that can be dealt with in either an offline or an online fashion. In this work, we focus on a generalization, called $VRPPDSTCtd$, in which the travel-time metric is \emph{time-dependent}: the traversal-time per road segment (represented as a directed arc) is determined by some function of the departure-time from its tail towards its head. Time-dependence makes things much more complicated, even for the simpler problem of computing earliest-arrival-time paths which is a crucial subroutine to be solved (numerous times) by $VRPPDSTCtd$ schedulers. We propose two \emph{online} schedulers of requests to workers: one is a time-dependent variant of the classical Plain-Insertion heuristic, and the other extends it to incorporate forecasts of future demands for service. We enrich these two online schedulers with two additional heuristics, one targeting distance-balanced assignments of workloads to the workers and another that makes local-search improvements to the produced solutions. We conduct a careful experimental evaluation of the proposed algorithms on a real-world instance, with or without these heuristics, and compare their quality with human-curated assignments provided by professional experts (human operators at actual pickup-and-delivery control centers), and also with feasible solutions constructed from a relaxed MILP formulation of $VRPPDSTCtd$, which is also introduced in this paper. Our findings are quite encouraging, demonstrating that the proposed algorithms produce solutions which (i) are significant improvements over the human-curated assignments, and (ii) have overall quality quite close to that of the (extremely time-consuming) solutions provided by an exact solver for the MILP formulation.
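The core of the time-dependent Plain-Insertion idea can be sketched as follows; this is a simplification (single stops instead of pickup-and-delivery pairs, no time windows), and the rush-hour travel-time function is a toy assumption:

```python
def arrival_times(route, start, tt):
    """tt(u, v, t) = traversal time of arc (u, v) when departing u at time t."""
    t, times = start, [start]
    for u, v in zip(route, route[1:]):
        t += tt(u, v, t)
        times.append(t)
    return times

def best_insertion(route, stop, start, tt):
    """Try every position and return (position, duration) minimising the
    time-dependent route duration after inserting `stop`."""
    best = None
    for i in range(1, len(route) + 1):
        cand = route[:i] + [stop] + route[i:]
        dur = arrival_times(cand, start, tt)[-1] - start
        if best is None or dur < best[1]:
            best = (i, dur)
    return best

def tt(u, v, t):  # toy time-dependent metric: arcs are slower during 08:00-10:00
    return abs(u - v) * (2.0 if 8 <= t % 24 < 10 else 1.0)

print(best_insertion([0, 5, 9], 3, start=7.5, tt=tt))  # -> (1, 9.0)
```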
|
1005.2405
|
Ozan Candogan
|
Ozan Candogan, Ishai Menache, Asuman Ozdaglar, Pablo A. Parrilo
|
Flows and Decompositions of Games: Harmonic and Potential Games
| null |
Mathematics of Operations Research, Vol. 36, No. 3, pp. 474-503,
2011
|
10.1287/moor.1110.0500
| null |
cs.GT math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we introduce a novel flow representation for finite games in
strategic form. This representation allows us to develop a canonical direct sum
decomposition of an arbitrary game into three components, which we refer to as
the potential, harmonic and nonstrategic components. We analyze natural classes
of games that are induced by this decomposition, and in particular, focus on
games with no harmonic component and games with no potential component. We show
that the first class corresponds to the well-known potential games. We refer to
the second class of games as harmonic games, and study the structural and
equilibrium properties of this new class of games. Intuitively, the potential
component of a game captures interactions that can equivalently be represented
as a common interest game, while the harmonic part represents the conflicts
between the interests of the players. We make this intuition precise by
studying the properties of these two classes, and show that indeed they have
quite distinct and remarkable characteristics. For instance, while finite
potential games always have pure Nash equilibria, harmonic games generically
never do. Moreover, we show that the nonstrategic component does not affect the
equilibria of a game, but plays a fundamental role in their efficiency
properties, thus decoupling the location of equilibria and their payoff-related
properties. Exploiting the properties of the decomposition framework, we obtain
explicit expressions for the projections of games onto the subspaces of
potential and harmonic games. This enables an extension of the properties of
potential and harmonic games to "nearby" games. We exemplify this point by
showing that the set of approximate equilibria of an arbitrary game can be
characterized through the equilibria of its projection onto the set of
potential games.
|
[
{
"created": "Thu, 13 May 2010 19:55:59 GMT",
"version": "v1"
},
{
"created": "Fri, 25 Jun 2010 03:22:21 GMT",
"version": "v2"
}
] |
2015-03-17
|
[
[
"Candogan",
"Ozan",
""
],
[
"Menache",
"Ishai",
""
],
[
"Ozdaglar",
"Asuman",
""
],
[
"Parrilo",
"Pablo A.",
""
]
] |
In this paper we introduce a novel flow representation for finite games in strategic form. This representation allows us to develop a canonical direct sum decomposition of an arbitrary game into three components, which we refer to as the potential, harmonic and nonstrategic components. We analyze natural classes of games that are induced by this decomposition, and in particular, focus on games with no harmonic component and games with no potential component. We show that the first class corresponds to the well-known potential games. We refer to the second class of games as harmonic games, and study the structural and equilibrium properties of this new class of games. Intuitively, the potential component of a game captures interactions that can equivalently be represented as a common interest game, while the harmonic part represents the conflicts between the interests of the players. We make this intuition precise by studying the properties of these two classes, and show that indeed they have quite distinct and remarkable characteristics. For instance, while finite potential games always have pure Nash equilibria, harmonic games generically never do. Moreover, we show that the nonstrategic component does not affect the equilibria of a game, but plays a fundamental role in their efficiency properties, thus decoupling the location of equilibria and their payoff-related properties. Exploiting the properties of the decomposition framework, we obtain explicit expressions for the projections of games onto the subspaces of potential and harmonic games. This enables an extension of the properties of potential and harmonic games to "nearby" games. We exemplify this point by showing that the set of approximate equilibria of an arbitrary game can be characterized through the equilibria of its projection onto the set of potential games.
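The contrast between the two classes can be checked numerically on the smallest examples; the sketch below (a plain brute-force search, not from the paper) finds the pure Nash equilibria of a coordination game (a potential game) and of matching pennies (a harmonic game):

```python
import itertools
import numpy as np

def pure_nash(payoffs):
    """payoffs[i] is an n-dim array; entry s gives player i's payoff at profile s."""
    shape = payoffs[0].shape
    return [
        s for s in itertools.product(*map(range, shape))
        if all(
            payoffs[i][s] >= max(payoffs[i][s[:i] + (a,) + s[i + 1:]]
                                 for a in range(shape[i]))
            for i in range(len(payoffs))
        )
    ]

coordination = [np.array([[1, 0], [0, 1]])] * 2                          # potential
pennies = [np.array([[1, -1], [-1, 1]]), np.array([[-1, 1], [1, -1]])]   # harmonic

print(pure_nash(coordination))  # [(0, 0), (1, 1)]
print(pure_nash(pennies))       # []
```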
|
1103.2240
|
Xin Kang
|
Xin Kang, Rui Zhang, and Mehul Motani
|
Price-Based Resource Allocation for Spectrum-Sharing Femtocell Networks:
A Stackelberg Game Approach
|
27 pages, 7 figures, Submitted to JSAC
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper investigates the price-based resource allocation strategies for
the uplink transmission of a spectrum-sharing femtocell network, in which a
central macrocell is underlaid with distributed femtocells, all operating over
the same frequency band as the macrocell. Assuming that the macrocell base
station (MBS) protects itself by pricing the interference from the femtocell
users, a Stackelberg game is formulated to study the joint utility maximization
of the macrocell and the femtocells subject to a maximum tolerable interference
power constraint at the MBS. In particular, two practical femtocell channel
models are investigated: a sparsely deployed scenario for rural areas and a
densely deployed scenario for urban areas. For each scenario, two pricing
schemes are proposed: uniform pricing and non-uniform pricing. Then, the
Stackelberg equilibria of these proposed games are studied, and an effective
distributed interference price bargaining algorithm with guaranteed
convergence is proposed for the uniform-pricing case. Finally, numerical
examples are presented to validate the proposed schemes. It is shown that the
proposed algorithms are effective in resource allocation and macrocell
protection, requiring minimal network overhead, for spectrum-sharing-based
two-tier femtocell networks.
|
[
{
"created": "Fri, 11 Mar 2011 10:44:51 GMT",
"version": "v1"
}
] |
2015-03-19
|
[
[
"Kang",
"Xin",
""
],
[
"Zhang",
"Rui",
""
],
[
"Motani",
"Mehul",
""
]
] |
This paper investigates the price-based resource allocation strategies for the uplink transmission of a spectrum-sharing femtocell network, in which a central macrocell is underlaid with distributed femtocells, all operating over the same frequency band as the macrocell. Assuming that the macrocell base station (MBS) protects itself by pricing the interference from the femtocell users, a Stackelberg game is formulated to study the joint utility maximization of the macrocell and the femtocells subject to a maximum tolerable interference power constraint at the MBS. In particular, two practical femtocell channel models are investigated: a sparsely deployed scenario for rural areas and a densely deployed scenario for urban areas. For each scenario, two pricing schemes are proposed: uniform pricing and non-uniform pricing. Then, the Stackelberg equilibria of these proposed games are studied, and an effective distributed interference price bargaining algorithm with guaranteed convergence is proposed for the uniform-pricing case. Finally, numerical examples are presented to validate the proposed schemes. It is shown that the proposed algorithms are effective in resource allocation and macrocell protection, requiring minimal network overhead, for spectrum-sharing-based two-tier femtocell networks.
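A toy rendering of the uniform-pricing Stackelberg structure; the concave log-utility, the channel gains, and the grid search below are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

h = np.array([2.0, 1.0, 0.5])   # assumed direct channel gains of femtocell users
g = np.array([0.2, 0.5, 1.0])   # assumed interference gains to the MBS
Q = 1.0                         # maximum tolerable interference power at the MBS

def follower_power(lam):
    # each follower solves argmax_p log(1 + h p) - lam * g * p
    return np.maximum(0.0, 1.0 / (lam * g) - 1.0 / h)

# the leader (MBS) picks the price maximising revenue over feasible prices
feasible = [lam for lam in np.linspace(0.1, 20.0, 2000)
            if g @ follower_power(lam) <= Q]
best = max(feasible, key=lambda lam: lam * (g @ follower_power(lam)))
print(f"price {best:.2f}, interference {g @ follower_power(best):.3f} <= {Q}")
```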
|
2306.08894
|
Alena Chang
|
Alena Chang, Yinxin Wan, Guoliang Xue, Arunabha Sen
|
Entanglement Distribution in Satellite-based Dynamic Quantum Networks
| null | null | null | null |
cs.NI quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Low Earth Orbit (LEO) satellites present a compelling opportunity for the
establishment of a global quantum information network. However, satellite-based
entanglement distribution from a networking perspective has not been fully
investigated. Existing works often do not account for satellite movement over
time when distributing entanglement and/or often do not permit entanglement
distribution along inter-satellite links, which are two shortcomings we address
in this paper. We first define a system model which considers both satellite
movement over time and inter-satellite links. We next formulate the optimal
entanglement distribution (OED) problem under this system model and show how to
convert the OED problem in a dynamic physical network to one in a static
logical graph which can be used to solve the OED problem in the dynamic
physical network. We then propose a polynomial time greedy algorithm for
computing satellite-assisted multi-hop entanglement paths. We also design an
integer linear programming (ILP)-based algorithm to compute optimal solutions
as a baseline to study the performance of our greedy algorithm. We present
evaluation results to demonstrate the advantage of our model and algorithms.
|
[
{
"created": "Thu, 15 Jun 2023 06:56:26 GMT",
"version": "v1"
}
] |
2023-06-16
|
[
[
"Chang",
"Alena",
""
],
[
"Wan",
"Yinxin",
""
],
[
"Xue",
"Guoliang",
""
],
[
"Sen",
"Arunabha",
""
]
] |
Low Earth Orbit (LEO) satellites present a compelling opportunity for the establishment of a global quantum information network. However, satellite-based entanglement distribution from a networking perspective has not been fully investigated. Existing works often do not account for satellite movement over time when distributing entanglement and/or often do not permit entanglement distribution along inter-satellite links, which are two shortcomings we address in this paper. We first define a system model which considers both satellite movement over time and inter-satellite links. We next formulate the optimal entanglement distribution (OED) problem under this system model and show how to convert the OED problem in a dynamic physical network to one in a static logical graph which can be used to solve the OED problem in the dynamic physical network. We then propose a polynomial time greedy algorithm for computing satellite-assisted multi-hop entanglement paths. We also design an integer linear programming (ILP)-based algorithm to compute optimal solutions as a baseline to study the performance of our greedy algorithm. We present evaluation results to demonstrate the advantage of our model and algorithms.
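One standard way to realize the "dynamic physical network to static logical graph" conversion is time expansion, sketched below (the paper's construction may differ in detail): a logical node is a (node, slot) pair, links usable in a slot connect nodes within that slot, and wait edges let a satellite hold its state into the next slot. The link schedule is an assumption:

```python
from collections import deque

links = {  # assumed time-varying links: slot -> set of undirected edges
    0: {("G1", "S1"), ("S1", "S2")},
    1: {("S1", "S2"), ("S2", "G2")},
    2: {("S2", "G2")},
}
T = len(links)

adj = {}
for t in range(T):
    for u, v in links[t]:
        adj.setdefault((u, t), []).append((v, t))
        adj.setdefault((v, t), []).append((u, t))
    if t + 1 < T:
        for v in {x for edge in links[t] for x in edge}:
            adj.setdefault((v, t), []).append((v, t + 1))  # store/wait edge

def find_path(src, dst):
    # plain BFS over the static logical graph; a weighted search over slot
    # indices would instead return the earliest-arriving path
    prev, queue = {(src, 0): None}, deque([(src, 0)])
    while queue:
        node = queue.popleft()
        if node[0] == dst:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in adj.get(node, []):
            if nxt not in prev:
                prev[nxt] = node
                queue.append(nxt)

print(find_path("G1", "G2"))
```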
|
2311.17498
|
Daniel Zentai
|
Daniel Zentai, Mihail Plesa, Robin Frot
|
A Multiparty Commutative Hashing Protocol based on the Discrete
Logarithm Problem
|
11 pages, 2 figures, presented at the 3rd International Conference on
Cryptography and Blockchain, published in Computer Science & Information
Technology (CS & IT), ISSN : 2231 - 5403, Volume 13, Number 21, November 2023
|
Computer Science & Information Technology (CS & IT), ISSN : 2231 -
5403, Volume 13, Number 21, November 2023
| null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Let $\mathcal{X}$ and $\mathcal{Y}$ be two sets and suppose that a set of
participants $P=\{P_1,P_2,\dots,P_n\}$ would like to calculate the keyed hash
value of some message $m\in\mathcal{X}$ known to a single participant in $P$
called the data owner. Also, suppose that each participant $P_i$ knows a secret
value $x_i\in\mathcal{X}$. In this paper, we will propose a protocol that
enables the participants in this setup to calculate the value
$y=H(m,x_1,x_2,\dots ,x_n)$ of a hash function
$H:\mathcal{X}^{n+1}\rightarrow\mathcal{Y}$ such that the function $H$ is a
one-way function, participants in $P\backslash\{P_i\}$ cannot obtain $x_i$,
participants other than the data owner cannot obtain $m$, and the hash value
$y=H(m,x_1,x_2,\dots ,x_n)$ remains the same regardless of the order of the secret
$x_i$ values.
|
[
{
"created": "Wed, 29 Nov 2023 10:19:34 GMT",
"version": "v1"
}
] |
2023-11-30
|
[
[
"Zentai",
"Daniel",
""
],
[
"Plesa",
"Mihail",
""
],
[
"Frot",
"Robin",
""
]
] |
Let $\mathcal{X}$ and $\mathcal{Y}$ be two sets and suppose that a set of participants $P=\{P_1,P_2,\dots,P_n\}$ would like to calculate the keyed hash value of some message $m\in\mathcal{X}$ known to a single participant in $P$ called the data owner. Also, suppose that each participant $P_i$ knows a secret value $x_i\in\mathcal{X}$. In this paper, we will propose a protocol that enables the participants in this setup to calculate the value $y=H(m,x_1,x_2,\dots ,x_n)$ of a hash function $H:\mathcal{X}^{n+1}\rightarrow\mathcal{Y}$ such that the function $H$ is a one-way function, participants in $P\backslash\{P_i\}$ cannot obtain $x_i$, participants other than the data owner cannot obtain $m$, and the hash value $y=H(m,x_1,x_2,\dots ,x_n)$ remains the same regardless of the order of the secret $x_i$ values.
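The commutativity requirement is easy to illustrate with a discrete-log-style construction (a sketch of why such exponentiation chains commute, not the paper's exact protocol; the toy parameters below are not secure):

```python
from functools import reduce

p = 2**127 - 1      # a Mersenne prime (toy choice; real deployments need care)
g = 3

def step(y, x):     # one participant raises the running value to its secret x
    return pow(y, x, p)

m = 123456789       # the data owner's message
y1 = reduce(step, [11, 22, 33], pow(g, m, p))
y2 = reduce(step, [33, 11, 22], pow(g, m, p))
assert y1 == y2     # (g^m)^(x1 x2 x3) is independent of the exponent order
print(hex(y1))
```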
|
1903.12483
|
Saulo Martiello Mastelini
|
Saulo Martiello Mastelini, Sylvio Barbon Jr., Andr\'e Carlos Ponce de
Leon Ferreira de Carvalho
|
Online Multi-target regression trees with stacked leaf models
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
One of the current challenges in machine learning is how to deal with data
coming at increasing rates in data streams. New predictive learning strategies
are needed to cope with the high throughput data and concept drift. One of the
data stream mining tasks where new learning strategies are needed is
multi-target regression, due to its applicability in a large number of
real-world problems. While reliable and effective learning strategies have been
proposed for batch multi-target regression, few have been proposed for
multi-target online learning in data streams. Besides, most of the existing
solutions do not consider the occurrence of inter-target correlations when
making predictions. In this work, we propose a novel online learning strategy
for multi-target regression in data streams. The proposed strategy extends
existing online decision tree learning algorithm to explore inter-target
dependencies while making predictions. For such, the proposed strategy, called
Stacked Single-target Hoeffding Tree (SST-HT), uses the inter-target
dependencies as an additional information source to enhance predictive
accuracy. Throughout an extensive experimental setup, we evaluate our proposal
against state-of-the-art decision tree-based algorithms for online multi-target
regression. According to the experimental results, SST-HT presents superior
predictive accuracy, with a small increase in the processing time and memory
requirements.
|
[
{
"created": "Fri, 29 Mar 2019 12:42:03 GMT",
"version": "v1"
},
{
"created": "Mon, 3 Jun 2019 12:21:44 GMT",
"version": "v2"
},
{
"created": "Mon, 1 Jul 2019 19:50:03 GMT",
"version": "v3"
},
{
"created": "Tue, 10 Mar 2020 17:59:39 GMT",
"version": "v4"
}
] |
2020-03-11
|
[
[
"Mastelini",
"Saulo Martiello",
""
],
[
"Barbon",
"Sylvio",
"Jr."
],
[
"de Carvalho",
"André Carlos Ponce de Leon Ferreira",
""
]
] |
One of the current challenges in machine learning is how to deal with data coming at increasing rates in data streams. New predictive learning strategies are needed to cope with the high throughput data and concept drift. One of the data stream mining tasks where new learning strategies are needed is multi-target regression, due to its applicability in a large number of real-world problems. While reliable and effective learning strategies have been proposed for batch multi-target regression, few have been proposed for multi-target online learning in data streams. Besides, most of the existing solutions do not consider the occurrence of inter-target correlations when making predictions. In this work, we propose a novel online learning strategy for multi-target regression in data streams. The proposed strategy extends an existing online decision tree learning algorithm to explore inter-target dependencies while making predictions. To this end, the proposed strategy, called Stacked Single-target Hoeffding Tree (SST-HT), uses the inter-target dependencies as an additional information source to enhance predictive accuracy. In an extensive experimental evaluation, we compare our proposal against state-of-the-art decision tree-based algorithms for online multi-target regression. According to the experimental results, SST-HT presents superior predictive accuracy, with a small increase in the processing time and memory requirements.
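A batch analogue of the stacked-leaf idea (SST-HT itself applies this online, inside Hoeffding-tree leaves): fit one base model per target, then feed all base predictions back as extra features so each target can exploit inter-target correlations. The data and the linear models below are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
Y = np.c_[X @ [1, 2, 0, 0], X @ [1, 2, 0, 0] + X[:, 2]]   # correlated targets

stage1 = [LinearRegression().fit(X, Y[:, j]) for j in range(Y.shape[1])]
P = np.column_stack([m.predict(X) for m in stage1])        # base predictions
stage2 = [LinearRegression().fit(np.c_[X, P], Y[:, j]) for j in range(Y.shape[1])]

X_new = rng.normal(size=(3, 4))
P_new = np.column_stack([m.predict(X_new) for m in stage1])
Y_hat = np.column_stack([m.predict(np.c_[X_new, P_new]) for m in stage2])
print(Y_hat.round(3))
```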
|
2212.08362
|
Bohan Zhao
|
Bohan Zhao, Wenfei Wu, Wei Xu
|
NetRPC: Enabling In-Network Computation in Remote Procedure Calls
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Prior work has shown that in-network computation (INC) significantly boosts
performance in many application scenarios, including distributed training,
MapReduce, agreement, and network monitoring. However, existing INC
programming is unfriendly to ordinary application developers, demanding
tedious network engineering details such as flow control, packet organization,
chip-specific programming languages, and ASIC architectures with many
limitations. We propose a
general INC-enabled RPC system, NetRPC. NetRPC provides a set of familiar and
lightweight interfaces for software developers to describe an INC application
using a traditional RPC programming model. NetRPC also proposes a
general-purpose INC implementation together with a set of optimization
techniques to guarantee the efficiency of various types of INC applications
running on a shared INC data plane. We conduct extensive experiments with
different types of applications on a real testbed. Results show that using
only about 5% or even fewer human-written lines of code, NetRPC can achieve
performance similar to the state-of-the-art INC solutions.
|
[
{
"created": "Fri, 16 Dec 2022 09:21:44 GMT",
"version": "v1"
}
] |
2022-12-19
|
[
[
"Zhao",
"Bohan",
""
],
[
"Wu",
"Wenfei",
""
],
[
"Xu",
"Wei",
""
]
] |
Prior work has shown that in-network computation (INC) significantly boosts performance in many application scenarios, including distributed training, MapReduce, agreement, and network monitoring. However, existing INC programming is unfriendly to ordinary application developers, demanding tedious network engineering details such as flow control, packet organization, chip-specific programming languages, and ASIC architectures with many limitations. We propose a general INC-enabled RPC system, NetRPC. NetRPC provides a set of familiar and lightweight interfaces for software developers to describe an INC application using a traditional RPC programming model. NetRPC also proposes a general-purpose INC implementation together with a set of optimization techniques to guarantee the efficiency of various types of INC applications running on a shared INC data plane. We conduct extensive experiments with different types of applications on a real testbed. Results show that using only about 5% or even fewer human-written lines of code, NetRPC can achieve performance similar to the state-of-the-art INC solutions.
|
2310.02990
|
Obinnaya Chikezie Victor Nwosu
|
Nwosu Obinnaya Chikezie Victor
|
Exploring API Capabilities with Fieldwire
|
12 pages, 9 Figures, 3 Tables, Table 3 KPI evaluation before and
after API
| null | null | null |
cs.SE cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
Fieldwire, a cloud-based construction management platform, has become a
pivotal tool in the construction industry.
features encompassing project management, task tracking, document management,
and collaboration. With the rise of Application Programming Interfaces (APIs)
in the software industry, Fieldwire has harnessed this trend to further empower
construction professionals. APIs act as bridges between different software
systems, and in Fieldwire's context, they hold the potential to integrate with
specialized construction tools, eliminating data silos, manual data entry, and
real-time information-sharing issues. This integration promises a streamlined
and efficient construction management process, saving both time and resources.
The research outlined in this abstract focuses on understanding Fieldwire's
API capabilities, exploring integration possibilities with various construction
tools, evaluating the impact of integration on efficiency and error reduction,
establishing best practices, and offering recommendations to construction
professionals. Python programming scripts are employed to visualize the
benefits of API integration. Empirical findings indicate that Fieldwire's API
significantly improves data accuracy, reduces project completion times by an
average of 20%, and garners high user satisfaction. Such results are paramount
in an industry reliant on precise data and efficient communication. This
research underscores the transformative potential of Fieldwire's API and its
relevance in modern construction management. It encourages construction
professionals to embrace API integration for enhanced project outcomes and
serves as an inspiration for software developers to innovate further in
construction technology. As the construction industry evolves, API integration
remains crucial for staying competitive and efficient.
|
[
{
"created": "Wed, 4 Oct 2023 17:26:44 GMT",
"version": "v1"
}
] |
2023-10-05
|
[
[
"Victor",
"Nwosu Obinnaya Chikezie",
""
]
] |
Fieldwire, a cloud-based construction management platform, has become a pivotal tool in the construction industry. It offers a comprehensive suite of features encompassing project management, task tracking, document management, and collaboration. With the rise of Application Programming Interfaces (APIs) in the software industry, Fieldwire has harnessed this trend to further empower construction professionals. APIs act as bridges between different software systems, and in Fieldwire's context, they hold the potential to integrate with specialized construction tools, eliminating data silos, manual data entry, and real-time information-sharing issues. This integration promises a streamlined and efficient construction management process, saving both time and resources. The research outlined in this abstract focuses on understanding Fieldwire's API capabilities, exploring integration possibilities with various construction tools, evaluating the impact of integration on efficiency and error reduction, establishing best practices, and offering recommendations to construction professionals. Python programming scripts are employed to visualize the benefits of API integration. Empirical findings indicate that Fieldwire's API significantly improves data accuracy, reduces project completion times by an average of 20%, and garners high user satisfaction. Such results are paramount in an industry reliant on precise data and efficient communication. This research underscores the transformative potential of Fieldwire's API and its relevance in modern construction management. It encourages construction professionals to embrace API integration for enhanced project outcomes and serves as an inspiration for software developers to innovate further in construction technology. As the construction industry evolves, API integration remains crucial for staying competitive and efficient.
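A hypothetical example of the kind of Python glue script the study describes; the base URL, endpoint path, token, and response fields below are placeholders, not Fieldwire's documented API:

```python
import requests

BASE = "https://example-fieldwire-api.invalid/api/v3"  # placeholder base URL
HEADERS = {"Authorization": "Token <YOUR_API_TOKEN>"}  # placeholder credential

resp = requests.get(f"{BASE}/projects/<PROJECT_ID>/tasks",
                    headers=HEADERS, timeout=10)
resp.raise_for_status()
tasks = resp.json()
open_tasks = [t for t in tasks if t.get("status") != "completed"]
print(f"{len(open_tasks)} open tasks out of {len(tasks)}")
```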
|
1709.01870
|
Sunrita Poddar
|
Sunrita Poddar, Mathews Jacob
|
Clustering of Data with Missing Entries using Non-convex Fusion
Penalties
| null | null | null | null |
cs.CV cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The presence of missing entries in data often creates challenges for pattern
recognition algorithms. Traditional algorithms for clustering data assume that
all the feature values are known for every data point. We propose a method to
cluster data in the presence of missing information. Unlike conventional
clustering techniques where every feature is known for each point, our
algorithm can handle cases where a few feature values are unknown for every
point. For this more challenging problem, we provide theoretical guarantees for
clustering using an $\ell_0$ fusion-penalty-based optimization problem.
Furthermore, we propose an algorithm to solve a relaxation of this problem
using saturating non-convex fusion penalties. It is observed that this
algorithm produces solutions that degrade gradually with an increase in the
fraction of missing feature values. We demonstrate the utility of the proposed
method using a simulated dataset, the Wine dataset and also an under-sampled
cardiac MRI dataset. It is shown that the proposed method is a promising
clustering technique for datasets with large fractions of missing entries.
|
[
{
"created": "Wed, 6 Sep 2017 15:59:57 GMT",
"version": "v1"
}
] |
2017-09-07
|
[
[
"Poddar",
"Sunrita",
""
],
[
"Jacob",
"Mathews",
""
]
] |
The presence of missing entries in data often creates challenges for pattern recognition algorithms. Traditional algorithms for clustering data assume that all the feature values are known for every data point. We propose a method to cluster data in the presence of missing information. Unlike conventional clustering techniques where every feature is known for each point, our algorithm can handle cases where a few feature values are unknown for every point. For this more challenging problem, we provide theoretical guarantees for clustering using an $\ell_0$ fusion-penalty-based optimization problem. Furthermore, we propose an algorithm to solve a relaxation of this problem using saturating non-convex fusion penalties. It is observed that this algorithm produces solutions that degrade gradually with an increase in the fraction of missing feature values. We demonstrate the utility of the proposed method using a simulated dataset, the Wine dataset and also an under-sampled cardiac MRI dataset. It is shown that the proposed method is a promising clustering technique for datasets with large fractions of missing entries.
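A simplified rendering of the saturating-fusion-penalty objective on partially observed data (an illustration, not the paper's algorithm): the data-fit term uses only observed entries, and the penalty min(d, gamma) saturates so far-apart points are not pulled together:

```python
import numpy as np

def objective(U, X, mask, lam=1.0, gamma=1.0):
    """U: per-point centre estimates; X: data with NaNs at missing entries;
    mask: boolean array, True where an entry is observed."""
    fit = np.sum(np.where(mask, (U - X) ** 2, 0.0))   # observed entries only
    fusion = sum(
        min(np.sum((U[i] - U[j]) ** 2), gamma)        # saturating fusion penalty
        for i in range(len(U)) for j in range(i + 1, len(U))
    )
    return fit + lam * fusion

X = np.array([[0.0, np.nan], [0.1, 0.2], [5.0, 5.1]])
mask = ~np.isnan(X)
U = np.nan_to_num(X)          # trivial initial estimate
print(objective(U, X, mask))  # 2.05: only the two nearby points fuse cheaply
```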
|
2405.17283
|
Anand Gopalakrishnan
|
Anand Gopalakrishnan, Aleksandar Stani\'c, J\"urgen Schmidhuber,
Michael Curtis Mozer
|
Recurrent Complex-Weighted Autoencoders for Unsupervised Object
Discovery
|
minor typo fixed
| null | null | null |
cs.LG cs.NE
|
http://creativecommons.org/licenses/by/4.0/
|
Current state-of-the-art synchrony-based models encode object bindings with
complex-valued activations and compute with real-valued weights in feedforward
architectures. We argue for the computational advantages of a recurrent
architecture with complex-valued weights. We propose a fully convolutional
autoencoder, SynCx, that performs iterative constraint satisfaction: at each
iteration, a hidden layer bottleneck encodes statistically regular
configurations of features in particular phase relationships; over iterations,
local constraints propagate and the model converges to a globally consistent
configuration of phase assignments. Binding is achieved simply by the
matrix-vector product operation between complex-valued weights and activations,
without the need for additional mechanisms that have been incorporated into
current synchrony-based models. SynCx outperforms or is strongly competitive
with current models for unsupervised object discovery. SynCx also avoids
certain systematic grouping errors of current models, such as the inability to
separate similarly colored objects without additional supervision.
|
[
{
"created": "Mon, 27 May 2024 15:47:03 GMT",
"version": "v1"
},
{
"created": "Tue, 28 May 2024 12:06:28 GMT",
"version": "v2"
}
] |
2024-05-29
|
[
[
"Gopalakrishnan",
"Anand",
""
],
[
"Stanić",
"Aleksandar",
""
],
[
"Schmidhuber",
"Jürgen",
""
],
[
"Mozer",
"Michael Curtis",
""
]
] |
Current state-of-the-art synchrony-based models encode object bindings with complex-valued activations and compute with real-valued weights in feedforward architectures. We argue for the computational advantages of a recurrent architecture with complex-valued weights. We propose a fully convolutional autoencoder, SynCx, that performs iterative constraint satisfaction: at each iteration, a hidden layer bottleneck encodes statistically regular configurations of features in particular phase relationships; over iterations, local constraints propagate and the model converges to a globally consistent configuration of phase assignments. Binding is achieved simply by the matrix-vector product operation between complex-valued weights and activations, without the need for additional mechanisms that have been incorporated into current synchrony-based models. SynCx outperforms or is strongly competitive with current models for unsupervised object discovery. SynCx also avoids certain systematic grouping errors of current models, such as the inability to separate similarly colored objects without additional supervision.
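A toy illustration of the binding mechanism: a plain matrix-vector product with complex weights and unit-magnitude activations whose phases mark object assignment (this toy is not the SynCx architecture itself):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4)) + 1j * rng.normal(size=(3, 4))  # complex weights
z = np.exp(1j * np.array([0.0, 0.0, np.pi, np.pi]))         # two phase groups

out = W @ z                     # binding via a single matrix-vector product
print("magnitudes:", np.abs(out).round(3))
print("phases:    ", np.angle(out).round(3))
```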
|
2310.17250
|
Zsolt Janos Viharos Dr.
|
Anh T. Hoang, Zsolt J. Viharos
|
IDENAS: Internal Dependency Exploration for Neural Architecture Search
|
57 pages, 19 figures + appendix, the related software code can be
found under the link: https://github.com/viharoszsolt/IDENAS
| null | null | null |
cs.LG cs.AI cs.NE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Machine learning is a powerful tool for extracting valuable information and
making various predictions from diverse datasets. Traditional algorithms rely
on well-defined input and output variables; however, there are scenarios where
the distinction between the input and output variables, and the underlying,
associated (input and output) layers of the model, is unknown. Neural
Architecture Search (NAS) and Feature Selection have emerged as promising
solutions in such scenarios. This research proposes IDENAS, an Internal
Dependency-based Exploration for Neural Architecture Search, integrating NAS
with feature selection. The methodology explores internal dependencies in the
complete parameter space for classification involving both 1D sensor and 2D
image data. IDENAS employs a modified encoder-decoder model and the
Sequential Forward Search (SFS) algorithm, combining input-output configuration
search with embedded feature selection. Experimental results demonstrate
IDENAS's superior performance in comparison to other algorithms, showcasing its
effectiveness in model development pipelines and automated machine learning. On
average, IDENAS achieved significant modelling improvements, underscoring its
contribution to advancing the state of the art in neural architecture search
and feature selection integration.
|
[
{
"created": "Thu, 26 Oct 2023 08:58:29 GMT",
"version": "v1"
}
] |
2023-10-27
|
[
[
"Hoang",
"Anh T.",
""
],
[
"Viharos",
"Zsolt J.",
""
]
] |
Machine learning is a powerful tool for extracting valuable information and making various predictions from diverse datasets. Traditional algorithms rely on well-defined input and output variables; however, there are scenarios where the distinction between the input and output variables, and the underlying, associated (input and output) layers of the model, is unknown. Neural Architecture Search (NAS) and Feature Selection have emerged as promising solutions in such scenarios. This research proposes IDENAS, an Internal Dependency-based Exploration for Neural Architecture Search, integrating NAS with feature selection. The methodology explores internal dependencies in the complete parameter space for classification involving both 1D sensor and 2D image data. IDENAS employs a modified encoder-decoder model and the Sequential Forward Search (SFS) algorithm, combining input-output configuration search with embedded feature selection. Experimental results demonstrate IDENAS's superior performance in comparison to other algorithms, showcasing its effectiveness in model development pipelines and automated machine learning. On average, IDENAS achieved significant modelling improvements, underscoring its contribution to advancing the state of the art in neural architecture search and feature selection integration.
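The Sequential Forward Search loop at the heart of the method is easy to sketch in isolation (IDENAS additionally searches input-output configurations with an encoder-decoder model, which is omitted here); the dataset and classifier below are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def sfs(X, y, k):
    """Greedily add the feature that most improves cross-validated accuracy."""
    chosen, remaining = [], list(range(X.shape[1]))
    while len(chosen) < k:
        scores = {
            f: cross_val_score(LogisticRegression(max_iter=1000),
                               X[:, chosen + [f]], y, cv=3).mean()
            for f in remaining
        }
        best = max(scores, key=scores.get)
        chosen.append(best)
        remaining.remove(best)
    return chosen

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = (X[:, 0] + 2 * X[:, 3] > 0).astype(int)  # only features 0 and 3 matter
print(sfs(X, y, 2))                          # typically [3, 0]
```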
|
2304.04437
|
Tobias Baumgartner
|
Tobias Baumgartner and Stefanie Klatt
|
Monocular 3D Human Pose Estimation for Sports Broadcasts using Partial
Sports Field Registration
|
accepted at "9th International Workshop on Computer Vision in Sports
(CVsports) at CVPR 2023"
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The filming of sporting events projects and flattens the movement of athletes
in the world onto a 2D broadcast image. The pixel locations of joints in these
images can be detected with high validity. Recovering the actual 3D movement of
the limbs (kinematics) of the athletes requires lifting these 2D pixel
locations back into a third dimension, implying a certain scene geometry. The
well-known line markings of sports fields allow for the calibration of the
camera and for determining the actual geometry of the scene. Close-up shots of
athletes are required to extract detailed kinematics, which in turn obfuscates
the pertinent field markers for camera calibration. We suggest partial sports
field registration, which determines a set of scene-consistent camera
calibrations up to a single degree of freedom. Through joint optimization of 3D
pose estimation and camera calibration, we demonstrate the successful
extraction of 3D running kinematics on a 400m track. In this work, we combine
advances in 2D human pose estimation and camera calibration via partial sports
field registration to demonstrate an avenue for collecting valid large-scale
kinematic datasets. We generate a synthetic dataset of more than 10k images in
Unreal Engine 5 with different viewpoints, running styles, and body types, to
show the limitations of existing monocular 3D HPE methods. Synthetic data and
code are available at https://github.com/tobibaum/PartialSportsFieldReg_3DHPE.
|
[
{
"created": "Mon, 10 Apr 2023 07:41:44 GMT",
"version": "v1"
}
] |
2023-04-11
|
[
[
"Baumgartner",
"Tobias",
""
],
[
"Klatt",
"Stefanie",
""
]
] |
The filming of sporting events projects and flattens the movement of athletes in the world onto a 2D broadcast image. The pixel locations of joints in these images can be detected with high validity. Recovering the actual 3D movement of the limbs (kinematics) of the athletes requires lifting these 2D pixel locations back into a third dimension, implying a certain scene geometry. The well-known line markings of sports fields allow for the calibration of the camera and for determining the actual geometry of the scene. Close-up shots of athletes are required to extract detailed kinematics, which in turn obfuscates the pertinent field markers for camera calibration. We suggest partial sports field registration, which determines a set of scene-consistent camera calibrations up to a single degree of freedom. Through joint optimization of 3D pose estimation and camera calibration, we demonstrate the successful extraction of 3D running kinematics on a 400m track. In this work, we combine advances in 2D human pose estimation and camera calibration via partial sports field registration to demonstrate an avenue for collecting valid large-scale kinematic datasets. We generate a synthetic dataset of more than 10k images in Unreal Engine 5 with different viewpoints, running styles, and body types, to show the limitations of existing monocular 3D HPE methods. Synthetic data and code are available at https://github.com/tobibaum/PartialSportsFieldReg_3DHPE.
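A one-dimensional toy version of "calibration up to a single degree of freedom": assume partial registration leaves only the focal length unknown, and recover it by minimising reprojection error against known field points (the paper optimises pose and calibration jointly; this 1-D slice is only for intuition):

```python
import numpy as np

pts3d = np.array([[0, 0, 10.0], [1, 0, 10], [0, 1, 12], [1, 1, 12]])

def project(pts, f):
    # pinhole projection with the principal point at the origin
    return f * pts[:, :2] / pts[:, 2:3]

f_true = 800.0
obs = project(pts3d, f_true) + np.random.default_rng(0).normal(0, 0.5, (4, 2))

fs = np.linspace(500, 1200, 701)
errs = [np.sum((project(pts3d, f) - obs) ** 2) for f in fs]
print("estimated f:", fs[int(np.argmin(errs))])   # close to 800
```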
|
1402.3821
|
Ylies Falcone
|
Tom Cornebize and Yli\`es Falcone
|
Efficient and Generalized Decentralized Monitoring of Regular Languages
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The main contribution of this paper is an efficient and generalized
decentralized monitoring algorithm that allows local monitors alone to detect
the satisfaction or violation of any regular specification in a system without
a central observation point. Our algorithm does not assume any form of
synchronization between system events and the communication of monitors, uses
state machines as the underlying mechanism for efficiency, and tries to keep
the number and size of messages exchanged between monitors to a minimum. We
provide a full implementation of the algorithm with an open-source benchmark
to evaluate its efficiency in terms of the number and size of exchanged
messages and the delay induced by communication between monitors.
Experimental results demonstrate the effectiveness of our algorithm, which
outperforms the previous most general one along several (new) monitoring
metrics.
|
[
{
"created": "Sun, 16 Feb 2014 17:49:57 GMT",
"version": "v1"
}
] |
2014-02-18
|
[
[
"Cornebize",
"Tom",
""
],
[
"Falcone",
"Yliès",
""
]
] |
The main contribution of this paper is an efficient and generalized decentralized monitoring algorithm that allows local monitors alone to detect the satisfaction or violation of any regular specification in a system without a central observation point. Our algorithm does not assume any form of synchronization between system events and the communication of monitors, uses state machines as the underlying mechanism for efficiency, and tries to keep the number and size of messages exchanged between monitors to a minimum. We provide a full implementation of the algorithm with an open-source benchmark to evaluate its efficiency in terms of the number and size of exchanged messages and the delay induced by communication between monitors. Experimental results demonstrate the effectiveness of our algorithm, which outperforms the previous most general one along several (new) monitoring metrics.
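A single local monitor for a regular specification is just a state machine over observed events, as in the sketch below; the paper's contribution, merging partial observations and verdicts exchanged between such monitors, is omitted:

```python
# DFA for "every 'req' is eventually followed by 'ack'" over a finite trace
SPEC = {
    ("idle", "req"): "waiting",
    ("idle", "ack"): "idle",
    ("waiting", "req"): "waiting",
    ("waiting", "ack"): "idle",
}
ACCEPTING = {"idle"}

def verdict(trace):
    state = "idle"
    for event in trace:
        state = SPEC[(state, event)]
    return "satisfied" if state in ACCEPTING else "violated"

print(verdict(["req", "ack", "req", "ack"]))  # satisfied
print(verdict(["req", "ack", "req"]))         # violated
```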
|
1609.05132
|
James Garland
|
James Garland, David Gregg
|
Low Complexity Multiply Accumulate Unit for Weight-Sharing Convolutional
Neural Networks
|
4 pages
| null |
10.1109/LCA.2017.2656880
| null |
cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Convolutional Neural Networks (CNNs) are one of the most successful deep
machine learning technologies for processing image, voice and video data. CNNs
require large amounts of processing capacity and memory, which can exceed the
resources of low power mobile and embedded systems. Several designs for
hardware accelerators have been proposed for CNNs which typically contain large
numbers of Multiply Accumulate (MAC) units. One approach to reducing data sizes
and memory traffic in CNN accelerators is "weight sharing", where the full
range of values in a trained CNN are put in bins and the bin index is stored
instead of the original weight value. In this paper we propose a novel MAC
circuit that exploits binning in weight-sharing CNNs. Rather than computing the
MAC directly, we instead count the frequency of each weight and place it in a
bin. We then compute the accumulated value in a subsequent multiply phase. This
allows hardware multipliers in the MAC circuit to be replaced with adders and
selection logic. Experiments show that for the same clock speed our approach
results in fewer gates, smaller logic, and reduced power.
|
[
{
"created": "Tue, 30 Aug 2016 13:41:41 GMT",
"version": "v1"
},
{
"created": "Sun, 15 Jan 2017 19:23:59 GMT",
"version": "v2"
},
{
"created": "Tue, 17 Jan 2017 14:36:21 GMT",
"version": "v3"
},
{
"created": "Thu, 19 Jan 2017 16:07:03 GMT",
"version": "v4"
}
] |
2017-08-17
|
[
[
"Garland",
"James",
""
],
[
"Gregg",
"David",
""
]
] |
Convolutional Neural Networks (CNNs) are one of the most successful deep machine learning technologies for processing image, voice and video data. CNNs require large amounts of processing capacity and memory, which can exceed the resources of low power mobile and embedded systems. Several designs for hardware accelerators have been proposed for CNNs which typically contain large numbers of Multiply Accumulate (MAC) units. One approach to reducing data sizes and memory traffic in CNN accelerators is "weight sharing", where the full range of values in a trained CNN are put in bins and the bin index is stored instead of the original weight value. In this paper we propose a novel MAC circuit that exploits binning in weight-sharing CNNs. Rather than computing the MAC directly, we instead count the frequency of each weight and place it in a bin. We then compute the accumulated value in a subsequent multiply phase. This allows hardware multipliers in the MAC circuit to be replaced with adders and selection logic. Experiments show that for the same clock speed our approach results in fewer gates, smaller logic, and reduced power.
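The paper's accumulate-then-multiply trick, shown in software form: with weight sharing, sum_i w[i]*x[i] equals sum_b value[b] * (sum of activations whose weight falls in bin b), so the per-input work needs only adders and selection, with one multiply per bin left over:

```python
import numpy as np

values = np.array([-0.5, 0.0, 0.25, 1.0])     # shared weight values (bins)
idx = np.array([3, 0, 2, 2, 1, 3])            # per-input bin indices
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])  # input activations

per_bin = np.zeros_like(values)
np.add.at(per_bin, idx, x)        # accumulate phase: adders and selection only
result = per_bin @ values         # multiply phase: one multiply per bin

assert np.isclose(result, values[idx] @ x)    # matches the direct MAC
print(result)                                 # 7.75
```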
|
cs/0702064
|
Terence H. Chan
|
Terence H. Chan
|
Group characterizable entropy functions
| null | null | null | null |
cs.IT math.IT
| null |
This paper studies properties of entropy functions that are induced by groups
and subgroups. We show that many information-theoretic properties of these
group-induced entropy functions also have corresponding group-theoretic
interpretations. We then propose an extension method to find outer bounds for
these group-induced entropy functions.
|
[
{
"created": "Sat, 10 Feb 2007 12:38:13 GMT",
"version": "v1"
}
] |
2007-07-13
|
[
[
"Chan",
"Terence H.",
""
]
] |
This paper studies properties of entropy functions that are induced by groups and subgroups. We show that many information-theoretic properties of these group-induced entropy functions also have corresponding group-theoretic interpretations. We then propose an extension method to find outer bounds for these group-induced entropy functions.
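For reference, a sketch of the standard construction behind group-induced entropy functions (the Chan-Yeung characterization, on top of which such outer bounds are built): for a finite group $G$ with subgroups $G_1,\dots,G_n$,

```latex
h(A) \;=\; \log \frac{|G|}{\left|\bigcap_{i \in A} G_i\right|},
\qquad \emptyset \neq A \subseteq \{1,\dots,n\},
```

so that statements about indices of subgroup intersections translate directly into information-theoretic statements about $h$.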
|
1010.0406
|
Shlomo Jozpeh
|
Uriel Feige, Shlomo Jozeph
|
Oblivious Algorithms for the Maximum Directed Cut Problem
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces a special family of randomized algorithms for Max DICUT
that we call oblivious algorithms. Let the bias of a vertex be the ratio
between the total weight of its outgoing edges and the total weight of all its
edges. An oblivious algorithm selects at random in which side of the cut to
place a vertex v, with probability that only depends on the bias of v,
independently of other vertices. The reader may observe that the algorithm that
ignores the bias and chooses each side with probability 1/2 has an
approximation ratio of 1/4, whereas no oblivious algorithm can have an
approximation ratio better than 1/2 (with an even directed cycle serving as a
negative example). We attempt to characterize the best approximation ratio
achievable by oblivious algorithms, and present results that are nearly tight.
The paper also discusses natural extensions of the notion of oblivious
algorithms, and extensions to the more general problem of Max 2-AND.
|
[
{
"created": "Sun, 3 Oct 2010 14:05:40 GMT",
"version": "v1"
}
] |
2010-10-05
|
[
[
"Feige",
"Uriel",
""
],
[
"Jozeph",
"Shlomo",
""
]
] |
This paper introduces a special family of randomized algorithms for Max DICUT that we call oblivious algorithms. Let the bias of a vertex be the ratio between the total weight of its outgoing edges and the total weight of all its edges. An oblivious algorithm selects at random in which side of the cut to place a vertex v, with probability that only depends on the bias of v, independently of other vertices. The reader may observe that the algorithm that ignores the bias and chooses each side with probability 1/2 has an approximation ratio of 1/4, whereas no oblivious algorithm can have an approximation ratio better than 1/2 (with an even directed cycle serving as a negative example). We attempt to characterize the best approximation ratio achievable by oblivious algorithms, and present results that are nearly tight. The paper also discusses natural extensions of the notion of oblivious algorithms, and extensions to the more general problem of Max 2-AND.
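A sketch of what an oblivious algorithm is (the two-level step function of the bias below is illustrative, not the paper's optimal selection function); the expected cut weight is estimated by sampling:

```python
import random

def bias(v, out_w, in_w):
    return out_w[v] / (out_w[v] + in_w[v])

def prob_source(b):                 # assumed selection function of the bias only
    return 0.1 if b < 0.5 else 0.9

def oblivious_dicut(edges, out_w, in_w, trials=10000, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        side = {v: rng.random() < prob_source(bias(v, out_w, in_w))
                for v in out_w}
        total += sum(w for u, v, w in edges if side[u] and not side[v])
    return total / trials           # estimated expected directed cut weight

edges = [("a", "b", 1.0), ("b", "c", 1.0), ("c", "a", 1.0)]
out_w = {"a": 1.0, "b": 1.0, "c": 1.0}
in_w = {"a": 1.0, "b": 1.0, "c": 1.0}
print(oblivious_dicut(edges, out_w, in_w))   # about 0.27 on this 3-cycle
```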
|
1810.01730
|
Ben Chugg
|
Ben Chugg, Takanori Maehara
|
Submodular Stochastic Probing with Prices
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce Stochastic Probing with Prices (SPP), a variant of the
Stochastic Probing (SP) model in which we must pay a price to probe an element.
An SPP problem involves two set systems $(N,\mathcal{I}_{in})$ and
$(N,\mathcal{I}_{out})$ where each $e\in N$ is active with probability $p_e$.
To discover whether $e$ is active, it must be probed by paying the price
$\Delta_e$. If it is probed and active, then it is irrevocably added to the
solution. Moreover, at all times, the set of probed elements must lie in
$\mathcal{I}_{out}$, and the solution (the set of probed and active elements)
must lie in $\mathcal{I}_{in}$. The goal is to maximize a set function $f$
minus the cost of the probes. We give a bi-criteria approximation algorithm
for the online version of this problem, in which the elements are shown to the
algorithm in a possibly adversarial order. Our results translate to
state-of-the-art approximations for the traditional (online) stochastic probing
problem.
|
[
{
"created": "Wed, 3 Oct 2018 13:31:07 GMT",
"version": "v1"
},
{
"created": "Sun, 14 Oct 2018 18:05:37 GMT",
"version": "v2"
},
{
"created": "Tue, 8 Jan 2019 09:14:25 GMT",
"version": "v3"
}
] |
2019-01-09
|
[
[
"Chugg",
"Ben",
""
],
[
"Maehara",
"Takanori",
""
]
] |
We introduce Stochastic Probing with Prices (SPP), a variant of the Stochastic Probing (SP) model in which we must pay a price to probe an element. An SPP problem involves two set systems $(N,\mathcal{I}_{in})$ and $(N,\mathcal{I}_{out})$ where each $e\in N$ is active with probability $p_e$. To discover whether $e$ is active, it must be probed by paying the price $\Delta_e$. If it is probed and active, then it is irrevocably added to the solution. Moreover, at all times, the set of probed elements must lie in $\mathcal{I}_{out}$, and the solution (the set of probed and active elements) must lie in $\mathcal{I}_{in}$. The goal is to maximize a set function $f$ minus the cost of the probes. We give a bi-criteria approximation algorithm for the online version of this problem, in which the elements are shown to the algorithm in a possibly adversarial order. Our results translate to state-of-the-art approximations for the traditional (online) stochastic probing problem.
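A toy simulation of the model's mechanics, with both set systems simplified to cardinality bounds and a modular objective (this naive value-ordered policy only illustrates the price/activation dynamics; it is not the paper's bi-criteria algorithm):

```python
import random

def probe(elems, p, price, value, max_probes, max_active, seed=0):
    rng = random.Random(seed)
    probed = active = 0
    payoff = 0.0
    order = sorted(elems, key=lambda e: p[e] * value[e] - price[e], reverse=True)
    for e in order:
        if probed >= max_probes or active >= max_active:
            break                        # outer/inner constraints exhausted
        if p[e] * value[e] <= price[e]:
            continue                     # not worth the probing price
        probed += 1
        payoff -= price[e]               # pay Delta_e to probe
        if rng.random() < p[e]:          # element turns out to be active
            active += 1
            payoff += value[e]           # irrevocably added to the solution
    return payoff

p = {"a": 0.9, "b": 0.5, "c": 0.2, "d": 0.8}
price = {"a": 0.3, "b": 0.1, "c": 0.5, "d": 0.2}
value = {e: 1.0 for e in p}
print(probe(list(p), p, price, value, max_probes=3, max_active=2))
```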
|
1802.10229
|
Yi Yang
|
Yi Yang, Ozan Irsoy, Kazi Shefaet Rahman
|
Collective Entity Disambiguation with Structured Gradient Tree Boosting
|
Accepted by NAACL 2018
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a gradient-tree-boosting-based structured learning model for
jointly disambiguating named entities in a document. Gradient tree boosting is
a widely used machine learning algorithm that underlies many top-performing
natural language processing systems. Surprisingly, most works limit the use of
gradient tree boosting as a tool for regular classification or regression
problems, despite the structured nature of language. To the best of our
knowledge, our work is the first to employ the structured gradient tree
boosting (SGTB) algorithm for collective entity disambiguation. By defining
global features over previous disambiguation decisions and jointly modeling
them with local features, our system is able to produce globally optimized
entity assignments for mentions in a document. Exact inference is prohibitively
expensive for our globally normalized model. To solve this problem, we propose
Bidirectional Beam Search with Gold path (BiBSG), an approximate inference
algorithm that is a variant of the standard beam search algorithm. BiBSG makes
use of global information from both past and future to perform better local
search. Experiments on standard benchmark datasets show that SGTB significantly
improves upon published results. Specifically, SGTB outperforms the previous
state-of-the-art neural system by nearly 1\% absolute accuracy on the popular
AIDA-CoNLL dataset.
|
[
{
"created": "Wed, 28 Feb 2018 02:01:30 GMT",
"version": "v1"
},
{
"created": "Tue, 24 Apr 2018 01:23:48 GMT",
"version": "v2"
}
] |
2018-04-25
|
[
[
"Yang",
"Yi",
""
],
[
"Irsoy",
"Ozan",
""
],
[
"Rahman",
"Kazi Shefaet",
""
]
] |
We present a gradient-tree-boosting-based structured learning model for jointly disambiguating named entities in a document. Gradient tree boosting is a widely used machine learning algorithm that underlies many top-performing natural language processing systems. Surprisingly, most works limit the use of gradient tree boosting as a tool for regular classification or regression problems, despite the structured nature of language. To the best of our knowledge, our work is the first to employ the structured gradient tree boosting (SGTB) algorithm for collective entity disambiguation. By defining global features over previous disambiguation decisions and jointly modeling them with local features, our system is able to produce globally optimized entity assignments for mentions in a document. Exact inference is prohibitively expensive for our globally normalized model. To solve this problem, we propose Bidirectional Beam Search with Gold path (BiBSG), an approximate inference algorithm that is a variant of the standard beam search algorithm. BiBSG makes use of global information from both past and future to perform better local search. Experiments on standard benchmark datasets show that SGTB significantly improves upon published results. Specifically, SGTB outperforms the previous state-of-the-art neural system by nearly 1\% absolute accuracy on the popular AIDA-CoNLL dataset.
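Standard left-to-right beam search over entity assignments, for orientation (BiBSG additionally searches bidirectionally and keeps the gold path in the beam during training, which is omitted here; the scoring functions are toy assumptions):

```python
def beam_search(mentions, candidates, local, global_, beam=3):
    """candidates[m]: entity list; local(m, e): mention-entity score;
    global_(prev, e): score of e given earlier disambiguation decisions."""
    beams = [((), 0.0)]
    for m in mentions:
        expanded = [
            (prev + (e,), score + local(m, e) + global_(prev, e))
            for prev, score in beams
            for e in candidates[m]
        ]
        beams = sorted(expanded, key=lambda b: b[1], reverse=True)[:beam]
    return beams[0]

mentions = ["Washington", "D.C."]
candidates = {"Washington": ["George_Washington", "Washington,_D.C."],
              "D.C.": ["Washington,_D.C."]}
local = lambda m, e: 1.0 if m in e.replace("_", " ") else 0.0
global_ = lambda prev, e: 0.5 if e in prev else 0.0   # reward coherent repeats
print(beam_search(mentions, candidates, local, global_))
```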
|
1504.06049
|
Michael Ruderman
|
Michael Ruderman
|
State-space formulation of scalar Preisach hysteresis model for rapid
computation in time domain
| null | null |
10.1016/j.apm.2015.09.065
| null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A state-space formulation of the classical scalar Preisach model (CSPM) of
hysteresis is proposed. The introduced state dynamics and memory interface
allow the use of the state equation, which is fast to compute, instead of the
original Preisach equation. The main benefit of the proposed modeling approach
is the reduced computational effort which requires only a single integration
over the instantaneous line segment in the Preisach plane. Numerical
evaluations of the computation time and model accuracy are provided in
comparison to the CSPM which is taken as a reference model.
|
[
{
"created": "Thu, 23 Apr 2015 06:15:08 GMT",
"version": "v1"
}
] |
2017-05-02
|
[
[
"Ruderman",
"Michael",
""
]
] |
A state-space formulation of the classical scalar Preisach model (CSPM) of hysteresis is proposed. The introduced state dynamics and memory interface allow the use of the state equation, which is fast to compute, instead of the original Preisach equation. The main benefit of the proposed modeling approach is the reduced computational effort which requires only a single integration over the instantaneous line segment in the Preisach plane. Numerical evaluations of the computation time and model accuracy are provided in comparison to the CSPM which is taken as a reference model.
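For context, the reference model the paper accelerates can be written as a weighted sum of relay hysterons with thresholds beta <= alpha; the sketch below is this classical discretized Preisach model, not the proposed state-space formulation, and the thresholds and weights are toy assumptions:

```python
import numpy as np

alphas = np.array([0.2, 0.5, 0.8])      # up-switching thresholds
betas = np.array([-0.2, -0.5, -0.8])    # down-switching thresholds
weights = np.array([0.5, 0.3, 0.2])
state = -np.ones_like(weights)          # each relay starts at -1

def preisach(u):
    global state
    state = np.where(u >= alphas, 1.0, np.where(u <= betas, -1.0, state))
    return weights @ state

for u in [0.0, 0.6, 0.0, -0.6, 0.0]:
    print(f"u={u:+.1f} -> y={preisach(u):+.2f}")   # output depends on history
```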
|
1802.06183
|
Ikechukwu Maduako
|
Maduako N. Ikechukwu, Francis I. Okeke
|
Towards Realisation of Heterogeneous Earth-Observation Sensor Database
Framework for the Sensor Observation Service based on PostGIS
| null | null | null | null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Environmental monitoring and management systems in most cases deal with
models and spatial analytics that involve the integration of in-situ and remote
Geosensor observations. In-situ sensor observations and those gathered by
remote sensors are usually provided by different databases and services in
real-time dynamic services such as the Geo-Web Services. Thus, data have to be
pulled from different databases and transferred over the network before they
are fused and processed on the service middleware. This process imposes a
heavy and unnecessary communication workload on the service middleware: large
rasters are downloaded from flat-file raster data sources each time a request
is made, and the resulting integration and geo-processing workload on the
service middleware could be better handled at the database level. This paper
therefore proposes the realization of a heterogeneous sensor database
framework based on PostGIS for the integration, geo-processing and spatial
analysis of remote and in-situ sensor observations at the database level.
Also discussed in this paper is how the framework can be integrated into the
Sensor Observation Service (SOS) to reduce the communication and processing
workload on the Geospatial Web Services, and to make query requests from the
user end much more flexible. Keywords: Earth-Observation, Heterogeneous
Earth-Observation Sensor Database, PostGIS, Sensor Observation Service.
|
[
{
"created": "Sat, 17 Feb 2018 03:52:21 GMT",
"version": "v1"
}
] |
2018-02-20
|
[
[
"Ikechukwu",
"Maduako N.",
""
],
[
"Okeke",
"Francis I.",
""
]
] |
Environmental monitoring and management systems in most cases deal with models and spatial analytics that involve the integration of in-situ and remote Geosensor observations. In-situ sensor observations and those gathered by remote sensors are usually provided by different databases and services in real-time dynamic services such as the Geo-Web Services. Thus, data have to be pulled from different databases and transferred over the network before they are fused and processed on the service middleware. This process places a massive and unnecessary communication workload on the service middleware: large raster downloads from flat-file raster data sources each time a request is made, and heavy integration and geo-processing workloads that could be better leveraged at the database level. This paper therefore proposes the realization of a heterogeneous sensor database framework based on PostGIS for integration, geo-processing and spatial analysis of remote and in-situ sensor observations at the database level. Also discussed in this paper is how the framework can be integrated in the Sensor Observation Service (SOS) to reduce communication and processing workload on the Geospatial Web Services and make query requests from the user end a lot more flexible. Keywords: Earth-Observation, Heterogeneous Earth-Observation Sensor Database, PostGIS, Sensor Observation Service.
|
2406.08862
|
Alexi Gladstone
|
Alexi Gladstone, Ganesh Nanduru, Md Mofijul Islam, Aman Chadha,
Jundong Li, Tariq Iqbal
|
Cognitively Inspired Energy-Based World Models
|
23 pages, 6 figures
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
One of the predominant methods for training world models is autoregressive
prediction in the output space of the next element of a sequence. In Natural
Language Processing (NLP), this takes the form of Large Language Models (LLMs)
predicting the next token; in Computer Vision (CV), this takes the form of
autoregressive models predicting the next frame/token/pixel. However, this
approach differs from human cognition in several respects. First, human
predictions about the future actively influence internal cognitive processes.
Second, humans naturally evaluate the plausibility of predictions regarding
future states. Third, building on this capability, humans allocate a dynamic
amount of time to a prediction by assessing when it is sufficient. This
adaptive process is analogous to System 2 thinking in
psychology. All these capabilities are fundamental to the success of humans at
high-level reasoning and planning. Therefore, to address the limitations of
traditional autoregressive models lacking these human-like capabilities, we
introduce Energy-Based World Models (EBWM). EBWM involves training an
Energy-Based Model (EBM) to predict the compatibility of a given context and a
predicted future state. In doing so, EBWM enables models to achieve all three
facets of human cognition described. Moreover, we developed a variant of the
traditional autoregressive transformer tailored for Energy-Based models, termed
the Energy-Based Transformer (EBT). Our results demonstrate that EBWM scales
better with data and GPU hours than traditional autoregressive transformers in
CV, and that EBWM offers promising early scaling in NLP. Consequently, this
approach offers an exciting path toward training future models capable of
System 2 thinking and intelligently searching across state spaces.
|
[
{
"created": "Thu, 13 Jun 2024 06:54:37 GMT",
"version": "v1"
}
] |
2024-06-14
|
[
[
"Gladstone",
"Alexi",
""
],
[
"Nanduru",
"Ganesh",
""
],
[
"Islam",
"Md Mofijul",
""
],
[
"Chadha",
"Aman",
""
],
[
"Li",
"Jundong",
""
],
[
"Iqbal",
"Tariq",
""
]
] |
One of the predominant methods for training world models is autoregressive prediction in the output space of the next element of a sequence. In Natural Language Processing (NLP), this takes the form of Large Language Models (LLMs) predicting the next token; in Computer Vision (CV), this takes the form of autoregressive models predicting the next frame/token/pixel. However, this approach differs from human cognition in several respects. First, human predictions about the future actively influence internal cognitive processes. Second, humans naturally evaluate the plausibility of predictions regarding future states. Third, building on this capability, humans allocate a dynamic amount of time to a prediction by assessing when it is sufficient. This adaptive process is analogous to System 2 thinking in psychology. All these capabilities are fundamental to the success of humans at high-level reasoning and planning. Therefore, to address the limitations of traditional autoregressive models lacking these human-like capabilities, we introduce Energy-Based World Models (EBWM). EBWM involves training an Energy-Based Model (EBM) to predict the compatibility of a given context and a predicted future state. In doing so, EBWM enables models to achieve all three facets of human cognition described. Moreover, we developed a variant of the traditional autoregressive transformer tailored for Energy-Based models, termed the Energy-Based Transformer (EBT). Our results demonstrate that EBWM scales better with data and GPU hours than traditional autoregressive transformers in CV, and that EBWM offers promising early scaling in NLP. Consequently, this approach offers an exciting path toward training future models capable of System 2 thinking and intelligently searching across state spaces.
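A minimal, hypothetical PyTorch sketch of the core mechanism may help: an energy head scores the compatibility of a context with a candidate future state, and the prediction is refined by gradient descent on the energy, with an early stop standing in for dynamic compute allocation. All names and sizes are illustrative; this is not the paper's EBT architecture.

```python
import torch
import torch.nn as nn

class EnergyHead(nn.Module):
    """Scores how compatible a candidate future state is with a context."""
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * dim, 128), nn.SiLU(), nn.Linear(128, 1))

    def forward(self, context: torch.Tensor, candidate: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([context, candidate], dim=-1)).squeeze(-1)

def predict(energy: EnergyHead, context: torch.Tensor,
            steps: int = 20, lr: float = 0.1, tol: float = 1e-3) -> torch.Tensor:
    """Refine a candidate by descending the energy; stop once the energy
    plateaus -- a crude stand-in for allocating a dynamic amount of compute."""
    cand = torch.zeros_like(context, requires_grad=True)
    prev = float("inf")
    for _ in range(steps):
        e = energy(context, cand).sum()
        if prev - e.item() < tol:   # prediction judged "sufficient"
            break
        prev = e.item()
        (grad,) = torch.autograd.grad(e, cand)
        with torch.no_grad():
            cand -= lr * grad
    return cand.detach()

energy = EnergyHead(dim=16)
ctx = torch.randn(4, 16)
print(predict(energy, ctx).shape)  # torch.Size([4, 16])
```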
|
1708.03800
|
Michel Fliess
|
Hassane Aboua\"issa, Ola Alhaj Hasan, C\'edric Join, Michel Fliess,
Didier Defer
|
Energy saving for building heating via a simple and efficient model-free
control design: First steps with computer simulations
|
21st International Conference on System Theory, Control and
Computing, October 2017, Sinaia, Romania
| null | null | null |
cs.SY cs.AI math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The model-based control of building heating systems for energy saving
encounters severe physical, mathematical and calibration difficulties in the
numerous attempts that have been published to date. This topic is addressed
here via a new model-free control setting, where the need for any mathematical
description disappears. Several convincing computer simulations are presented.
Comparisons with classic PI controllers and flatness-based predictive control
are provided.
|
[
{
"created": "Sat, 12 Aug 2017 17:35:52 GMT",
"version": "v1"
},
{
"created": "Wed, 6 Sep 2017 20:21:03 GMT",
"version": "v2"
}
] |
2017-09-08
|
[
[
"Abouaïssa",
"Hassane",
""
],
[
"Hasan",
"Ola Alhaj",
""
],
[
"Join",
"Cédric",
""
],
[
"Fliess",
"Michel",
""
],
[
"Defer",
"Didier",
""
]
] |
The model-based control of building heating systems for energy saving encounters severe physical, mathematical and calibration difficulties in the numerous attempts that have been published to date. This topic is addressed here via a new model-free control setting, where the need for any mathematical description disappears. Several convincing computer simulations are presented. Comparisons with classic PI controllers and flatness-based predictive control are provided.
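To make the setting concrete, below is a brief sketch of the "intelligent proportional" (iP) controller at the heart of model-free control, using the simplest online estimate of the ultra-local model term F; the first-order plant, gains, and setpoint are invented for illustration and are not the paper's simulation.

```python
import numpy as np

def simulate_iP(T: int = 500, dt: float = 0.05, alpha: float = 1.0, Kp: float = 2.0):
    """Model-free 'intelligent proportional' control of an unknown plant.

    Ultra-local model: dy/dt ~= F + alpha * u, where F lumps the unknown
    dynamics and disturbances and is re-estimated at every step.
    """
    y, u, y_prev = 0.0, 0.0, 0.0
    ys, refs = [], []
    for k in range(T):
        t = k * dt
        y_ref = 1.0 if t > 1.0 else 0.0        # setpoint, e.g. a temperature step
        dy_meas = (y - y_prev) / dt            # crude derivative estimate
        F_hat = dy_meas - alpha * u            # estimate of the lumped term F
        e = y - y_ref
        u = (-F_hat - Kp * e) / alpha          # iP control law (reference derivative = 0)
        # "true" plant, unknown to the controller: first-order lag + disturbance
        y_prev = y
        y += dt * (-0.5 * y + 0.8 * u + 0.1 * np.sin(0.5 * t))
        ys.append(y); refs.append(y_ref)
    return np.array(ys), np.array(refs)

ys, refs = simulate_iP()
print("final tracking error:", abs(ys[-1] - refs[-1]))
```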
|
2103.12523
|
Anshul Pundhir
|
Anshul Pundhir, Deepak Verma, Puneet Kumar, Balasubramanian Raman
|
Region extraction based approach for cigarette usage classification
using deep learning
|
5 pages, 16 figures. To appear in the proceedings of the 28th IEEE
International Conference on Image Processing (IEEE - ICIP), September 19-22,
2021, Anchorage, Alaska, USA
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
  This paper proposes a novel approach to classifying subjects' smoking
behavior by extracting relevant regions from a given image using deep learning.
After the classification, we propose a conditional detection module based on
Yolo-v3, which improves the model's performance and reduces its complexity. To
the best of our knowledge, we are the first to work on this dataset. This
dataset contains a total of 2,400 images that include smokers and non-smokers
equally in various environmental settings. We have evaluated the proposed
approach's performance using quantitative and qualitative measures, which
confirm its effectiveness in challenging situations. The proposed approach
achieves a classification accuracy of 96.74% on this dataset.
|
[
{
"created": "Tue, 23 Mar 2021 13:19:43 GMT",
"version": "v1"
}
] |
2021-03-24
|
[
[
"Pundhir",
"Anshul",
""
],
[
"Verma",
"Deepak",
""
],
[
"Kumar",
"Puneet",
""
],
[
"Raman",
"Balasubramanian",
""
]
] |
This paper proposes a novel approach to classifying subjects' smoking behavior by extracting relevant regions from a given image using deep learning. After the classification, we propose a conditional detection module based on Yolo-v3, which improves the model's performance and reduces its complexity. To the best of our knowledge, we are the first to work on this dataset. This dataset contains a total of 2,400 images that include smokers and non-smokers equally in various environmental settings. We have evaluated the proposed approach's performance using quantitative and qualitative measures, which confirm its effectiveness in challenging situations. The proposed approach achieves a classification accuracy of 96.74% on this dataset.
|
1311.6235
|
Tomasz Kociumaka
|
Tomasz Kociumaka, Jakub Radoszewski, Wojciech Rytter, Tomasz Wale\'n
|
Internal Pattern Matching Queries in a Text and Applications
|
42 pages, 13 figures; an updated version of a paper presented at SODA
2015
| null |
10.1137/1.9781611973730.36
| null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider several types of internal queries, that is, questions about
fragments of a given text $T$ specified in constant space by their locations in
$T$. Our main result is an optimal data structure for Internal Pattern Matching
(IPM) queries which, given two fragments $x$ and $y$, ask for a representation
of all fragments contained in $y$ and matching $x$ exactly; this problem can be
viewed as an internal version of the Exact Pattern Matching problem. Our data
structure answers IPM queries in time proportional to the quotient $|y|/|x|$ of
fragments' lengths, which is required due to the information content of the
output. If $T$ is a text of length $n$ over an integer alphabet of size
$\sigma$, then our data structure occupies $O(n/ \log_\sigma n)$ machine words
(that is, $O(n\log \sigma)$ bits) and admits an $O(n/ \log_\sigma n)$-time
construction algorithm.
We show the applicability of IPM queries for answering internal queries
corresponding to other classic string processing problems. Among others, we
derive optimal data structures reporting the periods of a fragment and testing
the cyclic equivalence of two fragments. IPM queries have already found
numerous further applications, following the path paved by the classic Longest
Common Extension (LCE) queries of Landau and Vishkin (JCSS, 1988). In
particular, IPM queries have been implemented in grammar-compressed and dynamic
settings and, along with LCE queries, constitute elementary operations of the
PILLAR model, developed by Charalampopoulos, Kociumaka, and Wellnitz (FOCS
2020).
On the way to our main result, we provide a novel construction of string
synchronizing sets of Kempa and Kociumaka (STOC 2019). Our method, based on a
new restricted version of the recompression technique of Je\.z (J. ACM, 2016),
yields a hierarchy of $O(\log n)$ string synchronizing sets covering the whole
spectrum of fragments' lengths.
|
[
{
"created": "Mon, 25 Nov 2013 08:49:39 GMT",
"version": "v1"
},
{
"created": "Tue, 18 Mar 2014 09:56:29 GMT",
"version": "v2"
},
{
"created": "Tue, 26 Aug 2014 16:11:45 GMT",
"version": "v3"
},
{
"created": "Mon, 13 Oct 2014 16:33:19 GMT",
"version": "v4"
},
{
"created": "Tue, 2 May 2023 14:07:45 GMT",
"version": "v5"
}
] |
2023-05-03
|
[
[
"Kociumaka",
"Tomasz",
""
],
[
"Radoszewski",
"Jakub",
""
],
[
"Rytter",
"Wojciech",
""
],
[
"Waleń",
"Tomasz",
""
]
] |
We consider several types of internal queries, that is, questions about fragments of a given text $T$ specified in constant space by their locations in $T$. Our main result is an optimal data structure for Internal Pattern Matching (IPM) queries which, given two fragments $x$ and $y$, ask for a representation of all fragments contained in $y$ and matching $x$ exactly; this problem can be viewed as an internal version of the Exact Pattern Matching problem. Our data structure answers IPM queries in time proportional to the quotient $|y|/|x|$ of fragments' lengths, which is required due to the information content of the output. If $T$ is a text of length $n$ over an integer alphabet of size $\sigma$, then our data structure occupies $O(n/ \log_\sigma n)$ machine words (that is, $O(n\log \sigma)$ bits) and admits an $O(n/ \log_\sigma n)$-time construction algorithm. We show the applicability of IPM queries for answering internal queries corresponding to other classic string processing problems. Among others, we derive optimal data structures reporting the periods of a fragment and testing the cyclic equivalence of two fragments. IPM queries have already found numerous further applications, following the path paved by the classic Longest Common Extension (LCE) queries of Landau and Vishkin (JCSS, 1988). In particular, IPM queries have been implemented in grammar-compressed and dynamic settings and, along with LCE queries, constitute elementary operations of the PILLAR model, developed by Charalampopoulos, Kociumaka, and Wellnitz (FOCS 2020). On the way to our main result, we provide a novel construction of string synchronizing sets of Kempa and Kociumaka (STOC 2019). Our method, based on a new restricted version of the recompression technique of Je\.z (J. ACM, 2016), yields a hierarchy of $O(\log n)$ string synchronizing sets covering the whole spectrum of fragments' lengths.
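To make the query semantics concrete, here is a deliberately naive Python reference for IPM queries; the paper's contribution is a data structure answering them in $O(|y|/|x|)$ time, not the $O(|y|\,|x|)$ scan below.

```python
def ipm_naive(T: str, x: tuple, y: tuple) -> list:
    """Report all occurrences of fragment x inside fragment y.

    Fragments are (start, end) half-open index pairs into T. The
    occurrences of x in y form O(|y|/|x|) arithmetic progressions,
    which is what allows a compact representation of the answer.
    """
    xs, xe = x
    ys, ye = y
    pat = T[xs:xe]
    m = xe - xs
    return [i for i in range(ys, ye - m + 1) if T[i:i + m] == pat]

T = "abaababaabaab"
print(ipm_naive(T, (0, 3), (0, len(T))))  # occurrences of "aba" in the whole text
```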
|
2102.00892
|
Sina Hajimiri
|
Sina Hajimiri, Aryo Lotfi, Mahdieh Soleymani Baghshah
|
Semi-Supervised Disentanglement of Class-Related and Class-Independent
Factors in VAE
|
16 pages, 10 figures
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
  In recent years, extending the variational autoencoder framework to learn
disentangled representations has received much attention. We address this
problem by proposing a framework capable of disentangling class-related and
class-independent factors of variation in data. Our framework employs an
attention mechanism in its latent space in order to improve the process of
extracting class-related factors from data. We also deal with the multimodality
of data distribution by utilizing mixture models as learnable prior
distributions, as well as incorporating the Bhattacharyya coefficient in the
objective function to prevent highly overlapping mixtures. Our model's encoder
is further trained in a semi-supervised manner, with a small fraction of
labeled data, to improve representations' interpretability. Experiments show
that our framework disentangles class-related and class-independent factors of
variation and learns interpretable features. Moreover, we demonstrate our
model's performance with quantitative and qualitative results on various
datasets.
|
[
{
"created": "Mon, 1 Feb 2021 15:05:24 GMT",
"version": "v1"
}
] |
2021-02-02
|
[
[
"Hajimiri",
"Sina",
""
],
[
"Lotfi",
"Aryo",
""
],
[
"Baghshah",
"Mahdieh Soleymani",
""
]
] |
In recent years, extending the variational autoencoder framework to learn disentangled representations has received much attention. We address this problem by proposing a framework capable of disentangling class-related and class-independent factors of variation in data. Our framework employs an attention mechanism in its latent space in order to improve the process of extracting class-related factors from data. We also deal with the multimodality of data distribution by utilizing mixture models as learnable prior distributions, as well as incorporating the Bhattacharyya coefficient in the objective function to prevent highly overlapping mixtures. Our model's encoder is further trained in a semi-supervised manner, with a small fraction of labeled data, to improve representations' interpretability. Experiments show that our framework disentangles class-related and class-independent factors of variation and learns interpretable features. Moreover, we demonstrate our model's performance with quantitative and qualitative results on various datasets.
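For context, the Bhattacharyya coefficient used to penalize overlapping mixture components is, in its standard form for densities $p$ and $q$,

```latex
BC(p, q) = \int \sqrt{p(z)\, q(z)}\, \mathrm{d}z, \qquad 0 \le BC(p, q) \le 1,
```

with $BC = 1$ exactly when $p = q$; penalizing $BC$ between pairs of components in the objective therefore discourages the learnable prior mixtures from collapsing onto one another.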
|
2209.11094
|
Jack Saunders Mr
|
Jack Saunders, Sajad Saeedi, Wenbin Li
|
Parallel Reinforcement Learning Simulation for Visual Quadrotor
Navigation
|
This work has been submitted to the IEEE International Conference on
Robotics and Automation (ICRA) for possible publication. Copyright may be
transferred without notice, after which this version may no longer be
accessible
| null | null | null |
cs.RO cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Reinforcement learning (RL) is an agent-based approach for teaching robots to
navigate within the physical world. Gathering data for RL is known to be a
laborious task, and real-world experiments can be risky. Simulators facilitate
the collection of training data in a quicker and more cost-effective manner.
However, RL frequently requires a significant number of simulation steps for an
agent to become skilful at simple tasks. This is a prevalent issue within the
field of RL-based visual quadrotor navigation where state dimensions are
typically very large and dynamic models are complex. Furthermore, rendering
images and obtaining physical properties of the agent can be computationally
expensive. To solve this, we present a simulation framework, built on AirSim,
which provides efficient parallel training. Building on this framework, Ape-X
is modified to incorporate decentralised training of AirSim environments to
make use of numerous networked computers. Through experiments we were able to
achieve a reduction in training time from 3.9 hours to 11 minutes using the
aforementioned framework and a total of 74 agents and two networked computers.
Further details, including a GitHub repo and videos about our project,
PRL4AirSim, can be found at https://sites.google.com/view/prl4airsim/home
|
[
{
"created": "Thu, 22 Sep 2022 15:27:42 GMT",
"version": "v1"
}
] |
2022-09-23
|
[
[
"Saunders",
"Jack",
""
],
[
"Saeedi",
"Sajad",
""
],
[
"Li",
"Wenbin",
""
]
] |
Reinforcement learning (RL) is an agent-based approach for teaching robots to navigate within the physical world. Gathering data for RL is known to be a laborious task, and real-world experiments can be risky. Simulators facilitate the collection of training data in a quicker and more cost-effective manner. However, RL frequently requires a significant number of simulation steps for an agent to become skilful at simple tasks. This is a prevalent issue within the field of RL-based visual quadrotor navigation where state dimensions are typically very large and dynamic models are complex. Furthermore, rendering images and obtaining physical properties of the agent can be computationally expensive. To solve this, we present a simulation framework, built on AirSim, which provides efficient parallel training. Building on this framework, Ape-X is modified to incorporate decentralised training of AirSim environments to make use of numerous networked computers. Through experiments we were able to achieve a reduction in training time from 3.9 hours to 11 minutes using the aforementioned framework and a total of 74 agents and two networked computers. Further details, including a GitHub repo and videos about our project, PRL4AirSim, can be found at https://sites.google.com/view/prl4airsim/home
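A rough, hypothetical sketch of the decentralised-collection pattern described above, with actor processes stepping a dummy environment and feeding one learner queue via Python multiprocessing; it stands in for, and is not, the PRL4AirSim/Ape-X code, which additionally batches inference and spans networked machines.

```python
import multiprocessing as mp
import random

def actor(worker_id: int, queue, steps: int = 100) -> None:
    """One actor: steps a (dummy) environment and ships transitions."""
    state = 0.0
    for _ in range(steps):
        action = random.choice([-1.0, 1.0])
        next_state = state + 0.1 * action
        reward = -abs(next_state)
        queue.put((worker_id, state, action, reward, next_state))
        state = next_state
    queue.put(None)  # signal completion

if __name__ == "__main__":
    n_actors = 4
    q = mp.Queue()
    procs = [mp.Process(target=actor, args=(i, q)) for i in range(n_actors)]
    for p in procs:
        p.start()
    replay, done = [], 0
    while done < n_actors:      # learner side: drain transitions into a replay buffer
        item = q.get()
        if item is None:
            done += 1
        else:
            replay.append(item)
    for p in procs:
        p.join()
    print(f"collected {len(replay)} transitions from {n_actors} actors")
```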
|
2405.10531
|
Chen Zhang
|
Chen Zhang, Steven Tin Sui Luo, Jason Chun Lok Li, Yik-Chung Wu, Ngai
Wong
|
Nonparametric Teaching of Implicit Neural Representations
|
ICML 2024 (24 pages, 13 figures)
| null | null | null |
cs.LG cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We investigate the learning of implicit neural representation (INR) using an
overparameterized multilayer perceptron (MLP) via a novel nonparametric
teaching perspective. The latter offers an efficient example selection
framework for teaching nonparametrically defined (viz. non-closed-form) target
functions, such as image functions defined by 2D grids of pixels. To address
the costly training of INRs, we propose a paradigm called Implicit Neural
Teaching (INT) that treats INR learning as a nonparametric teaching problem,
where the given signal being fitted serves as the target function. The teacher
then selects signal fragments for iterative training of the MLP to achieve fast
convergence. By establishing a connection between MLP evolution through
parameter-based gradient descent and that of function evolution through
functional gradient descent in nonparametric teaching, we show for the first
time that teaching an overparameterized MLP is consistent with teaching a
nonparametric learner. This new discovery readily permits a convenient drop-in
of nonparametric teaching algorithms to broadly enhance INR training
efficiency, demonstrating 30%+ training time savings across various input
modalities.
|
[
{
"created": "Fri, 17 May 2024 04:20:39 GMT",
"version": "v1"
}
] |
2024-05-20
|
[
[
"Zhang",
"Chen",
""
],
[
"Luo",
"Steven Tin Sui",
""
],
[
"Li",
"Jason Chun Lok",
""
],
[
"Wu",
"Yik-Chung",
""
],
[
"Wong",
"Ngai",
""
]
] |
We investigate the learning of implicit neural representation (INR) using an overparameterized multilayer perceptron (MLP) via a novel nonparametric teaching perspective. The latter offers an efficient example selection framework for teaching nonparametrically defined (viz. non-closed-form) target functions, such as image functions defined by 2D grids of pixels. To address the costly training of INRs, we propose a paradigm called Implicit Neural Teaching (INT) that treats INR learning as a nonparametric teaching problem, where the given signal being fitted serves as the target function. The teacher then selects signal fragments for iterative training of the MLP to achieve fast convergence. By establishing a connection between MLP evolution through parameter-based gradient descent and that of function evolution through functional gradient descent in nonparametric teaching, we show for the first time that teaching an overparameterized MLP is consistent with teaching a nonparametric learner. This new discovery readily permits a convenient drop-in of nonparametric teaching algorithms to broadly enhance INR training efficiency, demonstrating 30%+ training time savings across various input modalities.
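The selection idea can be sketched in a few lines (hypothetical 1D toy, not the paper's implementation): at each step the "teacher" evaluates the current fit and trains the MLP only on the coordinates it currently reconstructs worst.

```python
import torch
import torch.nn as nn

# target "signal" to fit, standing in for an image's pixel grid
xs = torch.linspace(0, 1, 512).unsqueeze(1)
ys = torch.sin(8 * torch.pi * xs) + 0.3 * torch.sin(20 * torch.pi * xs)

mlp = nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)

for step in range(2000):
    with torch.no_grad():
        err = (mlp(xs) - ys).squeeze(1).abs()
    idx = torch.topk(err, k=64).indices     # teacher: pick the worst-fit coordinates
    loss = ((mlp(xs[idx]) - ys[idx]) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print("final MSE:", ((mlp(xs) - ys) ** 2).mean().item())
```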
|
1806.02693
|
Aaron Weiss
|
Aaron Weiss, Daniel Patterson, and Amal Ahmed
|
Rust Distilled: An Expressive Tower of Languages
|
ML '18 Final
| null | null | null |
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Rust represents a major advancement in production programming languages
because of its success in bridging the gap between high-level application
programming and low-level systems programming. At the heart of its design lies
a novel approach to ownership that remains highly programmable.
In this talk, we will describe our ongoing work on designing a formal
semantics for Rust that captures ownership and borrowing without the details of
lifetime analysis. This semantics models a high-level understanding of
ownership and as a result is close to source-level Rust (but with full type
annotations) which differs from the recent RustBelt effort that essentially
models MIR, a CPS-style IR used in the Rust compiler. Further, while RustBelt
aims to verify the safety of unsafe code in Rust's standard library, we model
standard library APIs as primitives, which is sufficient to reason about their
behavior. This yields a simpler model of Rust and its type system that we think
researchers will find easier to use as a starting point for investigating Rust
extensions. Unlike RustBelt, we aim to prove type soundness using progress and
preservation instead of a Kripke logical relation. Finally, our semantics is a
family of languages of increasing expressive power, where subsequent levels
have features that are impossible to define in previous levels. Following
Felleisen, expressive power is defined in terms of observational equivalence.
Separating the language into different levels of expressive power should
provide a framework for future work on Rust verification and compiler
optimization.
|
[
{
"created": "Thu, 7 Jun 2018 14:13:04 GMT",
"version": "v1"
},
{
"created": "Thu, 16 Aug 2018 18:34:19 GMT",
"version": "v2"
}
] |
2018-08-20
|
[
[
"Weiss",
"Aaron",
""
],
[
"Patterson",
"Daniel",
""
],
[
"Ahmed",
"Amal",
""
]
] |
Rust represents a major advancement in production programming languages because of its success in bridging the gap between high-level application programming and low-level systems programming. At the heart of its design lies a novel approach to ownership that remains highly programmable. In this talk, we will describe our ongoing work on designing a formal semantics for Rust that captures ownership and borrowing without the details of lifetime analysis. This semantics models a high-level understanding of ownership and as a result is close to source-level Rust (but with full type annotations) which differs from the recent RustBelt effort that essentially models MIR, a CPS-style IR used in the Rust compiler. Further, while RustBelt aims to verify the safety of unsafe code in Rust's standard library, we model standard library APIs as primitives, which is sufficient to reason about their behavior. This yields a simpler model of Rust and its type system that we think researchers will find easier to use as a starting point for investigating Rust extensions. Unlike RustBelt, we aim to prove type soundness using progress and preservation instead of a Kripke logical relation. Finally, our semantics is a family of languages of increasing expressive power, where subsequent levels have features that are impossible to define in previous levels. Following Felleisen, expressive power is defined in terms of observational equivalence. Separating the language into different levels of expressive power should provide a framework for future work on Rust verification and compiler optimization.
|
2211.09330
|
Sangdon Park
|
Sangdon Park and Osbert Bastani and Taesoo Kim
|
ACon$^2$: Adaptive Conformal Consensus for Provable Blockchain Oracles
|
Accepted to USENIX Security 2023
| null | null | null |
cs.CR cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Blockchains with smart contracts are distributed ledger systems that achieve
block-state consistency among distributed nodes by only allowing deterministic
operations of smart contracts. However, the power of smart contracts is enabled
by interacting with stochastic off-chain data, which in turn opens the
possibility to undermine the block-state consistency. To address this issue, an
oracle smart contract is used to provide a single consistent source of external
data; but, simultaneously, this introduces a single point of failure, which is
called the oracle problem. To address the oracle problem, we propose an
adaptive conformal consensus (ACon$^2$) algorithm that derives a consensus set
of data from multiple oracle contracts via the recent advance in online
uncertainty quantification learning. Interestingly, the consensus set provides a
desired correctness guarantee under distribution shift and Byzantine
adversaries. We demonstrate the efficacy of the proposed algorithm on two price
datasets and an Ethereum case study. In particular, the Solidity implementation
of the proposed algorithm shows the potential practicality of the proposed
algorithm, implying that online machine learning algorithms are applicable to
address security issues in blockchains.
|
[
{
"created": "Thu, 17 Nov 2022 04:37:24 GMT",
"version": "v1"
},
{
"created": "Sat, 25 Feb 2023 16:20:35 GMT",
"version": "v2"
},
{
"created": "Tue, 7 Mar 2023 17:20:45 GMT",
"version": "v3"
}
] |
2023-03-08
|
[
[
"Park",
"Sangdon",
""
],
[
"Bastani",
"Osbert",
""
],
[
"Kim",
"Taesoo",
""
]
] |
Blockchains with smart contracts are distributed ledger systems that achieve block-state consistency among distributed nodes by only allowing deterministic operations of smart contracts. However, the power of smart contracts is enabled by interacting with stochastic off-chain data, which in turn opens the possibility to undermine the block-state consistency. To address this issue, an oracle smart contract is used to provide a single consistent source of external data; but, simultaneously, this introduces a single point of failure, which is called the oracle problem. To address the oracle problem, we propose an adaptive conformal consensus (ACon$^2$) algorithm that derives a consensus set of data from multiple oracle contracts via the recent advance in online uncertainty quantification learning. Interestingly, the consensus set provides a desired correctness guarantee under distribution shift and Byzantine adversaries. We demonstrate the efficacy of the proposed algorithm on two price datasets and an Ethereum case study. In particular, the Solidity implementation of the proposed algorithm shows the potential practicality of the proposed algorithm, implying that online machine learning algorithms are applicable to address security issues in blockchains.
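A rough, hypothetical Python sketch of the two ingredients named above: per-source online conformal intervals whose miscoverage level adapts under distribution shift (in the spirit of adaptive conformal inference), combined here by simple interval intersection across sources. The update rule, thresholds, and combination are illustrative and are not the paper's ACon$^2$ algorithm.

```python
def aci_update(alpha, covered, gamma=0.01, target=0.1):
    """Adaptive step: widen intervals after misses, tighten after hits."""
    return alpha + gamma * (target - (0.0 if covered else 1.0))

class OnlineSource:
    """One data source with an online conformal prediction interval."""
    def __init__(self):
        self.alpha, self.residuals = 0.1, [1.0]

    def interval(self, pred):
        qs = sorted(self.residuals)   # radius = (1 - alpha) empirical quantile
        k = min(len(qs) - 1, max(0, int((1 - self.alpha) * len(qs))))
        return (pred - qs[k], pred + qs[k])

    def observe(self, pred, truth):
        lo, hi = self.interval(pred)
        self.alpha = min(0.5, max(0.01, aci_update(self.alpha, lo <= truth <= hi)))
        self.residuals.append(abs(truth - pred))

def consensus(intervals):
    """Intersect per-source intervals; an empty result flags disagreement."""
    lo, hi = max(i[0] for i in intervals), min(i[1] for i in intervals)
    return (lo, hi) if lo <= hi else None

sources = [OnlineSource() for _ in range(3)]
preds, truth = [100.2, 99.8, 130.0], 100.0   # third source reports an outlier
print(consensus([s.interval(p) for s, p in zip(sources, preds)]))  # None: no consensus
for s, p in zip(sources, preds):
    s.observe(p, truth)
```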
|
2204.10669
|
Ebaa Alnazer
|
Ebaa Alnazer, Ilche Georgievski, Marco Aiello
|
Risk Awareness in HTN Planning
|
62 pages, 9 figures
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
  Real-world domains are characterised by uncertain situations in which acting
and the use of resources require embracing risk. Performing actions in such
domains always entails costs of consuming some resource, such as time, money,
or energy, where the knowledge about these costs can range from totally known
to totally unknown and even unknowable probabilities of costs. Think of robotic
domains, where actions and their costs are non-deterministic due to uncertain
factors like obstacles. Choosing which action to perform considering its cost
on the available resource requires taking a stance on risk. Thus, these domains
call for not only planning under uncertainty but also planning while embracing
risk. Taking Hierarchical Task Network (HTN) planning as a widely used planning
technique in real-world applications, one can observe that existing approaches
do not account for risk. That is, computing most probable or optimal plans
using actions with single-valued costs is only enough to express risk
neutrality. In this work, we postulate that HTN planning can become risk aware
by considering expected utility theory, a representative concept of decision
theory that enables choosing actions considering a probability distribution of
their costs and a given risk attitude expressed using a utility function. In
particular, we introduce a general framework for HTN planning that allows
modelling risk and uncertainty using a probability distribution of action costs
upon which we define risk-aware HTN planning as an approach that accounts for
the different risk attitudes and allows computing plans that go beyond risk
neutrality. In fact, we lay out that computing risk-aware plans requires finding
plans with the highest expected utility. Finally, we argue that it is possible
for HTN planning agents to solve specialised risk-aware HTN planning problems
by adapting some existing HTN planning approaches.
|
[
{
"created": "Fri, 22 Apr 2022 12:33:27 GMT",
"version": "v1"
}
] |
2022-04-25
|
[
[
"Alnazer",
"Ebaa",
""
],
[
"Georgievski",
"Ilche",
""
],
[
"Aiello",
"Marco",
""
]
] |
Real-world domains are characterised by uncertain situations in which acting and the use of resources require embracing risk. Performing actions in such domains always entails costs of consuming some resource, such as time, money, or energy, where the knowledge about these costs can range from totally known to totally unknown and even unknowable probabilities of costs. Think of robotic domains, where actions and their costs are non-deterministic due to uncertain factors like obstacles. Choosing which action to perform considering its cost on the available resource requires taking a stance on risk. Thus, these domains call for not only planning under uncertainty but also planning while embracing risk. Taking Hierarchical Task Network (HTN) planning as a widely used planning technique in real-world applications, one can observe that existing approaches do not account for risk. That is, computing most probable or optimal plans using actions with single-valued costs is only enough to express risk neutrality. In this work, we postulate that HTN planning can become risk aware by considering expected utility theory, a representative concept of decision theory that enables choosing actions considering a probability distribution of their costs and a given risk attitude expressed using a utility function. In particular, we introduce a general framework for HTN planning that allows modelling risk and uncertainty using a probability distribution of action costs upon which we define risk-aware HTN planning as an approach that accounts for the different risk attitudes and allows computing plans that go beyond risk neutrality. In fact, we lay out that computing risk-aware plans requires finding plans with the highest expected utility. Finally, we argue that it is possible for HTN planning agents to solve specialised risk-aware HTN planning problems by adapting some existing HTN planning approaches.
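The decision rule at the core of this proposal is the standard expected-utility criterion: writing $P(c \mid \pi)$ for the probability distribution over the cost $c$ of a plan $\pi$ and $U$ for a utility function encoding the agent's risk attitude, a risk-aware planner prefers

```latex
\pi^{*} = \arg\max_{\pi} \; \mathbb{E}\big[\, U(c) \mid \pi \,\big]
        = \arg\max_{\pi} \sum_{c} P(c \mid \pi)\, U(c),
```

where the curvature of $U$ encodes the risk attitude and a linear $U$ recovers the usual risk-neutral (expected-cost) planning.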
|
1709.09708
|
Stefano Ferretti
|
Stefano Ferretti
|
On the Complex Network Structure of Musical Pieces: Analysis of Some Use
Cases from Different Music Genres
|
accepted to Multimedia Tools and Applications, Springer
| null |
10.1007/s11042-017-5175-y
| null |
cs.SD cs.MM eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper focuses on the modeling of musical melodies as networks. Notes of
a melody can be treated as nodes of a network. Connections are created whenever
notes are played in sequence. We analyze some main tracks coming from different
music genres, with melodies played using different musical instruments. We find
out that the considered networks are, in general, scale-free networks and
exhibit the small-world property. We measure the main metrics and assess
whether these networks can be considered as formed by sub-communities. Outcomes
confirm that peculiar features of the tracks can be extracted from this
analysis methodology. This approach can have an impact in several multimedia
applications such as music didactics, multimedia entertainment, and digital
music generation.
|
[
{
"created": "Wed, 13 Sep 2017 15:04:30 GMT",
"version": "v1"
}
] |
2017-09-29
|
[
[
"Ferretti",
"Stefano",
""
]
] |
This paper focuses on the modeling of musical melodies as networks. Notes of a melody can be treated as nodes of a network. Connections are created whenever notes are played in sequence. We analyze some main tracks coming from different music genres, with melodies played using different musical instruments. We find out that the considered networks are, in general, scale-free networks and exhibit the small-world property. We measure the main metrics and assess whether these networks can be considered as formed by sub-communities. Outcomes confirm that peculiar features of the tracks can be extracted from this analysis methodology. This approach can have an impact in several multimedia applications such as music didactics, multimedia entertainment, and digital music generation.
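The modeling step lends itself to a compact sketch (toy melody and metrics chosen for illustration) using networkx: notes become nodes, consecutive notes become directed weighted edges, and standard complex-network measures then apply.

```python
import networkx as nx

# toy melody as a pitch sequence, standing in for a parsed track
melody = ["C4", "E4", "G4", "E4", "C4", "G4", "A4", "G4", "E4", "C4"]

G = nx.DiGraph()
for a, b in zip(melody, melody[1:]):
    if G.has_edge(a, b):
        G[a][b]["weight"] += 1        # repeated transitions get heavier edges
    else:
        G.add_edge(a, b, weight=1)

print("nodes:", G.number_of_nodes(), "edges:", G.number_of_edges())
print("degrees:", dict(G.degree()))
print("avg clustering:", nx.average_clustering(G.to_undirected()))
```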
|
2207.00159
|
Luliang Jia
|
Luliang Jia, Nan Qi, Feihuang Chu, Shengliang Fang, Ximing Wang, Shuli
Ma, and Shuo Feng
|
Game-theoretic Learning Anti-jamming Approaches in Wireless Networks
|
Published in IEEE Communcations Magazine
| null | null | null |
cs.NI cs.GT
|
http://creativecommons.org/licenses/by-sa/4.0/
|
In this article, the anti-jamming communication problem is investigated from
a game-theoretic learning perspective. By exploring and analyzing intelligent
anti-jamming communication, we present the characteristics of jammers and the
requirements of an intelligent anti-jamming approach. Such an approach requires
self-sensing, self-decision-making, self-coordination, self-evaluation, and
learning abilities. Then, a game-theoretic learning anti-jamming (GTLAJ) paradigm
is proposed, and its framework and challenges are introduced.
Moreover, through three cases, i.e., Stackelberg anti-jamming game, Markov
anti-jamming game and hypergraph-based anti-jamming game, different
anti-jamming game models and applications are discussed, and some future
directions are presented.
|
[
{
"created": "Mon, 20 Jun 2022 14:35:31 GMT",
"version": "v1"
}
] |
2022-07-04
|
[
[
"Jia",
"Luliang",
""
],
[
"Qi",
"Nan",
""
],
[
"Chu",
"Feihuang",
""
],
[
"Fang",
"Shengliang",
""
],
[
"Wang",
"Ximing",
""
],
[
"Ma",
"Shuli",
""
],
[
"Feng",
"Shuo",
""
]
] |
In this article, the anti-jamming communication problem is investigated from a game-theoretic learning perspective. By exploring and analyzing intelligent anti-jamming communication, we present the characteristics of jammers and the requirements of an intelligent anti-jamming approach. Such an approach requires self-sensing, self-decision-making, self-coordination, self-evaluation, and learning abilities. Then, a game-theoretic learning anti-jamming (GTLAJ) paradigm is proposed, and its framework and challenges are introduced. Moreover, through three cases, i.e., Stackelberg anti-jamming game, Markov anti-jamming game and hypergraph-based anti-jamming game, different anti-jamming game models and applications are discussed, and some future directions are presented.
|
2403.13745
|
Fu-Yun Wang
|
Fu-Yun Wang, Xiaoshi Wu, Zhaoyang Huang, Xiaoyu Shi, Dazhong Shen,
Guanglu Song, Yu Liu, Hongsheng Li
|
Be-Your-Outpainter: Mastering Video Outpainting through Input-Specific
Adaptation
|
Code will be available at https://github.com/G-U-N/Be-Your-Outpainter
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Video outpainting is a challenging task, aiming at generating video content
outside the viewport of the input video while maintaining inter-frame and
intra-frame consistency. Existing methods fall short in either generation
quality or flexibility. We introduce MOTIA (Mastering Video Outpainting Through
Input-Specific Adaptation), a diffusion-based pipeline that leverages both the
intrinsic data-specific patterns of the source video and the image/video
generative prior for effective outpainting. MOTIA comprises two main phases:
input-specific adaptation and pattern-aware outpainting. The input-specific
adaptation phase involves conducting efficient and effective pseudo outpainting
learning on the single-shot source video. This process encourages the model to
identify and learn patterns within the source video, as well as bridging the
gap between standard generative processes and outpainting. The subsequent
phase, pattern-aware outpainting, is dedicated to the generalization of these
learned patterns to generate outpainting outcomes. Additional strategies
including spatial-aware insertion and noise travel are proposed to better
leverage the diffusion model's generative prior and the acquired video patterns
from source videos. Extensive evaluations underscore MOTIA's superiority,
outperforming existing state-of-the-art methods in widely recognized
benchmarks. Notably, these advancements are achieved without necessitating
extensive, task-specific tuning.
|
[
{
"created": "Wed, 20 Mar 2024 16:53:45 GMT",
"version": "v1"
}
] |
2024-03-21
|
[
[
"Wang",
"Fu-Yun",
""
],
[
"Wu",
"Xiaoshi",
""
],
[
"Huang",
"Zhaoyang",
""
],
[
"Shi",
"Xiaoyu",
""
],
[
"Shen",
"Dazhong",
""
],
[
"Song",
"Guanglu",
""
],
[
"Liu",
"Yu",
""
],
[
"Li",
"Hongsheng",
""
]
] |
Video outpainting is a challenging task, aiming at generating video content outside the viewport of the input video while maintaining inter-frame and intra-frame consistency. Existing methods fall short in either generation quality or flexibility. We introduce MOTIA (Mastering Video Outpainting Through Input-Specific Adaptation), a diffusion-based pipeline that leverages both the intrinsic data-specific patterns of the source video and the image/video generative prior for effective outpainting. MOTIA comprises two main phases: input-specific adaptation and pattern-aware outpainting. The input-specific adaptation phase involves conducting efficient and effective pseudo outpainting learning on the single-shot source video. This process encourages the model to identify and learn patterns within the source video, as well as bridging the gap between standard generative processes and outpainting. The subsequent phase, pattern-aware outpainting, is dedicated to the generalization of these learned patterns to generate outpainting outcomes. Additional strategies including spatial-aware insertion and noise travel are proposed to better leverage the diffusion model's generative prior and the acquired video patterns from source videos. Extensive evaluations underscore MOTIA's superiority, outperforming existing state-of-the-art methods in widely recognized benchmarks. Notably, these advancements are achieved without necessitating extensive, task-specific tuning.
|
2403.18183
|
Hsiu-Wei Yang
|
Hsiu-Wei Yang, Abhinav Agrawal, Pavlos Fragkogiannis, Shubham Nitin
Mulay
|
Can AI Models Appreciate Document Aesthetics? An Exploration of
Legibility and Layout Quality in Relation to Prediction Confidence
| null | null | null | null |
cs.AI cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
A well-designed document communicates not only through its words but also
through its visual eloquence. Authors utilize aesthetic elements such as
colors, fonts, graphics, and layouts to shape the perception of information.
Thoughtful document design, informed by psychological insights, enhances both
the visual appeal and the comprehension of the content. While state-of-the-art
document AI models demonstrate the benefits of incorporating layout and image
data, it remains unclear whether the nuances of document aesthetics are
effectively captured. To bridge the gap between human cognition and AI
interpretation of aesthetic elements, we formulated hypotheses concerning AI
behavior in document understanding tasks, specifically anchored in document
design principles. With a focus on legibility and layout quality, we tested
four aspects of aesthetic effects: noise, font-size contrast, alignment, and
complexity, on model confidence using correlational analysis. The results and
observations highlight the value of model analysis rooted in document design
theories. Our work serves as a trailhead for further studies and we advocate
for continued research in this topic to deepen our understanding of how AI
interprets document aesthetics.
|
[
{
"created": "Wed, 27 Mar 2024 01:21:48 GMT",
"version": "v1"
}
] |
2024-03-28
|
[
[
"Yang",
"Hsiu-Wei",
""
],
[
"Agrawal",
"Abhinav",
""
],
[
"Fragkogiannis",
"Pavlos",
""
],
[
"Mulay",
"Shubham Nitin",
""
]
] |
A well-designed document communicates not only through its words but also through its visual eloquence. Authors utilize aesthetic elements such as colors, fonts, graphics, and layouts to shape the perception of information. Thoughtful document design, informed by psychological insights, enhances both the visual appeal and the comprehension of the content. While state-of-the-art document AI models demonstrate the benefits of incorporating layout and image data, it remains unclear whether the nuances of document aesthetics are effectively captured. To bridge the gap between human cognition and AI interpretation of aesthetic elements, we formulated hypotheses concerning AI behavior in document understanding tasks, specifically anchored in document design principles. With a focus on legibility and layout quality, we tested four aspects of aesthetic effects: noise, font-size contrast, alignment, and complexity, on model confidence using correlational analysis. The results and observations highlight the value of model analysis rooted in document design theories. Our work serves as a trailhead for further studies and we advocate for continued research in this topic to deepen our understanding of how AI interprets document aesthetics.
|
2311.03932
|
Evangelia Tsoukanara
|
Evangelia Tsoukanara and Georgia Koloniari and Evaggelia Pitoura
|
TempoGRAPHer: Aggregation Based Temporal Graph Exploration
| null | null | null | null |
cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
Graphs offer a generic abstraction for modeling entities, and the
interactions and relationships between them. Most real world graphs, such as
social and cooperation networks evolve over time, and exploring their evolution
may reveal important information. In this paper, we present TempoGRAPHer, a
system for visualizing and analyzing the evolution of a temporal attributed
graph. TempoGRAPHer supports both temporal and attribute aggregation. It also
allows graph exploration by identifying periods of significant growth,
shrinkage, or stability. Temporal exploration is supported by two complementary
strategies, namely skyline and interaction-based exploration. Skyline-based
exploration provides insights on the overall trends in the evolution, while
interaction-based exploration offers a closer look at specific parts of the
graph evolution history where significant changes appeared. We showcase the
usefulness of TempoGRAPHer in understanding graph evolution by presenting a
detailed scenario that explores the evolution of a contact network between
primary school students.
|
[
{
"created": "Tue, 7 Nov 2023 12:14:34 GMT",
"version": "v1"
},
{
"created": "Wed, 8 Nov 2023 13:50:33 GMT",
"version": "v2"
}
] |
2023-11-09
|
[
[
"Tsoukanara",
"Evangelia",
""
],
[
"Koloniari",
"Georgia",
""
],
[
"Pitoura",
"Evaggelia",
""
]
] |
Graphs offer a generic abstraction for modeling entities, and the interactions and relationships between them. Most real-world graphs, such as social and cooperation networks, evolve over time, and exploring their evolution may reveal important information. In this paper, we present TempoGRAPHer, a system for visualizing and analyzing the evolution of a temporal attributed graph. TempoGRAPHer supports both temporal and attribute aggregation. It also allows graph exploration by identifying periods of significant growth, shrinkage, or stability. Temporal exploration is supported by two complementary strategies, namely skyline and interaction-based exploration. Skyline-based exploration provides insights on the overall trends in the evolution, while interaction-based exploration offers a closer look at specific parts of the graph evolution history where significant changes appeared. We showcase the usefulness of TempoGRAPHer in understanding graph evolution by presenting a detailed scenario that explores the evolution of a contact network between primary school students.
|
2203.10166
|
Johannes Schneider
|
Johannes Schneider and Giovanni Apruzzese
|
Concept-based Adversarial Attacks: Tricking Humans and Classifiers Alike
|
Accepted at IEEE Symposium on Security and Privacy (S&P) Workshop on
Deep Learning and Security, 2022
| null | null | null |
cs.LG cs.CR cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose to generate adversarial samples by modifying activations of upper
layers encoding semantically meaningful concepts. The original sample is
shifted towards a target sample, yielding an adversarial sample, by using the
modified activations to reconstruct the original sample. A human might (and
possibly should) notice differences between the original and the adversarial
sample. Depending on the attacker-provided constraints, an adversarial sample
can exhibit subtle differences or appear like a "forged" sample from another
class. Our approach and goal are in stark contrast to common attacks involving
perturbations of single pixels that are not recognizable by humans. Our
approach is relevant in, e.g., multi-stage processing of inputs, where both
humans and machines are involved in decision-making because invisible
perturbations will not fool a human. Our evaluation focuses on deep neural
networks. We also show the transferability of our adversarial examples among
networks.
|
[
{
"created": "Fri, 18 Mar 2022 21:30:11 GMT",
"version": "v1"
}
] |
2022-03-22
|
[
[
"Schneider",
"Johannes",
""
],
[
"Apruzzese",
"Giovanni",
""
]
] |
We propose to generate adversarial samples by modifying activations of upper layers encoding semantically meaningful concepts. The original sample is shifted towards a target sample, yielding an adversarial sample, by using the modified activations to reconstruct the original sample. A human might (and possibly should) notice differences between the original and the adversarial sample. Depending on the attacker-provided constraints, an adversarial sample can exhibit subtle differences or appear like a "forged" sample from another class. Our approach and goal are in stark contrast to common attacks involving perturbations of single pixels that are not recognizable by humans. Our approach is relevant in, e.g., multi-stage processing of inputs, where both humans and machines are involved in decision-making because invisible perturbations will not fool a human. Our evaluation focuses on deep neural networks. We also show the transferability of our adversarial examples among networks.
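A condensed, hypothetical PyTorch sketch of the mechanism: blend the upper-layer activations of the original sample toward those of a target sample, then optimize the input so its activations match the blend. The encoder, blending factor, and input-optimization reconstruction are placeholders standing in for the paper's setup.

```python
import torch
import torch.nn as nn

# stand-in "upper layers" of a classifier, mapping images to concept activations
encoder = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 32))

def concept_attack(x_orig, x_target, lam=0.4, steps=200, lr=0.05):
    """Shift x_orig toward x_target in activation (concept) space."""
    with torch.no_grad():
        h_mix = (1 - lam) * encoder(x_orig) + lam * encoder(x_target)
    x_adv = x_orig.clone().requires_grad_(True)
    opt = torch.optim.Adam([x_adv], lr=lr)
    for _ in range(steps):
        loss = ((encoder(x_adv) - h_mix) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
        with torch.no_grad():
            x_adv.clamp_(0, 1)        # keep a valid image; changes may be visible
    return x_adv.detach()

x0, x1 = torch.rand(1, 1, 28, 28), torch.rand(1, 1, 28, 28)
adv = concept_attack(x0, x1)
print("L2 distance from original:", (adv - x0).norm().item())
```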
|
1811.09956
|
Gurunath Reddy M
|
Gurunath Reddy M, Tanumay Mandal, Krothapalli Sreenivasa Rao
|
Glottal Closure Instants Detection From Pathological Acoustic Speech
Signal Using Deep Learning
|
Machine Learning for Health (ML4H) Workshop at NeurIPS 2018
arXiv:1811.07216
| null | null |
ML4H/2018/39
|
cs.SD cs.LG eess.AS stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
  In this paper, we propose a classification-based glottal closure instant
(GCI) detection method for pathological acoustic speech signals, which finds
many applications in vocal disorder analysis. To date, GCIs for pathological
disorders have been extracted from the laryngeal (glottal source) signal
recorded with an Electroglottograph, a dedicated device designed to measure
the vocal fold vibration around the larynx. We have created a pathological
dataset which consists of simultaneous recordings of the glottal source and
acoustic speech signal for six different disorders from vocal disordered
patients. The GCI locations are manually annotated for disorder analysis and
supervised learning. We propose a convolutional neural network based GCI
detection method that fuses deep acoustic speech and linear prediction
residual features for robust GCI detection. The experimental results show that
the proposed method is significantly better than the state-of-the-art GCI
detection methods.
|
[
{
"created": "Sun, 25 Nov 2018 06:18:24 GMT",
"version": "v1"
}
] |
2018-11-28
|
[
[
"M",
"Gurunath Reddy",
""
],
[
"Mandal",
"Tanumay",
""
],
[
"Rao",
"Krothapalli Sreenivasa",
""
]
] |
In this paper, we propose a classification-based glottal closure instant (GCI) detection method for pathological acoustic speech signals, which finds many applications in vocal disorder analysis. To date, GCIs for pathological disorders have been extracted from the laryngeal (glottal source) signal recorded with an Electroglottograph, a dedicated device designed to measure the vocal fold vibration around the larynx. We have created a pathological dataset which consists of simultaneous recordings of the glottal source and acoustic speech signal for six different disorders from vocal disordered patients. The GCI locations are manually annotated for disorder analysis and supervised learning. We propose a convolutional neural network based GCI detection method that fuses deep acoustic speech and linear prediction residual features for robust GCI detection. The experimental results show that the proposed method is significantly better than the state-of-the-art GCI detection methods.
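A minimal, hypothetical sketch of the fusion idea: a 1D CNN classifying short frames, with the raw speech and its linear prediction residual stacked as two input channels. Layer sizes, the frame length, and the channel layout are placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn

class GCIClassifier(nn.Module):
    """Frame-level GCI / non-GCI classifier over fused input channels."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=5, padding=2), nn.ReLU(),  # ch0: speech, ch1: LP residual
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, 2)  # GCI present vs. absent in this frame

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).squeeze(-1))

model = GCIClassifier()
frames = torch.randn(8, 2, 80)   # batch of 8 two-channel frames of 80 samples
print(model(frames).shape)       # torch.Size([8, 2])
```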
|
2305.15709
|
Risheng Liu
|
Xianghao Jiao, Yaohua Liu, Jiaxin Gao, Xinyuan Chu, Risheng Liu, Xin
Fan
|
PEARL: Preprocessing Enhanced Adversarial Robust Learning of Image
Deraining for Semantic Segmentation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In light of the significant progress made in the development and application
of semantic segmentation tasks, there has been increasing attention towards
improving the robustness of segmentation models against natural degradation
factors (e.g., rain streaks) or artificial attack factors (e.g., adversarial
attacks). However, most existing methods are designed to address a single
degradation factor and are tailored to specific application scenarios. In this
work, we present the first attempt to improve the robustness of semantic
segmentation tasks by simultaneously handling different types of degradation
factors. Specifically, we introduce the Preprocessing Enhanced Adversarial
Robust Learning (PEARL) framework based on the analysis of our proposed Naive
Adversarial Training (NAT) framework. Our approach effectively handles both
rain streaks and adversarial perturbation by transferring the robustness of the
segmentation model to the image derain model. Furthermore, as opposed to the
commonly used Negative Adversarial Attack (NAA), we design the Auxiliary Mirror
Attack (AMA) to introduce positive information prior to the training of the
PEARL framework, which improves defense capability and segmentation
performance. Our extensive experiments and ablation studies based on different
derain methods and segmentation models have demonstrated the significant
performance improvement of PEARL with AMA in defense against various
adversarial attacks and rain streaks while maintaining high generalization
performance across different datasets.
|
[
{
"created": "Thu, 25 May 2023 04:44:17 GMT",
"version": "v1"
}
] |
2023-05-26
|
[
[
"Jiao",
"Xianghao",
""
],
[
"Liu",
"Yaohua",
""
],
[
"Gao",
"Jiaxin",
""
],
[
"Chu",
"Xinyuan",
""
],
[
"Liu",
"Risheng",
""
],
[
"Fan",
"Xin",
""
]
] |
In light of the significant progress made in the development and application of semantic segmentation tasks, there has been increasing attention towards improving the robustness of segmentation models against natural degradation factors (e.g., rain streaks) or artificial attack factors (e.g., adversarial attacks). However, most existing methods are designed to address a single degradation factor and are tailored to specific application scenarios. In this work, we present the first attempt to improve the robustness of semantic segmentation tasks by simultaneously handling different types of degradation factors. Specifically, we introduce the Preprocessing Enhanced Adversarial Robust Learning (PEARL) framework based on the analysis of our proposed Naive Adversarial Training (NAT) framework. Our approach effectively handles both rain streaks and adversarial perturbation by transferring the robustness of the segmentation model to the image derain model. Furthermore, as opposed to the commonly used Negative Adversarial Attack (NAA), we design the Auxiliary Mirror Attack (AMA) to introduce positive information prior to the training of the PEARL framework, which improves defense capability and segmentation performance. Our extensive experiments and ablation studies based on different derain methods and segmentation models have demonstrated the significant performance improvement of PEARL with AMA in defense against various adversarial attacks and rain streaks while maintaining high generalization performance across different datasets.
|
2110.11070
|
Arpan Biswas
|
Arpan Biswas, Claudio Fuentes, Christopher Hoyle
|
A Nested Weighted Tchebycheff Multi-Objective Bayesian Optimization
Approach for Flexibility of Unknown Utopia Estimation in Expensive Black-box
Design Problems
|
35 pages, 8 figures in main text and 2 figures in supplementary
| null | null | null |
cs.LG stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
We propose a nested weighted Tchebycheff Multi-objective Bayesian
optimization framework where we build a regression model selection procedure
from an ensemble of models, towards better estimation of the uncertain
parameters of the weighted-Tchebycheff expensive black-box multi-objective
function. In existing work, a weighted Tchebycheff MOBO approach has been
demonstrated that attempts to estimate the unknown utopia point when
formulating the acquisition function, through calibration using an a priori
selected regression model. However, the existing MOBO model lacks flexibility
in selecting the appropriate regression models given the guided sampled data
and, therefore, can under-fit or over-fit as the iterations of the MOBO
progress, reducing the overall MOBO performance. Since it is in general too
complex to guarantee a best model a priori, we consider a portfolio of
different families of predictive models fitted with the current training
data, guided by the weighted Tchebycheff (WTB) MOBO; the best model is
selected following a user-defined prediction root mean-square-error-based
approach. The proposed approach is implemented in optimizing a multi-modal
benchmark problem and a thin tube design under constant loading of
temperature-pressure, while minimizing the risk of
creep-fatigue failure and design cost. Finally, the nested weighted Tchebycheff
MOBO model performance is compared with different MOBO frameworks with respect
to accuracy in parameter estimation, Pareto-optimal solutions and function
evaluation cost. This method is generalized enough to consider different
families of predictive models in the portfolio for best model selection, where
the overall design architecture allows for solving any high-dimensional
(multiple functions) complex black-box problems and can be extended to any
other global criterion multi-objective optimization methods where prior
knowledge of utopia is required.
|
[
{
"created": "Sat, 16 Oct 2021 00:44:06 GMT",
"version": "v1"
}
] |
2021-10-22
|
[
[
"Biswas",
"Arpan",
""
],
[
"Fuentes",
"Claudio",
""
],
[
"Hoyle",
"Christopher",
""
]
] |
We propose a nested weighted Tchebycheff Multi-objective Bayesian optimization framework where we build a regression model selection procedure from an ensemble of models, towards better estimation of the uncertain parameters of the weighted-Tchebycheff expensive black-box multi-objective function. In existing work, a weighted Tchebycheff MOBO approach has been demonstrated that attempts to estimate the unknown utopia point when formulating the acquisition function, through calibration using an a priori selected regression model. However, the existing MOBO model lacks flexibility in selecting the appropriate regression models given the guided sampled data and, therefore, can under-fit or over-fit as the iterations of the MOBO progress, reducing the overall MOBO performance. Since it is in general too complex to guarantee a best model a priori, we consider a portfolio of different families of predictive models fitted with the current training data, guided by the weighted Tchebycheff (WTB) MOBO; the best model is selected following a user-defined prediction root mean-square-error-based approach. The proposed approach is implemented in optimizing a multi-modal benchmark problem and a thin tube design under constant loading of temperature-pressure, while minimizing the risk of creep-fatigue failure and design cost. Finally, the nested weighted Tchebycheff MOBO model performance is compared with different MOBO frameworks with respect to accuracy in parameter estimation, Pareto-optimal solutions and function evaluation cost. This method is generalized enough to consider different families of predictive models in the portfolio for best model selection, where the overall design architecture allows for solving any high-dimensional (multiple functions) complex black-box problems and can be extended to any other global criterion multi-objective optimization methods where prior knowledge of utopia is required.
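To make the two ingredients above concrete, here is a minimal sketch of the weighted Tchebycheff scalarization and a prediction-RMSE-based model selection step. The interfaces (`fit`/`predict` models, a held-out validation split) are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def weighted_tchebycheff(F, weights, utopia):
    """Weighted Tchebycheff scalarization of an objective matrix F
    (n_points x n_objectives) against an (estimated) utopia point."""
    return np.max(weights * np.abs(F - utopia), axis=1)

def select_model(models, X_train, y_train, X_val, y_val):
    """Pick the regression model with the lowest validation RMSE,
    echoing the prediction-RMSE-based selection described above."""
    def rmse(model):
        model.fit(X_train, y_train)
        err = model.predict(X_val) - y_val
        return float(np.sqrt(np.mean(err ** 2)))
    return min(models, key=rmse)
```

At each iteration, the selected model would be refit on the MOBO-guided samples and used to re-estimate the unknown utopia point before the acquisition function is formed.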
|
1906.05881
|
Nancy Day
|
Ali Abbassi and Nancy A. Day and Derek Rayside
|
Astra Version 1.0: Evaluating Translations from Alloy to SMT-LIB
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a variety of translation options for converting Alloy to SMT-LIB
via Alloy's Kodkod interface. Our translations, which are implemented in a
library that we call Astra, are based on converting the set and relational
operations of Alloy into their equivalent in typed first-order logic (TFOL). We
investigate and compare the performance of an SMT solver for many translation
options. We compare using only one universal type to recovering Alloy type
information from the Kodkod representation and using multiple types in TFOL. We
compare a direct translation of the relations to predicates in TFOL to one
where we recover functions from their relational form in Kodkod and represent
these as functions in TFOL. We compare representations in TFOL with unbounded
scopes to ones with bounded scopes, either pre- or post-quantifier expansion.
Our results across all these dimensions provide directions for portfolio
solvers, modelling improvements, and optimizing SMT solvers.
|
[
{
"created": "Thu, 13 Jun 2019 18:16:57 GMT",
"version": "v1"
}
] |
2019-06-17
|
[
[
"Abbassi",
"Ali",
""
],
[
"Day",
"Nancy A.",
""
],
[
"Rayside",
"Derek",
""
]
] |
We present a variety of translation options for converting Alloy to SMT-LIB via Alloy's Kodkod interface. Our translations, which are implemented in a library that we call Astra, are based on converting the set and relational operations of Alloy into their equivalent in typed first-order logic (TFOL). We investigate and compare the performance of an SMT solver for many translation options. We compare using only one universal type to recovering Alloy type information from the Kodkod representation and using multiple types in TFOL. We compare a direct translation of the relations to predicates in TFOL to one where we recover functions from their relational form in Kodkod and represent these as functions in TFOL. We compare representations in TFOL with unbounded scopes to ones with bounded scopes, either pre- or post-quantifier expansion. Our results across all these dimensions provide directions for portfolio solvers, modelling improvements, and optimizing SMT solvers.
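To make the predicate-versus-function dimension concrete, the toy sketch below (hypothetical helper names, written here for illustration and not taken from Astra) emits the two SMT-LIB declaration styles for an Alloy binary relation, with either one universal sort or recovered types:

```python
def relation_as_predicate(name, arg_sorts):
    """Encode a relation as an uninterpreted SMT-LIB predicate:
    tuple membership becomes a Bool-valued function."""
    return f"(declare-fun {name} ({' '.join(arg_sorts)}) Bool)"

def relation_as_function(name, domain_sorts, range_sort):
    """Encode a total functional relation directly as an SMT-LIB function."""
    return f"(declare-fun {name} ({' '.join(domain_sorts)}) {range_sort})"

# A binary relation 'parent', with one universal sort vs. recovered types,
# and a functional relation 'father' recovered as a function:
print(relation_as_predicate("parent", ["Univ", "Univ"]))
print(relation_as_predicate("parent", ["Person", "Person"]))
print(relation_as_function("father", ["Person"], "Person"))
```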
|
2109.02860
|
Ruwen Bai
|
Ruwen Bai, Min Li, Bo Meng, Fengfa Li, Miao Jiang, Junxing Ren, Degang
Sun
|
Hierarchical Graph Convolutional Skeleton Transformer for Action
Recognition
|
7 pages, 3 figures
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Graph convolutional networks (GCNs) have emerged as dominant methods for
skeleton-based action recognition.
However, they still suffer from two problems, namely, neighborhood
constraints and entangled spatiotemporal feature representations.
Most studies have focused on improving the design of graph topology to solve
the first problem, but they have yet to fully explore the latter.
In this work, we design a disentangled spatiotemporal transformer (DSTT)
block to overcome the above limitations of GCNs in three steps: (i) feature
disentanglement for spatiotemporal decomposition; (ii) global spatiotemporal
attention for capturing correlations in the global context; and (iii) local
information enhancement for utilizing more local information.
Building on this, we propose a novel architecture, named Hierarchical Graph
Convolutional skeleton Transformer (HGCT), to employ the complementary
advantages of GCN (i.e., local topology, temporal dynamics and hierarchy) and
Transformer (i.e., global context and dynamic attention).
HGCT is lightweight and computationally efficient.
Quantitative analysis demonstrates the superiority and good interpretability
of HGCT.
|
[
{
"created": "Tue, 7 Sep 2021 04:32:10 GMT",
"version": "v1"
},
{
"created": "Wed, 8 Sep 2021 04:03:35 GMT",
"version": "v2"
},
{
"created": "Fri, 10 Sep 2021 02:23:11 GMT",
"version": "v3"
},
{
"created": "Mon, 10 Jan 2022 11:02:07 GMT",
"version": "v4"
}
] |
2022-01-11
|
[
[
"Bai",
"Ruwen",
""
],
[
"Li",
"Min",
""
],
[
"Meng",
"Bo",
""
],
[
"Li",
"Fengfa",
""
],
[
"Jiang",
"Miao",
""
],
[
"Ren",
"Junxing",
""
],
[
"Sun",
"Degang",
""
]
] |
Graph convolutional networks (GCNs) have emerged as dominant methods for skeleton-based action recognition. However, they still suffer from two problems, namely, neighborhood constraints and entangled spatiotemporal feature representations. Most studies have focused on improving the design of graph topology to solve the first problem, but they have yet to fully explore the latter. In this work, we design a disentangled spatiotemporal transformer (DSTT) block to overcome the above limitations of GCNs in three steps: (i) feature disentanglement for spatiotemporal decomposition; (ii) global spatiotemporal attention for capturing correlations in the global context; and (iii) local information enhancement for utilizing more local information. Building on this, we propose a novel architecture, named Hierarchical Graph Convolutional skeleton Transformer (HGCT), to employ the complementary advantages of GCN (i.e., local topology, temporal dynamics and hierarchy) and Transformer (i.e., global context and dynamic attention). HGCT is lightweight and computationally efficient. Quantitative analysis demonstrates the superiority and good interpretability of HGCT.
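A minimal sketch of the disentangle-then-attend idea (joints attend within each frame, then each joint attends over time) is given below; it illustrates the spatiotemporal decomposition only and is not the authors' DSTT block:

```python
import torch
import torch.nn as nn

class FactorizedSTAttention(nn.Module):
    """Factorized spatiotemporal self-attention over skeleton features:
    spatial attention within each frame, then temporal attention per joint."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.spatial = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                       # x: (B, T, V, C)
        b, t, v, c = x.shape
        s = x.reshape(b * t, v, c)              # joints attend within a frame
        s, _ = self.spatial(s, s, s)
        s = s.reshape(b, t, v, c).permute(0, 2, 1, 3).reshape(b * v, t, c)
        s, _ = self.temporal(s, s, s)           # each joint attends over time
        return s.reshape(b, v, t, c).permute(0, 2, 1, 3)
```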
|
2006.06143
|
James D. Finch
|
James D. Finch and Jinho D. Choi
|
Emora STDM: A Versatile Framework for Innovative Dialogue System
Development
|
Accepted by SIGDIAL 2020: System Demonstrations
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This demo paper presents Emora STDM (State Transition Dialogue Manager), a
dialogue system development framework that provides novel workflows for rapid
prototyping of chat-based dialogue managers as well as collaborative
development of complex interactions. Our framework caters to a wide range of
expertise levels by supporting interoperability between two popular approaches,
state machine and information state, to dialogue management. Our Natural
Language Expression package allows seamless integration of pattern matching,
custom NLP modules, and database querying, which makes the workflows much more
efficient. As a user study, we apply this framework in an interdisciplinary
undergraduate course where students with both technical and non-technical
backgrounds are able to develop creative dialogue managers in a short period of
time.
|
[
{
"created": "Thu, 11 Jun 2020 01:31:17 GMT",
"version": "v1"
}
] |
2020-06-12
|
[
[
"Finch",
"James D.",
""
],
[
"Choi",
"Jinho D.",
""
]
] |
This demo paper presents Emora STDM (State Transition Dialogue Manager), a dialogue system development framework that provides novel workflows for rapid prototyping of chat-based dialogue managers as well as collaborative development of complex interactions. Our framework caters to a wide range of expertise levels by supporting interoperability between two popular approaches, state machine and information state, to dialogue management. Our Natural Language Expression package allows seamless integration of pattern matching, custom NLP modules, and database querying, which makes the workflows much more efficient. As a user study, we apply this framework in an interdisciplinary undergraduate course where students with both technical and non-technical backgrounds are able to develop creative dialogue managers in a short period of time.
|
1902.10648
|
Georg B\"ocherer
|
Georg B\"ocherer and Diego Lentner and Alessandro Cirino and Fabian
Steiner
|
Probabilistic Parity Shaping for Linear Codes
|
Draft based on talk given at 2019 Oberpfaffenhofen Workshop on High
Throughput Coding (OWHTC)
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Linear layered probabilistic shaping (LLPS) is proposed, an architecture for
linear codes to efficiently encode to shaped codewords. In the previously
proposed probabilistic amplitude shaping (PAS) architecture, a distribution
matcher (DM) maps information bits to shaped bits, which are then
systematically encoded by appending uniformly distributed parity bits. LLPS
extends PAS by probabilistic parity shaping (PPS), which uses a syndrome DM to
calculate shaped parity bits. LLPS enables the transmission with any desired
distribution using linear codes; furthermore, by LLPS, a given linear code with
rate $R_\text{fec}$ can be operated at any rate $R\leq R_\text{fec}$ by
changing the distribution. LLPS is used with an LDPC code for dirty paper
coding against an interfering BPSK signal, improving the energy efficiency by
0.8 dB.
|
[
{
"created": "Wed, 27 Feb 2019 17:30:27 GMT",
"version": "v1"
}
] |
2019-02-28
|
[
[
"Böcherer",
"Georg",
""
],
[
"Lentner",
"Diego",
""
],
[
"Cirino",
"Alessandro",
""
],
[
"Steiner",
"Fabian",
""
]
] |
Linear layered probabilistic shaping (LLPS) is proposed, an architecture for linear codes to efficiently encode to shaped codewords. In the previously proposed probabilistic amplitude shaping (PAS) architecture, a distribution matcher (DM) maps information bits to shaped bits, which are then systematically encoded by appending uniformly distributed parity bits. LLPS extends PAS by probabilistic parity shaping (PPS), which uses a syndrome DM to calculate shaped parity bits. LLPS enables the transmission with any desired distribution using linear codes; furthermore, by LLPS, a given linear code with rate $R_\text{fec}$ can be operated at any rate $R\leq R_\text{fec}$ by changing the distribution. LLPS is used with an LDPC code for dirty paper coding against an interfering BPSK signal, improving the energy efficiency by 0.8 dB.
|
1504.04690
|
Eyal Skop
|
Shay Mozes, Eyal E. Skop
|
Efficient Vertex-Label Distance Oracles for Planar Graphs
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider distance queries in vertex-labeled planar graphs. For any fixed
$0 < \epsilon \leq 1/2$ we show how to preprocess a directed planar graph with
vertex labels and arc lengths into a data structure that answers queries of the
following form. Given a vertex $u$ and a label $\lambda$, return a
$(1+\epsilon)$-approximation of the distance from $u$ to its closest vertex
with label $\lambda$. For a directed planar graph with $n$ vertices, such that
the ratio of the largest to smallest arc length is bounded by $N$, the
preprocessing time is $O(\epsilon^{-2}n\lg^{3}{n}\lg(nN))$, the data structure
size is $O(\epsilon^{-1}n\lg{n}\lg(nN))$, and the query time is
$O(\lg\lg{n}\lg\lg(nN) + \epsilon^{-1})$. We also point out that a vertex label
distance oracle for undirected planar graphs suggested in an earlier version of
this paper is incorrect.
|
[
{
"created": "Sat, 18 Apr 2015 07:24:00 GMT",
"version": "v1"
},
{
"created": "Sat, 16 Dec 2017 07:29:35 GMT",
"version": "v2"
}
] |
2017-12-19
|
[
[
"Mozes",
"Shay",
""
],
[
"Skop",
"Eyal E.",
""
]
] |
We consider distance queries in vertex-labeled planar graphs. For any fixed $0 < \epsilon \leq 1/2$ we show how to preprocess a directed planar graph with vertex labels and arc lengths into a data structure that answers queries of the following form. Given a vertex $u$ and a label $\lambda$, return a $(1+\epsilon)$-approximation of the distance from $u$ to its closest vertex with label $\lambda$. For a directed planar graph with $n$ vertices, such that the ratio of the largest to smallest arc length is bounded by $N$, the preprocessing time is $O(\epsilon^{-2}n\lg^{3}{n}\lg(nN))$, the data structure size is $O(\epsilon^{-1}n\lg{n}\lg(nN))$, and the query time is $O(\lg\lg{n}\lg\lg(nN) + \epsilon^{-1})$. We also point out that a vertex label distance oracle for undirected planar graphs suggested in an earlier version of this paper is incorrect.
|
1304.2352
|
Alan M. Frisch
|
Alan M. Frisch, Peter Haddawy
|
Probability as a Modal Operator
|
Appears in Proceedings of the Fourth Conference on Uncertainty in
Artificial Intelligence (UAI1988)
| null | null |
UAI-P-1988-PG-109-118
|
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper argues for a modal view of probability. The syntax and semantics
of one particularly strong probability logic are discussed and some examples of
the use of the logic are provided. We show that it is both natural and useful
to think of probability as a modal operator. Contrary to popular belief in AI,
a probability ranging between 0 and 1 represents a continuum between
impossibility and necessity, not between simple falsity and truth. The present
work provides a clear semantics for quantification into the scope of the
probability operator and for higher-order probabilities. Probability logic is a
language for expressing both probabilistic and logical concepts.
|
[
{
"created": "Wed, 27 Mar 2013 19:42:49 GMT",
"version": "v1"
}
] |
2013-04-10
|
[
[
"Frisch",
"Alan M.",
""
],
[
"Haddawy",
"Peter",
""
]
] |
This paper argues for a modal view of probability. The syntax and semantics of one particularly strong probability logic are discussed and some examples of the use of the logic are provided. We show that it is both natural and useful to think of probability as a modal operator. Contrary to popular belief in AI, a probability ranging between 0 and 1 represents a continuum between impossibility and necessity, not between simple falsity and truth. The present work provides a clear semantics for quantification into the scope of the probability operator and for higher-order probabilities. Probability logic is a language for expressing both probabilistic and logical concepts.
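For flavor, formulas in the spirit of such a logic (constructed here as examples, not quoted from the paper) can quantify into the scope of the probability operator and nest it:

```latex
% Quantifying into the scope of the probability operator:
\forall x \,\bigl( \mathrm{Bird}(x) \rightarrow P(\mathrm{Flies}(x)) \geq 0.9 \bigr)

% A higher-order (nested) probability statement:
P\bigl( P(\phi) \geq 0.5 \bigr) = 0.8
```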
|
2301.06132
|
Jinhui Hou
|
Jinhui Hou, Zhiyu Zhu, Junhui Hou, Hui Liu, Huanqiang Zeng, and Deyu
Meng
|
Deep Diversity-Enhanced Feature Representation of Hyperspectral Images
|
17 pages, 12 figures. Accepted in TPAMI 2024. arXiv admin note:
substantial text overlap with arXiv:2207.04266
| null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we study the problem of efficiently and effectively embedding
the high-dimensional spatio-spectral information of hyperspectral (HS) images,
guided by feature diversity. Specifically, based on the theoretical formulation
that feature diversity is correlated with the rank of the unfolded kernel
matrix, we rectify 3D convolution by modifying its topology to enhance the rank
upper-bound. This modification yields a rank-enhanced spatial-spectral
symmetrical convolution set (ReS$^3$-ConvSet), which not only learns diverse
and powerful feature representations but also saves network parameters.
Additionally, we propose a novel diversity-aware regularization (DA-Reg)
term that directly acts on the feature maps to maximize independence among
elements. To demonstrate the superiority of the proposed ReS$^3$-ConvSet and
DA-Reg, we apply them to various HS image processing and analysis tasks,
including denoising, spatial super-resolution, and classification. Extensive
experiments show that the proposed approaches outperform state-of-the-art
methods both quantitatively and qualitatively to a significant extent. The code
is publicly available at https://github.com/jinnh/ReSSS-ConvSet.
|
[
{
"created": "Sun, 15 Jan 2023 16:19:18 GMT",
"version": "v1"
},
{
"created": "Fri, 15 Dec 2023 14:26:39 GMT",
"version": "v2"
},
{
"created": "Thu, 9 May 2024 15:33:35 GMT",
"version": "v3"
}
] |
2024-05-10
|
[
[
"Hou",
"Jinhui",
""
],
[
"Zhu",
"Zhiyu",
""
],
[
"Hou",
"Junhui",
""
],
[
"Liu",
"Hui",
""
],
[
"Zeng",
"Huanqiang",
""
],
[
"Meng",
"Deyu",
""
]
] |
In this paper, we study the problem of efficiently and effectively embedding the high-dimensional spatio-spectral information of hyperspectral (HS) images, guided by feature diversity. Specifically, based on the theoretical formulation that feature diversity is correlated with the rank of the unfolded kernel matrix, we rectify 3D convolution by modifying its topology to enhance the rank upper-bound. This modification yields a rank-enhanced spatial-spectral symmetrical convolution set (ReS$^3$-ConvSet), which not only learns diverse and powerful feature representations but also saves network parameters. Additionally, we propose a novel diversity-aware regularization (DA-Reg) term that directly acts on the feature maps to maximize independence among elements. To demonstrate the superiority of the proposed ReS$^3$-ConvSet and DA-Reg, we apply them to various HS image processing and analysis tasks, including denoising, spatial super-resolution, and classification. Extensive experiments show that the proposed approaches outperform state-of-the-art methods both quantitatively and qualitatively to a significant extent. The code is publicly available at https://github.com/jinnh/ReSSS-ConvSet.
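One simple way to realize a diversity-style penalty that acts directly on feature maps, sketched here as an assumption-laden stand-in rather than the paper's exact DA-Reg term, is to penalize off-diagonal channel correlations:

```python
import torch

def diversity_penalty(feat):
    """Penalize off-diagonal channel correlations of a feature map
    (B, C, H, W), pushing channels toward (near-)independence."""
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    f = f - f.mean(dim=2, keepdim=True)                 # center each channel
    f = f / (f.norm(dim=2, keepdim=True) + 1e-8)        # unit-normalize
    gram = torch.bmm(f, f.transpose(1, 2))              # (B, C, C) correlations
    off_diag = gram - torch.eye(c, device=feat.device)
    return (off_diag ** 2).mean()
```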
|
2202.09938
|
Madhu Kiran
|
Madhu Kiran, Le Thanh Nguyen-Meidine, Rajat Sahay, Rafael Menelau
Oliveira E Cruz, Louis-Antoine Blais-Morin and Eric Granger
|
Generative Target Update for Adaptive Siamese Tracking
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Siamese trackers perform similarity matching with templates (i.e., target
models) to recursively localize objects within a search region. Several
strategies have been proposed in the literature to update a template based on
the tracker output, typically extracted from the target search region in the
current frame, and thereby mitigate the effects of target drift. However, this
may lead to corrupted templates, limiting the potential benefits of a template
update strategy.
This paper proposes a model adaptation method for Siamese trackers that uses
a generative model to produce a synthetic template from the object search
regions of several previous frames, rather than directly using the tracker
output. Since the search region encompasses the target, attention from the
search region is used for robust model adaptation. In particular, our approach
relies on an auto-encoder trained through adversarial learning to detect
changes in a target object's appearance and predict a future target template,
using a set of target templates localized from tracker outputs at previous
frames. To prevent template corruption during the update, the proposed tracker
also performs change detection using the generative model to suspend updates
until the tracker stabilizes, and robust matching can resume through dynamic
template fusion.
Extensive experiments conducted on VOT-16, VOT-17, OTB-50, and OTB-100
datasets highlight the effectiveness of our method, along with the impact of
its key components. Results indicate that our proposed approach can outperform
state-of-the-art trackers, and its overall robustness allows tracking for a longer
time before failure.
|
[
{
"created": "Mon, 21 Feb 2022 00:22:49 GMT",
"version": "v1"
}
] |
2022-02-22
|
[
[
"Kiran",
"Madhu",
""
],
[
"Nguyen-Meidine",
"Le Thanh",
""
],
[
"Sahay",
"Rajat",
""
],
[
"Cruz",
"Rafael Menelau Oliveira E",
""
],
[
"Blais-Morin",
"Louis-Antoine",
""
],
[
"Granger",
"Eric",
""
]
] |
Siamese trackers perform similarity matching with templates (i.e., target models) to recursively localize objects within a search region. Several strategies have been proposed in the literature to update a template based on the tracker output, typically extracted from the target search region in the current frame, and thereby mitigate the effects of target drift. However, this may lead to corrupted templates, limiting the potential benefits of a template update strategy. This paper proposes a model adaptation method for Siamese trackers that uses a generative model to produce a synthetic template from the object search regions of several previous frames, rather than directly using the tracker output. Since the search region encompasses the target, attention from the search region is used for robust model adaptation. In particular, our approach relies on an auto-encoder trained through adversarial learning to detect changes in a target object's appearance and predict a future target template, using a set of target templates localized from tracker outputs at previous frames. To prevent template corruption during the update, the proposed tracker also performs change detection using the generative model to suspend updates until the tracker stabilizes, and robust matching can resume through dynamic template fusion. Extensive experiments conducted on VOT-16, VOT-17, OTB-50, and OTB-100 datasets highlight the effectiveness of our method, along with the impact of its key components. Results indicate that our proposed approach can outperform state-of-the-art trackers, and its overall robustness allows tracking for a longer time before failure.
|
2208.14225
|
Tawfiq Aljohani
|
Tawfiq M. Aljohani
|
Cyberattacks on Energy Infrastructures: Modern War Weapons
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Recent high-profile cyberattacks on energy infrastructures, such as the
security breach of the Colonial Pipeline in 2021 and attacks that have
disrupted Ukraine's power grid from the mid-2010s to the present, have made
cybersecurity a top priority. As political tensions have escalated in Europe
this year, concerns about critical infrastructure security have increased.
Operators in the industrial sector face new cybersecurity threats that increase
the risk of disruptions in services, property damages, and environmental harm.
Amid rising geopolitical tensions, industrial companies, with their
network-connected systems, are now considered major targets for adversaries to
advance political, social, or military agendas. Moreover, the recent
Russian-Ukrainian conflict has raised the alarm worldwide about the danger of
targeting energy grids via cyberattacks. Attack methodologies, techniques, and
procedures used successfully to hack energy grids in Ukraine can be used
elsewhere. This work aims to present a thorough analysis of the cybersecurity
of the energy infrastructure amid the increased rise of cyberwars. The article
navigates through the recent history of energy-related cyberattacks and their
reasoning, discusses the grid's vulnerability, and makes a precautionary
argument for securing the grids against them.
|
[
{
"created": "Sun, 28 Aug 2022 05:19:48 GMT",
"version": "v1"
}
] |
2022-08-31
|
[
[
"Aljohani",
"Tawfiq M.",
""
]
] |
Recent high-profile cyberattacks on energy infrastructures, such as the security breach of the Colonial Pipeline in 2021 and attacks that have disrupted Ukraine's power grid from the mid-2010s to the present, have made cybersecurity a top priority. As political tensions have escalated in Europe this year, concerns about critical infrastructure security have increased. Operators in the industrial sector face new cybersecurity threats that increase the risk of disruptions in services, property damages, and environmental harm. Amid rising geopolitical tensions, industrial companies, with their network-connected systems, are now considered major targets for adversaries to advance political, social, or military agendas. Moreover, the recent Russian-Ukrainian conflict has raised the alarm worldwide about the danger of targeting energy grids via cyberattacks. Attack methodologies, techniques, and procedures used successfully to hack energy grids in Ukraine can be used elsewhere. This work aims to present a thorough analysis of the cybersecurity of the energy infrastructure amid the increased rise of cyberwars. The article navigates through the recent history of energy-related cyberattacks and their reasoning, discusses the grid's vulnerability, and makes a precautionary argument for securing the grids against them.
|
1812.02518
|
Shuman Jia
|
Shuman Jia, Antoine Despinasse, Zihao Wang, Herv\'e Delingette, Xavier
Pennec, Pierre Ja\"is, Hubert Cochet, and Maxime Sermesant
|
Automatically Segmenting the Left Atrium from Cardiac Images Using
Successive 3D U-Nets and a Contour Loss
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Radiological imaging offers effective measurement of anatomy, which is useful
in disease diagnosis and assessment. Previous studies have shown that left
atrial wall remodeling can provide information to predict treatment outcome in
atrial fibrillation. Nevertheless, the segmentation of the left atrial
structures from medical images is still very time-consuming. Recent advances
in neural networks may help create automatic segmentation models that reduce
the workload for clinicians. In this preliminary study, we propose automated,
two-stage, three-dimensional convolutional U-Nets for the challenging task of
left atrial segmentation. Unlike previous two-dimensional image segmentation
methods, we use 3D U-Nets to obtain the heart cavity directly in 3D. The dual
3D U-Net structure consists of a first U-Net to coarsely segment and locate
the left atrium, and a second U-Net to accurately segment the left atrium at
higher resolution. In addition, we introduce a Contour loss based on
additional distance information to adjust the final segmentation. We randomly
split the data into training datasets (80 subjects) and validation datasets
(20 subjects) to train multiple models with different augmentation settings.
Experiments show that the average Dice coefficients for the validation
datasets are around 0.91-0.92, the sensitivity around 0.90-0.94, and the
specificity 0.99. Compared with the traditional Dice loss, models trained
with the Contour loss in general offer smaller Hausdorff distances with
similar Dice coefficients and have fewer connected components in their
predictions. Finally, we integrate several trained models in an ensemble
prediction to segment the testing datasets.
|
[
{
"created": "Thu, 6 Dec 2018 13:34:24 GMT",
"version": "v1"
}
] |
2018-12-07
|
[
[
"Jia",
"Shuman",
""
],
[
"Despinasse",
"Antoine",
""
],
[
"Wang",
"Zihao",
""
],
[
"Delingette",
"Hervé",
""
],
[
"Pennec",
"Xavier",
""
],
[
"Jaïs",
"Pierre",
""
],
[
"Cochet",
"Hubert",
""
],
[
"Sermesant",
"Maxime",
""
]
] |
Radiological imaging offers effective measurement of anatomy, which is useful in disease diagnosis and assessment. Previous studies have shown that left atrial wall remodeling can provide information to predict treatment outcome in atrial fibrillation. Nevertheless, the segmentation of the left atrial structures from medical images is still very time-consuming. Recent advances in neural networks may help create automatic segmentation models that reduce the workload for clinicians. In this preliminary study, we propose automated, two-stage, three-dimensional convolutional U-Nets for the challenging task of left atrial segmentation. Unlike previous two-dimensional image segmentation methods, we use 3D U-Nets to obtain the heart cavity directly in 3D. The dual 3D U-Net structure consists of a first U-Net to coarsely segment and locate the left atrium, and a second U-Net to accurately segment the left atrium at higher resolution. In addition, we introduce a Contour loss based on additional distance information to adjust the final segmentation. We randomly split the data into training datasets (80 subjects) and validation datasets (20 subjects) to train multiple models with different augmentation settings. Experiments show that the average Dice coefficients for the validation datasets are around 0.91-0.92, the sensitivity around 0.90-0.94, and the specificity 0.99. Compared with the traditional Dice loss, models trained with the Contour loss in general offer smaller Hausdorff distances with similar Dice coefficients and have fewer connected components in their predictions. Finally, we integrate several trained models in an ensemble prediction to segment the testing datasets.
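A common construction for a distance-based contour penalty, given below as a generic sketch (not necessarily the authors' exact Contour loss), weights the predicted probabilities by a signed distance map of the ground truth so that errors far from the true boundary cost more:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def contour_loss(pred_prob, gt_mask):
    """pred_prob: predicted foreground probabilities; gt_mask: binary
    ground truth of the same shape. Positive distances outside the target
    region and negative distances inside weight the prediction."""
    dist_out = distance_transform_edt(1 - gt_mask)   # distance to the region
    dist_in = distance_transform_edt(gt_mask)        # distance to background
    signed_dist = dist_out - dist_in
    return float(np.mean(pred_prob * signed_dist))
```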
|
2012.01964
|
Vaishali Kansal
|
Vaishali Kansal and Mayank Dave
|
Proactive DDoS Attack Mitigation in Cloud-Fog Environment using Moving
Target Defense
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Distributed Denial of Service (DDoS) attacks are serious cyber attacks, and
mitigating DDoS attacks in the cloud is a topic of ongoing research interest
that remains a major security challenge. Fog computing is an extension of
cloud computing that has been used to secure the cloud. Moving Target Defense
(MTD) is a newly recognized, proactive security defense that can be used to
mitigate DDoS attacks on the cloud. MTD intends to make a system dynamic in
nature and uncertain by continuously changing the attack surface to confuse
attackers. In this paper, a novel DDoS mitigation framework is presented to
support the Cloud-Fog Platform using the MTD technique (CFPM). CFPM applies a
migration MTD technique at the fog layer to mitigate DDoS attacks in the
cloud. It proactively detects an attacker among all the clients at the fog
layer and isolates it from innocent clients. CFPM uses an effective
request-handling procedure for load balancing and an attacker-isolation
procedure that aims to minimize disruption to the cloud server as well as the
serving fog servers. In addition, the effectiveness of CFPM is evaluated by
analyzing the behavior of the system before and after an attack, considering
different possible scenarios. This approach is effective as it leverages the
advantages of both the MTD technique and the fog computing paradigm
supporting the cloud environment.
|
[
{
"created": "Thu, 3 Dec 2020 14:37:12 GMT",
"version": "v1"
}
] |
2020-12-04
|
[
[
"Kansal",
"Vaishali",
""
],
[
"Dave",
"Mayank",
""
]
] |
Distributed Denial of Service (DDoS) attacks are serious cyber attacks, and mitigating DDoS attacks in the cloud is a topic of ongoing research interest that remains a major security challenge. Fog computing is an extension of cloud computing that has been used to secure the cloud. Moving Target Defense (MTD) is a newly recognized, proactive security defense that can be used to mitigate DDoS attacks on the cloud. MTD intends to make a system dynamic in nature and uncertain by continuously changing the attack surface to confuse attackers. In this paper, a novel DDoS mitigation framework is presented to support the Cloud-Fog Platform using the MTD technique (CFPM). CFPM applies a migration MTD technique at the fog layer to mitigate DDoS attacks in the cloud. It proactively detects an attacker among all the clients at the fog layer and isolates it from innocent clients. CFPM uses an effective request-handling procedure for load balancing and an attacker-isolation procedure that aims to minimize disruption to the cloud server as well as the serving fog servers. In addition, the effectiveness of CFPM is evaluated by analyzing the behavior of the system before and after an attack, considering different possible scenarios. This approach is effective as it leverages the advantages of both the MTD technique and the fog computing paradigm supporting the cloud environment.
|
1705.05627
|
Ryan Henderson
|
Ryan Henderson and Rasmus Rothe
|
Picasso: A Modular Framework for Visualizing the Learning Process of
Neural Network Image Classifiers
|
9 pages, submission to the Journal of Open Research Software,
github.com/merantix/picasso
|
Journal of Open Research Software. 5(1), p.22 (2017)
|
10.5334/jors.178
| null |
cs.CV cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Picasso is a free open-source (Eclipse Public License) web application
written in Python for rendering standard visualizations useful for analyzing
convolutional neural networks. Picasso ships with occlusion maps and saliency
maps, two visualizations which help reveal issues that evaluation metrics like
loss and accuracy might hide: for example, learning a proxy classification
task. Picasso works with the Tensorflow deep learning framework, and Keras
(when the model can be loaded into the Tensorflow backend). Picasso can be used
with minimal configuration by deep learning researchers and engineers alike
across various neural network architectures. Adding new visualizations is
simple: the user can specify their visualization code and HTML template
separately from the application code.
|
[
{
"created": "Tue, 16 May 2017 10:06:19 GMT",
"version": "v1"
},
{
"created": "Thu, 20 Jul 2017 16:22:49 GMT",
"version": "v2"
},
{
"created": "Mon, 11 Sep 2017 12:35:18 GMT",
"version": "v3"
}
] |
2017-09-12
|
[
[
"Henderson",
"Ryan",
""
],
[
"Rothe",
"Rasmus",
""
]
] |
Picasso is a free open-source (Eclipse Public License) web application written in Python for rendering standard visualizations useful for analyzing convolutional neural networks. Picasso ships with occlusion maps and saliency maps, two visualizations which help reveal issues that evaluation metrics like loss and accuracy might hide: for example, learning a proxy classification task. Picasso works with the Tensorflow deep learning framework, and Keras (when the model can be loaded into the Tensorflow backend). Picasso can be used with minimal configuration by deep learning researchers and engineers alike across various neural network architectures. Adding new visualizations is simple: the user can specify their visualization code and HTML template separately from the application code.
|
2012.04522
|
Yingkai Li
|
Jiarui Gan, Bo Li, Yingkai Li
|
Your College Dorm and Dormmates: Fair Resource Sharing with
Externalities
|
accepted in JAIR 2023
|
Journal.of.Artificial.Intelligence.Research.77(2023)793-820
|
10.1613/jair.1.14863
| null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study a fair resource sharing problem, where a set of resources are to be
shared among a group of agents. Each agent demands one resource and each
resource can serve a limited number of agents. An agent cares about what
resource they get as well as the externalities imposed by their mates, who
share the same resource with them. Clearly, the strong notion of envy-freeness,
where no agent envies another for their resource or mates, cannot always be
achieved and we show that even deciding the existence of such a strongly
envy-free assignment is an intractable problem. Hence, a more interesting
question is whether (and in what situations) a relaxed notion of envy-freeness,
the Pareto envy-freeness, can be achieved. Under this relaxed notion, an agent
envies another only when they envy both the resource and the mates of the other
agent. In particular, we are interested in a dorm assignment problem, where
students are to be assigned to dorms with the same capacity and they have
dichotomous preference over their dormmates. We show that when the capacity of
each dorm is 2, a Pareto envy-free assignment always exists and we present a
polynomial-time algorithm to compute such an assignment. Nevertheless, the
result breaks immediately when the capacity increases to 3, in which case even
Pareto envy-freeness cannot be guaranteed. In addition to the existential
results, we also investigate the utility guarantees of (Pareto) envy-free
assignments in our model.
|
[
{
"created": "Tue, 8 Dec 2020 16:11:17 GMT",
"version": "v1"
},
{
"created": "Thu, 13 Jul 2023 15:59:59 GMT",
"version": "v2"
}
] |
2023-07-14
|
[
[
"Gan",
"Jiarui",
""
],
[
"Li",
"Bo",
""
],
[
"Li",
"Yingkai",
""
]
] |
We study a fair resource sharing problem, where a set of resources are to be shared among a group of agents. Each agent demands one resource and each resource can serve a limited number of agents. An agent cares about what resource they get as well as the externalities imposed by their mates, who share the same resource with them. Clearly, the strong notion of envy-freeness, where no agent envies another for their resource or mates, cannot always be achieved and we show that even deciding the existence of such a strongly envy-free assignment is an intractable problem. Hence, a more interesting question is whether (and in what situations) a relaxed notion of envy-freeness, the Pareto envy-freeness, can be achieved. Under this relaxed notion, an agent envies another only when they envy both the resource and the mates of the other agent. In particular, we are interested in a dorm assignment problem, where students are to be assigned to dorms with the same capacity and they have dichotomous preferences over their dormmates. We show that when the capacity of each dorm is 2, a Pareto envy-free assignment always exists and we present a polynomial-time algorithm to compute such an assignment. Nevertheless, the result breaks immediately when the capacity increases to 3, in which case even Pareto envy-freeness cannot be guaranteed. In addition to the existential results, we also investigate the utility guarantees of (Pareto) envy-free assignments in our model.
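A brute-force checker for the Pareto envy-freeness notion above (a naive illustration with assumed data structures, not the paper's polynomial-time algorithm) could look like:

```python
from itertools import combinations

def is_pareto_envy_free(assign, room_value, likes):
    """assign: agent -> room; room_value: agent -> {room: value};
    likes: agent -> {agent: bool} (dichotomous mate preferences).
    Agent i Pareto-envies j only if i strictly prefers both j's room
    and j's set of roommates."""
    rooms = {}
    for agent, room in assign.items():
        rooms.setdefault(room, set()).add(agent)

    def mate_utility(i, room, exclude):
        # Mates i would have in `room` after replacing `exclude`.
        return sum(1 for m in rooms[room] - {exclude} if likes[i][m])

    for i, j in combinations(assign, 2):
        ri, rj = assign[i], assign[j]
        if ri == rj:
            continue
        if (room_value[i][rj] > room_value[i][ri]
                and mate_utility(i, rj, j) > mate_utility(i, ri, i)):
            return False
        if (room_value[j][ri] > room_value[j][rj]
                and mate_utility(j, ri, i) > mate_utility(j, rj, j)):
            return False
    return True
```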
|
2405.11338
|
Danli Shi
|
Danli Shi, Weiyi Zhang, Xiaolan Chen, Yexin Liu, Jiancheng Yang, Siyu
Huang, Yih Chung Tham, Yingfeng Zheng, Mingguang He
|
EyeFound: A Multimodal Generalist Foundation Model for Ophthalmic
Imaging
|
21 pages, 2 figures, 4 tables
| null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Artificial intelligence (AI) is vital in ophthalmology, tackling tasks like
diagnosis, classification, and visual question answering (VQA). However,
existing AI models in this domain often require extensive annotation and are
task-specific, limiting their clinical utility. While recent developments have
brought about foundation models for ophthalmology, they are limited by the need
to train separate weights for each imaging modality, preventing a comprehensive
representation of multi-modal features. This highlights the need for versatile
foundation models capable of handling various tasks and modalities in
ophthalmology. To address this gap, we present EyeFound, a multimodal
foundation model for ophthalmic images. Unlike existing models, EyeFound learns
generalizable representations from unlabeled multimodal retinal images,
enabling efficient model adaptation across multiple applications. Trained on
2.78 million images from 227 hospitals across 11 ophthalmic modalities,
EyeFound facilitates generalist representations and diverse multimodal
downstream tasks, even for detecting challenging rare diseases. It outperforms
previous work RETFound in diagnosing eye diseases, predicting systemic disease
incidents, and zero-shot multimodal VQA. EyeFound provides a generalizable
solution to improve model performance and lessen the annotation burden on
experts, facilitating widespread clinical AI applications for retinal imaging.
|
[
{
"created": "Sat, 18 May 2024 17:03:39 GMT",
"version": "v1"
},
{
"created": "Wed, 22 May 2024 02:21:07 GMT",
"version": "v2"
}
] |
2024-05-24
|
[
[
"Shi",
"Danli",
""
],
[
"Zhang",
"Weiyi",
""
],
[
"Chen",
"Xiaolan",
""
],
[
"Liu",
"Yexin",
""
],
[
"Yang",
"Jiancheng",
""
],
[
"Huang",
"Siyu",
""
],
[
"Tham",
"Yih Chung",
""
],
[
"Zheng",
"Yingfeng",
""
],
[
"He",
"Mingguang",
""
]
] |
Artificial intelligence (AI) is vital in ophthalmology, tackling tasks like diagnosis, classification, and visual question answering (VQA). However, existing AI models in this domain often require extensive annotation and are task-specific, limiting their clinical utility. While recent developments have brought about foundation models for ophthalmology, they are limited by the need to train separate weights for each imaging modality, preventing a comprehensive representation of multi-modal features. This highlights the need for versatile foundation models capable of handling various tasks and modalities in ophthalmology. To address this gap, we present EyeFound, a multimodal foundation model for ophthalmic images. Unlike existing models, EyeFound learns generalizable representations from unlabeled multimodal retinal images, enabling efficient model adaptation across multiple applications. Trained on 2.78 million images from 227 hospitals across 11 ophthalmic modalities, EyeFound facilitates generalist representations and diverse multimodal downstream tasks, even for detecting challenging rare diseases. It outperforms previous work RETFound in diagnosing eye diseases, predicting systemic disease incidents, and zero-shot multimodal VQA. EyeFound provides a generalizable solution to improve model performance and lessen the annotation burden on experts, facilitating widespread clinical AI applications for retinal imaging.
|
2310.01684
|
Asiful Arefeen
|
Asiful Arefeen and Hassan Ghasemzadeh
|
Designing User-Centric Behavioral Interventions to Prevent Dysglycemia
with Novel Counterfactual Explanations
| null | null | null | null |
cs.AI cs.HC cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Maintaining normal blood glucose levels through lifestyle behaviors is
central to maintaining health and preventing disease. Frequent exposure to
dysglycemia (i.e., abnormal glucose events such as hyperglycemia and
hypoglycemia) leads to chronic complications including diabetes, kidney disease
and need for dialysis, myocardial infarction, stroke, amputation, and death.
Therefore, a tool capable of predicting dysglycemia and offering users
actionable feedback about how to make changes in their diet, exercise, and
medication to prevent abnormal glycemic events could have significant societal
impacts. Counterfactual explanations can provide insights into why a model made
a particular prediction by generating hypothetical instances that are similar
to the original input but lead to a different prediction outcome. Therefore,
counterfactuals can be viewed as a means to design AI-driven health
interventions to prevent adverse health outcomes such as dysglycemia. In this
paper, we design GlyCoach, a framework for generating counterfactual
explanations for glucose control. Leveraging insights from adversarial
learning, GlyCoach characterizes the decision boundary for high-dimensional
health data and performs a grid search to generate actionable interventions.
GlyCoach is unique in integrating prior knowledge about user preferences of
plausible explanations into the process of counterfactual generation. We
evaluate GlyCoach extensively using two real-world datasets and external
simulators from prior studies that predict glucose response. GlyCoach achieves
87\% sensitivity in the simulation-aided validation, surpassing the
state-of-the-art techniques for generating counterfactual explanations by at
least $10\%$. Moreover, counterfactuals from GlyCoach exhibit a $32\%$
improvement in normalized distance compared to previous research.
|
[
{
"created": "Mon, 2 Oct 2023 22:42:52 GMT",
"version": "v1"
}
] |
2023-10-04
|
[
[
"Arefeen",
"Asiful",
""
],
[
"Ghasemzadeh",
"Hassan",
""
]
] |
Maintaining normal blood glucose levels through lifestyle behaviors is central to maintaining health and preventing disease. Frequent exposure to dysglycemia (i.e., abnormal glucose events such as hyperglycemia and hypoglycemia) leads to chronic complications including diabetes, kidney disease and need for dialysis, myocardial infarction, stroke, amputation, and death. Therefore, a tool capable of predicting dysglycemia and offering users actionable feedback about how to make changes in their diet, exercise, and medication to prevent abnormal glycemic events could have significant societal impacts. Counterfactual explanations can provide insights into why a model made a particular prediction by generating hypothetical instances that are similar to the original input but lead to a different prediction outcome. Therefore, counterfactuals can be viewed as a means to design AI-driven health interventions to prevent adverse health outcomes such as dysglycemia. In this paper, we design GlyCoach, a framework for generating counterfactual explanations for glucose control. Leveraging insights from adversarial learning, GlyCoach characterizes the decision boundary for high-dimensional health data and performs a grid search to generate actionable interventions. GlyCoach is unique in integrating prior knowledge about user preferences of plausible explanations into the process of counterfactual generation. We evaluate GlyCoach extensively using two real-world datasets and external simulators from prior studies that predict glucose response. GlyCoach achieves 87\% sensitivity in the simulation-aided validation, surpassing the state-of-the-art techniques for generating counterfactual explanations by at least $10\%$. Moreover, counterfactuals from GlyCoach exhibit a $32\%$ improvement in normalized distance compared to previous research.
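A stripped-down version of grid-search counterfactual generation, written here as a generic sketch (GlyCoach itself adds a learned decision-boundary characterization and user-preference constraints), returns the closest grid candidate that flips the prediction:

```python
import itertools
import numpy as np

def grid_counterfactual(x0, predict, feature_grids, desired=0):
    """x0: original feature vector; predict: trained classifier returning a
    label; feature_grids: per-feature lists of candidate values. Returns the
    candidate closest to x0 (L2) whose prediction equals `desired`."""
    best, best_dist = None, np.inf
    for cand in itertools.product(*feature_grids):
        cand = np.asarray(cand, dtype=float)
        if predict(cand) == desired:
            d = np.linalg.norm(cand - x0)
            if d < best_dist:
                best, best_dist = cand, d
    return best
```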
|
2011.04044
|
Yufei Feng
|
Yufei Feng, Zi'ou Zheng, Quan Liu, Michael Greenspan, Xiaodan Zhu
|
Exploring End-to-End Differentiable Natural Logic Modeling
|
10 pages
|
COLING 2020
| null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We explore end-to-end trained differentiable models that integrate natural
logic with neural networks, aiming to keep the backbone of natural language
reasoning based on the natural logic formalism while introducing subsymbolic
vector representations and neural components. The proposed model adapts module
networks to model natural logic operations, and is enhanced with a memory
component to model contextual information. Experiments show that the proposed
framework can effectively model monotonicity-based reasoning, compared to the
baseline neural network models without built-in inductive bias for
monotonicity-based reasoning. Our proposed model is shown to be robust when
transferred from upward to downward inference. We perform further analyses on
the performance of the proposed model on aggregation, showing the effectiveness
of the proposed subcomponents in helping achieve better intermediate
aggregation performance.
|
[
{
"created": "Sun, 8 Nov 2020 18:18:15 GMT",
"version": "v1"
}
] |
2020-11-11
|
[
[
"Feng",
"Yufei",
""
],
[
"Zheng",
"Zi'ou",
""
],
[
"Liu",
"Quan",
""
],
[
"Greenspan",
"Michael",
""
],
[
"Zhu",
"Xiaodan",
""
]
] |
We explore end-to-end trained differentiable models that integrate natural logic with neural networks, aiming to keep the backbone of natural language reasoning based on the natural logic formalism while introducing subsymbolic vector representations and neural components. The proposed model adapts module networks to model natural logic operations, and is enhanced with a memory component to model contextual information. Experiments show that the proposed framework can effectively model monotonicity-based reasoning, compared to the baseline neural network models without built-in inductive bias for monotonicity-based reasoning. Our proposed model is shown to be robust when transferred from upward to downward inference. We perform further analyses on the performance of the proposed model on aggregation, showing the effectiveness of the proposed subcomponents in helping achieve better intermediate aggregation performance.
|
2301.12344
|
Xuchen Liu
|
Xuchen Liu (1 and 2), Minghao Dou (1 and 2), Dongyue Huang (1 and 2),
Biao Wang (3 and 4), Jinqiang Cui (4), Qinyuan Ren (5 and 4), Lihua Dou (6),
Zhi Gao (7), Jie Chen (1) and Ben M. Chen (2) ((1) Shanghai Research
Institute for Intelligent Autonomous Systems, Tongji University, Shanghai,
China, (2) Department of Mechanical and Automation Engineering, the Chinese
University of Hong Kong, Hong Kong, China, (3) College of Automation
Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing,
China, (4) Peng Cheng Laboratory, Shenzhen, China, (5) College of Control
Science and Engineering, Zhejiang University, Hangzhou, China, (6) School of
Automation, Beijing Institute of Technology, Beijing, China, (7) School of
Remote Sensing and Information Engineering, Wuhan University, Wuhan, China)
|
TJ-FlyingFish: Design and Implementation of an Aerial-Aquatic Quadrotor
with Tiltable Propulsion Units
|
6 pages, 9 figures, accepted to 2023 IEEE International Conference on
Robotics and Automation (ICRA)
| null | null | null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Aerial-aquatic vehicles are capable of moving in the two most dominant fluids,
making them promising for a wide range of applications. We propose a
prototype with special designs for propulsion and thruster configuration to
cope with the vast differences in the fluid properties of water and air. For
propulsion, the operating range is switched for the different mediums by the
dual-speed propulsion unit, providing sufficient thrust and also ensuring
output efficiency. For thruster configuration, thrust vectoring is realized by
the rotation of the propulsion unit around the mount arm, thus enhancing the
underwater maneuverability. This paper presents a quadrotor prototype of this
concept, together with its design details and practical realization.
|
[
{
"created": "Sun, 29 Jan 2023 03:54:05 GMT",
"version": "v1"
},
{
"created": "Tue, 7 Feb 2023 02:49:27 GMT",
"version": "v2"
}
] |
2023-02-08
|
[
[
"Liu",
"Xuchen",
"",
"1 and 2"
],
[
"Dou",
"Minghao",
"",
"1 and 2"
],
[
"Huang",
"Dongyue",
"",
"1 and 2"
],
[
"Wang",
"Biao",
"",
"3 and 4"
],
[
"Cui",
"Jinqiang",
"",
"5 and 4"
],
[
"Ren",
"Qinyuan",
"",
"5 and 4"
],
[
"Dou",
"Lihua",
""
],
[
"Gao",
"Zhi",
""
],
[
"Chen",
"Jie",
""
],
[
"Chen",
"Ben M.",
""
]
] |
Aerial-aquatic vehicles are capable of moving in the two most dominant fluids, making them promising for a wide range of applications. We propose a prototype with special designs for propulsion and thruster configuration to cope with the vast differences in the fluid properties of water and air. For propulsion, the operating range is switched for the different mediums by the dual-speed propulsion unit, providing sufficient thrust and also ensuring output efficiency. For thruster configuration, thrust vectoring is realized by the rotation of the propulsion unit around the mount arm, thus enhancing the underwater maneuverability. This paper presents a quadrotor prototype of this concept, together with its design details and practical realization.
|
1502.06260
|
Xin Yuan
|
Xin Yuan, Tsung-Han Tsai, Ruoyu Zhu, Patrick Llull, David Brady,
Lawrence Carin
|
Compressive Hyperspectral Imaging with Side Information
|
20 pages, 21 figures. To appear in the IEEE Journal of Selected
Topics Signal Processing
| null |
10.1109/JSTSP.2015.2411575
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A blind compressive sensing algorithm is proposed to reconstruct
hyperspectral images from spectrally compressed measurements. The
wavelength-dependent data are coded and then superposed, mapping the
three-dimensional hyperspectral datacube to a two-dimensional image. The
inversion algorithm learns a dictionary {\em in situ} from the measurements via
global-local shrinkage priors. By using RGB images as side information of the
compressive sensing system, the proposed approach is extended to learn a
coupled dictionary from the joint dataset of the compressed measurements and
the corresponding RGB images, to improve reconstruction quality. A prototype
camera is built using a liquid-crystal-on-silicon modulator. Experimental
reconstructions of hyperspectral datacubes from both simulated and real
compressed measurements demonstrate the efficacy of the proposed inversion
algorithm, the feasibility of the camera and the benefit of side information.
|
[
{
"created": "Sun, 22 Feb 2015 19:10:31 GMT",
"version": "v1"
}
] |
2015-10-28
|
[
[
"Yuan",
"Xin",
""
],
[
"Tsai",
"Tsung-Han",
""
],
[
"Zhu",
"Ruoyu",
""
],
[
"Llull",
"Patrick",
""
],
[
"Brady",
"David",
""
],
[
"Carin",
"Lawrence",
""
]
] |
A blind compressive sensing algorithm is proposed to reconstruct hyperspectral images from spectrally compressed measurements. The wavelength-dependent data are coded and then superposed, mapping the three-dimensional hyperspectral datacube to a two-dimensional image. The inversion algorithm learns a dictionary {\em in situ} from the measurements via global-local shrinkage priors. By using RGB images as side information of the compressive sensing system, the proposed approach is extended to learn a coupled dictionary from the joint dataset of the compressed measurements and the corresponding RGB images, to improve reconstruction quality. A prototype camera is built using a liquid-crystal-on-silicon modulator. Experimental reconstructions of hyperspectral datacubes from both simulated and real compressed measurements demonstrate the efficacy of the proposed inversion algorithm, the feasibility of the camera and the benefit of side information.
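The code-then-superpose measurement described above can be sketched in a few lines; the sketch ignores the spectral shearing a real dispersive system introduces and simply modulates each band with a per-band binary code before summing:

```python
import numpy as np

def coded_measurement(cube, code):
    """cube: (H, W, L) hyperspectral datacube; code: (H, W, L) binary
    per-band coded aperture. Returns the (H, W) compressed snapshot."""
    return np.sum(cube * code, axis=2)

rng = np.random.default_rng(0)
cube = rng.random((64, 64, 31))              # 31 spectral bands
code = rng.integers(0, 2, size=cube.shape)   # random per-band binary code
y = coded_measurement(cube, code)            # single 2-D measurement
```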
|
2306.07842
|
Guangtao Lyu
|
Guangtao Lyu, Anna Zhu
|
PSSTRNet: Progressive Segmentation-guided Scene Text Removal Network
|
Accepted by ICME2022
|
2022 IEEE International Conference on Multimedia and Expo (ICME)
|
10.1109/ICME52920.2022.9859792
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Scene text removal (STR) is a challenging task due to the complex text fonts,
colors, sizes, and background textures in scene images. However, most previous
methods learn both text location and background inpainting implicitly within a
single network, which weakens the text localization mechanism and makes a lossy
background. To tackle these problems, we propose a simple Progressive
Segmentation-guided Scene Text Removal Network (PSSTRNet) to remove the text in
the image iteratively. It contains two decoder branches, a text segmentation
branch, and a text removal branch, with a shared encoder. The text segmentation
branch generates text mask maps as the guidance for the regional removal
branch. In each iteration, the original image, previous text removal result,
and text mask are input to the network to extract the rest part of the text
segments and cleaner text removal result. To get a more accurate text mask map,
an update module is developed to merge the mask map in the current and previous
stages. The final text removal result is obtained by adaptive fusion of results
from all previous stages. A sufficient number of experiments and ablation
studies conducted on the real and synthetic public datasets demonstrate our
proposed method achieves state-of-the-art performance. The source code of our
work is available at:
\href{https://github.com/GuangtaoLyu/PSSTRNet}{https://github.com/GuangtaoLyu/PSSTRNet.}
|
[
{
"created": "Tue, 13 Jun 2023 15:20:37 GMT",
"version": "v1"
}
] |
2023-06-14
|
[
[
"Lyu",
"Guangtao",
""
],
[
"Zhu",
"Anna",
""
]
] |
Scene text removal (STR) is a challenging task due to the complex text fonts, colors, sizes, and background textures in scene images. However, most previous methods learn both text location and background inpainting implicitly within a single network, which weakens the text localization mechanism and degrades the background. To tackle these problems, we propose a simple Progressive Segmentation-guided Scene Text Removal Network (PSSTRNet) to remove the text in the image iteratively. It contains a shared encoder and two decoder branches: a text segmentation branch and a text removal branch. The text segmentation branch generates text mask maps as the guidance for the regional removal branch. In each iteration, the original image, the previous text removal result, and the text mask are input to the network to extract the remaining text segments and a cleaner text removal result. To get a more accurate text mask map, an update module is developed to merge the mask maps from the current and previous stages. The final text removal result is obtained by adaptive fusion of the results from all previous stages. Extensive experiments and ablation studies conducted on real and synthetic public datasets demonstrate that our proposed method achieves state-of-the-art performance. The source code of our work is available at: \href{https://github.com/GuangtaoLyu/PSSTRNet}{https://github.com/GuangtaoLyu/PSSTRNet}.
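A schematic sketch of the iterative, segmentation-guided removal loop described above, in PyTorch. The single-conv stand-ins for the encoder and the two decoder branches, the element-wise-max mask update, and the plain-average fusion are simplifying assumptions; the real PSSTRNet architecture is in the linked repository.

```python
import torch
import torch.nn as nn

def psstr_style_iteration(image, encoder, seg_head, removal_head, n_iters=3):
    """Illustrative loop only: shared encoder, two decoder branches,
    a mask update step, and fusion of stage results (all simplified)."""
    removal = image.clone()
    mask = torch.zeros_like(image[:, :1])          # running text-mask estimate
    stage_results = []
    for _ in range(n_iters):
        # Original image, previous removal result, and current mask go in.
        feats = encoder(torch.cat([image, removal, mask], dim=1))
        new_mask = torch.sigmoid(seg_head(feats))  # text segmentation branch
        removal = removal_head(feats)              # text removal branch
        # Update module, here just an element-wise max of old and new masks.
        mask = torch.maximum(mask, new_mask)
        stage_results.append(removal)
    # Adaptive fusion, here approximated by a plain average of all stages.
    return torch.stack(stage_results).mean(dim=0), mask

# Toy single-conv stand-ins for the real sub-networks.
enc = nn.Conv2d(7, 8, 3, padding=1)    # 3 (image) + 3 (removal) + 1 (mask)
seg = nn.Conv2d(8, 1, 3, padding=1)
rem = nn.Conv2d(8, 3, 3, padding=1)
out, m = psstr_style_iteration(torch.rand(1, 3, 64, 64), enc, seg, rem)
```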
|
1809.03531
|
Guillaume Sartoretti
|
Guillaume Sartoretti, Justin Kerr, Yunfei Shi, Glenn Wagner, T. K.
Satish Kumar, Sven Koenig, and Howie Choset
|
PRIMAL: Pathfinding via Reinforcement and Imitation Multi-Agent Learning
|
\c{opyright} 20XX IEEE. Personal use of this material is permitted.
Permission from IEEE must be obtained for all other uses, in any current or
future media, including reprinting/republishing this material for advertising
or promotional purposes, creating new collective works, for resale or
redistribution to servers or lists, or reuse of any copyrighted component of
this work in other works
| null |
10.1109/LRA.2019.2903261
| null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-agent path finding (MAPF) is an essential component of many
large-scale, real-world robot deployments, from aerial swarms to warehouse
automation. However, despite the community's continued efforts, most
state-of-the-art MAPF planners still rely on centralized planning and scale
poorly past a few hundred agents. Such planning approaches are maladapted to
real-world deployments, where noise and uncertainty often require paths be
recomputed online, which is impossible when planning times are in seconds to
minutes. We present PRIMAL, a novel framework for MAPF that combines
reinforcement and imitation learning to teach fully-decentralized policies,
where agents reactively plan paths online in a partially-observable world while
exhibiting implicit coordination. This framework extends our previous work on
distributed learning of collaborative policies by introducing demonstrations of
an expert MAPF planner during training, as well as careful reward shaping and
environment sampling. Once learned, the resulting policy can be copied onto any
number of agents and naturally scales to different team sizes and world
dimensions. We present results on randomized worlds with up to 1024 agents and
compare success rates against state-of-the-art MAPF planners. Finally, we
experimentally validate the learned policies in a hybrid simulation of a
factory mockup, involving both real-world and simulated robots.
|
[
{
"created": "Mon, 10 Sep 2018 18:18:03 GMT",
"version": "v1"
},
{
"created": "Mon, 18 Feb 2019 18:20:28 GMT",
"version": "v2"
},
{
"created": "Wed, 20 Feb 2019 23:56:34 GMT",
"version": "v3"
}
] |
2021-02-02
|
[
[
"Sartoretti",
"Guillaume",
""
],
[
"Kerr",
"Justin",
""
],
[
"Shi",
"Yunfei",
""
],
[
"Wagner",
"Glenn",
""
],
[
"Kumar",
"T. K. Satish",
""
],
[
"Koenig",
"Sven",
""
],
[
"Choset",
"Howie",
""
]
] |
Multi-agent path finding (MAPF) is an essential component of many large-scale, real-world robot deployments, from aerial swarms to warehouse automation. However, despite the community's continued efforts, most state-of-the-art MAPF planners still rely on centralized planning and scale poorly past a few hundred agents. Such planning approaches are maladapted to real-world deployments, where noise and uncertainty often require paths be recomputed online, which is impossible when planning times are in seconds to minutes. We present PRIMAL, a novel framework for MAPF that combines reinforcement and imitation learning to teach fully-decentralized policies, where agents reactively plan paths online in a partially-observable world while exhibiting implicit coordination. This framework extends our previous work on distributed learning of collaborative policies by introducing demonstrations of an expert MAPF planner during training, as well as careful reward shaping and environment sampling. Once learned, the resulting policy can be copied onto any number of agents and naturally scales to different team sizes and world dimensions. We present results on randomized worlds with up to 1024 agents and compare success rates against state-of-the-art MAPF planners. Finally, we experimentally validate the learned policies in a hybrid simulation of a factory mockup, involving both real-world and simulated robots.
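A toy sketch of the decentralized execution pattern PRIMAL relies on: one shared learned policy is copied onto every agent, and each agent acts from its own local observation with no central planner. The 1D world, the greedy stand-in policy, and the absence of collision handling are all simplifications, not PRIMAL's actual network or gridworld.

```python
import numpy as np

def shared_policy(local_obs):
    """Stand-in for the trained network: step one unit toward the goal.
    Real PRIMAL observes a local field of view plus a goal direction."""
    pos, goal = local_obs
    return int(np.sign(goal - pos))

def decentralized_rollout(positions, goals, horizon=50):
    """The same policy is replicated across agents, so it naturally
    scales to any team size."""
    for _ in range(horizon):
        actions = [shared_policy((p, g)) for p, g in zip(positions, goals)]
        positions = [p + a for p, a in zip(positions, actions)]
        if positions == goals:
            break
    return positions

print(decentralized_rollout([0, 9, 4], [5, 2, 4]))  # -> [5, 2, 4]
```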
|
cs/0308040
|
Sanatan Rai
|
Sanatan Rai
|
Open source software and peer review
|
4 pages
| null | null | null |
cs.SE cs.CY
| null |
We compare the open source model of software development to peer review in
academia.
|
[
{
"created": "Sat, 23 Aug 2003 21:11:41 GMT",
"version": "v1"
},
{
"created": "Sun, 7 Sep 2003 18:20:11 GMT",
"version": "v2"
}
] |
2007-05-23
|
[
[
"Rai",
"Sanatan",
""
]
] |
We compare the open source model of software development to peer review in academia.
|
2109.13803
|
Hongwei Zhu
|
Hongwei Zhu, Minjia Shi, Xiaoqiang Wang, Tor Helleseth
|
The $q$-ary antiprimitive BCH codes
|
This manuscript was first submitted to IEEE Trans. Inf. Theory on 6
April 2021
| null | null | null |
cs.IT math.IT math.NT
|
http://creativecommons.org/licenses/by/4.0/
|
It is well-known that cyclic codes have efficient encoding and decoding
algorithms. In recent years, antiprimitive BCH codes have attracted a lot of
attention. The objective of this paper is to study BCH codes of this type over
finite fields and analyse their parameters. Some lower bounds on the minimum
distance of antiprimitive BCH codes are given. The BCH codes presented in this
paper have good parameters in general, containing many optimal linear codes. In
particular, two open problems about the minimum distance of BCH codes of this
type are partially solved in this paper.
|
[
{
"created": "Tue, 28 Sep 2021 15:33:10 GMT",
"version": "v1"
}
] |
2021-09-29
|
[
[
"Zhu",
"Hongwei",
""
],
[
"Shi",
"Minjia",
""
],
[
"Wang",
"Xiaoqiang",
""
],
[
"Helleseth",
"Tor",
""
]
] |
It is well-known that cyclic codes have efficient encoding and decoding algorithms. In recent years, antiprimitive BCH codes have attracted a lot of attention. The objective of this paper is to study BCH codes of this type over finite fields and analyse their parameters. Some lower bounds on the minimum distance of antiprimitive BCH codes are given. The BCH codes presented in this paper have good parameters in general, containing many optimal linear codes. In particular, two open problems about the minimum distance of BCH codes of this type are partially solved in this paper.
|
1801.03578
|
Afshin Zafari
|
Afshin Zafari, Elisabeth Larsson, Martin Tillenius
|
DuctTeip: An efficient programming model for distributed task based
parallel computing
| null | null | null | null |
cs.DC cs.CE cs.PF
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Current high-performance computer systems used for scientific computing
typically combine shared memory computational nodes in a distributed memory
environment. Extracting high performance from these complex systems requires
tailored approaches. Task-based parallel programming has been successful both
in simplifying the programming and in exploiting the available hardware
parallelism for shared memory systems. In this paper we focus on how to extend
task parallel programming to distributed memory systems. We use a hierarchical
decomposition of tasks and data in order to accommodate the different levels of
hardware. We test the proposed programming model on two different applications,
a Cholesky factorization, and a solver for the Shallow Water Equations. We also
compare the performance of our implementation with that of other frameworks for
distributed task parallel programming, and show that it is competitive.
|
[
{
"created": "Wed, 10 Jan 2018 22:50:01 GMT",
"version": "v1"
}
] |
2018-01-14
|
[
[
"Zafari",
"Afshin",
""
],
[
"Larsson",
"Elisabeth",
""
],
[
"Tillenius",
"Martin",
""
]
] |
Current high-performance computer systems used for scientific computing typically combine shared memory computational nodes in a distributed memory environment. Extracting high performance from these complex systems requires tailored approaches. Task-based parallel programming has been successful both in simplifying the programming and in exploiting the available hardware parallelism for shared memory systems. In this paper we focus on how to extend task parallel programming to distributed memory systems. We use a hierarchical decomposition of tasks and data in order to accommodate the different levels of hardware. We test the proposed programming model on two different applications, a Cholesky factorization, and a solver for the Shallow Water Equations. We also compare the performance of our implementation with that of other frameworks for distributed task parallel programming, and show that it is competitive.
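A toy illustration of hierarchical task decomposition in plain Python, assuming nothing about DuctTeip's actual API: a coarse, node-level task is split into finer shared-memory tasks, mirroring the two hardware levels the abstract describes.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def fine_task(block):
    # Leaf-level task sized for one shared-memory core.
    return np.linalg.norm(block)

def coarse_task(matrix, pool, block_size=256):
    # Node-level task: decomposed into finer tasks whose data
    # stay within one shared-memory node.
    blocks = [matrix[i:i + block_size]
              for i in range(0, len(matrix), block_size)]
    return sum(pool.map(fine_task, blocks))

if __name__ == "__main__":
    data = np.random.rand(1024, 64)
    with ThreadPoolExecutor() as pool:
        print(coarse_task(data, pool))
```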
|
1701.02560
|
Bettagere Bharath
|
B. N. Bharath and P. Vaishali
|
Time Complexity Analysis of a Distributed Stochastic Optimization in a
Non-Stationary Environment
|
16 pages + 5 figures
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we consider a distributed stochastic optimization problem
where the goal is to minimize the time average of a cost function subject to a
set of constraints on the time averages of related stochastic processes called
penalties. We assume that the state of the system is evolving in an independent
and non-stationary fashion and the "common information" available at each node
is distributed and delayed. Such stochastic optimization is an integral part of
many important problems in wireless networks such as scheduling, routing,
resource allocation and crowd sensing. We propose an approximate distributed
Drift-Plus-Penalty (DPP) algorithm, and show that it achieves a time average
cost (and penalties) that is within epsilon > 0 of the optimal cost (and
constraints) with high probability. Also, we provide a condition on the
convergence time t for this result to hold. In particular, for any delay D >= 0
in the common information, we use a coupling argument to prove that the
proposed algorithm converges almost surely to the optimal solution. We use an
application from wireless sensor networks to corroborate our theoretical
findings through simulation results.
|
[
{
"created": "Tue, 10 Jan 2017 12:48:33 GMT",
"version": "v1"
}
] |
2017-01-11
|
[
[
"Bharath",
"B. N.",
""
],
[
"Vaishali",
"P.",
""
]
] |
In this paper, we consider a distributed stochastic optimization problem where the goal is to minimize the time average of a cost function subject to a set of constraints on the time averages of related stochastic processes called penalties. We assume that the state of the system is evolving in an independent and non-stationary fashion and the "common information" available at each node is distributed and delayed. Such stochastic optimization is an integral part of many important problems in wireless networks such as scheduling, routing, resource allocation and crowd sensing. We propose an approximate distributed Drift-Plus-Penalty (DPP) algorithm, and show that it achieves a time average cost (and penalties) that is within epsilon > 0 of the optimal cost (and constraints) with high probability. Also, we provide a condition on the convergence time t for this result to hold. In particular, for any delay D >= 0 in the common information, we use a coupling argument to prove that the proposed algorithm converges almost surely to the optimal solution. We use an application from wireless sensor networks to corroborate our theoretical findings through simulation results.
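A minimal sketch of a generic centralized, undelayed drift-plus-penalty update, for intuition only; the paper's contribution is the approximate distributed variant with delayed common information. Virtual queues track constraint violations, and each step greedily minimizes cost plus queue-weighted penalties. The toy action set and cost are invented for illustration.

```python
import numpy as np

def drift_plus_penalty(actions, cost, penalty, T=1000, V=10.0):
    """cost(a): instantaneous cost; penalty(a): vector whose time
    average must be <= 0. Q are the virtual queues."""
    Q = np.zeros_like(penalty(actions[0]), dtype=float)
    avg_cost = 0.0
    for t in range(1, T + 1):
        # Greedy step: minimize V * cost + sum_i Q_i * penalty_i.
        a = min(actions, key=lambda a: V * cost(a) + Q @ penalty(a))
        avg_cost += (cost(a) - avg_cost) / t
        Q = np.maximum(Q + penalty(a), 0.0)    # queue (drift) update
    return avg_cost, Q

# Toy: choose power levels minimizing cost while average power <= 1.5.
avg, Q = drift_plus_penalty([0.0, 1.0, 2.0, 3.0],
                            cost=lambda a: (3 - a) ** 2,
                            penalty=lambda a: np.array([a - 1.5]))
print(avg, Q)
```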
|
2006.05028
|
Slobodan Mitrovi\'c
|
Piotr Indyk, Frederik Mallmann-Trenn, Slobodan Mitrovi\'c, Ronitt
Rubinfeld
|
Online Page Migration with ML Advice
| null | null | null | null |
cs.DS cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider online algorithms for the {\em page migration problem} that use
predictions, potentially imperfect, to improve their performance. The best
known online algorithms for this problem, due to Westbrook'94 and Bienkowski et
al'17, have competitive ratios strictly bounded away from 1. In contrast, we
show that if the algorithm is given a prediction of the input sequence, then it
can achieve a competitive ratio that tends to $1$ as the prediction error rate
tends to $0$. Specifically, the competitive ratio is equal to $1+O(q)$, where
$q$ is the prediction error rate. We also design a ``fallback option'' that
ensures that the competitive ratio of the algorithm for {\em any} input
sequence is at most $O(1/q)$. Our result adds to the recent body of work that
uses machine learning to improve the performance of ``classic'' algorithms.
|
[
{
"created": "Tue, 9 Jun 2020 03:15:34 GMT",
"version": "v1"
}
] |
2020-06-11
|
[
[
"Indyk",
"Piotr",
""
],
[
"Mallmann-Trenn",
"Frederik",
""
],
[
"Mitrović",
"Slobodan",
""
],
[
"Rubinfeld",
"Ronitt",
""
]
] |
We consider online algorithms for the {\em page migration problem} that use predictions, potentially imperfect, to improve their performance. The best known online algorithms for this problem, due to Westbrook'94 and Bienkowski et al'17, have competitive ratios strictly bounded away from 1. In contrast, we show that if the algorithm is given a prediction of the input sequence, then it can achieve a competitive ratio that tends to $1$ as the prediction error rate tends to $0$. Specifically, the competitive ratio is equal to $1+O(q)$, where $q$ is the prediction error rate. We also design a ``fallback option'' that ensures that the competitive ratio of the algorithm for {\em any} input sequence is at most $O(1/q)$. Our result adds to the recent body of work that uses machine learning to improve the performance of ``classic'' algorithms.
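A toy sketch of the cost model behind page migration on a line metric, and of an algorithm that simply trusts advised page positions; the distance costs, the advice format, and the two advice baselines are illustrative assumptions, not the algorithms or the fallback rule analyzed in the paper.

```python
import numpy as np

def migrate_with_advice(requests, advised_positions, D=5):
    """Serving a request costs the distance to the page; moving the
    page costs D per unit distance. The algorithm blindly follows
    the advised positions, so its cost degrades with advice quality."""
    pos, cost = 0, 0.0
    for req, target in zip(requests, advised_positions):
        cost += abs(pos - req)            # service cost
        cost += D * abs(pos - target)     # migration cost
        pos = target
    return cost

rng = np.random.default_rng(0)
requests = rng.integers(0, 10, size=100)
stay_put = [0] * 100                      # trivial advice: never migrate
chase = list(requests)                    # advice: chase every request
print(migrate_with_advice(requests, stay_put),
      migrate_with_advice(requests, chase))
```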
|
2407.15078
|
Logan Weber
|
Logan Weber, Jesse Michel, Alex Renda, Michael Carbin
|
Learning to Compile Programs to Neural Networks
| null | null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
A $\textit{neural surrogate of a program}$ is a neural network that mimics
the behavior of a program. Researchers have used these neural surrogates to
automatically tune program inputs, adapt programs to new settings, and
accelerate computations. Researchers traditionally develop neural surrogates by
training on input-output examples from a single program. Alternatively,
language models trained on a large dataset including many programs can consume
program text to act as a neural surrogate. Using a language model to both
generate a surrogate and act as a surrogate, however, leads to a trade-off
between resource consumption and accuracy. We present $\textit{neural surrogate
compilation}$, a technique for producing neural surrogates directly from
program text without coupling neural surrogate generation and execution. We
implement neural surrogate compilers using hypernetworks trained on a dataset
of C programs and find that they produce neural surrogates that are
$1.9$-$9.5\times$ as data-efficient, produce visual results that are
$1.0$-$1.3\times$ more similar to ground truth, and train in $4.3$-$7.3\times$
fewer epochs than neural surrogates trained from scratch.
|
[
{
"created": "Sun, 21 Jul 2024 07:04:52 GMT",
"version": "v1"
}
] |
2024-07-23
|
[
[
"Weber",
"Logan",
""
],
[
"Michel",
"Jesse",
""
],
[
"Renda",
"Alex",
""
],
[
"Carbin",
"Michael",
""
]
] |
A $\textit{neural surrogate of a program}$ is a neural network that mimics the behavior of a program. Researchers have used these neural surrogates to automatically tune program inputs, adapt programs to new settings, and accelerate computations. Researchers traditionally develop neural surrogates by training on input-output examples from a single program. Alternatively, language models trained on a large dataset including many programs can consume program text to act as a neural surrogate. Using a language model to both generate a surrogate and act as a surrogate, however, leads to a trade-off between resource consumption and accuracy. We present $\textit{neural surrogate compilation}$, a technique for producing neural surrogates directly from program text without coupling neural surrogate generation and execution. We implement neural surrogate compilers using hypernetworks trained on a dataset of C programs and find that they produce neural surrogates that are $1.9$-$9.5\times$ as data-efficient, produce visual results that are $1.0$-$1.3\times$ more similar to ground truth, and train in $4.3$-$7.3\times$ fewer epochs than neural surrogates trained from scratch.
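A bare-bones sketch of the hypernetwork idea underlying such a compiler: a generator maps an embedding of program text to the weights of a small surrogate MLP, so the surrogate is emitted rather than trained. All dimensions and module names are invented for illustration and do not reflect the paper's architecture.

```python
import torch
import torch.nn as nn

class HyperNet(nn.Module):
    """Maps a program-text embedding to the weights of a tiny surrogate."""
    def __init__(self, text_dim=256, in_dim=4, hidden=16, out_dim=1):
        super().__init__()
        self.shapes = [(hidden, in_dim), (hidden,), (out_dim, hidden), (out_dim,)]
        n_params = sum(torch.Size(s).numel() for s in self.shapes)
        self.gen = nn.Sequential(nn.Linear(text_dim, 512), nn.ReLU(),
                                 nn.Linear(512, n_params))

    def forward(self, text_emb, x):
        flat = self.gen(text_emb)          # all surrogate weights at once
        params, i = [], 0
        for s in self.shapes:
            n = torch.Size(s).numel()
            params.append(flat[i:i + n].view(s))
            i += n
        w1, b1, w2, b2 = params
        h = torch.relu(x @ w1.T + b1)      # generated surrogate's forward pass
        return h @ w2.T + b2

net = HyperNet()
y = net(torch.randn(256), torch.randn(8, 4))   # 8 inputs to the surrogate
print(y.shape)  # torch.Size([8, 1])
```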
|
2310.07078
|
Ashiqur Rahman KhudaBukhsh
|
Clay H. Yoo and Ashiqur R. KhudaBukhsh
|
Auditing and Robustifying COVID-19 Misinformation Datasets via
Anticontent Sampling
|
This paper has been accepted at AAAI 2023 (Robust and Safe AI track)
| null | null | null |
cs.LG cs.AI cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper makes two key contributions. First, it argues that highly
specialized rare content classifiers trained on small data typically have
limited exposure to the richness and topical diversity of the negative class
(dubbed anticontent) as observed in the wild. As a result, these classifiers'
strong performance observed on the test set may not translate into real-world
settings. In the context of COVID-19 misinformation detection, we conduct an
in-the-wild audit of multiple datasets and demonstrate that models trained with
several prominently cited recent datasets are vulnerable to anticontent when
evaluated in the wild. Second, we present a novel active learning pipeline that
requires zero manual annotation and iteratively augments the training data with
challenging anticontent, robustifying these classifiers.
|
[
{
"created": "Sat, 5 Aug 2023 22:38:05 GMT",
"version": "v1"
}
] |
2023-10-12
|
[
[
"Yoo",
"Clay H.",
""
],
[
"KhudaBukhsh",
"Ashiqur R.",
""
]
] |
This paper makes two key contributions. First, it argues that highly specialized rare content classifiers trained on small data typically have limited exposure to the richness and topical diversity of the negative class (dubbed anticontent) as observed in the wild. As a result, these classifiers' strong performance observed on the test set may not translate into real-world settings. In the context of COVID-19 misinformation detection, we conduct an in-the-wild audit of multiple datasets and demonstrate that models trained with several prominently cited recent datasets are vulnerable to anticontent when evaluated in the wild. Second, we present a novel active learning pipeline that requires zero manual annotation and iteratively augments the training data with challenging anticontent, robustifying these classifiers.
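A high-level sketch of one plausible reading of the zero-annotation augmentation loop: texts drawn from a broad in-the-wild pool are assumed to be overwhelmingly non-misinformation, so the ones the model most confidently mislabels as positive can be added back as hard negatives without manual labels. The assumption, the toy data, and the TFIDF classifier are all illustrative, not the paper's pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def robustify(texts, labels, wild_pool, rounds=2, k=2):
    """Iteratively augment training data with challenging anticontent."""
    model = None
    for _ in range(rounds):
        model = make_pipeline(TfidfVectorizer(), LogisticRegression())
        model.fit(texts, labels)
        scores = model.predict_proba(wild_pool)[:, 1]
        hard = scores.argsort()[-k:]       # most deceptive anticontent
        texts = texts + [wild_pool[i] for i in hard]
        labels = labels + [0] * k          # auto-labeled negative
    return model

model = robustify(["covid vaccine cures cancer", "masks reduce spread"],
                  [1, 0],
                  ["weather is nice today", "vaccine appointment tips",
                   "covid numbers reported weekly", "cats are great"])
```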
|
1307.5299
|
Paul D\"utting
|
Paul Duetting and Robert Kleinberg
|
Polymatroid Prophet Inequalities
| null | null | null | null |
cs.DS cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Consider a gambler and a prophet who observe a sequence of independent,
non-negative numbers. The gambler sees the numbers one-by-one whereas the
prophet sees the entire sequence at once. The goal of both is to decide on
fractions of each number they want to keep so as to maximize the weighted
fractional sum of the numbers chosen.
The classic result of Krengel and Sucheston (1977-78) asserts that if both
the gambler and the prophet can pick one number, then the gambler can do at
least half as well as the prophet. Recently, Kleinberg and Weinberg (2012) have
generalized this result to settings where the numbers that can be chosen are
subject to a matroid constraint.
In this note we go one step further and show that the bound carries over to
settings where the fractions that can be chosen are subject to a polymatroid
constraint. This bound is tight as it is already tight for the simple setting
where the gambler and the prophet can pick only one number. An interesting
application of our result is in mechanism design, where it leads to improved
results for various problems.
|
[
{
"created": "Fri, 19 Jul 2013 18:11:44 GMT",
"version": "v1"
}
] |
2013-07-22
|
[
[
"Duetting",
"Paul",
""
],
[
"Kleinberg",
"Robert",
""
]
] |
Consider a gambler and a prophet who observe a sequence of independent, non-negative numbers. The gambler sees the numbers one-by-one whereas the prophet sees the entire sequence at once. The goal of both is to decide on fractions of each number they want to keep so as to maximize the weighted fractional sum of the numbers chosen. The classic result of Krengel and Sucheston (1977-78) asserts that if both the gambler and the prophet can pick one number, then the gambler can do at least half as well as the prophet. Recently, Kleinberg and Weinberg (2012) have generalized this result to settings where the numbers that can be chosen are subject to a matroid constraint. In this note we go one step further and show that the bound carries over to settings where the fractions that can be chosen are subject to a polymatroid constraint. This bound is tight as it is already tight for the simple setting where the gambler and the prophet can pick only one number. An interesting application of our result is in mechanism design, where it leads to improved results for various problems.
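A quick numerical illustration of the single-choice baseline cited above (the Krengel-Sucheston factor of 2), using the classic median-threshold rule: pick a threshold that the maximum clears with probability 1/2 and accept the first value above it. This simulates only the simplest setting, not the matroid or polymatroid generalizations.

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 10, 20000
values = rng.exponential(size=(trials, n))   # i.i.d. non-negative numbers

# Prophet: sees the whole sequence, takes the maximum.
prophet = values.max(axis=1).mean()

# Gambler: threshold tau with P(max >= tau) = 1/2; take the first
# value clearing it (0 if none does).
tau = np.median(values.max(axis=1))
above = values >= tau
first = above.argmax(axis=1)
gambler = np.where(above.any(axis=1),
                   values[np.arange(trials), first], 0.0).mean()

print(gambler / prophet)  # empirically >= 0.5, matching the classic bound
```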
|
2309.10979
|
Xin Zheng
|
Xin Zheng, Yixin Liu, Zhifeng Bao, Meng Fang, Xia Hu, Alan Wee-Chung
Liew, Shirui Pan
|
Towards Data-centric Graph Machine Learning: Review and Outlook
|
42 pages, 9 figures
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Data-centric AI, with its primary focus on the collection, management, and
utilization of data to drive AI models and applications, has attracted
increasing attention in recent years. In this article, we conduct an in-depth
and comprehensive review, offering a forward-looking outlook on the current
efforts in data-centric AI pertaining to graph data, the fundamental data
structure for representing and capturing intricate dependencies among massive
and diverse real-life entities. We introduce a systematic framework,
Data-centric Graph Machine Learning (DC-GML), that encompasses all stages of
the graph data lifecycle, including graph data collection, exploration,
improvement, exploitation, and maintenance. A thorough taxonomy of each stage
is presented to answer three critical graph-centric questions: (1) how to
enhance graph data availability and quality; (2) how to learn from graph data
with limited availability and low quality; (3) how to build graph MLOps systems
from the graph data-centric view. Lastly, we pinpoint the future prospects of
the DC-GML domain, providing insights to navigate its advancements and
applications.
|
[
{
"created": "Wed, 20 Sep 2023 00:40:13 GMT",
"version": "v1"
}
] |
2023-09-21
|
[
[
"Zheng",
"Xin",
""
],
[
"Liu",
"Yixin",
""
],
[
"Bao",
"Zhifeng",
""
],
[
"Fang",
"Meng",
""
],
[
"Hu",
"Xia",
""
],
[
"Liew",
"Alan Wee-Chung",
""
],
[
"Pan",
"Shirui",
""
]
] |
Data-centric AI, with its primary focus on the collection, management, and utilization of data to drive AI models and applications, has attracted increasing attention in recent years. In this article, we conduct an in-depth and comprehensive review, offering a forward-looking outlook on the current efforts in data-centric AI pertaining to graph data, the fundamental data structure for representing and capturing intricate dependencies among massive and diverse real-life entities. We introduce a systematic framework, Data-centric Graph Machine Learning (DC-GML), that encompasses all stages of the graph data lifecycle, including graph data collection, exploration, improvement, exploitation, and maintenance. A thorough taxonomy of each stage is presented to answer three critical graph-centric questions: (1) how to enhance graph data availability and quality; (2) how to learn from graph data with limited availability and low quality; (3) how to build graph MLOps systems from the graph data-centric view. Lastly, we pinpoint the future prospects of the DC-GML domain, providing insights to navigate its advancements and applications.
|
2301.08719
|
Aldo Badano
|
A Badano, M Lago, E Sizikova, JG Delfino, S Guan, MA Anastasio and B
Sahiner
|
The stochastic digital human is now enrolling for in silico imaging
trials -- Methods and tools for generating digital cohorts
| null | null | null | null |
cs.AI physics.med-ph
|
http://creativecommons.org/licenses/by/4.0/
|
Randomized clinical trials, while often viewed as the highest evidentiary bar
by which to judge the quality of a medical intervention, are far from perfect.
In silico imaging trials are computational studies that seek to ascertain the
performance of a medical device by collecting this information entirely via
computer simulations. The benefits of in silico trials for evaluating new
technology include significant resource and time savings, minimization of
subject risk, the ability to study devices that are not achievable in the
physical world, rapid and effective investigation of new technologies, and
representation from all relevant subgroups. To conduct
in silico trials, digital representations of humans are needed. We review the
latest developments in methods and tools for obtaining digital humans for in
silico imaging studies. First, we introduce terminology and a classification of
digital human models. Second, we survey available methodologies for generating
digital humans with healthy and diseased status and examine briefly the role of
augmentation methods. Finally, we discuss the trade-offs of four approaches for
sampling digital cohorts and the associated potential for study bias with
selecting specific patient distributions.
|
[
{
"created": "Fri, 20 Jan 2023 18:31:22 GMT",
"version": "v1"
}
] |
2023-01-23
|
[
[
"Badano",
"A",
""
],
[
"Lago",
"M",
""
],
[
"Sizikova",
"E",
""
],
[
"Delfino",
"JG",
""
],
[
"Guan",
"S",
""
],
[
"Anastasio",
"MA",
""
],
[
"Sahiner",
"B",
""
]
] |
Randomized clinical trials, while often viewed as the highest evidentiary bar by which to judge the quality of a medical intervention, are far from perfect. In silico imaging trials are computational studies that seek to ascertain the performance of a medical device by collecting this information entirely via computer simulations. The benefits of in silico trials for evaluating new technology include significant resource and time savings, minimization of subject risk, the ability to study devices that are not achievable in the physical world, rapid and effective investigation of new technologies, and representation from all relevant subgroups. To conduct in silico trials, digital representations of humans are needed. We review the latest developments in methods and tools for obtaining digital humans for in silico imaging studies. First, we introduce terminology and a classification of digital human models. Second, we survey available methodologies for generating digital humans with healthy and diseased status and examine briefly the role of augmentation methods. Finally, we discuss the trade-offs of four approaches for sampling digital cohorts and the associated potential for study bias with selecting specific patient distributions.
|
2403.05565
|
Jiaqi Ma
|
Jiaqi Ma, Vivian Lai, Yiming Zhang, Chacha Chen, Paul Hamilton, Davor
Ljubenkov, Himabindu Lakkaraju, Chenhao Tan
|
OpenHEXAI: An Open-Source Framework for Human-Centered Evaluation of
Explainable Machine Learning
| null | null | null | null |
cs.HC cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, there has been a surge of explainable AI (XAI) methods driven by
the need for understanding machine learning model behaviors in high-stakes
scenarios. However, properly evaluating the effectiveness of the XAI methods
inevitably requires the involvement of human subjects, and conducting
human-centered benchmarks is challenging in a number of ways: designing and
implementing user studies is complex; numerous design choices in the design
space of user studies lead to problems of reproducibility; and running user
studies can be challenging and even daunting for machine learning researchers.
To address these challenges, this paper presents OpenHEXAI, an open-source
framework for human-centered evaluation of XAI methods. OpenHEXAI features (1)
a collection of diverse benchmark datasets, pre-trained models, and post hoc
explanation methods; (2) an easy-to-use web application for user studies; (3)
comprehensive evaluation metrics for the effectiveness of post hoc explanation
methods in the context of human-AI decision making tasks; (4) best practice
recommendations of experiment documentation; and (5) convenient tools for power
analysis and cost estimation. OpenHEXAI is the first large-scale
infrastructural effort to facilitate human-centered benchmarks of XAI methods.
It simplifies the design and implementation of user studies for XAI methods,
thus allowing researchers and practitioners to focus on the scientific
questions. Additionally, it enhances reproducibility through standardized
designs. Based on OpenHEXAI, we further conduct a systematic benchmark of four
state-of-the-art post hoc explanation methods and compare their impacts on
human-AI decision making tasks in terms of accuracy, fairness, as well as
users' trust and understanding of the machine learning model.
|
[
{
"created": "Tue, 20 Feb 2024 22:17:59 GMT",
"version": "v1"
}
] |
2024-03-12
|
[
[
"Ma",
"Jiaqi",
""
],
[
"Lai",
"Vivian",
""
],
[
"Zhang",
"Yiming",
""
],
[
"Chen",
"Chacha",
""
],
[
"Hamilton",
"Paul",
""
],
[
"Ljubenkov",
"Davor",
""
],
[
"Lakkaraju",
"Himabindu",
""
],
[
"Tan",
"Chenhao",
""
]
] |
Recently, there has been a surge of explainable AI (XAI) methods driven by the need for understanding machine learning model behaviors in high-stakes scenarios. However, properly evaluating the effectiveness of the XAI methods inevitably requires the involvement of human subjects, and conducting human-centered benchmarks is challenging in a number of ways: designing and implementing user studies is complex; numerous design choices in the design space of user studies lead to problems of reproducibility; and running user studies can be challenging and even daunting for machine learning researchers. To address these challenges, this paper presents OpenHEXAI, an open-source framework for human-centered evaluation of XAI methods. OpenHEXAI features (1) a collection of diverse benchmark datasets, pre-trained models, and post hoc explanation methods; (2) an easy-to-use web application for user studies; (3) comprehensive evaluation metrics for the effectiveness of post hoc explanation methods in the context of human-AI decision making tasks; (4) best practice recommendations of experiment documentation; and (5) convenient tools for power analysis and cost estimation. OpenHEXAI is the first large-scale infrastructural effort to facilitate human-centered benchmarks of XAI methods. It simplifies the design and implementation of user studies for XAI methods, thus allowing researchers and practitioners to focus on the scientific questions. Additionally, it enhances reproducibility through standardized designs. Based on OpenHEXAI, we further conduct a systematic benchmark of four state-of-the-art post hoc explanation methods and compare their impacts on human-AI decision making tasks in terms of accuracy, fairness, as well as users' trust and understanding of the machine learning model.
|
2101.06448
|
Junliang Yu
|
Junliang Yu, Hongzhi Yin, Jundong Li, Qinyong Wang, Nguyen Quoc Viet
Hung, Xiangliang Zhang
|
Self-Supervised Multi-Channel Hypergraph Convolutional Network for
Social Recommendation
|
11 pages, accepted by WWW'21. Corrects some typos in the previous
version
| null | null | null |
cs.IR cs.SI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Social relations are often used to improve recommendation quality when
user-item interaction data is sparse in recommender systems. Most existing
social recommendation models exploit pairwise relations to mine potential user
preferences. However, real-life interactions among users are very complicated
and user relations can be high-order. Hypergraphs provide a natural way to
model complex high-order relations, yet their potential for improving social
recommendation is under-explored. In this paper, we fill this gap and propose
a multi-channel hypergraph convolutional network to enhance social
recommendation by leveraging high-order user relations. Technically, each
channel in the network encodes a hypergraph that depicts a common high-order
user relation pattern via hypergraph convolution. By aggregating the embeddings
learned through multiple channels, we obtain comprehensive user representations
to generate recommendation results. However, the aggregation operation might
also obscure the inherent characteristics of different types of high-order
connectivity information. To compensate for the aggregating loss, we
innovatively integrate self-supervised learning into the training of the
hypergraph convolutional network to regain the connectivity information with
hierarchical mutual information maximization. The experimental results on
multiple real-world datasets show that the proposed model outperforms the SOTA
methods, and the ablation study verifies the effectiveness of the multi-channel
setting and the self-supervised task. The implementation of our model is
available via https://github.com/Coder-Yu/RecQ.
|
[
{
"created": "Sat, 16 Jan 2021 14:20:32 GMT",
"version": "v1"
},
{
"created": "Tue, 19 Jan 2021 21:13:42 GMT",
"version": "v2"
},
{
"created": "Thu, 21 Jan 2021 18:16:41 GMT",
"version": "v3"
},
{
"created": "Sun, 27 Feb 2022 04:35:40 GMT",
"version": "v4"
}
] |
2022-03-01
|
[
[
"Yu",
"Junliang",
""
],
[
"Yin",
"Hongzhi",
""
],
[
"Li",
"Jundong",
""
],
[
"Wang",
"Qinyong",
""
],
[
"Hung",
"Nguyen Quoc Viet",
""
],
[
"Zhang",
"Xiangliang",
""
]
] |
Social relations are often used to improve recommendation quality when user-item interaction data is sparse in recommender systems. Most existing social recommendation models exploit pairwise relations to mine potential user preferences. However, real-life interactions among users are very complicated and user relations can be high-order. Hypergraphs provide a natural way to model complex high-order relations, yet their potential for improving social recommendation is under-explored. In this paper, we fill this gap and propose a multi-channel hypergraph convolutional network to enhance social recommendation by leveraging high-order user relations. Technically, each channel in the network encodes a hypergraph that depicts a common high-order user relation pattern via hypergraph convolution. By aggregating the embeddings learned through multiple channels, we obtain comprehensive user representations to generate recommendation results. However, the aggregation operation might also obscure the inherent characteristics of different types of high-order connectivity information. To compensate for the aggregating loss, we innovatively integrate self-supervised learning into the training of the hypergraph convolutional network to regain the connectivity information with hierarchical mutual information maximization. The experimental results on multiple real-world datasets show that the proposed model outperforms the SOTA methods, and the ablation study verifies the effectiveness of the multi-channel setting and the self-supervised task. The implementation of our model is available via https://github.com/Coder-Yu/RecQ.
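A compact sketch of a standard, degree-normalized hypergraph convolution of the kind such models build on: embeddings are propagated from users to hyperedges and back. The random incidence matrix is toy data, and the simplified normalization here is not the paper's exact multi-channel design.

```python
import numpy as np

def hypergraph_conv(X, H, W):
    """One simplified hypergraph convolution layer.
    X: (n_users, d) embeddings; H: (n_users, n_edges) incidence
    matrix; W: (d, d_out) learnable weights."""
    Dv = np.maximum(H.sum(axis=1), 1)       # vertex degrees
    De = np.maximum(H.sum(axis=0), 1)       # hyperedge degrees
    # Propagate users -> hyperedges -> users, degree-normalized.
    msg = (H / De) @ (H.T @ X)
    return np.maximum((msg / Dv[:, None]) @ W, 0)   # ReLU

rng = np.random.default_rng(0)
n, e, d = 6, 3, 4
H = (rng.random((n, e)) > 0.5).astype(float)    # toy incidence matrix
out = hypergraph_conv(rng.standard_normal((n, d)), H,
                      rng.standard_normal((d, d)))
print(out.shape)  # (6, 4)
```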
|
2103.14101
|
Grace Lewis
|
Grace A. Lewis, Stephany Bellomo, Ipek Ozkaya
|
Characterizing and Detecting Mismatch in Machine-Learning-Enabled
Systems
|
1st Workshop on AI Engineering: Software Engineering for AI (WAIN
2021) held at the 2021 IEEE/ACM 43rd International Conference on Software
Engineering
| null | null | null |
cs.SE cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Increasing availability of machine learning (ML) frameworks and tools, as
well as their promise to improve solutions to data-driven decision problems,
has resulted in the popularity of using ML techniques in software systems. However,
end-to-end development of ML-enabled systems, as well as their seamless
deployment and operations, remains a challenge. One reason is that development
and deployment of ML-enabled systems involves three distinct workflows,
perspectives, and roles, which include data science, software engineering, and
operations. These three distinct perspectives, when misaligned due to incorrect
assumptions, cause ML mismatches which can result in failed systems. We
conducted an interview and survey study where we collected and validated common
types of mismatches that occur in end-to-end development of ML-enabled systems.
Our analysis shows that how each role prioritizes the importance of relevant
mismatches varies, potentially contributing to these mismatched assumptions. In
addition, the mismatch categories we identified can be specified as machine
readable descriptors contributing to improved ML-enabled system development. In
this paper, we report our findings and their implications for improving
end-to-end ML-enabled system development.
|
[
{
"created": "Thu, 25 Mar 2021 19:40:29 GMT",
"version": "v1"
}
] |
2021-03-29
|
[
[
"Lewis",
"Grace A.",
""
],
[
"Bellomo",
"Stephany",
""
],
[
"Ozkaya",
"Ipek",
""
]
] |
Increasing availability of machine learning (ML) frameworks and tools, as well as their promise to improve solutions to data-driven decision problems, has resulted in the popularity of using ML techniques in software systems. However, end-to-end development of ML-enabled systems, as well as their seamless deployment and operations, remains a challenge. One reason is that development and deployment of ML-enabled systems involves three distinct workflows, perspectives, and roles, which include data science, software engineering, and operations. These three distinct perspectives, when misaligned due to incorrect assumptions, cause ML mismatches which can result in failed systems. We conducted an interview and survey study where we collected and validated common types of mismatches that occur in end-to-end development of ML-enabled systems. Our analysis shows that how each role prioritizes the importance of relevant mismatches varies, potentially contributing to these mismatched assumptions. In addition, the mismatch categories we identified can be specified as machine readable descriptors contributing to improved ML-enabled system development. In this paper, we report our findings and their implications for improving end-to-end ML-enabled system development.
|
1604.01431
|
Wei-Chiu Ma
|
Wei-Chiu Ma, De-An Huang, Namhoon Lee, Kris M. Kitani
|
Forecasting Interactive Dynamics of Pedestrians with Fictitious Play
|
Accepted to CVPR 2017
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We develop predictive models of pedestrian dynamics by encoding the coupled
nature of multi-pedestrian interaction using game theory, and deep
learning-based visual analysis to estimate person-specific behavior parameters.
Building predictive models for multi-pedestrian interactions, however, is very
challenging for two reasons: (1) the dynamics of interaction are complex
interdependent processes, where the predicted behavior of one pedestrian can
affect the actions taken by others, and (2) dynamics are variable depending on
an individual's physical characteristics (e.g., an older person may walk slowly
while a younger person may walk faster). To address these challenges, we (1)
utilize concepts from game theory to model the interdependent decision making
process of multiple pedestrians and (2) use visual classifiers to learn a
mapping from pedestrian appearance to behavior parameters. We evaluate our
proposed model on several public multiple pedestrian interaction video
datasets. Results show that our strategic planning model explains human
interactions 25% better when compared to state-of-the-art methods.
|
[
{
"created": "Tue, 5 Apr 2016 21:13:32 GMT",
"version": "v1"
},
{
"created": "Mon, 9 May 2016 18:07:23 GMT",
"version": "v2"
},
{
"created": "Tue, 28 Mar 2017 16:31:01 GMT",
"version": "v3"
}
] |
2017-03-29
|
[
[
"Ma",
"Wei-Chiu",
""
],
[
"Huang",
"De-An",
""
],
[
"Lee",
"Namhoon",
""
],
[
"Kitani",
"Kris M.",
""
]
] |
We develop predictive models of pedestrian dynamics by encoding the coupled nature of multi-pedestrian interaction using game theory, and deep learning-based visual analysis to estimate person-specific behavior parameters. Building predictive models for multi-pedestrian interactions, however, is very challenging for two reasons: (1) the dynamics of interaction are complex interdependent processes, where the predicted behavior of one pedestrian can affect the actions taken by others, and (2) dynamics are variable depending on an individual's physical characteristics (e.g., an older person may walk slowly while a younger person may walk faster). To address these challenges, we (1) utilize concepts from game theory to model the interdependent decision making process of multiple pedestrians and (2) use visual classifiers to learn a mapping from pedestrian appearance to behavior parameters. We evaluate our proposed model on several public multiple pedestrian interaction video datasets. Results show that our strategic planning model explains human interactions 25% better when compared to state-of-the-art methods.
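A minimal sketch of fictitious play, the game-theoretic device the title invokes: each player best-responds to the empirical frequency of the opponent's past actions. The two-action payoff matrices are a textbook toy (matching pennies), unrelated to the paper's pedestrian model.

```python
import numpy as np

def fictitious_play(A, B, T=500):
    """A, B: payoff matrices for the row and column players."""
    counts = [np.ones(A.shape[0]), np.ones(A.shape[1])]   # action counts
    for _ in range(T):
        beliefs = [c / c.sum() for c in counts]
        # Each player best-responds to the opponent's empirical mix.
        a1 = np.argmax(A @ beliefs[1])
        a2 = np.argmax(beliefs[0] @ B)
        counts[0][a1] += 1
        counts[1][a2] += 1
    return [c / c.sum() for c in counts]    # empirical strategies

# Matching pennies: empirical play converges toward the (0.5, 0.5) mix.
A = np.array([[1., -1.], [-1., 1.]])
print(fictitious_play(A, -A))
```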
|
2405.09854
|
Aditya Joshi
|
Aditya Joshi, Jake Renzella, Pushpak Bhattacharyya, Saurav Jha,
Xiangyu Zhang
|
Striking a Balance between Classical and Deep Learning Approaches in
Natural Language Processing Pedagogy
|
Selected for publication at Teaching NLP workshop at ACL 2024; 9
pages + references
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
While deep learning approaches represent the state-of-the-art of natural
language processing (NLP) today, classical algorithms and approaches still find
a place in NLP textbooks and courses of recent years. This paper discusses the
perspectives of conveners of two introductory NLP courses taught in Australia
and India, and examines how classical and deep learning approaches can be
balanced within the lecture plan and assessments of the courses. We also draw
parallels with the objects-first and objects-later debate in CS1 education. We
observe that teaching classical approaches adds value to student learning by
building an intuitive understanding of NLP problems, potential solutions, and
even deep learning models themselves. Despite classical approaches not being
state-of-the-art, the paper makes a case for their inclusion in NLP courses
today.
|
[
{
"created": "Thu, 16 May 2024 07:14:13 GMT",
"version": "v1"
},
{
"created": "Tue, 9 Jul 2024 06:52:45 GMT",
"version": "v2"
}
] |
2024-07-10
|
[
[
"Joshi",
"Aditya",
""
],
[
"Renzella",
"Jake",
""
],
[
"Bhattacharyya",
"Pushpak",
""
],
[
"Jha",
"Saurav",
""
],
[
"Zhang",
"Xiangyu",
""
]
] |
While deep learning approaches represent the state-of-the-art of natural language processing (NLP) today, classical algorithms and approaches still find a place in NLP textbooks and courses of recent years. This paper discusses the perspectives of conveners of two introductory NLP courses taught in Australia and India, and examines how classical and deep learning approaches can be balanced within the lecture plan and assessments of the courses. We also draw parallels with the objects-first and objects-later debate in CS1 education. We observe that teaching classical approaches adds value to student learning by building an intuitive understanding of NLP problems, potential solutions, and even deep learning models themselves. Despite classical approaches not being state-of-the-art, the paper makes a case for their inclusion in NLP courses today.
|
2210.02601
|
Md Rayhanur Rahman
|
Md Rayhanur Rahman, Laurie Williams
|
From Threat Reports to Continuous Threat Intelligence: A Comparison of
Attack Technique Extraction Methods from Textual Artifacts
| null | null | null | null |
cs.CR cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The cyberthreat landscape is continuously evolving. Hence, continuous
monitoring and sharing of threat intelligence have become a priority for
organizations. Threat reports, published by cybersecurity vendors, contain
detailed descriptions of attack Tactics, Techniques, and Procedures (TTP)
written in an unstructured text format. Extracting TTP from these reports aids
cybersecurity practitioners and researchers in learning and adapting to
evolving attacks and in planning threat mitigation. Researchers have proposed
TTP extraction methods in the literature; however, not all of these proposed
methods are compared to one another or to a baseline. \textit{The goal of this
study is to aid cybersecurity researchers and practitioners in choosing attack
technique
extraction methods for monitoring and sharing threat intelligence by comparing
the underlying methods from the TTP extraction studies in the literature.} In
this work, we identify ten existing TTP extraction studies from the literature
and implement five methods from the ten studies. We find that two methods,
based on Term Frequency-Inverse Document Frequency (TFIDF) and Latent Semantic
Indexing (LSI), outperform the other three methods with F1 scores of 84\% and
83\%, respectively. We observe that the F1 scores of all methods drop as the
number of class labels increases exponentially. We also implement and
evaluate an oversampling strategy to mitigate class imbalance issues.
Furthermore, oversampling improves the classification performance of TTP
extraction. We provide recommendations from our findings for future
cybersecurity researchers, such as the construction of a benchmark dataset from
a large corpus, and the selection of textual features of TTP. Our work, along
with the dataset and implementation source code, can work as a baseline for
cybersecurity researchers to test and compare the performance of future TTP
extraction methods.
|
[
{
"created": "Wed, 5 Oct 2022 23:21:41 GMT",
"version": "v1"
}
] |
2022-10-07
|
[
[
"Rahman",
"Md Rayhanur",
""
],
[
"Williams",
"Laurie",
""
]
] |
The cyberthreat landscape is continuously evolving. Hence, continuous monitoring and sharing of threat intelligence have become a priority for organizations. Threat reports, published by cybersecurity vendors, contain detailed descriptions of attack Tactics, Techniques, and Procedures (TTP) written in an unstructured text format. Extracting TTP from these reports aids cybersecurity practitioners and researchers in learning and adapting to evolving attacks and in planning threat mitigation. Researchers have proposed TTP extraction methods in the literature; however, not all of these proposed methods are compared to one another or to a baseline. \textit{The goal of this study is to aid cybersecurity researchers and practitioners in choosing attack technique extraction methods for monitoring and sharing threat intelligence by comparing the underlying methods from the TTP extraction studies in the literature.} In this work, we identify ten existing TTP extraction studies from the literature and implement five methods from the ten studies. We find that two methods, based on Term Frequency-Inverse Document Frequency (TFIDF) and Latent Semantic Indexing (LSI), outperform the other three methods with F1 scores of 84\% and 83\%, respectively. We observe that the F1 scores of all methods drop as the number of class labels increases exponentially. We also implement and evaluate an oversampling strategy to mitigate class imbalance issues. Furthermore, oversampling improves the classification performance of TTP extraction. We provide recommendations from our findings for future cybersecurity researchers, such as the construction of a benchmark dataset from a large corpus, and the selection of textual features of TTP. Our work, along with the dataset and implementation source code, can work as a baseline for cybersecurity researchers to test and compare the performance of future TTP extraction methods.
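A sketch of the general TFIDF + LSI classification recipe the abstract compares, using scikit-learn (LSI is the truncated SVD of the TFIDF matrix). The toy sentences, the tiny component count, and the choice of logistic regression are placeholders, not the exact configurations benchmarked in the paper; the MITRE ATT&CK IDs are real technique labels used illustratively.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression

# Toy threat-report sentences mapped to ATT&CK technique labels.
texts = ["attacker ran powershell to dump credentials",
         "malware persisted via scheduled task",
         "phishing email delivered macro-enabled document",
         "credentials harvested from lsass memory"]
labels = ["T1003", "T1053", "T1566", "T1003"]

# TFIDF features, reduced with LSI (truncated SVD), then any
# standard classifier on top.
model = make_pipeline(TfidfVectorizer(),
                      TruncatedSVD(n_components=3),
                      LogisticRegression(max_iter=1000))
model.fit(texts, labels)
print(model.predict(["dumping lsass to steal credentials"]))
```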
|
1606.00717
|
Sateesh Awasthi Kumar
|
Sateesh Kumar Awasthi and Yatindra Nath Singh
|
Biased Contribution Index: A Simpler Mechanism to Maintain Fairness in
Peer to Peer Network
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To maintain fairness in terms of the resources shared by each peer, a proper
incentive policy is required in a peer-to-peer network. This letter proposes a
simpler mechanism to rank the peers based on their resource contributions to
the network. This mechanism suppresses free riders from downloading resources
from the network. Contributions of the peers are biased in such a way that the
download and upload amounts of resources at each peer are balanced. This
mechanism can be implemented in a distributed system, and it converges much
faster than other existing approaches.
|
[
{
"created": "Thu, 2 Jun 2016 15:23:59 GMT",
"version": "v1"
}
] |
2016-06-03
|
[
[
"Awasthi",
"Sateesh Kumar",
""
],
[
"Singh",
"Yatindra Nath",
""
]
] |
To maintain fairness in terms of the resources shared by each peer, a proper incentive policy is required in a peer-to-peer network. This letter proposes a simpler mechanism to rank the peers based on their resource contributions to the network. This mechanism suppresses free riders from downloading resources from the network. Contributions of the peers are biased in such a way that the download and upload amounts of resources at each peer are balanced. This mechanism can be implemented in a distributed system, and it converges much faster than other existing approaches.
|
1902.01878
|
Sagar Sharma
|
Sagar Sharma, Keke Chen
|
Disguised-Nets: Image Disguising for Privacy-preserving Outsourced Deep
Learning
| null | null | null | null |
cs.LG cs.CR cs.CV stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep learning model developers often use cloud GPU resources to experiment
with large data and models that need expensive setups. However, this practice
raises privacy concerns. Adversaries may be interested in: 1) personally
identifiable information or objects encoded in the training images, and 2) the
models trained with sensitive data to launch model-based attacks. Learning deep
neural networks (DNN) from encrypted data is still impractical due to the large
training data and the expensive learning process. A few recent studies have
tried to provide efficient, practical solutions to protect data privacy in
outsourced deep-learning. However, we find that they are vulnerable under
certain attacks. In this paper, we specifically identify two types of unique
attacks on outsourced deep-learning: 1) the visual re-identification attack on
the training data, and 2) the class membership attack on the learned models,
which can break existing privacy-preserving solutions. We develop an image
disguising approach to address these attacks and design a suite of methods to
evaluate the levels of attack resilience for a privacy-preserving solution for
outsourced deep learning. The experimental results show that our
image-disguising mechanisms can provide a high level of protection against the
two attacks while still generating high-quality DNN models for image
classification.
|
[
{
"created": "Tue, 5 Feb 2019 19:20:02 GMT",
"version": "v1"
},
{
"created": "Fri, 19 Apr 2019 04:31:54 GMT",
"version": "v2"
}
] |
2019-04-22
|
[
[
"Sharma",
"Sagar",
""
],
[
"Chen",
"Keke",
""
]
] |
Deep learning model developers often use cloud GPU resources to experiment with large data and models that need expensive setups. However, this practice raises privacy concerns. Adversaries may be interested in: 1) personally identifiable information or objects encoded in the training images, and 2) the models trained with sensitive data to launch model-based attacks. Learning deep neural networks (DNN) from encrypted data is still impractical due to the large training data and the expensive learning process. A few recent studies have tried to provide efficient, practical solutions to protect data privacy in outsourced deep-learning. However, we find that they are vulnerable under certain attacks. In this paper, we specifically identify two types of unique attacks on outsourced deep-learning: 1) the visual re-identification attack on the training data, and 2) the class membership attack on the learned models, which can break existing privacy-preserving solutions. We develop an image disguising approach to address these attacks and design a suite of methods to evaluate the levels of attack resilience for a privacy-preserving solution for outsourced deep learning. The experimental results show that our image-disguising mechanisms can provide a high level of protection against the two attacks while still generating high-quality DNN models for image classification.
|
2209.02228
|
Josef Pieprzyk
|
Josef Pieprzyk, Jarek Duda, Marcin Pawlowski, Seyit Camtepe, Arash
Mahboubi and Pawel Morawiecki
|
Compression Optimality of Asymmetric Numeral Systems
| null | null |
10.3390/e25040672
| null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Compression, also known as entropy coding, has a rich and long history.
However, a recent explosion of multimedia Internet applications (such as
teleconferencing and video streaming for instance) renews an interest in fast
compression that also squeezes out as much redundancy as possible. In 2009
Jarek Duda invented his asymmetric numeral system (ANS). Apart from a beautiful
mathematical structure, it is very efficient and offers compression with a very
low residual redundancy. ANS works well for any symbol source statistics.
Besides, ANS has become a preferred compression algorithm in the IT industry.
However, designing an ANS instance requires a random selection of its symbol
spread function. Consequently, each ANS instance offers compression with a
slightly different compression rate.
The paper investigates compression optimality of ANS. It shows that ANS is
optimal (i.e. the entropies of encoding and source are equal) for any symbol
sources whose probability distribution is described by natural powers of 1/2.
We use Markov chains to calculate ANS state probabilities. This allows us to
determine ANS compression rate precisely. We present two algorithms for finding
ANS instances with high compression rates. The first explores state probability
approximations in order to choose ANS instances with better compression rates.
The second algorithm is a probabilistic one. It finds ANS instances, whose
compression rate can be made as close to the best rate as required. This is
done at the expense of the number $\theta$ of internal random ``coin'' tosses.
The algorithm complexity is ${\cal O}(\theta L^3)$, where $L$ is the number of
ANS states. The complexity can be reduced to ${\cal O}(\theta L\log{L})$ if we
use a fast matrix inversion. If the algorithm is implemented on quantum
computer, its complexity becomes ${\cal O}(\theta (\log{L})^3)$.
|
[
{
"created": "Tue, 6 Sep 2022 05:37:04 GMT",
"version": "v1"
}
] |
2023-05-10
|
[
[
"Pieprzyk",
"Josef",
""
],
[
"Duda",
"Jarek",
""
],
[
"Pawlowski",
"Marcin",
""
],
[
"Camtepe",
"Seyit",
""
],
[
"Mahboubi",
"Arash",
""
],
[
"Morawiecki",
"Pawel",
""
]
] |
Compression, also known as entropy coding, has a rich and long history. However, the recent explosion of multimedia Internet applications (such as teleconferencing and video streaming) has renewed interest in fast compression that also squeezes out as much redundancy as possible. In 2009, Jarek Duda invented the asymmetric numeral system (ANS). Apart from its beautiful mathematical structure, it is very efficient and offers compression with very low residual redundancy. ANS works well for any symbol-source statistics, and it has become a preferred compression algorithm in the IT industry. However, designing an ANS instance requires a random selection of its symbol spread function. Consequently, each ANS instance offers a slightly different compression rate. The paper investigates the compression optimality of ANS. It shows that ANS is optimal (i.e., the entropies of the encoding and the source are equal) for any symbol source whose probability distribution is described by natural powers of 1/2. We use Markov chains to calculate ANS state probabilities, which allows us to determine the ANS compression rate precisely. We present two algorithms for finding ANS instances with high compression rates. The first explores state-probability approximations in order to choose ANS instances with better compression rates. The second algorithm is probabilistic: it finds ANS instances whose compression rate can be made as close to the optimum as required, at the expense of the number $\theta$ of internal random ``coin'' tosses. The algorithm complexity is ${\cal O}(\theta L^3)$, where $L$ is the number of ANS states. The complexity can be reduced to ${\cal O}(\theta L\log{L})$ if we use fast matrix inversion. If the algorithm is implemented on a quantum computer, its complexity becomes ${\cal O}(\theta (\log{L})^3)$.
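To make the encode/decode recurrences concrete, here is a minimal range-ANS (rANS) sketch with an arbitrary-precision state; the frequencies below are natural powers of 1/2, the case the paper proves optimal. The tabled, finite-state variant the paper analyzes adds a symbol spread function and renormalization, which this sketch deliberately omits.

```python
# A minimal rANS sketch using Python's arbitrary-precision integers as
# the coder state. Frequencies must sum to a power of two, M.
M = 8                                  # total frequency (2^3)
freq = {"a": 4, "b": 2, "c": 2}        # p(a)=1/2, p(b)=p(c)=1/4
cum, c = {}, 0
for s in "abc":                        # cumulative frequency table
    cum[s], c = c, c + freq[s]

def encode(symbols):
    x = 1                              # initial state
    for s in symbols:
        x = (x // freq[s]) * M + cum[s] + (x % freq[s])
    return x

def decode(x, n):
    out = []
    for _ in range(n):
        slot = x % M
        s = next(t for t in freq if cum[t] <= slot < cum[t] + freq[t])
        x = freq[s] * (x // M) + slot - cum[s]
        out.append(s)
    return "".join(reversed(out)), x   # decoding runs in reverse (LIFO)

msg = "abacabca"
x = encode(msg)
print(decode(x, len(msg))[0] == msg)   # True: lossless round trip
print(x.bit_length(), "bits for", len(msg), "symbols")
```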
|
2109.05238
|
Shaolei Zhang
|
Shaolei Zhang, Yang Feng
|
Universal Simultaneous Machine Translation with Mixture-of-Experts
Wait-k Policy
|
Accepted at EMNLP 2021 (main conference). 12 pages, 7 figures, 4
tables
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Simultaneous machine translation (SiMT) generates a translation before
reading the entire source sentence and hence must trade off translation
quality against latency. To meet different quality and latency requirements
in practical applications, previous methods usually train multiple SiMT
models for different latency levels, resulting in large computational
costs. In this paper, we propose a universal SiMT model with a
Mixture-of-Experts Wait-k Policy that achieves the best translation quality
under arbitrary latency with only one trained model. Specifically, our
method employs multi-head attention to realize the mixture of experts,
where each head is treated as a wait-k expert with its own number of
waiting words; given a test latency and the source inputs, the expert
weights are adjusted accordingly to produce the best translation.
Experiments on three datasets show that our method outperforms all strong
baselines under different latency levels, including the state-of-the-art
adaptive policy.
|
[
{
"created": "Sat, 11 Sep 2021 09:43:15 GMT",
"version": "v1"
},
{
"created": "Tue, 14 Sep 2021 01:31:39 GMT",
"version": "v2"
},
{
"created": "Mon, 21 Mar 2022 05:23:11 GMT",
"version": "v3"
}
] |
2022-03-22
|
[
[
"Zhang",
"Shaolei",
""
],
[
"Feng",
"Yang",
""
]
] |
Simultaneous machine translation (SiMT) generates a translation before reading the entire source sentence and hence must trade off translation quality against latency. To meet different quality and latency requirements in practical applications, previous methods usually train multiple SiMT models for different latency levels, resulting in large computational costs. In this paper, we propose a universal SiMT model with a Mixture-of-Experts Wait-k Policy that achieves the best translation quality under arbitrary latency with only one trained model. Specifically, our method employs multi-head attention to realize the mixture of experts, where each head is treated as a wait-k expert with its own number of waiting words; given a test latency and the source inputs, the expert weights are adjusted accordingly to produce the best translation. Experiments on three datasets show that our method outperforms all strong baselines under different latency levels, including the state-of-the-art adaptive policy.
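To illustrate the mixture-of-experts wait-k idea in code, the hedged sketch below mixes several wait-k "experts" whose outputs are weighted by a latency-conditioned gate. The simplified prefix-averaging context and the hand-set gate logits are illustrative assumptions, not the authors' actual multi-head-attention architecture.

```python
# A hedged sketch: each wait-k expert sees only the source prefix its
# own lag k allows, and expert outputs are mixed with gate weights.
import torch

def expert_context(src: torch.Tensor, t: int, k: int) -> torch.Tensor:
    """Average the source prefix a wait-k expert may read at target step
    t (1-indexed): t + k - 1 source tokens."""
    visible = min(t + k - 1, src.size(0))
    return src[:visible].mean(dim=0)

def moe_waitk_step(src, t, expert_ks, gate_logits):
    """Mix the per-expert contexts with softmax gate weights."""
    ctxs = torch.stack([expert_context(src, t, k) for k in expert_ks])
    w = torch.softmax(gate_logits, dim=0)       # one weight per expert
    return (w.unsqueeze(1) * ctxs).sum(dim=0)

src = torch.randn(10, 16)                       # 10 source tokens, dim 16
expert_ks = [1, 3, 5, 7]                        # one lag per expert "head"
# A low-latency request would up-weight the small-k experts (illustrative).
gate_logits = torch.tensor([2.0, 1.0, -1.0, -2.0])
ctx = moe_waitk_step(src, t=2, expert_ks=expert_ks, gate_logits=gate_logits)
print(ctx.shape)                                # torch.Size([16])
```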
|
2206.06428
|
Hao Bai
|
Hao Bai
|
VSC-WebGPU: A Selenium-based VS Code Extension For Local Edit And Cloud
Compilation on WebGPU
|
Published by IEEE in the proceedings of the ICFTIC'21 conference
| null |
10.1109/ICFTIC54370.2021.9647189
| null |
cs.NI cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the rapid development of information transmission, Software as a
Service (SaaS) is growing so quickly that functionality once run locally is
increasingly moved onto servers and executed in the cloud. WebGPU is one
such SaaS system: it hosts a GPU-equipped server that executes students'
CUDA code and exposes a RESTful front-end website where students write
their code. However, programming in an HTML-based interface is
unsatisfactory due to the lack of syntax highlighting and automatic keyword
completion. On the other hand, Visual Studio Code has become the most
popular programming interface thanks to its strong community and rich
functionality. We therefore propose a system in which students write code
locally using VS Code with its coding-auxiliary extensions and push the
code to WebGPU with a single button press using our VSC-WebGPU extension.
The extension is divided into four parts: a login process that
automatically logs the student into WebGPU, a pull process that pulls the
code down to the local workspace, a push process that copies the code to
the browser for compiling and running, and an exit process that exits the
browser and closes the connection. This four-step architecture is also
applicable to any other automated tool that pushes local code to
authorization-required SaaS systems using web automation.
|
[
{
"created": "Mon, 13 Jun 2022 19:18:26 GMT",
"version": "v1"
}
] |
2022-06-15
|
[
[
"Bai",
"Hao",
""
]
] |
With the rapid development of information transmission, Software as a Service (SaaS) is growing so quickly that functionality once run locally is increasingly moved onto servers and executed in the cloud. WebGPU is one such SaaS system: it hosts a GPU-equipped server that executes students' CUDA code and exposes a RESTful front-end website where students write their code. However, programming in an HTML-based interface is unsatisfactory due to the lack of syntax highlighting and automatic keyword completion. On the other hand, Visual Studio Code has become the most popular programming interface thanks to its strong community and rich functionality. We therefore propose a system in which students write code locally using VS Code with its coding-auxiliary extensions and push the code to WebGPU with a single button press using our VSC-WebGPU extension. The extension is divided into four parts: a login process that automatically logs the student into WebGPU, a pull process that pulls the code down to the local workspace, a push process that copies the code to the browser for compiling and running, and an exit process that exits the browser and closes the connection. This four-step architecture is also applicable to any other automated tool that pushes local code to authorization-required SaaS systems using web automation.
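A minimal sketch of the four-step automation pattern (login, pull, push, exit) with Selenium might look as follows; the URL and element IDs are hypothetical placeholders rather than the real WebGPU page structure.

```python
# A hedged Selenium sketch of the login / pull / push / exit pattern.
# BASE_URL and all element IDs are assumed placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

BASE_URL = "https://webgpu.example.edu"        # placeholder, not the real host

def login(driver, user, password):
    """Step 1: automatically log the student into the SaaS site."""
    driver.get(BASE_URL + "/login")
    driver.find_element(By.ID, "username").send_keys(user)   # assumed IDs
    driver.find_element(By.ID, "password").send_keys(password)
    driver.find_element(By.ID, "submit").click()

def pull(driver):
    """Step 2: copy the code shown in the web editor into a local file."""
    code = driver.find_element(By.ID, "editor").get_attribute("value")
    with open("main.cu", "w") as f:
        f.write(code)

def push(driver):
    """Step 3: copy the local file back into the editor and compile."""
    with open("main.cu") as f:
        code = f.read()
    editor = driver.find_element(By.ID, "editor")
    editor.clear()
    editor.send_keys(code)
    driver.find_element(By.ID, "compile-and-run").click()

def exit_session(driver):
    """Step 4: exit the browser and close the connection."""
    driver.quit()

if __name__ == "__main__":
    drv = webdriver.Chrome()
    login(drv, "student", "secret")
    pull(drv)
    push(drv)
    exit_session(drv)
```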
|
2007.10588
|
Jinpyo Kim
|
Jinpyo Kim, Wooekun Jung, Hyungmo Kim, Jaejin Lee
|
CyCNN: A Rotation Invariant CNN using Polar Mapping and Cylindrical
Convolution Layers
| null | null | null | null |
cs.CV cs.LG eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep Convolutional Neural Networks (CNNs) are empirically known to be
invariant to moderate translation but not to rotation in image
classification. This paper proposes a deep CNN model, called CyCNN, which
exploits a polar mapping of input images to convert rotation into
translation. To deal with the cylindrical property of polar coordinates, we
replace the convolution layers in conventional CNNs with cylindrical
convolution (CyConv) layers. A CyConv layer exploits the cylindrically
sliding windows (CSW) mechanism, which vertically extends the input-image
receptive fields of boundary units in a convolutional layer. We evaluate
CyCNN and conventional CNN models on classification tasks on the rotated
MNIST, CIFAR-10, and SVHN datasets. We show that without data augmentation
during training, CyCNN significantly improves classification accuracy
compared to conventional CNN models. Our implementation of CyCNN is
publicly available at https://github.com/mcrl/CyCNN.
|
[
{
"created": "Tue, 21 Jul 2020 04:05:35 GMT",
"version": "v1"
}
] |
2020-07-23
|
[
[
"Kim",
"Jinpyo",
""
],
[
"Jung",
"Wooekun",
""
],
[
"Kim",
"Hyungmo",
""
],
[
"Lee",
"Jaejin",
""
]
] |
Deep Convolutional Neural Networks (CNNs) are empirically known to be invariant to moderate translation but not to rotation in image classification. This paper proposes a deep CNN model, called CyCNN, which exploits a polar mapping of input images to convert rotation into translation. To deal with the cylindrical property of polar coordinates, we replace the convolution layers in conventional CNNs with cylindrical convolution (CyConv) layers. A CyConv layer exploits the cylindrically sliding windows (CSW) mechanism, which vertically extends the input-image receptive fields of boundary units in a convolutional layer. We evaluate CyCNN and conventional CNN models on classification tasks on the rotated MNIST, CIFAR-10, and SVHN datasets. We show that without data augmentation during training, CyCNN significantly improves classification accuracy compared to conventional CNN models. Our implementation of CyCNN is publicly available at https://github.com/mcrl/CyCNN.
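The two ingredients, polar remapping and cylindrical convolution, can be sketched in PyTorch as below; the wrap-around padding on the angular axis follows the paper's description, while the resolutions and layer sizes are assumptions for illustration.

```python
# A hedged sketch: polar remapping turns image rotation into a vertical
# shift, and a cylindrical convolution wraps padding around that axis.
import torch
import torch.nn as nn
import torch.nn.functional as F

def polar_map(img: torch.Tensor, out_h: int = 32, out_w: int = 32) -> torch.Tensor:
    """Resample a (1,1,H,W) image onto a (theta, r) grid via grid_sample."""
    theta = torch.linspace(-torch.pi, torch.pi, out_h)
    r = torch.linspace(0.0, 1.0, out_w)
    rr, tt = torch.meshgrid(r, theta, indexing="xy")   # shape (out_h, out_w)
    grid = torch.stack([rr * torch.cos(tt), rr * torch.sin(tt)], dim=-1)
    return F.grid_sample(img, grid.unsqueeze(0), align_corners=True)

class CyConv2d(nn.Module):
    """Conv2d with circular (wrap-around) padding on the angular axis."""
    def __init__(self, c_in, c_out, k=3):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, padding=0)
        self.p = k // 2
    def forward(self, x):
        x = F.pad(x, (0, 0, self.p, self.p), mode="circular")  # wrap height
        x = F.pad(x, (self.p, self.p, 0, 0))                   # zero-pad width
        return self.conv(x)

img = torch.randn(1, 1, 64, 64)
polar = polar_map(img)                 # a rotation of img ~ vertical shift here
print(CyConv2d(1, 8)(polar).shape)     # torch.Size([1, 8, 32, 32])
```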
|
2408.03341
|
Andreas Knoblauch
|
Andreas Knoblauch
|
IVISIT: An Interactive Visual Simulation Tool for system simulation,
visualization, optimization, and parameter management
|
Minor update: added links to the source code of the Python examples in
Section 3
| null | null | null |
cs.HC cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
IVISIT is a generic interactive visual simulation tool based on
Python/NumPy that can be used for system simulation, parameter
optimization, parameter management, and visualization of system dynamics,
as required, for example, for developing neural network simulations,
machine learning applications, or computer vision systems. It provides
classes for rapid prototyping of applications and for visualizing and
manipulating system properties via interactive GUI elements such as
sliders, images, textboxes, option lists, checkboxes, and buttons based on
Tkinter and Matplotlib. Parameters and simulation configurations can be
stored and managed using SQLite database functions. This technical report
describes the main architecture and functions of IVISIT and provides
simple examples showing how to rapidly implement interactive applications
and manage parameter settings.
|
[
{
"created": "Mon, 22 Jul 2024 14:46:32 GMT",
"version": "v1"
},
{
"created": "Sat, 10 Aug 2024 08:01:23 GMT",
"version": "v2"
}
] |
2024-08-13
|
[
[
"Knoblauch",
"Andreas",
""
]
] |
IVISIT is a generic interactive visual simulation tool based on Python/NumPy that can be used for system simulation, parameter optimization, parameter management, and visualization of system dynamics, as required, for example, for developing neural network simulations, machine learning applications, or computer vision systems. It provides classes for rapid prototyping of applications and for visualizing and manipulating system properties via interactive GUI elements such as sliders, images, textboxes, option lists, checkboxes, and buttons based on Tkinter and Matplotlib. Parameters and simulation configurations can be stored and managed using SQLite database functions. This technical report describes the main architecture and functions of IVISIT and provides simple examples showing how to rapidly implement interactive applications and manage parameter settings.
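IVISIT's own classes are not shown in the abstract, so the sketch below illustrates the underlying interaction pattern with plain Tkinter and Matplotlib: a slider that re-runs a toy "simulation" and refreshes a live plot. It is not IVISIT's actual API.

```python
# A hedged sketch of the slider-driven simulation pattern, using plain
# Tkinter/Matplotlib rather than IVISIT's own wrapper classes.
import tkinter as tk
import numpy as np
from matplotlib.figure import Figure
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg

root = tk.Tk()
root.title("Slider-driven simulation view")

fig = Figure(figsize=(4, 3))
ax = fig.add_subplot(111)
canvas = FigureCanvasTkAgg(fig, master=root)
canvas.get_tk_widget().pack()

x = np.linspace(0, 2 * np.pi, 200)

def redraw(freq_str):
    """Re-run the toy 'simulation' and refresh the plot on slider moves."""
    ax.clear()
    ax.plot(x, np.sin(float(freq_str) * x))
    canvas.draw()

slider = tk.Scale(root, from_=1, to=10, orient="horizontal",
                  label="frequency", command=redraw)  # value passed as string
slider.pack()
redraw("1")
root.mainloop()
```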
|