Dataset schema (column statistics as reported by the dataset viewer; ⌀ marks a nullable column):

| column | type | length / size | nullable |
|---|---|---|---|
| id | string | 9–10 chars | no |
| submitter | string | 1–64 chars | yes |
| authors | string | 4–20.7k chars | no |
| title | string | 4–246 chars | no |
| comments | string | 1–523 chars | yes |
| journal-ref | string | 4–404 chars | yes |
| doi | string | 11–153 chars | yes |
| report-no | string | 2–254 chars | yes |
| categories | string | 5–98 chars | no |
| license | string | 9 classes | no |
| orig_abstract | string | 14–3.35k chars | no |
| versions | list | 1–60 items | no |
| update_date | string | 10 chars | no |
| authors_parsed | list | 1–1.35k items | no |
| abstract | string | 11–3.34k chars | no |
**2308.06721 — IP-Adapter: Text Compatible Image Prompt Adapter for Text-to-Image Diffusion Models**

- Submitter: Hu Ye
- Authors: Hu Ye, Jun Zhang, Sibo Liu, Xiao Han, Wei Yang
- Categories: cs.CV cs.AI
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Versions: v1 (Sun, 13 Aug 2023 08:34:51 GMT)
- Updated: 2023-08-15

Recent years have witnessed the power of large text-to-image diffusion models and their impressive capability to generate high-fidelity images. However, generating the desired image from a text prompt alone is tricky, as it often involves complex prompt engineering. An alternative to the text prompt is the image prompt; as the saying goes, "an image is worth a thousand words". Although existing methods that directly fine-tune pretrained models are effective, they require large computing resources and are not compatible with other base models, text prompts, and structural controls. In this paper, we present IP-Adapter, an effective and lightweight adapter that adds image prompt capability to pretrained text-to-image diffusion models. The key design of IP-Adapter is a decoupled cross-attention mechanism that separates the cross-attention layers for text features and image features. Despite the simplicity of the method, an IP-Adapter with only 22M parameters achieves performance comparable to, or even better than, a fully fine-tuned image prompt model. Because we freeze the pretrained diffusion model, the proposed IP-Adapter generalizes not only to other custom models fine-tuned from the same base model, but also to controllable generation with existing controllable tools. Thanks to the decoupled cross-attention strategy, the image prompt also works well together with the text prompt for multimodal image generation. The project page is available at https://ip-adapter.github.io.
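The decoupled cross-attention described in this abstract amounts to running one attention pass over text features and a second over image features, then summing the two. A minimal NumPy sketch, not the authors' implementation; the single-head layout and the `scale` knob are simplifying assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # standard scaled dot-product attention
    return softmax(q @ k.T / np.sqrt(q.shape[-1])) @ v

def decoupled_cross_attention(q, k_txt, v_txt, k_img, v_img, scale=1.0):
    # separate cross-attention layers for text and image features, summed;
    # scale=0 recovers the text-only behaviour of the frozen base model
    return attention(q, k_txt, v_txt) + scale * attention(q, k_img, v_img)
```

Setting `scale=0` silences the image branch entirely, which is consistent with the adapter being an additive path on top of a frozen base model.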
**2403.08738 — Improving Acoustic Word Embeddings through Correspondence Training of Self-supervised Speech Representations**

- Submitter: Amit Meghanani
- Authors: Amit Meghanani and Thomas Hain
- Comments: Accepted to EACL 2024 Main Conference, long paper
- Categories: cs.CL cs.SD eess.AS
- License: http://creativecommons.org/licenses/by/4.0/
- Versions: v1 (Wed, 13 Mar 2024 17:42:03 GMT)
- Updated: 2024-03-14

Acoustic word embeddings (AWEs) are vector representations of spoken words. An effective method for obtaining AWEs is the correspondence auto-encoder (CAE). In the past, the CAE method has been paired with traditional MFCC features. Representations obtained from self-supervised learning (SSL)-based speech models such as HuBERT and Wav2vec2 outperform MFCCs in many downstream tasks, but they have not been well studied in the context of learning AWEs. This work explores the effectiveness of the CAE with SSL-based speech representations for obtaining improved AWEs, and examines the capabilities of SSL-based speech models in cross-lingual scenarios. Experiments are conducted on five languages: Polish, Portuguese, Spanish, French, and English. The HuBERT-based CAE model achieves the best word-discrimination results in all languages, despite HuBERT being pre-trained on English only. The HuBERT-based CAE model also works well in cross-lingual settings: trained on one source language and tested on target languages, it outperforms MFCC-based CAE models trained on the target languages themselves.
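The correspondence training idea above — encode one spoken instance of a word and ask the decoder to reconstruct a *different* instance of the same word — can be sketched in a few lines. All dimensions and weights here are illustrative; real inputs would be pooled HuBERT or MFCC features, and the weights would be learned:

```python
import numpy as np

rng = np.random.default_rng(0)
D, H = 16, 4                            # feature dim, embedding dim (illustrative)
W_enc = rng.normal(scale=0.1, size=(D, H))
W_dec = rng.normal(scale=0.1, size=(H, D))

def cae_forward(x_a, x_b):
    # encode instance A of a word; the bottleneck z is the acoustic
    # word embedding; the loss compares the reconstruction against a
    # *different* instance B of the same word (the "correspondence" part)
    z = x_a @ W_enc
    x_hat = z @ W_dec
    loss = ((x_hat - x_b) ** 2).mean()
    return z, loss
```

Training on cross-instance pairs, rather than plain auto-encoding, is what pushes the bottleneck toward speaker- and channel-invariant word identity.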
**2209.13834 — Multi-Sample Training for Neural Image Compression**

- Submitter: Tongda Xu
- Authors: Tongda Xu, Yan Wang, Dailan He, Chenjian Gao, Han Gao, Kunzan Liu, Hongwei Qin
- Comments: NeurIPS 2022
- Categories: cs.CV
- License: http://creativecommons.org/licenses/by/4.0/
- Versions: v1 (Wed, 28 Sep 2022 04:42:02 GMT)
- Updated: 2022-09-29

This paper considers the problem of lossy neural image compression (NIC). Current state-of-the-art (SOTA) methods adopt a uniform posterior to approximate quantization noise and a single-sample pathwise estimator to approximate the gradient of the evidence lower bound (ELBO). In this paper, we propose to train NIC with a multiple-sample importance weighted autoencoder (IWAE) target, which is tighter than the ELBO and converges to the log likelihood as the sample size increases. First, we identify that the uniform posterior of NIC has special properties that affect the variance and bias of the pathwise and score-function estimators of the IWAE target. Moreover, we provide insights into a commonly adopted trick in NIC from a gradient-variance perspective. Based on these analyses, we propose multiple-sample NIC (MS-NIC), an enhanced IWAE target for NIC. Experimental results demonstrate that it improves SOTA NIC methods. MS-NIC is plug-and-play and can easily be extended to other neural compression tasks.
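The tightness claim in this abstract (the K-sample IWAE bound dominates the ELBO and approaches the log likelihood as K grows) follows from Jensen's inequality and can be checked numerically on arbitrary log importance weights, independent of any compression model:

```python
import numpy as np

def elbo(log_w):
    # K-sample ELBO: the average of the log importance weights
    return log_w.mean()

def iwae(log_w):
    # K-sample IWAE bound: the log of the average importance weight,
    # computed with the log-sum-exp trick for numerical stability
    m = log_w.max()
    return m + np.log(np.exp(log_w - m).mean())
```

Since log is concave, log E[w] >= E[log w], so `iwae(log_w) >= elbo(log_w)` for any weights, with equality at K = 1.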
**2210.10981 — MGTUNet: A new UNet for colon nuclei instance segmentation and quantification**

- Submitter: Liangrui Pan
- Authors: Liangrui Pan, Lian Wang, Zhichao Feng, Zhujun Xu, Liwen Xu, Shaoliang Peng
- Comments: Published in BIBM 2022 (regular paper), https://doi.org/10.1109/BIBM55620.2022.9995669
- Categories: cs.CV cs.GT
- License: http://creativecommons.org/licenses/by/4.0/
- Versions: v1 (Thu, 20 Oct 2022 03:00:40 GMT); v2 (Fri, 26 Jan 2024 13:55:37 GMT)
- Updated: 2024-01-29

Colorectal cancer (CRC) is among the top three malignant tumor types in terms of morbidity and mortality. Histopathological images are the gold standard for diagnosing colon cancer. Nuclei instance segmentation and classification, together with nuclear component regression, can aid the analysis of the tumor microenvironment in colon tissue. Traditional methods still cannot handle both types of tasks end-to-end at the same time, and they suffer from poor prediction accuracy and high application costs. This paper proposes a new UNet-based model for handling nuclei, called MGTUNet, which uses Mish, group normalization, and transposed convolution layers to improve the segmentation model, and a Ranger optimizer to adjust the SmoothL1 loss values. Second, it uses different channels to segment and classify different types of nuclei, completing the nuclei instance segmentation and classification task and the nuclear component regression task simultaneously. Finally, we ran extensive comparison experiments against eight segmentation models. Comparing the three evaluation metrics and the parameter sizes of the models, MGTUNet obtained 0.6254 on PQ, 0.6359 on mPQ, and 0.8695 on R2. The experiments thus demonstrate that MGTUNet is a state-of-the-art method for quantifying histopathological images of colon cancer.
**2406.07420 — Graph Reasoning for Explainable Cold Start Recommendation**

- Submitter: Jibril Frej
- Authors: Jibril Frej, Marta Knezevic, and Tanja Kaser
- Categories: cs.IR
- License: http://creativecommons.org/licenses/by/4.0/
- Versions: v1 (Tue, 11 Jun 2024 16:21:57 GMT)
- Updated: 2024-06-12

The cold start problem, where new users or items have no interaction history, remains a critical challenge in recommender systems (RS). A common solution uses knowledge graphs (KGs) to train entity embeddings or graph neural networks (GNNs). Since KGs incorporate auxiliary data, not just user/item interactions, these methods can make relevant recommendations for cold users or items. Graph reasoning (GR) methods, by contrast, recommend by finding paths from users to items along KG relations and, in the RS context, have been used for interpretability. In this study, we propose GRECS, a framework for adapting GR to cold start recommendation. By following explicit paths starting from users rather than relying only on entity embeddings, GRECS can find items matching users' preferences by navigating the graph, even when limited information about the users is available. Our experiments show that GRECS mitigates the cold start problem and outperforms competitive baselines across five standard datasets while being explainable. This study highlights the potential of GR for developing explainable recommender systems better suited to managing cold users and items.
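Graph-reasoning recommendation as described above boils down to finding a relation-labelled path from a user node to an item node in the KG; the path itself serves as the explanation. A minimal breadth-first sketch over a toy KG — all node and relation names are hypothetical, and this is not GRECS's actual algorithm, which scores paths with learned embeddings:

```python
from collections import deque

def explain_path(kg, user, item):
    # kg: {node: [(relation, neighbour), ...]}; returns the shortest
    # relation-labelled path user -> item, usable as an explanation,
    # or None if the item is unreachable
    queue, seen = deque([(user, [])]), {user}
    while queue:
        node, path = queue.popleft()
        if node == item:
            return path
        for rel, nxt in kg.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [(node, rel, nxt)]))
    return None
```

Note how a cold user with even a single auxiliary edge (say, following a director) still gets a path, and hence a recommendation with a readable justification, where interaction-only methods would have nothing to go on.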
**2312.01097 — Planning as In-Painting: A Diffusion-Based Embodied Task Planning Framework for Environments under Uncertainty**

- Submitter: Cheng-Fu Yang
- Authors: Cheng-Fu Yang, Haoyang Xu, Te-Lin Wu, Xiaofeng Gao, Kai-Wei Chang, Feng Gao
- Categories: cs.CV cs.LG cs.RO
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Versions: v1 (Sat, 2 Dec 2023 10:07:17 GMT)
- Updated: 2023-12-05

Task planning for embodied AI has been one of the most challenging problems, and the community has not reached a consensus on its formulation. In this paper, we tackle this problem with a unified framework consisting of an end-to-end trainable method and a planning algorithm. In particular, we propose a task-agnostic method named "planning as in-painting", which uses a denoising diffusion model (DDM) for plan generation, conditioned on both language instructions and perceptual inputs in partially observable environments. Partial observation often leads the model to hallucinate the plan; our diffusion-based method therefore jointly models the state trajectory and the goal estimate to improve the reliability of the generated plan given the limited information available at each step. To better exploit newly discovered information during plan execution and raise the success rate, we propose an on-the-fly planning algorithm that collaborates with the diffusion-based planner. The proposed framework achieves promising performance on various embodied AI tasks, including vision-language navigation, object manipulation, and task planning in a photorealistic virtual environment. The code is available at https://github.com/joeyy5588/planning-as-inpainting.
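The on-the-fly planning loop sketched in this abstract — generate a plan, execute one step, observe, replan — follows the generic receding-horizon pattern. A schematic sketch with hypothetical `plan_fn` and environment interfaces (the paper's planner is a conditional diffusion model; here any callable works):

```python
def execute_with_replanning(plan_fn, env, goal, max_steps=50):
    # plan_fn(obs, goal) -> list of actions; env.observe() -> obs;
    # env.step(action) -> True when the goal is reached.
    # Only the first action of each plan is executed, so information
    # discovered along the way is folded into every subsequent plan.
    for _ in range(max_steps):
        obs = env.observe()
        plan = plan_fn(obs, goal)
        if not plan:
            return False
        if env.step(plan[0]):
            return True
    return False
```

Executing one action per plan is the conservative extreme; a cheaper variant executes a short prefix before replanning, trading reactivity for fewer planner calls.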
**1506.03193 — Accelerating Non-volatile/Hybrid Processor Cache Design Space Exploration for Application Specific Embedded Systems**

- Submitter: Mohammad Shihabul Haque
- Authors: Mohammad Shihabul Haque, Ang Li, Akash Kumar, Qingsong Wei
- DOI: 10.1109/ASPDAC.2015.7059045
- Categories: cs.AR
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Versions: v1 (Wed, 10 Jun 2015 07:14:45 GMT); v2 (Mon, 31 Aug 2015 17:51:47 GMT)
- Updated: 2015-09-01

In this article, we propose a technique to accelerate the design space exploration of nonvolatile, or hybrid volatile/nonvolatile, processor caches for application-specific embedded systems. Utilizing a novel cache behavior modeling equation and a new, accurate cache miss prediction mechanism, the proposed technique accelerates NVM or hybrid FIFO processor cache design space exploration for SPEC CPU 2000 applications by up to 249 times compared to the conventional approach.
**1305.6577 — Polynomial Bounds for the Grid-Minor Theorem**

- Submitter: Chandra Chekuri
- Authors: Chandra Chekuri and Julia Chuzhoy
- Comments: Preliminary version appeared in Proceedings of ACM STOC, 2014. This is a full version that has been submitted to a journal and then revised based on reviewer comments.
- Categories: cs.DS cs.DM
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Versions: v1 (Tue, 28 May 2013 18:27:20 GMT); v2 (Thu, 17 Oct 2013 16:50:19 GMT); v3 (Mon, 21 Jul 2014 21:38:19 GMT); v4 (Tue, 23 Sep 2014 21:33:18 GMT); v5 (Tue, 9 Aug 2016 21:33:21 GMT)
- Updated: 2016-08-11

One of the key results in Robertson and Seymour's seminal work on graph minors is the Grid-Minor Theorem (also called the Excluded Grid Theorem). The theorem states that for every grid $H$, every graph whose treewidth is large enough relative to $|V(H)|$ contains $H$ as a minor. This theorem has found many applications in graph theory and algorithms. Let $f(k)$ denote the largest value such that every graph of treewidth $k$ contains a grid minor of size $(f(k)\times f(k))$. The best previous quantitative bound, due to recent work of Kawarabayashi and Kobayashi, and Leaf and Seymour, shows that $f(k)=\Omega(\sqrt{\log k/\log \log k})$. In contrast, the best known upper bound implies that $f(k) = O(\sqrt{k/\log k})$. In this paper we obtain the first polynomial relationship between treewidth and grid minor size by showing that $f(k)=\Omega(k^{\delta})$ for some fixed constant $\delta > 0$, and describe a randomized algorithm, whose running time is polynomial in $|V(G)|$ and $k$, that with high probability finds a model of such a grid minor in $G$.
**2001.07885 — Normalization of Input-output Shared Embeddings in Text Generation Models**

- Submitter: Jinyang Liu
- Authors: Jinyang Liu, Yujia Zhai, Zizhong Chen
- Categories: cs.CL cs.LG
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Versions: v1 (Wed, 22 Jan 2020 05:34:45 GMT); v2 (Fri, 24 Jan 2020 04:42:13 GMT)
- Updated: 2020-01-27

Neural-network-based models are state-of-the-art for various natural language processing tasks; however, the input and output dimension problem in these networks has still not been fully resolved, especially in text generation tasks (e.g., machine translation, text summarization), where both input and output have huge vocabularies. Input-output embedding weight sharing has therefore been introduced and widely adopted, but it leaves room for improvement. Drawing on linear algebra and statistical theory, this paper locates the shortcoming of the existing input-output embedding weight sharing method and proposes methods for improving input-output shared embeddings, among which normalization of the embedding weight matrices performs best. These methods are nearly free of computational cost, can be combined with other embedding techniques, and show good effectiveness when applied to state-of-the-art neural network models. For Transformer-big models, the normalization techniques achieve at best a 0.6 BLEU improvement over the original model on the WMT'16 En-De dataset, and similar BLEU improvements on the IWSLT'14 datasets. For DynamicConv models, a 0.5 BLEU improvement is attained on WMT'16 En-De, and a 0.41 BLEU improvement on the IWSLT'14 De-En translation task.
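The core mechanism in this abstract — one embedding matrix shared between the input lookup and the output projection, with the matrix normalized before the projection — can be sketched as follows. This is one plausible reading of "normalization of embedding weight matrices" (row-wise L2), not necessarily the paper's exact formulation:

```python
import numpy as np

def normalize_rows(E, eps=1e-8):
    # scale every embedding vector (row) to unit L2 norm
    return E / (np.linalg.norm(E, axis=1, keepdims=True) + eps)

def output_logits(h, E):
    # weight sharing: the output projection reuses the (normalized)
    # input embedding matrix, so each logit is a scaled dot product
    # between the decoder state and a token's embedding
    return h @ normalize_rows(E).T
```

With unit-norm rows, no token wins logits merely by having a large embedding norm, which is one intuition for why normalizing the shared matrix can help.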
**2005.02979 — A Survey of Algorithms for Black-Box Safety Validation of Cyber-Physical Systems**

- Submitter: Anthony Corso
- Authors: Anthony Corso, Robert J. Moss, Mark Koren, Ritchie Lee, Mykel J. Kochenderfer
- Journal-ref: Journal of Artificial Intelligence Research, vol. 72, pp. 377-428, 2021
- DOI: 10.1613/jair.1.12716
- Categories: cs.LG cs.AI cs.SY eess.SY stat.ML
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Versions: v1 (Wed, 6 May 2020 17:31:51 GMT); v2 (Sat, 11 Jul 2020 16:18:28 GMT); v3 (Thu, 14 Oct 2021 16:40:00 GMT)
- Updated: 2021-10-15

Autonomous cyber-physical systems (CPS) can improve safety and efficiency for safety-critical applications, but require rigorous testing before deployment. The complexity of these systems often precludes the use of formal verification, and real-world testing can be too dangerous during development. Therefore, simulation-based techniques have been developed that treat the system under test as a black box operating in a simulated environment. Safety validation tasks include finding disturbances in the environment that cause the system to fail (falsification), finding the most-likely failure, and estimating the probability that the system fails. Motivated by the prevalence of safety-critical artificial intelligence, this work provides a survey of state-of-the-art safety validation techniques for CPS with a focus on applied algorithms and their modifications for the safety validation problem. We present and discuss algorithms in the domains of optimization, path planning, reinforcement learning, and importance sampling. Problem decomposition techniques are presented to help scale algorithms to large state spaces, which are common for CPS. A brief overview of safety-critical applications is given, including autonomous vehicles and aircraft collision avoidance systems. Finally, we present a survey of existing academic and commercially available safety validation tools.
1102.5452
|
Miroslav \'Ciri\'c
|
Miroslav \'Ciri\'c, Jelena Ignjatovi\'c, Nada Damljanovi\'c, Milan
Ba\v{s}i\'c
|
Bisimulations for fuzzy automata
|
41 pages, submitted to a journal
| null | null | null |
cs.FL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Bisimulations have been widely used in many areas of computer science to
model equivalence between various systems, and to reduce the number of states
of these systems, whereas uniform fuzzy relations have recently been introduced
as a means to model the fuzzy equivalence between elements of two possible
different sets. Here we use the conjunction of these two concepts as a powerful
tool in the study of equivalence between fuzzy automata. We prove that a
uniform fuzzy relation between fuzzy automata $\cal A$ and $\cal B$ is a
forward bisimulation if and only if its kernel and co-kernel are forward
bisimulation fuzzy equivalences on $\cal A$ and $\cal B$ and there is a special
isomorphism between factor fuzzy automata with respect to these fuzzy
equivalences. As a consequence we get that fuzzy automata $\cal A$ and $\cal B$
are UFB-equivalent, i.e., there is a uniform forward bisimulation between them,
if and only if there is a special isomorphism between the factor fuzzy automata
of $\cal A$ and $\cal B$ with respect to their greatest forward bisimulation
fuzzy equivalences. This result reduces the problem of testing UFB-equivalence
to the problem of testing isomorphism of fuzzy automata, which is closely
related to the well-known graph isomorphism problem. We prove some similar
results for backward-forward bisimulations, and we point to fundamental
differences. Because of the duality with the studied concepts, backward and
forward-backward bisimulations are not considered separately. Finally, we give
a comprehensive overview of various concepts on deterministic,
nondeterministic, fuzzy, and weighted automata, which are related to
bisimulations.
|
[
{
"created": "Sat, 26 Feb 2011 21:37:51 GMT",
"version": "v1"
},
{
"created": "Thu, 5 May 2011 20:11:04 GMT",
"version": "v2"
}
] |
2011-05-09
|
[
[
"Ćirić",
"Miroslav",
""
],
[
"Ignjatović",
"Jelena",
""
],
[
"Damljanović",
"Nada",
""
],
[
"Bašić",
"Milan",
""
]
] |
Bisimulations have been widely used in many areas of computer science to model equivalence between various systems, and to reduce the number of states of these systems, whereas uniform fuzzy relations have recently been introduced as a means to model the fuzzy equivalence between elements of two possibly different sets. Here we use the conjunction of these two concepts as a powerful tool in the study of equivalence between fuzzy automata. We prove that a uniform fuzzy relation between fuzzy automata $\cal A$ and $\cal B$ is a forward bisimulation if and only if its kernel and co-kernel are forward bisimulation fuzzy equivalences on $\cal A$ and $\cal B$ and there is a special isomorphism between factor fuzzy automata with respect to these fuzzy equivalences. As a consequence we get that fuzzy automata $\cal A$ and $\cal B$ are UFB-equivalent, i.e., there is a uniform forward bisimulation between them, if and only if there is a special isomorphism between the factor fuzzy automata of $\cal A$ and $\cal B$ with respect to their greatest forward bisimulation fuzzy equivalences. This result reduces the problem of testing UFB-equivalence to the problem of testing isomorphism of fuzzy automata, which is closely related to the well-known graph isomorphism problem. We prove some similar results for backward-forward bisimulations, and we point to fundamental differences. Because of the duality with the studied concepts, backward and forward-backward bisimulations are not considered separately. Finally, we give a comprehensive overview of various concepts on deterministic, nondeterministic, fuzzy, and weighted automata, which are related to bisimulations.
|
1402.5351
|
Jian-Jun Shu
|
Jian-Jun Shu
|
On generalized Tian Ji's horse racing strategy
| null |
Interdisciplinary Science Reviews, Vol. 37, No. 2, pp. 187-193,
2012
|
10.1179/0308018812Z.00000000014
| null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Tian Ji's horse racing strategy, a famous Chinese legend, constitutes a
promising concept to be applied to important issues in today's competitive
environment; this strategy is elaborated on and analyzed by examining the
general case. The mathematical formulation concerning the calculation of
winning, drawing or losing combinations and probabilities is presented to
illustrate the interesting insights on how ancient philosophies could promote
thinking in business competitiveness, in particular, the wisdom behind
sacrificing the part for the benefit of the whole or sacrificing the short-term
objectives in order to gain the long-term goal.
|
[
{
"created": "Fri, 21 Feb 2014 17:01:23 GMT",
"version": "v1"
},
{
"created": "Fri, 28 Feb 2014 14:38:20 GMT",
"version": "v2"
},
{
"created": "Mon, 24 Apr 2017 17:37:43 GMT",
"version": "v3"
}
] |
2017-04-25
|
[
[
"Shu",
"Jian-Jun",
""
]
] |
Tian Ji's horse racing strategy, a famous Chinese legend, constitutes a promising concept to be applied to important issues in today's competitive environment; this strategy is elaborated on and analyzed by examining the general case. The mathematical formulation concerning the calculation of winning, drawing or losing combinations and probabilities is presented to illustrate the interesting insights on how ancient philosophies could promote thinking in business competitiveness, in particular, the wisdom behind sacrificing the part for the benefit of the whole or sacrificing the short-term objectives in order to gain the long-term goal.
|
2301.08830
|
Carlos Martin
|
Carlos Martin, Tuomas Sandholm
|
ApproxED: Approximate exploitability descent via learned best responses
| null | null | null | null |
cs.GT cs.AI cs.LG cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There has been substantial progress on finding game-theoretic equilibria.
Most of that work has focused on games with finite, discrete action spaces.
However, many games involving space, time, money, and other fine-grained
quantities have continuous action spaces (or are best modeled as having such).
We study the problem of finding an approximate Nash equilibrium of games with
continuous action sets. The standard measure of closeness to Nash equilibrium
is exploitability, which measures how much players can benefit from
unilaterally changing their strategy. We propose two new methods that minimize
an approximation of exploitability with respect to the strategy profile. The
first method uses a learned best-response function, which takes the current
strategy profile as input and outputs candidate best responses for each player.
The strategy profile and best-response functions are trained simultaneously,
with the former trying to minimize exploitability while the latter tries to
maximize it. The second method maintains an ensemble of candidate best
responses for each player. In each iteration, the best-performing elements of
each ensemble are used to update the current strategy profile. The strategy
profile and ensembles are simultaneously trained to minimize and maximize the
approximate exploitability, respectively. We evaluate our methods on various
continuous games and GAN training, showing that they outperform prior methods.
|
[
{
"created": "Fri, 20 Jan 2023 23:55:30 GMT",
"version": "v1"
},
{
"created": "Tue, 19 Dec 2023 18:31:20 GMT",
"version": "v2"
},
{
"created": "Wed, 12 Jun 2024 22:39:58 GMT",
"version": "v3"
}
] |
2024-06-14
|
[
[
"Martin",
"Carlos",
""
],
[
"Sandholm",
"Tuomas",
""
]
] |
There has been substantial progress on finding game-theoretic equilibria. Most of that work has focused on games with finite, discrete action spaces. However, many games involving space, time, money, and other fine-grained quantities have continuous action spaces (or are best modeled as having such). We study the problem of finding an approximate Nash equilibrium of games with continuous action sets. The standard measure of closeness to Nash equilibrium is exploitability, which measures how much players can benefit from unilaterally changing their strategy. We propose two new methods that minimize an approximation of exploitability with respect to the strategy profile. The first method uses a learned best-response function, which takes the current strategy profile as input and outputs candidate best responses for each player. The strategy profile and best-response functions are trained simultaneously, with the former trying to minimize exploitability while the latter tries to maximize it. The second method maintains an ensemble of candidate best responses for each player. In each iteration, the best-performing elements of each ensemble are used to update the current strategy profile. The strategy profile and ensembles are simultaneously trained to minimize and maximize the approximate exploitability, respectively. We evaluate our methods on various continuous games and GAN training, showing that they outperform prior methods.
|
2105.01266
|
Kevin Jin
|
Ayaz Hafiz, Kevin Jin
|
Architecture of a Flexible and Cost-Effective Remote Code Execution
Engine
|
11 pages, 8 figures
| null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Oftentimes, there is a need to experiment with different programming
languages and technologies when designing software applications. Such
experiments must be reproducible and shareable within a team workplace, and
manual effort should be minimized for setting up/tearing down said experiments.
This paper solves this problem by presenting a cloud-based web service for
remote code execution that is easily extensible to support any number of
programming languages and libraries. The service provides a fast, reproducible
solution for small software experiments and is amenable to collaboration in a
workplace (via sharable permalinks). The service is designed as a distributed
system to reliably support a large number of users, and efficiently manage
cloud-hosting costs with predictive auto-scaling while minimizing SLA
violations.
|
[
{
"created": "Tue, 4 May 2021 03:13:54 GMT",
"version": "v1"
}
] |
2021-05-05
|
[
[
"Hafiz",
"Ayaz",
""
],
[
"Jin",
"Kevin",
""
]
] |
Oftentimes, there is a need to experiment with different programming languages and technologies when designing software applications. Such experiments must be reproducible and shareable within a team workplace, and manual effort should be minimized for setting up/tearing down said experiments. This paper solves this problem by presenting a cloud-based web service for remote code execution that is easily extensible to support any number of programming languages and libraries. The service provides a fast, reproducible solution for small software experiments and is amenable to collaboration in a workplace (via sharable permalinks). The service is designed as a distributed system to reliably support a large number of users, and efficiently manage cloud-hosting costs with predictive auto-scaling while minimizing SLA violations.
|
2310.07793
|
Ruotong Liao
|
Ruotong Liao, Xu Jia, Yangzhe Li, Yunpu Ma, Volker Tresp
|
GenTKG: Generative Forecasting on Temporal Knowledge Graph with Large
Language Models
|
14 pages, Findings of NAACL 2024, Spotlight on TGL@NeurIPS2023
| null | null | null |
cs.CL cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The rapid advancements in large language models (LLMs) have ignited interest
in the temporal knowledge graph (tKG) domain, where conventional
embedding-based and rule-based methods dominate. It remains an open question
whether pre-trained LLMs can understand structured temporal relational data and
replace these methods as the foundation model for temporal relational
forecasting. Therefore, we bring temporal knowledge forecasting into the
generative setting. However, challenges arise from the huge chasm between the
complex structure of temporal graph data and the sequential natural language
expressions that LLMs can handle, and between the enormous data sizes of tKGs
and the heavy computation costs of finetuning LLMs. To address these
challenges, we propose GenTKG, a novel retrieval-augmented generation framework
that combines a temporal logical rule-based retrieval strategy with few-shot
parameter-efficient instruction tuning, addressing the two challenges
respectively. Extensive experiments show that GenTKG outperforms conventional
methods of temporal relational forecasting with low computation resources,
using as few as 16 training samples. GenTKG also exhibits remarkable
cross-domain generalizability, with strong performance on unseen datasets
without re-training, and in-domain generalizability regardless of the time
split within the same dataset. Our work reveals
the huge potential of LLMs in the tKG domain and opens a new frontier for
generative forecasting on tKGs. Code and data are released here:
https://github.com/mayhugotong/GenTKG.
|
[
{
"created": "Wed, 11 Oct 2023 18:27:12 GMT",
"version": "v1"
},
{
"created": "Tue, 14 Nov 2023 15:51:18 GMT",
"version": "v2"
},
{
"created": "Thu, 7 Mar 2024 17:43:30 GMT",
"version": "v3"
},
{
"created": "Wed, 13 Mar 2024 17:10:48 GMT",
"version": "v4"
},
{
"created": "Tue, 16 Apr 2024 18:35:30 GMT",
"version": "v5"
}
] |
2024-04-18
|
[
[
"Liao",
"Ruotong",
""
],
[
"Jia",
"Xu",
""
],
[
"Li",
"Yangzhe",
""
],
[
"Ma",
"Yunpu",
""
],
[
"Tresp",
"Volker",
""
]
] |
The rapid advancements in large language models (LLMs) have ignited interest in the temporal knowledge graph (tKG) domain, where conventional embedding-based and rule-based methods dominate. It remains an open question whether pre-trained LLMs can understand structured temporal relational data and replace these methods as the foundation model for temporal relational forecasting. Therefore, we bring temporal knowledge forecasting into the generative setting. However, challenges arise from the huge chasm between the complex structure of temporal graph data and the sequential natural language expressions that LLMs can handle, and between the enormous data sizes of tKGs and the heavy computation costs of finetuning LLMs. To address these challenges, we propose GenTKG, a novel retrieval-augmented generation framework that combines a temporal logical rule-based retrieval strategy with few-shot parameter-efficient instruction tuning, addressing the two challenges respectively. Extensive experiments show that GenTKG outperforms conventional methods of temporal relational forecasting with low computation resources, using as few as 16 training samples. GenTKG also exhibits remarkable cross-domain generalizability, with strong performance on unseen datasets without re-training, and in-domain generalizability regardless of the time split within the same dataset. Our work reveals the huge potential of LLMs in the tKG domain and opens a new frontier for generative forecasting on tKGs. Code and data are released here: https://github.com/mayhugotong/GenTKG.
|
2102.01886
|
Xuefeng Du
|
Xuefeng Du, Jingfeng Zhang, Bo Han, Tongliang Liu, Yu Rong, Gang Niu,
Junzhou Huang, Masashi Sugiyama
|
Learning Diverse-Structured Networks for Adversarial Robustness
|
ICML2021, code: https://github.com/d12306/dsnet
| null | null | null |
cs.LG cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In adversarial training (AT), the main focus has been the objective and
optimizer while the model has been less studied, so that the models being used
are still those classic ones in standard training (ST). Classic network
architectures (NAs) are generally worse than searched NAs in ST, which should
be the same in AT. In this paper, we argue that NA and AT cannot be handled
independently, since given a dataset, the optimal NA in ST would no longer be
optimal in AT. That being said, AT is time-consuming itself; if we directly
search NAs in AT over large search spaces, the computation will be practically
infeasible. Thus, we propose a diverse-structured network (DS-Net), to
significantly reduce the size of the search space: instead of low-level
operations, we only consider predefined atomic blocks, where an atomic block is
a time-tested building block like the residual block. There are only a few
atomic blocks and thus we can weight all atomic blocks rather than find the
best one in a searched block of DS-Net, which is an essential trade-off between
exploring diverse structures and exploiting the best structures. Empirical
results demonstrate the advantages of DS-Net, i.e., weighting the atomic
blocks.
|
[
{
"created": "Wed, 3 Feb 2021 05:52:11 GMT",
"version": "v1"
},
{
"created": "Fri, 5 Feb 2021 14:48:27 GMT",
"version": "v2"
},
{
"created": "Mon, 8 Feb 2021 03:09:39 GMT",
"version": "v3"
},
{
"created": "Fri, 18 Jun 2021 06:57:10 GMT",
"version": "v4"
}
] |
2021-06-21
|
[
[
"Du",
"Xuefeng",
""
],
[
"Zhang",
"Jingfeng",
""
],
[
"Han",
"Bo",
""
],
[
"Liu",
"Tongliang",
""
],
[
"Rong",
"Yu",
""
],
[
"Niu",
"Gang",
""
],
[
"Huang",
"Junzhou",
""
],
[
"Sugiyama",
"Masashi",
""
]
] |
In adversarial training (AT), the main focus has been the objective and optimizer while the model has been less studied, so that the models being used are still those classic ones in standard training (ST). Classic network architectures (NAs) are generally worse than searched NAs in ST, which should be the same in AT. In this paper, we argue that NA and AT cannot be handled independently, since given a dataset, the optimal NA in ST would no longer be optimal in AT. That being said, AT is time-consuming itself; if we directly search NAs in AT over large search spaces, the computation will be practically infeasible. Thus, we propose a diverse-structured network (DS-Net), to significantly reduce the size of the search space: instead of low-level operations, we only consider predefined atomic blocks, where an atomic block is a time-tested building block like the residual block. There are only a few atomic blocks and thus we can weight all atomic blocks rather than find the best one in a searched block of DS-Net, which is an essential trade-off between exploring diverse structures and exploiting the best structures. Empirical results demonstrate the advantages of DS-Net, i.e., weighting the atomic blocks.
|
2010.15314
|
Drew Linsley
|
Drew Linsley, Junkyung Kim, Alekh Ashok, and Thomas Serre
|
Recurrent neural circuits for contour detection
|
Published in ICLR 2020
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a deep recurrent neural network architecture that approximates
visual cortical circuits. We show that this architecture, which we refer to as
the gamma-net, learns to solve contour detection tasks with better sample
efficiency than state-of-the-art feedforward networks, while also exhibiting a
classic perceptual illusion, known as the orientation-tilt illusion. Correcting
this illusion significantly reduces gamma-net contour detection accuracy by
driving it to prefer low-level edges over high-level object boundary contours.
Overall, our study suggests that the orientation-tilt illusion is a byproduct
of neural circuits that help biological visual systems achieve robust and
efficient contour detection, and that incorporating these circuits in
artificial neural networks can improve computer vision.
|
[
{
"created": "Thu, 29 Oct 2020 02:09:40 GMT",
"version": "v1"
}
] |
2020-10-30
|
[
[
"Linsley",
"Drew",
""
],
[
"Kim",
"Junkyung",
""
],
[
"Ashok",
"Alekh",
""
],
[
"Serre",
"Thomas",
""
]
] |
We introduce a deep recurrent neural network architecture that approximates visual cortical circuits. We show that this architecture, which we refer to as the gamma-net, learns to solve contour detection tasks with better sample efficiency than state-of-the-art feedforward networks, while also exhibiting a classic perceptual illusion, known as the orientation-tilt illusion. Correcting this illusion significantly reduces gamma-net contour detection accuracy by driving it to prefer low-level edges over high-level object boundary contours. Overall, our study suggests that the orientation-tilt illusion is a byproduct of neural circuits that help biological visual systems achieve robust and efficient contour detection, and that incorporating these circuits in artificial neural networks can improve computer vision.
|
1804.07453
|
Cuiling Lan
|
Pengfei Zhang, Cuiling Lan, Junliang Xing, Wenjun Zeng, Jianru Xue,
Nanning Zheng
|
View Adaptive Neural Networks for High Performance Skeleton-based Human
Action Recognition
|
Accepted in Transactions on Pattern Analysis and Machine Intelligence
(TPAMI)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Skeleton-based human action recognition has recently attracted increasing
attention thanks to the accessibility and the popularity of 3D skeleton data.
One of the key challenges in skeleton-based action recognition lies in the
large view variations when capturing data. In order to alleviate the effects of
view variations, this paper introduces a novel view adaptation scheme, which
automatically determines the virtual observation viewpoints in a
learning-based, data-driven manner. We design two view adaptive neural networks, i.e., VA-RNN
based on RNN, and VA-CNN based on CNN. For each network, a novel view
adaptation module learns and determines the most suitable observation
viewpoints, and transforms the skeletons to those viewpoints for the end-to-end
recognition with a main classification network. Ablation studies find that the
proposed view adaptive models are capable of transforming the skeletons of
various viewpoints to much more consistent virtual viewpoints which largely
eliminates the viewpoint influence. In addition, we design a two-stream scheme
(referred to as VA-fusion) that fuses the scores of the two networks to provide
the fused prediction. Extensive experimental evaluations on five challenging
benchmarks demonstrate the effectiveness of the proposed view-adaptive
networks and their superior performance over state-of-the-art approaches. The source
code is available at
https://github.com/microsoft/View-Adaptive-Neural-Networks-for-Skeleton-based-Human-Action-Recognition.
|
[
{
"created": "Fri, 20 Apr 2018 05:37:47 GMT",
"version": "v1"
},
{
"created": "Tue, 12 Feb 2019 03:55:23 GMT",
"version": "v2"
},
{
"created": "Tue, 3 Sep 2019 08:08:52 GMT",
"version": "v3"
}
] |
2019-09-04
|
[
[
"Zhang",
"Pengfei",
""
],
[
"Lan",
"Cuiling",
""
],
[
"Xing",
"Junliang",
""
],
[
"Zeng",
"Wenjun",
""
],
[
"Xue",
"Jianru",
""
],
[
"Zheng",
"Nanning",
""
]
] |
Skeleton-based human action recognition has recently attracted increasing attention thanks to the accessibility and the popularity of 3D skeleton data. One of the key challenges in skeleton-based action recognition lies in the large view variations when capturing data. In order to alleviate the effects of view variations, this paper introduces a novel view adaptation scheme, which automatically determines the virtual observation viewpoints in a learning-based, data-driven manner. We design two view adaptive neural networks, i.e., VA-RNN based on RNN, and VA-CNN based on CNN. For each network, a novel view adaptation module learns and determines the most suitable observation viewpoints, and transforms the skeletons to those viewpoints for the end-to-end recognition with a main classification network. Ablation studies find that the proposed view adaptive models are capable of transforming the skeletons of various viewpoints to much more consistent virtual viewpoints which largely eliminates the viewpoint influence. In addition, we design a two-stream scheme (referred to as VA-fusion) that fuses the scores of the two networks to provide the fused prediction. Extensive experimental evaluations on five challenging benchmarks demonstrate the effectiveness of the proposed view-adaptive networks and their superior performance over state-of-the-art approaches. The source code is available at https://github.com/microsoft/View-Adaptive-Neural-Networks-for-Skeleton-based-Human-Action-Recognition.
|
2402.02922
|
Farhad Pakdaman
|
Umut Cem Entok, Firas Laakom, Farhad Pakdaman, Moncef Gabbouj
|
Pixel-Wise Color Constancy via Smoothness Techniques in Multi-Illuminant
Scenes
|
Copyright 2024 IEEE - Submitted to IEEE ICIP 2024
| null | null | null |
cs.CV eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
Most scenes are illuminated by several light sources, where the traditional
assumption of uniform illumination is invalid. This issue is ignored in most
color constancy methods, primarily due to the complex spatial impact of
multiple light sources on the image. Moreover, most existing multi-illuminant
methods fail to preserve the smooth change of illumination, which stems from
spatial dependencies in natural images. Motivated by this, we propose a novel
multi-illuminant color constancy method, by learning pixel-wise illumination
maps caused by multiple light sources. The proposed method enforces smoothness
within neighboring pixels, by regularizing the training with the total
variation loss. Moreover, a bilateral filter is provisioned further to enhance
the natural appearance of the estimated images, while preserving the edges.
Additionally, we propose a label-smoothing technique that enables the model to
generalize well despite the uncertainties in ground truth. Quantitative and
qualitative experiments demonstrate that the proposed method outperforms the
state-of-the-art.
|
[
{
"created": "Mon, 5 Feb 2024 11:42:19 GMT",
"version": "v1"
}
] |
2024-02-06
|
[
[
"Entok",
"Umut Cem",
""
],
[
"Laakom",
"Firas",
""
],
[
"Pakdaman",
"Farhad",
""
],
[
"Gabbouj",
"Moncef",
""
]
] |
Most scenes are illuminated by several light sources, where the traditional assumption of uniform illumination is invalid. This issue is ignored in most color constancy methods, primarily due to the complex spatial impact of multiple light sources on the image. Moreover, most existing multi-illuminant methods fail to preserve the smooth change of illumination, which stems from spatial dependencies in natural images. Motivated by this, we propose a novel multi-illuminant color constancy method, by learning pixel-wise illumination maps caused by multiple light sources. The proposed method enforces smoothness within neighboring pixels, by regularizing the training with the total variation loss. Moreover, a bilateral filter is provisioned further to enhance the natural appearance of the estimated images, while preserving the edges. Additionally, we propose a label-smoothing technique that enables the model to generalize well despite the uncertainties in ground truth. Quantitative and qualitative experiments demonstrate that the proposed method outperforms the state-of-the-art.
|
2103.03773
|
Tim Barfoot
|
Timothy D Barfoot
|
A Geometric Algebra Solution to Wahba's Problem
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We retrace Davenport's solution to Wahba's classic problem of aligning two
pointclouds using the formalism of Geometric Algebra (GA). GA proves to be a
natural backdrop for this problem involving three-dimensional rotations due to
the isomorphism between unit-length quaternions and rotors. While the solution
to this problem is not a new result, it is hoped that its treatment in GA will
have tutorial value as well as open the door to addressing more complex
problems in a similar way.
|
[
{
"created": "Fri, 5 Mar 2021 16:01:30 GMT",
"version": "v1"
}
] |
2021-03-08
|
[
[
"Barfoot",
"Timothy D",
""
]
] |
We retrace Davenport's solution to Wahba's classic problem of aligning two pointclouds using the formalism of Geometric Algebra (GA). GA proves to be a natural backdrop for this problem involving three-dimensional rotations due to the isomorphism between unit-length quaternions and rotors. While the solution to this problem is not a new result, it is hoped that its treatment in GA will have tutorial value as well as open the door to addressing more complex problems in a similar way.
|
1204.1407
|
Xiaohu Xie
|
Stephen Breen, Xiao-Wen Chang
|
Column Reordering for Box-Constrained Integer Least Squares Problems
|
6 pages; IEEE GLOBECOM 2011
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The box-constrained integer least squares problem (BILS) arises in MIMO
wireless communications applications. Typically a sphere decoding algorithm (a
tree search algorithm) is used to solve the problem. In order to make the
search algorithm more efficient, the columns of the channel matrix in the BILS
problem have to be reordered. To our knowledge, there are currently two
algorithms for column reordering that provide the best known results. Both use
all available information, but they were derived respectively from geometric
and algebraic points of view and look different. In this paper we modify one to
make it more computationally efficient and easier to comprehend. Then we prove
the modified one and the other actually give the same column reordering in
theory. Finally we propose a new mathematically equivalent algorithm, which is
more computationally efficient and is still easy to understand.
|
[
{
"created": "Fri, 6 Apr 2012 04:50:09 GMT",
"version": "v1"
}
] |
2012-04-09
|
[
[
"Breen",
"Stephen",
""
],
[
"Chang",
"Xiao-Wen",
""
]
] |
The box-constrained integer least squares problem (BILS) arises in MIMO wireless communications applications. Typically a sphere decoding algorithm (a tree search algorithm) is used to solve the problem. In order to make the search algorithm more efficient, the columns of the channel matrix in the BILS problem have to be reordered. To our knowledge, there are currently two algorithms for column reordering that provide the best known results. Both use all available information, but they were derived respectively from geometric and algebraic points of view and look different. In this paper we modify one to make it more computationally efficient and easier to comprehend. Then we prove the modified one and the other actually give the same column reordering in theory. Finally we propose a new mathematically equivalent algorithm, which is more computationally efficient and is still easy to understand.
|
2010.06176
|
Yibo Yang
|
Yibo Yang, Hongyang Li, Shan You, Fei Wang, Chen Qian, Zhouchen Lin
|
ISTA-NAS: Efficient and Consistent Neural Architecture Search by Sparse
Coding
|
NeurIPS 2020
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neural architecture search (NAS) aims to produce the optimal sparse solution
from a high-dimensional space spanned by all candidate connections. Current
gradient-based NAS methods commonly ignore the constraint of sparsity in the
search phase, but project the optimized solution onto a sparse one by
post-processing. As a result, the dense super-net for search is inefficient to
train and has a gap with the projected architecture for evaluation. In this
paper, we formulate neural architecture search as a sparse coding problem. We
perform the differentiable search on a compressed lower-dimensional space that
has the same validation loss as the original sparse solution space, and recover
an architecture by solving the sparse coding problem. The differentiable search
and architecture recovery are optimized in an alternate manner. By doing so,
our network for search at each update satisfies the sparsity constraint and is
efficient to train. In order to also eliminate the depth and width gap between
the network in search and the target-net in evaluation, we further propose a
method to search and evaluate in one stage under the target-net settings. When
training finishes, architecture variables are absorbed into network weights.
Thus we get the searched architecture and optimized parameters in a single run.
In experiments, our two-stage method on CIFAR-10 requires only 0.05 GPU-day for
search. Our one-stage method produces state-of-the-art performances on both
CIFAR-10 and ImageNet at the cost of only evaluation time.
|
[
{
"created": "Tue, 13 Oct 2020 04:34:24 GMT",
"version": "v1"
}
] |
2020-10-14
|
[
[
"Yang",
"Yibo",
""
],
[
"Li",
"Hongyang",
""
],
[
"You",
"Shan",
""
],
[
"Wang",
"Fei",
""
],
[
"Qian",
"Chen",
""
],
[
"Lin",
"Zhouchen",
""
]
] |
Neural architecture search (NAS) aims to produce the optimal sparse solution from a high-dimensional space spanned by all candidate connections. Current gradient-based NAS methods commonly ignore the constraint of sparsity in the search phase, but project the optimized solution onto a sparse one by post-processing. As a result, the dense super-net for search is inefficient to train and has a gap with the projected architecture for evaluation. In this paper, we formulate neural architecture search as a sparse coding problem. We perform the differentiable search on a compressed lower-dimensional space that has the same validation loss as the original sparse solution space, and recover an architecture by solving the sparse coding problem. The differentiable search and architecture recovery are optimized in an alternate manner. By doing so, our network for search at each update satisfies the sparsity constraint and is efficient to train. In order to also eliminate the depth and width gap between the network in search and the target-net in evaluation, we further propose a method to search and evaluate in one stage under the target-net settings. When training finishes, architecture variables are absorbed into network weights. Thus we get the searched architecture and optimized parameters in a single run. In experiments, our two-stage method on CIFAR-10 requires only 0.05 GPU-day for search. Our one-stage method produces state-of-the-art performances on both CIFAR-10 and ImageNet at the cost of only evaluation time.
|
2206.12862
|
Reda Marzouk
|
Reda Marzouk, Colin de La Higuera
|
Marginal Inference queries in Hidden Markov Models under context-free
grammar constraints
| null | null | null | null |
cs.AI cs.FL
|
http://creativecommons.org/licenses/by/4.0/
|
The primary use of any probabilistic model involving a set of random
variables is to run inference and sampling queries on it. Inference queries in
classical probabilistic models are concerned with the computation of marginal
or conditional probabilities of events given as input. When the probabilistic
model is sequential, more sophisticated marginal inference queries involving
complex grammars may be of interest in fields such as computational linguistics
and NLP. In this work, we address the question of computing the likelihood of
context-free grammars (CFGs) in Hidden Markov Models (HMMs). We provide a
dynamic algorithm for the exact computation of the likelihood for the class of
unambiguous context-free grammars. We show that the problem is NP-Hard, even
with the promise that the input CFG has a degree of ambiguity less than or
equal to 2. We then propose a fully polynomial randomized approximation scheme
(FPRAS) algorithm to approximate the likelihood for the case of
polynomially-bounded ambiguous CFGs.
|
[
{
"created": "Sun, 26 Jun 2022 12:44:18 GMT",
"version": "v1"
}
] |
2022-06-28
|
[
[
"Marzouk",
"Reda",
""
],
[
"de La Higuera",
"Colin",
""
]
] |
The primary use of any probabilistic model involving a set of random variables is to run inference and sampling queries on it. Inference queries in classical probabilistic models are concerned with the computation of marginal or conditional probabilities of events given as input. When the probabilistic model is sequential, more sophisticated marginal inference queries involving complex grammars may be of interest in fields such as computational linguistics and NLP. In this work, we address the question of computing the likelihood of context-free grammars (CFGs) in Hidden Markov Models (HMMs). We provide a dynamic algorithm for the exact computation of the likelihood for the class of unambiguous context-free grammars. We show that the problem is NP-Hard, even with the promise that the input CFG has a degree of ambiguity less than or equal to 2. We then propose a fully polynomial randomized approximation scheme (FPRAS) algorithm to approximate the likelihood for the case of polynomially-bounded ambiguous CFGs.
|
1810.05818
|
David Mateo
|
Jabez Leong Kit and David Mateo and Roland Bouffanais
|
A Decentralized Mobile Computing Network for Multi-Robot Systems
Operations
|
Accepted for Publication in Proc. 9th IEEE Annual Ubiquitous
Computing, Electronics & Mobile Communication Conference
|
UEMCON 9 (2018) 309-314
|
10.1109/UEMCON.2018.8796753
| null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Collective animal behaviors are paradigmatic examples of fully decentralized
operations involving complex collective computations such as collective turns
in flocks of birds or collective harvesting by ants. These systems offer a
unique source of inspiration for the development of fault-tolerant and
self-healing multi-robot systems capable of operating in dynamic environments.
Specifically, swarm robotics emerged and is significantly growing on these
premises. However, to date, most swarm robotics systems reported in the
literature involve basic computational tasks---averages and other algebraic
operations. In this paper, we introduce a novel Collective computing framework
based on the swarming paradigm, which exhibits the key innate features of
swarms: robustness, scalability and flexibility. Unlike Edge computing, the
proposed Collective computing framework is truly decentralized and does not
require user intervention or additional servers to sustain its operations. This
Collective computing framework is applied to the complex task of collective
mapping, in which multiple robots aim to cooperatively map a large area. Our
results confirm the effectiveness of the cooperative strategy, its robustness
to the loss of multiple units, as well as its scalability. Furthermore, the
topology of the interconnecting network is found to greatly influence the
performance of the collective action.
|
[
{
"created": "Sat, 13 Oct 2018 08:33:26 GMT",
"version": "v1"
}
] |
2022-09-28
|
[
[
"Kit",
"Jabez Leong",
""
],
[
"Mateo",
"David",
""
],
[
"Bouffanais",
"Roland",
""
]
] |
Collective animal behaviors are paradigmatic examples of fully decentralized operations involving complex collective computations such as collective turns in flocks of birds or collective harvesting by ants. These systems offer a unique source of inspiration for the development of fault-tolerant and self-healing multi-robot systems capable of operating in dynamic environments. Specifically, swarm robotics emerged and is significantly growing on these premises. However, to date, most swarm robotics systems reported in the literature involve basic computational tasks---averages and other algebraic operations. In this paper, we introduce a novel Collective computing framework based on the swarming paradigm, which exhibits the key innate features of swarms: robustness, scalability and flexibility. Unlike Edge computing, the proposed Collective computing framework is truly decentralized and does not require user intervention or additional servers to sustain its operations. This Collective computing framework is applied to the complex task of collective mapping, in which multiple robots aim to cooperatively map a large area. Our results confirm the effectiveness of the cooperative strategy, its robustness to the loss of multiple units, as well as its scalability. Furthermore, the topology of the interconnecting network is found to greatly influence the performance of the collective action.
|
1609.00321
|
Fabian Lipp
|
Thomas C. van Dijk and Martin Fink and Norbert Fischer and Fabian Lipp
and Peter Markfelder and Alexander Ravsky and Subhash Suri and Alexander
Wolff
|
Block Crossings in Storyline Visualizations
|
Appears in the Proceedings of the 24th International Symposium on
Graph Drawing and Network Visualization (GD 2016)
| null | null | null |
cs.CG cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Storyline visualizations help visualize encounters of the characters in a
story over time. Each character is represented by an x-monotone curve that goes
from left to right. A meeting is represented by having the characters that
participate in the meeting run close together for some time. In order to keep
the visual complexity low, rather than just minimizing pairwise crossings of
curves, we propose to count block crossings, that is, pairs of intersecting
bundles of lines.
Our main results are as follows. We show that minimizing the number of block
crossings is NP-hard, and we develop, for meetings of bounded size, a
constant-factor approximation. We also present two fixed-parameter algorithms
and, for meetings of size 2, a greedy heuristic that we evaluate
experimentally.
|
[
{
"created": "Thu, 1 Sep 2016 17:07:50 GMT",
"version": "v1"
}
] |
2016-09-02
|
[
[
"van Dijk",
"Thomas C.",
""
],
[
"Fink",
"Martin",
""
],
[
"Fischer",
"Norbert",
""
],
[
"Lipp",
"Fabian",
""
],
[
"Markfelder",
"Peter",
""
],
[
"Ravsky",
"Alexander",
""
],
[
"Suri",
"Subhash",
""
],
[
"Wolff",
"Alexander",
""
]
] |
Storyline visualizations help visualize encounters of the characters in a story over time. Each character is represented by an x-monotone curve that goes from left to right. A meeting is represented by having the characters that participate in the meeting run close together for some time. In order to keep the visual complexity low, rather than just minimizing pairwise crossings of curves, we propose to count block crossings, that is, pairs of intersecting bundles of lines. Our main results are as follows. We show that minimizing the number of block crossings is NP-hard, and we develop, for meetings of bounded size, a constant-factor approximation. We also present two fixed-parameter algorithms and, for meetings of size 2, a greedy heuristic that we evaluate experimentally.
|
2112.00424
|
Alberto Castagna
|
Alberto Castagna and Ivana Dusparic
|
Multi-Agent Transfer Learning in Reinforcement Learning-Based
Ride-Sharing Systems
| null | null | null | null |
cs.LG cs.AI cs.MA
|
http://creativecommons.org/licenses/by/4.0/
|
Reinforcement learning (RL) has been used in a range of simulated real-world
tasks, e.g., sensor coordination, traffic light control, and on-demand mobility
services. However, real-world deployments are rare, as RL struggles with the
dynamic nature of real-world environments, requiring time for learning a task
and adapting to changes in the environment. Transfer Learning (TL) can help
lower these adaptation times. In particular, there is a significant potential
of applying TL in multi-agent RL systems, where multiple agents can share
knowledge with each other, as well as with new agents that join the system. To
obtain the most from inter-agent transfer, transfer roles (i.e., determining
which agents act as sources and which as targets), as well as relevant transfer
content parameters (e.g., transfer size) should be selected dynamically in each
particular situation. As a first step towards fully dynamic transfers, in this
paper we investigate the impact of TL transfer parameters with fixed source and
target roles. Specifically, we label every agent-environment interaction with
agent's epistemic confidence, and we filter the shared examples using varying
threshold levels and sample sizes. We investigate the impact of these parameters
two scenarios, a standard predator-prey RL benchmark and a simulation of a
ride-sharing system with 200 vehicle agents and 10,000 ride-requests.
|
[
{
"created": "Wed, 1 Dec 2021 11:23:40 GMT",
"version": "v1"
}
] |
2021-12-02
|
[
[
"Castagna",
"Alberto",
""
],
[
"Dusparic",
"Ivana",
""
]
] |
Reinforcement learning (RL) has been used in a range of simulated real-world tasks, e.g., sensor coordination, traffic light control, and on-demand mobility services. However, real-world deployments are rare, as RL struggles with the dynamic nature of real-world environments, requiring time for learning a task and adapting to changes in the environment. Transfer Learning (TL) can help lower these adaptation times. In particular, there is a significant potential of applying TL in multi-agent RL systems, where multiple agents can share knowledge with each other, as well as with new agents that join the system. To obtain the most from inter-agent transfer, transfer roles (i.e., determining which agents act as sources and which as targets), as well as relevant transfer content parameters (e.g., transfer size) should be selected dynamically in each particular situation. As a first step towards fully dynamic transfers, in this paper we investigate the impact of TL transfer parameters with fixed source and target roles. Specifically, we label every agent-environment interaction with agent's epistemic confidence, and we filter the shared examples using varying threshold levels and sample sizes. We investigate the impact of these parameters in two scenarios, a standard predator-prey RL benchmark and a simulation of a ride-sharing system with 200 vehicle agents and 10,000 ride-requests.
|
1711.11479
|
Thomas Lucas
|
Thomas Lucas and Jakob Verbeek
|
Auxiliary Guided Autoregressive Variational Autoencoders
|
Published as a conference paper at ECML-PKDD 2018
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Generative modeling of high-dimensional data is a key problem in machine
learning. Successful approaches include latent variable models and
autoregressive models. The complementary strengths of these approaches, to
model global and local image statistics respectively, suggest hybrid models
that encode global image structure into latent variables while autoregressively
modeling low level detail. Previous approaches to such hybrid models restrict
the capacity of the autoregressive decoder to prevent degenerate models that
ignore the latent variables and only rely on autoregressive modeling. Our
contribution is a training procedure relying on an auxiliary loss function that
controls which information is captured by the latent variables and what is left
to the autoregressive decoder. Our approach can leverage arbitrarily powerful
autoregressive decoders, achieves state-of-the-art quantitative performance
among models with latent variables, and generates qualitatively convincing
samples.
|
[
{
"created": "Thu, 30 Nov 2017 15:57:24 GMT",
"version": "v1"
},
{
"created": "Thu, 18 Apr 2019 06:49:26 GMT",
"version": "v2"
}
] |
2019-04-19
|
[
[
"Lucas",
"Thomas",
""
],
[
"Verbeek",
"Jakob",
""
]
] |
Generative modeling of high-dimensional data is a key problem in machine learning. Successful approaches include latent variable models and autoregressive models. The complementary strengths of these approaches, to model global and local image statistics respectively, suggest hybrid models that encode global image structure into latent variables while autoregressively modeling low level detail. Previous approaches to such hybrid models restrict the capacity of the autoregressive decoder to prevent degenerate models that ignore the latent variables and only rely on autoregressive modeling. Our contribution is a training procedure relying on an auxiliary loss function that controls which information is captured by the latent variables and what is left to the autoregressive decoder. Our approach can leverage arbitrarily powerful autoregressive decoders, achieves state-of-the-art quantitative performance among models with latent variables, and generates qualitatively convincing samples.
|
1703.04394
|
Yongqin Xian
|
Yongqin Xian, Bernt Schiele, Zeynep Akata
|
Zero-Shot Learning -- The Good, the Bad and the Ugly
|
Accepted as a full paper in IEEE Computer Vision and Pattern
Recognition (CVPR) 2017. We introduce Proposed Split Version 2.0 (Please
download it from the project page)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Due to the importance of zero-shot learning, the number of proposed
approaches has increased steadily recently. We argue that it is time to take a
step back and to analyze the status quo of the area. The purpose of this paper
is three-fold. First, given that there is no agreed-upon zero-shot
learning benchmark, we define a new benchmark by unifying both the
evaluation protocols and data splits. This is an important contribution as
published results are often not comparable and sometimes even flawed due to,
e.g. pre-training on zero-shot test classes. Second, we compare and analyze a
significant number of the state-of-the-art methods in depth, both in the
classic zero-shot setting but also in the more realistic generalized zero-shot
setting. Finally, we discuss limitations of the current status of the area
which can be taken as a basis for advancing it.
|
[
{
"created": "Mon, 13 Mar 2017 13:53:19 GMT",
"version": "v1"
},
{
"created": "Wed, 23 Sep 2020 14:55:12 GMT",
"version": "v2"
}
] |
2020-09-24
|
[
[
"Xian",
"Yongqin",
""
],
[
"Schiele",
"Bernt",
""
],
[
"Akata",
"Zeynep",
""
]
] |
Due to the importance of zero-shot learning, the number of proposed approaches has increased steadily recently. We argue that it is time to take a step back and to analyze the status quo of the area. The purpose of this paper is three-fold. First, given that there is no agreed-upon zero-shot learning benchmark, we define a new benchmark by unifying both the evaluation protocols and data splits. This is an important contribution as published results are often not comparable and sometimes even flawed due to, e.g. pre-training on zero-shot test classes. Second, we compare and analyze a significant number of the state-of-the-art methods in depth, both in the classic zero-shot setting but also in the more realistic generalized zero-shot setting. Finally, we discuss limitations of the current status of the area which can be taken as a basis for advancing it.
|
1902.10710
|
Vitaly Feldman
|
Vitaly Feldman and Jan Vondrak
|
High probability generalization bounds for uniformly stable algorithms
with nearly optimal rate
|
this is a follow-up to and has minor text overlap with
arXiv:1812.09859; v2: minor revision following acceptance for presentation at
COLT 2019
| null | null | null |
cs.LG cs.DS stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Algorithmic stability is a classical approach to understanding and analysis
of the generalization error of learning algorithms. A notable weakness of most
stability-based generalization bounds is that they hold only in expectation.
Generalization with high probability has been established in a landmark paper
of Bousquet and Elisseeff (2002) albeit at the expense of an additional
$\sqrt{n}$ factor in the bound. Specifically, their bound on the estimation
error of any $\gamma$-uniformly stable learning algorithm on $n$ samples and
range in $[0,1]$ is $O(\gamma \sqrt{n \log(1/\delta)} +
\sqrt{\log(1/\delta)/n})$ with probability $\geq 1-\delta$. The $\sqrt{n}$
overhead makes the bound vacuous in the common settings where $\gamma \geq
1/\sqrt{n}$. A stronger bound was recently proved by the authors (Feldman and
Vondrak, 2018) that reduces the overhead to at most $O(n^{1/4})$. Still, both
of these results give optimal generalization bounds only when $\gamma =
O(1/n)$.
We prove a nearly tight bound of $O(\gamma \log(n)\log(n/\delta) +
\sqrt{\log(1/\delta)/n})$ on the estimation error of any $\gamma$-uniformly
stable algorithm. It implies that for algorithms that are uniformly stable with
$\gamma = O(1/\sqrt{n})$, estimation error is essentially the same as the
sampling error. Our result leads to the first high-probability generalization
bounds for multi-pass stochastic gradient descent and regularized ERM for
stochastic convex problems with nearly optimal rate --- resolving open problems
in prior work. Our proof technique is new and we introduce several analysis
tools that might find additional applications.
|
[
{
"created": "Wed, 27 Feb 2019 18:50:28 GMT",
"version": "v1"
},
{
"created": "Sun, 23 Jun 2019 05:48:35 GMT",
"version": "v2"
}
] |
2019-06-25
|
[
[
"Feldman",
"Vitaly",
""
],
[
"Vondrak",
"Jan",
""
]
] |
Algorithmic stability is a classical approach to understanding and analysis of the generalization error of learning algorithms. A notable weakness of most stability-based generalization bounds is that they hold only in expectation. Generalization with high probability has been established in a landmark paper of Bousquet and Elisseeff (2002) albeit at the expense of an additional $\sqrt{n}$ factor in the bound. Specifically, their bound on the estimation error of any $\gamma$-uniformly stable learning algorithm on $n$ samples and range in $[0,1]$ is $O(\gamma \sqrt{n \log(1/\delta)} + \sqrt{\log(1/\delta)/n})$ with probability $\geq 1-\delta$. The $\sqrt{n}$ overhead makes the bound vacuous in the common settings where $\gamma \geq 1/\sqrt{n}$. A stronger bound was recently proved by the authors (Feldman and Vondrak, 2018) that reduces the overhead to at most $O(n^{1/4})$. Still, both of these results give optimal generalization bounds only when $\gamma = O(1/n)$. We prove a nearly tight bound of $O(\gamma \log(n)\log(n/\delta) + \sqrt{\log(1/\delta)/n})$ on the estimation error of any $\gamma$-uniformly stable algorithm. It implies that for algorithms that are uniformly stable with $\gamma = O(1/\sqrt{n})$, estimation error is essentially the same as the sampling error. Our result leads to the first high-probability generalization bounds for multi-pass stochastic gradient descent and regularized ERM for stochastic convex problems with nearly optimal rate --- resolving open problems in prior work. Our proof technique is new and we introduce several analysis tools that might find additional applications.
|
2403.08455
|
Novel Certad
|
Novel Certad, Enrico del Re, Helena Kornd\"orfer, Gregory Schr\"oder,
Walter Morales-Alvarez, Sebastian Tschernuth, Delgermaa Gankhuyag, Luigi del
Re and Cristina Olaverri-Monreal
|
IAMCV Multi-Scenario Vehicle Interaction Dataset
| null | null | null | null |
cs.RO cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The acquisition and analysis of high-quality sensor data constitute an
essential requirement in shaping the development of fully autonomous driving
systems. This process is indispensable for enhancing road safety and ensuring
the effectiveness of the technological advancements in the automotive industry.
This study introduces the Interaction of Autonomous and Manually-Controlled
Vehicles (IAMCV) dataset, a novel and extensive dataset focused on
inter-vehicle interactions. The dataset, enriched with a sophisticated array of
sensors such as Light Detection and Ranging, cameras, Inertial Measurement
Unit/Global Positioning System, and vehicle bus data acquisition, provides a
comprehensive representation of real-world driving scenarios that include
roundabouts, intersections, country roads, and highways, recorded across
diverse locations in Germany. Furthermore, the study shows the versatility of
the IAMCV dataset through several proof-of-concept use cases. Firstly, an
unsupervised trajectory clustering algorithm illustrates the dataset's
capability in categorizing vehicle movements without the need for labeled
training data. Secondly, we compare an online camera calibration method with
the Robot Operating System-based standard, using images captured in the
dataset. Finally, a preliminary test employing the YOLOv8 object-detection
model is conducted, augmented by reflections on the transferability of object
detection across various LIDAR resolutions. These use cases underscore the
practical utility of the collected dataset, emphasizing its potential to
advance research and innovation in the area of intelligent vehicles.
|
[
{
"created": "Wed, 13 Mar 2024 12:09:44 GMT",
"version": "v1"
}
] |
2024-03-14
|
[
[
"Certad",
"Novel",
""
],
[
"del Re",
"Enrico",
""
],
[
"Korndörfer",
"Helena",
""
],
[
"Schröder",
"Gregory",
""
],
[
"Morales-Alvarez",
"Walter",
""
],
[
"Tschernuth",
"Sebastian",
""
],
[
"Gankhuyag",
"Delgermaa",
""
],
[
"del Re",
"Luigi",
""
],
[
"Olaverri-Monreal",
"Cristina",
""
]
] |
The acquisition and analysis of high-quality sensor data constitute an essential requirement in shaping the development of fully autonomous driving systems. This process is indispensable for enhancing road safety and ensuring the effectiveness of the technological advancements in the automotive industry. This study introduces the Interaction of Autonomous and Manually-Controlled Vehicles (IAMCV) dataset, a novel and extensive dataset focused on inter-vehicle interactions. The dataset, enriched with a sophisticated array of sensors such as Light Detection and Ranging, cameras, Inertial Measurement Unit/Global Positioning System, and vehicle bus data acquisition, provides a comprehensive representation of real-world driving scenarios that include roundabouts, intersections, country roads, and highways, recorded across diverse locations in Germany. Furthermore, the study shows the versatility of the IAMCV dataset through several proof-of-concept use cases. Firstly, an unsupervised trajectory clustering algorithm illustrates the dataset's capability in categorizing vehicle movements without the need for labeled training data. Secondly, we compare an online camera calibration method with the Robot Operating System-based standard, using images captured in the dataset. Finally, a preliminary test employing the YOLOv8 object-detection model is conducted, augmented by reflections on the transferability of object detection across various LIDAR resolutions. These use cases underscore the practical utility of the collected dataset, emphasizing its potential to advance research and innovation in the area of intelligent vehicles.
|
2311.17812
|
Ting Liu
|
Ting Liu, Yue Hu, Wansen Wu, Youkai Wang, Kai Xu, Quanjun Yin
|
DAP: Domain-aware Prompt Learning for Vision-and-Language Navigation
|
4 pages. arXiv admin note: substantial text overlap with
arXiv:2309.03661
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Following language instructions to navigate in unseen environments is a
challenging task for autonomous embodied agents. With strong representation
capabilities, pretrained vision-and-language models are widely used in VLN.
However, most of them are trained on web-crawled general-purpose datasets,
which incurs a considerable domain gap when used for VLN tasks. To address the
problem, we propose a novel and model-agnostic domain-aware prompt learning
(DAP) framework. For equipping the pretrained models with specific object-level
and scene-level cross-modal alignment in VLN tasks, DAP applies a low-cost
prompt tuning paradigm to learn soft visual prompts for extracting in-domain
image semantics. Specifically, we first generate a set of in-domain image-text
pairs with the help of the CLIP model. Then we introduce soft visual prompts in
the input space of the visual encoder in a pretrained model. DAP injects
in-domain visual knowledge into the visual encoder of the pretrained model in
an efficient way. Experimental results on both R2R and REVERIE show the
superiority of DAP compared to existing state-of-the-art methods.
|
[
{
"created": "Wed, 29 Nov 2023 17:03:37 GMT",
"version": "v1"
},
{
"created": "Thu, 30 Nov 2023 11:28:59 GMT",
"version": "v2"
},
{
"created": "Thu, 28 Dec 2023 13:59:45 GMT",
"version": "v3"
},
{
"created": "Fri, 29 Dec 2023 02:30:52 GMT",
"version": "v4"
}
] |
2024-01-01
|
[
[
"Liu",
"Ting",
""
],
[
"Hu",
"Yue",
""
],
[
"Wu",
"Wansen",
""
],
[
"Wang",
"Youkai",
""
],
[
"Xu",
"Kai",
""
],
[
"Yin",
"Quanjun",
""
]
] |
Following language instructions to navigate in unseen environments is a challenging task for autonomous embodied agents. With strong representation capabilities, pretrained vision-and-language models are widely used in VLN. However, most of them are trained on web-crawled general-purpose datasets, which incurs a considerable domain gap when used for VLN tasks. To address the problem, we propose a novel and model-agnostic domain-aware prompt learning (DAP) framework. For equipping the pretrained models with specific object-level and scene-level cross-modal alignment in VLN tasks, DAP applies a low-cost prompt tuning paradigm to learn soft visual prompts for extracting in-domain image semantics. Specifically, we first generate a set of in-domain image-text pairs with the help of the CLIP model. Then we introduce soft visual prompts in the input space of the visual encoder in a pretrained model. DAP injects in-domain visual knowledge into the visual encoder of the pretrained model in an efficient way. Experimental results on both R2R and REVERIE show the superiority of DAP compared to existing state-of-the-art methods.
|
1410.7452
|
Varun Jampani
|
Varun Jampani, S. M. Ali Eslami, Daniel Tarlow, Pushmeet Kohli and
John Winn
|
Consensus Message Passing for Layered Graphical Models
|
Appearing in Proceedings of the 18th International Conference on
Artificial Intelligence and Statistics (AISTATS) 2015
| null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-sa/3.0/
|
Generative models provide a powerful framework for probabilistic reasoning.
However, in many domains their use has been hampered by the practical
difficulties of inference. This is particularly the case in computer vision,
where models of the imaging process tend to be large, loopy and layered. For
this reason bottom-up conditional models have traditionally dominated in such
domains. We find that widely-used, general-purpose message passing inference
algorithms such as Expectation Propagation (EP) and Variational Message Passing
(VMP) fail on the simplest of vision models. With these models in mind, we
introduce a modification to message passing that learns to exploit their
layered structure by passing 'consensus' messages that guide inference towards
good solutions. Experiments on a variety of problems show that the proposed
technique leads to significantly more accurate inference results, not only when
compared to standard EP and VMP, but also when compared to competitive
bottom-up conditional models.
|
[
{
"created": "Mon, 27 Oct 2014 22:40:52 GMT",
"version": "v1"
},
{
"created": "Mon, 26 Jan 2015 21:36:36 GMT",
"version": "v2"
}
] |
2015-01-28
|
[
[
"Jampani",
"Varun",
""
],
[
"Eslami",
"S. M. Ali",
""
],
[
"Tarlow",
"Daniel",
""
],
[
"Kohli",
"Pushmeet",
""
],
[
"Winn",
"John",
""
]
] |
Generative models provide a powerful framework for probabilistic reasoning. However, in many domains their use has been hampered by the practical difficulties of inference. This is particularly the case in computer vision, where models of the imaging process tend to be large, loopy and layered. For this reason bottom-up conditional models have traditionally dominated in such domains. We find that widely-used, general-purpose message passing inference algorithms such as Expectation Propagation (EP) and Variational Message Passing (VMP) fail on the simplest of vision models. With these models in mind, we introduce a modification to message passing that learns to exploit their layered structure by passing 'consensus' messages that guide inference towards good solutions. Experiments on a variety of problems show that the proposed technique leads to significantly more accurate inference results, not only when compared to standard EP and VMP, but also when compared to competitive bottom-up conditional models.
|
1509.04956
|
J. Miguel Diaz-Banez
|
Francisco G\'omez, Joaqu\'in Mora, Emilia G\'omez, Jos\'e Miguel
D\'iaz-B\'a\~nez
|
Melodic Contour and Mid-Level Global Features Applied to the Analysis of
Flamenco Cantes
| null | null | null | null |
cs.SD cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work focuses on the topic of melodic characterization and similarity in
a specific musical repertoire: a cappella flamenco singing, more specifically
in debla and martinete styles. We propose the combination of manual and
automatic description. First, we use a state-of-the-art automatic transcription
method to account for general melodic similarity from music recordings. Second,
we define a specific set of representative mid-level melodic features, which
are manually labeled by flamenco experts. Both approaches are then contrasted
and combined into a global similarity measure. This similarity measure is
assessed by inspecting the clusters obtained through phylogenetic algorithms
and by relating similarity to categorization in terms of style.
Finally, we discuss the advantage of combining automatic and expert annotations
as well as the need to include repertoire-specific descriptions for meaningful
melodic characterization in traditional music collections.
|
[
{
"created": "Wed, 16 Sep 2015 15:56:22 GMT",
"version": "v1"
}
] |
2015-09-17
|
[
[
"Gómez",
"Francisco",
""
],
[
"Mora",
"Joaquín",
""
],
[
"Gómez",
"Emilia",
""
],
[
"Díaz-Báñez",
"José Miguel",
""
]
] |
This work focuses on the topic of melodic characterization and similarity in a specific musical repertoire: a cappella flamenco singing, more specifically in debla and martinete styles. We propose the combination of manual and automatic description. First, we use a state-of-the-art automatic transcription method to account for general melodic similarity from music recordings. Second, we define a specific set of representative mid-level melodic features, which are manually labeled by flamenco experts. Both approaches are then contrasted and combined into a global similarity measure. This similarity measure is assessed by inspecting the clusters obtained through phylogenetic algorithms and by relating similarity to categorization in terms of style. Finally, we discuss the advantage of combining automatic and expert annotations as well as the need to include repertoire-specific descriptions for meaningful melodic characterization in traditional music collections.
|
1210.1480
|
George Kampis
|
Frank van Harmelen and George Kampis and Katy Borner and Peter van den
Besselaar and Erik Schultes and Carole Goble and Paul Groth and Barend Mons
and Stuart Anderson and Stefan Decker and Conor Hayes and Thierry Buecheler
and Dirk Helbing
|
Theoretical And Technological Building Blocks For An Innovation
Accelerator
| null | null |
10.1140/epjst/e2012-01692-1
| null |
cs.DL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The scientific system that we use today was devised centuries ago and is
inadequate for our current ICT-based society: the peer review system encourages
conservatism, journal publications are monolithic and slow, data is often not
available to other scientists, and the independent validation of results is
limited. Building on the Innovation Accelerator paper by Helbing and Balietti
(2011) this paper takes the initial global vision and reviews the theoretical
and technological building blocks that can be used for implementing an
innovation (in first place: science) accelerator platform driven by
re-imagining the science system. The envisioned platform would rest on four
pillars: (i) Redesign the incentive scheme to reduce behavior such as
conservatism, herding and hyping; (ii) Advance scientific publications by
breaking up the monolithic paper unit and introducing other building blocks
such as data, tools, experiment workflows, resources; (iii) Use machine
readable semantics for publications, debate structures, provenance etc. in
order to include the computer as a partner in the scientific process, and (iv)
Build an online platform for collaboration, including a network of trust and
reputation among the different types of stakeholders in the scientific system:
scientists, educators, funding agencies, policy makers, students and industrial
innovators among others. Any such improvements to the scientific system must
support the entire scientific process (unlike current tools that chop up the
scientific process into disconnected pieces), must facilitate and encourage
collaboration and interdisciplinarity (again unlike current tools), must
facilitate the inclusion of intelligent computing in the scientific process,
must facilitate not only the core scientific process, but also accommodate
other stakeholders such as science policy makers, industrial innovators, and the
general public.
|
[
{
"created": "Thu, 4 Oct 2012 15:08:28 GMT",
"version": "v1"
}
] |
2015-06-11
|
[
[
"van Harmelen",
"Frank",
""
],
[
"Kampis",
"George",
""
],
[
"Borner",
"Katy",
""
],
[
"Besselaar",
"Peter van den",
""
],
[
"Schultes",
"Erik",
""
],
[
"Goble",
"Carole",
""
],
[
"Groth",
"Paul",
""
],
[
"Mons",
"Barend",
""
],
[
"Anderson",
"Stuart",
""
],
[
"Decker",
"Stefan",
""
],
[
"Hayes",
"Conor",
""
],
[
"Buecheler",
"Thierry",
""
],
[
"Helbing",
"Dirk",
""
]
] |
The scientific system that we use today was devised centuries ago and is inadequate for our current ICT-based society: the peer review system encourages conservatism, journal publications are monolithic and slow, data is often not available to other scientists, and the independent validation of results is limited. Building on the Innovation Accelerator paper by Helbing and Balietti (2011) this paper takes the initial global vision and reviews the theoretical and technological building blocks that can be used for implementing an innovation (in first place: science) accelerator platform driven by re-imagining the science system. The envisioned platform would rest on four pillars: (i) Redesign the incentive scheme to reduce behavior such as conservatism, herding and hyping; (ii) Advance scientific publications by breaking up the monolithic paper unit and introducing other building blocks such as data, tools, experiment workflows, resources; (iii) Use machine readable semantics for publications, debate structures, provenance etc. in order to include the computer as a partner in the scientific process, and (iv) Build an online platform for collaboration, including a network of trust and reputation among the different types of stakeholders in the scientific system: scientists, educators, funding agencies, policy makers, students and industrial innovators among others. Any such improvements to the scientific system must support the entire scientific process (unlike current tools that chop up the scientific process into disconnected pieces), must facilitate and encourage collaboration and interdisciplinarity (again unlike current tools), must facilitate the inclusion of intelligent computing in the scientific process, must facilitate not only the core scientific process, but also accommodate other stakeholders such as science policy makers, industrial innovators, and the general public.
|
2309.10740
|
Yatong Bai
|
Yatong Bai, Trung Dang, Dung Tran, Kazuhito Koishida, Somayeh Sojoudi
|
ConsistencyTTA: Accelerating Diffusion-Based Text-to-Audio Generation
with Consistency Distillation
| null | null | null | null |
cs.SD cs.LG cs.MM eess.AS
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Diffusion models are instrumental in text-to-audio (TTA) generation.
Unfortunately, they suffer from slow inference due to an excessive number of
queries to the underlying denoising network per generation. To address this
bottleneck, we introduce ConsistencyTTA, a framework requiring only a single
non-autoregressive network query, thereby accelerating TTA by hundreds of
times. We achieve this by proposing "CFG-aware latent consistency model," which
adapts consistency generation into a latent space and incorporates
classifier-free guidance (CFG) into model training. Moreover, unlike diffusion
models, ConsistencyTTA can be finetuned closed-loop with audio-space text-aware
metrics, such as CLAP score, to further enhance the generations. Our objective
and subjective evaluation on the AudioCaps dataset shows that compared to
diffusion-based counterparts, ConsistencyTTA reduces inference computation by
400x while retaining generation quality and diversity.
|
[
{
"created": "Tue, 19 Sep 2023 16:36:33 GMT",
"version": "v1"
},
{
"created": "Thu, 6 Jun 2024 01:32:00 GMT",
"version": "v2"
},
{
"created": "Mon, 24 Jun 2024 06:51:55 GMT",
"version": "v3"
}
] |
2024-06-25
|
[
[
"Bai",
"Yatong",
""
],
[
"Dang",
"Trung",
""
],
[
"Tran",
"Dung",
""
],
[
"Koishida",
"Kazuhito",
""
],
[
"Sojoudi",
"Somayeh",
""
]
] |
Diffusion models are instrumental in text-to-audio (TTA) generation. Unfortunately, they suffer from slow inference due to an excessive number of queries to the underlying denoising network per generation. To address this bottleneck, we introduce ConsistencyTTA, a framework requiring only a single non-autoregressive network query, thereby accelerating TTA by hundreds of times. We achieve this by proposing "CFG-aware latent consistency model," which adapts consistency generation into a latent space and incorporates classifier-free guidance (CFG) into model training. Moreover, unlike diffusion models, ConsistencyTTA can be finetuned closed-loop with audio-space text-aware metrics, such as CLAP score, to further enhance the generations. Our objective and subjective evaluation on the AudioCaps dataset shows that compared to diffusion-based counterparts, ConsistencyTTA reduces inference computation by 400x while retaining generation quality and diversity.
|
2209.12995
|
Jenni Raitoharju
|
Mikko Impi\"o, Pekka H\"arm\"a, Anna Tammilehto, Saku Anttila, Jenni
Raitoharju
|
Habitat classification from satellite observations with sparse
annotations
|
54 pages, 16 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Remote sensing benefits habitat conservation by making monitoring of large
areas easier compared to field surveying especially if the remote sensed data
can be automatically analyzed. An important aspect of monitoring is classifying
and mapping habitat types present in the monitored area. Automatic
classification is a difficult task, as classes have fine-grained differences
and their distributions are long-tailed and unbalanced. Usually training data
used for automatic land cover classification relies on fully annotated
segmentation maps, annotated from remote sensed imagery to a fairly high-level
taxonomy, i.e., classes such as forest, farmland, or urban area. A challenge
with automatic habitat classification is that reliable data annotation requires
field-surveys. Therefore, full segmentation maps are expensive to produce, and
training data is often sparse, point-like, and limited to areas accessible by
foot. Methods for utilizing these limited data more efficiently are needed.
We address these problems by proposing a method for habitat classification
and mapping, and apply this method to classify the entire northern Finnish
Lapland area into Natura2000 classes. The method is characterized by using
finely-grained, sparse, single-pixel annotations collected from the field,
combined with large amounts of unannotated data to produce segmentation maps.
Supervised, unsupervised and semi-supervised methods are compared, and the
benefits of transfer learning from a larger out-of-domain dataset are
demonstrated. We propose a CNN biased towards center pixel classification,
ensembled with a random forest classifier, which produces higher-quality
classifications than either model alone. We show that cropping
augmentations, test-time augmentation and semi-supervised learning can help
classification even further.
|
[
{
"created": "Mon, 26 Sep 2022 20:14:59 GMT",
"version": "v1"
}
] |
2022-09-28
|
[
[
"Impiö",
"Mikko",
""
],
[
"Härmä",
"Pekka",
""
],
[
"Tammilehto",
"Anna",
""
],
[
"Anttila",
"Saku",
""
],
[
"Raitoharju",
"Jenni",
""
]
] |
Remote sensing benefits habitat conservation by making monitoring of large areas easier compared to field surveying especially if the remote sensed data can be automatically analyzed. An important aspect of monitoring is classifying and mapping habitat types present in the monitored area. Automatic classification is a difficult task, as classes have fine-grained differences and their distributions are long-tailed and unbalanced. Usually training data used for automatic land cover classification relies on fully annotated segmentation maps, annotated from remote sensed imagery to a fairly high-level taxonomy, i.e., classes such as forest, farmland, or urban area. A challenge with automatic habitat classification is that reliable data annotation requires field-surveys. Therefore, full segmentation maps are expensive to produce, and training data is often sparse, point-like, and limited to areas accessible by foot. Methods for utilizing these limited data more efficiently are needed. We address these problems by proposing a method for habitat classification and mapping, and apply this method to classify the entire northern Finnish Lapland area into Natura2000 classes. The method is characterized by using finely-grained, sparse, single-pixel annotations collected from the field, combined with large amounts of unannotated data to produce segmentation maps. Supervised, unsupervised and semi-supervised methods are compared, and the benefits of transfer learning from a larger out-of-domain dataset are demonstrated. We propose a CNN biased towards center pixel classification, ensembled with a random forest classifier, which produces higher-quality classifications than either model alone. We show that cropping augmentations, test-time augmentation and semi-supervised learning can help classification even further.
|
2207.09133
|
Prerak Srivastava
|
Prerak Srivastava, Antoine Deleforge, Emmanuel Vincent
|
Realistic sources, receivers and walls improve the generalisability of
virtually-supervised blind acoustic parameter estimators
| null | null | null | null |
cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Blind acoustic parameter estimation consists in inferring the acoustic
properties of an environment from recordings of unknown sound sources. Recent
works in this area have utilized deep neural networks trained either partially
or exclusively on simulated data, due to the limited availability of real
annotated measurements. In this paper, we study whether a model purely trained
using a fast image-source room impulse response simulator can generalize to
real data. We present an ablation study on carefully crafted simulated training
sets that account for different levels of realism in source, receiver and wall
responses. The extent of realism is controlled by the sampling of wall
absorption coefficients and by applying measured directivity patterns to
microphones and sources. A state-of-the-art model trained on these datasets is
evaluated on the task of jointly estimating the room's volume, total surface
area, and octave-band reverberation times from multiple, multichannel speech
recordings. Results reveal that every added layer of simulation realism at
train time significantly improves the estimation of all quantities on real
signals.
|
[
{
"created": "Tue, 19 Jul 2022 09:09:38 GMT",
"version": "v1"
}
] |
2022-07-20
|
[
[
"Srivastava",
"Prerak",
""
],
[
"Deleforge",
"Antoine",
""
],
[
"Vincent",
"Emmanuel",
""
]
] |
Blind acoustic parameter estimation consists in inferring the acoustic properties of an environment from recordings of unknown sound sources. Recent works in this area have utilized deep neural networks trained either partially or exclusively on simulated data, due to the limited availability of real annotated measurements. In this paper, we study whether a model purely trained using a fast image-source room impulse response simulator can generalize to real data. We present an ablation study on carefully crafted simulated training sets that account for different levels of realism in source, receiver and wall responses. The extent of realism is controlled by the sampling of wall absorption coefficients and by applying measured directivity patterns to microphones and sources. A state-of-the-art model trained on these datasets is evaluated on the task of jointly estimating the room's volume, total surface area, and octave-band reverberation times from multiple, multichannel speech recordings. Results reveal that every added layer of simulation realism at train time significantly improves the estimation of all quantities on real signals.
|
2104.07642
|
Chih-Chan Tien
|
Chih-chan Tien, Shane Steinert-Threlkeld
|
Bilingual alignment transfers to multilingual alignment for unsupervised
parallel text mining
|
To be published at ACL 2022. 11 pages, 2 figures
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
This work presents methods for learning cross-lingual sentence
representations using paired or unpaired bilingual texts. We hypothesize that
the cross-lingual alignment strategy is transferable, and therefore a model
trained to align only two languages can encode multilingually more aligned
representations. We thus introduce dual-pivot transfer: training on one
language pair and evaluating on other pairs. To study this theory, we design
unsupervised models trained on unpaired sentences and single-pair supervised
models trained on bitexts, both based on the unsupervised language model XLM-R
with its parameters frozen. The experiments evaluate the models as universal
sentence encoders on the task of unsupervised bitext mining on two datasets,
where the unsupervised model reaches the state of the art of unsupervised
retrieval, and the alternative single-pair supervised model approaches the
performance of multilingually supervised models. The results suggest that
bilingual training techniques as proposed can be applied to get sentence
representations with multilingual alignment.
|
[
{
"created": "Thu, 15 Apr 2021 17:51:22 GMT",
"version": "v1"
},
{
"created": "Tue, 15 Mar 2022 21:59:57 GMT",
"version": "v2"
}
] |
2022-03-17
|
[
[
"Tien",
"Chih-chan",
""
],
[
"Steinert-Threlkeld",
"Shane",
""
]
] |
This work presents methods for learning cross-lingual sentence representations using paired or unpaired bilingual texts. We hypothesize that the cross-lingual alignment strategy is transferable, and therefore a model trained to align only two languages can encode multilingually more aligned representations. We thus introduce dual-pivot transfer: training on one language pair and evaluating on other pairs. To study this theory, we design unsupervised models trained on unpaired sentences and single-pair supervised models trained on bitexts, both based on the unsupervised language model XLM-R with its parameters frozen. The experiments evaluate the models as universal sentence encoders on the task of unsupervised bitext mining on two datasets, where the unsupervised model reaches the state of the art of unsupervised retrieval, and the alternative single-pair supervised model approaches the performance of multilingually supervised models. The results suggest that bilingual training techniques as proposed can be applied to get sentence representations with multilingual alignment.
|
2303.16507
|
Huy Hieu Pham
|
Hieu H. Pham, Khiem H. Le, Tuan V. Tran, Ha Q. Nguyen
|
Improving Object Detection in Medical Image Analysis through Multiple
Expert Annotators: An Empirical Investigation
|
This is a short version submitted to the Midwest Machine Learning
Symposium (MMLS 2023), Chicago, IL, USA
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The work discusses the use of machine learning algorithms for anomaly
detection in medical image analysis and how the performance of these algorithms
depends on the number of annotators and the quality of labels. To address the
issue of subjectivity in labeling with a single annotator, we introduce a
simple and effective approach that aggregates annotations from multiple
annotators with varying levels of expertise. We then aim to improve the
efficiency of predictive models in abnormal detection tasks by estimating
hidden labels from multiple annotations and using a re-weighted loss function
to improve detection performance. Our method is evaluated on a real-world
medical imaging dataset and outperforms relevant baselines that do not consider
disagreements among annotators.
|
[
{
"created": "Wed, 29 Mar 2023 07:34:20 GMT",
"version": "v1"
}
] |
2023-03-30
|
[
[
"Pham",
"Hieu H.",
""
],
[
"Le",
"Khiem H.",
""
],
[
"Tran",
"Tuan V.",
""
],
[
"Nguyen",
"Ha Q.",
""
]
] |
The work discusses the use of machine learning algorithms for anomaly detection in medical image analysis and how the performance of these algorithms depends on the number of annotators and the quality of labels. To address the issue of subjectivity in labeling with a single annotator, we introduce a simple and effective approach that aggregates annotations from multiple annotators with varying levels of expertise. We then aim to improve the efficiency of predictive models in abnormal detection tasks by estimating hidden labels from multiple annotations and using a re-weighted loss function to improve detection performance. Our method is evaluated on a real-world medical imaging dataset and outperforms relevant baselines that do not consider disagreements among annotators.
|
2204.03888
|
Peng Shen Dr.
|
Peng Shen, Xugang Lu, Hisashi Kawai
|
Transducer-based language embedding for spoken language identification
|
This paper was accepted by Interspeech 2022
| null | null | null |
cs.CL cs.SD eess.AS
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The acoustic and linguistic features are important cues for the spoken
language identification (LID) task. Recent advanced LID systems mainly use
acoustic features that lack the usage of explicit linguistic feature encoding.
In this paper, we propose a novel transducer-based language embedding approach
for LID tasks by integrating an RNN transducer model into a language embedding
framework. Benefiting from the advantages of the RNN transducer's linguistic
representation capability, the proposed method can exploit both
phonetically-aware acoustic features and explicit linguistic features for LID
tasks. Experiments were carried out on the large-scale multilingual LibriSpeech
and VoxLingua107 datasets. Experimental results showed the proposed method
significantly improves the performance on LID tasks with 12% to 59% and 16% to
24% relative improvement on in-domain and cross-domain datasets, respectively.
|
[
{
"created": "Fri, 8 Apr 2022 07:23:43 GMT",
"version": "v1"
},
{
"created": "Fri, 29 Jul 2022 09:41:24 GMT",
"version": "v2"
}
] |
2022-08-01
|
[
[
"Shen",
"Peng",
""
],
[
"Lu",
"Xugang",
""
],
[
"Kawai",
"Hisashi",
""
]
] |
The acoustic and linguistic features are important cues for the spoken language identification (LID) task. Recent advanced LID systems mainly use acoustic features that lack the usage of explicit linguistic feature encoding. In this paper, we propose a novel transducer-based language embedding approach for LID tasks by integrating an RNN transducer model into a language embedding framework. Benefiting from the advantages of the RNN transducer's linguistic representation capability, the proposed method can exploit both phonetically-aware acoustic features and explicit linguistic features for LID tasks. Experiments were carried out on the large-scale multilingual LibriSpeech and VoxLingua107 datasets. Experimental results showed the proposed method significantly improves the performance on LID tasks with 12% to 59% and 16% to 24% relative improvement on in-domain and cross-domain datasets, respectively.
|
2210.10123
|
David Hart
|
David Hart, Michael Whitney, Bryan Morse
|
Interpolated SelectionConv for Spherical Images and Surfaces
|
To be presented at WACV 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a new and general framework for convolutional neural network
operations on spherical (or omnidirectional) images. Our approach represents
the surface as a graph of connected points that doesn't rely on a particular
sampling strategy. Additionally, by using an interpolated version of
SelectionConv, we can operate on the sphere while using existing 2D CNNs and
their weights. Since our method leverages existing graph implementations, it is
also fast and can be fine-tuned efficiently. Our method is also general enough
to be applied to any surface type, even those that are topologically
non-simple. We demonstrate the effectiveness of our technique on the tasks of
style transfer and segmentation for spheres as well as stylization for 3D
meshes. We provide a thorough ablation study of the performance of various
spherical sampling strategies.
|
[
{
"created": "Tue, 18 Oct 2022 19:49:07 GMT",
"version": "v1"
}
] |
2022-10-20
|
[
[
"Hart",
"David",
""
],
[
"Whitney",
"Michael",
""
],
[
"Morse",
"Bryan",
""
]
] |
We present a new and general framework for convolutional neural network operations on spherical (or omnidirectional) images. Our approach represents the surface as a graph of connected points that doesn't rely on a particular sampling strategy. Additionally, by using an interpolated version of SelectionConv, we can operate on the sphere while using existing 2D CNNs and their weights. Since our method leverages existing graph implementations, it is also fast and can be fine-tuned efficiently. Our method is also general enough to be applied to any surface type, even those that are topologically non-simple. We demonstrate the effectiveness of our technique on the tasks of style transfer and segmentation for spheres as well as stylization for 3D meshes. We provide a thorough ablation study of the performance of various spherical sampling strategies.
|
2204.12855
|
Anupama Mishra
|
Anupama Mishra
|
Prediction Approach against DDoS Attack based on Machine Learning
Multiclassfier
| null | null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Distributed denial-of-service (DDoS) attacks have emerged as one of the most
serious and fastest-growing threats on the Internet. They are cyber attacks that
target a specific system or network in an attempt to render it inaccessible or
unusable for a period of time. As a result, improving the detection of diverse
types of DDoS cyber threats with better algorithms and higher accuracy while
keeping the computational cost under control has become the most significant
component of detecting DDoS cyber threats. In order to properly defend the
targeted network or system, it is critical to first determine the sort of DDoS
assault that has been launched against it. A number of ensemble classification
techniques are presented in this paper, which combine the performance of
various algorithms. They are then compared to existing Machine Learning
Algorithms in terms of their effectiveness in detecting different types of DDoS
attacks using accuracy, F1 scores, and ROC curves. The results show high
accuracy and good performance.
|
[
{
"created": "Wed, 27 Apr 2022 11:39:24 GMT",
"version": "v1"
}
] |
2022-04-28
|
[
[
"Mishra",
"Anupama",
""
]
] |
Distributed denial-of-service (DDoS) attacks have emerged as one of the most serious and fastest-growing threats on the Internet. They are cyber attacks that target a specific system or network in an attempt to render it inaccessible or unusable for a period of time. As a result, improving the detection of diverse types of DDoS cyber threats with better algorithms and higher accuracy while keeping the computational cost under control has become the most significant component of detecting DDoS cyber threats. In order to properly defend the targeted network or system, it is critical to first determine the sort of DDoS assault that has been launched against it. A number of ensemble classification techniques are presented in this paper, which combine the performance of various algorithms. They are then compared to existing Machine Learning Algorithms in terms of their effectiveness in detecting different types of DDoS attacks using accuracy, F1 scores, and ROC curves. The results show high accuracy and good performance.
|
2104.04955
|
R. Baghdadi
|
Riyadh Baghdadi, Massinissa Merouani, Mohamed-Hicham Leghettas, Kamel
Abdous, Taha Arbaoui, Karima Benatchba, Saman Amarasinghe
|
A Deep Learning Based Cost Model for Automatic Code Optimization
| null |
Proceedings of the 4th MLSys Conference, San Jose, CA, USA, 2021
| null | null |
cs.PL cs.DC cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Enabling compilers to automatically optimize code has been a longstanding
goal for the compiler community. Efficiently solving this problem requires
using precise cost models. These models predict whether applying a sequence of
code transformations reduces the execution time of the program. Building an
analytical cost model to do so is hard in modern x86 architectures due to the
complexity of the microarchitecture. In this paper, we present a novel deep
learning based cost model for automatic code optimization. This model was
integrated in a search method and implemented in the Tiramisu compiler to
select the best code transformations. The input of the proposed model is a set
of simple features representing the unoptimized code and a sequence of code
transformations. The model predicts the speedup expected when the code
transformations are applied. Unlike previous models, the proposed one works on
full programs and does not rely on any heavy feature engineering. The proposed
model has only 16% of mean absolute percentage error in predicting speedups on
full programs. The proposed model enables Tiramisu to automatically find code
transformations that match or are better than state-of-the-art compilers
without requiring the same level of heavy feature engineering required by those
compilers.
|
[
{
"created": "Sun, 11 Apr 2021 08:32:42 GMT",
"version": "v1"
}
] |
2021-04-13
|
[
[
"Baghdadi",
"Riyadh",
""
],
[
"Merouani",
"Massinissa",
""
],
[
"Leghettas",
"Mohamed-Hicham",
""
],
[
"Abdous",
"Kamel",
""
],
[
"Arbaoui",
"Taha",
""
],
[
"Benatchba",
"Karima",
""
],
[
"Amarasinghe",
"Saman",
""
]
] |
Enabling compilers to automatically optimize code has been a longstanding goal for the compiler community. Efficiently solving this problem requires using precise cost models. These models predict whether applying a sequence of code transformations reduces the execution time of the program. Building an analytical cost model to do so is hard in modern x86 architectures due to the complexity of the microarchitecture. In this paper, we present a novel deep learning based cost model for automatic code optimization. This model was integrated in a search method and implemented in the Tiramisu compiler to select the best code transformations. The input of the proposed model is a set of simple features representing the unoptimized code and a sequence of code transformations. The model predicts the speedup expected when the code transformations are applied. Unlike previous models, the proposed one works on full programs and does not rely on any heavy feature engineering. The proposed model has only 16% of mean absolute percentage error in predicting speedups on full programs. The proposed model enables Tiramisu to automatically find code transformations that match or are better than state-of-the-art compilers without requiring the same level of heavy feature engineering required by those compilers.
|
2011.12836
|
Yu Zeng
|
Yu Zeng, Zhe Lin, Huchuan Lu, Vishal M. Patel
|
CR-Fill: Generative Image Inpainting with Auxiliary Contexutal
Reconstruction
| null | null | null | null |
cs.CV cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent deep generative inpainting methods use attention layers to allow the
generator to explicitly borrow feature patches from the known region to
complete a missing region. Due to the lack of supervision signals for the
correspondence between missing regions and known regions, it may fail to find
proper reference features, which often leads to artifacts in the results. Also,
it computes pair-wise similarity across the entire feature map during inference
bringing a significant computational overhead. To address this issue, we
propose to teach such patch-borrowing behavior to an attention-free generator
by joint training of an auxiliary contextual reconstruction task, which
encourages the generated output to be plausible even when reconstructed by
surrounding regions. The auxiliary branch can be seen as a learnable loss
function, i.e. named as contextual reconstruction (CR) loss, where
query-reference feature similarity and reference-based reconstructor are
jointly optimized with the inpainting generator. The auxiliary branch (i.e. CR
loss) is required only during training, and only the inpainting generator is
required during the inference. Experimental results demonstrate that the
proposed inpainting model compares favourably against the state-of-the-art in
terms of quantitative and visual performance.
|
[
{
"created": "Wed, 25 Nov 2020 15:45:12 GMT",
"version": "v1"
},
{
"created": "Wed, 31 Mar 2021 11:47:51 GMT",
"version": "v2"
}
] |
2021-04-01
|
[
[
"Zeng",
"Yu",
""
],
[
"Lin",
"Zhe",
""
],
[
"Lu",
"Huchuan",
""
],
[
"Patel",
"Vishal M.",
""
]
] |
Recent deep generative inpainting methods use attention layers to allow the generator to explicitly borrow feature patches from the known region to complete a missing region. Due to the lack of supervision signals for the correspondence between missing regions and known regions, it may fail to find proper reference features, which often leads to artifacts in the results. Also, it computes pair-wise similarity across the entire feature map during inference bringing a significant computational overhead. To address this issue, we propose to teach such patch-borrowing behavior to an attention-free generator by joint training of an auxiliary contextual reconstruction task, which encourages the generated output to be plausible even when reconstructed by surrounding regions. The auxiliary branch can be seen as a learnable loss function, i.e. named as contextual reconstruction (CR) loss, where query-reference feature similarity and reference-based reconstructor are jointly optimized with the inpainting generator. The auxiliary branch (i.e. CR loss) is required only during training, and only the inpainting generator is required during the inference. Experimental results demonstrate that the proposed inpainting model compares favourably against the state-of-the-art in terms of quantitative and visual performance.
|
2005.11467
|
Tingting Liang
|
Tingting Liang, Congying Xia, Yuyu Yin, Philip S. Yu
|
Joint Training Capsule Network for Cold Start Recommendation
|
Accepted by SIGIR'20
| null | null | null |
cs.IR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes a novel neural network, the joint training capsule
network (JTCN), for the cold start recommendation task. We propose to mimic
the high-level user preference, rather than the raw interaction history,
based on the side information for fresh users. Specifically, an attentive
capsule layer is proposed to aggregate high-level user preference from the
low-level interaction history via a dynamic routing-by-agreement mechanism.
Moreover, JTCN jointly trains the loss for mimicking the user preference and
the softmax loss for the recommendation in an end-to-end manner. Experiments
on two publicly available datasets demonstrate the effectiveness of the
proposed model. JTCN improves over other state-of-the-art methods by at least
7.07% for CiteULike and 16.85% for Amazon in terms of Recall@100 in cold
start recommendation.
|
[
{
"created": "Sat, 23 May 2020 04:27:38 GMT",
"version": "v1"
}
] |
2020-05-26
|
[
[
"Liang",
"Tingting",
""
],
[
"Xia",
"Congying",
""
],
[
"Yin",
"Yuyu",
""
],
[
"Yu",
"Philip S.",
""
]
] |
This paper proposes a novel neural network, the joint training capsule network (JTCN), for the cold start recommendation task. We propose to mimic the high-level user preference, rather than the raw interaction history, based on the side information for fresh users. Specifically, an attentive capsule layer is proposed to aggregate high-level user preference from the low-level interaction history via a dynamic routing-by-agreement mechanism. Moreover, JTCN jointly trains the loss for mimicking the user preference and the softmax loss for the recommendation in an end-to-end manner. Experiments on two publicly available datasets demonstrate the effectiveness of the proposed model. JTCN improves over other state-of-the-art methods by at least 7.07% for CiteULike and 16.85% for Amazon in terms of Recall@100 in cold start recommendation.
|
cs/0202020
|
Oleg Kupervasser
|
Oleg Kupervasser, Alexsander Vardy
|
The Mysterious Optimality of Naive Bayes: Estimation of the Probability
in the System of "Classifiers"
|
9 pages, 1 figure; all changes in the second version were made by
Kupervasser only
|
Pattern Recognition and Image Analysis, 2014, Vol. 24, No. 1
|
10.1134/S1054661814010088
| null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Bayes Classifiers are currently widely used for recognition, identification
and knowledge discovery. The fields of application are, for example, image
processing, medicine, and chemistry (QSAR). Yet, in a seemingly mysterious
way, the Naive Bayes Classifier usually delivers very good recognition
performance: it cannot be improved considerably by more complex Bayes
Classifier models. We demonstrate here a simple proof of the optimality of
the Naive Bayes Classifier that can explain this interesting fact. The
derivation in the current paper is based on arXiv:cs/0202020v1.
|
[
{
"created": "Sun, 17 Feb 2002 14:55:47 GMT",
"version": "v1"
},
{
"created": "Thu, 30 Jun 2011 14:23:47 GMT",
"version": "v2"
},
{
"created": "Fri, 24 Aug 2012 14:57:32 GMT",
"version": "v3"
}
] |
2013-12-30
|
[
[
"Kupervasser",
"Oleg",
""
],
[
"Vardy",
"Alexsander",
""
]
] |
Bayes Classifiers are currently widely used for recognition, identification and knowledge discovery. The fields of application are, for example, image processing, medicine, and chemistry (QSAR). Yet, in a seemingly mysterious way, the Naive Bayes Classifier usually delivers very good recognition performance: it cannot be improved considerably by more complex Bayes Classifier models. We demonstrate here a simple proof of the optimality of the Naive Bayes Classifier that can explain this interesting fact. The derivation in the current paper is based on arXiv:cs/0202020v1.
|
1803.09427
|
Andreas Schwierz
|
Andreas Schwierz and H{\aa}kan Forsberg
|
Design Assurance Evaluation of Microcontrollers for safety critical
Avionics
| null |
2017 IEEE/AIAA 36th Digital Avionics Systems Conference (DASC),
St. Petersburg, FL, 2017, pp. 1-9
|
10.1109/DASC.2017.8102145
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dealing with Commercial off-the-shelf (COTS) components is a daily business
for avionic system manufacturers. They are necessary ingredients for hardware
designs, but are not built in accordance with the avionics consensus standard
DO-254 for Airborne Electronic Hardware (AEH) design. Especially for complex
COTS hardware components used in safety-critical AEH, like Microcontroller
Units (MCUs), additional assurance activities have to be performed. Together
they shall establish convincing confidence that the hardware is safe in its
intended operational environment. The focus of DO-254 is an approach called
Design Assurance (DA). Its aim is to reduce design errors through adherence
to prescribed process objectives for the entire design life cycle. The effort
for certain COTS assurance activities could be reduced if it is possible to
demonstrate that the COTS design process is based on similarly effective
design process guidelines that minimize design errors. In recent years,
semiconductor manufacturers have released safety MCUs in compliance with the
ISO 26262 standard, dedicated to the development of functionally safe
automotive systems. These products are COTS components in the avionics sense,
but they are also developed according to a process that focuses on reducing
design errors. In this paper an evaluation is performed to determine whether
ISO 26262 prescribes a similar DA approach to DO-254, in order to reduce the
COTS assurance effort for future avionic systems.
|
[
{
"created": "Mon, 26 Mar 2018 06:15:09 GMT",
"version": "v1"
}
] |
2018-03-28
|
[
[
"Schwierz",
"Andreas",
""
],
[
"Forsberg",
"Håkan",
""
]
] |
Dealing with Commercial off-the-shelf (COTS) components is a daily business for avionic system manufacturers. They are necessary ingredients for hardware designs, but are not built in accordance with the avionics consensus standard DO-254 for Airborne Electronic Hardware (AEH) design. Especially for complex COTS hardware components used in safety-critical AEH, like Microcontroller Units (MCUs), additional assurance activities have to be performed. Together they shall establish convincing confidence that the hardware is safe in its intended operational environment. The focus of DO-254 is an approach called Design Assurance (DA). Its aim is to reduce design errors through adherence to prescribed process objectives for the entire design life cycle. The effort for certain COTS assurance activities could be reduced if it is possible to demonstrate that the COTS design process is based on similarly effective design process guidelines that minimize design errors. In recent years, semiconductor manufacturers have released safety MCUs in compliance with the ISO 26262 standard, dedicated to the development of functionally safe automotive systems. These products are COTS components in the avionics sense, but they are also developed according to a process that focuses on reducing design errors. In this paper an evaluation is performed to determine whether ISO 26262 prescribes a similar DA approach to DO-254, in order to reduce the COTS assurance effort for future avionic systems.
|
1806.04725
|
Dongqing Zhang
|
Dongqing Zhang, Jianing Wang, Jack H. Noble, Benoit M. Dawant
|
Accurate Detection of Inner Ears in Head CTs Using a Deep
Volume-to-Volume Regression Network with False Positive Suppression and a
Shape-Based Constraint
|
Accepted to MICCAI 2018
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cochlear implants (CIs) are neural prosthetics which are used to treat
patients with hearing loss. CIs use an array of electrodes which are surgically
inserted into the cochlea to stimulate the auditory nerve endings. After
surgery, CIs need to be programmed. Studies have shown that the spatial
relationship between the intra-cochlear anatomy and electrodes derived from
medical images can guide CI programming and lead to significant improvement in
hearing outcomes. However, clinical head CT images are usually obtained from
scanners of different brands with different protocols. The field of view thus
varies greatly and visual inspection is needed to document their content prior
to applying algorithms for electrode localization and intra-cochlear anatomy
segmentation. In this work, to determine the presence/absence of inner ears and
to accurately localize them in head CTs, we use a volume-to-volume
convolutional neural network which can be trained end-to-end to map a raw CT
volume to probability maps which indicate inner ear positions. We incorporate a
false positive suppression strategy in training and apply a shape-based
constraint. We achieve a labeling accuracy of 98.59% and a localization error
of 2.45mm. The localization error is significantly smaller than a random
forest-based approach that has been proposed recently to perform the same task.
|
[
{
"created": "Tue, 12 Jun 2018 19:19:00 GMT",
"version": "v1"
}
] |
2018-06-14
|
[
[
"Zhang",
"Dongqing",
""
],
[
"Wang",
"Jianing",
""
],
[
"Noble",
"Jack H.",
""
],
[
"Dawant",
"Benoit M.",
""
]
] |
Cochlear implants (CIs) are neural prosthetics which are used to treat patients with hearing loss. CIs use an array of electrodes which are surgically inserted into the cochlea to stimulate the auditory nerve endings. After surgery, CIs need to be programmed. Studies have shown that the spatial relationship between the intra-cochlear anatomy and electrodes derived from medical images can guide CI programming and lead to significant improvement in hearing outcomes. However, clinical head CT images are usually obtained from scanners of different brands with different protocols. The field of view thus varies greatly and visual inspection is needed to document their content prior to applying algorithms for electrode localization and intra-cochlear anatomy segmentation. In this work, to determine the presence/absence of inner ears and to accurately localize them in head CTs, we use a volume-to-volume convolutional neural network which can be trained end-to-end to map a raw CT volume to probability maps which indicate inner ear positions. We incorporate a false positive suppression strategy in training and apply a shape-based constraint. We achieve a labeling accuracy of 98.59% and a localization error of 2.45mm. The localization error is significantly smaller than a random forest-based approach that has been proposed recently to perform the same task.
|
1212.1984
|
Nicolas Emilio Bordenabe
|
Miguel E. Andr\'es, Nicol\'as E. Bordenabe, Konstantinos
Chatzikokolakis, and Catuscia Palamidessi
|
Geo-Indistinguishability: Differential Privacy for Location-Based
Systems
|
15 pages
|
Proceedings of the 2013 ACM SIGSAC conference on Computer and
Communications Security (CCS'13), ACM, pp. 901-914, 2013
|
10.1145/2508859.2516735
| null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The growing popularity of location-based systems, allowing unknown/untrusted
servers to easily collect huge amounts of information regarding users'
location, has recently started raising serious privacy concerns. In this paper
we study geo-indistinguishability, a formal notion of privacy for
location-based systems that protects the user's exact location, while allowing
approximate information - typically needed to obtain a certain desired service
- to be released. Our privacy definition formalizes the intuitive notion of
protecting the user's location within a radius r with a level of privacy that
depends on r, and corresponds to a generalized version of the well-known
concept of differential privacy. Furthermore, we present a perturbation
technique for achieving geo-indistinguishability by adding controlled random
noise to the user's location. We demonstrate the applicability of our technique
on an LBS application. Finally, we compare our mechanism with others in the
literature. It turns out that our mechanism offers the best privacy
guarantees, for the same utility, among all those that do not depend on the
prior.
|
[
{
"created": "Mon, 10 Dec 2012 07:09:21 GMT",
"version": "v1"
},
{
"created": "Fri, 24 May 2013 18:39:23 GMT",
"version": "v2"
},
{
"created": "Thu, 20 Feb 2014 15:39:48 GMT",
"version": "v3"
}
] |
2014-02-21
|
[
[
"Andrés",
"Miguel E.",
""
],
[
"Bordenabe",
"Nicolás E.",
""
],
[
"Chatzikokolakis",
"Konstantinos",
""
],
[
"Palamidessi",
"Catuscia",
""
]
] |
The growing popularity of location-based systems, allowing unknown/untrusted servers to easily collect huge amounts of information regarding users' location, has recently started raising serious privacy concerns. In this paper we study geo-indistinguishability, a formal notion of privacy for location-based systems that protects the user's exact location, while allowing approximate information - typically needed to obtain a certain desired service - to be released. Our privacy definition formalizes the intuitive notion of protecting the user's location within a radius r with a level of privacy that depends on r, and corresponds to a generalized version of the well-known concept of differential privacy. Furthermore, we present a perturbation technique for achieving geo-indistinguishability by adding controlled random noise to the user's location. We demonstrate the applicability of our technique on an LBS application. Finally, we compare our mechanism with others in the literature. It turns out that our mechanism offers the best privacy guarantees, for the same utility, among all those that do not depend on the prior.
|
1902.01091
|
Isaac Lera
|
Isaac Lera, Carlos Guerrero, Carlos Juiz
|
YAFS: A simulator for IoT scenarios in fog computing
| null | null |
10.1109/ACCESS.2019.2927895
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a fog computing simulator for analysing the design and deployment
of applications through customized and dynamic strategies. We model the
relationships among deployed applications, network connections and
infrastructure characteristics through complex network theory, enabling the
integration of topological measures in dynamic and customizable strategies such
as the placement of application modules, workload location, and path routing
and scheduling of services. We present a comparative analysis of the efficiency
and the convergence of results of our simulator with the most referenced
entity, iFogSim. To highlight YAFS functionalities, we model three scenarios
that, to the best of our knowledge, cannot be implemented with current fog
simulators: dynamic allocation of new application modules, dynamic failures of
network nodes and user mobility along the topology.
|
[
{
"created": "Mon, 4 Feb 2019 09:07:20 GMT",
"version": "v1"
}
] |
2019-07-19
|
[
[
"Lera",
"Isaac",
""
],
[
"Guerrero",
"Carlos",
""
],
[
"Juiz",
"Carlos",
""
]
] |
We propose a fog computing simulator for analysing the design and deployment of applications through customized and dynamic strategies. We model the relationships among deployed applications, network connections and infrastructure characteristics through complex network theory, enabling the integration of topological measures in dynamic and customizable strategies such as the placement of application modules, workload location, and path routing and scheduling of services. We present a comparative analysis of the efficiency and the convergence of results of our simulator with the most referenced entity, iFogSim. To highlight YAFS functionalities, we model three scenarios that, to the best of our knowledge, cannot be implemented with current fog simulators: dynamic allocation of new application modules, dynamic failures of network nodes and user mobility along the topology.
|
2305.15894
|
Seolhwa Lee
|
Seolhwa Lee, Anders S{\o}gaard
|
Private Meeting Summarization Without Performance Loss
|
SIGIR23 Main conference
| null |
10.1145/3539618.3592042
| null |
cs.CL cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Meeting summarization has an enormous business potential, but in addition to
being a hard problem, roll-out is challenged by privacy concerns. We explore
the problem of meeting summarization under differential privacy constraints and
find, to our surprise, that while differential privacy leads to slightly lower
performance on in-sample data, differential privacy improves performance when
evaluated on unseen meeting types. Since meeting summarization systems will
encounter a great variety of meeting types in practical employment scenarios,
this observation makes safe meeting summarization seem much more feasible. We
perform extensive error analysis and identify potential risks in meeting
summarization under differential privacy, including a faithfulness analysis.
|
[
{
"created": "Thu, 25 May 2023 09:48:50 GMT",
"version": "v1"
}
] |
2023-05-26
|
[
[
"Lee",
"Seolhwa",
""
],
[
"Søgaard",
"Anders",
""
]
] |
Meeting summarization has an enormous business potential, but in addition to being a hard problem, roll-out is challenged by privacy concerns. We explore the problem of meeting summarization under differential privacy constraints and find, to our surprise, that while differential privacy leads to slightly lower performance on in-sample data, differential privacy improves performance when evaluated on unseen meeting types. Since meeting summarization systems will encounter a great variety of meeting types in practical employment scenarios, this observation makes safe meeting summarization seem much more feasible. We perform extensive error analysis and identify potential risks in meeting summarization under differential privacy, including a faithfulness analysis.
|
1805.12233
|
Qiqi Yan
|
Kedar Dhamdhere and Mukund Sundararajan and Qiqi Yan
|
How Important Is a Neuron?
|
under submission
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The problem of attributing a deep network's prediction to its
\emph{input/base} features is well-studied. We introduce the notion of
\emph{conductance} to extend attribution to understanding the importance of
\emph{hidden} units.
Informally, the conductance of a hidden unit of a deep network is the
\emph{flow} of attribution via this hidden unit. We use conductance to
understand the importance of a hidden unit to the prediction for a specific
input, or over a set of inputs. We evaluate the effectiveness of conductance in
multiple ways, including theoretical properties, ablation studies, and a
feature selection task. The empirical evaluations are done using the Inception
network over ImageNet data, and a sentiment analysis network over reviews. In
both cases, we demonstrate the effectiveness of conductance in identifying
interesting insights about the internal workings of these networks.
|
[
{
"created": "Wed, 30 May 2018 21:26:22 GMT",
"version": "v1"
}
] |
2018-06-01
|
[
[
"Dhamdhere",
"Kedar",
""
],
[
"Sundararajan",
"Mukund",
""
],
[
"Yan",
"Qiqi",
""
]
] |
The problem of attributing a deep network's prediction to its \emph{input/base} features is well-studied. We introduce the notion of \emph{conductance} to extend attribution to understanding the importance of \emph{hidden} units. Informally, the conductance of a hidden unit of a deep network is the \emph{flow} of attribution via this hidden unit. We use conductance to understand the importance of a hidden unit to the prediction for a specific input, or over a set of inputs. We evaluate the effectiveness of conductance in multiple ways, including theoretical properties, ablation studies, and a feature selection task. The empirical evaluations are done using the Inception network over ImageNet data, and a sentiment analysis network over reviews. In both cases, we demonstrate the effectiveness of conductance in identifying interesting insights about the internal workings of these networks.
|
1904.09791
|
Seoung Wug Oh
|
Seoung Wug Oh, Joon-Young Lee, Ning Xu, Seon Joo Kim
|
Fast User-Guided Video Object Segmentation by
Interaction-and-Propagation Networks
|
CVPR 2019
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a deep learning method for interactive video object
segmentation. Our method is built upon two core operations, interaction and
propagation, each conducted by a Convolutional Neural Network.
The two networks are connected both internally and externally so that they
are trained jointly and interact with each other to solve the complex
video object segmentation problem. We propose a new multi-round training
scheme for interactive video object segmentation so that the networks can
learn to understand the user's intention and update incorrect estimations
during training. At test time, our method produces high-quality results and
also runs fast enough to work with users interactively. We evaluated the
proposed method quantitatively on the interactive track benchmark of the
DAVIS Challenge 2018. We outperformed other competing methods by a
significant margin in both speed and accuracy. We also demonstrated that our
method works well with real user interactions.
|
[
{
"created": "Mon, 22 Apr 2019 10:17:46 GMT",
"version": "v1"
},
{
"created": "Thu, 2 May 2019 09:17:22 GMT",
"version": "v2"
}
] |
2019-05-03
|
[
[
"Oh",
"Seoung Wug",
""
],
[
"Lee",
"Joon-Young",
""
],
[
"Xu",
"Ning",
""
],
[
"Kim",
"Seon Joo",
""
]
] |
We present a deep learning method for interactive video object segmentation. Our method is built upon two core operations, interaction and propagation, each conducted by a Convolutional Neural Network. The two networks are connected both internally and externally so that they are trained jointly and interact with each other to solve the complex video object segmentation problem. We propose a new multi-round training scheme for interactive video object segmentation so that the networks can learn to understand the user's intention and update incorrect estimations during training. At test time, our method produces high-quality results and also runs fast enough to work with users interactively. We evaluated the proposed method quantitatively on the interactive track benchmark of the DAVIS Challenge 2018. We outperformed other competing methods by a significant margin in both speed and accuracy. We also demonstrated that our method works well with real user interactions.
|
2008.01151
|
Kenneth Stewart
|
Kenneth Stewart, Garrick Orchard, Sumit Bam Shrestha, Emre Neftci
|
Online Few-shot Gesture Learning on a Neuromorphic Processor
|
10 pages, accepted by IEEE JETCAS
| null | null | null |
cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present the Surrogate-gradient Online Error-triggered Learning (SOEL)
system for online few-shot learning on neuromorphic processors. The SOEL
learning system uses a combination of transfer learning and principles of
computational neuroscience and deep learning. We show that partially trained
deep Spiking Neural Networks (SNNs) implemented on neuromorphic hardware can
rapidly adapt online to new classes of data within a domain. SOEL updates
trigger when an error occurs, enabling faster learning with fewer updates.
Using gesture recognition as a case study, we show SOEL can be used for online
few-shot learning of new classes of pre-recorded gesture data and rapid online
learning of new gestures from data streamed live from a Dynamic Active-pixel
Vision Sensor to an Intel Loihi neuromorphic research processor.
|
[
{
"created": "Mon, 3 Aug 2020 19:39:54 GMT",
"version": "v1"
},
{
"created": "Wed, 14 Oct 2020 05:13:05 GMT",
"version": "v2"
}
] |
2020-10-15
|
[
[
"Stewart",
"Kenneth",
""
],
[
"Orchard",
"Garrick",
""
],
[
"Shrestha",
"Sumit Bam",
""
],
[
"Neftci",
"Emre",
""
]
] |
We present the Surrogate-gradient Online Error-triggered Learning (SOEL) system for online few-shot learning on neuromorphic processors. The SOEL learning system uses a combination of transfer learning and principles of computational neuroscience and deep learning. We show that partially trained deep Spiking Neural Networks (SNNs) implemented on neuromorphic hardware can rapidly adapt online to new classes of data within a domain. SOEL updates trigger when an error occurs, enabling faster learning with fewer updates. Using gesture recognition as a case study, we show SOEL can be used for online few-shot learning of new classes of pre-recorded gesture data and rapid online learning of new gestures from data streamed live from a Dynamic Active-pixel Vision Sensor to an Intel Loihi neuromorphic research processor.
|
2109.09233
|
Ipek Baris Schlicht
|
Ipek Baris Schlicht and Angel Felipe Magnoss\~ao de Paula
|
Unified and Multilingual Author Profiling for Detecting Haters
|
9 pages, 2 figures, see the original paper:
http://ceur-ws.org/Vol-2936/paper-157.pdf
|
Published at the CLEF 2021
| null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents a unified user profiling framework to identify hate
speech spreaders by processing their tweets regardless of the language. The
framework encodes the tweets with sentence transformers and applies an
attention mechanism to select important tweets for learning user profiles.
Furthermore, the attention layer helps to explain why a user is a hate speech
spreader by producing attention weights at both token and post level. Our
proposed model outperformed the state-of-the-art multilingual transformer
models.
|
[
{
"created": "Sun, 19 Sep 2021 21:53:23 GMT",
"version": "v1"
}
] |
2021-09-21
|
[
[
"Schlicht",
"Ipek Baris",
""
],
[
"de Paula",
"Angel Felipe Magnossão",
""
]
] |
This paper presents a unified user profiling framework to identify hate speech spreaders by processing their tweets regardless of the language. The framework encodes the tweets with sentence transformers and applies an attention mechanism to select important tweets for learning user profiles. Furthermore, the attention layer helps to explain why a user is a hate speech spreader by producing attention weights at both token and post level. Our proposed model outperformed the state-of-the-art multilingual transformer models.
|
2010.01730
|
Shomik Jain
|
Shomik Jain, Abby K. Wood
|
Facebook Political Ads And Accountability: Outside Groups Are Most
Negative, Especially When Hiding Donors
| null | null | null | null |
cs.CY cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The emergence of online political advertising has come with little
regulation, allowing political advertisers on social media to avoid
accountability. We analyze how transparency and accountability deficits caused
by dark money and disappearing groups relate to the sentiment of political ads
on Facebook. We obtained 430,044 ads with FEC-registered advertisers from
Facebook's ad library that ran between August-November 2018. We compare ads run
by candidates, parties, and outside groups, which we classify by (1) their
donor transparency (dark money or disclosed) and (2) the group's permanence
(only FEC-registered in 2018 or persistent across cycles). The most negative
advertising came from dark money and disappearing outside groups, which were
mostly corporations or 501(c) organizations. However, only dark money was
associated with a significant decrease in ad sentiment. These results suggest
that accountability for political speech matters for advertising tone,
especially in the context of affective polarization on social media.
|
[
{
"created": "Mon, 5 Oct 2020 00:53:43 GMT",
"version": "v1"
},
{
"created": "Sun, 24 Jan 2021 00:21:42 GMT",
"version": "v2"
},
{
"created": "Thu, 2 Sep 2021 00:33:35 GMT",
"version": "v3"
},
{
"created": "Fri, 26 Jan 2024 17:38:02 GMT",
"version": "v4"
}
] |
2024-01-29
|
[
[
"Jain",
"Shomik",
""
],
[
"Wood",
"Abby K.",
""
]
] |
The emergence of online political advertising has come with little regulation, allowing political advertisers on social media to avoid accountability. We analyze how transparency and accountability deficits caused by dark money and disappearing groups relate to the sentiment of political ads on Facebook. We obtained 430,044 ads with FEC-registered advertisers from Facebook's ad library that ran between August-November 2018. We compare ads run by candidates, parties, and outside groups, which we classify by (1) their donor transparency (dark money or disclosed) and (2) the group's permanence (only FEC-registered in 2018 or persistent across cycles). The most negative advertising came from dark money and disappearing outside groups, which were mostly corporations or 501(c) organizations. However, only dark money was associated with a significant decrease in ad sentiment. These results suggest that accountability for political speech matters for advertising tone, especially in the context of affective polarization on social media.
|
2012.11554
|
Yufeng Zhang
|
Zhuoran Yang, Yufeng Zhang, Yongxin Chen, Zhaoran Wang
|
Variational Transport: A Convergent Particle-BasedAlgorithm for
Distributional Optimization
|
58 pages, add acknowledgement
| null | null | null |
cs.LG math.OC math.ST stat.ML stat.TH
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the optimization problem of minimizing a functional defined over
a family of probability distributions, where the objective functional is
assumed to possess a variational form. Such a distributional optimization
problem arises widely in machine learning and statistics, with Monte-Carlo
sampling, variational inference, policy optimization, and generative
adversarial network as examples. For this problem, we propose a novel
particle-based algorithm, dubbed variational transport, which approximately
performs Wasserstein gradient descent over the manifold of probability
distributions via iteratively pushing a set of particles. Specifically, we
prove that moving along the geodesic in the direction of functional gradient
with respect to the second-order Wasserstein distance is equivalent to applying
a pushforward mapping to a probability distribution, which can be approximated
accurately by pushing a set of particles. In each iteration of
variational transport, we first solve the variational problem associated with
the objective functional using the particles, whose solution yields the
Wasserstein gradient direction. Then we update the current distribution by
pushing each particle along the direction specified by such a solution. By
characterizing both the statistical error incurred in estimating the
Wasserstein gradient and the progress of the optimization algorithm, we prove
that when the objective function satisfies a functional version of the
Polyak-\L{}ojasiewicz (PL) (Polyak, 1963) and smoothness conditions,
variational transport converges linearly to the global minimum of the objective
functional up to a certain statistical error, which decays to zero sublinearly
as the number of particles goes to infinity.
|
[
{
"created": "Mon, 21 Dec 2020 18:33:13 GMT",
"version": "v1"
},
{
"created": "Mon, 1 Apr 2024 03:56:23 GMT",
"version": "v2"
}
] |
2024-04-02
|
[
[
"Yang",
"Zhuoran",
""
],
[
"Zhang",
"Yufeng",
""
],
[
"Chen",
"Yongxin",
""
],
[
"Wang",
"Zhaoran",
""
]
] |
We consider the optimization problem of minimizing a functional defined over a family of probability distributions, where the objective functional is assumed to possess a variational form. Such a distributional optimization problem arises widely in machine learning and statistics, with Monte-Carlo sampling, variational inference, policy optimization, and generative adversarial network as examples. For this problem, we propose a novel particle-based algorithm, dubbed variational transport, which approximately performs Wasserstein gradient descent over the manifold of probability distributions via iteratively pushing a set of particles. Specifically, we prove that moving along the geodesic in the direction of functional gradient with respect to the second-order Wasserstein distance is equivalent to applying a pushforward mapping to a probability distribution, which can be approximated accurately by pushing a set of particles. In each iteration of variational transport, we first solve the variational problem associated with the objective functional using the particles, whose solution yields the Wasserstein gradient direction. Then we update the current distribution by pushing each particle along the direction specified by such a solution. By characterizing both the statistical error incurred in estimating the Wasserstein gradient and the progress of the optimization algorithm, we prove that when the objective function satisfies a functional version of the Polyak-\L{}ojasiewicz (PL) (Polyak, 1963) and smoothness conditions, variational transport converges linearly to the global minimum of the objective functional up to a certain statistical error, which decays to zero sublinearly as the number of particles goes to infinity.
|
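The particle-update loop described in the abstract above can be sketched for the simplest case, a linear objective functional F(rho) = E_rho[V(x)], whose Wasserstein gradient direction is just grad V. The potential V, step size, and iteration count below are illustrative assumptions, not taken from the paper:

```python
def grad_V(x):
    # Illustrative potential V(x) = (x - 3)^2 / 2, so grad V(x) = x - 3.
    return x - 3.0

def variational_transport(particles, step=0.1, iters=200):
    # Each iteration pushes every particle along the (here, exactly known)
    # Wasserstein gradient direction, i.e. applies the pushforward map
    # x -> x - step * grad_V(x) to the empirical distribution.
    for _ in range(iters):
        particles = [x - step * grad_V(x) for x in particles]
    return particles

particles = variational_transport([0.0, 1.0, 5.0, 8.0])
mean = sum(particles) / len(particles)
```

For general variational objectives, the inner step would first solve the variational problem on the current particles to estimate the gradient direction before pushing; the toy above skips that estimation because grad V is known in closed form.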
2109.00442
|
Anik Saha
|
Anik Saha, Catherine Finegan-Dollak, Ashish Verma
|
Position Masking for Improved Layout-Aware Document Understanding
|
Document Intelligence Workshop at KDD, 2021
| null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Natural language processing for document scans and PDFs has the potential to
enormously improve the efficiency of business processes. Layout-aware word
embeddings such as LayoutLM have shown promise for classification of and
information extraction from such documents. This paper proposes a new
pre-training task called position masking that can improve performance of layout-aware word
embeddings that incorporate 2-D position embeddings. We compare models
pre-trained with only language masking against models pre-trained with both
language masking and position masking, and we find that position masking
improves performance by over 5% on a form understanding task.
|
[
{
"created": "Wed, 1 Sep 2021 15:40:15 GMT",
"version": "v1"
}
] |
2021-09-02
|
[
[
"Saha",
"Anik",
""
],
[
"Finegan-Dollak",
"Catherine",
""
],
[
"Verma",
"Ashish",
""
]
] |
Natural language processing for document scans and PDFs has the potential to enormously improve the efficiency of business processes. Layout-aware word embeddings such as LayoutLM have shown promise for classification of and information extraction from such documents. This paper proposes a new pre-training task called position masking that can improve performance of layout-aware word embeddings that incorporate 2-D position embeddings. We compare models pre-trained with only language masking against models pre-trained with both language masking and position masking, and we find that position masking improves performance by over 5% on a form understanding task.
|
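The position-masking pre-training task compared above can be sketched as target construction: a fraction of the 2-D position inputs is replaced by a sentinel, and the model is trained to recover the original coordinates at exactly those tokens. The sentinel value, masking rate, and function names below are assumptions for illustration, not the paper's implementation:

```python
import random

MASK_POS = (0, 0)  # assumed sentinel position fed to the model

def mask_positions(positions, mask_prob=0.15, rng=None):
    # Build (input, target) pairs: masked tokens see the sentinel and
    # contribute a position-prediction loss; unmasked tokens have no target.
    rng = rng or random.Random(0)
    inputs, targets = [], []
    for pos in positions:
        if rng.random() < mask_prob:
            inputs.append(MASK_POS)   # model sees the sentinel
            targets.append(pos)       # and must predict the true position
        else:
            inputs.append(pos)
            targets.append(None)      # no loss on unmasked tokens
    return inputs, targets

inputs, targets = mask_positions([(10, 20), (30, 20), (50, 20), (70, 20)],
                                 mask_prob=0.5)
```

In a full pipeline this target construction would be combined with ordinary language masking, matching the paper's comparison of language-masking-only against joint language-and-position masking.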
2406.13237
|
Ke Zhang
|
Ke Zhang, Vishal M. Patel
|
ModelMix: A New Model-Mixup Strategy to Minimize Vicinal Risk across
Tasks for Few-scribble based Cardiac Segmentation
|
10 pages, 3 figures
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Pixel-level dense labeling is both resource-intensive and time-consuming,
whereas weak labels such as scribbles present a more feasible alternative to
full annotations. However, training segmentation networks with weak supervision
from scribbles remains challenging. Inspired by the fact that different
segmentation tasks can be correlated with each other, we introduce a new
approach to few-scribble supervised segmentation based on model parameter
interpolation, termed as ModelMix. Leveraging the prior knowledge that linearly
interpolating convolution kernels and bias terms should result in linear
interpolations of the corresponding feature vectors, ModelMix constructs
virtual models using convex combinations of convolutional parameters from
separate encoders. We then regularize the model set to minimize vicinal risk
across tasks in both unsupervised and scribble-supervised ways. Validated on
three open datasets, i.e., ACDC, MSCMRseg, and MyoPS, our few-scribble guided
ModelMix significantly surpasses the performance of the state-of-the-art
scribble supervised methods.
|
[
{
"created": "Wed, 19 Jun 2024 05:58:11 GMT",
"version": "v1"
}
] |
2024-06-21
|
[
[
"Zhang",
"Ke",
""
],
[
"Patel",
"Vishal M.",
""
]
] |
Pixel-level dense labeling is both resource-intensive and time-consuming, whereas weak labels such as scribbles present a more feasible alternative to full annotations. However, training segmentation networks with weak supervision from scribbles remains challenging. Inspired by the fact that different segmentation tasks can be correlated with each other, we introduce a new approach to few-scribble supervised segmentation based on model parameter interpolation, termed as ModelMix. Leveraging the prior knowledge that linearly interpolating convolution kernels and bias terms should result in linear interpolations of the corresponding feature vectors, ModelMix constructs virtual models using convex combinations of convolutional parameters from separate encoders. We then regularize the model set to minimize vicinal risk across tasks in both unsupervised and scribble-supervised ways. Validated on three open datasets, i.e., ACDC, MSCMRseg, and MyoPS, our few-scribble guided ModelMix significantly surpasses the performance of the state-of-the-art scribble supervised methods.
|
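The core ModelMix operation described above, a convex combination of convolutional weights and biases from separately trained encoders, can be sketched in a few lines. Flat lists of scalars stand in for real weight tensors purely for illustration:

```python
def model_mix(params_a, params_b, lam):
    # Virtual model: theta_mix = lam * theta_a + (1 - lam) * theta_b,
    # applied elementwise to corresponding parameters of the two encoders.
    assert 0.0 <= lam <= 1.0
    return [lam * a + (1.0 - lam) * b for a, b in zip(params_a, params_b)]

mixed = model_mix([1.0, 2.0, 3.0], [3.0, 4.0, 5.0], lam=0.5)
```

In practice lam would be sampled per virtual model, and the mixed parameters would be loaded into a network to compute the vicinal-risk regularization terms.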
2205.10611
|
Xiang Xiang
|
Shiqi Li, Xiang Xiang
|
Lightweight Human Pose Estimation Using Heatmap-Weighting Loss
|
7 pages, 5 figures
| null | null | null |
cs.CV cs.AI cs.LG cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent research on human pose estimation exploits complex structures to
improve performance on benchmark datasets, ignoring the resource overhead and
inference speed when the model is actually deployed. In this paper, we lighten
the computation cost and parameters of the deconvolution head network in
SimpleBaseline and introduce an attention mechanism that utilizes original,
inter-level, and intra-level information to intensify the accuracy.
Additionally, we propose a novel loss function called heatmap weighting loss,
which generates weights for each pixel on the heatmap that makes the model more
focused on keypoints. Experiments demonstrate our method achieves a balance
between performance, resource volume, and inference speed. Specifically, our
method can achieve 65.3 AP score on COCO test-dev, while the inference speed is
55 FPS and 18 FPS on the mobile GPU and CPU, respectively.
|
[
{
"created": "Sat, 21 May 2022 14:26:14 GMT",
"version": "v1"
}
] |
2022-05-24
|
[
[
"Li",
"Shiqi",
""
],
[
"Xiang",
"Xiang",
""
]
] |
Recent research on human pose estimation exploits complex structures to improve performance on benchmark datasets, ignoring the resource overhead and inference speed when the model is actually deployed. In this paper, we lighten the computation cost and parameters of the deconvolution head network in SimpleBaseline and introduce an attention mechanism that utilizes original, inter-level, and intra-level information to intensify the accuracy. Additionally, we propose a novel loss function called heatmap weighting loss, which generates weights for each pixel on the heatmap that makes the model more focused on keypoints. Experiments demonstrate our method achieves a balance between performance, resource volume, and inference speed. Specifically, our method can achieve 65.3 AP score on COCO test-dev, while the inference speed is 55 FPS and 18 FPS on the mobile GPU and CPU, respectively.
|
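A minimal sketch of a heatmap-weighting loss along the lines described above: each pixel's squared error is scaled by a weight generated from the ground-truth heatmap, so keypoint regions dominate the loss. The specific weighting function (alpha * gt + 1) is an assumption for illustration, not the paper's formula:

```python
def heatmap_weighting_loss(pred, gt, alpha=4.0):
    # Weighted mean squared error over a 2-D heatmap: pixels with high
    # ground-truth response (near keypoints) receive larger weights.
    total, n = 0.0, 0
    for p_row, g_row in zip(pred, gt):
        for p, g in zip(p_row, g_row):
            w = alpha * g + 1.0          # higher weight near keypoints
            total += w * (p - g) ** 2
            n += 1
    return total / n

gt   = [[0.0, 1.0], [0.0, 0.0]]   # one keypoint at pixel (0, 1)
pred = [[0.0, 0.5], [0.5, 0.0]]
loss = heatmap_weighting_loss(pred, gt)
```

With equal raw errors of 0.25 at the keypoint pixel and a background pixel, the keypoint pixel contributes five times as much to the loss under alpha = 4.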
1907.08027
|
Johan Ferret
|
Johan Ferret, Rapha\"el Marinier, Matthieu Geist and Olivier Pietquin
|
Self-Attentional Credit Assignment for Transfer in Reinforcement
Learning
|
21 pages, 10 figures, 3 tables (accepted as an oral presentation at
the Learning Transferable Skills workshop, NeurIPS 2019)
|
International Joint Conference on Artificial Intelligence. 29
(2020) 2655-2661
|
10.24963/ijcai.2020/368
| null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The ability to transfer knowledge to novel environments and tasks is a
sensible desideratum for general learning agents. Despite its apparent promise,
transfer in RL is still an open and little-explored research area. In this
paper, we take a brand-new perspective about transfer: we suggest that the
ability to assign credit unveils structural invariants in the tasks that can be
transferred to make RL more sample-efficient. Our main contribution is SECRET,
a novel approach to transfer learning for RL that uses a backward-view credit
assignment mechanism based on a self-attentive architecture. Two aspects are
key to its generality: it learns to assign credit as a separate offline
supervised process and exclusively modifies the reward function. Consequently,
it can be supplemented by transfer methods that do not modify the reward
function and it can be plugged on top of any RL algorithm.
|
[
{
"created": "Thu, 18 Jul 2019 13:02:16 GMT",
"version": "v1"
},
{
"created": "Fri, 22 Nov 2019 14:22:44 GMT",
"version": "v2"
}
] |
2020-10-13
|
[
[
"Ferret",
"Johan",
""
],
[
"Marinier",
"Raphaël",
""
],
[
"Geist",
"Matthieu",
""
],
[
"Pietquin",
"Olivier",
""
]
] |
The ability to transfer knowledge to novel environments and tasks is a sensible desideratum for general learning agents. Despite its apparent promise, transfer in RL is still an open and little-explored research area. In this paper, we take a brand-new perspective about transfer: we suggest that the ability to assign credit unveils structural invariants in the tasks that can be transferred to make RL more sample-efficient. Our main contribution is SECRET, a novel approach to transfer learning for RL that uses a backward-view credit assignment mechanism based on a self-attentive architecture. Two aspects are key to its generality: it learns to assign credit as a separate offline supervised process and exclusively modifies the reward function. Consequently, it can be supplemented by transfer methods that do not modify the reward function and it can be plugged on top of any RL algorithm.
|
2306.08057
|
Nan Jiang
|
Nan Jiang, Yexiang Xue
|
Symbolic Regression via Control Variable Genetic Programming
| null | null | null | null |
cs.NE cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Learning symbolic expressions directly from experiment data is a vital step
in AI-driven scientific discovery. Nevertheless, state-of-the-art approaches
are limited to learning simple expressions. Regressing expressions involving
many independent variables still remains out of reach. Motivated by the control
variable experiments widely utilized in science, we propose Control Variable
Genetic Programming (CVGP) for symbolic regression over many independent
variables. CVGP expedites symbolic expression discovery via customized
experiment design, rather than learning from a fixed dataset collected a
priori. CVGP starts by fitting simple expressions involving a small set of
independent variables using genetic programming, under controlled experiments
where other variables are held as constants. It then extends expressions
learned in previous generations by adding new independent variables, using new
control variable experiments in which these variables are allowed to vary.
Theoretically, we show CVGP as an incremental building approach can yield an
exponential reduction in the search space when learning a class of expressions.
Experimentally, CVGP outperforms several baselines in learning symbolic
expressions involving multiple independent variables.
|
[
{
"created": "Thu, 25 May 2023 04:11:14 GMT",
"version": "v1"
}
] |
2023-06-16
|
[
[
"Jiang",
"Nan",
""
],
[
"Xue",
"Yexiang",
""
]
] |
Learning symbolic expressions directly from experiment data is a vital step in AI-driven scientific discovery. Nevertheless, state-of-the-art approaches are limited to learning simple expressions. Regressing expressions involving many independent variables still remains out of reach. Motivated by the control variable experiments widely utilized in science, we propose Control Variable Genetic Programming (CVGP) for symbolic regression over many independent variables. CVGP expedites symbolic expression discovery via customized experiment design, rather than learning from a fixed dataset collected a priori. CVGP starts by fitting simple expressions involving a small set of independent variables using genetic programming, under controlled experiments where other variables are held as constants. It then extends expressions learned in previous generations by adding new independent variables, using new control variable experiments in which these variables are allowed to vary. Theoretically, we show CVGP as an incremental building approach can yield an exponential reduction in the search space when learning a class of expressions. Experimentally, CVGP outperforms several baselines in learning symbolic expressions involving multiple independent variables.
|
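The incremental, control-variable structure of CVGP can be illustrated with a toy two-stage fit, with least-squares slopes standing in for genetic programming: first discover the x1 term while x2 is held at zero, then let x2 vary and fit the residual. The target function and fitting procedure are illustrative assumptions only:

```python
def fit_slope(xs, ys):
    # Least-squares slope through the origin: argmin_a sum (y - a*x)^2.
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

true = lambda x1, x2: 2.0 * x1 + 3.0 * x2  # unknown "ground truth" to recover

# Stage 1: controlled experiment with x2 held at 0 discovers the x1 term.
xs1 = [1.0, 2.0, 3.0]
a = fit_slope(xs1, [true(x, 0.0) for x in xs1])

# Stage 2: new experiments let x2 vary; fitting the residual after
# subtracting the stage-1 term discovers the x2 term.
xs2 = [1.0, 2.0, 3.0]
b = fit_slope(xs2, [true(1.0, x) - a * 1.0 for x in xs2])
```

Each stage searches over a much smaller hypothesis space than a joint fit over both variables, which is the intuition behind the exponential search-space reduction claimed for the incremental approach.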
1905.12594
|
Hengchu Zhang
|
Hengchu Zhang, Edo Roth, Andreas Haeberlen, Benjamin C. Pierce, Aaron
Roth
|
Fuzzi: A Three-Level Logic for Differential Privacy
| null | null | null | null |
cs.PL cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Curators of sensitive datasets sometimes need to know whether queries against
the data are differentially private [Dwork et al. 2006]. Two sorts of logics
have been proposed for checking this property: (1) type systems and other
static analyses, which fully automate straightforward reasoning with concepts
like "program sensitivity" and "privacy loss," and (2) full-blown program
logics such as apRHL (an approximate, probabilistic, relational Hoare logic)
[Barthe et al. 2016], which support more flexible reasoning about subtle
privacy-preserving algorithmic techniques but offer only minimal automation.
We propose a three-level logic for differential privacy in an imperative
setting and present a prototype implementation called Fuzzi. Fuzzi's lowest
level is a general-purpose logic; its middle level is apRHL; and its top level
is a novel sensitivity logic adapted from the linear-logic-inspired type system
of Fuzz, a differentially private functional language [Reed and Pierce 2010].
The key novelty is a high degree of integration between the sensitivity logic
and the two lower-level logics: the judgments and proofs of the sensitivity
logic can be easily translated into apRHL; conversely, privacy properties of
key algorithmic building blocks can be proved manually in apRHL and the base
logic, then packaged up as typing rules that can be applied by a checker for
the sensitivity logic to automatically construct privacy proofs for composite
programs of arbitrary size.
We demonstrate Fuzzi's utility by implementing four different private
machine-learning algorithms and showing that Fuzzi's checker is able to derive
tight sensitivity bounds.
|
[
{
"created": "Wed, 29 May 2019 17:13:41 GMT",
"version": "v1"
}
] |
2019-06-10
|
[
[
"Zhang",
"Hengchu",
""
],
[
"Roth",
"Edo",
""
],
[
"Haeberlen",
"Andreas",
""
],
[
"Pierce",
"Benjamin C.",
""
],
[
"Roth",
"Aaron",
""
]
] |
Curators of sensitive datasets sometimes need to know whether queries against the data are differentially private [Dwork et al. 2006]. Two sorts of logics have been proposed for checking this property: (1) type systems and other static analyses, which fully automate straightforward reasoning with concepts like "program sensitivity" and "privacy loss," and (2) full-blown program logics such as apRHL (an approximate, probabilistic, relational Hoare logic) [Barthe et al. 2016], which support more flexible reasoning about subtle privacy-preserving algorithmic techniques but offer only minimal automation. We propose a three-level logic for differential privacy in an imperative setting and present a prototype implementation called Fuzzi. Fuzzi's lowest level is a general-purpose logic; its middle level is apRHL; and its top level is a novel sensitivity logic adapted from the linear-logic-inspired type system of Fuzz, a differentially private functional language [Reed and Pierce 2010]. The key novelty is a high degree of integration between the sensitivity logic and the two lower-level logics: the judgments and proofs of the sensitivity logic can be easily translated into apRHL; conversely, privacy properties of key algorithmic building blocks can be proved manually in apRHL and the base logic, then packaged up as typing rules that can be applied by a checker for the sensitivity logic to automatically construct privacy proofs for composite programs of arbitrary size. We demonstrate Fuzzi's utility by implementing four different private machine-learning algorithms and showing that Fuzzi's checker is able to derive tight sensitivity bounds.
|
1604.03757
|
Saber Shokat Fadaee
|
Saber Shokat Fadaee, Mohammad Sajjad Ghaemi, Ravi Sundaram, Hossein
Azari Soufiani
|
Chiron: A Robust Recommendation System with Graph Regularizer
| null | null | null | null |
cs.IR cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recommendation systems have been widely used by commercial service providers
for giving suggestions to users. Collaborative filtering (CF) systems, one of
the most popular recommendation systems, utilize the history of behaviors of
the aggregate user-base to provide individual recommendations and are effective
when almost all users faithfully express their opinions. However, they are
vulnerable to malicious users biasing their inputs in order to change the
overall ratings of a specific group of items. CF systems largely fall into two
categories - neighborhood-based and (matrix) factorization-based - and the
presence of adversarial input can influence recommendations in both categories,
leading to instabilities in estimation and prediction. Although the robustness
of different collaborative filtering algorithms has been extensively studied,
designing an efficient system that is immune to manipulation remains a
significant challenge. In this work we propose a novel "hybrid" recommendation
system with an adaptive graph-based user/item similarity-regularization -
"Chiron". Chiron ties the performance benefits of dimensionality reduction
(through factorization) with the advantage of neighborhood clustering (through
regularization). We demonstrate, using extensive comparative experiments, that
Chiron is resistant to manipulation by large and lethal attacks.
|
[
{
"created": "Wed, 13 Apr 2016 13:16:44 GMT",
"version": "v1"
},
{
"created": "Tue, 15 Nov 2016 20:48:15 GMT",
"version": "v2"
}
] |
2016-11-16
|
[
[
"Fadaee",
"Saber Shokat",
""
],
[
"Ghaemi",
"Mohammad Sajjad",
""
],
[
"Sundaram",
"Ravi",
""
],
[
"Soufiani",
"Hossein Azari",
""
]
] |
Recommendation systems have been widely used by commercial service providers for giving suggestions to users. Collaborative filtering (CF) systems, one of the most popular recommendation systems, utilize the history of behaviors of the aggregate user-base to provide individual recommendations and are effective when almost all users faithfully express their opinions. However, they are vulnerable to malicious users biasing their inputs in order to change the overall ratings of a specific group of items. CF systems largely fall into two categories - neighborhood-based and (matrix) factorization-based - and the presence of adversarial input can influence recommendations in both categories, leading to instabilities in estimation and prediction. Although the robustness of different collaborative filtering algorithms has been extensively studied, designing an efficient system that is immune to manipulation remains a significant challenge. In this work we propose a novel "hybrid" recommendation system with an adaptive graph-based user/item similarity-regularization - "Chiron". Chiron ties the performance benefits of dimensionality reduction (through factorization) with the advantage of neighborhood clustering (through regularization). We demonstrate, using extensive comparative experiments, that Chiron is resistant to manipulation by large and lethal attacks.
|
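One way to read Chiron's design is as a factorization objective augmented with a graph regularizer that pulls similar users' latent factors together. The sketch below evaluates such an objective on toy data; the exact formulation in the paper may differ:

```python
def chiron_objective(R, U, V, sim, lam=0.1):
    # R: dict (user, item) -> observed rating
    # U, V: latent factor vectors for users and items
    # sim: dict (user, user) -> similarity weight from the graph
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    # Reconstruction error of the factorization (dimensionality reduction).
    recon = sum((r - dot(U[u], V[i])) ** 2 for (u, i), r in R.items())
    # Graph regularizer: penalize distance between similar users' factors
    # (neighborhood clustering).
    graph = sum(w * sum((x - y) ** 2 for x, y in zip(U[u], U[v]))
                for (u, v), w in sim.items())
    return recon + lam * graph

R = {(0, 0): 1.0, (1, 0): 1.0}
U = [[1.0], [0.8]]
V = [[1.0]]
sim = {(0, 1): 1.0}
loss = chiron_objective(R, U, V, sim)
```

The regularizer is what ties factorization to neighborhood methods: a few manipulated ratings cannot move a user's factors far from those of its genuine neighbors without paying the graph penalty.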
1502.04933
|
Chen He Mr.
|
Chen He, Z. Jane Wang, and Victor C.M. Leung
|
Block-Level Unitary Query: Incorporating Orthogonal-like Space-time Code
with Query Diversity for MIMO Backscatter RFID
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Because of the emerging field of Internet of Things (IoT), future backscatter
RFID is required to be more reliable and data intensive. Motivated by this,
orthogonal space-time block code (OSTBC), which is very successful in mobile
communications for its low complexity and high performance, has already been
investigated for backscatter RFID. On the other hand, a recently proposed
scheme called unitary query was shown to be able to considerably improve the
reliability of backscatter radio by exploiting query diversity. Therefore
incorporating the classical OSTBC (at the tag end) with the recently proposed
unitary query (at the query end) seems to be promising. However, in this paper,
we show that simple, direct employment of OSTBC together with unitary query
incurs a linear decoding problem and eventually leads to a severe performance
degradation. As a re-design of the recently proposed unitary query and the
classical OSTBC specifically for MIMO backscatter RFID, we present a
BUTQ-mOSTBC design pair idea by proposing the block-level unitary query (BUTQ)
at the query end and the corresponding modified OSTBC (mOSTBC) at the tag end.
The proposed BUTQ-mOSTBC can resolve the linear decoding problem, keep the
simplicity and high performance properties of the classical OSTBC, and achieve
the query diversity for the $M \times L \times N$ MIMO backscatter RFID
channel.
|
[
{
"created": "Tue, 17 Feb 2015 15:49:56 GMT",
"version": "v1"
}
] |
2015-02-18
|
[
[
"He",
"Chen",
""
],
[
"Wang",
"Z. Jane",
""
],
[
"Leung",
"Victor C. M.",
""
]
] |
Because of the emerging field of Internet of Things (IoT), future backscatter RFID is required to be more reliable and data intensive. Motivated by this, orthogonal space-time block code (OSTBC), which is very successful in mobile communications for its low complexity and high performance, has already been investigated for backscatter RFID. On the other hand, a recently proposed scheme called unitary query was shown to be able to considerably improve the reliability of backscatter radio by exploiting query diversity. Therefore incorporating the classical OSTBC (at the tag end) with the recently proposed unitary query (at the query end) seems to be promising. However, in this paper, we show that simple, direct employment of OSTBC together with unitary query incurs a linear decoding problem and eventually leads to a severe performance degradation. As a re-design of the recently proposed unitary query and the classical OSTBC specifically for MIMO backscatter RFID, we present a BUTQ-mOSTBC design pair idea by proposing the block-level unitary query (BUTQ) at the query end and the corresponding modified OSTBC (mOSTBC) at the tag end. The proposed BUTQ-mOSTBC can resolve the linear decoding problem, keep the simplicity and high performance properties of the classical OSTBC, and achieve the query diversity for the $M \times L \times N$ MIMO backscatter RFID channel.
|
2207.11167
|
Robert Grossman
|
Robert L. Grossman
|
Ten Lessons for Data Sharing With a Data Commons
| null | null | null | null |
cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
A data commons is a cloud-based data platform with a governance structure
that allows a community to manage, analyze and share its data. Data commons
provide a research community with the ability to manage and analyze large
datasets using the elastic scalability provided by cloud computing and to share
data securely and compliantly, and, in this way, accelerate the pace of
research. Over the past decade, a number of data commons have been developed
and we discuss some of the lessons learned from this effort.
|
[
{
"created": "Fri, 22 Jul 2022 16:12:33 GMT",
"version": "v1"
}
] |
2022-07-25
|
[
[
"Grossman",
"Robert L.",
""
]
] |
A data commons is a cloud-based data platform with a governance structure that allows a community to manage, analyze and share its data. Data commons provide a research community with the ability to manage and analyze large datasets using the elastic scalability provided by cloud computing and to share data securely and compliantly, and, in this way, accelerate the pace of research. Over the past decade, a number of data commons have been developed and we discuss some of the lessons learned from this effort.
|
1610.02806
|
Yao Zhou
|
Yao Zhou, Cong Liu and Yan Pan
|
Modelling Sentence Pairs with Tree-structured Attentive Encoder
|
10 pages, 3 figures, COLING2016
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We describe an attentive encoder that combines tree-structured recursive
neural networks and sequential recurrent neural networks for modelling sentence
pairs. Since existing attentive models exert attention on the sequential
structure, we propose a way to incorporate attention into the tree topology.
Specifically, given a pair of sentences, our attentive encoder uses the
representation of one sentence, which is generated via an RNN, to guide the
structural encoding of the other sentence on the dependency parse tree. We
evaluate the proposed attentive encoder on three tasks: semantic similarity,
paraphrase identification and true-false question selection. Experimental
results show that our encoder outperforms all baselines and achieves
state-of-the-art results on two tasks.
|
[
{
"created": "Mon, 10 Oct 2016 08:52:36 GMT",
"version": "v1"
}
] |
2016-10-11
|
[
[
"Zhou",
"Yao",
""
],
[
"Liu",
"Cong",
""
],
[
"Pan",
"Yan",
""
]
] |
We describe an attentive encoder that combines tree-structured recursive neural networks and sequential recurrent neural networks for modelling sentence pairs. Since existing attentive models exert attention on the sequential structure, we propose a way to incorporate attention into the tree topology. Specifically, given a pair of sentences, our attentive encoder uses the representation of one sentence, which is generated via an RNN, to guide the structural encoding of the other sentence on the dependency parse tree. We evaluate the proposed attentive encoder on three tasks: semantic similarity, paraphrase identification and true-false question selection. Experimental results show that our encoder outperforms all baselines and achieves state-of-the-art results on two tasks.
|
2310.14743
|
Harry Emerson
|
Harry Emerson, Ryan McConville and Matthew Guy
|
The Safety Challenges of Deep Learning in Real-World Type 1 Diabetes
Management
|
15 pages, 3 figures
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Blood glucose simulation allows the effectiveness of type 1 diabetes (T1D)
management strategies to be evaluated without patient harm. Deep learning
algorithms provide a promising avenue for extending simulator capabilities;
however, these algorithms are limited in that they do not necessarily learn
physiologically correct glucose dynamics and can learn incorrect and
potentially dangerous relationships from confounders in training data. This is
likely to be more important in real-world scenarios, as data is not collected
under strict research protocol. This work explores the implications of using
deep learning algorithms trained on real-world data to model glucose dynamics.
Free-living data was processed from the OpenAPS Data Commons and supplemented
with patient-reported tags of challenging diabetes events, constituting one of
the most detailed real-world T1D datasets. This dataset was used to train and
evaluate state-of-the-art glucose simulators, comparing their prediction error
across safety critical scenarios and assessing the physiological
appropriateness of the learned dynamics using Shapley Additive Explanations
(SHAP). While deep learning prediction accuracy surpassed the widely-used
mathematical simulator approach, the model deteriorated in safety critical
scenarios and struggled to leverage self-reported meal and exercise
information. SHAP value analysis also indicated the model had fundamentally
confused the roles of insulin and carbohydrates, which is one of the most basic
T1D management principles. This work highlights the importance of considering
physiological appropriateness when using deep learning to model real-world
systems in T1D and healthcare more broadly, and provides recommendations for
building models that are robust to real-world data constraints.
|
[
{
"created": "Mon, 23 Oct 2023 09:25:50 GMT",
"version": "v1"
}
] |
2023-10-24
|
[
[
"Emerson",
"Harry",
""
],
[
"McConville",
"Ryan",
""
],
[
"Guy",
"Matthew",
""
]
] |
Blood glucose simulation allows the effectiveness of type 1 diabetes (T1D) management strategies to be evaluated without patient harm. Deep learning algorithms provide a promising avenue for extending simulator capabilities; however, these algorithms are limited in that they do not necessarily learn physiologically correct glucose dynamics and can learn incorrect and potentially dangerous relationships from confounders in training data. This is likely to be more important in real-world scenarios, as data is not collected under strict research protocol. This work explores the implications of using deep learning algorithms trained on real-world data to model glucose dynamics. Free-living data was processed from the OpenAPS Data Commons and supplemented with patient-reported tags of challenging diabetes events, constituting one of the most detailed real-world T1D datasets. This dataset was used to train and evaluate state-of-the-art glucose simulators, comparing their prediction error across safety critical scenarios and assessing the physiological appropriateness of the learned dynamics using Shapley Additive Explanations (SHAP). While deep learning prediction accuracy surpassed the widely-used mathematical simulator approach, the model deteriorated in safety critical scenarios and struggled to leverage self-reported meal and exercise information. SHAP value analysis also indicated the model had fundamentally confused the roles of insulin and carbohydrates, which is one of the most basic T1D management principles. This work highlights the importance of considering physiological appropriateness when using deep learning to model real-world systems in T1D and healthcare more broadly, and provides recommendations for building models that are robust to real-world data constraints.
|
1708.03417
|
Seungkyun Hong
|
Seungkyun Hong, Seongchan Kim, Minsu Joh, Sa-kwang Song
|
GlobeNet: Convolutional Neural Networks for Typhoon Eye Tracking from
Remote Sensing Imagery
|
Under review as a workshop paper at CI 2017
| null | null | null |
cs.NE cs.AI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Advances in remote sensing technologies have made it possible to use
high-resolution visual data for weather observation and forecasting tasks. We
propose the use of multi-layer neural networks for understanding complex
atmospheric dynamics based on multichannel satellite images. The capability of
our model was evaluated by using a linear regression task for single typhoon
coordinates prediction. A specific combination of models and different
activation policies enabled us to obtain an interesting prediction result in
the northeastern hemisphere (ENH).
|
[
{
"created": "Fri, 11 Aug 2017 00:41:56 GMT",
"version": "v1"
}
] |
2017-11-30
|
[
[
"Hong",
"Seungkyun",
""
],
[
"Kim",
"Seongchan",
""
],
[
"Joh",
"Minsu",
""
],
[
"Song",
"Sa-kwang",
""
]
] |
Advances in remote sensing technologies have made it possible to use high-resolution visual data for weather observation and forecasting tasks. We propose the use of multi-layer neural networks for understanding complex atmospheric dynamics based on multichannel satellite images. The capability of our model was evaluated by using a linear regression task for single typhoon coordinates prediction. A specific combination of models and different activation policies enabled us to obtain an interesting prediction result in the northeastern hemisphere (ENH).
|
2309.04663
|
Xinyi Wang
|
Xinyi Wang, John Wieting, Jonathan H. Clark
|
FIAT: Fusing learning paradigms with Instruction-Accelerated Tuning
| null | null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Learning paradigms for large language models (LLMs) currently tend to fall
within either in-context learning (ICL) or full fine-tuning. Each of these
comes with its own trade-offs based on available data, model size, compute
cost, ease of use, and final quality, with neither solution performing well
across the board. In this article, we first describe ICL and fine-tuning
paradigms in a way that highlights their natural connections. Based on these
connections, we propose a new learning paradigm called FIAT that fuses the best
of these paradigms together, enabling prompt-engineered instructions and
chain-of-thought reasoning with the very largest models while also using
similar methods to perform parameter updates on a modestly-sized LLM with
parameter-efficient tuning. We evaluate FIAT's effectiveness on a variety of
multilingual tasks and observe that FIAT performs better than both ICL and
fine-tuning at scales ranging from 100-10,000 training examples. We hope that
FIAT provides a practical way of harnessing the full potential of LLMs without
needing to make a hard choice between learning paradigms.
|
[
{
"created": "Sat, 9 Sep 2023 02:43:48 GMT",
"version": "v1"
},
{
"created": "Tue, 12 Sep 2023 14:34:03 GMT",
"version": "v2"
}
] |
2023-09-13
|
[
[
"Wang",
"Xinyi",
""
],
[
"Wieting",
"John",
""
],
[
"Clark",
"Jonathan H.",
""
]
] |
Learning paradigms for large language models (LLMs) currently tend to fall within either in-context learning (ICL) or full fine-tuning. Each of these comes with its own trade-offs based on available data, model size, compute cost, ease of use, and final quality, with neither solution performing well across the board. In this article, we first describe ICL and fine-tuning paradigms in a way that highlights their natural connections. Based on these connections, we propose a new learning paradigm called FIAT that fuses the best of these paradigms together, enabling prompt-engineered instructions and chain-of-thought reasoning with the very largest models while also using similar methods to perform parameter updates on a modestly-sized LLM with parameter-efficient tuning. We evaluate FIAT's effectiveness on a variety of multilingual tasks and observe that FIAT performs better than both ICL and fine-tuning at scales ranging from 100-10,000 training examples. We hope that FIAT provides a practical way of harnessing the full potential of LLMs without needing to make a hard choice between learning paradigms.
|
2407.03471
|
Benno Krojer
|
Benno Krojer, Dheeraj Vattikonda, Luis Lara, Varun Jampani, Eva
Portelance, Christopher Pal, Siva Reddy
|
Learning Action and Reasoning-Centric Image Editing from Videos and
Simulations
|
Submitted to NeurIPS (Dataset & Benchmarks)
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
An image editing model should be able to perform diverse edits, ranging from
object replacement, changing attributes or style, to performing actions or
movement, which require many forms of reasoning. Current general
instruction-guided editing models have significant shortcomings with action and
reasoning-centric edits. Object, attribute or stylistic changes can be learned
from visually static datasets. On the other hand, high-quality data for action
and reasoning-centric edits is scarce and has to come from entirely different
sources that cover e.g. physical dynamics, temporality and spatial reasoning.
To this end, we meticulously curate the AURORA Dataset
(Action-Reasoning-Object-Attribute), a collection of high-quality training
data, human-annotated and curated from videos and simulation engines. We focus
on a key aspect of quality training data: triplets (source image, prompt,
target image) contain a single meaningful visual change described by the
prompt, i.e., truly minimal changes between source and target images. To
demonstrate the value of our dataset, we evaluate an AURORA-finetuned model on
a new expert-curated benchmark (AURORA-Bench) covering 8 diverse editing tasks.
Our model significantly outperforms previous editing models as judged by human
raters. For automatic evaluations, we find important flaws in previous metrics
and caution their use for semantically hard editing tasks. Instead, we propose
a new automatic metric that focuses on discriminative understanding. We hope
that our efforts: (1) curating a quality training dataset and an evaluation
benchmark, (2) developing critical evaluations, and (3) releasing a
state-of-the-art model, will fuel further progress on general image editing.
|
[
{
"created": "Wed, 3 Jul 2024 19:36:33 GMT",
"version": "v1"
},
{
"created": "Fri, 9 Aug 2024 22:03:59 GMT",
"version": "v2"
}
] |
2024-08-13
|
[
[
"Krojer",
"Benno",
""
],
[
"Vattikonda",
"Dheeraj",
""
],
[
"Lara",
"Luis",
""
],
[
"Jampani",
"Varun",
""
],
[
"Portelance",
"Eva",
""
],
[
"Pal",
"Christopher",
""
],
[
"Reddy",
"Siva",
""
]
] |
An image editing model should be able to perform diverse edits, ranging from object replacement, changing attributes or style, to performing actions or movement, which require many forms of reasoning. Current general instruction-guided editing models have significant shortcomings with action and reasoning-centric edits. Object, attribute or stylistic changes can be learned from visually static datasets. On the other hand, high-quality data for action and reasoning-centric edits is scarce and has to come from entirely different sources that cover e.g. physical dynamics, temporality and spatial reasoning. To this end, we meticulously curate the AURORA Dataset (Action-Reasoning-Object-Attribute), a collection of high-quality training data, human-annotated and curated from videos and simulation engines. We focus on a key aspect of quality training data: triplets (source image, prompt, target image) contain a single meaningful visual change described by the prompt, i.e., truly minimal changes between source and target images. To demonstrate the value of our dataset, we evaluate an AURORA-finetuned model on a new expert-curated benchmark (AURORA-Bench) covering 8 diverse editing tasks. Our model significantly outperforms previous editing models as judged by human raters. For automatic evaluations, we find important flaws in previous metrics and caution their use for semantically hard editing tasks. Instead, we propose a new automatic metric that focuses on discriminative understanding. We hope that our efforts: (1) curating a quality training dataset and an evaluation benchmark, (2) developing critical evaluations, and (3) releasing a state-of-the-art model, will fuel further progress on general image editing.
|
2112.11806
|
Alice Tarzariol
|
Alice Tarzariol, Martin Gebser, Konstantin Schekotihin
|
Lifting Symmetry Breaking Constraints with Inductive Logic Programming
|
to appear in Machine Learning Journal
|
Machine Learning (2022)
|
10.1007/s10994-022-06146-3
| null |
cs.LO cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Efficient omission of symmetric solution candidates is essential for
combinatorial problem-solving. Most of the existing approaches are
instance-specific and focus on the automatic computation of Symmetry Breaking
Constraints (SBCs) for each given problem instance. However, the application of
such approaches to large-scale instances or advanced problem encodings might be
problematic since the computed SBCs are propositional and, therefore, can
neither be meaningfully interpreted nor transferred to other instances. As a
result, a time-consuming recomputation of SBCs must be done before every
invocation of a solver.
To overcome these limitations, we introduce a new model-oriented approach for
Answer Set Programming that lifts the SBCs of small problem instances into a
set of interpretable first-order constraints using the Inductive Logic
Programming paradigm. Experiments demonstrate the ability of our framework to
learn general constraints from instance-specific SBCs for a collection of
combinatorial problems. The obtained results indicate that our approach
significantly outperforms a state-of-the-art instance-specific method as well
as the direct application of a solver.
|
[
{
"created": "Wed, 22 Dec 2021 11:27:48 GMT",
"version": "v1"
},
{
"created": "Thu, 23 Dec 2021 12:41:28 GMT",
"version": "v2"
}
] |
2022-04-26
|
[
[
"Tarzariol",
"Alice",
""
],
[
"Gebser",
"Martin",
""
],
[
"Schekotihin",
"Konstantin",
""
]
] |
Efficient omission of symmetric solution candidates is essential for combinatorial problem-solving. Most of the existing approaches are instance-specific and focus on the automatic computation of Symmetry Breaking Constraints (SBCs) for each given problem instance. However, the application of such approaches to large-scale instances or advanced problem encodings might be problematic since the computed SBCs are propositional and, therefore, can neither be meaningfully interpreted nor transferred to other instances. As a result, a time-consuming recomputation of SBCs must be done before every invocation of a solver. To overcome these limitations, we introduce a new model-oriented approach for Answer Set Programming that lifts the SBCs of small problem instances into a set of interpretable first-order constraints using the Inductive Logic Programming paradigm. Experiments demonstrate the ability of our framework to learn general constraints from instance-specific SBCs for a collection of combinatorial problems. The obtained results indicate that our approach significantly outperforms a state-of-the-art instance-specific method as well as the direct application of a solver.
|
1309.7959
|
Laurens Bliek
|
Laurens Bliek
|
Exploration and Exploitation in Visuomotor Prediction of Autonomous
Agents
|
Award-winning paper of the internal conference 'Almende research
workshop 2013'
| null | null | null |
cs.LG cs.CV math.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper discusses various techniques that let an agent learn to predict
the effects of its own actions on its sensor data autonomously, and their
applicability to visual sensors. An Extreme Learning Machine is used
for visuomotor prediction, while various autonomous control techniques that can
aid the prediction process by balancing exploration and exploitation are
discussed and tested in a simple system: a camera moving over a 2D greyscale
image.
|
[
{
"created": "Thu, 19 Sep 2013 07:10:53 GMT",
"version": "v1"
}
] |
2013-10-01
|
[
[
"Bliek",
"Laurens",
""
]
] |
This paper discusses various techniques that let an agent learn to predict the effects of its own actions on its sensor data autonomously, and their applicability to visual sensors. An Extreme Learning Machine is used for visuomotor prediction, while various autonomous control techniques that can aid the prediction process by balancing exploration and exploitation are discussed and tested in a simple system: a camera moving over a 2D greyscale image.
|
2006.16529
|
Jia Zou
|
Jia Zou, Amitabh Das, Pratik Barhate, Arun Iyengar, Binhang Yuan,
Dimitrije Jankov, Chris Jermaine
|
Lachesis: Automatic Partitioning for UDF-Centric Analytics
|
In submission
| null | null | null |
cs.DB cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Persistent partitioning is effective in avoiding expensive shuffling
operations. However, it remains a significant challenge to automate this process
for Big Data analytics workloads that extensively use user-defined functions
(UDFs), where sub-computations are hard to reuse for partitioning compared
to relational applications. In addition, the functional dependencies widely
utilized for partitioning selection are often unavailable in the unstructured
data that is ubiquitous in UDF-centric analytics. We propose the Lachesis
system, which represents UDF-centric workloads as workflows of analyzable and
reusable sub-computations. Lachesis further adopts a deep reinforcement
learning model to infer which sub-computations should be used to partition the
underlying data. This analysis is then applied to automatically optimize the
storage of the data across applications to improve the performance and users'
productivity.
|
[
{
"created": "Tue, 30 Jun 2020 04:49:44 GMT",
"version": "v1"
},
{
"created": "Thu, 23 Jul 2020 02:15:27 GMT",
"version": "v2"
},
{
"created": "Sun, 2 Aug 2020 00:25:31 GMT",
"version": "v3"
},
{
"created": "Sat, 10 Oct 2020 08:43:46 GMT",
"version": "v4"
},
{
"created": "Mon, 22 Feb 2021 08:21:08 GMT",
"version": "v5"
}
] |
2021-02-23
|
[
[
"Zou",
"Jia",
""
],
[
"Das",
"Amitabh",
""
],
[
"Barhate",
"Pratik",
""
],
[
"Iyengar",
"Arun",
""
],
[
"Yuan",
"Binhang",
""
],
[
"Jankov",
"Dimitrije",
""
],
[
"Jermaine",
"Chris",
""
]
] |
Persistent partitioning is effective in avoiding expensive shuffling operations. However, it remains a significant challenge to automate this process for Big Data analytics workloads that extensively use user-defined functions (UDFs), where sub-computations are hard to reuse for partitioning compared to relational applications. In addition, the functional dependencies widely utilized for partitioning selection are often unavailable in the unstructured data that is ubiquitous in UDF-centric analytics. We propose the Lachesis system, which represents UDF-centric workloads as workflows of analyzable and reusable sub-computations. Lachesis further adopts a deep reinforcement learning model to infer which sub-computations should be used to partition the underlying data. This analysis is then applied to automatically optimize the storage of the data across applications to improve the performance and users' productivity.
|
2111.05402
|
Robin Weishaupt
|
Peter Kern, Daniel Neugebauer, Jörg Rothe, René L. Schilling,
Dietrich Stoyan, Robin Weishaupt
|
Cutting a Cake Is Not Always a 'Piece of Cake': A Closer Look at the
Foundations of Cake-Cutting Through the Lens of Measure Theory
| null | null | null | null |
cs.GT cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cake-cutting is a playful name for the fair division of a heterogeneous,
divisible good among agents, a well-studied problem at the intersection of
mathematics, economics, and artificial intelligence. The cake-cutting
literature is rich and edifying. However, different model assumptions are made
in its many papers, in particular regarding the set of allowed pieces of cake
that are to be distributed among the agents and regarding the agents' valuation
functions by which they measure these pieces. We survey the commonly used
definitions in the cake-cutting literature, highlight their strengths and
weaknesses, and make some recommendations on what definitions could be most
reasonably used when looking through the lens of measure theory.
|
[
{
"created": "Tue, 9 Nov 2021 20:18:41 GMT",
"version": "v1"
},
{
"created": "Tue, 21 Nov 2023 19:52:10 GMT",
"version": "v2"
}
] |
2023-11-23
|
[
[
"Kern",
"Peter",
""
],
[
"Neugebauer",
"Daniel",
""
],
[
"Rothe",
"Jörg",
""
],
[
"Schilling",
"René L.",
""
],
[
"Stoyan",
"Dietrich",
""
],
[
"Weishaupt",
"Robin",
""
]
] |
Cake-cutting is a playful name for the fair division of a heterogeneous, divisible good among agents, a well-studied problem at the intersection of mathematics, economics, and artificial intelligence. The cake-cutting literature is rich and edifying. However, different model assumptions are made in its many papers, in particular regarding the set of allowed pieces of cake that are to be distributed among the agents and regarding the agents' valuation functions by which they measure these pieces. We survey the commonly used definitions in the cake-cutting literature, highlight their strengths and weaknesses, and make some recommendations on what definitions could be most reasonably used when looking through the lens of measure theory.
|
1502.01157
|
Majed Haddad
|
Piotr Wiecek, Majed Haddad, Oussama Habachi and Yezekael Hayel
|
Toward Fully Coordinated Multi-level Multi-carrier Energy Efficient
Networks
|
9 pages, 6 figures
| null | null | null |
cs.NI cs.GT cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Enabling coordination between products from different vendors is a key
characteristic of the design philosophy behind future wireless communication
networks. As an example, different devices may have different implementations,
leading to different user experiences. A similar story emerges when devices
running different physical and link layer protocols share frequencies in the
same spectrum in order to maximize the system-wide spectral efficiency. In such
situations, coordinating multiple interfering devices presents a significant
challenge not only from an interworking perspective (as a result of reduced
infrastructure), but also from an implementation point of view. The following
question may then naturally arise: How to accommodate integrating such
heterogeneous wireless devices seamlessly? One approach is to coordinate the
spectrum in a centralized manner. However, the desired autonomous feature of
future wireless systems makes the use of a central authority for spectrum
management less appealing. Alternatively, intelligent spectrum coordination has
spurred great interest and excitement in recent years. This paper presents
a multi-level (hierarchical) power control game where users jointly choose
their channel control and power control selfishly in order to maximize their
individual energy efficiency. By hierarchical, we mean that some users'
decision priority is higher/lower than the others. We propose two simple and
nearly-optimal algorithms that ensure complete spectrum coordination among
users. Interestingly, it turns out that the complexity of the two proposed
algorithms is, in the worst case, quadratic in the number of users, whereas the
complexity of the optimal solution (obtained through exhaustive search) is N!.
|
[
{
"created": "Wed, 4 Feb 2015 11:13:00 GMT",
"version": "v1"
}
] |
2015-02-05
|
[
[
"Wiecek",
"Piotr",
""
],
[
"Haddad",
"Majed",
""
],
[
"Habachi",
"Oussama",
""
],
[
"Hayel",
"Yezekael",
""
]
] |
Enabling coordination between products from different vendors is a key characteristic of the design philosophy behind future wireless communication networks. As an example, different devices may have different implementations, leading to different user experiences. A similar story emerges when devices running different physical and link layer protocols share frequencies in the same spectrum in order to maximize the system-wide spectral efficiency. In such situations, coordinating multiple interfering devices presents a significant challenge not only from an interworking perspective (as a result of reduced infrastructure), but also from an implementation point of view. The following question may then naturally arise: How to accommodate integrating such heterogeneous wireless devices seamlessly? One approach is to coordinate the spectrum in a centralized manner. However, the desired autonomous feature of future wireless systems makes the use of a central authority for spectrum management less appealing. Alternatively, intelligent spectrum coordination has spurred great interest and excitement in recent years. This paper presents a multi-level (hierarchical) power control game where users jointly choose their channel control and power control selfishly in order to maximize their individual energy efficiency. By hierarchical, we mean that some users' decision priority is higher/lower than the others. We propose two simple and nearly-optimal algorithms that ensure complete spectrum coordination among users. Interestingly, it turns out that the complexity of the two proposed algorithms is, in the worst case, quadratic in the number of users, whereas the complexity of the optimal solution (obtained through exhaustive search) is N!.
|
2403.18761
|
Ningna Wang
|
Ningna Wang, Hui Huang, Shibo Song, Bin Wang, Wenping Wang, Xiaohu Guo
|
MATTopo: Topology-preserving Medial Axis Transform with Restricted Power
Diagram
| null | null | null | null |
cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a novel topology-preserving 3D medial axis computation framework
based on volumetric restricted power diagram (RPD), while preserving the medial
features and geometric convergence simultaneously, for both 3D CAD and organic
shapes. The volumetric RPD discretizes the input 3D volume into sub-regions
given a set of medial spheres. With this intermediate structure, we convert the
homotopy equivalence between the generated medial mesh and the input 3D shape
into a localized contractibility checking for each restricted element (power
cell, power face, power edge), by checking their connected components and Euler
characteristics. We further propose a fractional Euler characteristic algorithm
for efficient GPU-based computation of Euler characteristic for each restricted
element on the fly while computing the volumetric RPD. Compared with existing
voxel-based or point-cloud-based methods, our approach is the first to
adaptively and directly revise the medial mesh without globally modifying the
dependent structure, such as voxel size or sampling density, while preserving
its topology and medial features. In comparison with the feature preservation
method MATFP, our method provides geometrically comparable results with fewer
spheres and more robustly captures the topology of the input 3D shape.
|
[
{
"created": "Wed, 27 Mar 2024 16:59:21 GMT",
"version": "v1"
},
{
"created": "Tue, 21 May 2024 22:18:45 GMT",
"version": "v2"
}
] |
2024-05-24
|
[
[
"Wang",
"Ningna",
""
],
[
"Huang",
"Hui",
""
],
[
"Song",
"Shibo",
""
],
[
"Wang",
"Bin",
""
],
[
"Wang",
"Wenping",
""
],
[
"Guo",
"Xiaohu",
""
]
] |
We present a novel topology-preserving 3D medial axis computation framework based on volumetric restricted power diagram (RPD), while preserving the medial features and geometric convergence simultaneously, for both 3D CAD and organic shapes. The volumetric RPD discretizes the input 3D volume into sub-regions given a set of medial spheres. With this intermediate structure, we convert the homotopy equivalence between the generated medial mesh and the input 3D shape into a localized contractibility checking for each restricted element (power cell, power face, power edge), by checking their connected components and Euler characteristics. We further propose a fractional Euler characteristic algorithm for efficient GPU-based computation of Euler characteristic for each restricted element on the fly while computing the volumetric RPD. Compared with existing voxel-based or point-cloud-based methods, our approach is the first to adaptively and directly revise the medial mesh without globally modifying the dependent structure, such as voxel size or sampling density, while preserving its topology and medial features. In comparison with the feature preservation method MATFP, our method provides geometrically comparable results with fewer spheres and more robustly captures the topology of the input 3D shape.
|
1511.08250
|
Bernardino Romera-Paredes
|
Bernardino Romera-Paredes, Philip H. S. Torr
|
Recurrent Instance Segmentation
|
14 pages (main paper). 24 pages including references and appendix
|
ECCV 2016. 14th European Conference on Computer Vision
| null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Instance segmentation is the problem of detecting and delineating each
distinct object of interest appearing in an image. Current instance
segmentation approaches consist of ensembles of modules that are trained
independently of each other, thus missing opportunities for joint learning.
Here we propose a new instance segmentation paradigm consisting of an
end-to-end method that learns how to segment instances sequentially. The model
is based on a recurrent neural network that sequentially finds objects and
their segmentations one at a time. This net is provided with a spatial memory
that keeps track of what pixels have been explained and allows occlusion
handling. In order to train the model we designed a principled loss function
that accurately represents the properties of the instance segmentation problem.
In the experiments carried out, we found that our method outperforms recent
approaches on multiple person segmentation, and all state-of-the-art approaches
on the Plant Phenotyping dataset for leaf counting.
|
[
{
"created": "Wed, 25 Nov 2015 23:28:14 GMT",
"version": "v1"
},
{
"created": "Tue, 5 Apr 2016 22:45:04 GMT",
"version": "v2"
},
{
"created": "Mon, 24 Oct 2016 23:57:19 GMT",
"version": "v3"
}
] |
2016-10-26
|
[
[
"Romera-Paredes",
"Bernardino",
""
],
[
"Torr",
"Philip H. S.",
""
]
] |
Instance segmentation is the problem of detecting and delineating each distinct object of interest appearing in an image. Current instance segmentation approaches consist of ensembles of modules that are trained independently of each other, thus missing opportunities for joint learning. Here we propose a new instance segmentation paradigm consisting of an end-to-end method that learns how to segment instances sequentially. The model is based on a recurrent neural network that sequentially finds objects and their segmentations one at a time. This net is provided with a spatial memory that keeps track of what pixels have been explained and allows occlusion handling. In order to train the model we designed a principled loss function that accurately represents the properties of the instance segmentation problem. In the experiments carried out, we found that our method outperforms recent approaches on multiple person segmentation, and all state-of-the-art approaches on the Plant Phenotyping dataset for leaf counting.
|
1910.03478
|
Javier Cabrera Arteaga
|
Javier Cabrera-Arteaga, Martin Monperrus and Benoit Baudry
|
Scalable Comparison of JavaScript V8 Bytecode Traces
|
10 pages, 6 figures, 2 tables
|
Proceedings of the SPLASH workshop on Virtual Machines and
Language Implementations (VMIL), 2019
|
10.1145/3358504.3361228
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The comparison and alignment of runtime traces are essential, e.g., for
semantic analysis or debugging. However, naive sequence alignment algorithms
cannot address the needs of the modern web: (i) the bytecode generation process
of V8 is not deterministic; (ii) bytecode traces are large.
We present STRAC, a scalable and extensible tool tailored to compare bytecode
traces generated by the V8 JavaScript engine. Given two V8 bytecode traces and
a distance function between trace events, STRAC computes and provides the best
alignment. The key insight is to split access between memory and disk. STRAC
can identify semantically equivalent web pages and is capable of processing
huge V8 bytecode traces on the scale of today's web pages, such as
https://2019.splashcon.org, which generates approx. 150k V8 bytecode
instructions.
|
[
{
"created": "Tue, 8 Oct 2019 15:46:42 GMT",
"version": "v1"
}
] |
2019-10-17
|
[
[
"Cabrera-Arteaga",
"Javier",
""
],
[
"Monperrus",
"Martin",
""
],
[
"Baudry",
"Benoit",
""
]
] |
The comparison and alignment of runtime traces are essential, e.g., for semantic analysis or debugging. However, naive sequence alignment algorithms cannot address the needs of the modern web: (i) the bytecode generation process of V8 is not deterministic; (ii) bytecode traces are large. We present STRAC, a scalable and extensible tool tailored to compare bytecode traces generated by the V8 JavaScript engine. Given two V8 bytecode traces and a distance function between trace events, STRAC computes and provides the best alignment. The key insight is to split access between memory and disk. STRAC can identify semantically equivalent web pages and is capable of processing huge V8 bytecode traces on the scale of today's web pages, such as https://2019.splashcon.org, which generates approx. 150k V8 bytecode instructions.
|
2207.13126
|
Tao Lin
|
Tao Lin, Yiling Chen
|
Sample Complexity of Forecast Aggregation
|
Accepted by NeurIPS 2023 (spotlight)
| null | null | null |
cs.LG cs.GT econ.TH
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider a Bayesian forecast aggregation model where $n$ experts, after
observing private signals about an unknown binary event, report their posterior
beliefs about the event to a principal, who then aggregates the reports into a
single prediction for the event. The signals of the experts and the outcome of
the event follow a joint distribution that is unknown to the principal, but the
principal has access to i.i.d. "samples" from the distribution, where each
sample is a tuple of the experts' reports (not signals) and the realization of
the event. Using these samples, the principal aims to find an
$\varepsilon$-approximately optimal aggregator, where optimality is measured in
terms of the expected squared distance between the aggregated prediction and
the realization of the event. We show that the sample complexity of this
problem is at least $\tilde \Omega(m^{n-2} / \varepsilon)$ for arbitrary
discrete distributions, where $m$ is the size of each expert's signal space.
This sample complexity grows exponentially in the number of experts $n$. But,
if the experts' signals are independent conditioned on the realization of the
event, then the sample complexity is significantly reduced, to $\tilde O(1 /
\varepsilon^2)$, which does not depend on $n$. Our results can be generalized
to non-binary events. The proof of our results uses a reduction from the
distribution learning problem and reveals the fact that forecast aggregation is
almost as difficult as distribution learning.
|
[
{
"created": "Tue, 26 Jul 2022 18:12:53 GMT",
"version": "v1"
},
{
"created": "Mon, 7 Nov 2022 22:33:33 GMT",
"version": "v2"
},
{
"created": "Thu, 1 Jun 2023 16:45:10 GMT",
"version": "v3"
},
{
"created": "Tue, 10 Oct 2023 04:04:17 GMT",
"version": "v4"
}
] |
2023-10-11
|
[
[
"Lin",
"Tao",
""
],
[
"Chen",
"Yiling",
""
]
] |
We consider a Bayesian forecast aggregation model where $n$ experts, after observing private signals about an unknown binary event, report their posterior beliefs about the event to a principal, who then aggregates the reports into a single prediction for the event. The signals of the experts and the outcome of the event follow a joint distribution that is unknown to the principal, but the principal has access to i.i.d. "samples" from the distribution, where each sample is a tuple of the experts' reports (not signals) and the realization of the event. Using these samples, the principal aims to find an $\varepsilon$-approximately optimal aggregator, where optimality is measured in terms of the expected squared distance between the aggregated prediction and the realization of the event. We show that the sample complexity of this problem is at least $\tilde \Omega(m^{n-2} / \varepsilon)$ for arbitrary discrete distributions, where $m$ is the size of each expert's signal space. This sample complexity grows exponentially in the number of experts $n$. But, if the experts' signals are independent conditioned on the realization of the event, then the sample complexity is significantly reduced, to $\tilde O(1 / \varepsilon^2)$, which does not depend on $n$. Our results can be generalized to non-binary events. The proof of our results uses a reduction from the distribution learning problem and reveals the fact that forecast aggregation is almost as difficult as distribution learning.
|
1411.6478
|
Francois Taiani
|
Roy Friedman, Michel Raynal (IUF, UR1, ASAP), François Taïani
(UR1, ASAP)
|
Fisheye Consistency: Keeping Data in Synch in a Georeplicated World
| null | null | null | null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Over the last thirty years, numerous consistency conditions for replicated
data have been proposed and implemented. Popular examples of such conditions
include linearizability (or atomicity), sequential consistency, causal
consistency, and eventual consistency. These consistency conditions are usually
defined independently from the computing entities (nodes) that manipulate the
replicated data; i.e., they do not take into account how computing entities
might be linked to one another, or geographically distributed. To address this
lack, as a first contribution, this paper introduces the notion of proximity
graph between computing nodes. If two nodes are connected in this graph, their
operations must satisfy a strong consistency condition, while the operations
invoked by other nodes are allowed to satisfy a weaker condition. The second
contribution is the use of such a graph to provide a generic approach to the
hybridization of data consistency conditions into the same system. We
illustrate this approach on sequential consistency and causal consistency, and
present a model in which all data operations are causally consistent, while
operations by neighboring processes in the proximity graph are sequentially
consistent. The third contribution of the paper is the design and the proof of
a distributed algorithm based on this proximity graph, which combines
sequential consistency and causal consistency (the resulting condition is
called fisheye consistency). In doing so the paper not only extends the domain
of consistency conditions, but provides a generic provably correct solution of
direct relevance to modern georeplicated systems.
|
[
{
"created": "Mon, 24 Nov 2014 15:12:39 GMT",
"version": "v1"
},
{
"created": "Thu, 22 Oct 2015 10:03:58 GMT",
"version": "v2"
}
] |
2015-10-23
|
[
[
"Friedman",
"Roy",
"",
"IUF, UR1, ASAP"
],
[
"Raynal",
"Michel",
"",
"IUF, UR1, ASAP"
],
[
"Taïani",
"François",
"",
"UR1, ASAP"
]
] |
Over the last thirty years, numerous consistency conditions for replicated data have been proposed and implemented. Popular examples of such conditions include linearizability (or atomicity), sequential consistency, causal consistency, and eventual consistency. These consistency conditions are usually defined independently from the computing entities (nodes) that manipulate the replicated data; i.e., they do not take into account how computing entities might be linked to one another, or geographically distributed. To address this lack, as a first contribution, this paper introduces the notion of proximity graph between computing nodes. If two nodes are connected in this graph, their operations must satisfy a strong consistency condition, while the operations invoked by other nodes are allowed to satisfy a weaker condition. The second contribution is the use of such a graph to provide a generic approach to the hybridization of data consistency conditions into the same system. We illustrate this approach on sequential consistency and causal consistency, and present a model in which all data operations are causally consistent, while operations by neighboring processes in the proximity graph are sequentially consistent. The third contribution of the paper is the design and the proof of a distributed algorithm based on this proximity graph, which combines sequential consistency and causal consistency (the resulting condition is called fisheye consistency). In doing so the paper not only extends the domain of consistency conditions, but provides a generic provably correct solution of direct relevance to modern georeplicated systems.
|
2207.06416
|
Rohit Shaw
|
Rohit Shaw
|
Collaborative Machine Learning-Driven Internet of Medical Things -- A
Systematic Literature Review
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
The growing adoption of IoT devices for healthcare has enabled researchers to
build intelligence using all the data produced by these devices. Monitoring and
diagnosing health have been the two most common scenarios where such devices
have proven beneficial. Achieving high prediction accuracy was a top priority
initially, but the focus has slowly shifted to efficiency and higher
throughput, and processing the data from these devices in a distributed manner
has proven to help achieve both. Since the field of machine learning is vast
with numerous state-of-the-art algorithms in play, it has been a challenge to
identify the algorithms that perform best in different scenarios. In this
literature review, we explored the distributed machine learning algorithms
tested by the authors of the selected studies and identified the ones that
achieved the best prediction accuracy in each healthcare scenario. While no
single algorithm performed consistently best, Random Forest performed the best
in a few studies. This could serve as a good starting point for future studies on
collaborative machine learning on IoMT data.
|
[
{
"created": "Wed, 13 Jul 2022 12:28:17 GMT",
"version": "v1"
}
] |
2022-07-15
|
[
[
"Shaw",
"Rohit",
""
]
] |
The growing adoption of IoT devices for healthcare has enabled researchers to build intelligence using all the data produced by these devices. Monitoring and diagnosing health have been the two most common scenarios where such devices have proven beneficial. Achieving high prediction accuracy was a top priority initially, but the focus has slowly shifted to efficiency and higher throughput, and processing the data from these devices in a distributed manner has proven to help achieve both. Since the field of machine learning is vast with numerous state-of-the-art algorithms in play, it has been a challenge to identify the algorithms that perform best in different scenarios. In this literature review, we explored the distributed machine learning algorithms tested by the authors of the selected studies and identified the ones that achieved the best prediction accuracy in each healthcare scenario. While no single algorithm performed consistently best, Random Forest performed the best in a few studies. This could serve as a good starting point for future studies on collaborative machine learning on IoMT data.
|
2001.09415
|
Pengfei Zhu
|
Pengfei Zhu and Hai Zhao and Xiaoguang Li
|
DUMA: Reading Comprehension with Transposition Thinking
| null |
IEEE/ACM.Transactions.on.Audio.Speech.and.Language.Processing 30
(2022) 269-279
|
10.1109/TASLP.2021.3138683
| null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-choice Machine Reading Comprehension (MRC) requires a model to decide the
correct answer from a set of answer options when given a passage and a
question. Thus in addition to a powerful Pre-trained Language Model (PrLM) as
encoder, multi-choice MRC especially relies on a matching network design which
is supposed to effectively capture the relationships among the triplet of
passage, question and answers. While the newer and more powerful PrLMs have
shown their mightiness even without the support from a matching network, we
propose a new DUal Multi-head Co-Attention (DUMA) model, which is inspired by
human's transposition thinking process when solving the multi-choice MRC problem:
respectively considering each other's focus from the standpoint of passage and
question. The proposed DUMA has been shown effective and is capable of
generally promoting PrLMs. Our proposed method is evaluated on two benchmark
multi-choice MRC tasks, DREAM and RACE, showing that in terms of powerful
PrLMs, DUMA can still boost the model to reach new state-of-the-art
performance.
|
[
{
"created": "Sun, 26 Jan 2020 07:35:02 GMT",
"version": "v1"
},
{
"created": "Wed, 5 Feb 2020 13:59:48 GMT",
"version": "v2"
},
{
"created": "Sat, 8 Feb 2020 03:47:36 GMT",
"version": "v3"
},
{
"created": "Wed, 18 Mar 2020 12:53:23 GMT",
"version": "v4"
},
{
"created": "Tue, 15 Sep 2020 07:16:15 GMT",
"version": "v5"
}
] |
2022-01-17
|
[
[
"Zhu",
"Pengfei",
""
],
[
"Zhao",
"Hai",
""
],
[
"Li",
"Xiaoguang",
""
]
] |
Multi-choice Machine Reading Comprehension (MRC) requires a model to decide the correct answer from a set of answer options when given a passage and a question. Thus in addition to a powerful Pre-trained Language Model (PrLM) as encoder, multi-choice MRC especially relies on a matching network design which is supposed to effectively capture the relationships among the triplet of passage, question and answers. While the newer and more powerful PrLMs have shown their mightiness even without the support from a matching network, we propose a new DUal Multi-head Co-Attention (DUMA) model, which is inspired by human's transposition thinking process when solving the multi-choice MRC problem: respectively considering each other's focus from the standpoint of passage and question. The proposed DUMA has been shown effective and is capable of generally promoting PrLMs. Our proposed method is evaluated on two benchmark multi-choice MRC tasks, DREAM and RACE, showing that in terms of powerful PrLMs, DUMA can still boost the model to reach new state-of-the-art performance.
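A rough numpy sketch of the dual co-attention idea described above (the dimensions, pooling, and fusion step are assumptions for illustration, not the exact DUMA architecture): attend from the passage to the question-answer encoding and vice versa, then pool both directions into a fused feature.

```python
import numpy as np

def attention(q, k, v):
    """Scaled dot-product attention over row vectors."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

rng = np.random.default_rng(0)
passage = rng.normal(size=(20, 8))   # token encodings from a PrLM (toy sizes)
qa_pair = rng.normal(size=(6, 8))    # question + candidate answer encodings

p2q = attention(passage, qa_pair, qa_pair)   # passage attends to Q+A
q2p = attention(qa_pair, passage, passage)   # Q+A attends to the passage
fused = np.concatenate([p2q.mean(0), q2p.mean(0)])  # pooled scoring feature
print(fused.shape)   # (16,)
```

In the paper this dual attention is multi-headed and its output feeds a per-option classifier; the single-head mean-pooled fusion here is only meant to show the two directions of attention.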
|
1909.04715
|
Ahmed Khaled
|
Ahmed Khaled and Konstantin Mishchenko and Peter Richt\'arik
|
First Analysis of Local GD on Heterogeneous Data
|
NeurIPS 2019 Workshop on Federated Learning for Data Privacy and
Confidentiality. 11 pages, 4 lemmas, 1 theorem
| null | null | null |
cs.LG cs.DC cs.NA math.NA math.OC stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We provide the first convergence analysis of local gradient descent for
minimizing the average of smooth and convex but otherwise arbitrary functions.
Problems of this form and local gradient descent as a solution method are of
importance in federated learning, where each function is based on private data
stored by a user on a mobile device, and the data of different users can be
arbitrarily heterogeneous. We show that in a low accuracy regime, the method
has the same communication complexity as gradient descent.
|
[
{
"created": "Tue, 10 Sep 2019 19:46:13 GMT",
"version": "v1"
},
{
"created": "Wed, 18 Mar 2020 13:25:03 GMT",
"version": "v2"
}
] |
2020-03-19
|
[
[
"Khaled",
"Ahmed",
""
],
[
"Mishchenko",
"Konstantin",
""
],
[
"Richtárik",
"Peter",
""
]
] |
We provide the first convergence analysis of local gradient descent for minimizing the average of smooth and convex but otherwise arbitrary functions. Problems of this form and local gradient descent as a solution method are of importance in federated learning, where each function is based on private data stored by a user on a mobile device, and the data of different users can be arbitrarily heterogeneous. We show that in a low accuracy regime, the method has the same communication complexity as gradient descent.
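A minimal simulation of the method analyzed above, local gradient descent with periodic averaging on heterogeneous convex quadratics (the objective functions, step size, and communication period are illustrative choices, not from the paper):

```python
import numpy as np

# Toy local GD: n users each hold a smooth convex quadratic
# f_i(x) = 0.5 * (x - b_i)**2 with heterogeneous minimizers b_i, run H
# local gradient steps, then communicate by averaging their iterates.
rng = np.random.default_rng(0)
n, H, rounds, lr = 5, 10, 50, 0.1
b = rng.normal(size=n)        # arbitrarily heterogeneous "data"
x = np.zeros(n)               # user i's local iterate is x[i]

for _ in range(rounds):
    for _ in range(H):        # local steps: grad of f_i at x[i] is x[i] - b[i]
        x -= lr * (x - b)
    x[:] = x.mean()           # one communication round: average the iterates

# The average objective (1/n) * sum_i f_i is minimized at b.mean().
print(abs(x[0] - b.mean()))
```

Despite the users' minimizers being different, the averaged iterate converges to the minimizer of the average function, which is the regime the paper's communication-complexity result speaks to.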
|
2401.13789
|
Armand Stricker
|
Armand Stricker, Patrick Paroubek
|
A Unified Approach to Emotion Detection and Task-Oriented Dialogue
Modeling
|
Accepted @ IWSDS 2024
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In current text-based task-oriented dialogue (TOD) systems, user emotion
detection (ED) is often overlooked or is typically treated as a separate and
independent task, requiring additional training. In contrast, our work
demonstrates that seamlessly unifying ED and TOD modeling brings about mutual
benefits, and is therefore an alternative to be considered. Our method consists
in augmenting SimpleToD, an end-to-end TOD system, by extending belief state
tracking to include ED, relying on a single language model. We evaluate our
approach using GPT-2 and Llama-2 on the EmoWOZ benchmark, a version of MultiWOZ
annotated with emotions. Our results reveal a general increase in performance
for ED and task results. Our findings also indicate that user emotions provide
useful contextual conditioning for system responses, and can be leveraged to
further refine responses in terms of empathy.
|
[
{
"created": "Wed, 24 Jan 2024 20:17:11 GMT",
"version": "v1"
},
{
"created": "Mon, 19 Feb 2024 16:18:40 GMT",
"version": "v2"
},
{
"created": "Fri, 28 Jun 2024 10:23:29 GMT",
"version": "v3"
}
] |
2024-07-01
|
[
[
"Stricker",
"Armand",
""
],
[
"Paroubek",
"Patrick",
""
]
] |
In current text-based task-oriented dialogue (TOD) systems, user emotion detection (ED) is often overlooked or is typically treated as a separate and independent task, requiring additional training. In contrast, our work demonstrates that seamlessly unifying ED and TOD modeling brings about mutual benefits, and is therefore an alternative to be considered. Our method consists in augmenting SimpleToD, an end-to-end TOD system, by extending belief state tracking to include ED, relying on a single language model. We evaluate our approach using GPT-2 and Llama-2 on the EmoWOZ benchmark, a version of MultiWOZ annotated with emotions. Our results reveal a general increase in performance for ED and task results. Our findings also indicate that user emotions provide useful contextual conditioning for system responses, and can be leveraged to further refine responses in terms of empathy.
|
1401.3855
|
Michael Benisch
|
Michael Benisch, George B. Davis, Tuomas Sandholm
|
Algorithms for Closed Under Rational Behavior (CURB) Sets
| null |
Journal Of Artificial Intelligence Research, Volume 38, pages
513-534, 2010
|
10.1613/jair.3070
| null |
cs.GT cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We provide a series of algorithms demonstrating that solutions according to
the fundamental game-theoretic solution concept of closed under rational
behavior (CURB) sets in two-player, normal-form games can be computed in
polynomial time (we also discuss extensions to n-player games). First, we
describe an algorithm that identifies all of a player's best responses
conditioned on the belief that the other player will play from within a given
subset of its strategy space. This algorithm serves as a subroutine in a series
of polynomial-time algorithms for finding all minimal CURB sets, one minimal
CURB set, and the smallest minimal CURB set in a game. We then show that the
complexity of finding a Nash equilibrium can be exponential only in the size of
a game's smallest CURB set. Related to this, we show that the smallest CURB set
can be an arbitrarily small portion of the game, but it can also be arbitrarily
larger than the supports of its only enclosed Nash equilibrium. We test our
algorithms empirically and find that most commonly studied academic games tend
to have either very large or very small minimal CURB sets.
|
[
{
"created": "Thu, 16 Jan 2014 05:01:13 GMT",
"version": "v1"
}
] |
2014-01-17
|
[
[
"Benisch",
"Michael",
""
],
[
"Davis",
"George B.",
""
],
[
"Sandholm",
"Tuomas",
""
]
] |
We provide a series of algorithms demonstrating that solutions according to the fundamental game-theoretic solution concept of closed under rational behavior (CURB) sets in two-player, normal-form games can be computed in polynomial time (we also discuss extensions to n-player games). First, we describe an algorithm that identifies all of a player's best responses conditioned on the belief that the other player will play from within a given subset of its strategy space. This algorithm serves as a subroutine in a series of polynomial-time algorithms for finding all minimal CURB sets, one minimal CURB set, and the smallest minimal CURB set in a game. We then show that the complexity of finding a Nash equilibrium can be exponential only in the size of a game's smallest CURB set. Related to this, we show that the smallest CURB set can be an arbitrarily small portion of the game, but it can also be arbitrarily larger than the supports of its only enclosed Nash equilibrium. We test our algorithms empirically and find that most commonly studied academic games tend to have either very large or very small minimal CURB sets.
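A simplified sketch of the best-response subroutine described above. This toy version only enumerates best responses against pure strategies in the given subset; the paper's polynomial-time algorithm additionally covers best responses to arbitrary mixed beliefs over that subset.

```python
import numpy as np

def pure_best_responses(payoff, col_subset):
    """Row player's best responses against each pure column strategy in
    col_subset. payoff[i, j] = row player's payoff for (row i, column j)."""
    brs = set()
    for j in col_subset:
        col = payoff[:, j]
        brs.update(np.flatnonzero(col == col.max()).tolist())
    return brs

# Matching pennies: each row is the unique best response to one pure column,
# so conditioning on the full subset yields both rows.
mp = np.array([[1, -1], [-1, 1]])
print(pure_best_responses(mp, [0, 1]))   # {0, 1}
print(pure_best_responses(mp, [0]))      # {0}
```

The CURB-set algorithms alternate this kind of best-response computation for both players until the strategy subsets are closed, i.e. no belief over the subset admits a best response outside it.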
|
1906.04924
|
Shawn Meier
|
Shawn Meier, Sergio Mover, and Bor-Yuh Evan Chang
|
Lifestate: Event-Driven Protocols and Callback Control Flow (Extended
Version)
|
26 pages, 11 figures, ECOOP 2019
| null |
10.4230/LIPIcs.ECOOP.2019.4
| null |
cs.PL cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Developing interactive applications (apps) against event-driven software
frameworks such as Android is notoriously difficult. To create apps that behave
as expected, developers must follow complex and often implicit asynchronous
programming protocols. Such protocols intertwine the proper registering of
callbacks to receive control from the framework with appropriate
application-programming interface (API) calls that in turn affect the set of
possible future callbacks. An app violates the protocol when, for example, it
calls a particular API method in a state of the framework where such a call is
invalid. What makes automated reasoning hard in this domain is largely what
makes programming apps against such frameworks hard: the specification of the
protocol is unclear, and the control flow is complex, asynchronous, and
higher-order. In this paper, we tackle the problem of specifying and modeling
event-driven application-programming protocols. In particular, we formalize a
core meta-model that captures the dialogue between event-driven frameworks and
application callbacks. Based on this meta-model, we define a language called
lifestate that permits precise and formal descriptions of
application-programming protocols and the callback control flow imposed by the
event-driven framework. Lifestate unifies modeling what app callbacks can
expect of the framework with specifying rules the app must respect when calling
into the framework. In this way, we effectively combine lifecycle constraints
and typestate rules. To evaluate the effectiveness of lifestate modeling, we
provide a dynamic verification algorithm that takes as input a trace of
execution of an app and a lifestate protocol specification to either produce a
trace witnessing a protocol violation or a proof that no such trace is
realizable.
|
[
{
"created": "Wed, 12 Jun 2019 03:36:11 GMT",
"version": "v1"
},
{
"created": "Thu, 13 Jun 2019 01:40:21 GMT",
"version": "v2"
}
] |
2019-06-14
|
[
[
"Meier",
"Shawn",
""
],
[
"Mover",
"Sergio",
""
],
[
"Chang",
"Bor-Yuh Evan",
""
]
] |
Developing interactive applications (apps) against event-driven software frameworks such as Android is notoriously difficult. To create apps that behave as expected, developers must follow complex and often implicit asynchronous programming protocols. Such protocols intertwine the proper registering of callbacks to receive control from the framework with appropriate application-programming interface (API) calls that in turn affect the set of possible future callbacks. An app violates the protocol when, for example, it calls a particular API method in a state of the framework where such a call is invalid. What makes automated reasoning hard in this domain is largely what makes programming apps against such frameworks hard: the specification of the protocol is unclear, and the control flow is complex, asynchronous, and higher-order. In this paper, we tackle the problem of specifying and modeling event-driven application-programming protocols. In particular, we formalize a core meta-model that captures the dialogue between event-driven frameworks and application callbacks. Based on this meta-model, we define a language called lifestate that permits precise and formal descriptions of application-programming protocols and the callback control flow imposed by the event-driven framework. Lifestate unifies modeling what app callbacks can expect of the framework with specifying rules the app must respect when calling into the framework. In this way, we effectively combine lifecycle constraints and typestate rules. To evaluate the effectiveness of lifestate modeling, we provide a dynamic verification algorithm that takes as input a trace of execution of an app and a lifestate protocol specification to either produce a trace witnessing a protocol violation or a proof that no such trace is realizable.
|
2309.02395
|
Kush Jain
|
Kush Jain, Goutamkumar Tulajappa Kalburgi, Claire Le Goues, Alex Groce
|
Mind the Gap: The Difference Between Coverage and Mutation Score Can
Guide Testing Efforts
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An "adequate" test suite should effectively find all inconsistencies between
a system's requirements/specifications and its implementation. Practitioners
frequently use code coverage to approximate adequacy, while academics argue
that mutation score may better approximate true (oracular) adequacy.
High code coverage is increasingly attainable even on large systems via
automatic test generation, including fuzzing. In light of all of these options
for measuring and improving testing effort, how should a QA engineer spend
their time? We propose a new framework for reasoning about the extent, limits,
and nature of a given testing effort based on an idea we call the oracle gap,
or the difference between source code coverage and mutation score for a given
software element. We conduct (1) a large-scale observational study of the
oracle gap across popular Maven projects, (2) a study that varies testing and
oracle quality across several of those projects and (3) a small-scale
observational study of highly critical, well-tested code across comparable
blockchain projects. We show that the oracle gap surfaces important information
about the extent and quality of a test effort beyond either adequacy metric
alone. In particular, it provides a way for practitioners to identify source
files where a weak oracle likely tests important code.
|
[
{
"created": "Tue, 5 Sep 2023 17:05:52 GMT",
"version": "v1"
}
] |
2023-09-06
|
[
[
"Jain",
"Kush",
""
],
[
"Kalburgi",
"Goutamkumar Tulajappa",
""
],
[
"Goues",
"Claire Le",
""
],
[
"Groce",
"Alex",
""
]
] |
An "adequate" test suite should effectively find all inconsistencies between a system's requirements/specifications and its implementation. Practitioners frequently use code coverage to approximate adequacy, while academics argue that mutation score may better approximate true (oracular) adequacy. High code coverage is increasingly attainable even on large systems via automatic test generation, including fuzzing. In light of all of these options for measuring and improving testing effort, how should a QA engineer spend their time? We propose a new framework for reasoning about the extent, limits, and nature of a given testing effort based on an idea we call the oracle gap, or the difference between source code coverage and mutation score for a given software element. We conduct (1) a large-scale observational study of the oracle gap across popular Maven projects, (2) a study that varies testing and oracle quality across several of those projects and (3) a small-scale observational study of highly critical, well-tested code across comparable blockchain projects. We show that the oracle gap surfaces important information about the extent and quality of a test effort beyond either adequacy metric alone. In particular, it provides a way for practitioners to identify source files where a weak oracle likely tests important code.
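A minimal sketch of the oracle-gap idea described above: the per-file difference between coverage and mutation score. The file names and metric values are invented for illustration, not data from the paper's study.

```python
def oracle_gap(coverage, mutation_score):
    """Both metrics in [0, 1]; a large gap suggests code that is executed by
    tests but weakly asserted on, i.e. a likely weak oracle."""
    return coverage - mutation_score

# Hypothetical per-file (coverage, mutation score) measurements.
files = {
    "Parser.java": (0.95, 0.90),   # well tested: small gap
    "Crypto.java": (0.92, 0.40),   # covered but weakly asserted: large gap
}
ranked = sorted(files, key=lambda f: oracle_gap(*files[f]), reverse=True)
print(ranked[0])   # the file whose oracle most needs strengthening
```

Ranking files by gap rather than by either metric alone is what lets a QA engineer distinguish "never executed" from "executed but never checked".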
|
2009.05769
|
Jinpeng Wang
|
Jinpeng Wang, Yuting Gao, Ke Li, Yiqi Lin, Andy J. Ma, Hao Cheng, Pai
Peng, Feiyue Huang, Rongrong Ji, Xing Sun
|
Removing the Background by Adding the Background: Towards Background
Robust Self-supervised Video Representation Learning
|
CVPR2021 camera ready
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Self-supervised learning has shown great potential in improving the video
representation ability of deep neural networks by getting supervision from the
data itself. However, some of the current methods tend to cheat from the
background, i.e., the prediction is highly dependent on the video background
instead of the motion, making the model vulnerable to background changes. To
mitigate the model reliance towards the background, we propose to remove the
background impact by adding the background. That is, given a video, we randomly
select a static frame and add it to every other frame to construct a
distracting video sample. Then we force the model to pull the feature of the
distracting video and the feature of the original video closer, so that the
model is explicitly restricted to resist the background influence, focusing
more on the motion changes. We term our method as \emph{Background Erasing}
(BE). It is worth noting that the implementation of our method is simple and
neat, and it can be added to most SOTA methods with little effort.
Specifically, BE brings 16.4% and 19.1% improvements with MoCo on the severely
biased datasets UCF101 and HMDB51, and 14.5% improvement on the less biased
dataset Diving48.
|
[
{
"created": "Sat, 12 Sep 2020 11:25:13 GMT",
"version": "v1"
},
{
"created": "Wed, 18 Nov 2020 16:42:53 GMT",
"version": "v2"
},
{
"created": "Wed, 3 Mar 2021 11:52:16 GMT",
"version": "v3"
},
{
"created": "Thu, 22 Apr 2021 03:37:30 GMT",
"version": "v4"
}
] |
2021-04-23
|
[
[
"Wang",
"Jinpeng",
""
],
[
"Gao",
"Yuting",
""
],
[
"Li",
"Ke",
""
],
[
"Lin",
"Yiqi",
""
],
[
"Ma",
"Andy J.",
""
],
[
"Cheng",
"Hao",
""
],
[
"Peng",
"Pai",
""
],
[
"Huang",
"Feiyue",
""
],
[
"Ji",
"Rongrong",
""
],
[
"Sun",
"Xing",
""
]
] |
Self-supervised learning has shown great potential in improving the video representation ability of deep neural networks by getting supervision from the data itself. However, some of the current methods tend to cheat from the background, i.e., the prediction is highly dependent on the video background instead of the motion, making the model vulnerable to background changes. To mitigate the model reliance towards the background, we propose to remove the background impact by adding the background. That is, given a video, we randomly select a static frame and add it to every other frame to construct a distracting video sample. Then we force the model to pull the feature of the distracting video and the feature of the original video closer, so that the model is explicitly restricted to resist the background influence, focusing more on the motion changes. We term our method as \emph{Background Erasing} (BE). It is worth noting that the implementation of our method is simple and neat, and it can be added to most SOTA methods with little effort. Specifically, BE brings 16.4% and 19.1% improvements with MoCo on the severely biased datasets UCF101 and HMDB51, and 14.5% improvement on the less biased dataset Diving48.
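An illustrative sketch of the Background Erasing (BE) augmentation described above, constructing the distracting sample by adding one randomly chosen static frame to every frame of the clip (the array layout and rescaling are assumptions, not the paper's exact implementation):

```python
import numpy as np

def background_erasing(clip, rng):
    """clip: (T, H, W, C) float video with values in [0, 1]."""
    t = rng.integers(clip.shape[0])   # randomly select one static frame
    distracting = clip + clip[t]      # add it to every frame of the clip
    return distracting / 2.0          # rescale so values stay in [0, 1]

rng = np.random.default_rng(0)
clip = rng.random((8, 4, 4, 3))       # toy 8-frame video
out = background_erasing(clip, rng)
print(out.shape)                      # (8, 4, 4, 3)
```

The motion content of the clip is preserved while the (static) background is perturbed, which is what lets a contrastive objective pull the original and distracting clips together without rewarding background shortcuts.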
|
1911.09245
|
Diogo Luvizon
|
Diogo C Luvizon, Hedi Tabia, David Picard
|
Consensus-based Optimization for 3D Human Pose Estimation in Camera
Coordinates
|
Source code is available at
https://github.com/dluvizon/3d-pose-consensus
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
3D human pose estimation is frequently seen as the task of estimating 3D
poses relative to the root body joint. Alternatively, we propose a 3D human
pose estimation method in camera coordinates, which allows effective
combination of 2D annotated data and 3D poses and a straightforward multi-view
generalization. To that end, we cast the problem as a view frustum space pose
estimation, where absolute depth prediction and joint relative depth
estimations are disentangled. Final 3D predictions are obtained in camera
coordinates by the inverse camera projection. Based on this, we also present a
consensus-based optimization algorithm for multi-view predictions from
uncalibrated images, which requires a single monocular training procedure.
Although our method is indirectly tied to the training camera intrinsics, it
still converges for cameras with different intrinsic parameters, resulting in
coherent estimations up to a scale factor. Our method improves the state of the
art on well known 3D human pose datasets, reducing the prediction error by 32%
in the most common benchmark. We also report our results in absolute pose
position error, achieving 80~mm for monocular estimations and 51~mm for
multi-view, on average.
|
[
{
"created": "Thu, 21 Nov 2019 02:19:08 GMT",
"version": "v1"
},
{
"created": "Thu, 28 Nov 2019 03:08:10 GMT",
"version": "v2"
},
{
"created": "Fri, 20 Aug 2021 13:53:55 GMT",
"version": "v3"
}
] |
2021-08-23
|
[
[
"Luvizon",
"Diogo C",
""
],
[
"Tabia",
"Hedi",
""
],
[
"Picard",
"David",
""
]
] |
3D human pose estimation is frequently seen as the task of estimating 3D poses relative to the root body joint. Alternatively, we propose a 3D human pose estimation method in camera coordinates, which allows effective combination of 2D annotated data and 3D poses and a straightforward multi-view generalization. To that end, we cast the problem as a view frustum space pose estimation, where absolute depth prediction and joint relative depth estimations are disentangled. Final 3D predictions are obtained in camera coordinates by the inverse camera projection. Based on this, we also present a consensus-based optimization algorithm for multi-view predictions from uncalibrated images, which requires a single monocular training procedure. Although our method is indirectly tied to the training camera intrinsics, it still converges for cameras with different intrinsic parameters, resulting in coherent estimations up to a scale factor. Our method improves the state of the art on well known 3D human pose datasets, reducing the prediction error by 32% in the most common benchmark. We also report our results in absolute pose position error, achieving 80~mm for monocular estimations and 51~mm for multi-view, on average.
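A hedged sketch of the final step described above: recovering camera-coordinate 3D joints from 2D image coordinates plus a predicted absolute depth via the inverse pinhole projection (the intrinsics and joint values here are made up for illustration):

```python
import numpy as np

def inverse_project(uv, depth, K):
    """uv: (J, 2) pixel coords; depth: (J,) per-joint depth; K: 3x3 intrinsics.
    Returns (J, 3) joint positions in camera coordinates."""
    ones = np.ones((uv.shape[0], 1))
    rays = np.linalg.inv(K) @ np.hstack([uv, ones]).T   # normalized rays
    return (rays * depth).T                             # scale rays by depth

K = np.array([[1000., 0., 320.],    # toy pinhole intrinsics
              [0., 1000., 240.],
              [0., 0., 1.]])
uv = np.array([[320., 240.], [420., 240.]])
xyz = inverse_project(uv, np.array([2.0, 2.0]), K)
print(xyz[0])   # pixel at the principal point, depth 2 -> [0, 0, 2]
```

Disentangling the per-joint relative depths from one absolute depth, as the paper does, means only this last projection step depends on the camera, which is what enables the multi-view consensus over uncalibrated cameras.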
|
2204.05365
|
Wael Fatnassi
|
Wael Fatnassi and Yasser Shoukry
|
PolyARBerNN: A Neural Network Guided Solver and Optimizer for Bounded
Polynomial Inequalities
| null | null | null | null |
cs.LO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
Constraint solvers play a significant role in the analysis, synthesis, and
formal verification of complex embedded and cyber-physical systems. In this
paper, we study the problem of designing a scalable constraint solver for an
important class of constraints named polynomial constraint inequalities (also
known as non-linear real arithmetic theory). Specifically, we introduce a
solver named PolyARBerNN that uses convex polynomials as abstractions for
highly nonlinear polynomials. Such abstractions were previously shown to be
effective in pruning the search space and restricting the usage of sound and
complete solvers to small search spaces. Compared with previous efforts on
using convex abstractions, PolyARBerNN provides three main contributions,
namely: (i) a neural network guided abstraction refinement procedure that
helps select the
right abstraction out of a set of pre-defined abstractions, (ii) a Bernstein
polynomial-based search space pruning mechanism that can be used to compute
tight estimates of the polynomial maximum and minimum values which can be used
as an additional abstraction of the polynomials, and (iii) an optimizer that
transforms polynomial objective functions into polynomial constraints (on the
gradient of the objective function) whose solutions are guaranteed to be close
to the global optima. These enhancements together allow the PolyARBerNN
solver to solve complex instances and scale more favorably compared to
state-of-the-art non-linear real arithmetic solvers while maintaining the
soundness and completeness of the resulting solver. In particular, our test
benches show that PolyARBerNN achieved a 100X speedup compared with Z3 8.9,
Yices 2.6, and
NASALib (a solver that uses Bernstein expansion to solve multivariate
polynomial constraints) on a variety of standard test benches.
|
[
{
"created": "Mon, 11 Apr 2022 18:55:28 GMT",
"version": "v1"
},
{
"created": "Thu, 15 Sep 2022 18:19:32 GMT",
"version": "v2"
}
] |
2022-09-19
|
[
[
"Fatnassi",
"Wael",
""
],
[
"Shoukry",
"Yasser",
""
]
] |
Constraint solvers play a significant role in the analysis, synthesis, and formal verification of complex embedded and cyber-physical systems. In this paper, we study the problem of designing a scalable constraint solver for an important class of constraints named polynomial constraint inequalities (also known as non-linear real arithmetic theory). Specifically, we introduce a solver named PolyARBerNN that uses convex polynomials as abstractions for highly nonlinear polynomials. Such abstractions were previously shown to be effective in pruning the search space and restricting the usage of sound and complete solvers to small search spaces. Compared with previous efforts on using convex abstractions, PolyARBerNN provides three main contributions, namely: (i) a neural network guided abstraction refinement procedure that helps select the right abstraction out of a set of pre-defined abstractions, (ii) a Bernstein polynomial-based search space pruning mechanism that can be used to compute tight estimates of the polynomial maximum and minimum values, which can be used as an additional abstraction of the polynomials, and (iii) an optimizer that transforms polynomial objective functions into polynomial constraints (on the gradient of the objective function) whose solutions are guaranteed to be close to the global optima. These enhancements together allow the PolyARBerNN solver to solve complex instances and scale more favorably compared to state-of-the-art non-linear real arithmetic solvers while maintaining the soundness and completeness of the resulting solver. In particular, our test benches show that PolyARBerNN achieved a 100X speedup compared with Z3 8.9, Yices 2.6, and NASALib (a solver that uses Bernstein expansion to solve multivariate polynomial constraints) on a variety of standard test benches.
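An illustrative sketch of the Bernstein-based pruning idea mentioned above, in the simplest univariate setting: for a polynomial on [0, 1], its Bernstein coefficients enclose the polynomial's range, so their min and max give sound bounds. (PolyARBerNN applies this in the multivariate setting; the conversion formula below is the standard power-to-Bernstein basis change.)

```python
from math import comb

def bernstein_bounds(power_coeffs):
    """power_coeffs[i] is the coefficient of x**i for p(x) on [0, 1].
    Returns (lo, hi) with lo <= min p <= max p <= hi."""
    n = len(power_coeffs) - 1
    # Bernstein coefficient b_k = sum_{i<=k} C(k,i)/C(n,i) * a_i
    bern = [sum(comb(k, i) / comb(n, i) * power_coeffs[i]
                for i in range(k + 1))
            for k in range(n + 1)]
    return min(bern), max(bern)

# p(x) = x^2 - x has true range [-0.25, 0] on [0, 1].
lo, hi = bernstein_bounds([0.0, -1.0, 1.0])
print(lo, hi)   # a sound (possibly loose) enclosure of that range
```

Such cheap range enclosures let a solver discard regions where a constraint provably cannot be satisfied before invoking an exact (and expensive) decision procedure.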
|
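As a minimal illustration of contribution (ii) in the abstract above, the sketch below computes the Bernstein coefficients of a univariate polynomial on [0, 1]; their minimum and maximum enclose the polynomial's range, which is the property PolyARBerNN exploits for pruning. This is our own univariate simplification with our own function names, not the solver's multivariate implementation.

```python
from math import comb

def bernstein_coeffs(a):
    """Bernstein coefficients on [0, 1] of p(x) = sum(a[k] * x**k)."""
    n = len(a) - 1
    return [sum(comb(i, k) / comb(n, k) * a[k] for k in range(i + 1))
            for i in range(n + 1)]

def range_bounds(a):
    """Enclosure of p over [0, 1]: min(b) <= p(x) <= max(b)."""
    b = bernstein_coeffs(a)
    return min(b), max(b)
```

For example, p(x) = x^2 - x has Bernstein coefficients [0, -0.5, 0], giving the enclosure [-0.5, 0], which safely contains the true range [-0.25, 0].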
2105.06024
|
Siva Somayyajula
|
Siva Somayyajula, Frank Pfenning
|
Type-Based Termination for Futures
|
23 pages. Extended version
| null | null | null |
cs.PL cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In sequential functional languages, sized types enable termination checking
of programs with complex patterns of recursion in the presence of mixed
inductive-coinductive types. In this paper, we adapt sized types and their
metatheory to the concurrent setting. We extend the semi-axiomatic sequent
calculus, a subsuming paradigm for futures-based functional concurrency, and
its underlying operational semantics with recursion and arithmetic refinements.
The latter enables a new and highly general sized type scheme we call sized
type refinements. As a widely applicable technical device, we type recursive
programs with infinitely deep typing derivations that unfold all recursive
calls. Then, we observe that certain such derivations can be made infinitely
wide but finitely deep. The resulting trees serve as the induction target of
our termination result, which we develop via a novel logical relations
argument.
|
[
{
"created": "Thu, 13 May 2021 01:21:30 GMT",
"version": "v1"
},
{
"created": "Tue, 19 Oct 2021 17:59:48 GMT",
"version": "v2"
},
{
"created": "Sat, 19 Feb 2022 00:12:37 GMT",
"version": "v3"
},
{
"created": "Tue, 7 Mar 2023 17:58:49 GMT",
"version": "v4"
},
{
"created": "Sat, 2 Dec 2023 16:26:04 GMT",
"version": "v5"
},
{
"created": "Mon, 15 Apr 2024 16:42:13 GMT",
"version": "v6"
}
] |
2024-04-16
|
[
[
"Somayyajula",
"Siva",
""
],
[
"Pfenning",
"Frank",
""
]
] |
In sequential functional languages, sized types enable termination checking of programs with complex patterns of recursion in the presence of mixed inductive-coinductive types. In this paper, we adapt sized types and their metatheory to the concurrent setting. We extend the semi-axiomatic sequent calculus, a subsuming paradigm for futures-based functional concurrency, and its underlying operational semantics with recursion and arithmetic refinements. The latter enables a new and highly general sized type scheme we call sized type refinements. As a widely applicable technical device, we type recursive programs with infinitely deep typing derivations that unfold all recursive calls. Then, we observe that certain such derivations can be made infinitely wide but finitely deep. The resulting trees serve as the induction target of our termination result, which we develop via a novel logical relations argument.
|
2312.10303
|
Shufan Wang
|
Shufan Wang, Guojun Xiong, Jian Li
|
Online Restless Multi-Armed Bandits with Long-Term Fairness Constraints
|
AAAI 2024
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Restless multi-armed bandits (RMAB) have been widely used to model sequential
decision making problems with constraints. The decision maker (DM) aims to
maximize the expected total reward over an infinite horizon under an
"instantaneous activation constraint" that at most B arms can be activated at
any decision epoch, where the state of each arm evolves stochastically
according to a Markov decision process (MDP). However, this basic model fails
to provide any fairness guarantee among arms. In this paper, we introduce
RMAB-F, a new RMAB model with "long-term fairness constraints", where the
objective now is to maximize the long-term reward while a minimum long-term
activation fraction for each arm must be satisfied. For the online RMAB-F
setting (i.e., the underlying MDPs associated with each arm are unknown to the
DM), we develop a novel reinforcement learning (RL) algorithm named Fair-UCRL.
We prove that Fair-UCRL ensures probabilistic sublinear bounds on both the
reward regret and the fairness violation regret. Compared with off-the-shelf RL
methods, our Fair-UCRL is much more computationally efficient since it contains
a novel exploitation that leverages a low-complexity index policy for making
decisions. Experimental results further demonstrate the effectiveness of our
Fair-UCRL.
|
[
{
"created": "Sat, 16 Dec 2023 03:35:56 GMT",
"version": "v1"
},
{
"created": "Fri, 22 Dec 2023 01:40:28 GMT",
"version": "v2"
}
] |
2023-12-25
|
[
[
"Wang",
"Shufan",
""
],
[
"Xiong",
"Guojun",
""
],
[
"Li",
"Jian",
""
]
] |
Restless multi-armed bandits (RMAB) have been widely used to model sequential decision making problems with constraints. The decision maker (DM) aims to maximize the expected total reward over an infinite horizon under an "instantaneous activation constraint" that at most B arms can be activated at any decision epoch, where the state of each arm evolves stochastically according to a Markov decision process (MDP). However, this basic model fails to provide any fairness guarantee among arms. In this paper, we introduce RMAB-F, a new RMAB model with "long-term fairness constraints", where the objective now is to maximize the long-term reward while a minimum long-term activation fraction for each arm must be satisfied. For the online RMAB-F setting (i.e., the underlying MDPs associated with each arm are unknown to the DM), we develop a novel reinforcement learning (RL) algorithm named Fair-UCRL. We prove that Fair-UCRL ensures probabilistic sublinear bounds on both the reward regret and the fairness violation regret. Compared with off-the-shelf RL methods, our Fair-UCRL is much more computationally efficient since it contains a novel exploitation that leverages a low-complexity index policy for making decisions. Experimental results further demonstrate the effectiveness of our Fair-UCRL.
|
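To make the "long-term fairness constraint" in the abstract above concrete, here is a hypothetical activation step (our own sketch, not the paper's Fair-UCRL algorithm): arms whose running activation fraction has fallen behind their required minimum are scheduled first, and any remaining budget goes to the arms with the highest indices. All names and the tie-breaking rule are our assumptions.

```python
def fair_activation(indices, min_frac, counts, t, B):
    """Pick at most B arms at epoch t: first any arm behind its minimum
    long-term activation fraction, then the arms with the highest indices."""
    N = len(indices)
    # Arms whose activation count so far is below the required fraction of t.
    behind = [i for i in range(N) if counts[i] < min_frac[i] * t]
    chosen = sorted(behind, key=lambda i: indices[i], reverse=True)[:B]
    # Fill the remaining budget greedily by index value.
    rest = sorted((i for i in range(N) if i not in chosen),
                  key=lambda i: indices[i], reverse=True)
    chosen += rest[:B - len(chosen)]
    return sorted(chosen)
```

For instance, an arm with a low index but an unmet activation quota is activated ahead of a higher-index arm that is already on track.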
2310.00552
|
Guangxin Zhang
|
Guangxin Zhang, Shu Chen
|
Siamese Representation Learning for Unsupervised Relation Extraction
|
26th European Conference on Artificial Intelligence ECAI 2023
| null | null | null |
cs.CL cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Unsupervised relation extraction (URE) aims at discovering underlying
relations between named entity pairs from open-domain plain text without prior
information on relational distribution. Existing URE models that utilize
contrastive learning, attracting positive samples and repulsing negative
samples to promote better separation, have achieved decent results. However,
fine-grained relational semantics within relationships produce spurious
negative samples, damaging the inherent hierarchical structure and hindering
performance. To tackle this problem, we propose Siamese Representation
Learning for Unsupervised Relation Extraction -- a novel framework that
leverages only positive pairs for representation learning, with the capability
to effectively optimize relation representations of instances and retain
hierarchical information in relational feature space. Experimental results show
that our model significantly advances the state-of-the-art results on two
benchmark datasets and detailed analyses demonstrate the effectiveness and
robustness of our proposed model on unsupervised relation extraction.
|
[
{
"created": "Sun, 1 Oct 2023 02:57:43 GMT",
"version": "v1"
}
] |
2023-10-03
|
[
[
"Zhang",
"Guangxin",
""
],
[
"Chen",
"Shu",
""
]
] |
Unsupervised relation extraction (URE) aims at discovering underlying relations between named entity pairs from open-domain plain text without prior information on relational distribution. Existing URE models that utilize contrastive learning, attracting positive samples and repulsing negative samples to promote better separation, have achieved decent results. However, fine-grained relational semantics within relationships produce spurious negative samples, damaging the inherent hierarchical structure and hindering performance. To tackle this problem, we propose Siamese Representation Learning for Unsupervised Relation Extraction -- a novel framework that leverages only positive pairs for representation learning, with the capability to effectively optimize relation representations of instances and retain hierarchical information in relational feature space. Experimental results show that our model significantly advances the state-of-the-art results on two benchmark datasets and detailed analyses demonstrate the effectiveness and robustness of our proposed model on unsupervised relation extraction.
|
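The abstract above does not spell out the training objective, but positive-pair-only Siamese representation learning is commonly trained with a symmetric negative cosine similarity in the SimSiam style; the sketch below shows that generic form, with embeddings z treated as constant (stop-gradient) targets and only the predictor outputs p receiving the gradient. This is a generic illustration under that assumption, not the authors' implementation.

```python
import math

def cosine(a, b):
    """Cosine similarity of two equal-length vectors (lists of floats)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb + 1e-8)

def positive_pair_loss(z1, z2, p1, p2):
    # Symmetric negative cosine similarity: each predictor output p is pulled
    # toward the other view's embedding z (held constant, i.e. stop-gradient).
    return -0.5 * (cosine(p1, z2) + cosine(p2, z1))
```

Perfectly aligned views give a loss near -1; orthogonal views give a loss near 0, so minimizing the loss pulls the two relation representations together without any negative samples.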
2403.00087
|
Gianluca Redondi
|
Alessandro Cimatti, Alberto Griggio, Gianluca Redondi
|
Towards the verification of a generic interlocking logic: Dafny meets
parameterized model checking
| null | null | null | null |
cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
Interlocking logics are at the core of critical systems controlling the
traffic within stations. In this paper, we consider a generic interlocking
logic, which can be instantiated to control a wide class of stations. We tackle
the problem of parameterized verification, i.e. prove that the logic satisfies
the required properties for all the relevant stations. We present a simplified
case study, where the interlocking logic is directly encoded in Dafny. Then, we
show how to automate the proof of an important safety requirement, by
integrating simple, template-based invariants and more complex invariants
obtained from a model checker for parameterized systems. Based on these
positive preliminary results, we outline how we intend to integrate the
approach by extending the IDE for the design of the interlocking logic.
|
[
{
"created": "Thu, 29 Feb 2024 19:27:45 GMT",
"version": "v1"
}
] |
2024-03-04
|
[
[
"Cimatti",
"Alessandro",
""
],
[
"Griggio",
"Alberto",
""
],
[
"Redondi",
"Gianluca",
""
]
] |
Interlocking logics are at the core of critical systems controlling the traffic within stations. In this paper, we consider a generic interlocking logic, which can be instantiated to control a wide class of stations. We tackle the problem of parameterized verification, i.e. prove that the logic satisfies the required properties for all the relevant stations. We present a simplified case study, where the interlocking logic is directly encoded in Dafny. Then, we show how to automate the proof of an important safety requirement, by integrating simple, template-based invariants and more complex invariants obtained from a model checker for parameterized systems. Based on these positive preliminary results, we outline how we intend to integrate the approach by extending the IDE for the design of the interlocking logic.
|
2405.06574
|
Elham Ravanbakhsh
|
Elham Ravanbakhsh, Yongqing Liang, J. Ramanujam, Xin Li
|
Deep video representation learning: a survey
|
Multimedia Tools and Applications (2023) 1-31
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
This paper provides a review on representation learning for videos. We
classify recent spatiotemporal feature learning methods for sequential visual
data and compare their pros and cons for general video analysis. Building
effective features for videos is a fundamental problem in computer vision tasks
involving video analysis and understanding. Existing features can be generally
categorized into spatial and temporal features. Their effectiveness under
variations of illumination, occlusion, view and background are discussed.
Finally, we discuss the remaining challenges in existing deep video
representation learning studies.
|
[
{
"created": "Fri, 10 May 2024 16:20:11 GMT",
"version": "v1"
}
] |
2024-05-13
|
[
[
"Ravanbakhsh",
"Elham",
""
],
[
"Liang",
"Yongqing",
""
],
[
"Ramanujam",
"J.",
""
],
[
"Li",
"Xin",
""
]
] |
This paper provides a review on representation learning for videos. We classify recent spatiotemporal feature learning methods for sequential visual data and compare their pros and cons for general video analysis. Building effective features for videos is a fundamental problem in computer vision tasks involving video analysis and understanding. Existing features can be generally categorized into spatial and temporal features. Their effectiveness under variations of illumination, occlusion, view and background are discussed. Finally, we discuss the remaining challenges in existing deep video representation learning studies.
|
1612.01294
|
Arnab Ghosh
|
Arnab Ghosh and Viveka Kulharia and Vinay Namboodiri
|
Message Passing Multi-Agent GANs
|
The first 2 authors contributed equally for this work
| null | null | null |
cs.CV cs.AI cs.LG cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Communicating and sharing intelligence among agents is an important facet of
achieving Artificial General Intelligence. As a first step towards this
challenge, we introduce a novel framework for image generation: Message Passing
Multi-Agent Generative Adversarial Networks (MPM GANs). While GANs have
recently been shown to be very effective for image generation and other tasks,
these networks have been limited to mostly single generator-discriminator
networks. We show that we can obtain multi-agent GANs that communicate through
message passing to achieve better image generation. The objectives of the
individual agents in this framework are twofold: a co-operation objective and
a competing objective. The co-operation objective ensures that the message
sharing mechanism guides the other generator to generate better than itself
while the competing objective encourages each generator to generate better than
its counterpart. We analyze and visualize the messages that these GANs share
among themselves in various scenarios. We quantitatively show that the message
sharing formulation serves as a regularizer for the adversarial training.
Qualitatively, we show that the different generators capture different traits
of the underlying data distribution.
|
[
{
"created": "Mon, 5 Dec 2016 10:10:13 GMT",
"version": "v1"
}
] |
2016-12-06
|
[
[
"Ghosh",
"Arnab",
""
],
[
"Kulharia",
"Viveka",
""
],
[
"Namboodiri",
"Vinay",
""
]
] |
Communicating and sharing intelligence among agents is an important facet of achieving Artificial General Intelligence. As a first step towards this challenge, we introduce a novel framework for image generation: Message Passing Multi-Agent Generative Adversarial Networks (MPM GANs). While GANs have recently been shown to be very effective for image generation and other tasks, these networks have been limited to mostly single generator-discriminator networks. We show that we can obtain multi-agent GANs that communicate through message passing to achieve better image generation. The objectives of the individual agents in this framework are twofold: a co-operation objective and a competing objective. The co-operation objective ensures that the message sharing mechanism guides the other generator to generate better than itself while the competing objective encourages each generator to generate better than its counterpart. We analyze and visualize the messages that these GANs share among themselves in various scenarios. We quantitatively show that the message sharing formulation serves as a regularizer for the adversarial training. Qualitatively, we show that the different generators capture different traits of the underlying data distribution.
|
1511.06314
|
Stefan Lee
|
Stefan Lee, Senthil Purushwalkam, Michael Cogswell, David Crandall,
and Dhruv Batra
|
Why M Heads are Better than One: Training a Diverse Ensemble of Deep
Networks
| null | null | null | null |
cs.CV cs.LG cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Convolutional Neural Networks have achieved state-of-the-art performance on a
wide range of tasks. Most benchmarks are led by ensembles of these powerful
learners, but ensembling is typically treated as a post-hoc procedure
implemented by averaging independently trained models with model variation
induced by bagging or random initialization. In this paper, we rigorously treat
ensembling as a first-class problem to explicitly address the question: what
are the best strategies to create an ensemble? We first compare a large number
of ensembling strategies, and then propose and evaluate novel strategies, such
as parameter sharing (through a new family of models we call TreeNets) as well
as training under ensemble-aware and diversity-encouraging losses. We
demonstrate that TreeNets can improve ensemble performance and that diverse
ensembles can be trained end-to-end under a unified loss, achieving
significantly higher "oracle" accuracies than classical ensembles.
|
[
{
"created": "Thu, 19 Nov 2015 19:19:58 GMT",
"version": "v1"
}
] |
2015-11-20
|
[
[
"Lee",
"Stefan",
""
],
[
"Purushwalkam",
"Senthil",
""
],
[
"Cogswell",
"Michael",
""
],
[
"Crandall",
"David",
""
],
[
"Batra",
"Dhruv",
""
]
] |
Convolutional Neural Networks have achieved state-of-the-art performance on a wide range of tasks. Most benchmarks are led by ensembles of these powerful learners, but ensembling is typically treated as a post-hoc procedure implemented by averaging independently trained models with model variation induced by bagging or random initialization. In this paper, we rigorously treat ensembling as a first-class problem to explicitly address the question: what are the best strategies to create an ensemble? We first compare a large number of ensembling strategies, and then propose and evaluate novel strategies, such as parameter sharing (through a new family of models we call TreeNets) as well as training under ensemble-aware and diversity-encouraging losses. We demonstrate that TreeNets can improve ensemble performance and that diverse ensembles can be trained end-to-end under a unified loss, achieving significantly higher "oracle" accuracies than classical ensembles.
|
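The "oracle" accuracy mentioned in the abstract above counts a test example as correct when at least one ensemble member predicts it correctly, in contrast to the usual plurality vote; a small sketch with our own function names (not the paper's code):

```python
from collections import Counter

def oracle_accuracy(member_preds, labels):
    """Fraction of examples where at least one member is correct."""
    n = len(labels)
    return sum(any(p[i] == labels[i] for p in member_preds)
               for i in range(n)) / n

def majority_accuracy(member_preds, labels):
    """Fraction of examples where the plurality vote is correct."""
    n = len(labels)
    correct = 0
    for i in range(n):
        vote = Counter(p[i] for p in member_preds).most_common(1)[0][0]
        correct += (vote == labels[i])
    return correct / n
```

The gap between the two is what a diversity-encouraging loss tries to exploit: with member predictions [[0, 1], [1, 0], [1, 1]] and labels [0, 0], some member is right on every example (oracle accuracy 1.0) even though the vote is wrong on both (majority accuracy 0.0).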
2302.05895
|
Chuyuan Li
|
Chuyuan Li, Patrick Huber, Wen Xiao, Maxime Amblard, Chlo\'e Braud,
Giuseppe Carenini
|
Discourse Structure Extraction from Pre-Trained and Fine-Tuned Language
Models in Dialogues
| null |
Findings of the Association for Computational Linguistics: EACL
2023 (2023) 2562--2579
| null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Discourse processing suffers from data sparsity, especially for dialogues. As
a result, we explore approaches to build discourse structures for dialogues,
based on attention matrices from Pre-trained Language Models (PLMs). We
investigate multiple tasks for fine-tuning and show that the dialogue-tailored
Sentence Ordering task performs best. To locate and exploit discourse
information in PLMs, we propose an unsupervised and a semi-supervised method.
Our proposals achieve encouraging results on the STAC corpus, with F1 scores of
57.2 and 59.3 for unsupervised and semi-supervised methods, respectively. When
restricted to projective trees, our scores improved to 63.3 and 68.1.
|
[
{
"created": "Sun, 12 Feb 2023 11:26:10 GMT",
"version": "v1"
},
{
"created": "Sun, 25 Jun 2023 10:19:00 GMT",
"version": "v2"
}
] |
2023-06-27
|
[
[
"Li",
"Chuyuan",
""
],
[
"Huber",
"Patrick",
""
],
[
"Xiao",
"Wen",
""
],
[
"Amblard",
"Maxime",
""
],
[
"Braud",
"Chloé",
""
],
[
"Carenini",
"Giuseppe",
""
]
] |
Discourse processing suffers from data sparsity, especially for dialogues. As a result, we explore approaches to build discourse structures for dialogues, based on attention matrices from Pre-trained Language Models (PLMs). We investigate multiple tasks for fine-tuning and show that the dialogue-tailored Sentence Ordering task performs best. To locate and exploit discourse information in PLMs, we propose an unsupervised and a semi-supervised method. Our proposals achieve encouraging results on the STAC corpus, with F1 scores of 57.2 and 59.3 for unsupervised and semi-supervised methods, respectively. When restricted to projective trees, our scores improved to 63.3 and 68.1.
|
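The abstract above does not detail how attention matrices become discourse structures; as a toy illustration of the general idea, each discourse unit can be attached to the earlier unit it attends to most, yielding an unlabeled dependency tree. This greedy rule and the names are our assumptions, not the authors' algorithm.

```python
def attention_tree(att):
    """att[i][j]: attention from unit i to unit j (square matrix).
    Attach each unit i > 0 to its most-attended earlier unit, returning
    a list of (head, dependent) edges rooted at unit 0."""
    edges = []
    for i in range(1, len(att)):
        head = max(range(i), key=lambda j: att[i][j])
        edges.append((head, i))
    return edges
```

Restricting heads to earlier units guarantees a rooted tree over the dialogue turns, though not projectivity in general.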
1703.00195
|
Jakub Radoszewski
|
Amihood Amir, Costas S. Iliopoulos, and Jakub Radoszewski
|
Two strings at Hamming distance 1 cannot be both quasiperiodic
|
6 pages, 3 figures
| null | null | null |
cs.FL cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a generalization of a known fact from combinatorics on words
related to periodicity into quasiperiodicity. A string is called periodic if it
has a period which is at most half of its length. A string $w$ is called
quasiperiodic if it has a non-trivial cover, that is, there exists a string $c$
that is shorter than $w$ and such that every position in $w$ is inside one of
the occurrences of $c$ in $w$. It is a folklore fact that two strings that
differ at exactly one position cannot be both periodic. Here we prove a more
general fact that two strings that differ at exactly one position cannot be
both quasiperiodic. Along the way we obtain new insights into combinatorics of
quasiperiodicities.
|
[
{
"created": "Wed, 1 Mar 2017 09:38:07 GMT",
"version": "v1"
}
] |
2017-03-02
|
[
[
"Amir",
"Amihood",
""
],
[
"Iliopoulos",
"Costas S.",
""
],
[
"Radoszewski",
"Jakub",
""
]
] |
We present a generalization of a known fact from combinatorics on words related to periodicity into quasiperiodicity. A string is called periodic if it has a period which is at most half of its length. A string $w$ is called quasiperiodic if it has a non-trivial cover, that is, there exists a string $c$ that is shorter than $w$ and such that every position in $w$ is inside one of the occurrences of $c$ in $w$. It is a folklore fact that two strings that differ at exactly one position cannot be both periodic. Here we prove a more general fact that two strings that differ at exactly one position cannot be both quasiperiodic. Along the way we obtain new insights into combinatorics of quasiperiodicities.
|
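The definitions in the abstract above translate directly into code; a short sketch of the cover test (helper names are ours). Since any cover of w must be both a prefix and a suffix of w, it suffices to try proper prefixes as candidate covers.

```python
def covers(c, w):
    """True if c is a non-trivial cover of w: c is shorter than w and
    every position of w lies inside some occurrence of c."""
    if not 0 < len(c) < len(w):
        return False
    covered = [False] * len(w)
    for i in range(len(w) - len(c) + 1):
        if w[i:i + len(c)] == c:
            for j in range(i, i + len(c)):
                covered[j] = True
    return all(covered)

def is_quasiperiodic(w):
    """True if w has a non-trivial cover (searched among proper prefixes)."""
    return any(covers(w[:k], w) for k in range(1, len(w)))
```

For example, "abaabaaba" is covered by "aba", while changing its last letter to obtain "abaabaabb" (Hamming distance 1) leaves no cover, consistent with the theorem.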