id
stringlengths 9
10
| submitter
stringlengths 1
64
⌀ | authors
stringlengths 4
20.7k
| title
stringlengths 4
246
| comments
stringlengths 1
523
⌀ | journal-ref
stringlengths 4
404
⌀ | doi
stringlengths 11
153
⌀ | report-no
stringlengths 2
254
⌀ | categories
stringlengths 5
98
| license
stringclasses 9
values | orig_abstract
stringlengths 14
3.35k
| versions
listlengths 1
60
| update_date
stringlengths 10
10
| authors_parsed
listlengths 1
1.35k
| abstract
stringlengths 11
3.34k
|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1803.02883
|
Naren Srivaths Raman
|
Naren Srivaths Raman and Prabir Barooah
|
On the round-trip efficiency of an HVAC-based virtual battery
|
Added a new simulation case study
| null | null | null |
cs.SY math.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Flexible loads, especially heating, ventilation, and air-conditioning (HVAC)
systems can be used to provide a battery-like service to the power grid by
varying their demand up and down over a baseline. Recent work has reported that
providing virtual energy storage with HVAC systems lead to a net loss of
energy, akin to a low round-trip efficiency (RTE) of a battery. In this work we
rigorously analyze the RTE of a virtual battery through a simplified
physics-based model. We show that the low RTEs reported in recent experimental
and simulation work are an artifact of the experimental/simulation setup. When
the HVAC system is repeatedly used as a virtual battery, the asymptotic RTE is
1. Robustness of the result to assumptions made in the analysis is illustrated
through a simulation case study.
|
[
{
"created": "Wed, 7 Mar 2018 21:40:09 GMT",
"version": "v1"
},
{
"created": "Tue, 13 Mar 2018 14:49:02 GMT",
"version": "v2"
},
{
"created": "Mon, 5 Nov 2018 18:38:36 GMT",
"version": "v3"
}
] |
2018-11-06
|
[
[
"Raman",
"Naren Srivaths",
""
],
[
"Barooah",
"Prabir",
""
]
] |
Flexible loads, especially heating, ventilation, and air-conditioning (HVAC) systems can be used to provide a battery-like service to the power grid by varying their demand up and down over a baseline. Recent work has reported that providing virtual energy storage with HVAC systems lead to a net loss of energy, akin to a low round-trip efficiency (RTE) of a battery. In this work we rigorously analyze the RTE of a virtual battery through a simplified physics-based model. We show that the low RTEs reported in recent experimental and simulation work are an artifact of the experimental/simulation setup. When the HVAC system is repeatedly used as a virtual battery, the asymptotic RTE is 1. Robustness of the result to assumptions made in the analysis is illustrated through a simulation case study.
|
2101.07136
|
Philipp D. Rohde
|
M\'onica Figuera and Philipp D. Rohde and Maria-Esther Vidal
|
Trav-SHACL: Efficiently Validating Networks of SHACL Constraints
| null | null | null | null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Knowledge graphs have emerged as expressive data structures for Web data.
Knowledge graph potential and the demand for ecosystems to facilitate their
creation, curation, and understanding, is testified in diverse domains, e.g.,
biomedicine. The Shapes Constraint Language (SHACL) is the W3C recommendation
language for integrity constraints over RDF knowledge graphs. Enabling quality
assements of knowledge graphs, SHACL is rapidly gaining attention in real-world
scenarios. SHACL models integrity constraints as a network of shapes, where a
shape contains the constraints to be fullfiled by the same entities. The
validation of a SHACL shape schema can face the issue of tractability during
validation. To facilitate full adoption, efficient computational methods are
required. We present Trav-SHACL, a SHACL engine capable of planning the
traversal and execution of a shape schema in a way that invalid entities are
detected early and needless validations are minimized. Trav-SHACL reorders the
shapes in a shape schema for efficient validation and rewrites target and
constraint queries for the fast detection of invalid entities. Trav-SHACL is
empirically evaluated on 27 testbeds executed against knowledge graphs of up to
34M triples. Our experimental results suggest that Trav-SHACL exhibits high
performance gradually and reduces validation time by a factor of up to 28.93
compared to the state of the art.
|
[
{
"created": "Mon, 18 Jan 2021 15:57:14 GMT",
"version": "v1"
}
] |
2021-01-19
|
[
[
"Figuera",
"Mónica",
""
],
[
"Rohde",
"Philipp D.",
""
],
[
"Vidal",
"Maria-Esther",
""
]
] |
Knowledge graphs have emerged as expressive data structures for Web data. Knowledge graph potential and the demand for ecosystems to facilitate their creation, curation, and understanding, is testified in diverse domains, e.g., biomedicine. The Shapes Constraint Language (SHACL) is the W3C recommendation language for integrity constraints over RDF knowledge graphs. Enabling quality assements of knowledge graphs, SHACL is rapidly gaining attention in real-world scenarios. SHACL models integrity constraints as a network of shapes, where a shape contains the constraints to be fullfiled by the same entities. The validation of a SHACL shape schema can face the issue of tractability during validation. To facilitate full adoption, efficient computational methods are required. We present Trav-SHACL, a SHACL engine capable of planning the traversal and execution of a shape schema in a way that invalid entities are detected early and needless validations are minimized. Trav-SHACL reorders the shapes in a shape schema for efficient validation and rewrites target and constraint queries for the fast detection of invalid entities. Trav-SHACL is empirically evaluated on 27 testbeds executed against knowledge graphs of up to 34M triples. Our experimental results suggest that Trav-SHACL exhibits high performance gradually and reduces validation time by a factor of up to 28.93 compared to the state of the art.
|
2111.02331
|
Erh-Chung Chen
|
Erh-Chung Chen, Che-Rung Lee
|
LTD: Low Temperature Distillation for Robust Adversarial Training
| null | null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Adversarial training has been widely used to enhance the robustness of neural
network models against adversarial attacks. Despite the popularity of neural
network models, a significant gap exists between the natural and robust
accuracy of these models. In this paper, we identify one of the primary reasons
for this gap is the common use of one-hot vectors as labels, which hinders the
learning process for image recognition. Representing ambiguous images with
one-hot vectors is imprecise and may lead the model to suboptimal solutions. To
overcome this issue, we propose a novel method called Low Temperature
Distillation (LTD) that generates soft labels using the modified knowledge
distillation framework. Unlike previous approaches, LTD uses a relatively low
temperature in the teacher model and fixed, but different temperatures for the
teacher and student models. This modification boosts the model's robustness
without encountering the gradient masking problem that has been addressed in
defensive distillation. The experimental results demonstrate the effectiveness
of the proposed LTD method combined with previous techniques, achieving robust
accuracy rates of 58.19%, 31.13%, and 42.08% on CIFAR-10, CIFAR-100, and
ImageNet data sets, respectively, without additional unlabeled data.
|
[
{
"created": "Wed, 3 Nov 2021 16:26:00 GMT",
"version": "v1"
},
{
"created": "Sun, 10 Apr 2022 04:15:11 GMT",
"version": "v2"
},
{
"created": "Fri, 30 Jun 2023 06:56:18 GMT",
"version": "v3"
}
] |
2023-07-03
|
[
[
"Chen",
"Erh-Chung",
""
],
[
"Lee",
"Che-Rung",
""
]
] |
Adversarial training has been widely used to enhance the robustness of neural network models against adversarial attacks. Despite the popularity of neural network models, a significant gap exists between the natural and robust accuracy of these models. In this paper, we identify one of the primary reasons for this gap is the common use of one-hot vectors as labels, which hinders the learning process for image recognition. Representing ambiguous images with one-hot vectors is imprecise and may lead the model to suboptimal solutions. To overcome this issue, we propose a novel method called Low Temperature Distillation (LTD) that generates soft labels using the modified knowledge distillation framework. Unlike previous approaches, LTD uses a relatively low temperature in the teacher model and fixed, but different temperatures for the teacher and student models. This modification boosts the model's robustness without encountering the gradient masking problem that has been addressed in defensive distillation. The experimental results demonstrate the effectiveness of the proposed LTD method combined with previous techniques, achieving robust accuracy rates of 58.19%, 31.13%, and 42.08% on CIFAR-10, CIFAR-100, and ImageNet data sets, respectively, without additional unlabeled data.
|
cs/0604031
|
Amos Lapidoth
|
Amos Lapidoth and Ligong Wang
|
On the Low SNR Capacity of Peak-Limited Non-Coherent Fading Channels
with Memory
| null | null | null | null |
cs.IT math.IT
| null |
The capacity of non-coherent stationary Gaussian fading channels with memory
under a peak-power constraint is studied in the asymptotic weak-signal regime.
It is assumed that the fading law is known to both transmitter and receiver but
that neither is cognizant of the fading realization. A connection is
demonstrated between the asymptotic behavior of channel capacity in this regime
and the asymptotic behavior of the prediction error incurred in predicting the
fading process from very noisy observations of its past. This connection can be
viewed as the low signal-to-noise ratio (SNR) analog of recent results by
Lapidoth & Moser and by Lapidoth demonstrating connections between the high SNR
capacity growth and the noiseless or almost-noiseless prediction error. We
distinguish between two families of fading laws: the ``slowly forgetting'' and
the ``quickly forgetting''. For channels in the former category the low SNR
capacity is achieved by IID inputs, whereas in the latter such inputs are
typically sub-optimal. Instead, the asymptotic capacity can be approached by
inputs with IID phase but block-constant magnitude.
|
[
{
"created": "Fri, 7 Apr 2006 23:29:27 GMT",
"version": "v1"
}
] |
2007-07-13
|
[
[
"Lapidoth",
"Amos",
""
],
[
"Wang",
"Ligong",
""
]
] |
The capacity of non-coherent stationary Gaussian fading channels with memory under a peak-power constraint is studied in the asymptotic weak-signal regime. It is assumed that the fading law is known to both transmitter and receiver but that neither is cognizant of the fading realization. A connection is demonstrated between the asymptotic behavior of channel capacity in this regime and the asymptotic behavior of the prediction error incurred in predicting the fading process from very noisy observations of its past. This connection can be viewed as the low signal-to-noise ratio (SNR) analog of recent results by Lapidoth & Moser and by Lapidoth demonstrating connections between the high SNR capacity growth and the noiseless or almost-noiseless prediction error. We distinguish between two families of fading laws: the ``slowly forgetting'' and the ``quickly forgetting''. For channels in the former category the low SNR capacity is achieved by IID inputs, whereas in the latter such inputs are typically sub-optimal. Instead, the asymptotic capacity can be approached by inputs with IID phase but block-constant magnitude.
|
2407.05603
|
Pingyi Chen
|
Pingyi Chen, Chenglu Zhu, Sunyi Zheng, Honglin Li, Lin Yang
|
WSI-VQA: Interpreting Whole Slide Images by Generative Visual Question
Answering
|
Accepted at ECCV 2024
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Whole slide imaging is routinely adopted for carcinoma diagnosis and
prognosis. Abundant experience is required for pathologists to achieve accurate
and reliable diagnostic results of whole slide images (WSI). The huge size and
heterogeneous features of WSIs make the workflow of pathological reading
extremely time-consuming. In this paper, we propose a novel framework (WSI-VQA)
to interpret WSIs by generative visual question answering. WSI-VQA shows
universality by reframing various kinds of slide-level tasks in a
question-answering pattern, in which pathologists can achieve
immunohistochemical grading, survival prediction, and tumor subtyping following
human-machine interaction. Furthermore, we establish a WSI-VQA dataset which
contains 8672 slide-level question-answering pairs with 977 WSIs. Besides the
ability to deal with different slide-level tasks, our generative model which is
named Wsi2Text Transformer (W2T) outperforms existing discriminative models in
medical correctness, which reveals the potential of our model to be applied in
the clinical scenario. Additionally, we also visualize the co-attention mapping
between word embeddings and WSIs as an intuitive explanation for diagnostic
results. The dataset and related code are available at
https://github.com/cpystan/WSI-VQA.
|
[
{
"created": "Mon, 8 Jul 2024 04:37:32 GMT",
"version": "v1"
}
] |
2024-07-09
|
[
[
"Chen",
"Pingyi",
""
],
[
"Zhu",
"Chenglu",
""
],
[
"Zheng",
"Sunyi",
""
],
[
"Li",
"Honglin",
""
],
[
"Yang",
"Lin",
""
]
] |
Whole slide imaging is routinely adopted for carcinoma diagnosis and prognosis. Abundant experience is required for pathologists to achieve accurate and reliable diagnostic results of whole slide images (WSI). The huge size and heterogeneous features of WSIs make the workflow of pathological reading extremely time-consuming. In this paper, we propose a novel framework (WSI-VQA) to interpret WSIs by generative visual question answering. WSI-VQA shows universality by reframing various kinds of slide-level tasks in a question-answering pattern, in which pathologists can achieve immunohistochemical grading, survival prediction, and tumor subtyping following human-machine interaction. Furthermore, we establish a WSI-VQA dataset which contains 8672 slide-level question-answering pairs with 977 WSIs. Besides the ability to deal with different slide-level tasks, our generative model which is named Wsi2Text Transformer (W2T) outperforms existing discriminative models in medical correctness, which reveals the potential of our model to be applied in the clinical scenario. Additionally, we also visualize the co-attention mapping between word embeddings and WSIs as an intuitive explanation for diagnostic results. The dataset and related code are available at https://github.com/cpystan/WSI-VQA.
|
2111.04260
|
Avanika Narayan
|
Avanika Narayan, Piero Molino, Karan Goel, Willie Neiswanger,
Christopher R\'e (Department of Computer Science, Stanford University)
|
Personalized Benchmarking with the Ludwig Benchmarking Toolkit
|
14 pages, 14 figures, 35th Conference on Neural Information
Processing Systems (NeurIPS 2021) Track on Datasets and Benchmarks
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The rapid proliferation of machine learning models across domains and
deployment settings has given rise to various communities (e.g. industry
practitioners) which seek to benchmark models across tasks and objectives of
personal value. Unfortunately, these users cannot use standard benchmark
results to perform such value-driven comparisons as traditional benchmarks
evaluate models on a single objective (e.g. average accuracy) and fail to
facilitate a standardized training framework that controls for confounding
variables (e.g. computational budget), making fair comparisons difficult. To
address these challenges, we introduce the open-source Ludwig Benchmarking
Toolkit (LBT), a personalized benchmarking toolkit for running end-to-end
benchmark studies (from hyperparameter optimization to evaluation) across an
easily extensible set of tasks, deep learning models, datasets and evaluation
metrics. LBT provides a configurable interface for controlling training and
customizing evaluation, a standardized training framework for eliminating
confounding variables, and support for multi-objective evaluation. We
demonstrate how LBT can be used to create personalized benchmark studies with a
large-scale comparative analysis for text classification across 7 models and 9
datasets. We explore the trade-offs between inference latency and performance,
relationships between dataset attributes and performance, and the effects of
pretraining on convergence and robustness, showing how LBT can be used to
satisfy various benchmarking objectives.
|
[
{
"created": "Mon, 8 Nov 2021 03:53:38 GMT",
"version": "v1"
}
] |
2021-11-09
|
[
[
"Narayan",
"Avanika",
"",
"Department of Computer Science, Stanford University"
],
[
"Molino",
"Piero",
"",
"Department of Computer Science, Stanford University"
],
[
"Goel",
"Karan",
"",
"Department of Computer Science, Stanford University"
],
[
"Neiswanger",
"Willie",
"",
"Department of Computer Science, Stanford University"
],
[
"Ré",
"Christopher",
"",
"Department of Computer Science, Stanford University"
]
] |
The rapid proliferation of machine learning models across domains and deployment settings has given rise to various communities (e.g. industry practitioners) which seek to benchmark models across tasks and objectives of personal value. Unfortunately, these users cannot use standard benchmark results to perform such value-driven comparisons as traditional benchmarks evaluate models on a single objective (e.g. average accuracy) and fail to facilitate a standardized training framework that controls for confounding variables (e.g. computational budget), making fair comparisons difficult. To address these challenges, we introduce the open-source Ludwig Benchmarking Toolkit (LBT), a personalized benchmarking toolkit for running end-to-end benchmark studies (from hyperparameter optimization to evaluation) across an easily extensible set of tasks, deep learning models, datasets and evaluation metrics. LBT provides a configurable interface for controlling training and customizing evaluation, a standardized training framework for eliminating confounding variables, and support for multi-objective evaluation. We demonstrate how LBT can be used to create personalized benchmark studies with a large-scale comparative analysis for text classification across 7 models and 9 datasets. We explore the trade-offs between inference latency and performance, relationships between dataset attributes and performance, and the effects of pretraining on convergence and robustness, showing how LBT can be used to satisfy various benchmarking objectives.
|
2407.17911
|
Kang-Yang Huang
|
Jian-Yu Jiang-Lin, Kang-Yang Huang, Ling Lo, Yi-Ning Huang, Terence
Lin, Jhih-Ciang Wu, Hong-Han Shuai, Wen-Huang Cheng
|
ReCorD: Reasoning and Correcting Diffusion for HOI Generation
|
Accepted by ACM MM 2024. Project website:
https://alberthkyhky.github.io/ReCorD/
| null |
10.1145/3664647.3680936
| null |
cs.MM cs.AI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Diffusion models revolutionize image generation by leveraging natural
language to guide the creation of multimedia content. Despite significant
advancements in such generative models, challenges persist in depicting
detailed human-object interactions, especially regarding pose and object
placement accuracy. We introduce a training-free method named Reasoning and
Correcting Diffusion (ReCorD) to address these challenges. Our model couples
Latent Diffusion Models with Visual Language Models to refine the generation
process, ensuring precise depictions of HOIs. We propose an interaction-aware
reasoning module to improve the interpretation of the interaction, along with
an interaction correcting module to refine the output image for more precise
HOI generation delicately. Through a meticulous process of pose selection and
object positioning, ReCorD achieves superior fidelity in generated images while
efficiently reducing computational requirements. We conduct comprehensive
experiments on three benchmarks to demonstrate the significant progress in
solving text-to-image generation tasks, showcasing ReCorD's ability to render
complex interactions accurately by outperforming existing methods in HOI
classification score, as well as FID and Verb CLIP-Score. Project website is
available at https://alberthkyhky.github.io/ReCorD/ .
|
[
{
"created": "Thu, 25 Jul 2024 10:06:26 GMT",
"version": "v1"
}
] |
2024-07-26
|
[
[
"Jiang-Lin",
"Jian-Yu",
""
],
[
"Huang",
"Kang-Yang",
""
],
[
"Lo",
"Ling",
""
],
[
"Huang",
"Yi-Ning",
""
],
[
"Lin",
"Terence",
""
],
[
"Wu",
"Jhih-Ciang",
""
],
[
"Shuai",
"Hong-Han",
""
],
[
"Cheng",
"Wen-Huang",
""
]
] |
Diffusion models revolutionize image generation by leveraging natural language to guide the creation of multimedia content. Despite significant advancements in such generative models, challenges persist in depicting detailed human-object interactions, especially regarding pose and object placement accuracy. We introduce a training-free method named Reasoning and Correcting Diffusion (ReCorD) to address these challenges. Our model couples Latent Diffusion Models with Visual Language Models to refine the generation process, ensuring precise depictions of HOIs. We propose an interaction-aware reasoning module to improve the interpretation of the interaction, along with an interaction correcting module to refine the output image for more precise HOI generation delicately. Through a meticulous process of pose selection and object positioning, ReCorD achieves superior fidelity in generated images while efficiently reducing computational requirements. We conduct comprehensive experiments on three benchmarks to demonstrate the significant progress in solving text-to-image generation tasks, showcasing ReCorD's ability to render complex interactions accurately by outperforming existing methods in HOI classification score, as well as FID and Verb CLIP-Score. Project website is available at https://alberthkyhky.github.io/ReCorD/ .
|
2311.03792
|
Shrestha Datta
|
Jakir Hasan, Shrestha Datta, Ameya Debnath
|
Character-Level Bangla Text-to-IPA Transcription Using Transformer
Architecture with Sequence Alignment
|
Achieved top position with a word error rate of 0.10582 in the public
ranking of DataVerse Challenge - ITVerse 2023 (link:
https://www.kaggle.com/competitions/dataverse_2023/). All codes can be found
on the respective competition webpage
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
The International Phonetic Alphabet (IPA) is indispensable in language
learning and understanding, aiding users in accurate pronunciation and
comprehension. Additionally, it plays a pivotal role in speech therapy,
linguistic research, accurate transliteration, and the development of
text-to-speech systems, making it an essential tool across diverse fields.
Bangla being 7th as one of the widely used languages, gives rise to the need
for IPA in its domain. Its IPA mapping is too diverse to be captured manually
giving the need for Artificial Intelligence and Machine Learning in this field.
In this study, we have utilized a transformer-based sequence-to-sequence model
at the letter and symbol level to get the IPA of each Bangla word as the
variation of IPA in association of different words is almost null. Our
transformer model only consisted of 8.5 million parameters with only a single
decoder and encoder layer. Additionally, to handle the punctuation marks and
the occurrence of foreign languages in the text, we have utilized manual
mapping as the model won't be able to learn to separate them from Bangla words
while decreasing our required computational resources. Finally, maintaining the
relative position of the sentence component IPAs and generation of the combined
IPA has led us to achieve the top position with a word error rate of 0.10582 in
the public ranking of DataVerse Challenge - ITVerse 2023
(https://www.kaggle.com/competitions/dataverse_2023/).
|
[
{
"created": "Tue, 7 Nov 2023 08:20:06 GMT",
"version": "v1"
}
] |
2023-11-08
|
[
[
"Hasan",
"Jakir",
""
],
[
"Datta",
"Shrestha",
""
],
[
"Debnath",
"Ameya",
""
]
] |
The International Phonetic Alphabet (IPA) is indispensable in language learning and understanding, aiding users in accurate pronunciation and comprehension. Additionally, it plays a pivotal role in speech therapy, linguistic research, accurate transliteration, and the development of text-to-speech systems, making it an essential tool across diverse fields. Bangla being 7th as one of the widely used languages, gives rise to the need for IPA in its domain. Its IPA mapping is too diverse to be captured manually giving the need for Artificial Intelligence and Machine Learning in this field. In this study, we have utilized a transformer-based sequence-to-sequence model at the letter and symbol level to get the IPA of each Bangla word as the variation of IPA in association of different words is almost null. Our transformer model only consisted of 8.5 million parameters with only a single decoder and encoder layer. Additionally, to handle the punctuation marks and the occurrence of foreign languages in the text, we have utilized manual mapping as the model won't be able to learn to separate them from Bangla words while decreasing our required computational resources. Finally, maintaining the relative position of the sentence component IPAs and generation of the combined IPA has led us to achieve the top position with a word error rate of 0.10582 in the public ranking of DataVerse Challenge - ITVerse 2023 (https://www.kaggle.com/competitions/dataverse_2023/).
|
2307.10710
|
Litian Liang
|
Zhiao Huang, Litian Liang, Zhan Ling, Xuanlin Li, Chuang Gan, Hao Su
|
Reparameterized Policy Learning for Multimodal Trajectory Optimization
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
We investigate the challenge of parametrizing policies for reinforcement
learning (RL) in high-dimensional continuous action spaces. Our objective is to
develop a multimodal policy that overcomes limitations inherent in the
commonly-used Gaussian parameterization. To achieve this, we propose a
principled framework that models the continuous RL policy as a generative model
of optimal trajectories. By conditioning the policy on a latent variable, we
derive a novel variational bound as the optimization objective, which promotes
exploration of the environment. We then present a practical model-based RL
method, called Reparameterized Policy Gradient (RPG), which leverages the
multimodal policy parameterization and learned world model to achieve strong
exploration capabilities and high data efficiency. Empirical results
demonstrate that our method can help agents evade local optima in tasks with
dense rewards and solve challenging sparse-reward environments by incorporating
an object-centric intrinsic reward. Our method consistently outperforms
previous approaches across a range of tasks. Code and supplementary materials
are available on the project page https://haosulab.github.io/RPG/
|
[
{
"created": "Thu, 20 Jul 2023 09:05:46 GMT",
"version": "v1"
}
] |
2023-07-21
|
[
[
"Huang",
"Zhiao",
""
],
[
"Liang",
"Litian",
""
],
[
"Ling",
"Zhan",
""
],
[
"Li",
"Xuanlin",
""
],
[
"Gan",
"Chuang",
""
],
[
"Su",
"Hao",
""
]
] |
We investigate the challenge of parametrizing policies for reinforcement learning (RL) in high-dimensional continuous action spaces. Our objective is to develop a multimodal policy that overcomes limitations inherent in the commonly-used Gaussian parameterization. To achieve this, we propose a principled framework that models the continuous RL policy as a generative model of optimal trajectories. By conditioning the policy on a latent variable, we derive a novel variational bound as the optimization objective, which promotes exploration of the environment. We then present a practical model-based RL method, called Reparameterized Policy Gradient (RPG), which leverages the multimodal policy parameterization and learned world model to achieve strong exploration capabilities and high data efficiency. Empirical results demonstrate that our method can help agents evade local optima in tasks with dense rewards and solve challenging sparse-reward environments by incorporating an object-centric intrinsic reward. Our method consistently outperforms previous approaches across a range of tasks. Code and supplementary materials are available on the project page https://haosulab.github.io/RPG/
|
2011.10684
|
Haozhe Feng
|
Hao-Zhe Feng, Kezhi Kong, Minghao Chen, Tianye Zhang, Minfeng Zhu, Wei
Chen
|
SHOT-VAE: Semi-supervised Deep Generative Models With Label-aware ELBO
Approximations
|
12 pages, 6 figures, Accepted for presentation at AAAI2021
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Semi-supervised variational autoencoders (VAEs) have obtained strong results,
but have also encountered the challenge that good ELBO values do not always
imply accurate inference results. In this paper, we investigate and propose two
causes of this problem: (1) The ELBO objective cannot utilize the label
information directly. (2) A bottleneck value exists and continuing to optimize
ELBO after this value will not improve inference accuracy. On the basis of the
experiment results, we propose SHOT-VAE to address these problems without
introducing additional prior knowledge. The SHOT-VAE offers two contributions:
(1) A new ELBO approximation named smooth-ELBO that integrates the label
predictive loss into ELBO. (2) An approximation based on optimal interpolation
that breaks the ELBO value bottleneck by reducing the margin between ELBO and
the data likelihood. The SHOT-VAE achieves good performance with a 25.30% error
rate on CIFAR-100 with 10k labels and reduces the error rate to 6.11% on
CIFAR-10 with 4k labels.
|
[
{
"created": "Sat, 21 Nov 2020 00:38:31 GMT",
"version": "v1"
},
{
"created": "Wed, 2 Dec 2020 09:27:54 GMT",
"version": "v2"
},
{
"created": "Thu, 3 Dec 2020 07:44:59 GMT",
"version": "v3"
},
{
"created": "Tue, 8 Dec 2020 07:04:44 GMT",
"version": "v4"
}
] |
2020-12-09
|
[
[
"Feng",
"Hao-Zhe",
""
],
[
"Kong",
"Kezhi",
""
],
[
"Chen",
"Minghao",
""
],
[
"Zhang",
"Tianye",
""
],
[
"Zhu",
"Minfeng",
""
],
[
"Chen",
"Wei",
""
]
] |
Semi-supervised variational autoencoders (VAEs) have obtained strong results, but have also encountered the challenge that good ELBO values do not always imply accurate inference results. In this paper, we investigate and propose two causes of this problem: (1) The ELBO objective cannot utilize the label information directly. (2) A bottleneck value exists and continuing to optimize ELBO after this value will not improve inference accuracy. On the basis of the experiment results, we propose SHOT-VAE to address these problems without introducing additional prior knowledge. The SHOT-VAE offers two contributions: (1) A new ELBO approximation named smooth-ELBO that integrates the label predictive loss into ELBO. (2) An approximation based on optimal interpolation that breaks the ELBO value bottleneck by reducing the margin between ELBO and the data likelihood. The SHOT-VAE achieves good performance with a 25.30% error rate on CIFAR-100 with 10k labels and reduces the error rate to 6.11% on CIFAR-10 with 4k labels.
|
1507.08240
|
Yajie Miao
|
Yajie Miao, Mohammad Gowayyed, Florian Metze
|
EESEN: End-to-End Speech Recognition using Deep RNN Models and
WFST-based Decoding
| null | null | null | null |
cs.CL cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The performance of automatic speech recognition (ASR) has improved
tremendously due to the application of deep neural networks (DNNs). Despite
this progress, building a new ASR system remains a challenging task, requiring
various resources, multiple training stages and significant expertise. This
paper presents our Eesen framework which drastically simplifies the existing
pipeline to build state-of-the-art ASR systems. Acoustic modeling in Eesen
involves learning a single recurrent neural network (RNN) predicting
context-independent targets (phonemes or characters). To remove the need for
pre-generated frame labels, we adopt the connectionist temporal classification
(CTC) objective function to infer the alignments between speech and label
sequences. A distinctive feature of Eesen is a generalized decoding approach
based on weighted finite-state transducers (WFSTs), which enables the efficient
incorporation of lexicons and language models into CTC decoding. Experiments
show that compared with the standard hybrid DNN systems, Eesen achieves
comparable word error rates (WERs), while at the same time speeding up decoding
significantly.
|
[
{
"created": "Wed, 29 Jul 2015 17:53:50 GMT",
"version": "v1"
},
{
"created": "Tue, 13 Oct 2015 21:03:34 GMT",
"version": "v2"
},
{
"created": "Sun, 18 Oct 2015 20:35:52 GMT",
"version": "v3"
}
] |
2015-10-20
|
[
[
"Miao",
"Yajie",
""
],
[
"Gowayyed",
"Mohammad",
""
],
[
"Metze",
"Florian",
""
]
] |
The performance of automatic speech recognition (ASR) has improved tremendously due to the application of deep neural networks (DNNs). Despite this progress, building a new ASR system remains a challenging task, requiring various resources, multiple training stages and significant expertise. This paper presents our Eesen framework which drastically simplifies the existing pipeline to build state-of-the-art ASR systems. Acoustic modeling in Eesen involves learning a single recurrent neural network (RNN) predicting context-independent targets (phonemes or characters). To remove the need for pre-generated frame labels, we adopt the connectionist temporal classification (CTC) objective function to infer the alignments between speech and label sequences. A distinctive feature of Eesen is a generalized decoding approach based on weighted finite-state transducers (WFSTs), which enables the efficient incorporation of lexicons and language models into CTC decoding. Experiments show that compared with the standard hybrid DNN systems, Eesen achieves comparable word error rates (WERs), while at the same time speeding up decoding significantly.
|
1904.04749
|
Lucas Gren
|
Lucas Gren
|
Social Influence in Agile Requirements Engineering
| null |
Work in Progress Session held in connection with the 43rd
Euromicro Conference on Software Engineering and Advanced Applications
(SEAA), 2017
| null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Agile requirements engineering implies more complex communication patterns
since even the developers are supposed to have direct contact with customers.
With more face-to-face communication comes social-psychological factors
influencing the requirements. Studies have pointed at the importance of
negotiation training, but I argue that more basic human traits can be triggered
in favor of the negotiator with the most knowledge of social influence research
and practice. I suggest a plan of how research in social influence and
requirements facilitation can be conducted, mostly through experimentation.
|
[
{
"created": "Fri, 5 Apr 2019 18:47:47 GMT",
"version": "v1"
}
] |
2019-04-10
|
[
[
"Gren",
"Lucas",
""
]
] |
Agile requirements engineering implies more complex communication patterns since even the developers are supposed to have direct contact with customers. With more face-to-face communication comes social-psychological factors influencing the requirements. Studies have pointed at the importance of negotiation training, but I argue that more basic human traits can be triggered in favor of the negotiator with the most knowledge of social influence research and practice. I suggest a plan of how research in social influence and requirements facilitation can be conducted, mostly through experimentation.
|
0807.1372
|
Danilo Silva
|
Danilo Silva, Frank R. Kschischang, Ralf K\"otter
|
Communication over Finite-Field Matrix Channels
|
24 pages, to be published at the IEEE Transactions on Information
Theory
|
IEEE Transactions on Information Theory, vol. 56, no. 3, pp.
1296-1305, Mar. 2010
|
10.1109/TIT.2009.2039167
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper is motivated by the problem of error control in network coding
when errors are introduced in a random fashion (rather than chosen by an
adversary). An additive-multiplicative matrix channel is considered as a model
for random network coding. The model assumes that n packets of length m are
transmitted over the network, and up to t erroneous packets are randomly chosen
and injected into the network. Upper and lower bounds on capacity are obtained
for any channel parameters, and asymptotic expressions are provided in the
limit of large field or matrix size. A simple coding scheme is presented that
achieves capacity in both limiting cases. The scheme has decoding complexity
O(n^2 m) and a probability of error that decreases exponentially both in the
packet length and in the field size in bits. Extensions of these results for
coherent network coding are also presented.
|
[
{
"created": "Wed, 9 Jul 2008 04:25:44 GMT",
"version": "v1"
},
{
"created": "Tue, 7 Jul 2009 01:57:55 GMT",
"version": "v2"
},
{
"created": "Fri, 11 Sep 2009 09:18:44 GMT",
"version": "v3"
},
{
"created": "Mon, 14 Sep 2009 13:08:21 GMT",
"version": "v4"
}
] |
2019-05-07
|
[
[
"Silva",
"Danilo",
""
],
[
"Kschischang",
"Frank R.",
""
],
[
"Kötter",
"Ralf",
""
]
] |
This paper is motivated by the problem of error control in network coding when errors are introduced in a random fashion (rather than chosen by an adversary). An additive-multiplicative matrix channel is considered as a model for random network coding. The model assumes that n packets of length m are transmitted over the network, and up to t erroneous packets are randomly chosen and injected into the network. Upper and lower bounds on capacity are obtained for any channel parameters, and asymptotic expressions are provided in the limit of large field or matrix size. A simple coding scheme is presented that achieves capacity in both limiting cases. The scheme has decoding complexity O(n^2 m) and a probability of error that decreases exponentially both in the packet length and in the field size in bits. Extensions of these results for coherent network coding are also presented.
|
2106.05227
|
Pardis Emami-Naeini
|
Pardis Emami-Naeini, Tiona Francisco, Tadayoshi Kohno, Franziska
Roesner
|
Understanding Privacy Attitudes and Concerns Towards Remote
Communications During the COVID-19 Pandemic
|
To appear at the 17th Symposium on Usable Privacy and Security
(SOUPS'21)
| null | null | null |
cs.CY cs.CR cs.HC cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Since December 2019, the COVID-19 pandemic has caused people around the world
to exercise social distancing, which has led to an abrupt rise in the adoption
of remote communications for working, socializing, and learning from home. As
remote communications will outlast the pandemic, it is crucial to protect
users' security and respect their privacy in this unprecedented setting, and
that requires a thorough understanding of their behaviors, attitudes, and
concerns toward various aspects of remote communications. To this end, we
conducted an online study with 220 worldwide Prolific participants. We found
that privacy and security are among the most frequently mentioned factors
impacting participants' attitude and comfort level with conferencing tools and
meeting locations. Open-ended responses revealed that most participants lacked
autonomy when choosing conferencing tools or using microphone/webcam in their
remote meetings, which in several cases contradicted their personal privacy and
security preferences. Based on our findings, we distill several recommendations
on how employers, educators, and tool developers can inform and empower users
to make privacy-protective decisions when engaging in remote communications.
|
[
{
"created": "Wed, 9 Jun 2021 17:17:06 GMT",
"version": "v1"
}
] |
2021-06-10
|
[
[
"Emami-Naeini",
"Pardis",
""
],
[
"Francisco",
"Tiona",
""
],
[
"Kohno",
"Tadayoshi",
""
],
[
"Roesner",
"Franziska",
""
]
] |
Since December 2019, the COVID-19 pandemic has caused people around the world to exercise social distancing, which has led to an abrupt rise in the adoption of remote communications for working, socializing, and learning from home. As remote communications will outlast the pandemic, it is crucial to protect users' security and respect their privacy in this unprecedented setting, and that requires a thorough understanding of their behaviors, attitudes, and concerns toward various aspects of remote communications. To this end, we conducted an online study with 220 worldwide Prolific participants. We found that privacy and security are among the most frequently mentioned factors impacting participants' attitude and comfort level with conferencing tools and meeting locations. Open-ended responses revealed that most participants lacked autonomy when choosing conferencing tools or using microphone/webcam in their remote meetings, which in several cases contradicted their personal privacy and security preferences. Based on our findings, we distill several recommendations on how employers, educators, and tool developers can inform and empower users to make privacy-protective decisions when engaging in remote communications.
|
1009.0896
|
Farnood Merrikh-Bayat
|
Farnood Merrikh-Bayat and Saeed Bagheri Shouraki
|
Memristor Crossbar-based Hardware Implementation of Fuzzy Membership
Functions
|
5 pages, 5 figures, Submitted to ICCAE 2011 conference
| null | null | null |
cs.NE cs.AI cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In May 1, 2008, researchers at Hewlett Packard (HP) announced the first
physical realization of a fundamental circuit element called memristor that
attracted so much interest worldwide. This newly found element can easily be
combined with crossbar interconnect technology which this new structure has
opened a new field in designing configurable or programmable electronic
systems. These systems in return can have applications in signal processing and
artificial intelligence. In this paper, based on the simple memristor crossbar
structure, we propose new and simple circuits for hardware implementation of
fuzzy membership functions. In our proposed circuits, these fuzzy membership
functions can have any shapes and resolutions. In addition, these circuits can
be used as a basis in the construction of evolutionary systems.
|
[
{
"created": "Sun, 5 Sep 2010 07:46:47 GMT",
"version": "v1"
}
] |
2010-09-07
|
[
[
"Merrikh-Bayat",
"Farnood",
""
],
[
"Shouraki",
"Saeed Bagheri",
""
]
] |
In May 1, 2008, researchers at Hewlett Packard (HP) announced the first physical realization of a fundamental circuit element called memristor that attracted so much interest worldwide. This newly found element can easily be combined with crossbar interconnect technology which this new structure has opened a new field in designing configurable or programmable electronic systems. These systems in return can have applications in signal processing and artificial intelligence. In this paper, based on the simple memristor crossbar structure, we propose new and simple circuits for hardware implementation of fuzzy membership functions. In our proposed circuits, these fuzzy membership functions can have any shapes and resolutions. In addition, these circuits can be used as a basis in the construction of evolutionary systems.
|
1812.10896
|
Hoang-Quoc Nguyen-Son
|
Hoang-Quoc Nguyen-Son, Ngoc-Dung T. Tieu, Huy H. Nguyen, Junichi
Yamagishi, and Isao Echizen
|
Identifying Computer-Translated Paragraphs using Coherence Features
|
9 pages, PACLIC 2018
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We have developed a method for extracting the coherence features from a
paragraph by matching similar words in its sentences. We conducted an
experiment with a parallel German corpus containing 2000 human-created and 2000
machine-translated paragraphs. The result showed that our method achieved the
best performance (accuracy = 72.3%, equal error rate = 29.8%) when it is
compared with previous methods on various computer-generated text including
translation and paper generation (best accuracy = 67.9%, equal error rate =
32.0%). Experiments on Dutch, another rich resource language, and a low
resource one (Japanese) attained similar performances. It demonstrated the
efficiency of the coherence features at distinguishing computer-translated from
human-created paragraphs on diverse languages.
|
[
{
"created": "Fri, 28 Dec 2018 05:35:31 GMT",
"version": "v1"
}
] |
2018-12-31
|
[
[
"Nguyen-Son",
"Hoang-Quoc",
""
],
[
"Tieu",
"Ngoc-Dung T.",
""
],
[
"Nguyen",
"Huy H.",
""
],
[
"Yamagishi",
"Junichi",
""
],
[
"Echizen",
"Isao",
""
]
] |
We have developed a method for extracting the coherence features from a paragraph by matching similar words in its sentences. We conducted an experiment with a parallel German corpus containing 2000 human-created and 2000 machine-translated paragraphs. The result showed that our method achieved the best performance (accuracy = 72.3%, equal error rate = 29.8%) when it is compared with previous methods on various computer-generated text including translation and paper generation (best accuracy = 67.9%, equal error rate = 32.0%). Experiments on Dutch, another rich resource language, and a low resource one (Japanese) attained similar performances. It demonstrated the efficiency of the coherence features at distinguishing computer-translated from human-created paragraphs on diverse languages.
|
1405.5336
|
Mingyue Ji
|
Mingyue Ji, Giuseppe Caire, Andreas F. Molisch
|
Fundamental Limits of Caching in Wireless D2D Networks
|
45 pages, 5 figures, Submitted to IEEE Transactions on Information
Theory, This is the extended version of the conference (ITW) paper
arXiv:1304.5856
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider a wireless Device-to-Device (D2D) network where communication is
restricted to be single-hop. Users make arbitrary requests from a finite
library of files and have pre-cached information on their devices, subject to a
per-node storage capacity constraint. A similar problem has already been
considered in an ``infrastructure'' setting, where all users receive a common
multicast (coded) message from a single omniscient server (e.g., a base station
having all the files in the library) through a shared bottleneck link. In this
work, we consider a D2D ``infrastructure-less'' version of the problem. We
propose a caching strategy based on deterministic assignment of subpackets of
the library files, and a coded delivery strategy where the users send linearly
coded messages to each other in order to collectively satisfy their demands. We
also consider a random caching strategy, which is more suitable to a fully
decentralized implementation. Under certain conditions, both approaches can
achieve the information theoretic outer bound within a constant multiplicative
factor. In our previous work, we showed that a caching D2D wireless network
with one-hop communication, random caching, and uncoded delivery, achieves the
same throughput scaling law of the infrastructure-based coded multicasting
scheme, in the regime of large number of users and files in the library. This
shows that the spatial reuse gain of the D2D network is order-equivalent to the
coded multicasting gain of single base station transmission. It is therefore
natural to ask whether these two gains are cumulative, i.e.,if a D2D network
with both local communication (spatial reuse) and coded multicasting can
provide an improved scaling law. Somewhat counterintuitively, we show that
these gains do not cumulate (in terms of throughput scaling law).
|
[
{
"created": "Wed, 21 May 2014 08:55:05 GMT",
"version": "v1"
}
] |
2014-05-22
|
[
[
"Ji",
"Mingyue",
""
],
[
"Caire",
"Giuseppe",
""
],
[
"Molisch",
"Andreas F.",
""
]
] |
We consider a wireless Device-to-Device (D2D) network where communication is restricted to be single-hop. Users make arbitrary requests from a finite library of files and have pre-cached information on their devices, subject to a per-node storage capacity constraint. A similar problem has already been considered in an ``infrastructure'' setting, where all users receive a common multicast (coded) message from a single omniscient server (e.g., a base station having all the files in the library) through a shared bottleneck link. In this work, we consider a D2D ``infrastructure-less'' version of the problem. We propose a caching strategy based on deterministic assignment of subpackets of the library files, and a coded delivery strategy where the users send linearly coded messages to each other in order to collectively satisfy their demands. We also consider a random caching strategy, which is more suitable to a fully decentralized implementation. Under certain conditions, both approaches can achieve the information theoretic outer bound within a constant multiplicative factor. In our previous work, we showed that a caching D2D wireless network with one-hop communication, random caching, and uncoded delivery, achieves the same throughput scaling law of the infrastructure-based coded multicasting scheme, in the regime of large number of users and files in the library. This shows that the spatial reuse gain of the D2D network is order-equivalent to the coded multicasting gain of single base station transmission. It is therefore natural to ask whether these two gains are cumulative, i.e.,if a D2D network with both local communication (spatial reuse) and coded multicasting can provide an improved scaling law. Somewhat counterintuitively, we show that these gains do not cumulate (in terms of throughput scaling law).
|
2012.09259
|
Ajinkya Tejankar
|
Ajinkya Tejankar, Soroush Abbasi Koohpayegani, Vipin Pillai, Paolo
Favaro, and Hamed Pirsiavash
|
ISD: Self-Supervised Learning by Iterative Similarity Distillation
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Recently, contrastive learning has achieved great results in self-supervised
learning, where the main idea is to push two augmentations of an image
(positive pairs) closer compared to other random images (negative pairs). We
argue that not all random images are equal. Hence, we introduce a self
supervised learning algorithm where we use a soft similarity for the negative
images rather than a binary distinction between positive and negative pairs. We
iteratively distill a slowly evolving teacher model to the student model by
capturing the similarity of a query image to some random images and
transferring that knowledge to the student. We argue that our method is less
constrained compared to recent contrastive learning methods, so it can learn
better features. Specifically, our method should handle unbalanced and
unlabeled data better than existing contrastive learning methods, because the
randomly chosen negative set might include many samples that are semantically
similar to the query image. In this case, our method labels them as highly
similar while standard contrastive methods label them as negative pairs. Our
method achieves comparable results to the state-of-the-art models. We also show
that our method performs better in the settings where the unlabeled data is
unbalanced. Our code is available here: https://github.com/UMBCvision/ISD.
|
[
{
"created": "Wed, 16 Dec 2020 20:50:17 GMT",
"version": "v1"
},
{
"created": "Mon, 10 May 2021 14:20:59 GMT",
"version": "v2"
},
{
"created": "Fri, 10 Sep 2021 15:06:27 GMT",
"version": "v3"
}
] |
2021-09-13
|
[
[
"Tejankar",
"Ajinkya",
""
],
[
"Koohpayegani",
"Soroush Abbasi",
""
],
[
"Pillai",
"Vipin",
""
],
[
"Favaro",
"Paolo",
""
],
[
"Pirsiavash",
"Hamed",
""
]
] |
Recently, contrastive learning has achieved great results in self-supervised learning, where the main idea is to push two augmentations of an image (positive pairs) closer compared to other random images (negative pairs). We argue that not all random images are equal. Hence, we introduce a self supervised learning algorithm where we use a soft similarity for the negative images rather than a binary distinction between positive and negative pairs. We iteratively distill a slowly evolving teacher model to the student model by capturing the similarity of a query image to some random images and transferring that knowledge to the student. We argue that our method is less constrained compared to recent contrastive learning methods, so it can learn better features. Specifically, our method should handle unbalanced and unlabeled data better than existing contrastive learning methods, because the randomly chosen negative set might include many samples that are semantically similar to the query image. In this case, our method labels them as highly similar while standard contrastive methods label them as negative pairs. Our method achieves comparable results to the state-of-the-art models. We also show that our method performs better in the settings where the unlabeled data is unbalanced. Our code is available here: https://github.com/UMBCvision/ISD.
|
2011.11579
|
Megan Johnson
|
Megan Johnson, Jae-Hun Jung
|
The Interconnectivity Vector: A Finite-Dimensional Vector Representation
of Persistent Homology
|
28 pages, 20 figures
| null | null | null |
cs.CG cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Persistent Homology (PH) is a useful tool to study the underlying structure
of a data set. Persistence Diagrams (PDs), which are 2D multisets of points,
are a concise summary of the information found by studying the PH of a data
set. However, PDs are difficult to incorporate into a typical machine learning
workflow. To that end, two main methods for representing PDs have been
developed: kernel methods and vectorization methods. In this paper we propose a
new finite-dimensional vector, called the interconnectivity vector,
representation of a PD adapted from Bag-of-Words (BoW). This new representation
is constructed to demonstrate the connections between the homological features
of a data set. This initial definition of the interconnectivity vector proves
to be unstable, but we introduce a stabilized version of the vector and prove
its stability with respect to small perturbations in the inputs. We evaluate
both versions of the presented vectorization on several data sets and show
their high discriminative power.
|
[
{
"created": "Mon, 23 Nov 2020 17:43:06 GMT",
"version": "v1"
}
] |
2020-11-24
|
[
[
"Johnson",
"Megan",
""
],
[
"Jung",
"Jae-Hun",
""
]
] |
Persistent Homology (PH) is a useful tool to study the underlying structure of a data set. Persistence Diagrams (PDs), which are 2D multisets of points, are a concise summary of the information found by studying the PH of a data set. However, PDs are difficult to incorporate into a typical machine learning workflow. To that end, two main methods for representing PDs have been developed: kernel methods and vectorization methods. In this paper we propose a new finite-dimensional vector, called the interconnectivity vector, representation of a PD adapted from Bag-of-Words (BoW). This new representation is constructed to demonstrate the connections between the homological features of a data set. This initial definition of the interconnectivity vector proves to be unstable, but we introduce a stabilized version of the vector and prove its stability with respect to small perturbations in the inputs. We evaluate both versions of the presented vectorization on several data sets and show their high discriminative power.
|
2204.05176
|
Arushi Jain
|
Arushi Jain, Sharan Vaswani, Reza Babanezhad, Csaba Szepesvari, Doina
Precup
|
Towards Painless Policy Optimization for Constrained MDPs
|
Paper under submission. 27 pages, 12 figures
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study policy optimization in an infinite horizon, $\gamma$-discounted
constrained Markov decision process (CMDP). Our objective is to return a policy
that achieves large expected reward with a small constraint violation. We
consider the online setting with linear function approximation and assume
global access to the corresponding features. We propose a generic primal-dual
framework that allows us to bound the reward sub-optimality and constraint
violation for arbitrary algorithms in terms of their primal and dual regret on
online linear optimization problems. We instantiate this framework to use
coin-betting algorithms and propose the Coin Betting Politex (CBP) algorithm.
Assuming that the action-value functions are $\varepsilon_b$-close to the span
of the $d$-dimensional state-action features and no sampling errors, we prove
that $T$ iterations of CBP result in an $O\left(\frac{1}{(1 - \gamma)^3
\sqrt{T}} + \frac{\varepsilon_b\sqrt{d}}{(1 - \gamma)^2} \right)$ reward
sub-optimality and an $O\left(\frac{1}{(1 - \gamma)^2 \sqrt{T}} +
\frac{\varepsilon_b \sqrt{d}}{1 - \gamma} \right)$ constraint violation.
Importantly, unlike gradient descent-ascent and other recent methods, CBP does
not require extensive hyperparameter tuning. Via experiments on synthetic and
Cartpole environments, we demonstrate the effectiveness and robustness of CBP.
|
[
{
"created": "Mon, 11 Apr 2022 15:08:09 GMT",
"version": "v1"
}
] |
2022-04-12
|
[
[
"Jain",
"Arushi",
""
],
[
"Vaswani",
"Sharan",
""
],
[
"Babanezhad",
"Reza",
""
],
[
"Szepesvari",
"Csaba",
""
],
[
"Precup",
"Doina",
""
]
] |
We study policy optimization in an infinite horizon, $\gamma$-discounted constrained Markov decision process (CMDP). Our objective is to return a policy that achieves large expected reward with a small constraint violation. We consider the online setting with linear function approximation and assume global access to the corresponding features. We propose a generic primal-dual framework that allows us to bound the reward sub-optimality and constraint violation for arbitrary algorithms in terms of their primal and dual regret on online linear optimization problems. We instantiate this framework to use coin-betting algorithms and propose the Coin Betting Politex (CBP) algorithm. Assuming that the action-value functions are $\varepsilon_b$-close to the span of the $d$-dimensional state-action features and no sampling errors, we prove that $T$ iterations of CBP result in an $O\left(\frac{1}{(1 - \gamma)^3 \sqrt{T}} + \frac{\varepsilon_b\sqrt{d}}{(1 - \gamma)^2} \right)$ reward sub-optimality and an $O\left(\frac{1}{(1 - \gamma)^2 \sqrt{T}} + \frac{\varepsilon_b \sqrt{d}}{1 - \gamma} \right)$ constraint violation. Importantly, unlike gradient descent-ascent and other recent methods, CBP does not require extensive hyperparameter tuning. Via experiments on synthetic and Cartpole environments, we demonstrate the effectiveness and robustness of CBP.
|
2110.09508
|
Ekta Gavas
|
Ekta Gavas and Kaustubh Olpadkar
|
Deep CNNs for Peripheral Blood Cell Classification
|
20 pages, 14 figures, Submitted at MIDL 2021
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The application of machine learning techniques to the medical domain is
especially challenging due to the required level of precision and the
incurrence of huge risks of minute errors. Employing these techniques to a more
complex subdomain of hematological diagnosis seems quite promising, with
automatic identification of blood cell types, which can help in detection of
hematologic disorders. In this paper, we benchmark 27 popular deep
convolutional neural network architectures on the microscopic peripheral blood
cell images dataset. The dataset is publicly available, with large number of
normal peripheral blood cells acquired using the CellaVision DM96 analyzer and
identified by expert pathologists into eight different cell types. We fine-tune
the state-of-the-art image classification models pre-trained on the ImageNet
dataset for blood cell classification. We exploit data augmentation techniques
during training to avoid overfitting and achieve generalization. An ensemble of
the top performing models obtains significant improvements over past published
works, achieving the state-of-the-art results with a classification accuracy of
99.51%. Our work provides empirical baselines and benchmarks on standard
deep-learning architectures for microscopic peripheral blood cell recognition
task.
|
[
{
"created": "Mon, 18 Oct 2021 17:56:07 GMT",
"version": "v1"
}
] |
2021-10-19
|
[
[
"Gavas",
"Ekta",
""
],
[
"Olpadkar",
"Kaustubh",
""
]
] |
The application of machine learning techniques to the medical domain is especially challenging due to the required level of precision and the incurrence of huge risks of minute errors. Employing these techniques to a more complex subdomain of hematological diagnosis seems quite promising, with automatic identification of blood cell types, which can help in detection of hematologic disorders. In this paper, we benchmark 27 popular deep convolutional neural network architectures on the microscopic peripheral blood cell images dataset. The dataset is publicly available, with large number of normal peripheral blood cells acquired using the CellaVision DM96 analyzer and identified by expert pathologists into eight different cell types. We fine-tune the state-of-the-art image classification models pre-trained on the ImageNet dataset for blood cell classification. We exploit data augmentation techniques during training to avoid overfitting and achieve generalization. An ensemble of the top performing models obtains significant improvements over past published works, achieving the state-of-the-art results with a classification accuracy of 99.51%. Our work provides empirical baselines and benchmarks on standard deep-learning architectures for microscopic peripheral blood cell recognition task.
|
1811.11881
|
Karthik Abinav Sankararaman
|
Nicole Immorlica and Karthik Abinav Sankararaman and Robert Schapire
and Aleksandrs Slivkins
|
Adversarial Bandits with Knapsacks
|
The extended abstract appeared in FOCS 2019. The definitive version
was published in JACM '22. V8 is the latest version with all technical
changes. Subsequent versions fixes minor LATEX presentation issues
| null | null | null |
cs.DS cs.LG stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
We consider Bandits with Knapsacks (henceforth, BwK), a general model for
multi-armed bandits under supply/budget constraints. In particular, a bandit
algorithm needs to solve a well-known knapsack problem: find an optimal packing
of items into a limited-size knapsack. The BwK problem is a common
generalization of numerous motivating examples, which range from dynamic
pricing to repeated auctions to dynamic ad allocation to network routing and
scheduling. While the prior work on BwK focused on the stochastic version, we
pioneer the other extreme in which the outcomes can be chosen adversarially.
This is a considerably harder problem, compared to both the stochastic version
and the "classic" adversarial bandits, in that regret minimization is no longer
feasible. Instead, the objective is to minimize the competitive ratio: the
ratio of the benchmark reward to the algorithm's reward.
We design an algorithm with competitive ratio O(log T) relative to the best
fixed distribution over actions, where T is the time horizon; we also prove a
matching lower bound. The key conceptual contribution is a new perspective on
the stochastic version of the problem. We suggest a new algorithm for the
stochastic version, which builds on the framework of regret minimization in
repeated games and admits a substantially simpler analysis compared to prior
work. We then analyze this algorithm for the adversarial version and use it as
a subroutine to solve the latter.
|
[
{
"created": "Wed, 28 Nov 2018 23:43:11 GMT",
"version": "v1"
},
{
"created": "Mon, 6 Feb 2023 01:43:48 GMT",
"version": "v10"
},
{
"created": "Tue, 7 Mar 2023 04:06:03 GMT",
"version": "v11"
},
{
"created": "Wed, 19 Dec 2018 02:13:00 GMT",
"version": "v2"
},
{
"created": "Thu, 14 Mar 2019 17:12:51 GMT",
"version": "v3"
},
{
"created": "Fri, 22 Mar 2019 22:17:04 GMT",
"version": "v4"
},
{
"created": "Sun, 13 Oct 2019 05:01:32 GMT",
"version": "v5"
},
{
"created": "Fri, 6 Nov 2020 19:18:05 GMT",
"version": "v6"
},
{
"created": "Thu, 23 Sep 2021 23:52:00 GMT",
"version": "v7"
},
{
"created": "Tue, 19 Jul 2022 05:21:00 GMT",
"version": "v8"
},
{
"created": "Wed, 3 Aug 2022 06:11:18 GMT",
"version": "v9"
}
] |
2023-03-08
|
[
[
"Immorlica",
"Nicole",
""
],
[
"Sankararaman",
"Karthik Abinav",
""
],
[
"Schapire",
"Robert",
""
],
[
"Slivkins",
"Aleksandrs",
""
]
] |
We consider Bandits with Knapsacks (henceforth, BwK), a general model for multi-armed bandits under supply/budget constraints. In particular, a bandit algorithm needs to solve a well-known knapsack problem: find an optimal packing of items into a limited-size knapsack. The BwK problem is a common generalization of numerous motivating examples, which range from dynamic pricing to repeated auctions to dynamic ad allocation to network routing and scheduling. While the prior work on BwK focused on the stochastic version, we pioneer the other extreme in which the outcomes can be chosen adversarially. This is a considerably harder problem, compared to both the stochastic version and the "classic" adversarial bandits, in that regret minimization is no longer feasible. Instead, the objective is to minimize the competitive ratio: the ratio of the benchmark reward to the algorithm's reward. We design an algorithm with competitive ratio O(log T) relative to the best fixed distribution over actions, where T is the time horizon; we also prove a matching lower bound. The key conceptual contribution is a new perspective on the stochastic version of the problem. We suggest a new algorithm for the stochastic version, which builds on the framework of regret minimization in repeated games and admits a substantially simpler analysis compared to prior work. We then analyze this algorithm for the adversarial version and use it as a subroutine to solve the latter.
|
1712.09726
|
Sanaullah Manzoor
|
Sanaullah Manzoor, Ghulam Abbas, Masroor Hussain
|
CHOKeD: A Fair Active Queue Management System
|
MS thesis
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fairness is the significant factor to sustain best effort delivery of network
services. Now-a-days, real-time multimedia applications have evolved largely
over the Internet. Most of multimedia services are unresponsive during network
congestion. Unresponsive traffic streams steal most network bandwidth and
starve out some other flows which are responsive during network congestion. In
the presence of these unresponsive traffic flows, protection of responsive
flows has become a major issue. Many Active Queue Management (AQM) based
solutions have been recommended to protect responsive traffic flows from
unresponsive ones and to ensure fairness among all traffic flows. The thesis
proposes a novel AQM scheme CHOKeD, to deliver fairness among all flows of a
congested link. It is a completely stateless approach. CHOKeD is based on
dynamic drawing factor to penalize unresponsive traffic. It successfully
protects responsive flows in the presence of unresponsive flows. CHOKeD
features such as fairness, high throughput of responsive traffic and stateless
design, are encouraging factors for its deployment over the edge as well as the
network core routers. Extensive simulations have been carried out to evaluate
its performance under real-time network scenarios
|
[
{
"created": "Sat, 23 Dec 2017 10:06:58 GMT",
"version": "v1"
}
] |
2017-12-29
|
[
[
"Manzoor",
"Sanaullah",
""
],
[
"Abbas",
"Ghulam",
""
],
[
"Hussain",
"Masroor",
""
]
] |
Fairness is the significant factor to sustain best effort delivery of network services. Now-a-days, real-time multimedia applications have evolved largely over the Internet. Most of multimedia services are unresponsive during network congestion. Unresponsive traffic streams steal most network bandwidth and starve out some other flows which are responsive during network congestion. In the presence of these unresponsive traffic flows, protection of responsive flows has become a major issue. Many Active Queue Management (AQM) based solutions have been recommended to protect responsive traffic flows from unresponsive ones and to ensure fairness among all traffic flows. The thesis proposes a novel AQM scheme CHOKeD, to deliver fairness among all flows of a congested link. It is a completely stateless approach. CHOKeD is based on dynamic drawing factor to penalize unresponsive traffic. It successfully protects responsive flows in the presence of unresponsive flows. CHOKeD features such as fairness, high throughput of responsive traffic and stateless design, are encouraging factors for its deployment over the edge as well as the network core routers. Extensive simulations have been carried out to evaluate its performance under real-time network scenarios
|
2202.05983
|
Kailas Vodrahalli
|
Kailas Vodrahalli, Tobias Gerstenberg, and James Zou
|
Uncalibrated Models Can Improve Human-AI Collaboration
|
21 pages, 12 figures, NeurIPS 2022
| null | null | null |
cs.AI cs.CV cs.HC cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In many practical applications of AI, an AI model is used as a decision aid
for human users. The AI provides advice that a human (sometimes) incorporates
into their decision-making process. The AI advice is often presented with some
measure of "confidence" that the human can use to calibrate how much they
depend on or trust the advice. In this paper, we present an initial exploration
that suggests showing AI models as more confident than they actually are, even
when the original AI is well-calibrated, can improve human-AI performance
(measured as the accuracy and confidence of the human's final prediction after
seeing the AI advice). We first train a model to predict human incorporation of
AI advice using data from thousands of human-AI interactions. This enables us
to explicitly estimate how to transform the AI's prediction confidence, making
the AI uncalibrated, in order to improve the final human prediction. We
empirically validate our results across four different tasks--dealing with
images, text and tabular data--involving hundreds of human participants. We
further support our findings with simulation analysis. Our findings suggest the
importance of jointly optimizing the human-AI system as opposed to the standard
paradigm of optimizing the AI model alone.
|
[
{
"created": "Sat, 12 Feb 2022 04:51:00 GMT",
"version": "v1"
},
{
"created": "Wed, 1 Jun 2022 22:25:53 GMT",
"version": "v2"
},
{
"created": "Fri, 28 Oct 2022 01:43:51 GMT",
"version": "v3"
}
] |
2022-10-31
|
[
[
"Vodrahalli",
"Kailas",
""
],
[
"Gerstenberg",
"Tobias",
""
],
[
"Zou",
"James",
""
]
] |
In many practical applications of AI, an AI model is used as a decision aid for human users. The AI provides advice that a human (sometimes) incorporates into their decision-making process. The AI advice is often presented with some measure of "confidence" that the human can use to calibrate how much they depend on or trust the advice. In this paper, we present an initial exploration that suggests showing AI models as more confident than they actually are, even when the original AI is well-calibrated, can improve human-AI performance (measured as the accuracy and confidence of the human's final prediction after seeing the AI advice). We first train a model to predict human incorporation of AI advice using data from thousands of human-AI interactions. This enables us to explicitly estimate how to transform the AI's prediction confidence, making the AI uncalibrated, in order to improve the final human prediction. We empirically validate our results across four different tasks--dealing with images, text and tabular data--involving hundreds of human participants. We further support our findings with simulation analysis. Our findings suggest the importance of jointly optimizing the human-AI system as opposed to the standard paradigm of optimizing the AI model alone.
|
2009.08636
|
Jihyeon Roh
|
Jihyeon Roh, Huiseong Gim, Soo-Young Lee
|
Hierarchical GPT with Congruent Transformers for Multi-Sentence Language
Models
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We report a GPT-based multi-sentence language model for dialogue generation
and document understanding. First, we propose a hierarchical GPT which consists
of three blocks, i.e., a sentence encoding block, a sentence generating block,
and a sentence decoding block. The sentence encoding and decoding blocks are
basically the encoder-decoder blocks of the standard Transformers, which work
on each sentence independently. The sentence generating block is inserted
between the encoding and decoding blocks, and generates the next sentence
embedding vector from the previous sentence embedding vectors. We believe it is
the way human make conversation and understand paragraphs and documents. Since
each sentence may consist of fewer words, the sentence encoding and decoding
Transformers can use much smaller dimensional embedding vectors. Secondly, we
note the attention in the Transformers utilizes the inner-product similarity
measure. Therefore, to compare the two vectors in the same space, we set the
transform matrices for queries and keys to be the same. Otherwise, the
similarity concept is incongruent. We report experimental results to show that
these two modifications increase the language model performance for tasks with
multiple sentences.
|
[
{
"created": "Fri, 18 Sep 2020 05:55:37 GMT",
"version": "v1"
}
] |
2020-09-21
|
[
[
"Roh",
"Jihyeon",
""
],
[
"Gim",
"Huiseong",
""
],
[
"Lee",
"Soo-Young",
""
]
] |
We report a GPT-based multi-sentence language model for dialogue generation and document understanding. First, we propose a hierarchical GPT which consists of three blocks, i.e., a sentence encoding block, a sentence generating block, and a sentence decoding block. The sentence encoding and decoding blocks are basically the encoder-decoder blocks of the standard Transformers, which work on each sentence independently. The sentence generating block is inserted between the encoding and decoding blocks, and generates the next sentence embedding vector from the previous sentence embedding vectors. We believe it is the way human make conversation and understand paragraphs and documents. Since each sentence may consist of fewer words, the sentence encoding and decoding Transformers can use much smaller dimensional embedding vectors. Secondly, we note the attention in the Transformers utilizes the inner-product similarity measure. Therefore, to compare the two vectors in the same space, we set the transform matrices for queries and keys to be the same. Otherwise, the similarity concept is incongruent. We report experimental results to show that these two modifications increase the language model performance for tasks with multiple sentences.
|
1501.01905
|
Unai Ugalde
|
U. Ugalde, J. Anduaga, F. Martinez and A. Iturrospe
|
SHM method for damage localization based on substructuring and VARX
models
| null | null | null | null |
cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A novel damage localization method is proposed, which is based on a
substructuring approach and makes use of Vector Auto-Regressive with eXogenous
input (VARX) models. The substructuring approach aims to divide the monitored
structure into several multi-DOF isolated substructures. Later, each individual
substructure is modeled by a VARX model, and the health of each substructure is
determined analyzing the variation of the VARX model. The method allows to
detect whether the isolated substructure is damaged, and besides allows to
locate the damage within the substructure. Only measured displacement data is
required to estimate the isolated substructure's VARX model. Moreover, it is
not necessary to have a priori knowledge of the structural model. The proposed
method is validated by simulations of an eight-storey shear building.
|
[
{
"created": "Thu, 8 Jan 2015 17:22:31 GMT",
"version": "v1"
}
] |
2015-01-09
|
[
[
"Ugalde",
"U.",
""
],
[
"Anduaga",
"J.",
""
],
[
"Martinez",
"F.",
""
],
[
"Iturrospe",
"A.",
""
]
] |
A novel damage localization method is proposed, which is based on a substructuring approach and makes use of Vector Auto-Regressive with eXogenous input (VARX) models. The substructuring approach aims to divide the monitored structure into several multi-DOF isolated substructures. Later, each individual substructure is modeled by a VARX model, and the health of each substructure is determined analyzing the variation of the VARX model. The method allows to detect whether the isolated substructure is damaged, and besides allows to locate the damage within the substructure. Only measured displacement data is required to estimate the isolated substructure's VARX model. Moreover, it is not necessary to have a priori knowledge of the structural model. The proposed method is validated by simulations of an eight-storey shear building.
|
2309.05076
|
Maximilian Croissant
|
Maximilian Croissant, Madeleine Frister, Guy Schofield, Cade McCall
|
An Appraisal-Based Chain-Of-Emotion Architecture for Affective Language
Model Game Agents
| null | null | null | null |
cs.AI cs.CL cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
The development of believable, natural, and interactive digital artificial
agents is a field of growing interest. Theoretical uncertainties and technical
barriers present considerable challenges to the field, particularly with
regards to developing agents that effectively simulate human emotions. Large
language models (LLMs) might address these issues by tapping common patterns in
situational appraisal. In three empirical experiments, this study tests the
capabilities of LLMs to solve emotional intelligence tasks and to simulate
emotions. It presents and evaluates a new chain-of-emotion architecture for
emotion simulation within video games, based on psychological appraisal
research. Results show that it outperforms standard LLM architectures on a
range of user experience and content analysis metrics. This study therefore
provides early evidence of how to construct and test affective agents based on
cognitive processes represented in language models.
|
[
{
"created": "Sun, 10 Sep 2023 16:55:49 GMT",
"version": "v1"
}
] |
2023-09-12
|
[
[
"Croissant",
"Maximilian",
""
],
[
"Frister",
"Madeleine",
""
],
[
"Schofield",
"Guy",
""
],
[
"McCall",
"Cade",
""
]
] |
The development of believable, natural, and interactive digital artificial agents is a field of growing interest. Theoretical uncertainties and technical barriers present considerable challenges to the field, particularly with regards to developing agents that effectively simulate human emotions. Large language models (LLMs) might address these issues by tapping common patterns in situational appraisal. In three empirical experiments, this study tests the capabilities of LLMs to solve emotional intelligence tasks and to simulate emotions. It presents and evaluates a new chain-of-emotion architecture for emotion simulation within video games, based on psychological appraisal research. Results show that it outperforms standard LLM architectures on a range of user experience and content analysis metrics. This study therefore provides early evidence of how to construct and test affective agents based on cognitive processes represented in language models.
|
1905.08205
|
Jiaqi Guo
|
Jiaqi Guo, Zecheng Zhan, Yan Gao, Yan Xiao, Jian-Guang Lou, Ting Liu,
Dongmei Zhang
|
Towards Complex Text-to-SQL in Cross-Domain Database with Intermediate
Representation
|
To appear in ACL 2019
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a neural approach called IRNet for complex and cross-domain
Text-to-SQL. IRNet aims to address two challenges: 1) the mismatch between
intents expressed in natural language (NL) and the implementation details in
SQL; 2) the challenge in predicting columns caused by the large number of
out-of-domain words. Instead of end-to-end synthesizing a SQL query, IRNet
decomposes the synthesis process into three phases. In the first phase, IRNet
performs a schema linking over a question and a database schema. Then, IRNet
adopts a grammar-based neural model to synthesize a SemQL query which is an
intermediate representation that we design to bridge NL and SQL. Finally, IRNet
deterministically infers a SQL query from the synthesized SemQL query with
domain knowledge. On the challenging Text-to-SQL benchmark Spider, IRNet
achieves 46.7% accuracy, obtaining 19.5% absolute improvement over previous
state-of-the-art approaches. At the time of writing, IRNet achieves the first
position on the Spider leaderboard.
|
[
{
"created": "Mon, 20 May 2019 16:44:00 GMT",
"version": "v1"
},
{
"created": "Wed, 29 May 2019 02:50:00 GMT",
"version": "v2"
}
] |
2019-05-30
|
[
[
"Guo",
"Jiaqi",
""
],
[
"Zhan",
"Zecheng",
""
],
[
"Gao",
"Yan",
""
],
[
"Xiao",
"Yan",
""
],
[
"Lou",
"Jian-Guang",
""
],
[
"Liu",
"Ting",
""
],
[
"Zhang",
"Dongmei",
""
]
] |
We present a neural approach called IRNet for complex and cross-domain Text-to-SQL. IRNet aims to address two challenges: 1) the mismatch between intents expressed in natural language (NL) and the implementation details in SQL; 2) the challenge in predicting columns caused by the large number of out-of-domain words. Instead of end-to-end synthesizing a SQL query, IRNet decomposes the synthesis process into three phases. In the first phase, IRNet performs a schema linking over a question and a database schema. Then, IRNet adopts a grammar-based neural model to synthesize a SemQL query which is an intermediate representation that we design to bridge NL and SQL. Finally, IRNet deterministically infers a SQL query from the synthesized SemQL query with domain knowledge. On the challenging Text-to-SQL benchmark Spider, IRNet achieves 46.7% accuracy, obtaining 19.5% absolute improvement over previous state-of-the-art approaches. At the time of writing, IRNet achieves the first position on the Spider leaderboard.
|
1506.06668
|
Yoichi Ochiai Prof.
|
Yoichi Ochiai, Kota Kumagai, Takayuki Hoshi, Jun Rekimoto, Satoshi
Hasegawa, and Yoshio Hayasaki
|
Fairy Lights in Femtoseconds: Aerial and Volumetric Graphics Rendered by
Focused Femtosecond Laser Combined with Computational Holographic Fields
| null | null | null | null |
cs.GR cs.HC physics.optics
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a method of rendering aerial and volumetric graphics using
femtosecond lasers. A high-intensity laser excites a physical matter to emit
light at an arbitrary 3D position. Popular applications can then be explored
especially since plasma induced by a femtosecond laser is safer than that
generated by a nanosecond laser. There are two methods of rendering graphics
with a femtosecond laser in air: Producing holograms using spatial light
modulation technology, and scanning of a laser beam by a galvano mirror. The
holograms and workspace of the system proposed here occupy a volume of up to 1
cm^3; however, this size is scalable depending on the optical devices and their
setup. This paper provides details of the principles, system setup, and
experimental evaluation, and discussions on scalability, design space, and
applications of this system. We tested two laser sources: an adjustable (30-100
fs) laser which projects up to 1,000 pulses per second at energy up to 7 mJ per
pulse, and a 269-fs laser which projects up to 200,000 pulses per second at an
energy up to 50 uJ per pulse. We confirmed that the spatiotemporal resolution
of volumetric displays, implemented with these laser sources, is 4,000 and
200,000 dots per second. Although we focus on laser-induced plasma in air, the
discussion presented here is also applicable to other rendering principles such
as fluorescence and microbubble in solid/liquid materials.
|
[
{
"created": "Mon, 22 Jun 2015 16:20:34 GMT",
"version": "v1"
}
] |
2015-06-23
|
[
[
"Ochiai",
"Yoichi",
""
],
[
"Kumagai",
"Kota",
""
],
[
"Hoshi",
"Takayuki",
""
],
[
"Rekimoto",
"Jun",
""
],
[
"Hasegawa",
"Satoshi",
""
],
[
"Hayasaki",
"Yoshio",
""
]
] |
We present a method of rendering aerial and volumetric graphics using femtosecond lasers. A high-intensity laser excites a physical matter to emit light at an arbitrary 3D position. Popular applications can then be explored especially since plasma induced by a femtosecond laser is safer than that generated by a nanosecond laser. There are two methods of rendering graphics with a femtosecond laser in air: Producing holograms using spatial light modulation technology, and scanning of a laser beam by a galvano mirror. The holograms and workspace of the system proposed here occupy a volume of up to 1 cm^3; however, this size is scalable depending on the optical devices and their setup. This paper provides details of the principles, system setup, and experimental evaluation, and discussions on scalability, design space, and applications of this system. We tested two laser sources: an adjustable (30-100 fs) laser which projects up to 1,000 pulses per second at energy up to 7 mJ per pulse, and a 269-fs laser which projects up to 200,000 pulses per second at an energy up to 50 uJ per pulse. We confirmed that the spatiotemporal resolution of volumetric displays, implemented with these laser sources, is 4,000 and 200,000 dots per second. Although we focus on laser-induced plasma in air, the discussion presented here is also applicable to other rendering principles such as fluorescence and microbubble in solid/liquid materials.
|
1711.05091
|
Pawe{\l} Rz\k{a}\.zewski
|
Micha{\l} D\k{e}bski, Zbigniew Lonc, Pawe{\l} Rz\k{a}\.zewski
|
Sequences of radius $k$ for complete bipartite graphs
| null |
Discrete Applied Mathematics 225, pp. 51--63. 2017
|
10.1016/j.dam.2017.03.017
| null |
cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A \emph{$k$-radius sequence} for a graph $G$ is a sequence of vertices of $G$
(typically with repetitions) such that for every edge $uv$ of $G$ vertices $u$
and $v$ appear at least once within distance $k$ in the sequence. The length of
a shortest $k$-radius sequence for $G$ is denoted by $f_k(G)$. We give an
asymptotically tight estimation on $f_k(G)$ for complete bipartite graphs
{which matches a lower bound, valid for all bipartite graphs}. We also show
that determining $f_k(G)$ for an arbitrary graph $G$ is NP-hard for every
constant $k>1$.
|
[
{
"created": "Tue, 14 Nov 2017 14:09:59 GMT",
"version": "v1"
}
] |
2017-11-15
|
[
[
"Dębski",
"Michał",
""
],
[
"Lonc",
"Zbigniew",
""
],
[
"Rzążewski",
"Paweł",
""
]
] |
A \emph{$k$-radius sequence} for a graph $G$ is a sequence of vertices of $G$ (typically with repetitions) such that for every edge $uv$ of $G$ vertices $u$ and $v$ appear at least once within distance $k$ in the sequence. The length of a shortest $k$-radius sequence for $G$ is denoted by $f_k(G)$. We give an asymptotically tight estimation on $f_k(G)$ for complete bipartite graphs {which matches a lower bound, valid for all bipartite graphs}. We also show that determining $f_k(G)$ for an arbitrary graph $G$ is NP-hard for every constant $k>1$.
|
1312.7118
|
Jantawan Bangsuk
|
Bangsuk Jantawan and Cheng-Fa Tsai
|
The Development of Educational Quality Administration: a Case of
Technical College in Southern Thailand
| null | null | null | null |
cs.CY
|
http://creativecommons.org/licenses/publicdomain/
|
The purpose of this research were: to survey the needs of using the
information system for educational quality administration; to develop
Information System for Educational quality Administration (ISEs) in accordance
with quality assessment standard; to study the qualification of ISEs; and to
study satisfaction level of ISEs user. Subsequently, the tools of study have
been employed that there were the collection of 47 questionnaires and 5
interviews to specialist by responsible officers for Information center of
Technical colleges in Southern Thailand. The analysis of quantitative data has
employed descriptive statistics using mean and standard deviation as the tool
of measurement.
|
[
{
"created": "Thu, 26 Dec 2013 15:13:27 GMT",
"version": "v1"
}
] |
2013-12-30
|
[
[
"Jantawan",
"Bangsuk",
""
],
[
"Tsai",
"Cheng-Fa",
""
]
] |
The purpose of this research were: to survey the needs of using the information system for educational quality administration; to develop Information System for Educational quality Administration (ISEs) in accordance with quality assessment standard; to study the qualification of ISEs; and to study satisfaction level of ISEs user. Subsequently, the tools of study have been employed that there were the collection of 47 questionnaires and 5 interviews to specialist by responsible officers for Information center of Technical colleges in Southern Thailand. The analysis of quantitative data has employed descriptive statistics using mean and standard deviation as the tool of measurement.
|
2311.05665
|
Mrutyunjaya Panda
|
Mrutyunjaya Panda, Soumya Ranjan Mahanta
|
Explainable artificial intelligence for Healthcare applications using
Random Forest Classifier with LIME and SHAP
|
Chapter-6: Accepted Book Chapter in: Transparent, Interpretable and
Explainable AI Systems, BK Tripathy & Hari Seetha (Editors), CRC Press, May
2023
| null | null | null |
cs.LG cs.AI cs.CY
|
http://creativecommons.org/licenses/by/4.0/
|
With the advances in computationally efficient artificial Intelligence (AI)
techniques and their numerous applications in our everyday life, there is a
pressing need to understand the computational details hidden in black box AI
techniques such as most popular machine learning and deep learning techniques;
through more detailed explanations. The origin of explainable AI (xAI) is
coined from these challenges and recently gained more attention by the
researchers by adding explainability comprehensively in traditional AI systems.
This leads to develop an appropriate framework for successful applications of
xAI in real life scenarios with respect to innovations, risk mitigation,
ethical issues and logical values to the users. In this book chapter, an
in-depth analysis of several xAI frameworks and methods including LIME (Local
Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive
exPlanations) are provided. Random Forest Classifier as black box AI is used on
a publicly available Diabetes symptoms dataset with LIME and SHAP for better
interpretations. The results obtained are interesting in terms of transparency,
valid and trustworthiness in diabetes disease prediction.
|
[
{
"created": "Thu, 9 Nov 2023 11:43:10 GMT",
"version": "v1"
}
] |
2023-11-13
|
[
[
"Panda",
"Mrutyunjaya",
""
],
[
"Mahanta",
"Soumya Ranjan",
""
]
] |
With the advances in computationally efficient artificial Intelligence (AI) techniques and their numerous applications in our everyday life, there is a pressing need to understand the computational details hidden in black box AI techniques such as most popular machine learning and deep learning techniques; through more detailed explanations. The origin of explainable AI (xAI) is coined from these challenges and recently gained more attention by the researchers by adding explainability comprehensively in traditional AI systems. This leads to develop an appropriate framework for successful applications of xAI in real life scenarios with respect to innovations, risk mitigation, ethical issues and logical values to the users. In this book chapter, an in-depth analysis of several xAI frameworks and methods including LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations) are provided. Random Forest Classifier as black box AI is used on a publicly available Diabetes symptoms dataset with LIME and SHAP for better interpretations. The results obtained are interesting in terms of transparency, valid and trustworthiness in diabetes disease prediction.
|
1608.03037
|
Viorela Ila Dr.
|
Viorela Ila, Lukas Polok, Marek Solony and Pavel Svoboda
|
Highly Efficient Compact Pose SLAM with SLAM++
|
21 pages, 10 figures, 4 tables
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Maximum likelihood estimation (MLE) is a well-known estimation method used in
many robotic and computer vision applications. Under Gaussian assumption, the
MLE converts to a nonlinear least squares (NLS) problem. Efficient solutions to
NLS exist and they are based on iteratively solving sparse linear systems until
convergence. In general, the existing solutions provide only an estimation of
the mean state vector, the resulting covariance being computationally too
expensive to recover. Nevertheless, in many simultaneous localisation and
mapping (SLAM) applications, knowing only the mean vector is not enough. Data
association, obtaining reduced state representations, active decisions and next
best view are only a few of the applications that require fast state covariance
recovery. Furthermore, computer vision and robotic applications are in general
performed online. In this case, the state is updated and recomputed every step
and its size is continuously growing, therefore, the estimation process may
become highly computationally demanding. This paper introduces a general
framework for incremental MLE called SLAM++, which fully benefits from the
incremental nature of the online applications, and provides efficient
estimation of both the mean and the covariance of the estimate. Based on that,
we propose a strategy for maintaining a sparse and scalable state
representation for large scale mapping, which uses information theory measures
to integrate only informative and non-redundant contributions to the state
representation. SLAM++ differs from existing implementations by performing all
the matrix operations by blocks. This led to extremely fast matrix manipulation
and arithmetic operations. Even though this paper tests SLAM++ efficiency on
SLAM problems, its applicability remains general.
|
[
{
"created": "Wed, 10 Aug 2016 04:13:49 GMT",
"version": "v1"
}
] |
2016-08-11
|
[
[
"Ila",
"Viorela",
""
],
[
"Polok",
"Lukas",
""
],
[
"Solony",
"Marek",
""
],
[
"Svoboda",
"Pavel",
""
]
] |
Maximum likelihood estimation (MLE) is a well-known estimation method used in many robotic and computer vision applications. Under Gaussian assumption, the MLE converts to a nonlinear least squares (NLS) problem. Efficient solutions to NLS exist and they are based on iteratively solving sparse linear systems until convergence. In general, the existing solutions provide only an estimation of the mean state vector, the resulting covariance being computationally too expensive to recover. Nevertheless, in many simultaneous localisation and mapping (SLAM) applications, knowing only the mean vector is not enough. Data association, obtaining reduced state representations, active decisions and next best view are only a few of the applications that require fast state covariance recovery. Furthermore, computer vision and robotic applications are in general performed online. In this case, the state is updated and recomputed every step and its size is continuously growing, therefore, the estimation process may become highly computationally demanding. This paper introduces a general framework for incremental MLE called SLAM++, which fully benefits from the incremental nature of the online applications, and provides efficient estimation of both the mean and the covariance of the estimate. Based on that, we propose a strategy for maintaining a sparse and scalable state representation for large scale mapping, which uses information theory measures to integrate only informative and non-redundant contributions to the state representation. SLAM++ differs from existing implementations by performing all the matrix operations by blocks. This led to extremely fast matrix manipulation and arithmetic operations. Even though this paper tests SLAM++ efficiency on SLAM problems, its applicability remains general.
|
2208.11836
|
Mingqi Shao
|
Mingqi Shao, Chongkun Xia, Dongxu Duan, Xueqian Wang
|
Polarimetric Inverse Rendering for Transparent Shapes Reconstruction
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we propose a novel method for the detailed reconstruction of
transparent objects by exploiting polarimetric cues. Most of the existing
methods usually lack sufficient constraints and suffer from the over-smooth
problem. Hence, we introduce polarization information as a complementary cue.
We implicitly represent the object's geometry as a neural network, while the
polarization render is capable of rendering the object's polarization images
from the given shape and illumination configuration. Direct comparison of the
rendered polarization images to the real-world captured images will have
additional errors due to the transmission in the transparent object. To address
this issue, the concept of reflection percentage which represents the
proportion of the reflection component is introduced. The reflection percentage
is calculated by a ray tracer and then used for weighting the polarization
loss. We build a polarization dataset for multi-view transparent shapes
reconstruction to verify our method. The experimental results show that our
method is capable of recovering detailed shapes and improving the
reconstruction quality of transparent objects. Our dataset and code will be
publicly available at https://github.com/shaomq2187/TransPIR.
|
[
{
"created": "Thu, 25 Aug 2022 02:52:31 GMT",
"version": "v1"
}
] |
2022-08-26
|
[
[
"Shao",
"Mingqi",
""
],
[
"Xia",
"Chongkun",
""
],
[
"Duan",
"Dongxu",
""
],
[
"Wang",
"Xueqian",
""
]
] |
In this work, we propose a novel method for the detailed reconstruction of transparent objects by exploiting polarimetric cues. Most of the existing methods usually lack sufficient constraints and suffer from the over-smooth problem. Hence, we introduce polarization information as a complementary cue. We implicitly represent the object's geometry as a neural network, while the polarization render is capable of rendering the object's polarization images from the given shape and illumination configuration. Direct comparison of the rendered polarization images to the real-world captured images will have additional errors due to the transmission in the transparent object. To address this issue, the concept of reflection percentage which represents the proportion of the reflection component is introduced. The reflection percentage is calculated by a ray tracer and then used for weighting the polarization loss. We build a polarization dataset for multi-view transparent shapes reconstruction to verify our method. The experimental results show that our method is capable of recovering detailed shapes and improving the reconstruction quality of transparent objects. Our dataset and code will be publicly available at https://github.com/shaomq2187/TransPIR.
|
2406.06602
|
Run-Xuan Tang
|
Run-Xuan Tang
|
Modeling of New Energy Vehicles' Impact on Urban Ecology Focusing on
Behavior
|
13 pages
| null | null | null |
cs.LG cs.SY eess.SY stat.AP
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The surging demand for new energy vehicles is driven by the imperative to
conserve energy, reduce emissions, and enhance the ecological ambiance. By
conducting behavioral analysis and mining usage patterns of new energy
vehicles, particular patterns can be identified. For instance, overloading the
battery, operating with low battery power, and driving at excessive speeds can
all detrimentally affect the battery's performance. To assess the impact of
such driving behavior on the urban ecology, an environmental computational
modeling method has been proposed to simulate the interaction between new
energy vehicles and the environment. To extend the time series data of the
vehicle's entire life cycle and the ecological environment within the model
sequence data, the LSTM model with Bayesian optimizer is utilized for
simulation. The analysis revealed the detrimental effects of poor driving
behavior on the environment.
|
[
{
"created": "Thu, 6 Jun 2024 14:03:52 GMT",
"version": "v1"
}
] |
2024-06-18
|
[
[
"Tang",
"Run-Xuan",
""
]
] |
The surging demand for new energy vehicles is driven by the imperative to conserve energy, reduce emissions, and enhance the ecological ambiance. By conducting behavioral analysis and mining usage patterns of new energy vehicles, particular patterns can be identified. For instance, overloading the battery, operating with low battery power, and driving at excessive speeds can all detrimentally affect the battery's performance. To assess the impact of such driving behavior on the urban ecology, an environmental computational modeling method has been proposed to simulate the interaction between new energy vehicles and the environment. To extend the time series data of the vehicle's entire life cycle and the ecological environment within the model sequence data, the LSTM model with Bayesian optimizer is utilized for simulation. The analysis revealed the detrimental effects of poor driving behavior on the environment.
|
1609.00017
|
Gordon Christie
|
Gordon Christie, Adam Shoemaker, Kevin Kochersberger, Pratap Tokekar,
Lance McLean, Alexander Leonessa
|
Radiation Search Operations using Scene Understanding with Autonomous
UAV and UGV
| null | null | null | null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Autonomously searching for hazardous radiation sources requires the ability
of the aerial and ground systems to understand the scene they are scouting. In
this paper, we present systems, algorithms, and experiments to perform
radiation search using unmanned aerial vehicles (UAV) and unmanned ground
vehicles (UGV) by employing semantic scene segmentation. The aerial data is
used to identify radiological points of interest, generate an orthophoto along
with a digital elevation model (DEM) of the scene, and perform semantic
segmentation to assign a category (e.g. road, grass) to each pixel in the
orthophoto. We perform semantic segmentation by training a model on a dataset
of images we collected and annotated, using the model to perform inference on
images of the test area unseen to the model, and then refining the results with
the DEM to better reason about category predictions at each pixel. We then use
all of these outputs to plan a path for a UGV carrying a LiDAR to map the
environment and avoid obstacles not present during the flight, and a radiation
detector to collect more precise radiation measurements from the ground.
Results of the analysis for each scenario tested favorably. We also note that
our approach is general and has the potential to work for a variety of
different sensing tasks.
|
[
{
"created": "Wed, 31 Aug 2016 20:00:46 GMT",
"version": "v1"
}
] |
2016-09-02
|
[
[
"Christie",
"Gordon",
""
],
[
"Shoemaker",
"Adam",
""
],
[
"Kochersberger",
"Kevin",
""
],
[
"Tokekar",
"Pratap",
""
],
[
"McLean",
"Lance",
""
],
[
"Leonessa",
"Alexander",
""
]
] |
Autonomously searching for hazardous radiation sources requires the ability of the aerial and ground systems to understand the scene they are scouting. In this paper, we present systems, algorithms, and experiments to perform radiation search using unmanned aerial vehicles (UAV) and unmanned ground vehicles (UGV) by employing semantic scene segmentation. The aerial data is used to identify radiological points of interest, generate an orthophoto along with a digital elevation model (DEM) of the scene, and perform semantic segmentation to assign a category (e.g. road, grass) to each pixel in the orthophoto. We perform semantic segmentation by training a model on a dataset of images we collected and annotated, using the model to perform inference on images of the test area unseen to the model, and then refining the results with the DEM to better reason about category predictions at each pixel. We then use all of these outputs to plan a path for a UGV carrying a LiDAR to map the environment and avoid obstacles not present during the flight, and a radiation detector to collect more precise radiation measurements from the ground. Results of the analysis for each scenario tested favorably. We also note that our approach is general and has the potential to work for a variety of different sensing tasks.
|
2312.15425
|
Shankhanil Mitra
|
Shankhanil Mitra and Rajiv Soundararajan
|
Knowledge Guided Semi-Supervised Learning for Quality Assessment of User
Generated Videos
|
Accepted to 38th AAAI conference on AI (AAAI 24)
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Perceptual quality assessment of user generated content (UGC) videos is
challenging due to the requirement of large scale human annotated videos for
training. In this work, we address this challenge by first designing a
self-supervised Spatio-Temporal Visual Quality Representation Learning
(ST-VQRL) framework to generate robust quality aware features for videos. Then,
we propose a dual-model based Semi Supervised Learning (SSL) method
specifically designed for the Video Quality Assessment (SSL-VQA) task, through
a novel knowledge transfer of quality predictions between the two models. Our
SSL-VQA method uses the ST-VQRL backbone to produce robust performances across
various VQA datasets including cross-database settings, despite being learned
with limited human annotated videos. Our model improves the state-of-the-art
performance when trained only with limited data by around 10%, and by around
15% when unlabelled data is also used in SSL. Source codes and checkpoints are
available at https://github.com/Shankhanil006/SSL-VQA.
|
[
{
"created": "Sun, 24 Dec 2023 07:32:03 GMT",
"version": "v1"
}
] |
2023-12-27
|
[
[
"Mitra",
"Shankhanil",
""
],
[
"Soundararajan",
"Rajiv",
""
]
] |
Perceptual quality assessment of user generated content (UGC) videos is challenging due to the requirement of large scale human annotated videos for training. In this work, we address this challenge by first designing a self-supervised Spatio-Temporal Visual Quality Representation Learning (ST-VQRL) framework to generate robust quality aware features for videos. Then, we propose a dual-model based Semi Supervised Learning (SSL) method specifically designed for the Video Quality Assessment (SSL-VQA) task, through a novel knowledge transfer of quality predictions between the two models. Our SSL-VQA method uses the ST-VQRL backbone to produce robust performances across various VQA datasets including cross-database settings, despite being learned with limited human annotated videos. Our model improves the state-of-the-art performance when trained only with limited data by around 10%, and by around 15% when unlabelled data is also used in SSL. Source codes and checkpoints are available at https://github.com/Shankhanil006/SSL-VQA.
|
2311.00947
|
Hongyang Du
|
Hongyang Du, Dusit Niyato, Jiawen Kang, Zehui Xiong, Ping Zhang,
Shuguang Cui, Xuemin Shen, Shiwen Mao, Zhu Han, Abbas Jamalipour, H. Vincent
Poor, and Dong In Kim
|
The Age of Generative AI and AI-Generated Everything
| null | null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Generative AI (GAI) has emerged as a significant advancement in artificial
intelligence, renowned for its language and image generation capabilities. This
paper presents ``AI-Generated Everything'' (AIGX), a concept that extends GAI
beyond mere content creation to real-time adaptation and control across diverse
technological domains. In networking, AIGX collaborates closely with physical,
data link, network, and application layers to enhance real-time network
management that responds to various system and service settings as well as
application and user requirements. Networks, in return, serve as crucial
components in further AIGX capability optimization through the AIGX lifecycle,
i.e., data collection, distributed pre-training, and rapid decision-making,
thereby establishing a mutually enhancing interplay. Moreover, we offer an
in-depth case study focused on power allocation to illustrate the
interdependence between AIGX and networking systems. Through this exploration,
the article analyzes the significant role of GAI for networking, clarifies the
ways networks augment AIGX functionalities, and underscores the virtuous
interactive cycle they form. This article paves the way for subsequent future
research aimed at fully unlocking the potential of GAI and networks.
|
[
{
"created": "Thu, 2 Nov 2023 02:28:40 GMT",
"version": "v1"
}
] |
2023-11-03
|
[
[
"Du",
"Hongyang",
""
],
[
"Niyato",
"Dusit",
""
],
[
"Kang",
"Jiawen",
""
],
[
"Xiong",
"Zehui",
""
],
[
"Zhang",
"Ping",
""
],
[
"Cui",
"Shuguang",
""
],
[
"Shen",
"Xuemin",
""
],
[
"Mao",
"Shiwen",
""
],
[
"Han",
"Zhu",
""
],
[
"Jamalipour",
"Abbas",
""
],
[
"Poor",
"H. Vincent",
""
],
[
"Kim",
"Dong In",
""
]
] |
Generative AI (GAI) has emerged as a significant advancement in artificial intelligence, renowned for its language and image generation capabilities. This paper presents ``AI-Generated Everything'' (AIGX), a concept that extends GAI beyond mere content creation to real-time adaptation and control across diverse technological domains. In networking, AIGX collaborates closely with physical, data link, network, and application layers to enhance real-time network management that responds to various system and service settings as well as application and user requirements. Networks, in return, serve as crucial components in further AIGX capability optimization through the AIGX lifecycle, i.e., data collection, distributed pre-training, and rapid decision-making, thereby establishing a mutually enhancing interplay. Moreover, we offer an in-depth case study focused on power allocation to illustrate the interdependence between AIGX and networking systems. Through this exploration, the article analyzes the significant role of GAI for networking, clarifies the ways networks augment AIGX functionalities, and underscores the virtuous interactive cycle they form. This article paves the way for subsequent future research aimed at fully unlocking the potential of GAI and networks.
|
1404.0605
|
John Fearnley
|
John Fearnley and Rahul Savani
|
The Complexity of the Simplex Method
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The simplex method is a well-studied and widely-used pivoting method for
solving linear programs. When Dantzig originally formulated the simplex method,
he gave a natural pivot rule that pivots into the basis a variable with the
most violated reduced cost. In their seminal work, Klee and Minty showed that
this pivot rule takes exponential time in the worst case. We prove two main
results on the simplex method. Firstly, we show that it is PSPACE-complete to
find the solution that is computed by the simplex method using Dantzig's pivot
rule. Secondly, we prove that deciding whether Dantzig's rule ever chooses a
specific variable to enter the basis is PSPACE-complete. We use the known
connection between Markov decision processes (MDPs) and linear programming, and
an equivalence between Dantzig's pivot rule and a natural variant of policy
iteration for average-reward MDPs. We construct MDPs and show
PSPACE-completeness results for single-switch policy iteration, which in turn
imply our main results for the simplex method.
|
[
{
"created": "Wed, 2 Apr 2014 16:33:31 GMT",
"version": "v1"
},
{
"created": "Thu, 17 Apr 2014 14:26:37 GMT",
"version": "v2"
}
] |
2014-04-18
|
[
[
"Fearnley",
"John",
""
],
[
"Savani",
"Rahul",
""
]
] |
The simplex method is a well-studied and widely-used pivoting method for solving linear programs. When Dantzig originally formulated the simplex method, he gave a natural pivot rule that pivots into the basis a variable with the most violated reduced cost. In their seminal work, Klee and Minty showed that this pivot rule takes exponential time in the worst case. We prove two main results on the simplex method. Firstly, we show that it is PSPACE-complete to find the solution that is computed by the simplex method using Dantzig's pivot rule. Secondly, we prove that deciding whether Dantzig's rule ever chooses a specific variable to enter the basis is PSPACE-complete. We use the known connection between Markov decision processes (MDPs) and linear programming, and an equivalence between Dantzig's pivot rule and a natural variant of policy iteration for average-reward MDPs. We construct MDPs and show PSPACE-completeness results for single-switch policy iteration, which in turn imply our main results for the simplex method.
|
2110.06015
|
Chiara Boldrini
|
Kilian Ollivier, Chiara Boldrini, Andrea Passarella, Marco Conti
|
Structural invariants in individuals language use: the "ego network" of
words
|
This work was partially funded by the following projects. European
Union's Horizon 2020 research and innovation programme: SoBigData++ (No
871042), HumaneAI-Net (No 952026), MARVEL (No 957337). Italian PON-MISE
program: OK-INSAID project (No ARS01 00917)
|
In: Aref S. et al. (eds) Social Informatics. SocInfo 2020. Lecture
Notes in Computer Science, vol 12467. Springer, Cham
|
10.1007/978-3-030-60975-7_20
| null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The cognitive constraints that humans exhibit in their social interactions
have been extensively studied by anthropologists, who have highlighted their
regularities across different types of social networks. We postulate that
similar regularities can be found in other cognitive processes, such as those
involving language production. In order to provide preliminary evidence for
this claim, we analyse a dataset containing tweets of a heterogeneous group of
Twitter users (regular users and professional writers). Leveraging a
methodology similar to the one used to uncover the well-established social
cognitive constraints, we find that a concentric layered structure (which we
call ego network of words, in analogy to the ego network of social
relationships) very well captures how individuals organise the words they use.
The size of the layers in this structure regularly grows (approximately 2-3
times with respect to the previous one) when moving outwards, and the two
penultimate external layers consistently account for approximately 60% and 30%
of the used words (the outermost layer contains 100% of the words),
irrespective of the number of the total number of layers of the user.
|
[
{
"created": "Tue, 12 Oct 2021 13:59:08 GMT",
"version": "v1"
}
] |
2021-10-13
|
[
[
"Ollivier",
"Kilian",
""
],
[
"Boldrini",
"Chiara",
""
],
[
"Passarella",
"Andrea",
""
],
[
"Conti",
"Marco",
""
]
] |
The cognitive constraints that humans exhibit in their social interactions have been extensively studied by anthropologists, who have highlighted their regularities across different types of social networks. We postulate that similar regularities can be found in other cognitive processes, such as those involving language production. In order to provide preliminary evidence for this claim, we analyse a dataset containing tweets of a heterogeneous group of Twitter users (regular users and professional writers). Leveraging a methodology similar to the one used to uncover the well-established social cognitive constraints, we find that a concentric layered structure (which we call ego network of words, in analogy to the ego network of social relationships) very well captures how individuals organise the words they use. The size of the layers in this structure regularly grows (approximately 2-3 times with respect to the previous one) when moving outwards, and the two penultimate external layers consistently account for approximately 60% and 30% of the used words (the outermost layer contains 100% of the words), irrespective of the number of the total number of layers of the user.
|
1508.03892
|
EPTCS
|
Dipak L. Chaudhari, Om Damani
|
Building an IDE for the Calculational Derivation of Imperative Programs
|
In Proceedings F-IDE 2015, arXiv:1508.03388
|
EPTCS 187, 2015, pp. 1-13
|
10.4204/EPTCS.187.1
| null |
cs.PL cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we describe an IDE called CAPS (Calculational Assistant for
Programming from Specifications) for the interactive, calculational derivation
of imperative programs. In building CAPS, our aim has been to make the IDE
accessible to non-experts while retaining the overall flavor of the
pen-and-paper calculational style. We discuss the overall architecture of the
CAPS system, the main features of the IDE, the GUI design, and the trade-offs
involved.
|
[
{
"created": "Mon, 17 Aug 2015 01:36:36 GMT",
"version": "v1"
}
] |
2015-08-20
|
[
[
"Chaudhari",
"Dipak L.",
""
],
[
"Damani",
"Om",
""
]
] |
In this paper, we describe an IDE called CAPS (Calculational Assistant for Programming from Specifications) for the interactive, calculational derivation of imperative programs. In building CAPS, our aim has been to make the IDE accessible to non-experts while retaining the overall flavor of the pen-and-paper calculational style. We discuss the overall architecture of the CAPS system, the main features of the IDE, the GUI design, and the trade-offs involved.
|
2402.17243
|
Pavel Petrovi\v{c}
|
Pavel Petrovi\v{c}, Fedir Agarshev
|
Spike up Prime Interest in Science and Technology through
Constructionist Games
|
This work was co-funded by the Horizon-Widera-2021 European Twinning
project TERAIS G.A. n. 101079338 Open Access Data discussed in the article is
available at https://robotika.sk/spike
|
Proceedings of EDULEARN23 Conference 3rd-5th July 2023, Palma,
Mallorca, Spain, pp. 5562-5570, ISBN: 978-84-09-52151-7
|
10.21125/edulearn.2023.1460
| null |
cs.RO cs.HC
|
http://creativecommons.org/licenses/by/4.0/
|
Robotics sets have been successfully used in elementary and secondary schools
in conformance with the 'learning through play' philosophy fostered by LEGO
Education, while utilizing the Constructionism didactic approach. Learners
discover and acquire knowledge through first-hand tangible experiences,
building their own representations in a constructivist learning process. Usual
pedagogical goals of the activities include introduction to the principles of
control, mechanics, programming, and robotics [1]. They are organized as
hands-on learning situations with teamwork cooperation of learners,
project-based learning, sharing and presentations of the learners group
experiences. Arriving from this tradition, we focus on a slightly different
scenarios: employing the robotics sets and the named approaches when learning
Physics, Mathematics, Art, Science, and other subjects. In carefully designed
projects, learners build interactive models that demonstrate concepts,
principles, and phenomena, perform experiments, and modify them in elaboration
phases with the aim to connect, create associations and links to the actual
underlying theoretical curriculum. In this way, they are collecting practical
experiences which are prerequisite to successful learning process. Based on
feedback from children, we continue upon two previous sets of activities that
focused on Physics and Mathematics, this time with projects built around games.
Learners play various games with physical artifacts in the real-world - with
the models they build. They acquire skills while playing the games, analyze
them, and learn about the underlying principles. They modify the game rules,
strategies, create extensions, and interact with each other in an entertaining
and engaging settings. This time we have designed the activities together with
the children, students of applied robotics seminar, and a student of Applied
Informatics.
|
[
{
"created": "Tue, 27 Feb 2024 06:25:00 GMT",
"version": "v1"
}
] |
2024-02-28
|
[
[
"Petrovič",
"Pavel",
""
],
[
"Agarshev",
"Fedir",
""
]
] |
Robotics sets have been successfully used in elementary and secondary schools in conformance with the 'learning through play' philosophy fostered by LEGO Education, while utilizing the Constructionism didactic approach. Learners discover and acquire knowledge through first-hand tangible experiences, building their own representations in a constructivist learning process. Usual pedagogical goals of the activities include introduction to the principles of control, mechanics, programming, and robotics [1]. They are organized as hands-on learning situations with teamwork cooperation of learners, project-based learning, sharing and presentations of the learners group experiences. Arriving from this tradition, we focus on a slightly different scenarios: employing the robotics sets and the named approaches when learning Physics, Mathematics, Art, Science, and other subjects. In carefully designed projects, learners build interactive models that demonstrate concepts, principles, and phenomena, perform experiments, and modify them in elaboration phases with the aim to connect, create associations and links to the actual underlying theoretical curriculum. In this way, they are collecting practical experiences which are prerequisite to successful learning process. Based on feedback from children, we continue upon two previous sets of activities that focused on Physics and Mathematics, this time with projects built around games. Learners play various games with physical artifacts in the real-world - with the models they build. They acquire skills while playing the games, analyze them, and learn about the underlying principles. They modify the game rules, strategies, create extensions, and interact with each other in an entertaining and engaging settings. This time we have designed the activities together with the children, students of applied robotics seminar, and a student of Applied Informatics.
|
2307.03610
|
Xiao Liang
|
Kareem A. Eltouny, Wansong Liu, Sibo Tian, Minghui Zheng, and Xiao
Liang
|
DE-TGN: Uncertainty-Aware Human Motion Forecasting using Deep Ensembles
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Ensuring the safety of human workers in a collaborative environment with
robots is of utmost importance. Although accurate pose prediction models can
help prevent collisions between human workers and robots, they are still
susceptible to critical errors. In this study, we propose a novel approach
called deep ensembles of temporal graph neural networks (DE-TGN) that not only
accurately forecast human motion but also provide a measure of prediction
uncertainty. By leveraging deep ensembles and employing stochastic Monte-Carlo
dropout sampling, we construct a volumetric field representing a range of
potential future human poses based on covariance ellipsoids. To validate our
framework, we conducted experiments using three motion capture datasets
including Human3.6M, and two human-robot interaction scenarios, achieving
state-of-the-art prediction error. Moreover, we discovered that deep ensembles
not only enable us to quantify uncertainty but also improve the accuracy of our
predictions.
|
[
{
"created": "Fri, 7 Jul 2023 14:05:35 GMT",
"version": "v1"
}
] |
2023-07-10
|
[
[
"Eltouny",
"Kareem A.",
""
],
[
"Liu",
"Wansong",
""
],
[
"Tian",
"Sibo",
""
],
[
"Zheng",
"Minghui",
""
],
[
"Liang",
"Xiao",
""
]
] |
Ensuring the safety of human workers in a collaborative environment with robots is of utmost importance. Although accurate pose prediction models can help prevent collisions between human workers and robots, they are still susceptible to critical errors. In this study, we propose a novel approach called deep ensembles of temporal graph neural networks (DE-TGN) that not only accurately forecast human motion but also provide a measure of prediction uncertainty. By leveraging deep ensembles and employing stochastic Monte-Carlo dropout sampling, we construct a volumetric field representing a range of potential future human poses based on covariance ellipsoids. To validate our framework, we conducted experiments using three motion capture datasets including Human3.6M, and two human-robot interaction scenarios, achieving state-of-the-art prediction error. Moreover, we discovered that deep ensembles not only enable us to quantify uncertainty but also improve the accuracy of our predictions.
|
1506.02026
|
Dor Shaviv
|
Dor Shaviv, Ayfer Ozgur, Haim Permuter
|
Can Feedback Increase the Capacity of the Energy Harvesting Channel?
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We investigate if feedback can increase the capacity of an energy harvesting
communication channel where a transmitter powered by an exogenous energy
arrival process and equipped with a finite battery communicates to a receiver
over a memoryless channel. For a simple special case where the energy arrival
process is deterministic and the channel is a BEC, we explicitly compute the
feed-forward and feedback capacities and show that feedback can strictly
increase the capacity of this channel. Building on this example, we also show
that feedback can increase the capacity when the energy arrivals are i.i.d.
known noncausally at the transmitter and the receiver.
|
[
{
"created": "Fri, 5 Jun 2015 19:57:50 GMT",
"version": "v1"
}
] |
2015-06-08
|
[
[
"Shaviv",
"Dor",
""
],
[
"Ozgur",
"Ayfer",
""
],
[
"Permuter",
"Haim",
""
]
] |
We investigate if feedback can increase the capacity of an energy harvesting communication channel where a transmitter powered by an exogenous energy arrival process and equipped with a finite battery communicates to a receiver over a memoryless channel. For a simple special case where the energy arrival process is deterministic and the channel is a BEC, we explicitly compute the feed-forward and feedback capacities and show that feedback can strictly increase the capacity of this channel. Building on this example, we also show that feedback can increase the capacity when the energy arrivals are i.i.d. known noncausally at the transmitter and the receiver.
|
1201.1671
|
Taufik Abrao
|
D\'ecio L. Gazzoni Filho, Taufik Abr\~ao, Marcelo C. Tosin, Francisco
Granziera Jr
|
Error-Correcting Codes for Reliable Communications in Microgravity
Platforms
|
13 pages, 3 figures, paper accepted to be published in International
Journal of Satellite Communications Policy and Management (IJSCPM) ISSN
(Online): 1742-7576 - ISSN (Print): 1742-7568
| null | null | null |
cs.IT cs.SY math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The PAANDA experiment was conceived to characterize the acceleration ambient
of a rocket launched microgravity platform, specially the microgravity phase.
The recorded data was transmitted to ground stations, leading to loss of
telemetry information sent during the reentry period. Traditionally, an
error-correcting code for this channel consists of a block code with very large
block size to protect against long periods of data loss. Instead, we propose
the use of digital fountain codes along with conventional Reed-Solomon block
codes to protect against long and short burst error periods, respectively.
Aiming to use this approach for a second version of PAANDA to prevent data
corruption, we propose a model for the communication channel based on
information extracted from Cum\~a II's telemetry data, and simulate the
performance of our proposed error-correcting code under this channel model.
Simulation results show that nearly all telemetry data can be recovered,
including data from the reentry period.
|
[
{
"created": "Mon, 9 Jan 2012 00:28:23 GMT",
"version": "v1"
}
] |
2015-03-20
|
[
[
"Filho",
"Décio L. Gazzoni",
""
],
[
"Abrão",
"Taufik",
""
],
[
"Tosin",
"Marcelo C.",
""
],
[
"Granziera",
"Francisco",
"Jr"
]
] |
The PAANDA experiment was conceived to characterize the acceleration ambient of a rocket launched microgravity platform, specially the microgravity phase. The recorded data was transmitted to ground stations, leading to loss of telemetry information sent during the reentry period. Traditionally, an error-correcting code for this channel consists of a block code with very large block size to protect against long periods of data loss. Instead, we propose the use of digital fountain codes along with conventional Reed-Solomon block codes to protect against long and short burst error periods, respectively. Aiming to use this approach for a second version of PAANDA to prevent data corruption, we propose a model for the communication channel based on information extracted from Cum\~a II's telemetry data, and simulate the performance of our proposed error-correcting code under this channel model. Simulation results show that nearly all telemetry data can be recovered, including data from the reentry period.
|
2307.11349
|
Sourav Sanyal
|
Sourav Sanyal, Rohan Kumar Manna, and Kaushik Roy
|
EV-Planner: Energy-Efficient Robot Navigation via Event-Based
Physics-Guided Neuromorphic Planner
|
accepted for publication at IEEE Robotics and Automation Letters
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Vision-based object tracking is an essential precursor to performing
autonomous aerial navigation in order to avoid obstacles. Biologically inspired
neuromorphic event cameras are emerging as a powerful alternative to
frame-based cameras, due to their ability to asynchronously detect varying
intensities (even in poor lighting conditions), high dynamic range, and
robustness to motion blur. Spiking neural networks (SNNs) have gained traction
for processing events asynchronously in an energy-efficient manner. On the
other hand, physics-based artificial intelligence (AI) has gained prominence
recently, as they enable embedding system knowledge via physical modeling
inside traditional analog neural networks (ANNs). In this letter, we present an
event-based physics-guided neuromorphic planner (EV-Planner) to perform
obstacle avoidance using neuromorphic event cameras and physics-based AI. We
consider the task of autonomous drone navigation where the mission is to detect
moving gates and fly through them while avoiding a collision. We use event
cameras to perform object detection using a shallow spiking neural network in
an unsupervised fashion. Utilizing the physical equations of the brushless DC
motors present in the drone rotors, we train a lightweight energy-aware
physics-guided neural network (PgNN) with depth inputs. This predicts the
optimal flight time responsible for generating near-minimum energy paths. We
spawn the drone in the Gazebo simulator and implement a sensor-fused
vision-to-planning neuro-symbolic framework using Robot Operating System (ROS).
Simulation results for safe collision-free flight trajectories are presented
with performance analysis, ablation study and potential future research
directions
|
[
{
"created": "Fri, 21 Jul 2023 04:50:59 GMT",
"version": "v1"
},
{
"created": "Fri, 10 Nov 2023 02:11:13 GMT",
"version": "v2"
},
{
"created": "Tue, 26 Dec 2023 02:29:47 GMT",
"version": "v3"
},
{
"created": "Sat, 30 Dec 2023 05:30:44 GMT",
"version": "v4"
},
{
"created": "Wed, 3 Jan 2024 16:01:52 GMT",
"version": "v5"
}
] |
2024-01-04
|
[
[
"Sanyal",
"Sourav",
""
],
[
"Manna",
"Rohan Kumar",
""
],
[
"Roy",
"Kaushik",
""
]
] |
Vision-based object tracking is an essential precursor to performing autonomous aerial navigation in order to avoid obstacles. Biologically inspired neuromorphic event cameras are emerging as a powerful alternative to frame-based cameras, due to their ability to asynchronously detect varying intensities (even in poor lighting conditions), high dynamic range, and robustness to motion blur. Spiking neural networks (SNNs) have gained traction for processing events asynchronously in an energy-efficient manner. On the other hand, physics-based artificial intelligence (AI) has gained prominence recently, as they enable embedding system knowledge via physical modeling inside traditional analog neural networks (ANNs). In this letter, we present an event-based physics-guided neuromorphic planner (EV-Planner) to perform obstacle avoidance using neuromorphic event cameras and physics-based AI. We consider the task of autonomous drone navigation where the mission is to detect moving gates and fly through them while avoiding a collision. We use event cameras to perform object detection using a shallow spiking neural network in an unsupervised fashion. Utilizing the physical equations of the brushless DC motors present in the drone rotors, we train a lightweight energy-aware physics-guided neural network (PgNN) with depth inputs. This predicts the optimal flight time responsible for generating near-minimum energy paths. We spawn the drone in the Gazebo simulator and implement a sensor-fused vision-to-planning neuro-symbolic framework using Robot Operating System (ROS). Simulation results for safe collision-free flight trajectories are presented with performance analysis, ablation study and potential future research directions
|
2306.11763
|
Alexander Van Meekeren
|
Alexander van Meekeren, Maya Aghaei, Klaas Dijkstra
|
Exploring the Effectiveness of Dataset Synthesis: An application of
Apple Detection in Orchards
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep object detection models have achieved notable successes in recent years,
but one major obstacle remains: the requirement for a large amount of training
data. Obtaining such data is a tedious process and is mainly time consuming,
leading to the exploration of new research avenues like synthetic data
generation techniques. In this study, we explore the usability of Stable
Diffusion 2.1-base for generating synthetic datasets of apple trees for object
detection and compare it to a baseline model trained on real-world data. After
creating a dataset of realistic apple trees with prompt engineering and
utilizing a previously trained Stable Diffusion model, the custom dataset was
annotated and evaluated by training a YOLOv5m object detection model to predict
apples in a real-world apple detection dataset. YOLOv5m was chosen for its
rapid inference time and minimal hardware demands. Results demonstrate that the
model trained on generated data is slightly underperforming compared to a
baseline model trained on real-world images when evaluated on a set of
real-world images. However, these findings remain highly promising, as the
average precision difference is only 0.09 and 0.06, respectively. Qualitative
results indicate that the model can accurately predict the location of apples,
except in cases of heavy shading. These findings illustrate the potential of
synthetic data generation techniques as a viable alternative to the collection
of extensive training data for object detection models.
|
[
{
"created": "Tue, 20 Jun 2023 09:46:01 GMT",
"version": "v1"
}
] |
2023-06-22
|
[
[
"van Meekeren",
"Alexander",
""
],
[
"Aghaei",
"Maya",
""
],
[
"Dijkstra",
"Klaas",
""
]
] |
Deep object detection models have achieved notable successes in recent years, but one major obstacle remains: the requirement for a large amount of training data. Obtaining such data is a tedious process and is mainly time consuming, leading to the exploration of new research avenues like synthetic data generation techniques. In this study, we explore the usability of Stable Diffusion 2.1-base for generating synthetic datasets of apple trees for object detection and compare it to a baseline model trained on real-world data. After creating a dataset of realistic apple trees with prompt engineering and utilizing a previously trained Stable Diffusion model, the custom dataset was annotated and evaluated by training a YOLOv5m object detection model to predict apples in a real-world apple detection dataset. YOLOv5m was chosen for its rapid inference time and minimal hardware demands. Results demonstrate that the model trained on generated data is slightly underperforming compared to a baseline model trained on real-world images when evaluated on a set of real-world images. However, these findings remain highly promising, as the average precision difference is only 0.09 and 0.06, respectively. Qualitative results indicate that the model can accurately predict the location of apples, except in cases of heavy shading. These findings illustrate the potential of synthetic data generation techniques as a viable alternative to the collection of extensive training data for object detection models.
|
2404.10141
|
Aashish Anantha Ramakrishnan
|
Aashish Anantha Ramakrishnan, Sharon X. Huang and Dongwon Lee
|
ANCHOR: LLM-driven News Subject Conditioning for Text-to-Image Synthesis
|
23 pages, 9 figures
| null | null | null |
cs.CV cs.CL cs.MM
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Text-to-Image (T2I) Synthesis has made tremendous strides in enhancing
synthesized image quality, but current datasets evaluate model performance only
on descriptive, instruction-based prompts. Real-world news image captions take
a more pragmatic approach, providing high-level situational and Named-Entity
(NE) information and limited physical object descriptions, making them
abstractive. To evaluate the ability of T2I models to capture intended subjects
from news captions, we introduce the Abstractive News Captions with High-level
cOntext Representation (ANCHOR) dataset, containing 70K+ samples sourced from 5
different news media organizations. With Large Language Models (LLM) achieving
success in language and commonsense reasoning tasks, we explore the ability of
different LLMs to identify and understand key subjects from abstractive
captions. Our proposed method Subject-Aware Finetuning (SAFE), selects and
enhances the representation of key subjects in synthesized images by leveraging
LLM-generated subject weights. It also adapts to the domain distribution of
news images and captions through custom Domain Fine-tuning, outperforming
current T2I baselines on ANCHOR. By launching the ANCHOR dataset, we hope to
motivate research in furthering the Natural Language Understanding (NLU)
capabilities of T2I models.
|
[
{
"created": "Mon, 15 Apr 2024 21:19:10 GMT",
"version": "v1"
}
] |
2024-04-17
|
[
[
"Ramakrishnan",
"Aashish Anantha",
""
],
[
"Huang",
"Sharon X.",
""
],
[
"Lee",
"Dongwon",
""
]
] |
Text-to-Image (T2I) Synthesis has made tremendous strides in enhancing synthesized image quality, but current datasets evaluate model performance only on descriptive, instruction-based prompts. Real-world news image captions take a more pragmatic approach, providing high-level situational and Named-Entity (NE) information and limited physical object descriptions, making them abstractive. To evaluate the ability of T2I models to capture intended subjects from news captions, we introduce the Abstractive News Captions with High-level cOntext Representation (ANCHOR) dataset, containing 70K+ samples sourced from 5 different news media organizations. With Large Language Models (LLM) achieving success in language and commonsense reasoning tasks, we explore the ability of different LLMs to identify and understand key subjects from abstractive captions. Our proposed method Subject-Aware Finetuning (SAFE), selects and enhances the representation of key subjects in synthesized images by leveraging LLM-generated subject weights. It also adapts to the domain distribution of news images and captions through custom Domain Fine-tuning, outperforming current T2I baselines on ANCHOR. By launching the ANCHOR dataset, we hope to motivate research in furthering the Natural Language Understanding (NLU) capabilities of T2I models.
|
2302.00152
|
Sudip Mittal
|
Subash Neupane, Ivan A. Fernandez, Wilson Patterson, Sudip Mittal,
Milan Parmar, Shahram Rahimi
|
TwinExplainer: Explaining Predictions of an Automotive Digital Twin
| null | null | null | null |
cs.LG cs.AI cs.CY cs.NE
|
http://creativecommons.org/licenses/by/4.0/
|
Vehicles are complex Cyber Physical Systems (CPS) that operate in a variety
of environments, and the likelihood of failure of one or more subsystems, such
as the engine, transmission, brakes, and fuel, can result in unscheduled
downtime and incur high maintenance or repair costs. In order to prevent these
issues, it is crucial to continuously monitor the health of various subsystems
and identify abnormal sensor channel behavior. Data-driven Digital Twin (DT)
systems are capable of such a task. Current DT technologies utilize various
Deep Learning (DL) techniques that are constrained by the lack of justification
or explanation for their predictions. This inability of these opaque systems
can influence decision-making and raises user trust concerns. This paper
presents a solution to this issue, where the TwinExplainer system, with its
three-layered architectural pipeline, explains the predictions of an automotive
DT. Such a system can assist automotive stakeholders in understanding the
global scale of the sensor channels and how they contribute towards generic DT
predictions. TwinExplainer can also visualize explanations for both normal and
abnormal local predictions computed by the DT.
|
[
{
"created": "Wed, 1 Feb 2023 00:11:18 GMT",
"version": "v1"
}
] |
2023-02-02
|
[
[
"Neupane",
"Subash",
""
],
[
"Fernandez",
"Ivan A.",
""
],
[
"Patterson",
"Wilson",
""
],
[
"Mittal",
"Sudip",
""
],
[
"Parmar",
"Milan",
""
],
[
"Rahimi",
"Shahram",
""
]
] |
Vehicles are complex Cyber Physical Systems (CPS) that operate in a variety of environments, and the likelihood of failure of one or more subsystems, such as the engine, transmission, brakes, and fuel, can result in unscheduled downtime and incur high maintenance or repair costs. In order to prevent these issues, it is crucial to continuously monitor the health of various subsystems and identify abnormal sensor channel behavior. Data-driven Digital Twin (DT) systems are capable of such a task. Current DT technologies utilize various Deep Learning (DL) techniques that are constrained by the lack of justification or explanation for their predictions. This inability of these opaque systems can influence decision-making and raises user trust concerns. This paper presents a solution to this issue, where the TwinExplainer system, with its three-layered architectural pipeline, explains the predictions of an automotive DT. Such a system can assist automotive stakeholders in understanding the global scale of the sensor channels and how they contribute towards generic DT predictions. TwinExplainer can also visualize explanations for both normal and abnormal local predictions computed by the DT.
|
1806.05749
|
Tanner Fiez
|
Lillian J. Ratliff and Tanner Fiez
|
Adaptive Incentive Design
| null | null | null | null |
cs.GT cs.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We apply control theoretic and optimization techniques to adaptively design
incentives. In particular, we consider the problem of a planner with an
objective that depends on data from strategic decision makers. The planner does
not know the process by which the strategic agents make decisions. Under the
assumption that the agents are utility maximizers, we model their interactions
as a non-cooperative game and utilize the Nash equilibrium concept as well as
myopic update rules to model the selection of their decision. By parameterizing
the agents' utility functions and the incentives offered, we develop an
algorithm that the planner can employ to learn the agents' decision-making
processes while simultaneously designing incentives to change their response to
a more desirable response from the planner's perspective. We provide
convergence results for this algorithm both in the noise-free and noisy cases
and present illustrative examples.
|
[
{
"created": "Thu, 14 Jun 2018 21:48:16 GMT",
"version": "v1"
}
] |
2018-06-18
|
[
[
"Ratliff",
"Lillian J.",
""
],
[
"Fiez",
"Tanner",
""
]
] |
We apply control theoretic and optimization techniques to adaptively design incentives. In particular, we consider the problem of a planner with an objective that depends on data from strategic decision makers. The planner does not know the process by which the strategic agents make decisions. Under the assumption that the agents are utility maximizers, we model their interactions as a non-cooperative game and utilize the Nash equilibrium concept as well as myopic update rules to model the selection of their decision. By parameterizing the agents' utility functions and the incentives offered, we develop an algorithm that the planner can employ to learn the agents' decision-making processes while simultaneously designing incentives to change their response to a more desirable response from the planner's perspective. We provide convergence results for this algorithm both in the noise-free and noisy cases and present illustrative examples.
|
1901.02949
|
Yea Seul Kim
|
Yea-Seul Kim, Logan A Walls, Peter Krafft, Jessica Hullman
|
A Bayesian Cognition Approach to Improve Data Visualization
| null | null |
10.1145/3290605.3300912
| null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
People naturally bring their prior beliefs to bear on how they interpret the
new information, yet few formal models exist for accounting for the influence
of users' prior beliefs in interactions with data presentations like
visualizations. We demonstrate a Bayesian cognitive model for understanding how
people interpret visualizations in light of prior beliefs and show how this
model provides a guide for improving visualization evaluation. In a first
study, we show how applying a Bayesian cognition model to a simple
visualization scenario indicates that people's judgments are consistent with a
hypothesis that they are doing approximate Bayesian inference. In a second
study, we evaluate how sensitive our observations of Bayesian behavior are to
different techniques for eliciting people subjective distributions, and to
different datasets. We find that people don't behave consistently with Bayesian
predictions for large sample size datasets, and this difference cannot be
explained by elicitation technique. In a final study, we show how normative
Bayesian inference can be used as an evaluation framework for visualizations,
including of uncertainty.
|
[
{
"created": "Wed, 9 Jan 2019 22:08:50 GMT",
"version": "v1"
}
] |
2019-01-11
|
[
[
"Kim",
"Yea-Seul",
""
],
[
"Walls",
"Logan A",
""
],
[
"Krafft",
"Peter",
""
],
[
"Hullman",
"Jessica",
""
]
] |
People naturally bring their prior beliefs to bear on how they interpret the new information, yet few formal models exist for accounting for the influence of users' prior beliefs in interactions with data presentations like visualizations. We demonstrate a Bayesian cognitive model for understanding how people interpret visualizations in light of prior beliefs and show how this model provides a guide for improving visualization evaluation. In a first study, we show how applying a Bayesian cognition model to a simple visualization scenario indicates that people's judgments are consistent with a hypothesis that they are doing approximate Bayesian inference. In a second study, we evaluate how sensitive our observations of Bayesian behavior are to different techniques for eliciting people subjective distributions, and to different datasets. We find that people don't behave consistently with Bayesian predictions for large sample size datasets, and this difference cannot be explained by elicitation technique. In a final study, we show how normative Bayesian inference can be used as an evaluation framework for visualizations, including of uncertainty.
|
2406.16203
|
Hanzi Xu
|
Hanzi Xu, Renze Lou, Jiangshu Du, Vahid Mahzoon, Elmira Talebianaraki,
Zhuoan Zhou, Elizabeth Garrison, Slobodan Vucetic, Wenpeng Yin
|
LLMs' Classification Performance is Overclaimed
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In many classification tasks designed for AI or human to solve, gold labels
are typically included within the label space by default, often posed as "which
of the following is correct?" This standard setup has traditionally highlighted
the strong performance of advanced AI, particularly top-performing Large
Language Models (LLMs), in routine classification tasks. However, when the gold
label is intentionally excluded from the label space, it becomes evident that
LLMs still attempt to select from the available label candidates, even when
none are correct. This raises a pivotal question: Do LLMs truly demonstrate
their intelligence in understanding the essence of classification tasks?
In this study, we evaluate both closed-source and open-source LLMs across
representative classification tasks, arguing that the perceived performance of
LLMs is overstated due to their inability to exhibit the expected comprehension
of the task. This paper makes a threefold contribution: i) To our knowledge,
this is the first work to identify the limitations of LLMs in classification
tasks when gold labels are absent. We define this task as Classify-w/o-Gold and
propose it as a new testbed for LLMs. ii) We introduce a benchmark, Know-No,
comprising two existing classification tasks and one new task, to evaluate
Classify-w/o-Gold. iii) This work defines and advocates for a new evaluation
metric, OmniAccuracy, which assesses LLMs' performance in classification tasks
both when gold labels are present and absent.
|
[
{
"created": "Sun, 23 Jun 2024 19:49:10 GMT",
"version": "v1"
},
{
"created": "Sat, 29 Jun 2024 11:45:17 GMT",
"version": "v2"
},
{
"created": "Wed, 3 Jul 2024 13:18:50 GMT",
"version": "v3"
}
] |
2024-07-04
|
[
[
"Xu",
"Hanzi",
""
],
[
"Lou",
"Renze",
""
],
[
"Du",
"Jiangshu",
""
],
[
"Mahzoon",
"Vahid",
""
],
[
"Talebianaraki",
"Elmira",
""
],
[
"Zhou",
"Zhuoan",
""
],
[
"Garrison",
"Elizabeth",
""
],
[
"Vucetic",
"Slobodan",
""
],
[
"Yin",
"Wenpeng",
""
]
] |
In many classification tasks designed for AI or human to solve, gold labels are typically included within the label space by default, often posed as "which of the following is correct?" This standard setup has traditionally highlighted the strong performance of advanced AI, particularly top-performing Large Language Models (LLMs), in routine classification tasks. However, when the gold label is intentionally excluded from the label space, it becomes evident that LLMs still attempt to select from the available label candidates, even when none are correct. This raises a pivotal question: Do LLMs truly demonstrate their intelligence in understanding the essence of classification tasks? In this study, we evaluate both closed-source and open-source LLMs across representative classification tasks, arguing that the perceived performance of LLMs is overstated due to their inability to exhibit the expected comprehension of the task. This paper makes a threefold contribution: i) To our knowledge, this is the first work to identify the limitations of LLMs in classification tasks when gold labels are absent. We define this task as Classify-w/o-Gold and propose it as a new testbed for LLMs. ii) We introduce a benchmark, Know-No, comprising two existing classification tasks and one new task, to evaluate Classify-w/o-Gold. iii) This work defines and advocates for a new evaluation metric, OmniAccuracy, which assesses LLMs' performance in classification tasks both when gold labels are present and absent.
|
1712.02547
|
Yunlong Wang
|
Yunlong Wang, Lingdan Wu, Jan-Philipp Lange, Ahmed Fadhil, Harald
Reiterer
|
Persuasive Technology in Reducing Prolonged Sedentary Behavior at Work:
A Systematic Review
| null |
Smart Health 7-8 (2018) 19-30
|
10.1016/j.smhl.2018.05.002
| null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Prolonged sedentary behavior is prevalent among office workers and has been
found to be detrimental to health. Preventing and reducing prolonged sedentary
behavior require interventions, and persuasive technology is expected to make a
contribution in this domain. In this paper, we use the framework of persuasive
system design (PSD) principles to investigate the utilization and effectiveness
of persuasive technology in intervention studies at reducing sedentary behavior
at work. This systematic review reveals that reminders are the most frequently
used PSD principle. The analysis on reminders shows that hourly PC reminders
alone have no significant effect on reducing sedentary behavior at work, while
coupling with education or other informative session seems to be promising.
Details of deployed persuasive technology with behavioral theories and user
experience evaluation are expected to be reported explicitly in the future
intervention studies.
|
[
{
"created": "Thu, 7 Dec 2017 09:29:38 GMT",
"version": "v1"
},
{
"created": "Wed, 23 May 2018 17:13:27 GMT",
"version": "v2"
},
{
"created": "Thu, 11 Oct 2018 19:19:07 GMT",
"version": "v3"
}
] |
2018-10-15
|
[
[
"Wang",
"Yunlong",
""
],
[
"Wu",
"Lingdan",
""
],
[
"Lange",
"Jan-Philipp",
""
],
[
"Fadhil",
"Ahmed",
""
],
[
"Reiterer",
"Harald",
""
]
] |
Prolonged sedentary behavior is prevalent among office workers and has been found to be detrimental to health. Preventing and reducing prolonged sedentary behavior require interventions, and persuasive technology is expected to make a contribution in this domain. In this paper, we use the framework of persuasive system design (PSD) principles to investigate the utilization and effectiveness of persuasive technology in intervention studies at reducing sedentary behavior at work. This systematic review reveals that reminders are the most frequently used PSD principle. The analysis on reminders shows that hourly PC reminders alone have no significant effect on reducing sedentary behavior at work, while coupling with education or other informative session seems to be promising. Details of deployed persuasive technology with behavioral theories and user experience evaluation are expected to be reported explicitly in the future intervention studies.
|
1304.6450
|
Ton Kloks
|
Wing-Kai Hon and Ton Kloks and Hsiang Hsuan Liu and Sheung-Hung Poon
and Yue-Li Wang
|
On independence domination
| null | null | null | null |
cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Let G be a graph. The independence-domination number is the maximum over all
independent sets I in G of the minimal number of vertices needed to dominate I.
In this paper we investigate the computational complexity of independence
domination for graphs in several graph classes related to cographs. We present
an exact exponential algorithm. We also present a PTAS for planar graphs.
|
[
{
"created": "Wed, 24 Apr 2013 00:21:35 GMT",
"version": "v1"
}
] |
2013-04-25
|
[
[
"Hon",
"Wing-Kai",
""
],
[
"Kloks",
"Ton",
""
],
[
"Liu",
"Hsiang Hsuan",
""
],
[
"Poon",
"Sheung-Hung",
""
],
[
"Wang",
"Yue-Li",
""
]
] |
Let G be a graph. The independence-domination number is the maximum over all independent sets I in G of the minimal number of vertices needed to dominate I. In this paper we investigate the computational complexity of independence domination for graphs in several graph classes related to cographs. We present an exact exponential algorithm. We also present a PTAS for planar graphs.
|
1306.1955
|
Alexey Markov
|
Alexander Barabanov, Maxim Grishin and Alexey Markov
|
The Formal Metabasis For Conformity Assessment of Information Security
Software and Hardware
|
Keywords: information security, information protection, information
security tools, certification, conformity assessment, security testing
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An approach to the development of security test procedures for information
security controls is presented. The recommendations for optimizing the test
procedure are obtained
|
[
{
"created": "Sat, 8 Jun 2013 20:25:58 GMT",
"version": "v1"
}
] |
2013-06-11
|
[
[
"Barabanov",
"Alexander",
""
],
[
"Grishin",
"Maxim",
""
],
[
"Markov",
"Alexey",
""
]
] |
An approach to the development of security test procedures for information security controls is presented. The recommendations for optimizing the test procedure are obtained
|
2010.12662
|
Yunjie Zhang
|
Yunjie Zhang, Fei Tao, Xudong Liu, Runze Su, Xiaorong Mei, Weicong
Ding, Zhichen Zhao, Lei Yuan, Ji Liu
|
Short Video-based Advertisements Evaluation System: Self-Organizing
Learning Approach
|
Submitting to ICASSP 2021
| null | null | null |
cs.MM cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the rising of short video apps, such as TikTok, Snapchat and Kwai,
advertisement in short-term user-generated videos (UGVs) has become a trending
form of advertising. Prediction of user behavior without specific user profile
is required by advertisers, as they expect to acquire advertisement performance
in advance in the scenario of cold start. Current recommender system do not
take raw videos as input; additionally, most previous work of Multi-Modal
Machine Learning may not deal with unconstrained videos like UGVs. In this
paper, we proposed a novel end-to-end self-organizing framework for user
behavior prediction. Our model is able to learn the optimal topology of neural
network architecture, as well as optimal weights, through training data. We
evaluate our proposed method on our in-house dataset. The experimental results
reveal that our model achieves the best performance in all our experiments.
|
[
{
"created": "Fri, 23 Oct 2020 20:52:24 GMT",
"version": "v1"
}
] |
2020-10-27
|
[
[
"Zhang",
"Yunjie",
""
],
[
"Tao",
"Fei",
""
],
[
"Liu",
"Xudong",
""
],
[
"Su",
"Runze",
""
],
[
"Mei",
"Xiaorong",
""
],
[
"Ding",
"Weicong",
""
],
[
"Zhao",
"Zhichen",
""
],
[
"Yuan",
"Lei",
""
],
[
"Liu",
"Ji",
""
]
] |
With the rising of short video apps, such as TikTok, Snapchat and Kwai, advertisement in short-term user-generated videos (UGVs) has become a trending form of advertising. Prediction of user behavior without specific user profile is required by advertisers, as they expect to acquire advertisement performance in advance in the scenario of cold start. Current recommender system do not take raw videos as input; additionally, most previous work of Multi-Modal Machine Learning may not deal with unconstrained videos like UGVs. In this paper, we proposed a novel end-to-end self-organizing framework for user behavior prediction. Our model is able to learn the optimal topology of neural network architecture, as well as optimal weights, through training data. We evaluate our proposed method on our in-house dataset. The experimental results reveal that our model achieves the best performance in all our experiments.
|
2311.07066
|
Meizhi Zhong
|
Meizhi Zhong, Lemao Liu, Kehai Chen, Mingming Yang, Min Zhang
|
Context Consistency between Training and Testing in Simultaneous Machine
Translation
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Simultaneous Machine Translation (SiMT) aims to yield a real-time partial
translation with a monotonically growing the source-side context. However,
there is a counterintuitive phenomenon about the context usage between training
and testing: e.g., the wait-k testing model consistently trained with wait-k is
much worse than that model inconsistently trained with wait-k' (k' is not equal
to k) in terms of translation quality. To this end, we first investigate the
underlying reasons behind this phenomenon and uncover the following two
factors: 1) the limited correlation between translation quality and training
(cross-entropy) loss; 2) exposure bias between training and testing. Based on
both reasons, we then propose an effective training approach called context
consistency training accordingly, which makes consistent the context usage
between training and testing by optimizing translation quality and latency as
bi-objectives and exposing the predictions to the model during the training.
The experiments on three language pairs demonstrate our intuition: our system
encouraging context consistency outperforms that existing systems with context
inconsistency for the first time, with the help of our context consistency
training approach.
|
[
{
"created": "Mon, 13 Nov 2023 04:11:32 GMT",
"version": "v1"
}
] |
2023-11-14
|
[
[
"Zhong",
"Meizhi",
""
],
[
"Liu",
"Lemao",
""
],
[
"Chen",
"Kehai",
""
],
[
"Yang",
"Mingming",
""
],
[
"Zhang",
"Min",
""
]
] |
Simultaneous Machine Translation (SiMT) aims to yield a real-time partial translation with a monotonically growing the source-side context. However, there is a counterintuitive phenomenon about the context usage between training and testing: e.g., the wait-k testing model consistently trained with wait-k is much worse than that model inconsistently trained with wait-k' (k' is not equal to k) in terms of translation quality. To this end, we first investigate the underlying reasons behind this phenomenon and uncover the following two factors: 1) the limited correlation between translation quality and training (cross-entropy) loss; 2) exposure bias between training and testing. Based on both reasons, we then propose an effective training approach called context consistency training accordingly, which makes consistent the context usage between training and testing by optimizing translation quality and latency as bi-objectives and exposing the predictions to the model during the training. The experiments on three language pairs demonstrate our intuition: our system encouraging context consistency outperforms that existing systems with context inconsistency for the first time, with the help of our context consistency training approach.
|
2001.01331
|
Andrew Lensen
|
Andrew Lensen, Mengjie Zhang, Bing Xue
|
Multi-Objective Genetic Programming for Manifold Learning: Balancing
Quality and Dimensionality
|
31 pages, pre-print accepted by Genetic Programming and Evolvable
Machines journal
| null |
10.1007/s10710-020-09375-4
| null |
cs.NE cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Manifold learning techniques have become increasingly valuable as data
continues to grow in size. By discovering a lower-dimensional representation
(embedding) of the structure of a dataset, manifold learning algorithms can
substantially reduce the dimensionality of a dataset while preserving as much
information as possible. However, state-of-the-art manifold learning algorithms
are opaque in how they perform this transformation. Understanding the way in
which the embedding relates to the original high-dimensional space is critical
in exploratory data analysis. We previously proposed a Genetic Programming
method that performed manifold learning by evolving mappings that are
transparent and interpretable. This method required the dimensionality of the
embedding to be known a priori, which makes it hard to use when little is known
about a dataset. In this paper, we substantially extend our previous work, by
introducing a multi-objective approach that automatically balances the
competing objectives of manifold quality and dimensionality. Our proposed
approach is competitive with a range of baseline and state-of-the-art manifold
learning methods, while also providing a range (front) of solutions that give
different trade-offs between quality and dimensionality. Furthermore, the
learned models are shown to often be simple and efficient, utilising only a
small number of features in an interpretable manner.
|
[
{
"created": "Sun, 5 Jan 2020 23:24:33 GMT",
"version": "v1"
}
] |
2020-01-31
|
[
[
"Lensen",
"Andrew",
""
],
[
"Zhang",
"Mengjie",
""
],
[
"Xue",
"Bing",
""
]
] |
Manifold learning techniques have become increasingly valuable as data continues to grow in size. By discovering a lower-dimensional representation (embedding) of the structure of a dataset, manifold learning algorithms can substantially reduce the dimensionality of a dataset while preserving as much information as possible. However, state-of-the-art manifold learning algorithms are opaque in how they perform this transformation. Understanding the way in which the embedding relates to the original high-dimensional space is critical in exploratory data analysis. We previously proposed a Genetic Programming method that performed manifold learning by evolving mappings that are transparent and interpretable. This method required the dimensionality of the embedding to be known a priori, which makes it hard to use when little is known about a dataset. In this paper, we substantially extend our previous work, by introducing a multi-objective approach that automatically balances the competing objectives of manifold quality and dimensionality. Our proposed approach is competitive with a range of baseline and state-of-the-art manifold learning methods, while also providing a range (front) of solutions that give different trade-offs between quality and dimensionality. Furthermore, the learned models are shown to often be simple and efficient, utilising only a small number of features in an interpretable manner.
|
2212.09335
|
Chen Ju
|
Chen Ju, Kunhao Zheng, Jinxiang Liu, Peisen Zhao, Ya Zhang, Jianlong
Chang, Yanfeng Wang, Qi Tian
|
Distilling Vision-Language Pre-training to Collaborate with
Weakly-Supervised Temporal Action Localization
|
The first two authors share the same contribution
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Weakly-supervised temporal action localization (WTAL) learns to detect and
classify action instances with only category labels. Most methods widely adopt
the off-the-shelf Classification-Based Pre-training (CBP) to generate video
features for action localization. However, the different optimization
objectives between classification and localization, make temporally localized
results suffer from the serious incomplete issue. To tackle this issue without
additional annotations, this paper considers to distill free action knowledge
from Vision-Language Pre-training (VLP), since we surprisingly observe that the
localization results of vanilla VLP have an over-complete issue, which is just
complementary to the CBP results. To fuse such complementarity, we propose a
novel distillation-collaboration framework with two branches acting as CBP and
VLP respectively. The framework is optimized through a dual-branch alternate
training strategy. Specifically, during the B step, we distill the confident
background pseudo-labels from the CBP branch; while during the F step, the
confident foreground pseudo-labels are distilled from the VLP branch. And as a
result, the dual-branch complementarity is effectively fused to promote a
strong alliance. Extensive experiments and ablation studies on THUMOS14 and
ActivityNet1.2 reveal that our method significantly outperforms
state-of-the-art methods.
|
[
{
"created": "Mon, 19 Dec 2022 10:02:50 GMT",
"version": "v1"
}
] |
2022-12-20
|
[
[
"Ju",
"Chen",
""
],
[
"Zheng",
"Kunhao",
""
],
[
"Liu",
"Jinxiang",
""
],
[
"Zhao",
"Peisen",
""
],
[
"Zhang",
"Ya",
""
],
[
"Chang",
"Jianlong",
""
],
[
"Wang",
"Yanfeng",
""
],
[
"Tian",
"Qi",
""
]
] |
Weakly-supervised temporal action localization (WTAL) learns to detect and classify action instances with only category labels. Most methods widely adopt the off-the-shelf Classification-Based Pre-training (CBP) to generate video features for action localization. However, the different optimization objectives between classification and localization, make temporally localized results suffer from the serious incomplete issue. To tackle this issue without additional annotations, this paper considers to distill free action knowledge from Vision-Language Pre-training (VLP), since we surprisingly observe that the localization results of vanilla VLP have an over-complete issue, which is just complementary to the CBP results. To fuse such complementarity, we propose a novel distillation-collaboration framework with two branches acting as CBP and VLP respectively. The framework is optimized through a dual-branch alternate training strategy. Specifically, during the B step, we distill the confident background pseudo-labels from the CBP branch; while during the F step, the confident foreground pseudo-labels are distilled from the VLP branch. And as a result, the dual-branch complementarity is effectively fused to promote a strong alliance. Extensive experiments and ablation studies on THUMOS14 and ActivityNet1.2 reveal that our method significantly outperforms state-of-the-art methods.
|
2102.11678
|
Matej Madeja
|
Matej Madeja, Jaroslav Porub\"an, Michaela Ba\v{c}\'ikov\'a,
Mat\'u\v{s} Sul\'ir, J\'an Juh\'ar, Sergej Chodarev, Filip Gurb\'a\v{l}
|
Automating Test Case Identification in Java Open Source Projects on
GitHub
|
30 pages, accepted in Computing and Informatics Journal -
http://www.cai.sk, ISSN 2585-8807
|
Computing and Informatics 40 (2021) 575-605
|
10.31577/cai_2021_3_575
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Software testing is one of the very important Quality Assurance (QA)
components. A lot of researchers deal with the testing process in terms of
tester motivation and how tests should or should not be written. However, it is
not known from the recommendations how the tests are written in real projects.
In this paper, the following was investigated: (i) the denotation of the word
"test" in different natural languages; (ii) whether the number of occurrences
of the word "test" correlates with the number of test cases; and (iii) what
testing frameworks are mostly used. The analysis was performed on 38 GitHub
open source repositories thoroughly selected from the set of 4.3M GitHub
projects. We analyzed 20,340 test cases in 803 classes manually and 170k
classes using an automated approach. The results show that: (i) there exists a
weak correlation (r = 0.655) between the number of occurrences of the word
"test" and the number of test cases in a class; (ii) the proposed algorithm
using static file analysis correctly detected 97% of test cases; (iii) 15% of
the analyzed classes used main() function whose represent regular Java programs
that test the production code without using any third-party framework. The
identification of such tests is very complex due to implementation diversity.
The results may be leveraged to more quickly identify and locate test cases in
a repository, to understand practices in customized testing solutions, and to
mine tests to improve program comprehension in the future.
|
[
{
"created": "Tue, 23 Feb 2021 13:08:50 GMT",
"version": "v1"
},
{
"created": "Fri, 30 Jul 2021 07:47:23 GMT",
"version": "v2"
}
] |
2022-01-04
|
[
[
"Madeja",
"Matej",
""
],
[
"Porubän",
"Jaroslav",
""
],
[
"Bačíková",
"Michaela",
""
],
[
"Sulír",
"Matúš",
""
],
[
"Juhár",
"Ján",
""
],
[
"Chodarev",
"Sergej",
""
],
[
"Gurbáľ",
"Filip",
""
]
] |
Software testing is one of the very important Quality Assurance (QA) components. A lot of researchers deal with the testing process in terms of tester motivation and how tests should or should not be written. However, it is not known from the recommendations how the tests are written in real projects. In this paper, the following was investigated: (i) the denotation of the word "test" in different natural languages; (ii) whether the number of occurrences of the word "test" correlates with the number of test cases; and (iii) what testing frameworks are mostly used. The analysis was performed on 38 GitHub open source repositories thoroughly selected from the set of 4.3M GitHub projects. We analyzed 20,340 test cases in 803 classes manually and 170k classes using an automated approach. The results show that: (i) there exists a weak correlation (r = 0.655) between the number of occurrences of the word "test" and the number of test cases in a class; (ii) the proposed algorithm using static file analysis correctly detected 97% of test cases; (iii) 15% of the analyzed classes used main() function whose represent regular Java programs that test the production code without using any third-party framework. The identification of such tests is very complex due to implementation diversity. The results may be leveraged to more quickly identify and locate test cases in a repository, to understand practices in customized testing solutions, and to mine tests to improve program comprehension in the future.
|
2106.07385
|
Jennifer D'Souza
|
Jennifer D'Souza, S\"oren Auer and Ted Pedersen
|
SemEval-2021 Task 11: NLPContributionGraph -- Structuring Scholarly NLP
Contributions for a Research Knowledge Graph
|
13 pages, 5 figures, 8 tables
|
Proceedings of the 15th International Workshop on Semantic
Evaluation (SemEval-2021), (pp. 364-376), ACL
|
10.18653/v1/2021.semeval-1.44
| null |
cs.CL cs.AI cs.DL cs.IR cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
There is currently a gap between the natural language expression of scholarly
publications and their structured semantic content modeling to enable
intelligent content search. With the volume of research growing exponentially
every year, a search feature operating over semantically structured content is
compelling. The SemEval-2021 Shared Task NLPContributionGraph (a.k.a. 'the NCG
task') tasks participants to develop automated systems that structure
contributions from NLP scholarly articles in the English language. Being the
first-of-its-kind in the SemEval series, the task released structured data from
NLP scholarly articles at three levels of information granularity, i.e. at
sentence-level, phrase-level, and phrases organized as triples toward Knowledge
Graph (KG) building. The sentence-level annotations comprised the few sentences
about the article's contribution. The phrase-level annotations were scientific
term and predicate phrases from the contribution sentences. Finally, the
triples constituted the research overview KG. For the Shared Task,
participating systems were then expected to automatically classify contribution
sentences, extract scientific terms and relations from the sentences, and
organize them as KG triples.
Overall, the task drew a strong participation demographic of seven teams and
27 participants. The best end-to-end task system classified contribution
sentences at 57.27% F1, phrases at 46.41% F1, and triples at 22.28% F1. While
the absolute performance to generate triples remains low, in the conclusion of
this article, the difficulty of producing such data and as a consequence of
modeling it is highlighted.
|
[
{
"created": "Thu, 10 Jun 2021 13:43:47 GMT",
"version": "v1"
},
{
"created": "Tue, 6 Jul 2021 09:26:11 GMT",
"version": "v2"
},
{
"created": "Fri, 15 Oct 2021 08:18:56 GMT",
"version": "v3"
}
] |
2021-10-18
|
[
[
"D'Souza",
"Jennifer",
""
],
[
"Auer",
"Sören",
""
],
[
"Pedersen",
"Ted",
""
]
] |
There is currently a gap between the natural language expression of scholarly publications and their structured semantic content modeling to enable intelligent content search. With the volume of research growing exponentially every year, a search feature operating over semantically structured content is compelling. The SemEval-2021 Shared Task NLPContributionGraph (a.k.a. 'the NCG task') tasks participants to develop automated systems that structure contributions from NLP scholarly articles in the English language. Being the first-of-its-kind in the SemEval series, the task released structured data from NLP scholarly articles at three levels of information granularity, i.e. at sentence-level, phrase-level, and phrases organized as triples toward Knowledge Graph (KG) building. The sentence-level annotations comprised the few sentences about the article's contribution. The phrase-level annotations were scientific term and predicate phrases from the contribution sentences. Finally, the triples constituted the research overview KG. For the Shared Task, participating systems were then expected to automatically classify contribution sentences, extract scientific terms and relations from the sentences, and organize them as KG triples. Overall, the task drew a strong participation demographic of seven teams and 27 participants. The best end-to-end task system classified contribution sentences at 57.27% F1, phrases at 46.41% F1, and triples at 22.28% F1. While the absolute performance to generate triples remains low, in the conclusion of this article, the difficulty of producing such data and as a consequence of modeling it is highlighted.
|
1806.10748
|
Ayushi Sinha
|
Ayushi Sinha, Masaru Ishii, Russell H. Taylor, Gregory D. Hager and
Austin Reiter
|
Towards automatic initialization of registration algorithms using
simulated endoscopy images
|
4 pages, 4 figures
| null | null | null |
cs.CV cs.GR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Registering images from different modalities is an active area of research in
computer aided medical interventions. Several registration algorithms have been
developed, many of which achieve high accuracy. However, these results are
dependent on many factors, including the quality of the extracted features or
segmentations being registered as well as the initial alignment. Although
several methods have been developed towards improving segmentation algorithms
and automating the segmentation process, few automatic initialization
algorithms have been explored. In many cases, the initial alignment from which
a registration is initiated is performed manually, which interferes with the
clinical workflow. Our aim is to use scene classification in endoscopic
procedures to achieve coarse alignment of the endoscope and a preoperative
image of the anatomy. In this paper, we show using simulated scenes that a
neural network can predict the region of anatomy (with respect to a
preoperative image) that the endoscope is located in by observing a single
endoscopic video frame. With limited training and without any hyperparameter
tuning, our method achieves an accuracy of 76.53 (+/-1.19)%. There are several
avenues for improvement, making this a promising direction of research. Code is
available at https://github.com/AyushiSinha/AutoInitialization.
|
[
{
"created": "Thu, 28 Jun 2018 02:58:25 GMT",
"version": "v1"
}
] |
2018-06-29
|
[
[
"Sinha",
"Ayushi",
""
],
[
"Ishii",
"Masaru",
""
],
[
"Taylor",
"Russell H.",
""
],
[
"Hager",
"Gregory D.",
""
],
[
"Reiter",
"Austin",
""
]
] |
Registering images from different modalities is an active area of research in computer aided medical interventions. Several registration algorithms have been developed, many of which achieve high accuracy. However, these results are dependent on many factors, including the quality of the extracted features or segmentations being registered as well as the initial alignment. Although several methods have been developed towards improving segmentation algorithms and automating the segmentation process, few automatic initialization algorithms have been explored. In many cases, the initial alignment from which a registration is initiated is performed manually, which interferes with the clinical workflow. Our aim is to use scene classification in endoscopic procedures to achieve coarse alignment of the endoscope and a preoperative image of the anatomy. In this paper, we show using simulated scenes that a neural network can predict the region of anatomy (with respect to a preoperative image) that the endoscope is located in by observing a single endoscopic video frame. With limited training and without any hyperparameter tuning, our method achieves an accuracy of 76.53 (+/-1.19)%. There are several avenues for improvement, making this a promising direction of research. Code is available at https://github.com/AyushiSinha/AutoInitialization.
|
2005.10526
|
Satyajit Thakor
|
Hitika Tiwari and Satyajit Thakor
|
On Characterization of Entropic Vectors at the Boundary of Almost
Entropic Cones
|
ITW'19 (c) 2019 IEEE. Personal use of this material is permitted.
Permission from IEEE must be obtained for all other uses, in any current or
future media, including reprinting/republishing this material for advertising
or promotional purposes, creating new collective works, for resale or
redistribution to servers or lists, or reuse of any copyrighted component of
this work in other works
| null |
10.1109/ITW44776.2019.8989116
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The entropy region is a fundamental object in information theory. An outer
bound for the entropy region is defined by a minimal set of Shannon-type
inequalities called elemental inequalities also referred to as the Shannon
region. This paper focuses on characterization of the entropic points at the
boundary of the Shannon region for three random variables. The proper faces of
the Shannon region form its boundary. We give new outer bounds for the entropy
region in certain faces and show by explicit construction of distributions that
the existing inner bounds for the entropy region in certain faces are not
tight.
|
[
{
"created": "Thu, 21 May 2020 08:58:35 GMT",
"version": "v1"
}
] |
2020-05-22
|
[
[
"Tiwari",
"Hitika",
""
],
[
"Thakor",
"Satyajit",
""
]
] |
The entropy region is a fundamental object in information theory. An outer bound for the entropy region is defined by a minimal set of Shannon-type inequalities called elemental inequalities also referred to as the Shannon region. This paper focuses on characterization of the entropic points at the boundary of the Shannon region for three random variables. The proper faces of the Shannon region form its boundary. We give new outer bounds for the entropy region in certain faces and show by explicit construction of distributions that the existing inner bounds for the entropy region in certain faces are not tight.
|
2312.04180
|
Qian Xiong
|
Dandan Qiao, Huaxia Rui, and Qian Xiong
|
AI and Jobs: Has the Inflection Point Arrived? Evidence from an Online
Labor Platform
|
42 pages, 6 figures, 9 tables
| null | null | null |
cs.AI cs.CY econ.GN q-fin.EC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Artificial intelligence (AI) refers to the ability of machines or software to
mimic or even surpass human intelligence in a given cognitive task. While
humans learn by both induction and deduction, the success of current AI is
rooted in induction, relying on its ability to detect statistical regularities
in task input -- an ability learnt from a vast amount of training data using
enormous computation resources. We examine the performance of such a
statistical AI in a human task through the lens of four factors, including task
learnability, statistical resource, computation resource, and learning
techniques, and then propose a three-phase visual framework to understand the
evolving relation between AI and jobs. Based on this conceptual framework, we
develop a simple economic model of competition to show the existence of an
inflection point for each occupation. Before AI performance crosses the
inflection point, human workers always benefit from an improvement in AI
performance, but after the inflection point, human workers become worse off
whenever such an improvement occurs. To offer empirical evidence, we first
argue that AI performance has passed the inflection point for the occupation of
translation but not for the occupation of web development. We then study how
the launch of ChatGPT, which led to significant improvement of AI performance
on many tasks, has affected workers in these two occupations on a large online
labor platform. Consistent with the inflection point conjecture, we find that
translators are negatively affected by the shock both in terms of the number of
accepted jobs and the earnings from those jobs, while web developers are
positively affected by the very same shock. Given the potentially large
disruption of AI on employment, more studies on more occupations using data
from different platforms are urgently needed.
|
[
{
"created": "Thu, 7 Dec 2023 10:06:34 GMT",
"version": "v1"
}
] |
2023-12-08
|
[
[
"Qiao",
"Dandan",
""
],
[
"Rui",
"Huaxia",
""
],
[
"Xiong",
"Qian",
""
]
] |
Artificial intelligence (AI) refers to the ability of machines or software to mimic or even surpass human intelligence in a given cognitive task. While humans learn by both induction and deduction, the success of current AI is rooted in induction, relying on its ability to detect statistical regularities in task input -- an ability learnt from a vast amount of training data using enormous computation resources. We examine the performance of such a statistical AI in a human task through the lens of four factors, including task learnability, statistical resource, computation resource, and learning techniques, and then propose a three-phase visual framework to understand the evolving relation between AI and jobs. Based on this conceptual framework, we develop a simple economic model of competition to show the existence of an inflection point for each occupation. Before AI performance crosses the inflection point, human workers always benefit from an improvement in AI performance, but after the inflection point, human workers become worse off whenever such an improvement occurs. To offer empirical evidence, we first argue that AI performance has passed the inflection point for the occupation of translation but not for the occupation of web development. We then study how the launch of ChatGPT, which led to significant improvement of AI performance on many tasks, has affected workers in these two occupations on a large online labor platform. Consistent with the inflection point conjecture, we find that translators are negatively affected by the shock both in terms of the number of accepted jobs and the earnings from those jobs, while web developers are positively affected by the very same shock. Given the potentially large disruption of AI on employment, more studies on more occupations using data from different platforms are urgently needed.
|
1605.00975
|
Pushpendra Singh
|
Pushpendra Singh
|
Breaking the Limits -- Redefining the Instantaneous Frequency
|
12 pages, 15 figures. arXiv admin note: text overlap with
arXiv:1604.04992
|
Circuits, Systems, and Signal Processing, 1--22, November 2017
|
10.1007/s00034-017-0719-y
| null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Carson and Fry (1937) introduced the concept of variable frequency as a
generalization of the constant frequency. The instantaneous frequency (IF) is
the time derivative of the instantaneous phase and it is well-defined only when
this derivative is positive. If this derivative is negative, the IF creates
problem because it does not provide any physical significance. This study
proposes a mathematical solution and eliminate this problem by redefining the
IF such that it is valid for all monocomponent and multicomponent signals which
can be nonlinear and nonstationary in nature. This is achieved by using the
property of the multivalued inverse tangent function. The efforts and
understanding of all the methods based on the IF would improve significantly by
using this proposed definition of the IF. We also demonstrate that the
decomposition of a signal, using zero-phase filtering based on the well
established Fourier and filter theory, into a set of desired frequency bands
with proposed IF produces accurate time-frequency-energy (TFE) distribution
that reveals true nature of signal. Simulation results demonstrate the efficacy
of the proposed IF that makes zero-phase filter based decomposition most
powerful, for the TFE analysis of a signal, as compared to other existing
methods in the literature.
|
[
{
"created": "Sat, 9 Apr 2016 04:32:31 GMT",
"version": "v1"
},
{
"created": "Tue, 10 May 2016 09:34:02 GMT",
"version": "v2"
},
{
"created": "Mon, 5 Sep 2016 10:32:59 GMT",
"version": "v3"
}
] |
2017-12-06
|
[
[
"Singh",
"Pushpendra",
""
]
] |
The Carson and Fry (1937) introduced the concept of variable frequency as a generalization of the constant frequency. The instantaneous frequency (IF) is the time derivative of the instantaneous phase and it is well-defined only when this derivative is positive. If this derivative is negative, the IF creates problem because it does not provide any physical significance. This study proposes a mathematical solution and eliminate this problem by redefining the IF such that it is valid for all monocomponent and multicomponent signals which can be nonlinear and nonstationary in nature. This is achieved by using the property of the multivalued inverse tangent function. The efforts and understanding of all the methods based on the IF would improve significantly by using this proposed definition of the IF. We also demonstrate that the decomposition of a signal, using zero-phase filtering based on the well established Fourier and filter theory, into a set of desired frequency bands with proposed IF produces accurate time-frequency-energy (TFE) distribution that reveals true nature of signal. Simulation results demonstrate the efficacy of the proposed IF that makes zero-phase filter based decomposition most powerful, for the TFE analysis of a signal, as compared to other existing methods in the literature.
|
2104.09823
|
Lotte Weedage
|
Lotte Weedage, Clara Stegehuis and Suzan Bayhan
|
Impact of Multi-connectivity on Channel Capacity and Outage Probability
in Wireless Networks
|
This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible
|
IEEE Transactions on Vehicular Technology 2023
|
10.1109/TVT.2023.3242358
| null |
cs.NI math.PR
|
http://creativecommons.org/licenses/by/4.0/
|
Multi-connectivity facilitates higher throughput, shorter delay, and lower
outage probability for a user in a wireless network. Considering these
promises, a rationale policy for a network operator would be to implement
multi-connectivity for all of its users. In this paper, we investigate whether
the promises of multi-connectivity also hold in such a setting where all users
of a network are connected through multiple links. In particular, we consider a
network where every user connects to its k closest base stations. Using a
framework of stochastic geometry and probability theory, we obtain analytic
expressions for per-user throughput and outage probability of $k$-connectivity
networks under several failure models. In contrast to the conclusions of
previous research, our analysis shows that per-user throughput decreases with
increasing k. However, multi-connected networks are more resilient against
failures than single connected networks as reflected with lower outage
probability and lead to higher fairness among the users. Consequently, we
conclude that rather than implementing multi-connectivity for all users, a
network operator should consider it for its users who would benefit from
additional links the most, e.g., cell edge users.
|
[
{
"created": "Tue, 20 Apr 2021 08:20:18 GMT",
"version": "v1"
},
{
"created": "Mon, 26 Apr 2021 11:29:54 GMT",
"version": "v2"
}
] |
2023-03-03
|
[
[
"Weedage",
"Lotte",
""
],
[
"Stegehuis",
"Clara",
""
],
[
"Bayhan",
"Suzan",
""
]
] |
Multi-connectivity facilitates higher throughput, shorter delay, and lower outage probability for a user in a wireless network. Considering these promises, a rationale policy for a network operator would be to implement multi-connectivity for all of its users. In this paper, we investigate whether the promises of multi-connectivity also hold in such a setting where all users of a network are connected through multiple links. In particular, we consider a network where every user connects to its k closest base stations. Using a framework of stochastic geometry and probability theory, we obtain analytic expressions for per-user throughput and outage probability of $k$-connectivity networks under several failure models. In contrast to the conclusions of previous research, our analysis shows that per-user throughput decreases with increasing k. However, multi-connected networks are more resilient against failures than single connected networks as reflected with lower outage probability and lead to higher fairness among the users. Consequently, we conclude that rather than implementing multi-connectivity for all users, a network operator should consider it for its users who would benefit from additional links the most, e.g., cell edge users.
|
2002.07584
|
Alon Rashelbach
|
Alon Rashelbach, Ori Rottenstreich, Mark Silberstein
|
A Computational Approach to Packet Classification
|
To appear in SIGCOMM 2020
| null |
10.1145/3387514.3405886
| null |
cs.DC cs.LG cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Multi-field packet classification is a crucial component in modern
software-defined data center networks. To achieve high throughput and low
latency, state-of-the-art algorithms strive to fit the rule lookup data
structures into on-die caches; however, they do not scale well with the number
of rules. We present a novel approach, NuevoMatch, which improves the memory
scaling of existing methods. A new data structure, Range Query Recursive Model
Index (RQ-RMI), is the key component that enables NuevoMatch to replace most of
the accesses to main memory with model inference computations. We describe an
efficient training algorithm that guarantees the correctness of the
RQ-RMI-based classification. The use of RQ-RMI allows the rules to be
compressed into model weights that fit into the hardware cache. Further, it
takes advantage of the growing support for fast neural network processing in
modern CPUs, such as wide vector instructions, achieving a rate of tens of
nanoseconds per lookup. Our evaluation using 500K multi-field rules from the
standard ClassBench benchmark shows a geometric mean compression factor of
4.9x, 8x, and 82x, and average performance improvement of 2.4x, 2.6x, and 1.6x
in throughput compared to CutSplit, NeuroCuts, and TupleMerge, all
state-of-the-art algorithms.
|
[
{
"created": "Mon, 10 Feb 2020 13:47:02 GMT",
"version": "v1"
},
{
"created": "Mon, 13 Jul 2020 06:18:41 GMT",
"version": "v2"
}
] |
2020-07-14
|
[
[
"Rashelbach",
"Alon",
""
],
[
"Rottenstreich",
"Ori",
""
],
[
"Silberstein",
"Mark",
""
]
] |
Multi-field packet classification is a crucial component in modern software-defined data center networks. To achieve high throughput and low latency, state-of-the-art algorithms strive to fit the rule lookup data structures into on-die caches; however, they do not scale well with the number of rules. We present a novel approach, NuevoMatch, which improves the memory scaling of existing methods. A new data structure, Range Query Recursive Model Index (RQ-RMI), is the key component that enables NuevoMatch to replace most of the accesses to main memory with model inference computations. We describe an efficient training algorithm that guarantees the correctness of the RQ-RMI-based classification. The use of RQ-RMI allows the rules to be compressed into model weights that fit into the hardware cache. Further, it takes advantage of the growing support for fast neural network processing in modern CPUs, such as wide vector instructions, achieving a rate of tens of nanoseconds per lookup. Our evaluation using 500K multi-field rules from the standard ClassBench benchmark shows a geometric mean compression factor of 4.9x, 8x, and 82x, and average performance improvement of 2.4x, 2.6x, and 1.6x in throughput compared to CutSplit, NeuroCuts, and TupleMerge, all state-of-the-art algorithms.
|
2404.16553
|
Anil Goyal
|
Venkatesh C, Harshit Oberoi, Anil Goyal, Nikhil Sikka
|
RE-RecSys: An End-to-End system for recommending properties in
Real-Estate domain
| null | null |
10.1145/3632410.3632487
| null |
cs.IR cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We propose an end-to-end real-estate recommendation system, RE-RecSys, which
has been productionized in real-world industry setting. We categorize any user
into 4 categories based on available historical data: i) cold-start users; ii)
short-term users; iii) long-term users; and iv) short-long term users. For
cold-start users, we propose a novel rule-based engine that is based on the
popularity of locality and user preferences. For short-term users, we propose
to use content-filtering model which recommends properties based on recent
interactions of users. For long-term and short-long term users, we propose a
novel combination of content and collaborative filtering based approach which
can be easily productionized in the real-world scenario. Moreover, based on the
conversion rate, we have designed a novel weighing scheme for different
impressions done by users on the platform for the training of content and
collaborative models. Finally, we show the efficiency of the proposed pipeline,
RE-RecSys, on a real-world property and clickstream dataset collected from
leading real-estate platform in India. We show that the proposed pipeline is
deployable in real-world scenario with an average latency of <40 ms serving
1000 rpm.
|
[
{
"created": "Thu, 25 Apr 2024 12:09:17 GMT",
"version": "v1"
}
] |
2024-04-26
|
[
[
"C",
"Venkatesh",
""
],
[
"Oberoi",
"Harshit",
""
],
[
"Goyal",
"Anil",
""
],
[
"Sikka",
"Nikhil",
""
]
] |
We propose an end-to-end real-estate recommendation system, RE-RecSys, which has been productionized in real-world industry setting. We categorize any user into 4 categories based on available historical data: i) cold-start users; ii) short-term users; iii) long-term users; and iv) short-long term users. For cold-start users, we propose a novel rule-based engine that is based on the popularity of locality and user preferences. For short-term users, we propose to use content-filtering model which recommends properties based on recent interactions of users. For long-term and short-long term users, we propose a novel combination of content and collaborative filtering based approach which can be easily productionized in the real-world scenario. Moreover, based on the conversion rate, we have designed a novel weighing scheme for different impressions done by users on the platform for the training of content and collaborative models. Finally, we show the efficiency of the proposed pipeline, RE-RecSys, on a real-world property and clickstream dataset collected from leading real-estate platform in India. We show that the proposed pipeline is deployable in real-world scenario with an average latency of <40 ms serving 1000 rpm.
|
1805.10105
|
Simon Hegelich
|
Andree Thieltges, Orestis Papakyriakopoulos, Juan Carlos Medina
Serrano, Simon Hegelich
|
Effects of Social Bots in the Iran-Debate on Twitter
| null | null | null | null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
2018 started with massive protests in Iran, bringing back the impressions of
the so called "Arab Spring" and it's revolutionary impact for the Maghreb
states, Syria and Egypt. Many reports and scientific examinations considered
online social networks (OSN's) such as Twitter or Facebook to play a critical
role in the opinion making of people behind those protests. Beside that, there
is also evidence for directed manipulation of opinion with the help of social
bots and fake accounts. So, it is obvious to ask, if there is an attempt to
manipulate the opinion-making process related to the Iranian protest in OSN by
employing social bots, and how such manipulations will affect the discourse as
a whole. Based on a sample of ca. 900,000 Tweets relating to the topic "Iran"
we show, that there are Twitter profiles, that have to be considered as social
bot accounts. By using text mining methods, we show that these social bots are
responsible for negative sentiment in the debate. Thereby, we would like to
illustrate a detectable effect of social bots on political discussions on
Twitter.
|
[
{
"created": "Fri, 25 May 2018 12:33:14 GMT",
"version": "v1"
}
] |
2018-05-28
|
[
[
"Thieltges",
"Andree",
""
],
[
"Papakyriakopoulos",
"Orestis",
""
],
[
"Serrano",
"Juan Carlos Medina",
""
],
[
"Hegelich",
"Simon",
""
]
] |
2018 started with massive protests in Iran, bringing back the impressions of the so called "Arab Spring" and it's revolutionary impact for the Maghreb states, Syria and Egypt. Many reports and scientific examinations considered online social networks (OSN's) such as Twitter or Facebook to play a critical role in the opinion making of people behind those protests. Beside that, there is also evidence for directed manipulation of opinion with the help of social bots and fake accounts. So, it is obvious to ask, if there is an attempt to manipulate the opinion-making process related to the Iranian protest in OSN by employing social bots, and how such manipulations will affect the discourse as a whole. Based on a sample of ca. 900,000 Tweets relating to the topic "Iran" we show, that there are Twitter profiles, that have to be considered as social bot accounts. By using text mining methods, we show that these social bots are responsible for negative sentiment in the debate. Thereby, we would like to illustrate a detectable effect of social bots on political discussions on Twitter.
|
2311.02971
|
David Salinas
|
David Salinas and Nick Erickson
|
TabRepo: A Large Scale Repository of Tabular Model Evaluations and its
AutoML Applications
| null | null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
We introduce TabRepo, a new dataset of tabular model evaluations and
predictions. TabRepo contains the predictions and metrics of 1310 models
evaluated on 200 classification and regression datasets. We illustrate the
benefit of our dataset in multiple ways. First, we show that it allows to
perform analysis such as comparing Hyperparameter Optimization against current
AutoML systems while also considering ensembling at marginal cost by using
precomputed model predictions. Second, we show that our dataset can be readily
leveraged to perform transfer-learning. In particular, we show that applying
standard transfer-learning techniques allows to outperform current
state-of-the-art tabular systems in accuracy, runtime and latency.
|
[
{
"created": "Mon, 6 Nov 2023 09:17:18 GMT",
"version": "v1"
},
{
"created": "Tue, 19 Mar 2024 13:08:59 GMT",
"version": "v2"
}
] |
2024-03-20
|
[
[
"Salinas",
"David",
""
],
[
"Erickson",
"Nick",
""
]
] |
We introduce TabRepo, a new dataset of tabular model evaluations and predictions. TabRepo contains the predictions and metrics of 1310 models evaluated on 200 classification and regression datasets. We illustrate the benefit of our dataset in multiple ways. First, we show that it allows to perform analysis such as comparing Hyperparameter Optimization against current AutoML systems while also considering ensembling at marginal cost by using precomputed model predictions. Second, we show that our dataset can be readily leveraged to perform transfer-learning. In particular, we show that applying standard transfer-learning techniques allows to outperform current state-of-the-art tabular systems in accuracy, runtime and latency.
|
2112.06454
|
Namgil Kim
|
Namgil Kim and Barom Kang and Yeonok Cho
|
Split GCN: Effective Interactive Annotation for Segmentation of
Disconnected Instance
|
11 pages
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Annotating object boundaries by humans demands high costs. Recently,
polygon-based annotation methods with human interaction have shown successful
performance. However, given the connected vertex topology, these methods
exhibit difficulty predicting the disconnected components in an object. This
paper introduces Split-GCN, a novel architecture based on the polygon approach
and self-attention mechanism. By offering the direction information, Split-GCN
enables the polygon vertices to move more precisely to the object boundary. Our
model successfully predicts disconnected components of an object by
transforming the initial topology using the context exchange about the
dependencies of vertices. Split-GCN demonstrates competitive performance with
the state-of-the-art models on Cityscapes and even higher performance with the
baseline models. On four cross-domain datasets, we confirm our model's
generalization ability.
|
[
{
"created": "Mon, 13 Dec 2021 07:17:03 GMT",
"version": "v1"
}
] |
2021-12-14
|
[
[
"Kim",
"Namgil",
""
],
[
"Kang",
"Barom",
""
],
[
"Cho",
"Yeonok",
""
]
] |
Annotating object boundaries by humans demands high costs. Recently, polygon-based annotation methods with human interaction have shown successful performance. However, given the connected vertex topology, these methods exhibit difficulty predicting the disconnected components in an object. This paper introduces Split-GCN, a novel architecture based on the polygon approach and self-attention mechanism. By offering the direction information, Split-GCN enables the polygon vertices to move more precisely to the object boundary. Our model successfully predicts disconnected components of an object by transforming the initial topology using the context exchange about the dependencies of vertices. Split-GCN demonstrates competitive performance with the state-of-the-art models on Cityscapes and even higher performance with the baseline models. On four cross-domain datasets, we confirm our model's generalization ability.
|
2205.06331
|
Ahmadreza Moradipari
|
Ahmadreza Moradipari, Mohammad Ghavamzadeh, and Mahnoosh Alizadeh
|
Collaborative Multi-agent Stochastic Linear Bandits
| null |
American Control Conference (ACC), 2022
| null | null |
cs.LG cs.MA
|
http://creativecommons.org/licenses/by/4.0/
|
We study a collaborative multi-agent stochastic linear bandit setting, where
$N$ agents that form a network communicate locally to minimize their overall
regret. In this setting, each agent has its own linear bandit problem (its own
reward parameter) and the goal is to select the best global action w.r.t. the
average of their reward parameters. At each round, each agent proposes an
action, and one action is randomly selected and played as the network action.
All the agents observe the corresponding rewards of the played actions and use
an accelerated consensus procedure to compute an estimate of the average of the
rewards obtained by all the agents. We propose a distributed upper confidence
bound (UCB) algorithm and prove a high probability bound on its $T$-round
regret in which we include a linear growth of regret associated with each
communication round. Our regret bound is of order
$\mathcal{O}\Big(\sqrt{\frac{T}{N \log(1/|\lambda_2|)}}\cdot (\log T)^2\Big)$,
where $\lambda_2$ is the second largest (in absolute value) eigenvalue of the
communication matrix.
|
[
{
"created": "Thu, 12 May 2022 19:46:35 GMT",
"version": "v1"
}
] |
2022-05-16
|
[
[
"Moradipari",
"Ahmadreza",
""
],
[
"Ghavamzadeh",
"Mohammad",
""
],
[
"Alizadeh",
"Mahnoosh",
""
]
] |
We study a collaborative multi-agent stochastic linear bandit setting, where $N$ agents that form a network communicate locally to minimize their overall regret. In this setting, each agent has its own linear bandit problem (its own reward parameter) and the goal is to select the best global action w.r.t. the average of their reward parameters. At each round, each agent proposes an action, and one action is randomly selected and played as the network action. All the agents observe the corresponding rewards of the played actions and use an accelerated consensus procedure to compute an estimate of the average of the rewards obtained by all the agents. We propose a distributed upper confidence bound (UCB) algorithm and prove a high probability bound on its $T$-round regret in which we include a linear growth of regret associated with each communication round. Our regret bound is of order $\mathcal{O}\Big(\sqrt{\frac{T}{N \log(1/|\lambda_2|)}}\cdot (\log T)^2\Big)$, where $\lambda_2$ is the second largest (in absolute value) eigenvalue of the communication matrix.
|
2208.14699
|
Xingchao Liu
|
Xingchao Liu, Lemeng Wu, Mao Ye, Qiang Liu
|
Let us Build Bridges: Understanding and Extending Diffusion Generative
Models
| null | null | null | null |
cs.LG cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Diffusion-based generative models have achieved promising results recently,
but raise an array of open questions in terms of conceptual understanding,
theoretical analysis, algorithm improvement and extensions to discrete,
structured, non-Euclidean domains. This work tries to re-exam the overall
framework, in order to gain better theoretical understandings and develop
algorithmic extensions for data from arbitrary domains. By viewing diffusion
models as latent variable models with unobserved diffusion trajectories and
applying maximum likelihood estimation (MLE) with latent trajectories imputed
from an auxiliary distribution, we show that both the model construction and
the imputation of latent trajectories amount to constructing diffusion bridge
processes that achieve deterministic values and constraints at end point, for
which we provide a systematic study and a suit of tools. Leveraging our
framework, we present 1) a first theoretical error analysis for learning
diffusion generation models, and 2) a simple and unified approach to learning
on data from different discrete and constrained domains. Experiments show that
our methods perform superbly on generating images, semantic segments and 3D
point clouds.
|
[
{
"created": "Wed, 31 Aug 2022 08:58:10 GMT",
"version": "v1"
}
] |
2022-09-01
|
[
[
"Liu",
"Xingchao",
""
],
[
"Wu",
"Lemeng",
""
],
[
"Ye",
"Mao",
""
],
[
"Liu",
"Qiang",
""
]
] |
Diffusion-based generative models have achieved promising results recently, but raise an array of open questions in terms of conceptual understanding, theoretical analysis, algorithm improvement and extensions to discrete, structured, non-Euclidean domains. This work tries to re-exam the overall framework, in order to gain better theoretical understandings and develop algorithmic extensions for data from arbitrary domains. By viewing diffusion models as latent variable models with unobserved diffusion trajectories and applying maximum likelihood estimation (MLE) with latent trajectories imputed from an auxiliary distribution, we show that both the model construction and the imputation of latent trajectories amount to constructing diffusion bridge processes that achieve deterministic values and constraints at end point, for which we provide a systematic study and a suit of tools. Leveraging our framework, we present 1) a first theoretical error analysis for learning diffusion generation models, and 2) a simple and unified approach to learning on data from different discrete and constrained domains. Experiments show that our methods perform superbly on generating images, semantic segments and 3D point clouds.
|
2201.12464
|
Claire Le Goues
|
Deborah S. Katz, Christopher S. Timperley, Claire Le Goues
|
Using Dynamic Binary Instrumentation to Detect Failures in Robotics
Software
| null | null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Autonomous and Robotics Systems (ARSs) are widespread, complex, and
increasingly coming into contact with the public. Many of these systems are
safety-critical, and it is vital to detect software errors to protect against
harm. We propose a family of novel techniques to detect unusual program
executions and incorrect program behavior. We model execution behavior by
collecting low-level signals at run time and using those signals to build
machine learning models. These models can identify previously-unseen executions
that are more likely to exhibit errors. We describe a tractable approach for
collecting dynamic binary runtime signals on ARSs, allowing the systems to
absorb most of the overhead from dynamic instrumentation. The architecture of
ARSs is particularly well-adapted to hiding the overhead from instrumentation.
We demonstrate the efficiency of these approaches on ARDUPILOT -- a popular
open-source autopilot software system -- and HUSKY -- an unmanned ground
vehicle -- in simulation. We instrument executions to gather data from which we
build supervised machine learning models of executions and evaluate the
accuracy of these models. We also analyze the amount of training data needed to
develop models with various degrees of accuracy, measure the overhead added to
executions that use the analysis tool, and analyze which runtime signals are
most useful for detecting unusual behavior on the program under test. In
addition, we analyze the effects of timing delays on the functional behavior of
ARSs.
|
[
{
"created": "Sat, 29 Jan 2022 01:00:23 GMT",
"version": "v1"
}
] |
2022-02-01
|
[
[
"Katz",
"Deborah S.",
""
],
[
"Timperley",
"Christopher S.",
""
],
[
"Goues",
"Claire Le",
""
]
] |
Autonomous and Robotics Systems (ARSs) are widespread, complex, and increasingly coming into contact with the public. Many of these systems are safety-critical, and it is vital to detect software errors to protect against harm. We propose a family of novel techniques to detect unusual program executions and incorrect program behavior. We model execution behavior by collecting low-level signals at run time and using those signals to build machine learning models. These models can identify previously-unseen executions that are more likely to exhibit errors. We describe a tractable approach for collecting dynamic binary runtime signals on ARSs, allowing the systems to absorb most of the overhead from dynamic instrumentation. The architecture of ARSs is particularly well-adapted to hiding the overhead from instrumentation. We demonstrate the efficiency of these approaches on ARDUPILOT -- a popular open-source autopilot software system -- and HUSKY -- an unmanned ground vehicle -- in simulation. We instrument executions to gather data from which we build supervised machine learning models of executions and evaluate the accuracy of these models. We also analyze the amount of training data needed to develop models with various degrees of accuracy, measure the overhead added to executions that use the analysis tool, and analyze which runtime signals are most useful for detecting unusual behavior on the program under test. In addition, we analyze the effects of timing delays on the functional behavior of ARSs.
|
2312.17215
|
Promit Panja
|
Promit Panja, Jesse B. Hoagg, Sabur Baidya
|
Control Barrier Function Based UAV Safety Controller in Autonomous
Airborne Tracking and Following Systems
| null | null | null | null |
cs.RO cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Safe operations of UAVs are of paramount importance for various
mission-critical and safety-critical UAV applications. In context of airborne
target tracking and following, UAVs need to track a flying target avoiding
collision and also closely follow its trajectory. The safety situation becomes
critical and more complex when the flying target is non-cooperative and has
erratic movements. This paper proposes a method for collision avoidance in an
autonomous fast moving dynamic quadrotor UAV tracking and following another
target UAV. This is achieved by designing a safety controller that minimally
modifies the control input from a trajectory tracking controller and guarantees
safety. This method enables pairing our proposed safety controller with already
existing flight controllers. Our safety controller uses a control barrier
function based quadratic program (CBF-QP) to produce an optimal control input
enabling safe operation while also follow the trajectory of the target closely.
We implement our solution on AirSim simulator over PX4 flight controller and
with numerical results, we validate our approach through several simulation
experiments with multiple scenarios and trajectories.
|
[
{
"created": "Thu, 28 Dec 2023 18:46:21 GMT",
"version": "v1"
}
] |
2023-12-29
|
[
[
"Panja",
"Promit",
""
],
[
"Hoagg",
"Jesse B.",
""
],
[
"Baidya",
"Sabur",
""
]
] |
Safe operations of UAVs are of paramount importance for various mission-critical and safety-critical UAV applications. In context of airborne target tracking and following, UAVs need to track a flying target avoiding collision and also closely follow its trajectory. The safety situation becomes critical and more complex when the flying target is non-cooperative and has erratic movements. This paper proposes a method for collision avoidance in an autonomous fast moving dynamic quadrotor UAV tracking and following another target UAV. This is achieved by designing a safety controller that minimally modifies the control input from a trajectory tracking controller and guarantees safety. This method enables pairing our proposed safety controller with already existing flight controllers. Our safety controller uses a control barrier function based quadratic program (CBF-QP) to produce an optimal control input enabling safe operation while also follow the trajectory of the target closely. We implement our solution on AirSim simulator over PX4 flight controller and with numerical results, we validate our approach through several simulation experiments with multiple scenarios and trajectories.
|
0905.0737
|
Ignacio Vega-Paez M en C
|
Ignacio Vega-Paez, Jose Angel Ortega, Georgina G. Pulido
|
REC language is a live on IBM1130 simulator, EL lenguaje REC esta vivo
en el simulador de la IBM 1130
|
This work is archaeological reconstruction of REC/A language
| null | null |
IBP-TR2009-04
|
cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
REC (Regular Expression Compiler) is a concise programming language
development in mayor Mexican Universities at end of 60s which allows students
to write programs without knowledge of the complicated syntax of languages like
FORTRAN and ALGOL. The language is recursive and contains only four elements
for control. This paper describes use of the interpreter of REC written in
FORTRAN on IBM1130 Simulator from -Computer History Simulation- Project.
|
[
{
"created": "Wed, 6 May 2009 04:21:15 GMT",
"version": "v1"
}
] |
2009-05-07
|
[
[
"Vega-Paez",
"Ignacio",
""
],
[
"Ortega",
"Jose Angel",
""
],
[
"Pulido",
"Georgina G.",
""
]
] |
REC (Regular Expression Compiler) is a concise programming language development in mayor Mexican Universities at end of 60s which allows students to write programs without knowledge of the complicated syntax of languages like FORTRAN and ALGOL. The language is recursive and contains only four elements for control. This paper describes use of the interpreter of REC written in FORTRAN on IBM1130 Simulator from -Computer History Simulation- Project.
|
2209.03748
|
Netanell Avisdris
|
Netanell Avisdris, Aviad Rabinowich, Daniel Fridkin, Ayala Zilberman,
Sapir Lazar, Jacky Herzlich, Zeev Hananis, Daphna Link-Sourani, Liat
Ben-Sira, Liran Hiersch, Dafna Ben Bashat, and Leo Joskowicz
|
Automatic fetal fat quantification from MRI
|
13 pages, 4 Figures, 3 Tables, Accepted to PIPPI/MICCAI 2022
| null |
10.1007/978-3-031-17117-8_3
| null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Normal fetal adipose tissue (AT) development is essential for perinatal
well-being. AT, or simply fat, stores energy in the form of lipids.
Malnourishment may result in excessive or depleted adiposity. Although previous
studies showed a correlation between the amount of AT and perinatal outcome,
prenatal assessment of AT is limited by lacking quantitative methods. Using
magnetic resonance imaging (MRI), 3D fat- and water-only images of the entire
fetus can be obtained from two point Dixon images to enable AT lipid
quantification. This paper is the first to present a methodology for developing
a deep learning based method for fetal fat segmentation based on Dixon MRI. It
optimizes radiologists' manual fetal fat delineation time to produce annotated
training dataset. It consists of two steps: 1) model-based semi-automatic fetal
fat segmentations, reviewed and corrected by a radiologist; 2) automatic fetal
fat segmentation using DL networks trained on the resulting annotated dataset.
Three DL networks were trained. We show a significant improvement in
segmentation times (3:38 hours to < 1 hour) and observer variability (Dice of
0.738 to 0.906) compared to manual segmentation. Automatic segmentation of 24
test cases with the 3D Residual U-Net, nn-UNet and SWIN-UNetR transformer
networks yields a mean Dice score of 0.863, 0.787 and 0.856, respectively.
These results are better than the manual observer variability, and comparable
to automatic adult and pediatric fat segmentation. A radiologist reviewed and
corrected six new independent cases segmented using the best performing
network, resulting in a Dice score of 0.961 and a significantly reduced
correction time of 15:20 minutes. Using these novel segmentation methods and
short MRI acquisition time, whole body subcutaneous lipids can be quantified
for individual fetuses in the clinic and large-cohort research.
|
[
{
"created": "Thu, 8 Sep 2022 12:07:12 GMT",
"version": "v1"
}
] |
2022-09-30
|
[
[
"Avisdris",
"Netanell",
""
],
[
"Rabinowich",
"Aviad",
""
],
[
"Fridkin",
"Daniel",
""
],
[
"Zilberman",
"Ayala",
""
],
[
"Lazar",
"Sapir",
""
],
[
"Herzlich",
"Jacky",
""
],
[
"Hananis",
"Zeev",
""
],
[
"Link-Sourani",
"Daphna",
""
],
[
"Ben-Sira",
"Liat",
""
],
[
"Hiersch",
"Liran",
""
],
[
"Bashat",
"Dafna Ben",
""
],
[
"Joskowicz",
"Leo",
""
]
] |
Normal fetal adipose tissue (AT) development is essential for perinatal well-being. AT, or simply fat, stores energy in the form of lipids. Malnourishment may result in excessive or depleted adiposity. Although previous studies showed a correlation between the amount of AT and perinatal outcome, prenatal assessment of AT is limited by lacking quantitative methods. Using magnetic resonance imaging (MRI), 3D fat- and water-only images of the entire fetus can be obtained from two point Dixon images to enable AT lipid quantification. This paper is the first to present a methodology for developing a deep learning based method for fetal fat segmentation based on Dixon MRI. It optimizes radiologists' manual fetal fat delineation time to produce annotated training dataset. It consists of two steps: 1) model-based semi-automatic fetal fat segmentations, reviewed and corrected by a radiologist; 2) automatic fetal fat segmentation using DL networks trained on the resulting annotated dataset. Three DL networks were trained. We show a significant improvement in segmentation times (3:38 hours to < 1 hour) and observer variability (Dice of 0.738 to 0.906) compared to manual segmentation. Automatic segmentation of 24 test cases with the 3D Residual U-Net, nn-UNet and SWIN-UNetR transformer networks yields a mean Dice score of 0.863, 0.787 and 0.856, respectively. These results are better than the manual observer variability, and comparable to automatic adult and pediatric fat segmentation. A radiologist reviewed and corrected six new independent cases segmented using the best performing network, resulting in a Dice score of 0.961 and a significantly reduced correction time of 15:20 minutes. Using these novel segmentation methods and short MRI acquisition time, whole body subcutaneous lipids can be quantified for individual fetuses in the clinic and large-cohort research.
|
2312.05584
|
Guocheng Feng
|
Guocheng Feng, Huaiyu Cai, Kaihao Chen, and Zhijian Li
|
A Hybrid Method of Sentiment Analysis and Machine Learning Algorithm for
the U.S. Presidential Election Forecasting
|
6 pages, 7 tables, 3 figures, 2023 IEEE International Conference on
Big Data
| null | null | null |
cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
U.S. Presidential Election forecasting has been a research interest for
several decades. Currently, election prediction consists of two main
approaches: traditional models that incorporate economic data and poll surveys,
and models that leverage Twitter (or X) and other social media platforms due to
their increasing popularity in the past decade. However, traditional approaches
have predominantly focused on national-level predictions, while social
media-based approaches often oversimplify the nuanced differences between
online discourse and the broader voting population's political landscape.
In this work, we perform a hybrid method of both the machine learning
algorithm and the sentiment analysis on the state level with various
independent variables including census data, economic indicators, polling
averages, and the newly defined average sentiment scores from Twitter. Our
prediction for the 2020 U.S. Presidential Election yielded promising results.
Most of our models successfully predicted a victory for the Democratic
candidate with 96% accuracy using Gradient Boosting Trees and Multi-Layer
Perceptron algorithms. This novel prediction framework addresses the
limitations of existing U.S. Presidential Election forecasting approaches,
particularly in terms of state-level predictions. It provides a valuable
foundation for future research in this field and contributes to advancing our
understanding of election dynamics.
|
[
{
"created": "Sat, 9 Dec 2023 14:07:46 GMT",
"version": "v1"
}
] |
2023-12-12
|
[
[
"Feng",
"Guocheng",
""
],
[
"Cai",
"Huaiyu",
""
],
[
"Chen",
"Kaihao",
""
],
[
"Li",
"Zhijian",
""
]
] |
U.S. Presidential Election forecasting has been a research interest for several decades. Currently, election prediction consists of two main approaches: traditional models that incorporate economic data and poll surveys, and models that leverage Twitter (or X) and other social media platforms due to their increasing popularity in the past decade. However, traditional approaches have predominantly focused on national-level predictions, while social media-based approaches often oversimplify the nuanced differences between online discourse and the broader voting population's political landscape. In this work, we perform a hybrid method of both the machine learning algorithm and the sentiment analysis on the state level with various independent variables including census data, economic indicators, polling averages, and the newly defined average sentiment scores from Twitter. Our prediction for the 2020 U.S. Presidential Election yielded promising results. Most of our models successfully predicted a victory for the Democratic candidate with 96% accuracy using Gradient Boosting Trees and Multi-Layer Perceptron algorithms. This novel prediction framework addresses the limitations of existing U.S. Presidential Election forecasting approaches, particularly in terms of state-level predictions. It provides a valuable foundation for future research in this field and contributes to advancing our understanding of election dynamics.
|
2009.11758
|
Julien Grange
|
Julien Grange
|
Successor-Invariant First-Order Logic on Classes of Bounded Degree
| null |
Logical Methods in Computer Science, Volume 17, Issue 3 (August
13, 2021) lmcs:6803
|
10.46298/lmcs-17(3:20)2021
| null |
cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
We study the expressive power of successor-invariant first-order logic, which
is an extension of first-order logic where the usage of an additional successor
relation on the structure is allowed, as long as the validity of formulas is
independent of the choice of a particular successor on finite structures.
We show that when the degree is bounded, successor-invariant first-order
logic is no more expressive than first-order logic.
|
[
{
"created": "Thu, 24 Sep 2020 15:30:13 GMT",
"version": "v1"
},
{
"created": "Tue, 20 Apr 2021 15:27:14 GMT",
"version": "v2"
},
{
"created": "Mon, 2 Aug 2021 12:50:17 GMT",
"version": "v3"
},
{
"created": "Wed, 11 Aug 2021 20:50:24 GMT",
"version": "v4"
}
] |
2023-06-22
|
[
[
"Grange",
"Julien",
""
]
] |
We study the expressive power of successor-invariant first-order logic, which is an extension of first-order logic where the usage of an additional successor relation on the structure is allowed, as long as the validity of formulas is independent of the choice of a particular successor on finite structures. We show that when the degree is bounded, successor-invariant first-order logic is no more expressive than first-order logic.
|
2209.15373
|
Javier Huertas-Tato
|
Javier Huertas-Tato, Alvaro Huertas-Garcia, Alejandro Martin, David
Camacho
|
PART: Pre-trained Authorship Representation Transformer
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Authors writing documents imprint identifying information within their texts:
vocabulary, registry, punctuation, misspellings, or even emoji usage. Finding
these details is very relevant to profile authors, relating back to their
gender, occupation, age, and so on. But most importantly, repeating writing
patterns can help attributing authorship to a text. Previous works use
hand-crafted features or classification tasks to train their authorship models,
leading to poor performance on out-of-domain authors. A better approach to this
task is to learn stylometric representations, but this by itself is an open
research challenge. In this paper, we propose PART: a contrastively trained
model fit to learn \textbf{authorship embeddings} instead of semantics. By
comparing pairs of documents written by the same author, we are able to
determine the proprietary of a text by evaluating the cosine similarity of the
evaluated documents, a zero-shot generalization to authorship identification.
To this end, a pre-trained Transformer with an LSTM head is trained with the
contrastive training method. We train our model on a diverse set of authors,
from literature, anonymous blog posters and corporate emails; a heterogeneous
set with distinct and identifiable writing styles. The model is evaluated on
these datasets, achieving zero-shot 72.39\% and 86.73\% accuracy and top-5
accuracy respectively on the joint evaluation dataset when determining
authorship from a set of 250 different authors. We qualitatively assess the
representations with different data visualizations on the available datasets,
profiling features such as book types, gender, age, or occupation of the
author.
|
[
{
"created": "Fri, 30 Sep 2022 11:08:39 GMT",
"version": "v1"
}
] |
2022-10-03
|
[
[
"Huertas-Tato",
"Javier",
""
],
[
"Huertas-Garcia",
"Alvaro",
""
],
[
"Martin",
"Alejandro",
""
],
[
"Camacho",
"David",
""
]
] |
Authors writing documents imprint identifying information within their texts: vocabulary, registry, punctuation, misspellings, or even emoji usage. Finding these details is very relevant to profile authors, relating back to their gender, occupation, age, and so on. But most importantly, repeating writing patterns can help attributing authorship to a text. Previous works use hand-crafted features or classification tasks to train their authorship models, leading to poor performance on out-of-domain authors. A better approach to this task is to learn stylometric representations, but this by itself is an open research challenge. In this paper, we propose PART: a contrastively trained model fit to learn \textbf{authorship embeddings} instead of semantics. By comparing pairs of documents written by the same author, we are able to determine the proprietary of a text by evaluating the cosine similarity of the evaluated documents, a zero-shot generalization to authorship identification. To this end, a pre-trained Transformer with an LSTM head is trained with the contrastive training method. We train our model on a diverse set of authors, from literature, anonymous blog posters and corporate emails; a heterogeneous set with distinct and identifiable writing styles. The model is evaluated on these datasets, achieving zero-shot 72.39\% and 86.73\% accuracy and top-5 accuracy respectively on the joint evaluation dataset when determining authorship from a set of 250 different authors. We qualitatively assess the representations with different data visualizations on the available datasets, profiling features such as book types, gender, age, or occupation of the author.
|
2112.01154
|
Yao Zhang
|
Tom H. Luan, Yao Zhang, Lin Cai, Yilong Hui, Changle Li and Nan Cheng
|
Autonomous Vehicular Networks: Perspective and Open Issues
| null | null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The vehicular ad hoc networks (VANETs) have been researched for over twenty
years. Although being a fundamental communication approach for vehicles, the
conventional VANETs are challenged by the newly emerged autonomous vehicles
(AVs) which introduce new features and challenges on communications. In the
meantime, with the recent advances of artificial intelligence and 5G cellular
networks, how should the fundamental framework of VANET evolve to utilize the
new technologies? In this article, we reconsider the problem of
vehicle-to-vehicle communications when the network is composed of AVs. We
discuss the features and specific demands of AVs and how the conventional
VANETs should adapt to fit them.
|
[
{
"created": "Thu, 2 Dec 2021 12:00:58 GMT",
"version": "v1"
}
] |
2021-12-03
|
[
[
"Luan",
"Tom H.",
""
],
[
"Zhang",
"Yao",
""
],
[
"Cai",
"Lin",
""
],
[
"Hui",
"Yilong",
""
],
[
"Li",
"Changle",
""
],
[
"Cheng",
"Nan",
""
]
] |
The vehicular ad hoc networks (VANETs) have been researched for over twenty years. Although being a fundamental communication approach for vehicles, the conventional VANETs are challenged by the newly emerged autonomous vehicles (AVs) which introduce new features and challenges on communications. In the meantime, with the recent advances of artificial intelligence and 5G cellular networks, how should the fundamental framework of VANET evolve to utilize the new technologies? In this article, we reconsider the problem of vehicle-to-vehicle communications when the network is composed of AVs. We discuss the features and specific demands of AVs and how the conventional VANETs should adapt to fit them.
|
2407.00886
|
Aliyah Hsu
|
Aliyah R. Hsu, Yeshwanth Cherapanamjeri, Anobel Y. Odisho, Peter R.
Carroll, Bin Yu
|
Mechanistic Interpretation through Contextual Decomposition in
Transformers
| null | null | null | null |
cs.AI cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Transformers exhibit impressive capabilities but are often regarded as black
boxes due to challenges in understanding the complex nonlinear relationships
between features. Interpreting machine learning models is of paramount
importance to mitigate risks, and mechanistic interpretability is in particular
of current interest as it opens up a window for guiding manual modifications
and reverse-engineering solutions. In this work, we introduce contextual
decomposition for transformers (CD-T), extending a prior work on CD for RNNs
and CNNs, to address mechanistic interpretation computationally efficiently.
CD-T is a flexible interpretation method for transformers. It can capture
contributions of combinations of input features or source internal components
(e.g. attention heads, feed-forward networks) to (1) final predictions or (2)
the output of any target internal component. Using CD-T, we propose a novel
algorithm for circuit discovery. On a real-world pathology report
classification task: we show CD-T distills a more faithful circuit of attention
heads with improved computational efficiency (speed up 2x) than a prior
benchmark, path patching. As a versatile interpretation method, CD-T also
exhibits exceptional capabilities for local interpretations. CD-T is shown to
reliably find words and phrases of contrasting sentiment/topic on SST-2 and
AGNews datasets. Through human experiments, we demonstrate CD-T enables users
to identify the more accurate of two models and to better trust a model's
outputs compared to alternative interpretation methods such as SHAP and LIME.
|
[
{
"created": "Mon, 1 Jul 2024 01:12:20 GMT",
"version": "v1"
}
] |
2024-07-02
|
[
[
"Hsu",
"Aliyah R.",
""
],
[
"Cherapanamjeri",
"Yeshwanth",
""
],
[
"Odisho",
"Anobel Y.",
""
],
[
"Carroll",
"Peter R.",
""
],
[
"Yu",
"Bin",
""
]
] |
Transformers exhibit impressive capabilities but are often regarded as black boxes due to challenges in understanding the complex nonlinear relationships between features. Interpreting machine learning models is of paramount importance to mitigate risks, and mechanistic interpretability is in particular of current interest as it opens up a window for guiding manual modifications and reverse-engineering solutions. In this work, we introduce contextual decomposition for transformers (CD-T), extending a prior work on CD for RNNs and CNNs, to address mechanistic interpretation computationally efficiently. CD-T is a flexible interpretation method for transformers. It can capture contributions of combinations of input features or source internal components (e.g. attention heads, feed-forward networks) to (1) final predictions or (2) the output of any target internal component. Using CD-T, we propose a novel algorithm for circuit discovery. On a real-world pathology report classification task: we show CD-T distills a more faithful circuit of attention heads with improved computational efficiency (speed up 2x) than a prior benchmark, path patching. As a versatile interpretation method, CD-T also exhibits exceptional capabilities for local interpretations. CD-T is shown to reliably find words and phrases of contrasting sentiment/topic on SST-2 and AGNews datasets. Through human experiments, we demonstrate CD-T enables users to identify the more accurate of two models and to better trust a model's outputs compared to alternative interpretation methods such as SHAP and LIME.
|
2203.15925
|
Xubo Lyu
|
Xubo Lyu, Amin Banitalebi-Dehkordi, Mo Chen, Yong Zhang
|
Asynchronous, Option-Based Multi-Agent Policy Gradient: A Conditional
Reasoning Approach
|
Accepted by IEEE/RSJ International Conference on Intelligent Robots
and Systems (IROS), 2023
| null | null | null |
cs.RO cs.AI cs.LG cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cooperative multi-agent problems often require coordination between agents,
which can be achieved through a centralized policy that considers the global
state. Multi-agent policy gradient (MAPG) methods are commonly used to learn
such policies, but they are often limited to problems with low-level action
spaces. In complex problems with large state and action spaces, it is
advantageous to extend MAPG methods to use higher-level actions, also known as
options, to improve the policy search efficiency. However, multi-robot option
executions are often asynchronous, that is, agents may select and complete
their options at different time steps. This makes it difficult for MAPG methods
to derive a centralized policy and evaluate its gradient, as centralized policy
always select new options at the same time. In this work, we propose a novel,
conditional reasoning approach to address this problem and demonstrate its
effectiveness on representative option-based multi-agent cooperative tasks
through empirical validation. Find code and videos at:
\href{https://sites.google.com/view/mahrlsupp/}{https://sites.google.com/view/mahrlsupp/}
|
[
{
"created": "Tue, 29 Mar 2022 22:02:28 GMT",
"version": "v1"
},
{
"created": "Thu, 19 Jan 2023 19:49:47 GMT",
"version": "v2"
},
{
"created": "Wed, 2 Aug 2023 05:57:05 GMT",
"version": "v3"
}
] |
2023-08-03
|
[
[
"Lyu",
"Xubo",
""
],
[
"Banitalebi-Dehkordi",
"Amin",
""
],
[
"Chen",
"Mo",
""
],
[
"Zhang",
"Yong",
""
]
] |
Cooperative multi-agent problems often require coordination between agents, which can be achieved through a centralized policy that considers the global state. Multi-agent policy gradient (MAPG) methods are commonly used to learn such policies, but they are often limited to problems with low-level action spaces. In complex problems with large state and action spaces, it is advantageous to extend MAPG methods to use higher-level actions, also known as options, to improve the policy search efficiency. However, multi-robot option executions are often asynchronous, that is, agents may select and complete their options at different time steps. This makes it difficult for MAPG methods to derive a centralized policy and evaluate its gradient, as centralized policy always select new options at the same time. In this work, we propose a novel, conditional reasoning approach to address this problem and demonstrate its effectiveness on representative option-based multi-agent cooperative tasks through empirical validation. Find code and videos at: \href{https://sites.google.com/view/mahrlsupp/}{https://sites.google.com/view/mahrlsupp/}
|
2208.04222
|
Gabriele Tolomei
|
Ziheng Chen, Fabrizio Silvestri, Jia Wang, Yongfeng Zhang, Zhenhua
Huang, Hongshik Ahn, Gabriele Tolomei
|
GREASE: Generate Factual and Counterfactual Explanations for GNN-based
Recommendations
| null | null | null | null |
cs.IR cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, graph neural networks (GNNs) have been widely used to develop
successful recommender systems. Although powerful, it is very difficult for a
GNN-based recommender system to attach tangible explanations of why a specific
item ends up in the list of suggestions for a given user. Indeed, explaining
GNN-based recommendations is unique, and existing GNN explanation methods are
inappropriate for two reasons. First, traditional GNN explanation methods are
designed for node, edge, or graph classification tasks rather than ranking, as
in recommender systems. Second, standard machine learning explanations are
usually intended to support skilled decision-makers. Instead, recommendations
are designed for any end-user, and thus their explanations should be provided
in user-understandable ways. In this work, we propose GREASE, a novel method
for explaining the suggestions provided by any black-box GNN-based recommender
system. Specifically, GREASE first trains a surrogate model on a target
user-item pair and its $l$-hop neighborhood. Then, it generates both factual
and counterfactual explanations by finding optimal adjacency matrix
perturbations to capture the sufficient and necessary conditions for an item to
be recommended, respectively. Experimental results conducted on real-world
datasets demonstrate that GREASE can generate concise and effective
explanations for popular GNN-based recommender models.
|
[
{
"created": "Thu, 4 Aug 2022 09:13:00 GMT",
"version": "v1"
}
] |
2022-08-09
|
[
[
"Chen",
"Ziheng",
""
],
[
"Silvestri",
"Fabrizio",
""
],
[
"Wang",
"Jia",
""
],
[
"Zhang",
"Yongfeng",
""
],
[
"Huang",
"Zhenhua",
""
],
[
"Ahn",
"Hongshik",
""
],
[
"Tolomei",
"Gabriele",
""
]
] |
Recently, graph neural networks (GNNs) have been widely used to develop successful recommender systems. Although powerful, it is very difficult for a GNN-based recommender system to attach tangible explanations of why a specific item ends up in the list of suggestions for a given user. Indeed, explaining GNN-based recommendations is unique, and existing GNN explanation methods are inappropriate for two reasons. First, traditional GNN explanation methods are designed for node, edge, or graph classification tasks rather than ranking, as in recommender systems. Second, standard machine learning explanations are usually intended to support skilled decision-makers. Instead, recommendations are designed for any end-user, and thus their explanations should be provided in user-understandable ways. In this work, we propose GREASE, a novel method for explaining the suggestions provided by any black-box GNN-based recommender system. Specifically, GREASE first trains a surrogate model on a target user-item pair and its $l$-hop neighborhood. Then, it generates both factual and counterfactual explanations by finding optimal adjacency matrix perturbations to capture the sufficient and necessary conditions for an item to be recommended, respectively. Experimental results conducted on real-world datasets demonstrate that GREASE can generate concise and effective explanations for popular GNN-based recommender models.
|
2012.02420
|
Junyu Luo
|
Junyu Luo, Zifei Zheng, Hanzhong Ye, Muchao Ye, Yaqing Wang, Quanzeng
You, Cao Xiao and Fenglong Ma
|
Benchmarking Automated Clinical Language Simplification: Dataset,
Algorithm, and Evaluation
|
COLING 2022
|
2022.coling-1.313
| null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Patients with low health literacy usually have difficulty understanding
medical jargon and the complex structure of professional medical language.
Although some studies are proposed to automatically translate expert language
into layperson-understandable language, only a few of them focus on both
accuracy and readability aspects simultaneously in the clinical domain. Thus,
simplification of the clinical language is still a challenging task, but
unfortunately, it is not yet fully addressed in previous work. To benchmark
this task, we construct a new dataset named MedLane to support the development
and evaluation of automated clinical language simplification approaches.
Besides, we propose a new model called DECLARE that follows the human
annotation procedure and achieves state-of-the-art performance compared with
eight strong baselines. To fairly evaluate the performance, we also propose
three specific evaluation metrics. Experimental results demonstrate the utility
of the annotated MedLane dataset and the effectiveness of the proposed model
DECLARE.
|
[
{
"created": "Fri, 4 Dec 2020 06:09:02 GMT",
"version": "v1"
},
{
"created": "Thu, 21 Sep 2023 20:53:33 GMT",
"version": "v2"
}
] |
2024-02-09
|
[
[
"Luo",
"Junyu",
""
],
[
"Zheng",
"Zifei",
""
],
[
"Ye",
"Hanzhong",
""
],
[
"Ye",
"Muchao",
""
],
[
"Wang",
"Yaqing",
""
],
[
"You",
"Quanzeng",
""
],
[
"Xiao",
"Cao",
""
],
[
"Ma",
"Fenglong",
""
]
] |
Patients with low health literacy usually have difficulty understanding medical jargon and the complex structure of professional medical language. Although some studies are proposed to automatically translate expert language into layperson-understandable language, only a few of them focus on both accuracy and readability aspects simultaneously in the clinical domain. Thus, simplification of the clinical language is still a challenging task, but unfortunately, it is not yet fully addressed in previous work. To benchmark this task, we construct a new dataset named MedLane to support the development and evaluation of automated clinical language simplification approaches. Besides, we propose a new model called DECLARE that follows the human annotation procedure and achieves state-of-the-art performance compared with eight strong baselines. To fairly evaluate the performance, we also propose three specific evaluation metrics. Experimental results demonstrate the utility of the annotated MedLane dataset and the effectiveness of the proposed model DECLARE.
|
2210.09817
|
Edouard Pineau
|
Edouard Pineau, S\'ebastien Razakarivony, Mauricio Gonzalez and
Anthony Schrapffer
|
Universal hidden monotonic trend estimation with contrastive learning
| null | null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we describe a universal method for extracting the underlying
monotonic trend factor from time series data. We propose an approach related to
the Mann-Kendall test, a standard monotonic trend detection method and call it
contrastive trend estimation (CTE). We show that the CTE method identifies any
hidden trend underlying temporal data while avoiding the standard assumptions
used for monotonic trend identification. In particular, CTE can take any type
of temporal data (vector, images, graphs, time series, etc.) as input. We
finally illustrate the interest of our CTE method through several experiments
on different types of data and problems.
|
[
{
"created": "Tue, 18 Oct 2022 12:54:08 GMT",
"version": "v1"
},
{
"created": "Sun, 23 Apr 2023 07:46:32 GMT",
"version": "v2"
}
] |
2023-04-25
|
[
[
"Pineau",
"Edouard",
""
],
[
"Razakarivony",
"Sébastien",
""
],
[
"Gonzalez",
"Mauricio",
""
],
[
"Schrapffer",
"Anthony",
""
]
] |
In this paper, we describe a universal method for extracting the underlying monotonic trend factor from time series data. We propose an approach related to the Mann-Kendall test, a standard monotonic trend detection method and call it contrastive trend estimation (CTE). We show that the CTE method identifies any hidden trend underlying temporal data while avoiding the standard assumptions used for monotonic trend identification. In particular, CTE can take any type of temporal data (vector, images, graphs, time series, etc.) as input. We finally illustrate the interest of our CTE method through several experiments on different types of data and problems.
|
1309.5854
|
Hossein Hosseini
|
Seyed Hossein Hosseini, Mahrokh G. Shayesteh, Mehdi Chehel Amirani
|
Demodulation of Sparse PPM Signals with Low Samples Using Trained RIP
Matrix
|
4 pages, 6 figures, conference paper
| null | null | null |
cs.OH cs.IT cs.LG math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Compressed sensing (CS) theory considers the restricted isometry property
(RIP) as a sufficient condition for measurement matrix which guarantees the
recovery of any sparse signal from its compressed measurements. The RIP
condition also preserves enough information for classification of sparse
symbols, even with fewer measurements. In this work, we utilize RIP bound as
the cost function for training a simple neural network in order to exploit the
near optimal measurements or equivalently near optimal features for
classification of a known set of sparse symbols. As an example, we consider
demodulation of pulse position modulation (PPM) signals. The results indicate
that the proposed method has much better performance than the random
measurements and requires less samples than the optimum matched filter
demodulator, at the expense of some performance loss. Further, the proposed
approach does not need equalizer for multipath channels in contrast to the
conventional receiver.
|
[
{
"created": "Sun, 1 Sep 2013 22:14:52 GMT",
"version": "v1"
}
] |
2013-09-24
|
[
[
"Hosseini",
"Seyed Hossein",
""
],
[
"Shayesteh",
"Mahrokh G.",
""
],
[
"Amirani",
"Mehdi Chehel",
""
]
] |
Compressed sensing (CS) theory considers the restricted isometry property (RIP) as a sufficient condition for measurement matrix which guarantees the recovery of any sparse signal from its compressed measurements. The RIP condition also preserves enough information for classification of sparse symbols, even with fewer measurements. In this work, we utilize RIP bound as the cost function for training a simple neural network in order to exploit the near optimal measurements or equivalently near optimal features for classification of a known set of sparse symbols. As an example, we consider demodulation of pulse position modulation (PPM) signals. The results indicate that the proposed method has much better performance than the random measurements and requires less samples than the optimum matched filter demodulator, at the expense of some performance loss. Further, the proposed approach does not need equalizer for multipath channels in contrast to the conventional receiver.
|
2007.14943
|
Andrea De Maio
|
Andrea De Maio and Simon Lacroix
|
Simultaneously Learning Corrections and Error Models for Geometry-based
Visual Odometry Methods
|
Accepted in IEEE Robotics and Automation Letters and IEEE/RSJ
International Conference on Intelligent Robots and Systems (IROS), 2020
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper fosters the idea that deep learning methods can be used to
complement classical visual odometry pipelines to improve their accuracy and to
associate uncertainty models to their estimations. We show that the biases
inherent to the visual odometry process can be faithfully learned and
compensated for, and that a learning architecture associated with a
probabilistic loss function can jointly estimate a full covariance matrix of
the residual errors, defining an error model capturing the heteroscedasticity
of the process. Experiments on autonomous driving image sequences assess the
possibility to concurrently improve visual odometry and estimate an error
associated with its outputs.
|
[
{
"created": "Wed, 29 Jul 2020 16:35:40 GMT",
"version": "v1"
}
] |
2020-07-30
|
[
[
"De Maio",
"Andrea",
""
],
[
"Lacroix",
"Simon",
""
]
] |
This paper fosters the idea that deep learning methods can be used to complement classical visual odometry pipelines to improve their accuracy and to associate uncertainty models to their estimations. We show that the biases inherent to the visual odometry process can be faithfully learned and compensated for, and that a learning architecture associated with a probabilistic loss function can jointly estimate a full covariance matrix of the residual errors, defining an error model capturing the heteroscedasticity of the process. Experiments on autonomous driving image sequences assess the possibility to concurrently improve visual odometry and estimate an error associated with its outputs.
|
1404.3959
|
Marco Guerini
|
Marco Guerini, Fabio Pianesi, Oliviero Stock
|
Is it morally acceptable for a system to lie to persuade me?
| null | null | null | null |
cs.CY cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Given the fast rise of increasingly autonomous artificial agents and robots,
a key acceptability criterion will be the possible moral implications of their
actions. In particular, intelligent persuasive systems (systems designed to
influence humans via communication) constitute a highly sensitive topic because
of their intrinsically social nature. Still, ethical studies in this area are
rare and tend to focus on the output of the required action. Instead, this work
focuses on the persuasive acts themselves (e.g. "is it morally acceptable that
a machine lies or appeals to the emotions of a person to persuade her, even if
for a good end?"). Exploiting a behavioral approach, based on human assessment
of moral dilemmas -- i.e. without any prior assumption of underlying ethical
theories -- this paper reports on a set of experiments. These experiments
address the type of persuader (human or machine), the strategies adopted
(purely argumentative, appeal to positive emotions, appeal to negative
emotions, lie) and the circumstances. Findings display no differences due to
the agent, mild acceptability for persuasion and reveal that truth-conditional
reasoning (i.e. argument validity) is a significant dimension affecting
subjects' judgment. Some implications for the design of intelligent persuasive
systems are discussed.
|
[
{
"created": "Tue, 15 Apr 2014 15:41:34 GMT",
"version": "v1"
}
] |
2014-04-16
|
[
[
"Guerini",
"Marco",
""
],
[
"Pianesi",
"Fabio",
""
],
[
"Stock",
"Oliviero",
""
]
] |
Given the fast rise of increasingly autonomous artificial agents and robots, a key acceptability criterion will be the possible moral implications of their actions. In particular, intelligent persuasive systems (systems designed to influence humans via communication) constitute a highly sensitive topic because of their intrinsically social nature. Still, ethical studies in this area are rare and tend to focus on the output of the required action. Instead, this work focuses on the persuasive acts themselves (e.g. "is it morally acceptable that a machine lies or appeals to the emotions of a person to persuade her, even if for a good end?"). Exploiting a behavioral approach, based on human assessment of moral dilemmas -- i.e. without any prior assumption of underlying ethical theories -- this paper reports on a set of experiments. These experiments address the type of persuader (human or machine), the strategies adopted (purely argumentative, appeal to positive emotions, appeal to negative emotions, lie) and the circumstances. Findings display no differences due to the agent, mild acceptability for persuasion and reveal that truth-conditional reasoning (i.e. argument validity) is a significant dimension affecting subjects' judgment. Some implications for the design of intelligent persuasive systems are discussed.
|
2108.04042
|
Katie Liszewski
|
Katie Liszewski, Timothy McDonley
|
Understanding Tool Synthesis Behavior and Safe Finite State Machine
Design
|
6 pages, 11 figures
| null | null | null |
cs.AR
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
High-reliability design requires understanding synthesis tool behavior and
best practices. Detection and protection against illegal states and transitions
is important for critical Finite State Machines (FSMs) within high reliability
applications. Single Event Upsets (SEUs) probability is increasing with
decreasing circuit dimensions and voltage [1]. SEU handling must be analyzed
post optimization to ensure designed protections are still functional. In this
work the default behavior of three synthesis tools interacting with high
reliability FSMs is discussed. Post-synthesis netlists of test FSMs are
analyzed for optimization induced changes that affect reliability during a SEU.
Best practices are proposed to curtail aggressive optimizers.
|
[
{
"created": "Mon, 9 Aug 2021 13:52:26 GMT",
"version": "v1"
}
] |
2021-08-10
|
[
[
"Liszewski",
"Katie",
""
],
[
"McDonley",
"Timothy",
""
]
] |
High-reliability design requires understanding synthesis tool behavior and best practices. Detection and protection against illegal states and transitions is important for critical Finite State Machines (FSMs) within high reliability applications. Single Event Upsets (SEUs) probability is increasing with decreasing circuit dimensions and voltage [1]. SEU handling must be analyzed post optimization to ensure designed protections are still functional. In this work the default behavior of three synthesis tools interacting with high reliability FSMs is discussed. Post-synthesis netlists of test FSMs are analyzed for optimization induced changes that affect reliability during a SEU. Best practices are proposed to curtail aggressive optimizers.
|
2104.14618
|
Jack West
|
Jack West, Kyuin Lee, Suman Banerjee, Younghyun Kim, George K.
Thiruvathukal, Neil Klingensmith
|
Moonshine: An Online Randomness Distiller for Zero-Involvement
Authentication
|
16 pages, 5 figures, IPSN 2021
| null | null | null |
cs.CR
|
http://creativecommons.org/licenses/by/4.0/
|
Context-based authentication is a method for transparently validating another
device's legitimacy to join a network based on location. Devices can pair with
one another by continuously harvesting environmental noise to generate a random
key with no user involvement. However, there are gaps in our understanding of
the theoretical limitations of environmental noise harvesting, making it
difficult for researchers to build efficient algorithms for sampling
environmental noise and distilling keys from that noise. This work explores the
information-theoretic capacity of context-based authentication mechanisms to
generate random bit strings from environmental noise sources with known
properties. Using only mild assumptions about the source process's
characteristics, we demonstrate that commonly-used bit extraction algorithms
extract only about 10% of the available randomness from a source noise process.
We present an efficient algorithm to improve the quality of keys generated by
context-based methods and evaluate it on real key extraction hardware.
Moonshine is a randomness distiller which is more efficient at extracting bits
from an environmental entropy source than existing methods. Our techniques
nearly double the quality of keys as measured by the NIST test suite, producing
keys that can be used in real-world authentication scenarios.
|
[
{
"created": "Thu, 29 Apr 2021 19:10:26 GMT",
"version": "v1"
}
] |
2021-05-03
|
[
[
"West",
"Jack",
""
],
[
"Lee",
"Kyuin",
""
],
[
"Banerjee",
"Suman",
""
],
[
"Kim",
"Younghyun",
""
],
[
"Thiruvathukal",
"George K.",
""
],
[
"Klingensmith",
"Neil",
""
]
] |
Context-based authentication is a method for transparently validating another device's legitimacy to join a network based on location. Devices can pair with one another by continuously harvesting environmental noise to generate a random key with no user involvement. However, there are gaps in our understanding of the theoretical limitations of environmental noise harvesting, making it difficult for researchers to build efficient algorithms for sampling environmental noise and distilling keys from that noise. This work explores the information-theoretic capacity of context-based authentication mechanisms to generate random bit strings from environmental noise sources with known properties. Using only mild assumptions about the source process's characteristics, we demonstrate that commonly-used bit extraction algorithms extract only about 10% of the available randomness from a source noise process. We present an efficient algorithm to improve the quality of keys generated by context-based methods and evaluate it on real key extraction hardware. Moonshine is a randomness distiller which is more efficient at extracting bits from an environmental entropy source than existing methods. Our techniques nearly double the quality of keys as measured by the NIST test suite, producing keys that can be used in real-world authentication scenarios.
|
1611.00918
|
Kasper Green Larsen
|
Karl Bringmann, Allan Gr{\o}nlund, Kasper Green Larsen
|
A Dichotomy for Regular Expression Membership Testing
| null | null | null | null |
cs.DS cs.CC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study regular expression membership testing: Given a regular expression of
size $m$ and a string of size $n$, decide whether the string is in the language
described by the regular expression. Its classic $O(nm)$ algorithm is one of
the big success stories of the 70s, which allowed pattern matching to develop
into the standard tool that it is today.
Many special cases of pattern matching have been studied that can be solved
faster than in quadratic time. However, a systematic study of tractable cases
was made possible only recently, with the first conditional lower bounds
reported by Backurs and Indyk [FOCS'16]. Restricted to any "type" of
homogeneous regular expressions of depth 2 or 3, they either presented a
near-linear time algorithm or a quadratic conditional lower bound, with one
exception known as the Word Break problem.
In this paper we complete their work as follows:
1) We present two almost-linear time algorithms that generalize all known
almost-linear time algorithms for special cases of regular expression
membership testing.
2) We classify all types, except for the Word Break problem, into
almost-linear time or quadratic time assuming the Strong Exponential Time
Hypothesis. This extends the classification from depth 2 and 3 to any constant
depth.
3) For the Word Break problem we give an improved $\tilde{O}(n m^{1/3} + m)$
algorithm. Surprisingly, we also prove a matching conditional lower bound for
combinatorial algorithms. This establishes Word Break as the only intermediate
problem.
In total, we prove matching upper and lower bounds for any type of
bounded-depth homogeneous regular expressions, which yields a full dichotomy
for regular expression membership testing.
|
[
{
"created": "Thu, 3 Nov 2016 08:55:07 GMT",
"version": "v1"
},
{
"created": "Mon, 7 Nov 2016 06:44:04 GMT",
"version": "v2"
}
] |
2016-11-08
|
[
[
"Bringmann",
"Karl",
""
],
[
"Grønlund",
"Allan",
""
],
[
"Larsen",
"Kasper Green",
""
]
] |
We study regular expression membership testing: Given a regular expression of size $m$ and a string of size $n$, decide whether the string is in the language described by the regular expression. Its classic $O(nm)$ algorithm is one of the big success stories of the 70s, which allowed pattern matching to develop into the standard tool that it is today. Many special cases of pattern matching have been studied that can be solved faster than in quadratic time. However, a systematic study of tractable cases was made possible only recently, with the first conditional lower bounds reported by Backurs and Indyk [FOCS'16]. Restricted to any "type" of homogeneous regular expressions of depth 2 or 3, they either presented a near-linear time algorithm or a quadratic conditional lower bound, with one exception known as the Word Break problem. In this paper we complete their work as follows: 1) We present two almost-linear time algorithms that generalize all known almost-linear time algorithms for special cases of regular expression membership testing. 2) We classify all types, except for the Word Break problem, into almost-linear time or quadratic time assuming the Strong Exponential Time Hypothesis. This extends the classification from depth 2 and 3 to any constant depth. 3) For the Word Break problem we give an improved $\tilde{O}(n m^{1/3} + m)$ algorithm. Surprisingly, we also prove a matching conditional lower bound for combinatorial algorithms. This establishes Word Break as the only intermediate problem. In total, we prove matching upper and lower bounds for any type of bounded-depth homogeneous regular expressions, which yields a full dichotomy for regular expression membership testing.
|
2302.11019
|
Ramneet Kaur
|
Ramneet Kaur, Xiayan Ji, Souradeep Dutta, Michele Caprio, Yahan Yang,
Elena Bernardis, Oleg Sokolsky, Insup Lee
|
Using Semantic Information for Defining and Detecting OOD Inputs
| null | null | null | null |
cs.LG cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
As machine learning models continue to achieve impressive performance across
different tasks, the importance of effective anomaly detection for such models
has increased as well. It is common knowledge that even well-trained models
lose their ability to function effectively on out-of-distribution inputs. Thus,
out-of-distribution (OOD) detection has received some attention recently. In
the vast majority of cases, it uses the distribution estimated by the training
dataset for OOD detection. We demonstrate that the current detectors inherit
the biases in the training dataset, unfortunately. This is a serious
impediment, and can potentially restrict the utility of the trained model. This
can render the current OOD detectors impermeable to inputs lying outside the
training distribution but with the same semantic information (e.g. training
class labels). To remedy this situation, we begin by defining what should
ideally be treated as an OOD, by connecting inputs with their semantic
information content. We perform OOD detection on semantic information extracted
from the training data of MNIST and COCO datasets and show that it not only
reduces false alarms but also significantly improves the detection of OOD
inputs with spurious features from the training data.
|
[
{
"created": "Tue, 21 Feb 2023 21:31:20 GMT",
"version": "v1"
}
] |
2023-02-23
|
[
[
"Kaur",
"Ramneet",
""
],
[
"Ji",
"Xiayan",
""
],
[
"Dutta",
"Souradeep",
""
],
[
"Caprio",
"Michele",
""
],
[
"Yang",
"Yahan",
""
],
[
"Bernardis",
"Elena",
""
],
[
"Sokolsky",
"Oleg",
""
],
[
"Lee",
"Insup",
""
]
] |
As machine learning models continue to achieve impressive performance across different tasks, the importance of effective anomaly detection for such models has increased as well. It is common knowledge that even well-trained models lose their ability to function effectively on out-of-distribution inputs. Thus, out-of-distribution (OOD) detection has received some attention recently. In the vast majority of cases, it uses the distribution estimated by the training dataset for OOD detection. We demonstrate that the current detectors inherit the biases in the training dataset, unfortunately. This is a serious impediment, and can potentially restrict the utility of the trained model. This can render the current OOD detectors impermeable to inputs lying outside the training distribution but with the same semantic information (e.g. training class labels). To remedy this situation, we begin by defining what should ideally be treated as an OOD, by connecting inputs with their semantic information content. We perform OOD detection on semantic information extracted from the training data of MNIST and COCO datasets and show that it not only reduces false alarms but also significantly improves the detection of OOD inputs with spurious features from the training data.
|
1901.00893
|
Horia Porav
|
Horia Porav, Tom Bruls and Paul Newman
|
I Can See Clearly Now : Image Restoration via De-Raining
|
Submitted to ICRA2019
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a method for improving segmentation tasks on images affected by
adherent rain drops and streaks. We introduce a novel stereo dataset recorded
using a system that allows one lens to be affected by real water droplets while
keeping the other lens clear. We train a denoising generator using this dataset
and show that it is effective at removing the effect of real water droplets, in
the context of image reconstruction and road marking segmentation. To further
test our de-noising approach, we describe a method of adding computer-generated
adherent water droplets and streaks to any images, and use this technique as a
proxy to demonstrate the effectiveness of our model in the context of general
semantic segmentation. We benchmark our results using the CamVid road marking
segmentation dataset, Cityscapes semantic segmentation datasets and our own
real-rain dataset, and show significant improvement on all tasks.
|
[
{
"created": "Thu, 3 Jan 2019 19:45:39 GMT",
"version": "v1"
}
] |
2019-01-07
|
[
[
"Porav",
"Horia",
""
],
[
"Bruls",
"Tom",
""
],
[
"Newman",
"Paul",
""
]
] |
We present a method for improving segmentation tasks on images affected by adherent rain drops and streaks. We introduce a novel stereo dataset recorded using a system that allows one lens to be affected by real water droplets while keeping the other lens clear. We train a denoising generator using this dataset and show that it is effective at removing the effect of real water droplets, in the context of image reconstruction and road marking segmentation. To further test our de-noising approach, we describe a method of adding computer-generated adherent water droplets and streaks to any images, and use this technique as a proxy to demonstrate the effectiveness of our model in the context of general semantic segmentation. We benchmark our results using the CamVid road marking segmentation dataset, Cityscapes semantic segmentation datasets and our own real-rain dataset, and show significant improvement on all tasks.
|
2108.03509
|
Ruixiang Cui
|
Ruixiang Cui, Rahul Aralikatte, Heather Lent, Daniel Hershcovich
|
Compositional Generalization in Multilingual Semantic Parsing over
Wikidata
|
Accepted to TACL; Authors' final version, pre-MIT Press publication;
Previous title: Multilingual Compositional Wikidata Questions
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Semantic parsing (SP) allows humans to leverage vast knowledge resources
through natural interaction. However, parsers are mostly designed for and
evaluated on English resources, such as CFQ (Keysers et al., 2020), the current
standard benchmark based on English data generated from grammar rules and
oriented towards Freebase, an outdated knowledge base. We propose a method for
creating a multilingual, parallel dataset of question-query pairs, grounded in
Wikidata. We introduce such a dataset, which we call Multilingual Compositional
Wikidata Questions (MCWQ), and use it to analyze the compositional
generalization of semantic parsers in Hebrew, Kannada, Chinese and English.
While within-language generalization is comparable across languages,
experiments on zero-shot cross-lingual transfer demonstrate that cross-lingual
compositional generalization fails, even with state-of-the-art pretrained
multilingual encoders. Furthermore, our methodology, dataset and results will
facilitate future research on SP in more realistic and diverse settings than
has been possible with existing resources.
|
[
{
"created": "Sat, 7 Aug 2021 19:40:38 GMT",
"version": "v1"
},
{
"created": "Tue, 31 May 2022 16:58:51 GMT",
"version": "v2"
}
] |
2022-06-01
|
[
[
"Cui",
"Ruixiang",
""
],
[
"Aralikatte",
"Rahul",
""
],
[
"Lent",
"Heather",
""
],
[
"Hershcovich",
"Daniel",
""
]
] |
Semantic parsing (SP) allows humans to leverage vast knowledge resources through natural interaction. However, parsers are mostly designed for and evaluated on English resources, such as CFQ (Keysers et al., 2020), the current standard benchmark based on English data generated from grammar rules and oriented towards Freebase, an outdated knowledge base. We propose a method for creating a multilingual, parallel dataset of question-query pairs, grounded in Wikidata. We introduce such a dataset, which we call Multilingual Compositional Wikidata Questions (MCWQ), and use it to analyze the compositional generalization of semantic parsers in Hebrew, Kannada, Chinese and English. While within-language generalization is comparable across languages, experiments on zero-shot cross-lingual transfer demonstrate that cross-lingual compositional generalization fails, even with state-of-the-art pretrained multilingual encoders. Furthermore, our methodology, dataset and results will facilitate future research on SP in more realistic and diverse settings than has been possible with existing resources.
|
2309.10485
|
Yuhong Yang
|
Hongyang Chen, Yuhong Yang, Zhongyuan Wang, Weiping Tu, Haojun Ai,
Song Lin
|
Exploring Sentence Type Effects on the Lombard Effect and
Intelligibility Enhancement: A Comparative Study of Natural and Grid
Sentences
| null | null | null | null |
cs.SD cs.LG eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This study explores how sentence types affect the Lombard effect and
intelligibility enhancement, focusing on comparisons between natural and grid
sentences. Using the Lombard Chinese-TIMIT (LCT) corpus and the Enhanced
MAndarin Lombard Grid (EMALG) corpus, we analyze changes in phonetic and
acoustic features across different noise levels. Our results show that grid
sentences produce more pronounced Lombard effects than natural sentences. Then,
we develop and test a normal-to-Lombard conversion model, trained separately on
LCT and EMALG corpora. Through subjective and objective evaluations, natural
sentences are superior in maintaining speech quality in intelligibility
enhancement. In contrast, grid sentences could provide superior intelligibility
due to the more pronounced Lombard effect. This study provides a valuable
perspective on enhancing speech communication in noisy environments.
|
[
{
"created": "Tue, 19 Sep 2023 09:54:36 GMT",
"version": "v1"
},
{
"created": "Tue, 9 Jul 2024 03:32:54 GMT",
"version": "v2"
}
] |
2024-07-10
|
[
[
"Chen",
"Hongyang",
""
],
[
"Yang",
"Yuhong",
""
],
[
"Wang",
"Zhongyuan",
""
],
[
"Tu",
"Weiping",
""
],
[
"Ai",
"Haojun",
""
],
[
"Lin",
"Song",
""
]
] |
This study explores how sentence types affect the Lombard effect and intelligibility enhancement, focusing on comparisons between natural and grid sentences. Using the Lombard Chinese-TIMIT (LCT) corpus and the Enhanced MAndarin Lombard Grid (EMALG) corpus, we analyze changes in phonetic and acoustic features across different noise levels. Our results show that grid sentences produce more pronounced Lombard effects than natural sentences. Then, we develop and test a normal-to-Lombard conversion model, trained separately on LCT and EMALG corpora. Through subjective and objective evaluations, natural sentences are superior in maintaining speech quality in intelligibility enhancement. In contrast, grid sentences could provide superior intelligibility due to the more pronounced Lombard effect. This study provides a valuable perspective on enhancing speech communication in noisy environments.
|
1808.09665
|
Loet Leydesdorff
|
Loet Leydesdorff and Tobias Opthof
|
Revisiting Relative Indicators and Provisional Truths
|
ISSI Newsletter #55, September 2018
| null | null | null |
cs.DL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Following discussions in 2010 and 2011, scientometric evaluators have
increasingly abandoned relative indicators in favor of comparing observed with
expected citation ratios. The latter method provides parameters with error
values allowing for the statistical testing of differences in citation scores.
A further step would be to proceed to non-parametric statistics (e.g., the
top-10%) given the extreme skewness (non-normality) of the citation
distributions. In response to a plea for returning to relative indicators in
the previous issue of this newsletter, we argue in favor of further progress in
the development of citation impact indicators.
|
[
{
"created": "Wed, 29 Aug 2018 07:45:32 GMT",
"version": "v1"
}
] |
2018-08-30
|
[
[
"Leydesdorff",
"Loet",
""
],
[
"Opthof",
"Tobias",
""
]
] |
Following discussions in 2010 and 2011, scientometric evaluators have increasingly abandoned relative indicators in favor of comparing observed with expected citation ratios. The latter method provides parameters with error values allowing for the statistical testing of differences in citation scores. A further step would be to proceed to non-parametric statistics (e.g., the top-10%) given the extreme skewness (non-normality) of the citation distributions. In response to a plea for returning to relative indicators in the previous issue of this newsletter, we argue in favor of further progress in the development of citation impact indicators.
|
1710.09918
|
Muhamed Turkanovi\'c
|
Muhamed Turkanovi\'c, Marko H\"olbl, Kristjan Ko\v{s}i\v{c}, Marjan
Heri\v{c}ko, Aida Kami\v{s}ali\'c
|
EduCTX: A blockchain-based higher education credit platform
|
20 pages, 6 figures
| null |
10.1109/ACCESS.2018.2789929
| null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Blockchain technology enables the creation of a decentralized environment
where transactions and data are not under the control of any third party
organization. Any transaction ever completed is recorded in a public ledger in
a verifiable and permanent way. Based on blockchain technology, we propose a
global higher education credit platform, named EduCTX. This platform is based
on the concept of the European Credit Transfer and Accumulation System (ECTS).
It constitutes a globally trusted, decentralized higher education credit and
grading system that can offer a globally unified viewpoint for students and
higher education institutions (HEIs), as well as for other potential
stakeholders such as companies, institutions, and organizations. As a proof of
concept, we present a prototype implementation of the environment, based on the
open-source Ark Blockchain Platform. Based on a globally distributed
peer-to-peer network, EduCTX will process, manage and control ECTX tokens,
which represent credits that students gain for completed courses such as ECTS.
HEIs are the peers of the blockchain network. The platform is a first step
towards a more transparent and technologically advanced form of higher
education systems. The EduCTX platform represents the basis of the EduCTX
initiative which anticipates that various HEIs would join forces in order to
create a globally efficient, simplified and ubiquitous environment in order to
avoid language and administrative barriers. Therefore we invite and encourage
HEIs to join the EduCTX initiative and the EduCTX blockchain network.
|
[
{
"created": "Thu, 26 Oct 2017 21:28:13 GMT",
"version": "v1"
}
] |
2018-01-30
|
[
[
"Turkanović",
"Muhamed",
""
],
[
"Hölbl",
"Marko",
""
],
[
"Košič",
"Kristjan",
""
],
[
"Heričko",
"Marjan",
""
],
[
"Kamišalić",
"Aida",
""
]
] |
Blockchain technology enables the creation of a decentralized environment where transactions and data are not under the control of any third party organization. Any transaction ever completed is recorded in a public ledger in a verifiable and permanent way. Based on blockchain technology, we propose a global higher education credit platform, named EduCTX. This platform is based on the concept of the European Credit Transfer and Accumulation System (ECTS). It constitutes a globally trusted, decentralized higher education credit and grading system that can offer a globally unified viewpoint for students and higher education institutions (HEIs), as well as for other potential stakeholders such as companies, institutions, and organizations. As a proof of concept, we present a prototype implementation of the environment, based on the open-source Ark Blockchain Platform. Based on a globally distributed peer-to-peer network, EduCTX will process, manage and control ECTX tokens, which represent credits that students gain for completed courses such as ECTS. HEIs are the peers of the blockchain network. The platform is a first step towards a more transparent and technologically advanced form of higher education systems. The EduCTX platform represents the basis of the EduCTX initiative which anticipates that various HEIs would join forces in order to create a globally efficient, simplified and ubiquitous environment in order to avoid language and administrative barriers. Therefore we invite and encourage HEIs to join the EduCTX initiative and the EduCTX blockchain network.
|
2004.09959
|
Kerstin H\"otte
|
Kerstin H\"otte, Anton Pichler, Fran\c{c}ois Lafond
|
The rise of science in low-carbon energy technologies
| null | null |
10.1016/j.rser.2020.110654
| null |
cs.DL econ.GN q-fin.EC
|
http://creativecommons.org/licenses/by/4.0/
|
Successfully combating climate change will require substantial technological
improvements in Low-Carbon Energy Technologies (LCETs), but designing efficient
allocation of R\&D budgets requires a better understanding of how LCETs rely on
scientific knowledge. Using data covering almost all US patents and scientific
articles that are cited by them over the past two centuries, we describe the
evolution of knowledge bases of ten key LCETs and show how technological
interdependencies have changed over time. The composition of low-carbon energy
innovations shifted over time, from Hydro and Wind energy in the 19th and early
20th century, to Nuclear fission after World War II, and more recently to Solar
PV and back to Wind. In recent years, Solar PV, Nuclear fusion and Biofuels
(including energy from waste) have 35-65\% of their citations directed toward
scientific papers, while this ratio is less than 10\% for Wind, Solar thermal,
Hydro, Geothermal, and Nuclear fission. Over time, the share of patents citing
science and the share of citations that are to scientific papers has been
increasing for all technology types. The analysis of the scientific knowledge
base of each LCET reveals three fairly separate clusters, with nuclear energy
technologies, Biofuels and Waste, and all the other LCETs. Our detailed
description of knowledge requirements for each LCET helps to design of targeted
innovation policies.
|
[
{
"created": "Tue, 21 Apr 2020 12:47:04 GMT",
"version": "v1"
},
{
"created": "Fri, 4 Sep 2020 11:35:48 GMT",
"version": "v2"
}
] |
2021-04-13
|
[
[
"Hötte",
"Kerstin",
""
],
[
"Pichler",
"Anton",
""
],
[
"Lafond",
"François",
""
]
] |
Successfully combating climate change will require substantial technological improvements in Low-Carbon Energy Technologies (LCETs), but designing efficient allocation of R\&D budgets requires a better understanding of how LCETs rely on scientific knowledge. Using data covering almost all US patents and scientific articles that are cited by them over the past two centuries, we describe the evolution of knowledge bases of ten key LCETs and show how technological interdependencies have changed over time. The composition of low-carbon energy innovations shifted over time, from Hydro and Wind energy in the 19th and early 20th century, to Nuclear fission after World War II, and more recently to Solar PV and back to Wind. In recent years, Solar PV, Nuclear fusion and Biofuels (including energy from waste) have 35-65\% of their citations directed toward scientific papers, while this ratio is less than 10\% for Wind, Solar thermal, Hydro, Geothermal, and Nuclear fission. Over time, the share of patents citing science and the share of citations that are to scientific papers has been increasing for all technology types. The analysis of the scientific knowledge base of each LCET reveals three fairly separate clusters, with nuclear energy technologies, Biofuels and Waste, and all the other LCETs. Our detailed description of knowledge requirements for each LCET helps to design of targeted innovation policies.
|
2310.09461
|
Dong Huang
|
Yutian Lei, Jun Liu, Dong Huang
|
MAC: ModAlity Calibration for Object Detection
| null | null | null | null |
cs.CV cs.MM cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
The flourishing success of Deep Neural Networks(DNNs) on RGB-input perception
tasks has opened unbounded possibilities for non-RGB-input perception tasks,
such as object detection from wireless signals, lidar scans, and infrared
images. Compared to the matured development pipeline of RGB-input (source
modality) models, developing non-RGB-input (target-modality) models from
scratch poses excessive challenges in the modality-specific network
design/training tricks and labor in the target-modality annotation. In this
paper, we propose ModAlity Calibration (MAC), an efficient pipeline for
calibrating target-modality inputs to the DNN object detection models developed
on the RGB (source) modality. We compose a target-modality-input model by
adding a small calibrator module ahead of a source-modality model and introduce
MAC training techniques to impose dense supervision on the calibrator. By
leveraging (1) prior knowledge synthesized from the source-modality model and
(2) paired {target, source} data with zero manual annotations, our
target-modality models reach comparable or better metrics than baseline models
that require 100% manual annotations. We demonstrate the effectiveness of MAC
by composing the WiFi-input, Lidar-input, and Thermal-Infrared-input models
upon the pre-trained RGB-input models respectively.
|
[
{
"created": "Sat, 14 Oct 2023 00:58:32 GMT",
"version": "v1"
}
] |
2023-10-17
|
[
[
"Lei",
"Yutian",
""
],
[
"Liu",
"Jun",
""
],
[
"Huang",
"Dong",
""
]
] |
The flourishing success of Deep Neural Networks(DNNs) on RGB-input perception tasks has opened unbounded possibilities for non-RGB-input perception tasks, such as object detection from wireless signals, lidar scans, and infrared images. Compared to the matured development pipeline of RGB-input (source modality) models, developing non-RGB-input (target-modality) models from scratch poses excessive challenges in the modality-specific network design/training tricks and labor in the target-modality annotation. In this paper, we propose ModAlity Calibration (MAC), an efficient pipeline for calibrating target-modality inputs to the DNN object detection models developed on the RGB (source) modality. We compose a target-modality-input model by adding a small calibrator module ahead of a source-modality model and introduce MAC training techniques to impose dense supervision on the calibrator. By leveraging (1) prior knowledge synthesized from the source-modality model and (2) paired {target, source} data with zero manual annotations, our target-modality models reach comparable or better metrics than baseline models that require 100% manual annotations. We demonstrate the effectiveness of MAC by composing the WiFi-input, Lidar-input, and Thermal-Infrared-input models upon the pre-trained RGB-input models respectively.
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.