| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2307.00355 | Ronnie de Souza Santos Dr | Gustavo da Silva and Ronnie de Souza Santos | Comparing Mobile Testing Tools Using Documentary Analysis | null | null | null | null | cs.SE | http://creativecommons.org/licenses/by/4.0/ | Due to the high demand for mobile applications, driven by the exponential growth in users of this type of technology, testing professionals frequently need to invest time in studying testing tools, particularly because many different tools are now available. This variety makes it difficult for testing professionals to choose the one that best fits their goals and supports them in their work. We therefore conducted a comparative analysis of five open-source tools for mobile testing: Appium, Robotium, Espresso, Frank, and EarlGrey. We used the documentary analysis method to explore the official documentation of each tool and developed various comparisons based on technical criteria reported in the literature about the characteristics that mobile testing tools should have. Our findings are expected to help practitioners understand several aspects of mobile testing tools. | [{"created": "Sat, 1 Jul 2023 14:52:27 GMT", "version": "v1"}] | 2023-07-04 | [["da Silva", "Gustavo", ""], ["Santos", "Ronnie de Souza", ""]] | Due to the high demand for mobile applications, driven by the exponential growth in users of this type of technology, testing professionals frequently need to invest time in studying testing tools, particularly because many different tools are now available. This variety makes it difficult for testing professionals to choose the one that best fits their goals and supports them in their work. We therefore conducted a comparative analysis of five open-source tools for mobile testing: Appium, Robotium, Espresso, Frank, and EarlGrey. We used the documentary analysis method to explore the official documentation of each tool and developed various comparisons based on technical criteria reported in the literature about the characteristics that mobile testing tools should have. Our findings are expected to help practitioners understand several aspects of mobile testing tools. |
| 2308.06464 | Jun Li | Jun Li, Minqing Zhang, Ke Niu, Yingnan Zhang, Xiaoyuan Yang | A One-dimensional HEVC video steganalysis method using the Optimality of Predicted Motion Vectors | Submitted to TCSVT | null | null | null | cs.CR cs.LG cs.MM | http://creativecommons.org/licenses/by/4.0/ | Among steganalysis techniques, the detection of motion vector (MV) domain-based video steganography in the High Efficiency Video Coding (HEVC) standard remains a challenging issue. To improve detection performance, this paper proposes a one-dimensional steganalysis feature based on the optimality of predicted MVs. Firstly, we point out that the motion vector prediction (MVP) of a prediction unit (PU) encoded using the Advanced Motion Vector Prediction (AMVP) technique satisfies local optimality in the cover video. Secondly, we show that in HEVC video, message embedding via either the MVP index or motion vector differences (MVD) may destroy this optimality of the MVP. We then define the optimal rate of MVP in HEVC video as a steganalysis feature. Finally, we conduct steganalysis detection experiments on two general datasets for three popular steganography methods and compare the performance with four state-of-the-art steganalysis methods. The experimental results show that the proposed optimal rate of MVP is 100% for all cover videos, while it is less than 100% for all stego videos. Therefore, the proposed steganalysis scheme can accurately distinguish between cover videos and stego videos, and it is efficiently applied in practical scenarios with no model training and low computational complexity. | [{"created": "Sat, 12 Aug 2023 04:51:04 GMT", "version": "v1"}] | 2023-08-15 | [["Li", "Jun", ""], ["Zhang", "Minqing", ""], ["Niu", "Ke", ""], ["Zhang", "Yingnan", ""], ["Yang", "Xiaoyuan", ""]] | Among steganalysis techniques, the detection of motion vector (MV) domain-based video steganography in the High Efficiency Video Coding (HEVC) standard remains a challenging issue. To improve detection performance, this paper proposes a one-dimensional steganalysis feature based on the optimality of predicted MVs. Firstly, we point out that the motion vector prediction (MVP) of a prediction unit (PU) encoded using the Advanced Motion Vector Prediction (AMVP) technique satisfies local optimality in the cover video. Secondly, we show that in HEVC video, message embedding via either the MVP index or motion vector differences (MVD) may destroy this optimality of the MVP. We then define the optimal rate of MVP in HEVC video as a steganalysis feature. Finally, we conduct steganalysis detection experiments on two general datasets for three popular steganography methods and compare the performance with four state-of-the-art steganalysis methods. The experimental results show that the proposed optimal rate of MVP is 100% for all cover videos, while it is less than 100% for all stego videos. Therefore, the proposed steganalysis scheme can accurately distinguish between cover videos and stego videos, and it is efficiently applied in practical scenarios with no model training and low computational complexity. |
| 2404.19139 | Jaidev Shastri | Jaidev Shastri, Xiaoguang Wang, Basavesh Ammanaghatta Shivakumar, Freek Verbeek, Binoy Ravindran | HMTRace: Hardware-Assisted Memory-Tagging based Dynamic Data Race Detection | null | null | null | null | cs.DC | http://creativecommons.org/licenses/by/4.0/ | Data races, a category of insidious software concurrency bugs, are often challenging and resource-intensive to detect and debug. Existing dynamic race detection tools incur significant execution-time and memory overhead while exhibiting high false-positive rates. This paper proposes HMTRace, a novel Armv8.5-A memory tagging extension (MTE) based dynamic data race detection framework, emphasizing low compute and memory requirements while maintaining high accuracy and precision. HMTRace supports race detection in userspace OpenMP- and Pthread-based multi-threaded C applications. HMTRace achieves a combined F1-score of 0.86 while incurring a mean execution-time overhead of 4.01% and a peak memory (RSS) overhead of 54.31%. HMTRace also reports no false positives, confirming all reported races. | [{"created": "Mon, 29 Apr 2024 22:52:07 GMT", "version": "v1"}] | 2024-05-01 | [["Shastri", "Jaidev", ""], ["Wang", "Xiaoguang", ""], ["Shivakumar", "Basavesh Ammanaghatta", ""], ["Verbeek", "Freek", ""], ["Ravindran", "Binoy", ""]] | Data races, a category of insidious software concurrency bugs, are often challenging and resource-intensive to detect and debug. Existing dynamic race detection tools incur significant execution-time and memory overhead while exhibiting high false-positive rates. This paper proposes HMTRace, a novel Armv8.5-A memory tagging extension (MTE) based dynamic data race detection framework, emphasizing low compute and memory requirements while maintaining high accuracy and precision. HMTRace supports race detection in userspace OpenMP- and Pthread-based multi-threaded C applications. HMTRace achieves a combined F1-score of 0.86 while incurring a mean execution-time overhead of 4.01% and a peak memory (RSS) overhead of 54.31%. HMTRace also reports no false positives, confirming all reported races. |
| 2407.09017 | Scott Freitas | Scott Freitas, Jovan Kalajdjieski, Amir Gharib, Robert McCann | AI-Driven Guided Response for Security Operation Centers with Microsoft Copilot for Security | null | null | null | null | cs.LG cs.CR cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Security operations centers contend with a constant stream of security incidents, ranging from straightforward to highly complex. To address this, we developed Copilot Guided Response (CGR), an industry-scale ML architecture that guides security analysts across three key tasks: (1) investigation, providing essential historical context by identifying similar incidents; (2) triaging, to ascertain the nature of the incident, whether it is a true positive, false positive, or benign positive; and (3) remediation, recommending tailored containment actions. CGR is integrated into the Microsoft Defender XDR product and deployed worldwide, generating millions of recommendations across thousands of customers. Our extensive evaluation, incorporating internal evaluation, collaboration with security experts, and customer feedback, demonstrates that CGR delivers high-quality recommendations across all three tasks. We provide a comprehensive overview of the CGR architecture, setting a precedent as the first cybersecurity company to openly discuss these capabilities in such depth. Additionally, we release GUIDE, the largest public collection of real-world security incidents, spanning 13M pieces of evidence across 1M annotated incidents. By enabling researchers and practitioners to conduct research on real-world data, GUIDE advances the state of cybersecurity and supports the development of next-generation machine learning systems. | [{"created": "Fri, 12 Jul 2024 06:10:01 GMT", "version": "v1"}, {"created": "Thu, 18 Jul 2024 00:18:19 GMT", "version": "v2"}, {"created": "Wed, 24 Jul 2024 01:15:20 GMT", "version": "v3"}] | 2024-07-25 | [["Freitas", "Scott", ""], ["Kalajdjieski", "Jovan", ""], ["Gharib", "Amir", ""], ["McCann", "Robert", ""]] | Security operations centers contend with a constant stream of security incidents, ranging from straightforward to highly complex. To address this, we developed Copilot Guided Response (CGR), an industry-scale ML architecture that guides security analysts across three key tasks: (1) investigation, providing essential historical context by identifying similar incidents; (2) triaging, to ascertain the nature of the incident, whether it is a true positive, false positive, or benign positive; and (3) remediation, recommending tailored containment actions. CGR is integrated into the Microsoft Defender XDR product and deployed worldwide, generating millions of recommendations across thousands of customers. Our extensive evaluation, incorporating internal evaluation, collaboration with security experts, and customer feedback, demonstrates that CGR delivers high-quality recommendations across all three tasks. We provide a comprehensive overview of the CGR architecture, setting a precedent as the first cybersecurity company to openly discuss these capabilities in such depth. Additionally, we release GUIDE, the largest public collection of real-world security incidents, spanning 13M pieces of evidence across 1M annotated incidents. By enabling researchers and practitioners to conduct research on real-world data, GUIDE advances the state of cybersecurity and supports the development of next-generation machine learning systems. |
| 2303.13326 | Ying Cao | Ying Cao, Elsa Rizk, Stefan Vlaski, Ali H. Sayed | Decentralized Adversarial Training over Graphs | arXiv admin note: text overlap with arXiv:2303.01936 | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | The vulnerability of machine learning models to adversarial attacks has been attracting considerable attention in recent years. Most existing studies focus on the behavior of stand-alone single-agent learners. In comparison, this work studies adversarial training over graphs, where individual agents are subjected to perturbations of varied strength levels across space. It is expected that interactions by linked agents, and the heterogeneity of the attack models that are possible over the graph, can help enhance robustness in view of the coordination power of the group. Using a min-max formulation of diffusion learning, we develop a decentralized adversarial training framework for multi-agent systems. We analyze the convergence properties of the proposed scheme for both convex and non-convex environments, and illustrate the enhanced robustness to adversarial attacks. | [{"created": "Thu, 23 Mar 2023 15:05:16 GMT", "version": "v1"}] | 2023-03-24 | [["Cao", "Ying", ""], ["Rizk", "Elsa", ""], ["Vlaski", "Stefan", ""], ["Sayed", "Ali H.", ""]] | The vulnerability of machine learning models to adversarial attacks has been attracting considerable attention in recent years. Most existing studies focus on the behavior of stand-alone single-agent learners. In comparison, this work studies adversarial training over graphs, where individual agents are subjected to perturbations of varied strength levels across space. It is expected that interactions by linked agents, and the heterogeneity of the attack models that are possible over the graph, can help enhance robustness in view of the coordination power of the group. Using a min-max formulation of diffusion learning, we develop a decentralized adversarial training framework for multi-agent systems. We analyze the convergence properties of the proposed scheme for both convex and non-convex environments, and illustrate the enhanced robustness to adversarial attacks. |
| 2112.11187 | Sharare Zehtabian | Sharare Zehtabian, Siavash Khodadadeh, Damla Turgut, Ladislau Bölöni | Predicting infections in the Covid-19 pandemic -- lessons learned | null | null | null | null | cs.CY cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Throughout the Covid-19 pandemic, a significant amount of effort has been put into developing techniques that predict the number of infections under various assumptions about public policy and non-pharmaceutical interventions. While both the available data and the sophistication of the AI models and available computing power exceed what was available in previous years, the overall success of prediction approaches has been very limited. In this paper, we start from prediction algorithms proposed for the XPrize Pandemic Response Challenge and consider several directions that might allow their improvement. Then, we investigate their performance over medium-term predictions extending over several months. We find that while augmenting the algorithms with additional information about the culture of the modeled region, incorporating traditional compartmental models, and using up-to-date deep learning architectures can improve performance for short-term predictions, the accuracy of medium-term predictions is still very low, and a significant amount of future research is needed to make such models a reliable component of a public policy toolbox. | [{"created": "Thu, 2 Dec 2021 20:20:46 GMT", "version": "v1"}] | 2021-12-22 | [["Zehtabian", "Sharare", ""], ["Khodadadeh", "Siavash", ""], ["Turgut", "Damla", ""], ["Bölöni", "Ladislau", ""]] | Throughout the Covid-19 pandemic, a significant amount of effort has been put into developing techniques that predict the number of infections under various assumptions about public policy and non-pharmaceutical interventions. While both the available data and the sophistication of the AI models and available computing power exceed what was available in previous years, the overall success of prediction approaches has been very limited. In this paper, we start from prediction algorithms proposed for the XPrize Pandemic Response Challenge and consider several directions that might allow their improvement. Then, we investigate their performance over medium-term predictions extending over several months. We find that while augmenting the algorithms with additional information about the culture of the modeled region, incorporating traditional compartmental models, and using up-to-date deep learning architectures can improve performance for short-term predictions, the accuracy of medium-term predictions is still very low, and a significant amount of future research is needed to make such models a reliable component of a public policy toolbox. |
| 2211.09074 | Fangzhou Mu | Fangzhou Mu, Sicheng Mo, Gillian Wang, Yin Li | Where a Strong Backbone Meets Strong Features -- ActionFormer for Ego4D Moment Queries Challenge | 2nd place in ECCV 2022 Ego4D Moment Queries Challenge | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | This report describes our submission to the Ego4D Moment Queries Challenge 2022. Our submission builds on ActionFormer, the state-of-the-art backbone for temporal action localization, and a trio of strong video features from SlowFast, Omnivore and EgoVLP. Our solution is ranked 2nd on the public leaderboard with 21.76% average mAP on the test set, which is nearly three times higher than the official baseline. Further, we obtain 42.54% Recall@1x at tIoU=0.5 on the test set, outperforming the top-ranked solution by a significant margin of 1.41 absolute percentage points. Our code is available at https://github.com/happyharrycn/actionformer_release. | [{"created": "Wed, 16 Nov 2022 17:43:26 GMT", "version": "v1"}] | 2022-11-17 | [["Mu", "Fangzhou", ""], ["Mo", "Sicheng", ""], ["Wang", "Gillian", ""], ["Li", "Yin", ""]] | This report describes our submission to the Ego4D Moment Queries Challenge 2022. Our submission builds on ActionFormer, the state-of-the-art backbone for temporal action localization, and a trio of strong video features from SlowFast, Omnivore and EgoVLP. Our solution is ranked 2nd on the public leaderboard with 21.76% average mAP on the test set, which is nearly three times higher than the official baseline. Further, we obtain 42.54% Recall@1x at tIoU=0.5 on the test set, outperforming the top-ranked solution by a significant margin of 1.41 absolute percentage points. Our code is available at https://github.com/happyharrycn/actionformer_release. |
| 2002.01077 | Sunshine Chong | Sunshine Chong, Andrés Abeliuk | Quantifying the Effects of Recommendation Systems | 8 pages, 6 figures, accepted into the National Symposium of IEEE Big Data 2019 | null | null | null | cs.IR cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recommendation systems today exert a strong influence on consumer behavior and individual perceptions of the world. By using collaborative filtering (CF) methods to create recommendations, these systems generate a continuous feedback loop in which user behavior becomes magnified in the algorithmic system. Popular items get recommended more frequently, creating a bias that affects and alters user preferences. To visualize and compare these different biases, we analyze the effects of recommendation systems and quantify the inequalities resulting from them. | [{"created": "Tue, 4 Feb 2020 01:21:46 GMT", "version": "v1"}] | 2020-02-05 | [["Chong", "Sunshine", ""], ["Abeliuk", "Andrés", ""]] | Recommendation systems today exert a strong influence on consumer behavior and individual perceptions of the world. By using collaborative filtering (CF) methods to create recommendations, these systems generate a continuous feedback loop in which user behavior becomes magnified in the algorithmic system. Popular items get recommended more frequently, creating a bias that affects and alters user preferences. To visualize and compare these different biases, we analyze the effects of recommendation systems and quantify the inequalities resulting from them. |
| 2405.18627 | Sunay Bhat | Sunay Bhat, Jeffrey Jiang, Omead Pooladzandi, Alexander Branch, Gregory Pottie | PureGen: Universal Data Purification for Train-Time Poison Defense via Generative Model Dynamics | null | null | null | null | cs.LG cs.AI cs.CR | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Train-time data poisoning attacks threaten machine learning models by introducing adversarial examples during training, leading to misclassification. Current defense methods often reduce generalization performance, are attack-specific, and impose significant training overhead. To address this, we introduce a set of universal data purification methods using a stochastic transform, $\Psi(x)$, realized via iterative Langevin dynamics of Energy-Based Models (EBMs), Denoising Diffusion Probabilistic Models (DDPMs), or both. These approaches purify poisoned data with minimal impact on classifier generalization. Our specially trained EBMs and DDPMs provide state-of-the-art defense against various attacks (including Narcissus, Bullseye Polytope, Gradient Matching) on CIFAR-10, Tiny-ImageNet, and CINIC-10, without needing attack or classifier-specific information. We discuss performance trade-offs and show that our methods remain highly effective even with poisoned or distributionally shifted generative model training data. | [{"created": "Tue, 28 May 2024 22:19:26 GMT", "version": "v1"}, {"created": "Sun, 2 Jun 2024 20:11:50 GMT", "version": "v2"}] | 2024-06-04 | [["Bhat", "Sunay", ""], ["Jiang", "Jeffrey", ""], ["Pooladzandi", "Omead", ""], ["Branch", "Alexander", ""], ["Pottie", "Gregory", ""]] | Train-time data poisoning attacks threaten machine learning models by introducing adversarial examples during training, leading to misclassification. Current defense methods often reduce generalization performance, are attack-specific, and impose significant training overhead. To address this, we introduce a set of universal data purification methods using a stochastic transform, $\Psi(x)$, realized via iterative Langevin dynamics of Energy-Based Models (EBMs), Denoising Diffusion Probabilistic Models (DDPMs), or both. These approaches purify poisoned data with minimal impact on classifier generalization. Our specially trained EBMs and DDPMs provide state-of-the-art defense against various attacks (including Narcissus, Bullseye Polytope, Gradient Matching) on CIFAR-10, Tiny-ImageNet, and CINIC-10, without needing attack or classifier-specific information. We discuss performance trade-offs and show that our methods remain highly effective even with poisoned or distributionally shifted generative model training data. |
| 2012.14116 | Zenan Xu | Zenan Xu, Daya Guo, Duyu Tang, Qinliang Su, Linjun Shou, Ming Gong, Wanjun Zhong, Xiaojun Quan, Nan Duan and Daxin Jiang | Syntax-Enhanced Pre-trained Model | Accepted by ACL-IJCNLP 2021: The Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the problem of leveraging the syntactic structure of text to enhance pre-trained models such as BERT and RoBERTa. Existing methods utilize the syntax of text either in the pre-training stage or in the fine-tuning stage, so they suffer from a discrepancy between the two stages. This problem leads to the necessity of human-annotated syntactic information, which limits the application of existing methods to broader scenarios. To address this, we present a model that utilizes the syntax of text in both the pre-training and fine-tuning stages. Our model is based on a Transformer with a syntax-aware attention layer that considers the dependency tree of the text. We further introduce a new pre-training task of predicting the syntactic distance among tokens in the dependency tree. We evaluate the model on three downstream tasks, including relation classification, entity typing, and question answering. Results show that our model achieves state-of-the-art performance on six public benchmark datasets. We have two major findings. First, we demonstrate that infusing automatically produced syntax of text improves pre-trained models. Second, global syntactic distances among tokens bring larger performance gains compared to local head relations between contiguous tokens. | [{"created": "Mon, 28 Dec 2020 06:48:04 GMT", "version": "v1"}, {"created": "Sat, 29 May 2021 08:13:49 GMT", "version": "v2"}] | 2021-06-01 | [["Xu", "Zenan", ""], ["Guo", "Daya", ""], ["Tang", "Duyu", ""], ["Su", "Qinliang", ""], ["Shou", "Linjun", ""], ["Gong", "Ming", ""], ["Zhong", "Wanjun", ""], ["Quan", "Xiaojun", ""], ["Duan", "Nan", ""], ["Jiang", "Daxin", ""]] | We study the problem of leveraging the syntactic structure of text to enhance pre-trained models such as BERT and RoBERTa. Existing methods utilize the syntax of text either in the pre-training stage or in the fine-tuning stage, so they suffer from a discrepancy between the two stages. This problem leads to the necessity of human-annotated syntactic information, which limits the application of existing methods to broader scenarios. To address this, we present a model that utilizes the syntax of text in both the pre-training and fine-tuning stages. Our model is based on a Transformer with a syntax-aware attention layer that considers the dependency tree of the text. We further introduce a new pre-training task of predicting the syntactic distance among tokens in the dependency tree. We evaluate the model on three downstream tasks, including relation classification, entity typing, and question answering. Results show that our model achieves state-of-the-art performance on six public benchmark datasets. We have two major findings. First, we demonstrate that infusing automatically produced syntax of text improves pre-trained models. Second, global syntactic distances among tokens bring larger performance gains compared to local head relations between contiguous tokens. |
| 2405.11965 | Michael Dorner | Michael Dorner and Andreas Bauer and Florian Angermeir | No Free Lunch: Research Software Testing in Teaching | null | null | null | null | cs.SE | http://creativecommons.org/licenses/by/4.0/ | Software is at the core of most scientific discoveries today. Therefore, the quality of research results highly depends on the quality of the research software. Rigorous testing, as we know it from software engineering in industry, could ensure the quality of research software, but it also requires a substantial effort that is often not rewarded in academia. Therefore, this research explores the effects on research software of integrating research software testing into teaching. In an in-vivo experiment, we integrated the engineering of a test suite for a large-scale network simulation as group projects into a course on software testing at the Blekinge Institute of Technology, Sweden, and qualitatively measured the effects of this integration on the research software. We found that the research software benefited from the integration through substantially improved documentation and fewer hardware and software dependencies. However, this integration was effortful, and although the student teams developed elegant and thoughtful test suites, no code by students went directly into the research software, since we were not able to make the integration back into the research software obligatory or even remunerative. We strongly believe that integrating research software engineering, such as testing, into teaching is valuable not only for the research software itself but also for the students, the researchers of the next generation, who get in touch with research software engineering and bleeding-edge research in their field as part of their education. Nevertheless, the uncertainty about the intellectual property of students' code substantially limits the potential of integrating research software testing into teaching. | [{"created": "Mon, 20 May 2024 11:40:01 GMT", "version": "v1"}] | 2024-05-21 | [["Dorner", "Michael", ""], ["Bauer", "Andreas", ""], ["Angermeir", "Florian", ""]] | Software is at the core of most scientific discoveries today. Therefore, the quality of research results highly depends on the quality of the research software. Rigorous testing, as we know it from software engineering in industry, could ensure the quality of research software, but it also requires a substantial effort that is often not rewarded in academia. Therefore, this research explores the effects on research software of integrating research software testing into teaching. In an in-vivo experiment, we integrated the engineering of a test suite for a large-scale network simulation as group projects into a course on software testing at the Blekinge Institute of Technology, Sweden, and qualitatively measured the effects of this integration on the research software. We found that the research software benefited from the integration through substantially improved documentation and fewer hardware and software dependencies. However, this integration was effortful, and although the student teams developed elegant and thoughtful test suites, no code by students went directly into the research software, since we were not able to make the integration back into the research software obligatory or even remunerative. We strongly believe that integrating research software engineering, such as testing, into teaching is valuable not only for the research software itself but also for the students, the researchers of the next generation, who get in touch with research software engineering and bleeding-edge research in their field as part of their education. Nevertheless, the uncertainty about the intellectual property of students' code substantially limits the potential of integrating research software testing into teaching. |
| 2209.10131 | Gadekallu Thippa Reddy | Gokul Yenduri, Thippa Reddy Gadekallu | A Systematic Literature Review of Soft Computing Techniques for Software Maintainability Prediction: State-of-the-Art, Challenges and Future Directions | Submitted for peer review | null | null | null | cs.SE cs.AI | http://creativecommons.org/licenses/by/4.0/ | Software is changing rapidly with the invention of advanced technologies and methodologies. The ability to rapidly and successfully upgrade software in response to changing business requirements is more vital than ever. For the long-term management of software products, measuring software maintainability is crucial. The use of soft computing techniques for software maintainability prediction has shown immense promise in the software maintenance process by providing accurate predictions of software maintainability. To better understand the role of soft computing techniques in software maintainability prediction, we provide a systematic literature review of soft computing techniques for software maintainability prediction. Firstly, we provide a detailed overview of software maintainability. Following this, we explore the fundamentals of software maintainability and the reasons for adopting soft computing methodologies for predicting software maintainability. Later, we examine the soft computing approaches employed in the process of software maintainability prediction. Furthermore, we discuss the difficulties and potential solutions associated with the use of soft computing techniques to predict software maintainability. Finally, we conclude the review with some promising future directions to drive further research innovations and developments in this promising area. | [{"created": "Wed, 21 Sep 2022 05:38:23 GMT", "version": "v1"}] | 2022-09-22 | [["Yenduri", "Gokul", ""], ["Gadekallu", "Thippa Reddy", ""]] | Software is changing rapidly with the invention of advanced technologies and methodologies. The ability to rapidly and successfully upgrade software in response to changing business requirements is more vital than ever. For the long-term management of software products, measuring software maintainability is crucial. The use of soft computing techniques for software maintainability prediction has shown immense promise in the software maintenance process by providing accurate predictions of software maintainability. To better understand the role of soft computing techniques in software maintainability prediction, we provide a systematic literature review of soft computing techniques for software maintainability prediction. Firstly, we provide a detailed overview of software maintainability. Following this, we explore the fundamentals of software maintainability and the reasons for adopting soft computing methodologies for predicting software maintainability. Later, we examine the soft computing approaches employed in the process of software maintainability prediction. Furthermore, we discuss the difficulties and potential solutions associated with the use of soft computing techniques to predict software maintainability. Finally, we conclude the review with some promising future directions to drive further research innovations and developments in this promising area. |
2401.01200
|
Renato Krohling
|
Flavio P. Loss, Pedro H. da Cunha, Matheus B. Rocha, Madson
Poltronieri Zanoni, Leandro M. de Lima, Isadora Tavares Nascimento, Isabella
Rezende, Tania R. P. Canuto, Luciana de Paula Vieira, Renan Rossoni, Maria C.
S. Santos, Patricia Lyra Frasson, Wanderson Rom\~ao, Paulo R. Filgueiras, and
Renato A. Krohling
|
Skin cancer diagnosis using NIR spectroscopy data of skin lesions in
vivo using machine learning algorithms
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Skin lesions are classified as benign or malignant. Among the malignant,
melanoma is a very aggressive cancer and a major cause of death, so early
diagnosis of skin cancer is highly desirable. In the last few years, there has
been growing interest in computer-aided diagnosis (CAD) using image and
clinical data of the lesion. These sources of information have limitations due
to their inability to capture the molecular structure of the lesion. NIR
spectroscopy may provide an alternative source of information for automated
CAD of skin lesions. The techniques and classification algorithms most
commonly used in spectroscopy are Principal Component Analysis (PCA), Partial
Least Squares - Discriminant Analysis (PLS-DA), and Support Vector Machines
(SVM). Nonetheless, there is growing interest in applying modern machine and
deep learning (MDL) techniques to spectroscopy. One of the main limitations to
applying MDL to spectroscopy is the lack of public datasets. Since, as far as
we know, there is no public dataset of NIR spectral data for skin lesions, a
new dataset named NIR-SC-UFES has been collected, annotated, and analyzed,
generating a gold standard for the classification of NIR spectral data for
skin cancer. Next, the machine learning algorithms XGBoost, CatBoost,
LightGBM, and a 1D convolutional neural network (1D-CNN) were investigated to
classify cancerous and non-cancerous skin lesions. Experimental results
indicate that the best performance is obtained by LightGBM with standard
normal variate (SNV) pre-processing and feature extraction, providing values
of 0.839 for balanced accuracy, 0.851 for recall, 0.852 for precision, and
0.850 for F-score. These results represent the first steps toward CAD of skin
lesions aiming at the automated in vivo triage of patients using NIR spectral
data.
|
[
{
"created": "Tue, 2 Jan 2024 13:03:39 GMT",
"version": "v1"
}
] |
2024-01-03
|
[
[
"Loss",
"Flavio P.",
""
],
[
"da Cunha",
"Pedro H.",
""
],
[
"Rocha",
"Matheus B.",
""
],
[
"Zanoni",
"Madson Poltronieri",
""
],
[
"de Lima",
"Leandro M.",
""
],
[
"Nascimento",
"Isadora Tavares",
""
],
[
"Rezende",
"Isabella",
""
],
[
"Canuto",
"Tania R. P.",
""
],
[
"Vieira",
"Luciana de Paula",
""
],
[
"Rossoni",
"Renan",
""
],
[
"Santos",
"Maria C. S.",
""
],
[
"Frasson",
"Patricia Lyra",
""
],
[
"Romão",
"Wanderson",
""
],
[
"Filgueiras",
"Paulo R.",
""
],
[
"Krohling",
"Renato A.",
""
]
] |
Skin lesions are classified as benign or malignant. Among the malignant, melanoma is a very aggressive cancer and a major cause of death, so early diagnosis of skin cancer is highly desirable. In the last few years, there has been growing interest in computer-aided diagnosis (CAD) using image and clinical data of the lesion. These sources of information have limitations due to their inability to capture the molecular structure of the lesion. NIR spectroscopy may provide an alternative source of information for automated CAD of skin lesions. The techniques and classification algorithms most commonly used in spectroscopy are Principal Component Analysis (PCA), Partial Least Squares - Discriminant Analysis (PLS-DA), and Support Vector Machines (SVM). Nonetheless, there is growing interest in applying modern machine and deep learning (MDL) techniques to spectroscopy. One of the main limitations to applying MDL to spectroscopy is the lack of public datasets. Since, as far as we know, there is no public dataset of NIR spectral data for skin lesions, a new dataset named NIR-SC-UFES has been collected, annotated, and analyzed, generating a gold standard for the classification of NIR spectral data for skin cancer. Next, the machine learning algorithms XGBoost, CatBoost, LightGBM, and a 1D convolutional neural network (1D-CNN) were investigated to classify cancerous and non-cancerous skin lesions. Experimental results indicate that the best performance is obtained by LightGBM with standard normal variate (SNV) pre-processing and feature extraction, providing values of 0.839 for balanced accuracy, 0.851 for recall, 0.852 for precision, and 0.850 for F-score. These results represent the first steps toward CAD of skin lesions aiming at the automated in vivo triage of patients using NIR spectral data.
|
2304.03158
|
Wu Xing
|
Xing Wu, Guangyuan Ma, Peng Wang, Meng Lin, Zijia Lin, Fuzheng Zhang
and Songlin Hu
|
CoT-MAE v2: Contextual Masked Auto-Encoder with Multi-view Modeling for
Passage Retrieval
|
working in progress
| null | null | null |
cs.CL cs.AI cs.IR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A growing number of techniques have been emerging to improve the performance
of passage retrieval. As an effective representation bottleneck pre-training
technique, the contextual masked auto-encoder utilizes contextual embeddings
to assist in the reconstruction of passages. However, it only uses a single
auto-encoding pre-task for dense representation pre-training. This study
brings multi-view modeling to the contextual masked auto-encoder. Firstly, the
multi-view representation utilizes both dense and sparse vectors as multi-view
representations, aiming to capture sentence semantics from different aspects.
Moreover, the multi-view decoding paradigm utilizes both auto-encoding and
auto-regressive decoders in representation bottleneck pre-training, aiming to
provide both reconstructive and generative signals for better contextual
representation pre-training. We refer to this multi-view pre-training method
as CoT-MAE v2. Through extensive experiments, we show that CoT-MAE v2 is
effective and robust on large-scale passage retrieval benchmarks and
out-of-domain zero-shot benchmarks.
|
[
{
"created": "Wed, 5 Apr 2023 08:00:38 GMT",
"version": "v1"
}
] |
2023-04-07
|
[
[
"Wu",
"Xing",
""
],
[
"Ma",
"Guangyuan",
""
],
[
"Wang",
"Peng",
""
],
[
"Lin",
"Meng",
""
],
[
"Lin",
"Zijia",
""
],
[
"Zhang",
"Fuzheng",
""
],
[
"Hu",
"Songlin",
""
]
] |
A growing number of techniques have been emerging to improve the performance of passage retrieval. As an effective representation bottleneck pre-training technique, the contextual masked auto-encoder utilizes contextual embeddings to assist in the reconstruction of passages. However, it only uses a single auto-encoding pre-task for dense representation pre-training. This study brings multi-view modeling to the contextual masked auto-encoder. Firstly, the multi-view representation utilizes both dense and sparse vectors as multi-view representations, aiming to capture sentence semantics from different aspects. Moreover, the multi-view decoding paradigm utilizes both auto-encoding and auto-regressive decoders in representation bottleneck pre-training, aiming to provide both reconstructive and generative signals for better contextual representation pre-training. We refer to this multi-view pre-training method as CoT-MAE v2. Through extensive experiments, we show that CoT-MAE v2 is effective and robust on large-scale passage retrieval benchmarks and out-of-domain zero-shot benchmarks.
|
2406.12569
|
Yujie Wang
|
Chi Ma, Mincong Huang, Chao Wang, Yujie Wang, Lei Yu
|
MOYU: A Theoretical Study on Massive Over-activation Yielded Uplifts in
LLMs
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Massive Over-activation Yielded Uplifts (MOYU) is an inherent property of
large language models, and dynamic activation (DA) based on the MOYU property is
a clever yet under-explored strategy designed to accelerate inference in these
models. Existing methods that utilize MOYU often face a significant 'Impossible
Trinity': struggling to simultaneously maintain model performance, enhance
inference speed, and extend applicability across various architectures. Due to
the theoretical ambiguities surrounding MOYU, this paper elucidates the root
cause of the MOYU property and outlines the mechanisms behind two primary
limitations encountered by current DA methods: 1) history-related activation
uncertainty, and 2) semantic-irrelevant activation inertia. Our analysis not
only underscores the limitations of current dynamic activation strategies
within large-scale LLaMA models but also proposes opportunities for refining
the design of future sparsity schemes.
|
[
{
"created": "Tue, 18 Jun 2024 12:57:33 GMT",
"version": "v1"
},
{
"created": "Fri, 28 Jun 2024 07:23:16 GMT",
"version": "v2"
}
] |
2024-07-01
|
[
[
"Ma",
"Chi",
""
],
[
"Huang",
"Mincong",
""
],
[
"Wang",
"Chao",
""
],
[
"Wang",
"Yujie",
""
],
[
"Yu",
"Lei",
""
]
] |
Massive Over-activation Yielded Uplifts (MOYU) is an inherent property of large language models, and dynamic activation (DA) based on the MOYU property is a clever yet under-explored strategy designed to accelerate inference in these models. Existing methods that utilize MOYU often face a significant 'Impossible Trinity': struggling to simultaneously maintain model performance, enhance inference speed, and extend applicability across various architectures. Due to the theoretical ambiguities surrounding MOYU, this paper elucidates the root cause of the MOYU property and outlines the mechanisms behind two primary limitations encountered by current DA methods: 1) history-related activation uncertainty, and 2) semantic-irrelevant activation inertia. Our analysis not only underscores the limitations of current dynamic activation strategies within large-scale LLaMA models but also proposes opportunities for refining the design of future sparsity schemes.
|
2101.08885
|
Geunsik Lim
|
Geunsik Lim, Changwoo Min, Dong Hyun Kang, and Young Ik Eom
|
User-Aware Power Management for Mobile Devices
| null | null |
10.1109/GCCE.2013.6664780
| null |
cs.AR cs.OS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Power management techniques to extend battery lifespan are becoming
increasingly important due to the longer running time of user applications on
mobile devices. Even when users do not use any applications, battery lifespan
decreases continually. This occurs because of mobile-platform service daemons
and network-based data synchronization operations. In this paper, we propose a
new power management system that recognizes the idle time of the device to
reduce the battery consumption of mobile devices.
|
[
{
"created": "Thu, 21 Jan 2021 23:17:42 GMT",
"version": "v1"
}
] |
2021-01-25
|
[
[
"Lim",
"Geunsik",
""
],
[
"Min",
"Changwoo",
""
],
[
"Kang",
"Dong Hyun",
""
],
[
"Eom",
"Young Ik",
""
]
] |
Power management techniques to extend battery lifespan are becoming increasingly important due to the longer running time of user applications on mobile devices. Even when users do not use any applications, battery lifespan decreases continually. This occurs because of mobile-platform service daemons and network-based data synchronization operations. In this paper, we propose a new power management system that recognizes the idle time of the device to reduce the battery consumption of mobile devices.
|
2301.10577
|
Sushil Awale
|
Debayan Banerjee, Seid Muhie Yimam, Sushil Awale and Chris Biemann
|
ARDIAS: AI-Enhanced Research Management, Discovery, and Advisory System
| null | null | null | null |
cs.CL cs.AI cs.IR cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
In this work, we present ARDIAS, a web-based application that aims to provide
researchers with a full suite of discovery and collaboration tools. ARDIAS
currently allows searching for authors and articles by name and gaining
insights into the research topics of a particular researcher. With the aid of
AI-based tools, ARDIAS aims to recommend potential collaborators and topics to
researchers. In the near future, we aim to add tools that allow researchers to
communicate with each other and start new projects.
|
[
{
"created": "Wed, 25 Jan 2023 13:30:10 GMT",
"version": "v1"
}
] |
2023-01-26
|
[
[
"Banerjee",
"Debayan",
""
],
[
"Yimam",
"Seid Muhie",
""
],
[
"Awale",
"Sushil",
""
],
[
"Biemann",
"Chris",
""
]
] |
In this work, we present ARDIAS, a web-based application that aims to provide researchers with a full suite of discovery and collaboration tools. ARDIAS currently allows searching for authors and articles by name and gaining insights into the research topics of a particular researcher. With the aid of AI-based tools, ARDIAS aims to recommend potential collaborators and topics to researchers. In the near future, we aim to add tools that allow researchers to communicate with each other and start new projects.
|
2110.00443
|
Florian Fischer
|
Florian Fischer, Arthur Fleig, Markus Klar, J\"org M\"uller
|
Optimal Feedback Control for Modeling Human-Computer Interaction
|
66 pages, 21 figures, two appendices
| null |
10.1145/3524122
| null |
cs.HC math.OC
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Optimal feedback control (OFC) is a theory from the motor control literature
that explains how humans move their body to achieve a certain goal, e.g.,
pointing with the finger. OFC is based on the assumption that humans aim to
control their body optimally, within the constraints imposed by body,
environment, and task. In this paper, we explain how this theory can be applied
to understanding Human-Computer Interaction (HCI) in the case of pointing. We
propose that the human body and computer dynamics can be interpreted as a
single dynamical system. The system state is controlled by the user via muscle
control signals, and estimated from observations. Between-trial variability
arises from signal-dependent control noise and observation noise. We compare
four different models from optimal control theory and evaluate to what degree
these models can replicate movements in the case of mouse pointing. We
introduce a procedure to identify parameters that best explain observed user
behavior. To support HCI researchers in simulating, analyzing, and optimizing
interaction movements, we provide the Python toolbox OFC4HCI. We conclude that
OFC presents a powerful framework for HCI to understand and simulate motion of
the human body and of the interface on a moment-by-moment basis.
|
[
{
"created": "Fri, 1 Oct 2021 14:26:42 GMT",
"version": "v1"
},
{
"created": "Wed, 20 Apr 2022 07:02:37 GMT",
"version": "v2"
}
] |
2022-04-21
|
[
[
"Fischer",
"Florian",
""
],
[
"Fleig",
"Arthur",
""
],
[
"Klar",
"Markus",
""
],
[
"Müller",
"Jörg",
""
]
] |
Optimal feedback control (OFC) is a theory from the motor control literature that explains how humans move their body to achieve a certain goal, e.g., pointing with the finger. OFC is based on the assumption that humans aim to control their body optimally, within the constraints imposed by body, environment, and task. In this paper, we explain how this theory can be applied to understanding Human-Computer Interaction (HCI) in the case of pointing. We propose that the human body and computer dynamics can be interpreted as a single dynamical system. The system state is controlled by the user via muscle control signals, and estimated from observations. Between-trial variability arises from signal-dependent control noise and observation noise. We compare four different models from optimal control theory and evaluate to what degree these models can replicate movements in the case of mouse pointing. We introduce a procedure to identify parameters that best explain observed user behavior. To support HCI researchers in simulating, analyzing, and optimizing interaction movements, we provide the Python toolbox OFC4HCI. We conclude that OFC presents a powerful framework for HCI to understand and simulate motion of the human body and of the interface on a moment-by-moment basis.
|
2310.07820
|
Nate Gruver
|
Nate Gruver, Marc Finzi, Shikai Qiu, Andrew Gordon Wilson
|
Large Language Models Are Zero-Shot Time Series Forecasters
|
NeurIPS 2023. Code available at: https://github.com/ngruver/llmtime
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
By encoding time series as a string of numerical digits, we can frame time
series forecasting as next-token prediction in text. Developing this approach,
we find that large language models (LLMs) such as GPT-3 and LLaMA-2 can
surprisingly zero-shot extrapolate time series at a level comparable to or
exceeding the performance of purpose-built time series models trained on the
downstream tasks. To facilitate this performance, we propose procedures for
effectively tokenizing time series data and converting discrete distributions
over tokens into highly flexible densities over continuous values. We argue the
success of LLMs for time series stems from their ability to naturally represent
multimodal distributions, in conjunction with biases for simplicity and
repetition, which align with the salient features in many time series, such as
repeated seasonal trends. We also show how LLMs can naturally handle missing
data without imputation through non-numerical text, accommodate textual side
information, and answer questions to help explain predictions. While we find
that increasing model size generally improves performance on time series, we
show GPT-4 can perform worse than GPT-3 because of how it tokenizes numbers
and its poor uncertainty calibration, which is likely the result of alignment
interventions such as RLHF.
|
[
{
"created": "Wed, 11 Oct 2023 19:01:28 GMT",
"version": "v1"
},
{
"created": "Tue, 18 Jun 2024 14:48:38 GMT",
"version": "v2"
},
{
"created": "Mon, 12 Aug 2024 00:43:56 GMT",
"version": "v3"
}
] |
2024-08-13
|
[
[
"Gruver",
"Nate",
""
],
[
"Finzi",
"Marc",
""
],
[
"Qiu",
"Shikai",
""
],
[
"Wilson",
"Andrew Gordon",
""
]
] |
By encoding time series as a string of numerical digits, we can frame time series forecasting as next-token prediction in text. Developing this approach, we find that large language models (LLMs) such as GPT-3 and LLaMA-2 can surprisingly zero-shot extrapolate time series at a level comparable to or exceeding the performance of purpose-built time series models trained on the downstream tasks. To facilitate this performance, we propose procedures for effectively tokenizing time series data and converting discrete distributions over tokens into highly flexible densities over continuous values. We argue the success of LLMs for time series stems from their ability to naturally represent multimodal distributions, in conjunction with biases for simplicity and repetition, which align with the salient features in many time series, such as repeated seasonal trends. We also show how LLMs can naturally handle missing data without imputation through non-numerical text, accommodate textual side information, and answer questions to help explain predictions. While we find that increasing model size generally improves performance on time series, we show GPT-4 can perform worse than GPT-3 because of how it tokenizes numbers and its poor uncertainty calibration, which is likely the result of alignment interventions such as RLHF.
|
2209.00586
|
Nikos Fotiou
|
Nikos Fotiou, Iakovos Pittaras, Spiros Chadoulos, Vasilios A. Siris,
George C. Polyzos, Nikolaos Ipiotis, Stratos Keranidis
|
Authentication, Authorization, and Selective Disclosure for IoT data
sharing using Verifiable Credentials and Zero-Knowledge Proofs
|
to appear in ESORICS Workshop on Emerging Technologies for
Authorization and Authentication, ETAA 2022
| null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As IoT becomes omnipresent, vast amounts of data are generated that can be
used for building innovative applications. However, interoperability issues
and security concerns prevent harvesting the full potential of these data. In
this paper we consider the use case of data generated by smart buildings.
Buildings are becoming ever "smarter" by integrating IoT devices that improve
comfort through sensing and automation. However, these devices and their data
are usually siloed in specific applications or manufacturers, even though they
can be valuable for various interested stakeholders who provide different
types of "over the top" services, e.g., energy management. Most data sharing
techniques follow an "all or nothing" approach, creating significant security
and privacy threats, even though partially revealed, privacy-preserving data
subsets can fuel innovative applications. With these in mind, we develop a
platform that enables controlled, privacy-preserving sharing of data items.
Our system innovates in two directions: Firstly, it provides a framework
allowing discovery and selective disclosure of IoT data without violating
their integrity. Secondly, it provides user-friendly, intuitive mechanisms
allowing efficient, fine-grained access control over the shared data. Our
solution leverages recent advances in the areas of Self-Sovereign Identities,
Verifiable Credentials, and Zero-Knowledge Proofs, and integrates them in a
platform that combines the industry-standard authorization framework OAuth
2.0 and the Web of Things specifications.
|
[
{
"created": "Thu, 1 Sep 2022 16:59:03 GMT",
"version": "v1"
}
] |
2022-09-02
|
[
[
"Fotiou",
"Nikos",
""
],
[
"Pittaras",
"Iakovos",
""
],
[
"Chadoulos",
"Spiros",
""
],
[
"Siris",
"Vasilios A.",
""
],
[
"Polyzos",
"George C.",
""
],
[
"Ipiotis",
"Nikolaos",
""
],
[
"Keranidis",
"Stratos",
""
]
] |
As IoT becomes omnipresent, vast amounts of data are generated that can be used for building innovative applications. However, interoperability issues and security concerns prevent harvesting the full potential of these data. In this paper we consider the use case of data generated by smart buildings. Buildings are becoming ever "smarter" by integrating IoT devices that improve comfort through sensing and automation. However, these devices and their data are usually siloed in specific applications or manufacturers, even though they can be valuable for various interested stakeholders who provide different types of "over the top" services, e.g., energy management. Most data sharing techniques follow an "all or nothing" approach, creating significant security and privacy threats, even though partially revealed, privacy-preserving data subsets can fuel innovative applications. With these in mind, we develop a platform that enables controlled, privacy-preserving sharing of data items. Our system innovates in two directions: Firstly, it provides a framework allowing discovery and selective disclosure of IoT data without violating their integrity. Secondly, it provides user-friendly, intuitive mechanisms allowing efficient, fine-grained access control over the shared data. Our solution leverages recent advances in the areas of Self-Sovereign Identities, Verifiable Credentials, and Zero-Knowledge Proofs, and integrates them in a platform that combines the industry-standard authorization framework OAuth 2.0 and the Web of Things specifications.
|
2111.00621
|
Qian Yang
|
Bojian Hou and Hao Zhang and Gur Ladizhinsky and Stephen Yang and
Volodymyr Kuleshov and Fei Wang and Qian Yang
|
Clinical Evidence Engine: Proof-of-Concept For A
Clinical-Domain-Agnostic Decision Support Infrastructure
| null | null | null | null |
cs.AI cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Abstruse learning algorithms and complex datasets increasingly characterize
modern clinical decision support systems (CDSS). As a result, clinicians cannot
easily or rapidly scrutinize the CDSS recommendation when facing a difficult
diagnosis or treatment decision in practice. Over-trust or under-trust are
frequent. Prior research has explored supporting such assessments by explaining
DST data inputs and algorithmic mechanisms. This paper explores a different
approach: Providing precisely relevant, scientific evidence from biomedical
literature. We present a proof-of-concept system, Clinical Evidence Engine, to
demonstrate the technical and design feasibility of this approach across three
domains (cardiovascular diseases, autism, cancer). Leveraging Clinical BioBERT,
the system can effectively identify clinical trial reports based on lengthy
clinical questions (e.g., "risks of catheter infection among adult patients in
intensive care unit who require arterial catheters, if treated with povidone
iodine-alcohol"). This capability enables the system to identify clinical
trials relevant to diagnostic/treatment hypotheses -- a clinician's or a
CDSS's. Further, Clinical Evidence Engine can identify key parts of a clinical
trial abstract, including patient population (e.g., adult patients in intensive
care unit who require arterial catheters), intervention (povidone
iodine-alcohol), and outcome (risks of catheter infection). This capability
opens up the possibility of enabling clinicians to 1) rapidly determine the
match between a clinical trial and a clinical question, and 2) understand the
result and contexts of the trial without extensive reading. We demonstrate this
potential by illustrating two example use scenarios of the system. We discuss
the idea of designing DST explanations not as specific to a DST or an
algorithm, but as a domain-agnostic decision support infrastructure.
|
[
{
"created": "Sun, 31 Oct 2021 23:21:25 GMT",
"version": "v1"
}
] |
2021-11-02
|
[
[
"Hou",
"Bojian",
""
],
[
"Zhang",
"Hao",
""
],
[
"Ladizhinsky",
"Gur",
""
],
[
"Yang",
"Stephen",
""
],
[
"Kuleshov",
"Volodymyr",
""
],
[
"Wang",
"Fei",
""
],
[
"Yang",
"Qian",
""
]
] |
Abstruse learning algorithms and complex datasets increasingly characterize modern clinical decision support systems (CDSS). As a result, clinicians cannot easily or rapidly scrutinize the CDSS recommendation when facing a difficult diagnosis or treatment decision in practice. Over-trust or under-trust are frequent. Prior research has explored supporting such assessments by explaining DST data inputs and algorithmic mechanisms. This paper explores a different approach: Providing precisely relevant, scientific evidence from biomedical literature. We present a proof-of-concept system, Clinical Evidence Engine, to demonstrate the technical and design feasibility of this approach across three domains (cardiovascular diseases, autism, cancer). Leveraging Clinical BioBERT, the system can effectively identify clinical trial reports based on lengthy clinical questions (e.g., "risks of catheter infection among adult patients in intensive care unit who require arterial catheters, if treated with povidone iodine-alcohol"). This capability enables the system to identify clinical trials relevant to diagnostic/treatment hypotheses -- a clinician's or a CDSS's. Further, Clinical Evidence Engine can identify key parts of a clinical trial abstract, including patient population (e.g., adult patients in intensive care unit who require arterial catheters), intervention (povidone iodine-alcohol), and outcome (risks of catheter infection). This capability opens up the possibility of enabling clinicians to 1) rapidly determine the match between a clinical trial and a clinical question, and 2) understand the result and contexts of the trial without extensive reading. We demonstrate this potential by illustrating two example use scenarios of the system. We discuss the idea of designing DST explanations not as specific to a DST or an algorithm, but as a domain-agnostic decision support infrastructure.
|
2208.10472
|
Mark Presten
|
Mark Presten, Rishi Parikh, Shrey Aeron, Sandeep Mukherjee, Simeon
Adebola, Satvik Sharma, Mark Theis, Walter Teitelbaum, and Ken Goldberg
|
Automated Pruning of Polyculture Plants
|
CASE 2022, 8 pages. arXiv admin note: substantial text overlap with
arXiv:2111.06014
| null | null | null |
cs.RO cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Polyculture farming has environmental advantages but requires substantially
more pruning than monoculture farming. We present novel hardware and algorithms
for automated pruning. Using an overhead camera to collect data from a physical
scale garden testbed, the autonomous system utilizes a learned Plant
Phenotyping convolutional neural network and a Bounding Disk Tracking algorithm
to evaluate the individual plant distribution and estimate the state of the
garden each day. From this garden state, AlphaGardenSim selects plants to
autonomously prune. A trained neural network detects and targets specific prune
points on the plant. Two custom-designed pruning tools, compatible with a
FarmBot gantry system, are experimentally evaluated and execute autonomous cuts
through controlled algorithms. We present results for four 60-day garden
cycles. Results suggest the system can autonomously achieve 0.94 normalized
plant diversity with pruning shears while maintaining an average canopy
coverage of 0.84 by the end of the cycles. For code, videos, and datasets, see
https://sites.google.com/berkeley.edu/pruningpolyculture.
|
[
{
"created": "Mon, 22 Aug 2022 17:49:22 GMT",
"version": "v1"
}
] |
2022-08-23
|
[
[
"Presten",
"Mark",
""
],
[
"Parikh",
"Rishi",
""
],
[
"Aeron",
"Shrey",
""
],
[
"Mukherjee",
"Sandeep",
""
],
[
"Adebola",
"Simeon",
""
],
[
"Sharma",
"Satvik",
""
],
[
"Theis",
"Mark",
""
],
[
"Teitelbaum",
"Walter",
""
],
[
"Goldberg",
"Ken",
""
]
] |
Polyculture farming has environmental advantages but requires substantially more pruning than monoculture farming. We present novel hardware and algorithms for automated pruning. Using an overhead camera to collect data from a physical scale garden testbed, the autonomous system utilizes a learned Plant Phenotyping convolutional neural network and a Bounding Disk Tracking algorithm to evaluate the individual plant distribution and estimate the state of the garden each day. From this garden state, AlphaGardenSim selects plants to autonomously prune. A trained neural network detects and targets specific prune points on the plant. Two custom-designed pruning tools, compatible with a FarmBot gantry system, are experimentally evaluated and execute autonomous cuts through controlled algorithms. We present results for four 60-day garden cycles. Results suggest the system can autonomously achieve 0.94 normalized plant diversity with pruning shears while maintaining an average canopy coverage of 0.84 by the end of the cycles. For code, videos, and datasets, see https://sites.google.com/berkeley.edu/pruningpolyculture.
|
1812.10729
|
Olga Gadyatskaya
|
Aleksandr Pilgun, Olga Gadyatskaya, Stanislav Dashevskyi, Yury
Zhauniarovich and Artsiom Kushniarou
|
Fine-grained Code Coverage Measurement in Automated Black-box Android
Testing
| null | null | null | null |
cs.CR cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Today, there are millions of third-party Android applications. Some of these
applications are buggy or even malicious. To identify such applications, novel
frameworks for automated black-box testing and dynamic analysis are being
developed by the Android community, including Google. Code coverage is one of
the most common metrics for evaluating the effectiveness of these frameworks.
Furthermore, code coverage is used as a fitness function for guiding
evolutionary and fuzzy testing techniques. However, there are no reliable tools
for measuring fine-grained code coverage in black-box Android app testing.
We present the Android Code coVerage Tool, ACVTool for short, that
instruments Android apps and measures the code coverage in the black-box
setting at the class, method and instruction granularities. ACVTool has
successfully instrumented 96.9% of apps in our experiments. It introduces a
negligible instrumentation time overhead, and its runtime overhead is
acceptable for automated testing tools. We show in a large-scale experiment
with Sapienz, a state-of-the-art testing tool, that the fine-grained
instruction-level code coverage provided by ACVTool helps to uncover a larger
number of faults than coarser-grained code coverage metrics.
|
[
{
"created": "Thu, 27 Dec 2018 14:20:42 GMT",
"version": "v1"
}
] |
2018-12-31
|
[
[
"Pilgun",
"Aleksandr",
""
],
[
"Gadyatskaya",
"Olga",
""
],
[
"Dashevskyi",
"Stanislav",
""
],
[
"Zhauniarovich",
"Yury",
""
],
[
"Kushniarou",
"Artsiom",
""
]
] |
Today, there are millions of third-party Android applications. Some of these applications are buggy or even malicious. To identify such applications, novel frameworks for automated black-box testing and dynamic analysis are being developed by the Android community, including Google. Code coverage is one of the most common metrics for evaluating the effectiveness of these frameworks. Furthermore, code coverage is used as a fitness function for guiding evolutionary and fuzzy testing techniques. However, there are no reliable tools for measuring fine-grained code coverage in black-box Android app testing. We present the Android Code coVerage Tool, ACVTool for short, that instruments Android apps and measures the code coverage in the black-box setting at the class, method and instruction granularities. ACVTool has successfully instrumented 96.9% of apps in our experiments. It introduces a negligible instrumentation time overhead, and its runtime overhead is acceptable for automated testing tools. We show in a large-scale experiment with Sapienz, a state-of-the-art testing tool, that the fine-grained instruction-level code coverage provided by ACVTool helps to uncover a larger number of faults than coarser-grained code coverage metrics.
|
1904.06447
|
Kyle Genova
|
Kyle Genova, Forrester Cole, Daniel Vlasic, Aaron Sarna, William T.
Freeman, Thomas Funkhouser
|
Learning Shape Templates with Structured Implicit Functions
|
12 pages, 9 figures, 4 tables
| null | null | null |
cs.CV cs.GR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Template 3D shapes are useful for many tasks in graphics and vision,
including fitting observation data, analyzing shape collections, and
transferring shape attributes. Because of the variety of geometry and topology
of real-world shapes, previous methods generally use a library of hand-made
templates. In this paper, we investigate learning a general shape template from
data. To allow for widely varying geometry and topology, we choose an implicit
surface representation based on composition of local shape elements. While long
known to computer graphics, this representation has not yet been explored in
the context of machine learning for vision. We show that structured implicit
functions are suitable for learning and allow a network to smoothly and
simultaneously fit multiple classes of shapes. The learned shape template
supports applications such as shape exploration, correspondence, abstraction,
interpolation, and semantic segmentation from an RGB image.
|
[
{
"created": "Fri, 12 Apr 2019 23:15:47 GMT",
"version": "v1"
}
] |
2019-04-16
|
[
[
"Genova",
"Kyle",
""
],
[
"Cole",
"Forrester",
""
],
[
"Vlasic",
"Daniel",
""
],
[
"Sarna",
"Aaron",
""
],
[
"Freeman",
"William T.",
""
],
[
"Funkhouser",
"Thomas",
""
]
] |
Template 3D shapes are useful for many tasks in graphics and vision, including fitting observation data, analyzing shape collections, and transferring shape attributes. Because of the variety of geometry and topology of real-world shapes, previous methods generally use a library of hand-made templates. In this paper, we investigate learning a general shape template from data. To allow for widely varying geometry and topology, we choose an implicit surface representation based on composition of local shape elements. While long known to computer graphics, this representation has not yet been explored in the context of machine learning for vision. We show that structured implicit functions are suitable for learning and allow a network to smoothly and simultaneously fit multiple classes of shapes. The learned shape template supports applications such as shape exploration, correspondence, abstraction, interpolation, and semantic segmentation from an RGB image.
|
1704.00784
|
Colin Raffel
|
Colin Raffel, Minh-Thang Luong, Peter J. Liu, Ron J. Weiss, Douglas
Eck
|
Online and Linear-Time Attention by Enforcing Monotonic Alignments
|
ICML camera-ready version; 10 pages + 9 page appendix
| null | null | null |
cs.LG cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recurrent neural network models with an attention mechanism have proven to be
extremely effective on a wide variety of sequence-to-sequence problems.
However, the fact that soft attention mechanisms perform a pass over the entire
input sequence when producing each element in the output sequence precludes
their use in online settings and results in a quadratic time complexity. Based
on the insight that the alignment between input and output sequence elements is
monotonic in many problems of interest, we propose an end-to-end differentiable
method for learning monotonic alignments which, at test time, enables computing
attention online and in linear time. We validate our approach on sentence
summarization, machine translation, and online speech recognition problems and
achieve results competitive with existing sequence-to-sequence models.
|
[
{
"created": "Mon, 3 Apr 2017 19:45:27 GMT",
"version": "v1"
},
{
"created": "Thu, 29 Jun 2017 21:14:58 GMT",
"version": "v2"
}
] |
2017-07-03
|
[
[
"Raffel",
"Colin",
""
],
[
"Luong",
"Minh-Thang",
""
],
[
"Liu",
"Peter J.",
""
],
[
"Weiss",
"Ron J.",
""
],
[
"Eck",
"Douglas",
""
]
] |
Recurrent neural network models with an attention mechanism have proven to be extremely effective on a wide variety of sequence-to-sequence problems. However, the fact that soft attention mechanisms perform a pass over the entire input sequence when producing each element in the output sequence precludes their use in online settings and results in a quadratic time complexity. Based on the insight that the alignment between input and output sequence elements is monotonic in many problems of interest, we propose an end-to-end differentiable method for learning monotonic alignments which, at test time, enables computing attention online and in linear time. We validate our approach on sentence summarization, machine translation, and online speech recognition problems and achieve results competitive with existing sequence-to-sequence models.
|
2002.10101
|
Rongxiang Weng
|
Rongxiang Weng, Haoran Wei, Shujian Huang, Heng Yu, Lidong Bing,
Weihua Luo, Jiajun Chen
|
GRET: Global Representation Enhanced Transformer
|
Accepted by AAAI 2020
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Transformer, based on the encoder-decoder framework, has achieved
state-of-the-art performance on several natural language generation tasks. The
encoder maps the words in the input sentence into a sequence of hidden states,
which are then fed into the decoder to generate the output sentence. These
hidden states usually correspond to the input words and focus on capturing
local information. However, the global (sentence level) information is seldom
explored, leaving room for the improvement of generation quality. In this
paper, we propose a novel global representation enhanced Transformer (GRET) to
explicitly model global representation in the Transformer network.
Specifically, in the proposed model, an external state is generated for the
global representation from the encoder. The global representation is then fused
into the decoder during the decoding process to improve generation quality. We
conduct experiments in two text generation tasks: machine translation and text
summarization. Experimental results on four WMT machine translation tasks and
LCSTS text summarization task demonstrate the effectiveness of the proposed
approach on natural language generation.
|
[
{
"created": "Mon, 24 Feb 2020 07:37:17 GMT",
"version": "v1"
}
] |
2020-02-25
|
[
[
"Weng",
"Rongxiang",
""
],
[
"Wei",
"Haoran",
""
],
[
"Huang",
"Shujian",
""
],
[
"Yu",
"Heng",
""
],
[
"Bing",
"Lidong",
""
],
[
"Luo",
"Weihua",
""
],
[
"Chen",
"Jiajun",
""
]
] |
Transformer, based on the encoder-decoder framework, has achieved state-of-the-art performance on several natural language generation tasks. The encoder maps the words in the input sentence into a sequence of hidden states, which are then fed into the decoder to generate the output sentence. These hidden states usually correspond to the input words and focus on capturing local information. However, the global (sentence level) information is seldom explored, leaving room for the improvement of generation quality. In this paper, we propose a novel global representation enhanced Transformer (GRET) to explicitly model global representation in the Transformer network. Specifically, in the proposed model, an external state is generated for the global representation from the encoder. The global representation is then fused into the decoder during the decoding process to improve generation quality. We conduct experiments in two text generation tasks: machine translation and text summarization. Experimental results on four WMT machine translation tasks and LCSTS text summarization task demonstrate the effectiveness of the proposed approach on natural language generation.
|
2206.07511
|
Takfarinas Saber
|
Muhammad Turab and Teerath Kumar and Malika Bendechache and Takfarinas
Saber
|
Investigating Multi-Feature Selection and Ensembling for Audio
Classification
| null | null | null | null |
cs.SD cs.LG eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Deep Learning (DL) algorithms have shown impressive performance in diverse
domains. Among them, audio has attracted many researchers over the last couple
of decades due to some interesting patterns--particularly in classification of
audio data. For better performance of audio classification, feature selection
and combination play a key role as they have the potential to make or break the
performance of any DL model. To investigate this role, we conduct an extensive
evaluation of the performance of several cutting-edge DL models (i.e.,
Convolutional Neural Network, EfficientNet, MobileNet, Support Vector Machine
and Multi-Perceptron) with various state-of-the-art audio features (i.e., Mel
Spectrogram, Mel Frequency Cepstral Coefficients, and Zero Crossing Rate)
either independently or as a combination (i.e., through ensembling) on three
different datasets (i.e., Free Spoken Digits Dataset, Audio Urdu Digits
Dataset, and Audio Gujarati Digits Dataset). Overall, results suggest feature
selection depends on both the dataset and the model. However, feature
combinations should be restricted to only the features that already achieve
good performance when used individually (i.e., mostly Mel Spectrogram, Mel
Frequency Cepstral Coefficients). Such feature combination/ensembling enabled
us to outperform the previous state-of-the-art results irrespective of our
choice of DL model.
|
[
{
"created": "Wed, 15 Jun 2022 13:11:08 GMT",
"version": "v1"
}
] |
2022-06-16
|
[
[
"Turab",
"Muhammad",
""
],
[
"Kumar",
"Teerath",
""
],
[
"Bendechache",
"Malika",
""
],
[
"Saber",
"Takfarinas",
""
]
] |
Deep Learning (DL) algorithms have shown impressive performance in diverse domains. Among them, audio has attracted many researchers over the last couple of decades due to some interesting patterns--particularly in classification of audio data. For better performance of audio classification, feature selection and combination play a key role as they have the potential to make or break the performance of any DL model. To investigate this role, we conduct an extensive evaluation of the performance of several cutting-edge DL models (i.e., Convolutional Neural Network, EfficientNet, MobileNet, Support Vector Machine and Multi-Perceptron) with various state-of-the-art audio features (i.e., Mel Spectrogram, Mel Frequency Cepstral Coefficients, and Zero Crossing Rate) either independently or as a combination (i.e., through ensembling) on three different datasets (i.e., Free Spoken Digits Dataset, Audio Urdu Digits Dataset, and Audio Gujarati Digits Dataset). Overall, results suggest feature selection depends on both the dataset and the model. However, feature combinations should be restricted to only the features that already achieve good performance when used individually (i.e., mostly Mel Spectrogram, Mel Frequency Cepstral Coefficients). Such feature combination/ensembling enabled us to outperform the previous state-of-the-art results irrespective of our choice of DL model.
|
2209.03851
|
Vadim Porvatov
|
Vadim Porvatov, Natalia Semenova
|
5q032e@SMM4H'22: Transformer-based classification of premise in tweets
related to COVID-19
|
Accepted at SMM4H Workshop of COLING'22
|
Mining for Health Applications, Workshop & Shared Task (SMM4H
2022) (p. 108)
| null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Automation of social network data assessment is one of the classic challenges
of natural language processing. During the COVID-19 pandemic, mining people's
stances from public messages has become crucial for understanding
attitudes towards health orders. In this paper, the authors propose a
predictive model based on a transformer architecture to classify the presence of
premise in Twitter texts. This work is completed as part of the Social Media
Mining for Health (SMM4H) Workshop 2022. We explored modern transformer-based
classifiers in order to construct a pipeline that efficiently captures tweet
semantics. Our experiments on a Twitter dataset showed that RoBERTa is superior
to the other transformer models in the case of the premise prediction task. The
model achieved competitive performance, with a ROC AUC of 0.807 and an F1 score
of 0.7648.
|
[
{
"created": "Thu, 8 Sep 2022 14:46:28 GMT",
"version": "v1"
},
{
"created": "Sun, 15 Oct 2023 08:42:33 GMT",
"version": "v2"
}
] |
2023-10-18
|
[
[
"Porvatov",
"Vadim",
""
],
[
"Semenova",
"Natalia",
""
]
] |
Automation of social network data assessment is one of the classic challenges of natural language processing. During the COVID-19 pandemic, mining people's stances from public messages has become crucial for understanding attitudes towards health orders. In this paper, the authors propose a predictive model based on a transformer architecture to classify the presence of premise in Twitter texts. This work is completed as part of the Social Media Mining for Health (SMM4H) Workshop 2022. We explored modern transformer-based classifiers in order to construct a pipeline that efficiently captures tweet semantics. Our experiments on a Twitter dataset showed that RoBERTa is superior to the other transformer models in the case of the premise prediction task. The model achieved competitive performance, with a ROC AUC of 0.807 and an F1 score of 0.7648.
|
2201.03655
|
Chhavi Choudhury
|
Chhavi Choudhury, Ankur Gandhe, Xiaohan Ding, Ivan Bulyko
|
A Likelihood Ratio based Domain Adaptation Method for E2E Models
|
Submitted to ICASSP 2022
| null | null | null |
cs.CL cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
End-to-end (E2E) automatic speech recognition models like Recurrent Neural
Networks Transducer (RNN-T) are becoming a popular choice for streaming ASR
applications like voice assistants. While E2E models are very effective at
learning representation of the training data they are trained on, their
accuracy on unseen domains remains a challenging problem. Additionally, these
models require paired audio and text training data, are computationally
expensive and are difficult to adapt to the fast-evolving nature of
conversational speech. In this work, we explore a contextual biasing approach
using likelihood ratio that leverages text data sources to adapt the RNN-T model to
new domains and entities. We show that this method is effective in improving
rare words recognition, and results in a relative improvement of 10% in 1-best
word error rate (WER) and 10% in n-best Oracle WER (n=8) on multiple
out-of-domain datasets without any degradation on a general dataset. We also
show that complementing the contextual biasing adaptation with adaptation of a
second-pass rescoring model gives additive WER improvements.
|
[
{
"created": "Mon, 10 Jan 2022 21:22:39 GMT",
"version": "v1"
}
] |
2022-01-12
|
[
[
"Choudhury",
"Chhavi",
""
],
[
"Gandhe",
"Ankur",
""
],
[
"Ding",
"Xiaohan",
""
],
[
"Bulyko",
"Ivan",
""
]
] |
End-to-end (E2E) automatic speech recognition models like Recurrent Neural Networks Transducer (RNN-T) are becoming a popular choice for streaming ASR applications like voice assistants. While E2E models are very effective at learning representation of the training data they are trained on, their accuracy on unseen domains remains a challenging problem. Additionally, these models require paired audio and text training data, are computationally expensive and are difficult to adapt to the fast-evolving nature of conversational speech. In this work, we explore a contextual biasing approach using likelihood ratio that leverages text data sources to adapt the RNN-T model to new domains and entities. We show that this method is effective in improving rare words recognition, and results in a relative improvement of 10% in 1-best word error rate (WER) and 10% in n-best Oracle WER (n=8) on multiple out-of-domain datasets without any degradation on a general dataset. We also show that complementing the contextual biasing adaptation with adaptation of a second-pass rescoring model gives additive WER improvements.
|
2306.04934
|
Jianhong Bai
|
Jianhong Bai, Zuozhu Liu, Hualiang Wang, Jin Hao, Yang Feng, Huanpeng
Chu, Haoji Hu
|
On the Effectiveness of Out-of-Distribution Data in Self-Supervised
Long-Tail Learning
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Though Self-supervised learning (SSL) has been widely studied as a promising
technique for representation learning, it doesn't generalize well on
long-tailed datasets due to the majority classes dominating the feature space.
Recent work shows that the long-tailed learning performance could be boosted by
sampling extra in-domain (ID) data for self-supervised training, however,
large-scale ID data which can rebalance the minority classes are expensive to
collect. In this paper, we propose an alternative but easy-to-use and effective
solution, Contrastive with Out-of-distribution (OOD) data for Long-Tail
learning (COLT), which can effectively exploit OOD data to dynamically
re-balance the feature space. We empirically identify the counter-intuitive
usefulness of OOD samples in SSL long-tailed learning and principally design a
novel SSL method. Concretely, we first localize the `head' and `tail' samples
by assigning a tailness score to each OOD sample based on its neighborhoods in
the feature space. Then, we propose an online OOD sampling strategy to
dynamically re-balance the feature space. Finally, we enforce the model to be
capable of distinguishing ID and OOD samples by a distribution-level supervised
contrastive loss. Extensive experiments are conducted on various datasets and
several state-of-the-art SSL frameworks to verify the effectiveness of the
proposed method. The results show that our method significantly improves the
performance of SSL on long-tailed datasets by a large margin, and even
outperforms previous work which uses external ID data. Our code is available at
https://github.com/JianhongBai/COLT.
|
[
{
"created": "Thu, 8 Jun 2023 04:32:10 GMT",
"version": "v1"
},
{
"created": "Wed, 12 Jul 2023 06:20:04 GMT",
"version": "v2"
}
] |
2023-07-13
|
[
[
"Bai",
"Jianhong",
""
],
[
"Liu",
"Zuozhu",
""
],
[
"Wang",
"Hualiang",
""
],
[
"Hao",
"Jin",
""
],
[
"Feng",
"Yang",
""
],
[
"Chu",
"Huanpeng",
""
],
[
"Hu",
"Haoji",
""
]
] |
Though Self-supervised learning (SSL) has been widely studied as a promising technique for representation learning, it doesn't generalize well on long-tailed datasets due to the majority classes dominating the feature space. Recent work shows that the long-tailed learning performance could be boosted by sampling extra in-domain (ID) data for self-supervised training, however, large-scale ID data which can rebalance the minority classes are expensive to collect. In this paper, we propose an alternative but easy-to-use and effective solution, Contrastive with Out-of-distribution (OOD) data for Long-Tail learning (COLT), which can effectively exploit OOD data to dynamically re-balance the feature space. We empirically identify the counter-intuitive usefulness of OOD samples in SSL long-tailed learning and principally design a novel SSL method. Concretely, we first localize the `head' and `tail' samples by assigning a tailness score to each OOD sample based on its neighborhoods in the feature space. Then, we propose an online OOD sampling strategy to dynamically re-balance the feature space. Finally, we enforce the model to be capable of distinguishing ID and OOD samples by a distribution-level supervised contrastive loss. Extensive experiments are conducted on various datasets and several state-of-the-art SSL frameworks to verify the effectiveness of the proposed method. The results show that our method significantly improves the performance of SSL on long-tailed datasets by a large margin, and even outperforms previous work which uses external ID data. Our code is available at https://github.com/JianhongBai/COLT.
|
1603.01820
|
Liat Peterfreund
|
Benny Kimelfeld, Ester Livshits, Liat Peterfreund
|
Unambiguous Prioritized Repairing of Databases
| null | null | null | null |
cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In its traditional definition, a repair of an inconsistent database is a
consistent database that differs from the inconsistent one in a "minimal way".
Often, repairs are not equally legitimate, as it is desired to prefer one over
another; for example, one fact is regarded as more reliable than another, or a
more recent fact should be preferred to an earlier one. Motivated by these
considerations, researchers have introduced and investigated the framework of
preferred repairs, in the context of denial constraints and subset repairs.
There, a priority relation between facts is lifted towards a priority relation
between consistent databases, and repairs are restricted to the ones that are
optimal in the lifted sense. Three notions of lifting (and optimal repairs)
have been proposed: Pareto, global, and completion.
In this paper we investigate the complexity of deciding whether the priority
relation suffices to clean the database unambiguously, or in other words,
whether there is exactly one optimal repair. We show that the different lifting
semantics entail highly different complexities. Under Pareto optimality, the
problem is coNP-complete, in data complexity, for every set of functional
dependencies (FDs), except for the tractable case of (equivalence to) one FD
per relation. Under global optimality, one FD per relation is still tractable,
but we establish $\Pi^{p}_{2}$-completeness for a relation with two FDs. In
contrast, under completion optimality the problem is solvable in polynomial
time for every set of FDs. In fact, we present a polynomial-time algorithm for
arbitrary conflict hypergraphs. We further show that under a general assumption
of transitivity, this algorithm solves the problem even for global optimality.
The algorithm is extremely simple, but its proof of correctness is quite
intricate.
|
[
{
"created": "Sun, 6 Mar 2016 12:44:54 GMT",
"version": "v1"
}
] |
2016-03-08
|
[
[
"Kimelfeld",
"Benny",
""
],
[
"Livshits",
"Ester",
""
],
[
"Peterfreund",
"Liat",
""
]
] |
In its traditional definition, a repair of an inconsistent database is a consistent database that differs from the inconsistent one in a "minimal way". Often, repairs are not equally legitimate, as it is desired to prefer one over another; for example, one fact is regarded as more reliable than another, or a more recent fact should be preferred to an earlier one. Motivated by these considerations, researchers have introduced and investigated the framework of preferred repairs, in the context of denial constraints and subset repairs. There, a priority relation between facts is lifted towards a priority relation between consistent databases, and repairs are restricted to the ones that are optimal in the lifted sense. Three notions of lifting (and optimal repairs) have been proposed: Pareto, global, and completion. In this paper we investigate the complexity of deciding whether the priority relation suffices to clean the database unambiguously, or in other words, whether there is exactly one optimal repair. We show that the different lifting semantics entail highly different complexities. Under Pareto optimality, the problem is coNP-complete, in data complexity, for every set of functional dependencies (FDs), except for the tractable case of (equivalence to) one FD per relation. Under global optimality, one FD per relation is still tractable, but we establish $\Pi^{p}_{2}$-completeness for a relation with two FDs. In contrast, under completion optimality the problem is solvable in polynomial time for every set of FDs. In fact, we present a polynomial-time algorithm for arbitrary conflict hypergraphs. We further show that under a general assumption of transitivity, this algorithm solves the problem even for global optimality. The algorithm is extremely simple, but its proof of correctness is quite intricate.
|
1706.00493
|
Ling Zhang
|
Ling Zhang, Le Lu, Ronald M. Summers, Electron Kebebew, Jianhua Yao
|
Personalized Pancreatic Tumor Growth Prediction via Group Learning
| null | null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Tumor growth prediction, a highly challenging task, has long been viewed as a
mathematical modeling problem, where the tumor growth pattern is personalized
based on imaging and clinical data of a target patient. Though mathematical
models yield promising results, their prediction accuracy may be limited by the
absence of population trend data and personalized clinical characteristics. In
this paper, we propose a statistical group learning approach to predict the
tumor growth pattern that incorporates both the population trend and
personalized data, in order to discover high-level features from multimodal
imaging data. A deep convolutional neural network approach is developed to
model the voxel-wise spatio-temporal tumor progression. The deep features are
combined with the time intervals and the clinical factors to feed a process of
feature selection. Our predictive model is pretrained on a group data set and
personalized on the target patient data to estimate the future spatio-temporal
progression of the patient's tumor. Multimodal imaging data at multiple time
points are used in the learning, personalization and inference stages. Our
method achieves a Dice coefficient of 86.8% +- 3.6% and RVD of 7.9% +- 5.4% on
a pancreatic tumor data set, outperforming the DSC of 84.4% +- 4.0% and RVD
13.9% +- 9.8% obtained by a previous state-of-the-art model-based method.
|
[
{
"created": "Thu, 1 Jun 2017 20:57:53 GMT",
"version": "v1"
}
] |
2017-06-05
|
[
[
"Zhang",
"Ling",
""
],
[
"Lu",
"Le",
""
],
[
"Summers",
"Ronald M.",
""
],
[
"Kebebew",
"Electron",
""
],
[
"Yao",
"Jianhua",
""
]
] |
Tumor growth prediction, a highly challenging task, has long been viewed as a mathematical modeling problem, where the tumor growth pattern is personalized based on imaging and clinical data of a target patient. Though mathematical models yield promising results, their prediction accuracy may be limited by the absence of population trend data and personalized clinical characteristics. In this paper, we propose a statistical group learning approach to predict the tumor growth pattern that incorporates both the population trend and personalized data, in order to discover high-level features from multimodal imaging data. A deep convolutional neural network approach is developed to model the voxel-wise spatio-temporal tumor progression. The deep features are combined with the time intervals and the clinical factors to feed a process of feature selection. Our predictive model is pretrained on a group data set and personalized on the target patient data to estimate the future spatio-temporal progression of the patient's tumor. Multimodal imaging data at multiple time points are used in the learning, personalization and inference stages. Our method achieves a Dice coefficient of 86.8% +- 3.6% and RVD of 7.9% +- 5.4% on a pancreatic tumor data set, outperforming the DSC of 84.4% +- 4.0% and RVD 13.9% +- 9.8% obtained by a previous state-of-the-art model-based method.
|
1406.4518
|
Erfan Khaji Mr.
|
Erfan Khaji, Amin Satlikh Mohammadi
|
A Heuristic Method to Generate Better Initial Population for
Evolutionary Methods
| null | null | null | null |
cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Initial population plays an important role in heuristic algorithms such as GA
as it helps to decrease the time those algorithms need to achieve an acceptable
result. Furthermore, it may influence the quality of the final answer given by
evolutionary algorithms. In this paper, we shall introduce a heuristic method
to generate a target-based initial population which possesses the two mentioned
characteristics. The efficiency of the proposed method has been shown by
presenting the results of our tests on the benchmarks.
|
[
{
"created": "Sun, 15 Jun 2014 15:20:17 GMT",
"version": "v1"
}
] |
2014-06-19
|
[
[
"Khaji",
"Erfan",
""
],
[
"Mohammadi",
"Amin Satlikh",
""
]
] |
Initial population plays an important role in heuristic algorithms such as GA as it helps to decrease the time those algorithms need to achieve an acceptable result. Furthermore, it may influence the quality of the final answer given by evolutionary algorithms. In this paper, we shall introduce a heuristic method to generate a target-based initial population which possesses the two mentioned characteristics. The efficiency of the proposed method has been shown by presenting the results of our tests on the benchmarks.
|
1607.02678
|
Wei Li
|
Wei Li, Farnaz Abtahi, Christina Tsangouri, Zhigang Zhu
|
Towards an "In-the-Wild" Emotion Dataset Using a Game-based Framework
|
This paper is accepted at CVPR 2016 Workshop
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In order to create an "in-the-wild" dataset of facial emotions with a large
number of balanced samples, this paper proposes a game-based data collection
framework. The framework mainly includes three components: a game engine, a game
interface, and a data collection and evaluation module. We use a deep learning
approach to build an emotion classifier as the game engine. Then an emotion web
game allows gamers to enjoy the games, while the data collection module
obtains automatically-labelled emotion images. Using our game, we have
collected more than 15,000 images within a month of the test run and built an
emotion dataset "GaMo". To evaluate the dataset, we compared the performance of
two deep learning models trained on both GaMo and CIFE. The results of our
experiments show that because of being large and balanced, GaMo can be used to
build a more robust emotion detector than the emotion detector trained on CIFE,
which was used in the game engine to collect the face images.
|
[
{
"created": "Sun, 10 Jul 2016 02:16:10 GMT",
"version": "v1"
}
] |
2016-07-12
|
[
[
"Li",
"Wei",
""
],
[
"Abtahi",
"Farnaz",
""
],
[
"Tsangouri",
"Christina",
""
],
[
"Zhu",
"Zhigang",
""
]
] |
In order to create an "in-the-wild" dataset of facial emotions with a large number of balanced samples, this paper proposes a game-based data collection framework. The framework mainly includes three components: a game engine, a game interface, and a data collection and evaluation module. We use a deep learning approach to build an emotion classifier as the game engine. We then build an emotion web game that allows gamers to enjoy the games, while the data collection module obtains automatically-labelled emotion images. Using our game, we have collected more than 15,000 images within a month of the test run and built an emotion dataset "GaMo". To evaluate the dataset, we compared the performance of two deep learning models trained on both GaMo and CIFE. The results of our experiments show that because of being large and balanced, GaMo can be used to build a more robust emotion detector than the emotion detector trained on CIFE, which was used in the game engine to collect the face images.
|
1502.06369
|
Mostafa Zaman Chowdhury
|
Mostafa Zaman Chowdhury, Bui Minh Trung, and Yeong Min Jang
|
Neighbor Cell List Optimization for Femtocell-to-Femtocell Handover in
Dense Femtocellular Networks
|
International Conference on Ubiquitous and Future Networks (ICUFN),
June 2011, Dalian, China, pp 241-245
| null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by/3.0/
|
Dense femtocells are the ultimate goal of the femtocellular network
deployment. Among three types of handovers: femtocell-to-macrocell,
macrocell-to-femtocell, and femtocell-to-femtocell, the latter two are the main
concern for the dense femtocellular network deployment. For these handover
cases, a minimal yet appropriate neighbor cell list is the key element for a
successful handover. In this paper, we propose an algorithm to build a minimal
but appropriate neighbor femtocell list for the femtocell-to-femtocell
handover. Our algorithm considers the received signal level from femto APs
(FAPs); open and closed access cases; and the frequencies detected from the
neighbor femtocells. The simulation results show that the proposed scheme is
able to attain a minimal but optimal neighbor femtocell list for the possible
femtocell-to-femtocell handover.
|
[
{
"created": "Mon, 23 Feb 2015 10:16:30 GMT",
"version": "v1"
}
] |
2015-02-24
|
[
[
"Chowdhury",
"Mostafa Zaman",
""
],
[
"Trung",
"Bui Minh",
""
],
[
"Jang",
"Yeong Min",
""
]
] |
Dense femtocells are the ultimate goal of the femtocellular network deployment. Among three types of handovers: femtocell-to-macrocell, macrocell-to-femtocell, and femtocell-to-femtocell, the latter two are the main concern for the dense femtocellular network deployment. For these handover cases, a minimal yet appropriate neighbor cell list is the key element for a successful handover. In this paper, we propose an algorithm to build a minimal but appropriate neighbor femtocell list for the femtocell-to-femtocell handover. Our algorithm considers the received signal level from femto APs (FAPs); open and closed access cases; and the frequencies detected from the neighbor femtocells. The simulation results show that the proposed scheme is able to attain a minimal but optimal neighbor femtocell list for the possible femtocell-to-femtocell handover.
|
2407.06339
|
Hongbo Zhu
|
Hongbo Zhu, Theodor Wulff, Rahul Singh Maharjan, Jinpei Han and Angelo
Cangelosi
|
Noise-Free Explanation for Driving Action Prediction
|
10 pages, 10 figures
| null | null | null |
cs.CV cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Although attention mechanisms have achieved considerable progress in
Transformer-based architectures across various Artificial Intelligence (AI)
domains, their inner workings remain to be explored. Existing explainable
methods have different emphases but are rather one-sided. They primarily
analyse the attention mechanisms or gradient-based attribution while neglecting
the magnitudes of input feature values or the skip-connection module. Moreover,
they inevitably bring spurious noisy pixel attributions unrelated to the
model's decision, hindering humans' trust in the spotted visualization result.
Hence, we propose an easy-to-implement but effective way to remedy this flaw:
Smooth Noise Norm Attention (SNNA). We weigh the attention by the norm of the
transformed value vector and guide the label-specific signal with the attention
gradient, then randomly sample the input perturbations and average the
corresponding gradients to produce noise-free attribution. Instead of
evaluating the explanation method on the binary or multi-class classification
tasks like in previous works, we explore the more complex multi-label
classification scenario in this work, i.e., the driving action prediction task,
and trained a model for it specifically. Both qualitative and quantitative
evaluation results show the superiority of SNNA compared to other SOTA
attention-based explainable methods in generating a clearer visual explanation
map and ranking the input pixel importance.
|
[
{
"created": "Mon, 8 Jul 2024 19:21:24 GMT",
"version": "v1"
}
] |
2024-07-10
|
[
[
"Zhu",
"Hongbo",
""
],
[
"Wulff",
"Theodor",
""
],
[
"Maharjan",
"Rahul Singh",
""
],
[
"Han",
"Jinpei",
""
],
[
"Cangelosi",
"Angelo",
""
]
] |
Although attention mechanisms have achieved considerable progress in Transformer-based architectures across various Artificial Intelligence (AI) domains, their inner workings remain to be explored. Existing explainable methods have different emphases but are rather one-sided. They primarily analyse the attention mechanisms or gradient-based attribution while neglecting the magnitudes of input feature values or the skip-connection module. Moreover, they inevitably bring spurious noisy pixel attributions unrelated to the model's decision, hindering humans' trust in the spotted visualization result. Hence, we propose an easy-to-implement but effective way to remedy this flaw: Smooth Noise Norm Attention (SNNA). We weigh the attention by the norm of the transformed value vector and guide the label-specific signal with the attention gradient, then randomly sample the input perturbations and average the corresponding gradients to produce noise-free attribution. Instead of evaluating the explanation method on the binary or multi-class classification tasks like in previous works, we explore the more complex multi-label classification scenario in this work, i.e., the driving action prediction task, and trained a model for it specifically. Both qualitative and quantitative evaluation results show the superiority of SNNA compared to other SOTA attention-based explainable methods in generating a clearer visual explanation map and ranking the input pixel importance.
|
2302.12211
|
Yichao Du
|
Yichao Du, Zhirui Zhang, Bingzhe Wu, Lemao Liu, Tong Xu and Enhong
Chen
|
Federated Nearest Neighbor Machine Translation
|
ICLR 2023
| null | null | null |
cs.CL cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To protect user privacy and meet legal regulations, federated learning (FL)
is attracting significant attention. Training neural machine translation (NMT)
models with traditional FL algorithm (e.g., FedAvg) typically relies on
multi-round model-based interactions. However, it is impractical and
inefficient for machine translation tasks due to the vast communication
overheads and heavy synchronization. In this paper, we propose a novel
federated nearest neighbor (FedNN) machine translation framework that, instead
of multi-round model-based interactions, leverages one-round memorization-based
interaction to share knowledge across different clients to build low-overhead
privacy-preserving systems. The whole approach equips the public NMT model
trained on large-scale accessible data with a $k$-nearest-neighbor ($k$NN)
classifier and integrates the external datastore constructed by private text
data in all clients to form the final FL model. A two-phase datastore
encryption strategy is introduced to achieve privacy-preserving during this
process. Extensive experiments show that FedNN significantly reduces
computational and communication costs compared with FedAvg, while maintaining
promising performance in different FL settings.
|
[
{
"created": "Thu, 23 Feb 2023 18:04:07 GMT",
"version": "v1"
}
] |
2023-02-24
|
[
[
"Du",
"Yichao",
""
],
[
"Zhang",
"Zhirui",
""
],
[
"Wu",
"Bingzhe",
""
],
[
"Liu",
"Lemao",
""
],
[
"Xu",
"Tong",
""
],
[
"Chen",
"Enhong",
""
]
] |
To protect user privacy and meet legal regulations, federated learning (FL) is attracting significant attention. Training neural machine translation (NMT) models with traditional FL algorithm (e.g., FedAvg) typically relies on multi-round model-based interactions. However, it is impractical and inefficient for machine translation tasks due to the vast communication overheads and heavy synchronization. In this paper, we propose a novel federated nearest neighbor (FedNN) machine translation framework that, instead of multi-round model-based interactions, leverages one-round memorization-based interaction to share knowledge across different clients to build low-overhead privacy-preserving systems. The whole approach equips the public NMT model trained on large-scale accessible data with a $k$-nearest-neighbor ($k$NN) classifier and integrates the external datastore constructed by private text data in all clients to form the final FL model. A two-phase datastore encryption strategy is introduced to achieve privacy-preserving during this process. Extensive experiments show that FedNN significantly reduces computational and communication costs compared with FedAvg, while maintaining promising performance in different FL settings.
|
1901.00885
|
Chung Hyuk Park
|
Hifza Javed, Rachael Burns, Myounghoon Jeon, Ayanna M. Howard, Chung
Hyuk Park
|
An Interactive Robotic Framework to Facilitate Sensory Experiences for
Children with ASD
|
18 pages, 12 figures
| null | null | null |
cs.RO cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The diagnosis of Autism Spectrum Disorder (ASD) in children is commonly
accompanied by a diagnosis of sensory processing disorders as well.
Abnormalities are usually reported in multiple sensory processing domains,
showing a higher prevalence of unusual responses, particularly to tactile,
auditory and visual stimuli. This paper discusses a novel robot-based framework
designed to target sensory difficulties faced by children with ASD in a
controlled setting. The setup consists of a number of sensory stations,
together with robotic agents that navigate the stations and interact with the
stimuli as they are presented. These stimuli are designed to resemble real
world scenarios that form a common part of one's everyday experiences. Given
the strong interest of children with ASD in technology in general and robots in
particular, we attempt to utilize our robotic platform to demonstrate socially
acceptable responses to the stimuli in an interactive, pedagogical setting that
encourages the child's social, motor and vocal skills, while providing a
diverse sensory experience. A user study was conducted to evaluate the efficacy
of the proposed framework, with a total of 18 participants (5 with ASD and 13
typically developing) between the ages of 4 and 12 years. We describe our
methods of data collection, coding of video data and the analysis of the
results obtained from the study. We also discuss the limitations of the current
work and detail our plans for the future work to improve the validity of the
obtained results.
|
[
{
"created": "Thu, 3 Jan 2019 19:22:13 GMT",
"version": "v1"
}
] |
2019-01-07
|
[
[
"Javed",
"Hifza",
""
],
[
"Burns",
"Rachael",
""
],
[
"Jeon",
"Myounghoon",
""
],
[
"Howard",
"Ayanna M.",
""
],
[
"Park",
"Chung Hyuk",
""
]
] |
The diagnosis of Autism Spectrum Disorder (ASD) in children is commonly accompanied by a diagnosis of sensory processing disorders as well. Abnormalities are usually reported in multiple sensory processing domains, showing a higher prevalence of unusual responses, particularly to tactile, auditory and visual stimuli. This paper discusses a novel robot-based framework designed to target sensory difficulties faced by children with ASD in a controlled setting. The setup consists of a number of sensory stations, together with robotic agents that navigate the stations and interact with the stimuli as they are presented. These stimuli are designed to resemble real world scenarios that form a common part of one's everyday experiences. Given the strong interest of children with ASD in technology in general and robots in particular, we attempt to utilize our robotic platform to demonstrate socially acceptable responses to the stimuli in an interactive, pedagogical setting that encourages the child's social, motor and vocal skills, while providing a diverse sensory experience. A user study was conducted to evaluate the efficacy of the proposed framework, with a total of 18 participants (5 with ASD and 13 typically developing) between the ages of 4 and 12 years. We describe our methods of data collection, coding of video data and the analysis of the results obtained from the study. We also discuss the limitations of the current work and detail our plans for the future work to improve the validity of the obtained results.
|
1809.03779
|
Zenith Purisha
|
Zenith Purisha, Carl Jidling, Niklas Wahlstr\"om, Simo S\"arkk\"a,
Thomas B. Sch\"on
|
Probabilistic approach to limited-data computed tomography
reconstruction
| null | null |
10.1088/1361-6420/ab2e2a
| null |
cs.CV cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work, we consider the inverse problem of reconstructing the internal
structure of an object from limited x-ray projections. We use a Gaussian
process prior to model the target function and estimate its (hyper)parameters
from measured data. In contrast to other established methods, this comes with
the advantage of not requiring any manual parameter tuning, which usually
arises in classical regularization strategies. Our method uses a basis function
expansion technique for the Gaussian process which significantly reduces the
computational complexity and avoids the need for numerical integration. The
approach also allows for reformulation of some classical regularization
methods, such as Laplacian and Tikhonov regularization, as Gaussian process
regression, and hence provides an efficient algorithm and principled means for
their parameter
tuning. Results from simulated and real data indicate that this approach is
less sensitive to streak artifacts as compared to the commonly used method of
filtered backprojection.
|
[
{
"created": "Tue, 11 Sep 2018 10:16:44 GMT",
"version": "v1"
},
{
"created": "Mon, 25 Feb 2019 09:07:08 GMT",
"version": "v2"
},
{
"created": "Wed, 3 Jul 2019 08:30:06 GMT",
"version": "v3"
}
] |
2019-07-04
|
[
[
"Purisha",
"Zenith",
""
],
[
"Jidling",
"Carl",
""
],
[
"Wahlström",
"Niklas",
""
],
[
"Särkkä",
"Simo",
""
],
[
"Schön",
"Thomas B.",
""
]
] |
In this work, we consider the inverse problem of reconstructing the internal structure of an object from limited x-ray projections. We use a Gaussian process prior to model the target function and estimate its (hyper)parameters from measured data. In contrast to other established methods, this comes with the advantage of not requiring any manual parameter tuning, which usually arises in classical regularization strategies. Our method uses a basis function expansion technique for the Gaussian process which significantly reduces the computational complexity and avoids the need for numerical integration. The approach also allows for reformulation of some classical regularization methods, such as Laplacian and Tikhonov regularization, as Gaussian process regression, and hence provides an efficient algorithm and principled means for their parameter tuning. Results from simulated and real data indicate that this approach is less sensitive to streak artifacts as compared to the commonly used method of filtered backprojection.
|
1107.0068
|
EPTCS
|
Francisco Dur\'an (Universidad de M\'alaga), Martin Gogolla
(University of Bremen), Manuel Rold\'an (Universidad de M\'alaga)
|
Tracing Properties of UML and OCL Models with Maude
|
In Proceedings AMMSE 2011, arXiv:1106.5962
|
EPTCS 56, 2011, pp. 81-97
|
10.4204/EPTCS.56.6
| null |
cs.SE cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The starting point of this paper is a system described in the form of a UML
class
diagram where system states are characterized by OCL invariants and system
transitions are defined by OCL pre- and postconditions. The aim of our approach
is to assist the developer in learning about the consequences of the described
system states and transitions and about the formal implications of the
properties that are explicitly given. We propose to draw conclusions about the
stated constraints by translating the UML and OCL model into the algebraic
specification language and system Maude, which is based on rewrite logic. We
will concentrate in this paper on employing Maude's capabilities for state
search. Maude's state search offers the possibility to describe a start
configuration of the system and then explore all configurations reachable by
rewriting. The search can be adjusted by formulating requirements for the
allowed states and the allowed transitions.
|
[
{
"created": "Thu, 30 Jun 2011 21:45:07 GMT",
"version": "v1"
}
] |
2011-07-04
|
[
[
"Durán",
"Francisco",
"",
"Universidad de Málaga"
],
[
"Gogolla",
"Martin",
"",
"University of Bremen"
],
[
"Roldán",
"Manuel",
"",
"Universidad de Málaga"
]
] |
The starting point of this paper is a system described in the form of a UML class diagram where system states are characterized by OCL invariants and system transitions are defined by OCL pre- and postconditions. The aim of our approach is to assist the developer in learning about the consequences of the described system states and transitions and about the formal implications of the properties that are explicitly given. We propose to draw conclusions about the stated constraints by translating the UML and OCL model into the algebraic specification language and system Maude, which is based on rewrite logic. We will concentrate in this paper on employing Maude's capabilities for state search. Maude's state search offers the possibility to describe a start configuration of the system and then explore all configurations reachable by rewriting. The search can be adjusted by formulating requirements for the allowed states and the allowed transitions.
|
1904.02148
|
Ben Smyth
|
Ben Smyth
|
TLS 1.3 for engineers: An exploration of the TLS 1.3 specification and
OpenJDK's Java implementation
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Internet delivered in excess of forty terabytes per second in 2017
(Cisco, 2018), and over half of today's Internet traffic is encrypted
(Sandvine, 2018); enabling trade worth trillions of dollars (Statista, 2017).
Yet, the underlying encryption technology is only understood by a select few.
This manuscript broadens understanding by exploring TLS, an encryption
technology used to protect application layer communication (including HTTP, FTP
and SMTP traffic), and by examining Oracle's Java implementation. We focus on
the most recent TLS release, namely, version 1.3, which is defined by RFC 8446.
|
[
{
"created": "Tue, 22 Jan 2019 09:46:00 GMT",
"version": "v1"
},
{
"created": "Wed, 27 May 2020 05:24:27 GMT",
"version": "v2"
},
{
"created": "Thu, 28 May 2020 13:20:47 GMT",
"version": "v3"
},
{
"created": "Wed, 30 Sep 2020 14:28:59 GMT",
"version": "v4"
}
] |
2020-10-02
|
[
[
"Smyth",
"Ben",
""
]
] |
The Internet delivered in excess of forty terabytes per second in 2017 (Cisco, 2018), and over half of today's Internet traffic is encrypted (Sandvine, 2018); enabling trade worth trillions of dollars (Statista, 2017). Yet, the underlying encryption technology is only understood by a select few. This manuscript broadens understanding by exploring TLS, an encryption technology used to protect application layer communication (including HTTP, FTP and SMTP traffic), and by examining Oracle's Java implementation. We focus on the most recent TLS release, namely, version 1.3, which is defined by RFC 8446.
|
2305.19245
|
Thu Nguyen-Phuoc
|
Thu Nguyen-Phuoc, Gabriel Schwartz, Yuting Ye, Stephen Lombardi, Lei
Xiao
|
AlteredAvatar: Stylizing Dynamic 3D Avatars with Fast Style Adaptation
|
10 main pages, 14 figures. Project page:
https://alteredavatar.github.io
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
This paper presents a method that can quickly adapt dynamic 3D avatars to
arbitrary text descriptions of novel styles. Among existing approaches for
avatar stylization, direct optimization methods can produce excellent results
for arbitrary styles but they are unpleasantly slow. Furthermore, they require
redoing the optimization process from scratch for every new input. Fast
approximation methods using feed-forward networks trained on a large dataset of
style images can generate results for new inputs quickly, but tend not to
generalize well to novel styles and fall short in quality. We therefore
investigate a new approach, AlteredAvatar, that combines those two approaches
using the meta-learning framework. In the inner loop, the model learns to
optimize to match a single target style well; while in the outer loop, the
model learns to stylize efficiently across many styles. After training,
AlteredAvatar learns an initialization that can quickly adapt within a small
number of update steps to a novel style, which can be given using texts, a
reference image, or a combination of both. We show that AlteredAvatar can
achieve a good balance between speed, flexibility and quality, while
maintaining consistency across a wide range of novel views and facial
expressions.
|
[
{
"created": "Tue, 30 May 2023 17:32:12 GMT",
"version": "v1"
}
] |
2023-05-31
|
[
[
"Nguyen-Phuoc",
"Thu",
""
],
[
"Schwartz",
"Gabriel",
""
],
[
"Ye",
"Yuting",
""
],
[
"Lombardi",
"Stephen",
""
],
[
"Xiao",
"Lei",
""
]
] |
This paper presents a method that can quickly adapt dynamic 3D avatars to arbitrary text descriptions of novel styles. Among existing approaches for avatar stylization, direct optimization methods can produce excellent results for arbitrary styles but they are unpleasantly slow. Furthermore, they require redoing the optimization process from scratch for every new input. Fast approximation methods using feed-forward networks trained on a large dataset of style images can generate results for new inputs quickly, but tend not to generalize well to novel styles and fall short in quality. We therefore investigate a new approach, AlteredAvatar, that combines those two approaches using the meta-learning framework. In the inner loop, the model learns to optimize to match a single target style well; while in the outer loop, the model learns to stylize efficiently across many styles. After training, AlteredAvatar learns an initialization that can quickly adapt within a small number of update steps to a novel style, which can be given using texts, a reference image, or a combination of both. We show that AlteredAvatar can achieve a good balance between speed, flexibility and quality, while maintaining consistency across a wide range of novel views and facial expressions.
|
2301.03949
|
Mengyi Zhao
|
Mengyi Zhao, Mengyuan Liu, Bin Ren, Shuling Dai, and Nicu Sebe
|
Modiff: Action-Conditioned 3D Motion Generation with Denoising Diffusion
Probabilistic Models
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Diffusion-based generative models have recently emerged as powerful solutions
for high-quality synthesis in multiple domains. Leveraging the bidirectional
Markov chains, diffusion probabilistic models generate samples by inferring the
reversed Markov chain based on the learned distribution mapping at the forward
diffusion process. In this work, we propose Modiff, a conditional paradigm that
benefits from the denoising diffusion probabilistic model (DDPM) to tackle the
problem of realistic and diverse action-conditioned 3D skeleton-based motion
generation. We are a pioneering attempt that uses DDPM to synthesize a variable
number of motion sequences conditioned on a categorical action. We evaluate our
approach on the large-scale NTU RGB+D dataset and show improvements over
state-of-the-art motion generation methods.
|
[
{
"created": "Tue, 10 Jan 2023 13:15:42 GMT",
"version": "v1"
},
{
"created": "Tue, 28 Mar 2023 08:26:30 GMT",
"version": "v2"
}
] |
2023-03-29
|
[
[
"Zhao",
"Mengyi",
""
],
[
"Liu",
"Mengyuan",
""
],
[
"Ren",
"Bin",
""
],
[
"Dai",
"Shuling",
""
],
[
"Sebe",
"Nicu",
""
]
] |
Diffusion-based generative models have recently emerged as powerful solutions for high-quality synthesis in multiple domains. Leveraging the bidirectional Markov chains, diffusion probabilistic models generate samples by inferring the reversed Markov chain based on the learned distribution mapping at the forward diffusion process. In this work, we propose Modiff, a conditional paradigm that benefits from the denoising diffusion probabilistic model (DDPM) to tackle the problem of realistic and diverse action-conditioned 3D skeleton-based motion generation. We are a pioneering attempt that uses DDPM to synthesize a variable number of motion sequences conditioned on a categorical action. We evaluate our approach on the large-scale NTU RGB+D dataset and show improvements over state-of-the-art motion generation methods.
|
2402.01443
|
Rainer Trauth
|
Rainer Trauth, Korbinian Moller, Gerald Wuersching, Johannes Betz
|
FRENETIX: A High-Performance and Modular Motion Planning Framework for
Autonomous Driving
|
Submitted to IEEE ACCESS
| null |
10.1109/ACCESS.2024.3436835
| null |
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Our research introduces a modular motion planning framework for autonomous
vehicles using a sampling-based trajectory planning algorithm. This approach
effectively tackles the challenges of solution space construction and
optimization in path planning. The algorithm is applicable to both real
vehicles and simulations, offering a robust solution for complex autonomous
navigation. Our method employs a multi-objective optimization strategy for
efficient navigation in static and highly dynamic environments, focusing on
optimizing trajectory comfort, safety, and path precision. The algorithm is
used to analyze the algorithm performance and success rate in 1750 virtual
complex urban and highway scenarios. Our results demonstrate fast calculation
times (8ms for 800 trajectories), a high success rate in complex scenarios
(88%), and easy adaptability with different modules presented. The most
noticeable difference exhibited was the fast trajectory sampling, feasibility
check, and cost evaluation step across various trajectory counts. We
demonstrate the integration and execution of the framework on real vehicles by
evaluating deviations from the controller using a test track. This evaluation
highlights the algorithm's robustness and reliability, ensuring it meets the
stringent requirements of real-world autonomous driving scenarios. The code and
the additional modules used in this research are publicly available as
open-source software and can be accessed at the following link:
https://github.com/TUM-AVS/Frenetix-Motion-Planner.
|
[
{
"created": "Fri, 2 Feb 2024 14:35:26 GMT",
"version": "v1"
},
{
"created": "Fri, 14 Jun 2024 10:16:09 GMT",
"version": "v2"
}
] |
2024-08-06
|
[
[
"Trauth",
"Rainer",
""
],
[
"Moller",
"Korbinian",
""
],
[
"Wuersching",
"Gerald",
""
],
[
"Betz",
"Johannes",
""
]
] |
Our research introduces a modular motion planning framework for autonomous vehicles using a sampling-based trajectory planning algorithm. This approach effectively tackles the challenges of solution space construction and optimization in path planning. The algorithm is applicable to both real vehicles and simulations, offering a robust solution for complex autonomous navigation. Our method employs a multi-objective optimization strategy for efficient navigation in static and highly dynamic environments, focusing on optimizing trajectory comfort, safety, and path precision. The algorithm is used to analyze the algorithm performance and success rate in 1750 virtual complex urban and highway scenarios. Our results demonstrate fast calculation times (8ms for 800 trajectories), a high success rate in complex scenarios (88%), and easy adaptability with different modules presented. The most noticeable difference exhibited was the fast trajectory sampling, feasibility check, and cost evaluation step across various trajectory counts. We demonstrate the integration and execution of the framework on real vehicles by evaluating deviations from the controller using a test track. This evaluation highlights the algorithm's robustness and reliability, ensuring it meets the stringent requirements of real-world autonomous driving scenarios. The code and the additional modules used in this research are publicly available as open-source software and can be accessed at the following link: https://github.com/TUM-AVS/Frenetix-Motion-Planner.
|
2407.11280
|
Yiyuan Yang
|
Yiyuan Yang, Zheshun Wu, Yong Chu, Zhenghua Chen, Zenglin Xu, Qingsong
Wen
|
Intelligent Cross-Organizational Process Mining: A Survey and New
Perspectives
|
Under review; 13 pages, 7 figures, 2 tables
| null | null | null |
cs.AI cs.CE cs.DB cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Process mining, as a high-level field in data mining, plays a crucial role in
enhancing operational efficiency and decision-making across organizations. In
this survey paper, we delve into the growing significance and ongoing trends in
the field of process mining, advocating a specific viewpoint on its contents,
application, and development in modern businesses and process management,
particularly in cross-organizational settings. We first summarize the framework
of process mining, common industrial applications, and the latest advances
combined with artificial intelligence, such as workflow optimization,
compliance checking, and performance analysis. Then, we propose a holistic
framework for intelligent process analysis and outline initial methodologies in
cross-organizational settings, highlighting both challenges and opportunities.
This particular perspective aims to revolutionize process mining by leveraging
artificial intelligence to offer sophisticated solutions for complex,
multi-organizational data analysis. By integrating advanced machine learning
techniques, we can enhance predictive capabilities, streamline processes, and
facilitate real-time decision-making. Furthermore, we pinpoint avenues for
future investigations within the research community, encouraging the
exploration of innovative algorithms, data integration strategies, and
privacy-preserving methods to fully harness the potential of process mining in
diverse, interconnected business environments.
|
[
{
"created": "Mon, 15 Jul 2024 23:30:34 GMT",
"version": "v1"
}
] |
2024-07-17
|
[
[
"Yang",
"Yiyuan",
""
],
[
"Wu",
"Zheshun",
""
],
[
"Chu",
"Yong",
""
],
[
"Chen",
"Zhenghua",
""
],
[
"Xu",
"Zenglin",
""
],
[
"Wen",
"Qingsong",
""
]
] |
Process mining, as a high-level field in data mining, plays a crucial role in enhancing operational efficiency and decision-making across organizations. In this survey paper, we delve into the growing significance and ongoing trends in the field of process mining, advocating a specific viewpoint on its contents, application, and development in modern businesses and process management, particularly in cross-organizational settings. We first summarize the framework of process mining, common industrial applications, and the latest advances combined with artificial intelligence, such as workflow optimization, compliance checking, and performance analysis. Then, we propose a holistic framework for intelligent process analysis and outline initial methodologies in cross-organizational settings, highlighting both challenges and opportunities. This particular perspective aims to revolutionize process mining by leveraging artificial intelligence to offer sophisticated solutions for complex, multi-organizational data analysis. By integrating advanced machine learning techniques, we can enhance predictive capabilities, streamline processes, and facilitate real-time decision-making. Furthermore, we pinpoint avenues for future investigations within the research community, encouraging the exploration of innovative algorithms, data integration strategies, and privacy-preserving methods to fully harness the potential of process mining in diverse, interconnected business environments.
|
2306.01064
|
Zheng Li
|
Zheng Li and Francisco Millar-Bilbao
|
Characterizing the Cloud's Outbound Network Latency: An Experimental and
Modeling Study
| null | null |
10.1109/IEEECloudSummit48914.2020.00034
| null |
cs.DC cs.PF
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Cloud latency has critical influences on the success of cloud applications.
Therefore, characterizing cloud network performance is crucial for analyzing
and satisfying different latency requirements. By focusing on the cloud's
outbound network latency, this case study on Google App Engine confirms the
necessity of optimizing application deployment. More importantly, our modeling
effort has established a divide-and-conquer framework to address the complexity
in understanding and investigating the cloud latency.
|
[
{
"created": "Thu, 1 Jun 2023 18:15:05 GMT",
"version": "v1"
}
] |
2023-06-05
|
[
[
"Li",
"Zheng",
""
],
[
"Millar-Bilbao",
"Francisco",
""
]
] |
Cloud latency has critical influences on the success of cloud applications. Therefore, characterizing cloud network performance is crucial for analyzing and satisfying different latency requirements. By focusing on the cloud's outbound network latency, this case study on Google App Engine confirms the necessity of optimizing application deployment. More importantly, our modeling effort has established a divide-and-conquer framework to address the complexity in understanding and investigating the cloud latency.
|
2210.01091
|
Sovesh Mohapatra
|
Sovesh Mohapatra, Somesh Mohapatra
|
The (In)Effectiveness of Intermediate Task Training For Domain
Adaptation and Cross-Lingual Transfer Learning
|
1 figure, 1 table
| null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
Transfer learning from large language models (LLMs) has emerged as a powerful
technique to enable knowledge-based fine-tuning for a number of tasks,
adaptation of models for different domains and even languages. However, it
remains an open question, if and when transfer learning will work, i.e. leading
to positive or negative transfer. In this paper, we analyze the knowledge
transfer across three natural language processing (NLP) tasks - text
classification, sentimental analysis, and sentence similarity, using three LLMs
- BERT, RoBERTa, and XLNet - and analyzing their performance, by fine-tuning on
target datasets for domain and cross-lingual adaptation tasks, with and without
an intermediate task training on a larger dataset. Our experiments showed that
fine-tuning without an intermediate task training can lead to a better
performance for most tasks, while more generalized tasks might necessitate a
preceding intermediate task training step. We hope that this work will act as a
guide on transfer learning to NLP practitioners.
|
[
{
"created": "Mon, 3 Oct 2022 17:17:07 GMT",
"version": "v1"
},
{
"created": "Fri, 4 Nov 2022 23:48:05 GMT",
"version": "v2"
}
] |
2022-11-08
|
[
[
"Mohapatra",
"Sovesh",
""
],
[
"Mohapatra",
"Somesh",
""
]
] |
Transfer learning from large language models (LLMs) has emerged as a powerful technique to enable knowledge-based fine-tuning for a number of tasks, adaptation of models for different domains and even languages. However, it remains an open question if and when transfer learning will work, i.e., whether it leads to positive or negative transfer. In this paper, we analyze knowledge transfer across three natural language processing (NLP) tasks - text classification, sentiment analysis, and sentence similarity - using three LLMs - BERT, RoBERTa, and XLNet - by fine-tuning on target datasets for domain and cross-lingual adaptation tasks, with and without intermediate-task training on a larger dataset, and analyzing their performance. Our experiments showed that fine-tuning without intermediate-task training can lead to better performance for most tasks, while more generalized tasks might necessitate a preceding intermediate-task training step. We hope that this work will serve as a guide on transfer learning for NLP practitioners.
|
1502.01188
|
Jimmy Nielsen
|
Jimmy J. Nielsen, Germ\'an C. Madue\~no, Nuno K. Pratas, Ren\'e B.
S{\o}rensen, \v{C}edomir Stefanovi\'c, Petar Popovski
|
What Can Wireless Cellular Technologies Do about the Upcoming Smart
Metering Traffic?
|
Submitted; change: corrected location of eSM box in Fig. 1; May 22,
2015: Major revision after review; v4: revised, accepted for publication
| null |
10.1109/MCOM.2015.7263371
| null |
cs.IT cs.NI math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The introduction of smart electricity meters with cellular radio interface
puts an additional load on the wireless cellular networks. Currently, these
meters are designed for low duty cycle billing and occasional system check,
which generates a low-rate sporadic traffic. As the number of distributed
energy resources increases, the household power will become more variable and
thus unpredictable from the viewpoint of the Distribution System Operator
(DSO). It is therefore expected, in the near future, to have an increased
number of Wide Area Measurement System (WAMS) devices with Phasor Measurement
Unit (PMU)-like capabilities in the distribution grid, thus allowing the
utilities to monitor the low voltage grid quality while providing information
required for tighter grid control. From a communication standpoint, the traffic
profile will change drastically towards higher data volumes and higher rates
per device. In this paper, we characterize the current traffic generated by
smart electricity meters and supplement it with the potential traffic
requirements brought by introducing enhanced Smart Meters, i.e., meters with
PMU-like capabilities. Our study shows how GSM/GPRS and LTE cellular system
performance behaves with the current and next generation smart meters traffic,
where it is clearly seen that the PMU data will seriously challenge these
wireless systems. We conclude by highlighting the possible solutions for
upgrading the cellular standards, in order to cope with the upcoming smart
metering traffic.
|
[
{
"created": "Wed, 4 Feb 2015 13:21:06 GMT",
"version": "v1"
},
{
"created": "Mon, 9 Feb 2015 14:41:46 GMT",
"version": "v2"
},
{
"created": "Fri, 22 May 2015 08:21:29 GMT",
"version": "v3"
},
{
"created": "Wed, 1 Jul 2015 12:43:50 GMT",
"version": "v4"
}
] |
2015-09-22
|
[
[
"Nielsen",
"Jimmy J.",
""
],
[
"Madueño",
"Germán C.",
""
],
[
"Pratas",
"Nuno K.",
""
],
[
"Sørensen",
"René B.",
""
],
[
"Stefanović",
"Čedomir",
""
],
[
"Popovski",
"Petar",
""
]
] |
The introduction of smart electricity meters with a cellular radio interface puts an additional load on wireless cellular networks. Currently, these meters are designed for low duty cycle billing and occasional system checks, which generates low-rate sporadic traffic. As the number of distributed energy resources increases, household power will become more variable and thus unpredictable from the viewpoint of the Distribution System Operator (DSO). It is therefore expected, in the near future, to have an increased number of Wide Area Measurement System (WAMS) devices with Phasor Measurement Unit (PMU)-like capabilities in the distribution grid, thus allowing the utilities to monitor the low voltage grid quality while providing information required for tighter grid control. From a communication standpoint, the traffic profile will change drastically towards higher data volumes and higher rates per device. In this paper, we characterize the current traffic generated by smart electricity meters and supplement it with the potential traffic requirements brought by introducing enhanced Smart Meters, i.e., meters with PMU-like capabilities. Our study shows how GSM/GPRS and LTE cellular system performance behaves with current and next-generation smart meter traffic, where it is clearly seen that the PMU data will seriously challenge these wireless systems. We conclude by highlighting possible solutions for upgrading the cellular standards in order to cope with the upcoming smart metering traffic.
|
2309.09136
|
Qiuming Zhao
|
Qiuming Zhao and Guangzhi Sun and Chao Zhang and Mingxing Xu and
Thomas Fang Zheng
|
Enhancing Quantised End-to-End ASR Models via Personalisation
|
5 pages, submitted to ICASSP 2024
| null | null | null |
cs.SD cs.AI eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Recent end-to-end automatic speech recognition (ASR) models have become
increasingly larger, making them particularly challenging to be deployed on
resource-constrained devices. Model quantisation is an effective solution that
sometimes causes the word error rate (WER) to increase. In this paper, a novel
strategy of personalisation for a quantised model (PQM) is proposed, which
combines speaker adaptive training (SAT) with model quantisation to improve the
performance of heavily compressed models. Specifically, PQM uses a 4-bit
NormalFloat Quantisation (NF4) approach for model quantisation and low-rank
adaptation (LoRA) for SAT. Experiments have been performed on the LibriSpeech
and the TED-LIUM 3 corpora. Remarkably, with a 7x reduction in model size and
1% additional speaker-specific parameters, 15.1% and 23.3% relative WER
reductions were achieved on quantised Whisper and Conformer-based
attention-based encoder-decoder ASR models respectively, comparing to the
original full precision models.
|
[
{
"created": "Sun, 17 Sep 2023 02:35:21 GMT",
"version": "v1"
}
] |
2023-09-19
|
[
[
"Zhao",
"Qiuming",
""
],
[
"Sun",
"Guangzhi",
""
],
[
"Zhang",
"Chao",
""
],
[
"Xu",
"Mingxing",
""
],
[
"Zheng",
"Thomas Fang",
""
]
] |
Recent end-to-end automatic speech recognition (ASR) models have become increasingly large, making them particularly challenging to deploy on resource-constrained devices. Model quantisation is an effective compression solution, but it sometimes causes the word error rate (WER) to increase. In this paper, a novel strategy of personalisation for a quantised model (PQM) is proposed, which combines speaker adaptive training (SAT) with model quantisation to improve the performance of heavily compressed models. Specifically, PQM uses a 4-bit NormalFloat Quantisation (NF4) approach for model quantisation and low-rank adaptation (LoRA) for SAT. Experiments have been performed on the LibriSpeech and the TED-LIUM 3 corpora. Remarkably, with a 7x reduction in model size and 1% additional speaker-specific parameters, 15.1% and 23.3% relative WER reductions were achieved on quantised Whisper and Conformer-based attention-based encoder-decoder ASR models respectively, compared to the original full-precision models.
|
1711.04450
|
Yoshihide Sawada PhD
|
Yoshihide Sawada, Yoshikuni Sato, Toru Nakada, Kei Ujimoto, and
Nobuhiro Hayashi
|
All-Transfer Learning for Deep Neural Networks and its Application to
Sepsis Classification
|
Long version of article published at ECAI 2016 (9 pages, 13 figures,
8 tables)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this article, we propose a transfer learning method for deep neural
networks (DNNs). Deep learning has been widely used in many applications.
However, applying deep learning is problematic when a large amount of training
data are not available. One of the conventional methods for solving this
problem is transfer learning for DNNs. In the field of image recognition,
state-of-the-art transfer learning methods for DNNs re-use parameters trained
on source domain data except for the output layer. However, this method may
result in poor classification performance when the amount of target domain data
is significantly small. To address this problem, we propose a method called
All-Transfer Deep Learning, which enables the transfer of all parameters of a
DNN. With this method, we can compute the relationship between the source and
target labels by the source domain knowledge. We applied our method to actual
two-dimensional electrophoresis image~(2-DE image) classification for
determining if an individual suffers from sepsis; the first attempt to apply a
classification approach to 2-DE images for proteomics, which has attracted
considerable attention as an extension beyond genomics. The results suggest
that our proposed method outperforms conventional transfer learning methods for
DNNs.
|
[
{
"created": "Mon, 13 Nov 2017 07:28:37 GMT",
"version": "v1"
}
] |
2017-11-15
|
[
[
"Sawada",
"Yoshihide",
""
],
[
"Sato",
"Yoshikuni",
""
],
[
"Nakada",
"Toru",
""
],
[
"Ujimoto",
"Kei",
""
],
[
"Hayashi",
"Nobuhiro",
""
]
] |
In this article, we propose a transfer learning method for deep neural networks (DNNs). Deep learning has been widely used in many applications. However, applying deep learning is problematic when a large amount of training data are not available. One of the conventional methods for solving this problem is transfer learning for DNNs. In the field of image recognition, state-of-the-art transfer learning methods for DNNs re-use parameters trained on source domain data except for the output layer. However, this method may result in poor classification performance when the amount of target domain data is significantly small. To address this problem, we propose a method called All-Transfer Deep Learning, which enables the transfer of all parameters of a DNN. With this method, we can compute the relationship between the source and target labels by the source domain knowledge. We applied our method to actual two-dimensional electrophoresis image~(2-DE image) classification for determining if an individual suffers from sepsis; the first attempt to apply a classification approach to 2-DE images for proteomics, which has attracted considerable attention as an extension beyond genomics. The results suggest that our proposed method outperforms conventional transfer learning methods for DNNs.
|
2305.16158
|
Sohag Kumar Saha
|
S M Mostaq Hossain, Sohag Kumar Saha, Shampa Banik, Trapa Banik
|
A New Era of Mobility: Exploring Digital Twin Applications in Autonomous
Vehicular Systems
|
7 pages, conference paper, accepted for publication in IEEE AIIoT
2023 conference
|
IEEE AIIoT 2023 conference
| null |
paper #1570907205
|
cs.NI cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Digital Twins (DTs) are virtual representations of physical objects or
processes that can collect information from the real environment to represent,
validate, and replicate the physical twin's present and future behavior. The
DTs are becoming increasingly prevalent in a variety of fields, including
manufacturing, automobiles, medicine, smart cities, and other related areas. In
this paper, we presented a systematic reviews on DTs in the autonomous
vehicular industry. We addressed DTs and their essential characteristics,
emphasized on accurate data collection, real-time analytics, and efficient
simulation capabilities, while highlighting their role in enhancing performance
and reliability. Next, we explored the technical challenges and central
technologies of DTs. We illustrated the comparison analysis of different
methodologies that have been used for autonomous vehicles in smart cities.
Finally, we addressed the application challenges and limitations of DTs in the
autonomous vehicular industry.
|
[
{
"created": "Tue, 9 May 2023 06:39:57 GMT",
"version": "v1"
}
] |
2023-05-26
|
[
[
"Hossain",
"S M Mostaq",
""
],
[
"Saha",
"Sohag Kumar",
""
],
[
"Banik",
"Shampa",
""
],
[
"Banik",
"Trapa",
""
]
] |
Digital Twins (DTs) are virtual representations of physical objects or processes that can collect information from the real environment to represent, validate, and replicate the physical twin's present and future behavior. DTs are becoming increasingly prevalent in a variety of fields, including manufacturing, automobiles, medicine, smart cities, and other related areas. In this paper, we present a systematic review of DTs in the autonomous vehicular industry. We address DTs and their essential characteristics, emphasizing accurate data collection, real-time analytics, and efficient simulation capabilities, while highlighting their role in enhancing performance and reliability. Next, we explore the technical challenges and central technologies of DTs. We present a comparative analysis of different methodologies that have been used for autonomous vehicles in smart cities. Finally, we address the application challenges and limitations of DTs in the autonomous vehicular industry.
|
2207.03391
|
Muhammad Umar Farooq
|
Muhammad Umar Farooq, Darshan Adiga Haniya Narayana, Thomas Hain
|
Non-Linear Pairwise Language Mappings for Low-Resource Multilingual
Acoustic Model Fusion
|
Accepted for Interspeech 2022
| null | null | null |
cs.CL eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Multilingual speech recognition has drawn significant attention as an
effective way to compensate data scarcity for low-resource languages.
End-to-end (e2e) modelling is preferred over conventional hybrid systems,
mainly because of no lexicon requirement. However, hybrid DNN-HMMs still
outperform e2e models in limited data scenarios. Furthermore, the problem of
manual lexicon creation has been alleviated by publicly available trained
models of grapheme-to-phoneme (G2P) and text to IPA transliteration for a lot
of languages. In this paper, a novel approach of hybrid DNN-HMM acoustic models
fusion is proposed in a multilingual setup for the low-resource languages.
Posterior distributions from different monolingual acoustic models, against a
target language speech signal, are fused together. A separate regression neural
network is trained for each source-target language pair to transform posteriors
from source acoustic model to the target language. These networks require very
limited data as compared to the ASR training. Posterior fusion yields a
relative gain of 14.65% and 6.5% when compared with multilingual and
monolingual baselines respectively. Cross-lingual model fusion shows that the
comparable results can be achieved without using posteriors from the language
dependent ASR.
|
[
{
"created": "Thu, 7 Jul 2022 15:56:50 GMT",
"version": "v1"
}
] |
2022-07-08
|
[
[
"Farooq",
"Muhammad Umar",
""
],
[
"Narayana",
"Darshan Adiga Haniya",
""
],
[
"Hain",
"Thomas",
""
]
] |
Multilingual speech recognition has drawn significant attention as an effective way to compensate for data scarcity in low-resource languages. End-to-end (e2e) modelling is preferred over conventional hybrid systems, mainly because no lexicon is required. However, hybrid DNN-HMMs still outperform e2e models in limited-data scenarios. Furthermore, the problem of manual lexicon creation has been alleviated by publicly available trained models for grapheme-to-phoneme (G2P) conversion and text-to-IPA transliteration for many languages. In this paper, a novel approach to hybrid DNN-HMM acoustic model fusion is proposed in a multilingual setup for low-resource languages. Posterior distributions from different monolingual acoustic models, against a target-language speech signal, are fused together. A separate regression neural network is trained for each source-target language pair to transform posteriors from the source acoustic model to the target language. These networks require very limited data compared to ASR training. Posterior fusion yields a relative gain of 14.65% and 6.5% when compared with multilingual and monolingual baselines respectively. Cross-lingual model fusion shows that comparable results can be achieved without using posteriors from the language-dependent ASR.
|
2306.15226
|
Cherie Ho
|
Eric Chen, Cherie Ho, Mukhtar Maulimov, Chen Wang, Sebastian Scherer
|
Learning-on-the-Drive: Self-supervised Adaptation of Visual Offroad
Traversability Models
|
8 pages
| null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
Autonomous off-road driving requires understanding traversability, which
refers to the suitability of a given terrain to drive over. When offroad
vehicles travel at high speed ($>10m/s$), they need to reason at long-range
($50m$-$100m$) for safe and deliberate navigation. Moreover, vehicles often
operate in new environments and under different weather conditions. LiDAR
provides accurate estimates robust to visual appearances, however, it is often
too noisy beyond 30m for fine-grained estimates due to sparse measurements.
Conversely, visual-based models give dense predictions at further distances but
perform poorly at all ranges when out of training distribution. To address
these challenges, we present ALTER, an offroad perception module that
adapts-on-the-drive to combine the best of both sensors. Our visual model
continuously learns from new near-range LiDAR measurements. This
self-supervised approach enables accurate long-range traversability prediction
in novel environments without hand-labeling. Results on two distinct real-world
offroad environments show up to 52.5% improvement in traversability estimation
over LiDAR-only estimates and 38.1% improvement over non-adaptive visual
baseline.
|
[
{
"created": "Tue, 27 Jun 2023 05:58:05 GMT",
"version": "v1"
}
] |
2023-06-28
|
[
[
"Chen",
"Eric",
""
],
[
"Ho",
"Cherie",
""
],
[
"Maulimov",
"Mukhtar",
""
],
[
"Wang",
"Chen",
""
],
[
"Scherer",
"Sebastian",
""
]
] |
Autonomous off-road driving requires understanding traversability, which refers to the suitability of a given terrain to drive over. When offroad vehicles travel at high speed ($>10m/s$), they need to reason at long-range ($50m$-$100m$) for safe and deliberate navigation. Moreover, vehicles often operate in new environments and under different weather conditions. LiDAR provides accurate estimates robust to visual appearances, however, it is often too noisy beyond 30m for fine-grained estimates due to sparse measurements. Conversely, visual-based models give dense predictions at further distances but perform poorly at all ranges when out of training distribution. To address these challenges, we present ALTER, an offroad perception module that adapts-on-the-drive to combine the best of both sensors. Our visual model continuously learns from new near-range LiDAR measurements. This self-supervised approach enables accurate long-range traversability prediction in novel environments without hand-labeling. Results on two distinct real-world offroad environments show up to 52.5% improvement in traversability estimation over LiDAR-only estimates and 38.1% improvement over non-adaptive visual baseline.
|
1607.07674
|
Kittipong Kittichokechai
|
Kittipong Kittichokechai, Rafael F. Schaefer, and Giuseppe Caire
|
Secret Key Generation Through a Relay
|
An extended version of a paper to be presented at ITW 2016
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider problems of two-user secret key generation through an
intermediate relay. Each user observes correlated source sequences and
communicates to the relay over rate-limited noiseless links. The relay
processes and broadcasts information to the two users over another rate-limited
link. The users then generate a common secret key based on their sources and
information from the relay. In the untrusted relay setting, the goal is to
establish key agreement between the two users at the highest key rate without
leaking information about the key to the relay. We characterize inner and outer
bounds to the optimal tradeoff between communication and key rates. The inner
bound is based on the scheme which involves a combination of binning, network
coding, and key aggregation techniques. For the trusted relay setting with a
public broadcast link, the optimal communication-key rate tradeoff is provided
for a special case where the two sources are available losslessly at the relay.
The results can be relevant for cloud-based secret key generation services.
|
[
{
"created": "Tue, 26 Jul 2016 13:03:16 GMT",
"version": "v1"
}
] |
2016-07-27
|
[
[
"Kittichokechai",
"Kittipong",
""
],
[
"Schaefer",
"Rafael F.",
""
],
[
"Caire",
"Giuseppe",
""
]
] |
We consider problems of two-user secret key generation through an intermediate relay. Each user observes correlated source sequences and communicates to the relay over rate-limited noiseless links. The relay processes and broadcasts information to the two users over another rate-limited link. The users then generate a common secret key based on their sources and information from the relay. In the untrusted relay setting, the goal is to establish key agreement between the two users at the highest key rate without leaking information about the key to the relay. We characterize inner and outer bounds to the optimal tradeoff between communication and key rates. The inner bound is based on the scheme which involves a combination of binning, network coding, and key aggregation techniques. For the trusted relay setting with a public broadcast link, the optimal communication-key rate tradeoff is provided for a special case where the two sources are available losslessly at the relay. The results can be relevant for cloud-based secret key generation services.
|
1212.1524
|
Ludovic Arnold
|
Ludovic Arnold and Yann Ollivier
|
Layer-wise learning of deep generative models
| null | null | null | null |
cs.NE cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
When using deep, multi-layered architectures to build generative models of
data, it is difficult to train all layers at once. We propose a layer-wise
training procedure admitting a performance guarantee compared to the global
optimum. It is based on an optimistic proxy of future performance, the best
latent marginal. We interpret auto-encoders in this setting as generative
models, by showing that they train a lower bound of this criterion. We test the
new learning procedure against a state of the art method (stacked RBMs), and
find it to improve performance. Both theory and experiments highlight the
importance, when training deep architectures, of using an inference model (from
data to hidden variables) richer than the generative model (from hidden
variables to data).
|
[
{
"created": "Fri, 7 Dec 2012 03:14:50 GMT",
"version": "v1"
},
{
"created": "Sat, 16 Feb 2013 13:24:46 GMT",
"version": "v2"
}
] |
2013-02-19
|
[
[
"Arnold",
"Ludovic",
""
],
[
"Ollivier",
"Yann",
""
]
] |
When using deep, multi-layered architectures to build generative models of data, it is difficult to train all layers at once. We propose a layer-wise training procedure admitting a performance guarantee compared to the global optimum. It is based on an optimistic proxy of future performance, the best latent marginal. We interpret auto-encoders in this setting as generative models, by showing that they train a lower bound of this criterion. We test the new learning procedure against a state of the art method (stacked RBMs), and find it to improve performance. Both theory and experiments highlight the importance, when training deep architectures, of using an inference model (from data to hidden variables) richer than the generative model (from hidden variables to data).
|
1910.14103
|
Nate Merrill
|
Nathaniel Merrill and Guoquan Huang
|
CALC2.0: Combining Appearance, Semantic and Geometric Information for
Robust and Efficient Visual Loop Closure
|
Appears in IROS 2019
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Traditional attempts for loop closure detection typically use hand-crafted
features, relying on geometric and visual information only, whereas more modern
approaches tend to use semantic, appearance or geometric features extracted
from deep convolutional neural networks (CNNs). While these approaches are
successful in many applications, they do not utilize all of the information
that a monocular image provides, and many of them, particularly the
deep-learning based methods, require user-chosen thresholding to actually close
loops -- which may impact generality in practical applications. In this work,
we address these issues by extracting all three modes of information from a
custom deep CNN trained specifically for the task of place recognition. Our
network is built upon a combination of a semantic segmentator, Variational
Autoencoder (VAE) and triplet embedding network. The network is trained to
construct a global feature space to describe both the visual appearance and
semantic layout of an image. Then local keypoints are extracted from
maximally-activated regions of low-level convolutional feature maps, and
keypoint descriptors are extracted from these feature maps in a novel way that
incorporates ideas from successful hand-crafted features. These keypoints are
matched globally for loop closure candidates, and then used as a final
geometric check to refute false positives. As a result, the proposed loop
closure detection system requires no touchy thresholding, and is highly robust
to false positives -- achieving better precision-recall curves than the
state-of-the-art NetVLAD, and with real-time speeds.
|
[
{
"created": "Wed, 30 Oct 2019 19:45:06 GMT",
"version": "v1"
}
] |
2019-11-01
|
[
[
"Merrill",
"Nathaniel",
""
],
[
"Huang",
"Guoquan",
""
]
] |
Traditional attempts for loop closure detection typically use hand-crafted features, relying on geometric and visual information only, whereas more modern approaches tend to use semantic, appearance or geometric features extracted from deep convolutional neural networks (CNNs). While these approaches are successful in many applications, they do not utilize all of the information that a monocular image provides, and many of them, particularly the deep-learning based methods, require user-chosen thresholding to actually close loops -- which may impact generality in practical applications. In this work, we address these issues by extracting all three modes of information from a custom deep CNN trained specifically for the task of place recognition. Our network is built upon a combination of a semantic segmentator, Variational Autoencoder (VAE) and triplet embedding network. The network is trained to construct a global feature space to describe both the visual appearance and semantic layout of an image. Then local keypoints are extracted from maximally-activated regions of low-level convolutional feature maps, and keypoint descriptors are extracted from these feature maps in a novel way that incorporates ideas from successful hand-crafted features. These keypoints are matched globally for loop closure candidates, and then used as a final geometric check to refute false positives. As a result, the proposed loop closure detection system requires no touchy thresholding, and is highly robust to false positives -- achieving better precision-recall curves than the state-of-the-art NetVLAD, and with real-time speeds.
|
2402.10247
|
Florent Jacquemard
|
Augustin Bouquillard, Florent Jacquemard (CEDRIC - VERTIGO)
|
Engraving Oriented Joint Estimation of Pitch Spelling and Local and
Global Keys
|
International Conference on Technologies for Music Notation and
Representation (TENOR), Apr 2024, Zurich (CH), Switzerland
| null | null | null |
cs.SD cs.IR eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We revisit the problems of pitch spelling and tonality guessing with a new
algorithm for their joint estimation from a MIDI file including information
about the measure boundaries. Our algorithm identifies not only a global key
but also local ones all along the analyzed piece. It uses Dynamic Programming
techniques to search for an optimal spelling in terms, roughly, of the number
of accidental symbols that would be displayed in the engraved score. The
evaluation of this number is coupled with an estimation of the global key and
some local keys, one for each measure. Each of the three pieces of information
is used for the estimation of the others, in a multi-step procedure. An
evaluation conducted on a monophonic and a piano dataset, comprising 216 464
notes in total, shows a high degree of accuracy, both for pitch spelling
(99.5% on average on the Bach corpus and 98.2% on the whole dataset) and
global key signature estimation (93.0% on average, 95.58% on the piano
dataset). Designed originally as a backend tool in a music transcription
framework, this method should also be useful in other tasks related to music
notation processing.
|
[
{
"created": "Thu, 15 Feb 2024 10:28:59 GMT",
"version": "v1"
}
] |
2024-02-19
|
[
[
"Bouquillard",
"Augustin",
"",
"CEDRIC - VERTIGO"
],
[
"Jacquemard",
"Florent",
"",
"CEDRIC - VERTIGO"
]
] |
We revisit the problems of pitch spelling and tonality guessing with a new algorithm for their joint estimation from a MIDI file including information about the measure boundaries. Our algorithm identifies not only a global key but also local ones all along the analyzed piece. It uses Dynamic Programming techniques to search for an optimal spelling in terms, roughly, of the number of accidental symbols that would be displayed in the engraved score. The evaluation of this number is coupled with an estimation of the global key and some local keys, one for each measure. Each of the three pieces of information is used for the estimation of the others, in a multi-step procedure. An evaluation conducted on a monophonic and a piano dataset, comprising 216 464 notes in total, shows a high degree of accuracy, both for pitch spelling (99.5% on average on the Bach corpus and 98.2% on the whole dataset) and global key signature estimation (93.0% on average, 95.58% on the piano dataset). Designed originally as a backend tool in a music transcription framework, this method should also be useful in other tasks related to music notation processing.
|
cs/0408019
|
Viktor Kuncak
|
Viktor Kuncak, Martin Rinard
|
On Generalized Records and Spatial Conjunction in Role Logic
|
30 pages. A version appears in SAS 2004
| null | null |
MIT CSAIL 942
|
cs.PL cs.LO
| null |
We have previously introduced role logic as a notation for describing
properties of relational structures in shape analysis, databases and knowledge
bases. A natural fragment of role logic corresponds to two-variable logic with
counting and is therefore decidable. We show how to use role logic to describe
open and closed records, as well as the dual of records, inverse records. We
observe that the spatial conjunction operation of separation logic naturally
models record concatenation. Moreover, we show how to eliminate the spatial
conjunction of formulas of quantifier depth one in first-order logic with
counting. As a result, allowing spatial conjunction of formulas of quantifier
depth one preserves the decidability of two-variable logic with counting. This
result applies to the two-variable role logic fragment as well. The resulting logic
smoothly integrates type system and predicate calculus notation and can be
viewed as a natural generalization of the notation for constraints arising in
role analysis and similar shape analysis approaches.
|
[
{
"created": "Thu, 5 Aug 2004 23:25:20 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Kuncak",
"Viktor",
""
],
[
"Rinard",
"Martin",
""
]
] |
We have previously introduced role logic as a notation for describing properties of relational structures in shape analysis, databases and knowledge bases. A natural fragment of role logic corresponds to two-variable logic with counting and is therefore decidable. We show how to use role logic to describe open and closed records, as well as the dual of records, inverse records. We observe that the spatial conjunction operation of separation logic naturally models record concatenation. Moreover, we show how to eliminate the spatial conjunction of formulas of quantifier depth one in first-order logic with counting. As a result, allowing spatial conjunction of formulas of quantifier depth one preserves the decidability of two-variable logic with counting. This result applies to the two-variable role logic fragment as well. The resulting logic smoothly integrates type system and predicate calculus notation and can be viewed as a natural generalization of the notation for constraints arising in role analysis and similar shape analysis approaches.
|
2011.06776
|
Sung Eun Kim Dr.
|
Sung Eun Kim, Hongkyu Yoon, and Jonghyun Lee
|
Fast and Scalable Earth Texture Synthesis using Spatially Assembled
Generative Adversarial Neural Networks
|
17 pages, 11 figures, 2 tables, and a table in Appendix
| null |
10.1016/j.jconhyd.2021.103867
| null |
cs.CV cs.LG eess.IV physics.flu-dyn
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The earth texture, with complex morphological geometry and compositions such
as shale and carbonate rocks, is typically characterized with sparse field
samples because of an expensive and time-consuming characterization process.
Accordingly, generating an arbitrarily large geological texture with similar
topological structures at a low computation cost has become one of the key
tasks for realistic geomaterial reconstruction. Recently, generative
adversarial neural networks (GANs) have demonstrated a potential for
synthesizing input textural images and creating equiprobable geomaterial
images. However, texture synthesis with the GANs framework is often limited
by the computational cost and the scalability of the output texture size. In
this study, we proposed spatially assembled GANs (SAGANs) that can generate
output images of an arbitrarily large size, regardless of the size of the
training images, with computational efficiency. The performance of the SAGANs
was evaluated with two- and three-dimensional (2D and 3D) rock image samples
widely used in geostatistical reconstruction of the earth texture. We
demonstrate that SAGANs can generate arbitrarily large statistical
realizations with connectivity and structural properties similar to the
training images, and can also generate a variety of realizations even from a
single training image. In addition, the computational time was significantly
improved compared to standard GANs frameworks.
|
[
{
"created": "Fri, 13 Nov 2020 06:18:09 GMT",
"version": "v1"
}
] |
2021-09-15
|
[
[
"Kim",
"Sung Eun",
""
],
[
"Yoon",
"Hongkyu",
""
],
[
"Lee",
"Jonghyun",
""
]
] |
The earth texture, with complex morphological geometry and compositions such as shale and carbonate rocks, is typically characterized with sparse field samples because of an expensive and time-consuming characterization process. Accordingly, generating an arbitrarily large geological texture with similar topological structures at a low computation cost has become one of the key tasks for realistic geomaterial reconstruction. Recently, generative adversarial neural networks (GANs) have demonstrated a potential for synthesizing input textural images and creating equiprobable geomaterial images. However, texture synthesis with the GANs framework is often limited by the computational cost and the scalability of the output texture size. In this study, we proposed spatially assembled GANs (SAGANs) that can generate output images of an arbitrarily large size, regardless of the size of the training images, with computational efficiency. The performance of the SAGANs was evaluated with two- and three-dimensional (2D and 3D) rock image samples widely used in geostatistical reconstruction of the earth texture. We demonstrate that SAGANs can generate arbitrarily large statistical realizations with connectivity and structural properties similar to the training images, and can also generate a variety of realizations even from a single training image. In addition, the computational time was significantly improved compared to standard GANs frameworks.
|
1111.3616
|
Per Zetterberg
|
Per Zetterberg and Nima N. Moghadam
|
An Experimental Investigation of SIMO, MIMO, Interference-Alignment (IA)
and Coordinated Multi-Point (CoMP)
| null | null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we present experimental implementations of interference
alignment (IA) and coordinated multi-point transmission (CoMP). We provide
results for a system with three base-stations and three mobile-stations all
having two antennas. We further employ OFDM modulation, with high-order
constellations, and measure many positions both line-of-sight and
non-line-of-sight under interference limited conditions. We find the CoMP
system to perform better than IA at the cost of a higher back-haul capacity
requirement. During the measurements we also logged the channel estimates for
off-line processing. We use these channel estimates to calculate the
performance under ideal conditions. The performance estimates obtained this way
are substantially higher than what is actually observed in the end-to-end
transmissions---in particular in the CoMP case where the theoretical
performance is very high. We find the reason for this discrepancy to be the
impact of dirty-RF effects such as phase-noise and non-linearities. We are able
to model the dirty-RF effects to some extent. These models can be used to
simulate more complex systems and still account for the dirty-RF effects (e.g.,
systems with tens of mobiles and base-stations). Both IA and CoMP perform
better than reference implementations of single-user SIMO and MIMO in our
measurements.
|
[
{
"created": "Tue, 15 Nov 2011 19:38:20 GMT",
"version": "v1"
}
] |
2015-03-19
|
[
[
"Zetterberg",
"Per",
""
],
[
"Moghadam",
"Nima N.",
""
]
] |
In this paper we present experimental implementations of interference alignment (IA) and coordinated multi-point transmission (CoMP). We provide results for a system with three base-stations and three mobile-stations all having two antennas. We further employ OFDM modulation, with high-order constellations, and measure many positions both line-of-sight and non-line-of-sight under interference limited conditions. We find the CoMP system to perform better than IA at the cost of a higher back-haul capacity requirement. During the measurements we also logged the channel estimates for off-line processing. We use these channel estimates to calculate the performance under ideal conditions. The performance estimates obtained this way are substantially higher than what is actually observed in the end-to-end transmissions---in particular in the CoMP case where the theoretical performance is very high. We find the reason for this discrepancy to be the impact of dirty-RF effects such as phase-noise and non-linearities. We are able to model the dirty-RF effects to some extent. These models can be used to simulate more complex systems and still account for the dirty-RF effects (e.g., systems with tens of mobiles and base-stations). Both IA and CoMP perform better than reference implementations of single-user SIMO and MIMO in our measurements.
|
2212.01924
|
Maksym Del
|
Maksym Del and Mark Fishel
|
Cross-lingual Similarity of Multilingual Representations Revisited
|
Accepted at AACL 2022
|
AACL-IJCNLP 2022 Volume 1 Long Papers
| null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Related works used indexes like CKA and variants of CCA to measure the
similarity of cross-lingual representations in multilingual language models. In
this paper, we argue that assumptions of CKA/CCA align poorly with one of the
motivating goals of cross-lingual learning analysis, i.e., explaining zero-shot
cross-lingual transfer. We highlight what valuable aspects of cross-lingual
similarity these indexes fail to capture and provide a motivating case study
\textit{demonstrating the problem empirically}. Then, we introduce
\textit{Average Neuron-Wise Correlation (ANC)} as a straightforward alternative
that is exempt from the difficulties of CKA/CCA and is good specifically in a
cross-lingual context. Finally, we use ANC to construct evidence that the
previously introduced ``first align, then predict'' pattern takes place not
only in masked language models (MLMs) but also in multilingual models with
\textit{causal language modeling} objectives (CLMs). Moreover, we show that the
pattern extends to the \textit{scaled versions} of the MLMs and CLMs (up to 85x
original mBERT).\footnote{Our code is publicly available at
\url{https://github.com/TartuNLP/xsim}}
|
[
{
"created": "Sun, 4 Dec 2022 21:02:07 GMT",
"version": "v1"
}
] |
2022-12-06
|
[
[
"Del",
"Maksym",
""
],
[
"Fishel",
"Mark",
""
]
] |
Related works used indexes like CKA and variants of CCA to measure the similarity of cross-lingual representations in multilingual language models. In this paper, we argue that assumptions of CKA/CCA align poorly with one of the motivating goals of cross-lingual learning analysis, i.e., explaining zero-shot cross-lingual transfer. We highlight what valuable aspects of cross-lingual similarity these indexes fail to capture and provide a motivating case study \textit{demonstrating the problem empirically}. Then, we introduce \textit{Average Neuron-Wise Correlation (ANC)} as a straightforward alternative that is exempt from the difficulties of CKA/CCA and is good specifically in a cross-lingual context. Finally, we use ANC to construct evidence that the previously introduced ``first align, then predict'' pattern takes place not only in masked language models (MLMs) but also in multilingual models with \textit{causal language modeling} objectives (CLMs). Moreover, we show that the pattern extends to the \textit{scaled versions} of the MLMs and CLMs (up to 85x original mBERT).\footnote{Our code is publicly available at \url{https://github.com/TartuNLP/xsim}}
|
2401.17661
|
Idoia Berges
|
V\'ictor Julio Ram\'irez-Dur\'an, Idoia Berges, Arantza Illarramendi
|
Towards the implementation of Industry 4.0: A methodology-based approach
oriented to the customer life cycle
|
Accepted version of paper: V\'ictor Julio Ram\'irez-Dur\'an, Idoia
Berges, Arantza Illarramendi: Towards the implementation of Industry 4.0: A
methodology-based approach oriented to the customer life cycle. Comput. Ind.
126: 103403 (2021). DOI: 10.1016/j.compind.2021.103403
|
Comput. Ind. 126: 103403 (2021)
|
10.1016/j.compind.2021.103403
| null |
cs.SE cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Many different worldwide initiatives are promoting the transformation from
machine-dominant manufacturing to digital manufacturing. Thus, to achieve a
successful transformation to the Industry 4.0 standard, manufacturing
enterprises are required to implement a clear roadmap. However, Small and
Medium Manufacturing Enterprises (SMEs) encounter many barriers and
difficulties (economic, technical, cultural, etc.) in the implementation of
Industry 4.0. Although several works deal with the incorporation of Industry
4.0 technologies in the area of the product and supply chain life cycles,
which SMEs could use as a reference, this is not the case for the customer
life cycle. Thus, we present two contributions that can help the software
engineers of those SMEs to incorporate Industry 4.0 technologies in the
context of the customer life cycle. The first contribution is a methodology
that can help those software engineers in the task of creating new software
services, aligned with Industry 4.0, that change how customers interact with
enterprises and the experiences they have while interacting with them. The
methodology details a set of stages that are divided into phases, which in
turn are made up of activities. It places special emphasis on the
incorporation of semantic descriptions and 3D visualization in the
implementation of those new services. The second contribution is a system
developed for a real manufacturing scenario, using the proposed methodology,
which allows us to observe the possibilities that systems of this kind can
offer to SMEs in two phases of the customer life cycle: Discover & Shop, and
Use & Service.
|
[
{
"created": "Wed, 31 Jan 2024 08:31:08 GMT",
"version": "v1"
}
] |
2024-02-01
|
[
[
"Ramírez-Durán",
"Víctor Julio",
""
],
[
"Berges",
"Idoia",
""
],
[
"Illarramendi",
"Arantza",
""
]
] |
Many different worldwide initiatives are promoting the transformation from machine-dominant manufacturing to digital manufacturing. Thus, to achieve a successful transformation to the Industry 4.0 standard, manufacturing enterprises are required to implement a clear roadmap. However, Small and Medium Manufacturing Enterprises (SMEs) encounter many barriers and difficulties (economic, technical, cultural, etc.) in the implementation of Industry 4.0. Although several works deal with the incorporation of Industry 4.0 technologies in the area of the product and supply chain life cycles, which SMEs could use as a reference, this is not the case for the customer life cycle. Thus, we present two contributions that can help the software engineers of those SMEs to incorporate Industry 4.0 technologies in the context of the customer life cycle. The first contribution is a methodology that can help those software engineers in the task of creating new software services, aligned with Industry 4.0, that change how customers interact with enterprises and the experiences they have while interacting with them. The methodology details a set of stages that are divided into phases, which in turn are made up of activities. It places special emphasis on the incorporation of semantic descriptions and 3D visualization in the implementation of those new services. The second contribution is a system developed for a real manufacturing scenario, using the proposed methodology, which allows us to observe the possibilities that systems of this kind can offer to SMEs in two phases of the customer life cycle: Discover & Shop, and Use & Service.
|
2206.12921
|
Jeong-SIk Lee
|
Jeong-Sik Lee, Hyun-Chul Choi
|
Non-Parametric Style Transfer
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Recent feed-forward neural methods of arbitrary image style transfer have
mainly utilized the encoded feature map up to its second-order statistics,
i.e., they linearly transformed the encoded feature map of a content image to
have the same mean and variance (or covariance) as a target style feature
map. In this work, we extend this second-order statistical feature matching
into a general distribution matching, based on the understanding that the
style of an image is represented by the distribution of responses from
receptive fields. For this generalization, first, we propose a new feature
transform layer that exactly matches the feature map distribution of a
content image to that of a target style image. Second, we analyze recent
style losses consistent with our new feature transform layer to train a
decoder network which generates a style-transferred image from the
transformed feature map. Our experimental results show that the stylized
images obtained with our method are more similar to the target style images
in all existing style measures, without losing content clarity.
|
[
{
"created": "Sun, 26 Jun 2022 16:34:37 GMT",
"version": "v1"
}
] |
2022-06-28
|
[
[
"Lee",
"Jeong-Sik",
""
],
[
"Choi",
"Hyun-Chul",
""
]
] |
Recent feed-forward neural methods of arbitrary image style transfer have mainly utilized the encoded feature map up to its second-order statistics, i.e., they linearly transformed the encoded feature map of a content image to have the same mean and variance (or covariance) as a target style feature map. In this work, we extend this second-order statistical feature matching into a general distribution matching, based on the understanding that the style of an image is represented by the distribution of responses from receptive fields. For this generalization, first, we propose a new feature transform layer that exactly matches the feature map distribution of a content image to that of a target style image. Second, we analyze recent style losses consistent with our new feature transform layer to train a decoder network which generates a style-transferred image from the transformed feature map. Our experimental results show that the stylized images obtained with our method are more similar to the target style images in all existing style measures, without losing content clarity.
|
1205.1621
|
Yuan Jian
|
Jian Yuan, Wen-Xia Zhang and Zhou-Hai Zhou
|
An optimal consensus tracking control algorithm for autonomous
underwater vehicles with disturbances
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/publicdomain/
|
The optimal disturbance rejection control problem is considered for consensus
tracking systems affected by external persistent disturbances and noise.
Optimal estimated values of the system states are obtained by recursive
filtering for multiple autonomous underwater vehicles, modeled as multi-agent
systems, with a Kalman filter. Then the feedforward-feedback optimal control
law is deduced by solving the Riccati equations and matrix equations. The
existence and uniqueness condition of the feedforward-feedback optimal
control law is proposed and the optimal control law algorithm is carried out.
Lastly, simulations show that the result is effective with respect to
external persistent disturbances and noise.
|
[
{
"created": "Tue, 8 May 2012 08:02:53 GMT",
"version": "v1"
}
] |
2012-05-09
|
[
  [
    "Yuan",
    "Jian",
    ""
  ],
  [
    "Zhang",
    "Wen-Xia",
    ""
  ],
  [
    "Zhou",
    "Zhou-Hai",
    ""
  ]
] |
The optimal disturbance rejection control problem is considered for consensus tracking systems affected by external persistent disturbances and noise. Optimal estimated values of the system states are obtained by recursive filtering for multiple autonomous underwater vehicles, modeled as multi-agent systems, with a Kalman filter. Then the feedforward-feedback optimal control law is deduced by solving the Riccati equations and matrix equations. The existence and uniqueness condition of the feedforward-feedback optimal control law is proposed and the optimal control law algorithm is carried out. Lastly, simulations show that the result is effective with respect to external persistent disturbances and noise.
|
1102.3822
|
Velumailum Mohanaraj
|
Martin Dyer and Velumailum Mohanaraj
|
The Iterated Prisoner's Dilemma on a Cycle
|
25 pages
| null | null | null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Pavlov, a well-known strategy in game theory, has been shown to have some
advantages in the Iterated Prisoner's Dilemma (IPD) game. However, this
strategy can be exploited by inveterate defectors. We modify this strategy to
mitigate the exploitation. We call the resulting strategy Rational Pavlov. This
has a parameter p which measures the "degree of forgiveness" of the players. We
study the evolution of cooperation in the IPD game, when n players are arranged
in a cycle, and all play this strategy. We examine the effect of varying p on
the convergence rate and prove that the convergence rate is fast, O(n log n)
time, for high values of p. We also prove that the convergence rate is
exponentially slow in n for small enough p. Our analysis leaves a gap in the
range of p, but simulations suggest that there is, in fact, a sharp phase
transition.
|
[
{
"created": "Fri, 18 Feb 2011 12:39:05 GMT",
"version": "v1"
}
] |
2011-02-21
|
[
[
"Dyer",
"Martin",
""
],
[
"Mohanaraj",
"Velumailum",
""
]
] |
Pavlov, a well-known strategy in game theory, has been shown to have some advantages in the Iterated Prisoner's Dilemma (IPD) game. However, this strategy can be exploited by inveterate defectors. We modify this strategy to mitigate the exploitation. We call the resulting strategy Rational Pavlov. This has a parameter p which measures the "degree of forgiveness" of the players. We study the evolution of cooperation in the IPD game, when n players are arranged in a cycle, and all play this strategy. We examine the effect of varying p on the convergence rate and prove that the convergence rate is fast, O(n log n) time, for high values of p. We also prove that the convergence rate is exponentially slow in n for small enough p. Our analysis leaves a gap in the range of p, but simulations suggest that there is, in fact, a sharp phase transition.
|
2106.06142
|
Runtian Zhai
|
Runtian Zhai, Chen Dan, J. Zico Kolter, Pradeep Ravikumar
|
DORO: Distributional and Outlier Robust Optimization
|
ICML 2021. Codes: https://github.com/RuntianZ/doro
| null | null | null |
cs.LG stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
Many machine learning tasks involve subpopulation shift where the testing
data distribution is a subpopulation of the training distribution. For such
settings, a line of recent work has proposed the use of a variant of empirical
risk minimization (ERM) known as distributionally robust optimization (DRO). In
this work, we apply DRO to real, large-scale tasks with subpopulation shift,
and observe that DRO performs relatively poorly, and moreover has severe
instability. We identify one direct cause of this phenomenon: sensitivity of
DRO to outliers in the datasets. To resolve this issue, we propose the
framework of DORO, for Distributional and Outlier Robust Optimization. At the
core of this approach is a refined risk function which prevents DRO from
overfitting to potential outliers. We instantiate DORO for the Cressie-Read
family of R\'enyi divergence, and delve into two specific instances of this
family: CVaR and $\chi^2$-DRO. We theoretically prove the effectiveness of the
proposed method, and empirically show that DORO improves the performance and
stability of DRO with experiments on large modern datasets, thereby positively
addressing the open question raised by Hashimoto et al., 2018.
|
[
{
"created": "Fri, 11 Jun 2021 02:59:54 GMT",
"version": "v1"
}
] |
2021-06-14
|
[
[
"Zhai",
"Runtian",
""
],
[
"Dan",
"Chen",
""
],
[
"Kolter",
"J. Zico",
""
],
[
"Ravikumar",
"Pradeep",
""
]
] |
Many machine learning tasks involve subpopulation shift where the testing data distribution is a subpopulation of the training distribution. For such settings, a line of recent work has proposed the use of a variant of empirical risk minimization (ERM) known as distributionally robust optimization (DRO). In this work, we apply DRO to real, large-scale tasks with subpopulation shift, and observe that DRO performs relatively poorly, and moreover has severe instability. We identify one direct cause of this phenomenon: sensitivity of DRO to outliers in the datasets. To resolve this issue, we propose the framework of DORO, for Distributional and Outlier Robust Optimization. At the core of this approach is a refined risk function which prevents DRO from overfitting to potential outliers. We instantiate DORO for the Cressie-Read family of R\'enyi divergence, and delve into two specific instances of this family: CVaR and $\chi^2$-DRO. We theoretically prove the effectiveness of the proposed method, and empirically show that DORO improves the performance and stability of DRO with experiments on large modern datasets, thereby positively addressing the open question raised by Hashimoto et al., 2018.
|
cs/0608108
|
Nicolas Brodu
|
Nicolas Brodu
|
Spherical Indexing for Neighborhood Queries
|
9 pages, 10 figures. The source code is available at
http://nicolas.brodu.free.fr/en/programmation/neighand/index.html
| null | null | null |
cs.DS cs.CG
| null |
This is an algorithm for finding neighbors when the objects can freely move
and have no predefined position. The query consists in finding neighbors for a
center location and a given radius. Space is discretized in cubic cells. This
algorithm introduces a direct spherical indexing that gives the list of all
cells making up the query sphere, for any radius and any center location. It
can additionally take into account both cyclic and non-cyclic regions of
interest. Finding only the K nearest neighbors naturally benefits from the
spherical indexing by minimally running through the sphere from center to edge,
and reducing the maximum distance when K neighbors have been found.
|
[
{
"created": "Tue, 29 Aug 2006 00:12:55 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Brodu",
"Nicolas",
""
]
] |
This is an algorithm for finding neighbors when the objects can freely move and have no predefined position. The query consists in finding neighbors for a center location and a given radius. Space is discretized in cubic cells. This algorithm introduces a direct spherical indexing that gives the list of all cells making up the query sphere, for any radius and any center location. It can additionally take into account both cyclic and non-cyclic regions of interest. Finding only the K nearest neighbors naturally benefits from the spherical indexing by minimally running through the sphere from center to edge, and reducing the maximum distance when K neighbors have been found.
|
1711.09744
|
Clemente Rubio-Manzano
|
Clemente Rubio-Manzano, Tomas Lermanda Senoceain
|
How linguistic descriptions of data can help to the teaching-learning
process in higher education, case of study: artificial intelligence
| null |
Journal of Intelligent & Fuzzy Systems, vol. 37, no. 6, pp.
8397-8415, 2019
|
10.3233/JIFS-190935
| null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Artificial Intelligence is a central topic in the computer science
curriculum. Since 2011, a project-based learning methodology based on
computer games has been designed and implemented in the artificial
intelligence course at the University of the Bio-Bio. The project aims to
develop software-controlled agents (bots) which are programmed using
heuristic algorithms seen during the course. This methodology allows us to
obtain good learning results; however, several challenges have been found
during its implementation.
In this paper we show how linguistic descriptions of data can help to provide
students and teachers with technical and personalized feedback about the
learned algorithms. An algorithm behavior profile and a new Turing test for
computer game bots based on linguistic modelling of complex phenomena are
also proposed in order to deal with such challenges.
In order to show and explore the possibilities of this new technology, a web
platform has been designed and implemented by one of the authors, and its
incorporation in the assessment process allows us to improve the
teaching-learning process.
|
[
{
"created": "Mon, 27 Nov 2017 15:13:53 GMT",
"version": "v1"
},
{
"created": "Sun, 3 Dec 2017 14:00:27 GMT",
"version": "v2"
},
{
"created": "Tue, 30 Jan 2018 20:00:15 GMT",
"version": "v3"
}
] |
2021-01-07
|
[
[
"Rubio-Manzano",
"Clemente",
""
],
[
"Senoceain",
"Tomas Lermanda",
""
]
] |
Artificial Intelligence is a central topic in the computer science curriculum. Since 2011, a project-based learning methodology based on computer games has been designed and implemented in the artificial intelligence course at the University of the Bio-Bio. The project aims to develop software-controlled agents (bots) which are programmed using heuristic algorithms seen during the course. This methodology allows us to obtain good learning results; however, several challenges have been found during its implementation. In this paper we show how linguistic descriptions of data can help to provide students and teachers with technical and personalized feedback about the learned algorithms. An algorithm behavior profile and a new Turing test for computer game bots based on linguistic modelling of complex phenomena are also proposed in order to deal with such challenges. In order to show and explore the possibilities of this new technology, a web platform has been designed and implemented by one of the authors, and its incorporation in the assessment process allows us to improve the teaching-learning process.
|
2201.13157
|
Hugo Penedones
|
Augusto Peres, Eduardo Dias, Lu\'is Sarmento, Hugo Penedones
|
Equivariant neural networks for recovery of Hadamard matrices
| null | null | null | null |
cs.LG cs.DM
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We propose a message passing neural network architecture designed to be
equivariant to column and row permutations of a matrix. We illustrate its
advantages over traditional architectures like multi-layer perceptrons (MLPs),
convolutional neural networks (CNNs) and even Transformers, on the
combinatorial optimization task of recovering a set of deleted entries of a
Hadamard matrix. We argue that this is a powerful application of the principles
of Geometric Deep Learning to fundamental mathematics, and a potential stepping
stone toward more insights on the Hadamard conjecture using Machine Learning
techniques.
|
[
{
"created": "Mon, 31 Jan 2022 12:07:07 GMT",
"version": "v1"
}
] |
2022-02-01
|
[
[
"Peres",
"Augusto",
""
],
[
"Dias",
"Eduardo",
""
],
[
"Sarmento",
"Luís",
""
],
[
"Penedones",
"Hugo",
""
]
] |
We propose a message passing neural network architecture designed to be equivariant to column and row permutations of a matrix. We illustrate its advantages over traditional architectures like multi-layer perceptrons (MLPs), convolutional neural networks (CNNs) and even Transformers, on the combinatorial optimization task of recovering a set of deleted entries of a Hadamard matrix. We argue that this is a powerful application of the principles of Geometric Deep Learning to fundamental mathematics, and a potential stepping stone toward more insights on the Hadamard conjecture using Machine Learning techniques.
|
2311.04813
|
Hubert Baniecki
|
Hubert Baniecki, Maciej Chrabaszcz, Andreas Holzinger, Bastian
Pfeifer, Anna Saranti, Przemyslaw Biecek
|
Be Careful When Evaluating Explanations Regarding Ground Truth
| null | null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Evaluating explanations of image classifiers regarding ground truth, e.g.
segmentation masks defined by human perception, primarily evaluates the quality
of the models under consideration rather than the explanation methods
themselves. Driven by this observation, we propose a framework for
$\textit{jointly}$ evaluating the robustness of safety-critical systems that
$\textit{combine}$ a deep neural network with an explanation method. These are
increasingly used in real-world applications like medical image analysis or
robotics. We introduce a fine-tuning procedure to (mis)align
model$\unicode{x2013}$explanation pipelines with ground truth and use it to
quantify the potential discrepancy between worst and best-case scenarios of
human alignment. Experiments across various model architectures and post-hoc
local interpretation methods provide insights into the robustness of vision
transformers and the overall vulnerability of such AI systems to potential
adversarial attacks.
|
[
{
"created": "Wed, 8 Nov 2023 16:39:13 GMT",
"version": "v1"
}
] |
2023-11-09
|
[
[
"Baniecki",
"Hubert",
""
],
[
"Chrabaszcz",
"Maciej",
""
],
[
"Holzinger",
"Andreas",
""
],
[
"Pfeifer",
"Bastian",
""
],
[
"Saranti",
"Anna",
""
],
[
"Biecek",
"Przemyslaw",
""
]
] |
Evaluating explanations of image classifiers regarding ground truth, e.g. segmentation masks defined by human perception, primarily evaluates the quality of the models under consideration rather than the explanation methods themselves. Driven by this observation, we propose a framework for $\textit{jointly}$ evaluating the robustness of safety-critical systems that $\textit{combine}$ a deep neural network with an explanation method. These are increasingly used in real-world applications like medical image analysis or robotics. We introduce a fine-tuning procedure to (mis)align model$\unicode{x2013}$explanation pipelines with ground truth and use it to quantify the potential discrepancy between worst and best-case scenarios of human alignment. Experiments across various model architectures and post-hoc local interpretation methods provide insights into the robustness of vision transformers and the overall vulnerability of such AI systems to potential adversarial attacks.
|
1606.05995
|
Manuel Peuster
|
Manuel Peuster, Holger Karl and Steven van Rossem
|
MeDICINE: Rapid Prototyping of Production-Ready Network Services in
Multi-PoP Environments
|
6 pages, pre-print
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Virtualized network services consisting of multiple individual network
functions are already deployed today across multiple sites, so-called
multi-PoP (points of presence) environments. This allows service performance
to be improved by optimizing the service's placement in the network. But
prototyping and testing of these complex distributed software systems becomes
extremely challenging. The reason is that not only the network service as such
has to be tested but also its integration with management and orchestration
systems. Existing solutions, like simulators, basic network emulators, or
local cloud testbeds, do not support all aspects of these tasks. To this end,
we introduce MeDICINE, a novel NFV prototyping platform that is able to
execute production-ready network functions, provided as software containers,
in an emulated multi-PoP environment. These network functions can be
controlled by any third-party management and orchestration system that
connects to our platform through standard interfaces. Based on this, a
developer can use our platform to prototype and test complex network services
in a realistic environment running on his laptop.
|
[
{
"created": "Mon, 20 Jun 2016 07:22:12 GMT",
"version": "v1"
},
{
"created": "Thu, 29 Sep 2016 08:09:42 GMT",
"version": "v2"
}
] |
2016-09-30
|
[
[
"Peuster",
"Manuel",
""
],
[
"Karl",
"Holger",
""
],
[
"van Rossem",
"Steven",
""
]
] |
Virtualized network services consisting of multiple individual network functions are already deployed today across multiple sites, so-called multi-PoP (points of presence) environments. This allows service performance to be improved by optimizing the service's placement in the network. But prototyping and testing of these complex distributed software systems becomes extremely challenging. The reason is that not only the network service as such has to be tested but also its integration with management and orchestration systems. Existing solutions, like simulators, basic network emulators, or local cloud testbeds, do not support all aspects of these tasks. To this end, we introduce MeDICINE, a novel NFV prototyping platform that is able to execute production-ready network functions, provided as software containers, in an emulated multi-PoP environment. These network functions can be controlled by any third-party management and orchestration system that connects to our platform through standard interfaces. Based on this, a developer can use our platform to prototype and test complex network services in a realistic environment running on his laptop.
|
1906.01795
|
Ningning Zhao
|
Ningning Zhao, Nuo Tong, Dan Ruan and Ke Sheng
|
Fully Automated Pancreas Segmentation with Two-stage 3D Convolutional
Neural Networks
|
This paper has been accepted by MICCAI 2019
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Because the pancreas is an abdominal organ with very large variations in shape
and size, automatic and accurate pancreas segmentation can be challenging for
medical image analysis. In this work, we propose a fully automated two-stage
framework for pancreas segmentation based on convolutional neural networks
(CNNs). In the first stage, a U-Net is trained for down-sampled 3D volume
segmentation. Then a candidate region covering the pancreas is extracted from
the estimated labels. Motivated by the superior performance reported by
renowned region-based CNNs, in the second stage, another 3D U-Net is trained
on the candidate region generated in the first stage. We evaluated the
performance of the proposed method on the NIH computed tomography (CT) dataset
and verified its superiority over other state-of-the-art 2D and 3D approaches
for pancreas segmentation in terms of Dice-Sorensen coefficient (DSC) accuracy
in testing. The mean DSC of the proposed method is 85.99%.
|
[
{
"created": "Wed, 5 Jun 2019 02:48:24 GMT",
"version": "v1"
},
{
"created": "Thu, 25 Jul 2019 22:29:47 GMT",
"version": "v2"
}
] |
2019-07-29
|
[
[
"Zhao",
"Ningning",
""
],
[
"Tong",
"Nuo",
""
],
[
"Ruan",
"Dan",
""
],
[
"Sheng",
"Ke",
""
]
] |
Because the pancreas is an abdominal organ with very large variations in shape and size, automatic and accurate pancreas segmentation can be challenging for medical image analysis. In this work, we propose a fully automated two-stage framework for pancreas segmentation based on convolutional neural networks (CNNs). In the first stage, a U-Net is trained for down-sampled 3D volume segmentation. Then a candidate region covering the pancreas is extracted from the estimated labels. Motivated by the superior performance reported by renowned region-based CNNs, in the second stage, another 3D U-Net is trained on the candidate region generated in the first stage. We evaluated the performance of the proposed method on the NIH computed tomography (CT) dataset and verified its superiority over other state-of-the-art 2D and 3D approaches for pancreas segmentation in terms of Dice-Sorensen coefficient (DSC) accuracy in testing. The mean DSC of the proposed method is 85.99%.
|
2309.16553
|
Yixuan Li
|
Yixuan Li, Lihan Jiang, Linning Xu, Yuanbo Xiangli, Zhenzhi Wang,
Dahua Lin, Bo Dai
|
MatrixCity: A Large-scale City Dataset for City-scale Neural Rendering
and Beyond
|
Accepted to ICCV 2023. Project page:
$\href{https://city-super.github.io/matrixcity/}{this\, https\, URL}$
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neural radiance fields (NeRF) and its subsequent variants have led to
remarkable progress in neural rendering. While most recent neural rendering
works focus on objects and small-scale scenes, developing neural rendering
methods for city-scale scenes holds great potential for many real-world
applications. However, this line of research is impeded by the absence of a
comprehensive and high-quality dataset, yet collecting such a dataset over real
city-scale scenes is costly, sensitive, and technically difficult. To this end,
we build a large-scale, comprehensive, and high-quality synthetic dataset for
city-scale neural rendering research. Leveraging the Unreal Engine 5 City
Sample project, we develop a pipeline to easily collect aerial and street city
views, accompanied by ground-truth camera poses and a range of additional data
modalities. Flexible controls over environmental factors like lighting,
weather, and human and car crowds are also available in our pipeline,
supporting the needs of various tasks covering city-scale neural rendering and
beyond. The resulting pilot dataset, MatrixCity, contains 67k aerial images
and 452k street images from two city maps of total size $28km^2$. On top of
MatrixCity, a thorough benchmark is also conducted, which not only reveals
unique challenges of the task of city-scale neural rendering, but also
highlights potential improvements for future work. The dataset and code will
be publicly available at our project page:
https://city-super.github.io/matrixcity/.
|
[
{
"created": "Thu, 28 Sep 2023 16:06:02 GMT",
"version": "v1"
}
] |
2023-09-29
|
[
[
"Li",
"Yixuan",
""
],
[
"Jiang",
"Lihan",
""
],
[
"Xu",
"Linning",
""
],
[
"Xiangli",
"Yuanbo",
""
],
[
"Wang",
"Zhenzhi",
""
],
[
"Lin",
"Dahua",
""
],
[
"Dai",
"Bo",
""
]
] |
Neural radiance fields (NeRF) and its subsequent variants have led to remarkable progress in neural rendering. While most recent neural rendering works focus on objects and small-scale scenes, developing neural rendering methods for city-scale scenes holds great potential for many real-world applications. However, this line of research is impeded by the absence of a comprehensive and high-quality dataset, yet collecting such a dataset over real city-scale scenes is costly, sensitive, and technically difficult. To this end, we build a large-scale, comprehensive, and high-quality synthetic dataset for city-scale neural rendering research. Leveraging the Unreal Engine 5 City Sample project, we develop a pipeline to easily collect aerial and street city views, accompanied by ground-truth camera poses and a range of additional data modalities. Flexible controls over environmental factors like lighting, weather, and human and car crowds are also available in our pipeline, supporting the needs of various tasks covering city-scale neural rendering and beyond. The resulting pilot dataset, MatrixCity, contains 67k aerial images and 452k street images from two city maps of total size $28km^2$. On top of MatrixCity, a thorough benchmark is also conducted, which not only reveals unique challenges of the task of city-scale neural rendering, but also highlights potential improvements for future work. The dataset and code will be publicly available at our project page: https://city-super.github.io/matrixcity/.
|
2403.15646
|
Nandhini Swaminathan
|
Nandhini Swaminathan, David Danks
|
Application of the NIST AI Risk Management Framework to Surveillance
Technology
|
14 pages, 2 figures
| null | null | null |
cs.CY cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
This study offers an in-depth analysis of the application and implications of
the National Institute of Standards and Technology's AI Risk Management
Framework (NIST AI RMF) within the domain of surveillance technologies,
particularly facial recognition technology. Given the inherently high-risk and
consequential nature of facial recognition systems, our research emphasizes the
critical need for a structured approach to risk management in this sector. The
paper presents a detailed case study demonstrating the utility of the NIST AI
RMF in identifying and mitigating risks that might otherwise remain unnoticed
in these technologies. Our primary objective is to develop a comprehensive risk
management strategy that advances the practice of responsible AI utilization in
feasible, scalable ways. We propose a six-step process tailored to the specific
challenges of surveillance technology that aims to produce a more systematic
and effective risk management practice. This process emphasizes continual
assessment and improvement to facilitate companies in managing AI-related risks
more robustly and ensuring ethical and responsible deployment of AI systems.
Additionally, our analysis uncovers and discusses critical gaps in the current
framework of the NIST AI RMF, particularly concerning its application to
surveillance technologies. These insights contribute to the evolving discourse
on AI governance and risk management, highlighting areas for future refinement
and development in frameworks like the NIST AI RMF.
|
[
{
"created": "Fri, 22 Mar 2024 23:07:11 GMT",
"version": "v1"
}
] |
2024-05-01
|
[
[
"Swaminathan",
"Nandhini",
""
],
[
"Danks",
"David",
""
]
] |
This study offers an in-depth analysis of the application and implications of the National Institute of Standards and Technology's AI Risk Management Framework (NIST AI RMF) within the domain of surveillance technologies, particularly facial recognition technology. Given the inherently high-risk and consequential nature of facial recognition systems, our research emphasizes the critical need for a structured approach to risk management in this sector. The paper presents a detailed case study demonstrating the utility of the NIST AI RMF in identifying and mitigating risks that might otherwise remain unnoticed in these technologies. Our primary objective is to develop a comprehensive risk management strategy that advances the practice of responsible AI utilization in feasible, scalable ways. We propose a six-step process tailored to the specific challenges of surveillance technology that aims to produce a more systematic and effective risk management practice. This process emphasizes continual assessment and improvement to facilitate companies in managing AI-related risks more robustly and ensuring ethical and responsible deployment of AI systems. Additionally, our analysis uncovers and discusses critical gaps in the current framework of the NIST AI RMF, particularly concerning its application to surveillance technologies. These insights contribute to the evolving discourse on AI governance and risk management, highlighting areas for future refinement and development in frameworks like the NIST AI RMF.
|
2401.13460
|
Mikayel Samvelyan
|
Mikayel Samvelyan, Davide Paglieri, Minqi Jiang, Jack Parker-Holder,
Tim Rockt\"aschel
|
Multi-Agent Diagnostics for Robustness via Illuminated Diversity
| null | null | null | null |
cs.LG cs.AI cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the rapidly advancing field of multi-agent systems, ensuring robustness in
unfamiliar and adversarial settings is crucial. Notwithstanding their
outstanding performance in familiar environments, these systems often falter in
new situations due to overfitting during the training phase. This is especially
pronounced in settings where both cooperative and competitive behaviours are
present, encapsulating a dual nature of overfitting and generalisation
challenges. To address this issue, we present Multi-Agent Diagnostics for
Robustness via Illuminated Diversity (MADRID), a novel approach for generating
diverse adversarial scenarios that expose strategic vulnerabilities in
pre-trained multi-agent policies. Leveraging the concepts from open-ended
learning, MADRID navigates the vast space of adversarial settings, employing a
target policy's regret to gauge the vulnerabilities of these settings. We
evaluate the effectiveness of MADRID on the 11vs11 version of Google Research
Football, one of the most complex environments for multi-agent reinforcement
learning. Specifically, we employ MADRID for generating a diverse array of
adversarial settings for TiZero, the state-of-the-art approach which "masters"
the game through 45 days of training on a large-scale distributed
infrastructure. We expose key shortcomings in TiZero's tactical
decision-making, underlining the crucial importance of rigorous evaluation in
multi-agent systems.
|
[
{
"created": "Wed, 24 Jan 2024 14:02:09 GMT",
"version": "v1"
},
{
"created": "Thu, 28 Mar 2024 22:24:30 GMT",
"version": "v2"
}
] |
2024-04-01
|
[
[
"Samvelyan",
"Mikayel",
""
],
[
"Paglieri",
"Davide",
""
],
[
"Jiang",
"Minqi",
""
],
[
"Parker-Holder",
"Jack",
""
],
[
"Rocktäschel",
"Tim",
""
]
] |
In the rapidly advancing field of multi-agent systems, ensuring robustness in unfamiliar and adversarial settings is crucial. Notwithstanding their outstanding performance in familiar environments, these systems often falter in new situations due to overfitting during the training phase. This is especially pronounced in settings where both cooperative and competitive behaviours are present, encapsulating a dual nature of overfitting and generalisation challenges. To address this issue, we present Multi-Agent Diagnostics for Robustness via Illuminated Diversity (MADRID), a novel approach for generating diverse adversarial scenarios that expose strategic vulnerabilities in pre-trained multi-agent policies. Leveraging the concepts from open-ended learning, MADRID navigates the vast space of adversarial settings, employing a target policy's regret to gauge the vulnerabilities of these settings. We evaluate the effectiveness of MADRID on the 11vs11 version of Google Research Football, one of the most complex environments for multi-agent reinforcement learning. Specifically, we employ MADRID for generating a diverse array of adversarial settings for TiZero, the state-of-the-art approach which "masters" the game through 45 days of training on a large-scale distributed infrastructure. We expose key shortcomings in TiZero's tactical decision-making, underlining the crucial importance of rigorous evaluation in multi-agent systems.
|
2112.01695
|
Xiang Li
|
Xiang Li, Jinglu Wang, Xiao Li, Yan Lu
|
Hybrid Instance-aware Temporal Fusion for Online Video Instance
Segmentation
|
AAAI 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recently, transformer-based image segmentation methods have achieved notable
success against previous solutions. While for video domains, how to effectively
model temporal context with the attention of object instances across frames
remains an open problem. In this paper, we propose an online video instance
segmentation framework with a novel instance-aware temporal fusion method. We
first leverage a representation, i.e., a latent code in the global context
(instance code) and CNN feature maps, to represent instance- and pixel-level
features. Based on this representation, we introduce a cropping-free temporal
fusion approach to model the temporal consistency between video frames.
Specifically, we encode global instance-specific information in the instance
code and build up inter-frame contextual fusion with hybrid attentions between
the instance codes and CNN feature maps. Inter-frame consistency between the
instance codes is further enforced with order constraints. By leveraging the
learned hybrid temporal consistency, we are able to directly retrieve and
maintain instance identities across frames, eliminating the complicated
frame-wise instance matching in prior methods. Extensive experiments have been
conducted on popular VIS datasets, i.e. Youtube-VIS-19/21. Our model achieves
the best performance among all online VIS methods. Notably, our model also
eclipses all offline methods when using the ResNet-50 backbone.
|
[
{
"created": "Fri, 3 Dec 2021 03:37:57 GMT",
"version": "v1"
},
{
"created": "Mon, 6 Jun 2022 18:03:08 GMT",
"version": "v2"
}
] |
2022-06-08
|
[
[
"Li",
"Xiang",
""
],
[
"Wang",
"Jinglu",
""
],
[
"Li",
"Xiao",
""
],
[
"Lu",
"Yan",
""
]
] |
Recently, transformer-based image segmentation methods have achieved notable success against previous solutions. While for video domains, how to effectively model temporal context with the attention of object instances across frames remains an open problem. In this paper, we propose an online video instance segmentation framework with a novel instance-aware temporal fusion method. We first leverage a representation, i.e., a latent code in the global context (instance code) and CNN feature maps, to represent instance- and pixel-level features. Based on this representation, we introduce a cropping-free temporal fusion approach to model the temporal consistency between video frames. Specifically, we encode global instance-specific information in the instance code and build up inter-frame contextual fusion with hybrid attentions between the instance codes and CNN feature maps. Inter-frame consistency between the instance codes is further enforced with order constraints. By leveraging the learned hybrid temporal consistency, we are able to directly retrieve and maintain instance identities across frames, eliminating the complicated frame-wise instance matching in prior methods. Extensive experiments have been conducted on popular VIS datasets, i.e. Youtube-VIS-19/21. Our model achieves the best performance among all online VIS methods. Notably, our model also eclipses all offline methods when using the ResNet-50 backbone.
|
2208.01876
|
Md Shahriar Tasjid
|
Md Shahriar Tasjid, Ahmed Al Marouf
|
Leveraging Smartphone Sensors for Detecting Abnormal Gait for Smart
Wearable Mobile Technologies
| null |
International Journal of Interactive Mobile Technologies (iJIM);
Volume 15 Number 24; Page 167-175; 2021
|
10.3991/ijim.v15i24.25891.
| null |
cs.HC cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Walking is one of the most common modes of terrestrial locomotion for humans
and is essential for performing most kinds of daily activities. When a person
walks, there is a pattern in it, known as gait. Gait analysis is used in
sports and healthcare. Gait can be analyzed in different ways, for example
using video captured by surveillance cameras or depth-image cameras in a lab
environment. It can also be recognized by wearable sensors, e.g.,
accelerometers, force sensors, gyroscopes, flexible goniometers,
magnetoresistive sensors, electromagnetic tracking systems, and
electromyography (EMG). Analysis through these sensors requires a lab setting,
or users must wear the sensors. For detecting abnormality in a person's gait,
the sensors need to be incorporated separately. Detecting abnormal gait can
reveal a person's health condition, and understanding regular vs. abnormal
gait may give insights into the health of the subject using smart wearable
technologies. Therefore, in this paper, we propose a way to analyze abnormal
human gait through smartphone sensors. Since smart devices like smartphones
and smartwatches are used by most people nowadays, we can track their gait
using the sensors of these intelligent wearable devices.
|
[
{
"created": "Wed, 3 Aug 2022 07:00:16 GMT",
"version": "v1"
}
] |
2022-08-04
|
[
[
"Tasjid",
"Md Shahriar",
""
],
[
"Marouf",
"Ahmed Al",
""
]
] |
Walking is one of the most common modes of terrestrial locomotion for humans and is essential for performing most kinds of daily activities. When a person walks, there is a pattern in it, known as gait. Gait analysis is used in sports and healthcare. Gait can be analyzed in different ways, for example using video captured by surveillance cameras or depth-image cameras in a lab environment. It can also be recognized by wearable sensors, e.g., accelerometers, force sensors, gyroscopes, flexible goniometers, magnetoresistive sensors, electromagnetic tracking systems, and electromyography (EMG). Analysis through these sensors requires a lab setting, or users must wear the sensors. For detecting abnormality in a person's gait, the sensors need to be incorporated separately. Detecting abnormal gait can reveal a person's health condition, and understanding regular vs. abnormal gait may give insights into the health of the subject using smart wearable technologies. Therefore, in this paper, we propose a way to analyze abnormal human gait through smartphone sensors. Since smart devices like smartphones and smartwatches are used by most people nowadays, we can track their gait using the sensors of these intelligent wearable devices.
|
2007.00849
|
Haitian Sun
|
Pat Verga, Haitian Sun, Livio Baldini Soares, William W. Cohen
|
Facts as Experts: Adaptable and Interpretable Neural Memory over
Symbolic Knowledge
| null | null | null | null |
cs.CL cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Massive language models are the core of modern NLP modeling and have been
shown to encode impressive amounts of commonsense and factual information.
However, that knowledge exists only within the latent parameters of the model,
inaccessible to inspection and interpretation, and even worse, factual
information memorized from the training corpora is likely to become stale as
the world changes. Knowledge stored as parameters will also inevitably exhibit
all of the biases inherent in the source materials. To address these problems,
we develop a neural language model that includes an explicit interface between
symbolically interpretable factual information and subsymbolic neural
knowledge. We show that this model dramatically improves performance on two
knowledge-intensive question-answering tasks. More interestingly, the model can
be updated without re-training by manipulating its symbolic representations. In
particular this model allows us to add new facts and overwrite existing ones in
ways that are not possible for earlier models.
|
[
{
"created": "Thu, 2 Jul 2020 03:05:41 GMT",
"version": "v1"
}
] |
2020-07-03
|
[
[
"Verga",
"Pat",
""
],
[
"Sun",
"Haitian",
""
],
[
"Soares",
"Livio Baldini",
""
],
[
"Cohen",
"William W.",
""
]
] |
Massive language models are the core of modern NLP modeling and have been shown to encode impressive amounts of commonsense and factual information. However, that knowledge exists only within the latent parameters of the model, inaccessible to inspection and interpretation, and even worse, factual information memorized from the training corpora is likely to become stale as the world changes. Knowledge stored as parameters will also inevitably exhibit all of the biases inherent in the source materials. To address these problems, we develop a neural language model that includes an explicit interface between symbolically interpretable factual information and subsymbolic neural knowledge. We show that this model dramatically improves performance on two knowledge-intensive question-answering tasks. More interestingly, the model can be updated without re-training by manipulating its symbolic representations. In particular this model allows us to add new facts and overwrite existing ones in ways that are not possible for earlier models.
|
1707.06598
|
Navid Rekabsaz
|
Navid Rekabsaz and Bhaskar Mitra and Mihai Lupu and Allan Hanbury
|
Toward Incorporation of Relevant Documents in word2vec
|
Neu-IR Workshop at the ACM Conference on Research and Development in
Information Retrieval (NeuIR-SIGIR 2017)
| null | null | null |
cs.IR cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent advances in neural word embedding provide significant benefit to
various information retrieval tasks. However, as shown by recent studies,
adapting the embedding models for the needs of IR tasks can bring considerable
further improvements. The embedding models in general define the term
relatedness by exploiting the terms' co-occurrences in short-window contexts.
An alternative (and well-studied) approach in IR for related terms to a query
is using local information, i.e., a set of top-retrieved documents. In view of
these two methods of term relatedness, in this work, we report our study on
incorporating the local information of the query in the word embeddings. One
main challenge in this direction is that the dense vectors of word embeddings
and their estimation of term-to-term relatedness remain difficult to interpret
and hard to analyze. As an alternative, explicit word representations propose
vectors whose dimensions are easily interpretable, and recent methods show
competitive performance to the dense vectors. We introduce a neural-based
explicit representation, rooted in the conceptual ideas of the word2vec
Skip-Gram model. The method provides interpretable explicit vectors while
keeping the effectiveness of the Skip-Gram model. The evaluation of various
explicit representations on word association collections shows that the newly
proposed method outperforms the state-of-the-art explicit representations when
tasked with ranking highly similar terms. Based on the introduced explicit
representation, we discuss our approaches to integrating local documents into
globally-trained embedding models and discuss the preliminary results.
|
[
{
"created": "Thu, 20 Jul 2017 16:33:48 GMT",
"version": "v1"
},
{
"created": "Wed, 4 Apr 2018 07:36:05 GMT",
"version": "v2"
}
] |
2018-04-05
|
[
[
"Rekabsaz",
"Navid",
""
],
[
"Mitra",
"Bhaskar",
""
],
[
"Lupu",
"Mihai",
""
],
[
"Hanbury",
"Allan",
""
]
] |
Recent advances in neural word embedding provide significant benefit to various information retrieval tasks. However, as shown by recent studies, adapting the embedding models for the needs of IR tasks can bring considerable further improvements. The embedding models in general define the term relatedness by exploiting the terms' co-occurrences in short-window contexts. An alternative (and well-studied) approach in IR for related terms to a query is using local information, i.e., a set of top-retrieved documents. In view of these two methods of term relatedness, in this work, we report our study on incorporating the local information of the query in the word embeddings. One main challenge in this direction is that the dense vectors of word embeddings and their estimation of term-to-term relatedness remain difficult to interpret and hard to analyze. As an alternative, explicit word representations propose vectors whose dimensions are easily interpretable, and recent methods show competitive performance to the dense vectors. We introduce a neural-based explicit representation, rooted in the conceptual ideas of the word2vec Skip-Gram model. The method provides interpretable explicit vectors while keeping the effectiveness of the Skip-Gram model. The evaluation of various explicit representations on word association collections shows that the newly proposed method outperforms the state-of-the-art explicit representations when tasked with ranking highly similar terms. Based on the introduced explicit representation, we discuss our approaches to integrating local documents into globally-trained embedding models and discuss the preliminary results.
|
2206.03254
|
Peng He
|
Peng He
|
Demystifying the Global Convergence Puzzle of Learning
Over-parameterized ReLU Nets in Very High Dimensions
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This theoretical paper is devoted to developing a rigorous theory for
demystifying the global convergence phenomenon in a challenging scenario:
learning over-parameterized Rectified Linear Unit (ReLU) nets for very high
dimensional dataset under very mild assumptions. A major ingredient of our
analysis is a fine-grained analysis of random activation matrices. The
essential virtue of dissecting activation matrices is that it bridges the
dynamics of optimization and angular distribution in high-dimensional data
space. This angle-based detailed analysis leads to asymptotic characterizations
of gradient norm and directional curvature of objective function at each
gradient descent iteration, revealing that the empirical loss function enjoys
nice geometrical properties in the overparameterized setting. Along the way, we
significantly improve existing theoretical bounds on both over-parameterization
condition and learning rate with very mild assumptions for learning very high
dimensional data. Moreover, we uncover the role of the geometrical and spectral
properties of the input data in determining desired over-parameterization size
and global convergence rate. All these clues allow us to discover a novel
geometric picture of nonconvex optimization in deep learning: angular
distribution in high-dimensional data space $\mapsto$ spectrums of
overparameterized activation matrices $\mapsto$ favorable geometrical
properties of empirical loss landscape $\mapsto$ global convergence phenomenon.
Furthermore, our theoretical results imply that gradient-based nonconvex
optimization algorithms have much stronger statistical guarantees with much
milder over-parameterization condition than existing theory states for
learning very high dimensional data, which is rarely explored so far.
|
[
{
"created": "Sun, 5 Jun 2022 02:14:21 GMT",
"version": "v1"
}
] |
2022-06-08
|
[
[
"He",
"Peng",
""
]
] |
This theoretical paper is devoted to developing a rigorous theory for demystifying the global convergence phenomenon in a challenging scenario: learning over-parameterized Rectified Linear Unit (ReLU) nets for very high dimensional dataset under very mild assumptions. A major ingredient of our analysis is a fine-grained analysis of random activation matrices. The essential virtue of dissecting activation matrices is that it bridges the dynamics of optimization and angular distribution in high-dimensional data space. This angle-based detailed analysis leads to asymptotic characterizations of gradient norm and directional curvature of objective function at each gradient descent iteration, revealing that the empirical loss function enjoys nice geometrical properties in the overparameterized setting. Along the way, we significantly improve existing theoretical bounds on both over-parameterization condition and learning rate with very mild assumptions for learning very high dimensional data. Moreover, we uncover the role of the geometrical and spectral properties of the input data in determining desired over-parameterization size and global convergence rate. All these clues allow us to discover a novel geometric picture of nonconvex optimization in deep learning: angular distribution in high-dimensional data space $\mapsto$ spectrums of overparameterized activation matrices $\mapsto$ favorable geometrical properties of empirical loss landscape $\mapsto$ global convergence phenomenon. Furthermore, our theoretical results imply that gradient-based nonconvex optimization algorithms have much stronger statistical guarantees with much milder over-parameterization condition than existing theory states for learning very high dimensional data, which is rarely explored so far.
|
2001.11758
|
Beno\^it Sohet
|
Beno\^it Sohet, Yezekael Hayel, Olivier Beaude, and Alban Jeandin
|
Coupled Charging-and-Driving Incentives Design for Electric Vehicles in
Urban Networks
|
11 pages, 8 figures, submitted to IEEE Transactions on Intelligent
Transportation Systems
| null | null | null |
cs.GT math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Electric Vehicles (EV) impact urban networks both when driving (e.g., noise
and pollution reduction) and charging. For the electrical grid, the flexibility
of EV charging makes it a significant actor in "Demand Response" mechanisms.
Therefore, there is a need to design incentive mechanisms to foster customer
engagement. A congestion game approach is adopted to evaluate the performance
of such an electrical transportation system with multiple classes of vehicles: EV
and Gasoline Vehicles. Both temporal and energy operating costs are considered.
The latter is nonseparable as it depends on the global charging need of all EV,
which is scheduled in time by a centralized aggregator as a function of
nonflexible consumption at charging location. Thus, driving and charging
decisions are coupled. An adaptation of Beckmann's method proves the existence
of a Wardrop Equilibrium (WE) in the considered nonseparable congestion game;
this WE is unique when the charging unit price is an increasing function of the
global charging need. A condition on the nonflexible load is given to guarantee
the monotonicity of this function. This condition is tested on real consumption
data in France and in Texas, USA. Optimal tolls are used to control this
electrical transportation system and then computed in order to minimize an
environmental cost on a simple network topology.
|
[
{
"created": "Fri, 31 Jan 2020 10:42:02 GMT",
"version": "v1"
}
] |
2020-02-03
|
[
[
"Sohet",
"Benoît",
""
],
[
"Hayel",
"Yezekael",
""
],
[
"Beaude",
"Olivier",
""
],
[
"Jeandin",
"Alban",
""
]
] |
Electric Vehicles (EV) impact urban networks both when driving (e.g., noise and pollution reduction) and charging. For the electrical grid, the flexibility of EV charging makes it a significant actor in "Demand Response" mechanisms. Therefore, there is a need to design incentive mechanisms to foster customer engagement. A congestion game approach is adopted to evaluate the performance of such an electrical transportation system with multiple classes of vehicles: EV and Gasoline Vehicles. Both temporal and energy operating costs are considered. The latter is nonseparable as it depends on the global charging need of all EV, which is scheduled in time by a centralized aggregator as a function of nonflexible consumption at charging location. Thus, driving and charging decisions are coupled. An adaptation of Beckmann's method proves the existence of a Wardrop Equilibrium (WE) in the considered nonseparable congestion game; this WE is unique when the charging unit price is an increasing function of the global charging need. A condition on the nonflexible load is given to guarantee the monotonicity of this function. This condition is tested on real consumption data in France and in Texas, USA. Optimal tolls are used to control this electrical transportation system and then computed in order to minimize an environmental cost on a simple network topology.
|
1903.07820
|
Purva Tendulkar
|
Purva Tendulkar, Kalpesh Krishna, Ramprasaath R. Selvaraju, Devi
Parikh
|
Trick or TReAT: Thematic Reinforcement for Artistic Typography
|
9 pages
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An approach to make text visually appealing and memorable is semantic
reinforcement - the use of visual cues alluding to the context or theme in
which the word is being used to reinforce the message (e.g., Google Doodles).
We present a computational approach for semantic reinforcement called TReAT -
Thematic Reinforcement for Artistic Typography. Given an input word (e.g. exam)
and a theme (e.g. education), the individual letters of the input word are
replaced by cliparts relevant to the theme which visually resemble the letters
- adding creative context to the potentially boring input word. We use an
unsupervised approach to learn a latent space to represent letters and cliparts
and compute similarities between the two. Human studies show that participants
can reliably recognize the word as well as the theme in our outputs (TReATs)
and find them more creative compared to meaningful baselines.
|
[
{
"created": "Tue, 19 Mar 2019 04:08:51 GMT",
"version": "v1"
}
] |
2019-03-20
|
[
[
"Tendulkar",
"Purva",
""
],
[
"Krishna",
"Kalpesh",
""
],
[
"Selvaraju",
"Ramprasaath R.",
""
],
[
"Parikh",
"Devi",
""
]
] |
An approach to make text visually appealing and memorable is semantic reinforcement - the use of visual cues alluding to the context or theme in which the word is being used to reinforce the message (e.g., Google Doodles). We present a computational approach for semantic reinforcement called TReAT - Thematic Reinforcement for Artistic Typography. Given an input word (e.g. exam) and a theme (e.g. education), the individual letters of the input word are replaced by cliparts relevant to the theme which visually resemble the letters - adding creative context to the potentially boring input word. We use an unsupervised approach to learn a latent space to represent letters and cliparts and compute similarities between the two. Human studies show that participants can reliably recognize the word as well as the theme in our outputs (TReATs) and find them more creative compared to meaningful baselines.
|
2303.15386
|
Sadegh Arefizadeh
|
Sina Arefizadeh, Sadegh Arefizadeh, S. Rasoul Etesami, Sadegh Bolouki
|
Robustness of Dynamics in Games: A Contraction Mapping Decomposition
Approach
| null | null | null | null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A systematic framework for analyzing dynamical attributes of games has not
been well-studied except for the special class of potential or near-potential
games. In particular, the existing results have shortcomings in determining the
asymptotic behavior of a given dynamic in a designated game. Although there is
a large body of literature on developing convergent dynamics to the Nash
equilibrium (NE) of a game, in general, the asymptotic behavior of an
underlying dynamic may not be even close to a NE. In this paper, we initiate a
new direction towards game dynamics by studying the fundamental properties of
the map of dynamics in games. To this aim, we first decompose the map of a
given dynamic into contractive and non-contractive parts and then explore the
asymptotic behavior of those dynamics using the proximity of such decomposition
to contraction mappings. In particular, we analyze the non-contractive behavior
for better/best response dynamics in discrete-action space sequential/repeated
games and show that the non-contractive part of those dynamics is well-behaved
in a certain sense. That allows us to estimate the asymptotic behavior of such
dynamics using a neighborhood around the fixed point of their contractive part
proxy. Finally, we demonstrate the practicality of our framework via an example
from duopoly Cournot games.
|
[
{
"created": "Mon, 27 Mar 2023 16:59:36 GMT",
"version": "v1"
}
] |
2023-03-28
|
[
[
"Arefizadeh",
"Sina",
""
],
[
"Arefizadeh",
"Sadegh",
""
],
[
"Etesami",
"S. Rasoul",
""
],
[
"Bolouki",
"Sadegh",
""
]
] |
A systematic framework for analyzing dynamical attributes of games has not been well-studied except for the special class of potential or near-potential games. In particular, the existing results have shortcomings in determining the asymptotic behavior of a given dynamic in a designated game. Although there is a large body of literature on developing convergent dynamics to the Nash equilibrium (NE) of a game, in general, the asymptotic behavior of an underlying dynamic may not be even close to a NE. In this paper, we initiate a new direction towards game dynamics by studying the fundamental properties of the map of dynamics in games. To this aim, we first decompose the map of a given dynamic into contractive and non-contractive parts and then explore the asymptotic behavior of those dynamics using the proximity of such decomposition to contraction mappings. In particular, we analyze the non-contractive behavior for better/best response dynamics in discrete-action space sequential/repeated games and show that the non-contractive part of those dynamics is well-behaved in a certain sense. That allows us to estimate the asymptotic behavior of such dynamics using a neighborhood around the fixed point of their contractive part proxy. Finally, we demonstrate the practicality of our framework via an example from duopoly Cournot games.
|
1412.6334
|
Hubert Soyer
|
Hubert Soyer and Pontus Stenetorp and Akiko Aizawa
|
Leveraging Monolingual Data for Crosslingual Compositional Word
Representations
| null | null | null | null |
cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
In this work, we present a novel neural network based architecture for
inducing compositional crosslingual word representations. Unlike previously
proposed methods, our method fulfills the following three criteria; it
constrains the word-level representations to be compositional, it is capable of
leveraging both bilingual and monolingual data, and it is scalable to large
vocabularies and large quantities of data. The key component of our approach is
what we refer to as a monolingual inclusion criterion, that exploits the
observation that phrases are more closely semantically related to their
sub-phrases than to other randomly sampled phrases. We evaluate our method on a
well-established crosslingual document classification task and achieve results
that are either comparable, or greatly improve upon previous state-of-the-art
methods. Concretely, our method reaches a level of 92.7% and 84.4% accuracy for
the English to German and German to English sub-tasks respectively. The former
advances the state of the art by 0.9% points of accuracy, the latter is an
absolute improvement upon the previous state of the art by 7.7% points of
accuracy and an improvement of 33.0% in error reduction.
|
[
{
"created": "Fri, 19 Dec 2014 13:23:35 GMT",
"version": "v1"
},
{
"created": "Thu, 26 Feb 2015 07:44:39 GMT",
"version": "v2"
},
{
"created": "Tue, 31 Mar 2015 08:03:57 GMT",
"version": "v3"
},
{
"created": "Sat, 22 Aug 2015 15:22:26 GMT",
"version": "v4"
}
] |
2015-08-25
|
[
[
"Soyer",
"Hubert",
""
],
[
"Stenetorp",
"Pontus",
""
],
[
"Aizawa",
"Akiko",
""
]
] |
In this work, we present a novel neural network based architecture for inducing compositional crosslingual word representations. Unlike previously proposed methods, our method fulfills the following three criteria; it constrains the word-level representations to be compositional, it is capable of leveraging both bilingual and monolingual data, and it is scalable to large vocabularies and large quantities of data. The key component of our approach is what we refer to as a monolingual inclusion criterion, that exploits the observation that phrases are more closely semantically related to their sub-phrases than to other randomly sampled phrases. We evaluate our method on a well-established crosslingual document classification task and achieve results that are either comparable, or greatly improve upon previous state-of-the-art methods. Concretely, our method reaches a level of 92.7% and 84.4% accuracy for the English to German and German to English sub-tasks respectively. The former advances the state of the art by 0.9% points of accuracy, the latter is an absolute improvement upon the previous state of the art by 7.7% points of accuracy and an improvement of 33.0% in error reduction.
|
2208.09183
|
Keonghun Choi
|
Keong Hun Choi, Jin Woo Kim, Yao Wang, Jong Eun Ha
|
Improved Image Classification with Token Fusion
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
In this paper, we propose a method using the fusion of CNN and transformer
structure to improve image classification performance. In the case of CNN,
information about a local area on an image can be extracted well, but there is
a limit to the extraction of global information. On the other hand, the
transformer has an advantage in relatively global extraction, but has a
disadvantage in that it requires a lot of memory for local feature value
extraction. In the case of an image, it is converted into a feature map through
CNN, and each feature map's pixel is considered a token. At the same time, the
image is divided into patch areas and then fused with the transformer method
that views them as tokens. For the fusion of tokens with two different
characteristics, we propose three methods: (1) late token fusion with parallel
structure, (2) early token fusion, (3) token fusion in a layer-by-layer manner. In an
experiment using ImageNet 1k, the proposed method shows the best classification
performance.
|
[
{
"created": "Fri, 19 Aug 2022 07:02:50 GMT",
"version": "v1"
}
] |
2022-08-22
|
[
[
"Choi",
"Keong Hun",
""
],
[
"Kim",
"Jin Woo",
""
],
[
"Wang",
"Yao",
""
],
[
"Ha",
"Jong Eun",
""
]
] |
In this paper, we propose a method using the fusion of CNN and transformer structure to improve image classification performance. In the case of CNN, information about a local area on an image can be extracted well, but there is a limit to the extraction of global information. On the other hand, the transformer has an advantage in relatively global extraction, but has a disadvantage in that it requires a lot of memory for local feature value extraction. In the case of an image, it is converted into a feature map through CNN, and each feature map's pixel is considered a token. At the same time, the image is divided into patch areas and then fused with the transformer method that views them as tokens. For the fusion of tokens with two different characteristics, we propose three methods: (1) late token fusion with parallel structure, (2) early token fusion, (3) token fusion in a layer-by-layer manner. In an experiment using ImageNet 1k, the proposed method shows the best classification performance.
|
1307.3164
|
David Coyle Dr
|
David Coyle and Gavin Doherty
|
Supporting Therapeutic Relationships and Communication about Mental
Health
|
4 pages. Presented at the ACM CHI 2013 workshop on Patient-Clinician
Communication
| null | null | null |
cs.HC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Effective communication and strong therapeutic relationships are critical to
successful mental health interventions. For example, in 1957 Carl Rogers, a
pioneer of person-centred therapy, proposed that an empowering relationship
could, in and of itself, create the necessary and sufficient conditions for
positive therapeutic outcomes [1]. Whilst modern psychological theories no
longer favour an exclusive focus on relationships, positive relationships and
the dynamics of client-therapist communication remain cornerstones of mental
health intervention theories. A more recent meta-review concluded that across
all intervention models, irrespective of the theoretical approach, the quality
of the relationship between therapists and clients is the second leading
determinant of successful clinical outcomes [2]. Over the past ten years we
(David Coyle and Gavin Doherty) have designed and evaluated a wide range of
systems that provide support for psychological (or talk-based) mental health
interventions [3]. Here we briefly consider two recent examples. In each case
our aim was to enhance communication and reshape clinical practice in a manner
that empowers patients. gNats Island is a computer game that supports
face-to-face interventions for adolescents [4]. MindBalance is an online
treatment programme for adults experiencing difficulties with depression [5].
|
[
{
"created": "Thu, 11 Jul 2013 16:27:33 GMT",
"version": "v1"
}
] |
2013-07-12
|
[
[
"Coyle",
"David",
""
],
[
"Doherty",
"Gavin",
""
]
] |
Effective communication and strong therapeutic relationships are critical to successful mental health interventions. For example, in 1957 Carl Rogers, a pioneer of person-centred therapy, proposed that an empowering relationship could, in and of itself, create the necessary and sufficient conditions for positive therapeutic outcomes [1]. Whilst modern psychological theories no longer favour an exclusive focus on relationships, positive relationships and the dynamics of client-therapist communication remain cornerstones of mental health intervention theories. A more recent meta-review concluded that across all intervention models, irrespective of the theoretical approach, the quality of the relationship between therapists and clients is the second leading determinant of successful clinical outcomes [2]. Over the past ten years we (David Coyle and Gavin Doherty) have designed and evaluated a wide range of systems that provide support for psychological (or talk-based) mental health interventions [3]. Here we briefly consider two recent examples. In each case our aim was to enhance communication and reshape clinical practice in a manner that empowers patients. gNats Island is a computer game that supports face-to-face interventions for adolescents [4]. MindBalance is an online treatment programme for adults experiencing difficulties with depression [5].
|
2405.00244
|
Yong Shu
|
Yong Shu, Liquan Shen, Xiangyu Hu, Mengyao Li, Zihao Zhou
|
Towards Real-World HDR Video Reconstruction: A Large-Scale Benchmark
Dataset and A Two-Stage Alignment Network
|
This paper has been accepted by CVPR 2024
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As an important and practical way to obtain high dynamic range (HDR) video,
HDR video reconstruction from sequences with alternating exposures is still
less explored, mainly due to the lack of large-scale real-world datasets.
Existing methods are mostly trained on synthetic datasets, which perform poorly
in real scenes. In this work, to facilitate the development of real-world HDR
video reconstruction, we present Real-HDRV, a large-scale real-world benchmark
dataset for HDR video reconstruction, featuring various scenes, diverse motion
patterns, and high-quality labels. Specifically, our dataset contains 500
LDRs-HDRs video pairs, comprising about 28,000 LDR frames and 4,000 HDR labels,
covering daytime, nighttime, indoor, and outdoor scenes. To the best of our
knowledge, our dataset is the largest real-world HDR video reconstruction dataset.
Correspondingly, we propose an end-to-end network for HDR video reconstruction,
where a novel two-stage strategy is designed to perform alignment sequentially.
Specifically, the first stage performs global alignment with the adaptively
estimated global offsets, reducing the difficulty of subsequent alignment. The
second stage implicitly performs local alignment in a coarse-to-fine manner at
the feature level using the adaptive separable convolution. Extensive
experiments demonstrate that: (1) models trained on our dataset can achieve
better performance on real scenes than those trained on synthetic datasets; (2)
our method outperforms previous state-of-the-art methods. Our dataset is
available at https://github.com/yungsyu99/Real-HDRV.
|
[
{
"created": "Tue, 30 Apr 2024 23:29:26 GMT",
"version": "v1"
}
] |
2024-05-02
|
[
[
"Shu",
"Yong",
""
],
[
"Shen",
"Liquan",
""
],
[
"Hu",
"Xiangyu",
""
],
[
"Li",
"Mengyao",
""
],
[
"Zhou",
"Zihao",
""
]
] |
As an important and practical way to obtain high dynamic range (HDR) video, HDR video reconstruction from sequences with alternating exposures is still less explored, mainly due to the lack of large-scale real-world datasets. Existing methods are mostly trained on synthetic datasets, which perform poorly in real scenes. In this work, to facilitate the development of real-world HDR video reconstruction, we present Real-HDRV, a large-scale real-world benchmark dataset for HDR video reconstruction, featuring various scenes, diverse motion patterns, and high-quality labels. Specifically, our dataset contains 500 LDRs-HDRs video pairs, comprising about 28,000 LDR frames and 4,000 HDR labels, covering daytime, nighttime, indoor, and outdoor scenes. To the best of our knowledge, our dataset is the largest real-world HDR video reconstruction dataset. Correspondingly, we propose an end-to-end network for HDR video reconstruction, where a novel two-stage strategy is designed to perform alignment sequentially. Specifically, the first stage performs global alignment with the adaptively estimated global offsets, reducing the difficulty of subsequent alignment. The second stage implicitly performs local alignment in a coarse-to-fine manner at the feature level using the adaptive separable convolution. Extensive experiments demonstrate that: (1) models trained on our dataset can achieve better performance on real scenes than those trained on synthetic datasets; (2) our method outperforms previous state-of-the-art methods. Our dataset is available at https://github.com/yungsyu99/Real-HDRV.
|
2308.13474
|
Prasita Mukherjee
|
Prasita Mukherjee and Haoteng Yin
|
OCTAL: Graph Representation Learning for LTL Model Checking
|
arXiv admin note: substantial text overlap with arXiv:2207.11649
| null | null | null |
cs.LO cs.AI cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Model Checking is widely applied in verifying the correctness of complex and
concurrent systems against a specification. Pure symbolic approaches while
popular, suffer from the state space explosion problem due to cross product
operations required that make them prohibitively expensive for large-scale
systems and/or specifications. In this paper, we propose to use graph
representation learning (GRL) for solving linear temporal logic (LTL) model
checking, where the system and the specification are expressed by a B{\"u}chi
automaton and an LTL formula, respectively. A novel GRL-based framework \model,
is designed to learn the representation of the graph-structured system and
specification, which reduces the model checking problem to binary
classification. Empirical experiments on two model checking scenarios show that
\model achieves promising accuracy, with up to $11\times$ overall speedup
against canonical SOTA model checkers and $31\times$ for satisfiability
checking alone.
|
[
{
"created": "Sat, 19 Aug 2023 15:11:18 GMT",
"version": "v1"
}
] |
2023-08-28
|
[
[
"Mukherjee",
"Prasita",
""
],
[
"Yin",
"Haoteng",
""
]
] |
Model Checking is widely applied in verifying the correctness of complex and concurrent systems against a specification. Pure symbolic approaches, while popular, suffer from the state space explosion problem due to the cross product operations required, which make them prohibitively expensive for large-scale systems and/or specifications. In this paper, we propose to use graph representation learning (GRL) for solving linear temporal logic (LTL) model checking, where the system and the specification are expressed by a B{\"u}chi automaton and an LTL formula, respectively. A novel GRL-based framework \model, is designed to learn the representation of the graph-structured system and specification, which reduces the model checking problem to binary classification. Empirical experiments on two model checking scenarios show that \model achieves promising accuracy, with up to $11\times$ overall speedup against canonical SOTA model checkers and $31\times$ for satisfiability checking alone.
|
2202.06913
|
Huaduo Wang
|
Huaduo Wang, Farhad Shakerin and Gopal Gupta
|
FOLD-RM: A Scalable, Efficient, and Explainable Inductive Learning
Algorithm for Multi-Category Classification of Mixed Data
|
Paper presented at the 38th International Conference on Logic
Programming (ICLP 2022), 16 pages
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
FOLD-RM is an automated inductive learning algorithm for learning default
rules for mixed (numerical and categorical) data. It generates an (explainable)
answer set programming (ASP) rule set for multi-category classification tasks
while maintaining efficiency and scalability. The FOLD-RM algorithm is
competitive in performance with the widely-used, state-of-the-art algorithms
such as XGBoost and multi-layer perceptrons (MLPs); however, unlike these
algorithms, the FOLD-RM algorithm produces an explainable model. FOLD-RM
outperforms XGBoost on some datasets, particularly large ones. FOLD-RM also
provides human-friendly explanations for predictions.
|
[
{
"created": "Mon, 14 Feb 2022 18:07:54 GMT",
"version": "v1"
},
{
"created": "Fri, 25 Feb 2022 22:46:16 GMT",
"version": "v2"
},
{
"created": "Sun, 15 May 2022 18:59:07 GMT",
"version": "v3"
}
] |
2022-05-17
|
[
[
"Wang",
"Huaduo",
""
],
[
"Shakerin",
"Farhad",
""
],
[
"Gupta",
"Gopal",
""
]
] |
FOLD-RM is an automated inductive learning algorithm for learning default rules for mixed (numerical and categorical) data. It generates an (explainable) answer set programming (ASP) rule set for multi-category classification tasks while maintaining efficiency and scalability. The FOLD-RM algorithm is competitive in performance with the widely-used, state-of-the-art algorithms such as XGBoost and multi-layer perceptrons (MLPs); however, unlike these algorithms, the FOLD-RM algorithm produces an explainable model. FOLD-RM outperforms XGBoost on some datasets, particularly large ones. FOLD-RM also provides human-friendly explanations for predictions.
|
1908.08338
|
Tania Panayiotou
|
Tania Panayiotou, Giannis Savva, Ioannis Tomkos, Georgios Ellinas
|
Centralized and Distributed Machine Learning-Based QoT Estimation for
Sliceable Optical Networks
|
accepted for presentation at the IEEE GLOBECOM 2019
| null |
10.1109/GLOBECOM38437.2019.9013962
| null |
cs.NI cs.LG eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dynamic network slicing has emerged as a promising and fundamental framework
for meeting 5G's diverse use cases. As machine learning (ML) is expected to
play a pivotal role in the efficient control and management of these networks,
in this work we examine the ML-based Quality-of-Transmission (QoT) estimation
problem under the dynamic network slicing context, where each slice has to meet
a different QoT requirement. We examine ML-based QoT frameworks with the aim of
finding QoT model/s that are fine-tuned according to the diverse QoT
requirements. Centralized and distributed frameworks are examined and compared
according to their accuracy and training time. We show that the distributed QoT
models outperform the centralized QoT model, especially as the number of
diverse QoT requirements increases.
|
[
{
"created": "Thu, 22 Aug 2019 12:38:54 GMT",
"version": "v1"
},
{
"created": "Fri, 27 Sep 2019 11:22:57 GMT",
"version": "v2"
}
] |
2022-11-04
|
[
[
"Panayiotou",
"Tania",
""
],
[
"Savva",
"Giannis",
""
],
[
"Tomkos",
"Ioannis",
""
],
[
"Ellinas",
"Georgios",
""
]
] |
Dynamic network slicing has emerged as a promising and fundamental framework for meeting 5G's diverse use cases. As machine learning (ML) is expected to play a pivotal role in the efficient control and management of these networks, in this work we examine the ML-based Quality-of-Transmission (QoT) estimation problem under the dynamic network slicing context, where each slice has to meet a different QoT requirement. We examine ML-based QoT frameworks with the aim of finding QoT model/s that are fine-tuned according to the diverse QoT requirements. Centralized and distributed frameworks are examined and compared according to their accuracy and training time. We show that the distributed QoT models outperform the centralized QoT model, especially as the number of diverse QoT requirements increases.
|
2306.07542
|
Xianliang Yang
|
Xianliang Yang, Zhihao Liu, Wei Jiang, Chuheng Zhang, Li Zhao, Lei
Song, Jiang Bian
|
A Versatile Multi-Agent Reinforcement Learning Benchmark for Inventory
Management
| null | null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Multi-agent reinforcement learning (MARL) models multiple agents that
interact and learn within a shared environment. This paradigm is applicable to
various industrial scenarios such as autonomous driving, quantitative trading,
and inventory management. However, applying MARL to these real-world scenarios
is impeded by many challenges such as scaling up, complex agent interactions,
and non-stationary dynamics. To incentivize the research of MARL on these
challenges, we develop MABIM (Multi-Agent Benchmark for Inventory Management)
which is a multi-echelon, multi-commodity inventory management simulator that
can generate versatile tasks with these different challenging properties. Based
on MABIM, we evaluate the performance of classic operations research (OR)
methods and popular MARL algorithms on these challenging tasks to highlight
their weaknesses and potential.
|
[
{
"created": "Tue, 13 Jun 2023 05:22:30 GMT",
"version": "v1"
}
] |
2023-06-14
|
[
[
"Yang",
"Xianliang",
""
],
[
"Liu",
"Zhihao",
""
],
[
"Jiang",
"Wei",
""
],
[
"Zhang",
"Chuheng",
""
],
[
"Zhao",
"Li",
""
],
[
"Song",
"Lei",
""
],
[
"Bian",
"Jiang",
""
]
] |
Multi-agent reinforcement learning (MARL) models multiple agents that interact and learn within a shared environment. This paradigm is applicable to various industrial scenarios such as autonomous driving, quantitative trading, and inventory management. However, applying MARL to these real-world scenarios is impeded by many challenges such as scaling up, complex agent interactions, and non-stationary dynamics. To incentivize the research of MARL on these challenges, we develop MABIM (Multi-Agent Benchmark for Inventory Management) which is a multi-echelon, multi-commodity inventory management simulator that can generate versatile tasks with these different challenging properties. Based on MABIM, we evaluate the performance of classic operations research (OR) methods and popular MARL algorithms on these challenging tasks to highlight their weaknesses and potential.
|
1702.07914
|
Andreas Brandstadt
|
Andreas Brandst\"adt and Raffaele Mosca
|
On Chordal-$k$-Generalized Split Graphs
|
arXiv admin note: text overlap with arXiv:1701.03414
| null | null | null |
cs.DM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A graph $G$ is a {\em chordal-$k$-generalized split graph} if $G$ is chordal
and there is a clique $Q$ in $G$ such that every connected component in $G[V
\setminus Q]$ has at most $k$ vertices. Thus, chordal-$1$-generalized split
graphs are exactly the split graphs.
We characterize chordal-$k$-generalized split graphs by forbidden induced
subgraphs. Moreover, we characterize a very special case of
chordal-$2$-generalized split graphs for which the Efficient Domination problem
is \NP-complete.
|
[
{
"created": "Sat, 25 Feb 2017 16:08:26 GMT",
"version": "v1"
},
{
"created": "Mon, 6 Mar 2017 15:35:30 GMT",
"version": "v2"
},
{
"created": "Thu, 27 Apr 2017 08:57:31 GMT",
"version": "v3"
}
] |
2017-04-28
|
[
[
"Brandstädt",
"Andreas",
""
],
[
"Mosca",
"Raffaele",
""
]
] |
A graph $G$ is a {\em chordal-$k$-generalized split graph} if $G$ is chordal and there is a clique $Q$ in $G$ such that every connected component in $G[V \setminus Q]$ has at most $k$ vertices. Thus, chordal-$1$-generalized split graphs are exactly the split graphs. We characterize chordal-$k$-generalized split graphs by forbidden induced subgraphs. Moreover, we characterize a very special case of chordal-$2$-generalized split graphs for which the Efficient Domination problem is \NP-complete.
|
1511.02308
|
S Raja
|
V. Arvind, S. Raja
|
Some Lower Bound Results for Set-Multilinear Arithmetic Computations
| null | null | null | null |
cs.CC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we study the structure of set-multilinear arithmetic circuits
and set-multilinear branching programs with the aim of showing lower bound
results. We define some natural restrictions of these models for which we are
able to show lower bound results. Some of our results extend existing lower
bounds, while others are new and raise open questions. More specifically, our
main results are the following:
(1) We observe that set-multilinear arithmetic circuits can be transformed
into shallow set-multilinear circuits efficiently, similar to depth reduction
results of [VSBR83,RY08] for more general commutative circuits. As a
consequence, we note that polynomial size set-multilinear circuits have
quasi-polynomial size set-multilinear branching programs. We show that
\emph{narrow} set-multilinear ABPs (with a restricted number of set types)
computing the Permanent polynomial $\mathrm{PER}_n$ require $2^{n^{\Omega(1)}}$
size. A similar result for general set-multilinear ABPs appears difficult as it
would imply that the Permanent requires superpolynomial size set-multilinear
circuits. It would also imply that the noncommutative Permanent requires
superpolynomial size noncommutative arithmetic circuits.
(2) Indeed, we also show that set-multilinear branching programs are
exponentially more powerful than \emph{interval} multilinear circuits (where
the index set for each gate is restricted to be an interval w.r.t.\ some
ordering), assuming the sum-of-squares conjecture. This further underlines the
power of set-multilinear branching programs.
(3) Finally, we consider set-multilinear circuits with restrictions on the
number of proof trees of monomials computed by it, and prove exponential lower
bound results. This raises some new lower bound questions.
|
[
{
"created": "Sat, 7 Nov 2015 05:56:57 GMT",
"version": "v1"
}
] |
2015-11-10
|
[
[
"Arvind",
"V.",
""
],
[
"Raja",
"S.",
""
]
] |
In this paper, we study the structure of set-multilinear arithmetic circuits and set-multilinear branching programs with the aim of showing lower bound results. We define some natural restrictions of these models for which we are able to show lower bound results. Some of our results extend existing lower bounds, while others are new and raise open questions. More specifically, our main results are the following: (1) We observe that set-multilinear arithmetic circuits can be transformed into shallow set-multilinear circuits efficiently, similar to depth reduction results of [VSBR83,RY08] for more general commutative circuits. As a consequence, we note that polynomial size set-multilinear circuits have quasi-polynomial size set-multilinear branching programs. We show that \emph{narrow} set-multilinear ABPs (with a restricted number of set types) computing the Permanent polynomial $\mathrm{PER}_n$ require $2^{n^{\Omega(1)}}$ size. A similar result for general set-multilinear ABPs appears difficult as it would imply that the Permanent requires superpolynomial size set-multilinear circuits. It would also imply that the noncommutative Permanent requires superpolynomial size noncommutative arithmetic circuits. (2) Indeed, we also show that set-multilinear branching programs are exponentially more powerful than \emph{interval} multilinear circuits (where the index set for each gate is restricted to be an interval w.r.t.\ some ordering), assuming the sum-of-squares conjecture. This further underlines the power of set-multilinear branching programs. (3) Finally, we consider set-multilinear circuits with restrictions on the number of proof trees of monomials computed by it, and prove exponential lower bound results. This raises some new lower bound questions.
|
2310.13992
|
Abbas Edalat
|
S\'ebastien Huot and Abbas Edalat
|
Pure Bayesian Nash equilibrium for Bayesian games with multidimensional
vector Types and linear payoffs
| null | null | null | null |
cs.GT
|
http://creativecommons.org/licenses/by/4.0/
|
We study $n$-agent Bayesian Games with $m$-dimensional vector types and
linear payoffs, also called Linear Multidimensional Bayesian Games. This class
of games is equivalent to $n$-agent, $m$-game Uniform Multigames. We
distinguish between games that have a discrete type space and those with a
continuous type space. More specifically, we are interested in the existence of
pure Bayesian Nash Equilibrium for such games and efficient algorithms to find
them. For continuous priors we suggest a methodology to perform Nash
Equilibrium search in simple cases. For discrete priors we present algorithms
that can handle two actions and two players games efficiently. We introduce the
core concept of threshold strategy and, under some mild conditions, we show
that these games have at least one pure Bayesian Nash Equilibrium. We
illustrate our results with several examples like Double Game Prisoner Dilemma
(DGPD), Chicken Game and Sustainable Adoption Decision Problem (SADP).
|
[
{
"created": "Sat, 21 Oct 2023 12:37:18 GMT",
"version": "v1"
}
] |
2023-10-24
|
[
[
"Huot",
"Sébastien",
""
],
[
"Edalat",
"Abbas",
""
]
] |
We study $n$-agent Bayesian Games with $m$-dimensional vector types and linear payoffs, also called Linear Multidimensional Bayesian Games. This class of games is equivalent to $n$-agent, $m$-game Uniform Multigames. We distinguish between games that have a discrete type space and those with a continuous type space. More specifically, we are interested in the existence of pure Bayesian Nash Equilibrium for such games and efficient algorithms to find them. For continuous priors we suggest a methodology to perform Nash Equilibrium search in simple cases. For discrete priors we present algorithms that can handle two actions and two players games efficiently. We introduce the core concept of threshold strategy and, under some mild conditions, we show that these games have at least one pure Bayesian Nash Equilibrium. We illustrate our results with several examples like Double Game Prisoner Dilemma (DGPD), Chicken Game and Sustainable Adoption Decision Problem (SADP).
|
2206.05262
|
Brandon Amos
|
Brandon Amos, Samuel Cohen, Giulia Luise, Ievgen Redko
|
Meta Optimal Transport
|
ICML 2023
| null | null | null |
cs.LG cs.AI stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
We study the use of amortized optimization to predict optimal transport (OT)
maps from the input measures, which we call Meta OT. This helps repeatedly
solve similar OT problems between different measures by leveraging the
knowledge and information present from past problems to rapidly predict and
solve new problems. Otherwise, standard methods ignore the knowledge of the
past solutions and suboptimally re-solve each problem from scratch. We
instantiate Meta OT models in discrete and continuous settings between
grayscale images, spherical data, classification labels, and color palettes and
use them to improve the computational time of standard OT solvers. Our source
code is available at http://github.com/facebookresearch/meta-ot
|
[
{
"created": "Fri, 10 Jun 2022 17:59:07 GMT",
"version": "v1"
},
{
"created": "Fri, 2 Jun 2023 21:45:43 GMT",
"version": "v2"
}
] |
2023-06-06
|
[
[
"Amos",
"Brandon",
""
],
[
"Cohen",
"Samuel",
""
],
[
"Luise",
"Giulia",
""
],
[
"Redko",
"Ievgen",
""
]
] |
We study the use of amortized optimization to predict optimal transport (OT) maps from the input measures, which we call Meta OT. This helps repeatedly solve similar OT problems between different measures by leveraging the knowledge and information present from past problems to rapidly predict and solve new problems. Otherwise, standard methods ignore the knowledge of the past solutions and suboptimally re-solve each problem from scratch. We instantiate Meta OT models in discrete and continuous settings between grayscale images, spherical data, classification labels, and color palettes and use them to improve the computational time of standard OT solvers. Our source code is available at http://github.com/facebookresearch/meta-ot
|
2403.03993
|
Antonios Valkanas
|
Antonios Valkanas, Yuening Wang, Yingxue Zhang, Mark Coates
|
Personalized Negative Reservoir for Incremental Learning in Recommender
Systems
| null | null | null | null |
cs.IR cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Recommender systems have become an integral part of online platforms. Every
day the volume of training data is expanding and the number of user
interactions is constantly increasing. The exploration of larger and more
expressive models has become a necessary pursuit to improve user experience.
However, this progression carries with it an increased computational burden. In
commercial settings, once a recommendation system model has been trained and
deployed it typically needs to be updated frequently as new client data arrive.
Cumulatively, the mounting volume of data is guaranteed to eventually make full
batch retraining of the model from scratch computationally infeasible. Naively
fine-tuning solely on the new data runs into the well-documented problem of
catastrophic forgetting. Despite the fact that negative sampling is a crucial
part of training with implicit feedback, no specialized technique exists that
is tailored to the incremental learning framework. In this work, we take the
first step to propose a personalized negative reservoir strategy which is used
to obtain negative samples for the standard triplet loss. This technique
balances alleviation of forgetting with plasticity by encouraging the model to
remember stable user preferences and selectively forget when user interests
change. We derive the mathematical formulation of a negative sampler to
populate and update the reservoir. We integrate our design in three SOTA and
commonly used incremental recommendation models. We show that these concrete
realizations of our negative reservoir framework achieve state-of-the-art
results in standard benchmarks, on multiple standard top-k evaluation metrics.
|
[
{
"created": "Wed, 6 Mar 2024 19:08:28 GMT",
"version": "v1"
}
] |
2024-03-08
|
[
[
"Valkanas",
"Antonios",
""
],
[
"Wang",
"Yuening",
""
],
[
"Zhang",
"Yingxue",
""
],
[
"Coates",
"Mark",
""
]
] |
Recommender systems have become an integral part of online platforms. Every day the volume of training data is expanding and the number of user interactions is constantly increasing. The exploration of larger and more expressive models has become a necessary pursuit to improve user experience. However, this progression carries with it an increased computational burden. In commercial settings, once a recommendation system model has been trained and deployed it typically needs to be updated frequently as new client data arrive. Cumulatively, the mounting volume of data is guaranteed to eventually make full batch retraining of the model from scratch computationally infeasible. Naively fine-tuning solely on the new data runs into the well-documented problem of catastrophic forgetting. Despite the fact that negative sampling is a crucial part of training with implicit feedback, no specialized technique exists that is tailored to the incremental learning framework. In this work, we take the first step to propose a personalized negative reservoir strategy which is used to obtain negative samples for the standard triplet loss. This technique balances alleviation of forgetting with plasticity by encouraging the model to remember stable user preferences and selectively forget when user interests change. We derive the mathematical formulation of a negative sampler to populate and update the reservoir. We integrate our design in three SOTA and commonly used incremental recommendation models. We show that these concrete realizations of our negative reservoir framework achieve state-of-the-art results in standard benchmarks, on multiple standard top-k evaluation metrics.
|
2104.01214
|
Inon Peled
|
Frederik Boe H\"uttel, Inon Peled, Filipe Rodrigues, Francisco C.
Pereira
|
Modeling Censored Mobility Demand through Quantile Regression Neural
Networks
|
13 pages, 9 figures, 5 tables
| null | null | null |
cs.LG math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Shared mobility services require accurate demand models for effective service
planning. On the one hand, modeling the full probability distribution of demand
is advantageous because the entire uncertainty structure preserves valuable
information for decision-making. On the other hand, demand is often observed
through the usage of the service itself, so that the observations are censored,
as they are inherently limited by available supply. Since the 1980s, various
works on Censored Quantile Regression models have performed well under such
conditions. Further, in the last two decades, several papers have proposed to
implement these models flexibly through Neural Networks. However, the models in
current works estimate the quantiles individually, thus incurring a
computational overhead and ignoring valuable relationships between the
quantiles. We address this gap by extending current Censored Quantile
Regression models to learn multiple quantiles at once and apply these to
synthetic baseline datasets and datasets from two shared mobility providers in
the Copenhagen metropolitan area in Denmark. The results show that our extended
models yield fewer quantile crossings and less computational overhead without
compromising model performance.
|
[
{
"created": "Fri, 2 Apr 2021 19:24:15 GMT",
"version": "v1"
},
{
"created": "Sat, 9 Jul 2022 10:31:31 GMT",
"version": "v2"
}
] |
2022-07-12
|
[
[
"Hüttel",
"Frederik Boe",
""
],
[
"Peled",
"Inon",
""
],
[
"Rodrigues",
"Filipe",
""
],
[
"Pereira",
"Francisco C.",
""
]
] |
Shared mobility services require accurate demand models for effective service planning. On the one hand, modeling the full probability distribution of demand is advantageous because the entire uncertainty structure preserves valuable information for decision-making. On the other hand, demand is often observed through the usage of the service itself, so that the observations are censored, as they are inherently limited by available supply. Since the 1980s, various works on Censored Quantile Regression models have performed well under such conditions. Further, in the last two decades, several papers have proposed to implement these models flexibly through Neural Networks. However, the models in current works estimate the quantiles individually, thus incurring a computational overhead and ignoring valuable relationships between the quantiles. We address this gap by extending current Censored Quantile Regression models to learn multiple quantiles at once and apply these to synthetic baseline datasets and datasets from two shared mobility providers in the Copenhagen metropolitan area in Denmark. The results show that our extended models yield fewer quantile crossings and less computational overhead without compromising model performance.
|
1610.03013
|
Weinan Zhang
|
Jun Wang, Weinan Zhang, Shuai Yuan
|
Display Advertising with Real-Time Bidding (RTB) and Behavioural
Targeting
|
A 122-page monograph about RTB display advertising, which will be
published on Now Publisher in July 2017
| null | null | null |
cs.GT
|
http://creativecommons.org/licenses/by/4.0/
|
The most significant progress in recent years in online display advertising
is what is known as the Real-Time Bidding (RTB) mechanism to buy and sell ads.
RTB essentially facilitates buying an individual ad impression in real time
while it is still being generated from a user's visit. RTB not only scales up
the buying process by aggregating a large amount of available inventories
across publishers but, most importantly, enables direct targeting of individual
users. As such, RTB has fundamentally changed the landscape of digital
marketing. Scientifically, the demand for automation, integration and
optimisation in RTB also brings new research opportunities in information
retrieval, data mining, machine learning and other related fields. In this
monograph, an overview is given of the fundamental infrastructure, algorithms,
and technical solutions of this new frontier of computational advertising. The
covered topics include user response prediction, bid landscape forecasting,
bidding algorithms, revenue optimisation, statistical arbitrage, dynamic
pricing, and ad fraud detection.
|
[
{
"created": "Fri, 7 Oct 2016 16:12:28 GMT",
"version": "v1"
},
{
"created": "Sat, 15 Jul 2017 17:38:36 GMT",
"version": "v2"
}
] |
2017-07-18
|
[
[
"Wang",
"Jun",
""
],
[
"Zhang",
"Weinan",
""
],
[
"Yuan",
"Shuai",
""
]
] |
The most significant progress in recent years in online display advertising is what is known as the Real-Time Bidding (RTB) mechanism to buy and sell ads. RTB essentially facilitates buying an individual ad impression in real time while it is still being generated from a user's visit. RTB not only scales up the buying process by aggregating a large amount of available inventories across publishers but, most importantly, enables direct targeting of individual users. As such, RTB has fundamentally changed the landscape of digital marketing. Scientifically, the demand for automation, integration and optimisation in RTB also brings new research opportunities in information retrieval, data mining, machine learning and other related fields. In this monograph, an overview is given of the fundamental infrastructure, algorithms, and technical solutions of this new frontier of computational advertising. The covered topics include user response prediction, bid landscape forecasting, bidding algorithms, revenue optimisation, statistical arbitrage, dynamic pricing, and ad fraud detection.
|
2202.01375
|
Peiying Zhang
|
Peiying Zhang, Chao Wang, Chunxiao Jiang, Neeraj Kumar, and Qinghua Lu
|
Resource Management and Security Scheme of ICPSs and IoT Based on VNE
Algorithm
| null | null | null | null |
cs.CR cs.LG cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The development of Intelligent Cyber-Physical Systems (ICPSs) in virtual
network environment is facing severe challenges. On the one hand, the Internet
of things (IoT) based on ICPSs construction needs a large amount of reasonable
network resources support. On the other hand, ICPSs are facing severe network
security problems. The integration of ICPSs and network virtualization (NV) can
provide more efficient network resource support and security guarantees for IoT
users. Based on the above two problems faced by ICPSs, we propose a virtual
network embedded (VNE) algorithm with computing, storage resources and security
constraints to ensure the rationality and security of resource allocation in
ICPSs. In particular, we use reinforcement learning (RL) method as a means to
improve algorithm performance. We extract the important attribute
characteristics of underlying network as the training environment of RL agent.
Agent can derive the optimal node embedding strategy through training, so as to
meet the requirements of ICPSs for resource management and security. The
embedding of virtual links is based on the breadth first search (BFS) strategy.
Therefore, this is a comprehensive two-stage RL-VNE algorithm considering the
constraints of computing, storage and security three-dimensional resources.
Finally, we design a large number of simulation experiments from the
perspective of typical indicators of VNE algorithms. The experimental results
effectively illustrate the effectiveness of the algorithm in the application of
ICPSs.
|
[
{
"created": "Thu, 3 Feb 2022 02:27:20 GMT",
"version": "v1"
}
] |
2022-02-04
|
[
[
"Zhang",
"Peiying",
""
],
[
"Wang",
"Chao",
""
],
[
"Jiang",
"Chunxiao",
""
],
[
"Kumar",
"Neeraj",
""
],
[
"Lu",
"Qinghua",
""
]
] |
The development of Intelligent Cyber-Physical Systems (ICPSs) in virtual network environment is facing severe challenges. On the one hand, the Internet of things (IoT) based on ICPSs construction needs a large amount of reasonable network resources support. On the other hand, ICPSs are facing severe network security problems. The integration of ICPSs and network virtualization (NV) can provide more efficient network resource support and security guarantees for IoT users. Based on the above two problems faced by ICPSs, we propose a virtual network embedded (VNE) algorithm with computing, storage resources and security constraints to ensure the rationality and security of resource allocation in ICPSs. In particular, we use reinforcement learning (RL) method as a means to improve algorithm performance. We extract the important attribute characteristics of underlying network as the training environment of RL agent. Agent can derive the optimal node embedding strategy through training, so as to meet the requirements of ICPSs for resource management and security. The embedding of virtual links is based on the breadth first search (BFS) strategy. Therefore, this is a comprehensive two-stage RL-VNE algorithm considering the constraints of computing, storage and security three-dimensional resources. Finally, we design a large number of simulation experiments from the perspective of typical indicators of VNE algorithms. The experimental results effectively illustrate the effectiveness of the algorithm in the application of ICPSs.
|
1202.6668
|
Bruno Bauwens
|
Bruno Bauwens, Alexander Shen
|
Complexity of complexity and strings with maximal plain and prefix
Kolmogorov complexity
|
13 pages, 1 figure
| null | null | null |
cs.CC
|
http://creativecommons.org/licenses/by/3.0/
|
Peter Gacs showed (Gacs 1974) that for every n there exists a bit string x of
length n whose plain complexity C(x) has almost maximal conditional complexity
relative to x, i.e., C(C(x)|x) > log n - log^(2) n - O(1). (Here log^(2) i =
log log i.) Following Elena Kalinina (Kalinina 2011), we provide a simple
game-based proof of this result; modifying her argument, we get a better (and
tight) bound log n - O(1). We also show the same bound for prefix-free
complexity.
Robert Solovay showed (Solovay 1975) that infinitely many strings x have
maximal plain complexity but not maximal prefix complexity (among the strings
of the same length): for some c there exist infinitely many x such that |x| -
C(x) < c and |x| + K(|x|) - K(x) > log^(2) |x| - c log^(3) |x|. In fact, the
results of Solovay and Gacs are closely related. Using the result above, we
provide a short proof for Solovay's result. We also generalize it by showing
that for some c and for all n there are strings x of length n with n - C (x) <
c and n + K(n) - K(x) > K(K(n)|n) - 3 K(K(K(n)|n)|n) - c. We also prove a close
upper bound K(K(n)|n) + O(1).
Finally, we provide a direct game proof for Joseph Miller's generalization
(Miller 2006) of the same Solovay's theorem: if a co-enumerable set (a set with
c.e. complement) contains for every length a string of this length, then it
contains infinitely many strings x such that |x| + K(|x|) - K(x) > log^(2) |x|
+ O(log^(3) |x|).
|
[
{
"created": "Wed, 29 Feb 2012 20:09:17 GMT",
"version": "v1"
},
{
"created": "Fri, 3 May 2013 15:54:59 GMT",
"version": "v2"
}
] |
2013-05-06
|
[
[
"Bauwens",
"Bruno",
""
],
[
"Shen",
"Alexander",
""
]
] |
Peter Gacs showed (Gacs 1974) that for every n there exists a bit string x of length n whose plain complexity C(x) has almost maximal conditional complexity relative to x, i.e., C(C(x)|x) > log n - log^(2) n - O(1). (Here log^(2) i = log log i.) Following Elena Kalinina (Kalinina 2011), we provide a simple game-based proof of this result; modifying her argument, we get a better (and tight) bound log n - O(1). We also show the same bound for prefix-free complexity. Robert Solovay showed (Solovay 1975) that infinitely many strings x have maximal plain complexity but not maximal prefix complexity (among the strings of the same length): for some c there exist infinitely many x such that |x| - C(x) < c and |x| + K(|x|) - K(x) > log^(2) |x| - c log^(3) |x|. In fact, the results of Solovay and Gacs are closely related. Using the result above, we provide a short proof for Solovay's result. We also generalize it by showing that for some c and for all n there are strings x of length n with n - C (x) < c and n + K(n) - K(x) > K(K(n)|n) - 3 K(K(K(n)|n)|n) - c. We also prove a close upper bound K(K(n)|n) + O(1). Finally, we provide a direct game proof for Joseph Miller's generalization (Miller 2006) of the same Solovay's theorem: if a co-enumerable set (a set with c.e. complement) contains for every length a string of this length, then it contains infinitely many strings x such that |x| + K(|x|) - K(x) > log^(2) |x| + O(log^(3) |x|).
|