| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1906.06297
|
Joshua Romero
|
Joshua Romero, Mauro Bisson, Massimiliano Fatica, Massimo Bernaschi
|
A Performance Study of the 2D Ising Model on GPUs
| null | null |
10.1016/j.cpc.2020.107473
| null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The simulation of the two-dimensional Ising model is used as a benchmark to
show the computational capabilities of Graphics Processing Units (GPUs). The
rich programming environment now available on GPUs and flexible hardware
capabilities allowed us to quickly experiment with several implementation
ideas: a simple stencil-based algorithm, recasting the stencil operations into
matrix multiplies to take advantage of Tensor Cores available on NVIDIA GPUs,
and a highly optimized multi-spin coding approach. Using the managed memory API
available in CUDA allows for simple and efficient distribution of these
implementations across a multi-GPU NVIDIA DGX-2 server. We show that even a
basic GPU implementation can outperform current results published on TPUs and
that the optimized multi-GPU implementation can simulate very large lattices
faster than custom FPGA solutions.
|
[
{
"created": "Fri, 14 Jun 2019 17:09:16 GMT",
"version": "v1"
}
] |
2020-08-26
|
[
[
"Romero",
"Joshua",
""
],
[
"Bisson",
"Mauro",
""
],
[
"Fatica",
"Massimiliano",
""
],
[
"Bernaschi",
"Massimo",
""
]
] |
The simulation of the two-dimensional Ising model is used as a benchmark to show the computational capabilities of Graphics Processing Units (GPUs). The rich programming environment now available on GPUs and flexible hardware capabilities allowed us to quickly experiment with several implementation ideas: a simple stencil-based algorithm, recasting the stencil operations into matrix multiplies to take advantage of Tensor Cores available on NVIDIA GPUs, and a highly optimized multi-spin coding approach. Using the managed memory API available in CUDA allows for simple and efficient distribution of these implementations across a multi-GPU NVIDIA DGX-2 server. We show that even a basic GPU implementation can outperform current results published on TPUs and that the optimized multi-GPU implementation can simulate very large lattices faster than custom FPGA solutions.
|
1603.07866
|
Romain Couillet
|
Romain Couillet, Gilles Wainrib, Harry Sevi, Hafiz Tiomoko Ali
|
The Asymptotic Performance of Linear Echo State Neural Networks
| null | null | null | null |
cs.LG cs.NE math.PR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this article, a study of the mean-square error (MSE) performance of linear
echo-state neural networks is performed, both for training and testing tasks.
Considering the realistic setting of noise present at the network nodes, we
derive deterministic equivalents for the aforementioned MSE in the limit where
the number of input data $T$ and the network size $n$ both grow large. Then,
specializing the network connectivity matrix to specific random settings, we
further obtain simple formulas that provide new insights into the performance
of such networks.
|
[
{
"created": "Fri, 25 Mar 2016 10:27:00 GMT",
"version": "v1"
}
] |
2016-03-28
|
[
[
"Couillet",
"Romain",
""
],
[
"Wainrib",
"Gilles",
""
],
[
"Sevi",
"Harry",
""
],
[
"Ali",
"Hafiz Tiomoko",
""
]
] |
In this article, a study of the mean-square error (MSE) performance of linear echo-state neural networks is performed, both for training and testing tasks. Considering the realistic setting of noise present at the network nodes, we derive deterministic equivalents for the aforementioned MSE in the limit where the number of input data $T$ and the network size $n$ both grow large. Then, specializing the network connectivity matrix to specific random settings, we further obtain simple formulas that provide new insights into the performance of such networks.
|
1611.06728
|
Dmitry Gromov
|
Dmitry Gromov, Ingo Bulla, Ethan O. Romero-Severson, Oana Silvia Serea
|
Numerical optimal control for HIV prevention with dynamic budget
allocation
|
Submitted paper
| null |
10.1093/imammb/dqx015
| null |
cs.SY math.NA math.OC q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper addresses the numerical optimal control of HIV propagation. The
contribution of the paper is threefold: first, a novel model of HIV propagation
is proposed; second, methods from numerical optimal control are successfully
applied to the developed model to compute optimal control profiles; finally,
the computed results are applied to the real problem, yielding important and
practically relevant results.
|
[
{
"created": "Mon, 21 Nov 2016 11:28:27 GMT",
"version": "v1"
}
] |
2023-05-30
|
[
[
"Gromov",
"Dmitry",
""
],
[
"Bulla",
"Ingo",
""
],
[
"Romero-Severson",
"Ethan O.",
""
],
[
"Serea",
"Oana Silvia",
""
]
] |
This paper addresses the numerical optimal control of HIV propagation. The contribution of the paper is threefold: first, a novel model of HIV propagation is proposed; second, methods from numerical optimal control are successfully applied to the developed model to compute optimal control profiles; finally, the computed results are applied to the real problem, yielding important and practically relevant results.
|
1810.11758
|
Hao-Hsuan Chang
|
Hao-Hsuan Chang, Hao Song, Yang Yi, Jianzhong Zhang, Haibo He, Lingjia
Liu
|
Distributive Dynamic Spectrum Access through Deep Reinforcement
Learning: A Reservoir Computing Based Approach
|
This work is accepted in IEEE IoT Journal 2018
| null |
10.1109/JIOT.2018.2872441
| null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dynamic spectrum access (DSA) is regarded as an effective and efficient
technology to share radio spectrum among different networks. As a secondary
user (SU), a DSA device will face two critical problems: avoiding causing
harmful interference to primary users (PUs), and conducting effective
interference coordination with other secondary users. These two problems become
even more challenging for a distributed DSA network where there is no
centralized controller for SUs. In this paper, we investigate communication
strategies of a distributive DSA network under the presence of spectrum sensing
errors. To be specific, we apply the powerful machine learning tool, deep
reinforcement learning (DRL), for SUs to learn "appropriate" spectrum access
strategies in a distributed fashion assuming NO knowledge of the underlying
system statistics. Furthermore, a special type of recurrent neural network
(RNN), called reservoir computing (RC), is utilized to realize DRL by
taking advantage of the underlying temporal correlation of the DSA network.
Using the introduced machine learning-based strategy, SUs could make spectrum
access decisions distributedly relying only on their own current and past
spectrum sensing outcomes. Through extensive experiments, our results suggest
that the RC-based spectrum access strategy can help the SU to significantly
reduce the chances of collision with PUs and other SUs. We also show that our
scheme outperforms the myopic method which assumes the knowledge of system
statistics, and converges faster than the Q-learning method when the number of
channels is large.
|
[
{
"created": "Sun, 28 Oct 2018 04:02:27 GMT",
"version": "v1"
}
] |
2018-10-30
|
[
[
"Chang",
"Hao-Hsuan",
""
],
[
"Song",
"Hao",
""
],
[
"Yi",
"Yang",
""
],
[
"Zhang",
"Jianzhong",
""
],
[
"He",
"Haibo",
""
],
[
"Liu",
"Lingjia",
""
]
] |
Dynamic spectrum access (DSA) is regarded as an effective and efficient technology to share radio spectrum among different networks. As a secondary user (SU), a DSA device will face two critical problems: avoiding causing harmful interference to primary users (PUs), and conducting effective interference coordination with other secondary users. These two problems become even more challenging for a distributed DSA network where there is no centralized controller for SUs. In this paper, we investigate communication strategies of a distributive DSA network under the presence of spectrum sensing errors. To be specific, we apply the powerful machine learning tool, deep reinforcement learning (DRL), for SUs to learn "appropriate" spectrum access strategies in a distributed fashion assuming NO knowledge of the underlying system statistics. Furthermore, a special type of recurrent neural network (RNN), called reservoir computing (RC), is utilized to realize DRL by taking advantage of the underlying temporal correlation of the DSA network. Using the introduced machine learning-based strategy, SUs could make spectrum access decisions distributedly relying only on their own current and past spectrum sensing outcomes. Through extensive experiments, our results suggest that the RC-based spectrum access strategy can help the SU to significantly reduce the chances of collision with PUs and other SUs. We also show that our scheme outperforms the myopic method which assumes the knowledge of system statistics, and converges faster than the Q-learning method when the number of channels is large.
|
2402.01018
|
Weijie Xu
|
Weijie Xu, Zicheng Huang, Wenxiang Hu, Xi Fang, Rajesh Kumar
Cherukuri, Naumaan Nayyar, Lorenzo Malandri, Srinivasan H. Sengamedu
|
HR-MultiWOZ: A Task Oriented Dialogue (TOD) Dataset for HR LLM Agent
|
13 pages, 9 figures
|
EACL 2024
| null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Recent advancements in Large Language Models (LLMs) have been reshaping
Natural Language Processing (NLP) tasks in several domains. Their use in the
field of Human Resources (HR) still has room for expansion and could be
beneficial for several time-consuming tasks. Examples such as time-off
submissions, medical claims filing, and access requests are noteworthy, but
they are by no means the sole instances. However, the aforementioned
developments must grapple with the pivotal challenge of constructing a
high-quality training dataset. On one hand, most conversation datasets are
solving problems for customers not employees. On the other hand, gathering
conversations with HR could raise privacy concerns. To address this, we
introduce HR-Multiwoz, a fully-labeled dataset of 550 conversations spanning
10 HR domains to evaluate LLM agents. Our work has the following contributions: (1) It
is the first labeled open-sourced conversation dataset in the HR domain for NLP
research. (2) It provides a detailed recipe for the data generation procedure
along with data analysis and human evaluations. The data generation pipeline is
transferable and can be easily adapted for labeled conversation data generation
in other domains. (3) The proposed data-collection pipeline is mostly based on
LLMs with minimal human involvement for annotation, which is time and
cost-efficient.
|
[
{
"created": "Thu, 1 Feb 2024 21:10:44 GMT",
"version": "v1"
}
] |
2024-02-05
|
[
[
"Xu",
"Weijie",
""
],
[
"Huang",
"Zicheng",
""
],
[
"Hu",
"Wenxiang",
""
],
[
"Fang",
"Xi",
""
],
[
"Cherukuri",
"Rajesh Kumar",
""
],
[
"Nayyar",
"Naumaan",
""
],
[
"Malandri",
"Lorenzo",
""
],
[
"Sengamedu",
"Srinivasan H.",
""
]
] |
Recent advancements in Large Language Models (LLMs) have been reshaping Natural Language Processing (NLP) tasks in several domains. Their use in the field of Human Resources (HR) still has room for expansion and could be beneficial for several time-consuming tasks. Examples such as time-off submissions, medical claims filing, and access requests are noteworthy, but they are by no means the sole instances. However, the aforementioned developments must grapple with the pivotal challenge of constructing a high-quality training dataset. On one hand, most conversation datasets are solving problems for customers not employees. On the other hand, gathering conversations with HR could raise privacy concerns. To address this, we introduce HR-Multiwoz, a fully-labeled dataset of 550 conversations spanning 10 HR domains to evaluate LLM agents. Our work has the following contributions: (1) It is the first labeled open-sourced conversation dataset in the HR domain for NLP research. (2) It provides a detailed recipe for the data generation procedure along with data analysis and human evaluations. The data generation pipeline is transferable and can be easily adapted for labeled conversation data generation in other domains. (3) The proposed data-collection pipeline is mostly based on LLMs with minimal human involvement for annotation, which is time and cost-efficient.
|
1705.10301
|
Maruan Al-Shedivat
|
Maruan Al-Shedivat, Avinava Dubey, Eric P. Xing
|
Contextual Explanation Networks
|
48 pages, 18 figures, to appear in JMLR
| null | null | null |
cs.LG cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modern learning algorithms excel at producing accurate but complex models of
the data. However, deploying such models in the real world requires extra care:
we must ensure their reliability, robustness, and absence of undesired biases.
This motivates the development of models that are equally accurate but can
also be easily inspected and assessed beyond their predictive performance. To this
end, we introduce contextual explanation networks (CEN)---a class of
architectures that learn to predict by generating and utilizing intermediate,
simplified probabilistic models. Specifically, CENs generate parameters for
intermediate graphical models which are further used for prediction and play
the role of explanations. Contrary to the existing post-hoc model-explanation
tools, CENs learn to predict and to explain simultaneously. Our approach offers
two major advantages: (i) for each prediction, a valid, instance-specific
explanation is generated with no computational overhead and (ii) prediction via
explanation acts as a regularizer and boosts performance in data-scarce
settings. We analyze the proposed framework theoretically and experimentally.
Our results on image and text classification and survival analysis tasks
demonstrate that CENs are not only competitive with the state-of-the-art
methods but also offer additional insights behind each prediction, that can be
valuable for decision support. We also show that while post-hoc methods may
produce misleading explanations in certain cases, CENs are consistent and allow
us to detect such cases systematically.
|
[
{
"created": "Mon, 29 May 2017 17:39:51 GMT",
"version": "v1"
},
{
"created": "Tue, 30 Jan 2018 00:06:02 GMT",
"version": "v2"
},
{
"created": "Tue, 18 Dec 2018 22:33:40 GMT",
"version": "v3"
},
{
"created": "Wed, 9 Sep 2020 14:20:44 GMT",
"version": "v4"
}
] |
2020-09-10
|
[
[
"Al-Shedivat",
"Maruan",
""
],
[
"Dubey",
"Avinava",
""
],
[
"Xing",
"Eric P.",
""
]
] |
Modern learning algorithms excel at producing accurate but complex models of the data. However, deploying such models in the real world requires extra care: we must ensure their reliability, robustness, and absence of undesired biases. This motivates the development of models that are equally accurate but can also be easily inspected and assessed beyond their predictive performance. To this end, we introduce contextual explanation networks (CEN)---a class of architectures that learn to predict by generating and utilizing intermediate, simplified probabilistic models. Specifically, CENs generate parameters for intermediate graphical models which are further used for prediction and play the role of explanations. Contrary to the existing post-hoc model-explanation tools, CENs learn to predict and to explain simultaneously. Our approach offers two major advantages: (i) for each prediction, a valid, instance-specific explanation is generated with no computational overhead and (ii) prediction via explanation acts as a regularizer and boosts performance in data-scarce settings. We analyze the proposed framework theoretically and experimentally. Our results on image and text classification and survival analysis tasks demonstrate that CENs are not only competitive with the state-of-the-art methods but also offer additional insights behind each prediction, that can be valuable for decision support. We also show that while post-hoc methods may produce misleading explanations in certain cases, CENs are consistent and allow us to detect such cases systematically.
|
1803.04190
|
Mousumi Dutt
|
Mousumi Dutt, Arindam Biswas, Benedek Nagy
|
Counting of Shortest Paths in Cubic Grid
|
15 pages, 8 figures
| null | null | null |
cs.DM cs.CG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The enumeration of shortest paths in the cubic grid is presented here. Three
neighborhoods are considered in the cubic grid: the 6-neighborhood (face
connectivity), the 18-neighborhood (edge connectivity), and the 26-neighborhood
(vertex connectivity). The corresponding distance metrics are also formulated:
$L_1$, $D_{18}$, and $L_\infty$ are the metrics for the 6-neighborhood,
18-neighborhood, and 26-neighborhood, respectively. The problem is to find the
number of shortest paths, based on these neighborhoods, between two given
points in the 3D cubic grid represented by coordinate triplets. Formulations
for all three neighborhoods are presented. The problem has both theoretical
importance and practical relevance.
|
[
{
"created": "Mon, 12 Mar 2018 11:08:08 GMT",
"version": "v1"
}
] |
2018-03-13
|
[
[
"Dutt",
"Mousumi",
""
],
[
"Biswas",
"Arindam",
""
],
[
"Nagy",
"Benedek",
""
]
] |
The enumeration of shortest paths in the cubic grid is presented here. Three neighborhoods are considered in the cubic grid: the 6-neighborhood (face connectivity), the 18-neighborhood (edge connectivity), and the 26-neighborhood (vertex connectivity). The corresponding distance metrics are also formulated: $L_1$, $D_{18}$, and $L_\infty$ are the metrics for the 6-neighborhood, 18-neighborhood, and 26-neighborhood, respectively. The problem is to find the number of shortest paths, based on these neighborhoods, between two given points in the 3D cubic grid represented by coordinate triplets. Formulations for all three neighborhoods are presented. The problem has both theoretical importance and practical relevance.
|
2401.10273
|
Yu Han
|
Yu Han, Jingwen Tao
|
Revolutionizing Pharma: Unveiling the AI and LLM Trends in the
Pharmaceutical Industry
| null | null | null | null |
cs.CY cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
This document offers a critical overview of the emerging trends and
significant advancements in artificial intelligence (AI) within the
pharmaceutical industry. Detailing its application across key operational
areas, including research and development, animal testing, clinical trials,
hospital clinical stages, production, regulatory affairs, quality control and
other supporting areas, the paper categorically examines AI's role in each
sector. Special emphasis is placed on cutting-edge AI technologies like machine
learning algorithms and their contributions to various aspects of
pharmaceutical operations. Through this comprehensive analysis, the paper
highlights the transformative potential of AI in reshaping the pharmaceutical
industry's future.
|
[
{
"created": "Fri, 5 Jan 2024 04:01:09 GMT",
"version": "v1"
},
{
"created": "Mon, 22 Jan 2024 04:47:20 GMT",
"version": "v2"
}
] |
2024-01-23
|
[
[
"Han",
"Yu",
""
],
[
"Tao",
"Jingwen",
""
]
] |
This document offers a critical overview of the emerging trends and significant advancements in artificial intelligence (AI) within the pharmaceutical industry. Detailing its application across key operational areas, including research and development, animal testing, clinical trials, hospital clinical stages, production, regulatory affairs, quality control and other supporting areas, the paper categorically examines AI's role in each sector. Special emphasis is placed on cutting-edge AI technologies like machine learning algorithms and their contributions to various aspects of pharmaceutical operations. Through this comprehensive analysis, the paper highlights the transformative potential of AI in reshaping the pharmaceutical industry's future.
|
2203.00046
|
Mattias Heinrich
|
Mattias P. Heinrich and Lasse Hansen
|
Voxelmorph++ Going beyond the cranial vault with keypoint supervision
and multi-channel instance optimisation
|
10 pages, accepted at WBIR 2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
The majority of current research in deep learning based image registration
addresses inter-patient brain registration with moderate deformation
magnitudes. The recent Learn2Reg medical registration benchmark has
demonstrated that single-scale U-Net architectures, such as VoxelMorph that
directly employ a spatial transformer loss, often do not generalise well beyond
the cranial vault and fall short of state-of-the-art performance for abdominal
or intra-patient lung registration. Here, we propose two straightforward steps
that greatly reduce this gap in accuracy. First, we employ keypoint
self-supervision with a novel network head that predicts a discretised heatmap
and robustly reduces large deformations. Second, we
replace multiple learned fine-tuning steps by a single instance optimisation
with hand-crafted features and the Adam optimiser. Unlike other related
work, including FlowNet or PDD-Net, our approach does not require a fully
discretised architecture with a correlation layer. Our ablation study
demonstrates the importance of keypoints in both self-supervised and
unsupervised (using only a MIND metric) settings. On a multi-centric
inspiration-exhale lung CT dataset, including very challenging COPD scans, our
method outperforms VoxelMorph, improving nonlinear alignment by 77% compared
to 19%, reaching target registration errors of 2 mm that outperform all but
one learning method published to date. Extending the method to semantic
features sets new state-of-the-art performance on inter-subject abdominal CT
registration.
|
[
{
"created": "Mon, 28 Feb 2022 19:23:29 GMT",
"version": "v1"
}
] |
2022-03-02
|
[
[
"Heinrich",
"Mattias P.",
""
],
[
"Hansen",
"Lasse",
""
]
] |
The majority of current research in deep learning based image registration addresses inter-patient brain registration with moderate deformation magnitudes. The recent Learn2Reg medical registration benchmark has demonstrated that single-scale U-Net architectures, such as VoxelMorph that directly employ a spatial transformer loss, often do not generalise well beyond the cranial vault and fall short of state-of-the-art performance for abdominal or intra-patient lung registration. Here, we propose two straightforward steps that greatly reduce this gap in accuracy. First, we employ keypoint self-supervision with a novel network head that predicts a discretised heatmap and robustly reduces large deformations. Second, we replace multiple learned fine-tuning steps by a single instance optimisation with hand-crafted features and the Adam optimiser. Unlike other related work, including FlowNet or PDD-Net, our approach does not require a fully discretised architecture with a correlation layer. Our ablation study demonstrates the importance of keypoints in both self-supervised and unsupervised (using only a MIND metric) settings. On a multi-centric inspiration-exhale lung CT dataset, including very challenging COPD scans, our method outperforms VoxelMorph, improving nonlinear alignment by 77% compared to 19%, reaching target registration errors of 2 mm that outperform all but one learning method published to date. Extending the method to semantic features sets new state-of-the-art performance on inter-subject abdominal CT registration.
|
2407.02112
|
Andrej Tschalzev
|
Andrej Tschalzev, Sascha Marton, Stefan L\"udtke, Christian Bartelt,
Heiner Stuckenschmidt
|
A Data-Centric Perspective on Evaluating Machine Learning Models for
Tabular Data
| null | null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Tabular data is prevalent in real-world machine learning applications, and
new models for supervised learning of tabular data are frequently proposed.
Comparative studies assessing the performance of models typically consist of
model-centric evaluation setups with overly standardized data preprocessing.
This paper demonstrates that such model-centric evaluations are biased, as
real-world modeling pipelines often require dataset-specific preprocessing and
feature engineering. Therefore, we propose a data-centric evaluation framework.
We select 10 relevant datasets from Kaggle competitions and implement
expert-level preprocessing pipelines for each dataset. We conduct experiments
with different preprocessing pipelines and hyperparameter optimization (HPO)
regimes to quantify the impact of model selection, HPO, feature engineering,
and test-time adaptation. Our main findings are: 1. After dataset-specific
feature engineering, model rankings change considerably, performance
differences decrease, and the importance of model selection diminishes. 2. Recent
models, despite their measurable progress, still significantly benefit from
manual feature engineering. This holds true for both tree-based models and
neural networks. 3. While tabular data is typically considered static, samples
are often collected over time, and adapting to distribution shifts can be
important even in supposedly static data. These insights suggest that research
efforts should be directed toward a data-centric perspective, acknowledging
that tabular data requires feature engineering and often exhibits temporal
characteristics.
|
[
{
"created": "Tue, 2 Jul 2024 09:54:39 GMT",
"version": "v1"
}
] |
2024-07-03
|
[
[
"Tschalzev",
"Andrej",
""
],
[
"Marton",
"Sascha",
""
],
[
"Lüdtke",
"Stefan",
""
],
[
"Bartelt",
"Christian",
""
],
[
"Stuckenschmidt",
"Heiner",
""
]
] |
Tabular data is prevalent in real-world machine learning applications, and new models for supervised learning of tabular data are frequently proposed. Comparative studies assessing the performance of models typically consist of model-centric evaluation setups with overly standardized data preprocessing. This paper demonstrates that such model-centric evaluations are biased, as real-world modeling pipelines often require dataset-specific preprocessing and feature engineering. Therefore, we propose a data-centric evaluation framework. We select 10 relevant datasets from Kaggle competitions and implement expert-level preprocessing pipelines for each dataset. We conduct experiments with different preprocessing pipelines and hyperparameter optimization (HPO) regimes to quantify the impact of model selection, HPO, feature engineering, and test-time adaptation. Our main findings are: 1. After dataset-specific feature engineering, model rankings change considerably, performance differences decrease, and the importance of model selection diminishes. 2. Recent models, despite their measurable progress, still significantly benefit from manual feature engineering. This holds true for both tree-based models and neural networks. 3. While tabular data is typically considered static, samples are often collected over time, and adapting to distribution shifts can be important even in supposedly static data. These insights suggest that research efforts should be directed toward a data-centric perspective, acknowledging that tabular data requires feature engineering and often exhibits temporal characteristics.
|
1207.4258
|
Lin Chen
|
Lin Chen, Athanasios V. Vasilakos
|
Joint Rate Adaptation and Medium Access in Wireless LANs: a
Non-cooperative Game Theoretic Perspective
| null | null | null | null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Wireless local area networks (WLANs) based on IEEE 802.11 standards are
becoming ubiquitous today and typically support multiple data rates. In such
multi-rate WLANs, distributed medium access and rate adaptation are two key
elements to achieve efficient radio resource utilization, especially in
non-cooperative environments. In this paper, we present an analytical study on
the non-cooperative multi-rate WLANs composed of selfish users jointly
adjusting their data rate and contention window size at the medium access level
to maximize their own throughput, irrespective of the impact of their selfish
behaviors on overall system performance. Specifically, we develop an adapted
Tit-For-Tat (TFT) strategy to guide the system to an efficient equilibrium in
non-cooperative environments. We model the interactions among selfish users
under the adapted TFT framework as a non-cooperative joint medium access and
rate adaptation game. A systematic analysis is conducted on the structural
properties of the game to provide insights on the interaction between rate
adaptation and 802.11 medium access control in a competitive setting. We show
that the game has multiple equilibria, which, after the equilibrium refinement
process that we develop, reduce to a unique efficient equilibrium. We further
develop a distributed algorithm to achieve this equilibrium and demonstrate
that the equilibrium achieves performance very close to the system optimum
from a social perspective.
|
[
{
"created": "Wed, 18 Jul 2012 04:04:35 GMT",
"version": "v1"
}
] |
2012-07-19
|
[
[
"Chen",
"Lin",
""
],
[
"Vasilakos",
"Athanasios V.",
""
]
] |
Wireless local area networks (WLANs) based on IEEE 802.11 standards are becoming ubiquitous today and typically support multiple data rates. In such multi-rate WLANs, distributed medium access and rate adaptation are two key elements to achieve efficient radio resource utilization, especially in non-cooperative environments. In this paper, we present an analytical study on the non-cooperative multi-rate WLANs composed of selfish users jointly adjusting their data rate and contention window size at the medium access level to maximize their own throughput, irrespective of the impact of their selfish behaviors on overall system performance. Specifically, we develop an adapted Tit-For-Tat (TFT) strategy to guide the system to an efficient equilibrium in non-cooperative environments. We model the interactions among selfish users under the adapted TFT framework as a non-cooperative joint medium access and rate adaptation game. A systematic analysis is conducted on the structural properties of the game to provide insights on the interaction between rate adaptation and 802.11 medium access control in a competitive setting. We show that the game has multiple equilibria, which, after the equilibrium refinement process that we develop, reduce to a unique efficient equilibrium. We further develop a distributed algorithm to achieve this equilibrium and demonstrate that the equilibrium achieves performance very close to the system optimum from a social perspective.
|
2403.12566
|
Zhichao Feng
|
Zhichao Feng, Junjiie Xie, Kaiyuan Li, Yu Qin, Pengfei Wang, Qianzhong
Li, Bin Yin, Xiang Li, Wei Lin, Shangguang Wang
|
Context-based Fast Recommendation Strategy for Long User Behavior
Sequence in Meituan Waimai
|
9 pages, accepted by WWW 2024 Industry Track
| null |
10.1145/3589335.3648334
| null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the recommender system of Meituan Waimai, we are dealing with
ever-lengthening user behavior sequences, which pose an increasing challenge to
modeling user preference effectively. Existing sequential recommendation models
often fail to capture long-term dependencies or are too complex, complicating
the fulfillment of Meituan Waimai's unique business needs. To better model user
interests, we consider selecting relevant sub-sequences from users' extensive
historical behaviors based on their preferences. In this specific scenario,
we've noticed that the contexts in which users interact have a significant
impact on their preferences. For this purpose, we introduce a novel method
called Context-based Fast Recommendation Strategy to tackle the issue of long
sequences. We first identify contexts that share similar user preferences with
the target context and then locate the corresponding PoIs based on these
identified contexts. This approach eliminates the necessity to select a
sub-sequence for every candidate PoI, thereby avoiding high time complexity.
Specifically, we implement a prototype-based approach to pinpoint contexts that
mirror similar user preferences. To amplify accuracy and interpretability, we
employ JS divergence of PoI attributes such as categories and prices as a
measure of similarity between contexts. A temporal graph integrating both
prototype and context nodes helps incorporate temporal information. We then
identify appropriate prototypes considering both target contexts and short-term
user preferences. Following this, we utilize contexts aligned with these
prototypes to generate a sub-sequence, aimed at predicting CTR and CTCVR scores
with target attention. Since its inception in 2023, this strategy has been
adopted in Meituan Waimai's display recommender system, leading to a 4.6% surge
in CTR and a 4.2% boost in GMV.
|
[
{
"created": "Tue, 19 Mar 2024 09:20:43 GMT",
"version": "v1"
}
] |
2024-03-20
|
[
[
"Feng",
"Zhichao",
""
],
[
"Xie",
"Junjiie",
""
],
[
"Li",
"Kaiyuan",
""
],
[
"Qin",
"Yu",
""
],
[
"Wang",
"Pengfei",
""
],
[
"Li",
"Qianzhong",
""
],
[
"Yin",
"Bin",
""
],
[
"Li",
"Xiang",
""
],
[
"Lin",
"Wei",
""
],
[
"Wang",
"Shangguang",
""
]
] |
In the recommender system of Meituan Waimai, we are dealing with ever-lengthening user behavior sequences, which pose an increasing challenge to modeling user preference effectively. Existing sequential recommendation models often fail to capture long-term dependencies or are too complex, complicating the fulfillment of Meituan Waimai's unique business needs. To better model user interests, we consider selecting relevant sub-sequences from users' extensive historical behaviors based on their preferences. In this specific scenario, we've noticed that the contexts in which users interact have a significant impact on their preferences. For this purpose, we introduce a novel method called Context-based Fast Recommendation Strategy to tackle the issue of long sequences. We first identify contexts that share similar user preferences with the target context and then locate the corresponding PoIs based on these identified contexts. This approach eliminates the necessity to select a sub-sequence for every candidate PoI, thereby avoiding high time complexity. Specifically, we implement a prototype-based approach to pinpoint contexts that mirror similar user preferences. To amplify accuracy and interpretability, we employ JS divergence of PoI attributes such as categories and prices as a measure of similarity between contexts. A temporal graph integrating both prototype and context nodes helps incorporate temporal information. We then identify appropriate prototypes considering both target contexts and short-term user preferences. Following this, we utilize contexts aligned with these prototypes to generate a sub-sequence, aimed at predicting CTR and CTCVR scores with target attention. Since its inception in 2023, this strategy has been adopted in Meituan Waimai's display recommender system, leading to a 4.6% surge in CTR and a 4.2% boost in GMV.
|
1702.01606
|
Daniel Gall
|
Daniel Gall and Thom Fr\"uhwirth
|
An Operational Semantics for the Cognitive Architecture ACT-R and its
Translation to Constraint Handling Rules
| null | null | null | null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Computational psychology has the aim to explain human cognition by
computational models of cognitive processes. The cognitive architecture ACT-R
is popular to develop such models. Although ACT-R has a well-defined
psychological theory and has been used to explain many cognitive processes,
there are two problems that make it hard to reason formally about its cognitive
models: First, ACT-R lacks a formalization of its underlying production rule
system and secondly, there are many different implementations and extensions of
ACT-R with technical artifacts complicating formal reasoning even more.
This paper describes a formal operational semantics - the very abstract
semantics - that abstracts from as many technical details as possible keeping
it open to extensions and different implementations of the ACT-R theory. In a
second step, this semantics is refined to define some of its abstract features
that are found in many implementations of ACT-R - the abstract semantics. It
concentrates on the procedural core of ACT-R and is suitable for analysis of
the transition system since it still abstracts from details like timing, the
sub-symbolic layer or conflict resolution.
Furthermore, a translation of ACT-R models to the programming language
Constraint Handling Rules (CHR) is defined. This makes the abstract semantics
an executable specification of ACT-R. CHR has been used successfully to embed
other rule-based formalisms like graph transformation systems or functional
programming. There are many results and tools that support formal reasoning
about and analysis of CHR programs. The translation of ACT-R models to CHR is
proven sound and complete w.r.t. the abstract operational semantics of ACT-R.
This paves the way to analysis of ACT-R models through CHR. Therefore, to the
best of our knowledge, our abstract semantics is the first formulation of ACT-R
suitable for both analysis and execution.
|
[
{
"created": "Mon, 6 Feb 2017 13:19:00 GMT",
"version": "v1"
}
] |
2017-02-07
|
[
[
"Gall",
"Daniel",
""
],
[
"Frühwirth",
"Thom",
""
]
] |
Computational psychology has the aim to explain human cognition by computational models of cognitive processes. The cognitive architecture ACT-R is popular to develop such models. Although ACT-R has a well-defined psychological theory and has been used to explain many cognitive processes, there are two problems that make it hard to reason formally about its cognitive models: First, ACT-R lacks a formalization of its underlying production rule system and secondly, there are many different implementations and extensions of ACT-R with technical artifacts complicating formal reasoning even more. This paper describes a formal operational semantics - the very abstract semantics - that abstracts from as many technical details as possible keeping it open to extensions and different implementations of the ACT-R theory. In a second step, this semantics is refined to define some of its abstract features that are found in many implementations of ACT-R - the abstract semantics. It concentrates on the procedural core of ACT-R and is suitable for analysis of the transition system since it still abstracts from details like timing, the sub-symbolic layer or conflict resolution. Furthermore, a translation of ACT-R models to the programming language Constraint Handling Rules (CHR) is defined. This makes the abstract semantics an executable specification of ACT-R. CHR has been used successfully to embed other rule-based formalisms like graph transformation systems or functional programming. There are many results and tools that support formal reasoning about and analysis of CHR programs. The translation of ACT-R models to CHR is proven sound and complete w.r.t. the abstract operational semantics of ACT-R. This paves the way to analysis of ACT-R models through CHR. Therefore, to the best of our knowledge, our abstract semantics is the first formulation of ACT-R suitable for both analysis and execution.
|
1804.10694
|
R. Baghdadi
|
Riyadh Baghdadi, Jessica Ray, Malek Ben Romdhane, Emanuele Del Sozzo,
Abdurrahman Akkas, Yunming Zhang, Patricia Suriana, Shoaib Kamil, Saman
Amarasinghe
|
Tiramisu: A Polyhedral Compiler for Expressing Fast and Portable Code
|
arXiv admin note: substantial text overlap with arXiv:1803.00419
| null | null | null |
cs.PL cs.DC cs.MS cs.NE cs.PF
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper introduces Tiramisu, a polyhedral framework designed to generate
high performance code for multiple platforms including multicores, GPUs, and
distributed machines. Tiramisu introduces a scheduling language with novel
extensions to explicitly manage the complexities that arise when targeting
these systems. The framework is designed for the areas of image processing,
stencils, linear algebra and deep learning. Tiramisu has two main features: it
relies on a flexible representation based on the polyhedral model and it has a
rich scheduling language allowing fine-grained control of optimizations.
Tiramisu uses a four-level intermediate representation that allows full
separation between the algorithms, loop transformations, data layouts, and
communication. This separation simplifies targeting multiple hardware
architectures with the same algorithm. We evaluate Tiramisu by writing a set of
image processing, deep learning, and linear algebra benchmarks and compare them
with state-of-the-art compilers and hand-tuned libraries. We show that Tiramisu
matches or outperforms existing compilers and libraries on different hardware
architectures, including multicore CPUs, GPUs, and distributed machines.
|
[
{
"created": "Fri, 27 Apr 2018 21:28:44 GMT",
"version": "v1"
},
{
"created": "Wed, 12 Sep 2018 19:58:57 GMT",
"version": "v2"
},
{
"created": "Wed, 26 Sep 2018 21:24:44 GMT",
"version": "v3"
},
{
"created": "Tue, 18 Dec 2018 02:41:00 GMT",
"version": "v4"
},
{
"created": "Thu, 20 Dec 2018 16:25:40 GMT",
"version": "v5"
}
] |
2018-12-21
|
[
[
"Baghdadi",
"Riyadh",
""
],
[
"Ray",
"Jessica",
""
],
[
"Romdhane",
"Malek Ben",
""
],
[
"Del Sozzo",
"Emanuele",
""
],
[
"Akkas",
"Abdurrahman",
""
],
[
"Zhang",
"Yunming",
""
],
[
"Suriana",
"Patricia",
""
],
[
"Kamil",
"Shoaib",
""
],
[
"Amarasinghe",
"Saman",
""
]
] |
This paper introduces Tiramisu, a polyhedral framework designed to generate high performance code for multiple platforms including multicores, GPUs, and distributed machines. Tiramisu introduces a scheduling language with novel extensions to explicitly manage the complexities that arise when targeting these systems. The framework is designed for the areas of image processing, stencils, linear algebra and deep learning. Tiramisu has two main features: it relies on a flexible representation based on the polyhedral model and it has a rich scheduling language allowing fine-grained control of optimizations. Tiramisu uses a four-level intermediate representation that allows full separation between the algorithms, loop transformations, data layouts, and communication. This separation simplifies targeting multiple hardware architectures with the same algorithm. We evaluate Tiramisu by writing a set of image processing, deep learning, and linear algebra benchmarks and compare them with state-of-the-art compilers and hand-tuned libraries. We show that Tiramisu matches or outperforms existing compilers and libraries on different hardware architectures, including multicore CPUs, GPUs, and distributed machines.
|
2002.03997
|
Vladimir Kovalenko
|
Vladimir Kovalenko, Egor Bogomolov, Timofey Bryksin, Alberto Bacchelli
|
Building Implicit Vector Representations of Individual Coding Style
| null | null | null | null |
cs.SE cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With the goal of facilitating team collaboration, we propose a new approach
to building vector representations of individual developers by capturing their
individual contribution style, or coding style. Such representations can find
use in the next generation of software development team collaboration tools,
for example by enabling the tools to track knowledge transfer in teams. The key
idea of our approach is to avoid using explicitly defined metrics of coding
style and instead build the representations through training a model for
authorship recognition and extracting the representations of individual
developers from the trained model. By empirically evaluating the output of our
approach, we find that implicitly built individual representations reflect some
properties of team structure: developers who report learning from each other
are represented closer to each other.
|
[
{
"created": "Mon, 10 Feb 2020 18:12:27 GMT",
"version": "v1"
}
] |
2020-02-11
|
[
[
"Kovalenko",
"Vladimir",
""
],
[
"Bogomolov",
"Egor",
""
],
[
"Bryksin",
"Timofey",
""
],
[
"Bacchelli",
"Alberto",
""
]
] |
With the goal of facilitating team collaboration, we propose a new approach to building vector representations of individual developers by capturing their individual contribution style, or coding style. Such representations can find use in the next generation of software development team collaboration tools, for example by enabling the tools to track knowledge transfer in teams. The key idea of our approach is to avoid using explicitly defined metrics of coding style and instead build the representations through training a model for authorship recognition and extracting the representations of individual developers from the trained model. By empirically evaluating the output of our approach, we find that implicitly built individual representations reflect some properties of team structure: developers who report learning from each other are represented closer to each other.
|
0902.0221
|
Alireza Avanaki
|
Alireza Avanaki
|
Over-enhancement Reduction in Local Histogram Equalization using its
Degrees of Freedom
| null | null | null | null |
cs.CV cs.MM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A well-known issue of local (adaptive) histogram equalization (LHE) is
over-enhancement (i.e., generation of spurious details) in homogenous areas of
the image. In this paper, we show that the LHE problem has many solutions due
to the ambiguity in ranking pixels with the same intensity. The LHE solution
space can be searched for the images having the maximum PSNR or structural
similarity (SSIM) with the input image. As compared to the results of the prior
art, these solutions are more similar to the input image while offering the
same local contrast.
Index Terms: histogram modification or specification, contrast enhancement
|
[
{
"created": "Mon, 2 Feb 2009 10:41:53 GMT",
"version": "v1"
},
{
"created": "Mon, 14 Sep 2009 10:14:32 GMT",
"version": "v2"
}
] |
2009-09-14
|
[
[
"Avanaki",
"Alireza",
""
]
] |
A well-known issue of local (adaptive) histogram equalization (LHE) is over-enhancement (i.e., generation of spurious details) in homogenous areas of the image. In this paper, we show that the LHE problem has many solutions due to the ambiguity in ranking pixels with the same intensity. The LHE solution space can be searched for the images having the maximum PSNR or structural similarity (SSIM) with the input image. As compared to the results of the prior art, these solutions are more similar to the input image while offering the same local contrast. Index Terms: histogram modification or specification, contrast enhancement
|
2408.05960
|
Alexander Dockhorn
|
Carlo N\"ubel, Alexander Dockhorn, Sanaz Mostaghim
|
Match Point AI: A Novel AI Framework for Evaluating Data-Driven Tennis
Strategies
|
4 pages, 1 page abstract, short paper, to be published in Proceedings
of the IEEE Conference on Games 2024
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many works in the domain of artificial intelligence in games focus on board
or video games due to the ease of reimplementing their mechanics.
Decision-making problems in real-world sports share many similarities to such
domains. Nevertheless, not many frameworks on sports games exist. In this
paper, we present the tennis match simulation environment \textit{Match Point
AI}, in which different agents can compete against real-world data-driven bot
strategies. Next to presenting the framework, we highlight its capabilities by
illustrating, how MCTS can be used in Match Point AI to optimize the shot
direction selection problem in tennis. While the framework will be extended in
the future, first experiments already reveal that generated shot-by-shot data
of simulated tennis matches show realistic characteristics when compared to
real-world data. At the same time, reasonable shot placement strategies emerge,
which share similarities to the ones found in real-world tennis matches.
|
[
{
"created": "Mon, 12 Aug 2024 07:22:46 GMT",
"version": "v1"
}
] |
2024-08-13
|
[
[
"Nübel",
"Carlo",
""
],
[
"Dockhorn",
"Alexander",
""
],
[
"Mostaghim",
"Sanaz",
""
]
] |
Many works in the domain of artificial intelligence in games focus on board or video games due to the ease of reimplementing their mechanics. Decision-making problems in real-world sports share many similarities to such domains. Nevertheless, not many frameworks on sports games exist. In this paper, we present the tennis match simulation environment \textit{Match Point AI}, in which different agents can compete against real-world data-driven bot strategies. Next to presenting the framework, we highlight its capabilities by illustrating, how MCTS can be used in Match Point AI to optimize the shot direction selection problem in tennis. While the framework will be extended in the future, first experiments already reveal that generated shot-by-shot data of simulated tennis matches show realistic characteristics when compared to real-world data. At the same time, reasonable shot placement strategies emerge, which share similarities to the ones found in real-world tennis matches.
|
2210.14677
|
Olivier Colliot
|
Rosana El Jurdi and Olivier Colliot
|
How precise are performance estimates for typical medical image
segmentation tasks?
| null | null | null | null |
cs.CV q-bio.QM stat.AP
|
http://creativecommons.org/licenses/by/4.0/
|
An important issue in medical image processing is to be able to estimate not
only the performances of algorithms but also the precision of the estimation of
these performances. Reporting precision typically amounts to reporting
standard-error of the mean (SEM) or equivalently confidence intervals. However,
this is rarely done in medical image segmentation studies. In this paper, we
aim to estimate what is the typical confidence that can be expected in such
studies. To that end, we first perform experiments for Dice metric estimation
using a standard deep learning model (U-net) and a classical task from the
Medical Segmentation Decathlon. We extensively study precision estimation using
both Gaussian assumption and bootstrapping (which does not require any
assumption on the distribution). We then perform simulations for other test set
sizes and performance spreads. Overall, our work shows that small test sets
lead to wide confidence intervals (e.g. $\sim$8 points of Dice for 20 samples
with $\sigma \simeq 10$).
|
[
{
"created": "Wed, 26 Oct 2022 12:53:15 GMT",
"version": "v1"
},
{
"created": "Tue, 1 Nov 2022 10:54:16 GMT",
"version": "v2"
},
{
"created": "Wed, 24 May 2023 12:32:40 GMT",
"version": "v3"
}
] |
2023-05-25
|
[
[
"Jurdi",
"Rosana El",
""
],
[
"Colliot",
"Olivier",
""
]
] |
An important issue in medical image processing is to be able to estimate not only the performances of algorithms but also the precision of the estimation of these performances. Reporting precision typically amounts to reporting standard-error of the mean (SEM) or equivalently confidence intervals. However, this is rarely done in medical image segmentation studies. In this paper, we aim to estimate what is the typical confidence that can be expected in such studies. To that end, we first perform experiments for Dice metric estimation using a standard deep learning model (U-net) and a classical task from the Medical Segmentation Decathlon. We extensively study precision estimation using both Gaussian assumption and bootstrapping (which does not require any assumption on the distribution). We then perform simulations for other test set sizes and performance spreads. Overall, our work shows that small test sets lead to wide confidence intervals (e.g. $\sim$8 points of Dice for 20 samples with $\sigma \simeq 10$).
|
2005.04016
|
Abderrahmane Maaradji
|
Abderrahmane Maaradji, Marlon Dumas, Marcello La Rosa, and Alireza
Ostovar
|
Detecting sudden and gradual drifts in business processes from execution
traces
| null |
IEEE Transactions on Knowledge and Data Engineering 29, no. 10
(2017)
|
10.1109/TKDE.2017.2720601
| null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Business processes are prone to unexpected changes, as process workers may
suddenly or gradually start executing a process differently in order to adjust
to changes in workload, season, or other external factors. Early detection of
business process changes enables managers to identify and act upon changes that
may otherwise affect process performance. Business process drift detection
refers to a family of methods to detect changes in a business process by
analyzing event logs extracted from the systems that support the execution of
the process. Existing methods for business process drift detection are based on
an explorative analysis of a potentially large feature space and in some cases
they require users to manually identify specific features that characterize the
drift. Depending on the explored feature space, these methods miss various
types of changes. Moreover, they are either designed to detect sudden drifts or
gradual drifts but not both. This paper proposes an automated and statistically
grounded method for detecting sudden and gradual business process drifts under
a unified framework. An empirical evaluation shows that the method detects
typical change patterns with significantly higher accuracy and lower detection
delay than existing methods, while accurately distinguishing between sudden and
gradual drifts.
|
[
{
"created": "Thu, 7 May 2020 16:22:11 GMT",
"version": "v1"
}
] |
2020-05-11
|
[
[
"Maaradji",
"Abderrahmane",
""
],
[
"Dumas",
"Marlon",
""
],
[
"La Rosa",
"Marcello",
""
],
[
"Ostovar",
"Alireza",
""
]
] |
Business processes are prone to unexpected changes, as process workers may suddenly or gradually start executing a process differently in order to adjust to changes in workload, season, or other external factors. Early detection of business process changes enables managers to identify and act upon changes that may otherwise affect process performance. Business process drift detection refers to a family of methods to detect changes in a business process by analyzing event logs extracted from the systems that support the execution of the process. Existing methods for business process drift detection are based on an explorative analysis of a potentially large feature space and in some cases they require users to manually identify specific features that characterize the drift. Depending on the explored feature space, these methods miss various types of changes. Moreover, they are either designed to detect sudden drifts or gradual drifts but not both. This paper proposes an automated and statistically grounded method for detecting sudden and gradual business process drifts under a unified framework. An empirical evaluation shows that the method detects typical change patterns with significantly higher accuracy and lower detection delay than existing methods, while accurately distinguishing between sudden and gradual drifts.
|
1703.04086
|
L.T. Handoko
|
A.A. Waskita, H. Suhartanto, L.T. Handoko
|
A performance study of anomaly detection using entropy method
|
Proceeding of the International Conference on Computer, Control,
Informatics and its Applications (2017) pp. 137-140
| null |
10.1109/IC3INA.2016.7863038
|
FISIKALIPI-16081
|
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An experiment to study the entropy method for an anomaly detection system has
been performed. The study has been conducted using real data generated from the
distributed sensor networks at the Intel Berkeley Research Laboratory. The
experimental results were compared with the elliptical method and has been
analyzed in two dimensional data sets acquired from temperature and humidity
sensors across 52 micro controllers. Using the binary classification to
determine the upper and lower boundaries for each series of sensors, it has
been shown that the entropy method are able to detect more number of out
ranging sensor nodes than the elliptical methods. It can be argued that the
better result was mainly due to the lack of elliptical approach which is
requiring certain correlation between two sensor series, while in the entropy
approach each sensor series is treated independently. This is very important in
the current case where both sensor series are not correlated each other.
|
[
{
"created": "Sun, 12 Mar 2017 09:37:34 GMT",
"version": "v1"
}
] |
2017-03-14
|
[
[
"Waskita",
"A. A.",
""
],
[
"Suhartanto",
"H.",
""
],
[
"Handoko",
"L. T.",
""
]
] |
An experiment to study the entropy method for an anomaly detection system has been performed. The study has been conducted using real data generated from the distributed sensor networks at the Intel Berkeley Research Laboratory. The experimental results were compared with the elliptical method and has been analyzed in two dimensional data sets acquired from temperature and humidity sensors across 52 micro controllers. Using the binary classification to determine the upper and lower boundaries for each series of sensors, it has been shown that the entropy method are able to detect more number of out ranging sensor nodes than the elliptical methods. It can be argued that the better result was mainly due to the lack of elliptical approach which is requiring certain correlation between two sensor series, while in the entropy approach each sensor series is treated independently. This is very important in the current case where both sensor series are not correlated each other.
|
2004.08852
|
Hyeonseong Im
|
Hyeon-Seong Im and Si-Hyeon Lee
|
Mobility-Assisted Covert Communication over Wireless Ad Hoc Networks
|
This paper was submitted to IEEE Transactions on Information
Forensics and Security. The material in this paper will be presented in part
at IEEE ISIT 2020
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the effect of node mobility on the throughput scaling of the covert
communication over a wireless adhoc network. It is assumed that $n$ mobile
nodes want to communicate each other in a unit disk while keeping the presence
of the communication secret from each of $\Theta(n^s)$ non-colluding wardens
($s>0$). Our results show that the node mobility greatly improves the
throughput scaling, compared to the case of fixed node location. In particular,
for $0<s<1$, the aggregate throughput scaling is shown to be linear in $n$ when
the number of channel uses that each warden uses to judge the presence of
communication is not too large compared to $n$. For the achievability, we
modify the two-hop based scheme by Grossglauser and Tse (2002), which was
proposed for a wireless ad hoc network without a covertness constraint, by
introducing a preservation region around each warden in which the senders are
not allowed to transmit and by carefully analyzing the effect of covertness
constraint on the transmit power and the resultant transmission rates. This
scheme is shown to be optimal for $0<s<1$ under an assumption that each node
outside preservation regions around wardens uses the same transmit power.
|
[
{
"created": "Sun, 19 Apr 2020 13:42:44 GMT",
"version": "v1"
},
{
"created": "Tue, 21 Apr 2020 06:52:53 GMT",
"version": "v2"
}
] |
2020-04-22
|
[
[
"Im",
"Hyeon-Seong",
""
],
[
"Lee",
"Si-Hyeon",
""
]
] |
We study the effect of node mobility on the throughput scaling of the covert communication over a wireless adhoc network. It is assumed that $n$ mobile nodes want to communicate each other in a unit disk while keeping the presence of the communication secret from each of $\Theta(n^s)$ non-colluding wardens ($s>0$). Our results show that the node mobility greatly improves the throughput scaling, compared to the case of fixed node location. In particular, for $0<s<1$, the aggregate throughput scaling is shown to be linear in $n$ when the number of channel uses that each warden uses to judge the presence of communication is not too large compared to $n$. For the achievability, we modify the two-hop based scheme by Grossglauser and Tse (2002), which was proposed for a wireless ad hoc network without a covertness constraint, by introducing a preservation region around each warden in which the senders are not allowed to transmit and by carefully analyzing the effect of covertness constraint on the transmit power and the resultant transmission rates. This scheme is shown to be optimal for $0<s<1$ under an assumption that each node outside preservation regions around wardens uses the same transmit power.
|
1812.05802
|
Xin Yang
|
Cheng Bian, Xin Yang, Jianqiang Ma, Shen Zheng, Yu-An Liu, Reza
Nezafat, Pheng-Ann Heng, and Yefeng Zheng
|
Pyramid Network with Online Hard Example Mining for Accurate Left Atrium
Segmentation
|
9 pages, 4 figures. MICCAI Workshop on STACOM 2018
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Accurately segmenting left atrium in MR volume can benefit the ablation
procedure of atrial fibrillation. Traditional automated solutions often fail in
relieving experts from the labor-intensive manual labeling. In this paper, we
propose a deep neural network based solution for automated left atrium
segmentation in gadolinium-enhanced MR volumes with promising performance. We
firstly argue that, for this volumetric segmentation task, networks in 2D
fashion can present great superiorities in time efficiency and segmentation
accuracy than networks with 3D fashion. Considering the highly varying shape of
atrium and the branchy structure of associated pulmonary veins, we propose to
adopt a pyramid module to collect semantic cues in feature maps from multiple
scales for fine-grained segmentation. Also, to promote our network in
classifying the hard examples, we propose an Online Hard Negative Example
Mining strategy to identify voxels in slices with low classification
certainties and penalize the wrong predictions on them. Finally, we devise a
competitive training scheme to further boost the generalization ability of
networks. Extensively verified on 20 testing volumes, our proposed framework
achieves an average Dice of 92.83% in segmenting the left atria and pulmonary
veins.
|
[
{
"created": "Fri, 14 Dec 2018 07:28:07 GMT",
"version": "v1"
}
] |
2018-12-17
|
[
[
"Bian",
"Cheng",
""
],
[
"Yang",
"Xin",
""
],
[
"Ma",
"Jianqiang",
""
],
[
"Zheng",
"Shen",
""
],
[
"Liu",
"Yu-An",
""
],
[
"Nezafat",
"Reza",
""
],
[
"Heng",
"Pheng-Ann",
""
],
[
"Zheng",
"Yefeng",
""
]
] |
Accurately segmenting left atrium in MR volume can benefit the ablation procedure of atrial fibrillation. Traditional automated solutions often fail in relieving experts from the labor-intensive manual labeling. In this paper, we propose a deep neural network based solution for automated left atrium segmentation in gadolinium-enhanced MR volumes with promising performance. We firstly argue that, for this volumetric segmentation task, networks in 2D fashion can present great superiorities in time efficiency and segmentation accuracy than networks with 3D fashion. Considering the highly varying shape of atrium and the branchy structure of associated pulmonary veins, we propose to adopt a pyramid module to collect semantic cues in feature maps from multiple scales for fine-grained segmentation. Also, to promote our network in classifying the hard examples, we propose an Online Hard Negative Example Mining strategy to identify voxels in slices with low classification certainties and penalize the wrong predictions on them. Finally, we devise a competitive training scheme to further boost the generalization ability of networks. Extensively verified on 20 testing volumes, our proposed framework achieves an average Dice of 92.83% in segmenting the left atria and pulmonary veins.
|
2008.04638
|
Marco Comunita'
|
Marco Comunit\`a, Andrea Gerino, Veranika Lim, Lorenzo Picinali
|
PlugSonic: a web- and mobile-based platform for binaural audio and sonic
narratives
|
22 pages, 11 figures
| null | null | null |
cs.SD cs.HC cs.MM eess.AS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
PlugSonic is a suite of web- and mobile-based applications for the curation
and experience of binaural interactive soundscapes and sonic narratives. It was
developed as part of the PLUGGY EU project (Pluggable Social Platform for
Heritage Awareness and Participation) and consists of two main applications:
PlugSonic Sample, to edit and apply audio effects, and PlugSonic Soundscape, to
create and experience binaural soundscapes. The audio processing within
PlugSonic is based on the Web Audio API and the 3D Tune-In Toolkit, while the
exploration of soundscapes in a physical space is obtained using Apple's ARKit.
In this paper we present the design choices, the user involvement processes and
the implementation details. The main goal of PlugSonic is technology
democratisation; PlugSonic users - whether institutions or citizens - are all
given the instruments needed to create, process and experience 3D soundscapes
and sonic narrative; without the need for specific devices, external tools
(software and/or hardware), specialised knowledge or custom development. The
evaluation, which was conducted with inexperienced users on three tasks -
creation, curation and experience - demonstrates how PlugSonic is indeed a
simple, effective, yet powerful tool.
|
[
{
"created": "Tue, 11 Aug 2020 11:42:49 GMT",
"version": "v1"
}
] |
2020-08-12
|
[
[
"Comunità",
"Marco",
""
],
[
"Gerino",
"Andrea",
""
],
[
"Lim",
"Veranika",
""
],
[
"Picinali",
"Lorenzo",
""
]
] |
PlugSonic is a suite of web- and mobile-based applications for the curation and experience of binaural interactive soundscapes and sonic narratives. It was developed as part of the PLUGGY EU project (Pluggable Social Platform for Heritage Awareness and Participation) and consists of two main applications: PlugSonic Sample, to edit and apply audio effects, and PlugSonic Soundscape, to create and experience binaural soundscapes. The audio processing within PlugSonic is based on the Web Audio API and the 3D Tune-In Toolkit, while the exploration of soundscapes in a physical space is obtained using Apple's ARKit. In this paper we present the design choices, the user involvement processes and the implementation details. The main goal of PlugSonic is technology democratisation; PlugSonic users - whether institutions or citizens - are all given the instruments needed to create, process and experience 3D soundscapes and sonic narrative; without the need for specific devices, external tools (software and/or hardware), specialised knowledge or custom development. The evaluation, which was conducted with inexperienced users on three tasks - creation, curation and experience - demonstrates how PlugSonic is indeed a simple, effective, yet powerful tool.
|
2312.07951
|
Zhaorui Tan
|
Zhaorui Tan, Xi Yang, Kaizhu Huang
|
Semantic-aware Data Augmentation for Text-to-image Synthesis
|
Accepted by AAAI24
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Data augmentation has been recently leveraged as an effective regularizer in
various vision-language deep neural networks. However, in text-to-image
synthesis (T2Isyn), current augmentation wisdom still suffers from the semantic
mismatch between augmented paired data. Even worse, semantic collapse may occur
when generated images are less semantically constrained. In this paper, we
develop a novel Semantic-aware Data Augmentation (SADA) framework dedicated to
T2Isyn. In particular, we propose to augment texts in the semantic space via an
Implicit Textual Semantic Preserving Augmentation ($ITA$), in conjunction with
a specifically designed Image Semantic Regularization Loss ($L_r$) as Generated
Image Semantic Conservation, to cope well with semantic mismatch and collapse.
As one major contribution, we theoretically show that $ITA$ can certify better
text-image consistency while $L_r$ regularizing the semantics of generated
images would avoid semantic collapse and enhance image quality. Extensive
experiments validate that SADA enhances text-image consistency and improves
image quality significantly in T2Isyn models across various backbones.
Especially, incorporating SADA during the tuning process of Stable Diffusion
models also yields performance improvements.
|
[
{
"created": "Wed, 13 Dec 2023 07:57:40 GMT",
"version": "v1"
}
] |
2023-12-14
|
[
[
"Tan",
"Zhaorui",
""
],
[
"Yang",
"Xi",
""
],
[
"Huang",
"Kaizhu",
""
]
] |
Data augmentation has been recently leveraged as an effective regularizer in various vision-language deep neural networks. However, in text-to-image synthesis (T2Isyn), current augmentation wisdom still suffers from the semantic mismatch between augmented paired data. Even worse, semantic collapse may occur when generated images are less semantically constrained. In this paper, we develop a novel Semantic-aware Data Augmentation (SADA) framework dedicated to T2Isyn. In particular, we propose to augment texts in the semantic space via an Implicit Textual Semantic Preserving Augmentation ($ITA$), in conjunction with a specifically designed Image Semantic Regularization Loss ($L_r$) as Generated Image Semantic Conservation, to cope well with semantic mismatch and collapse. As one major contribution, we theoretically show that $ITA$ can certify better text-image consistency while $L_r$ regularizing the semantics of generated images would avoid semantic collapse and enhance image quality. Extensive experiments validate that SADA enhances text-image consistency and improves image quality significantly in T2Isyn models across various backbones. Especially, incorporating SADA during the tuning process of Stable Diffusion models also yields performance improvements.
|
2104.14970
|
Luis Sa-Couto
|
Luis Sa-Couto and Andreas Wichert
|
Using brain inspired principles to unsupervisedly learn good
representations for visual pattern recognition
| null | null | null | null |
cs.CV cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Although deep learning has solved difficult problems in visual pattern
recognition, it is mostly successful in tasks where there are lots of labeled
training data available. Furthermore, the global back-propagation based
training rule and the amount of employed layers represents a departure from
biological inspiration. The brain is able to perform most of these tasks in a
very general way from limited to no labeled data. For these reasons it is still
a key research question to look into computational principles in the brain that
can help guide models to unsupervisedly learn good representations which can
then be used to perform tasks like classification. In this work we explore some
of these principles to generate such representations for the MNIST data set. We
compare the obtained results with similar recent works and verify extremely
competitive results.
|
[
{
"created": "Fri, 30 Apr 2021 13:08:14 GMT",
"version": "v1"
}
] |
2021-05-03
|
[
[
"Sa-Couto",
"Luis",
""
],
[
"Wichert",
"Andreas",
""
]
] |
Although deep learning has solved difficult problems in visual pattern recognition, it is mostly successful in tasks where there are lots of labeled training data available. Furthermore, the global back-propagation based training rule and the amount of employed layers represents a departure from biological inspiration. The brain is able to perform most of these tasks in a very general way from limited to no labeled data. For these reasons it is still a key research question to look into computational principles in the brain that can help guide models to unsupervisedly learn good representations which can then be used to perform tasks like classification. In this work we explore some of these principles to generate such representations for the MNIST data set. We compare the obtained results with similar recent works and verify extremely competitive results.
|
1310.6311
|
Elian Carsenat
|
Elian Carsenat
|
Onomastics and Big Data Mining
|
ParisTech Review, October 2013
| null | null | null |
cs.CY
|
http://creativecommons.org/licenses/by/3.0/
|
As of today, the main business application of onomastics is naming, or
branding: finding the proper name for your company or your product to stand out
in the world. Meaningfully, Onoma, the Greek root for name, is also a
registered trademark of Nomen, the naming agency founded by Marcel Botton in
1981. Nomen initially licensed one of Roland Moreno's inventions, the Radoteur
name generator, and created many distinctive and global brand names such as:
Vinci, Clio or Amundi. But once your business has a name, should you forget
about onomastics? Not anymore. Globalization, digitalization and the Big Data
open new fields to experiment disruptive applications in Sales and Marketing,
Communication, HR and Risk Management. Though discriminating names carries a
high risk of abuse, it can also drive new, unexpected ways for developing poor
areas.
|
[
{
"created": "Sun, 20 Oct 2013 10:53:39 GMT",
"version": "v1"
}
] |
2013-10-24
|
[
[
"Carsenat",
"Elian",
""
]
] |
As of today, the main business application of onomastics is naming, or branding: finding the proper name for your company or your product to stand out in the world. Meaningfully, Onoma, the Greek root for name, is also a registered trademark of Nomen, the naming agency founded by Marcel Botton in 1981. Nomen initially licensed one of Roland Moreno's inventions, the Radoteur name generator, and created many distinctive and global brand names such as: Vinci, Clio or Amundi. But once your business has a name, should you forget about onomastics? Not anymore. Globalization, digitalization and the Big Data open new fields to experiment disruptive applications in Sales and Marketing, Communication, HR and Risk Management. Though discriminating names carries a high risk of abuse, it can also drive new, unexpected ways for developing poor areas.
|
1903.05697
|
Sanjay Thakur
|
Sanjay Thakur, Herke van Hoof, Juan Camilo Gamboa Higuera, Doina
Precup, David Meger
|
Uncertainty Aware Learning from Demonstrations in Multiple Contexts
using Bayesian Neural Networks
|
Copyright 20XX IEEE. Personal use of this material is permitted.
Permission from IEEE must be obtained for all other uses, in any current or
future media, including reprinting/republishing this material for advertising
or promotional purposes, creating new collective works, for resale or
redistribution to servers or lists, or reuse of any copyrighted component of
this work in other works
| null | null | null |
cs.RO cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Diversity of environments is a key challenge that causes learned robotic
controllers to fail due to the discrepancies between the training and
evaluation conditions. Training from demonstrations in various conditions can
mitigate---but not completely prevent---such failures. Learned controllers such
as neural networks typically do not have a notion of uncertainty that allows to
diagnose an offset between training and testing conditions, and potentially
intervene. In this work, we propose to use Bayesian Neural Networks, which have
such a notion of uncertainty. We show that uncertainty can be leveraged to
consistently detect situations in high-dimensional simulated and real robotic
domains in which the performance of the learned controller would be sub-par.
Also, we show that such an uncertainty based solution allows making an informed
decision about when to invoke a fallback strategy. One fallback strategy is to
request more data. We empirically show that providing data only when requested
results in increased data-efficiency.
|
[
{
"created": "Wed, 13 Mar 2019 19:47:36 GMT",
"version": "v1"
}
] |
2019-03-15
|
[
[
"Thakur",
"Sanjay",
""
],
[
"van Hoof",
"Herke",
""
],
[
"Higuera",
"Juan Camilo Gamboa",
""
],
[
"Precup",
"Doina",
""
],
[
"Meger",
"David",
""
]
] |
Diversity of environments is a key challenge that causes learned robotic controllers to fail due to the discrepancies between the training and evaluation conditions. Training from demonstrations in various conditions can mitigate---but not completely prevent---such failures. Learned controllers such as neural networks typically do not have a notion of uncertainty that allows to diagnose an offset between training and testing conditions, and potentially intervene. In this work, we propose to use Bayesian Neural Networks, which have such a notion of uncertainty. We show that uncertainty can be leveraged to consistently detect situations in high-dimensional simulated and real robotic domains in which the performance of the learned controller would be sub-par. Also, we show that such an uncertainty based solution allows making an informed decision about when to invoke a fallback strategy. One fallback strategy is to request more data. We empirically show that providing data only when requested results in increased data-efficiency.
|
2011.05459
|
Juntao Tan
|
Juntao Tan, Changkyu Song, Abdeslam Boularias
|
A Self-supervised Learning System for Object Detection in Videos Using
Random Walks on Graphs
|
2021 IEEE International Conference on Robotics and Automation (ICRA
2021)
| null | null | null |
cs.CV cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a new self-supervised system for learning to detect novel
and previously unseen categories of objects in images. The proposed system
receives as input several unlabeled videos of scenes containing various
objects. The frames of the videos are segmented into objects using depth
information, and the segments are tracked along each video. The system then
constructs a weighted graph that connects sequences based on the similarities
between the objects that they contain. The similarity between two sequences of
objects is measured by using generic visual features, after automatically
re-arranging the frames in the two sequences to align the viewpoints of the
objects. The graph is used to sample triplets of similar and dissimilar
examples by performing random walks. The triplet examples are finally used to
train a siamese neural network that projects the generic visual features into a
low-dimensional manifold. Experiments on three public datasets, YCB-Video,
CORe50 and RGBD-Object, show that the projected low-dimensional features
improve the accuracy of clustering unknown objects into novel categories, and
outperform several recent unsupervised clustering techniques.
|
[
{
"created": "Tue, 10 Nov 2020 23:37:40 GMT",
"version": "v1"
},
{
"created": "Tue, 6 Apr 2021 21:36:59 GMT",
"version": "v2"
},
{
"created": "Tue, 24 Aug 2021 07:26:19 GMT",
"version": "v3"
}
] |
2021-08-25
|
[
[
"Tan",
"Juntao",
""
],
[
"Song",
"Changkyu",
""
],
[
"Boularias",
"Abdeslam",
""
]
] |
This paper presents a new self-supervised system for learning to detect novel and previously unseen categories of objects in images. The proposed system receives as input several unlabeled videos of scenes containing various objects. The frames of the videos are segmented into objects using depth information, and the segments are tracked along each video. The system then constructs a weighted graph that connects sequences based on the similarities between the objects that they contain. The similarity between two sequences of objects is measured by using generic visual features, after automatically re-arranging the frames in the two sequences to align the viewpoints of the objects. The graph is used to sample triplets of similar and dissimilar examples by performing random walks. The triplet examples are finally used to train a siamese neural network that projects the generic visual features into a low-dimensional manifold. Experiments on three public datasets, YCB-Video, CORe50 and RGBD-Object, show that the projected low-dimensional features improve the accuracy of clustering unknown objects into novel categories, and outperform several recent unsupervised clustering techniques.
|
2201.11209
|
Suya Wu
|
Mohammadreza Soltani, Suya Wu, Yuerong Li, Jie Ding, Vahid Tarokh
|
On The Energy Statistics of Feature Maps in Pruning of Neural Networks
with Skip-Connections
| null | null | null | null |
cs.LG eess.IV
|
http://creativecommons.org/licenses/by/4.0/
|
We propose a new structured pruning framework for compressing Deep Neural
Networks (DNNs) with skip connections, based on measuring the statistical
dependency of hidden layers and predicted outputs. The dependence measure
defined by the energy statistics of hidden layers serves as a model-free
measure of information between the feature maps and the output of the network.
The estimated dependence measure is subsequently used to prune a collection of
redundant and uninformative layers. Model-freeness of our measure guarantees
that no parametric assumptions on the feature map distribution are required,
making it computationally appealing for very high dimensional feature space in
DNNs. Extensive numerical experiments on various architectures show the
efficacy of the proposed pruning approach with competitive performance to
state-of-the-art methods.
|
[
{
"created": "Wed, 26 Jan 2022 22:20:51 GMT",
"version": "v1"
}
] |
2022-01-28
|
[
[
"Soltani",
"Mohammadreza",
""
],
[
"Wu",
"Suya",
""
],
[
"Li",
"Yuerong",
""
],
[
"Ding",
"Jie",
""
],
[
"Tarokh",
"Vahid",
""
]
] |
We propose a new structured pruning framework for compressing Deep Neural Networks (DNNs) with skip connections, based on measuring the statistical dependency of hidden layers and predicted outputs. The dependence measure defined by the energy statistics of hidden layers serves as a model-free measure of information between the feature maps and the output of the network. The estimated dependence measure is subsequently used to prune a collection of redundant and uninformative layers. Model-freeness of our measure guarantees that no parametric assumptions on the feature map distribution are required, making it computationally appealing for very high dimensional feature space in DNNs. Extensive numerical experiments on various architectures show the efficacy of the proposed pruning approach with competitive performance to state-of-the-art methods.
|
2104.06401
|
Triantafyllos Afouras
|
Triantafyllos Afouras, Yuki M. Asano, Francois Fagan, Andrea Vedaldi,
Florian Metze
|
Self-supervised object detection from audio-visual correspondence
|
Accepted to CVPR 2022
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We tackle the problem of learning object detectors without supervision.
Differently from weakly-supervised object detection, we do not assume
image-level class labels. Instead, we extract a supervisory signal from
audio-visual data, using the audio component to "teach" the object detector.
While this problem is related to sound source localisation, it is considerably
harder because the detector must classify the objects by type, enumerate each
instance of the object, and do so even when the object is silent. We tackle
this problem by first designing a self-supervised framework with a contrastive
objective that jointly learns to classify and localise objects. Then, without
using any supervision, we simply use these self-supervised labels and boxes to
train an image-based object detector. With this, we outperform previous
unsupervised and weakly-supervised detectors for the task of object detection
and sound source localization. We also show that we can align this detector to
ground-truth classes with as little as one label per pseudo-class, and show how
our method can learn to detect generic objects that go beyond instruments, such
as airplanes and cats.
|
[
{
"created": "Tue, 13 Apr 2021 17:59:03 GMT",
"version": "v1"
},
{
"created": "Sat, 9 Jul 2022 18:20:19 GMT",
"version": "v2"
}
] |
2022-07-12
|
[
[
"Afouras",
"Triantafyllos",
""
],
[
"Asano",
"Yuki M.",
""
],
[
"Fagan",
"Francois",
""
],
[
"Vedaldi",
"Andrea",
""
],
[
"Metze",
"Florian",
""
]
] |
We tackle the problem of learning object detectors without supervision. Differently from weakly-supervised object detection, we do not assume image-level class labels. Instead, we extract a supervisory signal from audio-visual data, using the audio component to "teach" the object detector. While this problem is related to sound source localisation, it is considerably harder because the detector must classify the objects by type, enumerate each instance of the object, and do so even when the object is silent. We tackle this problem by first designing a self-supervised framework with a contrastive objective that jointly learns to classify and localise objects. Then, without using any supervision, we simply use these self-supervised labels and boxes to train an image-based object detector. With this, we outperform previous unsupervised and weakly-supervised detectors for the task of object detection and sound source localization. We also show that we can align this detector to ground-truth classes with as little as one label per pseudo-class, and show how our method can learn to detect generic objects that go beyond instruments, such as airplanes and cats.
|
2210.08061
|
Mahyar Najibi
|
Mahyar Najibi, Jingwei Ji, Yin Zhou, Charles R. Qi, Xinchen Yan, Scott
Ettinger, Dragomir Anguelov
|
Motion Inspired Unsupervised Perception and Prediction in Autonomous
Driving
|
ECCV 2022
| null | null | null |
cs.CV cs.LG cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Learning-based perception and prediction modules in modern autonomous driving
systems typically rely on expensive human annotation and are designed to
perceive only a handful of predefined object categories. This closed-set
paradigm is insufficient for the safety-critical autonomous driving task, where
the autonomous vehicle needs to process arbitrarily many types of traffic
participants and their motion behaviors in a highly dynamic world. To address
this difficulty, this paper pioneers a novel and challenging direction, i.e.,
training perception and prediction models to understand open-set moving
objects, with no human supervision. Our proposed framework uses self-learned
flow to trigger an automated meta labeling pipeline to achieve automatic
supervision. 3D detection experiments on the Waymo Open Dataset show that our
method significantly outperforms classical unsupervised approaches and is even
competitive to the counterpart with supervised scene flow. We further show that
our approach generates highly promising results in open-set 3D detection and
trajectory prediction, confirming its potential in closing the safety gap of
fully supervised systems.
|
[
{
"created": "Fri, 14 Oct 2022 18:55:44 GMT",
"version": "v1"
}
] |
2022-10-18
|
[
[
"Najibi",
"Mahyar",
""
],
[
"Ji",
"Jingwei",
""
],
[
"Zhou",
"Yin",
""
],
[
"Qi",
"Charles R.",
""
],
[
"Yan",
"Xinchen",
""
],
[
"Ettinger",
"Scott",
""
],
[
"Anguelov",
"Dragomir",
""
]
] |
Learning-based perception and prediction modules in modern autonomous driving systems typically rely on expensive human annotation and are designed to perceive only a handful of predefined object categories. This closed-set paradigm is insufficient for the safety-critical autonomous driving task, where the autonomous vehicle needs to process arbitrarily many types of traffic participants and their motion behaviors in a highly dynamic world. To address this difficulty, this paper pioneers a novel and challenging direction, i.e., training perception and prediction models to understand open-set moving objects, with no human supervision. Our proposed framework uses self-learned flow to trigger an automated meta labeling pipeline to achieve automatic supervision. 3D detection experiments on the Waymo Open Dataset show that our method significantly outperforms classical unsupervised approaches and is even competitive to the counterpart with supervised scene flow. We further show that our approach generates highly promising results in open-set 3D detection and trajectory prediction, confirming its potential in closing the safety gap of fully supervised systems.
|
1812.04741
|
Marija Slavkovik
|
Beishui Liao, Pere Pardo, Marija Slavkovik, Leendert van der Torre
|
The Jiminy Advisor: Moral Agreements Among Stakeholders Based on Norms
and Argumentation
|
Accepted for publication with JAIR
|
Journal of Artificial Intelligence Research 77: 737 - 792 (2023)
|
10.1613/jair.1.14368
| null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An autonomous system is constructed by a manufacturer, operates in a society
subject to norms and laws, and interacts with end users. All of these actors
are stakeholders affected by the behavior of the autonomous system. We address
the challenge of how the ethical views of such stakeholders can be integrated
in the behavior of an autonomous system. We propose an ethical recommendation
component called Jiminy which uses techniques from normative systems and formal
argumentation to reach moral agreements among stakeholders. A Jiminy represents
the ethical views of each stakeholder by using normative systems, and has three
ways of resolving moral dilemmas that involve the opinions of the stakeholders.
First, the Jiminy considers how the arguments of the stakeholders relate to one
another, which may already resolve the dilemma. Secondly, the Jiminy combines
the normative systems of the stakeholders such that the combined expertise of
the stakeholders may resolve the dilemma. Thirdly, and only if these two other
methods have failed, the Jiminy uses context-sensitive rules to decide which of
the stakeholders take preference over the others. At the abstract level, these
three methods are characterized by adding arguments, adding attacks between
arguments, and revising attacks between arguments. We show how a Jiminy can be
used not only for ethical reasoning and collaborative decision-making, but also
to provide explanations about ethical behavior.
|
[
{
"created": "Tue, 11 Dec 2018 23:16:16 GMT",
"version": "v1"
},
{
"created": "Wed, 6 Mar 2019 15:23:15 GMT",
"version": "v2"
},
{
"created": "Thu, 13 Jan 2022 13:16:01 GMT",
"version": "v3"
},
{
"created": "Fri, 28 Apr 2023 10:17:14 GMT",
"version": "v4"
}
] |
2023-07-17
|
[
[
"Liao",
"Beishui",
""
],
[
"Pardo",
"Pere",
""
],
[
"Slavkovik",
"Marija",
""
],
[
"van der Torre",
"Leendert",
""
]
] |
An autonomous system is constructed by a manufacturer, operates in a society subject to norms and laws, and interacts with end users. All of these actors are stakeholders affected by the behavior of the autonomous system. We address the challenge of how the ethical views of such stakeholders can be integrated in the behavior of an autonomous system. We propose an ethical recommendation component called Jiminy which uses techniques from normative systems and formal argumentation to reach moral agreements among stakeholders. A Jiminy represents the ethical views of each stakeholder by using normative systems, and has three ways of resolving moral dilemmas that involve the opinions of the stakeholders. First, the Jiminy considers how the arguments of the stakeholders relate to one another, which may already resolve the dilemma. Secondly, the Jiminy combines the normative systems of the stakeholders such that the combined expertise of the stakeholders may resolve the dilemma. Thirdly, and only if these two other methods have failed, the Jiminy uses context-sensitive rules to decide which of the stakeholders take preference over the others. At the abstract level, these three methods are characterized by adding arguments, adding attacks between arguments, and revising attacks between arguments. We show how a Jiminy can be used not only for ethical reasoning and collaborative decision-making, but also to provide explanations about ethical behavior.
|
1006.2812
|
Jenny Blight
|
Arup Abhinna Acharya and Sisir Kumar Jena
|
Component Interaction Graph: A new approach to test component
composition
|
Submitted to Journal of Computer Science and Engineering, see
http://sites.google.com/site/jcseuk/volume-1-issue-1-may-2010
|
Journal of Computer Science and Engineering, Volume 1, Issue 1,
p64-67, May 2010
| null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The key factor of component based software development is component
composition technology. A Component interaction graph is used to describe the
interrelation of components. Drawing a complete component interaction graph
(CIG) provides an objective basis and technical means for making the testing
outline. Although many researches have focused on this subject, the quality of
system that is composed of components has not been guaranteed. In this paper, a
CIG is constructed from a state chart diagram and new test cases are generated
to test the component composition.
|
[
{
"created": "Mon, 14 Jun 2010 19:33:59 GMT",
"version": "v1"
}
] |
2010-06-15
|
[
[
"Acharya",
"Arup Abhinna",
""
],
[
"Jena",
"Sisir Kumar",
""
]
] |
The key factor of component based software development is component composition technology. A Component interaction graph is used to describe the interrelation of components. Drawing a complete component interaction graph (CIG) provides an objective basis and technical means for making the testing outline. Although many researches have focused on this subject, the quality of system that is composed of components has not been guaranteed. In this paper, a CIG is constructed from a state chart diagram and new test cases are generated to test the component composition.
|
1702.03288
|
Christopher Banks
|
Christopher J. Banks and Ian Stark
|
A More Sensitive Context
| null | null | null | null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Logic of Behaviour in Context (LBC) is a spatio-temporal logic for expressing
properties of continuous-state processes, such as biochemical reaction
networks. LBC builds on the existing Metric Interval Temporal Logic (MITL) and
adds a "context modality" that explores the behaviour of a system when composed
with an external process. LBC models are terms of the Continuous {\pi}-Calculus
(c{\pi}), a process algebra with continuous state space. Our previously
published LBC model-checking technique required examining many points along the
behavioural trajectory of a process; and potentially computing further
trajectories branching off at every such point. This raised two difficulties:
mixing temporal and spatial modalities could require computing a large number
of trajectories, with costly numerical solution of differential equations; and
might still fail to check intermediate values between discrete points on those
trajectories. In this paper we make progress against both of these problems
using techniques from signal temporal logic and from sensitivity analysis.
Boolean signals aggressively compress trace information, allowing more
efficient computation; and sensitivity analysis lets us reliably check formulae
over a region by calculating a smaller number of sample trajectories.
|
[
{
"created": "Fri, 10 Feb 2017 18:59:20 GMT",
"version": "v1"
}
] |
2017-02-13
|
[
[
"Banks",
"Christopher J.",
""
],
[
"Stark",
"Ian",
""
]
] |
Logic of Behaviour in Context (LBC) is a spatio-temporal logic for expressing properties of continuous-state processes, such as biochemical reaction networks. LBC builds on the existing Metric Interval Temporal Logic (MITL) and adds a "context modality" that explores the behaviour of a system when composed with an external process. LBC models are terms of the Continuous {\pi}-Calculus (c{\pi}), a process algebra with continuous state space. Our previously published LBC model-checking technique required examining many points along the behavioural trajectory of a process; and potentially computing further trajectories branching off at every such point. This raised two difficulties: mixing temporal and spatial modalities could require computing a large number of trajectories, with costly numerical solution of differential equations; and might still fail to check intermediate values between discrete points on those trajectories. In this paper we make progress against both of these problems using techniques from signal temporal logic and from sensitivity analysis. Boolean signals aggressively compress trace information, allowing more efficient computation; and sensitivity analysis lets us reliably check formulae over a region by calculating a smaller number of sample trajectories.
|
2309.10375
|
Ryosuke Oshima
|
Ryosuke Oshima, Seitaro Shinagawa, Hideki Tsunashima, Qi Feng, Shigeo
Morishima
|
Pointing out Human Answer Mistakes in a Goal-Oriented Visual Dialogue
|
Accepted at ICCVW 2023
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Effective communication between humans and intelligent agents has promising
applications for solving complex problems. One such approach is visual
dialogue, which leverages multimodal context to assist humans. However,
real-world scenarios occasionally involve human mistakes, which can cause
intelligent agents to fail. While most prior research assumes perfect answers
from human interlocutors, we focus on a setting where the agent points out
unintentional mistakes for the interlocutor to review, better reflecting
real-world situations. In this paper, we show that human answer mistakes depend
on question type and QA turn in the visual dialogue by analyzing a previously
unused data collection of human mistakes. We demonstrate the effectiveness of
those factors for the model's accuracy in a pointing-human-mistake task through
experiments using a simple MLP model and a Visual Language Model.
|
[
{
"created": "Tue, 19 Sep 2023 07:22:05 GMT",
"version": "v1"
}
] |
2023-09-20
|
[
[
"Oshima",
"Ryosuke",
""
],
[
"Shinagawa",
"Seitaro",
""
],
[
"Tsunashima",
"Hideki",
""
],
[
"Feng",
"Qi",
""
],
[
"Morishima",
"Shigeo",
""
]
] |
Effective communication between humans and intelligent agents has promising applications for solving complex problems. One such approach is visual dialogue, which leverages multimodal context to assist humans. However, real-world scenarios occasionally involve human mistakes, which can cause intelligent agents to fail. While most prior research assumes perfect answers from human interlocutors, we focus on a setting where the agent points out unintentional mistakes for the interlocutor to review, better reflecting real-world situations. In this paper, we show that human answer mistakes depend on question type and QA turn in the visual dialogue by analyzing a previously unused data collection of human mistakes. We demonstrate the effectiveness of those factors for the model's accuracy in a pointing-human-mistake task through experiments using a simple MLP model and a Visual Language Model.
|
2011.13132
|
Xing Yan
|
Xiangqian Sun, Xing Yan, Qi Wu
|
Generative Learning of Heterogeneous Tail Dependence
|
Major technical flaws in theoretical aspects
| null | null | null |
cs.LG q-fin.RM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a multivariate generative model to capture the complex dependence
structure often encountered in business and financial data. Our model features
heterogeneous and asymmetric tail dependence between all pairs of individual
dimensions while also allowing heterogeneity and asymmetry in the tails of the
marginals. A significant merit of our model structure is that it is not prone
to error propagation in the parameter estimation process, hence very scalable,
as the dimensions of datasets grow large. However, the likelihood methods are
infeasible for parameter estimation in our case due to the lack of a
closed-form density function. Instead, we devise a novel moment learning
algorithm to learn the parameters. To demonstrate the effectiveness of the
model and its estimator, we test them on simulated as well as real-world
datasets. Results show that this framework gives better finite-sample
performance compared to the copula-based benchmarks as well as recent similar
models.
|
[
{
"created": "Thu, 26 Nov 2020 05:34:31 GMT",
"version": "v1"
},
{
"created": "Mon, 13 Nov 2023 01:52:24 GMT",
"version": "v2"
}
] |
2023-11-14
|
[
[
"Sun",
"Xiangqian",
""
],
[
"Yan",
"Xing",
""
],
[
"Wu",
"Qi",
""
]
] |
We propose a multivariate generative model to capture the complex dependence structure often encountered in business and financial data. Our model features heterogeneous and asymmetric tail dependence between all pairs of individual dimensions while also allowing heterogeneity and asymmetry in the tails of the marginals. A significant merit of our model structure is that it is not prone to error propagation in the parameter estimation process, hence very scalable, as the dimensions of datasets grow large. However, the likelihood methods are infeasible for parameter estimation in our case due to the lack of a closed-form density function. Instead, we devise a novel moment learning algorithm to learn the parameters. To demonstrate the effectiveness of the model and its estimator, we test them on simulated as well as real-world datasets. Results show that this framework gives better finite-sample performance compared to the copula-based benchmarks as well as recent similar models.
|
1610.04345
|
Julien Perez
|
Fei Liu and Julien Perez and Scott Nowson
|
A Language-independent and Compositional Model for Personality Trait
Recognition from Short Texts
|
10 pages, 2 figures, 2 tables
| null | null | null |
cs.CL stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many methods have been used to recognize author personality traits from text,
typically combining linguistic feature engineering with shallow learning
models, e.g. linear regression or Support Vector Machines. This work uses
deep-learning-based models and atomic features of text, the characters, to
build hierarchical, vectorial word and sentence representations for trait
inference. This method, applied to a corpus of tweets, shows state-of-the-art
performance across five traits and three languages (English, Spanish and
Italian) compared with prior work in author profiling. The results, supported
by preliminary visualisation work, are encouraging for the ability to detect
complex human traits.
|
[
{
"created": "Fri, 14 Oct 2016 07:14:44 GMT",
"version": "v1"
}
] |
2016-10-17
|
[
[
"Liu",
"Fei",
""
],
[
"Perez",
"Julien",
""
],
[
"Nowson",
"Scott",
""
]
] |
Many methods have been used to recognize author personality traits from text, typically combining linguistic feature engineering with shallow learning models, e.g. linear regression or Support Vector Machines. This work uses deep-learning-based models and atomic features of text, the characters, to build hierarchical, vectorial word and sentence representations for trait inference. This method, applied to a corpus of tweets, shows state-of-the-art performance across five traits and three languages (English, Spanish and Italian) compared with prior work in author profiling. The results, supported by preliminary visualisation work, are encouraging for the ability to detect complex human traits.
|
0906.3919
|
Serguei Mokhov
|
Serguei A. Mokhov and Joey Paquet
|
A Type System Theory for Higher-Order Intensional Logic Support for
Variable Bindings in Hybrid Intensional-Imperative Programs in GIPSY
|
12 pages, 1 table, 2 figures
| null | null | null |
cs.LO cs.PL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We describe a type system for a platform called the General Intensional
Programming System (GIPSY), designed to support intensional programming
languages built upon intensional logic and their imperative counterparts for
the intensional execution model. In GIPSY, the type system glues the static and
dynamic typing between intensional and imperative languages in its compiler and
run-time environments to support the intensional evaluation of expressions
written in various dialects of the intensional programming language Lucid. The
intensionality makes expressions explicitly take into account a
multidimensional context of evaluation, with the context being a first-class
value that serves a number of applications that need the notion of context to
proceed. We describe and discuss the properties of such a type system and the
related type theory as well as particularities of the semantics, design and
implementation of the GIPSY type system.
|
[
{
"created": "Mon, 22 Jun 2009 05:27:49 GMT",
"version": "v1"
}
] |
2009-12-21
|
[
[
"Mokhov",
"Serguei A.",
""
],
[
"Paquet",
"Joey",
""
]
] |
We describe a type system for a platform called the General Intensional Programming System (GIPSY), designed to support intensional programming languages built upon intensional logic and their imperative counterparts for the intensional execution model. In GIPSY, the type system glues the static and dynamic typing between intensional and imperative languages in its compiler and run-time environments to support the intensional evaluation of expressions written in various dialects of the intensional programming language Lucid. The intensionality makes expressions explicitly take into account a multidimensional context of evaluation, with the context being a first-class value that serves a number of applications that need the notion of context to proceed. We describe and discuss the properties of such a type system and the related type theory as well as particularities of the semantics, design and implementation of the GIPSY type system.
|
2401.11316
|
Nadav Benedek
|
Nadav Benedek, Lior Wolf
|
PRILoRA: Pruned and Rank-Increasing Low-Rank Adaptation
|
EACL 2024
| null | null | null |
cs.CL cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
With the proliferation of large pre-trained language models (PLMs),
fine-tuning all model parameters becomes increasingly inefficient, particularly
when dealing with numerous downstream tasks that entail substantial training
and storage costs. Several approaches aimed at achieving parameter-efficient
fine-tuning (PEFT) have been proposed. Among them, Low-Rank Adaptation (LoRA)
stands out as an archetypal method, incorporating trainable rank decomposition
matrices into each target module. Nevertheless, LoRA does not consider the
varying importance of each layer. To address these challenges, we introduce
PRILoRA, which linearly allocates a different rank for each layer, in an
increasing manner, and performs pruning throughout the training process,
considering both the temporary magnitude of weights and the accumulated
statistics of the input to any given layer. We validate the effectiveness of
PRILoRA through extensive experiments on eight GLUE benchmarks, setting a new
state of the art.
|
[
{
"created": "Sat, 20 Jan 2024 20:25:17 GMT",
"version": "v1"
}
] |
2024-01-23
|
[
[
"Benedek",
"Nadav",
""
],
[
"Wolf",
"Lior",
""
]
] |
With the proliferation of large pre-trained language models (PLMs), fine-tuning all model parameters becomes increasingly inefficient, particularly when dealing with numerous downstream tasks that entail substantial training and storage costs. Several approaches aimed at achieving parameter-efficient fine-tuning (PEFT) have been proposed. Among them, Low-Rank Adaptation (LoRA) stands out as an archetypal method, incorporating trainable rank decomposition matrices into each target module. Nevertheless, LoRA does not consider the varying importance of each layer. To address these challenges, we introduce PRILoRA, which linearly allocates a different rank for each layer, in an increasing manner, and performs pruning throughout the training process, considering both the temporary magnitude of weights and the accumulated statistics of the input to any given layer. We validate the effectiveness of PRILoRA through extensive experiments on eight GLUE benchmarks, setting a new state of the art.
|
1705.03102
|
Albert Reuther PhD
|
Albert Reuther, Chansup Byun, William Arcand, David Bestor, Bill
Bergeron, Matthew Hubbell, Michael Jones, Peter Michaleas, Andrew Prout,
Antonio Rosa, Jeremy Kepner
|
Scalable System Scheduling for HPC and Big Data
|
34 pages, 7 figures
| null |
10.1016/j.jpdc.2017.06.009
| null |
cs.DC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the rapidly expanding field of parallel processing, job schedulers are the
"operating systems" of modern big data architectures and supercomputing
systems. Job schedulers allocate computing resources and control the execution
of processes on those resources. Historically, job schedulers were the domain
of supercomputers, and job schedulers were designed to run massive,
long-running computations over days and weeks. More recently, big data
workloads have created a need for a new class of computations consisting of
many short computations taking seconds or minutes that process enormous
quantities of data. For both supercomputers and big data systems, the
efficiency of the job scheduler represents a fundamental limit on the
efficiency of the system. Detailed measurement and modeling of the performance
of schedulers are critical for maximizing the performance of a large-scale
computing system. This paper presents a detailed feature analysis of 15
supercomputing and big data schedulers. For big data workloads, the scheduler
latency is the most important performance characteristic of the scheduler. A
theoretical model of the latency of these schedulers is developed and used to
design experiments targeted at measuring scheduler latency. Detailed
benchmarking of four of the most popular schedulers (Slurm, Son of Grid Engine,
Mesos, and Hadoop YARN) is conducted. The theoretical model is compared with
data and demonstrates that scheduler performance can be characterized by two
key parameters: the marginal latency of the scheduler $t_s$ and a nonlinear
exponent $\alpha_s$. For all four schedulers, the utilization of the computing
system decreases to < 10\% for computations lasting only a few seconds.
Multilevel schedulers that transparently aggregate short computations can
improve utilization for these short computations to > 90\% for all four of the
schedulers that were tested.
|
[
{
"created": "Mon, 8 May 2017 21:58:12 GMT",
"version": "v1"
}
] |
2018-03-06
|
[
[
"Reuther",
"Albert",
""
],
[
"Byun",
"Chansup",
""
],
[
"Arcand",
"William",
""
],
[
"Bestor",
"David",
""
],
[
"Bergeron",
"Bill",
""
],
[
"Hubbell",
"Matthew",
""
],
[
"Jones",
"Michael",
""
],
[
"Michaleas",
"Peter",
""
],
[
"Prout",
"Andrew",
""
],
[
"Rosa",
"Antonio",
""
],
[
"Kepner",
"Jeremy",
""
]
] |
In the rapidly expanding field of parallel processing, job schedulers are the "operating systems" of modern big data architectures and supercomputing systems. Job schedulers allocate computing resources and control the execution of processes on those resources. Historically, job schedulers were the domain of supercomputers, and job schedulers were designed to run massive, long-running computations over days and weeks. More recently, big data workloads have created a need for a new class of computations consisting of many short computations taking seconds or minutes that process enormous quantities of data. For both supercomputers and big data systems, the efficiency of the job scheduler represents a fundamental limit on the efficiency of the system. Detailed measurement and modeling of the performance of schedulers are critical for maximizing the performance of a large-scale computing system. This paper presents a detailed feature analysis of 15 supercomputing and big data schedulers. For big data workloads, the scheduler latency is the most important performance characteristic of the scheduler. A theoretical model of the latency of these schedulers is developed and used to design experiments targeted at measuring scheduler latency. Detailed benchmarking of four of the most popular schedulers (Slurm, Son of Grid Engine, Mesos, and Hadoop YARN) is conducted. The theoretical model is compared with data and demonstrates that scheduler performance can be characterized by two key parameters: the marginal latency of the scheduler $t_s$ and a nonlinear exponent $\alpha_s$. For all four schedulers, the utilization of the computing system decreases to < 10\% for computations lasting only a few seconds. Multilevel schedulers that transparently aggregate short computations can improve utilization for these short computations to > 90\% for all four of the schedulers that were tested.
|
2311.07041
|
Wei Chen
|
Weiran Jiang, Wei Chen, Bo Ai
|
Deep Joint Source Channel Coding With Attention Modules Over MIMO
Channels
| null | null | null | null |
cs.IT eess.SP math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we propose two deep joint source and channel coding (DJSCC)
structures with attention modules for the multi-input multi-output (MIMO)
channel, including a serial structure and a parallel structure. With singular
value decomposition (SVD)-based precoding scheme, the MIMO channel can be
decomposed into various sub-channels, and the feature outputs will experience
sub-channels with different channel qualities. In the serial structure, one
single network is used at both the transmitter and the receiver to jointly
process data streams of all MIMO subchannels, while data streams of different
MIMO subchannels are processed independently via multiple sub-networks in the
parallel structure. The attention modules in both serial and parallel
architectures enable the system to adapt to varying channel qualities and
adjust the quantity of information outputs in accordance with the channel
qualities. Experimental results demonstrate that the proposed DJSCC structures
have improved image transmission performance, and reveal the phenomenon via
non-parametric entropy estimation that the learned DJSCC transceivers tend to
transmit more information over better sub-channels.
|
[
{
"created": "Mon, 13 Nov 2023 02:53:30 GMT",
"version": "v1"
},
{
"created": "Tue, 12 Mar 2024 00:41:35 GMT",
"version": "v2"
},
{
"created": "Fri, 15 Mar 2024 09:15:14 GMT",
"version": "v3"
}
] |
2024-03-18
|
[
[
"Jiang",
"Weiran",
""
],
[
"Chen",
"Wei",
""
],
[
"Ai",
"Bo",
""
]
] |
In this paper, we propose two deep joint source and channel coding (DJSCC) structures with attention modules for the multi-input multi-output (MIMO) channel, including a serial structure and a parallel structure. With a singular value decomposition (SVD)-based precoding scheme, the MIMO channel can be decomposed into various sub-channels, and the feature outputs will experience sub-channels with different channel qualities. In the serial structure, one single network is used at both the transmitter and the receiver to jointly process data streams of all MIMO subchannels, while data streams of different MIMO subchannels are processed independently via multiple sub-networks in the parallel structure. The attention modules in both serial and parallel architectures enable the system to adapt to varying channel qualities and adjust the quantity of information outputs in accordance with the channel qualities. Experimental results demonstrate that the proposed DJSCC structures have improved image transmission performance, and reveal the phenomenon via non-parametric entropy estimation that the learned DJSCC transceivers tend to transmit more information over better sub-channels.
|
1212.6680
|
Henricus Bouwmeester
|
Henricus Bouwmeester, Andrew Dougherty, and Andrew V. Knyazev
|
Nonsymmetric multigrid preconditioning for conjugate gradient methods
|
7 pages
|
Procedia Computer Science, v. 51, pp. 276-285, 2015
|
10.1016/j.procs.2015.05.241
|
TR2013-027
|
cs.NA math.NA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We numerically analyze the possibility of turning off post-smoothing
(relaxation) in geometric multigrid when used as a preconditioner in conjugate
gradient linear and eigenvalue solvers for the 3D Laplacian. The geometric
Semicoarsening Multigrid (SMG) method is provided by the hypre parallel
software package. We solve linear systems using two variants (standard and
flexible) of the preconditioned conjugate gradient (PCG) and preconditioned
steepest descent (PSD) methods. The eigenvalue problems are solved using the
locally optimal block preconditioned conjugate gradient (LOBPCG) method
available in hypre through BLOPEX software. We observe that turning off the
post-smoothing in SMG dramatically slows down the standard PCG-SMG. For
flexible PCG and LOBPCG, our numerical results show that post-smoothing can be
avoided, resulting in overall acceleration, due to the high costs of smoothing
and relatively insignificant decrease in convergence speed. We numerically
demonstrate for linear systems that PSD-SMG and flexible PCG-SMG converge
similarly if SMG post-smoothing is off. We experimentally show that the effect
of acceleration is independent of memory interconnection. A theoretical
justification is provided.
|
[
{
"created": "Sun, 30 Dec 2012 01:15:51 GMT",
"version": "v1"
},
{
"created": "Fri, 26 Apr 2013 23:14:59 GMT",
"version": "v2"
},
{
"created": "Fri, 21 Jun 2013 18:45:39 GMT",
"version": "v3"
}
] |
2015-06-09
|
[
[
"Bouwmeester",
"Henricus",
""
],
[
"Dougherty",
"Andrew",
""
],
[
"Knyazev",
"Andrew V.",
""
]
] |
We numerically analyze the possibility of turning off post-smoothing (relaxation) in geometric multigrid when used as a preconditioner in conjugate gradient linear and eigenvalue solvers for the 3D Laplacian. The geometric Semicoarsening Multigrid (SMG) method is provided by the hypre parallel software package. We solve linear systems using two variants (standard and flexible) of the preconditioned conjugate gradient (PCG) and preconditioned steepest descent (PSD) methods. The eigenvalue problems are solved using the locally optimal block preconditioned conjugate gradient (LOBPCG) method available in hypre through BLOPEX software. We observe that turning off the post-smoothing in SMG dramatically slows down the standard PCG-SMG. For flexible PCG and LOBPCG, our numerical results show that post-smoothing can be avoided, resulting in overall acceleration, due to the high costs of smoothing and relatively insignificant decrease in convergence speed. We numerically demonstrate for linear systems that PSD-SMG and flexible PCG-SMG converge similarly if SMG post-smoothing is off. We experimentally show that the effect of acceleration is independent of memory interconnection. A theoretical justification is provided.
|
1904.02338
|
Maruan Al-Shedivat
|
Maruan Al-Shedivat and Ankur P. Parikh
|
Consistency by Agreement in Zero-shot Neural Machine Translation
|
NAACL 2019 (14 pages, 5 figures)
| null | null | null |
cs.LG cs.CL cs.NE stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Generalization and reliability of multilingual translation often highly
depend on the amount of available parallel data for each language pair of
interest. In this paper, we focus on zero-shot generalization---a challenging
setup that tests models on translation directions they have not been optimized
for at training time. To solve the problem, we (i) reformulate multilingual
translation as probabilistic inference, (ii) define the notion of zero-shot
consistency and show why standard training often results in models unsuitable
for zero-shot tasks, and (iii) introduce a consistent agreement-based training
method that encourages the model to produce equivalent translations of parallel
sentences in auxiliary languages. We test our multilingual NMT models on
multiple public zero-shot translation benchmarks (IWSLT17, UN corpus, Europarl)
and show that agreement-based learning often results in 2-3 BLEU zero-shot
improvement over strong baselines without any loss in performance on supervised
translation directions.
|
[
{
"created": "Thu, 4 Apr 2019 03:49:05 GMT",
"version": "v1"
},
{
"created": "Wed, 10 Apr 2019 04:00:03 GMT",
"version": "v2"
}
] |
2019-04-11
|
[
[
"Al-Shedivat",
"Maruan",
""
],
[
"Parikh",
"Ankur P.",
""
]
] |
Generalization and reliability of multilingual translation often highly depend on the amount of available parallel data for each language pair of interest. In this paper, we focus on zero-shot generalization---a challenging setup that tests models on translation directions they have not been optimized for at training time. To solve the problem, we (i) reformulate multilingual translation as probabilistic inference, (ii) define the notion of zero-shot consistency and show why standard training often results in models unsuitable for zero-shot tasks, and (iii) introduce a consistent agreement-based training method that encourages the model to produce equivalent translations of parallel sentences in auxiliary languages. We test our multilingual NMT models on multiple public zero-shot translation benchmarks (IWSLT17, UN corpus, Europarl) and show that agreement-based learning often results in 2-3 BLEU zero-shot improvement over strong baselines without any loss in performance on supervised translation directions.
|
2106.13764
|
Yasir Zaki
|
Moumena Chaqfeh, Muhammad Haseeb, Waleed Hashmi, Patrick Inshuti,
Manesha Ramesh, Matteo Varvello, Fareed Zaffar, Lakshmi Subramanian, Yasir
Zaki
|
To Block or Not to Block: Accelerating Mobile Web Pages On-The-Fly
Through JavaScript Classification
|
11 pages, 11 figures
| null | null | null |
cs.OH
|
http://creativecommons.org/licenses/by/4.0/
|
The increasing complexity of JavaScript in modern mobile web pages has become
a critical performance bottleneck for low-end mobile phone users, especially in
developing regions. In this paper, we propose SlimWeb, a novel approach that
automatically derives lightweight versions of mobile web pages on-the-fly by
eliminating the use of unnecessary JavaScript. SlimWeb consists of a JavaScript
classification service powered by a supervised Machine Learning (ML) model that
provides insights into each JavaScript element embedded in a web page. SlimWeb
aims to improve the web browsing experience by predicting the class of each
element, such that essential elements are preserved and non-essential elements
are blocked by the browsers using the service. We motivate the core design of
SlimWeb using a user preference survey of 306 users and perform a detailed
evaluation of SlimWeb across 500 popular web pages in a developing region on
real 3G and 4G cellular networks, along with a user experience study with 20
real-world users and a usage willingness survey of 588 users. Evaluation
results show that SlimWeb achieves a 50% reduction in the page load time
compared to the original pages, and more than 30% reduction compared to
competing solutions, while achieving high similarity scores to the original
pages measured via a qualitative evaluation study of 62 users. SlimWeb improves
the overall user experience by more than 60% compared to the original pages,
while maintaining 90%-100% of the visual and functional components of most
pages. Finally, the SlimWeb classifier achieves a median accuracy of 90% in
predicting the JavaScript category.
|
[
{
"created": "Sun, 20 Jun 2021 10:32:10 GMT",
"version": "v1"
}
] |
2021-06-28
|
[
[
"Chaqfeh",
"Moumena",
""
],
[
"Haseeb",
"Muhammad",
""
],
[
"Hashmi",
"Waleed",
""
],
[
"Inshuti",
"Patrick",
""
],
[
"Ramesh",
"Manesha",
""
],
[
"Varvello",
"Matteo",
""
],
[
"Zaffar",
"Fareed",
""
],
[
"Subramanian",
"Lakshmi",
""
],
[
"Zaki",
"Yasir",
""
]
] |
The increasing complexity of JavaScript in modern mobile web pages has become a critical performance bottleneck for low-end mobile phone users, especially in developing regions. In this paper, we propose SlimWeb, a novel approach that automatically derives lightweight versions of mobile web pages on-the-fly by eliminating the use of unnecessary JavaScript. SlimWeb consists of a JavaScript classification service powered by a supervised Machine Learning (ML) model that provides insights into each JavaScript element embedded in a web page. SlimWeb aims to improve the web browsing experience by predicting the class of each element, such that essential elements are preserved and non-essential elements are blocked by the browsers using the service. We motivate the core design of SlimWeb using a user preference survey of 306 users and perform a detailed evaluation of SlimWeb across 500 popular web pages in a developing region on real 3G and 4G cellular networks, along with a user experience study with 20 real-world users and a usage willingness survey of 588 users. Evaluation results show that SlimWeb achieves a 50% reduction in the page load time compared to the original pages, and more than 30% reduction compared to competing solutions, while achieving high similarity scores to the original pages measured via a qualitative evaluation study of 62 users. SlimWeb improves the overall user experience by more than 60% compared to the original pages, while maintaining 90%-100% of the visual and functional components of most pages. Finally, the SlimWeb classifier achieves a median accuracy of 90% in predicting the JavaScript category.
|
2001.00358
|
Moonyoung Lee
|
Moonyoung Lee, Yujin Heo, Saihim Cho, Hyunsub Park, Jun-Ho Oh
|
Motion Generation Interface of ROS to PODO Software Framework for
Wheeled Humanoid Robot
|
IEEE Int. Conf. on Advanced Robotics (ICAR), 2019
| null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper discusses the development of a robot motion generation interface
between a real-time software architecture and a non-real-time robot operating
system. In order for robots to execute intelligent manipulation or navigation,
close integration of high-level perception and low-level control is required.
However, many available open-source perception modules are developed in ROS,
which runs on Linux, an OS that does not guarantee real-time (RT) performance.
This can lead to non-deterministic responses and stability problems that can
adversely affect robot control. As a result, many robotic systems dedicate an
RTOS to low-level motion control. Similarly, Hubo, the humanoid robot platform
developed at KAIST, utilizes a custom real-time software framework called
PODO. Although PODO provides an easy interface for motion generation, it lacks
an interface to high-level frameworks such as ROS. As such, we present a new
motion generation interface between ROS and PODO that enables users to
generate motion trajectories through standard ROS messages while leveraging a
real-time motion controller. With the proposed communication interface, we
demonstrate a series of manipulator tasks on the actual wheeled humanoid
platform, M-Hubo. The overall communication interface responsiveness was at
most 27 milliseconds.
|
[
{
"created": "Thu, 2 Jan 2020 08:37:13 GMT",
"version": "v1"
}
] |
2020-01-03
|
[
[
"Lee",
"Moonyoung",
""
],
[
"Heo",
"Yujin",
""
],
[
"Cho",
"Saihim",
""
],
[
"Park",
"Hyunsub",
""
],
[
"Oh",
"Jun-Ho",
""
]
] |
This paper discusses the development of a robot motion generation interface between a real-time software architecture and a non-real-time robot operating system. In order for robots to execute intelligent manipulation or navigation, close integration of high-level perception and low-level control is required. However, many available open-source perception modules are developed in ROS, which runs on Linux, an OS that does not guarantee real-time (RT) performance. This can lead to non-deterministic responses and stability problems that can adversely affect robot control. As a result, many robotic systems dedicate an RTOS to low-level motion control. Similarly, Hubo, the humanoid robot platform developed at KAIST, utilizes a custom real-time software framework called PODO. Although PODO provides an easy interface for motion generation, it lacks an interface to high-level frameworks such as ROS. As such, we present a new motion generation interface between ROS and PODO that enables users to generate motion trajectories through standard ROS messages while leveraging a real-time motion controller. With the proposed communication interface, we demonstrate a series of manipulator tasks on the actual wheeled humanoid platform, M-Hubo. The overall communication interface responsiveness was at most 27 milliseconds.
|
1903.00782
|
Shehryar Khattak
|
Shehryar Khattak, Christos Papachristos, Kostas Alexis
|
Marker based Thermal-Inertial Localization for Aerial Robots in
Obscurant Filled Environments
|
10 pages, 5 figures, Published in International Symposium on Visual
Computing 2018
| null |
10.1007/978-3-030-03801-4_49
| null |
cs.RO cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
For robotic inspection tasks in known environments, fiducial markers provide a
reliable and low-cost solution for robot localization. However, detection of
such markers relies on the quality of RGB camera data, which degrades
significantly in the presence of visual obscurants such as fog and smoke. The
ability to navigate known environments in the presence of obscurants can be
critical for inspection tasks, especially in the aftermath of a disaster.
Addressing such a scenario, this work proposes a method for the design of
fiducial markers to be used with thermal cameras for the pose estimation of
aerial robots. Our low cost markers are designed to work in the long wave
infrared spectrum, which is not affected by the presence of obscurants, and can
be affixed to any object that has measurable temperature difference with
respect to its surroundings. Furthermore, the estimated pose from the fiducial
markers is fused with inertial measurements in an extended Kalman filter to
remove high frequency noise and error present in the fiducial pose estimates.
The proposed markers and the pose estimation method are experimentally
evaluated in an obscurant filled environment using an aerial robot carrying a
thermal camera.
|
[
{
"created": "Sat, 2 Mar 2019 22:33:02 GMT",
"version": "v1"
}
] |
2019-03-05
|
[
[
"Khattak",
"Shehryar",
""
],
[
"Papachristos",
"Christos",
""
],
[
"Alexis",
"Kostas",
""
]
] |
For robotic inspection tasks in known environments, fiducial markers provide a reliable and low-cost solution for robot localization. However, detection of such markers relies on the quality of RGB camera data, which degrades significantly in the presence of visual obscurants such as fog and smoke. The ability to navigate known environments in the presence of obscurants can be critical for inspection tasks, especially in the aftermath of a disaster. Addressing such a scenario, this work proposes a method for the design of fiducial markers to be used with thermal cameras for the pose estimation of aerial robots. Our low cost markers are designed to work in the long wave infrared spectrum, which is not affected by the presence of obscurants, and can be affixed to any object that has measurable temperature difference with respect to its surroundings. Furthermore, the estimated pose from the fiducial markers is fused with inertial measurements in an extended Kalman filter to remove high frequency noise and error present in the fiducial pose estimates. The proposed markers and the pose estimation method are experimentally evaluated in an obscurant filled environment using an aerial robot carrying a thermal camera.
|
2011.10812
|
Fan Lu
|
Fan Lu, Guang Chen, Yinlong Liu, Zhijun Li, Sanqing Qu, Tianpei Zou
|
MoNet: Motion-based Point Cloud Prediction Network
| null | null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Predicting the future, a key component in autonomous driving, can significantly
improve the safety of intelligent vehicles. 3D point clouds
accurately model 3D information of the surrounding environment and are crucial for
intelligent vehicles to perceive the scene. Therefore, prediction of 3D point
clouds has great significance for intelligent vehicles and can be utilized
in numerous further applications. However, because point clouds are unordered
and unstructured, point cloud prediction is challenging and has not been deeply
explored in the current literature. In this paper, we propose a novel motion-based
neural network named MoNet. The key idea of the proposed MoNet is to integrate
motion features between two consecutive point clouds into the prediction
pipeline. The introduction of motion features enables the model to more
accurately capture the variations of motion information across frames and thus
make better predictions for future motion. In addition, content features are
introduced to model the spatial content of individual point clouds. A recurrent
neural network named MotionRNN is proposed to capture the temporal correlations
of both features. Besides, we propose an attention-based motion align module to
address the problem of missing motion features in the inference pipeline.
Extensive experiments on two large scale outdoor LiDAR datasets demonstrate the
performance of the proposed MoNet. Moreover, we perform experiments on
applications using the predicted point clouds and the results indicate the
great application potential of the proposed method.
|
[
{
"created": "Sat, 21 Nov 2020 15:43:31 GMT",
"version": "v1"
}
] |
2020-11-24
|
[
[
"Lu",
"Fan",
""
],
[
"Chen",
"Guang",
""
],
[
"Liu",
"Yinlong",
""
],
[
"Li",
"Zhijun",
""
],
[
"Qu",
"Sanqing",
""
],
[
"Zou",
"Tianpei",
""
]
] |
Predicting the future, a key component in autonomous driving, can significantly improve the safety of intelligent vehicles. 3D point clouds accurately model 3D information of the surrounding environment and are crucial for intelligent vehicles to perceive the scene. Therefore, prediction of 3D point clouds has great significance for intelligent vehicles and can be utilized in numerous further applications. However, because point clouds are unordered and unstructured, point cloud prediction is challenging and has not been deeply explored in the current literature. In this paper, we propose a novel motion-based neural network named MoNet. The key idea of the proposed MoNet is to integrate motion features between two consecutive point clouds into the prediction pipeline. The introduction of motion features enables the model to more accurately capture the variations of motion information across frames and thus make better predictions for future motion. In addition, content features are introduced to model the spatial content of individual point clouds. A recurrent neural network named MotionRNN is proposed to capture the temporal correlations of both features. Besides, we propose an attention-based motion align module to address the problem of missing motion features in the inference pipeline. Extensive experiments on two large scale outdoor LiDAR datasets demonstrate the performance of the proposed MoNet. Moreover, we perform experiments on applications using the predicted point clouds and the results indicate the great application potential of the proposed method.
|
1603.08642
|
Jennifer King
|
Jennifer E. King, Siddhartha S. Srinivasa
|
Rearrangement Planning via Heuristic Search
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a method to apply heuristic search algorithms to solve
rearrangement planning by pushing problems. In these problems, a robot must
push an object through clutter to achieve a goal. To do this, we exploit the
fact that contact with objects in the environment is critical to goal
achievement. We dynamically generate goal-directed primitives that create and
maintain contact between robot and object at each state expansion during the
search. These primitives focus exploration toward critical areas of
state-space, providing tractability to the high-dimensional planning problem.
We demonstrate that the use of these primitives, combined with an informative
yet simple to compute heuristic, improves success rate when compared to a
planner that uses only primitives formed from discretizing the robot's action
space. In addition, we show our planner outperforms RRT-based approaches by
producing shorter paths faster. We demonstrate our algorithm both in simulation
and on a 7-DOF arm pushing objects on a table.
|
[
{
"created": "Tue, 29 Mar 2016 05:03:16 GMT",
"version": "v1"
}
] |
2016-03-30
|
[
[
"King",
"Jennifer E.",
""
],
[
"Srinivasa",
"Siddhartha S.",
""
]
] |
We present a method to apply heuristic search algorithms to solve rearrangement planning by pushing problems. In these problems, a robot must push an object through clutter to achieve a goal. To do this, we exploit the fact that contact with objects in the environment is critical to goal achievement. We dynamically generate goal-directed primitives that create and maintain contact between robot and object at each state expansion during the search. These primitives focus exploration toward critical areas of state-space, providing tractability to the high-dimensional planning problem. We demonstrate that the use of these primitives, combined with an informative yet simple to compute heuristic, improves success rate when compared to a planner that uses only primitives formed from discretizing the robot's action space. In addition, we show our planner outperforms RRT-based approaches by producing shorter paths faster. We demonstrate our algorithm both in simulation and on a 7-DOF arm pushing objects on a table.
|
1810.03806
|
Chenxiao Zhao
|
Chenxiao Zhao, P. Thomas Fletcher, Mixue Yu, Yaxin Peng, Guixu Zhang
and Chaomin Shen
|
The Adversarial Attack and Detection under the Fisher Information Metric
|
Accepted as an AAAI-2019 oral paper
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many deep learning models are vulnerable to adversarial attacks, i.e.,
imperceptible but intentionally designed perturbations to the input can cause
incorrect outputs of the networks. In this paper, using information geometry, we
provide a reasonable explanation for the vulnerability of deep learning models.
By considering the data space as a non-linear space with the Fisher information
metric induced from a neural network, we first propose an adversarial attack
algorithm termed one-step spectral attack (OSSA). The method is described by a
constrained quadratic form of the Fisher information matrix, where the optimal
adversarial perturbation is given by the first eigenvector, and the model
vulnerability is reflected by the eigenvalues. The larger an eigenvalue is, the
more vulnerable the model is to attacks along the corresponding eigenvector.
Taking advantage of this property, we also propose an adversarial detection
method with the eigenvalues serving as characteristics. Both our attack and
detection algorithms are numerically optimized to work efficiently on large
datasets. Our evaluations show superior performance compared with other
methods, implying that the Fisher information is a promising approach to
investigate adversarial attacks and defenses.
|
[
{
"created": "Tue, 9 Oct 2018 04:25:05 GMT",
"version": "v1"
},
{
"created": "Sat, 9 Feb 2019 03:40:49 GMT",
"version": "v2"
}
] |
2019-02-12
|
[
[
"Zhao",
"Chenxiao",
""
],
[
"Fletcher",
"P. Thomas",
""
],
[
"Yu",
"Mixue",
""
],
[
"Peng",
"Yaxin",
""
],
[
"Zhang",
"Guixu",
""
],
[
"Shen",
"Chaomin",
""
]
] |
Many deep learning models are vulnerable to adversarial attacks, i.e., imperceptible but intentionally designed perturbations to the input can cause incorrect outputs of the networks. In this paper, using information geometry, we provide a reasonable explanation for the vulnerability of deep learning models. By considering the data space as a non-linear space with the Fisher information metric induced from a neural network, we first propose an adversarial attack algorithm termed one-step spectral attack (OSSA). The method is described by a constrained quadratic form of the Fisher information matrix, where the optimal adversarial perturbation is given by the first eigenvector, and the model vulnerability is reflected by the eigenvalues. The larger an eigenvalue is, the more vulnerable the model is to attacks along the corresponding eigenvector. Taking advantage of this property, we also propose an adversarial detection method with the eigenvalues serving as characteristics. Both our attack and detection algorithms are numerically optimized to work efficiently on large datasets. Our evaluations show superior performance compared with other methods, implying that the Fisher information is a promising approach to investigate adversarial attacks and defenses.
|
2404.11504
|
Ishay Haviv
|
Ishay Haviv and Michal Parnas
|
Testing Intersectingness of Uniform Families
|
20 pages
| null | null | null |
cs.DS
|
http://creativecommons.org/licenses/by/4.0/
|
A set family ${\cal F}$ is called intersecting if every two members of ${\cal
F}$ intersect, and it is called uniform if all members of ${\cal F}$ share a
common size. A uniform family ${\cal F} \subseteq \binom{[n]}{k}$ of
$k$-subsets of $[n]$ is $\varepsilon$-far from intersecting if one has to
remove more than $\varepsilon \cdot \binom{n}{k}$ of the sets of ${\cal F}$ to
make it intersecting. We study the property testing problem that, given query
access to a uniform family ${\cal F} \subseteq \binom{[n]}{k}$, asks to
distinguish between the case that ${\cal F}$ is intersecting and the case that
it is $\varepsilon$-far from intersecting. We prove that for every fixed
integer $r$, the problem admits a non-adaptive two-sided error tester with
query complexity $O(\frac{\ln n}{\varepsilon})$ for $\varepsilon \geq \Omega(
(\frac{k}{n})^r)$ and a non-adaptive one-sided error tester with query
complexity $O(\frac{\ln k}{\varepsilon})$ for $\varepsilon \geq \Omega(
(\frac{k^2}{n})^r)$. The query complexities are optimal up to the logarithmic
terms. For $\varepsilon \geq \Omega( (\frac{k^2}{n})^2)$, we further provide a
non-adaptive one-sided error tester with optimal query complexity of
$O(\frac{1}{\varepsilon})$. Our findings show that the query complexity of the
problem behaves differently from that of testing intersectingness of
non-uniform families, studied recently by Chen, De, Li, Nadimpalli, and
Servedio (ITCS, 2024).
|
[
{
"created": "Wed, 17 Apr 2024 15:59:25 GMT",
"version": "v1"
},
{
"created": "Thu, 18 Jul 2024 12:30:13 GMT",
"version": "v2"
}
] |
2024-07-19
|
[
[
"Haviv",
"Ishay",
""
],
[
"Parnas",
"Michal",
""
]
] |
A set family ${\cal F}$ is called intersecting if every two members of ${\cal F}$ intersect, and it is called uniform if all members of ${\cal F}$ share a common size. A uniform family ${\cal F} \subseteq \binom{[n]}{k}$ of $k$-subsets of $[n]$ is $\varepsilon$-far from intersecting if one has to remove more than $\varepsilon \cdot \binom{n}{k}$ of the sets of ${\cal F}$ to make it intersecting. We study the property testing problem that, given query access to a uniform family ${\cal F} \subseteq \binom{[n]}{k}$, asks to distinguish between the case that ${\cal F}$ is intersecting and the case that it is $\varepsilon$-far from intersecting. We prove that for every fixed integer $r$, the problem admits a non-adaptive two-sided error tester with query complexity $O(\frac{\ln n}{\varepsilon})$ for $\varepsilon \geq \Omega( (\frac{k}{n})^r)$ and a non-adaptive one-sided error tester with query complexity $O(\frac{\ln k}{\varepsilon})$ for $\varepsilon \geq \Omega( (\frac{k^2}{n})^r)$. The query complexities are optimal up to the logarithmic terms. For $\varepsilon \geq \Omega( (\frac{k^2}{n})^2)$, we further provide a non-adaptive one-sided error tester with optimal query complexity of $O(\frac{1}{\varepsilon})$. Our findings show that the query complexity of the problem behaves differently from that of testing intersectingness of non-uniform families, studied recently by Chen, De, Li, Nadimpalli, and Servedio (ITCS, 2024).
|
2005.11753
|
Tianhao Wang
|
Tianhao Wang, Joann Qiongna Chen, Zhikun Zhang, Dong Su, Yueqiang
Cheng, Zhou Li, Ninghui Li, Somesh Jha
|
Continuous Release of Data Streams under both Centralized and Local
Differential Privacy
| null | null | null | null |
cs.CR cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we study the problem of publishing a stream of real-valued
data satisfying differential privacy (DP). One major challenge is that the
maximal possible value can be quite large; thus it is necessary to estimate a
threshold so that numbers above it are truncated to reduce the amount of noise
that is required for all the data. The estimation must be done based on the data
in a private fashion. We develop such a method that uses the Exponential
Mechanism with a quality function that approximates well the utility goal while
maintaining a low sensitivity. Given the threshold, we then propose a novel
online hierarchical method and several post-processing techniques.
Building on these ideas, we formalize the steps into a framework for private
publishing of stream data. Our framework consists of three components: a
threshold optimizer that privately estimates the threshold, a perturber that
adds calibrated noise to the stream, and a smoother that improves the result
using post-processing. Within our framework, we design an algorithm satisfying
the more stringent setting of DP called local DP (LDP). To our knowledge, this
is the first LDP algorithm for publishing streaming data. Using four real-world
datasets, we demonstrate that our mechanism outperforms the state of the art
by 6-10 orders of magnitude in terms of utility (measured by the mean
squared error of answering a random range query).
|
[
{
"created": "Sun, 24 May 2020 14:25:49 GMT",
"version": "v1"
},
{
"created": "Thu, 7 Dec 2023 20:02:17 GMT",
"version": "v2"
}
] |
2023-12-11
|
[
[
"Wang",
"Tianhao",
""
],
[
"Chen",
"Joann Qiongna",
""
],
[
"Zhang",
"Zhikun",
""
],
[
"Su",
"Dong",
""
],
[
"Cheng",
"Yueqiang",
""
],
[
"Li",
"Zhou",
""
],
[
"Li",
"Ninghui",
""
],
[
"Jha",
"Somesh",
""
]
] |
In this paper, we study the problem of publishing a stream of real-valued data satisfying differential privacy (DP). One major challenge is that the maximal possible value can be quite large; thus it is necessary to estimate a threshold so that numbers above it are truncated to reduce the amount of noise that is required for all the data. The estimation must be done based on the data in a private fashion. We develop such a method that uses the Exponential Mechanism with a quality function that approximates well the utility goal while maintaining a low sensitivity. Given the threshold, we then propose a novel online hierarchical method and several post-processing techniques. Building on these ideas, we formalize the steps into a framework for private publishing of stream data. Our framework consists of three components: a threshold optimizer that privately estimates the threshold, a perturber that adds calibrated noise to the stream, and a smoother that improves the result using post-processing. Within our framework, we design an algorithm satisfying the more stringent setting of DP called local DP (LDP). To our knowledge, this is the first LDP algorithm for publishing streaming data. Using four real-world datasets, we demonstrate that our mechanism outperforms the state of the art by 6-10 orders of magnitude in terms of utility (measured by the mean squared error of answering a random range query).
|
2312.09730
|
Emmanuel Raptis
|
Marios Krestenitis, Emmanuel K. Raptis, Athanasios Ch. Kapoutsis,
Konstantinos Ioannidis, Elias B. Kosmatopoulos, Stefanos Vrochidis
|
Overcome the Fear Of Missing Out: Active Sensing UAV Scanning for
Precision Agriculture
| null |
Robotics and Autonomous Systems (2023): 104581
|
10.1016/j.robot.2023.104581
| null |
cs.RO cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
This paper deals with the problem of informative path planning for a UAV
deployed for precision agriculture applications. First, we observe that the
``fear of missing out'' on data leads to uniform, conservative scanning policies
over the whole agricultural field. Consequently, employing a non-uniform
scanning approach can mitigate the expenditure of time in areas with minimal or
negligible real value, while ensuring heightened precision in information-dense
regions. Turning to the available informative path planning methodologies, we
discern that certain methods entail intensive computational requirements, while
others necessitate training on an ideal world simulator. To address the
aforementioned issues, we propose an active sensing coverage path planning
approach, named OverFOMO, that regulates the speed of the UAV in accordance
with both the relative quantity of the identified classes, i.e. crops and
weeds, and the confidence level of such detections. To identify these
instances, a robust Deep Learning segmentation model is deployed. The
computational needs of the proposed algorithm are independent of the size of
the agricultural field, rendering its applicability on modern UAVs quite
straightforward. The proposed algorithm was evaluated with a simu-realistic
pipeline, combining data from real UAV missions and the high-fidelity dynamics
of the AirSim simulator, showcasing its performance improvements over the
established state of affairs for this type of mission. An open-source
implementation of the algorithm and the evaluation pipeline is also available:
\url{https://github.com/emmarapt/OverFOMO}.
|
[
{
"created": "Fri, 15 Dec 2023 12:13:22 GMT",
"version": "v1"
}
] |
2023-12-18
|
[
[
"Krestenitis",
"Marios",
""
],
[
"Raptis",
"Emmanuel K.",
""
],
[
"Kapoutsis",
"Athanasios Ch.",
""
],
[
"Ioannidis",
"Konstantinos",
""
],
[
"Kosmatopoulos",
"Elias B.",
""
],
[
"Vrochidis",
"Stefanos",
""
]
] |
This paper deals with the problem of informative path planning for a UAV deployed for precision agriculture applications. First, we observe that the ``fear of missing out'' on data leads to uniform, conservative scanning policies over the whole agricultural field. Consequently, employing a non-uniform scanning approach can mitigate the expenditure of time in areas with minimal or negligible real value, while ensuring heightened precision in information-dense regions. Turning to the available informative path planning methodologies, we discern that certain methods entail intensive computational requirements, while others necessitate training on an ideal world simulator. To address the aforementioned issues, we propose an active sensing coverage path planning approach, named OverFOMO, that regulates the speed of the UAV in accordance with both the relative quantity of the identified classes, i.e. crops and weeds, and the confidence level of such detections. To identify these instances, a robust Deep Learning segmentation model is deployed. The computational needs of the proposed algorithm are independent of the size of the agricultural field, rendering its applicability on modern UAVs quite straightforward. The proposed algorithm was evaluated with a simu-realistic pipeline, combining data from real UAV missions and the high-fidelity dynamics of the AirSim simulator, showcasing its performance improvements over the established state of affairs for this type of mission. An open-source implementation of the algorithm and the evaluation pipeline is also available: \url{https://github.com/emmarapt/OverFOMO}.
|
2403.05937
|
Cunhui Dong
|
Cunhui Dong, Haichuan Ma, Haotian Zhang, Changsheng Gao, Li Li, Dong
Liu
|
Wavelet-Like Transform-Based Technology in Response to the Call for
Proposals on Neural Network-Based Image Coding
| null | null | null | null |
cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Neural network-based image coding has been developing rapidly since its
birth. By 2022, its performance had surpassed that of the best-performing
traditional image coding framework -- H.266/VVC. Witnessing such success, the
IEEE 1857.11 working subgroup initiated a neural network-based image coding
standard project and issued a corresponding call for proposals (CfP). In
response to the CfP, this paper introduces a novel wavelet-like transform-based
end-to-end image coding framework -- iWaveV3. iWaveV3 incorporates many new
features such as an affine wavelet-like transform, a perceptual-friendly quality
metric, and more advanced training and online optimization strategies into our
previous wavelet-like transform-based framework iWave++. While preserving the
features of supporting lossy and lossless compression simultaneously, iWaveV3
also achieves state-of-the-art compression efficiency for objective quality and
is very competitive for perceptual quality. As a result, iWaveV3 is adopted as
a candidate scheme for developing the IEEE Standard for neural-network-based
image coding.
|
[
{
"created": "Sat, 9 Mar 2024 15:13:49 GMT",
"version": "v1"
}
] |
2024-03-12
|
[
[
"Dong",
"Cunhui",
""
],
[
"Ma",
"Haichuan",
""
],
[
"Zhang",
"Haotian",
""
],
[
"Gao",
"Changsheng",
""
],
[
"Li",
"Li",
""
],
[
"Liu",
"Dong",
""
]
] |
Neural network-based image coding has been developing rapidly since its birth. By 2022, its performance had surpassed that of the best-performing traditional image coding framework -- H.266/VVC. Witnessing such success, the IEEE 1857.11 working subgroup initiated a neural network-based image coding standard project and issued a corresponding call for proposals (CfP). In response to the CfP, this paper introduces a novel wavelet-like transform-based end-to-end image coding framework -- iWaveV3. iWaveV3 incorporates many new features such as an affine wavelet-like transform, a perceptual-friendly quality metric, and more advanced training and online optimization strategies into our previous wavelet-like transform-based framework iWave++. While preserving the features of supporting lossy and lossless compression simultaneously, iWaveV3 also achieves state-of-the-art compression efficiency for objective quality and is very competitive for perceptual quality. As a result, iWaveV3 is adopted as a candidate scheme for developing the IEEE Standard for neural-network-based image coding.
|
2208.05622
|
Qihan Guo
|
Qihan Guo (1), Siwei Wang (1), Jun Zhu (1) ((1) Tsinghua University)
|
Regret Analysis for Hierarchical Experts Bandit Problem
|
14 pages, 2 figures, submitted to AAAI 2023
| null | null | null |
cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study an extension of the standard bandit problem in which there are R layers
of experts. Multi-layered experts make selections layer by layer and only the
experts in the last layer can play arms. The goal of the learning policy is to
minimize the total regret in this hierarchical experts setting. We first
analyze the case in which the total regret grows linearly with the number of layers.
Then we focus on the case in which all experts play the Upper Confidence Bound
(UCB) strategy and give several sub-linear upper bounds for different
circumstances. Finally, we design some experiments to help the regret analysis
for the general case of hierarchical UCB structure and show the practical
significance of our theoretical results. This article gives many insights into
reasonable hierarchical decision structures.
|
[
{
"created": "Thu, 11 Aug 2022 03:44:55 GMT",
"version": "v1"
}
] |
2022-08-12
|
[
[
"Guo",
"Qihan",
"",
"Tsinghua University"
],
[
"Wang",
"Siwei",
"",
"Tsinghua University"
],
[
"Zhu",
"Jun",
"",
"Tsinghua University"
]
] |
We study an extension of the standard bandit problem in which there are R layers of experts. Multi-layered experts make selections layer by layer and only the experts in the last layer can play arms. The goal of the learning policy is to minimize the total regret in this hierarchical experts setting. We first analyze the case in which the total regret grows linearly with the number of layers. Then we focus on the case in which all experts play the Upper Confidence Bound (UCB) strategy and give several sub-linear upper bounds for different circumstances. Finally, we design some experiments to help the regret analysis for the general case of hierarchical UCB structure and show the practical significance of our theoretical results. This article gives many insights into reasonable hierarchical decision structures.
|
2307.06340
|
Lennart Br\"uggemann
|
Lennart Br\"uggemann
|
Securely extending and running low-code applications with C#
| null | null | null | null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Low-code development platforms provide an accessible infrastructure for the
creation of software by domain experts, also called "citizen developers",
without the need for formal programming education. Development is facilitated
through graphical user interfaces, although traditional programming can still
be used to extend low-code applications, for example when external services or
complex business logic needs to be implemented that cannot be realized with the
features available on a platform. Since citizen developers are usually not
specifically trained in software development, they require additional support
when writing code, particularly with regard to security and advanced techniques
like debugging or versioning. In this thesis, several options to assist
developers of low-code applications are investigated and implemented. A
framework to quickly build code editor extensions is developed, and an approach
to leverage the Roslyn compiler platform to implement custom static code
analysis rules for low-code development platforms using the .NET platform is
demonstrated. Furthermore, a sample application showing how Roslyn can be used
to build a simple, integrated debugging tool, as well as an abstraction of the
version control system Git for easier usage by citizen developers, is
implemented. Security is a critical aspect when low-code applications are
deployed. To provide an overview of possible options to ensure the secure and
isolated execution of low-code applications, a threat model is developed and
used as the basis for a comparison between OS-level virtualization, sandboxing,
and runtime code security implementations.
|
[
{
"created": "Wed, 12 Jul 2023 09:32:31 GMT",
"version": "v1"
}
] |
2023-07-14
|
[
[
"Brüggemann",
"Lennart",
""
]
] |
Low-code development platforms provide an accessible infrastructure for the creation of software by domain experts, also called "citizen developers", without the need for formal programming education. Development is facilitated through graphical user interfaces, although traditional programming can still be used to extend low-code applications, for example when external services or complex business logic needs to be implemented that cannot be realized with the features available on a platform. Since citizen developers are usually not specifically trained in software development, they require additional support when writing code, particularly with regard to security and advanced techniques like debugging or versioning. In this thesis, several options to assist developers of low-code applications are investigated and implemented. A framework to quickly build code editor extensions is developed, and an approach to leverage the Roslyn compiler platform to implement custom static code analysis rules for low-code development platforms using the .NET platform is demonstrated. Furthermore, a sample application showing how Roslyn can be used to build a simple, integrated debugging tool, as well as an abstraction of the version control system Git for easier usage by citizen developers, is implemented. Security is a critical aspect when low-code applications are deployed. To provide an overview of possible options to ensure the secure and isolated execution of low-code applications, a threat model is developed and used as the basis for a comparison between OS-level virtualization, sandboxing, and runtime code security implementations.
|
2404.08761
|
Joshua Feinglass
|
Joshua Feinglass, Jayaraman J. Thiagarajan, Rushil Anirudh, T.S.
Jayram, Yezhou Yang
|
`Eyes of a Hawk and Ears of a Fox': Part Prototype Network for
Generalized Zero-Shot Learning
|
Accepted to the CVPR 2024 LIMIT Workshop
| null | null | null |
cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Current approaches in Generalized Zero-Shot Learning (GZSL) are built upon
base models which consider only a single class attribute vector representation
over the entire image. This is an oversimplification of the process of novel
category recognition, where different regions of the image may have properties
from different seen classes and thus have different predominant attributes.
With this in mind, we take a fundamentally different approach: a pre-trained
Vision-Language detector (VINVL) sensitive to attribute information is employed
to efficiently obtain region features. A learned function maps the region
features to region-specific attribute attention used to construct class part
prototypes. We conduct experiments on a popular GZSL benchmark consisting of
the CUB, SUN, and AWA2 datasets where our proposed Part Prototype Network (PPN)
achieves promising results when compared with other popular base models.
Corresponding ablation studies and analysis show that our approach is highly
practical and has a distinct advantage over global attribute attention when
localized proposals are available.
|
[
{
"created": "Fri, 12 Apr 2024 18:37:00 GMT",
"version": "v1"
}
] |
2024-04-16
|
[
[
"Feinglass",
"Joshua",
""
],
[
"Thiagarajan",
"Jayaraman J.",
""
],
[
"Anirudh",
"Rushil",
""
],
[
"Jayram",
"T. S.",
""
],
[
"Yang",
"Yezhou",
""
]
] |
Current approaches in Generalized Zero-Shot Learning (GZSL) are built upon base models which consider only a single class attribute vector representation over the entire image. This is an oversimplification of the process of novel category recognition, where different regions of the image may have properties from different seen classes and thus have different predominant attributes. With this in mind, we take a fundamentally different approach: a pre-trained Vision-Language detector (VINVL) sensitive to attribute information is employed to efficiently obtain region features. A learned function maps the region features to region-specific attribute attention used to construct class part prototypes. We conduct experiments on a popular GZSL benchmark consisting of the CUB, SUN, and AWA2 datasets where our proposed Part Prototype Network (PPN) achieves promising results when compared with other popular base models. Corresponding ablation studies and analysis show that our approach is highly practical and has a distinct advantage over global attribute attention when localized proposals are available.
|
1607.04545
|
Pedro Montealegre
|
Pedro Montealegre, Ioan Todinca
|
On Distance-$d$ Independent Set and other problems in graphs with few
minimal separators
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fomin and Villanger (STACS 2010) proved that Maximum Independent Set,
Feedback Vertex Set, and more generally the problem of finding a maximum
induced subgraph of treewidth at most a constant $t$, can be solved in
polynomial time on graph classes with polynomially many minimal separators. We
extend these results in two directions. Let $\Gpoly$ be the class of graphs
with at most $\poly(n)$ minimal separators, for some polynomial $\poly$.
We show that the odd powers of a graph $G$ have at most as many minimal
separators as $G$. Consequently, \textsc{Distance-$d$ Independent Set}, which
consists in finding maximum set of vertices at pairwise distance at least $d$,
is polynomial on $\Gpoly$, for any even $d$. The problem is NP-hard on chordal
graphs for any odd $d \geq 3$.
We also provide polynomial algorithms for Connected Vertex Cover and
Connected Feedback Vertex Set on subclasses of $\Gpoly$ including chordal and
circular-arc graphs, and we discuss variants of independent domination
problems.
|
[
{
"created": "Fri, 15 Jul 2016 15:09:12 GMT",
"version": "v1"
}
] |
2016-07-18
|
[
[
"Montealegre",
"Pedro",
""
],
[
"Todinca",
"Ioan",
""
]
] |
Fomin and Villanger (STACS 2010) proved that Maximum Independent Set, Feedback Vertex Set, and more generally the problem of finding a maximum induced subgraph of treewidth at most a constant $t$, can be solved in polynomial time on graph classes with polynomially many minimal separators. We extend these results in two directions. Let $\Gpoly$ be the class of graphs with at most $\poly(n)$ minimal separators, for some polynomial $\poly$. We show that the odd powers of a graph $G$ have at most as many minimal separators as $G$. Consequently, \textsc{Distance-$d$ Independent Set}, which consists in finding maximum set of vertices at pairwise distance at least $d$, is polynomial on $\Gpoly$, for any even $d$. The problem is NP-hard on chordal graphs for any odd $d \geq 3$. We also provide polynomial algorithms for Connected Vertex Cover and Connected Feedback Vertex Set on subclasses of $\Gpoly$ including chordal and circular-arc graphs, and we discuss variants of independent domination problems.
|
1904.01994
|
Nabeel Abdur Rehman
|
Nabeel Abdur Rehman, Umar Saif, Rumi Chunara
|
Deep Landscape Features for Improving Vector-borne Disease Prediction
|
10 pages, 3 figures, 1 table
| null | null | null |
cs.CY cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The global population at risk of mosquito-borne diseases such as dengue,
yellow fever, chikungunya and Zika is expanding. Infectious disease models
commonly incorporate environmental measures like temperature and precipitation.
Given increasing availability of high-resolution satellite imagery, here we
consider including landscape features from satellite imagery into infectious
disease prediction models. To do so, we implement a Convolutional Neural
Network (CNN) model trained on Imagenet data and labelled landscape features in
satellite data from London. We then incorporate landscape features from
satellite image data from Pakistan, labelled using the CNN, in a well-known
Susceptible-Infectious-Recovered epidemic model, alongside dengue case data
from 2012-2016 in Pakistan. We study improvement of the prediction model for
each of the individual landscape features, and assess the feasibility of using
image labels from a different place. We find that incorporating
satellite-derived landscape features can improve prediction of outbreaks, which
is important for proactive and strategic surveillance and control programmes.
|
[
{
"created": "Wed, 3 Apr 2019 13:29:58 GMT",
"version": "v1"
}
] |
2019-04-04
|
[
[
"Rehman",
"Nabeel Abdur",
""
],
[
"Saif",
"Umar",
""
],
[
"Chunara",
"Rumi",
""
]
] |
The global population at risk of mosquito-borne diseases such as dengue, yellow fever, chikungunya and Zika is expanding. Infectious disease models commonly incorporate environmental measures like temperature and precipitation. Given increasing availability of high-resolution satellite imagery, here we consider including landscape features from satellite imagery into infectious disease prediction models. To do so, we implement a Convolutional Neural Network (CNN) model trained on Imagenet data and labelled landscape features in satellite data from London. We then incorporate landscape features from satellite image data from Pakistan, labelled using the CNN, in a well-known Susceptible-Infectious-Recovered epidemic model, alongside dengue case data from 2012-2016 in Pakistan. We study improvement of the prediction model for each of the individual landscape features, and assess the feasibility of using image labels from a different place. We find that incorporating satellite-derived landscape features can improve prediction of outbreaks, which is important for proactive and strategic surveillance and control programmes.
|
2401.16312
|
Yen-Chi Lee
|
Yun-Feng Lo, Yen-Chi Lee, Min-Hsiu Hsieh
|
Degradability of Modified Landau-Streater Type Low-Noise Quantum
Channels in High Dimensions
|
13 pages, 1 figure, comments welcome! v2: Introduction enhanced
| null | null | null |
cs.IT math.IT quant-ph
|
http://creativecommons.org/licenses/by/4.0/
|
This paper delves into the degradability of quantum channels, with a specific
focus on high-dimensional extensions of qubit depolarizing channels in
low-noise regimes. We build upon the foundation of $\eta$-approximate
degradable channels, as established by Sutter et al. and Leditzky et al., to
introduce and examine the Modified Landau-Streater (MLS) channels. These
channels expand upon the qubit depolarizing and the recently proposed modified
Werner-Holevo channels by Roofeh and Karimipour, extending them to
higher-dimensional Hilbert spaces (with dimension $d=2j+1$, where $j$ are
positive half-integers). Our investigation centers on their conformity to the
$O(\varepsilon^2)$ degradability pattern, aligning with and extending Leditzky
et al.'s findings in the $d=2$ case. By replacing the SU($2$) generators with
SU($d$) in our treatment, we may explore the potential inclusion of generalized
Gell-Mann matrices in future research. Our results enhance the understanding of
super-additivity in quantum channels within the low-noise regime and lay the
groundwork for future explorations into conditions and structures that could
lead to $O(\varepsilon^2)$ degradability across a broader spectrum of quantum
channels.
|
[
{
"created": "Mon, 29 Jan 2024 17:17:34 GMT",
"version": "v1"
},
{
"created": "Tue, 30 Jan 2024 05:04:50 GMT",
"version": "v2"
}
] |
2024-01-31
|
[
[
"Lo",
"Yun-Feng",
""
],
[
"Lee",
"Yen-Chi",
""
],
[
"Hsieh",
"Min-Hsiu",
""
]
] |
This paper delves into the degradability of quantum channels, with a specific focus on high-dimensional extensions of qubit depolarizing channels in low-noise regimes. We build upon the foundation of $\eta$-approximate degradable channels, as established by Sutter et al. and Leditzky et al., to introduce and examine the Modified Landau-Streater (MLS) channels. These channels expand upon the qubit depolarizing and the recently proposed modified Werner-Holevo channels by Roofeh and Karimipour, extending them to higher-dimensional Hilbert spaces (with dimension $d=2j+1$, where $j$ are positive half-integers). Our investigation centers on their conformity to the $O(\varepsilon^2)$ degradability pattern, aligning with and extending Leditzky et al.'s findings in the $d=2$ case. By replacing the SU($2$) generators with SU($d$) in our treatment, we may explore the potential inclusion of generalized Gell-Mann matrices in future research. Our results enhance the understanding of super-additivity in quantum channels within the low-noise regime and lay the groundwork for future explorations into conditions and structures that could lead to $O(\varepsilon^2)$ degradability across a broader spectrum of quantum channels.
|
2305.06801
|
Francois Remy
|
Fran\c{c}ois Remy, Alfiya Khabibullina, Thomas Demeester
|
Detecting Idiomatic Multiword Expressions in Clinical Terminology using
Definition-Based Representation Learning
|
Best Paper Award @ MWE 2023
| null | null | null |
cs.CL
|
http://creativecommons.org/publicdomain/zero/1.0/
|
This paper shines a light on the potential of definition-based semantic
models for detecting idiomatic and semi-idiomatic multiword expressions (MWEs)
in clinical terminology. Our study focuses on biomedical entities defined in
the UMLS ontology and aims to help prioritize the translation efforts of these
entities. In particular, we develop an effective tool for scoring the
idiomaticity of biomedical MWEs based on the degree of similarity between the
semantic representations of those MWEs and a weighted average of the
representation of their constituents. We achieve this using a biomedical
language model trained to produce similar representations for entity names and
their definitions, called BioLORD. The importance of this definition-based
approach is highlighted by comparing the BioLORD model to two other
state-of-the-art biomedical language models based on Transformer: SapBERT and
CODER. Our results show that the BioLORD model has a strong ability to identify
idiomatic MWEs, not replicated in other models. Our corpus-free idiomaticity
estimation helps ontology translators to focus on more challenging MWEs.
|
[
{
"created": "Thu, 11 May 2023 13:42:58 GMT",
"version": "v1"
}
] |
2023-05-12
|
[
[
"Remy",
"François",
""
],
[
"Khabibullina",
"Alfiya",
""
],
[
"Demeester",
"Thomas",
""
]
] |
This paper shines a light on the potential of definition-based semantic models for detecting idiomatic and semi-idiomatic multiword expressions (MWEs) in clinical terminology. Our study focuses on biomedical entities defined in the UMLS ontology and aims to help prioritize the translation efforts of these entities. In particular, we develop an effective tool for scoring the idiomaticity of biomedical MWEs based on the degree of similarity between the semantic representations of those MWEs and a weighted average of the representation of their constituents. We achieve this using a biomedical language model trained to produce similar representations for entity names and their definitions, called BioLORD. The importance of this definition-based approach is highlighted by comparing the BioLORD model to two other state-of-the-art biomedical language models based on Transformer: SapBERT and CODER. Our results show that the BioLORD model has a strong ability to identify idiomatic MWEs, not replicated in other models. Our corpus-free idiomaticity estimation helps ontology translators to focus on more challenging MWEs.
|
1509.01770
|
Kishan Wimalawarne
|
Kishan Wimalawarne, Ryota Tomioka and Masashi Sugiyama
|
Theoretical and Experimental Analyses of Tensor-Based Regression and
Classification
| null | null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We theoretically and experimentally investigate tensor-based regression and
classification. Our focus is regularization with various tensor norms,
including the overlapped trace norm, the latent trace norm, and the scaled
latent trace norm. We first give dual optimization methods using the
alternating direction method of multipliers, which is computationally efficient
when the number of training samples is moderate. We then theoretically derive
an excess risk bound for each tensor norm and clarify their behavior. Finally,
we perform extensive experiments using simulated and real data and demonstrate
the superiority of tensor-based learning methods over vector- and matrix-based
learning methods.
|
[
{
"created": "Sun, 6 Sep 2015 05:03:27 GMT",
"version": "v1"
}
] |
2015-09-08
|
[
[
"Wimalawarne",
"Kishan",
""
],
[
"Tomioka",
"Ryota",
""
],
[
"Sugiyama",
"Masashi",
""
]
] |
We theoretically and experimentally investigate tensor-based regression and classification. Our focus is regularization with various tensor norms, including the overlapped trace norm, the latent trace norm, and the scaled latent trace norm. We first give dual optimization methods using the alternating direction method of multipliers, which is computationally efficient when the number of training samples is moderate. We then theoretically derive an excess risk bound for each tensor norm and clarify their behavior. Finally, we perform extensive experiments using simulated and real data and demonstrate the superiority of tensor-based learning methods over vector- and matrix-based learning methods.
|
1309.6608
|
sukhjit Singh Sehra Er.
|
Sukhjit Singh Sehra, Jaiteg Singh and Hardeep Singh Rai
|
Assessment of OpenStreetMap Data - A Review
|
Review paper
|
International Journal of Computer Applications, 76(16):17-20,
August 2013
|
10.5120/13331-0888
| null |
cs.CY cs.DB cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The meaning and purpose of the web has been changing and evolving day by day.
Web 2.0 encouraged more contribution by end users. This movement provided
revolutionary methods of sharing and computing data through crowdsourcing, such
as OpenStreetMap, also called "the wikification of maps" by some researchers.
When crowdsourcing collects huge amounts of data with the help of the general
public, whose mapping experience varies widely, the focus of researchers should
be on analysing the data rather than collecting it. Researchers have assessed
the quality of OpenStreetMap data by comparing it with proprietary data or data
of governmental map agencies. This study reviews the research work on the
assessment of OpenStreetMap data and also discusses future directions.
|
[
{
"created": "Wed, 25 Sep 2013 18:52:48 GMT",
"version": "v1"
}
] |
2013-09-26
|
[
[
"Sehra",
"Sukhjit Singh",
""
],
[
"Singh",
"Jaiteg",
""
],
[
"Rai",
"Hardeep Singh",
""
]
] |
The meaning and purpose of the web has been changing and evolving day by day. Web 2.0 encouraged more contribution by end users. This movement provided revolutionary methods of sharing and computing data through crowdsourcing, such as OpenStreetMap, also called "the wikification of maps" by some researchers. When crowdsourcing collects huge amounts of data with the help of the general public, whose mapping experience varies widely, the focus of researchers should be on analysing the data rather than collecting it. Researchers have assessed the quality of OpenStreetMap data by comparing it with proprietary data or data of governmental map agencies. This study reviews the research work on the assessment of OpenStreetMap data and also discusses future directions.
|
2311.12779
|
Pooria Namyar
|
Pooria Namyar, Behnaz Arzani, Ryan Beckett, Santiago Segarra, Himanshu
Raj, Umesh Krishnaswamy, Ramesh Govindan, Srikanth Kandula
|
Finding Adversarial Inputs for Heuristics using Multi-level Optimization
| null | null | null | null |
cs.NI cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Production systems use heuristics because they are faster or scale better
than their optimal counterparts. Yet, practitioners are often unaware of the
performance gap between a heuristic and the optimum or between two heuristics
in realistic scenarios. We present MetaOpt, a system that helps analyze
heuristics. Users specify the heuristic and the optimal (or another heuristic)
as input, and MetaOpt automatically encodes these efficiently for a solver to
find performance gaps and their corresponding adversarial inputs. Its suite of
built-in optimizations helps it scale its analysis to practical problem sizes.
To show it is versatile, we used MetaOpt to analyze heuristics from three
domains (traffic engineering, vector bin packing, and packet scheduling). We
found a production traffic engineering heuristic can require 30% more capacity
than the optimal to satisfy realistic demands. Based on the patterns in the
adversarial inputs MetaOpt produced, we modified the heuristic to reduce its
performance gap by 12.5$\times$. We examined adversarial inputs to a vector bin
packing heuristic and proved a new lower bound on its performance.
|
[
{
"created": "Tue, 21 Nov 2023 18:43:16 GMT",
"version": "v1"
}
] |
2023-11-22
|
[
[
"Namyar",
"Pooria",
""
],
[
"Arzani",
"Behnaz",
""
],
[
"Beckett",
"Ryan",
""
],
[
"Segarra",
"Santiago",
""
],
[
"Raj",
"Himanshu",
""
],
[
"Krishnaswamy",
"Umesh",
""
],
[
"Govindan",
"Ramesh",
""
],
[
"Kandula",
"Srikanth",
""
]
] |
Production systems use heuristics because they are faster or scale better than their optimal counterparts. Yet, practitioners are often unaware of the performance gap between a heuristic and the optimum or between two heuristics in realistic scenarios. We present MetaOpt, a system that helps analyze heuristics. Users specify the heuristic and the optimal (or another heuristic) as input, and MetaOpt automatically encodes these efficiently for a solver to find performance gaps and their corresponding adversarial inputs. Its suite of built-in optimizations helps it scale its analysis to practical problem sizes. To show it is versatile, we used MetaOpt to analyze heuristics from three domains (traffic engineering, vector bin packing, and packet scheduling). We found a production traffic engineering heuristic can require 30% more capacity than the optimal to satisfy realistic demands. Based on the patterns in the adversarial inputs MetaOpt produced, we modified the heuristic to reduce its performance gap by 12.5$\times$. We examined adversarial inputs to a vector bin packing heuristic and proved a new lower bound on its performance.
|
2402.09965
|
Inkyu Park
|
Han Yegang, Park Minjun, Byun Duwon, Park Inkyu
|
Hierarchy Representation of Data in Machine Learnings
| null | null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
When there are models with clear-cut judgment results for several data
points, it is possible that most models exhibit a relationship where if they
correctly judge one target, they also correctly judge another target.
Conversely, if most models incorrectly judge one target, they may also
incorrectly judge another target. We propose a method for visualizing this
hierarchy among targets. This information is expected to be beneficial for
model improvement.
|
[
{
"created": "Thu, 30 Nov 2023 01:49:52 GMT",
"version": "v1"
}
] |
2024-02-16
|
[
[
"Yegang",
"Han",
""
],
[
"Minjun",
"Park",
""
],
[
"Duwon",
"Byun",
""
],
[
"Inkyu",
"Park",
""
]
] |
When there are models with clear-cut judgment results for several data points, it is possible that most models exhibit a relationship where if they correctly judge one target, they also correctly judge another target. Conversely, if most models incorrectly judge one target, they may also incorrectly judge another target. We propose a method for visualizing this hierarchy among targets. This information is expected to be beneficial for model improvement.
|
2205.13963
|
Georg Hager
|
Ayesha Afzal, Georg Hager, Gerhard Wellein, Stefano Markidis
|
Exploring Techniques for the Analysis of Spontaneous Asynchronicity in
MPI-Parallel Applications
|
12 pages, 9 figures, 1 table
| null |
10.1007/978-3-031-30442-2_12
| null |
cs.DC cs.LG cs.PF
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper studies the utility of using data analytics and machine learning
techniques for identifying, classifying, and characterizing the dynamics of
large-scale parallel (MPI) programs. To this end, we run microbenchmarks and
realistic proxy applications with the regular compute-communicate structure on
two different supercomputing platforms and choose the per-process performance
and MPI time per time step as relevant observables. Using principal component
analysis, clustering techniques, correlation functions, and a new "phase space
plot," we show how desynchronization patterns (or lack thereof) can be readily
identified from a data set that is much smaller than a full MPI trace. Our
methods also lead the way towards a more general classification of parallel
program dynamics.
|
[
{
"created": "Fri, 27 May 2022 13:19:07 GMT",
"version": "v1"
}
] |
2023-09-06
|
[
[
"Afzal",
"Ayesha",
""
],
[
"Hager",
"Georg",
""
],
[
"Wellein",
"Gerhard",
""
],
[
"Markidis",
"Stefano",
""
]
] |
This paper studies the utility of using data analytics and machine learning techniques for identifying, classifying, and characterizing the dynamics of large-scale parallel (MPI) programs. To this end, we run microbenchmarks and realistic proxy applications with the regular compute-communicate structure on two different supercomputing platforms and choose the per-process performance and MPI time per time step as relevant observables. Using principal component analysis, clustering techniques, correlation functions, and a new "phase space plot," we show how desynchronization patterns (or lack thereof) can be readily identified from a data set that is much smaller than a full MPI trace. Our methods also lead the way towards a more general classification of parallel program dynamics.
|
1909.02871
|
Stephan G\"unther
|
Stephan M. G\"unther, Nicolas Appel and Georg Carle
|
Galois Field Arithmetics for Linear Network Coding using AVX512
Instruction Set Extensions
|
6 pages, 2 figures, the updated finite field library is available
under the LGPL at https://moep80211.net/plink/libmoepgf-avx512
| null | null | null |
cs.DC cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Linear network coding requires arithmetic operations over Galois fields, more
specifically over finite extension fields. While coding over GF(2) reduces to
simple XOR operations, this field is less preferred for practical applications
of random linear network coding due to high chances of linear dependencies and
therefore redundant coded packets. Coding over larger fields such as GF(16) and
GF(256) does not have that issue, but is significantly slower. SIMD vector
extensions of processors such as AVX2 on x86-based systems or NEON on ARM-based
devices offer the potential to increase performance by orders of magnitude.
In this paper we present an implementation of different algorithms and Galois
fields based on the AVX512 instruction set extension and integrate it into the
finite field library libmoepgf. We compare the performance of the new
implementation to the reference implementation based on AVX2, showing a
significant increase in throughput. In addition, we provide a survey of the
best possible coding performance offered by a variety of different platforms.
|
[
{
"created": "Wed, 4 Sep 2019 09:20:59 GMT",
"version": "v1"
}
] |
2019-09-09
|
[
[
"Günther",
"Stephan M.",
""
],
[
"Appel",
"Nicolas",
""
],
[
"Carle",
"Georg",
""
]
] |
Linear network coding requires arithmetic operations over Galois fields, more specifically over finite extension fields. While coding over GF(2) reduces to simple XOR operations, this field is less preferred for practical applications of random linear network coding due to high chances of linear dependencies and therefore redundant coded packets. Coding over larger fields such as GF(16) and GF(256) does not have that issue, but is significantly slower. SIMD vector extensions of processors such as AVX2 on x86-based systems or NEON on ARM-based devices offer the potential to increase performance by orders of magnitude. In this paper we present an implementation of different algorithms and Galois fields based on the AVX512 instruction set extension and integrate it into the finite field library libmoepgf. We compare the performance of the new implementation to the reference implementation based on AVX2, showing a significant increase in throughput. In addition, we provide a survey of the best possible coding performance offered by a variety of different platforms.
|
2202.12888
|
MohammadJavad Azizi
|
Mohammadjavad Azizi, Branislav Kveton, Mohammad Ghavamzadeh, Sumeet
Katariya
|
Meta-Learning for Simple Regret Minimization
| null | null | null | null |
cs.LG cs.AI stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We develop a meta-learning framework for simple regret minimization in
bandits. In this framework, a learning agent interacts with a sequence of
bandit tasks, which are sampled i.i.d.\ from an unknown prior distribution, and
learns its meta-parameters to perform better on future tasks. We propose the
first Bayesian and frequentist meta-learning algorithms for this setting. The
Bayesian algorithm has access to a prior distribution over the meta-parameters
and its meta simple regret over $m$ bandit tasks with horizon $n$ is mere
$\tilde{O}(m / \sqrt{n})$. On the other hand, the meta simple regret of the
frequentist algorithm is $\tilde{O}(\sqrt{m} n + m/ \sqrt{n})$. While its
regret is worse, the frequentist algorithm is more general because it does not
need a prior distribution over the meta-parameters. It can also be analyzed in
more settings. We instantiate our algorithms for several classes of bandit
problems. Our algorithms are general and we complement our theory by evaluating
them empirically in several environments.
|
[
{
"created": "Fri, 25 Feb 2022 18:56:54 GMT",
"version": "v1"
},
{
"created": "Tue, 4 Jul 2023 20:01:11 GMT",
"version": "v2"
}
] |
2023-07-06
|
[
[
"Azizi",
"Mohammadjavad",
""
],
[
"Kveton",
"Branislav",
""
],
[
"Ghavamzadeh",
"Mohammad",
""
],
[
"Katariya",
"Sumeet",
""
]
] |
We develop a meta-learning framework for simple regret minimization in bandits. In this framework, a learning agent interacts with a sequence of bandit tasks, which are sampled i.i.d.\ from an unknown prior distribution, and learns its meta-parameters to perform better on future tasks. We propose the first Bayesian and frequentist meta-learning algorithms for this setting. The Bayesian algorithm has access to a prior distribution over the meta-parameters and its meta simple regret over $m$ bandit tasks with horizon $n$ is mere $\tilde{O}(m / \sqrt{n})$. On the other hand, the meta simple regret of the frequentist algorithm is $\tilde{O}(\sqrt{m} n + m/ \sqrt{n})$. While its regret is worse, the frequentist algorithm is more general because it does not need a prior distribution over the meta-parameters. It can also be analyzed in more settings. We instantiate our algorithms for several classes of bandit problems. Our algorithms are general and we complement our theory by evaluating them empirically in several environments.
|
2310.05324
|
Andrew Starnes
|
Andrew Starnes, Anton Dereventsov, Clayton Webster
|
Increasing Entropy to Boost Policy Gradient Performance on
Personalization Tasks
|
8 pages, 3 figures, accepted to WAIN 2023
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
In this effort, we consider the impact of regularization on the diversity of
actions taken by policies generated from reinforcement learning agents trained
using a policy gradient. Policy gradient agents are prone to entropy collapse,
which means certain actions are seldom, if ever, selected. We augment the
optimization objective function for the policy with terms constructed from
various $\varphi$-divergences and Maximum Mean Discrepancy, which encourage
current policies to follow a different state visitation and/or action choice
distribution than previously computed policies. We provide numerical
experiments using MNIST, CIFAR10, and Spotify datasets. The results demonstrate
the advantage of diversity-promoting policy regularization and that its use on
gradient-based approaches has significantly improved performance on a variety
of personalization tasks. Furthermore, numerical evidence is given to show that
policy regularization increases performance without losing accuracy.
|
[
{
"created": "Mon, 9 Oct 2023 01:03:05 GMT",
"version": "v1"
}
] |
2023-10-10
|
[
[
"Starnes",
"Andrew",
""
],
[
"Dereventsov",
"Anton",
""
],
[
"Webster",
"Clayton",
""
]
] |
In this effort, we consider the impact of regularization on the diversity of actions taken by policies generated from reinforcement learning agents trained using a policy gradient. Policy gradient agents are prone to entropy collapse, which means certain actions are seldom, if ever, selected. We augment the optimization objective function for the policy with terms constructed from various $\varphi$-divergences and Maximum Mean Discrepancy, which encourage current policies to follow a different state visitation and/or action choice distribution than previously computed policies. We provide numerical experiments using MNIST, CIFAR10, and Spotify datasets. The results demonstrate the advantage of diversity-promoting policy regularization and that its use on gradient-based approaches has significantly improved performance on a variety of personalization tasks. Furthermore, numerical evidence is given to show that policy regularization increases performance without losing accuracy.
|
2211.12224
|
Igor Donevski
|
Igor Donevski, Marco Virgili, Nithin Babu, Jimmy Jessen Nielsen,
Andrew J. Forsyth, Constantinos B. Papadias, Petar Popovski
|
Sustainable Wireless Services with UAV Swarms Tailored to Renewable
Energy Sources
|
To be published in Transactions on Smart Grid
| null | null | null |
cs.NI eess.SP
|
http://creativecommons.org/licenses/by/4.0/
|
Unmanned Aerial Vehicle (UAV) swarms are often required in off-grid
scenarios, such as disaster-struck, war-torn or rural areas, where the UAVs
have no access to the power grid and instead rely on renewable energy.
Considering a main battery fed from two renewable sources, wind and solar, we
scale such a system based on the financial budget, environmental
characteristics, and seasonal variations. Interestingly, the source of energy
is correlated with the energy expenditure of the UAVs, since strong winds cause
UAV hovering to become increasingly energy-hungry. The aim is to maximize the
cost efficiency of coverage at a particular location, which is a combinatorial
optimization problem for dimensioning of the multivariate energy generation
system under non-convex criteria. We have devised a customized algorithm by
lowering the processing complexity and reducing the solution space through
sampling. Evaluation is done with condensed real-world data on wind, solar
energy, and traffic load per unit area, driven by vendor-provided prices. The
implementation was tested in four locations, with varying wind or solar
intensity. The best results were achieved in locations with mild wind presence
and strong solar irradiation, while locations with strong winds and low solar
intensity require higher Capital Expenditure (CAPEX) allocation.
|
[
{
"created": "Tue, 22 Nov 2022 12:30:39 GMT",
"version": "v1"
},
{
"created": "Wed, 23 Nov 2022 17:16:56 GMT",
"version": "v2"
}
] |
2022-11-24
|
[
[
"Donevski",
"Igor",
""
],
[
"Virgili",
"Marco",
""
],
[
"Babu",
"Nithin",
""
],
[
"Nielsen",
"Jimmy Jessen",
""
],
[
"Forsyth",
"Andrew J.",
""
],
[
"Papadias",
"Constantinos B.",
""
],
[
"Popovski",
"Petar",
""
]
] |
Unmanned Aerial Vehicle (UAV) swarms are often required in off-grid scenarios, such as disaster-struck, war-torn or rural areas, where the UAVs have no access to the power grid and instead rely on renewable energy. Considering a main battery fed from two renewable sources, wind and solar, we scale such a system based on the financial budget, environmental characteristics, and seasonal variations. Interestingly, the source of energy is correlated with the energy expenditure of the UAVs, since strong winds cause UAV hovering to become increasingly energy-hungry. The aim is to maximize the cost efficiency of coverage at a particular location, which is a combinatorial optimization problem for dimensioning of the multivariate energy generation system under non-convex criteria. We have devised a customized algorithm by lowering the processing complexity and reducing the solution space through sampling. Evaluation is done with condensed real-world data on wind, solar energy, and traffic load per unit area, driven by vendor-provided prices. The implementation was tested in four locations, with varying wind or solar intensity. The best results were achieved in locations with mild wind presence and strong solar irradiation, while locations with strong winds and low solar intensity require higher Capital Expenditure (CAPEX) allocation.
|
1412.5275
|
Azade Rezaeezade
|
Ismail Nojavani, Azade Rezaeezade and Amirhassan Monadjemi
|
Iranian cashes recognition using mobile
|
arXiv #133709
|
International Journal of Computer Science & Information
Technology, volume 6, issue 6, pp.61-71, 2014
| null | null |
cs.CV
|
http://creativecommons.org/licenses/by/3.0/
|
In today's economic societies, using cash is an inseparable aspect of human
life. People use cash for shopping, services, entertainment, bank operations
and so on. This frequent contact with cash, and the necessity of knowing its
monetary value, poses one of the most challenging problems for visually
impaired people. In this paper we propose a mobile-phone-based approach to
identify the monetary value of a banknote from a photograph, using image
processing and machine vision techniques. While the developed approach is
very fast, it can recognize the value of cash with an average accuracy of
about 95% and can overcome different challenges such as rotation, scaling,
collision, illumination changes, perspective, and others.
|
[
{
"created": "Wed, 17 Dec 2014 07:51:56 GMT",
"version": "v1"
}
] |
2014-12-18
|
[
[
"Nojavani",
"Ismail",
""
],
[
"Rezaeezade",
"Azade",
""
],
[
"Monadjemi",
"Amirhassan",
""
]
] |
In today's economic societies, using cash is an inseparable aspect of human life. People use cash for shopping, services, entertainment, bank operations and so on. This frequent contact with cash, and the necessity of knowing its monetary value, poses one of the most challenging problems for visually impaired people. In this paper we propose a mobile-phone-based approach to identify the monetary value of a banknote from a photograph, using image processing and machine vision techniques. While the developed approach is very fast, it can recognize the value of cash with an average accuracy of about 95% and can overcome different challenges such as rotation, scaling, collision, illumination changes, perspective, and others.
|
1901.10583
|
Vithya Yogarajan
|
Vithya Yogarajan, Bernhard Pfahringer and Michael Mayo
|
Automatic end-to-end De-identification: Is high accuracy the only
metric?
|
17 pages, 1 figure, 7 tables, review journal paper
|
Applied Artificial Intelligence, 2020
|
10.1080/08839514.2020.1718343
|
04-Feb-2020
|
cs.CY cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
De-identification of electronic health records (EHR) is a vital step towards
advancing health informatics research and maximising the use of available data.
It is a two-step process where step one is the identification of protected
health information (PHI), and step two is replacing such PHI with surrogates.
Despite the recent advances in automatic de-identification of EHR, significant
obstacles remain if the abundant health data available are to be used to the
full potential. Accuracy in de-identification could be considered a necessary,
but not sufficient condition for the use of EHR without individual patient
consent. We present here a comprehensive review of the progress to date, both
the impressive successes in achieving high accuracy and the significant risks
and challenges that remain. To the best of our knowledge, this is the first
paper to present a complete picture of end-to-end automatic
de-identification. We review 18 recently published automatic
de-identification systems, designed to de-identify EHR in the form of free
text, to show the advancements made in improving the overall accuracy of the
system, and in identifying individual PHI. We argue that despite the
improvements in accuracy there remain challenges in surrogate generation and
replacement of identified PHI, and the risks posed to patient protection and
privacy.
|
[
{
"created": "Sun, 27 Jan 2019 21:51:40 GMT",
"version": "v1"
}
] |
2020-02-18
|
[
[
"Yogarajan",
"Vithya",
""
],
[
"Pfahringer",
"Bernhard",
""
],
[
"Mayo",
"Michael",
""
]
] |
De-identification of electronic health records (EHR) is a vital step towards advancing health informatics research and maximising the use of available data. It is a two-step process where step one is the identification of protected health information (PHI), and step two is replacing such PHI with surrogates. Despite the recent advances in automatic de-identification of EHR, significant obstacles remain if the abundant health data available are to be used to the full potential. Accuracy in de-identification could be considered a necessary, but not sufficient condition for the use of EHR without individual patient consent. We present here a comprehensive review of the progress to date, both the impressive successes in achieving high accuracy and the significant risks and challenges that remain. To the best of our knowledge, this is the first paper to present a complete picture of end-to-end automatic de-identification. We review 18 recently published automatic de-identification systems, designed to de-identify EHR in the form of free text, to show the advancements made in improving the overall accuracy of the system, and in identifying individual PHI. We argue that despite the improvements in accuracy there remain challenges in surrogate generation and replacement of identified PHI, and the risks posed to patient protection and privacy.
|
2303.12653
|
Fenghao Zhu
|
Fenghao Zhu, Bohao Wang, Zhaohui Yang, Chongwen Huang, Zhaoyang Zhang,
George C.Alexandropoulos, Chau Yuen and Merouane Debbah
|
Robust Millimeter Beamforming via Self-Supervised Hybrid Deep Learning
|
Accepted by EUSIPCO 2023
|
2023 31st European Signal Processing Conference (EUSIPCO),
Helsinki, Finland, 2023, pp. 915-919
|
10.23919/EUSIPCO58844.2023.10289989
| null |
cs.IT cs.LG math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Beamforming with large-scale antenna arrays has been widely used in recent
years and is acknowledged as an important part of 5G and the upcoming 6G.
Thus, various techniques are leveraged to improve its performance, e.g., deep
learning, advanced optimization algorithms, etc. Although its performance in
many previous research scenarios with deep learning is quite attractive, it
usually drops rapidly when the environment or dataset is changed. Therefore,
designing an effective beamforming network with strong robustness is an open
issue for intelligent wireless communications. In this paper, we propose a
robust self-supervised beamforming network and verify it on two different
datasets with various scenarios. Simulation results show that the proposed
self-supervised network with hybrid learning performs well on both the
classic DeepMIMO and the new WAIR-D dataset, with strong robustness under
various environments. We also present the principle that explains the
rationality of this kind of hybrid learning, which is instructive for
applying it to more kinds of datasets.
|
[
{
"created": "Thu, 9 Mar 2023 05:30:53 GMT",
"version": "v1"
},
{
"created": "Wed, 2 Aug 2023 12:20:40 GMT",
"version": "v2"
},
{
"created": "Fri, 2 Aug 2024 04:02:26 GMT",
"version": "v3"
}
] |
2024-08-05
|
[
[
"Zhu",
"Fenghao",
""
],
[
"Wang",
"Bohao",
""
],
[
"Yang",
"Zhaohui",
""
],
[
"Huang",
"Chongwen",
""
],
[
"Zhang",
"Zhaoyang",
""
],
[
"Alexandropoulos",
"George C.",
""
],
[
"Yuen",
"Chau",
""
],
[
"Debbah",
"Merouane",
""
]
] |
Beamforming with large-scale antenna arrays has been widely used in recent years and is acknowledged as an important part of 5G and the upcoming 6G. Thus, various techniques are leveraged to improve its performance, e.g., deep learning, advanced optimization algorithms, etc. Although its performance in many previous research scenarios with deep learning is quite attractive, it usually drops rapidly when the environment or dataset is changed. Therefore, designing an effective beamforming network with strong robustness is an open issue for intelligent wireless communications. In this paper, we propose a robust self-supervised beamforming network and verify it on two different datasets with various scenarios. Simulation results show that the proposed self-supervised network with hybrid learning performs well on both the classic DeepMIMO and the new WAIR-D dataset, with strong robustness under various environments. We also present the principle that explains the rationality of this kind of hybrid learning, which is instructive for applying it to more kinds of datasets.
|
2401.01013
|
Thanh Dung Le
|
Thanh-Dung Le
|
Boosting Transformer's Robustness and Efficacy in PPG Signal Artifact
Detection with Self-Supervised Learning
|
Under preparation for submission to IEEE for possible publication
| null | null | null |
cs.LG eess.SP
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Recent research at CHU Sainte Justine's Pediatric Critical Care Unit (PICU)
has revealed that traditional machine learning methods, such as semi-supervised
label propagation and K-nearest neighbors, outperform Transformer-based models
in artifact detection from PPG signals, mainly when data is limited. This study
addresses the underutilization of abundant unlabeled data by employing
self-supervised learning (SSL) to extract latent features from these data,
followed by fine-tuning on labeled data. Our experiments demonstrate that SSL
significantly enhances the Transformer model's ability to learn
representations, improving its robustness in artifact classification tasks.
Among various SSL techniques, including masking, contrastive learning, and
DINO (self-distillation with no labels), contrastive learning exhibited the
most stable and superior performance on small PPG datasets. Further, we delve into
optimizing contrastive loss functions, which are crucial for contrastive SSL.
Inspired by InfoNCE, we introduce a novel contrastive loss function that
facilitates smoother training and better convergence, thereby enhancing
performance in artifact classification. In summary, this study establishes the
efficacy of SSL in leveraging unlabeled data, particularly in enhancing the
capabilities of the Transformer model. This approach holds promise for broader
applications in PICU environments, where annotated data is often limited.
|
[
{
"created": "Tue, 2 Jan 2024 04:00:48 GMT",
"version": "v1"
}
] |
2024-01-03
|
[
[
"Le",
"Thanh-Dung",
""
]
] |
Recent research at CHU Sainte Justine's Pediatric Critical Care Unit (PICU) has revealed that traditional machine learning methods, such as semi-supervised label propagation and K-nearest neighbors, outperform Transformer-based models in artifact detection from PPG signals, mainly when data is limited. This study addresses the underutilization of abundant unlabeled data by employing self-supervised learning (SSL) to extract latent features from these data, followed by fine-tuning on labeled data. Our experiments demonstrate that SSL significantly enhances the Transformer model's ability to learn representations, improving its robustness in artifact classification tasks. Among various SSL techniques, including masking, contrastive learning, and DINO (self-distillation with no labels), contrastive learning exhibited the most stable and superior performance on small PPG datasets. Further, we delve into optimizing contrastive loss functions, which are crucial for contrastive SSL. Inspired by InfoNCE, we introduce a novel contrastive loss function that facilitates smoother training and better convergence, thereby enhancing performance in artifact classification. In summary, this study establishes the efficacy of SSL in leveraging unlabeled data, particularly in enhancing the capabilities of the Transformer model. This approach holds promise for broader applications in PICU environments, where annotated data is often limited.
|
1705.02822
|
Syed Mohammad Meesum
|
Syed Mohammad Meesum, Fahad Panolan, Saket Saurabh, and Meirav Zehavi
|
Rank Vertex Cover as a Natural Problem for Algebraic Compression
| null | null | null | null |
cs.DS
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The question of the existence of a polynomial kernelization of the Vertex
Cover Above LP problem has been a longstanding, notorious open problem in
Parameterized Complexity. Five years ago, the breakthrough work by Kratsch and
Wahlstrom on representative sets has finally answered this question in the
affirmative [FOCS 2012]. In this paper, we present an alternative, algebraic
compression of the Vertex Cover Above LP problem into the Rank Vertex Cover
problem. Here, the input consists of a graph G, a parameter k, and a bijection
between V(G) and the set of columns of a representation of a matroid M, and
the objective is to find a vertex cover whose rank is upper bounded by k.
|
[
{
"created": "Mon, 8 May 2017 10:56:28 GMT",
"version": "v1"
},
{
"created": "Wed, 10 May 2017 11:32:23 GMT",
"version": "v2"
}
] |
2017-05-11
|
[
[
"Meesum",
"Syed Mohammad",
""
],
[
"Panolan",
"Fahad",
""
],
[
"Saurabh",
"Saket",
""
],
[
"Zehavi",
"Meirav",
""
]
] |
The question of the existence of a polynomial kernelization of the Vertex Cover Above LP problem has been a longstanding, notorious open problem in Parameterized Complexity. Five years ago, the breakthrough work by Kratsch and Wahlstrom on representative sets finally answered this question in the affirmative [FOCS 2012]. In this paper, we present an alternative, algebraic compression of the Vertex Cover Above LP problem into the Rank Vertex Cover problem. Here, the input consists of a graph G, a parameter k, and a bijection between V(G) and the set of columns of a representation of a matroid M, and the objective is to find a vertex cover whose rank is upper bounded by k.
|
2204.09121
|
Henggang Cui
|
Christopher Hazard, Akshay Bhagat, Balarama Raju Buddharaju, Zhongtao
Liu, Yunming Shao, Lu Lu, Sammy Omari, Henggang Cui
|
Importance is in your attention: agent importance prediction for
autonomous driving
|
Accepted at CVPR 2022 Precognition workshop
| null | null | null |
cs.RO cs.CV
|
http://creativecommons.org/licenses/by/4.0/
|
Trajectory prediction is an important task in autonomous driving.
State-of-the-art trajectory prediction models often use attention mechanisms to
model the interaction between agents. In this paper, we show that the attention
information from such models can also be used to measure the importance of each
agent with respect to the ego vehicle's future planned trajectory. Our
experimental results on the nuPlans dataset show that our method can effectively
find and rank surrounding agents by their impact on the ego's plan.
|
[
{
"created": "Tue, 19 Apr 2022 20:34:30 GMT",
"version": "v1"
}
] |
2022-04-21
|
[
[
"Hazard",
"Christopher",
""
],
[
"Bhagat",
"Akshay",
""
],
[
"Buddharaju",
"Balarama Raju",
""
],
[
"Liu",
"Zhongtao",
""
],
[
"Shao",
"Yunming",
""
],
[
"Lu",
"Lu",
""
],
[
"Omari",
"Sammy",
""
],
[
"Cui",
"Henggang",
""
]
] |
Trajectory prediction is an important task in autonomous driving. State-of-the-art trajectory prediction models often use attention mechanisms to model the interaction between agents. In this paper, we show that the attention information from such models can also be used to measure the importance of each agent with respect to the ego vehicle's future planned trajectory. Our experimental results on the nuPlans dataset show that our method can effectively find and rank surrounding agents by their impact on the ego's plan.
|
1704.03970
|
Joel Mackenzie
|
Joel Mackenzie and J. Shane Culpepper and Roi Blanco and Matt Crane
and Charles L. A. Clarke and Jimmy Lin
|
Efficient and Effective Tail Latency Minimization in Multi-Stage
Retrieval Systems
|
Update 1: Edited email address
| null |
10.1145/3159652.3159676
| null |
cs.IR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Scalable web search systems typically employ multi-stage retrieval
architectures, where an initial stage generates a set of candidate documents
that are then pruned and re-ranked. Since subsequent stages typically exploit a
multitude of features of varying costs using machine-learned models, reducing
the number of documents that are considered at each stage improves latency. In
this work, we propose and validate a unified framework that can be used to
predict a wide range of performance-sensitive parameters which minimize
effectiveness loss, while simultaneously minimizing query latency, across all
stages of a multi-stage search architecture. Furthermore, our framework can be
easily applied in large-scale IR systems, can be trained without explicitly
requiring relevance judgments, and can target a variety of different
efficiency-effectiveness trade-offs, making it well suited to a wide range of
search scenarios. Our results show that we can reliably predict a number of
different parameters on a per-query basis, while simultaneously detecting and
minimizing the likelihood of tail-latency queries that exceed a pre-specified
performance budget. As a proof of concept, we use the prediction framework to
help alleviate the problem of tail-latency queries in early stage retrieval. On
the standard ClueWeb09B collection and 31k queries, we show that our new hybrid
system can reliably achieve a maximum query time of 200 ms with a 99.99%
response time guarantee without a significant loss in overall effectiveness.
The solutions presented are practical, and can easily be used in large-scale
distributed search engine deployments with a small amount of additional
overhead.
|
[
{
"created": "Thu, 13 Apr 2017 02:09:37 GMT",
"version": "v1"
},
{
"created": "Thu, 20 Apr 2017 23:34:13 GMT",
"version": "v2"
}
] |
2017-12-12
|
[
[
"Mackenzie",
"Joel",
""
],
[
"Culpepper",
"J. Shane",
""
],
[
"Blanco",
"Roi",
""
],
[
"Crane",
"Matt",
""
],
[
"Clarke",
"Charles L. A.",
""
],
[
"Lin",
"Jimmy",
""
]
] |
Scalable web search systems typically employ multi-stage retrieval architectures, where an initial stage generates a set of candidate documents that are then pruned and re-ranked. Since subsequent stages typically exploit a multitude of features of varying costs using machine-learned models, reducing the number of documents that are considered at each stage improves latency. In this work, we propose and validate a unified framework that can be used to predict a wide range of performance-sensitive parameters which minimize effectiveness loss, while simultaneously minimizing query latency, across all stages of a multi-stage search architecture. Furthermore, our framework can be easily applied in large-scale IR systems, can be trained without explicitly requiring relevance judgments, and can target a variety of different efficiency-effectiveness trade-offs, making it well suited to a wide range of search scenarios. Our results show that we can reliably predict a number of different parameters on a per-query basis, while simultaneously detecting and minimizing the likelihood of tail-latency queries that exceed a pre-specified performance budget. As a proof of concept, we use the prediction framework to help alleviate the problem of tail-latency queries in early stage retrieval. On the standard ClueWeb09B collection and 31k queries, we show that our new hybrid system can reliably achieve a maximum query time of 200 ms with a 99.99% response time guarantee without a significant loss in overall effectiveness. The solutions presented are practical, and can easily be used in large-scale distributed search engine deployments with a small amount of additional overhead.
|
2301.12475
|
Michael Mislove
|
Sam van Gool, Paul-Andr\'e Melli\`es and Vincent Moreau
|
Profinite lambda-terms and parametricity
|
For the proceedings of MFPS2023
|
Electronic Notes in Theoretical Informatics and Computer Science,
Volume 3 - Proceedings of MFPS XXXIX (November 23, 2023) entics:12280
|
10.46298/entics.12280
| null |
cs.LO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Combining ideas coming from Stone duality and Reynolds parametricity, we
formulate in a clean and principled way a notion of profinite lambda-term
which, we show, generalizes at every type the traditional notion of profinite
word coming from automata theory. We start by defining the Stone space of
profinite lambda-terms as a projective limit of finite sets of usual
lambda-terms, considered modulo a notion of equivalence based on the finite
standard model. One main contribution of the paper is to establish that,
somewhat surprisingly, the resulting notion of profinite lambda-term coming
from Stone duality lives in perfect harmony with the principles of Reynolds
parametricity. In addition, we show that the notion of profinite lambda-term is
compositional by constructing a cartesian closed category of profinite
lambda-terms, and we establish that the embedding from lambda-terms modulo
beta-eta-conversion to profinite lambda-terms is faithful using Statman's
finite completeness theorem. Finally, we prove that the traditional Church
encoding of finite words into lambda-terms can be extended to profinite words,
and leads to a homeomorphism between the space of profinite words and the space
of profinite lambda-terms of the corresponding Church type.
|
[
{
"created": "Sun, 29 Jan 2023 15:56:12 GMT",
"version": "v1"
},
{
"created": "Tue, 18 Apr 2023 14:06:16 GMT",
"version": "v2"
},
{
"created": "Thu, 14 Sep 2023 06:59:53 GMT",
"version": "v3"
},
{
"created": "Sat, 18 Nov 2023 21:07:52 GMT",
"version": "v4"
}
] |
2024-02-14
|
[
[
"van Gool",
"Sam",
""
],
[
"Melliès",
"Paul-André",
""
],
[
"Moreau",
"Vincent",
""
]
] |
Combining ideas coming from Stone duality and Reynolds parametricity, we formulate in a clean and principled way a notion of profinite lambda-term which, we show, generalizes at every type the traditional notion of profinite word coming from automata theory. We start by defining the Stone space of profinite lambda-terms as a projective limit of finite sets of usual lambda-terms, considered modulo a notion of equivalence based on the finite standard model. One main contribution of the paper is to establish that, somewhat surprisingly, the resulting notion of profinite lambda-term coming from Stone duality lives in perfect harmony with the principles of Reynolds parametricity. In addition, we show that the notion of profinite lambda-term is compositional by constructing a cartesian closed category of profinite lambda-terms, and we establish that the embedding from lambda-terms modulo beta-eta-conversion to profinite lambda-terms is faithful using Statman's finite completeness theorem. Finally, we prove that the traditional Church encoding of finite words into lambda-terms can be extended to profinite words, and leads to a homeomorphism between the space of profinite words and the space of profinite lambda-terms of the corresponding Church type.
|
1507.03840
|
Levis Zerpa
|
Levis Zerpa
|
Using interrogative logic to teach classical logic
|
Proceedings of the Fourth International Conference on Tools for
Teaching Logic (TTL2015), Rennes, France, June 9-12, 2015. Editors: M.
Antonia Huertas, Jo\~ao Marcos, Mar\'ia Manzano, Sophie Pinchinat,
Fran\c{c}ois Schwarzentruber
| null | null | null |
cs.CY cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
In the paper I discuss a tool for helping students in their symbolizations of
natural language sentences using the formal language of classical first order
logic (CFOL). The tool is an extension of Hintikka's concept of (Inquirer's)
range of attention in the context of interrogative games. Any given text is
reconstructed as the answer to a "big" or principal question obtained through
the answers of a series of "small" or operative questions. The tool brings some
"narrative flavor" to the symbolization and offers a convenient mold that can
be used by students in many different contexts.
|
[
{
"created": "Tue, 14 Jul 2015 13:16:49 GMT",
"version": "v1"
}
] |
2015-07-19
|
[
[
"Zerpa",
"Levis",
""
]
] |
In the paper I discuss a tool for helping students in their symbolizations of natural language sentences using the formal language of classical first order logic (CFOL). The tool is an extension of Hintikka's concept of (Inquirer's) range of attention in the context of interrogative games. Any given text is reconstructed as the answer to a "big" or principal question obtained through the answers of a series of "small" or operative questions. The tool brings some "narrative flavor" to the symbolization and offers a convenient mold that can be used by students in many different contexts.
|
2310.12632
|
Yannik Hahn
|
Yannik Hahn, Robert Maack, Guido Buchholz, Marion Purrio, Matthias
Angerhausen, Hasan Tercan, Tobias Meisen
|
Towards a Deep Learning-based Online Quality Prediction System for
Welding Processes
|
Accepted for CIRP CMS '23
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The digitization of manufacturing processes enables promising applications
for machine learning-assisted quality assurance. A widely used manufacturing
process that can strongly benefit from data-driven solutions is gas metal arc
welding (GMAW). The welding process is characterized by complex cause-effect
relationships between material properties, process conditions and weld quality.
In non-laboratory environments with frequently changing process parameters,
accurate determination of weld quality by destructive testing is economically
unfeasible. Deep learning offers the potential to identify the relationships in
available process data and predict the weld quality from process observations.
In this paper, we present a concept for a deep learning based predictive
quality system in GMAW. At its core, the concept involves a pipeline consisting
of four major phases: collection and management of multi-sensor data (e.g.
current and voltage), real-time processing and feature engineering of the time
series data by means of autoencoders, training and deployment of suitable
recurrent deep learning models for quality predictions, and model evolutions
under changing process conditions using continual learning. The concept
provides the foundation for future research activities in which we will realize
an online predictive quality system for running production.
|
[
{
"created": "Thu, 19 Oct 2023 10:35:50 GMT",
"version": "v1"
},
{
"created": "Fri, 20 Oct 2023 10:44:04 GMT",
"version": "v2"
}
] |
2023-10-23
|
[
[
"Hahn",
"Yannik",
""
],
[
"Maack",
"Robert",
""
],
[
"Buchholz",
"Guido",
""
],
[
"Purrio",
"Marion",
""
],
[
"Angerhausen",
"Matthias",
""
],
[
"Tercan",
"Hasan",
""
],
[
"Meisen",
"Tobias",
""
]
] |
The digitization of manufacturing processes enables promising applications for machine learning-assisted quality assurance. A widely used manufacturing process that can strongly benefit from data-driven solutions is gas metal arc welding (GMAW). The welding process is characterized by complex cause-effect relationships between material properties, process conditions and weld quality. In non-laboratory environments with frequently changing process parameters, accurate determination of weld quality by destructive testing is economically unfeasible. Deep learning offers the potential to identify the relationships in available process data and predict the weld quality from process observations. In this paper, we present a concept for a deep learning based predictive quality system in GMAW. At its core, the concept involves a pipeline consisting of four major phases: collection and management of multi-sensor data (e.g. current and voltage), real-time processing and feature engineering of the time series data by means of autoencoders, training and deployment of suitable recurrent deep learning models for quality predictions, and model evolutions under changing process conditions using continual learning. The concept provides the foundation for future research activities in which we will realize an online predictive quality system for running production.
|
2209.10208
|
Andreas Nienk\"otter
|
Andreas Nienk\"otter, Xiaoyi Jiang
|
Kernel-Based Generalized Median Computation for Consensus Learning
|
17 pages, 5 figures, 7 tables
|
Early Access by TPAMI 2022
(https://ieeexplore.ieee.org/document/9869722)
|
10.1109/TPAMI.2022.3202565
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Computing a consensus object from a set of given objects is a core problem in
machine learning and pattern recognition. One popular approach is to formulate
it as an optimization problem using the generalized median. Previous methods
like the Prototype and Distance-Preserving Embedding methods transform objects
into a vector space, solve the generalized median problem in this space, and
inversely transform back into the original space. Both of these methods have
been successfully applied to a wide range of object domains, where the
generalized median problem has inherent high computational complexity
(typically $\mathcal{NP}$-hard) and therefore approximate solutions are
required. Previously, explicit embedding methods were used in the computation,
which often do not reflect the spatial relationship between objects exactly. In
this work we introduce a kernel-based generalized median framework that is
applicable to both positive definite and indefinite kernels. This framework
computes the relationship between objects and its generalized median in kernel
space, without the need of an explicit embedding. We show that the spatial
relationship between objects is more accurately represented in kernel space
than in an explicit vector space using easy-to-compute kernels, and demonstrate
superior performance of generalized median computation on datasets of three
different domains. A software toolbox resulting from our work is made publicly
available to encourage other researchers to explore the generalized median
computation and applications.
|
[
{
"created": "Wed, 21 Sep 2022 09:09:01 GMT",
"version": "v1"
}
] |
2022-09-22
|
[
[
"Nienkötter",
"Andreas",
""
],
[
"Jiang",
"Xiaoyi",
""
]
] |
Computing a consensus object from a set of given objects is a core problem in machine learning and pattern recognition. One popular approach is to formulate it as an optimization problem using the generalized median. Previous methods like the Prototype and Distance-Preserving Embedding methods transform objects into a vector space, solve the generalized median problem in this space, and inversely transform back into the original space. Both of these methods have been successfully applied to a wide range of object domains, where the generalized median problem has inherent high computational complexity (typically $\mathcal{NP}$-hard) and therefore approximate solutions are required. Previously, explicit embedding methods were used in the computation, which often do not reflect the spatial relationship between objects exactly. In this work we introduce a kernel-based generalized median framework that is applicable to both positive definite and indefinite kernels. This framework computes the relationship between objects and its generalized median in kernel space, without the need of an explicit embedding. We show that the spatial relationship between objects is more accurately represented in kernel space than in an explicit vector space using easy-to-compute kernels, and demonstrate superior performance of generalized median computation on datasets of three different domains. A software toolbox resulting from our work is made publicly available to encourage other researchers to explore the generalized median computation and applications.
|
2311.17105
|
Kerui Gu
|
Kerui Gu, Rongyu Chen, Angela Yao
|
On the Calibration of Human Pose Estimation
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Most 2D human pose estimation frameworks estimate keypoint confidence in an
ad-hoc manner, using heuristics such as the maximum value of heatmaps. The
confidence is part of the evaluation scheme, e.g., AP for the MSCOCO dataset,
yet has been largely overlooked in the development of state-of-the-art methods.
This paper takes the first steps in addressing miscalibration in pose
estimation. From a calibration point of view, the confidence should be aligned
with the pose accuracy. In practice, existing methods are poorly calibrated. We
show, through theoretical analysis, why a miscalibration gap exists and how to
narrow the gap. Simply predicting the instance size and adjusting the
confidence function gives considerable AP improvements. Given the black-box
nature of deep neural networks, however, it is not possible to fully close this
gap with only closed-form adjustments. As such, we go one step further and
learn network-specific adjustments by enforcing consistency between confidence
and pose accuracy. Our proposed Calibrated ConfidenceNet (CCNet) is a
light-weight post-hoc addition that improves AP by up to 1.4% on off-the-shelf
pose estimation frameworks. Applied to the downstream task of mesh recovery,
CCNet facilitates an additional 1.0mm decrease in 3D keypoint error.
|
[
{
"created": "Tue, 28 Nov 2023 09:31:09 GMT",
"version": "v1"
}
] |
2023-11-30
|
[
[
"Gu",
"Kerui",
""
],
[
"Chen",
"Rongyu",
""
],
[
"Yao",
"Angela",
""
]
] |
Most 2D human pose estimation frameworks estimate keypoint confidence in an ad-hoc manner, using heuristics such as the maximum value of heatmaps. The confidence is part of the evaluation scheme, e.g., AP for the MSCOCO dataset, yet has been largely overlooked in the development of state-of-the-art methods. This paper takes the first steps in addressing miscalibration in pose estimation. From a calibration point of view, the confidence should be aligned with the pose accuracy. In practice, existing methods are poorly calibrated. We show, through theoretical analysis, why a miscalibration gap exists and how to narrow the gap. Simply predicting the instance size and adjusting the confidence function gives considerable AP improvements. Given the black-box nature of deep neural networks, however, it is not possible to fully close this gap with only closed-form adjustments. As such, we go one step further and learn network-specific adjustments by enforcing consistency between confidence and pose accuracy. Our proposed Calibrated ConfidenceNet (CCNet) is a light-weight post-hoc addition that improves AP by up to 1.4% on off-the-shelf pose estimation frameworks. Applied to the downstream task of mesh recovery, CCNet facilitates an additional 1.0mm decrease in 3D keypoint error.
|
1509.03017
|
EPTCS
|
Helle Hvid Hansen (Delft University of Technology, Delft, The
Netherlands), Clemens Kupke (University of Strathclyde, Glasgow, United
Kingdom)
|
Weak Completeness of Coalgebraic Dynamic Logics
|
In Proceedings FICS 2015, arXiv:1509.02826
|
EPTCS 191, 2015, pp. 90-104
|
10.4204/EPTCS.191.9
| null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a coalgebraic generalisation of Fischer and Ladner's Propositional
Dynamic Logic (PDL) and Parikh's Game Logic (GL). In earlier work, we proved a
generic strong completeness result for coalgebraic dynamic logics without
iteration. The coalgebraic semantics of such programs is given by a monad T,
and modalities are interpreted via a predicate lifting λ whose transpose is a
monad morphism from T to the neighbourhood monad. In this paper, we show that
if the monad T carries a complete semilattice structure, then we can define an
iteration construct, and suitable notions of diamond-likeness and box-likeness
of predicate-liftings which allows for the definition of an axiomatisation
parametric in T, λ and a chosen set of pointwise program operations. As our
main result, we show that if the pointwise operations are "negation-free" and
Kleisli composition left-distributes over the induced join on Kleisli arrows,
then this axiomatisation is weakly complete with respect to the class of
standard models. As special instances, we recover the weak completeness of PDL
and of dual-free Game Logic. As a modest new result we obtain completeness for
dual-free GL extended with intersection (demonic choice) of games.
|
[
{
"created": "Thu, 10 Sep 2015 05:32:04 GMT",
"version": "v1"
}
] |
2016-08-08
|
[
[
"Hansen",
"Helle Hvid",
"",
"Delft University of Technology, Delft, The\n Netherlands"
],
[
"Kupke",
"Clemens",
"",
"University of Strathclyde, Glasgow, United\n Kingdom"
]
] |
We present a coalgebraic generalisation of Fischer and Ladner's Propositional Dynamic Logic (PDL) and Parikh's Game Logic (GL). In earlier work, we proved a generic strong completeness result for coalgebraic dynamic logics without iteration. The coalgebraic semantics of such programs is given by a monad T, and modalities are interpreted via a predicate lifting λ whose transpose is a monad morphism from T to the neighbourhood monad. In this paper, we show that if the monad T carries a complete semilattice structure, then we can define an iteration construct, and suitable notions of diamond-likeness and box-likeness of predicate-liftings which allows for the definition of an axiomatisation parametric in T, λ and a chosen set of pointwise program operations. As our main result, we show that if the pointwise operations are "negation-free" and Kleisli composition left-distributes over the induced join on Kleisli arrows, then this axiomatisation is weakly complete with respect to the class of standard models. As special instances, we recover the weak completeness of PDL and of dual-free Game Logic. As a modest new result we obtain completeness for dual-free GL extended with intersection (demonic choice) of games.
|
1803.05652
|
Samson Zhou
|
Jeremiah Blocki, Venkata Gandikota, Elena Grigorescu, Samson Zhou
|
Relaxed Locally Correctable Codes in Computationally Bounded Channels
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Error-correcting codes that admit local decoding and correcting algorithms
have been the focus of much recent research due to their numerous theoretical
and practical applications. An important goal is to obtain the best possible
tradeoffs between the number of queries the algorithm makes to its oracle (the
locality of the task), and the amount of redundancy in the encoding (the
information rate).
In Hamming's classical adversarial channel model, the current tradeoffs are
dramatic, allowing either small locality, but superpolynomial blocklength, or
small blocklength, but high locality. However, in the computationally bounded,
adversarial channel model, proposed by Lipton (STACS 1994), constructions of
locally decodable codes suddenly exhibit small locality and small blocklength,
but these constructions require strong trusted setup assumptions e.g.,
Ostrovsky, Pandey and Sahai (ICALP 2007) construct private locally decodable
codes in the setting where the sender and receiver already share a symmetric
key.
We study variants of locally decodable and locally correctable codes in
computationally bounded, adversarial channels, in a setting with no public-key
or private-key cryptographic setup. The only setup assumption we require is the
selection of the public parameters (seed) for a collision-resistant hash
function. Specifically, we provide constructions of relaxed locally correctable
and relaxed locally decodable codes over the binary alphabet, with constant
information rate, and poly-logarithmic locality.
Our constructions, which compare favorably with their classical analogues in
the computationally unbounded Hamming channel, crucially employ
collision-resistant hash functions and local expander graphs, extending ideas
from recent cryptographic constructions of memory-hard functions.
|
[
{
"created": "Thu, 15 Mar 2018 09:23:58 GMT",
"version": "v1"
},
{
"created": "Mon, 17 Sep 2018 20:45:49 GMT",
"version": "v2"
}
] |
2018-09-19
|
[
[
"Blocki",
"Jeremiah",
""
],
[
"Gandikota",
"Venkata",
""
],
[
"Grigorescu",
"Elena",
""
],
[
"Zhou",
"Samson",
""
]
] |
Error-correcting codes that admit local decoding and correcting algorithms have been the focus of much recent research due to their numerous theoretical and practical applications. An important goal is to obtain the best possible tradeoffs between the number of queries the algorithm makes to its oracle (the locality of the task), and the amount of redundancy in the encoding (the information rate). In Hamming's classical adversarial channel model, the current tradeoffs are dramatic, allowing either small locality, but superpolynomial blocklength, or small blocklength, but high locality. However, in the computationally bounded, adversarial channel model, proposed by Lipton (STACS 1994), constructions of locally decodable codes suddenly exhibit small locality and small blocklength, but these constructions require strong trusted setup assumptions e.g., Ostrovsky, Pandey and Sahai (ICALP 2007) construct private locally decodable codes in the setting where the sender and receiver already share a symmetric key. We study variants of locally decodable and locally correctable codes in computationally bounded, adversarial channels, in a setting with no public-key or private-key cryptographic setup. The only setup assumption we require is the selection of the public parameters (seed) for a collision-resistant hash function. Specifically, we provide constructions of relaxed locally correctable and relaxed locally decodable codes over the binary alphabet, with constant information rate, and poly-logarithmic locality. Our constructions, which compare favorably with their classical analogues in the computationally unbounded Hamming channel, crucially employ collision-resistant hash functions and local expander graphs, extending ideas from recent cryptographic constructions of memory-hard functions.
|
2310.04276
|
Ilker Yildirim
|
Ilker Yildirim, L.A. Paul
|
From task structures to world models: What do LLMs know?
| null | null |
10.1016/j.tics.2024.02.008
| null |
cs.AI q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In what sense does a large language model have knowledge? The answer to this
question extends beyond the capabilities of a particular AI system, and
challenges our assumptions about the nature of knowledge and intelligence. We
answer by granting LLMs "instrumental knowledge"; knowledge defined by a
certain set of abilities. We then ask how such knowledge is related to the more
ordinary, "worldly" knowledge exhibited by human agents, and explore this in
terms of the degree to which instrumental knowledge can be said to incorporate
the structured world models of cognitive science. We discuss ways LLMs could
recover degrees of worldly knowledge, and suggest such recovery will be
governed by an implicit, resource-rational tradeoff between world models and
task demands.
|
[
{
"created": "Fri, 6 Oct 2023 14:21:59 GMT",
"version": "v1"
}
] |
2024-04-02
|
[
[
"Yildirim",
"Ilker",
""
],
[
"Paul",
"L. A.",
""
]
] |
In what sense does a large language model have knowledge? The answer to this question extends beyond the capabilities of a particular AI system, and challenges our assumptions about the nature of knowledge and intelligence. We answer by granting LLMs "instrumental knowledge"; knowledge defined by a certain set of abilities. We then ask how such knowledge is related to the more ordinary, "worldly" knowledge exhibited by human agents, and explore this in terms of the degree to which instrumental knowledge can be said to incorporate the structured world models of cognitive science. We discuss ways LLMs could recover degrees of worldly knowledge, and suggest such recovery will be governed by an implicit, resource-rational tradeoff between world models and task demands.
|
2311.12176
|
Meng-Che Chang
|
Meng-Che Chang and Matthieu R. Bloch
|
Covert Online Decision Making: From Sequential Hypothesis Testing to
Stochastic Bandits
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
We study the problem of covert online decision-making in which an agent
attempts to identify a parameter governing a system by probing the system while
escaping detection from an adversary. The system is modeled as a Markov kernel
whose input is controlled by the agent and whose two outputs are observed by
the agent and the adversary, respectively. This problem is motivated by
applications such as covert sensing or covert radar, in which one tries to
perform a sensing task without arousing suspicion by an adversary monitoring
the environment for the presence of sensing signals. Specifically, we consider
two situations corresponding to different amounts of knowledge of the system.
If the kernel is known but governed by an unknown fixed parameter, we formulate
the problem as a sequential hypothesis testing problem. If the kernel
determining the observations of the agent is unknown but the kernel determining
those of the adversary is known, we formulate the problem as a best-arm
identification problem in a bandit setting. In both situations, we characterize
the exponent of the probability of identification error. As expected because of
the covertness requirement, the probability of identification error decays
exponentially with the square root of the blocklength.
|
[
{
"created": "Mon, 20 Nov 2023 20:43:49 GMT",
"version": "v1"
}
] |
2023-11-22
|
[
[
"Chang",
"Meng-Che",
""
],
[
"Bloch",
"Matthieu R.",
""
]
] |
We study the problem of covert online decision-making in which an agent attempts to identify a parameter governing a system by probing the system while escaping detection from an adversary. The system is modeled as a Markov kernel whose input is controlled by the agent and whose two outputs are observed by the agent and the adversary, respectively. This problem is motivated by applications such as covert sensing or covert radar, in which one tries to perform a sensing task without arousing suspicion by an adversary monitoring the environment for the presence of sensing signals. Specifically, we consider two situations corresponding to different amounts of knowledge of the system. If the kernel is known but governed by an unknown fixed parameter, we formulate the problem as a sequential hypothesis testing problem. If the kernel determining the observations of the agent is unknown but the kernel determining those of the adversary is known, we formulate the problem as a best-arm identification problem in a bandit setting. In both situations, we characterize the exponent of the probability of identification error. As expected because of the covertness requirement, the probability of identification error decays exponentially with the square root of the blocklength.
|
1709.05487
|
Sreelekha S
|
Sreelekha S, Pushpak Bhattacharyya
|
Role of Morphology Injection in Statistical Machine Translation
|
36 pages, 12 figures, 15 tables, Modified version Published in: ACM
Transactions on Asian and Low-Resource Language Information Processing
(TALLIP) TALLIP Homepage archive Volume 17 Issue 1, September 2017
Issue-in-Progress,Article No. 1
|
ACM Transactions on Asian and Low-Resource Language Information
Processing (TALLIP), Volume 17 Issue 1, September 2017 Issue-in-Progress,
Article No. 1
|
10.1145/3129208
| null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Phrase-based Statistical models are more commonly used as they perform
optimally in terms of both, translation quality and complexity of the system.
Hindi and in general all Indian languages are morphologically richer than
English. Hence, even though Phrase-based systems perform very well for the less
divergent language pairs, for English to Indian language translation, we need
more linguistic information (such as morphology, parse tree, parts of speech
tags, etc.) on the source side. Factored models seem to be useful in this case,
as Factored models consider word as a vector of factors. These factors can
contain any information about the surface word and use it while translating.
Hence, the objective of this work is to handle morphological inflections in
Hindi and Marathi using Factored translation models while translating from
English. SMT approaches face the problem of data sparsity while translating
into a morphologically rich language. It is very unlikely for a parallel corpus
to contain all morphological forms of words. We propose a solution to generate
these unseen morphological forms and inject them into original training
corpora. In this paper, we study factored models and the problem of sparseness
in context of translation to morphologically rich languages. We propose a
simple and effective solution which is based on enriching the input with
various morphological forms of words. We observe that morphology injection
improves the quality of translation in terms of both adequacy and fluency. We
verify this with the experiments on two morphologically rich languages: Hindi
and Marathi, while translating from English.
|
[
{
"created": "Sat, 16 Sep 2017 09:40:36 GMT",
"version": "v1"
}
] |
2017-09-19
|
[
[
"S",
"Sreelekha",
""
],
[
"Bhattacharyya",
"Pushpak",
""
]
] |
Phrase-based Statistical models are more commonly used as they perform optimally in terms of both, translation quality and complexity of the system. Hindi and in general all Indian languages are morphologically richer than English. Hence, even though Phrase-based systems perform very well for the less divergent language pairs, for English to Indian language translation, we need more linguistic information (such as morphology, parse tree, parts of speech tags, etc.) on the source side. Factored models seem to be useful in this case, as Factored models consider word as a vector of factors. These factors can contain any information about the surface word and use it while translating. Hence, the objective of this work is to handle morphological inflections in Hindi and Marathi using Factored translation models while translating from English. SMT approaches face the problem of data sparsity while translating into a morphologically rich language. It is very unlikely for a parallel corpus to contain all morphological forms of words. We propose a solution to generate these unseen morphological forms and inject them into original training corpora. In this paper, we study factored models and the problem of sparseness in context of translation to morphologically rich languages. We propose a simple and effective solution which is based on enriching the input with various morphological forms of words. We observe that morphology injection improves the quality of translation in terms of both adequacy and fluency. We verify this with the experiments on two morphologically rich languages: Hindi and Marathi, while translating from English.
|
2306.02519
|
Ted Sanders
|
Ari Allyn-Feuer and Ted Sanders
|
Transformative AGI by 2043 is <1% likely
|
114 pages
| null | null | null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper is a submission to the Open Philanthropy AI Worldviews Contest. In
it, we estimate the likelihood of transformative artificial general
intelligence (AGI) by 2043 and find it to be <1%.
Specifically, we argue:
The bar is high: AGI as defined by the contest - something like AI that can
perform nearly all valuable tasks at human cost or less - which we will call
transformative AGI is a much higher bar than merely massive progress in AI, or
even the unambiguous attainment of expensive superhuman AGI or cheap but uneven
AGI.
Many steps are needed: The probability of transformative AGI by 2043 can be
decomposed as the joint probability of a number of necessary steps, which we
group into categories of software, hardware, and sociopolitical factors.
No step is guaranteed: For each step, we estimate a probability of success by
2043, conditional on prior steps being achieved. Many steps are quite
constrained by the short timeline, and our estimates range from 16% to 95%.
Therefore, the odds are low: Multiplying the cascading conditional
probabilities together, we estimate that transformative AGI by 2043 is 0.4%
likely. Reaching >10% seems to require probabilities that feel unreasonably
high, and even 3% seems unlikely.
Thoughtfully applying the cascading conditional probability approach to this
question yields lower probability values than is often supposed. This framework
helps enumerate the many future scenarios where humanity makes partial but
incomplete progress toward transformative AGI.
|
[
{
"created": "Mon, 5 Jun 2023 00:58:51 GMT",
"version": "v1"
}
] |
2023-06-06
|
[
[
"Allyn-Feuer",
"Ari",
""
],
[
"Sanders",
"Ted",
""
]
] |
This paper is a submission to the Open Philanthropy AI Worldviews Contest. In it, we estimate the likelihood of transformative artificial general intelligence (AGI) by 2043 and find it to be <1%. Specifically, we argue: The bar is high: AGI as defined by the contest - something like AI that can perform nearly all valuable tasks at human cost or less - which we will call transformative AGI is a much higher bar than merely massive progress in AI, or even the unambiguous attainment of expensive superhuman AGI or cheap but uneven AGI. Many steps are needed: The probability of transformative AGI by 2043 can be decomposed as the joint probability of a number of necessary steps, which we group into categories of software, hardware, and sociopolitical factors. No step is guaranteed: For each step, we estimate a probability of success by 2043, conditional on prior steps being achieved. Many steps are quite constrained by the short timeline, and our estimates range from 16% to 95%. Therefore, the odds are low: Multiplying the cascading conditional probabilities together, we estimate that transformative AGI by 2043 is 0.4% likely. Reaching >10% seems to require probabilities that feel unreasonably high, and even 3% seems unlikely. Thoughtfully applying the cascading conditional probability approach to this question yields lower probability values than is often supposed. This framework helps enumerate the many future scenarios where humanity makes partial but incomplete progress toward transformative AGI.
|
2405.14053
|
Henri Alam
|
Henri Alam, Antonio de Domenico, Florian Kaltenberger, David
L\'opez-P\'erez
|
On the Role of Non-Terrestrial Networks for Boosting Terrestrial Network
Performance in Dynamic Traffic Scenarios
|
To be published in IEEE International Symposium on Personal, Indoor
and Mobile Radio Communications 2024
| null | null | null |
cs.NI
|
http://creativecommons.org/licenses/by/4.0/
|
Due to an ever-expansive network deployment, numerous questions are being
raised regarding the energy consumption of the mobile network. Recently,
Non-Terrestrial Networks (NTNs) have proven to be a useful, and complementary
solution to Terrestrial Networks (TN) to provide ubiquitous coverage. In this
paper, we consider an integrated TN-NTN, and study how to maximize its resource
usage in a dynamic traffic scenario. We introduce BLASTER, a framework designed
to control User Equipment (UE) association, Base Station (BS) transmit power
and activation, and bandwidth allocation between the terrestrial and
non-terrestrial tiers. Our proposal is able to adapt to fluctuating daily
traffic, focusing on reducing power consumption throughout the network during
low traffic and distributing the load otherwise. Simulation results show an
average daily decrease of total power consumption by 45% compared to a network
model following 3GPP recommendation, as well as an average throughput increase
of roughly 250%. Our paper underlines the central and dynamic role that the NTN
plays in improving key areas of concern for network flexibility.
|
[
{
"created": "Wed, 22 May 2024 22:49:54 GMT",
"version": "v1"
}
] |
2024-05-24
|
[
[
"Alam",
"Henri",
""
],
[
"de Domenico",
"Antonio",
""
],
[
"Kaltenberger",
"Florian",
""
],
[
"López-Pérez",
"David",
""
]
] |
Due to an ever-expansive network deployment, numerous questions are being raised regarding the energy consumption of the mobile network. Recently, Non-Terrestrial Networks (NTNs) have proven to be a useful, and complementary solution to Terrestrial Networks (TN) to provide ubiquitous coverage. In this paper, we consider an integrated TN-NTN, and study how to maximize its resource usage in a dynamic traffic scenario. We introduce BLASTER, a framework designed to control User Equipment (UE) association, Base Station (BS) transmit power and activation, and bandwidth allocation between the terrestrial and non-terrestrial tiers. Our proposal is able to adapt to fluctuating daily traffic, focusing on reducing power consumption throughout the network during low traffic and distributing the load otherwise. Simulation results show an average daily decrease of total power consumption by 45% compared to a network model following 3GPP recommendation, as well as an average throughput increase of roughly 250%. Our paper underlines the central and dynamic role that the NTN plays in improving key areas of concern for network flexibility.
|
2202.08869
|
Jesse Pisel
|
Jesse R. Pisel, Joshua A. Dierker, Sanya Srivastava, Samira B.
Ravilisetty, Michael J. Pyrcz
|
A recommender system for automatic picking of subsurface formation tops
|
14 pages, 9 figures, 2 tables, 2 appendices
| null | null | null |
cs.IR
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Geoscience domain experts traditionally correlate formation tops in the
subsurface using geophysical well logs (known as well-log correlation) by-hand.
Based on individual well log interpretation and well-to-well comparisons, these
correlations are done in the context of depositional models within a
stratigraphic framework. Recently, many researchers have focused on automatic
well-log correlation using a variety of warping algorithms that measure well
similarity, and both unsupervised and supervised machine learning methods that
assign categorical labels based on known tops in many other wells. These
methods require a standardized suite of digital well logs (i.e. gamma ray logs
for every well) along with the depth to the top of the formations, which might
not be available in many cases. Herein, we propose a method that does not use
geophysical well logs for correlation, but rather uses already picked tops in
multiple wells to recommend the depth to the remaining unpicked tops in the
wells. This recommender system calculates the depth to all formation tops in
all the wells for two different datasets in two different basins. The Teapot
Dome dataset is composed of lithostratigraphic formation tops, and the
Mannville Group dataset is composed of sequence-stratigraphic (representing
multiple lithologic groups within a stratigraphic unit) formation tops. For the
demonstration, mean absolute error and root mean squared error of four-fold
cross-validation compares the recommender system predictions to the ground
truth human interpretations. The recommender system is competitive and often
outperforms state of the art spline interpolation methods. Lastly, increasing
the size of the training dataset decreases the prediction error, and that
variance in error decreases with increasing formation tops picked in each
formation and well for the lithostratigraphic top picks.
|
[
{
"created": "Thu, 17 Feb 2022 19:12:08 GMT",
"version": "v1"
}
] |
2022-02-21
|
[
[
"Pisel",
"Jesse R.",
""
],
[
"Dierker",
"Joshua A.",
""
],
[
"Srivastava",
"Sanya",
""
],
[
"Ravilisetty",
"Samira B.",
""
],
[
"Pyrcz",
"Michael J.",
""
]
] |
Geoscience domain experts traditionally correlate formation tops in the subsurface using geophysical well logs (known as well-log correlation) by-hand. Based on individual well log interpretation and well-to-well comparisons, these correlations are done in the context of depositional models within a stratigraphic framework. Recently, many researchers have focused on automatic well-log correlation using a variety of warping algorithms that measure well similarity, and both unsupervised and supervised machine learning methods that assign categorical labels based on known tops in many other wells. These methods require a standardized suite of digital well logs (i.e. gamma ray logs for every well) along with the depth to the top of the formations, which might not be available in many cases. Herein, we propose a method that does not use geophysical well logs for correlation, but rather uses already picked tops in multiple wells to recommend the depth to the remaining unpicked tops in the wells. This recommender system calculates the depth to all formation tops in all the wells for two different datasets in two different basins. The Teapot Dome dataset is composed of lithostratigraphic formation tops, and the Mannville Group dataset is composed of sequence-stratigraphic (representing multiple lithologic groups within a stratigraphic unit) formation tops. For the demonstration, mean absolute error and root mean squared error of four-fold cross-validation compares the recommender system predictions to the ground truth human interpretations. The recommender system is competitive and often outperforms state of the art spline interpolation methods. Lastly, increasing the size of the training dataset decreases the prediction error, and that variance in error decreases with increasing formation tops picked in each formation and well for the lithostratigraphic top picks.
|
1910.04301
|
Yueming Lyu
|
Yueming Lyu and Ivor W. Tsang
|
Black-box Optimizer with Implicit Natural Gradient
|
Black-box Optimization
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Black-box optimization is primarily important for many compute-intensive
applications, including reinforcement learning (RL), robot control, etc. This
paper presents a novel theoretical framework for black-box optimization, in
which our method performs stochastic update with the implicit natural gradient
of an exponential-family distribution. Theoretically, we prove the convergence
rate of our framework with full matrix update for convex functions. Our
theoretical results also hold for continuous non-differentiable black-box
functions. Our methods are very simple and contain fewer hyper-parameters than
CMA-ES \cite{hansen2006cma}. Empirically, our method with full matrix update
achieves competitive performance compared with the state-of-the-art method
CMA-ES on benchmark test problems. Moreover, our methods can achieve
high optimization precision on some challenging test functions (e.g.,
$l_1$-norm ellipsoid test problem and Levy test problem), while methods with
explicit natural gradient, i.e., IGO \cite{ollivier2017information} with full
matrix update cannot. This shows the efficiency of our methods.
|
[
{
"created": "Wed, 9 Oct 2019 23:34:36 GMT",
"version": "v1"
},
{
"created": "Thu, 13 Feb 2020 23:22:51 GMT",
"version": "v2"
},
{
"created": "Wed, 9 Sep 2020 10:46:10 GMT",
"version": "v3"
}
] |
2020-09-10
|
[
[
"Lyu",
"Yueming",
""
],
[
"Tsang",
"Ivor W.",
""
]
] |
Black-box optimization is primarily important for many compute-intensive applications, including reinforcement learning (RL), robot control, etc. This paper presents a novel theoretical framework for black-box optimization, in which our method performs stochastic update with the implicit natural gradient of an exponential-family distribution. Theoretically, we prove the convergence rate of our framework with full matrix update for convex functions. Our theoretical results also hold for continuous non-differentiable black-box functions. Our methods are very simple and contain fewer hyper-parameters than CMA-ES \cite{hansen2006cma}. Empirically, our method with full matrix update achieves competitive performance compared with the state-of-the-art method CMA-ES on benchmark test problems. Moreover, our methods can achieve high optimization precision on some challenging test functions (e.g., $l_1$-norm ellipsoid test problem and Levy test problem), while methods with explicit natural gradient, i.e., IGO \cite{ollivier2017information} with full matrix update cannot. This shows the efficiency of our methods.
|
2310.07756
|
Yi Sui
|
Yi Sui, Tongzi Wu, Jesse C. Cresswell, Ga Wu, George Stein, Xiao Shi
Huang, Xiaochen Zhang, Maksims Volkovs
|
Self-supervised Representation Learning From Random Data Projectors
|
Published as a conference paper of ICLR 2024.
https://openreview.net/pdf?id=EpYnZpDpsQ
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Self-supervised representation learning~(SSRL) has advanced considerably by
exploiting the transformation invariance assumption under artificially designed
data augmentations. While augmentation-based SSRL algorithms push the
boundaries of performance in computer vision and natural language processing,
they are often not directly applicable to other data modalities, and can
conflict with application-specific data augmentation constraints. This paper
presents an SSRL approach that can be applied to any data modality and network
architecture because it does not rely on augmentations or masking.
Specifically, we show that high-quality data representations can be learned by
reconstructing random data projections. We evaluate the proposed approach on a
wide range of representation learning tasks that span diverse modalities and
real-world applications. We show that it outperforms multiple state-of-the-art
SSRL baselines. Due to its wide applicability and strong empirical results, we
argue that learning from randomness is a fruitful research direction worthy of
attention and further study.
|
[
{
"created": "Wed, 11 Oct 2023 18:00:01 GMT",
"version": "v1"
},
{
"created": "Wed, 20 Mar 2024 18:00:04 GMT",
"version": "v2"
}
] |
2024-03-22
|
[
[
"Sui",
"Yi",
""
],
[
"Wu",
"Tongzi",
""
],
[
"Cresswell",
"Jesse C.",
""
],
[
"Wu",
"Ga",
""
],
[
"Stein",
"George",
""
],
[
"Huang",
"Xiao Shi",
""
],
[
"Zhang",
"Xiaochen",
""
],
[
"Volkovs",
"Maksims",
""
]
] |
Self-supervised representation learning~(SSRL) has advanced considerably by exploiting the transformation invariance assumption under artificially designed data augmentations. While augmentation-based SSRL algorithms push the boundaries of performance in computer vision and natural language processing, they are often not directly applicable to other data modalities, and can conflict with application-specific data augmentation constraints. This paper presents an SSRL approach that can be applied to any data modality and network architecture because it does not rely on augmentations or masking. Specifically, we show that high-quality data representations can be learned by reconstructing random data projections. We evaluate the proposed approach on a wide range of representation learning tasks that span diverse modalities and real-world applications. We show that it outperforms multiple state-of-the-art SSRL baselines. Due to its wide applicability and strong empirical results, we argue that learning from randomness is a fruitful research direction worthy of attention and further study.
|
2307.00079
|
Channing Moore
|
R. Channing Moore, Daniel P. W. Ellis, Eduardo Fonseca, Shawn Hershey,
Aren Jansen, Manoj Plakal
|
Dataset balancing can hurt model performance
|
5 pages, 3 figures, ICASSP 2023
|
ICASSP 2023 - 2023 IEEE International Conference on Acoustics,
Speech and Signal Processing (ICASSP), Rhodes Island, Greece, 2023, pp. 1-5
|
10.1109/ICASSP49357.2023.10095255
| null |
cs.LG cs.SD eess.AS
|
http://creativecommons.org/licenses/by/4.0/
|
Machine learning from training data with a skewed distribution of examples
per class can lead to models that favor performance on common classes at the
expense of performance on rare ones. AudioSet has a very wide range of priors
over its 527 sound event classes. Classification performance on AudioSet is
usually evaluated by a simple average over per-class metrics, meaning that
performance on rare classes is equal in importance to the performance on common
ones. Several recent papers have used dataset balancing techniques to improve
performance on AudioSet. We find, however, that while balancing improves
performance on the public AudioSet evaluation data it simultaneously hurts
performance on an unpublished evaluation set collected under the same
conditions. By varying the degree of balancing, we show that its benefits are
fragile and depend on the evaluation set. We also do not find evidence
indicating that balancing improves rare class performance relative to common
classes. We therefore caution against blind application of balancing, as well
as against paying too much attention to small improvements on a public
evaluation set.
|
[
{
"created": "Fri, 30 Jun 2023 18:33:27 GMT",
"version": "v1"
}
] |
2023-07-04
|
[
[
"Moore",
"R. Channing",
""
],
[
"Ellis",
"Daniel P. W.",
""
],
[
"Fonseca",
"Eduardo",
""
],
[
"Hershey",
"Shawn",
""
],
[
"Jansen",
"Aren",
""
],
[
"Plakal",
"Manoj",
""
]
] |
Machine learning from training data with a skewed distribution of examples per class can lead to models that favor performance on common classes at the expense of performance on rare ones. AudioSet has a very wide range of priors over its 527 sound event classes. Classification performance on AudioSet is usually evaluated by a simple average over per-class metrics, meaning that performance on rare classes is equal in importance to the performance on common ones. Several recent papers have used dataset balancing techniques to improve performance on AudioSet. We find, however, that while balancing improves performance on the public AudioSet evaluation data it simultaneously hurts performance on an unpublished evaluation set collected under the same conditions. By varying the degree of balancing, we show that its benefits are fragile and depend on the evaluation set. We also do not find evidence indicating that balancing improves rare class performance relative to common classes. We therefore caution against blind application of balancing, as well as against paying too much attention to small improvements on a public evaluation set.
|
1109.2944
|
Zhengdao Wang
|
Zhengdao Wang
|
Real Interference Alignment and Degrees of Freedom Region of Wireless X
Networks
|
5 pages, 2 figures
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider a single hop wireless X network with $K$ transmitters and $J$
receivers, all with single antenna. Each transmitter conveys for each receiver
an independent message. The channel is assumed to have constant coefficients.
We develop an interference alignment scheme for this setup and derive several
achievable degrees-of-freedom regions. We show that in some cases, the derived
region meets a previous outer bound and is hence the DoF region. For our
achievability schemes, we divide each message into streams and use real
interference alignment on the streams. Several previous results on the DoF
region and total DoF for various special cases can be recovered from our
result.
|
[
{
"created": "Tue, 13 Sep 2011 22:01:36 GMT",
"version": "v1"
}
] |
2011-09-15
|
[
[
"Wang",
"Zhengdao",
""
]
] |
We consider a single hop wireless X network with $K$ transmitters and $J$ receivers, all with single antenna. Each transmitter conveys for each receiver an independent message. The channel is assumed to have constant coefficients. We develop an interference alignment scheme for this setup and derive several achievable degrees-of-freedom regions. We show that in some cases, the derived region meets a previous outer bound and is hence the DoF region. For our achievability schemes, we divide each message into streams and use real interference alignment on the streams. Several previous results on the DoF region and total DoF for various special cases can be recovered from our result.
|
1910.10384
|
Mohammadjavad Salehi
|
MohammadJavad Salehi, Antti T\"olli, Seyed Pooya Shariatpanahi
|
A Multi-Antenna Coded Caching Scheme with Linear Subpacketization
|
6 Pages
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Exponentially growing subpacketization is known to be a major issue for
practical implementation of coded caching, specially in networks with
multi-antenna communication setups. We provide a new coded caching scheme for
such networks, which requires linear subpacketization and is applicable to any
set of network parameters, as long as the multi-antenna gain $L$ is larger than
or equal to the global caching gain $t$. Our scheme includes carefully designed
cache placement and delivery algorithms; which are based on circular shift of
two generator arrays in perpendicular directions. It also achieves the maximum
possible degrees of freedom of $t+L$, during any transmission interval.
|
[
{
"created": "Wed, 23 Oct 2019 06:51:58 GMT",
"version": "v1"
},
{
"created": "Tue, 29 Oct 2019 10:38:36 GMT",
"version": "v2"
}
] |
2019-10-30
|
[
[
"Salehi",
"MohammadJavad",
""
],
[
"Tölli",
"Antti",
""
],
[
"Shariatpanahi",
"Seyed Pooya",
""
]
] |
Exponentially growing subpacketization is known to be a major issue for practical implementation of coded caching, especially in networks with multi-antenna communication setups. We provide a new coded caching scheme for such networks, which requires linear subpacketization and is applicable to any set of network parameters, as long as the multi-antenna gain $L$ is larger than or equal to the global caching gain $t$. Our scheme includes carefully designed cache placement and delivery algorithms; which are based on circular shift of two generator arrays in perpendicular directions. It also achieves the maximum possible degrees of freedom of $t+L$, during any transmission interval.
|
2105.14874
|
Zhou Yang
|
Zhou Yang, Muhammad Hilmi Asyrofi and David Lo
|
BiasRV: Uncovering Biased Sentiment Predictions at Runtime
|
Accepted to appear in the Demonstrations track of the ESEC/FSE 2021
| null |
10.1145/3468264.3473117
| null |
cs.SE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sentiment analysis (SA) systems, though widely applied in many domains, have
been demonstrated to produce biased results. Some research works have been done
in automatically generating test cases to reveal unfairness in SA systems, but
the community still lacks tools that can monitor and uncover biased predictions
at runtime. This paper fills this gap by proposing BiasRV, the first tool to
raise an alarm when a deployed SA system makes a biased prediction on a given
input text. To implement this feature, BiasRV dynamically extracts a template
from an input text and from the template generates gender-discriminatory
mutants (semantically-equivalent texts that only differ in gender information).
Based on popular metrics used to evaluate the overall fairness of an SA system,
we define a distributional fairness property for an individual prediction of an
SA system. This property specifies a requirement that for one piece of text,
mutants from different gender classes should be treated similarly as a whole.
Verifying the distributional fairness property causes much overhead to the
running system. To run more efficiently, BiasRV adopts a two-step heuristic:
(1) sampling several mutants from each gender and checking if the system
predicts them as of the same sentiment, (2) checking distributional fairness
only when sampled mutants have conflicting results. Experiments show that
compared to directly checking the distributional fairness property for each
input text, our two-step heuristic can decrease overhead used for analyzing
mutants by 73.81% while only resulting in 6.7% of biased predictions being
missed. Besides, BiasRV can be used conveniently without knowing the
implementation of SA systems. Future researchers can easily extend BiasRV to
detect more types of bias, e.g. race and occupation.
|
[
{
"created": "Mon, 31 May 2021 10:55:29 GMT",
"version": "v1"
},
{
"created": "Sat, 26 Jun 2021 08:30:12 GMT",
"version": "v2"
}
] |
2022-01-06
|
[
[
"Yang",
"Zhou",
""
],
[
"Asyrofi",
"Muhammad Hilmi",
""
],
[
"Lo",
"David",
""
]
] |
Sentiment analysis (SA) systems, though widely applied in many domains, have been demonstrated to produce biased results. Some research works have been done in automatically generating test cases to reveal unfairness in SA systems, but the community still lacks tools that can monitor and uncover biased predictions at runtime. This paper fills this gap by proposing BiasRV, the first tool to raise an alarm when a deployed SA system makes a biased prediction on a given input text. To implement this feature, BiasRV dynamically extracts a template from an input text and from the template generates gender-discriminatory mutants (semantically-equivalent texts that only differ in gender information). Based on popular metrics used to evaluate the overall fairness of an SA system, we define a distributional fairness property for an individual prediction of an SA system. This property specifies a requirement that for one piece of text, mutants from different gender classes should be treated similarly as a whole. Verifying the distributional fairness property causes much overhead to the running system. To run more efficiently, BiasRV adopts a two-step heuristic: (1) sampling several mutants from each gender and checking if the system predicts them as of the same sentiment, (2) checking distributional fairness only when sampled mutants have conflicting results. Experiments show that compared to directly checking the distributional fairness property for each input text, our two-step heuristic can decrease overhead used for analyzing mutants by 73.81% while only resulting in 6.7% of biased predictions being missed. Besides, BiasRV can be used conveniently without knowing the implementation of SA systems. Future researchers can easily extend BiasRV to detect more types of bias, e.g. race and occupation.
|
2402.11487
|
Tanzila Rahman
|
Tanzila Rahman, Shweta Mahajan, Hsin-Ying Lee, Jian Ren, Sergey
Tulyakov, Leonid Sigal
|
Visual Concept-driven Image Generation with Text-to-Image Diffusion
Model
|
11 Figures, 14 Pages, 2 tables
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Text-to-image (TTI) diffusion models have demonstrated impressive results in
generating high-resolution images of complex and imaginative scenes. Recent
approaches have further extended these methods with personalization techniques
that allow them to integrate user-illustrated concepts (e.g., the user
him/herself) using a few sample image illustrations. However, the ability to
generate images with multiple interacting concepts, such as human subjects, as
well as concepts that may be entangled in one, or across multiple, image
illustrations remains elusive. In this work, we propose a concept-driven TTI
personalization framework that addresses these core challenges. We build on
existing works that learn custom tokens for user-illustrated concepts, allowing
those to interact with existing text tokens in the TTI model. However,
importantly, to disentangle and better learn the concepts in question, we
jointly learn (latent) segmentation masks that disentangle these concepts in
user-provided image illustrations. We do so by introducing an Expectation
Maximization (EM)-like optimization procedure where we alternate between
learning the custom tokens and estimating (latent) masks encompassing
corresponding concepts in user-supplied images. We obtain these masks based on
cross-attention, from within the U-Net parameterized latent diffusion model and
subsequent DenseCRF optimization. We illustrate that such joint alternating
refinement leads to the learning of better tokens for concepts and, as a
by-product, latent masks. We illustrate the benefits of the proposed approach
qualitatively and quantitatively with several examples and use cases that can
combine three or more entangled concepts.
|
[
{
"created": "Sun, 18 Feb 2024 07:28:37 GMT",
"version": "v1"
},
{
"created": "Wed, 17 Jul 2024 01:47:16 GMT",
"version": "v2"
}
] |
2024-07-18
|
[
[
"Rahman",
"Tanzila",
""
],
[
"Mahajan",
"Shweta",
""
],
[
"Lee",
"Hsin-Ying",
""
],
[
"Ren",
"Jian",
""
],
[
"Tulyakov",
"Sergey",
""
],
[
"Sigal",
"Leonid",
""
]
] |
Text-to-image (TTI) diffusion models have demonstrated impressive results in generating high-resolution images of complex and imaginative scenes. Recent approaches have further extended these methods with personalization techniques that allow them to integrate user-illustrated concepts (e.g., the user him/herself) using a few sample image illustrations. However, the ability to generate images with multiple interacting concepts, such as human subjects, as well as concepts that may be entangled in one, or across multiple, image illustrations remains elusive. In this work, we propose a concept-driven TTI personalization framework that addresses these core challenges. We build on existing works that learn custom tokens for user-illustrated concepts, allowing those to interact with existing text tokens in the TTI model. However, importantly, to disentangle and better learn the concepts in question, we jointly learn (latent) segmentation masks that disentangle these concepts in user-provided image illustrations. We do so by introducing an Expectation Maximization (EM)-like optimization procedure where we alternate between learning the custom tokens and estimating (latent) masks encompassing corresponding concepts in user-supplied images. We obtain these masks based on cross-attention, from within the U-Net parameterized latent diffusion model and subsequent DenseCRF optimization. We illustrate that such joint alternating refinement leads to the learning of better tokens for concepts and, as a by-product, latent masks. We illustrate the benefits of the proposed approach qualitatively and quantitatively with several examples and use cases that can combine three or more entangled concepts.
|
2402.17169
|
Fabio Miranda
|
Kazi Shahrukh Omar, Gustavo Moreira, Daniel Hodczak, Maryam Hosseini,
Nicola Colaninno, Marcos Lage, Fabio Miranda
|
Deep Umbra: A Generative Approach for Sunlight Access Computation in
Urban Spaces
|
Accepted at IEEE Transactions on Big Data. Deep Umbra is available at
https://urbantk.org/shadows
| null |
10.1109/TBDATA.2024.3382964
| null |
cs.CV cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sunlight and shadow play critical roles in how urban spaces are utilized,
thrive, and grow. While access to sunlight is essential to the success of urban
environments, shadows can provide shaded places to stay during the hot seasons,
mitigate heat island effect, and increase pedestrian comfort levels. Properly
quantifying sunlight access and shadows in large urban environments is key in
tackling some of the important challenges facing cities today. In this paper,
we propose Deep Umbra, a novel computational framework that enables the
quantification of sunlight access and shadows at a global scale. Our framework
is based on a conditional generative adversarial network that considers the
physical form of cities to compute high-resolution spatial information of
accumulated sunlight access for the different seasons of the year. We use data
from seven different cities to train our model, and show, through an extensive
set of experiments, its low overall RMSE (below 0.1) as well as its
extensibility to cities that were not part of the training set. Additionally,
we contribute a set of case studies and a comprehensive dataset with sunlight
access information for more than 100 cities across six continents of the world.
Deep Umbra is available at https://urbantk.org/shadows.
|
[
{
"created": "Tue, 27 Feb 2024 03:05:05 GMT",
"version": "v1"
}
] |
2024-07-02
|
[
[
"Omar",
"Kazi Shahrukh",
""
],
[
"Moreira",
"Gustavo",
""
],
[
"Hodczak",
"Daniel",
""
],
[
"Hosseini",
"Maryam",
""
],
[
"Colaninno",
"Nicola",
""
],
[
"Lage",
"Marcos",
""
],
[
"Miranda",
"Fabio",
""
]
] |
Sunlight and shadow play critical roles in how urban spaces are utilized, thrive, and grow. While access to sunlight is essential to the success of urban environments, shadows can provide shaded places to stay during the hot seasons, mitigate heat island effect, and increase pedestrian comfort levels. Properly quantifying sunlight access and shadows in large urban environments is key in tackling some of the important challenges facing cities today. In this paper, we propose Deep Umbra, a novel computational framework that enables the quantification of sunlight access and shadows at a global scale. Our framework is based on a conditional generative adversarial network that considers the physical form of cities to compute high-resolution spatial information of accumulated sunlight access for the different seasons of the year. We use data from seven different cities to train our model, and show, through an extensive set of experiments, its low overall RMSE (below 0.1) as well as its extensibility to cities that were not part of the training set. Additionally, we contribute a set of case studies and a comprehensive dataset with sunlight access information for more than 100 cities across six continents of the world. Deep Umbra is available at https://urbantk.org/shadows.
|
2206.03388
|
Paulo Rezeck
|
Paulo Rezeck and Luiz Chaimowicz
|
Chemistry-Inspired Pattern Formation with Robotic Swarms
|
Submitted to IEEE RA-L/IROS 2022
|
IEEE Robotics and Automation Letters (2022)
|
10.1109/LRA.2022.3190638
|
21951131
|
cs.RO
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Self-organized emergent patterns can be widely seen in particle interactions
producing complex structures such as chemical elements and molecules. Inspired
by these interactions, this work presents a novel stochastic approach that
allows a swarm of heterogeneous robots to create emergent patterns in a
completely decentralized fashion, relying only on local information. Our
approach consists of modeling the swarm configuration as a dynamic Gibbs Random
Field (GRF) and setting constraints on the neighborhood system inspired by
chemistry rules that dictate binding polarity between particles. Using the GRF
model, we determine velocities for each robot, resulting in behaviors that lead
to the creation of patterns or shapes. Simulated experiments show the
versatility of the approach in producing a variety of patterns, and experiments
with a group of physical robots show the feasibility in potential applications.
|
[
{
"created": "Tue, 7 Jun 2022 15:31:29 GMT",
"version": "v1"
}
] |
2022-09-01
|
[
[
"Rezeck",
"Paulo",
""
],
[
"Chaimowicz",
"Luiz",
""
]
] |
Self-organized emergent patterns can be widely seen in particle interactions producing complex structures such as chemical elements and molecules. Inspired by these interactions, this work presents a novel stochastic approach that allows a swarm of heterogeneous robots to create emergent patterns in a completely decentralized fashion, relying only on local information. Our approach consists of modeling the swarm configuration as a dynamic Gibbs Random Field (GRF) and setting constraints on the neighborhood system inspired by chemistry rules that dictate binding polarity between particles. Using the GRF model, we determine velocities for each robot, resulting in behaviors that lead to the creation of patterns or shapes. Simulated experiments show the versatility of the approach in producing a variety of patterns, and experiments with a group of physical robots show the feasibility in potential applications.
|
1909.10266
|
Felix Hamborg
|
Felix Hamborg, Philipp Meschenmoser, Moritz Schubotz, Bela Gipp
|
NewsDeps: Visualizing the Origin of Information in News Articles
| null | null | null | null |
cs.IR
|
http://creativecommons.org/licenses/by/4.0/
|
In scientific publications, citations allow readers to assess the
authenticity of the presented information and verify it in the original
context. News articles, however, do not contain citations and only rarely refer
readers to further sources. Readers often cannot assess the authenticity of the
presented information as its origin is unclear. We present NewsDeps, the first
approach that analyzes and visualizes where information in news articles stems
from. NewsDeps employs methods from natural language processing and plagiarism
detection to measure article similarity. We devise a temporal-force-directed
graph that places articles as nodes chronologically. The graph connects
articles by edges varying in width depending on the articles' similarity. We
demonstrate our approach in a case study with two real-world scenarios. We find
that NewsDeps increases efficiency and transparency in news consumption by
revealing which previously published articles are the primary sources of each
given article.
|
[
{
"created": "Mon, 23 Sep 2019 10:25:24 GMT",
"version": "v1"
}
] |
2019-09-27
|
[
[
"Hamborg",
"Felix",
""
],
[
"Meschenmoser",
"Philipp",
""
],
[
"Schubotz",
"Moritz",
""
],
[
"Gipp",
"Bela",
""
]
] |
In scientific publications, citations allow readers to assess the authenticity of the presented information and verify it in the original context. News articles, however, do not contain citations and only rarely refer readers to further sources. Readers often cannot assess the authenticity of the presented information as its origin is unclear. We present NewsDeps, the first approach that analyzes and visualizes where information in news articles stems from. NewsDeps employs methods from natural language processing and plagiarism detection to measure article similarity. We devise a temporal-force-directed graph that places articles as nodes chronologically. The graph connects articles by edges varying in width depending on the articles' similarity. We demonstrate our approach in a case study with two real-world scenarios. We find that NewsDeps increases efficiency and transparency in news consumption by revealing which previously published articles are the primary sources of each given article.
|
2405.18085
|
Micha{\l} Czuba Mr
|
Micha{\l} Czuba, Mateusz Nurek, Damian Serwata, Yu-Xuan Qiu, Mingshan
Jia, Katarzyna Musial, Rados{\l}aw Michalski, Piotr Br\'odka
|
Network Diffusion -- Framework to Simulate Spreading Processes in
Complex Networks
|
To be published in: Big Data Mining and Analytics
(https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=8254253)
| null |
10.26599/BDMA.2024.9020010
| null |
cs.SI cs.MA cs.SY eess.SY
|
http://creativecommons.org/licenses/by/4.0/
|
With the advancement of computational network science, its research scope has
significantly expanded beyond static graphs to encompass more complex
structures. The introduction of streaming, temporal, multilayer, and
hypernetwork approaches has brought new possibilities and imposed additional
requirements. For instance, by utilising these advancements, one can model
structures such as social networks in a much more refined manner, which is
particularly relevant in simulations of the spreading processes. Unfortunately,
the pace of advancement is often too rapid for existing computational packages
to keep up with the functionality updates. This results in a significant
proliferation of tools used by researchers and, consequently, a lack of a
universally accepted technological stack that would standardise experimental
methods (as seen, e.g. in machine learning). This article addresses that issue
by presenting an extended version of the Network Diffusion library. First, we
survey the existing approaches and toolkits for simulating spreading
phenomena, and then give an overview of the framework's functionalities.
Finally, we report four case studies conducted with the package to demonstrate
its usefulness: the impact of sanitary measures on the spread of COVID-19, the
comparison of information diffusion on two temporal network models, and the
effectiveness of seed selection methods in the task of influence maximisation
in multilayer networks. We conclude the paper with a critical assessment of the
library and an outline of the remaining challenges in standardising research
environments in computational network science.
|
[
{
"created": "Tue, 28 May 2024 11:46:18 GMT",
"version": "v1"
}
] |
2024-05-29
|
[
[
"Czuba",
"Michał",
""
],
[
"Nurek",
"Mateusz",
""
],
[
"Serwata",
"Damian",
""
],
[
"Qiu",
"Yu-Xuan",
""
],
[
"Jia",
"Mingshan",
""
],
[
"Musial",
"Katarzyna",
""
],
[
"Michalski",
"Radosław",
""
],
[
"Bródka",
"Piotr",
""
]
] |
With the advancement of computational network science, its research scope has significantly expanded beyond static graphs to encompass more complex structures. The introduction of streaming, temporal, multilayer, and hypernetwork approaches has brought new possibilities and imposed additional requirements. For instance, by utilising these advancements, one can model structures such as social networks in a much more refined manner, which is particularly relevant in simulations of the spreading processes. Unfortunately, the pace of advancement is often too rapid for existing computational packages to keep up with the functionality updates. This results in a significant proliferation of tools used by researchers and, consequently, a lack of a universally accepted technological stack that would standardise experimental methods (as seen, e.g. in machine learning). This article addresses that issue by presenting an extended version of the Network Diffusion library. First, we survey the existing approaches and toolkits for simulating spreading phenomena, and then give an overview of the framework's functionalities. Finally, we report four case studies conducted with the package to demonstrate its usefulness: the impact of sanitary measures on the spread of COVID-19, the comparison of information diffusion on two temporal network models, and the effectiveness of seed selection methods in the task of influence maximisation in multilayer networks. We conclude the paper with a critical assessment of the library and an outline of the remaining challenges in standardising research environments in computational network science.
|