| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2106.11652 | Zhiwei Xu | Zhiwei Xu, Dapeng Li, Yunpeng Bai, Guoliang Fan | MMD-MIX: Value Function Factorisation with Maximum Mean Discrepancy for Cooperative Multi-Agent Reinforcement Learning | 7 pages, 2 figures, 2 tables. Accepted by IJCNN 2021 | null | null | null | cs.MA cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the real world, many tasks require multiple agents to cooperate with each other given only local observations. To solve such problems, many multi-agent reinforcement learning methods based on Centralized Training with Decentralized Execution have been proposed. One representative class of work is value decomposition, which decomposes the global joint Q-value $Q_\text{jt}$ into individual Q-values $Q_a$ to guide individuals' behaviors, e.g., VDN (Value-Decomposition Networks) and QMIX. However, these baselines often ignore the randomness in the situation. We propose MMD-MIX, a method that combines distributional reinforcement learning and value decomposition to alleviate this weakness. In addition, inspired by REM (Random Ensemble Mixture), a robust RL algorithm, we explicitly introduce randomness into MMD-MIX to improve data sampling efficiency. The experiments demonstrate that MMD-MIX outperforms prior baselines in the StarCraft Multi-Agent Challenge (SMAC) environment. | [{"created": "Tue, 22 Jun 2021 10:21:00 GMT", "version": "v1"}] | 2021-06-23 | [["Xu", "Zhiwei", ""], ["Li", "Dapeng", ""], ["Bai", "Yunpeng", ""], ["Fan", "Guoliang", ""]] | In the real world, many tasks require multiple agents to cooperate with each other given only local observations. To solve such problems, many multi-agent reinforcement learning methods based on Centralized Training with Decentralized Execution have been proposed. One representative class of work is value decomposition, which decomposes the global joint Q-value $Q_\text{jt}$ into individual Q-values $Q_a$ to guide individuals' behaviors, e.g., VDN (Value-Decomposition Networks) and QMIX. However, these baselines often ignore the randomness in the situation. We propose MMD-MIX, a method that combines distributional reinforcement learning and value decomposition to alleviate this weakness. In addition, inspired by REM (Random Ensemble Mixture), a robust RL algorithm, we explicitly introduce randomness into MMD-MIX to improve data sampling efficiency. The experiments demonstrate that MMD-MIX outperforms prior baselines in the StarCraft Multi-Agent Challenge (SMAC) environment. |
| 2405.11092 | Yuya Asano | Yuya Asano, Diane Litman, Quentin King-Shepard, Tristan Maidment, Tyree Langley, Teresa Davison, Timothy Nokes-Malach, Adriana Kovashka, Erin Walker | What metrics of participation balance predict outcomes of collaborative learning with a robot? | To appear in Seventeenth International Conference on Educational Data Mining (EDM 2024) | null | null | null | cs.HC cs.RO | http://creativecommons.org/licenses/by/4.0/ | One of the keys to the success of collaborative learning is balanced participation by all learners, but this does not always happen naturally. Pedagogical robots have the potential to facilitate balance. However, it remains unclear what participation balance robots should aim at; various metrics have been proposed, but it is still an open question whether we should balance human participation in human-human interactions (HHI) or human-robot interactions (HRI) and whether we should consider robots' participation in collaborative learning involving multiple humans and a robot. This paper examines collaborative learning between a pair of students and a teachable robot that acts as a peer tutee to answer the aforementioned question. Through an exploratory study, we hypothesize which balance metrics in the literature and which portions of dialogues (including vs. excluding robots' participation, and human participation in HHI vs. HRI) will better predict learning as a group. We test the hypotheses with another study and replicate them with automatically obtained units of participation to simulate the information available to robots when they adaptively fix imbalances in real time. Finally, we discuss recommendations on which metrics learning science researchers should choose when trying to understand how to facilitate collaboration. | [{"created": "Fri, 17 May 2024 21:06:34 GMT", "version": "v1"}] | 2024-05-21 | [["Asano", "Yuya", ""], ["Litman", "Diane", ""], ["King-Shepard", "Quentin", ""], ["Maidment", "Tristan", ""], ["Langley", "Tyree", ""], ["Davison", "Teresa", ""], ["Nokes-Malach", "Timothy", ""], ["Kovashka", "Adriana", ""], ["Walker", "Erin", ""]] | One of the keys to the success of collaborative learning is balanced participation by all learners, but this does not always happen naturally. Pedagogical robots have the potential to facilitate balance. However, it remains unclear what participation balance robots should aim at; various metrics have been proposed, but it is still an open question whether we should balance human participation in human-human interactions (HHI) or human-robot interactions (HRI) and whether we should consider robots' participation in collaborative learning involving multiple humans and a robot. This paper examines collaborative learning between a pair of students and a teachable robot that acts as a peer tutee to answer the aforementioned question. Through an exploratory study, we hypothesize which balance metrics in the literature and which portions of dialogues (including vs. excluding robots' participation, and human participation in HHI vs. HRI) will better predict learning as a group. We test the hypotheses with another study and replicate them with automatically obtained units of participation to simulate the information available to robots when they adaptively fix imbalances in real time. Finally, we discuss recommendations on which metrics learning science researchers should choose when trying to understand how to facilitate collaboration. |
| 2004.02003 | Sudhanshu Sane | Sudhanshu Sane, Abhishek Yenpure, Roxana Bujack, Matthew Larsen, Kenneth Moreland, Christoph Garth and Hank Childs | Scalable In Situ Lagrangian Flow Map Extraction: Demonstrating the Viability of a Communication-Free Model | null | null | 10.2312/pgv.20211040 | null | cs.CE cs.DC physics.comp-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce and evaluate a new algorithm for the in situ extraction of Lagrangian flow maps, which we call Boundary Termination Optimization (BTO). Our approach is a communication-free model, requiring no message passing or synchronization between processes, which improves scalability, reduces overall execution time, and alleviates the encumbrance placed on simulation codes by in situ processing. We terminate particle integration at node boundaries and store only a subset of the flow map that would have been extracted by communicating particles across nodes, thus introducing an accuracy-performance tradeoff. We run experiments with as many as 2048 GPUs and with multiple simulation data sets. For the experiment configurations we consider, our findings demonstrate that our communication-free technique saves as much as 2x to 4x in execution time in situ, while staying nearly as accurate, both quantitatively and qualitatively, as previous work. Most significantly, this study establishes the viability of approaching in situ Lagrangian flow map extraction with communication-free models in the future. | [{"created": "Sat, 4 Apr 2020 19:21:28 GMT", "version": "v1"}] | 2021-09-06 | [["Sane", "Sudhanshu", ""], ["Yenpure", "Abhishek", ""], ["Bujack", "Roxana", ""], ["Larsen", "Matthew", ""], ["Moreland", "Kenneth", ""], ["Garth", "Christoph", ""], ["Childs", "Hank", ""]] | We introduce and evaluate a new algorithm for the in situ extraction of Lagrangian flow maps, which we call Boundary Termination Optimization (BTO). Our approach is a communication-free model, requiring no message passing or synchronization between processes, which improves scalability, reduces overall execution time, and alleviates the encumbrance placed on simulation codes by in situ processing. We terminate particle integration at node boundaries and store only a subset of the flow map that would have been extracted by communicating particles across nodes, thus introducing an accuracy-performance tradeoff. We run experiments with as many as 2048 GPUs and with multiple simulation data sets. For the experiment configurations we consider, our findings demonstrate that our communication-free technique saves as much as 2x to 4x in execution time in situ, while staying nearly as accurate, both quantitatively and qualitatively, as previous work. Most significantly, this study establishes the viability of approaching in situ Lagrangian flow map extraction with communication-free models in the future. |
| 2010.11413 | Baihan Lin | Baihan Lin, Djallel Bouneffouf, Guillermo Cecchi | Predicting human decision making in psychological tasks with recurrent neural networks | To appear in PLOS ONE. Codes at https://github.com/doerlbh/HumanLSTM | PLOS ONE 17(5): e0267907 (2022) | 10.1371/journal.pone.0267907 | null | cs.LG cs.AI q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Unlike traditional time series, the action sequences of human decision making usually involve many cognitive processes such as beliefs, desires, intentions, and theory of mind, i.e., what others are thinking. This makes it challenging to predict human decision-making agnostically to the underlying psychological mechanisms. We propose here to use a recurrent neural network architecture based on long short-term memory networks (LSTM) to predict the time series of the actions taken by human subjects engaged in gaming activity, the first application of such methods in this research domain. In this study, we collate human data from 8 published studies of the Iterated Prisoner's Dilemma, comprising 168,386 individual decisions, and post-process them into 8,257 behavioral trajectories of 9 actions each for both players. Similarly, we collate 617 trajectories of 95 actions from 10 different published studies of Iowa Gambling Task experiments with healthy human subjects. We train our prediction networks on the behavioral data and demonstrate a clear advantage over the state-of-the-art methods in predicting human decision-making trajectories in both the single-agent scenario of the Iowa Gambling Task and the multi-agent scenario of the Iterated Prisoner's Dilemma. Moreover, we observe that the weights of the LSTM networks modeling the top performers tend to have a wider distribution compared to poor performers, as well as a larger bias, suggesting possible interpretations for the distribution of strategies adopted by each group. | [{"created": "Thu, 22 Oct 2020 03:36:03 GMT", "version": "v1"}, {"created": "Fri, 12 Nov 2021 22:45:58 GMT", "version": "v2"}, {"created": "Wed, 20 Apr 2022 16:28:20 GMT", "version": "v3"}] | 2022-06-07 | [["Lin", "Baihan", ""], ["Bouneffouf", "Djallel", ""], ["Cecchi", "Guillermo", ""]] | Unlike traditional time series, the action sequences of human decision making usually involve many cognitive processes such as beliefs, desires, intentions, and theory of mind, i.e., what others are thinking. This makes it challenging to predict human decision-making agnostically to the underlying psychological mechanisms. We propose here to use a recurrent neural network architecture based on long short-term memory networks (LSTM) to predict the time series of the actions taken by human subjects engaged in gaming activity, the first application of such methods in this research domain. In this study, we collate human data from 8 published studies of the Iterated Prisoner's Dilemma, comprising 168,386 individual decisions, and post-process them into 8,257 behavioral trajectories of 9 actions each for both players. Similarly, we collate 617 trajectories of 95 actions from 10 different published studies of Iowa Gambling Task experiments with healthy human subjects. We train our prediction networks on the behavioral data and demonstrate a clear advantage over the state-of-the-art methods in predicting human decision-making trajectories in both the single-agent scenario of the Iowa Gambling Task and the multi-agent scenario of the Iterated Prisoner's Dilemma. Moreover, we observe that the weights of the LSTM networks modeling the top performers tend to have a wider distribution compared to poor performers, as well as a larger bias, suggesting possible interpretations for the distribution of strategies adopted by each group. |
| 1303.3733 | Rodrigo de Lamare | Y. Cai and R. C. de Lamare | Adaptive Reduced-Rank MBER Linear Receive Processing for Large Multiuser MIMO Systems | 2 figures | ICASSP 2013 | null | null | cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we propose a novel adaptive reduced-rank strategy based on joint interpolation, decimation and filtering (JIDF) for large multiuser multiple-input multiple-output (MIMO) systems. In this scheme, a reduced-rank framework is proposed for linear receive processing and multiuser interference suppression according to the minimization of the bit error rate (BER) cost function. We present a structure with multiple processing branches that performs dimensionality reduction, where each branch contains a group of jointly optimized interpolation and decimation units, followed by a linear receive filter. We then develop stochastic gradient (SG) algorithms to compute the parameters of the interpolation and receive filters along with a low-complexity decimation technique. Simulation results are presented for time-varying environments and show that the proposed MBER-JIDF receive processing strategy and algorithms achieve superior performance to existing methods at reduced complexity. | [{"created": "Fri, 15 Mar 2013 11:13:29 GMT", "version": "v1"}] | 2013-03-18 | [["Cai", "Y.", ""], ["de Lamare", "R. C.", ""]] | In this work, we propose a novel adaptive reduced-rank strategy based on joint interpolation, decimation and filtering (JIDF) for large multiuser multiple-input multiple-output (MIMO) systems. In this scheme, a reduced-rank framework is proposed for linear receive processing and multiuser interference suppression according to the minimization of the bit error rate (BER) cost function. We present a structure with multiple processing branches that performs dimensionality reduction, where each branch contains a group of jointly optimized interpolation and decimation units, followed by a linear receive filter. We then develop stochastic gradient (SG) algorithms to compute the parameters of the interpolation and receive filters along with a low-complexity decimation technique. Simulation results are presented for time-varying environments and show that the proposed MBER-JIDF receive processing strategy and algorithms achieve superior performance to existing methods at reduced complexity. |
| 1810.13098 | Chao Li | Chao Li, Zhun Sun, Jinshi Yu, Ming Hou and Qibin Zhao | Low-Rank Embedding of Kernels in Convolutional Neural Networks under Random Shuffling | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although convolutional neural networks (CNNs) have recently become popular for various image processing and computer vision tasks, it remains a challenging problem to reduce the storage cost of their parameters for resource-limited platforms. In previous studies, tensor decomposition (TD) has achieved promising compression performance by embedding the kernel of a convolutional layer into a low-rank subspace. However, TD has been applied naively to the kernel or its specified variants. Unlike these conventional approaches, this paper shows that the kernel can be embedded into more general or even random low-rank subspaces. We demonstrate this by compressing the convolutional layers via randomly-shuffled tensor decomposition (RsTD) for a standard classification task using CIFAR-10. In addition, we analyze how the spatial similarity of the training data influences the low-rank structure of the kernels. The experimental results show that the CNN can be significantly compressed even if the kernels are randomly shuffled. Furthermore, the RsTD-based method yields more stable classification accuracy than conventional TD-based methods over a large range of compression ratios. | [{"created": "Wed, 31 Oct 2018 04:05:54 GMT", "version": "v1"}] | 2018-11-01 | [["Li", "Chao", ""], ["Sun", "Zhun", ""], ["Yu", "Jinshi", ""], ["Hou", "Ming", ""], ["Zhao", "Qibin", ""]] | Although convolutional neural networks (CNNs) have recently become popular for various image processing and computer vision tasks, it remains a challenging problem to reduce the storage cost of their parameters for resource-limited platforms. In previous studies, tensor decomposition (TD) has achieved promising compression performance by embedding the kernel of a convolutional layer into a low-rank subspace. However, TD has been applied naively to the kernel or its specified variants. Unlike these conventional approaches, this paper shows that the kernel can be embedded into more general or even random low-rank subspaces. We demonstrate this by compressing the convolutional layers via randomly-shuffled tensor decomposition (RsTD) for a standard classification task using CIFAR-10. In addition, we analyze how the spatial similarity of the training data influences the low-rank structure of the kernels. The experimental results show that the CNN can be significantly compressed even if the kernels are randomly shuffled. Furthermore, the RsTD-based method yields more stable classification accuracy than conventional TD-based methods over a large range of compression ratios. |
| 2010.01695 | Marius Schubert | Marius Schubert, Karsten Kahl, Matthias Rottmann | MetaDetect: Uncertainty Quantification and Prediction Quality Estimates for Object Detection | 11 pages, 5 figures, 5 tables | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In object detection with deep neural networks, the box-wise objectness score tends to be overconfident, sometimes even indicating high confidence in the presence of inaccurate predictions. Hence, the reliability of the prediction, and therefore reliable uncertainties, are of the highest interest. In this work, we present a post-processing method that provides predictive uncertainty estimates and quality estimates for any given neural network. These estimates are learned by a post-processing model that receives as input a hand-crafted set of transparent metrics in the form of a structured dataset. From these, we learn two tasks for predicted bounding boxes: we discriminate between true positives ($\mathit{IoU}\geq0.5$) and false positives ($\mathit{IoU} < 0.5$), which we term meta classification, and we predict $\mathit{IoU}$ values directly, which we term meta regression. The probabilities of the meta classification model aim at learning the probabilities of success and failure and therefore provide a modelled predictive uncertainty estimate. Meta regression, on the other hand, gives rise to a quality estimate. In numerical experiments, we use the publicly available YOLOv3 and Faster R-CNN networks and evaluate meta classification and regression performance on the KITTI, Pascal VOC and COCO datasets. We demonstrate that our metrics are indeed well correlated with the $\mathit{IoU}$. For meta classification we obtain classification accuracies of up to 98.92% and AUROCs of up to 99.93%. For meta regression we obtain an $R^2$ value of up to 91.78%. These results yield significant improvements compared to the networks' objectness scores and other baseline approaches. We therefore obtain more reliable uncertainty and quality estimates, which is particularly interesting in the absence of ground truth. | [{"created": "Sun, 4 Oct 2020 21:49:23 GMT", "version": "v1"}, {"created": "Tue, 6 Oct 2020 15:38:53 GMT", "version": "v2"}] | 2020-10-07 | [["Schubert", "Marius", ""], ["Kahl", "Karsten", ""], ["Rottmann", "Matthias", ""]] | In object detection with deep neural networks, the box-wise objectness score tends to be overconfident, sometimes even indicating high confidence in the presence of inaccurate predictions. Hence, the reliability of the prediction, and therefore reliable uncertainties, are of the highest interest. In this work, we present a post-processing method that provides predictive uncertainty estimates and quality estimates for any given neural network. These estimates are learned by a post-processing model that receives as input a hand-crafted set of transparent metrics in the form of a structured dataset. From these, we learn two tasks for predicted bounding boxes: we discriminate between true positives ($\mathit{IoU}\geq0.5$) and false positives ($\mathit{IoU} < 0.5$), which we term meta classification, and we predict $\mathit{IoU}$ values directly, which we term meta regression. The probabilities of the meta classification model aim at learning the probabilities of success and failure and therefore provide a modelled predictive uncertainty estimate. Meta regression, on the other hand, gives rise to a quality estimate. In numerical experiments, we use the publicly available YOLOv3 and Faster R-CNN networks and evaluate meta classification and regression performance on the KITTI, Pascal VOC and COCO datasets. We demonstrate that our metrics are indeed well correlated with the $\mathit{IoU}$. For meta classification we obtain classification accuracies of up to 98.92% and AUROCs of up to 99.93%. For meta regression we obtain an $R^2$ value of up to 91.78%. These results yield significant improvements compared to the networks' objectness scores and other baseline approaches. We therefore obtain more reliable uncertainty and quality estimates, which is particularly interesting in the absence of ground truth. |
| 2307.03493 | Gamze İslamoğlu | Gamze İslamoğlu, Moritz Scherer, Gianna Paulin, Tim Fischer, Victor J. B. Jung, Angelo Garofalo, Luca Benini | ITA: An Energy-Efficient Attention and Softmax Accelerator for Quantized Transformers | Accepted for publication at the 2023 ACM/IEEE International Symposium on Low Power Electronics and Design (ISLPED) | null | 10.1109/ISLPED58423.2023.10244348 | null | cs.AR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Transformer networks have emerged as the state-of-the-art approach for natural language processing tasks and are gaining popularity in other domains such as computer vision and audio processing. However, the efficient hardware acceleration of transformer models poses new challenges due to their high arithmetic intensities, large memory requirements, and complex dataflow dependencies. In this work, we propose ITA, a novel accelerator architecture for transformers and related models that targets efficient inference on embedded systems by exploiting 8-bit quantization and an innovative softmax implementation that operates exclusively on integer values. By computing on-the-fly in streaming mode, our softmax implementation minimizes data movement and energy consumption. ITA achieves competitive energy efficiency with respect to state-of-the-art transformer accelerators with 16.9 TOPS/W, while outperforming them in area efficiency with 5.93 TOPS/mm$^2$ in 22 nm fully-depleted silicon-on-insulator technology at 0.8 V. | [{"created": "Fri, 7 Jul 2023 10:05:38 GMT", "version": "v1"}, {"created": "Mon, 10 Jul 2023 06:08:45 GMT", "version": "v2"}] | 2024-07-29 | [["İslamoğlu", "Gamze", ""], ["Scherer", "Moritz", ""], ["Paulin", "Gianna", ""], ["Fischer", "Tim", ""], ["Jung", "Victor J. B.", ""], ["Garofalo", "Angelo", ""], ["Benini", "Luca", ""]] | Transformer networks have emerged as the state-of-the-art approach for natural language processing tasks and are gaining popularity in other domains such as computer vision and audio processing. However, the efficient hardware acceleration of transformer models poses new challenges due to their high arithmetic intensities, large memory requirements, and complex dataflow dependencies. In this work, we propose ITA, a novel accelerator architecture for transformers and related models that targets efficient inference on embedded systems by exploiting 8-bit quantization and an innovative softmax implementation that operates exclusively on integer values. By computing on-the-fly in streaming mode, our softmax implementation minimizes data movement and energy consumption. ITA achieves competitive energy efficiency with respect to state-of-the-art transformer accelerators with 16.9 TOPS/W, while outperforming them in area efficiency with 5.93 TOPS/mm$^2$ in 22 nm fully-depleted silicon-on-insulator technology at 0.8 V. |
| 2401.01624 | Ying Lv | Ying Lv, Zhi Liu, Gongyang Li | Context-Aware Interaction Network for RGB-T Semantic Segmentation | 13 pages, 7 figures, Accepted by IEEE Transactions on Multimedia 2024 | null | 10.1109/TMM.2023.3349072 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | RGB-T semantic segmentation is a key technique for autonomous driving scene understanding. Existing RGB-T semantic segmentation methods, however, do not effectively exploit the complementary relationship between different modalities in the information interaction across multiple levels. To address this issue, the Context-Aware Interaction Network (CAINet) is proposed for RGB-T semantic segmentation, which constructs an interaction space to exploit auxiliary tasks and global context for explicitly guided learning. Specifically, we propose a Context-Aware Complementary Reasoning (CACR) module aimed at establishing the complementary relationship between multimodal features with the long-term context in both spatial and channel dimensions. Further, considering the importance of global contextual and detailed information, we propose the Global Context Modeling (GCM) module and Detail Aggregation (DA) module, and we introduce specific auxiliary supervision to explicitly guide the context interaction and refine the segmentation map. Extensive experiments on two benchmark datasets, MFNet and PST900, demonstrate that the proposed CAINet achieves state-of-the-art performance. The code is available at https://github.com/YingLv1106/CAINet. | [{"created": "Wed, 3 Jan 2024 08:49:29 GMT", "version": "v1"}] | 2024-01-04 | [["Lv", "Ying", ""], ["Liu", "Zhi", ""], ["Li", "Gongyang", ""]] | RGB-T semantic segmentation is a key technique for autonomous driving scene understanding. Existing RGB-T semantic segmentation methods, however, do not effectively exploit the complementary relationship between different modalities in the information interaction across multiple levels. To address this issue, the Context-Aware Interaction Network (CAINet) is proposed for RGB-T semantic segmentation, which constructs an interaction space to exploit auxiliary tasks and global context for explicitly guided learning. Specifically, we propose a Context-Aware Complementary Reasoning (CACR) module aimed at establishing the complementary relationship between multimodal features with the long-term context in both spatial and channel dimensions. Further, considering the importance of global contextual and detailed information, we propose the Global Context Modeling (GCM) module and Detail Aggregation (DA) module, and we introduce specific auxiliary supervision to explicitly guide the context interaction and refine the segmentation map. Extensive experiments on two benchmark datasets, MFNet and PST900, demonstrate that the proposed CAINet achieves state-of-the-art performance. The code is available at https://github.com/YingLv1106/CAINet. |
| 2212.07345 | Federica Nenna | Federica Nenna, Davide Zanardi, Luciano Gamberini | Human-centric telerobotics: investigating users' performance and workload via VR-based eye-tracking measures | null | null | null | null | cs.HC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Virtual Reality (VR) is gaining ground in the robotics and teleoperation industry, opening new prospects as a novel computerized methodology for making humans interact with robots. In contrast with more conventional button-based teleoperation, VR allows users to use their physical movements to drive robotic systems in the virtual environment. The latest VR devices are also equipped with integrated eye-tracking, which constitutes an exceptional opportunity for monitoring users' workload online. However, such devices are fairly recent, and human factors have been consistently marginalized so far in telerobotics research. We thus covered these aspects by analyzing extensive behavioral data generated by 24 participants driving a simulated industrial robot in VR through a pick-and-place task. Users drove the robot via button-based and action-based controls and under low (single-task) and high (dual-task) mental demands. We collected self-reports, performance and eye-tracking data. Specifically, we asked i) how the interactive features of VR affect users' performance and workload, and additionally tested ii) the sensitivity of diverse eye parameters in monitoring users' vigilance and workload throughout the task. Users performed faster and more accurately, while also showing a lower mental workload, when using an action-based VR control. Among the eye parameters, pupil size was the most resilient indicator of workload, as it was highly correlated with the self-reports and was not affected by the user's degree of physical motion in VR. Our results thus bring a fresh human-centric overview of human-robot interactions in VR, and systematically demonstrate the potential of VR devices for monitoring human factors in telerobotics contexts. | [{"created": "Wed, 14 Dec 2022 17:13:40 GMT", "version": "v1"}] | 2022-12-15 | [["Nenna", "Federica", ""], ["Zanardi", "Davide", ""], ["Gamberini", "Luciano", ""]] | Virtual Reality (VR) is gaining ground in the robotics and teleoperation industry, opening new prospects as a novel computerized methodology for making humans interact with robots. In contrast with more conventional button-based teleoperation, VR allows users to use their physical movements to drive robotic systems in the virtual environment. The latest VR devices are also equipped with integrated eye-tracking, which constitutes an exceptional opportunity for monitoring users' workload online. However, such devices are fairly recent, and human factors have been consistently marginalized so far in telerobotics research. We thus covered these aspects by analyzing extensive behavioral data generated by 24 participants driving a simulated industrial robot in VR through a pick-and-place task. Users drove the robot via button-based and action-based controls and under low (single-task) and high (dual-task) mental demands. We collected self-reports, performance and eye-tracking data. Specifically, we asked i) how the interactive features of VR affect users' performance and workload, and additionally tested ii) the sensitivity of diverse eye parameters in monitoring users' vigilance and workload throughout the task. Users performed faster and more accurately, while also showing a lower mental workload, when using an action-based VR control. Among the eye parameters, pupil size was the most resilient indicator of workload, as it was highly correlated with the self-reports and was not affected by the user's degree of physical motion in VR. Our results thus bring a fresh human-centric overview of human-robot interactions in VR, and systematically demonstrate the potential of VR devices for monitoring human factors in telerobotics contexts. |
2210.01951
|
Alejandro Lancho
|
Alejandro Lancho, Alexander Fengler and Yury Polyanskiy
|
Finite-Blocklength Results for the A-channel: Applications to Unsourced
Random Access and Group Testing
|
11 pages, 4 figures, extended version of the paper presented at the
58th Annual Allerton Conference on Communication, Control, and Computing
(2022)
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present finite-blocklength achievability bounds for the unsourced
A-channel. In this multiple-access channel, users noiselessly transmit
codewords picked from a common codebook with entries generated from a $q$-ary
alphabet. At each channel use, the receiver observes the set of different
transmitted symbols but not their multiplicity. We show that the A-channel
finds applications in unsourced random-access (URA) and group testing.
Leveraging the insights provided by the finite-blocklength bounds and the
connection between URA and non-adaptive group testing through the A-channel, we
propose improved decoding methods for state-of-the-art A-channel codes and we
showcase how A-channel codes provide a new class of structured group testing
matrices. The developed bounds allow to evaluate the achievable error
probabilities of group testing matrices based on random A-channel codes for
arbitrary numbers of tests, items and defectives. We show that such a
construction asymptotically achieves the optimal number of tests. In addition,
every efficiently decodable A-channel code can be used to construct a group
testing matrix with sub-linear recovery time.
|
[
{
"created": "Tue, 4 Oct 2022 22:59:10 GMT",
"version": "v1"
}
] |
2022-10-06
|
[
[
"Lancho",
"Alejandro",
""
],
[
"Fengler",
"Alexander",
""
],
[
"Polyanskiy",
"Yury",
""
]
] |
We present finite-blocklength achievability bounds for the unsourced A-channel. In this multiple-access channel, users noiselessly transmit codewords picked from a common codebook with entries generated from a $q$-ary alphabet. At each channel use, the receiver observes the set of different transmitted symbols but not their multiplicity. We show that the A-channel finds applications in unsourced random-access (URA) and group testing. Leveraging the insights provided by the finite-blocklength bounds and the connection between URA and non-adaptive group testing through the A-channel, we propose improved decoding methods for state-of-the-art A-channel codes and we showcase how A-channel codes provide a new class of structured group testing matrices. The developed bounds allow us to evaluate the achievable error probabilities of group testing matrices based on random A-channel codes for arbitrary numbers of tests, items and defectives. We show that such a construction asymptotically achieves the optimal number of tests. In addition, every efficiently decodable A-channel code can be used to construct a group testing matrix with sub-linear recovery time.
|
0711.0538
|
Grenville Croll
|
Thomas A. Grossman
|
Spreadsheet Engineering: A Research Framework
|
12 Pages
|
Proc. European Spreadsheet Risks Int. Grp. 2002 23-34 ISBN 1 86166
182 7
| null | null |
cs.SE
| null |
Spreadsheet engineering adapts the lessons of software engineering to
spreadsheets, providing eight principles as a framework for organizing
spreadsheet programming recommendations. Spreadsheets raise issues inadequately
addressed by software engineering. Spreadsheets are a powerful modeling
language, allowing strategic rapid model change, and enabling exploratory
modeling. Spreadsheet users learn slowly with experience because they focus on
the problem domain, not programming. The heterogeneity of spreadsheet users
requires a taxonomy to guide recommendations. Deployment of best practices is
difficult and merits research.
|
[
{
"created": "Sun, 4 Nov 2007 19:24:57 GMT",
"version": "v1"
}
] |
2007-11-06
|
[
[
"Grossman",
"Thomas A.",
""
]
] |
Spreadsheet engineering adapts the lessons of software engineering to spreadsheets, providing eight principles as a framework for organizing spreadsheet programming recommendations. Spreadsheets raise issues inadequately addressed by software engineering. Spreadsheets are a powerful modeling language, allowing strategic rapid model change, and enabling exploratory modeling. Spreadsheet users learn slowly with experience because they focus on the problem domain, not programming. The heterogeneity of spreadsheet users requires a taxonomy to guide recommendations. Deployment of best practices is difficult and merits research.
|
1504.03561
|
Gregory Gutin
|
Jason Crampton, Andrei Gagarin, Gregory Gutin, Mark Jones and Magnus
Wahlstrom
|
On the Workflow Satisfiability Problem with Class-Independent
Constraints
| null | null | null | null |
cs.CR cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A workflow specification defines sets of steps and users. An authorization
policy determines for each user a subset of steps the user is allowed to
perform. Other security requirements, such as separation-of-duty, impose
constraints on which subsets of users may perform certain subsets of steps. The
\emph{workflow satisfiability problem} (WSP) is the problem of determining
whether there exists an assignment of users to workflow steps that satisfies
all such authorizations and constraints. An algorithm for solving WSP is
important, both as a static analysis tool for workflow specifications, and for
the construction of run-time reference monitors for workflow management
systems. Given the computational difficulty of WSP, it is important,
particularly for the second application, that such algorithms are as efficient
as possible.
We introduce class-independent constraints, enabling us to model scenarios
where the set of users is partitioned into groups, and the identities of the
user groups are irrelevant to the satisfaction of the constraint. We prove that
solving WSP is fixed-parameter tractable (FPT) for this class of constraints
and develop an FPT algorithm that is useful in practice. We compare the
performance of the FPT algorithm with that of SAT4J (a pseudo-Boolean SAT
solver) in computational experiments, which show that our algorithm
significantly outperforms SAT4J for many instances of WSP. User-independent
constraints, a large class of constraints including many practical ones, are a
special case of class-independent constraints for which WSP was proved to be
FPT (Cohen {\em et al.}, J. Artif. Intel. Res. 2014). Thus our results
considerably extend our knowledge of the fixed-parameter tractability of WSP.
|
[
{
"created": "Tue, 14 Apr 2015 14:25:22 GMT",
"version": "v1"
},
{
"created": "Mon, 14 Sep 2015 15:21:23 GMT",
"version": "v2"
}
] |
2015-09-15
|
[
[
"Crampton",
"Jason",
""
],
[
"Gagarin",
"Andrei",
""
],
[
"Gutin",
"Gregory",
""
],
[
"Jones",
"Mark",
""
],
[
"Wahlstrom",
"Magnus",
""
]
] |
A workflow specification defines sets of steps and users. An authorization policy determines for each user a subset of steps the user is allowed to perform. Other security requirements, such as separation-of-duty, impose constraints on which subsets of users may perform certain subsets of steps. The \emph{workflow satisfiability problem} (WSP) is the problem of determining whether there exists an assignment of users to workflow steps that satisfies all such authorizations and constraints. An algorithm for solving WSP is important, both as a static analysis tool for workflow specifications, and for the construction of run-time reference monitors for workflow management systems. Given the computational difficulty of WSP, it is important, particularly for the second application, that such algorithms are as efficient as possible. We introduce class-independent constraints, enabling us to model scenarios where the set of users is partitioned into groups, and the identities of the user groups are irrelevant to the satisfaction of the constraint. We prove that solving WSP is fixed-parameter tractable (FPT) for this class of constraints and develop an FPT algorithm that is useful in practice. We compare the performance of the FPT algorithm with that of SAT4J (a pseudo-Boolean SAT solver) in computational experiments, which show that our algorithm significantly outperforms SAT4J for many instances of WSP. User-independent constraints, a large class of constraints including many practical ones, are a special case of class-independent constraints for which WSP was proved to be FPT (Cohen {\em et al.}, J. Artif. Intel. Res. 2014). Thus our results considerably extend our knowledge of the fixed-parameter tractability of WSP.
|
1509.06470
|
Yiyi Liao
|
Yiyi Liao, Sarath Kodagoda, Yue Wang, Lei Shi, Yong Liu
|
Understand Scene Categories by Objects: A Semantic Regularized Scene
Classifier Using Convolutional Neural Networks
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Scene classification is a fundamental perception task for environmental
understanding in today's robotics. In this paper, we attempt to exploit the
popular machine learning technique of deep learning to enhance scene
understanding, particularly in robotics applications. As scene images have
larger diversity than iconic object images, it is more challenging for deep
learning methods to automatically learn features from scene images with fewer
samples. Inspired by human scene understanding based on object knowledge, we
address the problem of scene classification by encouraging deep neural networks
to incorporate object-level information. This is implemented with a
regularization of semantic segmentation. With only 5 thousand training images,
as opposed to 2.5 million images, we show the proposed deep architecture
achieves superior scene classification results to the state-of-the-art on a
publicly available SUN RGB-D dataset. In addition, performance of semantic
segmentation, the regularizer, also reaches a new record with refinement
derived from predicted scene labels. Finally, we apply our SUN RGB-D
dataset-trained model to images captured by a mobile robot to classify scenes in
our university, demonstrating the generalization ability of the proposed
algorithm.
|
[
{
"created": "Tue, 22 Sep 2015 05:43:27 GMT",
"version": "v1"
}
] |
2015-09-23
|
[
[
"Liao",
"Yiyi",
""
],
[
"Kodagoda",
"Sarath",
""
],
[
"Wang",
"Yue",
""
],
[
"Shi",
"Lei",
""
],
[
"Liu",
"Yong",
""
]
] |
Scene classification is a fundamental perception task for environmental understanding in today's robotics. In this paper, we attempt to exploit the popular machine learning technique of deep learning to enhance scene understanding, particularly in robotics applications. As scene images have larger diversity than iconic object images, it is more challenging for deep learning methods to automatically learn features from scene images with fewer samples. Inspired by human scene understanding based on object knowledge, we address the problem of scene classification by encouraging deep neural networks to incorporate object-level information. This is implemented with a regularization of semantic segmentation. With only 5 thousand training images, as opposed to 2.5 million images, we show the proposed deep architecture achieves superior scene classification results to the state-of-the-art on a publicly available SUN RGB-D dataset. In addition, performance of semantic segmentation, the regularizer, also reaches a new record with refinement derived from predicted scene labels. Finally, we apply our SUN RGB-D dataset-trained model to images captured by a mobile robot to classify scenes in our university, demonstrating the generalization ability of the proposed algorithm.
|
2205.01197
|
Hengyi Wang
|
Hengyi Wang, Changjae Oh
|
Boosting Video Object Segmentation based on Scale Inconsistency
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a refinement framework to boost the performance of pre-trained
semi-supervised video object segmentation (VOS) models. Our work is based on
scale inconsistency, which is motivated by the observation that existing VOS
models generate inconsistent predictions from input frames with different
sizes. We use the scale inconsistency as a clue to devise a pixel-level
attention module that aggregates the advantages of the predictions from
different-size inputs. The scale inconsistency is also used to regularize the
training based on a pixel-level variance measured by an uncertainty estimation.
We further present a self-supervised online adaptation, tailored for test-time
optimization, that bootstraps the predictions without ground-truth masks based
on the scale inconsistency. Experiments on DAVIS 16 and DAVIS 17 datasets show
that our framework can be generically applied to various VOS models and improve
their performance.
|
[
{
"created": "Mon, 2 May 2022 20:22:29 GMT",
"version": "v1"
}
] |
2022-05-04
|
[
[
"Wang",
"Hengyi",
""
],
[
"Oh",
"Changjae",
""
]
] |
We present a refinement framework to boost the performance of pre-trained semi-supervised video object segmentation (VOS) models. Our work is based on scale inconsistency, which is motivated by the observation that existing VOS models generate inconsistent predictions from input frames with different sizes. We use the scale inconsistency as a clue to devise a pixel-level attention module that aggregates the advantages of the predictions from different-size inputs. The scale inconsistency is also used to regularize the training based on a pixel-level variance measured by an uncertainty estimation. We further present a self-supervised online adaptation, tailored for test-time optimization, that bootstraps the predictions without ground-truth masks based on the scale inconsistency. Experiments on DAVIS 16 and DAVIS 17 datasets show that our framework can be generically applied to various VOS models and improve their performance.
|
1811.10220
|
Vladimir Mironov
|
Vladimir Mironov, Andrey Kudryavtsev, Yuri Alexeev, Alexander
Moskovsky, Igor Kulikov, Igor Chernykh
|
Evaluation of Intel Memory Drive Technology Performance for Scientific
Applications
| null |
In Proceedings of the Workshop on Memory Centric High Performance
Computing (MCHPC'18). ACM, New York, NY, USA, 14-21, 2018
|
10.1145/3286475.3286479
| null |
cs.PF
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present benchmark data for Intel Memory Drive Technology
(IMDT), which is a new generation of Software-defined Memory (SDM) based on
an Intel-ScaleMP collaboration and using 3D XPoint(TM)-based Intel Solid-State
Drives (SSDs) called Optane. We studied IMDT performance for synthetic
benchmarks, scientific kernels, and applications. We chose these benchmarks to
represent different patterns for computation and accessing data on disks and
memory. To put performance of IMDT in comparison, we used two memory
configurations: hybrid IMDT DDR4/Optane and DDR4-only systems. The performance
was measured as a percentage of used memory and analyzed in detail. We found
that for some applications the DDR4/Optane hybrid configuration outperforms the
DDR4 setup by up to 20%.
|
[
{
"created": "Mon, 26 Nov 2018 07:49:13 GMT",
"version": "v1"
}
] |
2018-11-27
|
[
[
"Mironov",
"Vladimir",
""
],
[
"Kudryavtsev",
"Andrey",
""
],
[
"Alexeev",
"Yuri",
""
],
[
"Moskovsky",
"Alexander",
""
],
[
"Kulikov",
"Igor",
""
],
[
"Chernykh",
"Igor",
""
]
] |
In this paper, we present benchmark data for Intel Memory Drive Technology (IMDT), which is a new generation of Software-defined Memory (SDM) based on an Intel-ScaleMP collaboration and using 3D XPoint(TM)-based Intel Solid-State Drives (SSDs) called Optane. We studied IMDT performance for synthetic benchmarks, scientific kernels, and applications. We chose these benchmarks to represent different patterns for computation and accessing data on disks and memory. To put performance of IMDT in comparison, we used two memory configurations: hybrid IMDT DDR4/Optane and DDR4-only systems. The performance was measured as a percentage of used memory and analyzed in detail. We found that for some applications the DDR4/Optane hybrid configuration outperforms the DDR4 setup by up to 20%.
|
1809.08587
|
Ohad Shamir
|
Ohad Shamir
|
Exponential Convergence Time of Gradient Descent for One-Dimensional
Deep Linear Neural Networks
|
Comparison to previous version: Fixed a bug in lemma 1 part 3 (does
not affect any other part of the paper)
| null | null | null |
cs.LG cs.NE math.OC stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the dynamics of gradient descent on objective functions of the form
$f(\prod_{i=1}^{k} w_i)$ (with respect to scalar parameters $w_1,\ldots,w_k$),
which arise in the context of training depth-$k$ linear neural networks. We
prove that for standard random initializations, and under mild assumptions on
$f$, the number of iterations required for convergence scales exponentially
with the depth $k$. We also show empirically that this phenomenon can occur in
higher dimensions, where each $w_i$ is a matrix. This highlights a potential
obstacle in understanding the convergence of gradient-based methods for deep
linear neural networks, where $k$ is large.
|
[
{
"created": "Sun, 23 Sep 2018 12:32:45 GMT",
"version": "v1"
},
{
"created": "Thu, 27 Sep 2018 08:37:45 GMT",
"version": "v2"
},
{
"created": "Thu, 27 Dec 2018 08:31:56 GMT",
"version": "v3"
},
{
"created": "Thu, 13 Jun 2019 07:23:22 GMT",
"version": "v4"
}
] |
2019-06-14
|
[
[
"Shamir",
"Ohad",
""
]
] |
We study the dynamics of gradient descent on objective functions of the form $f(\prod_{i=1}^{k} w_i)$ (with respect to scalar parameters $w_1,\ldots,w_k$), which arise in the context of training depth-$k$ linear neural networks. We prove that for standard random initializations, and under mild assumptions on $f$, the number of iterations required for convergence scales exponentially with the depth $k$. We also show empirically that this phenomenon can occur in higher dimensions, where each $w_i$ is a matrix. This highlights a potential obstacle in understanding the convergence of gradient-based methods for deep linear neural networks, where $k$ is large.
|
1507.08322
|
Martin Tak\'a\v{c}
|
Martin Tak\'a\v{c} and Peter Richt\'arik and Nathan Srebro
|
Distributed Mini-Batch SDCA
| null | null | null | null |
cs.LG math.OC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present an improved analysis of mini-batched stochastic dual coordinate
ascent for regularized empirical loss minimization (i.e. SVM and SVM-type
objectives). Our analysis allows for flexible sampling schemes, including where
data is distributed across machines, and combines a dependence on the smoothness
of the loss and/or the data spread (measured through the spectral norm).
|
[
{
"created": "Wed, 29 Jul 2015 21:15:31 GMT",
"version": "v1"
}
] |
2015-07-31
|
[
[
"Takáč",
"Martin",
""
],
[
"Richtárik",
"Peter",
""
],
[
"Srebro",
"Nathan",
""
]
] |
We present an improved analysis of mini-batched stochastic dual coordinate ascent for regularized empirical loss minimization (i.e. SVM and SVM-type objectives). Our analysis allows for flexible sampling schemes, including where data is distributed across machines, and combines a dependence on the smoothness of the loss and/or the data spread (measured through the spectral norm).
|
1809.01426
|
Pascal Ochem
|
Pamela Fleischmann and Pascal Ochem and Kamellia Reshadi
|
Repetition avoidance in products of factors
| null | null | null | null |
cs.DM math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider a variation on a classical avoidance problem from combinatorics
on words that has been introduced by Mousavi and Shallit at DLT 2013. Let
$\texttt{pexp}_i(w)$ be the supremum of the exponent over the products of $i$
factors of the word $w$. The repetition threshold $\texttt{RT}_i(k)$ is then
the infimum of $\texttt{pexp}_i(w)$ over all words $w\in\Sigma^\omega_k$.
Mousavi and Shallit obtained that $\texttt{RT}_i(2)=2i$ and
$\texttt{RT}_2(3)=\tfrac{13}4$. We show that
$\texttt{RT}_i(3)=\tfrac{3i}2+\tfrac14$ if $i$ is even and
$\texttt{RT}_i(3)=\tfrac{3i}2+\tfrac16$ if $i$ is odd and $i\ge3$.
|
[
{
"created": "Wed, 5 Sep 2018 10:36:14 GMT",
"version": "v1"
},
{
"created": "Sat, 27 Apr 2019 16:19:07 GMT",
"version": "v2"
}
] |
2019-04-30
|
[
[
"Fleischmann",
"Pamela",
""
],
[
"Ochem",
"Pascal",
""
],
[
"Reshadi",
"Kamellia",
""
]
] |
We consider a variation on a classical avoidance problem from combinatorics on words that has been introduced by Mousavi and Shallit at DLT 2013. Let $\texttt{pexp}_i(w)$ be the supremum of the exponent over the products of $i$ factors of the word $w$. The repetition threshold $\texttt{RT}_i(k)$ is then the infimum of $\texttt{pexp}_i(w)$ over all words $w\in\Sigma^\omega_k$. Mousavi and Shallit obtained that $\texttt{RT}_i(2)=2i$ and $\texttt{RT}_2(3)=\tfrac{13}4$. We show that $\texttt{RT}_i(3)=\tfrac{3i}2+\tfrac14$ if $i$ is even and $\texttt{RT}_i(3)=\tfrac{3i}2+\tfrac16$ if $i$ is odd and $i\ge3$.
|
1902.03237
|
Cristina Kadar
|
Cristina Kadar, Rudolf Maculan, Stefan Feuerriegel
|
Public decision support for low population density areas: An
imbalance-aware hyper-ensemble for spatio-temporal crime prediction
| null | null | null | null |
cs.CY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Crime events are known to reveal spatio-temporal patterns, which can be used
for predictive modeling and subsequent decision support. While the focus has
hitherto been placed on areas with high population density, we address the
challenging undertaking of predicting crime hotspots in regions with low
population densities and highly unequally distributed crime. This results in a
severe sparsity (i.e., class imbalance) of the outcome variable, which impedes
predictive modeling. To alleviate this, we develop machine learning models for
spatio-temporal prediction that are specifically adjusted for an imbalanced
distribution of the class labels and test them in an actual setting with
state-of-the-art predictors (i.e., socio-economic, geographical, temporal,
meteorological, and crime variables in fine resolution). The proposed
imbalance-aware hyper-ensemble increases the hit ratio considerably from 18.1%
to 24.6% when aiming for the top 5% of hotspots, and from 53.1% to 60.4% when
aiming for the top 20% of hotspots.
|
[
{
"created": "Fri, 1 Feb 2019 17:34:05 GMT",
"version": "v1"
}
] |
2019-02-12
|
[
[
"Kadar",
"Cristina",
""
],
[
"Maculan",
"Rudolf",
""
],
[
"Feuerriegel",
"Stefan",
""
]
] |
Crime events are known to reveal spatio-temporal patterns, which can be used for predictive modeling and subsequent decision support. While the focus has hitherto been placed on areas with high population density, we address the challenging undertaking of predicting crime hotspots in regions with low population densities and highly unequally distributed crime. This results in a severe sparsity (i.e., class imbalance) of the outcome variable, which impedes predictive modeling. To alleviate this, we develop machine learning models for spatio-temporal prediction that are specifically adjusted for an imbalanced distribution of the class labels and test them in an actual setting with state-of-the-art predictors (i.e., socio-economic, geographical, temporal, meteorological, and crime variables in fine resolution). The proposed imbalance-aware hyper-ensemble increases the hit ratio considerably from 18.1% to 24.6% when aiming for the top 5% of hotspots, and from 53.1% to 60.4% when aiming for the top 20% of hotspots.
|
2305.12983
|
Michael Kranl
|
Michael Kranl, Hubert Ramsauer and Bernhard Knapp
|
Why current rain denoising models fail on CycleGAN created rain images
in autonomous driving
|
7 pages, 4 figures
| null | null | null |
cs.CV cs.LG eess.IV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
One of the main tasks of an autonomous agent in a vehicle is to correctly
perceive its environment. Much of the data that needs to be processed is
collected by optical sensors such as cameras. Unfortunately, the data collected
in this way can be affected by a variety of factors, including environmental
influences such as inclement weather conditions (e.g., rain). Such noisy data
can cause autonomous agents to make wrong decisions with potentially fatal
outcomes. This paper addresses the rain image challenge in two steps: First,
rain is artificially added to a set of clear-weather condition images using a
Generative Adversarial Network (GAN). This yields good/bad weather image pairs
for training de-raining models. This artificial generation of rain images is
sufficiently realistic as in 7 out of 10 cases, human test subjects believed
the generated rain images to be real. In a second step, this paired good/bad
weather image data is used to train two rain denoising models, one based
primarily on a Convolutional Neural Network (CNN) and the other using a Vision
Transformer. This rain de-noising step showed limited performance as the
quality gain was only about 15%. This lack of performance on realistic rain
images as used in our study is likely due to current rain de-noising models
being developed for simplistic rain overlay data. Our study shows that there is
ample space for improvement of de-raining models in autonomous driving.
|
[
{
"created": "Mon, 22 May 2023 12:42:32 GMT",
"version": "v1"
}
] |
2023-05-23
|
[
[
"Kranl",
"Michael",
""
],
[
"Ramsauer",
"Hubert",
""
],
[
"Knapp",
"Bernhard",
""
]
] |
One of the main tasks of an autonomous agent in a vehicle is to correctly perceive its environment. Much of the data that needs to be processed is collected by optical sensors such as cameras. Unfortunately, the data collected in this way can be affected by a variety of factors, including environmental influences such as inclement weather conditions (e.g., rain). Such noisy data can cause autonomous agents to make wrong decisions with potentially fatal outcomes. This paper addresses the rain image challenge in two steps: First, rain is artificially added to a set of clear-weather condition images using a Generative Adversarial Network (GAN). This yields good/bad weather image pairs for training de-raining models. This artificial generation of rain images is sufficiently realistic as in 7 out of 10 cases, human test subjects believed the generated rain images to be real. In a second step, this paired good/bad weather image data is used to train two rain denoising models, one based primarily on a Convolutional Neural Network (CNN) and the other using a Vision Transformer. This rain de-noising step showed limited performance as the quality gain was only about 15%. This lack of performance on realistic rain images as used in our study is likely due to current rain de-noising models being developed for simplistic rain overlay data. Our study shows that there is ample space for improvement of de-raining models in autonomous driving.
|
2111.10338
|
Lukas Lindenroth
|
Lukas Lindenroth, Danail Stoyanov, Kawal Rhode and Hongbin Liu
|
Towards intrinsic force sensing and control in parallel soft robots
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
With soft robotics being increasingly employed in settings demanding high and
controlled contact forces, recent research has demonstrated the use of soft
robots to estimate or intrinsically sense forces without requiring external
sensing mechanisms. Whilst this has mainly been shown in tendon-based continuum
manipulators or deformable robots comprising push-pull rod actuation, fluid
drives still pose great challenges due to high actuation variability and
nonlinear mechanical system responses. In this work we investigate the
capabilities of a hydraulic, parallel soft robot to intrinsically sense and
subsequently control contact forces. A comprehensive algorithm is derived for
static, quasi-static and dynamic force sensing which relies on fluid volume and
pressure information of the system. The algorithm is validated for a single
degree-of-freedom soft fluidic actuator. Results indicate that axial forces
acting on a single actuator can be estimated with an accuracy of 0.56 +- 0.66N
within the validated range of 0 to 6N in a quasi-static configuration. The
force sensing methodology is applied to force control in a single actuator as
well as the coupled parallel robot. It can be seen that forces are accurately
controllable for both systems, with the capability of controlling directional
contact forces in case of the multi degree-of-freedom parallel soft robot.
|
[
{
"created": "Fri, 19 Nov 2021 17:39:18 GMT",
"version": "v1"
}
] |
2021-11-22
|
[
[
"Lindenroth",
"Lukas",
""
],
[
"Stoyanov",
"Danail",
""
],
[
"Rhode",
"Kawal",
""
],
[
"Liu",
"Hongbin",
""
]
] |
With soft robotics being increasingly employed in settings demanding high and controlled contact forces, recent research has demonstrated the use of soft robots to estimate or intrinsically sense forces without requiring external sensing mechanisms. Whilst this has mainly been shown in tendon-based continuum manipulators or deformable robots comprising push-pull rod actuation, fluid drives still pose great challenges due to high actuation variability and nonlinear mechanical system responses. In this work we investigate the capabilities of a hydraulic, parallel soft robot to intrinsically sense and subsequently control contact forces. A comprehensive algorithm is derived for static, quasi-static and dynamic force sensing which relies on fluid volume and pressure information of the system. The algorithm is validated for a single degree-of-freedom soft fluidic actuator. Results indicate that axial forces acting on a single actuator can be estimated with an accuracy of 0.56 +- 0.66N within the validated range of 0 to 6N in a quasi-static configuration. The force sensing methodology is applied to force control in a single actuator as well as the coupled parallel robot. It can be seen that forces are accurately controllable for both systems, with the capability of controlling directional contact forces in case of the multi degree-of-freedom parallel soft robot.
|
1809.06016
|
Saber Moradi
|
Saber Moradi, Rajit Manohar
|
The Impact of On-chip Communication on Memory Technologies for
Neuromorphic Systems
|
26 pages, 6 figures, Journal of Physics D: Applied Physics 2018
| null |
10.1088/1361-6463/aae641
| null |
cs.AR cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Emergent nanoscale non-volatile memory technologies with high integration
density offer a promising solution to overcome the scalability limitations of
CMOS-based neural network architectures, by efficiently exhibiting the key
principle of neural computation. Despite the potential improvements in
computational costs, designing high-performance on-chip communication networks
that support flexible, large-fanout connectivity remains a daunting task. In
this paper, we elaborate on the communication requirements of large-scale
neuromorphic designs, and point out the differences with the conventional
network-on-chip architectures. We present existing approaches for on-chip
neuromorphic routing networks, and discuss how new memory and integration
technologies may help to alleviate the communication issues in constructing
next-generation intelligent computing machines.
|
[
{
"created": "Mon, 17 Sep 2018 04:40:52 GMT",
"version": "v1"
},
{
"created": "Mon, 1 Oct 2018 05:00:47 GMT",
"version": "v2"
}
] |
2018-11-14
|
[
[
"Moradi",
"Saber",
""
],
[
"Manohar",
"Rajit",
""
]
] |
Emergent nanoscale non-volatile memory technologies with high integration density offer a promising solution to overcome the scalability limitations of CMOS-based neural network architectures, by efficiently exhibiting the key principle of neural computation. Despite the potential improvements in computational costs, designing high-performance on-chip communication networks that support flexible, large-fanout connectivity remains a daunting task. In this paper, we elaborate on the communication requirements of large-scale neuromorphic designs, and point out the differences with the conventional network-on-chip architectures. We present existing approaches for on-chip neuromorphic routing networks, and discuss how new memory and integration technologies may help to alleviate the communication issues in constructing next-generation intelligent computing machines.
|
1703.08252
|
Fabricio Murai
|
Fabricio Murai, Bruno Ribeiro, Don Towsley, Pinghui Wang
|
Characterizing Directed and Undirected Networks via Multidimensional
Walks with Jumps
|
35 pages, submitted to ACM Transactions on Knowledge Discovery from
Data (TKDD)
| null | null | null |
cs.SI physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Estimating distributions of node characteristics (labels) such as number of
connections or citizenship of users in a social network via edge and node
sampling is a vital part of the study of complex networks. Due to its low cost,
sampling via a random walk (RW) has been proposed as an attractive solution to
this task. Most RW methods assume either that the network is undirected or that
walkers can traverse edges regardless of their direction. Some RW methods have
been designed for directed networks where edges coming into a node are not
directly observable. In this work, we propose Directed Unbiased Frontier
Sampling (DUFS), a sampling method based on a large number of coordinated
walkers, each starting from a node chosen uniformly at random. It is applicable
to directed networks with invisible incoming edges because it constructs, in
real-time, an undirected graph consistent with the walkers' trajectories, and
due to the use of random jumps which prevent walkers from being trapped. DUFS
generalizes previous RW methods and is suited to both undirected networks and
directed networks regardless of in-edge visibility. We also propose an
improved estimator of node label distributions that combines information from
the initial walker locations with subsequent RW observations. We evaluate DUFS,
compare it to other RW methods, investigate the impact of its parameters on
estimation accuracy and provide practical guidelines for choosing them. In
estimating out-degree distributions, DUFS yields significantly better estimates
of the head of the distribution than other methods, while matching or exceeding
estimation accuracy of the tail. Last, we show that DUFS outperforms uniform
node sampling when estimating distributions of node labels of the top 10%
largest degree nodes, even when sampling a node uniformly has the same cost as
RW steps.
|
[
{
"created": "Thu, 23 Mar 2017 23:35:53 GMT",
"version": "v1"
},
{
"created": "Fri, 13 Jul 2018 20:10:55 GMT",
"version": "v2"
}
] |
2018-07-17
|
[
[
"Murai",
"Fabricio",
""
],
[
"Ribeiro",
"Bruno",
""
],
[
"Towsley",
"Don",
""
],
[
"Wang",
"Pinghui",
""
]
] |
Estimating distributions of node characteristics (labels) such as number of connections or citizenship of users in a social network via edge and node sampling is a vital part of the study of complex networks. Due to its low cost, sampling via a random walk (RW) has been proposed as an attractive solution to this task. Most RW methods assume either that the network is undirected or that walkers can traverse edges regardless of their direction. Some RW methods have been designed for directed networks where edges coming into a node are not directly observable. In this work, we propose Directed Unbiased Frontier Sampling (DUFS), a sampling method based on a large number of coordinated walkers, each starting from a node chosen uniformly at random. It is applicable to directed networks with invisible incoming edges because it constructs, in real-time, an undirected graph consistent with the walkers' trajectories, and due to the use of random jumps which prevent walkers from being trapped. DUFS generalizes previous RW methods and is suited to both undirected networks and directed networks regardless of in-edge visibility. We also propose an improved estimator of node label distributions that combines information from the initial walker locations with subsequent RW observations. We evaluate DUFS, compare it to other RW methods, investigate the impact of its parameters on estimation accuracy and provide practical guidelines for choosing them. In estimating out-degree distributions, DUFS yields significantly better estimates of the head of the distribution than other methods, while matching or exceeding estimation accuracy of the tail. Last, we show that DUFS outperforms uniform node sampling when estimating distributions of node labels of the top 10% largest degree nodes, even when sampling a node uniformly has the same cost as RW steps.
|
1807.09940
|
Shuhan Chen
|
Shuhan Chen, Xiuli Tan, Ben Wang, Xuelong Hu
|
Reverse Attention for Salient Object Detection
|
ECCV 2018
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Benefiting from the rapid development of deep learning techniques, salient
object detection has achieved remarkable progress recently. However, there
still exist the following two major challenges that hinder its application in
embedded devices: low resolution output and heavy model weight. To this end,
this paper presents an accurate yet compact deep network for efficient salient
object detection. More specifically, given a coarse saliency prediction in the
deepest layer, we first employ residual learning to learn side-output residual
features for saliency refinement, which can be achieved with very limited
convolutional parameters while keeping accuracy. Secondly, we further propose
reverse attention to guide such side-output residual learning in a top-down
manner. By erasing the current predicted salient regions from side-output
features, the network can eventually explore the missing object parts and
details, which results in high resolution and accuracy. Experiments on six
benchmark datasets demonstrate that the proposed approach compares favorably
against state-of-the-art methods, and with advantages in terms of simplicity,
efficiency (45 FPS) and model size (81 MB).
|
[
{
"created": "Thu, 26 Jul 2018 03:30:57 GMT",
"version": "v1"
},
{
"created": "Mon, 15 Apr 2019 14:46:46 GMT",
"version": "v2"
}
] |
2019-04-16
|
[
[
"Chen",
"Shuhan",
""
],
[
"Tan",
"Xiuli",
""
],
[
"Wang",
"Ben",
""
],
[
"Hu",
"Xuelong",
""
]
] |
Benefiting from the rapid development of deep learning techniques, salient object detection has achieved remarkable progress recently. However, there still exist the following two major challenges that hinder its application in embedded devices: low resolution output and heavy model weight. To this end, this paper presents an accurate yet compact deep network for efficient salient object detection. More specifically, given a coarse saliency prediction in the deepest layer, we first employ residual learning to learn side-output residual features for saliency refinement, which can be achieved with very limited convolutional parameters while keeping accuracy. Secondly, we further propose reverse attention to guide such side-output residual learning in a top-down manner. By erasing the current predicted salient regions from side-output features, the network can eventually explore the missing object parts and details, which results in high resolution and accuracy. Experiments on six benchmark datasets demonstrate that the proposed approach compares favorably against state-of-the-art methods, and with advantages in terms of simplicity, efficiency (45 FPS) and model size (81 MB).
|
2408.04131
|
Tong Liu
|
Tong Liu, Hadi Meidani
|
Heterogeneous Graph Sequence Neural Networks for Dynamic Traffic
Assignment
|
9 pages, 5 figures
| null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Traffic assignment and traffic flow prediction provide critical insights for
urban planning, traffic management, and the development of intelligent
transportation systems. An efficient model for calculating traffic flows over
the entire transportation network could provide a more detailed and realistic
understanding of traffic dynamics. However, existing traffic prediction
approaches, such as those utilizing graph neural networks, are typically
limited to locations where sensors are deployed and cannot predict traffic
flows beyond sensor locations. To alleviate this limitation, inspired by the
fundamental relationship that exists between link flows and
origin-destination (OD) travel demands, we propose the Heterogeneous
Spatio-Temporal Graph Sequence Network (HSTGSN). HSTGSN exploits the dependency
between origin and destination nodes, even when it is long-range, and learns
implicit vehicle route choices under different origin-destination demands. This
model is based on a heterogeneous graph which consists of road links, OD links
(virtual links connecting origins and destinations) and a spatio-temporal graph
encoder-decoder that captures the spatio-temporal relationship between OD
demands and flow distribution. We will show how the graph encoder-decoder is
able to recover the incomplete information in the OD demand, by using node
embedding from the graph decoder to predict the temporal changes in flow
distribution. Using extensive experimental studies on real-world networks with
complete/incomplete OD demands, we demonstrate that our method can not only
capture the implicit spatio-temporal relationship between link traffic flows
and OD demands but also achieve accurate prediction performance and
generalization capability.
|
[
{
"created": "Wed, 7 Aug 2024 23:41:09 GMT",
"version": "v1"
}
] |
2024-08-09
|
[
[
"Liu",
"Tong",
""
],
[
"Meidani",
"Hadi",
""
]
] |
Traffic assignment and traffic flow prediction provide critical insights for urban planning, traffic management, and the development of intelligent transportation systems. An efficient model for calculating traffic flows over the entire transportation network could provide a more detailed and realistic understanding of traffic dynamics. However, existing traffic prediction approaches, such as those utilizing graph neural networks, are typically limited to locations where sensors are deployed and cannot predict traffic flows beyond sensor locations. To alleviate this limitation, inspired by the fundamental relationship that exists between link flows and origin-destination (OD) travel demands, we propose the Heterogeneous Spatio-Temporal Graph Sequence Network (HSTGSN). HSTGSN exploits the dependency between origin and destination nodes, even when it is long-range, and learns implicit vehicle route choices under different origin-destination demands. This model is based on a heterogeneous graph which consists of road links, OD links (virtual links connecting origins and destinations) and a spatio-temporal graph encoder-decoder that captures the spatio-temporal relationship between OD demands and flow distribution. We show how the graph encoder-decoder is able to recover the incomplete information in the OD demand, by using node embedding from the graph decoder to predict the temporal changes in flow distribution. Using extensive experimental studies on real-world networks with complete/incomplete OD demands, we demonstrate that our method can not only capture the implicit spatio-temporal relationship between link traffic flows and OD demands but also achieve accurate prediction performance and generalization capability.
|
1907.12849
|
Chao Zhang
|
Chao Zhang, Stephan Liwicki, William Smith, Roberto Cipolla
|
Orientation-aware Semantic Segmentation on Icosahedron Spheres
|
9 pages, accepted to iccv 2019
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We address semantic segmentation on omnidirectional images, to leverage a
holistic understanding of the surrounding scene for applications like
autonomous driving systems. For the spherical domain, several methods recently
adopt an icosahedron mesh, but systems are typically rotation invariant or
require significant memory and parameters, thus enabling execution only at very
low resolutions. In our work, we propose an orientation-aware CNN framework for
the icosahedron mesh. Our representation allows for fast network operations, as
our design simplifies to standard network operations of classical CNNs, but
under consideration of north-aligned kernel convolutions for features on the
sphere. We implement our representation and demonstrate its memory efficiency
up to a level-8 resolution mesh (equivalent to 640 x 1024 equirectangular
images). Finally, since our kernels operate on the tangent of the sphere,
standard feature weights, pretrained on perspective data, can be directly
transferred with only a small need for weight refinement. In our evaluation,
our orientation-aware CNN becomes a new state of the art for the recent 2D3DS
dataset, and our Omni-SYNTHIA version of SYNTHIA. Rotation invariant
classification and segmentation tasks are additionally presented for comparison
to prior art.
|
[
{
"created": "Tue, 30 Jul 2019 11:59:24 GMT",
"version": "v1"
}
] |
2019-07-31
|
[
[
"Zhang",
"Chao",
""
],
[
"Liwicki",
"Stephan",
""
],
[
"Smith",
"William",
""
],
[
"Cipolla",
"Roberto",
""
]
] |
We address semantic segmentation on omnidirectional images, to leverage a holistic understanding of the surrounding scene for applications like autonomous driving systems. For the spherical domain, several methods recently adopt an icosahedron mesh, but systems are typically rotation invariant or require significant memory and parameters, thus enabling execution only at very low resolutions. In our work, we propose an orientation-aware CNN framework for the icosahedron mesh. Our representation allows for fast network operations, as our design simplifies to standard network operations of classical CNNs, but under consideration of north-aligned kernel convolutions for features on the sphere. We implement our representation and demonstrate its memory efficiency up to a level-8 resolution mesh (equivalent to 640 x 1024 equirectangular images). Finally, since our kernels operate on the tangent of the sphere, standard feature weights, pretrained on perspective data, can be directly transferred with only a small need for weight refinement. In our evaluation, our orientation-aware CNN becomes a new state of the art for the recent 2D3DS dataset, and our Omni-SYNTHIA version of SYNTHIA. Rotation invariant classification and segmentation tasks are additionally presented for comparison to prior art.
|
2011.10280
|
Ashiqur KhudaBukhsh Ashiqur Rahman KhudaBukhsh
|
Rupak Sarkar, Ashiqur R. KhudaBukhsh
|
Are Chess Discussions Racist? An Adversarial Hate Speech Data Set
| null | null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
On June 28, 2020, while presenting a chess podcast on Grandmaster Hikaru
Nakamura, Antonio Radi\'c's YouTube handle got blocked because it contained
"harmful and dangerous" content. YouTube did not give further specific reason,
and the channel got reinstated within 24 hours. However, Radi\'c speculated
that given the current political situation, a referral to "black against
white", albeit in the context of chess, earned him this temporary ban. In this
paper, via a substantial corpus of 681,995 comments, on 8,818 YouTube videos
hosted by five highly popular chess-focused YouTube channels, we ask the
following research question: \emph{how robust are off-the-shelf hate-speech
classifiers to out-of-domain adversarial examples?} We release a data set of
1,000 annotated comments where existing hate speech classifiers misclassified
benign chess discussions as hate speech. We conclude with an intriguing
analogy result on racial bias, with our findings pointing to the broader
challenge of color polysemy.
|
[
{
"created": "Fri, 20 Nov 2020 08:50:06 GMT",
"version": "v1"
}
] |
2020-11-23
|
[
[
"Sarkar",
"Rupak",
""
],
[
"KhudaBukhsh",
"Ashiqur R.",
""
]
] |
On June 28, 2020, while presenting a chess podcast on Grandmaster Hikaru Nakamura, Antonio Radi\'c's YouTube handle got blocked because it contained "harmful and dangerous" content. YouTube did not give a further specific reason, and the channel got reinstated within 24 hours. However, Radi\'c speculated that given the current political situation, a referral to "black against white", albeit in the context of chess, earned him this temporary ban. In this paper, via a substantial corpus of 681,995 comments, on 8,818 YouTube videos hosted by five highly popular chess-focused YouTube channels, we ask the following research question: \emph{how robust are off-the-shelf hate-speech classifiers to out-of-domain adversarial examples?} We release a data set of 1,000 annotated comments where existing hate speech classifiers misclassified benign chess discussions as hate speech. We conclude with an intriguing analogy result on racial bias, with our findings pointing to the broader challenge of color polysemy.
|
2204.13543
|
Nick Brown
|
Nick Brown, Gordon Gibb, Evgenij Belikov, Rupert Nash
|
Predicting batch queue job wait times for informed scheduling of urgent
HPC workloads
|
Preprint of article at the 2022 Cray User Group (CUG)
| null | null | null |
cs.DC cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There is increasing interest in the use of HPC machines for urgent workloads
to help tackle disasters as they unfold. Whilst batch queue systems are not
ideal in supporting such workloads, many disadvantages can be worked around by
accurately predicting when a waiting job will start to run. However there are
numerous challenges in achieving such a prediction with high accuracy, not
least because the queue's state can change rapidly and depend upon many
factors. In this work we explore a novel machine learning approach for
predicting queue wait times, hypothesising that such a model can capture the
complex behaviour resulting from the queue policy and other interactions to
generate accurate job start times.
For ARCHER2 (HPE Cray EX), Cirrus (HPE 8600) and 4-cabinet (HPE Cray EX) we
explore how different machine learning approaches and techniques improve the
accuracy of our predictions, comparing against the estimation generated by
Slurm. We demonstrate that our techniques deliver the most accurate predictions
across our machines of interest, with the result of this work being the ability
to predict job start times within one minute of the actual start time for
around 65\% of jobs on ARCHER2 and 4-cabinet, and 76\% of jobs on Cirrus. When
compared against what Slurm can deliver, this represents around 3.8 times
better accuracy on ARCHER2 and 18 times better for Cirrus. Furthermore, our
approach can accurately predict the start time for three quarters of all jobs
within ten minutes of the actual start time on ARCHER2 and 4-cabinet, and for
90\% of jobs on Cirrus. Whilst the driver of this work has been to better
facilitate placement of urgent workloads across HPC machines, the insights
gained can be used to provide wider benefits to users and also enrich existing
batch queue systems and inform policy too.
|
[
{
"created": "Thu, 28 Apr 2022 14:51:58 GMT",
"version": "v1"
}
] |
2022-04-29
|
[
[
"Brown",
"Nick",
""
],
[
"Gibb",
"Gordon",
""
],
[
"Belikov",
"Evgenij",
""
],
[
"Nash",
"Rupert",
""
]
] |
There is increasing interest in the use of HPC machines for urgent workloads to help tackle disasters as they unfold. Whilst batch queue systems are not ideal in supporting such workloads, many disadvantages can be worked around by accurately predicting when a waiting job will start to run. However there are numerous challenges in achieving such a prediction with high accuracy, not least because the queue's state can change rapidly and depend upon many factors. In this work we explore a novel machine learning approach for predicting queue wait times, hypothesising that such a model can capture the complex behaviour resulting from the queue policy and other interactions to generate accurate job start times. For ARCHER2 (HPE Cray EX), Cirrus (HPE 8600) and 4-cabinet (HPE Cray EX) we explore how different machine learning approaches and techniques improve the accuracy of our predictions, comparing against the estimation generated by Slurm. We demonstrate that our techniques deliver the most accurate predictions across our machines of interest, with the result of this work being the ability to predict job start times within one minute of the actual start time for around 65\% of jobs on ARCHER2 and 4-cabinet, and 76\% of jobs on Cirrus. When compared against what Slurm can deliver, this represents around 3.8 times better accuracy on ARCHER2 and 18 times better for Cirrus. Furthermore, our approach can accurately predict the start time for three quarters of all jobs within ten minutes of the actual start time on ARCHER2 and 4-cabinet, and for 90\% of jobs on Cirrus. Whilst the driver of this work has been to better facilitate placement of urgent workloads across HPC machines, the insights gained can be used to provide wider benefits to users and also enrich existing batch queue systems and inform policy too.
|
2207.05300
|
Daiheng Gao
|
Zhou Kangneng, Zhu Xiaobin, Gao Daiheng, Lee Kai, Li Xinjie, Yin
Xu-Cheng
|
SD-GAN: Semantic Decomposition for Face Image Synthesis with Discrete
Attribute
|
16 pages, 12 figures, Accepted by ACM MM2022
| null | null | null |
cs.CV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Manipulating latent code in generative adversarial networks (GANs) for facial
image synthesis mainly focuses on continuous attribute synthesis (e.g., age,
pose and emotion), while discrete attribute synthesis (like face mask and
eyeglasses) receives less attention. Directly applying existing works to facial
discrete attributes may cause inaccurate results. In this work, we propose an
innovative framework to tackle challenging facial discrete attribute synthesis
via semantic decomposing, dubbed SD-GAN. To be concrete, we explicitly
decompose the discrete attribute representation into two components, i.e. the
semantic prior basis and offset latent representation. The semantic prior basis
shows an initializing direction for manipulating face representation in the
latent space. The offset latent representation, obtained by a 3D-aware
semantic fusion network, is proposed to adjust the prior basis. In addition,
the fusion network integrates 3D embedding for better identity preservation
and discrete attribute synthesis. The combination of prior basis and offset
latent representation enables our method to synthesize photo-realistic face
images with
discrete attributes. Notably, we construct a large and valuable dataset MEGN
(Face Mask and Eyeglasses images crawled from Google and Naver) for completing
the lack of discrete attributes in the existing dataset. Extensive qualitative
and quantitative experiments demonstrate the state-of-the-art performance of
our method. Our code is available at: https://github.com/MontaEllis/SD-GAN.
|
[
{
"created": "Tue, 12 Jul 2022 04:23:38 GMT",
"version": "v1"
}
] |
2022-07-13
|
[
[
"Kangneng",
"Zhou",
""
],
[
"Xiaobin",
"Zhu",
""
],
[
"Daiheng",
"Gao",
""
],
[
"Kai",
"Lee",
""
],
[
"Xinjie",
"Li",
""
],
[
"Xu-Cheng",
"Yin",
""
]
] |
Manipulating latent code in generative adversarial networks (GANs) for facial image synthesis mainly focuses on continuous attribute synthesis (e.g., age, pose and emotion), while discrete attribute synthesis (like face mask and eyeglasses) receives less attention. Directly applying existing works to facial discrete attributes may cause inaccurate results. In this work, we propose an innovative framework to tackle challenging facial discrete attribute synthesis via semantic decomposing, dubbed SD-GAN. To be concrete, we explicitly decompose the discrete attribute representation into two components, i.e. the semantic prior basis and offset latent representation. The semantic prior basis shows an initializing direction for manipulating face representation in the latent space. The offset latent representation, obtained by a 3D-aware semantic fusion network, is proposed to adjust the prior basis. In addition, the fusion network integrates 3D embedding for better identity preservation and discrete attribute synthesis. The combination of prior basis and offset latent representation enables our method to synthesize photo-realistic face images with discrete attributes. Notably, we construct a large and valuable dataset MEGN (Face Mask and Eyeglasses images crawled from Google and Naver) for completing the lack of discrete attributes in the existing dataset. Extensive qualitative and quantitative experiments demonstrate the state-of-the-art performance of our method. Our code is available at: https://github.com/MontaEllis/SD-GAN.
|
2309.05632
|
Mayank Sewlia
|
Mayank Sewlia, Christos K. Verginis, Dimos V. Dimarogonas
|
MAPS$^2$: Multi-Robot Autonomous Motion Planning under Signal Temporal
Logic Specifications
| null | null | null | null |
cs.RO
|
http://creativecommons.org/licenses/by/4.0/
|
This article presents MAPS$^2$: a distributed algorithm that allows
multi-robot systems to deliver coupled tasks expressed as Signal Temporal Logic
(STL) constraints. Classical control theoretical tools addressing STL
constraints either adopt a limited fragment of the STL formula or require
approximations of min/max operators, whereas works maximising robustness
through optimisation-based methods often suffer from local minima, relaxing any
completeness arguments due to the NP-hard nature of the problem. Endowed with
probabilistic guarantees, MAPS$^2$ provides an anytime algorithm that
iteratively improves the robots' trajectories. The algorithm selectively
imposes spatial constraints by taking advantage of the temporal properties of
the STL. The algorithm is distributed, in the sense that each robot calculates
its trajectory by communicating only with its immediate neighbours as defined
via a communication graph. We illustrate the efficiency of MAPS$^2$ by
conducting extensive simulation and experimental studies, verifying the
generation of STL satisfying trajectories.
|
[
{
"created": "Mon, 11 Sep 2023 17:25:08 GMT",
"version": "v1"
},
{
"created": "Tue, 21 May 2024 13:22:06 GMT",
"version": "v2"
}
] |
2024-05-22
|
[
[
"Sewlia",
"Mayank",
""
],
[
"Verginis",
"Christos K.",
""
],
[
"Dimarogonas",
"Dimos V.",
""
]
] |
This article presents MAPS$^2$: a distributed algorithm that allows multi-robot systems to deliver coupled tasks expressed as Signal Temporal Logic (STL) constraints. Classical control theoretical tools addressing STL constraints either adopt a limited fragment of the STL formula or require approximations of min/max operators, whereas works maximising robustness through optimisation-based methods often suffer from local minima, relaxing any completeness arguments due to the NP-hard nature of the problem. Endowed with probabilistic guarantees, MAPS$^2$ provides an anytime algorithm that iteratively improves the robots' trajectories. The algorithm selectively imposes spatial constraints by taking advantage of the temporal properties of the STL. The algorithm is distributed, in the sense that each robot calculates its trajectory by communicating only with its immediate neighbours as defined via a communication graph. We illustrate the efficiency of MAPS$^2$ by conducting extensive simulation and experimental studies, verifying the generation of STL satisfying trajectories.
|
2203.15241
|
Pan Zhang
|
Pan Zhang, Jianmin Bao, Ting Zhang, Dong Chen, Fang Wen
|
Semi-Supervised Image-to-Image Translation using Latent Space Mapping
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent image-to-image translation works have been transferred from supervised
to unsupervised settings due to the expensive cost of capturing or labeling
large amounts of paired data. However, current unsupervised methods using the
cycle-consistency constraint may not find the desired mapping, especially for
difficult translation tasks. On the other hand, a small number of paired data
are usually accessible. We therefore introduce a general framework for
semi-supervised image translation. Unlike previous works, our main idea is to
learn the translation over the latent feature space instead of the image space.
Thanks to the low dimensional feature space, it is easier to find the desired
mapping function, resulting in improved quality of translation results as well
as the stability of the translation model. Empirically we show that using
feature translation generates better results, even using a few bits of paired
data. Experimental comparisons with state-of-the-art approaches demonstrate the
effectiveness of the proposed framework on a variety of challenging
image-to-image translation tasks.
|
[
{
"created": "Tue, 29 Mar 2022 05:14:26 GMT",
"version": "v1"
}
] |
2022-03-30
|
[
[
"Zhang",
"Pan",
""
],
[
"Bao",
"Jianmin",
""
],
[
"Zhang",
"Ting",
""
],
[
"Chen",
"Dong",
""
],
[
"Wen",
"Fang",
""
]
] |
Recent image-to-image translation works have been transferred from supervised to unsupervised settings due to the expensive cost of capturing or labeling large amounts of paired data. However, current unsupervised methods using the cycle-consistency constraint may not find the desired mapping, especially for difficult translation tasks. On the other hand, a small number of paired data are usually accessible. We therefore introduce a general framework for semi-supervised image translation. Unlike previous works, our main idea is to learn the translation over the latent feature space instead of the image space. Thanks to the low dimensional feature space, it is easier to find the desired mapping function, resulting in improved quality of translation results as well as the stability of the translation model. Empirically we show that using feature translation generates better results, even using a few bits of paired data. Experimental comparisons with state-of-the-art approaches demonstrate the effectiveness of the proposed framework on a variety of challenging image-to-image translation tasks.
|
1607.08073
|
Adrian Groza
|
Adrian Groza, Calin Cara, Sergiu Zaporojan, Igor Calmicov
|
Assisting Drivers During Overtaking Using Car-2-Car Communication and
Multi-Agent Systems
|
preprint ICCP 2016, Cluj-Napoca
| null | null | null |
cs.AI cs.MA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A warning system for assisting drivers during overtaking maneuvers is
proposed. The system relies on Car-2-Car communication technologies and
multi-agent systems. A protocol for safe overtaking is proposed based on ACL
communicative acts. The mathematical model for safe overtaking uses a Kalman
filter to minimize localization error.
|
[
{
"created": "Wed, 27 Jul 2016 13:08:13 GMT",
"version": "v1"
}
] |
2016-07-28
|
[
[
"Groza",
"Adrian",
""
],
[
"Cara",
"Calin",
""
],
[
"Zaporojan",
"Sergiu",
""
],
[
"Calmicov",
"Igor",
""
]
] |
A warning system for assisting drivers during overtaking maneuvers is proposed. The system relies on Car-2-Car communication technologies and multi-agent systems. A protocol for safe overtaking is proposed based on ACL communicative acts. The mathematical model for safe overtaking uses a Kalman filter to minimize localization error.
|
1810.06472
|
Luc Jaulmes
|
Luc Jaulmes, Miquel Moret\'o, Mateo Valero, Marc Casas
|
Memory Vulnerability: A Case for Delaying Error Reporting
| null | null |
10.1109/IOLTS.2019.8854397
| null |
cs.AR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
To face future reliability challenges, it is necessary to quantify the risk
of error in any part of a computing system. To this goal, the Architectural
Vulnerability Factor (AVF) has long been used for chips. However, this metric
is used for offline characterisation, which is inappropriate for memory. We
survey the literature and formalise one of the metrics used, the Memory
Vulnerability Factor, and extend it to take into account false errors. These
are reported errors which would have no impact on the program if they were
ignored. We measure the False Error Aware MVF (FEA) and related metrics
precisely in a cycle-accurate simulator, and compare them with the effects of
injecting faults in a program's data, in native parallel runs. Our findings
show that MVF and FEA are the only two metrics that are safe to use at runtime,
as they both consistently give an upper bound on the probability of incorrect
program outcome. FEA gives a tighter bound than MVF, and is the metric that
correlates best with the incorrect outcome probability of all considered
metrics.
|
[
{
"created": "Mon, 15 Oct 2018 15:44:27 GMT",
"version": "v1"
}
] |
2023-08-02
|
[
[
"Jaulmes",
"Luc",
""
],
[
"Moretó",
"Miquel",
""
],
[
"Valero",
"Mateo",
""
],
[
"Casas",
"Marc",
""
]
] |
To face future reliability challenges, it is necessary to quantify the risk of error in any part of a computing system. To this end, the Architectural Vulnerability Factor (AVF) has long been used for chips. However, this metric is used for offline characterisation, which is inappropriate for memory. We survey the literature and formalise one of the metrics used, the Memory Vulnerability Factor, and extend it to take into account false errors. These are reported errors which would have no impact on the program if they were ignored. We measure the False Error Aware MVF (FEA) and related metrics precisely in a cycle-accurate simulator, and compare them with the effects of injecting faults in a program's data, in native parallel runs. Our findings show that MVF and FEA are the only two metrics that are safe to use at runtime, as they both consistently give an upper bound on the probability of incorrect program outcome. FEA gives a tighter bound than MVF, and is the metric that correlates best with the incorrect outcome probability of all considered metrics.
|
2402.02061
|
Mohammad Ridwan Kabir
|
Mohsinul Kabir, Mohammad Ridwan Kabir and Riasat Siam Islam
|
Islamic Lifestyle Applications: Meeting the Spiritual Needs of Modern
Muslims
|
23 pages
| null | null | null |
cs.HC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
We evaluated contemporary Islamic lifestyle applications supporting religious
practices and motivation among Muslims. We reviewed 11 popular applications
using self-determination theory and the technology-as-experience framework to
assess their support for motivation and affective needs. Most applications lack
features that foster autonomy, competence, and relatedness. We also interviewed
ten devoted Muslim application users to gain insights into their experiences
and unmet needs. Our findings indicate that existing applications fall short in
providing comprehensive learning, social connections, and scholar
consultations. We propose design implications based on our results, including
guided religious information, shareability, virtual community engagement,
scholarly question-answering, and personalized reminders. We aim to inform the
design of Islamic lifestyle applications that better facilitate ritual
practices, benefitting application designers and Muslim communities. Our
research provides valuable insights into the untapped potential for lifestyle
applications to act as religious companions supporting Muslims' spiritual
journey.
|
[
{
"created": "Sat, 3 Feb 2024 06:54:47 GMT",
"version": "v1"
}
] |
2024-02-06
|
[
[
"Kabir",
"Mohsinul",
""
],
[
"Kabir",
"Mohammad Ridwan",
""
],
[
"Islam",
"Riasat Siam",
""
]
] |
We evaluated contemporary Islamic lifestyle applications supporting religious practices and motivation among Muslims. We reviewed 11 popular applications using self-determination theory and the technology-as-experience framework to assess their support for motivation and affective needs. Most applications lack features that foster autonomy, competence, and relatedness. We also interviewed ten devoted Muslim application users to gain insights into their experiences and unmet needs. Our findings indicate that existing applications fall short in providing comprehensive learning, social connections, and scholar consultations. We propose design implications based on our results, including guided religious information, shareability, virtual community engagement, scholarly question-answering, and personalized reminders. We aim to inform the design of Islamic lifestyle applications that better facilitate ritual practices, benefitting application designers and Muslim communities. Our research provides valuable insights into the untapped potential for lifestyle applications to act as religious companions supporting Muslims' spiritual journey.
|
2003.05063
|
Sara Morsy
|
Sara Morsy and George Karypis
|
Context-aware Non-linear and Neural Attentive Knowledge-based Models for
Grade Prediction
|
arXiv admin note: substantial text overlap with arXiv:1904.11858
| null | null | null |
cs.LG cs.CY stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Grade prediction for future courses not yet taken by students is important as
it can help them and their advisers during the process of course selection as
well as for designing personalized degree plans and modifying them based on
their performance. One of the successful approaches for accurately predicting a
student's grades in future courses is Cumulative Knowledge-based Regression
Models (CKRM). CKRM learns shallow linear models that predict a student's
grades as the similarity between his/her knowledge state and the target course.
However, prior courses taken by a student can have different
contributions when estimating a student's knowledge state and towards each
target course, which cannot be captured by linear models. Moreover, CKRM and
other grade prediction methods ignore the effect of concurrently-taken courses
on a student's performance in a target course. In this paper, we propose
context-aware non-linear and neural attentive models that can potentially
better estimate a student's knowledge state from his/her prior course
information, as well as model the interactions between a target course and
concurrent courses. Compared to the competing methods, our experiments on a
large real-world dataset consisting of more than $1.5$M grades show the
effectiveness of the proposed models in accurately predicting students' grades.
Moreover, the attention weights learned by the neural attentive model can be
helpful in better designing their degree plans.
|
[
{
"created": "Mon, 9 Mar 2020 20:20:48 GMT",
"version": "v1"
}
] |
2020-03-12
|
[
[
"Morsy",
"Sara",
""
],
[
"Karypis",
"George",
""
]
] |
Grade prediction for future courses not yet taken by students is important as it can help them and their advisers during the process of course selection as well as for designing personalized degree plans and modifying them based on their performance. One of the successful approaches for accurately predicting a student's grades in future courses is Cumulative Knowledge-based Regression Models (CKRM). CKRM learns shallow linear models that predict a student's grades as the similarity between his/her knowledge state and the target course. However, prior courses taken by a student can have different contributions when estimating a student's knowledge state and towards each target course, which cannot be captured by linear models. Moreover, CKRM and other grade prediction methods ignore the effect of concurrently-taken courses on a student's performance in a target course. In this paper, we propose context-aware non-linear and neural attentive models that can potentially better estimate a student's knowledge state from his/her prior course information, as well as model the interactions between a target course and concurrent courses. Compared to the competing methods, our experiments on a large real-world dataset consisting of more than $1.5$M grades show the effectiveness of the proposed models in accurately predicting students' grades. Moreover, the attention weights learned by the neural attentive model can be helpful in better designing their degree plans.
|
2102.02525
|
Kai Liang
|
Kai Liang and Youlong Wu
|
Improved Communication Efficiency for Distributed Mean Estimation with
Side Information
| null | null | null | null |
cs.IT math.IT
|
http://creativecommons.org/licenses/by/4.0/
|
In this paper, we consider the distributed mean estimation problem where the
server has access to some side information, e.g., its locally computed mean
estimate or the information received from the distributed clients at the
previous iterations. We propose a practical and efficient estimator based on an
r-bit Wyner-Ziv estimator proposed by Mayekar et al., which requires no
probabilistic assumption on the data. Unlike Mayekar's work which only utilizes
side information at the server, our scheme jointly exploits the correlation
between clients' data and the server's side information, and also between data of
different clients. We derive an upper bound of the estimation error of the
proposed estimator. Based on this upper bound, we provide two algorithms on how
to choose input parameters for the estimator. Finally, parameter regions in
which our estimator is better than the previous one are characterized.
|
[
{
"created": "Thu, 4 Feb 2021 10:32:13 GMT",
"version": "v1"
}
] |
2021-02-05
|
[
[
"Liang",
"Kai",
""
],
[
"Wu",
"Youlong",
""
]
] |
In this paper, we consider the distributed mean estimation problem where the server has access to some side information, e.g., its locally computed mean estimate or the information received from the distributed clients at the previous iterations. We propose a practical and efficient estimator based on an r-bit Wyner-Ziv estimator proposed by Mayekar et al., which requires no probabilistic assumption on the data. Unlike Mayekar's work which only utilizes side information at the server, our scheme jointly exploits the correlation between clients' data and the server's side information, and also between data of different clients. We derive an upper bound of the estimation error of the proposed estimator. Based on this upper bound, we provide two algorithms on how to choose input parameters for the estimator. Finally, parameter regions in which our estimator is better than the previous one are characterized.
|
1912.09093
|
Okyay Altay
|
S. Schleiter (1), O. Altay (1) ((1) RWTH Aachen University)
|
Identification of abrupt stiffness changes of structures with tuned mass
dampers under sudden events
|
21 pages, 13 figures. Preprint published in Journal of Structural
Control and Health Monitoring
|
Struct Control Health Monit. 2020:e2530
|
10.1002/stc.2530
| null |
cs.CE cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a recursive system identification method for
multi-degree-of-freedom (MDoF) structures with tuned mass dampers (TMDs)
considering abrupt stiffness changes in case of sudden events, such as
earthquakes. Due to the supplementary non-classical damping of the TMDs, the
system identification of MDoF+TMD systems poses a challenge, in particular in
case of sudden events. This identification method may be helpful for structural
health monitoring of MDoF structures controlled by TMDs. A new adaptation
formulation of the unscented Kalman filter allows the identification method to
track abrupt stiffness changes. The paper, firstly, describes the theoretical
background of the proposed system identification method and afterwards presents
three parametric studies regarding the performance of the method. The first
study shows the augmented state identification by the presented system
identification method applied on a MDoF+TMD system. In this study, the abrupt
stiffness changes of the system are successfully detected and localized under
earthquake, impulse and white noise excitations. The second study investigates
the effects of the state covariance and its relevance for the system
identification of MDoF+TMD systems. The results of this study show the
necessity of an adaptive definition of the state covariance as applied in the
proposed method. The third study investigates the effects of modeling on the
performance of the identification method. Mathematical models with
discretization of different orders of convergence and system noise levels are
studied. The results show that, in particular, MDoF+TMD systems require higher
order mathematical models for an accurate identification of abrupt changes.
|
[
{
"created": "Thu, 19 Dec 2019 09:56:15 GMT",
"version": "v1"
},
{
"created": "Thu, 26 Dec 2019 10:16:12 GMT",
"version": "v2"
},
{
"created": "Mon, 2 Mar 2020 17:08:01 GMT",
"version": "v3"
}
] |
2020-03-03
|
[
[
"Schleiter",
"S.",
"",
"RWTH Aachen University"
],
[
"Altay",
"O.",
"",
"RWTH Aachen University"
]
] |
This paper presents a recursive system identification method for multi-degree-of-freedom (MDoF) structures with tuned mass dampers (TMDs) considering abrupt stiffness changes in case of sudden events, such as earthquakes. Due to the supplementary non-classical damping of the TMDs, the system identification of MDoF+TMD systems poses a challenge, in particular in case of sudden events. This identification method may be helpful for structural health monitoring of MDoF structures controlled by TMDs. A new adaptation formulation of the unscented Kalman filter allows the identification method to track abrupt stiffness changes. The paper, firstly, describes the theoretical background of the proposed system identification method and afterwards presents three parametric studies regarding the performance of the method. The first study shows the augmented state identification by the presented system identification method applied on a MDoF+TMD system. In this study, the abrupt stiffness changes of the system are successfully detected and localized under earthquake, impulse and white noise excitations. The second study investigates the effects of the state covariance and its relevance for the system identification of MDoF+TMD systems. The results of this study show the necessity of an adaptive definition of the state covariance as applied in the proposed method. The third study investigates the effects of modeling on the performance of the identification method. Mathematical models with discretization of different orders of convergence and system noise levels are studied. The results show that, in particular, MDoF+TMD systems require higher order mathematical models for an accurate identification of abrupt changes.
|
2208.06537
|
Mingyuan Fan
|
Mingyuan Fan, Yang Liu, Cen Chen, Ximeng Liu, Wenzhong Guo
|
Defense against Backdoor Attacks via Identifying and Purifying Bad
Neurons
| null | null | null | null |
cs.LG cs.CR cs.CV
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The opacity of neural networks leads to their vulnerability to backdoor attacks,
where hidden attention of infected neurons is triggered to override normal
predictions to the attacker-chosen ones. In this paper, we propose a novel
backdoor defense method to mark and purify the infected neurons in the
backdoored neural networks. Specifically, we first define a new metric, called
benign salience. By combining the first-order gradient to retain the
connections between neurons, benign salience can identify the infected neurons
with higher accuracy than the commonly used metric in backdoor defense. Then, a
new Adaptive Regularization (AR) mechanism is proposed to assist in purifying
these identified infected neurons via fine-tuning. Due to the ability to adapt
to different magnitudes of parameters, AR can provide faster and more stable
convergence than the common regularization mechanism in neuron purifying.
Extensive experimental results demonstrate that our method can erase the
backdoor in neural networks with negligible performance degradation.
|
[
{
"created": "Sat, 13 Aug 2022 01:10:20 GMT",
"version": "v1"
}
] |
2022-08-16
|
[
[
"Fan",
"Mingyuan",
""
],
[
"Liu",
"Yang",
""
],
[
"Chen",
"Cen",
""
],
[
"Liu",
"Ximeng",
""
],
[
"Guo",
"Wenzhong",
""
]
] |
The opacity of neural networks leads to their vulnerability to backdoor attacks, where hidden attention of infected neurons is triggered to override normal predictions to the attacker-chosen ones. In this paper, we propose a novel backdoor defense method to mark and purify the infected neurons in the backdoored neural networks. Specifically, we first define a new metric, called benign salience. By combining the first-order gradient to retain the connections between neurons, benign salience can identify the infected neurons with higher accuracy than the commonly used metric in backdoor defense. Then, a new Adaptive Regularization (AR) mechanism is proposed to assist in purifying these identified infected neurons via fine-tuning. Due to the ability to adapt to different magnitudes of parameters, AR can provide faster and more stable convergence than the common regularization mechanism in neuron purifying. Extensive experimental results demonstrate that our method can erase the backdoor in neural networks with negligible performance degradation.
|
2312.16954
|
Yue Han
|
Yue Han, Jinguang Han, Weizhi Meng, Jianchang Lai, Ge Wu
|
Blockchain-based Privacy-Preserving Public Key Searchable Encryption
with Strong Traceability
| null | null | null | null |
cs.CR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Public key searchable encryption (PKSE) scheme allows data users to search
over encrypted data. To identify illegal users, many traceable PKSE schemes
have been proposed. However, existing schemes cannot trace the keywords which
illegal users searched and protect users' privacy simultaneously. In some
practical applications, tracing both illegal users' identities and the keywords
which they searched is quite important to prevent the abuse of data. It is a
challenge to bind users' identities and keywords while protecting their
privacy. Moreover, existing traceable PKSE schemes do not consider the
unforgeability and immutability of trapdoor query records, which can lead to
the occurrence of frame-up and denying. In this paper, to solve these problems,
we propose a blockchain-based privacy-preserving PKSE with strong traceability
(BP3KSEST) scheme. Our scheme provides the following features: (1) authorized
users can authenticate to trapdoor generation center and obtain trapdoors
without releasing their identities and keywords; (2) when data users misbehave
in the system, the trusted third party (TTP) can trace both their identities
and the keywords which they searched; (3) trapdoor query records are
unforgeable; (4) trapdoor query records are immutable because records are
stored in blockchain. Notably, this scheme is suitable for scenarios where
privacy must be considered, e.g., electronic health record (EHR). We formalize
both the definition and security model of our BP3KSEST scheme, and present a
concrete construction. Furthermore, the security of the proposed scheme is
formally proven. Finally, the implementation and evaluation are conducted to
analyze its efficiency.
|
[
{
"created": "Thu, 28 Dec 2023 10:58:14 GMT",
"version": "v1"
}
] |
2023-12-29
|
[
[
"Han",
"Yue",
""
],
[
"Han",
"Jinguang",
""
],
[
"Meng",
"Weizhi",
""
],
[
"Lai",
"Jianchang",
""
],
[
"Wu",
"Ge",
""
]
] |
Public key searchable encryption (PKSE) scheme allows data users to search over encrypted data. To identify illegal users, many traceable PKSE schemes have been proposed. However, existing schemes cannot trace the keywords which illegal users searched and protect users' privacy simultaneously. In some practical applications, tracing both illegal users' identities and the keywords which they searched is quite important to prevent the abuse of data. It is a challenge to bind users' identities and keywords while protecting their privacy. Moreover, existing traceable PKSE schemes do not consider the unforgeability and immutability of trapdoor query records, which can lead to the occurrence of frame-up and denying. In this paper, to solve these problems, we propose a blockchain-based privacy-preserving PKSE with strong traceability (BP3KSEST) scheme. Our scheme provides the following features: (1) authorized users can authenticate to trapdoor generation center and obtain trapdoors without releasing their identities and keywords; (2) when data users misbehave in the system, the trusted third party (TTP) can trace both their identities and the keywords which they searched; (3) trapdoor query records are unforgeable; (4) trapdoor query records are immutable because records are stored in blockchain. Notably, this scheme is suitable for scenarios where privacy must be considered, e.g., electronic health record (EHR). We formalize both the definition and security model of our BP3KSEST scheme, and present a concrete construction. Furthermore, the security of the proposed scheme is formally proven. Finally, the implementation and evaluation are conducted to analyze its efficiency.
|
1909.08864
|
Michael Smith
|
Michael Thomas Smith, Kathrin Grosse, Michael Backes, Mauricio A
Alvarez
|
Adversarial Vulnerability Bounds for Gaussian Process Classification
|
10 pages + 2 pages references + 7 pages of supplementary. 12 figures.
Submitted to AAAI
| null | null | null |
cs.CR cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Machine learning (ML) classification is increasingly used in safety-critical
systems. Protecting ML classifiers from adversarial examples is crucial. We
propose that the main threat is that of an attacker perturbing a confidently
classified input to produce a confident misclassification. To protect against
this we devise an adversarial bound (AB) for a Gaussian process classifier,
that holds for the entire input domain, bounding the potential for any future
adversarial method to cause such misclassification. This is a formal guarantee
of robustness, not just an empirically derived result. We investigate how to
configure the classifier to maximise the bound, including the use of a sparse
approximation, leading to the method producing a practical, useful and provably
robust classifier, which we test using a variety of datasets.
|
[
{
"created": "Thu, 19 Sep 2019 08:50:01 GMT",
"version": "v1"
}
] |
2019-09-20
|
[
[
"Smith",
"Michael Thomas",
""
],
[
"Grosse",
"Kathrin",
""
],
[
"Backes",
"Michael",
""
],
[
"Alvarez",
"Mauricio A",
""
]
] |
Machine learning (ML) classification is increasingly used in safety-critical systems. Protecting ML classifiers from adversarial examples is crucial. We propose that the main threat is that of an attacker perturbing a confidently classified input to produce a confident misclassification. To protect against this we devise an adversarial bound (AB) for a Gaussian process classifier, that holds for the entire input domain, bounding the potential for any future adversarial method to cause such misclassification. This is a formal guarantee of robustness, not just an empirically derived result. We investigate how to configure the classifier to maximise the bound, including the use of a sparse approximation, leading to the method producing a practical, useful and provably robust classifier, which we test using a variety of datasets.
|
0804.2614
|
Laura Anna Ripamonti
|
Laura Anna Ripamonti, Ines Di Loreto, Dario Maggiorini
|
Augmenting Actual Life Through MUVEs
| null | null | null | null |
cs.HC cs.CY
|
http://creativecommons.org/licenses/by-nc-sa/3.0/
|
The necessity of supporting more and more social interaction (and not only
the mere information sharing) in online environments is the disruptive force
upon which phenomena ascribed to the Web2.0 paradigm continuously bud. People
interacting in online socio-technical environments mould technology on their
needs, seamlessly integrating it into their everyday life. MUVEs (Multi User
Virtual Environments) are no exception and, in several cases, represent the new
frontier in this field. In this work we analyze if and how MUVEs can be
considered a means for augmenting communities' (and, more generally, people's) lives.
We trace a framework of analysis based on four main observations, and through
these lenses we look at Second Life and at several projects we are currently
developing in that synthetic world.
|
[
{
"created": "Wed, 16 Apr 2008 14:43:31 GMT",
"version": "v1"
}
] |
2008-04-17
|
[
[
"Ripamonti",
"Laura Anna",
""
],
[
"Di Loreto",
"Ines",
""
],
[
"Maggiorini",
"Dario",
""
]
] |
The necessity of supporting more and more social interaction (and not only the mere information sharing) in online environments is the disruptive force upon which phenomena ascribed to the Web2.0 paradigm continuously bud. People interacting in online socio-technical environments mould technology on their needs, seamlessly integrating it into their everyday life. MUVEs (Multi User Virtual Environments) are no exception and, in several cases, represent the new frontier in this field. In this work we analyze if and how MUVEs can be considered a means for augmenting communities' (and, more generally, people's) lives. We trace a framework of analysis based on four main observations, and through these lenses we look at Second Life and at several projects we are currently developing in that synthetic world.
|
2405.06929
|
Jianzong Wang
|
Shenglin He, Xiaoyang Qu, Jiguang Wan, Guokuan Li, Changsheng Xie,
Jianzong Wang
|
PRENet: A Plane-Fit Redundancy Encoding Point Cloud Sequence Network for
Real-Time 3D Action Recognition
|
Accepted by the 2024 International Joint Conference on Neural
Networks (IJCNN 2024)
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recognizing human actions from point cloud sequences has attracted tremendous
attention from both academia and industry due to its wide applications.
However, most previous studies on point cloud action recognition typically
require complex networks to extract intra-frame spatial features and
inter-frame temporal features, resulting in an excessive number of redundant
computations. This leads to high latency, rendering them impractical for
real-world applications. To address this problem, we propose a Plane-Fit
Redundancy Encoding point cloud sequence network named PRENet. The primary
concept of our approach involves the utilization of plane fitting to mitigate
spatial redundancy within the sequence, concurrently encoding the temporal
redundancy of the entire sequence to minimize redundant computations.
Specifically, our network comprises two principal modules: a Plane-Fit
Embedding module and a Spatio-Temporal Consistency Encoding module. The
Plane-Fit Embedding module capitalizes on the observation that successive point
cloud frames exhibit unique geometric features in physical space, allowing for
the reuse of spatially encoded data for temporal stream encoding. The
Spatio-Temporal Consistency Encoding module amalgamates the temporal structure
of the temporally redundant part with its corresponding spatial arrangement,
thereby enhancing recognition accuracy. We have conducted numerous experiments to
verify the effectiveness of our network. The experimental results demonstrate
that our method achieves almost identical recognition accuracy while being
nearly four times faster than other state-of-the-art methods.
|
[
{
"created": "Sat, 11 May 2024 06:20:28 GMT",
"version": "v1"
}
] |
2024-05-14
|
[
[
"He",
"Shenglin",
""
],
[
"Qu",
"Xiaoyang",
""
],
[
"Wan",
"Jiguang",
""
],
[
"Li",
"Guokuan",
""
],
[
"Xie",
"Changsheng",
""
],
[
"Wang",
"Jianzong",
""
]
] |
Recognizing human actions from point cloud sequences has attracted tremendous attention from both academia and industry due to its wide applications. However, most previous studies on point cloud action recognition typically require complex networks to extract intra-frame spatial features and inter-frame temporal features, resulting in an excessive number of redundant computations. This leads to high latency, rendering them impractical for real-world applications. To address this problem, we propose a Plane-Fit Redundancy Encoding point cloud sequence network named PRENet. The primary concept of our approach involves the utilization of plane fitting to mitigate spatial redundancy within the sequence, concurrently encoding the temporal redundancy of the entire sequence to minimize redundant computations. Specifically, our network comprises two principal modules: a Plane-Fit Embedding module and a Spatio-Temporal Consistency Encoding module. The Plane-Fit Embedding module capitalizes on the observation that successive point cloud frames exhibit unique geometric features in physical space, allowing for the reuse of spatially encoded data for temporal stream encoding. The Spatio-Temporal Consistency Encoding module amalgamates the temporal structure of the temporally redundant part with its corresponding spatial arrangement, thereby enhancing recognition accuracy. We have conducted numerous experiments to verify the effectiveness of our network. The experimental results demonstrate that our method achieves almost identical recognition accuracy while being nearly four times faster than other state-of-the-art methods.
|
1810.04611
|
Yaqian Zhang
|
Yaqian Zhang, Zhifang Zhang
|
Scalar MSCR Codes via the Product Matrix Construction
|
16 pages
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An $(n,k,d)$ cooperative regenerating code provides the optimal-bandwidth
repair for any $t~(t\!>\!1)$ node failures in a cooperative way. In particular,
an MSCR (minimum storage cooperative regenerating) code retains the same
storage overhead as an $(n,k)$ MDS code. Suppose each node stores $\alpha$
symbols which indicates the sub-packetization level of the code. A scalar MSCR
code attains the minimum sub-packetization, i.e., $\alpha=d-k+t$. By now, all
existing constructions of scalar MSCR codes restrict to very special
parameters, e.g., $d=k$ or $k=2$. In a recent work, Ye and Barg construct
MSCR codes for all $n,k,d,t$, however, their construction needs
$\alpha\approx{\rm exp}(n^t)$ which is almost infeasible in practice. In this
paper, we give an explicit construction of scalar MSCR codes for all $d\geq
\max\{2k-1-t,k\}$, which covers all possible parameters except the case of
$k\leq d\leq 2k-2-t$ when $k<2k-1-t$. Moreover, as a complementary result, for
$k<d<2k-2-t$ we prove the nonexistence of linear scalar MSCR codes that have
invariant repair spaces. Our construction and most of the previous scalar MSCR
codes all have invariant repair spaces and this property is appealing in
practice because of convenient repair. As a result, this work presents an
almost full description of linear scalar MSCR codes.
|
[
{
"created": "Wed, 10 Oct 2018 16:13:41 GMT",
"version": "v1"
}
] |
2018-10-11
|
[
[
"Zhang",
"Yaqian",
""
],
[
"Zhang",
"Zhifang",
""
]
] |
An $(n,k,d)$ cooperative regenerating code provides the optimal-bandwidth repair for any $t~(t\!>\!1)$ node failures in a cooperative way. In particular, an MSCR (minimum storage cooperative regenerating) code retains the same storage overhead as an $(n,k)$ MDS code. Suppose each node stores $\alpha$ symbols which indicates the sub-packetization level of the code. A scalar MSCR code attains the minimum sub-packetization, i.e., $\alpha=d-k+t$. By now, all existing constructions of scalar MSCR codes restrict to very special parameters, e.g., $d=k$ or $k=2$. In a recent work, Ye and Barg construct MSCR codes for all $n,k,d,t$, however, their construction needs $\alpha\approx{\rm exp}(n^t)$ which is almost infeasible in practice. In this paper, we give an explicit construction of scalar MSCR codes for all $d\geq \max\{2k-1-t,k\}$, which covers all possible parameters except the case of $k\leq d\leq 2k-2-t$ when $k<2k-1-t$. Moreover, as a complementary result, for $k<d<2k-2-t$ we prove the nonexistence of linear scalar MSCR codes that have invariant repair spaces. Our construction and most of the previous scalar MSCR codes all have invariant repair spaces and this property is appealing in practice because of convenient repair. As a result, this work presents an almost full description of linear scalar MSCR codes.
|
2005.12536
|
Zhouxia Wang
|
Zhouxia Wang, Jiawei Zhang, Mude Lin, Jiong Wang, Ping Luo, and Jimmy
Ren
|
Learning a Reinforced Agent for Flexible Exposure Bracketing Selection
|
to be published in CVPR 2020
| null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Automatically selecting exposure bracketing (images exposed differently) is
important to obtain a high dynamic range image by using multi-exposure fusion.
Unlike previous methods that have many restrictions such as requiring camera
response function, sensor noise model, and a stream of preview images with
different exposures (not accessible in some scenarios e.g. some mobile
applications), we propose a novel deep neural network to automatically select
exposure bracketing, named EBSNet, which is sufficiently flexible without
having the above restrictions. EBSNet is formulated as a reinforced agent that
is trained by maximizing rewards provided by a multi-exposure fusion network
(MEFNet). By utilizing the illumination and semantic information extracted from
just a single auto-exposure preview image, EBSNet can select an optimal
exposure bracketing for multi-exposure fusion. EBSNet and MEFNet can be jointly
trained to produce favorable results against recent state-of-the-art
approaches. To facilitate future research, we provide a new benchmark dataset
for multi-exposure selection and fusion.
|
[
{
"created": "Tue, 26 May 2020 06:24:42 GMT",
"version": "v1"
}
] |
2020-05-27
|
[
[
"Wang",
"Zhouxia",
""
],
[
"Zhang",
"Jiawei",
""
],
[
"Lin",
"Mude",
""
],
[
"Wang",
"Jiong",
""
],
[
"Luo",
"Ping",
""
],
[
"Ren",
"Jimmy",
""
]
] |
Automatically selecting exposure bracketing (images exposed differently) is important to obtain a high dynamic range image by using multi-exposure fusion. Unlike previous methods that have many restrictions such as requiring camera response function, sensor noise model, and a stream of preview images with different exposures (not accessible in some scenarios e.g. some mobile applications), we propose a novel deep neural network to automatically select exposure bracketing, named EBSNet, which is sufficiently flexible without having the above restrictions. EBSNet is formulated as a reinforced agent that is trained by maximizing rewards provided by a multi-exposure fusion network (MEFNet). By utilizing the illumination and semantic information extracted from just a single auto-exposure preview image, EBSNet can select an optimal exposure bracketing for multi-exposure fusion. EBSNet and MEFNet can be jointly trained to produce favorable results against recent state-of-the-art approaches. To facilitate future research, we provide a new benchmark dataset for multi-exposure selection and fusion.
|
2404.01535
|
Laboni Sarker
|
Laboni Sarker, Mara Downing, Achintya Desai and Tevfik Bultan
|
Syntactic Robustness for LLM-based Code Generation
|
12 pages, 12 figures
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Rapid advances in the field of Large Language Models (LLMs) have made
LLM-based code generation an important area for investigation. An LLM-based
code generator takes a prompt as input and produces code that implements the
requirements specified in the prompt. Many software requirements include
mathematical formulas that specify the expected behavior of the code to be
generated. Given a code generation prompt that includes a mathematical formula,
a reasonable expectation is that, if the formula is syntactically modified
without changing its semantics, the generated code for the modified prompt
should be semantically equivalent. We formalize this concept as syntactic
robustness and investigate the syntactic robustness of GPT-3.5-Turbo and GPT-4
as code generators. To test syntactic robustness, we generate syntactically
different but semantically equivalent versions of prompts using a set of
mutators that only modify mathematical formulas in prompts. In this paper, we
focus on prompts that ask for code that generates solutions to variables in an
equation, when given coefficients of the equation as input. Our experimental
evaluation demonstrates that GPT-3.5-Turbo and GPT-4 are not syntactically
robust for this type of prompt. To improve syntactic robustness, we define a
set of reductions that transform the formulas to a simplified form and use
these reductions as a pre-processing step. Our experimental results indicate
that the syntactic robustness of LLM-based code generation can be improved
using our approach.
|
[
{
"created": "Mon, 1 Apr 2024 23:55:05 GMT",
"version": "v1"
}
] |
2024-04-03
|
[
[
"Sarker",
"Laboni",
""
],
[
"Downing",
"Mara",
""
],
[
"Desai",
"Achintya",
""
],
[
"Bultan",
"Tevfik",
""
]
] |
Rapid advances in the field of Large Language Models (LLMs) have made LLM-based code generation an important area for investigation. An LLM-based code generator takes a prompt as input and produces code that implements the requirements specified in the prompt. Many software requirements include mathematical formulas that specify the expected behavior of the code to be generated. Given a code generation prompt that includes a mathematical formula, a reasonable expectation is that, if the formula is syntactically modified without changing its semantics, the generated code for the modified prompt should be semantically equivalent. We formalize this concept as syntactic robustness and investigate the syntactic robustness of GPT-3.5-Turbo and GPT-4 as code generators. To test syntactic robustness, we generate syntactically different but semantically equivalent versions of prompts using a set of mutators that only modify mathematical formulas in prompts. In this paper, we focus on prompts that ask for code that generates solutions to variables in an equation, when given coefficients of the equation as input. Our experimental evaluation demonstrates that GPT-3.5-Turbo and GPT-4 are not syntactically robust for this type of prompt. To improve syntactic robustness, we define a set of reductions that transform the formulas to a simplified form and use these reductions as a pre-processing step. Our experimental results indicate that the syntactic robustness of LLM-based code generation can be improved using our approach.
|
1707.08813
|
Sean Maudsley-Barton Mr
|
Sean Maudsley-Barton, Jamie McPheey, Anthony Bukowski, Daniel
Leightley and Moi Hoon Yap
|
A Comparative Study of the Clinical use of Motion Analysis from Kinect
Skeleton Data
| null | null |
10.1109/SMC.2017.8123052
| null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The analysis of human motion as a clinical tool can bring many benefits such
as the early detection of disease and the monitoring of recovery, in turn
helping people to lead independent lives. However, it is currently underused.
Developments in depth cameras, such as Kinect, have opened up the use of motion
analysis in settings such as GP surgeries, care homes and private homes. To
provide an insight into the use of Kinect in the healthcare domain, we present
a review of the current state of the art. We then propose a method that can
represent human motions from time-series data of arbitrary length, as a single
vector. Finally, we demonstrate the utility of this method by extracting a set
of clinically significant features and using them to detect the age related
changes in the motions of a set of 54 individuals, with a high degree of
certainty (F1-score between 0.9 and 1.0), indicating its potential application
in the detection of a range of age-related motion impairments.
|
[
{
"created": "Thu, 27 Jul 2017 10:55:43 GMT",
"version": "v1"
},
{
"created": "Mon, 31 Jul 2017 08:42:04 GMT",
"version": "v2"
}
] |
2018-04-10
|
[
[
"Maudsley-Barton",
"Sean",
""
],
[
"McPheey",
"Jamie",
""
],
[
"Bukowski",
"Anthony",
""
],
[
"Leightley",
"Daniel",
""
],
[
"Yap",
"Moi Hoon",
""
]
] |
The analysis of human motion as a clinical tool can bring many benefits such as the early detection of disease and the monitoring of recovery, in turn helping people to lead independent lives. However, it is currently underused. Developments in depth cameras, such as Kinect, have opened up the use of motion analysis in settings such as GP surgeries, care homes and private homes. To provide an insight into the use of Kinect in the healthcare domain, we present a review of the current state of the art. We then propose a method that can represent human motions from time-series data of arbitrary length, as a single vector. Finally, we demonstrate the utility of this method by extracting a set of clinically significant features and using them to detect the age-related changes in the motions of a set of 54 individuals, with a high degree of certainty (F1-score between 0.9 and 1.0), indicating its potential application in the detection of a range of age-related motion impairments.
|
1602.02658
|
Tom Zahavy
|
Tom Zahavy, Nir Ben Zrihem, Shie Mannor
|
Graying the black box: Understanding DQNs
| null | null | null | null |
cs.LG cs.AI cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years there has been growing interest in using deep representations
for reinforcement learning. In this paper, we present a methodology and tools
to analyze Deep Q-networks (DQNs) in a non-blind manner. Moreover, we propose a
new model, the Semi Aggregated Markov Decision Process (SAMDP), and an
algorithm that learns it automatically. The SAMDP model allows us to identify
spatio-temporal abstractions directly from features and may be used as a
sub-goal detector in future work. Using our tools we reveal that the features
learned by DQNs aggregate the state space in a hierarchical fashion, explaining
its success. Moreover, we are able to understand and describe the policies
learned by DQNs for three different Atari2600 games and suggest ways to
interpret, debug and optimize deep neural networks in reinforcement learning.
|
[
{
"created": "Mon, 8 Feb 2016 17:27:31 GMT",
"version": "v1"
},
{
"created": "Tue, 9 Feb 2016 16:13:00 GMT",
"version": "v2"
},
{
"created": "Wed, 17 Feb 2016 19:15:55 GMT",
"version": "v3"
},
{
"created": "Mon, 24 Apr 2017 09:57:21 GMT",
"version": "v4"
}
] |
2017-04-25
|
[
[
"Zahavy",
"Tom",
""
],
[
"Zrihem",
"Nir Ben",
""
],
[
"Mannor",
"Shie",
""
]
] |
In recent years there has been growing interest in using deep representations for reinforcement learning. In this paper, we present a methodology and tools to analyze Deep Q-networks (DQNs) in a non-blind manner. Moreover, we propose a new model, the Semi Aggregated Markov Decision Process (SAMDP), and an algorithm that learns it automatically. The SAMDP model allows us to identify spatio-temporal abstractions directly from features and may be used as a sub-goal detector in future work. Using our tools we reveal that the features learned by DQNs aggregate the state space in a hierarchical fashion, explaining its success. Moreover, we are able to understand and describe the policies learned by DQNs for three different Atari2600 games and suggest ways to interpret, debug and optimize deep neural networks in reinforcement learning.
|
1802.05593
|
Pradip Sircar
|
Pradip Sircar
|
System Identification via Polynomial Transformation Method
|
10 pages, 3 figures, 2 tables
| null | null |
SciTopics, October 9, 2011
|
cs.SY math.NA
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a method based on minimum-variance polynomial approximation to
extract system poles from a data set of samples of the impulse response of a
linear system. The method is capable of handling the problem under general
conditions of sampling and noise characteristics. The superiority of the
proposed method is demonstrated by statistical comparison of its performance
with the performances of two existing methods in the special case of uniform
sampling.
|
[
{
"created": "Thu, 15 Feb 2018 15:11:52 GMT",
"version": "v1"
}
] |
2018-02-16
|
[
[
"Sircar",
"Pradip",
""
]
] |
We propose a method based on minimum-variance polynomial approximation to extract system poles from a data set of samples of the impulse response of a linear system. The method is capable of handling the problem under general conditions of sampling and noise characteristics. The superiority of the proposed method is demonstrated by statistical comparison of its performance with the performances of two existing methods in the special case of uniform sampling.
|
2004.04342
|
Yang Yang
|
Adam Golinski, Reza Pourreza, Yang Yang, Guillaume Sautiere, Taco S
Cohen
|
Feedback Recurrent Autoencoder for Video Compression
| null | null | null | null |
cs.LG cs.CV stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
Recent advances in deep generative modeling have enabled efficient modeling
of high dimensional data distributions and opened up a new horizon for solving
data compression problems. Specifically, autoencoder based learned image or
video compression solutions are emerging as strong competitors to traditional
approaches. In this work, we propose a new network architecture, based on
common and well studied components, for learned video compression operating in
low latency mode. Our method yields state of the art MS-SSIM/rate performance
on the high-resolution UVG dataset, among both learned video compression
approaches and classical video compression methods (H.265 and H.264) in the
rate range of interest for streaming applications. Additionally, we provide an
analysis of existing approaches through the lens of their underlying
probabilistic graphical models. Finally, we point out issues with temporal
consistency and color shift observed in empirical evaluation, and suggest
directions forward to alleviate those.
|
[
{
"created": "Thu, 9 Apr 2020 02:58:07 GMT",
"version": "v1"
}
] |
2020-04-10
|
[
[
"Golinski",
"Adam",
""
],
[
"Pourreza",
"Reza",
""
],
[
"Yang",
"Yang",
""
],
[
"Sautiere",
"Guillaume",
""
],
[
"Cohen",
"Taco S",
""
]
] |
Recent advances in deep generative modeling have enabled efficient modeling of high dimensional data distributions and opened up a new horizon for solving data compression problems. Specifically, autoencoder based learned image or video compression solutions are emerging as strong competitors to traditional approaches. In this work, we propose a new network architecture, based on common and well studied components, for learned video compression operating in low latency mode. Our method yields state of the art MS-SSIM/rate performance on the high-resolution UVG dataset, among both learned video compression approaches and classical video compression methods (H.265 and H.264) in the rate range of interest for streaming applications. Additionally, we provide an analysis of existing approaches through the lens of their underlying probabilistic graphical models. Finally, we point out issues with temporal consistency and color shift observed in empirical evaluation, and suggest directions forward to alleviate those.
|
0808.3990
|
Mehmet A. S\"uzen
|
Mehmet S\"uzen and Ziya S\"uzen
|
Adaptive Dynamic Congestion Avoidance with Master Equation
|
7 pages, 2 figure, technical report
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes an adaptive variant of Random Early Detection (RED)
gateway queue management for packet-switched networks via a discrete state
analog of the non-stationary Master Equation i.e. Markov process. The
computation of average queue size, which appeared in the original RED
algorithm, is altered by introducing a probability $P(l,t)$, which defines the
probability of having $l$ number of packets in the queue at the given time $t$,
and depends upon the previous state of the queue. This brings the advantage of
eliminating a free parameter: queue weight, completely. Computation of
transition rates and probabilities are carried out on the fly, and determined
by the algorithm automatically. Simulations with unstructured packets
illustrate the method, the performance of the adaptive variant of RED
algorithm, and the comparison with the standard RED.
|
[
{
"created": "Thu, 28 Aug 2008 20:36:59 GMT",
"version": "v1"
},
{
"created": "Thu, 18 Sep 2008 01:21:46 GMT",
"version": "v2"
}
] |
2008-09-18
|
[
[
"Süzen",
"Mehmet",
""
],
[
"Süzen",
"Ziya",
""
]
] |
This paper proposes an adaptive variant of Random Early Detection (RED) gateway queue management for packet-switched networks via a discrete state analog of the non-stationary Master Equation i.e. Markov process. The computation of average queue size, which appeared in the original RED algorithm, is altered by introducing a probability $P(l,t)$, which defines the probability of having $l$ number of packets in the queue at the given time $t$, and depends upon the previous state of the queue. This brings the advantage of eliminating a free parameter: queue weight, completely. Computation of transition rates and probabilities are carried out on the fly, and determined by the algorithm automatically. Simulations with unstructured packets illustrate the method, the performance of the adaptive variant of RED algorithm, and the comparison with the standard RED.
|
0707.0762
|
Richard McClatchey
|
Irfan Habib, Kamran Soomro, Ashiq Anjum, Richard McClatchey, Arshad
Ali, Peter Bloodsworth
|
PhantomOS: A Next Generation Grid Operating System
|
8 pages, 6 figures. Presented at the UK eScience All Hands Meeting
2007 (AHM07). Nottingham, UK. September 2007
| null | null | null |
cs.DC
| null |
Grid Computing has made substantial advances in the past decade; these are
primarily due to the adoption of standardized Grid middleware. However Grid
computing has not yet become pervasive because of some barriers that we believe
have been caused by the adoption of middleware centric approaches. These
barriers include: scant support for major types of applications such as
interactive applications; lack of flexible, autonomic and scalable Grid
architectures; lack of plug-and-play Grid computing and, most importantly, no
straightforward way to setup and administer Grids. PhantomOS is a project which
aims to address many of these barriers. Its goal is the creation of a user
friendly pervasive Grid computing platform that facilitates the rapid
deployment and easy maintenance of Grids whilst providing support for major
types of applications on Grids of almost any topology. In this paper we present
the detailed system architecture and an overview of its implementation.
|
[
{
"created": "Thu, 5 Jul 2007 11:14:45 GMT",
"version": "v1"
}
] |
2007-07-06
|
[
[
"Habib",
"Irfan",
""
],
[
"Soomro",
"Kamran",
""
],
[
"Anjum",
"Ashiq",
""
],
[
"McClatchey",
"Richard",
""
],
[
"Ali",
"Arshad",
""
],
[
"Bloodsworth",
"Peter",
""
]
] |
Grid Computing has made substantial advances in the past decade; these are primarily due to the adoption of standardized Grid middleware. However Grid computing has not yet become pervasive because of some barriers that we believe have been caused by the adoption of middleware centric approaches. These barriers include: scant support for major types of applications such as interactive applications; lack of flexible, autonomic and scalable Grid architectures; lack of plug-and-play Grid computing and, most importantly, no straightforward way to setup and administer Grids. PhantomOS is a project which aims to address many of these barriers. Its goal is the creation of a user friendly pervasive Grid computing platform that facilitates the rapid deployment and easy maintenance of Grids whilst providing support for major types of applications on Grids of almost any topology. In this paper we present the detailed system architecture and an overview of its implementation.
|
1508.05814
|
Tomoyuki Yamakami
|
Tomoyuki Yamakami
|
Structural Complexity of Multi-Valued Partial Functions Computed by
Nondeterministic Pushdown Automata
|
(Extended Abstract, A4, 10pt, 8 pages) This extended abstract has
already appeared in the Proceedings of the 15th Italian Conference of
Theoretical Computer Science (ICTCS 2014), September 17-19, Perugia, Italy,
CEUR Workshop Proceedings, vol.1231, pp.225-236, 2014
| null | null | null |
cs.FL cs.CC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper continues a systematic and comprehensive study on the structural
properties of CFL functions, which are in general multi-valued partial
functions computed by one-way one-head nondeterministic pushdown automata
equipped with write-only output tapes (or pushdown transducers), where CFL
refers to a relevance to context-free languages. The CFL functions tend to
behave quite differently from their corresponding context-free languages. We
extensively discuss containments, separations, and refinements among various
classes of functions obtained from the CFL functions by applying Boolean
operations, functional composition, many-one relativization, and Turing
relativization. In particular, Turing relativization helps construct a
hierarchy over the class of CFL functions. We also analyze the computational
complexity of optimization functions, which are to find optimal values of CFL
functions, and discuss their relationships to the associated languages.
|
[
{
"created": "Mon, 24 Aug 2015 14:13:49 GMT",
"version": "v1"
}
] |
2015-08-25
|
[
[
"Yamakami",
"Tomoyuki",
""
]
] |
This paper continues a systematic and comprehensive study on the structural properties of CFL functions, which are in general multi-valued partial functions computed by one-way one-head nondeterministic pushdown automata equipped with write-only output tapes (or pushdown transducers), where CFL refers to a relevance to context-free languages. The CFL functions tend to behave quite differently from their corresponding context-free languages. We extensively discuss containments, separations, and refinements among various classes of functions obtained from the CFL functions by applying Boolean operations, functional composition, many-one relativization, and Turing relativization. In particular, Turing relativization helps construct a hierarchy over the class of CFL functions. We also analyze the computational complexity of optimization functions, which are to find optimal values of CFL functions, and discuss their relationships to the associated languages.
|
2106.01345
|
Lili Chen
|
Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover,
Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch
|
Decision Transformer: Reinforcement Learning via Sequence Modeling
|
First two authors contributed equally. Last two authors advised
equally
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce a framework that abstracts Reinforcement Learning (RL) as a
sequence modeling problem. This allows us to draw upon the simplicity and
scalability of the Transformer architecture, and associated advances in
language modeling such as GPT-x and BERT. In particular, we present Decision
Transformer, an architecture that casts the problem of RL as conditional
sequence modeling. Unlike prior approaches to RL that fit value functions or
compute policy gradients, Decision Transformer simply outputs the optimal
actions by leveraging a causally masked Transformer. By conditioning an
autoregressive model on the desired return (reward), past states, and actions,
our Decision Transformer model can generate future actions that achieve the
desired return. Despite its simplicity, Decision Transformer matches or exceeds
the performance of state-of-the-art model-free offline RL baselines on Atari,
OpenAI Gym, and Key-to-Door tasks.
|
[
{
"created": "Wed, 2 Jun 2021 17:53:39 GMT",
"version": "v1"
},
{
"created": "Thu, 24 Jun 2021 17:09:59 GMT",
"version": "v2"
}
] |
2021-06-25
|
[
[
"Chen",
"Lili",
""
],
[
"Lu",
"Kevin",
""
],
[
"Rajeswaran",
"Aravind",
""
],
[
"Lee",
"Kimin",
""
],
[
"Grover",
"Aditya",
""
],
[
"Laskin",
"Michael",
""
],
[
"Abbeel",
"Pieter",
""
],
[
"Srinivas",
"Aravind",
""
],
[
"Mordatch",
"Igor",
""
]
] |
We introduce a framework that abstracts Reinforcement Learning (RL) as a sequence modeling problem. This allows us to draw upon the simplicity and scalability of the Transformer architecture, and associated advances in language modeling such as GPT-x and BERT. In particular, we present Decision Transformer, an architecture that casts the problem of RL as conditional sequence modeling. Unlike prior approaches to RL that fit value functions or compute policy gradients, Decision Transformer simply outputs the optimal actions by leveraging a causally masked Transformer. By conditioning an autoregressive model on the desired return (reward), past states, and actions, our Decision Transformer model can generate future actions that achieve the desired return. Despite its simplicity, Decision Transformer matches or exceeds the performance of state-of-the-art model-free offline RL baselines on Atari, OpenAI Gym, and Key-to-Door tasks.
|
2406.17482
|
Pierre Vandenhove
|
Sougata Bose, Rasmus Ibsen-Jensen, David Purser, Patrick Totzke,
Pierre Vandenhove
|
The Power of Counting Steps in Quantitative Games
|
Extended version of a CONCUR 2024 paper
| null | null | null |
cs.GT cs.FL cs.LO
|
http://creativecommons.org/licenses/by/4.0/
|
We study deterministic games of infinite duration played on graphs and focus
on the strategy complexity of quantitative objectives. Such games are known to
admit optimal memoryless strategies over finite graphs, but require
infinite-memory strategies in general over infinite graphs.
We provide new lower and upper bounds for the strategy complexity of
mean-payoff and total-payoff objectives over infinite graphs, focusing on
whether step-counter strategies (sometimes called Markov strategies) suffice to
implement winning strategies. In particular, we show that over finitely
branching arenas, three variants of limsup mean-payoff and total-payoff
objectives admit winning strategies that are based either on a step counter or
on a step counter and an additional bit of memory. Conversely, we show that for
certain liminf total-payoff objectives, strategies resorting to a step counter
and finite memory are not sufficient. For step-counter strategies, this settles
the case of all classical quantitative objectives up to the second level of the
Borel hierarchy.
|
[
{
"created": "Tue, 25 Jun 2024 12:03:03 GMT",
"version": "v1"
}
] |
2024-06-26
|
[
[
"Bose",
"Sougata",
""
],
[
"Ibsen-Jensen",
"Rasmus",
""
],
[
"Purser",
"David",
""
],
[
"Totzke",
"Patrick",
""
],
[
"Vandenhove",
"Pierre",
""
]
] |
We study deterministic games of infinite duration played on graphs and focus on the strategy complexity of quantitative objectives. Such games are known to admit optimal memoryless strategies over finite graphs, but require infinite-memory strategies in general over infinite graphs. We provide new lower and upper bounds for the strategy complexity of mean-payoff and total-payoff objectives over infinite graphs, focusing on whether step-counter strategies (sometimes called Markov strategies) suffice to implement winning strategies. In particular, we show that over finitely branching arenas, three variants of limsup mean-payoff and total-payoff objectives admit winning strategies that are based either on a step counter or on a step counter and an additional bit of memory. Conversely, we show that for certain liminf total-payoff objectives, strategies resorting to a step counter and finite memory are not sufficient. For step-counter strategies, this settles the case of all classical quantitative objectives up to the second level of the Borel hierarchy.
|
2207.12763
|
Till Hofmann
|
Till Hofmann, Vaishak Belle
|
Using Abstraction for Interpretable Robot Programs in Stochastic Domains
|
Presented at the KR'22 Workshop on Explainable Logic-Based Knowledge
Representation (XLoKR). arXiv admin note: substantial text overlap with
arXiv:2204.03536
| null | null | null |
cs.AI
|
http://creativecommons.org/licenses/by-sa/4.0/
|
A robot's actions are inherently stochastic, as its sensors are noisy and its
actions do not always have the intended effects. For this reason, the agent
language Golog has been extended to models with degrees of belief and
stochastic actions. While this allows more precise robot models, the resulting
programs are much harder to comprehend, because they need to deal with the
noise, e.g., by looping until some desired state has been reached with
certainty, and because the resulting action traces consist of a large number of
actions cluttered with sensor noise. To alleviate these issues, we propose to
use abstraction. We define a high-level and nonstochastic model of the robot
and then map the high-level model into the lower-level stochastic model. The
resulting programs are much easier to understand, often do not require belief
operators or loops, and produce much shorter action traces.
|
[
{
"created": "Tue, 26 Jul 2022 09:15:37 GMT",
"version": "v1"
}
] |
2023-03-02
|
[
[
"Hofmann",
"Till",
""
],
[
"Belle",
"Vaishak",
""
]
] |
A robot's actions are inherently stochastic, as its sensors are noisy and its actions do not always have the intended effects. For this reason, the agent language Golog has been extended to models with degrees of belief and stochastic actions. While this allows more precise robot models, the resulting programs are much harder to comprehend, because they need to deal with the noise, e.g., by looping until some desired state has been reached with certainty, and because the resulting action traces consist of a large number of actions cluttered with sensor noise. To alleviate these issues, we propose to use abstraction. We define a high-level and nonstochastic model of the robot and then map the high-level model into the lower-level stochastic model. The resulting programs are much easier to understand, often do not require belief operators or loops, and produce much shorter action traces.
|
2209.02809
|
Hamidreza Jahangir Dr
|
Hamidreza Jahangir, Subhash Lakshminarayana, and Carsten Maple
|
Localizing Load-Altering Attacks Against Power Grids Using Deep Capsule
Nets
| null | null | null | null |
cs.CR cs.SY eess.SY
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent research has shown that the security of power grids can be seriously
threatened by botnet-type cyber attacks that target a large number of
high-wattage smart electrical appliances owned by end-users. Accurate detection
and localization of such attacks is of critical importance in limiting the
damage. To this end, the paper proposes a novel technique using capsule
networks (CNs) tailored to the power grid security application that uses the
frequency and phase angle data monitored by phasor measurement units (PMUs).
With the benefit of vector output from capsules and dynamic routing agreements
between them, CNs can obtain accurate detection and localization performance.
To demonstrate the efficiency of the suggested technique, we compare the
developed CN with benchmark data-driven methodologies, including
two-dimensional convolutional neural networks (2D-CNN), one-dimensional CNN
(1D-CNN), deep multi-layer perceptrons (MLP), and support vector machines
(SVM). Simulations are performed on IEEE 14-, 39-, and 57-bus systems,
considering various real-world issues such as PMU delays, noisy data, and
missing data points. The results show that CNs significantly outperform other
techniques, thus making them suitable for the aforementioned cyber security
applications.
|
[
{
"created": "Tue, 6 Sep 2022 20:25:52 GMT",
"version": "v1"
}
] |
2022-09-08
|
[
[
"Jahangir",
"Hamidreza",
""
],
[
"Lakshminarayana",
"Subhash",
""
],
[
"Maple",
"Carsten",
""
]
] |
Recent research has shown that the security of power grids can be seriously threatened by botnet-type cyber attacks that target a large number of high-wattage smart electrical appliances owned by end-users. Accurate detection and localization of such attacks is of critical importance in limiting the damage. To this end, the paper proposes a novel technique using capsule networks (CNs) tailored to the power grid security application that uses the frequency and phase angle data monitored by phasor measurement units (PMUs). With the benefit of vector output from capsules and dynamic routing agreements between them, CNs can obtain accurate detection and localization performance. To demonstrate the efficiency of the suggested technique, we compare the developed CN with benchmark data-driven methodologies, including two-dimensional convolutional neural networks (2D-CNN), one-dimensional CNN (1D-CNN), deep multi-layer perceptrons (MLP), and support vector machines (SVM). Simulations are performed on IEEE 14-, 39-, and 57-bus systems, considering various real-world issues such as PMU delays, noisy data, and missing data points. The results show that CNs significantly outperform other techniques, thus making them suitable for the aforementioned cyber security applications.
|
2303.14773
|
Changdae Oh
|
Changdae Oh, Hyeji Hwang, Hee-young Lee, YongTaek Lim, Geunyoung Jung,
Jiyoung Jung, Hosik Choi, Kyungwoo Song
|
BlackVIP: Black-Box Visual Prompting for Robust Transfer Learning
|
Accepted to CVPR 2023 (v2: citation error was fixed)
| null | null | null |
cs.CV cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
With the surge of large-scale pre-trained models (PTMs), fine-tuning these
models to numerous downstream tasks becomes a crucial problem. Consequently,
parameter efficient transfer learning (PETL) of large models has grasped huge
attention. While recent PETL methods showcase impressive performance, they rely
on optimistic assumptions: 1) the entire parameter set of a PTM is available,
and 2) a sufficiently large memory capacity for the fine-tuning is equipped.
However, in most real-world applications, PTMs are served as a black-box API or
proprietary software without explicit parameter accessibility. Besides, it is
hard to meet a large memory requirement for modern PTMs. In this work, we
propose black-box visual prompting (BlackVIP), which efficiently adapts the
PTMs without knowledge about model architectures and parameters. BlackVIP has
two components; 1) Coordinator and 2) simultaneous perturbation stochastic
approximation with gradient correction (SPSA-GC). The Coordinator designs
input-dependent image-shaped visual prompts, which improves few-shot adaptation
and robustness on distribution/location shift. SPSA-GC efficiently estimates
the gradient of a target model to update Coordinator. Extensive experiments on
16 datasets demonstrate that BlackVIP enables robust adaptation to diverse
domains without accessing PTMs' parameters, with minimal memory requirements.
Code: \url{https://github.com/changdaeoh/BlackVIP}
|
[
{
"created": "Sun, 26 Mar 2023 16:42:05 GMT",
"version": "v1"
},
{
"created": "Sat, 8 Jul 2023 12:13:50 GMT",
"version": "v2"
}
] |
2023-07-11
|
[
[
"Oh",
"Changdae",
""
],
[
"Hwang",
"Hyeji",
""
],
[
"Lee",
"Hee-young",
""
],
[
"Lim",
"YongTaek",
""
],
[
"Jung",
"Geunyoung",
""
],
[
"Jung",
"Jiyoung",
""
],
[
"Choi",
"Hosik",
""
],
[
"Song",
"Kyungwoo",
""
]
] |
With the surge of large-scale pre-trained models (PTMs), fine-tuning these models to numerous downstream tasks becomes a crucial problem. Consequently, parameter efficient transfer learning (PETL) of large models has grasped huge attention. While recent PETL methods showcase impressive performance, they rely on optimistic assumptions: 1) the entire parameter set of a PTM is available, and 2) a sufficiently large memory capacity for the fine-tuning is equipped. However, in most real-world applications, PTMs are served as a black-box API or proprietary software without explicit parameter accessibility. Besides, it is hard to meet a large memory requirement for modern PTMs. In this work, we propose black-box visual prompting (BlackVIP), which efficiently adapts the PTMs without knowledge about model architectures and parameters. BlackVIP has two components; 1) Coordinator and 2) simultaneous perturbation stochastic approximation with gradient correction (SPSA-GC). The Coordinator designs input-dependent image-shaped visual prompts, which improves few-shot adaptation and robustness on distribution/location shift. SPSA-GC efficiently estimates the gradient of a target model to update Coordinator. Extensive experiments on 16 datasets demonstrate that BlackVIP enables robust adaptation to diverse domains without accessing PTMs' parameters, with minimal memory requirements. Code: \url{https://github.com/changdaeoh/BlackVIP}
|
1007.4748
|
Aron Culotta
|
Aron Culotta
|
Detecting influenza outbreaks by analyzing Twitter messages
| null | null | null | null |
cs.IR cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We analyze over 500 million Twitter messages from an eight-month period and
find that tracking a small number of flu-related keywords allows us to forecast
future influenza rates with high accuracy, obtaining a 95% correlation with
national health statistics. We then analyze the robustness of this approach to
spurious keyword matches, and we propose a document classification component to
filter these misleading messages. We find that this document classifier can
reduce error rates by over half in simulated false alarm experiments, though
more research is needed to develop methods that are robust in cases of
extremely high noise.
|
[
{
"created": "Tue, 27 Jul 2010 15:16:36 GMT",
"version": "v1"
}
] |
2010-07-28
|
[
[
"Culotta",
"Aron",
""
]
] |
We analyze over 500 million Twitter messages from an eight-month period and find that tracking a small number of flu-related keywords allows us to forecast future influenza rates with high accuracy, obtaining a 95% correlation with national health statistics. We then analyze the robustness of this approach to spurious keyword matches, and we propose a document classification component to filter these misleading messages. We find that this document classifier can reduce error rates by over half in simulated false alarm experiments, though more research is needed to develop methods that are robust in cases of extremely high noise.
|
1010.3411
|
Mohamed Ibrahim
|
Mohamed Ibrahim and Moustafa Youssef
|
A Hidden Markov Model for Localization Using Low-End GSM Cell Phones
|
6 pages, 5 figures, submitted to ICC 2010
| null | null | null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Research in location determination for GSM phones has gained interest
recently as it enables a wide set of location based services. RSSI-based
techniques have been the preferred method for GSM localization on the handset
as RSSI information is available in all cell phones. Although the GSM standard
allows for a cell phone to receive signal strength information from up to seven
cell towers, many of today's cell phones are low-end phones, with limited API
support, that gives only information about the associated cell tower. In
addition, in many places in the world, the density of cell towers is very small
and therefore, the available cell tower information for localization is very
limited. This raises the challenge of accurately determining the cell phone
location with very limited information, mainly the RSSI of the associated cell
tower. In this paper we propose a Hidden Markov Model based solution that
leverages the signal strength history from only the associated cell tower to
achieve accurate GSM localization. We discuss the challenges of implementing
our system and present the details of our system and how it addresses the
challenges. To evaluate our proposed system, we implemented it on Android-based
phones. Results for two different testbeds, representing urban and rural
environments, show that our system provides at least 156% enhancement in median
error in rural areas and at least 68% enhancement in median error in urban
areas compared to current RSSI-based GSM localization systems.
|
[
{
"created": "Sun, 17 Oct 2010 13:21:06 GMT",
"version": "v1"
}
] |
2010-10-19
|
[
[
"Ibrahim",
"Mohamed",
""
],
[
"Youssef",
"Moustafa",
""
]
] |
Research in location determination for GSM phones has gained interest recently as it enables a wide set of location based services. RSSI-based techniques have been the preferred method for GSM localization on the handset as RSSI information is available in all cell phones. Although the GSM standard allows for a cell phone to receive signal strength information from up to seven cell towers, many of today's cell phones are low-end phones, with limited API support, that give only information about the associated cell tower. In addition, in many places in the world, the density of cell towers is very small and therefore, the available cell tower information for localization is very limited. This raises the challenge of accurately determining the cell phone location with very limited information, mainly the RSSI of the associated cell tower. In this paper we propose a Hidden Markov Model based solution that leverages the signal strength history from only the associated cell tower to achieve accurate GSM localization. We discuss the challenges of implementing our system and present the details of our system and how it addresses the challenges. To evaluate our proposed system, we implemented it on Android-based phones. Results for two different testbeds, representing urban and rural environments, show that our system provides at least 156% enhancement in median error in rural areas and at least 68% enhancement in median error in urban areas compared to current RSSI-based GSM localization systems.
|
2111.03505
|
Quanshi Zhang
|
Mingjie Li, Shaobo Wang, Quanshi Zhang
|
Visualizing the Emergence of Intermediate Visual Patterns in DNNs
| null | null | null | null |
cs.CV cs.AI cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes a method to visualize the discrimination power of
intermediate-layer visual patterns encoded by a DNN. Specifically, we visualize
(1) how the DNN gradually learns regional visual patterns in each intermediate
layer during the training process, and (2) the effects of the DNN using
non-discriminative patterns in low layers to construct discriminative patterns
in middle/high layers through the forward propagation. Based on our
visualization method, we can quantify knowledge points (i.e., the number of
discriminative visual patterns) learned by the DNN to evaluate the
representation capacity of the DNN. Furthermore, this method also provides new
insights into signal-processing behaviors of existing deep-learning techniques,
such as adversarial attacks and knowledge distillation.
|
[
{
"created": "Fri, 5 Nov 2021 13:49:39 GMT",
"version": "v1"
}
] |
2021-11-08
|
[
[
"Li",
"Mingjie",
""
],
[
"Wang",
"Shaobo",
""
],
[
"Zhang",
"Quanshi",
""
]
] |
This paper proposes a method to visualize the discrimination power of intermediate-layer visual patterns encoded by a DNN. Specifically, we visualize (1) how the DNN gradually learns regional visual patterns in each intermediate layer during the training process, and (2) the effects of the DNN using non-discriminative patterns in low layers to construct discriminative patterns in middle/high layers through the forward propagation. Based on our visualization method, we can quantify knowledge points (i.e., the number of discriminative visual patterns) learned by the DNN to evaluate the representation capacity of the DNN. Furthermore, this method also provides new insights into signal-processing behaviors of existing deep-learning techniques, such as adversarial attacks and knowledge distillation.
|
2305.04501
|
Junran Wu
|
Junran Wu, Xueyuan Chen, Bowen Shi, Shangzhe Li, Ke Xu
|
SEGA: Structural Entropy Guided Anchor View for Graph Contrastive
Learning
|
ICML'23
| null | null | null |
cs.LG cs.AI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In contrastive learning, the choice of ``view'' controls the information that
the representation captures and influences the performance of the model.
However, leading graph contrastive learning methods generally produce views via
random corruption or learning, which could lead to the loss of essential
information and alteration of semantic information. An anchor view that
maintains the essential information of input graphs for contrastive learning
has been hardly investigated. In this paper, based on the theory of graph
information bottleneck, we deduce the definition of this anchor view; put
differently, \textit{the anchor view with essential information of input graph
is supposed to have the minimal structural uncertainty}. Furthermore, guided by
structural entropy, we implement the anchor view, termed \textbf{SEGA}, for
graph contrastive learning. We extensively validate the proposed anchor view on
various benchmarks regarding graph classification under unsupervised,
semi-supervised, and transfer learning and achieve significant performance
boosts compared to the state-of-the-art methods.
|
[
{
"created": "Mon, 8 May 2023 06:52:02 GMT",
"version": "v1"
},
{
"created": "Fri, 9 Jun 2023 08:57:49 GMT",
"version": "v2"
}
] |
2023-06-12
|
[
[
"Wu",
"Junran",
""
],
[
"Chen",
"Xueyuan",
""
],
[
"Shi",
"Bowen",
""
],
[
"Li",
"Shangzhe",
""
],
[
"Xu",
"Ke",
""
]
] |
In contrastive learning, the choice of ``view'' controls the information that the representation captures and influences the performance of the model. However, leading graph contrastive learning methods generally produce views via random corruption or learning, which could lead to the loss of essential information and alteration of semantic information. An anchor view that maintains the essential information of input graphs for contrastive learning has been hardly investigated. In this paper, based on the theory of graph information bottleneck, we deduce the definition of this anchor view; put differently, \textit{the anchor view with essential information of input graph is supposed to have the minimal structural uncertainty}. Furthermore, guided by structural entropy, we implement the anchor view, termed \textbf{SEGA}, for graph contrastive learning. We extensively validate the proposed anchor view on various benchmarks regarding graph classification under unsupervised, semi-supervised, and transfer learning and achieve significant performance boosts compared to the state-of-the-art methods.
|
1912.02217
|
Pedro Mirabal
|
P. Mirabal, J. Abreu, D. Seco
|
Assessing the best edit in perturbation-based iterative refinement
algorithms to compute the median string
|
14 pages, 4 figures
|
Pattern Recognition Letters, Volume 120, 1 April 2019, Pages
104-111
|
10.1016/j.patrec.2019.02.004
| null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Strings are a natural representation of biological data such as DNA, RNA and
protein sequences. The problem of finding a string that summarizes a set of
sequences has direct application in relative compression algorithms for genome
and proteome analysis, where reference sequences need to be chosen. Median
strings have been used as representatives of a set of strings in different
domains. However, several formulations of those problems are NP-Complete.
Alternatively, heuristic approaches that iteratively refine an initial coarse
solution by applying edit operations have been proposed. Recently, we
investigated the selection of the optimal edit operations to speed up
convergence without spoiling the quality of the approximated median string. We
propose a novel algorithm that outperforms state-of-the-art heuristic
approximations to the median string in terms of convergence speed by estimating
the effect of a perturbation in the minimization of the expressions that define
the median strings. We present a corpus of comparative experiments to validate
these results.
|
[
{
"created": "Wed, 4 Dec 2019 19:09:15 GMT",
"version": "v1"
}
] |
2019-12-06
|
[
[
"Mirabal",
"P.",
""
],
[
"Abreu",
"J.",
""
],
[
"Seco",
"D.",
""
]
] |
Strings are a natural representation of biological data such as DNA, RNA and protein sequences. The problem of finding a string that summarizes a set of sequences has direct application in relative compression algorithms for genome and proteome analysis, where reference sequences need to be chosen. Median strings have been used as representatives of a set of strings in different domains. However, several formulations of those problems are NP-Complete. Alternatively, heuristic approaches that iteratively refine an initial coarse solution by applying edit operations have been proposed. Recently, we investigated the selection of the optimal edit operations to speed up convergence without spoiling the quality of the approximated median string. We propose a novel algorithm that outperforms state-of-the-art heuristic approximations to the median string in terms of convergence speed by estimating the effect of a perturbation in the minimization of the expressions that define the median strings. We present a corpus of comparative experiments to validate these results.
|
2012.00901
|
Shibo Li
|
Shibo Li, Robert M. Kirby, Shandian Zhe
|
Deep Multi-Fidelity Active Learning of High-dimensional Outputs
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Many applications, such as in physical simulation and engineering design,
demand we estimate functions with high-dimensional outputs. The training
examples can be collected with different fidelities to allow a cost/accuracy
trade-off. In this paper, we consider the active learning task that identifies
both the fidelity and input to query new training examples so as to achieve the
best benefit-cost ratio. To this end, we propose DMFAL, a Deep Multi-Fidelity
Active Learning approach. We first develop a deep neural network-based
multi-fidelity model for learning with high-dimensional outputs, which can
flexibly, efficiently capture all kinds of complex relationships across the
outputs and fidelities to improve prediction. We then propose a mutual
information-based acquisition function that extends the predictive entropy
principle. To overcome the computational challenges caused by large output
dimensions, we use multi-variate Delta's method and moment-matching to estimate
the output posterior, and Weinstein-Aronszajn identity to calculate and
optimize the acquisition function. The computation is tractable, reliable and
efficient. We show the advantage of our method in several applications of
computational physics and engineering design.
|
[
{
"created": "Wed, 2 Dec 2020 00:02:31 GMT",
"version": "v1"
},
{
"created": "Mon, 25 Oct 2021 21:52:52 GMT",
"version": "v2"
}
] |
2021-10-27
|
[
[
"Li",
"Shibo",
""
],
[
"Kirby",
"Robert M.",
""
],
[
"Zhe",
"Shandian",
""
]
] |
Many applications, such as in physical simulation and engineering design, demand we estimate functions with high-dimensional outputs. The training examples can be collected with different fidelities to allow a cost/accuracy trade-off. In this paper, we consider the active learning task that identifies both the fidelity and input to query new training examples so as to achieve the best benefit-cost ratio. To this end, we propose DMFAL, a Deep Multi-Fidelity Active Learning approach. We first develop a deep neural network-based multi-fidelity model for learning with high-dimensional outputs, which can flexibly, efficiently capture all kinds of complex relationships across the outputs and fidelities to improve prediction. We then propose a mutual information-based acquisition function that extends the predictive entropy principle. To overcome the computational challenges caused by large output dimensions, we use multi-variate Delta's method and moment-matching to estimate the output posterior, and Weinstein-Aronszajn identity to calculate and optimize the acquisition function. The computation is tractable, reliable and efficient. We show the advantage of our method in several applications of computational physics and engineering design.
|
2312.11086
|
Jacob Focke
|
Jacob Focke and Florian H\"orsch and Shaohua Li and D\'aniel Marx
|
Multicut Problems in Embedded Graphs: The Dependency of Complexity on
the Demand Pattern
| null | null | null | null |
cs.CC cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Multicut problem asks for a minimum cut separating certain pairs of
vertices: formally, given a graph $G$ and demand graph $H$ on a set $T\subseteq
V(G)$ of terminals, the task is to find a minimum-weight set $C$ of edges of
$G$ such that whenever two vertices of $T$ are adjacent in $H$, they are in
different components of $G\setminus C$. Colin de Verdi\`{e}re [Algorithmica,
2017] showed that Multicut with $t$ terminals on a graph $G$ of genus $g$ can
be solved in time $f(t,g)n^{O(\sqrt{g^2+gt+t})}$. Cohen-Addad et al. [JACM,
2021] proved a matching lower bound showing that the exponent of $n$ is
essentially best possible (for fixed values of $t$ and $g$), even in the
special case of Multiway Cut, where the demand graph $H$ is a complete graph.
However, this lower bound tells us nothing about other special cases of
Multicut such as Group 3-Terminal Cut. We show that if the demand pattern is,
in some sense, close to being a complete bipartite graph, then Multicut can be
solved faster than $f(t,g)n^{O(\sqrt{g^2+gt+t})}$, and furthermore this is the
only property that allows such an improvement. Formally, for a class
$\mathcal{H}$ of graphs, Multicut$(\mathcal{H})$ is the special case where the
demand graph $H$ is in $\mathcal{H}$. For every fixed class $\mathcal{H}$
(satisfying some mild closure property), fixed $g$, and fixed $t$, our main
result gives tight upper and lower bounds on the exponent of $n$ in algorithms
solving Multicut$(\mathcal{H})$.
In addition, we investigate a similar setting where, instead of
parameterizing by the genus $g$ of $G$, we parameterize by the minimum number
$k$ of edges of $G$ that need to be deleted to obtain a planar graph.
Interestingly, in this setting it makes a significant difference whether the
graph $G$ is weighted or unweighted: further nontrivial algorithmic techniques
give substantial improvements in the unweighted case.
|
[
{
"created": "Mon, 18 Dec 2023 10:27:39 GMT",
"version": "v1"
}
] |
2023-12-19
|
[
[
"Focke",
"Jacob",
""
],
[
"Hörsch",
"Florian",
""
],
[
"Li",
"Shaohua",
""
],
[
"Marx",
"Dániel",
""
]
] |
The Multicut problem asks for a minimum cut separating certain pairs of vertices: formally, given a graph $G$ and demand graph $H$ on a set $T\subseteq V(G)$ of terminals, the task is to find a minimum-weight set $C$ of edges of $G$ such that whenever two vertices of $T$ are adjacent in $H$, they are in different components of $G\setminus C$. Colin de Verdi\`{e}re [Algorithmica, 2017] showed that Multicut with $t$ terminals on a graph $G$ of genus $g$ can be solved in time $f(t,g)n^{O(\sqrt{g^2+gt+t})}$. Cohen-Addad et al. [JACM, 2021] proved a matching lower bound showing that the exponent of $n$ is essentially best possible (for fixed values of $t$ and $g$), even in the special case of Multiway Cut, where the demand graph $H$ is a complete graph. However, this lower bound tells us nothing about other special cases of Multicut such as Group 3-Terminal Cut. We show that if the demand pattern is, in some sense, close to being a complete bipartite graph, then Multicut can be solved faster than $f(t,g)n^{O(\sqrt{g^2+gt+t})}$, and furthermore this is the only property that allows such an improvement. Formally, for a class $\mathcal{H}$ of graphs, Multicut$(\mathcal{H})$ is the special case where the demand graph $H$ is in $\mathcal{H}$. For every fixed class $\mathcal{H}$ (satisfying some mild closure property), fixed $g$, and fixed $t$, our main result gives tight upper and lower bounds on the exponent of $n$ in algorithms solving Multicut$(\mathcal{H})$. In addition, we investigate a similar setting where, instead of parameterizing by the genus $g$ of $G$, we parameterize by the minimum number $k$ of edges of $G$ that need to be deleted to obtain a planar graph. Interestingly, in this setting it makes a significant difference whether the graph $G$ is weighted or unweighted: further nontrivial algorithmic techniques give substantial improvements in the unweighted case.
|
2306.08240
|
Zhongyi Shui
|
Zhongyi Shui, Yizhi Zhao, Sunyi Zheng, Yunlong Zhang, Honglin Li,
Shichuan Zhang, Xiaoxuan Yu, Chenglu Zhu, Lin Yang
|
Semi-supervised Cell Recognition under Point Supervision
| null | null | null | null |
cs.CV cs.AI
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Cell recognition is a fundamental task in digital histopathology image
analysis. Point-based cell recognition (PCR) methods normally require a vast
number of annotations, which is extremely costly, time-consuming and
labor-intensive. Semi-supervised learning (SSL) can provide a shortcut to make
full use of cell information in gigapixel whole slide images without exhaustive
labeling. However, research into semi-supervised point-based cell recognition
(SSPCR) remains largely overlooked. Previous SSPCR works are all built on
density map-based PCR models, which suffer from unsatisfactory accuracy, slow
inference speed and high sensitivity to hyper-parameters. To address these
issues, end-to-end PCR models are proposed recently. In this paper, we develop
a SSPCR framework suitable for the end-to-end PCR models for the first time.
Overall, we use the current models to generate pseudo labels for unlabeled
images, which are in turn utilized to supervise the models training. Besides,
we introduce a co-teaching strategy to overcome the confirmation bias problem
that generally exists in self-training. A distribution alignment technique is
also incorporated to produce high-quality, unbiased pseudo labels for unlabeled
data. Experimental results on four histopathology datasets concerning different
types of staining styles show the effectiveness and versatility of the proposed
framework. Code is available at
\textcolor{magenta}{\url{https://github.com/windygooo/SSPCR}}
|
[
{
"created": "Wed, 14 Jun 2023 04:56:31 GMT",
"version": "v1"
}
] |
2023-06-16
|
[
[
"Shui",
"Zhongyi",
""
],
[
"Zhao",
"Yizhi",
""
],
[
"Zheng",
"Sunyi",
""
],
[
"Zhang",
"Yunlong",
""
],
[
"Li",
"Honglin",
""
],
[
"Zhang",
"Shichuan",
""
],
[
"Yu",
"Xiaoxuan",
""
],
[
"Zhu",
"Chenglu",
""
],
[
"Yang",
"Lin",
""
]
] |
Cell recognition is a fundamental task in digital histopathology image analysis. Point-based cell recognition (PCR) methods normally require a vast number of annotations, which is extremely costly, time-consuming and labor-intensive. Semi-supervised learning (SSL) can provide a shortcut to make full use of cell information in gigapixel whole slide images without exhaustive labeling. However, research into semi-supervised point-based cell recognition (SSPCR) remains largely overlooked. Previous SSPCR works are all built on density map-based PCR models, which suffer from unsatisfactory accuracy, slow inference speed and high sensitivity to hyper-parameters. To address these issues, end-to-end PCR models are proposed recently. In this paper, we develop a SSPCR framework suitable for the end-to-end PCR models for the first time. Overall, we use the current models to generate pseudo labels for unlabeled images, which are in turn utilized to supervise the models training. Besides, we introduce a co-teaching strategy to overcome the confirmation bias problem that generally exists in self-training. A distribution alignment technique is also incorporated to produce high-quality, unbiased pseudo labels for unlabeled data. Experimental results on four histopathology datasets concerning different types of staining styles show the effectiveness and versatility of the proposed framework. Code is available at \textcolor{magenta}{\url{https://github.com/windygooo/SSPCR}}
|
2402.01643
|
Md. Shohanur Islam Sobuj
|
Md. Kowsher, Md. Shohanur Islam Sobuj, Asif Mahmud, Nusrat Jahan
Prottasha and Prakash Bhat
|
L-TUNING: Synchronized Label Tuning for Prompt and Prefix in LLMs
|
Published in the ICLR TinyPaper track
| null | null | null |
cs.CL cs.AI cs.LG
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Efficiently fine-tuning Large Language Models (LLMs) for specific tasks
presents a considerable challenge in natural language processing. Traditional
methods, like prompt or prefix tuning, typically rely on arbitrary tokens for
training, leading to prolonged training times and generalized token use across
various class labels. To address these issues, this paper introduces L-Tuning,
an efficient fine-tuning approach designed for classification tasks within the
Natural Language Inference (NLI) framework. Diverging from conventional
methods, L-Tuning focuses on the fine-tuning of label tokens processed through
a pre-trained LLM, thereby harnessing its pre-existing semantic knowledge. This
technique not only improves the fine-tuning accuracy and efficiency but also
facilitates the generation of distinct label embeddings for each class,
enhancing the model's training nuance. Our experimental results indicate a
significant improvement in training efficiency and classification accuracy with
L-Tuning compared to traditional approaches, marking a promising advancement in
fine-tuning LLMs for complex language tasks.
|
[
{
"created": "Thu, 21 Dec 2023 01:47:49 GMT",
"version": "v1"
},
{
"created": "Sat, 13 Apr 2024 00:14:21 GMT",
"version": "v2"
}
] |
2024-04-16
|
[
[
"Kowsher",
"Md.",
""
],
[
"Sobuj",
"Md. Shohanur Islam",
""
],
[
"Mahmud",
"Asif",
""
],
[
"Prottasha",
"Nusrat Jahan",
""
],
[
"Bhat",
"Prakash",
""
]
] |
Efficiently fine-tuning Large Language Models (LLMs) for specific tasks presents a considerable challenge in natural language processing. Traditional methods, like prompt or prefix tuning, typically rely on arbitrary tokens for training, leading to prolonged training times and generalized token use across various class labels. To address these issues, this paper introduces L-Tuning, an efficient fine-tuning approach designed for classification tasks within the Natural Language Inference (NLI) framework. Diverging from conventional methods, L-Tuning focuses on the fine-tuning of label tokens processed through a pre-trained LLM, thereby harnessing its pre-existing semantic knowledge. This technique not only improves the fine-tuning accuracy and efficiency but also facilitates the generation of distinct label embeddings for each class, enhancing the model's training nuance. Our experimental results indicate a significant improvement in training efficiency and classification accuracy with L-Tuning compared to traditional approaches, marking a promising advancement in fine-tuning LLMs for complex language tasks.
|
2304.08366
|
Yun Wang
|
Haotian Li, Yun Wang, Q. Vera Liao, Huamin Qu
|
Why is AI not a Panacea for Data Workers? An Interview Study on Human-AI
Collaboration in Data Storytelling
| null | null | null | null |
cs.HC cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
Data storytelling plays an important role in data workers' daily jobs since
it boosts team collaboration and public communication. However, to make an
appealing data story, data workers spend tremendous efforts on various tasks,
including outlining and styling the story. Recently, a growing research trend
has been exploring how to assist data storytelling with advanced artificial
intelligence (AI). However, existing studies may focus on individual tasks in
the workflow of data storytelling and do not reveal a complete picture of
humans' preference for collaborating with AI. To better understand real-world
needs, we interviewed eighteen data workers from both industry and academia to
learn where and how they would like to collaborate with AI. Surprisingly,
though the participants showed excitement about collaborating with AI, many of
them also expressed reluctance and pointed out nuanced reasons. Based on their
responses, we first characterize stages and tasks in the practical data
storytelling workflows and the desired roles of AI. Then the preferred
collaboration patterns in different tasks are identified. Next, we summarize
the interviewees' reasons why and why not they would like to collaborate with
AI. Finally, we provide suggestions for human-AI collaborative data
storytelling to hopefully shed light on future related research.
|
[
{
"created": "Mon, 17 Apr 2023 15:30:05 GMT",
"version": "v1"
}
] |
2023-04-18
|
[
[
"Li",
"Haotian",
""
],
[
"Wang",
"Yun",
""
],
[
"Liao",
"Q. Vera",
""
],
[
"Qu",
"Huamin",
""
]
] |
Data storytelling plays an important role in data workers' daily jobs since it boosts team collaboration and public communication. However, to make an appealing data story, data workers spend tremendous efforts on various tasks, including outlining and styling the story. Recently, a growing research trend has been exploring how to assist data storytelling with advanced artificial intelligence (AI). However, existing studies may focus on individual tasks in the workflow of data storytelling and do not reveal a complete picture of humans' preference for collaborating with AI. To better understand real-world needs, we interviewed eighteen data workers from both industry and academia to learn where and how they would like to collaborate with AI. Surprisingly, though the participants showed excitement about collaborating with AI, many of them also expressed reluctance and pointed out nuanced reasons. Based on their responses, we first characterize stages and tasks in the practical data storytelling workflows and the desired roles of AI. Then the preferred collaboration patterns in different tasks are identified. Next, we summarize the interviewees' reasons why and why not they would like to collaborate with AI. Finally, we provide suggestions for human-AI collaborative data storytelling to hopefully shed light on future related research.
|
2008.12520
|
Noa Garcia
|
Noa Garcia, Chentao Ye, Zihua Liu, Qingtao Hu, Mayu Otani, Chenhui
Chu, Yuta Nakashima, Teruko Mitamura
|
A Dataset and Baselines for Visual Question Answering on Art
| null | null | null | null |
cs.CV cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Answering questions related to art pieces (paintings) is a difficult task, as
it implies the understanding of not only the visual information that is shown
in the picture, but also the contextual knowledge that is acquired through the
study of the history of art. In this work, we introduce our first attempt
towards building a new dataset, coined AQUA (Art QUestion Answering). The
question-answer (QA) pairs are automatically generated using state-of-the-art
question generation methods based on paintings and comments provided in an
existing art understanding dataset. The QA pairs are cleansed by crowdsourcing
workers with respect to their grammatical correctness, answerability, and
answers' correctness. Our dataset inherently consists of visual
(painting-based) and knowledge (comment-based) questions. We also present a
two-branch model as a baseline, where the visual and knowledge questions are
handled independently. We extensively compare our baseline model against the
state-of-the-art models for question answering, and we provide a comprehensive
study about the challenges and potential future directions for visual question
answering on art.
|
[
{
"created": "Fri, 28 Aug 2020 07:33:30 GMT",
"version": "v1"
}
] |
2020-08-31
|
[
[
"Garcia",
"Noa",
""
],
[
"Ye",
"Chentao",
""
],
[
"Liu",
"Zihua",
""
],
[
"Hu",
"Qingtao",
""
],
[
"Otani",
"Mayu",
""
],
[
"Chu",
"Chenhui",
""
],
[
"Nakashima",
"Yuta",
""
],
[
"Mitamura",
"Teruko",
""
]
] |
Answering questions related to art pieces (paintings) is a difficult task, as it implies the understanding of not only the visual information that is shown in the picture, but also the contextual knowledge that is acquired through the study of the history of art. In this work, we introduce our first attempt towards building a new dataset, coined AQUA (Art QUestion Answering). The question-answer (QA) pairs are automatically generated using state-of-the-art question generation methods based on paintings and comments provided in an existing art understanding dataset. The QA pairs are cleansed by crowdsourcing workers with respect to their grammatical correctness, answerability, and answers' correctness. Our dataset inherently consists of visual (painting-based) and knowledge (comment-based) questions. We also present a two-branch model as a baseline, where the visual and knowledge questions are handled independently. We extensively compare our baseline model against the state-of-the-art models for question answering, and we provide a comprehensive study about the challenges and potential future directions for visual question answering on art.
|
2111.13330
|
Zhijie Wang
|
Hua Qi, Zhijie Wang, Qing Guo, Jianlang Chen, Felix Juefei-Xu, Lei Ma,
Jianjun Zhao
|
ArchRepair: Block-Level Architecture-Oriented Repairing for Deep Neural
Networks
|
33 pages, 7 figures
| null | null | null |
cs.LG cs.AI cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Over the past few years, deep neural networks (DNNs) have achieved tremendous
success and have been continuously applied in many application domains.
However, during practical deployment in industrial tasks, DNNs are
found to be error-prone due to various reasons, such as overfitting and lacking
robustness to real-world corruptions during practical usage. To address these
challenges, many recent attempts have been made to repair DNNs for version
updates under practical operational contexts by updating weights (i.e., network
parameters) through retraining, fine-tuning, or direct weight fixing at a
neural level. In this work, as the first attempt, we repair DNNs by
jointly optimizing the architecture and weights at a higher (i.e., block)
level.
We first perform empirical studies to investigate the limitation of whole
network-level and layer-level repairing, which motivates us to explore a novel
repairing direction for DNN repair at the block level. To this end, we first
propose adversarial-aware spectrum analysis for vulnerable block localization
that considers the neurons' status and weights' gradients in blocks during the
forward and backward processes, which enables more accurate candidate block
localization for repairing even under a few examples. Then, we further propose
the architecture-oriented search-based repairing that relaxes the targeted
block to a continuous repairing search space at higher deep feature levels. By
jointly optimizing the architecture and weights in that space, we can identify
a much better block architecture. We implement our proposed repairing
techniques as a tool, named ArchRepair, and conduct extensive experiments to
validate the proposed method. The results show that our method can not only
repair but also enhance accuracy & robustness, outperforming the
state-of-the-art DNN repair techniques.
|
[
{
"created": "Fri, 26 Nov 2021 06:35:15 GMT",
"version": "v1"
},
{
"created": "Sat, 11 Dec 2021 19:25:47 GMT",
"version": "v2"
}
] |
2021-12-14
|
[
[
"Qi",
"Hua",
""
],
[
"Wang",
"Zhijie",
""
],
[
"Guo",
"Qing",
""
],
[
"Chen",
"Jianlang",
""
],
[
"Juefei-Xu",
"Felix",
""
],
[
"Ma",
"Lei",
""
],
[
"Zhao",
"Jianjun",
""
]
] |
Over the past few years, deep neural networks (DNNs) have achieved tremendous success and have been continuously applied in many application domains. However, during practical deployment in industrial tasks, DNNs are found to be error-prone due to various reasons, such as overfitting and lacking robustness to real-world corruptions during practical usage. To address these challenges, many recent attempts have been made to repair DNNs for version updates under practical operational contexts by updating weights (i.e., network parameters) through retraining, fine-tuning, or direct weight fixing at a neural level. In this work, as the first attempt, we repair DNNs by jointly optimizing the architecture and weights at a higher (i.e., block) level. We first perform empirical studies to investigate the limitation of whole network-level and layer-level repairing, which motivates us to explore a novel repairing direction for DNN repair at the block level. To this end, we first propose adversarial-aware spectrum analysis for vulnerable block localization that considers the neurons' status and weights' gradients in blocks during the forward and backward processes, which enables more accurate candidate block localization for repairing even under a few examples. Then, we further propose the architecture-oriented search-based repairing that relaxes the targeted block to a continuous repairing search space at higher deep feature levels. By jointly optimizing the architecture and weights in that space, we can identify a much better block architecture. We implement our proposed repairing techniques as a tool, named ArchRepair, and conduct extensive experiments to validate the proposed method. The results show that our method can not only repair but also enhance accuracy & robustness, outperforming the state-of-the-art DNN repair techniques.
|
1503.04260
|
Diederik Aerts
|
Diederik Aerts, Sandro Sozzo and Tomas Veloz
|
Quantum Structure of Negation and Conjunction in Human Thought
|
44 pages. arXiv admin note: text overlap with arXiv:1406.2358
|
Frontiers in Psychology 6, 1447, (2015)
|
10.3389/fpsyg.2015.01447
| null |
cs.AI quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We analyse in this paper the data collected in a set of experiments performed
on human subjects on the combination of natural concepts. We investigate the
mutual influence of conceptual conjunction and negation by measuring the
membership weights of a list of exemplars with respect to two concepts, e.g.,
'Fruits' and 'Vegetables', and their conjunction 'Fruits And Vegetables', but
also their conjunction when one or both concepts are negated, namely, 'Fruits
And Not Vegetables', 'Not Fruits And Vegetables' and 'Not Fruits And Not
Vegetables'. Our findings sharpen existing analyses of conceptual combinations,
revealing systematic and remarkable deviations from classical (fuzzy set) logic
and probability theory. More importantly, our results give further
considerable evidence for the validity of our quantum-theoretic framework for
the combination of two concepts. Indeed, the representation of conceptual
negation naturally arises from the general assumptions of our two-sector Fock
space model, and this representation faithfully agrees with the collected data.
In addition, we find a further significant and a priori unexpected deviation
from classicality, which can be exactly explained by assuming that human
reasoning is the superposition of an 'emergent reasoning' and a 'logical
reasoning', and that these two processes can be successfully represented in a
Fock space algebraic structure.
|
[
{
"created": "Sat, 14 Mar 2015 02:43:14 GMT",
"version": "v1"
}
] |
2016-09-09
|
[
[
"Aerts",
"Diederik",
""
],
[
"Sozzo",
"Sandro",
""
],
[
"Veloz",
"Tomas",
""
]
] |
We analyse in this paper the data collected in a set of experiments performed on human subjects on the combination of natural concepts. We investigate the mutual influence of conceptual conjunction and negation by measuring the membership weights of a list of exemplars with respect to two concepts, e.g., 'Fruits' and 'Vegetables', and their conjunction 'Fruits And Vegetables', but also their conjunction when one or both concepts are negated, namely, 'Fruits And Not Vegetables', 'Not Fruits And Vegetables' and 'Not Fruits And Not Vegetables'. Our findings sharpen existing analyses of conceptual combinations, revealing systematic and remarkable deviations from classical (fuzzy set) logic and probability theory. More importantly, our results give further considerable evidence for the validity of our quantum-theoretic framework for the combination of two concepts. Indeed, the representation of conceptual negation naturally arises from the general assumptions of our two-sector Fock space model, and this representation faithfully agrees with the collected data. In addition, we find a further significant and a priori unexpected deviation from classicality, which can be exactly explained by assuming that human reasoning is the superposition of an 'emergent reasoning' and a 'logical reasoning', and that these two processes can be successfully represented in a Fock space algebraic structure.
|
2011.08277
|
Meera Hahn
|
Meera Hahn, Jacob Krantz, Dhruv Batra, Devi Parikh, James M. Rehg,
Stefan Lee and Peter Anderson
|
Where Are You? Localization from Embodied Dialog
| null |
EMNLP 2020
| null | null |
cs.CV cs.CL
|
http://creativecommons.org/licenses/by/4.0/
|
We present Where Are You? (WAY), a dataset of ~6k dialogs in which two humans
-- an Observer and a Locator -- complete a cooperative localization task. The
Observer is spawned at random in a 3D environment and can navigate from
first-person views while answering questions from the Locator. The Locator must
localize the Observer in a detailed top-down map by asking questions and giving
instructions. Based on this dataset, we define three challenging tasks:
Localization from Embodied Dialog or LED (localizing the Observer from dialog
history), Embodied Visual Dialog (modeling the Observer), and Cooperative
Localization (modeling both agents). In this paper, we focus on the LED task --
providing a strong baseline model with detailed ablations characterizing both
dataset biases and the importance of various modeling choices. Our best model
achieves 32.7% success at identifying the Observer's location within 3m in
unseen buildings, vs. 70.4% for human Locators.
|
[
{
"created": "Mon, 16 Nov 2020 21:09:43 GMT",
"version": "v1"
},
{
"created": "Fri, 3 Sep 2021 13:06:58 GMT",
"version": "v2"
}
] |
2021-09-06
|
[
[
"Hahn",
"Meera",
""
],
[
"Krantz",
"Jacob",
""
],
[
"Batra",
"Dhruv",
""
],
[
"Parikh",
"Devi",
""
],
[
"Rehg",
"James M.",
""
],
[
"Lee",
"Stefan",
""
],
[
"Anderson",
"Peter",
""
]
] |
We present Where Are You? (WAY), a dataset of ~6k dialogs in which two humans -- an Observer and a Locator -- complete a cooperative localization task. The Observer is spawned at random in a 3D environment and can navigate from first-person views while answering questions from the Locator. The Locator must localize the Observer in a detailed top-down map by asking questions and giving instructions. Based on this dataset, we define three challenging tasks: Localization from Embodied Dialog or LED (localizing the Observer from dialog history), Embodied Visual Dialog (modeling the Observer), and Cooperative Localization (modeling both agents). In this paper, we focus on the LED task -- providing a strong baseline model with detailed ablations characterizing both dataset biases and the importance of various modeling choices. Our best model achieves 32.7% success at identifying the Observer's location within 3m in unseen buildings, vs. 70.4% for human Locators.
|
2009.00964
|
Laura Giordano
|
Laura Giordano, Daniele Theseider Dupr\'e
|
A framework for a modular multi-concept lexicographic closure semantics
|
18 pages. Accepted for presentation at NMR2020 (18th International
Workshop on Non-Monotonic Reasoning, September 12th - 14th, Rhodes, Greece)
| null | null |
TR-INF-2020-09-03-UNIPMN
|
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We define a modular multi-concept extension of the lexicographic closure
semantics for defeasible description logics with typicality. The idea is that
of distributing the defeasible properties of concepts into different modules,
according to their subject, and of defining a notion of preference for each
module based on the lexicographic closure semantics. The preferential semantics
of the knowledge base can then be defined as a combination of the preferences
of the single modules. The range of possibilities, from fine grained to coarse
grained modules, provides a spectrum of alternative semantics.
|
[
{
"created": "Wed, 2 Sep 2020 11:41:38 GMT",
"version": "v1"
},
{
"created": "Fri, 4 Sep 2020 05:19:53 GMT",
"version": "v2"
}
] |
2020-09-07
|
[
[
"Giordano",
"Laura",
""
],
[
"Dupré",
"Daniele Theseider",
""
]
] |
We define a modular multi-concept extension of the lexicographic closure semantics for defeasible description logics with typicality. The idea is that of distributing the defeasible properties of concepts into different modules, according to their subject, and of defining a notion of preference for each module based on the lexicographic closure semantics. The preferential semantics of the knowledge base can then be defined as a combination of the preferences of the single modules. The range of possibilities, from fine grained to coarse grained modules, provides a spectrum of alternative semantics.
|
1506.00238
|
Bhavya Kailkhura
|
Bhavya Kailkhura and Sijia Liu and Thakshila Wimalajeewa and Pramod K.
Varshney
|
Measurement Matrix Design for Compressive Detection with Secrecy
Guarantees
| null | null | null | null |
cs.IT math.IT stat.AP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this letter, we consider the problem of detecting a high dimensional
signal based on compressed measurements with physical layer secrecy guarantees.
We assume that the network operates in the presence of an eavesdropper who
intends to discover the state of nature being monitored by the system. We
design measurement matrices which maximize the detection performance of the
network while guaranteeing a certain level of secrecy. We solve the measurement
matrix design problem under three different scenarios: $a)$ signal is known,
$b)$ signal lies in a low dimensional subspace, and $c)$ signal is sparse. It
is shown that the security performance of the system can be improved by using
optimized measurement matrices along with artificial noise injection based
techniques.
|
[
{
"created": "Sun, 31 May 2015 14:31:43 GMT",
"version": "v1"
}
] |
2015-06-02
|
[
[
"Kailkhura",
"Bhavya",
""
],
[
"Liu",
"Sijia",
""
],
[
"Wimalajeewa",
"Thakshila",
""
],
[
"Varshney",
"Pramod K.",
""
]
] |
In this letter, we consider the problem of detecting a high dimensional signal based on compressed measurements with physical layer secrecy guarantees. We assume that the network operates in the presence of an eavesdropper who intends to discover the state of nature being monitored by the system. We design measurement matrices which maximize the detection performance of the network while guaranteeing a certain level of secrecy. We solve the measurement matrix design problem under three different scenarios: $a)$ signal is known, $b)$ signal lies in a low dimensional subspace, and $c)$ signal is sparse. It is shown that the security performance of the system can be improved by using optimized measurement matrices along with artificial noise injection based techniques.
|
1308.2450
|
Marcos Villagra
|
Xiaoming Sun and Marcos Villagra
|
Exponential Quantum-Classical Gaps in Multiparty Nondeterministic
Communication Complexity
|
This paper has been withdrawn by the author due to a crucial mistake
in the main proof
| null | null | null |
cs.CC quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There are three different types of nondeterminism in quantum communication:
i) $\nqp$-communication, ii) $\qma$-communication, and iii)
$\qcma$-communication. In this paper we show that multiparty
$\nqp$-communication can be exponentially stronger than $\qcma$-communication.
This also implies an exponential separation with respect to classical
multiparty nondeterministic communication complexity. We argue that there
exists a total function that is hard for $\qcma$-communication and easy for
$\nqp$-communication. The proof involves an application of the pattern
tensor method and a new lower bound for polynomial threshold degree. Another
important consequence of this result is that nondeterministic rank can be
exponentially lower than the discrepancy bound.
|
[
{
"created": "Mon, 12 Aug 2013 02:53:46 GMT",
"version": "v1"
},
{
"created": "Sun, 18 Aug 2013 06:33:26 GMT",
"version": "v2"
}
] |
2013-08-20
|
[
[
"Sun",
"Xiaoming",
""
],
[
"Villagra",
"Marcos",
""
]
] |
There are three different types of nondeterminism in quantum communication: i) $\nqp$-communication, ii) $\qma$-communication, and iii) $\qcma$-communication. In this paper we show that multiparty $\nqp$-communication can be exponentially stronger than $\qcma$-communication. This also implies an exponential separation with respect to classical multiparty nondeterministic communication complexity. We argue that there exists a total function that is hard for $\qcma$-communication and easy for $\nqp$-communication. The proof involves an application of the pattern tensor method and a new lower bound for polynomial threshold degree. Another important consequence of this result is that nondeterministic rank can be exponentially lower than the discrepancy bound.
|
2405.15082
|
Han Song
|
Han Song and Zhongche Qu and Zhi Zhang and Zihan Ye and Cong Liu
|
Advancements in Translation Accuracy for Stereo Visual-Inertial
Initialization
| null | null | null | null |
cs.RO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
As the current initialization method in the state-of-the-art Stereo
Visual-Inertial SLAM framework, ORB-SLAM3 has limitations. Its success depends
on the performance of the pure stereo SLAM system and is based on the
underlying assumption that pure visual SLAM can accurately estimate the camera
trajectory, which is essential for inertial parameter estimation. Meanwhile,
the further improved initialization method for ORB-SLAM3, known as Stereo-NEC,
is time-consuming due to applying keypoint tracking to estimate gyroscope bias
with normal epipolar constraints. To address the limitations of previous
methods, this paper proposes a method aimed at enhancing translation accuracy
during the initialization stage. The fundamental concept of our method is to
improve the translation estimate with a 3 Degree-of-Freedom (DoF) Bundle
Adjustment (BA), independently, while the rotation estimate is fixed, instead
of using ORB-SLAM3's 6-DoF BA. Additionally, the rotation estimate will be
updated by considering IMU measurements and gyroscope bias, unlike ORB-SLAM3's
rotation, which is directly obtained from stereo visual odometry and may yield
inferior results when operating in challenging scenarios. We also conduct
extensive evaluations on the public benchmark, the EuRoC dataset, demonstrating
that our method excels in accuracy.
|
[
{
"created": "Thu, 23 May 2024 22:08:38 GMT",
"version": "v1"
},
{
"created": "Sun, 9 Jun 2024 07:03:45 GMT",
"version": "v2"
},
{
"created": "Thu, 20 Jun 2024 16:44:02 GMT",
"version": "v3"
}
] |
2024-06-21
|
[
[
"Song",
"Han",
""
],
[
"Qu",
"Zhongche",
""
],
[
"Zhang",
"Zhi",
""
],
[
"Ye",
"Zihan",
""
],
[
"Liu",
"Cong",
""
]
] |
As the current initialization method in the state-of-the-art Stereo Visual-Inertial SLAM framework, ORB-SLAM3 has limitations. Its success depends on the performance of the pure stereo SLAM system and is based on the underlying assumption that pure visual SLAM can accurately estimate the camera trajectory, which is essential for inertial parameter estimation. Meanwhile, the further improved initialization method for ORB-SLAM3, known as Stereo-NEC, is time-consuming due to applying keypoint tracking to estimate gyroscope bias with normal epipolar constraints. To address the limitations of previous methods, this paper proposes a method aimed at enhancing translation accuracy during the initialization stage. The fundamental concept of our method is to improve the translation estimate with a 3 Degree-of-Freedom (DoF) Bundle Adjustment (BA), independently, while the rotation estimate is fixed, instead of using ORB-SLAM3's 6-DoF BA. Additionally, the rotation estimate will be updated by considering IMU measurements and gyroscope bias, unlike ORB-SLAM3's rotation, which is directly obtained from stereo visual odometry and may yield inferior results when operating in challenging scenarios. We also conduct extensive evaluations on the public benchmark, the EuRoC dataset, demonstrating that our method excels in accuracy.
|
2102.09340
|
Xinlong Lu
|
Xinlong Lu, Zhengming Ma, Yuanping Lin
|
Domain Adaptive Learning Based on Sample-Dependent and Learnable Kernels
| null | null | null | null |
cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Reproducing Kernel Hilbert Space (RKHS) is the common mathematical platform
for various kernel methods in machine learning. The purpose of kernel learning
is to learn an appropriate RKHS according to different machine learning
scenarios and training samples. Because RKHS is uniquely generated by the
kernel function, kernel learning can be regarded as kernel function learning.
This paper proposes a Domain Adaptive Learning method based on Sample-Dependent
and Learnable Kernels (SDLK-DAL). The first contribution of our work is to
propose a sample-dependent and learnable Positive Definite Quadratic Kernel
function (PDQK) framework. Unlike learning the exponential parameter of
Gaussian kernel function or the coefficient of kernel combinations, the
proposed PDQK is a positive definite quadratic function, in which the symmetric
positive semi-definite matrix is the learnable part in machine learning
applications. The second contribution is that we apply PDQK to Domain
Adaptive Learning (DAL). Our approach learns the PDQK by minimizing the
mean discrepancy between the data of the source and target domains and then
transforms the data into an optimized RKHS generated by PDQK. We conduct a
series of experiments in which the RKHS determined by PDQK replaces those in
several state-of-the-art DAL algorithms, and our approach achieves better
performance.
|
[
{
"created": "Thu, 18 Feb 2021 13:55:06 GMT",
"version": "v1"
}
] |
2021-02-19
|
[
[
"Lu",
"Xinlong",
""
],
[
"Ma",
"Zhengming",
""
],
[
"Lin",
"Yuanping",
""
]
] |
Reproducing Kernel Hilbert Space (RKHS) is the common mathematical platform for various kernel methods in machine learning. The purpose of kernel learning is to learn an appropriate RKHS according to different machine learning scenarios and training samples. Because RKHS is uniquely generated by the kernel function, kernel learning can be regarded as kernel function learning. This paper proposes a Domain Adaptive Learning method based on Sample-Dependent and Learnable Kernels (SDLK-DAL). The first contribution of our work is to propose a sample-dependent and learnable Positive Definite Quadratic Kernel function (PDQK) framework. Unlike learning the exponential parameter of Gaussian kernel function or the coefficient of kernel combinations, the proposed PDQK is a positive definite quadratic function, in which the symmetric positive semi-definite matrix is the learnable part in machine learning applications. The second contribution is that we apply PDQK to Domain Adaptive Learning (DAL). Our approach learns the PDQK by minimizing the mean discrepancy between the data of the source and target domains and then transforms the data into an optimized RKHS generated by PDQK. We conduct a series of experiments in which the RKHS determined by PDQK replaces those in several state-of-the-art DAL algorithms, and our approach achieves better performance.
|
2311.09807
|
Yanzhu Guo
|
Yanzhu Guo, Guokan Shang, Michalis Vazirgiannis and Chlo\'e Clavel
|
The Curious Decline of Linguistic Diversity: Training Language Models on
Synthetic Text
|
Accepted to NAACL 2024 Findings
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This study investigates the consequences of training language models on
synthetic data generated by their predecessors, an increasingly prevalent
practice given the prominence of powerful generative models. Diverging from the
usual emphasis on performance metrics, we focus on the impact of this training
methodology on linguistic diversity, especially when conducted recursively over
time. To assess this, we adapt and develop a set of novel metrics targeting
lexical, syntactic, and semantic diversity, applying them in recursive
finetuning experiments across various natural language generation tasks in
English. Our findings reveal a consistent decrease in the diversity of the
model outputs through successive iterations, especially remarkable for tasks
demanding high levels of creativity. This trend underscores the potential risks
of training language models on synthetic text, particularly concerning the
preservation of linguistic richness. Our study highlights the need for careful
consideration of the long-term effects of such training approaches on the
linguistic capabilities of language models.
|
[
{
"created": "Thu, 16 Nov 2023 11:31:50 GMT",
"version": "v1"
},
{
"created": "Tue, 16 Apr 2024 15:57:11 GMT",
"version": "v2"
}
] |
2024-04-17
|
[
[
"Guo",
"Yanzhu",
""
],
[
"Shang",
"Guokan",
""
],
[
"Vazirgiannis",
"Michalis",
""
],
[
"Clavel",
"Chloé",
""
]
] |
This study investigates the consequences of training language models on synthetic data generated by their predecessors, an increasingly prevalent practice given the prominence of powerful generative models. Diverging from the usual emphasis on performance metrics, we focus on the impact of this training methodology on linguistic diversity, especially when conducted recursively over time. To assess this, we adapt and develop a set of novel metrics targeting lexical, syntactic, and semantic diversity, applying them in recursive finetuning experiments across various natural language generation tasks in English. Our findings reveal a consistent decrease in the diversity of the model outputs through successive iterations, especially remarkable for tasks demanding high levels of creativity. This trend underscores the potential risks of training language models on synthetic text, particularly concerning the preservation of linguistic richness. Our study highlights the need for careful consideration of the long-term effects of such training approaches on the linguistic capabilities of language models.
|
2012.12594
|
Christian Schulz
|
Faisal Abu-Khzam, Sebastian Lamm, Matthias Mnich, Alexander Noe,
Christian Schulz, and Darren Strash
|
Recent Advances in Practical Data Reduction
| null | null | null | null |
cs.DS cs.CC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Over the last two decades, significant advances have been made in the design
and analysis of fixed-parameter algorithms for a wide variety of
graph-theoretic problems. This has resulted in an algorithmic toolbox that is
by now well-established. However, these theoretical algorithmic ideas have
received very little attention from the practical perspective. We survey recent
trends in data reduction engineering results for selected problems. Moreover,
we describe concrete techniques that may be useful for future implementations
in the area and give open problems and research questions.
|
[
{
"created": "Wed, 23 Dec 2020 10:52:29 GMT",
"version": "v1"
},
{
"created": "Sat, 26 Dec 2020 10:11:13 GMT",
"version": "v2"
},
{
"created": "Thu, 31 Dec 2020 07:57:30 GMT",
"version": "v3"
}
] |
2021-01-01
|
[
[
"Abu-Khzam",
"Faisal",
""
],
[
"Lamm",
"Sebastian",
""
],
[
"Mnich",
"Matthias",
""
],
[
"Noe",
"Alexander",
""
],
[
"Schulz",
"Christian",
""
],
[
"Strash",
"Darren",
""
]
] |
Over the last two decades, significant advances have been made in the design and analysis of fixed-parameter algorithms for a wide variety of graph-theoretic problems. This has resulted in an algorithmic toolbox that is by now well-established. However, these theoretical algorithmic ideas have received very little attention from the practical perspective. We survey recent trends in data reduction engineering results for selected problems. Moreover, we describe concrete techniques that may be useful for future implementations in the area and give open problems and research questions.
|
2305.15314
|
Vijayanta Jain
|
Vijayanta Jain, Sepideh Ghanavati, Sai Teja Peddinti, Collin McMillan
|
Towards Fine-Grained Localization of Privacy Behaviors
| null | null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Mobile applications are required to give privacy notices to users when they
collect or share personal information. Creating consistent and concise privacy
notices can be a challenging task for developers. Previous work has attempted
to help developers create privacy notices through a questionnaire or predefined
templates. In this paper, we propose a novel approach and a framework, called
PriGen, that extends this prior work. PriGen uses static analysis to identify
Android applications' code segments that process sensitive information (i.e.
permission-requiring code segments) and then leverages a Neural Machine
Translation model to translate them into privacy captions. We present the
initial evaluation of our translation task for ~300,000 code segments.
|
[
{
"created": "Wed, 24 May 2023 16:32:14 GMT",
"version": "v1"
}
] |
2023-05-25
|
[
[
"Jain",
"Vijayanta",
""
],
[
"Ghanavati",
"Sepideh",
""
],
[
"Peddinti",
"Sai Teja",
""
],
[
"McMillan",
"Collin",
""
]
] |
Mobile applications are required to give privacy notices to users when they collect or share personal information. Creating consistent and concise privacy notices can be a challenging task for developers. Previous work has attempted to help developers create privacy notices through a questionnaire or predefined templates. In this paper, we propose a novel approach and a framework, called PriGen, that extends this prior work. PriGen uses static analysis to identify Android applications' code segments that process sensitive information (i.e. permission-requiring code segments) and then leverages a Neural Machine Translation model to translate them into privacy captions. We present the initial evaluation of our translation task for ~300,000 code segments.
|
2111.00722
|
Tetsu Kasanishi
|
Tetsu Kasanishi, Xueting Wang, and Toshihiko Yamasaki
|
Edge-Level Explanations for Graph Neural Networks by Extending
Explainability Methods for Convolutional Neural Networks
|
4 pages, accepted at 23rd IEEE International Symposium on Multimedia
(ISM), short paper, 2021
| null | null | null |
cs.LG cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Graph Neural Networks (GNNs) are deep learning models that take graph data as
inputs, and they are applied to various tasks such as traffic prediction and
molecular property prediction. However, owing to the complexity of the GNNs, it
has been difficult to analyze which parts of inputs affect the GNN model's
outputs. In this study, we extend explainability methods for Convolutional
Neural Networks (CNNs), such as Local Interpretable Model-Agnostic Explanations
(LIME), Gradient-Based Saliency Maps, and Gradient-Weighted Class Activation
Mapping (Grad-CAM) to GNNs, and predict which edges in the input graphs are
important for GNN decisions. The experimental results indicate that the
LIME-based approach is the most efficient explainability method for multiple
tasks in real-world situations, outperforming even the state-of-the-art
method in GNN explainability.
|
[
{
"created": "Mon, 1 Nov 2021 06:27:29 GMT",
"version": "v1"
}
] |
2021-11-02
|
[
[
"Kasanishi",
"Tetsu",
""
],
[
"Wang",
"Xueting",
""
],
[
"Yamasaki",
"Toshihiko",
""
]
] |
Graph Neural Networks (GNNs) are deep learning models that take graph data as inputs, and they are applied to various tasks such as traffic prediction and molecular property prediction. However, owing to the complexity of the GNNs, it has been difficult to analyze which parts of inputs affect the GNN model's outputs. In this study, we extend explainability methods for Convolutional Neural Networks (CNNs), such as Local Interpretable Model-Agnostic Explanations (LIME), Gradient-Based Saliency Maps, and Gradient-Weighted Class Activation Mapping (Grad-CAM) to GNNs, and predict which edges in the input graphs are important for GNN decisions. The experimental results indicate that the LIME-based approach is the most efficient explainability method for multiple tasks in real-world situations, outperforming even the state-of-the-art method in GNN explainability.
|
2405.15665
|
Maleknaz Nayebi
|
Umme Ayman Koana, Quang Hy Le, Shadikur Rahman, Chris Carlson, Francis
Chew, Maleknaz Nayebi
|
Examining Ownership Models in Software Teams: A Systematic Literature
Review and a Replication Study
|
Pre-print of an accepted paper for the ESE journal
| null | null | null |
cs.SE
|
http://creativecommons.org/licenses/by/4.0/
|
Effective ownership of software artifacts, particularly code, is crucial for
accountability, knowledge sharing, and code quality enhancement. Researchers
have proposed models linking ownership of software artifacts with developer
performance and code quality. Our study aims to systematically examine various
ownership models and provide a structured literature overview. Conducting a
systematic literature review, we identified 79 relevant papers published
between 2005 and 2022. We developed a taxonomy of ownership artifacts based on
type, owners, and degree of ownership, along with compiling modeling variables
and analytics types used in each study. Additionally, we assessed the
replication status of each study. As a result, we identified nine distinct
software artifacts whose ownership has been discussed in the literature, with
"Code" being the most frequently analyzed artifact. We found that only three
papers (3.79%) provided code and data, whereas nine papers (11.4%) provided
only data. Using our systematic literature review results, we replicated
experiments on nine priority projects at \texttt{Brightsquid}. The company
aimed to compare its code quality against ownership factors in other teams, so
we conducted a replication study using their data. Unlike prior studies, we
found no strong correlation between minor contributors and bug numbers.
Surprisingly, we found no strong link between the total number of developers
modifying a file and bug counts, contrasting previous findings. However, we
observed a significant correlation between major contributors and bug counts,
diverging from earlier research.
|
[
{
"created": "Fri, 24 May 2024 16:03:22 GMT",
"version": "v1"
}
] |
2024-05-27
|
[
[
"Koana",
"Umme Ayman",
""
],
[
"Le",
"Quang Hy",
""
],
[
"Rahman",
"Shadikur",
""
],
[
"Carlson",
"Chris",
""
],
[
"Chew",
"Francis",
""
],
[
"Nayebi",
"Maleknaz",
""
]
] |
Effective ownership of software artifacts, particularly code, is crucial for accountability, knowledge sharing, and code quality enhancement. Researchers have proposed models linking ownership of software artifacts with developer performance and code quality. Our study aims to systematically examine various ownership models and provide a structured literature overview. Conducting a systematic literature review, we identified 79 relevant papers published between 2005 and 2022. We developed a taxonomy of ownership artifacts based on type, owners, and degree of ownership, along with compiling modeling variables and analytics types used in each study. Additionally, we assessed the replication status of each study. As a result, we identified nine distinct software artifacts whose ownership has been discussed in the literature, with "Code" being the most frequently analyzed artifact. We found that only three papers (3.79%) provided code and data, whereas nine papers (11.4%) provided only data. Using our systematic literature review results, we replicated experiments on nine priority projects at \texttt{Brightsquid}. The company aimed to compare its code quality against ownership factors in other teams, so we conducted a replication study using their data. Unlike prior studies, we found no strong correlation between minor contributors and bug numbers. Surprisingly, we found no strong link between the total number of developers modifying a file and bug counts, contrasting previous findings. However, we observed a significant correlation between major contributors and bug counts, diverging from earlier research.
|
2306.00355
|
Jinsheng Ba
|
Jinsheng Ba, Manuel Rigger
|
CERT: Finding Performance Issues in Database Systems Through the Lens of
Cardinality Estimation
|
The 46th International Conference on Software Engineering (ICSE'24),
Lisbon, Portugal
| null |
10.1145/3597503.3639076
| null |
cs.SE cs.DB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Database Management Systems (DBMSs) process a given query by creating a query
plan, which is subsequently executed, to compute the query's result. Deriving
an efficient query plan is challenging, and both academia and industry have
invested decades into researching query optimization. Despite this, DBMSs are
prone to performance issues, where a DBMS produces an unexpectedly inefficient
query plan that might lead to the slow execution of a query. Finding such
issues is a longstanding problem and inherently difficult, because no ground
truth information on an expected execution time exists. In this work, we
propose Cardinality Estimation Restriction Testing (CERT), a novel technique
that finds performance issues through the lens of cardinality estimation. Given
a query on a database, CERT derives a more restrictive query (e.g., by
replacing a LEFT JOIN with an INNER JOIN), whose estimated number of rows
should not exceed the estimated number of rows for the original query. CERT
tests cardinality estimation specifically, because it was shown to be the
most important component of query optimization; thus, we expect that finding and
fixing such issues might result in the highest performance gains. In addition,
we found that other kinds of query optimization issues can be exposed by
unexpected estimated cardinalities, which can also be found by CERT. CERT is a
black-box technique that does not require access to the source code; DBMSs
expose query plans via the EXPLAIN statement. CERT eschews executing queries,
which is costly and prone to performance fluctuations. We evaluated CERT on
three widely used and mature DBMSs, MySQL, TiDB, and CockroachDB. CERT found 13
unique issues, of which 2 issues were fixed and 9 confirmed by the developers.
We expect that this new angle on finding performance bugs will help DBMS
developers in improving DBMSs' performance.
|
[
{
"created": "Thu, 1 Jun 2023 05:21:31 GMT",
"version": "v1"
},
{
"created": "Fri, 29 Dec 2023 07:30:29 GMT",
"version": "v2"
},
{
"created": "Wed, 10 Jan 2024 02:56:41 GMT",
"version": "v3"
}
] |
2024-01-11
|
[
[
"Ba",
"Jinsheng",
""
],
[
"Rigger",
"Manuel",
""
]
] |
Database Management Systems (DBMSs) process a given query by creating a query plan, which is subsequently executed, to compute the query's result. Deriving an efficient query plan is challenging, and both academia and industry have invested decades into researching query optimization. Despite this, DBMSs are prone to performance issues, where a DBMS produces an unexpectedly inefficient query plan that might lead to the slow execution of a query. Finding such issues is a longstanding problem and inherently difficult, because no ground truth information on an expected execution time exists. In this work, we propose Cardinality Estimation Restriction Testing (CERT), a novel technique that finds performance issues through the lens of cardinality estimation. Given a query on a database, CERT derives a more restrictive query (e.g., by replacing a LEFT JOIN with an INNER JOIN), whose estimated number of rows should not exceed the estimated number of rows for the original query. CERT tests cardinality estimation specifically, because it was shown to be the most important component of query optimization; thus, we expect that finding and fixing such issues might result in the highest performance gains. In addition, we found that other kinds of query optimization issues can be exposed by unexpected estimated cardinalities, which can also be found by CERT. CERT is a black-box technique that does not require access to the source code; DBMSs expose query plans via the EXPLAIN statement. CERT eschews executing queries, which is costly and prone to performance fluctuations. We evaluated CERT on three widely used and mature DBMSs, MySQL, TiDB, and CockroachDB. CERT found 13 unique issues, of which 2 issues were fixed and 9 confirmed by the developers. We expect that this new angle on finding performance bugs will help DBMS developers in improving DBMSs' performance.
|
2109.10868
|
Corrado Puligheddu
|
Sharda Tripathi, Corrado Puligheddu, Carla Fabiana Chiasserini,
Federico Mungari
|
A Context-aware Radio Resource Management in Heterogeneous Virtual RANs
|
Accepted for publication in IEEE Transactions on Cognitive
Communications and Networking
| null |
10.1109/TCCN.2021.3115098
| null |
cs.NI
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
New-generation wireless networks are designed to support a wide range of
services with diverse key performance indicator (KPI) requirements. A
fundamental component of such networks, and a pivotal factor to the fulfillment
of the target KPIs, is the virtual radio access network (vRAN), which allows
high flexibility in the control of the radio link. However, to fully exploit
the potential of vRANs, an efficient mapping of the rapidly varying context
to radio control decisions is not only essential, but also challenging owing to
the interdependence of user traffic demand, channel conditions, and resource
allocation. Here, we propose CAREM, a reinforcement learning framework for
dynamic radio resource allocation in heterogeneous vRANs, which selects the
best available link and transmission parameters for packet transfer, so as to
meet the KPI requirements. To show its effectiveness, we develop a testbed for
proof-of-concept. Experimental results demonstrate that CAREM enables an
efficient radio resource allocation under different settings and traffic
demand. Also, compared to the closest existing scheme based on neural network
and the standard LTE, CAREM exhibits an improvement of one order of magnitude
in packet loss and latency, while it provides a 65% latency improvement
relative to the contextual bandit approach.
|
[
{
"created": "Wed, 22 Sep 2021 17:37:26 GMT",
"version": "v1"
},
{
"created": "Thu, 23 Sep 2021 15:18:16 GMT",
"version": "v2"
}
] |
2021-09-24
|
[
[
"Tripathi",
"Sharda",
""
],
[
"Puligheddu",
"Corrado",
""
],
[
"Chiasserini",
"Carla Fabiana",
""
],
[
"Mungari",
"Federico",
""
]
] |
New-generation wireless networks are designed to support a wide range of services with diverse key performance indicator (KPI) requirements. A fundamental component of such networks, and a pivotal factor to the fulfillment of the target KPIs, is the virtual radio access network (vRAN), which allows high flexibility in the control of the radio link. However, to fully exploit the potential of vRANs, an efficient mapping of the rapidly varying context to radio control decisions is not only essential, but also challenging owing to the interdependence of user traffic demand, channel conditions, and resource allocation. Here, we propose CAREM, a reinforcement learning framework for dynamic radio resource allocation in heterogeneous vRANs, which selects the best available link and transmission parameters for packet transfer, so as to meet the KPI requirements. To show its effectiveness, we develop a testbed for proof-of-concept. Experimental results demonstrate that CAREM enables an efficient radio resource allocation under different settings and traffic demand. Also, compared to the closest existing scheme based on neural network and the standard LTE, CAREM exhibits an improvement of one order of magnitude in packet loss and latency, while it provides a 65% latency improvement relative to the contextual bandit approach.
|
2404.12407
|
Da-Wei Zhou
|
Da-Wei Zhou, Zhi-Hong Qi, Han-Jia Ye, De-Chuan Zhan
|
TV100: A TV Series Dataset that Pre-Trained CLIP Has Not Seen
|
Project page: https://tv-100.github.io/
| null | null | null |
cs.CV cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The era of pre-trained models has ushered in a wealth of new insights for the
machine learning community. Among the myriad of questions that arise, one of
paramount importance is: 'Do pre-trained models possess comprehensive
knowledge?' This paper seeks to address this crucial inquiry. In line with our
objective, we have made publicly available a novel dataset comprised of images
from TV series released post-2021. This dataset holds significant potential for
use in various research areas, including the evaluation of incremental
learning, novel class discovery, and long-tailed learning, among others.
Project page: https://tv-100.github.io/
|
[
{
"created": "Tue, 16 Apr 2024 17:47:45 GMT",
"version": "v1"
}
] |
2024-04-22
|
[
[
"Zhou",
"Da-Wei",
""
],
[
"Qi",
"Zhi-Hong",
""
],
[
"Ye",
"Han-Jia",
""
],
[
"Zhan",
"De-Chuan",
""
]
] |
The era of pre-trained models has ushered in a wealth of new insights for the machine learning community. Among the myriad of questions that arise, one of paramount importance is: 'Do pre-trained models possess comprehensive knowledge?' This paper seeks to address this crucial inquiry. In line with our objective, we have made publicly available a novel dataset comprised of images from TV series released post-2021. This dataset holds significant potential for use in various research areas, including the evaluation of incremental learning, novel class discovery, and long-tailed learning, among others. Project page: https://tv-100.github.io/
|
cs/0206033
|
David Eppstein
|
David Eppstein, Jean-Claude Falmagne
|
Algorithms for Media
|
12 pages
| null | null | null |
cs.DS
| null |
Falmagne recently introduced the concept of a medium, a combinatorial object
encompassing hyperplane arrangements, topological orderings, acyclic
orientations, and many other familiar structures. We find efficient solutions
for several algorithmic problems on media: finding short reset sequences,
shortest paths, testing whether a medium has a closed orientation, and listing
the states of a medium given a black-box description.
|
[
{
"created": "Mon, 24 Jun 2002 06:50:52 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Eppstein",
"David",
""
],
[
"Falmagne",
"Jean-Claude",
""
]
] |
Falmagne recently introduced the concept of a medium, a combinatorial object encompassing hyperplane arrangements, topological orderings, acyclic orientations, and many other familiar structures. We find efficient solutions for several algorithmic problems on media: finding short reset sequences, shortest paths, testing whether a medium has a closed orientation, and listing the states of a medium given a black-box description.
|
1610.04583
|
Alexander Wein
|
Amelia Perry, Alexander S. Wein, Afonso S. Bandeira, Ankur Moitra
|
Message-passing algorithms for synchronization problems over compact
groups
|
35 pages, 11 figures
| null |
10.1002/cpa.21750
| null |
cs.IT cs.CV cs.DS math.IT math.OC stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Various alignment problems arising in cryo-electron microscopy, community
detection, time synchronization, computer vision, and other fields fall into a
common framework of synchronization problems over compact groups such as Z/L,
U(1), or SO(3). The goal of such problems is to estimate an unknown vector of
group elements given noisy relative observations. We present an efficient
iterative algorithm to solve a large class of these problems, allowing for any
compact group, with measurements on multiple 'frequency channels' (Fourier
modes, or more generally, irreducible representations of the group). Our
algorithm is a highly efficient iterative method following the blueprint of
approximate message passing (AMP), which has recently arisen as a central
technique for inference problems such as structured low-rank estimation and
compressed sensing. We augment the standard ideas of AMP with ideas from
representation theory so that the algorithm can work with distributions over
compact groups. Using standard but non-rigorous methods from statistical
physics we analyze the behavior of our algorithm on a Gaussian noise model,
identifying phases where the problem is easy, (computationally) hard, and
(statistically) impossible. In particular, such evidence predicts that our
algorithm is information-theoretically optimal in many cases, and that the
remaining cases show evidence of statistical-to-computational gaps.
|
[
{
"created": "Fri, 14 Oct 2016 19:05:32 GMT",
"version": "v1"
}
] |
2018-09-14
|
[
[
"Perry",
"Amelia",
""
],
[
"Wein",
"Alexander S.",
""
],
[
"Bandeira",
"Afonso S.",
""
],
[
"Moitra",
"Ankur",
""
]
] |
Various alignment problems arising in cryo-electron microscopy, community detection, time synchronization, computer vision, and other fields fall into a common framework of synchronization problems over compact groups such as Z/L, U(1), or SO(3). The goal of such problems is to estimate an unknown vector of group elements given noisy relative observations. We present an efficient iterative algorithm to solve a large class of these problems, allowing for any compact group, with measurements on multiple 'frequency channels' (Fourier modes, or more generally, irreducible representations of the group). Our algorithm is a highly efficient iterative method following the blueprint of approximate message passing (AMP), which has recently arisen as a central technique for inference problems such as structured low-rank estimation and compressed sensing. We augment the standard ideas of AMP with ideas from representation theory so that the algorithm can work with distributions over compact groups. Using standard but non-rigorous methods from statistical physics we analyze the behavior of our algorithm on a Gaussian noise model, identifying phases where the problem is easy, (computationally) hard, and (statistically) impossible. In particular, such evidence predicts that our algorithm is information-theoretically optimal in many cases, and that the remaining cases show evidence of statistical-to-computational gaps.
|
2008.12646
|
Yun Peng
|
Yun Peng, Byron Choi, Jianliang Xu
|
Graph Learning for Combinatorial Optimization: A Survey of
State-of-the-Art
|
40 pages
| null | null | null |
cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Graphs have been widely used to represent complex data in many applications.
Efficient and effective analysis of graphs is important for graph-based
applications. However, most graph analysis tasks are combinatorial optimization
(CO) problems, which are NP-hard. Recent studies have focused a lot on the
potential of using machine learning (ML) to solve graph-based CO problems. Most
recent methods follow the two-stage framework. The first stage is graph
representation learning, which embeds the graphs into low-dimension vectors.
The second stage uses ML to solve the CO problems using the embeddings of the
graphs learned in the first stage. The works for the first stage can be
classified into two categories, graph embedding (GE) methods and end-to-end
(E2E) learning methods. For GE methods, learning graph embedding has its own
objective, which may not rely on the CO problems to be solved. The CO problems
are solved by independent downstream tasks. For E2E learning methods, the
learning of graph embeddings does not have its own objective and is an
intermediate step of the learning procedure of solving the CO problems. The
works for the second stage can also be classified into two categories,
non-autoregressive methods and autoregressive methods. Non-autoregressive
methods predict a solution for a CO problem in one shot. A non-autoregressive
method predicts a matrix that denotes the probability of each node/edge being a
part of a solution of the CO problem. The solution can be computed from the
matrix. Autoregressive methods iteratively extend a partial solution step by
step. At each step, an autoregressive method predicts a node/edge conditioned
on the current partial solution, which is then used to extend it. In this survey, we
provide a thorough overview of recent studies of the graph learning-based CO
methods. The survey ends with several remarks on future research directions.
|
[
{
"created": "Wed, 26 Aug 2020 09:56:30 GMT",
"version": "v1"
},
{
"created": "Tue, 20 Apr 2021 08:47:03 GMT",
"version": "v2"
},
{
"created": "Thu, 22 Apr 2021 02:07:09 GMT",
"version": "v3"
}
] |
2021-04-23
|
[
[
"Peng",
"Yun",
""
],
[
"Choi",
"Byron",
""
],
[
"Xu",
"Jianliang",
""
]
] |
Graphs have been widely used to represent complex data in many applications. Efficient and effective analysis of graphs is important for graph-based applications. However, most graph analysis tasks are combinatorial optimization (CO) problems, which are NP-hard. Recent studies have focused a lot on the potential of using machine learning (ML) to solve graph-based CO problems. Most recent methods follow the two-stage framework. The first stage is graph representation learning, which embeds the graphs into low-dimension vectors. The second stage uses ML to solve the CO problems using the embeddings of the graphs learned in the first stage. The works for the first stage can be classified into two categories, graph embedding (GE) methods and end-to-end (E2E) learning methods. For GE methods, learning graph embedding has its own objective, which may not rely on the CO problems to be solved. The CO problems are solved by independent downstream tasks. For E2E learning methods, the learning of graph embeddings does not have its own objective and is an intermediate step of the learning procedure of solving the CO problems. The works for the second stage can also be classified into two categories, non-autoregressive methods and autoregressive methods. Non-autoregressive methods predict a solution for a CO problem in one shot. A non-autoregressive method predicts a matrix that denotes the probability of each node/edge being a part of a solution of the CO problem. The solution can be computed from the matrix. Autoregressive methods iteratively extend a partial solution step by step. At each step, an autoregressive method predicts a node/edge conditioned on the current partial solution, which is then used to extend it. In this survey, we provide a thorough overview of recent studies of the graph learning-based CO methods. The survey ends with several remarks on future research directions.
|
2309.03787
|
Chengguang Gan
|
Chengguang Gan, Qinghao Zhang, Tatsunori Mori
|
USA: Universal Sentiment Analysis Model & Construction of Japanese
Sentiment Text Classification and Part of Speech Dataset
|
Model already Open Sourced, Dataset will release soon
| null | null | null |
cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sentiment analysis is a pivotal task in the domain of natural language
processing. It encompasses both text-level sentiment polarity classification
and word-level Part of Speech(POS) sentiment polarity determination. Such
analysis challenges models to understand text holistically while also
extracting nuanced information. With the rise of Large Language Models(LLMs),
new avenues for sentiment analysis have opened. This paper proposes enhancing
performance by leveraging the Mutual Reinforcement Effect(MRE) between
individual words and the overall text. It delves into how word polarity
influences the overarching sentiment of a passage. To support our research, we
annotated four novel Sentiment Text Classification and Part of Speech(SCPOS)
datasets, building upon existing sentiment classification datasets.
Furthermore, we developed a Universal Sentiment Analysis(USA) model, with a
7-billion parameter size. Experimental results revealed that our model
surpassed the performance of gpt-3.5-turbo across all four datasets,
underscoring the significance of MRE in sentiment analysis.
|
[
{
"created": "Thu, 7 Sep 2023 15:35:00 GMT",
"version": "v1"
},
{
"created": "Thu, 14 Sep 2023 05:53:45 GMT",
"version": "v2"
}
] |
2023-09-15
|
[
[
"Gan",
"Chengguang",
""
],
[
"Zhang",
"Qinghao",
""
],
[
"Mori",
"Tatsunori",
""
]
] |
Sentiment analysis is a pivotal task in the domain of natural language processing. It encompasses both text-level sentiment polarity classification and word-level Part of Speech(POS) sentiment polarity determination. Such analysis challenges models to understand text holistically while also extracting nuanced information. With the rise of Large Language Models(LLMs), new avenues for sentiment analysis have opened. This paper proposes enhancing performance by leveraging the Mutual Reinforcement Effect(MRE) between individual words and the overall text. It delves into how word polarity influences the overarching sentiment of a passage. To support our research, we annotated four novel Sentiment Text Classification and Part of Speech(SCPOS) datasets, building upon existing sentiment classification datasets. Furthermore, we developed a Universal Sentiment Analysis(USA) model, with a 7-billion parameter size. Experimental results revealed that our model surpassed the performance of gpt-3.5-turbo across all four datasets, underscoring the significance of MRE in sentiment analysis.
|
1510.08544
|
Silas Fong
|
Silas L. Fong and Vincent Y. F. Tan
|
Empirical Output Distribution of Good Delay-Limited Codes for
Quasi-Static Fading Channels
|
This paper has been withdrawn by the authors because of insufficient
novelty
| null | null | null |
cs.IT math.IT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper considers delay-limited communication over quasi-static fading
channels under a long-term power constraint. A sequence of length-$n$
delay-limited codes for a quasi-static fading channel is said to be
capacity-achieving if the codes achieve the delay-limited capacity, which is
defined to be the maximum rate achievable by delay-limited codes. The
delay-limited capacity is sometimes referred to as the zero-outage capacity in
wireless communications. The delay-limited capacity is the appropriate choice
of performance measure for delay-sensitive applications such as voice and video
over fading channels. It is shown that for any sequence of capacity-achieving
delay-limited codes with vanishing error probabilities, the normalized relative
entropy between the output distribution induced by the length-$n$ code and the
$n$-fold product of the capacity-achieving output distribution, denoted by
$\frac{1}{n}D(p_{Y^n}\|p_{Y^n}^*)$, converges to zero. Additionally, we extend
our convergence result to capacity-achieving delay-limited codes with
non-vanishing error probabilities.
|
[
{
"created": "Thu, 29 Oct 2015 02:09:37 GMT",
"version": "v1"
},
{
"created": "Thu, 26 Nov 2015 06:50:12 GMT",
"version": "v2"
},
{
"created": "Mon, 14 Nov 2016 01:51:48 GMT",
"version": "v3"
}
] |
2016-11-15
|
[
[
"Fong",
"Silas L.",
""
],
[
"Tan",
"Vincent Y. F.",
""
]
] |
This paper considers delay-limited communication over quasi-static fading channels under a long-term power constraint. A sequence of length-$n$ delay-limited codes for a quasi-static fading channel is said to be capacity-achieving if the codes achieve the delay-limited capacity, which is defined to be the maximum rate achievable by delay-limited codes. The delay-limited capacity is sometimes referred to as the zero-outage capacity in wireless communications. The delay-limited capacity is the appropriate choice of performance measure for delay-sensitive applications such as voice and video over fading channels. It is shown that for any sequence of capacity-achieving delay-limited codes with vanishing error probabilities, the normalized relative entropy between the output distribution induced by the length-$n$ code and the $n$-fold product of the capacity-achieving output distribution, denoted by $\frac{1}{n}D(p_{Y^n}\|p_{Y^n}^*)$, converges to zero. Additionally, we extend our convergence result to capacity-achieving delay-limited codes with non-vanishing error probabilities.
|
1807.02251
|
WajihUllah Baig
|
Wajih Ullah Baig, Umar Munir, Waqas Ellahi, Adeel Ejaz, Kashif Sardar
(National Database and Registration Authority)
|
Minutia Texture Cylinder Codes for fingerprint matching
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Minutia Cylinder Codes (MCC) are minutiae-based fingerprint descriptors that
take into account minutiae information in a fingerprint image for fingerprint
matching. In this paper, we present a modification to the underlying
information of the MCC descriptor and show that using different features, the
accuracy of matching is highly affected by such changes. MCC, originally a
minutia-only descriptor, is transformed into a texture descriptor. The
transformation is from minutiae angular information to orientation, frequency
and energy information using Short Time Fourier Transform (STFT) analysis. The
minutia cylinder codes are converted to minutiae texture cylinder codes (MTCC).
Based on a fixed set of parameters, the proposed changes to MCC show improved
performance on FVC 2002 and 2004 data sets and surpass the traditional MCC
performance.
|
[
{
"created": "Fri, 6 Jul 2018 04:25:18 GMT",
"version": "v1"
}
] |
2018-07-09
|
[
[
"Baig",
"Wajih Ullah",
"",
"National Database and Registration Authority"
],
[
"Munir",
"Umar",
"",
"National Database and Registration Authority"
],
[
"Ellahi",
"Waqas",
"",
"National Database and Registration Authority"
],
[
"Ejaz",
"Adeel",
"",
"National Database and Registration Authority"
],
[
"Sardar",
"Kashif",
"",
"National Database and Registration Authority"
]
] |
Minutia Cylinder Codes (MCC) are minutiae-based fingerprint descriptors that take into account minutiae information in a fingerprint image for fingerprint matching. In this paper, we present a modification to the underlying information of the MCC descriptor and show that using different features, the accuracy of matching is highly affected by such changes. MCC, originally a minutia-only descriptor, is transformed into a texture descriptor. The transformation is from minutiae angular information to orientation, frequency and energy information using Short Time Fourier Transform (STFT) analysis. The minutia cylinder codes are converted to minutiae texture cylinder codes (MTCC). Based on a fixed set of parameters, the proposed changes to MCC show improved performance on FVC 2002 and 2004 data sets and surpass the traditional MCC performance.
|
2308.04187
|
Lutz Terfloth
|
Lutz Terfloth, Michael Schaffer, Heike M. Buhl, Carsten Schulte
|
Adding Why to What? Analyses of an Everyday Explanation
|
Paper accepted and presented at XAI World Conference 2023, Lisboa
| null |
10.1007/978-3-031-44070-0_13
| null |
cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In XAI it is important to consider that, in contrast to explanations for
professional audiences, one cannot assume common expertise when explaining for
laypeople. But such explanations between humans vary greatly, making it
difficult to research commonalities across explanations. We used the dual
nature theory, a techno-philosophical approach, to cope with these challenges.
According to it, one can explain, for example, an XAI's decision by addressing
its dual nature: by focusing on the Architecture (e.g., the logic of its
algorithms) or the Relevance (e.g., the severity of a decision, the
implications of a recommendation). We investigated 20 game explanations using
the theory as an analytical framework. We elaborate how we used the theory to
quickly structure and compare explanations of technological artifacts. We
supplemented results from analyzing the explanation contents with results from
a video recall to explore how explainers justified their explanation. We found
that explainers were focusing on the physical aspects of the game first
(Architecture) and only later on aspects of the Relevance. Reasoning in the
video recalls indicated that EX regarded the focus on the Architecture as
important for structuring the explanation initially by explaining the basic
components before focusing on more complex, intangible aspects. Shifting
between addressing the two sides was justified by explanation goals, emerging
misunderstandings, and the knowledge needs of the explainee. We discovered
several commonalities that inspire future research questions which, if further
generalizable, provide first ideas for the construction of synthetic
explanations.
|
[
{
"created": "Tue, 8 Aug 2023 11:17:22 GMT",
"version": "v1"
}
] |
2023-10-24
|
[
[
"Terfloth",
"Lutz",
""
],
[
"Schaffer",
"Michael",
""
],
[
"Buhl",
"Heike M.",
""
],
[
"Schulte",
"Carsten",
""
]
] |
In XAI it is important to consider that, in contrast to explanations for professional audiences, one cannot assume common expertise when explaining for laypeople. But such explanations between humans vary greatly, making it difficult to research commonalities across explanations. We used the dual nature theory, a techno-philosophical approach, to cope with these challenges. According to it, one can explain, for example, an XAI's decision by addressing its dual nature: by focusing on the Architecture (e.g., the logic of its algorithms) or the Relevance (e.g., the severity of a decision, the implications of a recommendation). We investigated 20 game explanations using the theory as an analytical framework. We elaborate how we used the theory to quickly structure and compare explanations of technological artifacts. We supplemented results from analyzing the explanation contents with results from a video recall to explore how explainers justified their explanation. We found that explainers were focusing on the physical aspects of the game first (Architecture) and only later on aspects of the Relevance. Reasoning in the video recalls indicated that EX regarded the focus on the Architecture as important for structuring the explanation initially by explaining the basic components before focusing on more complex, intangible aspects. Shifting between addressing the two sides was justified by explanation goals, emerging misunderstandings, and the knowledge needs of the explainee. We discovered several commonalities that inspire future research questions which, if further generalizable, provide first ideas for the construction of synthetic explanations.
|
2104.12623
|
Sebastian Szyller
|
Sebastian Szyller, Vasisht Duddu, Tommi Gr\"ondahl, N. Asokan
|
Good Artists Copy, Great Artists Steal: Model Extraction Attacks Against
Image Translation Models
|
19 pages
| null | null | null |
cs.LG cs.CR cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Machine learning models are typically made available to potential client
users via inference APIs. Model extraction attacks occur when a malicious
client uses information gleaned from queries to the inference API of a victim
model $F_V$ to build a surrogate model $F_A$ with comparable functionality.
Recent research has shown successful model extraction of image classification,
and natural language processing models. In this paper, we show the first model
extraction attack against real-world generative adversarial network (GAN) image
translation models. We present a framework for conducting such attacks, and
show that an adversary can successfully extract functional surrogate models by
querying $F_V$ using data from the same domain as the training data for $F_V$.
The adversary need not know $F_V$'s architecture or any other information about
it beyond its intended task. We evaluate the effectiveness of our attacks using
three different instances of two popular categories of image translation: (1)
Selfie-to-Anime and (2) Monet-to-Photo (image style transfer), and (3)
Super-Resolution (super resolution). Using standard performance metrics for
GANs, we show that our attacks are effective. Furthermore, we conducted a large
scale (125 participants) user study on Selfie-to-Anime and Monet-to-Photo to
show that human perception of the images produced by $F_V$ and $F_A$ can be
considered equivalent, within an equivalence bound of Cohen's d = 0.3. Finally,
we show that existing defenses against model extraction attacks (watermarking,
adversarial examples, poisoning) do not extend to image translation models.
|
[
{
"created": "Mon, 26 Apr 2021 14:50:59 GMT",
"version": "v1"
},
{
"created": "Tue, 28 Feb 2023 09:37:59 GMT",
"version": "v2"
}
] |
2023-03-01
|
[
[
"Szyller",
"Sebastian",
""
],
[
"Duddu",
"Vasisht",
""
],
[
"Gröndahl",
"Tommi",
""
],
[
"Asokan",
"N.",
""
]
] |
Machine learning models are typically made available to potential client users via inference APIs. Model extraction attacks occur when a malicious client uses information gleaned from queries to the inference API of a victim model $F_V$ to build a surrogate model $F_A$ with comparable functionality. Recent research has shown successful model extraction of image classification, and natural language processing models. In this paper, we show the first model extraction attack against real-world generative adversarial network (GAN) image translation models. We present a framework for conducting such attacks, and show that an adversary can successfully extract functional surrogate models by querying $F_V$ using data from the same domain as the training data for $F_V$. The adversary need not know $F_V$'s architecture or any other information about it beyond its intended task. We evaluate the effectiveness of our attacks using three different instances of two popular categories of image translation: (1) Selfie-to-Anime and (2) Monet-to-Photo (image style transfer), and (3) Super-Resolution (super resolution). Using standard performance metrics for GANs, we show that our attacks are effective. Furthermore, we conducted a large scale (125 participants) user study on Selfie-to-Anime and Monet-to-Photo to show that human perception of the images produced by $F_V$ and $F_A$ can be considered equivalent, within an equivalence bound of Cohen's d = 0.3. Finally, we show that existing defenses against model extraction attacks (watermarking, adversarial examples, poisoning) do not extend to image translation models.
|
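The query-then-imitate loop underlying such model extraction attacks can be sketched generically: the adversary only needs black-box input/output access to $F_V$. The toy below uses a linear-regression stand-in for $F_V$ rather than the paper's GAN image-translation setting; all names (`victim_api`, `W_surrogate`, sample sizes) are our assumptions, not the paper's framework.

```python
import numpy as np

rng = np.random.default_rng(0)

# Victim model F_V: a black box the adversary can only query via an API.
W_true = rng.normal(size=(3, 2))
def victim_api(x):
    return x @ W_true  # the adversary never sees W_true, only outputs

# 1) Query F_V with data from the same domain to build a transfer set.
queries = rng.normal(size=(500, 3))
answers = victim_api(queries)

# 2) Fit a surrogate F_A on the (query, answer) pairs via least squares.
W_surrogate, *_ = np.linalg.lstsq(queries, answers, rcond=None)

# 3) The surrogate now reproduces the victim's functionality on new inputs.
test_inputs = rng.normal(size=(50, 3))
gap = np.max(np.abs(test_inputs @ W_surrogate - victim_api(test_inputs)))
```

The same three steps (query, collect, imitate) carry over to the GAN setting, with the surrogate trained as an image-to-image network on the victim's outputs instead of solved in closed form.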
2302.11341
|
A. R. Sricharan
|
Monika Henzinger and A. R. Sricharan and Teresa Anna Steiner
|
Differentially Private Data Structures under Continual Observation for
Histograms and Related Queries
| null | null | null | null |
cs.DS cs.CR
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Binary counting under continual observation is a well-studied fundamental
problem in differential privacy. A natural extension is maintaining column
sums, also known as a histogram, over a stream of rows from $\{0,1\}^d$, and
answering queries about those sums, e.g. the maximum column sum or the median,
while satisfying differential privacy. Jain et al. (2021) showed that computing
the maximum column sum under continual observation while satisfying event-level
differential privacy requires an error either polynomial in the dimension $d$
or the stream length $T$. On the other hand, no $o(d\log^2 T)$ upper bound for
$\epsilon$-differential privacy or $o(\sqrt{d}\log^{3/2} T)$ upper bound for
$(\epsilon,\delta)$-differential privacy are known. In this work, we give new
parameterized upper bounds for maintaining a histogram, the maximum column sum,
quantiles of the column sums, and any set of at most $d$ low-sensitivity,
monotone, real-valued queries on the column sums. Our solutions achieve an
error of approximately $O(d\log^2 c_{\max}+\log T)$ for $\epsilon$-differential
privacy and approximately $O(\sqrt{d}\log^{3/2}c_{\max}+\log T)$ for
$(\epsilon,\delta)$-differential privacy, where $c_{\max}$ is the maximum value
that the queries we want to answer can assume on the given data set.
Furthermore, we show that such an improvement is not possible for a slightly
expanded notion of neighboring streams by giving a lower bound of $\Omega(d
\log T)$. This explains why our improvement cannot be achieved with the
existing mechanisms for differentially private histograms, as they remain
differentially private even for this expanded notion of neighboring streams.
|
[
{
"created": "Wed, 22 Feb 2023 12:38:02 GMT",
"version": "v1"
}
] |
2023-02-23
|
[
[
"Henzinger",
"Monika",
""
],
[
"Sricharan",
"A. R.",
""
],
[
"Steiner",
"Teresa Anna",
""
]
] |
Binary counting under continual observation is a well-studied fundamental problem in differential privacy. A natural extension is maintaining column sums, also known as a histogram, over a stream of rows from $\{0,1\}^d$, and answering queries about those sums, e.g. the maximum column sum or the median, while satisfying differential privacy. Jain et al. (2021) showed that computing the maximum column sum under continual observation while satisfying event-level differential privacy requires an error either polynomial in the dimension $d$ or the stream length $T$. On the other hand, no $o(d\log^2 T)$ upper bound for $\epsilon$-differential privacy or $o(\sqrt{d}\log^{3/2} T)$ upper bound for $(\epsilon,\delta)$-differential privacy are known. In this work, we give new parameterized upper bounds for maintaining a histogram, the maximum column sum, quantiles of the column sums, and any set of at most $d$ low-sensitivity, monotone, real-valued queries on the column sums. Our solutions achieve an error of approximately $O(d\log^2 c_{\max}+\log T)$ for $\epsilon$-differential privacy and approximately $O(\sqrt{d}\log^{3/2}c_{\max}+\log T)$ for $(\epsilon,\delta)$-differential privacy, where $c_{\max}$ is the maximum value that the queries we want to answer can assume on the given data set. Furthermore, we show that such an improvement is not possible for a slightly expanded notion of neighboring streams by giving a lower bound of $\Omega(d \log T)$. This explains why our improvement cannot be achieved with the existing mechanisms for differentially private histograms, as they remain differentially private even for this expanded notion of neighboring streams.
|
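Binary counting under continual observation, the baseline problem in the abstract above, is classically handled by the binary tree mechanism: each dyadic interval sum is released once with Laplace noise, and any prefix count combines $O(\log T)$ noisy intervals. The sketch below is that generic textbook mechanism, not the parameterized constructions of the paper; class and method names and the lazy interval computation are our assumptions.

```python
import math
import random

class BinaryTreeCounter:
    """Sketch of the binary tree mechanism for eps-DP continual counting:
    every dyadic interval sum is released once with Laplace noise, and a
    prefix count at time t combines O(log T) noisy interval sums."""

    def __init__(self, T, eps, seed=0):
        self.rng = random.Random(seed)
        levels = max(1, math.ceil(math.log2(T)) + 1)
        self.scale = levels / eps  # each input bit touches one node per level
        self.noisy = {}            # (start, end) -> noisy interval sum
        self.stream = []

    def _laplace(self):
        # Difference of two Exp(1) variables is a standard Laplace sample.
        return self.scale * (self.rng.expovariate(1.0)
                             - self.rng.expovariate(1.0))

    def append(self, bit):
        self.stream.append(bit)

    def prefix_sum(self):
        t, total, start = len(self.stream), 0.0, 1
        for b in reversed(range(t.bit_length())):
            if (t >> b) & 1:              # dyadic decomposition of [1, t]
                iv = (start, start + (1 << b) - 1)
                if iv not in self.noisy:  # noise each interval only once
                    true = sum(self.stream[iv[0] - 1:iv[1]])
                    self.noisy[iv] = true + self._laplace()
                total += self.noisy[iv]
                start += 1 << b
        return total

counter = BinaryTreeCounter(T=100, eps=5.0)
for bit in [1, 0, 1, 1]:
    counter.append(bit)
estimate = counter.prefix_sum()  # noisy estimate of the true count 3
```

Reusing each noisy interval is what keeps the per-step error polylogarithmic in $T$ rather than linear.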
1907.03998
|
EPTCS
|
Daniel Dietsch (University of Freiburg), Matthias Heizmann (University
of Freiburg), Jochen Hoenicke (University of Freiburg), Alexander Nutz
(University of Freiburg), Andreas Podelski (University of Freiburg)
|
Ultimate TreeAutomizer (CHC-COMP Tool Description)
|
In Proceedings HCVS/PERR 2019, arXiv:1907.03523
|
EPTCS 296, 2019, pp. 42-47
|
10.4204/EPTCS.296.7
| null |
cs.LO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present Ultimate TreeAutomizer, a solver for satisfiability of sets of
constrained Horn clauses. Constrained Horn clauses (CHC) are a fragment of
first order logic with attractive properties in terms of expressiveness and
accessibility to algorithmic solving. Ultimate TreeAutomizer is based on the
techniques of trace abstraction, tree automata and tree interpolation. This
paper serves as a tool description for TreeAutomizer in CHC-COMP 2019.
|
[
{
"created": "Tue, 9 Jul 2019 06:02:28 GMT",
"version": "v1"
}
] |
2019-07-10
|
[
[
"Dietsch",
"Daniel",
"",
"University of Freiburg"
],
[
"Heizmann",
"Matthias",
"",
"University of Freiburg"
],
[
"Hoenicke",
"Jochen",
"",
"University of Freiburg"
],
[
"Nutz",
"Alexander",
"",
"University of Freiburg"
],
[
"Podelski",
"Andreas",
"",
"University of Freiburg"
]
] |
We present Ultimate TreeAutomizer, a solver for satisfiability of sets of constrained Horn clauses. Constrained Horn clauses (CHC) are a fragment of first order logic with attractive properties in terms of expressiveness and accessibility to algorithmic solving. Ultimate TreeAutomizer is based on the techniques of trace abstraction, tree automata and tree interpolation. This paper serves as a tool description for TreeAutomizer in CHC-COMP 2019.
|
1703.02851
|
Ghislain Fourny
|
Ghislain Fourny
|
On the Importance of Correlations in Rational Choice: A Case for
Non-Nashian Game Theory
|
Note
| null | null | null |
cs.GT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Nash equilibrium paradigm, and Rational Choice Theory in general, rely on
agents acting independently from each other. This note shows how this
assumption is crucial in the definition of Rational Choice Theory. It explains
how a consistent Alternate Rational Choice Theory, as suggested by Jean-Pierre
Dupuy, can be built on the exact opposite assumption, and how it provides a
viable account for alternate, actually observed behavior of rational agents
that is based on correlations between their decisions.
The end goal of this note is three-fold: (i) to motivate that the Perfect
Prediction Equilibrium, implementing Dupuy's notion of projected time and
previously called "projected equilibrium", is a reasonable approach in certain
real situations and a meaningful complement to the Nash paradigm, (ii) to
summarize common misconceptions about this equilibrium, and (iii) to give a
concise motivation for future research on non-Nashian game theory.
|
[
{
"created": "Wed, 8 Mar 2017 14:41:30 GMT",
"version": "v1"
}
] |
2017-03-09
|
[
[
"Fourny",
"Ghislain",
""
]
] |
The Nash equilibrium paradigm, and Rational Choice Theory in general, rely on agents acting independently from each other. This note shows how this assumption is crucial in the definition of Rational Choice Theory. It explains how a consistent Alternate Rational Choice Theory, as suggested by Jean-Pierre Dupuy, can be built on the exact opposite assumption, and how it provides a viable account for alternate, actually observed behavior of rational agents that is based on correlations between their decisions. The end goal of this note is three-fold: (i) to motivate that the Perfect Prediction Equilibrium, implementing Dupuy's notion of projected time and previously called "projected equilibrium", is a reasonable approach in certain real situations and a meaningful complement to the Nash paradigm, (ii) to summarize common misconceptions about this equilibrium, and (iii) to give a concise motivation for future research on non-Nashian game theory.
|
1403.5788
|
K. V. Krishna
|
Shubh Narayan Singh and K. V. Krishna
|
$L$-Primitive Words in Submonoids
| null | null | null | null |
cs.FL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work considers a natural generalization of primitivity with respect to a
language. Given a language $L$, a nonempty word $w$ is said to be $L$-primitive
if $w$ is not a proper power of any word in $L$. After ascertaining the number
of primitive words in submonoids of a free monoid, the work proceeds to count
$L$-primitive words in submonoids of a free monoid. The work also studies the
distribution of $L$-primitive words in certain subsets of free monoids.
|
[
{
"created": "Sun, 23 Mar 2014 18:55:23 GMT",
"version": "v1"
}
] |
2014-03-25
|
[
[
"Singh",
"Shubh Narayan",
""
],
[
"Krishna",
"K. V.",
""
]
] |
This work considers a natural generalization of primitivity with respect to a language. Given a language $L$, a nonempty word $w$ is said to be $L$-primitive if $w$ is not a proper power of any word in $L$. After ascertaining the number of primitive words in submonoids of a free monoid, the work proceeds to count $L$-primitive words in submonoids of a free monoid. The work also studies the distribution of $L$-primitive words in certain subsets of free monoids.
|
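The notions of primitivity used above are easy to operationalize. Below is the classical primitivity test (a nonempty word is a proper power exactly when it occurs inside its own square at an interior position) together with a direct brute-force check of the $L$-primitivity definition from the abstract; function names are ours.

```python
def is_primitive(w: str) -> bool:
    """A nonempty word is primitive iff it is not a proper power u^k, k >= 2.
    Classical test: w is a proper power exactly when w occurs inside w + w
    at a position other than 0 and len(w)."""
    assert w, "primitivity is defined for nonempty words"
    return (w + w).find(w, 1) == len(w)

def is_l_primitive(w: str, L) -> bool:
    """Brute-force check of the definition: w is L-primitive iff
    w != u^k for every u in L and every exponent k >= 2."""
    n = len(w)
    for u in L:
        if u and n % len(u) == 0 and n // len(u) >= 2 and u * (n // len(u)) == w:
            return False
    return True

assert is_primitive("ab") and not is_primitive("abab")
assert is_l_primitive("abab", {"ba"})      # "abab" is no power of "ba"
assert not is_l_primitive("abab", {"ab"})  # "abab" = ("ab")^2
```

Note that $L$-primitivity relaxes ordinary primitivity: every primitive word is $L$-primitive for any $L$, but not conversely.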
1204.5791
|
Chathuranga Widanapathirana
|
Chathuranga Widanapathirana and Y. Ahmet Sekercioglu and Bok-Min Goi
|
Hybrid FPMS: A New Fairness Protocol Management Scheme for Community
Wireless Mesh Networks
|
KSII Transactions on Internet and Information Systems, 2011
| null |
10.3837/tiis.2011.11.002
| null |
cs.NI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Node cooperation during packet forwarding operations is critically important
for fair resource utilization in Community Wireless Mesh Networks (CoWMNs). In
a CoWMN, node cooperation is achieved by using fairness protocols specifically
designed to detect and isolate malicious nodes, discourage unfair behavior, and
encourage node participation in forwarding packets. In general, these protocols
can be split into two groups: incentive-based protocols, which are managed
centrally and use credit allocation schemes, and reputation-based protocols,
which are decentralized and rely on information exchange among neighboring
nodes. Centrally managed protocols inevitably suffer from scalability problems,
while the decentralized, reputation-based protocols lack detection capability
and suffer from false detections and error propagation compared to the
centralized, incentive-based protocols. In this study, we
present a new fairness protocol management scheme, called Hybrid FPMS that
captures the superior detection capability of incentive-based fairness
protocols without the scalability problems inherently expected from a
centralized management scheme as a network's size and density grows. Simulation
results show that Hybrid FPMS is more efficient than the current centralized
approach and significantly reduces the network delays and overhead.
|
[
{
"created": "Wed, 25 Apr 2012 23:44:47 GMT",
"version": "v1"
}
] |
2012-04-27
|
[
[
"Widanapathirana",
"Chathuranga",
""
],
[
"Sekercioglu",
"Y. Ahmet",
""
],
[
"Goi",
"Bok-Min",
""
]
] |
Node cooperation during packet forwarding operations is critically important for fair resource utilization in Community Wireless Mesh Networks (CoWMNs). In a CoWMN, node cooperation is achieved by using fairness protocols specifically designed to detect and isolate malicious nodes, discourage unfair behavior, and encourage node participation in forwarding packets. In general, these protocols can be split into two groups: incentive-based protocols, which are managed centrally and use credit allocation schemes, and reputation-based protocols, which are decentralized and rely on information exchange among neighboring nodes. Centrally managed protocols inevitably suffer from scalability problems, while the decentralized, reputation-based protocols lack detection capability and suffer from false detections and error propagation compared to the centralized, incentive-based protocols. In this study, we present a new fairness protocol management scheme, called Hybrid FPMS, that captures the superior detection capability of incentive-based fairness protocols without the scalability problems inherently expected from a centralized management scheme as a network's size and density grows. Simulation results show that Hybrid FPMS is more efficient than the current centralized approach and significantly reduces the network delays and overhead.
|
1912.07600
|
Ning Li
|
Ning Li
|
A new Frequency Estimation Sketch for Data Streams
| null | null | null | null |
cs.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In data stream applications, one of the critical issues is to estimate the
frequency of each item in the specific multiset. The multiset means that each
item in this set can appear multiple times. The data streams in many
applications are high-speed streams which contain massive data, such as
real-time IP traffic, graph streams, web clicks and crawls, sensor database,
and natural language processing (NLP) [2][6], etc. In these applications, the
stream information needs to be recorded by the servers in real time. However,
since the data streams in these applications are high-speed, the accurate
recording and estimation of item frequencies is always impractical. An
alternative approach for addressing this problem is to estimate the item
frequencies based on probabilistic data structures, and this approach has been
widely used in high-speed data stream estimation [7][9]. Sketches are among
the typical probabilistic data structures, initially designed for the
estimation of item frequencies in data streams [10][15]. At present, the
sketches have been used in many different scenarios, such as sparse
approximation in compressed sensing [16], natural language processing [17, 18],
data graph [19, 20], and more [21]. In this paper, we mainly focus on the
sketches used for frequency estimation.
|
[
{
"created": "Mon, 16 Dec 2019 08:16:37 GMT",
"version": "v1"
},
{
"created": "Tue, 24 Dec 2019 01:51:21 GMT",
"version": "v2"
},
{
"created": "Sat, 4 Jan 2020 14:22:49 GMT",
"version": "v3"
}
] |
2020-01-07
|
[
[
"Li",
"Ning",
""
]
] |
In data stream applications, one of the critical issues is to estimate the frequency of each item in a specific multiset. The multiset means that each item in this set can appear multiple times. The data streams in many applications are high-speed streams which contain massive data, such as real-time IP traffic, graph streams, web clicks and crawls, sensor databases, and natural language processing (NLP) [2][6]. In these applications, the stream information needs to be recorded by the servers in real time. However, since the data streams in these applications are high-speed, accurate recording and estimation of item frequencies is often impractical. An alternative approach to addressing this problem is to estimate the item frequencies based on probabilistic data structures, and this approach has been widely used in high-speed data stream estimation [7][9]. Sketches are among the typical probabilistic data structures, initially designed for the estimation of item frequencies in data streams [10][15]. At present, sketches have been used in many different scenarios, such as sparse approximation in compressed sensing [16], natural language processing [17, 18], data graphs [19, 20], and more [21]. In this paper, we mainly focus on the sketches used for frequency estimation.
|
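The family of frequency-estimation sketches referred to above can be illustrated with a minimal Count-Min sketch. This is a generic textbook construction, not the new sketch the paper proposes; the class and parameter names are ours.

```python
import random

class CountMinSketch:
    """Minimal Count-Min sketch: approximate item frequencies in a stream
    with `depth` hash rows of `width` counters each. Estimates can only
    overcount, never undercount."""

    def __init__(self, width=256, depth=4, seed=0):
        rng = random.Random(seed)
        self.width, self.depth = width, depth
        self.tables = [[0] * width for _ in range(depth)]
        # One random salt per row yields `depth` different hash functions.
        self.salts = [rng.getrandbits(32) for _ in range(depth)]

    def _index(self, row, item):
        return hash((self.salts[row], item)) % self.width

    def update(self, item, count=1):
        for r in range(self.depth):
            self.tables[r][self._index(r, item)] += count

    def query(self, item):
        # Collisions only inflate counters, so the minimum over the rows
        # is the tightest available estimate of the true frequency.
        return min(self.tables[r][self._index(r, item)]
                   for r in range(self.depth))

sketch = CountMinSketch()
for item in ["a"] * 50 + ["b"] * 20 + ["c"] * 5:
    sketch.update(item)
estimate = sketch.query("a")  # >= 50, and close to 50 for a sparse stream
```

Memory is fixed at `width * depth` counters regardless of stream length, which is what makes this kind of structure attractive for the high-speed streams the abstract describes.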
2202.08199
|
Zixun Wang
|
Xinpeng Ding, Xinjian Yan, Zixun Wang, Wei Zhao, Jian Zhuang, Xiaowei
Xu and Xiaomeng Li
|
Less is More: Surgical Phase Recognition from Timestamp Supervision
| null | null | null | null |
cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Surgical phase recognition is a fundamental task in computer-assisted surgery
systems. Most existing works are under the supervision of expensive and
time-consuming full annotations, which require the surgeons to repeatedly
watch videos to find the precise start and end time of a surgical phase. In this
paper, we introduce timestamp supervision for surgical phase recognition to
train the models with timestamp annotations, where the surgeons are asked to
identify only a single timestamp within the temporal boundary of a phase. This
annotation can significantly reduce the manual annotation cost compared to the
full annotations. To make full use of such timestamp supervisions, we propose a
novel method called uncertainty-aware temporal diffusion (UATD) to generate
trustworthy pseudo labels for training. Our proposed UATD is motivated by the
property of surgical videos, i.e., the phases are long events consisting of
consecutive frames. To be specific, UATD diffuses the single labelled timestamp
to its corresponding highly confident (i.e., low-uncertainty) neighbour frames
in an iterative way. Our study uncovers unique insights of surgical phase
recognition with timestamp supervisions: 1) timestamp annotation can reduce
annotation time by 74% compared with full annotation, and surgeons tend to
annotate those timestamps near the middle of phases; 2) extensive experiments
demonstrate that our method can achieve competitive results compared with full
supervision methods, while reducing manual annotation cost; 3) less is more in
surgical phase recognition, i.e., fewer but discriminative pseudo labels
outperform full labels that contain ambiguous frames; 4) the proposed UATD can
be used as a plug-and-play method to clean ambiguous labels near boundaries
between phases, and improve the performance of the current surgical phase
recognition methods.
|
[
{
"created": "Wed, 16 Feb 2022 17:18:38 GMT",
"version": "v1"
},
{
"created": "Thu, 1 Dec 2022 03:09:22 GMT",
"version": "v2"
}
] |
2022-12-02
|
[
[
"Ding",
"Xinpeng",
""
],
[
"Yan",
"Xinjian",
""
],
[
"Wang",
"Zixun",
""
],
[
"Zhao",
"Wei",
""
],
[
"Zhuang",
"Jian",
""
],
[
"Xu",
"Xiaowei",
""
],
[
"Li",
"Xiaomeng",
""
]
] |
Surgical phase recognition is a fundamental task in computer-assisted surgery systems. Most existing works are under the supervision of expensive and time-consuming full annotations, which require the surgeons to repeatedly watch videos to find the precise start and end time of a surgical phase. In this paper, we introduce timestamp supervision for surgical phase recognition to train the models with timestamp annotations, where the surgeons are asked to identify only a single timestamp within the temporal boundary of a phase. This annotation can significantly reduce the manual annotation cost compared to the full annotations. To make full use of such timestamp supervisions, we propose a novel method called uncertainty-aware temporal diffusion (UATD) to generate trustworthy pseudo labels for training. Our proposed UATD is motivated by the property of surgical videos, i.e., the phases are long events consisting of consecutive frames. To be specific, UATD diffuses the single labelled timestamp to its corresponding highly confident (i.e., low-uncertainty) neighbour frames in an iterative way. Our study uncovers unique insights of surgical phase recognition with timestamp supervisions: 1) timestamp annotation can reduce annotation time by 74% compared with full annotation, and surgeons tend to annotate those timestamps near the middle of phases; 2) extensive experiments demonstrate that our method can achieve competitive results compared with full supervision methods, while reducing manual annotation cost; 3) less is more in surgical phase recognition, i.e., fewer but discriminative pseudo labels outperform full labels that contain ambiguous frames; 4) the proposed UATD can be used as a plug-and-play method to clean ambiguous labels near boundaries between phases, and improve the performance of current surgical phase recognition methods.
|
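The diffusion step described above, expanding one labelled timestamp to confident neighbouring frames, can be sketched with a simple confidence-threshold walk. This is a deliberate simplification of UATD: the threshold rule, names, and single-pass expansion are our assumptions, not the paper's exact iterative procedure.

```python
def diffuse_timestamp(confidence, t, threshold=0.7):
    """Expand a single labelled frame t into a pseudo-labelled segment by
    walking left and right while the per-frame confidence stays high.
    confidence[i] models how certain we are that frame i shares t's phase."""
    lo = hi = t
    while lo > 0 and confidence[lo - 1] >= threshold:
        lo -= 1
    while hi < len(confidence) - 1 and confidence[hi + 1] >= threshold:
        hi += 1
    return lo, hi  # frames lo..hi inherit the timestamp's phase label

conf = [0.2, 0.8, 0.9, 0.95, 0.9, 0.3, 0.1]
segment = diffuse_timestamp(conf, 3)  # expands frame 3 to frames 1..4
```

Stopping at the first low-confidence frame is what keeps the pseudo labels "less but discriminative": ambiguous frames near phase boundaries are simply left unlabelled.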